Sunita Bose, Managing Director of the Digital Industry Group Inc. (DIGI) in Australia, former Head of Global Policy for Change.org
Technology captures our imagination because it taps into some of humanity’s best qualities: connection, creativity, and the desire for equality. Personally, I was drawn to a career in technology because of its democratising power to level the playing field, and its potential to give people a voice. Technology can make our lives better and easier, open up access to information, resources and new efficiencies, or bring people together across divides.
Human-centred technology policy
To keep these qualities at the centre of technological developments, our vision needs to be of a world where humans are shaping technology, rather than one where it’s negatively shaping us. The guardrails we set around technology are therefore extremely important, at a personal, corporate and national level.
The organisation I run, the Digital Industry Group Inc. (DIGI), has a unique and important role to play toward that vision. We’re an industry association for the digital industry in Australia, working with the world’s leading technology companies and the Australian government on policy and regulatory solutions to address the challenges and opportunities of growing the digital economy. Recent advancements in artificial intelligence (AI) have brought technology policy front and centre in national and global conversations. Governments around the world are grappling with how to regulate a frontier technology in a way that seizes its socio-economic opportunities while mitigating its potential risks.
While AI might have brought these questions into renewed public focus, similar themes have long been the source of intense debate in the world of technology policy. But how do we go beyond debating different perspectives and arrive at the right solutions?
A multi-stakeholder approach
Best practice in technology policy development requires adopting a truly multi-stakeholder approach that deeply involves consumers, civil society, industry and governments in the policy-making process. At DIGI, we believe the companies in our membership are working towards the same objectives as governments around the world: protecting people from online harms, strengthening consumer protections online, safeguarding data privacy and cyber security, and growing a thriving digitally enabled economy. As new policy proposals in these areas emerge, the expertise within the tech companies must be shared with government in order to deliver policies that work and that are effective in achieving those objectives. The companies’ expertise also reflects what they are hearing from their users, and how people are actually using technology.
Consultation and collaboration across diverse groups are not easy feats, but they’re skills I’ve reflected on and developed over the years through various roles. DIGI has a consensus-based decision-making model, which means that our founding members – who are otherwise competitors as the world’s leading technology companies – must agree on all of our strategic decisions and policy positions.
It’s also a skill I practised when I was Head of Global Policy for Change.org, where I established and led the policy team that developed the rules for users – including the company’s Privacy Policy, Terms of Service and Community Guidelines – and the infrastructure to manage harmful user-generated content in areas such as bullying, hate speech, defamation, misinformation, data privacy and child protection. This involved navigating vastly different cultural approaches, regulation and political dynamics on issues like hate speech to find common ground, in order to develop a broadly consistent global policy.
Building trust is essential
I’ve learned that the foundation of any consensus-building is trust. Trust is established through genuine listening. It’s important to foster environments that encourage stakeholders to be forthright in their views, whether in a meeting or outside of it. That environment is helped by how you connect with and build relationships with your stakeholders. Clearly signpost regular opportunities for feedback and influence on developed (but not finalised) ideas, so that you’re providing leadership toward a destination while enabling everyone to help build the path.
While these are some of my personal reflections from working specifically on industry policies and policy positions, strong consultation needs to be part of any policy development process, particularly those that are government-led.
Seeing the bigger picture
In technology policy specifically, consultation doesn’t just help us reconcile different views; it helps us see the big picture. When I’m evaluating proposed technology regulation, one of the questions I often ask is whether the approach is holistic in solving the policy problem. This question is relevant to the full range of policy issues DIGI works on, whether in consumer protection, privacy or online safety. DIGI is supportive of smart regulation for the online world, but you’ll often see us advocate for effective digital policy alongside economy-wide or systemic approaches.
For example, DIGI has a role in developing industry codes of practice that have formed an integral part of the Australian Government’s policy approach to addressing various online harms. One of those codes is the Australian Code of Practice on Disinformation and Misinformation, which commits major technology companies to safeguards against harmful misinformation and disinformation. We’ve worked to continually strengthen the code and see it as a significant step forward; but while major technology companies have critically important levers to pull, sustained shifts in the fight against mis- and disinformation rely on a multi-stakeholder approach across digital platforms, media, governments and the community. For example, media and digital literacy initiatives are critically important interventions in bolstering resilience to misinformation. We need to pull more than one lever to make a real difference, which is another reason why bringing diverse groups together is so important.
Think laterally and globally
Returning to the example of AI, take a moment to consider just how many stakeholders have a role to play in realising its significant socio-economic benefits while addressing any potential risks. It’s a complex map. AI is relevant to every sector, from banking to healthcare, education, entertainment and everything in between. We need to consider upstream designers of AI models and downstream deployers of the technology, and assess both the application of the technology on a use-case basis and its broader impacts on innovation, industry and trade. To add another layer of complexity, AI technologies are being developed at a global scale, so any domestic regulation proposal has potential ripple effects across global supply chains, international relations, trade, research, security and more. To embrace the full picture, consultation with a range of industries is essential to ensure that any regulation addresses the right risks and opportunities in each sector, along with its potential use cases, applications and associated harms.
Keeping technology safe, secure and equitable is one of the important challenges of our time. It is also a complex endeavour, and if the laws proposed to rise to that challenge are oversimplified or developed in a vacuum, they will be ineffective. Getting technology policy right relies on innovative and best-in-class processes to ensure civil society, consumers, industry and governments are all deeply engaged in the policy problem and empowered to contribute all of their expertise to the solution.
Sunita is Managing Director of the Digital Industry Group Inc. (DIGI), the non-profit tech industry peak body that advocates for a thriving digital economy in Australia, where online safety and privacy are protected. DIGI’s founding members are Apple, Discord, eBay, Google, Linktree, Meta, Microsoft, Snap, Spotify, TikTok, Twitch, X (f.k.a. Twitter) and Yahoo. Sunita was previously Head of Global Policy for the online petition platform Change.org, based in San Francisco, where she established the policy team and the policy infrastructure to manage harmful user-generated content in areas such as bullying, hate speech, defamation, misinformation, data privacy and child protection. Before that, Sunita spent seven years working in a range of international and Australian advocacy and strategic communications roles at the humanitarian aid agencies Oxfam and UNICEF, and she holds a Master of Policy from the University of New South Wales.