What’s notable about the current hype cycle around artificial intelligence is that it’s not just VCs, tech CEOs and advocates making the headlines; governments are as well.
Not wanting to repeat the deregulated era of Web 2.0 companies, which resulted in widespread harms, disinformation, privacy breaches and monopolistic behaviours, governments are attempting various oversight frameworks for AI.
The White House has secured voluntary commitments from the largest AI players (OpenAI, Google, Meta, Amazon, Microsoft, Anthropic and Inflection), with initiatives that include content watermarking and pre-release testing. The EU, a world leader in digital governance, has developed the most significant AI law globally, which includes a classification system ranking AI activities by risk, with some classified as “unacceptable”. The Australian government is currently facilitating a consultation on “safe and responsible AI”, seeking feedback on concerns and governance approaches.
The future of AI is uncertain. Predictions range widely, from AI causing the end of life on Earth to forecasts of tremendous economic growth, or even the end of scarcity altogether. Given the conflicting narratives, it’s not surprising that governments have so far struggled to arrive at useful AI policy.
There have been numerous ethical frameworks, safety manifestoes and pledges, but these need to be supported by strong regulation, effective accountability measures and meaningful oversight.
Transparency is key. The largest AI products are privately owned black boxes, and neither regulators nor the public have a clear understanding of how they work or what data they are trained on. This is particularly consequential when the data used to train AI come from products whose users don’t realise it is happening.
Google, for example, quietly updated their terms and conditions to specify that any public data can be used to train Google’s AI. This is on top of user data already captured by Google products (Gmail, Maps, Docs, Search).
Many models are trained on large bodies of work by identifiable authors, who have not given consent for their work to be used in this way.
Of course, strong privacy protections would provide a bulwark against misuse of personal information in AI products. The Australian Privacy Act, now in the midst of a significant update decades after its creation, presents a great opportunity to establish data protections and reasonable-use frameworks.
But there is more than just privacy at stake, with many fearing for their livelihoods. AI’s labour impacts have been put in the spotlight recently as celebrities, actors and performers conducted high-profile protests against worsening employment conditions, including the potential for AI to be abused to reproduce their likenesses in future productions.
The impacts and consequences of AI need to be rigorously studied and monitored by researchers, academics, think tanks and civil society, who are often at the forefront of developing AI ethics initiatives and accountability measures. But many of these groups are currently funded by the same technology companies vying for AI dominance.
Will these bodies be openly critical, or release research that conflicts with the agendas of the technology companies that fund them? We know companies like Google don’t take kindly to this: prominent AI researchers Timnit Gebru and Margaret Mitchell, formerly leaders of Google’s AI ethics team, were both sacked following research that challenged Google’s AI narrative.
Ultimately, the path for truly safe and responsible AI in Australia must be paved with transparency and accountability. Without data transparency, privacy protections, fair and reasonable use frameworks, and clarity on conflicts of interest, the government’s AI actions will be but lip service.