Almost daily, developments in AI make headlines: from updates in regulation and what’s allowed in schools, to numerous copyright and privacy breaches, to the latest in the high-profile Hollywood Screen Actors Guild negotiations.
This week SXSW, one of the biggest creative and technology festivals in the world, will be held in Sydney, showcasing numerous AI panels and sessions.
In technology and policy circles, a certain narrative has taken hold: a push towards ‘responsible AI’ or ‘ethical AI’.
Australia currently has a consultation on ‘safe and responsible AI’, seeking public feedback on governance and regulatory initiatives. Our national science body, the CSIRO, has a ‘Responsible AI Network’, which brings together various organisations to collaborate on responsible AI principles. It is hosted by the National AI Centre, which is funded by the Australian Government and Google.
Before the term ‘responsible AI’ was popularised, ‘ethical AI’ was also commonly used, and Australia currently has eight ‘ethical AI’ principles, which promote ideas around human-centred values, fairness and privacy, among others.
These are worthwhile pursuits: important conversations and initiatives around what promises to be a complex and transformative general-purpose technology with wide-ranging impact.
However, there is a risk that the umbrella term ‘responsible AI’ is used to diffuse and generalise accountability and liability away from specific actors, towards hard-to-enforce behavioural frameworks and principles.
The idea of ‘responsibility’ suggests agency and ownership by specific agents. How, then, can we pinpoint who is responsible when we generalise around ‘responsible AI’? Would it be the company that developed the technology, the organisation that deployed a version of it for its own use, the active user, or any number of middlemen and intermediaries throughout the process?
Certainly the technology itself, disembodied and decentralised, cannot be held responsible, even though the temptation to anthropomorphise AI is all too strong.
The NSW Ombudsman rightly called this out in its submission to Australia’s consultation on ‘safe and responsible AI’, warning that by referring to the concept “as if it were a thing distinct from legal and moral actors and their actions, attention might easily be drawn away from thinking more directly and deeply about those questions – who is (or should be) responsible, to whom, for what, and by reference to what rules or standards?”.
Tech companies would of course relish accountability being borne by other parties while their products continue unencumbered, so long as they meet standards of ‘responsible AI’ that carry no clear penalties and no clear designation of who is responsible.
In recent years, growing demands for tech accountability have seen many tech companies publicly declare support for regulatory initiatives on the surface, while in reality working to diffuse or water down any restrictions.
These are well-known tactics of ‘responsibility washing’ or ‘ethics washing’: maintaining a façade of ethics while largely continuing with business-as-usual behaviour. Another common tactic is keeping the debate going while not slowing down any product development. There is also ‘ethics lobbying’, which means advocating for a self-regulation regime and emphasising existing regulations rather than new, specific ones, and ‘ethics shopping’, which means cherrypicking the regulations that suit, advocating for deregulation on one hand while publicly declaring support for other initiatives on the other.
We must not let the worthy conversation about what a safe and accountable AI landscape should look like be hijacked by the very companies at the forefront of developing these technologies, looking for early-mover advantage and gatekeeper status in this next phase of our digital experience. That would be the responsible thing to do.
Jordan Guiao is the author of Disconnect: Why we get pushed to extremes online and will be talking about AI and its impact on culture at SXSW Sydney.