Landmark AI rules take major step forward in EU


World-first artificial intelligence laws have cleared a major hurdle in Europe, with lawmakers agreeing to draft rules that could serve as a model for other countries grappling with the rapid rise of generative AI.

As consultations on similar regulations get underway in Australia, the European Parliament on Wednesday night agreed to the text of the draft Artificial Intelligence Act, with lawmakers voting 499 to 28 in favour of proposed regulations.

Parliament will now consult with European Union member states on the final shape of the landmark Act, meaning the agreed rules designed to promote a human-centric approach to AI and protect against harms could still change before the legislation passes.

The European Union Parliament in Brussels, Belgium.

Despite the prospect of future changes to legislation that has already been two years in the making, European Parliament president Roberta Metsola said the Act “will no doubt be setting the global standard for years to come”.

“Going forward, we are going to need constant, clear boundaries and limits to artificial intelligence, and here there is one thing we will not compromise on: Anytime technology advances it must go hand in hand with our fundamental rights and democratic values,” she said.

Under the Act, AI technology deemed to have an “unacceptable level of risk to people’s safety” will be prohibited in Europe, including social scoring systems that classify people based on their social behaviour or personal characteristics.

Other applications of the technology to be banned include real-time biometric identification systems in public spaces, predictive policing systems, and facial recognition systems that scrape images from the internet or CCTV footage (think Clearview AI).

Co-rapporteur of the European Parliament’s AI committee, Brando Benifei, said the inclusion of biometrics was a “last minute divergence”, with the safeguards aimed at avoiding “any risk of mass surveillance”.

Several “high-risk applications” – AI systems that “pose significant harm to people’s health, safety, fundamental rights or the environment” – were also identified, including systems that influence the outcome of elections and recommender systems used by social media platforms.

“General purpose” AI technologies like ChatGPT would be expected to comply with new transparency requirements, including disclosing that content is AI-generated and making public summaries of the copyrighted data used for training models.

Potential risks to health, safety, fundamental rights, the environment, democracy and the rule of law would also have to be assessed and mitigated, and be registered in the European database before release.

“We want content that is produced by AI to be recognizable as such and we want deep fakes not to poison our democracy,” Mr Benifei said during a press conference held after the European Parliament’s vote.

The rules agreed to on Wednesday also contain an exemption for research activities and AI components provided under open-source licences, while promoting regulatory sandboxes to test AI before it is deployed.

Co-rapporteur Dragos Tudorache said those features are important to “bring in industry” and provide a “level playing field”, stressing that innovation is “just as important” as protecting Europeans from harms.

“Serving the agenda of protecting our citizens is very firmly into this mandate, and at the same time we are also serving the agenda of promoting innovation, not hindering creativity and deployment and development of AI in Europe,” he said.

Mr Tudorache also said that unlike the General Data Protection Regulation, the Act contained definitions that are “not only clear in terms of what AI is, but perfectly aligned with the OECD definition [and] US definition”.

“We do not look at this Act as something where the Brussels Effect will suffice. We have to look at the Brussels Effect differently this time around because this technology is the same everywhere,” he said.

“Therefore, we have to – us as the European Union – take the lead on this, being the first ones to have courage to put rules into place, but we also have to work with other like-minded democracies out there to make sure that we reach alignment on the global stage.”

However, Chris Marsden, professor of AI, technology and the law at Monash University, said on Thursday that the Act in its current form had “very little prospect… of successfully regulating the worst excesses of AI”.

“Australia can observe the ‘Brussels effect’ AI regulation calmly, noting that it was never intended to have teeth… This European AI Act will set a very low bar for the Australian government to match or exceed,” he said.

The federal government earlier this month released a discussion paper that proposes options for tightening the frameworks for governing AI. One of the options canvassed is a ban on the technology in “high-risk” settings.

According to the paper, high-risk is defined as having “very high impacts that are systemic, irreversible or perpetual”. Examples include the use of AI-enabled robots for surgery and self-driving cars.

Industry and government leaders will this week meet with Sam Altman, the chief executive of ChatGPT maker OpenAI, who has travelled to Australia at the invitation of Startup Network (previously Startup Victoria) to discuss approaches for regulating the technology.

Mr Altman last month warned that the company could leave Europe if it could not comply with the regulations, before later walking back the comments. He has also previously said that regulation is “essential”.

Do you know more? Contact James Riley via Email.
