Generative AI banned from research grant assessments


Brandon How

The Australian Research Council has banned grant assessors from using generative AI tools in their assessment activities just days after reports emerged that ChatGPT had been used in assessor reports.

All peer reviewers are now prohibited from using the technology “as part of their assessment activities” because of breach of confidentiality concerns and the potential to “compromise the integrity of the ARC’s peer review process”.

The Australian Research Council (ARC), which oversees around $800 million in research grants each year, on Friday released the ‘Use of Generative Artificial Intelligence in the ARC’s grants programs’ policy after claims that ChatGPT had been used to produce assessor reports.

Federal Education minister Jason Clare last week told InnovationAus.com that this use of generative AI was “not acceptable” and that he had instructed the ARC to “put in measures to ensure it doesn’t happen again”.

However, grant applicants are only advised to “use caution” if they use the technology to develop their applications. The policy warns that generative AI may produce content that “may be based on the intellectual property of others or may also be factually incorrect”.

Because the ARC treats administering organisations, rather than individual researchers, as the applicants, the generative AI policy emphasises the role of deputy vice-chancellors (research) or equivalent in assuring the “authorship and intellectual content” of applications.

Aside from ARC’s main concern that uploading grant application material into a generative AI tool “constitutes a breach of confidentiality”, the policy notes that assessors are also expected to “provide high quality, constructive assessments”.

“The use of generative AI may compromise the integrity of the ARC’s peer review process by, for example, producing text that contains inappropriate content, such as generic comments and restatements of the application,” the policy reads.

The process for reporting an alleged contravention of the generative AI policy is managed under the existing Research Integrity Policy.

If it is determined a formal investigation is necessary, precautionary actions may be taken, including the removal of assessments suspected of using generative AI from the assessment process. Individuals also face potential suspensions from “ARC assessment, peer review and committee activities”.

At the conclusion of a formal investigation, if an assessor is deemed to have breached the Australian Code for the Responsible Conduct of Research by using a generative AI tool, then they may face unspecified “consequential actions” on top of those already imposed by the assessor’s employer.

The operator of the ARC_Tracker Twitter account, who first drew attention to the suspected use of ChatGPT in assessor reports, told InnovationAus.com they have “never seen [the ARC] move so fast. On anything”.

However, the operator of the popular account and researcher advocate remains concerned the new AI policy ignores the broader issue of ‘zombie reviews’ – assessor reports that regurgitate statements in a proposal without insight or critique – which have persisted since well before the availability of generative AI tools.

They also said the policy would be difficult to enforce.

“[The ARC] got lucky this time, where an assessor left the tell-tale ChatGPT’s ‘regenerate response’ signature in their assessment text… What if, for example, an assessor uses AI to start with, but edits the results a little? That’s banned too, but I doubt the ARC could detect it reliably, either with AI-detecting tools or human readers,” they said.

“What would be much more effective – and what researchers already proposed – is for grant applicants to flag such ‘unprofessional’ reviews, including zombie reviews, AI-generated or not – to the College of Experts.”

This proposal was previously made in a prebudget submission to the federal government in 2021-22, with more than a thousand signatories, including UNSW.ai chief scientist Toby Walsh.

It suggested the change would come at little additional cost and would introduce a process for reporting “grossly unprofessional reviews” that do not strictly qualify as inappropriate under the ARC’s current criteria.

Last Wednesday, the Digital Transformation Agency released non-binding guidance to Australian Public Service staff on the use of generative AI tools. It flagged several use-cases that were deemed to present “an unacceptable risk to government”, including the input of large amounts of government data as well as classified, sensitive or confidential information.

Industry and Science minister Ed Husic has also launched a consultation paper on ‘Safe and Responsible AI in Australia’ to guide the modernisation of the country’s legislative and regulatory regime, ensuring it is “fit for Australian purpose”.

Do you know more? Contact James Riley via Email.
