A 12-month pilot of AI-based surveillance technology designed to detect falls and abuse in two South Australian aged care homes generated more than 12,000 false alerts, a review has found.
The sheer number of alerts created alert fatigue that “overwhelmed” already overworked staff, and in at least one instance, the persistent false alerts meant a staff member did not respond to a true resident fall event.
The CCTV pilot began at Mt Pleasant Aged Care and Northgate House in March 2021, with cameras and microphones installed in common areas and resident bedrooms. Consent for the recording devices to be turned on in bedrooms was given by 41 of the 57 residents or their guardians.
The project was aimed at “exploring the acceptability and viability of using surveillance and monitoring within residential care settings” against the backdrop of improving the quality of support and safety in aged care.
Confronting reports about the sector were heard during the three-year Royal Commission into Aged Care Quality and Safety, which made dozens of recommendations for improvement in its final report in March 2021.
The system was programmed to “detect specific movements or sounds including falls, assault, calls for help, or screams”, with a text message sent to a monitoring centre when an event occurred. The message was then passed on to staff at the aged care facility.
The AI used was also “designed to learn over time and improve its ability to recognise the actions or audio cues specific to the sites and residents”.
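The report does not detail the vendor’s implementation, but a detect-then-alert pipeline of this kind is straightforward to sketch. The Python below is purely illustrative: the event labels, confidence threshold and routing function are assumptions, not details of the pilot system.

```python
from dataclasses import dataclass

# Hypothetical event labels and threshold: the pilot's actual model
# classes and tuning are not described in the report.
EVENT_LABELS = {"fall", "call_for_help", "scream"}
ALERT_THRESHOLD = 0.8

@dataclass
class Detection:
    camera_id: str
    label: str        # what the model thinks it saw or heard
    confidence: float

def route_detection(det: Detection, send_alert) -> bool:
    """Forward a detection to the monitoring centre when it matches a
    programmed event with enough confidence; return True if forwarded."""
    if det.label in EVENT_LABELS and det.confidence >= ALERT_THRESHOLD:
        send_alert(f"ALERT: {det.label} on {det.camera_id} "
                   f"(confidence {det.confidence:.2f})")
        return True
    return False

# A confident 'fall' is forwarded; a low-confidence one (for example, a
# staff member crouching, misread as a fall) is suppressed.
alerts = []
route_detection(Detection("room-12", "fall", 0.91), alerts.append)
route_detection(Detection("lounge-1", "fall", 0.55), alerts.append)
print(alerts)
```

In a deployment of this shape, the threshold trades missed falls against false alerts; set too permissively, it produces exactly the alert volume the review describes.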
But an independent evaluation by PwC found the “AI technology had a high rate of alerts”, meaning the pilot was “not yet sufficiently accurate at detecting incidents in a residential aged care setting”.
“Despite improvements made over time, during the 12 months of the pilot, the system generated over 12,000 alerts across the two sites that were not verified as ‘true events’,” the report said, adding that a high percentage of these alerts were for staff crouching – a programmed movement.
“In these cases, while the system was detecting movement or sounds that it was programmed to detect, it was unable to reliably distinguish between the programmed events and similar movements or sounds that are reasonable to expect in residential care.”
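For scale, and using only the figures quoted above, 12,000 unverified alerts over 12 months works out to roughly 16 a day at each of the two sites:

```python
# Back-of-envelope alert load from the figures quoted in the report.
false_alerts = 12_000   # unverified alerts over the 12-month pilot
days, sites = 365, 2

print(f"{false_alerts / 12:.0f} false alerts per month across both sites")
print(f"~{false_alerts / (days * sites):.0f} false alerts per site per day")
```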
The report said that while some ‘false alerts’ during the AI learning period were expected, the high volume was “unexpected”, leading to the introduction into the pilot, in October 2021, of a ‘sit fall’ alert that recognised crouching.
However, as the motion of dropping to one knee resembles a “common care position used by nurses and care attendants – the ‘knight’s position’”, the system also detected this movement as a fall, resulting in a spike of false alerts.
Further maturing of the system in the final months of the pilot meant that it was able to “detect some true quality and safety events, including falls by residents”, with 22 per cent of actual events detected, compared with just two per cent at the start of the trial.
But alert fatigue continued, and in the “final months of the pilot, staff were no longer able to respond to every alert”, leading to “at least one instance where staff did not respond to an alert that turned out to be a ‘true’ resident fall”.
“The evaluation found that the number of ‘false alerts’ experienced at the sites by the CCTV system was unexpected and unacceptable to staff,” the report said.
“The number of false alerts in the first few months of the pilot meant that the staff at both sites were overwhelmed by the workload associated with responding to alerts.”
The report found that at the conclusion of the pilot there was no evidence that the “AI-based surveillance used in the pilot had influenced, either positively or negatively, the quality and safety of the care provided at the sites”.
South Australian Health Minister Chris Picton said the “botched” rollout had resulted in staff repeatedly responding to alerts that there was a “safety risk for residents when there wasn’t actually one to begin with”.
“That meant that staff had to respond time after time after time to false alerts from this system, taking time away from caring for patients at the bedside to respond to problems with this botched IT system,” he told reporters on Wednesday.
“Clearly the system didn’t work properly, clearly it is not an acceptable level of false alerts to be making, and clearly the staff therefore had to take action in terms of not being able to respond to all of these false alerts.”
“It got to the point, the report notes, where there were some cases of actual true alerts that staff weren’t responding to, because it became a case of the ‘boy who cried wolf’ with this system.”