Manufacturing SMEs: Chasing certainty in GenAI and cyber


Jason Stevens
Contributor

The million-dollar question about the return on investment of cybersecurity and generative AI (GenAI) for manufacturing SMEs may finally be flickering with an answer on the APAC horizon.

“If we spend an extra million dollars on cybersecurity, are we a million dollars safer?” is the question Louise McGrath, head of industry development and policy at the Australian Industry Group (Ai Group), often hears as she guides members in the engineering, construction, defence, clean energy, food and beverage sectors towards safer practices in Industry 4.0 and emerging GenAI applications.

“AI and cybersecurity remain a day-to-day concern as our members strive to remain internationally competitive and keep their plants productive,” she said on the first episode of Verizon’s Securing AI podcast series.

The battle to keep costs down while securing the future through innovation invites risk: one Ai Group member organisation had to shut down the IoT systems on its digitised factory floor after a ransomware attack.

Quantifying this risk in dollars and cents has historically posed challenges, some insurmountable, even though most Ai Group members adhere to the principles of the Essential Eight guidelines.

Ms McGrath discussed these issues with Chris Novak, head of cybersecurity consulting at Verizon, who pointed out that GenAI can be a game-changer in managing cybersecurity investments: paired with risk quantification services, it can help direct the right security actions to the right budget, a promising answer to challenges that once seemed intractable.

“We’re getting better at quantifying risk in terms of dollars and cents across a plethora of assets, including applications, data and OT/IoT for CISOs and manufacturing leaders,” Mr Novak said.
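
To make that concrete, consider a textbook annualised loss expectancy (ALE) calculation, offered here as an illustration rather than a figure from the podcast: if a ransomware outage would cost a plant roughly $2 million per incident and is expected to occur once every four years, the annualised exposure is $2 million × 0.25, or $500,000 a year. A $1 million control that halves that likelihood saves about $250,000 a year, so on this simple model it would pay for itself in roughly four years. Risk quantification services refine exactly these inputs across a company’s applications, data and OT/IoT assets.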

Ms McGrath agreed: “I think this (risk quantification) will be essential for many of our members.” She emphasised that once that confidence is established, members stand to gain significant benefits from the technology.

Her team already runs national workshops preparing industrial members for the future of GenAI, with a particular focus on safety, an area that shapes business decisions without directly affecting production.

From simulating factory floors with AR/VR goggles to monitoring driver fatigue, the Ai Group consults with CSIRO and the National AI Centre to explore how GenAI can also impact discrete areas of business, including proprietary data reporting, carbon reporting and other ESG activities.

“We wanted to use real-world examples, and safety touches the entire organisation,” said Ms McGrath, noting that directors can go to jail for safety lapses. Demonstrating how GenAI can improve safety standards is crucial for gaining buy-in and ensuring future adoption.

While this is an excellent way to practise giving everyone narrow access to AI responsibly and safely, Mr Novak warns of the danger of companies feeling pushed into using GenAI for fear of being left behind.

He likens this to the early gold rush era of the internet, when everyone slapped a dot com on their name to boost their stock value. “We’re now seeing this with AI, and my concern is that we are over-indexing on the technology and looking at it as a panacea,” he said.

Verizon’s Chris Novak, Ai Group’s Louise McGrath and InnovationAus.com’s Corrie McLeod

The danger is that SMEs will enter the AI arena with only a minimally mature understanding of how to leverage or safeguard the technology.

“The data breaches of today and the escalating cyber attack landscape can be traced back to the dot com boom, when security and privacy were largely afterthoughts,” he said.

GenAI feeds on massive amounts of company and customer data, and holding too much of it once again exposes organisations to misuse, much as in the formative years of the tech boom.

Verizon has published its own responsible AI guidelines, warning that many SMEs do not fully understand how GenAI could lead them astray.

A lack of internal governance models leads employees to misuse tools internally, though often not out of malicious intent. Risk factors include intellectual property leakage, copyright infringement and even potential attacks against a company’s AI infrastructure.

“Just because you built something internally that allows you to do these new and exciting things doesn’t necessarily mean you put the proper safeguards and controls in place to protect it,” Mr Novak continued. 

Further risk stems from ‘shadow AI’, a term for situations where employees bypass the formal IT security vetting process to build internal applications, leaving unregulated and potentially risky AI running within the organisation.

Consequently, Verizon advises SMEs and larger companies alike on setting up internal Responsible AI councils, which oversee the ethical use of AI within the organisation and help mitigate the risks associated with AI misuse.

“In addition, constant network monitoring and vigilance must be in place,” explained Mr Novak, adding that this includes “penetration testing, red teaming and other forms of assessments.” 

He warned that these efforts must hunt for new and different vulnerabilities that may not even have been discovered yet.

Ms McGrath keenly feels the tension between rushing in and holding back.  

“We’ve got members who are just gung-ho, saying, ‘Let’s go – we’ll work out the details later’,” she said, “and then we’ve got others saying, ‘Well, we used to think asbestos was safe; I don’t want to use this tech unless there are guardrails and I can use it responsibly.’”

Regulations, human agency and leadership are central to Ms McGrath’s conversations with manufacturing teams. She argues that existing regulations already address many fears around AI, such as discrimination in employment and decision-making, and false advertising.

Instead of focusing on the technology itself, she argues, we should regulate the behaviour it is used for.

“If you’re lying to a customer,” she cautioned, “it doesn’t matter if you’re standing on a snake oil box, using a fax machine or AI. The fact is, misleading the customer is still illegal under Australian law.”

The Securing AI podcast series and associated articles are produced by InnovationAus.com in partnership with Verizon. 

