Envision a world where your sensitive enterprise data flows into artificial intelligence systems without friction. Large Language Model (LLM) firewalls may emerge as essential for Australian businesses, unlocking generative AI’s power while reducing the risks of data leaks and compliance failures.
An LLM firewall works like a security checkpoint for AI, filtering data flowing in and out of large language models.
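To make that checkpoint analogy concrete, the sketch below shows, in rough terms, the kind of outbound and inbound filtering such a firewall might perform. It is a minimal illustration under assumed rules: the patterns, policy terms and function names are invented for this example and do not describe any particular product.

```python
import re

# Hypothetical patterns an LLM firewall might screen for before a prompt
# leaves the enterprise boundary (illustrative only, not exhaustive).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def filter_outbound(prompt: str) -> str:
    """Redact sensitive values before the prompt reaches the model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

def filter_inbound(response: str, blocked_terms: list[str]) -> str:
    """Block model output that trips a simple policy rule."""
    if any(term.lower() in response.lower() for term in blocked_terms):
        return "[RESPONSE BLOCKED BY POLICY]"
    return response

if __name__ == "__main__":
    prompt = "Email jane.doe@example.com about card 4111 1111 1111 1111."
    print(filter_outbound(prompt))
    # -> Email [REDACTED:email] about card [REDACTED:card_number].

    reply = "Sure, the admin password is hunter2."
    print(filter_inbound(reply, blocked_terms=["password"]))
    # -> [RESPONSE BLOCKED BY POLICY]
```

In practice, production-grade firewalls add far more, such as classifiers and audit trails, but the principle of inspecting traffic in both directions is the same.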
The concept surfaced in episode three of the Securing AI podcast during a discussion between Robert Le Busque, regional vice president for Asia Pacific at Verizon, and Huon Curtis, senior research fellow at the Tech Policy Design Centre, as they explored the tension between applications and implications as GenAI goes mainstream in Australia.
“While GenAI use is broad-based, its impact has been fairly narrow so far in areas like DevOps and customer service,” says Le Busque. “However, it can potentially be a multiplier for productivity, especially for knowledge workers.”
Safeguards like LLM firewalls protect sensitive data and support compliance, particularly in complex operating environments like telecommunications and anti-money laundering (AML), where human error is often the weakest link.
“Safe adoption is fundamentally about how we learn from mistakes and how that feeds into innovation strategies,” adds Dr Curtis, who recently worked on telecommunications resilience. “In sectors like telecoms, failure to think through the consequences of total failure can have widespread effects.”
Curtis highlights recent outages affecting millions of customers’ phone lines and internet services. These serve as a wake-up call for stakeholders across companies, supply chains, and government, each with uneven resources and relative power.
He urges industries to use the United Nations principles of critical infrastructure resilience to better prepare systems for worst-case scenarios.
“Australia loves the Royal Commission as a way of learning, but it’s not necessarily efficient in driving innovation in response to change with cyber resilience,” he cautions. Royal Commissions are formal public inquiries into matters of major significance in Australia, and critics point to their sluggish pace and lack of measurable outcomes.
Managing GenAI risks while driving innovation calls for leadership that bridges technology, ethical use, and value creation.
Both agree on the need for shared responsibility, where companies and public agencies take ownership of social, economic, and environmental impacts. This means leaders must balance innovation like LLM firewalls with ethical use and secure AI adoption, ensuring teams always learn and adapt in a climate of uncertainty.
Verizon took early action, establishing a helicopter view of GenAI platforms and relevant use cases across various business areas. Centres of Excellence (CoEs) bring together cross-functional teams of up to 20 people from finance, human resources, operations, product management, product engineering, legal and more.
“CoEs ensure a comprehensive approach to AI adoption, providing enterprise-wide oversight while enabling specific teams to drive AI initiatives safely and competently,” explains Le Busque.
Inside this enterprise-wide framework, the real power lies in giving teams access to standardised tools and workflows. This accelerates experimentation, allowing teams to innovate faster, stay secure, and ensure consistency across the organisation.
Crucially, CoEs are designed to be flexible and technology-agnostic. Whether an organisation is dealing with software-defined networking, convergent private networks, or GenAI, the goal is to ensure dynamic users have frictionless access to dynamic applications.
Curtis emphasises, “We need to get better at thinking through the consequences of these AI applications and what they might mean economically, socially, and environmentally.”
His concern mirrors Louise McGrath’s point from episode two in the series, where she noted the conflicting attitudes towards AI: “Some members say, ‘Let’s go! We’ll work out the details later,’ while others warn, ‘We thought asbestos was safe once — I’m not using this without guardrails.’”
Like McGrath, who heads the Australian Industry Group, Curtis believes that maximising AI adoption requires strengthening leadership across all levels of an organisation, not just the C-suite.
This is where risk quantification becomes critical, helping leaders understand the measurable impact of AI-driven decisions. Episode two explores how GenAI helps organisations price risk, turning cybersecurity decisions into measurable business strategies.
Leaders who understand the return on investment (ROI) of safeguards like emerging LLM firewalls may be better equipped to build the financial case for them.
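One common way to express that return is a return on security investment (ROSI) calculation: the expected annual losses a control removes, set against what the control costs. The figures in this short sketch are entirely hypothetical and chosen only to show the arithmetic.

```python
# Return on security investment (ROSI) for a hypothetical LLM firewall.
# Every figure below is an assumed example, not real data.

def annual_loss_expectancy(single_loss: float, incidents_per_year: float) -> float:
    """Expected yearly loss: cost of one incident times incident frequency."""
    return single_loss * incidents_per_year

ale_without = annual_loss_expectancy(single_loss=400_000, incidents_per_year=0.5)
ale_with = annual_loss_expectancy(single_loss=400_000, incidents_per_year=0.1)
control_cost = 60_000  # assumed yearly cost of running the firewall

rosi = (ale_without - ale_with - control_cost) / control_cost
print(f"ROSI: {rosi:.0%}")  # (200,000 - 40,000 - 60,000) / 60,000 = 167%
```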
“At Verizon, through our GenAI program, individuals train to understand the cost implications of different types of LLMs,” explains Le Busque.
Understanding these cost factors is crucial, whether it’s the cost of a base model, a highly tuned model, or the choice between hosted and on-premise deployment. This knowledge is essential for building a solid business case and demonstrating value to the enterprise.
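As a back-of-the-envelope illustration of how those cost factors might be compared, the sketch below sets usage-based pricing for hosted models against a fixed monthly figure for an on-premise deployment. Every price and volume in it is an assumed number, not real vendor pricing.

```python
# Illustrative monthly cost comparison for an LLM business case.
# All figures are hypothetical assumptions, not real vendor pricing.

MONTHLY_REQUESTS = 500_000     # assumed request volume
TOKENS_PER_REQUEST = 1_500     # assumed prompt + response tokens

def hosted_model_cost(price_per_million_tokens: float) -> float:
    """Usage-based cost of a hosted base or fine-tuned model."""
    total_tokens = MONTHLY_REQUESTS * TOKENS_PER_REQUEST
    return total_tokens / 1_000_000 * price_per_million_tokens

def on_premise_cost(gpu_hourly_rate: float, hours_per_month: float = 730) -> float:
    """Fixed cost of running a model on dedicated infrastructure."""
    return gpu_hourly_rate * hours_per_month

if __name__ == "__main__":
    print(f"Hosted base model:       ${hosted_model_cost(0.50):>9,.2f} / month")
    print(f"Hosted fine-tuned model: ${hosted_model_cost(3.00):>9,.2f} / month")
    print(f"On-premise deployment:   ${on_premise_cost(4.00):>9,.2f} / month")
```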
“Humans will always be involved, but they can be the weakest link,” says Le Busque. “AI’s complexity, particularly in areas like AML, can easily overwhelm even well-intentioned teams.”
Ultimately, he sees a future where LLM firewalls drive safer, automated decisions, ensure compliance, and unlock GenAI’s full potential.
The Securing AI podcast series and associated articles are produced by InnovationAus.com in partnership with Verizon.