Australia’s financial regulator has warned that governance and assurance practices for AI agents remain immature across the financial sector. The alert comes as banks and superannuation trustees accelerate the deployment of artificial intelligence in both internal operations and customer-facing services.
The Australian Prudential Regulation Authority (APRA) conducted a targeted review of large regulated entities in late 2025 to assess AI adoption and related prudential risks. The review found that all entities surveyed were using AI, but the maturity of risk management and operational resilience varied significantly.
Board Oversight Found Lacking
APRA noted that boards showed strong interest in AI for productivity and customer experience. However, many were still in the early stages of building robust frameworks for managing AI risks. The regulator raised concerns that boards were relying too heavily on vendor presentations and summaries instead of applying independent scrutiny.
A particular gap was insufficient attention to risks such as unpredictable model behavior and the potential impact of AI failures on critical operations. APRA stated that boards should develop a deeper understanding of AI technologies in order to set coherent strategy and exercise effective oversight. AI strategy, it said, must align with an institution’s risk appetite and include monitoring procedures as well as defined steps for error handling.
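To make the last point concrete, the sketch below shows one way monitoring and defined error handling might be wired around a model call. It is a minimal illustration, not APRA guidance: call_model, the confidence score, and the escalation threshold are hypothetical stand-ins for whatever a real deployment exposes.

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-oversight")

    @dataclass
    class ModelResult:
        output: str
        confidence: float  # hypothetical score assumed to come from the model service

    def call_model(prompt: str) -> ModelResult:
        # Placeholder for a real model invocation; returns a canned answer here.
        return ModelResult(output="approve", confidence=0.42)

    def guarded_decision(prompt: str, min_confidence: float = 0.8) -> str:
        """Invoke the model, log the interaction, and fall back to a human
        review queue when the result breaches the monitoring threshold."""
        result = call_model(prompt)
        log.info("model call: prompt=%r output=%r confidence=%.2f",
                 prompt, result.output, result.confidence)
        if result.confidence < min_confidence:
            # Defined error-handling step: route low-confidence cases to a person.
            log.warning("confidence below threshold; escalating to human review")
            return "escalated_to_human"
        return result.output

    print(guarded_decision("Should this claim be fast-tracked?"))

The design point is simply that every invocation is logged and that breaching a monitoring threshold triggers a predefined path to human review rather than an undefined failure.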
Use Cases and Risk Treatment
Regulated entities were found to be trialing or introducing AI in software engineering, claims triage, and loan application processing. Additional use cases included fraud detection, scam disruption, and customer interaction systems. Some institutions treated AI risk in the same way as risk from other technologies, but APRA warned that this approach fails to account for model-specific behavior and bias.
The review identified shortcomings in model behavior monitoring, change management, and decommissioning processes. APRA called for the creation of comprehensive inventories of AI tools and named-person ownership of each AI instance. It also emphasized the need for human involvement in high-risk decisions.
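What such an inventory might look like in code is sketched below, assuming a simple in-memory registry. The field names, risk tiers, and validation rule are illustrative assumptions rather than requirements from APRA’s review; the sketch just ties named ownership and human-in-the-loop review to each registered AI instance.

    from dataclasses import dataclass, field

    @dataclass
    class AIInstance:
        name: str
        owner: str           # named individual accountable for this instance
        vendor: str
        risk_tier: str       # illustrative tiers: "low", "medium", "high"
        human_in_loop: bool  # whether a person must approve its decisions

    @dataclass
    class AIInventory:
        instances: list = field(default_factory=list)

        def register(self, instance: AIInstance) -> None:
            # Example rule: high-risk instances must keep a human in the loop.
            if instance.risk_tier == "high" and not instance.human_in_loop:
                raise ValueError(f"{instance.name}: high-risk use requires human review")
            self.instances.append(instance)

        def owned_by(self, owner: str) -> list:
            return [i for i in self.instances if i.owner == owner]

    inventory = AIInventory()
    inventory.register(AIInstance("claims-triage-bot", owner="J. Citizen",
                                  vendor="ExampleAI", risk_tier="high",
                                  human_in_loop=True))
    print(inventory.owned_by("J. Citizen"))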
Cybersecurity and Non-Human Identities
Cybersecurity emerged as another major concern. APRA said AI adoption is altering the threat environment by introducing additional attack pathways, such as prompt injection and insecure integrations. In some cases, identity and access management practices had not been updated to account for non-human identities, including the AI agents themselves.
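One common way to treat an AI agent as a first-class non-human identity is to issue it short-lived, narrowly scoped credentials rather than reusing a human account. The sketch below illustrates the idea with a hand-signed token; SECRET, the scope names, and the token format are all hypothetical, and a production system would rely on a managed identity provider rather than a hand-rolled scheme.

    import base64, hashlib, hmac, json, time

    SECRET = b"demo-signing-key"  # stand-in: real systems use a managed secret store

    def issue_agent_token(agent_id, scopes, ttl_seconds=300):
        """Mint a short-lived, narrowly scoped credential for a non-human identity."""
        claims = {"sub": agent_id, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def verify_agent_token(token, required_scope):
        """Check the signature, expiry, and scope before honoring an agent request."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False  # tampered with, or signed under a different key
        claims = json.loads(base64.urlsafe_b64decode(body))
        return claims["exp"] > time.time() and required_scope in claims["scopes"]

    token = issue_agent_token("loan-processing-agent", scopes=["read:applications"])
    print(verify_agent_token(token, "read:applications"))  # True
    print(verify_agent_token(token, "write:payments"))     # False: scope never granted

Because the credential expires quickly and names its permitted scopes, a compromised agent cannot be replayed indefinitely or steered into actions it was never granted.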
The volume of AI-assisted software development is placing pressure on change and release controls. APRA recommended that entities apply specific controls on agentic and autonomous workflows, including privileged access management, configuration management, and patching. It also called for mandatory security testing of AI-generated code.
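As an illustration of that last point, a release pipeline can refuse to ship AI-generated changes until a static security scan passes. The sketch below uses Bandit, a widely used Python security linter, as one example scanner; the gating logic and paths are assumptions, not a prescribed control.

    import subprocess
    import sys

    def scan_generated_code(paths):
        """Run a static security scan over AI-generated changes before release.
        Bandit is used here as one example of a security scanner."""
        result = subprocess.run(
            ["bandit", "-r", *paths, "-ll"],  # -ll: report medium severity and above
            capture_output=True, text=True,
        )
        print(result.stdout)
        return result.returncode == 0  # Bandit exits non-zero when it finds issues

    if __name__ == "__main__":
        # A pipeline step might pass in the directories an agent modified.
        if not scan_generated_code(sys.argv[1:] or ["src/"]):
            sys.exit("security findings in AI-generated code; blocking the release")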
Vendor Dependency and Supply Chain Risks
APRA noted that some institutions had become heavily dependent on a single provider for multiple AI instances. Only a few organizations were able to demonstrate an exit plan or substitution strategy for their AI suppliers. The regulator also warned that AI can be present in upstream dependencies that entities may not be aware of, creating hidden supply chain vulnerabilities.
Industry Standards Efforts
Identity and permission controls are also receiving attention from standards bodies. The FIDO Alliance has formed an Agentic Authentication Technical Working Group and is developing specifications for agent-initiated commerce. FIDO noted that existing authentication and authorization models were designed for human interaction, not for delegated actions performed by software. Service providers need ways to verify who or what authorizes actions and under what conditions.
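The shape of the problem can be shown with a toy delegation check: the human principal signs a bounded grant of authority, and the service verifies the signature, the expiry, and the limits before acting. This is a simplified illustration of delegated authorization in general, not the FIDO Alliance’s draft specification, and every name and field in it is hypothetical.

    import hashlib, hmac, json, time

    USER_KEY = b"user-device-key"  # stand-in for a key held by the human's authenticator

    def sign_delegation(agent_id, action, max_amount, ttl=600):
        """The human principal signs a bounded grant of authority to the agent."""
        grant = {"agent": agent_id, "action": action,
                 "max_amount": max_amount, "exp": int(time.time()) + ttl}
        payload = json.dumps(grant, sort_keys=True).encode()
        grant["sig"] = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
        return grant

    def verify_delegation(grant, action, amount):
        """The service checks who authorized the action and under what conditions."""
        grant = dict(grant)  # work on a copy so the caller's grant is untouched
        sig = grant.pop("sig")
        payload = json.dumps(grant, sort_keys=True).encode()
        expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False                            # grant was altered after signing
        return (grant["exp"] > time.time()          # grant has not expired
                and grant["action"] == action       # agent is doing what was delegated
                and amount <= grant["max_amount"])  # and staying within the bound

    grant = sign_delegation("shopping-agent", action="purchase", max_amount=50.0)
    print(verify_delegation(grant, action="purchase", amount=30.0))   # True
    print(verify_delegation(grant, action="purchase", amount=500.0))  # False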
Vendors have presented solutions to FIDO for review, including Google’s Agent Payments Protocol and Mastercard’s Verifiable Intent framework. Separately, the Center for Internet Security (CIS), a non-profit largely funded by the U.S. Department of Homeland Security, has published AI security companion guides. These guides map CIS Controls version 8.1 to large language models, AI agents, and Model Context Protocol environments. The LLM guide addresses prompt injection and sensitive data issues, while the MCP guide focuses on secure access by software tools, non-human identities, and network interactions.
Looking ahead, regulators are expected to continue tightening oversight of AI governance in financial services. APRA has signaled that it will likely issue more formal guidance and expect institutions to demonstrate concrete progress in closing the identified control gaps. The evolution of standards from bodies like FIDO and CIS will further shape how organizations manage AI risks, particularly around identity and authentication for autonomous systems.