New Research Reveals Widespread Gaps in AI Incident Preparedness and Response

A new report from the professional association ISACA has exposed significant vulnerabilities in how organizations respond to artificial intelligence system failures. The findings suggest that a majority of enterprises lack the governance and technical controls needed to swiftly halt or explain a malfunctioning AI system, raising the risk of operational and reputational damage.

According to the survey, 59% of digital trust professionals could not articulate how quickly their organization could interrupt an AI system during a security incident. Only 21% reported being able to meaningfully intervene within thirty minutes. This indicates a landscape where compromised AI systems may continue to operate unchecked.
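
To make the idea of interruption time concrete, here is a minimal sketch of a kill-switch pattern: a shared flag an operator can flip, which a running AI task checks before each action. The `KillSwitch` class and the agent loop below are illustrative assumptions, not taken from the report.

```python
import threading
import time

class KillSwitch:
    """A shared flag an operator can flip to halt AI activity immediately."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self) -> None:
        self._halted.set()

    @property
    def halted(self) -> bool:
        return self._halted.is_set()

def run_agent(kill_switch: KillSwitch, actions: list[str]) -> None:
    # Check the switch before every action, so an operator-initiated halt
    # takes effect within one action cycle rather than an open-ended delay.
    for action in actions:
        if kill_switch.halted:
            print("Agent halted by operator.")
            return
        print(f"Executing: {action}")
        time.sleep(0.1)  # stand-in for real work

switch = KillSwitch()
switch.halt()  # simulate an operator intervening during an incident
run_agent(switch, ["draft email", "update record", "send payment"])
```

An organization that can answer the survey's question would know, for each deployed system, where this switch lives, who can flip it, and how long one action cycle takes.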

Governance and Accountability Deficits

Ali Sarrafi, CEO and Founder of the autonomous enterprise platform Kovant, commented on the structural issues highlighted by the data. He stated that AI systems are often embedded into critical workflows without the necessary governance layer to supervise and audit their actions. The consequence, he argued, is a loss of control.

“If a business cannot quickly halt an AI system, explain its behaviour, or even identify who is to be held accountable, the business is not in control of that system,” Sarrafi said.

The survey data supports this assessment. Only 42% of respondents expressed confidence in their organization’s ability to analyse and clarify serious AI incidents. Furthermore, accountability remains unclear, with 20% reporting they do not know who would be responsible if an AI system caused harm. Just 38% identified the Board or an Executive as ultimately accountable.

The Limits of Human Oversight

Some organizations rely on human checks: 40% of respondents stated that humans approve almost all AI actions before deployment, and 26% said humans evaluate AI outcomes after deployment. However, experts warn that without a robust governance infrastructure, human oversight alone is insufficient to identify and resolve issues before they escalate.
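
As a rough illustration of the pre-deployment approval pattern respondents describe, the sketch below gates each AI-proposed action behind an explicit human decision. The `ProposedAction` structure and the console prompt are hypothetical stand-ins for a real review workflow.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_level: str  # e.g. "low", "medium", "high"

def human_approval_gate(action: ProposedAction) -> bool:
    """Block an AI-proposed action until a human reviewer approves it."""
    answer = input(f"Approve '{action.description}' (risk: {action.risk_level})? [y/N] ")
    return answer.strip().lower() == "y"

def execute_if_approved(action: ProposedAction) -> None:
    if human_approval_gate(action):
        print(f"Executing: {action.description}")
    else:
        print(f"Rejected: {action.description}")  # nothing runs without sign-off

execute_if_approved(ProposedAction("send refund of $250 to customer", "medium"))
```

The gate catches bad actions one at a time, which is the experts' point: it does not, by itself, reveal systemic patterns or tell anyone when the reviewers themselves are being overwhelmed.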

The problem is compounded by transparency gaps. Over a third of organizations do not require employees to disclose where and when AI is used in work products, creating potential blind spots for management and auditors.

Sarrafi emphasized that slowing AI adoption is not the solution. Instead, he advocates for a fundamental rethinking of AI management. He proposes treating AI systems as digital employees within a structured management layer.

This framework would require clear ownership, defined escalation paths, and the immediate ability to pause or override systems when risk thresholds are crossed. “This way, agents stop being mysterious bots and become systems you can inspect and trust,” he noted.
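
A minimal sketch of what treating an AI system as a digital employee might look like, assuming a simple registry entry with a named owner, an escalation contact, and a risk threshold that pauses the agent automatically; none of this is drawn from Kovant's actual platform.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Registers an AI system like a digital employee, with clear ownership."""
    name: str
    owner: str               # person accountable for this agent
    escalation_contact: str  # who is notified when risk crosses the line
    risk_threshold: float    # pause the agent above this risk score
    paused: bool = field(default=False)

    def report_risk(self, risk_score: float) -> None:
        # Pause and escalate automatically when the threshold is crossed,
        # forcing a human decision before the agent can continue.
        if risk_score > self.risk_threshold:
            self.paused = True
            print(f"[ESCALATION] {self.name} paused at risk {risk_score:.2f}; "
                  f"notifying {self.escalation_contact}")

    def act(self, action: str) -> None:
        if self.paused:
            print(f"{self.name} is paused; '{action}' blocked pending review by {self.owner}")
            return
        print(f"{self.name} executing: {action}")

agent = AgentRecord("invoice-bot", owner="A. Finance Lead",
                    escalation_contact="oncall@example.com", risk_threshold=0.7)
agent.act("reconcile invoices")
agent.report_risk(0.85)    # a simulated anomaly crosses the threshold
agent.act("issue payment")  # now blocked until a human intervenes
```

The design choice worth noting is that pausing is the default response to a crossed threshold: continued operation requires a human decision, rather than a human having to notice a problem in time to stop it.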

Integrating Governance from the Start

The core recommendation from industry observers is that governance cannot be an afterthought. As AI becomes more deeply embedded in core business functions, visibility and control mechanisms must be designed into the architecture from the outset. Organizations that successfully implement this integrated approach are expected to reduce risk and scale AI use with greater confidence.

Currently, many businesses appear to treat AI risk as a purely technical problem rather than an enterprise-wide management challenge. This perspective may lead to inadequate preparation for incidents. Without proper governance and clear accountability, businesses cede control of their AI systems, leaving them exposed to potential financial and reputational harm from even minor errors.

The evolving regulatory landscape, which is making senior leadership more accountable for technology failures, adds further urgency to these findings. Organizations are now under increased pressure to demonstrate they can deploy AI both safely and effectively.

Looking ahead, the industry anticipates a shift towards more formalized AI incident response plans and governance frameworks. This will likely involve cross-functional teams, regular audits, and clearer lines of responsibility. The development of standardized practices for explaining AI decisions and actions, often called AI explainability, will also be a critical focus area for both technologists and business leaders seeking to build trustworthy and controllable AI systems.
