
AI promises to transform businesses, but new research from ISACA highlights a disturbing blind spot: many organisations lack the tools and processes to stop or explain a misbehaving AI system. That gap raises the risk that flawed or compromised models could keep operating unchecked, compounding damage to operations, reputation, and compliance. This article breaks down ISACA’s findings and points to practical steps for shoring up AI governance before a crisis strikes.
ISACA’s survey delivers stark numbers about incident control: 59% of digital trust professionals said they did not know how quickly their organisation could interrupt or halt an AI system during a security incident. Only a minority — 21% — reported they could meaningfully intervene within 30 minutes. Those figures suggest many companies may be unprepared to stop an AI-driven failure from growing into an irreversible problem.
Left unchecked, a corrupted or runaway AI system can produce cascading operational failures and expose sensitive data or business logic to attackers. In practice this means longer downtime, harder recovery, and greater legal exposure when regulators or customers demand explanations. The ability to pause, inspect, and remediate AI behaviour is no longer optional; it is a basic requirement for risk management.
Key findings from the ISACA report
The report reveals a broad and uneven picture of readiness across organisations, touching on detection, analysis, and accountability. While some teams report human checks on AI outputs, many companies still lack clear responsibilities or incident analysis capabilities. Below are the headline statistics that matter for leaders planning AI adoption.
- 59% of respondents did not know how quickly they could interrupt an AI system during a security incident.
- 21% said they could step in within 30 minutes.
- 42% expressed confidence in their organisation’s ability to analyse and clarify serious AI incidents.
- 20% did not know who would be responsible if an AI system caused damage.
- 38% identified the Board or an Executive as ultimately responsible for AI failures.
- 40% said humans approve almost all AI actions before deployment, while 26% evaluate outcomes.
- Over a third of organisations do not require employees to disclose when or where they use AI in work products.
Ali Sarrafi, CEO & Founder of Kovant, distilled the problem sharply: “Systems are being embedded into critical workflows without the governance layer needed to supervise and audit their actions.” He argues that without quick-stop controls, explainability, and clear ownership, “the business is not in control of that system.”
Confidence in incident analysis is low: only 42% of respondents said they could meaningfully investigate a serious AI incident. That lack of forensic clarity makes it harder to learn from mistakes, defend decisions to regulators, or meet disclosure obligations. Repeated incidents become more likely when organisations cannot diagnose root causes or fix underlying governance gaps.
Accountability also remains fuzzy. One in five organisations admitted they don’t know who would be responsible if AI caused harm, and only 38% pinpointed the Board or an Executive as ultimately accountable. This diffusion of responsibility weakens escalation channels and slows decisive action when models go wrong, increasing legal and reputational exposure.
Some reassurance exists: many organisations keep humans in the loop, with 40% reporting human approval for most AI actions and 26% performing outcome evaluations. Those practices help, but Sarrafi warns they aren’t enough on their own — oversight must be backed by systemic controls that allow instant pause, audit trails, and enforceable escalations.
Why stronger governance matters
Effective AI governance treats models as digital employees that require clear ownership, auditing, and the ability to be paused or overridden. Sarrafi recommends a structured management layer that defines who is accountable, what thresholds trigger intervention, and how humans can inspect model decisions in real time. Built-in visibility and control are essential if organisations want to scale AI without multiplying risk.
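To make that recommendation concrete, here is a minimal, illustrative sketch of what such a management layer might record for each AI system. The class, field names, and thresholds below are hypothetical examples for discussion, not a specification from ISACA or Kovant.

```python
from dataclasses import dataclass, field


@dataclass
class AIGovernancePolicy:
    """Illustrative record of ownership and intervention rules for one AI system."""
    system_name: str
    named_owner: str             # person accountable for day-to-day behaviour
    executive_sponsor: str       # board- or executive-level accountability
    error_rate_threshold: float  # fraction of bad outcomes that triggers intervention
    escalation_contacts: list[str] = field(default_factory=list)

    def requires_intervention(self, errors: int, total: int) -> bool:
        """Return True when the observed error rate crosses the agreed threshold."""
        if total == 0:
            return False
        return errors / total >= self.error_rate_threshold


# Example: a hypothetical invoice-triage model with a named owner and sponsor.
policy = AIGovernancePolicy(
    system_name="invoice-triage",
    named_owner="ops-ml-lead@example.com",
    executive_sponsor="cfo@example.com",
    error_rate_threshold=0.05,
    escalation_contacts=["security-oncall@example.com"],
)

if policy.requires_intervention(errors=7, total=100):
    print(f"Pause {policy.system_name} and notify {policy.escalation_contacts}")
```

Even a simple record like this answers the questions the survey found unanswered: who owns the system, who is accountable at executive level, and what measurable condition forces a pause.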
When governance is an afterthought, even small errors can cascade into major incidents with financial loss and brand damage. Regulators increasingly expect senior leadership to be accountable for AI risks, and public scrutiny can amplify the consequences of a single high-profile failure. The organisations that succeed will bake governance into architecture and operations from day one.
Practical steps to regain control
Leaders can take concrete actions now to reduce AI risk and improve incident readiness. Start with clear ownership and escalation pathways, instrument models for real-time monitoring, and ensure explainability and audit logs are available for post-incident analysis. Below are essential control points every organisation should consider; a minimal code sketch of the kill-switch and audit-logging controls follows the list.
- Assign a named owner and executive sponsor for AI systems and outcomes.
- Define incident response playbooks that include an immediate “kill switch” capability.
- Implement logging, versioning, and explainability tools so behaviour can be audited.
- Require disclosure policies so employees report where and when they use AI in work outputs.
- Keep humans in approval loops and mandate periodic outcome reviews tied to risk thresholds.
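As referenced above, the sketch below shows how a kill switch, an audit trail, and a human approval gate might wrap a model call. The `governed_call` wrapper, the `model_is_paused` flag, and the `call_model` stub are hypothetical illustrations under assumed names, not any specific product's API.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

# Global kill switch: flipping this to True halts all AI-driven actions immediately.
model_is_paused = False


def call_model(prompt: str) -> str:
    """Stand-in for a real model call; replace with your own inference client."""
    return f"stub response to: {prompt}"


def governed_call(prompt: str, approved_by: str | None = None) -> str | None:
    """Run the model only if it is not paused and a human has approved the action."""
    if model_is_paused:
        audit_log.warning("Blocked call: kill switch engaged")
        return None
    if approved_by is None:
        audit_log.warning("Blocked call: no human approval recorded")
        return None

    result = call_model(prompt)
    # Append-only audit trail: who approved, what was asked, what came back, and when.
    audit_log.info(json.dumps({
        "timestamp": time.time(),
        "approved_by": approved_by,
        "prompt": prompt,
        "response": result,
    }))
    return result


print(governed_call("Summarise incident report 42", approved_by="analyst@example.com"))
```

The point of the wrapper is architectural rather than clever code: every AI action passes through a single chokepoint where it can be stopped in seconds, attributed to an approver, and reconstructed later from the log.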
Some organisations are moving in the right direction, but the ISACA data shows many still treat AI risk as a purely technical issue. The reality is that AI governance requires organisational change: policy, people, and processes aligned with technical controls. That alignment is the difference between scaling AI safely and facing avoidable crises.
Want to learn more? The AI & Big Data Expo — part of TechEx and co-located with leading technology events in Amsterdam, California, and London — brings industry leaders together to discuss governance, strategy, and practical controls. For regular updates and expert analysis, subscribe to TechForge Media’s newsletters and follow future enterprise tech events.
Source: AI News