eCommerceNews Asia - Technology news for digital commerce decision-makers

Survey finds firms deploy AI agents before ready

Sat, 2nd May 2026
Catherine Knowles, News Editor

Monte Carlo has published a survey on the state of AI agent deployment in large enterprises. It found that 64% of enterprise leaders and engineers said their organisations put AI agents into use before they felt ready.

The report surveyed 260 technology practitioners and engineering leaders at organisations with 1,000 or more employees. It highlights strains in day-to-day operations as companies move AI agents from testing into production systems that handle customer-facing or business-critical tasks.

Among respondents, 46% said AI agents are already in full production at their organisations, while a further 39% said the tools are in limited production. The findings suggest many large companies have moved beyond experimentation, but without the operational controls staff believe are needed to run the systems safely.

Pressure appeared strongest among technical staff closest to the systems. Three-quarters of software developers and engineers said their organisation deployed AI agents before it was fully prepared, a noticeably higher share than the survey-wide average.

The report also pointed to tangible failures after deployment. Nearly two-thirds of respondents who moved quickly said they had already found an AI agent accessing data or systems they did not know it could reach. More than a third said they could not disable or roll back a failing agent within minutes.

Another finding suggests many teams expect significant remedial work. Some 70% of respondents said they expect to rebuild or rearchitect systems they have already shipped, highlighting how extensively early deployments may need to be revised.

Perception gap

One of the clearest themes in the findings was a gap between the views of senior engineering leadership and frontline builders. Leaders were more likely than builders to say their organisations treat AI agents like other production applications or services across several areas of operations management.

According to the survey, 69% of leaders said their organisations conduct post-incident reviews for AI agents, compared with 62% of builders. On defined service-level objectives or agreements, the split was 62% for leaders versus 54% for builders. On automated rollbacks or kill switches, 62% of leaders reported having them in place, compared with 52% of builders.

Builders, by contrast, were more likely to say issues were discovered through customer complaints or manual engineering work. The survey found that 52% identified customer complaints as a source of discovery, while 42% pointed to manual effort by engineers.

Senior leaders, including Heads of Engineering, Vice Presidents and Chief Technology Officers, expressed the greatest confidence in their authority to act: 82% said they had clear authority to intervene. Even so, half of those same senior leaders said they had already discovered an AI agent accessing data or systems they did not know it could reach.

Accountability split

The survey also examined how responsibility for failures is assigned. It found lower rates of unauthorised agent access, less pressure to deploy quickly and markedly lower expectations of future rebuilds when accountability for agent failures was explicitly shared between engineering and leadership.

Some 22% of respondents at organisations with shared accountability expected to rebuild or rearchitect systems, compared with 70% where engineering alone carried responsibility. The finding suggests governance and reporting lines may shape operational outcomes as much as technical design.

Visibility issues

Tracing failures across AI systems remained a weak point for many respondents. Only 47% of builders said their systems were easily traceable end to end when something went wrong. Most said they were either combining multiple tools and logs or relying on significant manual effort to identify the source of a problem.

The biggest blind spots centred on agent behaviour itself. Respondents cited how tools are used, where control flow breaks down and the outcomes of agent-to-agent interactions as areas with limited visibility. These issues were flagged by 62% of builders.

The sample included 165 builders, described as engineers, developers and technical leads directly responsible for building and operating AI agent systems, and 95 leaders, including Chief Technology Officers, Vice Presidents, Heads and Directors of Engineering.

Barr Moses, chief executive of Monte Carlo, said the findings reflect what technical teams are seeing on the ground. "The engineers closest to these systems have a clearer and more sobering view of their operational state than almost anyone else in their organizations," Moses said.

She added: "This report isn't an argument for slowing down. It's an argument for investing in the operational layer that makes deployment sustainable - end-to-end traceability, unified visibility, and accountability structures that give the people responsible for failures the tools to actually fix them."