ZDNET’s key takeaways
- The adoption of AI agents among businesses is growing rapidly.
- Meanwhile, the development of safety protocols is lagging.
- Deloitte recommends implementing oversight procedures.
Businesses are ramping up their use of AI agents faster than they’re building adequate guardrails, according to Deloitte’s latest State of Generative AI in the Enterprise report.
Published on Wednesday and based on a survey of more than 3,200 business leaders across 24 countries, the study found that 23% of companies currently use AI agents “at least moderately,” a figure projected to jump to 74% within the next two years. Meanwhile, the share of companies that report not using agents at all, currently 25%, is expected to shrink to just 5%.
Also: 43% of workers say they’ve shared sensitive info with AI – including financial and client data
The workplace rise of agents, AI tools designed to perform multistep tasks with little human supervision, isn’t being matched by adequate guardrails, however. Only around 21% of respondents told Deloitte that their company currently has robust safety and oversight mechanisms in place to prevent possible harms caused by agents.
“Given the technology’s rapid adoption trajectory, this could be a significant limitation,” Deloitte wrote in its report. “As agentic AI scales from pilots to production deployments, establishing robust governance should be essential to capturing value while managing risk.”
What could go wrong?
Companies like OpenAI, Microsoft, Google, Amazon, and Salesforce have marketed agents as productivity-boosting tools, with the core idea that businesses can offload repetitive, low-stakes workplace operations to them while human employees focus on more important tasks.
Also: Bad vibes: How an AI agent coded its way to disaster
Greater autonomy, however, brings greater risk. Unlike more limited chatbots, which require careful and constant prompting, agents can interact with various digital tools to, for example, sign documents or make purchases on behalf of organizations. This leaves more room for error, since agents can behave in unexpected ways — sometimes with disastrous consequences — and be vulnerable to prompt injection attacks.
Zoom out
The new Deloitte report isn’t the first to point out that AI adoption is outpacing safety.
One study published in May found that the vast majority (84%) of IT professionals surveyed said their employers were already using AI agents, while only 44% said they had policies in place to regulate the activity of those systems.
Also: How OpenAI is defending ChatGPT Atlas from attacks now – and why safety’s not guaranteed
Another study, published in September by the nonprofit National Cybersecurity Alliance, found that while a growing number of people use AI tools like ChatGPT daily, including at work, most have never received safety training from their employers covering, for example, the privacy risks of sharing information with chatbots.
And in December, Gallup published the results of a poll showing that while the use of AI tools among individual workers had increased since the previous year, almost one-quarter (23%) of respondents said they didn’t know if their employers were using the technology at the organizational level.
The upshot
It would be unfair to business leaders, of course, to demand bulletproof guardrails around AI agents at this very early stage. Technology always evolves faster than our understanding of how it can go awry, and, as a result, policy at every level tends to lag behind deployment.
Also: How these state AI safety laws change the face of regulation in the US
That’s been especially true with AI: the cultural hype and economic pressure driving tech developers to release new models, and organizational leaders to start using them, are arguably unprecedented.
But early studies like Deloitte’s new State of Generative AI in the Enterprise report point to what could very well become a dangerous divide between deployment and safety as industries scale up their use of agents and other powerful AI tools.
Also: 96% of IT pros say AI agents are a security risk, but they’re deploying them anyway
For now, oversight should be the watchword: Businesses need to understand the risks of using agents internally, and they need policies and procedures that keep those agents from going off the rails and limit the harm when they do.
“Organizations need to establish clear boundaries for agent autonomy, defining which decisions agents can make independently versus which require human approval,” Deloitte recommends in its new report. “Real-time monitoring systems that track agent behavior and flag anomalies are essential, as are audit trails that capture the full chain of agent actions to help ensure accountability and enable continuous improvement.”
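In practice, the pattern Deloitte describes amounts to a permission gate in front of every agent action plus an append-only log behind it. Here is a minimal, purely illustrative Python sketch of that idea; the action names, the AUTONOMOUS_ACTIONS and APPROVAL_REQUIRED lists, and the input()-based approval prompt are our own assumptions standing in for a real approval workflow, not anything specified in the report:

```python
import json
import time
from dataclasses import dataclass, field

# Hypothetical autonomy policy: which actions an agent may take on its own
# versus which require human sign-off. Names and lists are illustrative only.
AUTONOMOUS_ACTIONS = {"search_docs", "draft_email"}
APPROVAL_REQUIRED = {"sign_document", "make_purchase"}

@dataclass
class AuditTrail:
    """Append-only log capturing the full chain of agent actions."""
    entries: list = field(default_factory=list)

    def record(self, action, params, decision):
        self.entries.append({
            "timestamp": time.time(),
            "action": action,
            "params": params,
            "decision": decision,
        })

def request_human_approval(action, params):
    """Stand-in for a real approval workflow (ticket, chat prompt, etc.)."""
    answer = input(f"Approve {action} with {json.dumps(params)}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_agent_action(action, params, trail):
    """Gate each agent action against the autonomy policy before running it."""
    if action in AUTONOMOUS_ACTIONS:
        trail.record(action, params, "auto-approved")
        return f"executed {action}"
    if action in APPROVAL_REQUIRED:
        if request_human_approval(action, params):
            trail.record(action, params, "human-approved")
            return f"executed {action}"
        trail.record(action, params, "denied")
        return f"blocked {action}"
    # Unknown actions are treated as anomalies: flag and block, don't guess.
    trail.record(action, params, "flagged-anomaly")
    return f"flagged {action}"

if __name__ == "__main__":
    trail = AuditTrail()
    print(execute_agent_action("search_docs", {"query": "Q3 invoices"}, trail))
    print(execute_agent_action("make_purchase", {"amount_usd": 4999}, trail))
    print(json.dumps(trail.entries, indent=2))
```

The key design choice in a gate like this is that unrecognized actions are flagged and blocked by default rather than silently executed, which is what makes real-time anomaly detection and after-the-fact accountability possible.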

