
Microsoft warned that the proliferation of artificial intelligence agents is creating new security risks, urging organizations to clearly define the operational scope of AI agents and establish data protection rules.
Microsoft released its AI security report "Cyber Pulse" on the 12th, revealing that more than 80% of Fortune 500 companies are actively building and operating agents using no-code and low-code tools. The figures were compiled from agents built with Microsoft Agent Builder and Microsoft Copilot Studio. No-code and low-code tools are development platforms that require little or no hand-written code.
In the report, Microsoft forecast this year as the "Year of AI Agents." The company explained that AI-based automation is rapidly spreading across industries as the proliferation of low-code and no-code tools has created an environment where non-developers can build agents themselves.
Security risks are growing accordingly. If an attacker exploits an agent's access permissions and authority scope, the agent could be turned into an unintended "double agent," effectively a spy inside the organization. Microsoft has also discovered cases in which attackers persistently manipulated an AI assistant's memory to steer its responses in specific directions and distort its reasoning.
To reduce the security risks surrounding AI agents, Microsoft recommended that organizations establish processes for transparently sharing information about their AI agents internally. The company presented specific action items including: documenting the operational purpose of each AI agent and granting it only the minimum access privileges it needs; strengthening data protection frameworks; providing approved AI platforms; establishing incident response plans; building regulatory compliance systems; implementing enterprise-wide integrated risk management; and fostering a culture of security innovation.
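The first two action items, documenting each agent's purpose and granting only minimum access privileges, can be illustrated with a small sketch. The following Python is purely hypothetical: the policy class, permission names, and tools are illustrative assumptions, not part of Microsoft Agent Builder or Copilot Studio.

```python
# Hypothetical sketch of least-privilege enforcement for AI agents.
# All names here (AgentPolicy, action strings, the example agent) are
# illustrative assumptions, not any real product's API.

from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Records an agent's documented purpose and its allowed actions."""
    name: str
    purpose: str                                # documented operational purpose
    allowed_actions: set = field(default_factory=set)


class AgentPermissionError(Exception):
    """Raised when an agent attempts an action outside its granted scope."""


def invoke(policy: AgentPolicy, action: str, target: str) -> str:
    """Deny by default: an action runs only if it was explicitly granted."""
    if action not in policy.allowed_actions:
        raise AgentPermissionError(
            f"agent '{policy.name}' is not authorized for '{action}'"
        )
    return f"{policy.name} executed {action} on {target}"


# Example: an HR FAQ agent is granted read access to HR documents and
# nothing else, so an attempt to send email is blocked.
hr_bot = AgentPolicy(
    name="hr-faq-bot",
    purpose="Answer employee HR policy questions",
    allowed_actions={"read_hr_docs"},           # minimum privileges only
)

print(invoke(hr_bot, "read_hr_docs", "vacation-policy.md"))
try:
    invoke(hr_bot, "send_email", "all-staff")   # outside the agent's scope
except AgentPermissionError as err:
    print("blocked:", err)
```

The deny-by-default check is the key design choice: an agent compromised through prompt or memory manipulation still cannot act beyond the narrow scope documented in its policy.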
