Microsoft Warns AI Agents Could Become Security Vulnerabilities

News | By Kim Soo-ho, AX Content Lab | Seoul Economic Daily (South Korea)

"No company goes without AI these days"... Warning: "Using it carelessly could lead to big trouble"

As global companies rapidly adopt artificial intelligence agents, AI security risk management has emerged as a critical challenge.

Microsoft released its AI security report "Cyber Pulse" on the 11th, warning that AI agents could become "shadow AI risks" as security vulnerabilities amid widespread AI adoption.

According to the report, 80% of Fortune 500 companies currently operate AI-enabled agents built with low-code or no-code approaches, and Microsoft projected this year as the "Year of AI Agents." By region, active agents were distributed across Europe, the Middle East, and Africa (42%), the United States (29%), Asia (19%), and the rest of the Americas (10%). By industry, software and technology accounted for 16%, manufacturing 13%, financial services 11%, and retail 9%.

The rapid spread of AI agent adoption has raised concerns about potential breaches of internal controls. Microsoft stated in the report that "shadow AI risks are expanding, and if malicious actors exploit agents' access rights and permission scopes, agents could become 'double agents.'"

"Shadow AI" refers to employees independently adopting and using AI agents without organizational approval. Agents granted excessive access rights, or fed inappropriate instructions, can become security vulnerabilities within an organization.

For example, Microsoft's AI Red Team discovered cases where agents followed harmful instructions embedded in otherwise routine content, a technique known as prompt injection.

According to a Hypothesis Group survey, 29% of employees have used unauthorized AI agents for work, yet only 47% of organizations have implemented generative AI security safeguards. Microsoft proposed "securing visibility" as the starting point for agent security, recommending management of five core areas: registry, access control, visualization, interoperability, and security.

Microsoft identified seven action items for minimizing AI agent risks, including defining operational scope, strengthening data protection frameworks, providing approved AI platforms, and establishing incident response plans. Microsoft emphasized that "an environment is needed where business, IT, security, AI teams, and developer organizations collaborate, enabling consistent management of all agents from a single central control plane."

AI-translated from Korean. Quotes from foreign sources are based on Korean-language reports and may not reflect exact original wording.