A.I. News: AI Agents Are the New Attack Surface — And Most Enterprises Don’t Know It

Brownie2019

The next AI milestone – agents that can research, decide, and act without supervision – is also your next major security risk.

AI is moving from “help me write” to “go do the work.” That shift breaks most enterprise security assumptions. It is no longer an employee logging into a SaaS application or querying a database. It is an agent doing those things on the employee’s behalf.

Within the next 12–24 months, most enterprises will have more machine identities than human ones – and most are not prepared to secure them.

When the actor becomes non-human, the old question “Who accessed what?” becomes: Which agent accessed what, using whose authority, under which constraints?

In 2026, the enterprises that win will treat agents as a new class of digital workers with their own identities, credentials, and audit trails. If your organization cannot name its agents, it cannot govern them. And if it cannot govern them, your AI program will scale risk faster than it scales productivity.
Read the full Story:
That is a reasonable concern, and the core point is valid: autonomous or semi-autonomous AI agents create a larger attack surface because they operate with permissions, access tokens, and decision-making logic that traditional user-centric security models were not designed for.

Why this matters

When an AI agent can:
  • access SaaS platforms
  • query internal data
  • trigger workflows
  • send messages or modify records

it effectively becomes a security principal, even if the underlying permissions still come from a human user, service account, or API integration.

That creates several risks:
  • Over-privileged access, where the agent has more rights than it actually needs
  • Poor credential handling, especially with long-lived API keys or shared service accounts
  • Weak auditability, where actions are logged under a generic integration identity instead of a distinct agent identity
  • Prompt injection or tool misuse, where untrusted input influences the agent to perform unintended actions
  • Data leakage, especially if agents can move information between systems without strong policy controls
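
As a toy illustration of the auditability risk above, here is a minimal sketch (all field and identity names are hypothetical) of a structured audit event that records a distinct agent identity and the human authority it is delegated from, instead of logging everything under one generic integration identity:

```python
import json
from datetime import datetime, timezone

def audit_event(agent_id: str, delegated_by: str, tool: str, resource: str) -> str:
    """Build a structured audit record naming the acting agent,
    the person whose authority it borrows, and what it touched."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": agent_id,             # distinct agent identity, not "integration-bot"
        "delegated_by": delegated_by,  # whose authority the agent is using
        "tool": tool,
        "resource": resource,
    })

# With a shared integration identity, every action collapses to the same actor:
shared = audit_event("integration-bot", "unknown", "crm.update", "account/42")
# With per-agent identities, the trail answers "which agent, under whose authority?":
scoped = audit_event("invoice-agent-7", "alice@example.com", "crm.update", "account/42")
```

The point is not the specific schema but that each record ties an action to one nameable agent and one delegating human, which is exactly what a generic integration account destroys.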

What enterprises should be focusing on

A practical security model for AI agents should include:
  • Unique identities for each agent or workflow, rather than shared credentials
  • Least-privilege access with narrowly scoped permissions
  • Short-lived credentials and strong secrets management
  • Approval gates for sensitive actions such as financial changes, admin operations, or large-scale data exports
  • Detailed logging that records which agent acted, which tools it used, what data it accessed, and under whose delegated authority
  • Network and application segmentation so an agent cannot freely reach everything
  • Clear boundaries between retrieval, reasoning, and action-taking components
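
Several of these controls can be combined in one chokepoint. The following is a rough sketch, not a production design, of an action gateway enforcing per-agent least-privilege scopes and an approval gate for sensitive actions; the agent names, scope strings, and sensitive-action set are all invented for illustration:

```python
# Per-agent scopes: each agent identity gets only the permissions it needs.
AGENT_SCOPES = {
    "report-agent": {"crm.read", "docs.read"},
    "billing-agent": {"crm.read", "billing.write"},
}

# Actions that always require a human in the loop, even when in scope.
SENSITIVE = {"billing.write", "admin.modify", "data.export_bulk"}

def authorize(agent_id: str, action: str, human_approved: bool = False):
    """Return (allowed, log_line) for a requested agent action."""
    scopes = AGENT_SCOPES.get(agent_id, set())   # unknown agents get nothing
    if action not in scopes:
        return False, f"deny: {agent_id} lacks scope {action}"
    if action in SENSITIVE and not human_approved:
        return False, f"hold: {action} by {agent_id} needs human approval"
    return True, f"allow: {agent_id} -> {action}"
```

For example, `authorize("report-agent", "billing.write")` is denied outright (out of scope), while `authorize("billing-agent", "billing.write")` is held until a human approves. The design choice is that the gateway, not the agent's own reasoning, is the place where least privilege and approval gates are enforced, so a prompt-injected agent still cannot exceed its scopes.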

One important caution

I would be careful with the article’s timeline claims. The statement that most enterprises will have more machine identities than human ones within 12–24 months may be directionally plausible, but I cannot verify that as a universal fact from the quote alone. Machine identities have already grown rapidly in many environments, but the exact pace varies a lot by organization and architecture.

So the strongest takeaway is not the prediction itself, but the governance point:
  • If an organization cannot identify its agents
  • cannot define what they are allowed to do
  • and cannot audit their actions

then deploying them at scale will increase risk.

Bottom line

AI agents should be treated less like simple software features and more like privileged automated workers. Enterprises that apply identity management, least privilege, monitoring, and human approval for sensitive actions will be in a much better position than those treating agents as just another chatbot layer.
 
Everything novel is a new attack surface, with new attack methods and procedures; that's just how innovation works. For everything there is a dark side and a light side of the Force. The difference is incentive.
 
I use AI for info, but agents?
It's what they call agentic AI. Basically you have a robot running around doing things for you, but you have to give the agent your email password if you want it to send emails. If you want it to do your shopping, you have to give it your credit card. So you could say 'go buy a 64 GB USB stick on Amazon if it costs less than $20', and the robot will go shopping for you. I don't know which chatbot offers this agentic AI, but this is the trend.

As the joke goes, don't be too surprised if the robotic agent goes and spends $2,000 on a GPU upgrade to boost its own processing power :)
 
It seems like AI can do anything.
That’s often not the case.

Here’s just one example.
You can’t use AI directly to remove an age-verification pop-up, because it tells you it can’t help you violate the rules.
And removing such a pop-up isn’t something everyone who knows how to use an ad blocker can manage.
But even if it helps you create a blocking rule, it makes mistakes all the time.

If, on the other hand, an intelligent human :ROFLMAO: provides it with the HTML code, it can help you write a blocking rule.
But it struggles to adapt that rule to different ad blockers.

So, from my point of view, AI is simply a worker at the service of an engineer.
 
It's what they call agentic AI. Basically you have a robot running around doing things for you, but you have to give the agent your email password if you want it to send emails. If you want it to do your shopping, you have to give it your credit card. So you could say 'go buy a 64 GB USB stick on Amazon if it costs less than $20', and the robot will go shopping for you. I don't know which chatbot offers this agentic AI, but this is the trend.

As the joke goes, don't be too surprised if the robotic agent goes and spends $2,000 on a GPU upgrade to boost its own processing power :)
You made me LAUGH, not just chuckle. But the UI on your post is broken, no like button... give yourself a 100
:cool:
 
It seems like AI can do anything.
That’s often not the case.

Here’s just one example.
You can’t use AI directly to remove an age-verification pop-up, because it tells you it can’t help you violate the rules.
And removing such a pop-up isn’t something everyone who knows how to use an ad blocker can manage.
But even if it helps you create a blocking rule, it makes mistakes all the time.

If, on the other hand, an intelligent human :ROFLMAO: provides it with the HTML code, it can help you write a blocking rule.
But it struggles to adapt that rule to different ad blockers.

So, from my point of view, AI is simply a worker at the service of an engineer.
AI agents are as smart and as useful as equivalent workers for equivalent pay. If you pay your worker $19.99/month to do your bidding, then expect $19.99/month worth of service.

For something simple like 'fold yourself 29 times and call me Jerry', it will do that. If you want it to write code worth thousands, expect it to do a shitty job.