Blog

Creating a Dynamic System of Record for Agentic AI (and all other identities!)

By Luke Bennett, Global Field CTO
September 17, 2025

We recently updated our Identity Vulnerability Management (IdVM) platform to support the discovery and monitoring of Agentic AI identities, in addition to tracking human and non-human identities. You can now get a complete view of Access-Chains (identity-to-resource paths, both direct and indirect), combining Agentic AI with the human delegation and NHIs (Non-Human Identities) they utilize.

In this blog post, I’ll cover where we are today (it evolves every week!) and, more importantly, where we’re going over the next 3-6 months, which, as you may have guessed from the title, relates to creating a Dynamic System of Record (SoR) for all identities across all environments (and across Agentic deployment protocols and standards, as no single model shall rule them all).

First, some context on Agentic AI deployments.

* Note: Identity vulnerabilities are not CVEs; rather, “vulnerabilities” are items such as misaligned identities, toxic entitlements, and weak identity controls: anything where a vulnerable identity can negatively impact an operational process.

Mainstream Agentic AI enterprise adoption accelerates

Enterprise interest in Agentic AI has reached an inflection point in 2025, moving beyond mere automation of repetitive tasks to fully autonomous systems that plan, decide, and act, catalyzing new operational models across all sectors. Leading cloud vendors, including Microsoft, Google, AWS, and innovators like LangChain, have made agentic capabilities a core feature, transforming how companies handle workflows, customer experience, and scale.

I won’t delve into the inner workings of Agentic AI, and by extension LLMs, here, but there are some key statistics related to adoption:

  • Based on figures from index.dev and gmelius.com (taken with a grain of salt), 85% of organizations now deploy AI agents in one or more workflows, with agentic components present in almost all new enterprise applications whose builds started in 2025.
  • Anetac’s internal research estimates there will be a 30-40x increase in Agentic AI deployments over the next 4 years, as older applications and services are refreshed to utilize these advancements.
  • We’ve recently heard a lot about vibe coding, but an iteration of this is swarm coding, per a recent VentureBeat article.  Running swarms, especially in spikes of less than 24 hours, reinforces the need for real-time Agentic AI identity vulnerability management, as even aggregate hourly snapshots of access against existing configuration will likely miss the intermediate Access-Chains created and used (if not for control, then at least for post-swarm audits!).
  • According to the latest IDSA “2025 Trends in Identity Security Report”, 20% of identity-based incidents were from AI identities within the organization. At the same time, over 85% of organizations believe they still don’t have the proper controls in place, even with the heavy focus on Agentic AI deployments.

One of the more alarming statistics, revealed by a survey in The Times, was that 23% of IT professionals saw AI agents tricked into revealing credentials, and 80% reported bots taking unintended or risky actions. Remediating these identity-related risks and vulnerabilities to limit the identity attack surface is precisely what Anetac addresses.

Limiting the Identity Attack Surface

If your identity attack surface was large before, it’s about to be exponentially multiplied with Agentic AI deployments.  Enterprises must address both the new implementations and the underlying vulnerabilities that affect human and non-human identities.  AI amplifies everything, good and bad.

The surge in Agentic AI capabilities forces a strategic pivot in AI security and governance. Gartner’s TRiSM 2025 update, which we’ve previously written about in the context of Agentic AI identities, emphasizes that runtime observability, continuous risk management, and robust identity controls are now table stakes for responsible deployment.

Key items affecting Agentic AI identity security today:

  • Human-centric policies – outpaced by AI.
    Policies written on paper or for static identities (both human and long-lived machine) no longer suffice; at best, they were written for a time when a human controlled decision-making throughout the process and used automation to speed up tasks. Now Agentic AI controls the decision-making, and fully autonomous systems lack a natural ‘break point’ for validation. Adaptable runtime enforcement that moves beyond simple gatekeeping, agent observability with behavioral anomaly detection, full lineage and provenance traceability, and decision auditing are now essential.
  • Multiple vendors, multiple environments – no unified view.
    Every major cloud vendor and consulting firm has published its recommendations for securing Agentic AI. An example is Google’s whitepaper “Google’s Approach for Secure AI Agents” (Diaz, Kern, Olive, 2025), which gives three primary recommendations: “Human in the Loop” (more on that shortly) to verify delegated authority, limiting agent privileges, and behavioral monitoring. These are fairly standard recommendations across the various cloud provider whitepapers. The issue, however, is that each recommendation is, as you would expect, tied to the provider’s own services. This means there is never unified discovery, monitoring, and response across all AI deployments and environments (especially hybrid deployments, such as on-premises MCP and SaaS A2A); you cannot secure Agentic AI in isolation.
  • Human in the Loop (HitL) – can’t scale if assessing all information.
    The principle here is that humans can overlay judgement at a decision breakpoint, considering access to critical resources, nuanced prompts that may not achieve the desired outcomes, and aggregate hallucinations in multi-agent chains. The problem is that this doesn’t scale if it’s applied to every single access decision: thresholds need to be applied, and human review used as the exception, not the rule. These thresholds may be based on the vulnerabilities and impact scores of certain identities accessing critical resources, with context provided on the resources themselves (e.g., data classification, PII systems, etc.). The ability to manage this autonomously is critical, so that humans are alerted to interject based on dynamically assessed criteria rather than statically set breakpoints in the process (a minimal sketch of this routing follows this list). This matters so that there’s no “alert fatigue” on verification; otherwise we’ll just have humans clicking the “accept” button on autopilot.
  • Agentic AI is part of a wider identity fabric – new, meet old.
    Agentic AI will have both its own identity (decentralized, typically on the blockchain) and delegated authority from humans or non-human identities (centralized, typically within inventory stores). Running separate tools or platforms for each identity type means you are unable to see the linkages between these identities. How can you attest to the ‘correct’ human in the loop if you don’t track the human identities, everything else they access, and the behaviors they exhibit as an aggregate identity chain alongside the Agentic AI identities in the process? Or more simply, how do you know what agents are actually doing without real-time monitoring of access-chains for every identity in your environments? You can’t; you need a platform across the entire identity fabric.
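
To make the HitL threshold idea above concrete, here is a minimal, hypothetical sketch (in Python) of routing an access decision to a human only when a dynamically assessed risk score crosses a threshold. The weights, threshold, and field names are illustrative assumptions, not Anetac’s scoring model.

```python
# Hypothetical sketch: gate Human-in-the-Loop review on a dynamic risk score
# rather than on every access decision. Names and thresholds are illustrative,
# not part of any Anetac or MCP API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str                # Agentic AI identity requesting access
    delegated_by: str            # human or NHI whose authority is being exercised
    resource: str                # target resource in the access chain
    resource_sensitivity: float  # 0.0 (public) .. 1.0 (regulated / PII)
    behavioral_anomaly: float    # 0.0 (baseline) .. 1.0 (strong outlier)

HITL_THRESHOLD = 0.7  # illustrative: only the riskiest fraction needs a human

def risk_score(req: AccessRequest) -> float:
    # Weighted blend of resource context and observed behavior; in practice the
    # weights would be tuned per environment and fed by continuous monitoring.
    return 0.6 * req.resource_sensitivity + 0.4 * req.behavioral_anomaly

def route(req: AccessRequest) -> str:
    if risk_score(req) >= HITL_THRESHOLD:
        return "escalate_to_human"   # exception path: human overlays judgement
    return "allow_and_log"           # rule path: autonomous, fully audited

if __name__ == "__main__":
    req = AccessRequest("agent-42", "jane.doe", "payroll-db", 0.9, 0.5)
    print(route(req))  # escalate_to_human
```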

Safeguarding Agentic AI requires more than classic controls and traditional identity access management paradigms; it demands a new operational mindset.  Existing IAM, PAM and IGA solutions were not built to handle autonomous agents.

What Agentic AI deployments does Anetac support?

At the time of writing, Anetac delivers comprehensive discovery and monitoring of Agentic AI across the Enterprise, using Model Context Protocol (MCP).

Anetac’s current preferred scalable, enterprise deployment model combines MCP with GitHub and Microsoft Entra ID to unify agent discovery, identity mapping, and activity monitoring with all other human and non-human identities across your environments.  This combination creates a robust foundation for managing AI agents with consistent insight across identity systems, codebases, and execution environments.  Our existing customers using this pattern are large-scale, multi-team organizations seeking consistent, long-term AI governance, observability, and compliance.
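
For a sense of what “unifying agent discovery, identity mapping, and activity monitoring” could look like as data, here is a hypothetical correlation record tying an MCP-discovered agent to its GitHub codebase and its Entra ID delegation context. The schema is illustrative only, an assumption for this post rather than Anetac’s data model or any vendor API.

```python
# Hypothetical correlation record illustrating how an agent discovered via MCP
# could be tied to its codebase (GitHub) and its delegating identity (Entra ID).
# Field names are illustrative, not Anetac's schema or any vendor API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentRecord:
    mcp_server: str                 # MCP endpoint where the agent/tools were discovered
    agent_id: str                   # agent identity as seen at runtime
    github_repo: str                # codebase defining the agent and its tools
    entra_object_id: str            # Entra ID object the agent runs as / is delegated by
    delegated_humans: List[str] = field(default_factory=list)    # upstream human identities
    observed_resources: List[str] = field(default_factory=list)  # targets seen in monitoring

record = AgentRecord(
    mcp_server="mcp://internal.example/tools",
    agent_id="agent-42",
    github_repo="org/finance-agent",
    entra_object_id="b3f2c8e1-0000-0000-0000-000000000000",
    delegated_humans=["jane.doe"],
    observed_resources=["payroll-db"],
)
print(record.agent_id, "->", record.observed_resources)
```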

Additional support beyond the MCP and GitHub combination

Anetac can support combinations of emerging stacks, protocols and standards, and we are actively adding additional support in the coming months (more on that below).  For example:

  • Full LangChain Support: Anetac supports LangChain for organizations already invested in its developer-centric AI frameworks. While LangChain offers powerful orchestration capabilities, it lacks native enterprise security and identity context: Anetac overlays this seamlessly.
  • Rapid Expansion via OpenTelemetry: Because MCP is an open standard, Anetac can quickly add support for any MCP-compliant service that exposes OpenTelemetry data. This means your Agentic AI stack can evolve without re-architecting security or monitoring layers (a minimal illustration follows this list).
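
As a rough illustration of the OpenTelemetry point above, the sketch below emits a span for a hypothetical MCP-style tool call, tagging it with identity context that an external platform could correlate. The attribute names (identity.*, mcp.tool) are assumptions made up for this example, not an established semantic convention.

```python
# Minimal sketch: emitting OpenTelemetry spans for an MCP-style tool call so an
# external platform can correlate agent activity with identity context.
# The attribute names (identity.*) are illustrative, not a standard convention.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agentic-identity-demo")

def call_tool(agent_id: str, delegated_by: str, tool: str, resource: str) -> str:
    # Each tool invocation becomes a span carrying the identity chain, so the
    # collector sees who (agent + delegator) touched which resource, and when.
    with tracer.start_as_current_span("mcp.tool_call") as span:
        span.set_attribute("identity.agent_id", agent_id)
        span.set_attribute("identity.delegated_by", delegated_by)
        span.set_attribute("mcp.tool", tool)
        span.set_attribute("target.resource", resource)
        # ... the actual tool invocation would happen here ...
        return "ok"

call_tool("agent-42", "jane.doe", "query_database", "payroll-db")
```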

The fact that there are so many combinations of “standards”, protocols, and solutions, across many different environments (both cloud and on-premises), strengthens the need for an Agentic AI vendor-agnostic Dynamic System of Record for all of these identities!

The Case for a Dynamic Identity System of Record

Traditional IAM and PAM solutions were built for static, long-lived identities. They cannot keep pace with the ephemeral, distributed, and high-velocity nature of today’s environments, especially with the rise of Agentic AI.

Anetac’s IdVM establishes itself as the Dynamic Identity System of Record by closing these gaps and providing a cohesive security fabric across all identities, human, machine, and Agentic AI:

  • Every environment, every identity: Serves as the connective identity fabric, enriching and extending existing IAM, PAM, and security investments.
  • Continuous, real-time discovery: Automatically inventories Agentic AI and other identities, capturing dynamic interactions across ecosystems.
  • Automatic categorization: Labels identities and resources in motion, continuously reducing blind spots and feeding updates into systems like CMDBs.
  • Access chain mapping: Exposes direct, indirect, and inherited access paths, including unmanaged or overlooked relationships, giving security teams a full map of privilege sprawl (a simplified traversal example follows this list).
  • Behavioral analysis: Establishes baselines of identity behavior, flags anomalies and outliers, and validates control plane effectiveness.
  • Dynamic risk scoring and remediation: Delivers contextual risk evaluation, real-time privilege adjustments, and toxic path revocation at machine speed, with auditability.
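
To illustrate what access-chain mapping means in practice, here is a deliberately simplified sketch that enumerates direct and indirect paths from an identity to a resource over a toy delegation graph. A production identity graph is far richer, and this is not Anetac’s implementation, just an assumed model for illustration.

```python
# Illustrative sketch of access-chain mapping: given direct grants, enumerate
# indirect paths from an identity to a resource. The data model is hypothetical
# and far simpler than a production identity graph.
from collections import defaultdict

# Directed edges: identity -> identities/resources it can reach directly
# (group membership, delegation, service account use, direct entitlement).
edges = defaultdict(list)
def grant(src, dst): edges[src].append(dst)

grant("jane.doe", "agent-42")        # human delegates to an AI agent
grant("agent-42", "svc-reporting")   # agent runs under a service account (NHI)
grant("svc-reporting", "payroll-db") # NHI holds the entitlement to the resource

def access_chains(identity, resource, path=None):
    """Yield every path (direct or indirect) from identity to resource."""
    path = (path or []) + [identity]
    if identity == resource:
        yield path
        return
    for nxt in edges.get(identity, []):
        if nxt not in path:  # avoid cycles
            yield from access_chains(nxt, resource, path)

for chain in access_chains("jane.doe", "payroll-db"):
    print(" -> ".join(chain))
# jane.doe -> agent-42 -> svc-reporting -> payroll-db
```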

By acting as the single source of truth for identity access-chains, Anetac’s IdVM transforms identity security from static enforcement to adaptive, continuous protection, ensuring enterprises can safely adopt and scale both traditional and emerging identity types.

What’s on the horizon for Anetac and Agentic AI?

We aim to be in lockstep with our customers and their investment in Agentic AI. Protocols, standards, and vendors will continue to emerge and evolve, and enterprises will make different choices along their journey to match their overall process remediation and optimization goals.

We will continue to bolster the long-term value of the Identity Vulnerability Management platform, and the associated Dynamic Identity System of Record capabilities.

Whilst it’s natural that we’ll extend our discovery and monitoring capabilities to Agent-to-Agent (A2A) and similar protocols as enterprises mature their deployments, we will also look to expand into other areas.

First, detection of Shadow AI agents: those that have not been registered or tracked in the likes of an MCP registry, or cases where MCP was deployed outside of known environments and we detect its interactions through our streaming and behavioral analysis. This is a use case that naturally extends our outlier detection enhancements, and I believe only Anetac can address it, given our underlying technology and our team’s experience.
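
As a rough sketch of one shadow-agent signal, the snippet below compares agent identities observed in activity telemetry against a known registry baseline and flags anything unregistered. The data and registry source are hypothetical assumptions, and real detection would also weigh behavioral outliers, not just registry membership.

```python
# Hypothetical sketch of one shadow-agent signal: identities observed acting in
# telemetry that have no entry in any known agent registry. Real detection would
# also weigh behavioral outliers, not just registry membership.
registered_agents = {"agent-42", "agent-billing", "agent-support"}  # e.g. from a registry export

observed_events = [
    {"actor": "agent-42",      "action": "query_database", "resource": "payroll-db"},
    {"actor": "agent-x9f3",    "action": "read_secret",    "resource": "vault/prod"},
    {"actor": "agent-support", "action": "create_ticket",  "resource": "helpdesk"},
]

def shadow_agents(events, registry):
    # Any acting identity absent from the registry baseline is a candidate
    # shadow agent for investigation.
    return sorted({e["actor"] for e in events if e["actor"] not in registry})

print(shadow_agents(observed_events, registered_agents))  # ['agent-x9f3']
```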

Second, moving beyond developer-centric deployments and management to cases where Agentic AI is embedded within SaaS or PaaS environments. Examples include Salesforce’s Agentforce, AWS’s Bedrock AgentCore, and Microsoft Copilot Studio, among others. This is where we’ll see more A2A use cases, as organizations start to create processes that combine MCP-based agents (agent to resource) with calls out to cloud agents (agent to agent) for a fully autonomous workflow.

Key Takeaways

Agentic AI is no longer an emerging trend; it is the dominant force in enterprise transformation and acceleration for 2025 and beyond. Success requires not only innovation but also rigorous operational controls, continuous security, and a new approach for identity vulnerability management.

This acceleration creates significant opportunity, but also expands the identity attack surface exponentially. Traditional IAM and PAM systems, designed for static, long-lived identities, cannot keep pace with the dynamic, ephemeral, and distributed nature of Agentic AI.

Agentic AI Identity risk is no longer theoretical. Recent industry data shows that one in five organizations had identity-related incidents involving AI agents, and 80% of IT professionals report bots taking unintended or risky actions. Without continuous monitoring and adaptive controls, organizations risk data leaks, privilege misuse, and autonomous errors that happen at machine speed.

Building a Dynamic Identity System of Record (SoR) with Anetac and adopting AI TRiSM principles are now critical for organizations to embrace autonomous agents safely, defensively, and at enterprise scale. This enables organizations to detect anomalies early, adjust privileges dynamically, and revoke toxic paths with full auditability, ensuring compliance and governance in increasingly complex environments.

The implications are clear:

  • Legacy IAM/PAM is insufficient to govern the new era of autonomous identities.
  • Governance and compliance pressures are rising, with 75% of enterprises citing AI oversight as their top concern.
  • A Dynamic Identity SoR is a strategic imperative, providing the adaptive protection needed to scale Agentic AI safely, protect critical assets, and maintain trust.

In short, Agentic AI is transformative but introduces novel risks that cannot be managed with legacy approaches. Organizations that invest in a Dynamic Identity SoR with Anetac will be positioned to embrace autonomous systems with confidence, resilience, and speed.
