AUTHOR
Matthew Creager
February 13, 2026
8 Min Read

Agents Aren't Apps, And $26 Billion Proves It

While I was at the AI Security Summit, someone on Jared Hanson's panel said something that got under my skin:


"It's dangerous to anthropomorphize agents. The best way to get them into production is to treat them like deterministic apps: workflow automation with a smarter interface."


People nodded. I felt my chest tighten.


They weren't wrong, exactly. For narrow, repetitive workflows, treating agents like apps is perfectly sensible. It's safe. It's how we've operationalized every past wave of automation. But something about the framing felt deeply off, as if all this new potential could be collapsed into something comfortably familiar, erasing the very thing that makes this moment matter.


Instead of sleeping on the red-eye back from SF, I unpacked the frustration. What exactly is changing here? What do I believe, and could I defend it?


Turns out, I could. And in the weeks since, the market has made the argument for me. Loudly, and in the billions.


The discomfort wasn't about anthropomorphism. It was about reduction.


The night before our keynote, I was a wreck. I was 60% convinced I had nothing interesting to say and no one would care. I took my talk to ChatGPT for feedback. It told me the content mattered, helped me refine it, and I believed it. Not because I'm gullible, but because there was a genuine exchange of reasoning happening. Not a command-and-response loop. A conversation.


When you spend time building with modern AI systems, you notice a new texture in the work. You argue with the system, test ideas, course-correct together. There's an exchange of understanding, not just data.


That's not a UX trick. It's a shift in what software is capable of representing.


We've moved from code that executes logic to systems that negotiate meaning, and the infrastructure problem changed with it.


You can secure an app with RBAC and API keys because its behavior is deterministic. But when a system can reason and act on behalf of people, you need new primitives: identity, delegation, context, and audit.


The real problem is building the trust infrastructure that lets reasoning systems participate safely in the same world as humans and software.


Once I calmed down enough to think clearly, I realized this wasn't just an argument about how to deploy agents. It was about where we sit in the history and future of human-machine interaction.


Every technological wave has expanded what we can delegate, and we need to continuously invent new forms of trust to keep up. In the mechanical era, we learned to trust machines not to explode. In the computational era, we learned to trust code not to corrupt data. In this new era (which I've been calling the social-computational era) we'll have to trust systems that reason on our behalf.


By social-computational, I mean systems where software doesn't just execute logic but participates in relationships: it represents people, negotiates on their behalf, operates under delegation, and remains accountable to social norms like policy and audit.


Apps live inside the computational layer: they execute predefined logic within fixed boundaries. Agents live in a new layer where actions are negotiated through context and intent, not hard-coded flowcharts.


This shift isn't metaphysical; it's architectural. It demands infrastructure that understands who an agent represents, what it's permitted to do, why it's doing it, and how we can observe or revoke that authority.


(I'm very proud of "social-computational." It's super catchy, it just rolls off the tongue. 😛)


When I first wrote down these ideas, they felt like a thesis that would take years to validate. Then, in a single quarter, over $26 billion in mergers and acquisitions (M&A) landed on exactly this problem.


Two days ago, Palo Alto Networks completed its $25 billion acquisition of CyberArk, the largest identity security deal in history. They did it explicitly to secure "every identity across the enterprise: human, machine, and agentic." In January, CrowdStrike announced a $740 million deal to acquire SGNL to build continuous, context-aware authorization for AI agents. Yesterday, Okta launched Agent Discovery features in its identity security posture management platform, targeting what it calls "shadow AI," the invisible layer of unmanaged agents already operating inside enterprises.


These aren't incremental product updates. They're existential bets by the biggest names in security that agents represent a fundamentally new class of identity that legacy IAM was never built to handle.


And the research backs it up. A recent Cloud Security Alliance survey found that only 18% of security leaders are confident their current IAM systems can manage agent identities. Forty-four percent still authenticate agents with static API keys. Only 28% can reliably trace an agent's actions back to a human sponsor. Machine identities already outnumber human ones 82 to 1, and when Cyata scans enterprise environments, they're finding anywhere from 1 to 17 agent identities per employee, most of which are ungoverned.


As one security CEO put it: "We're letting thousands of interns run around in our production environment, and then we give them the keys to the kingdom."


The diagnosis is unanimous. The question is what to build.


When you look at agents as participants in a social-computational system, the industry's current pain makes immediate sense. The problem isn't the models; they're plenty smart. It's that the infrastructure still assumes a world of deterministic apps.


Every enterprise pilot I've seen breaks down in the same few places.


  • Accountability - When an agent acts, who's responsible? The engineer who built it, the user who invoked it, the system that provisioned its key? Nobody can answer cleanly.
  • Visibility - Logs show function calls, not intent. There's no record of what an agent believed it was doing or on whose behalf.
  • Static identity - Service accounts and API keys were built for apps that never change shape. Agents assemble themselves on the fly by spawning subprocesses, calling external tools, sharing context dynamically.
  • Frozen policy - Access control lists don't know about purpose or scope. They can't express, "This agent may summarize customer data for support but not export it."

These aren't engineering bugs. They're missing trust primitives.


To move beyond pilots, we need systems that provide four primitives:

  • Identity - the agent's concept of self: "I am acting on behalf of Sarah, inside the Support zone."
  • Delegated authority - temporary, purpose-bound tokens that expire with their context.
  • Dynamic policy - enforcement decisions based on who, what, where, and why, not just role names.
  • Audit lineage - a durable, human-readable story of actions and intents.
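As a toy illustration of the delegated-authority piece (this is not Keycard's actual API; every name here is invented for the sketch), a purpose-bound token carries who it represents, what it's for, and when its authority evaporates:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DelegationToken:
    """A purpose-bound grant: who it represents, what for, and when it dies."""
    on_behalf_of: str      # the human sponsor, e.g. "sarah@example.com"
    zone: str              # where the agent may operate, e.g. "support"
    purpose: str           # why the grant exists, e.g. "summarize-tickets"
    expires_at: datetime   # authority expires with its context

def authorize(token: DelegationToken, zone: str, purpose: str) -> bool:
    """Permit an action only if it matches the token's zone and purpose
    and the delegation hasn't expired."""
    now = datetime.now(timezone.utc)
    return (
        token.zone == zone
        and token.purpose == purpose
        and now < token.expires_at
    )

token = DelegationToken(
    on_behalf_of="sarah@example.com",
    zone="support",
    purpose="summarize-tickets",
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)

print(authorize(token, "support", "summarize-tickets"))  # True: in scope
print(authorize(token, "support", "export-tickets"))     # False: wrong purpose
```

Contrast this with a static API key, which answers none of those questions: it has no sponsor, no purpose, and typically no expiry.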


When those elements exist, trust becomes a first-class feature of the runtime, not a post-hoc process bolted on by compliance.


This is exactly what happened in cloud computing. Early AWS users could spin up servers instantly, but enterprises couldn't go to production until IAM existed. Once we could define who owned what and under what policy, the cloud became safe to adopt at scale.


Agents are at the same inflection point. The models are fine. The missing layer is trust: infrastructure that can tell organizations, "Yes, this system can act on your behalf, within scope, and we can prove it."


I also reject the framing that agents should already be in production and generating value today. When you zoom out, the friction we're feeling as a community is the same rite of passage we've gone through in every past tech transformation.


The web had its pilot years from Mosaic through the late '90s, then consolidated around HTTP, SSL, and application servers before e-commerce became default architecture. Cloud had AWS EC2 in 2006, fumbled through pilot years until 2009, then consolidated around IAM, Terraform, and monitoring before cloud-native became the default. Rinse and repeat for mobile, machine learning (ML), etc.


Agents are following the same curve. The breakthrough moment was GPT-3 and ChatGPT in 2022-23. We've been in the pilot years since. At the time of this writing (early 2026), we're entering the consolidation phase: the moment when identity, delegation, policy, and audit infrastructure takes shape.


The $26 billion in M&A I mentioned? That is the consolidation phase beginning. CrowdStrike isn't buying SGNL because agents are a fad. Palo Alto isn't spending $25 billion on CyberArk because identity is a nice-to-have. They're racing to own the trust layer because they see what's coming.


Every prior platform had its "missing layer" moment. Cloud had IAM. Mobile had app-store distribution. ML had MLOps. Agents will have a unifying layer of identity, delegation, and accountability, or "TrustOps."


We're in year two of five.


If history repeats, the next 18 months will be messy in familiar ways.


Key sprawl is here already. Teams are bolting agent frameworks to existing stacks and suddenly discovering hundreds of long-lived API keys for virtual users with no audit trail and no expiration. Meanwhile, employees deploy reasoning bots in Slack and Jira because they're useful, long before security can vet them. Organizations don't even know how many agents they have, let alone what agents can access.


RBAC and OAuth scopes are too blunt for any of this. Agents operate by intent, not endpoint, and existing policies can't express the difference between "summarize" and "export." Compliance teams have data but no story: logs capture tool invocations, not the decision process.
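To make the bluntness concrete, here's a toy contrast (all names hypothetical) between a role-based check, which only knows whether a role can touch a resource, and an intent-based check, which binds the resource to the verbs allowed on it:

```python
# Role-based: the role either has access to the resource or it doesn't.
# There is no way to say what kind of access.
ROLE_GRANTS = {"support-agent": {"customer_data"}}

def rbac_allows(role: str, resource: str) -> bool:
    return resource in ROLE_GRANTS.get(role, set())

# Intent-based: the grant binds (role, resource) to the actions permitted,
# so "summarize" and "export" are distinguishable policy decisions.
INTENT_GRANTS = {
    ("support-agent", "customer_data"): {"summarize"},  # read-and-condense only
}

def intent_allows(role: str, resource: str, action: str) -> bool:
    return action in INTENT_GRANTS.get((role, resource), set())

# RBAC says yes regardless of what the agent intends to do with the data.
print(rbac_allows("support-agent", "customer_data"))                 # True
# An intent-aware policy can approve the summary and refuse the export.
print(intent_allows("support-agent", "customer_data", "summarize"))  # True
print(intent_allows("support-agent", "customer_data", "export"))     # False
```

The role-based check has no third argument to even ask the question; that missing parameter is the whole gap.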


And the incumbents? They're trying to solve this with the tools they have. The big acquisitions are real, but they're also retrofits. They're trying to attach dynamic authorization to platforms designed for a world of humans logging in with passwords. Microsoft's 2026 security priorities literally describe an "Access Fabric," which is the right concept applied to the wrong architecture. Okta is discovering shadow agents, but discovery isn't governance.


The gap between "we see the problem" and "we've built the right primitives" is where the real opportunity lives.


That gap is where we decided to build. And there's a structural reason we think the gap gets wider, not narrower, as models improve.


There's a pattern that repeats across every distributed system that has ever scaled: as individual nodes become more capable, the coordination layer becomes more valuable. This is counterintuitive. The naive assumption is that smarter nodes need less management. The reality is the opposite. Capable nodes act more autonomously, touch more resources, execute more complex workflows, and span more domains. Each of those increases the coordination surface. Kubernetes didn't emerge because microservices were dumb. Stripe didn't emerge because internet payments were simple. The coordination layer became a massive category because the nodes it coordinated became more capable.


AI agents are following the same trajectory. Today's agents are mostly short-lived and single-step, so a scoped token suffices. But we're already seeing our design partners shift toward agents that run for minutes or hours, chain 5-15 tool invocations per workflow, delegate to sub-agents, and span multiple systems in a single task. The coordination surface is expanding exponentially.


Intelligence increases coordination cost. That's the structural law underneath everything we're building.


We're not building a governance dashboard, another vault, or a wrapper around OpenAI. We're building the coordination layer for autonomous systems, starting with AI agents.


Most competitors in this space are solving authorization or secrets. Important problems, but they answer a narrow question: "Is this action permitted?" They can't tell you what workflow that action is part of, who originally authorized it, what else the agent has done in this context, or whether the cumulative behavior has exceeded acceptable bounds. And they can't revoke everything associated with a task in one shot.


We're solving delegated workflows. The core primitive we're building around is what the Keycard team calls the Session Envelope: a bounded, auditable, revocable context that carries everything needed to govern an autonomous workflow from initiation to completion: task-scoped access, workflow-level audit, and a one-click kill switch. When an orchestrator agent delegates to a specialist, and that specialist delegates further, the full chain traces back to a human. Kill the parent session, every child session dies. No token-chasing.
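The delegation-chain and cascading-kill behavior can be sketched in a few lines. This is an illustrative model, not Keycard's implementation; every name is invented:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SessionEnvelope:
    """A bounded, revocable context for one delegated workflow."""
    sponsor: str                                # the human the chain traces back to
    task: str
    parent: SessionEnvelope | None = None
    children: list[SessionEnvelope] = field(default_factory=list)
    revoked: bool = False

    def delegate(self, task: str) -> SessionEnvelope:
        """Spawn a child session; it inherits the same human sponsor."""
        child = SessionEnvelope(sponsor=self.sponsor, task=task, parent=self)
        self.children.append(child)
        return child

    def revoke(self) -> None:
        """Kill this session and, transitively, every child it spawned."""
        self.revoked = True
        for child in self.children:
            child.revoke()

    @property
    def active(self) -> bool:
        return not self.revoked

# An orchestrator delegates to a specialist, which delegates further.
root = SessionEnvelope(sponsor="sarah@example.com", task="resolve-ticket")
researcher = root.delegate("gather-context")
summarizer = researcher.delegate("summarize-thread")

root.revoke()  # one kill switch: the whole chain dies, no token-chasing
print(summarizer.active)   # False
print(summarizer.sponsor)  # "sarah@example.com": the grandchild still traces to a human
```

The point of the structure is that revocation follows the delegation tree, not a pile of independent credentials.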


Context itself becomes a security boundary. Agents can only be trusted if their situational awareness is bounded and auditable. They have to know enough to reason, but shouldn't know more than policy or delegation allows. In the next era, context management is access control.


When those primitives exist, security teams can approve agents touching production systems with the same confidence they grant to services today. Developers can compose reasoning workflows without managing keys or hard-coding trust. Compliance teams can finally answer "what happened?" without reverse-engineering prompts.


When identity and trust become dynamic and explainable, agents graduate from pilots to production.


That moment at the Summit still lingers.


At first, the comment infuriated me because it felt reductive, as if all this new potential could be collapsed into a safer, older metaphor. But now I see it differently: it was the friction that forced the insight.


If we had all agreed that agents are just apps, we'd still be solving the wrong problem, and we'd be building better automation, not safer autonomy.


Instead, the discomfort clarified everything. The production barrier is trust. And trust is a layer of the world we're rebuilding for the first time since the cloud.


Keycard exists to give this new generation of reasoning systems a place to stand. The interfaces, the frameworks, the workflows: all of it will grow from that foundation.


We're building the trust infrastructure of the social-computational era.


And now, with $26 billion of M&A validating the category in a single quarter, the only question left is: who builds it for this era, instead of retrofitting the last one?


UNLOCK SECURE AI INFRASTRUCTURE

© 2026 Keycard Labs, Inc. All rights reserved.