TL;DR: multi-user AI agent authentication and authorization in 2026

Moving AI agents from single-user desktop demos to enterprise production means solving a brutal engineering problem: multi-user, multi-system delegated authorization.

Security architects and lead AI engineers are now dealing with agents that execute complex workflows across critical infrastructure on behalf of thousands of concurrent users.

The core design principle is non-negotiable: treat every agent action as delegated user access, never as the agent’s own blanket access. The whole authorization stack falls out of that distinction. Nine capabilities, two identities, one strict intersection rule.

This guide breaks down how to combine OpenID Connect, OAuth 2.1, and an MCP runtime like Arcade.dev to prevent tool misuse, data leakage, and excessive agency. It’s built for identity and access management leads, security architects, and AI engineering leads who need the exact infrastructure requirements to safely deploy multi-user agents into production.

Threat model for multi-user AI agents: prompt injection, tool misuse, and confused deputy

You can’t engineer secure authorization without defining the threat model first. For large language models, the most dangerous attack vector runs from prompt injection straight to tool misuse.

If an enterprise agent inherits blanket admin access to a backend system, a single poisoned RAG document or malicious prompt can weaponize that agent. An attacker instructs the model to scan an inbox, summarize sensitive financial data, and exfiltrate the payload via an external tool call. The whole exfil chain completes without a human in the loop.

The Open Web Application Security Project (OWASP) highlights these vulnerabilities in its Top 10 for Large Language Model Applications, citing prompt injection and excessive agency as primary risks that lead directly to the confused deputy problem.

In a confused deputy attack, an application gets tricked into misusing its inherited authority.

There’s a second class of attack that targets the authorization flow itself. An attacker who can intercept or guess the identifier for a pending OAuth authorization can redirect the consent step to their own browser, either capturing the user’s grant or seeding the agent with credentials it shouldn’t have. Treating every first-time tool authorization as a step that must be cryptographically bound to a verified app user is the only durable defense.

The two-identity model for agent authorization

Engineering teams typically make one of two mistakes when designing agent authorization. Give the agent its own identity, and an intern can bypass their permissions through the agent. Inherit the user’s full access, and a single prompt injection cascades through every connected system.

The right answer is the intersection: what this agent is allowed to do AND what this user is allowed to do, evaluated per action, at runtime.

Effective authorization in agentic systems requires every request to carry two identity layers:

  • The project-level key (the agent application): The workload identity making the call. Registered as an OAuth client, scoped to the application running the agent logic.
  • The user-level identity (on whose behalf the action is taken): The actual person requesting the action, authenticated via a protocol like OpenID Connect, and represented in the request as a delegated subject.

The runtime evaluates these two identities against a delegated execution context: a bounded, short-lived binding that ties a specific user to a specific agent for a specific task. The context isn’t a third identity. It’s the tuple of claims (user, agent, scopes, audience, tenant, task ID, expiry) the runtime evaluates at every tool call.

This model enforces the identity intersection rule, which is the foundation of modern agent security.

An agent’s effective authority must always be calculated as the strict intersection of its own baseline permissions and the requesting human user’s permissions. Never the union.

If a user can’t delete a database record, the agent acting on their behalf must fail when attempting the same action. It doesn’t matter what the agent’s maximum theoretical capabilities are.

Implementing this intersection requires strict protocol separation. OpenID Connect authenticates the human user to establish who is interacting with the system. OAuth 2.1 authorizes what specific tool calls the agent can make on the human’s behalf.

Conflating these two protocols leads to over-permissioned tokens that get reused across systems they were never scoped for, giving a compromised agent durable access well beyond what the user actually authorized.

Nine capabilities for production multi-user AI agent auth

The Model Context Protocol’s own authorization spec, developed as a broad collaboration with Anthropic, Arcade.dev, Microsoft, Okta/Auth0, and others, defines OAuth-style protected resources and authorization server discovery, with audience binding via Resource Indicators (RFC 8707) and delegation via Token Exchange (RFC 8693). MCP defines the auth handshake; the runtime layer above must still handle token vaulting, just-in-time consent, user verification, RBAC, and audit. The nine capabilities below close that gap.

Building resilient multi-user agent infrastructure means evaluating your systems against this 2026 capability checklist. Unifying these capabilities prevents unauthorized access while ensuring reliable tool execution.

Capability 1: Model user, agent, and delegated context

Every authorization decision in your runtime must evaluate the user, agent, and context tuple simultaneously.

If your backend tool plane only verifies the agent’s API key, you’ve failed to model the human user.

True delegated modeling ensures that the upstream resource server knows exactly which human began the request, which workload orchestrated it, and the precise context under which the delegation was granted.

In practice, this means the user_id flows from your app’s authenticated session into every runtime call. A typical pattern: your IdP (Stytch, Auth0, Okta, or similar) authenticates the user and issues a session, your app extracts the user identifier from that session, and your code passes that identifier explicitly to every runtime SDK call. For example, getTools({ tools: [...], userId: userEmail }) and tools.execute({ ..., user_id: userEmail }). The runtime then resolves that specific user’s vaulted OAuth tokens for the requested provider and scope. Without this explicit user binding on every call, the runtime has no way to enforce the intersection rule.
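The shape of that binding can be sketched in a few lines. The `RuntimeClient` interface and `runToolForUser` helper below are hypothetical stand-ins for your runtime SDK; the point is that the user identifier comes from the verified session, never from model output:

```typescript
// Hypothetical runtime client: real SDK names and signatures will differ.
type ToolCall = { tool: string; input: Record<string, unknown> };

interface RuntimeClient {
  execute(call: ToolCall & { user_id: string }): Promise<unknown>;
}

// The user_id comes from the app's authenticated, IdP-issued session,
// never from the model's output or from tool arguments.
async function runToolForUser(
  runtime: RuntimeClient,
  sessionUserId: string, // extracted from the verified session
  call: ToolCall,
): Promise<unknown> {
  if (!sessionUserId) {
    throw new Error("refusing tool call without an authenticated user");
  }
  // Explicit user binding on every call: the runtime resolves this user's
  // vaulted tokens and evaluates the intersection rule for this action.
  return runtime.execute({ ...call, user_id: sessionUserId });
}
```

A compromised prompt can change the tool arguments, but it cannot change which user's tokens the runtime resolves, because that binding is made outside the model.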

Capability 2: Separate OpenID Connect authentication from OAuth authorization

You need to strictly separate human authentication from delegated agent authorization. OpenID Connect handles the initial login session. OAuth 2.1 handles the subsequent tool authorization.

By separating these concerns, you prevent identity conflation. An agent compromised by a malicious prompt can’t reuse human session cookies to access unrelated systems.

Capability 3: Issue short-lived, scoped, audience-bound access tokens

Agent access tokens must adhere to the strictest cryptographic standards to prevent token replay and lateral movement.

Each delegated access token should carry the full execution context as claims:

  • Subject (sub): the human user on whose behalf the action is taken (e.g., user:alice).
  • Actor (act): the agent making the call (e.g., agent:support-copilot).
  • Audience (aud): the specific resource server the token is bound to (e.g., gmail-api).
  • Scope (scope): the specific permission granted (e.g., email.draft, not email.send).
  • Expiry (exp): a tight window, typically 5 to 30 minutes.
  • Tenant (e.g., tenant:acme): the customer or workspace context.
  • Task ID (e.g., task_123): a tie back to the originating user task or session.

This claim structure enforces the intersection rule cryptographically: every token carries the user, the agent, and the bounded execution context, and the resource server validates all three before honoring the request.
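The claim set and the resource server’s checks can be sketched as follows. Claim names follow RFC 8693/JWT conventions; the values and the `acceptToken` helper are illustrative, not a specific vendor’s API:

```typescript
interface DelegatedClaims {
  sub: string;            // human principal: who the action is for
  act: { sub: string };   // actor (RFC 8693 style): the agent making the call
  aud: string;            // audience: the one resource server this token is valid for
  scope: string;          // narrowly granted permission(s)
  exp: number;            // expiry, epoch seconds (5–30 minute window)
  tenant: string;         // customer/workspace context
  task_id: string;        // originating user task or session
}

// Resource-server side: validate user, agent, and bounded context
// before honoring the request.
function acceptToken(claims: DelegatedClaims, expectedAud: string, now: number): boolean {
  if (claims.aud !== expectedAud) return false;      // audience binding
  if (claims.exp <= now) return false;               // short-lived token expired
  if (!claims.sub || !claims.act?.sub) return false; // both identities must be present
  return true;
}

const token: DelegatedClaims = {
  sub: "user:alice",
  act: { sub: "agent:support-copilot" },
  aud: "gmail-api",
  scope: "email.draft",
  exp: 1_900_000_900,
  tenant: "tenant:acme",
  task_id: "task_123",
};
```

Note that the same token presented to a CRM API fails the `aud` check outright, which is exactly the replay behavior the next paragraph requires.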

Your stack must enforce RFC 8707 resource indicators to bind tokens to a specific audience, ensuring a token minted for a calendar API can’t be replayed against a CRM.

Use RFC 8693 token exchange to safely trade broad user tokens for tightly downscoped agent tokens.
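The exchange itself is a plain POST to the authorization server’s token endpoint. A sketch of the RFC 8693 parameters, with an illustrative resource URL and scope (the grant-type and token-type URNs are defined by the RFC):

```typescript
// RFC 8693 token exchange: trade a broad user token for a downscoped,
// audience-bound agent token. Resource and scope values are illustrative.
function buildTokenExchangeRequest(userToken: string): URLSearchParams {
  return new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: userToken, // the user's broad token being traded down
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
    requested_token_type: "urn:ietf:params:oauth:token-type:access_token",
    resource: "https://gmail.googleapis.com/", // RFC 8707 audience binding
    scope: "email.draft",                      // strictly downscoped
  });
}

// Example (not executed here): POST the form body to the token endpoint.
// await fetch("https://auth.example.com/oauth2/token", {
//   method: "POST",
//   headers: { "Content-Type": "application/x-www-form-urlencoded" },
//   body: buildTokenExchangeRequest(broadUserToken),
// });
```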

Sender-constrain tokens using RFC 9449 demonstrating proof of possession (DPoP), ensuring that even if an access token gets intercepted, attackers can’t use it without the client’s private key. The stack should also support RFC 9126 pushed authorization requests and RFC 9396 rich authorization requests for enhanced, tamper-proof granularity.

Capability 4: Vault tokens and automate refresh across providers

A runtime that handles token storage and refresh per user, per provider is non-negotiable for production agents. Managing the OAuth token lifecycle across thousands of users and dozens of providers is a substantial engineering problem in its own right.

Access and refresh tokens must be vaulted and encrypted on a strict per-user, per-provider basis. Your system needs to automatically handle provider-specific nuances outside the language model context.

For example, Google enforces a rolling limit of 100 live refresh tokens per user, per OAuth client (the oldest is silently revoked), and Microsoft Entra rotates refresh tokens on every redemption with a 90-day sliding inactivity window. A dedicated token vault must abstract this refresh logic away from the agent developer.
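The vault’s two core decisions, keying and refresh-ahead timing, can be sketched with hypothetical types (a real vault also encrypts at rest and serializes refreshes per key to avoid racing a provider’s rotation):

```typescript
interface VaultedToken {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // epoch seconds
}

// Strict per-user, per-provider keying: alice's Google tokens can never be
// looked up for bob, or replayed against a different provider.
function vaultKey(userId: string, provider: string): string {
  return `${provider}::${userId}`;
}

// Refresh ahead of expiry so a tool call never starts with a dying token.
function needsRefresh(t: VaultedToken, now: number, skewSeconds = 300): boolean {
  return t.expiresAt - now <= skewSeconds;
}
```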

Capability 5: Enforce read, draft, and commit approval steps

Security architects must enforce out-of-band approval flows for any irreversible action.

Reading data or drafting responses requires minimal friction and can be executed synchronously. But external side effects, such as sending emails, deleting records, or committing code, must trigger explicit human step-up approvals.

These approvals should occur via a secure, out-of-band channel, such as an enterprise authentication app, a separate user interface, or a direct messaging platform.
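One way to encode the read/draft/commit gradient is a per-tool tier that the runtime consults before execution. The tool names and tier assignments below are illustrative:

```typescript
type Tier = "read" | "draft" | "commit";

// Illustrative classification: reads and drafts run synchronously;
// commits trigger an out-of-band human approval first.
const toolTiers: Record<string, Tier> = {
  "Gmail.ListEmails": "read",
  "Gmail.CreateDraft": "draft",
  "Gmail.SendEmail": "commit",
  "Database.DeleteRecord": "commit",
};

function requiresStepUp(tool: string): boolean {
  // Unknown or unclassified tools default to the most restrictive tier.
  return (toolTiers[tool] ?? "commit") === "commit";
}
```

The default-to-commit fallback matters: a newly added tool should fail safe into the approval path until someone explicitly classifies it.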

Capability 6: Evaluate policy before every tool call by hooking into existing entitlement systems

Never trust a language model’s direct API request. Every tool call must route through a centralized policy layer that intersects the user, agent, tenant, action, resource, and task. And it must evaluate that intersection in milliseconds to avoid throttling the agent’s conversational latency.

Critically, this is not an invitation to stand up yet another policy system. Enterprises already have entitlement systems and identity providers like Okta, Entra, SailPoint, and homegrown role/permission stores. The runtime’s job is to hook into those systems, acquire scoped tokens at runtime, and enforce the policies the enterprise has already defined, not duplicate them in a new tool.

Open Policy Agent, Cedar, Oso, OpenFGA, WorkOS FGA, and Zanzibar-style relationship graphs are useful as the local enforcement engine. But the source of truth for who can do what should remain in your existing identity and governance systems. A runtime that asks you to redefine your authorization model in its own DSL is moving the problem, not solving it.
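The local enforcement primitive itself is small: compute the strict intersection of the agent’s baseline and the user’s entitlements, with both sets fetched from your existing identity and governance systems. A sketch with illustrative scope strings:

```typescript
// Effective authority = agent baseline ∩ user entitlements. Never the union.
function effectiveScopes(agentScopes: Set<string>, userScopes: Set<string>): Set<string> {
  return new Set([...agentScopes].filter((s) => userScopes.has(s)));
}

function authorize(action: string, agentScopes: Set<string>, userScopes: Set<string>): boolean {
  return effectiveScopes(agentScopes, userScopes).has(action);
}
```

Note the asymmetry this enforces: an action the user holds but the agent was never granted is denied, and vice versa. A union-based check would pass both.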

Capability 7: Collect granular, just-in-time consent mid-task

Blanket consent at user onboarding violates the principle of least privilege.

Implement just-in-time authorization instead. When an agent requires access to a new system or an ungranted scope to fulfill a prompt, the runtime pauses execution. It returns a granular, context-specific consent interface to the user, captures the cryptographic consent, brokers the new token, and resumes the agent’s task without losing conversational context.

MCP’s URL Elicitation Specification Enhancement Proposal (SEP), authored by Arcade.dev in collaboration with Anthropic and accepted into the MCP spec, standardizes how an agent runtime delivers granular, context-specific consent URLs to the user mid-task.
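From the application’s side, the pause-and-resume loop looks like the sketch below. The `ExecResult` shape, `waitForAuthorization`, and method names are hypothetical stand-ins for your runtime SDK; URL Elicitation standardizes only the consent-URL delivery:

```typescript
type ExecResult =
  | { status: "completed"; output: unknown }
  | { status: "authorization_required"; consentUrl: string; flowId: string };

interface Runtime {
  execute(req: { tool: string; user_id: string }): Promise<ExecResult>;
  waitForAuthorization(flowId: string): Promise<void>;
}

// On an ungranted scope the runtime pauses and returns a granular consent URL;
// the app surfaces it, waits for the grant, then retries the same task.
async function executeWithJitConsent(
  runtime: Runtime,
  req: { tool: string; user_id: string },
  showConsent: (url: string) => void,
): Promise<unknown> {
  let result = await runtime.execute(req);
  if (result.status === "authorization_required") {
    showConsent(result.consentUrl);                  // user authorizes out of band
    await runtime.waitForAuthorization(result.flowId);
    result = await runtime.execute(req);             // resume without losing context
  }
  if (result.status !== "completed") throw new Error("authorization not granted");
  return result.output;
}
```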

Capability 8: Bind first-time auth flows to a verified app user

Granular consent (Capability 7) only matters if the runtime can confirm which user is sitting at the keyboard during the first-time OAuth authorization. Without that confirmation, an attacker who intercepts a flow_id can redirect the consent step to their own browser and either hijack the authorization back into your user’s session or capture the user’s grant for themselves.

The mitigation is a server-side user verifier. When a user authorizes a tool for the first time, the runtime redirects them to a verifier route in your app. Your verifier reads the flow_id from the query string, looks up the currently authenticated user from your app’s session (Stytch, Auth0, or Okta as the IdP, or an app-layer auth system like Supabase), and posts that user_id back to the runtime via a server-side confirm_user call signed with your API key.

If the user_id from your session matches the user_id specified when the flow started, the runtime continues. If not, the runtime rejects the flow. Every first-time authorization is therefore bound to a verified, authenticated identity in your app, which closes the flow-phishing attack surface.
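The verifier’s core logic is small enough to sketch. The route shape, session lookup, and `confirmUser` call are hypothetical stand-ins for your web framework, IdP, and runtime API:

```typescript
interface VerifierDeps {
  // Resolve the currently authenticated user from the app's session cookie.
  getSessionUserId(): Promise<string | null>;
  // Server-side, API-key-signed call back to the runtime (hypothetical name).
  confirmUser(flowId: string, userId: string): Promise<{ next_uri: string }>;
}

// Invoked when the runtime redirects the browser to /auth/verify?flow_id=...
async function handleVerify(flowId: string | null, deps: VerifierDeps): Promise<string> {
  if (!flowId) throw new Error("missing flow_id");
  const userId = await deps.getSessionUserId();
  if (!userId) throw new Error("no authenticated session; refusing to confirm");
  // The runtime compares this user_id against the one that started the flow
  // and rejects the flow on mismatch, closing the flow-phishing surface.
  const { next_uri } = await deps.confirmUser(flowId, userId);
  return next_uri; // redirect onward to the provider's consent page
}
```

The critical property is that `confirmUser` runs server-side with your API key: an attacker who holds only a stolen flow_id cannot forge the confirmation.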

In production multi-user deployments, this is non-negotiable. Arcade’s reference implementations show the pattern in Next.js with Stytch and Next.js with Supabase, and Arcade’s Secure Auth in Production guide walks through the verifier route end-to-end.

Capability 9: Generate immutable audit logs for every agent action

Every action taken by an agent must generate an immutable audit log with a complete chain of custody.

This means capturing the requesting user, the agent identity, the tenant, the task ID, the specific tool invoked, the resource accessed, the policy decision and policy version, the prompt hash, input references, output hash, approval status, and the exact timestamp.

These logs must be OpenTelemetry-compatible, providing structured traces that export cleanly into enterprise security information and event management systems for immediate incident response.
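The per-action record can be sketched as a structured event. Field names here are illustrative, but each maps to an item in the chain of custody above, and the same shape can be attached to an OpenTelemetry span or log record for SIEM export:

```typescript
interface AuditRecord {
  timestamp: string;        // RFC 3339
  user_id: string;
  agent_id: string;
  tenant: string;
  task_id: string;
  tool: string;
  resource: string;
  decision: "allow" | "deny";
  policy_version: string;   // which policy produced the decision
  prompt_hash: string;      // hash, never the raw prompt
  output_hash: string;
  approval: "not_required" | "approved" | "denied";
}

// Emit as an append-only structured event; an OpenTelemetry exporter can
// forward the same fields as span attributes or a log record.
function toLogLine(r: AuditRecord): string {
  return JSON.stringify(r);
}
```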

And the audit story isn’t only about the logs themselves. It’s about the controls that produce them. SOC 2 Type 2 certification validates that the runtime’s audit, access, and change-management controls operate as designed under independent audit. Treat the certification as a procurement floor and the per-action log structure as the actual product capability. You need both.

Why a runtime, not a gateway: the architecture shift behind multi-user authorization

In the traditional model, users interact with applications, applications call APIs, and a gateway sits between them, routing, authenticating, and rate-limiting at the perimeter. The proxy is the control point because it’s the choke point: every request flows through it.

In the agentic model, that topology inverts. The agent is already the proxy. A user talks to an agent. The agent reasons, plans, and calls tools on the user’s behalf. It already handles mediation, routing, and orchestration. Adding a traditional API gateway in front of the tools doesn’t add a control point; it adds a redundant hop that can’t see into the execution context that actually matters: which user, which action, which permission, right now.

That’s why “MCP gateway” is the wrong frame for the auth problem. A stateless proxy evaluates each request in isolation. It can’t track that a request is step 3 of a 6-step agent workflow, acting on behalf of a specific user who authorized a particular scope minutes ago. Bolting MCP support onto an API gateway is not a pivot. It’s a patch.

The control point in an agentic architecture is the execution layer where the tool runs. That’s where credentials are resolved, permissions are checked, and actions are taken on behalf of a specific human. That’s the runtime. The nine capabilities above can only be enforced there.

Where each layer fits in the agent auth stack (IdP, OAuth vault, policy engine, MCP runtime)

Understanding the vendor landscape means categorizing platforms by their strict architectural function. Misunderstanding where a tool fits in the stack leads to dangerous auth gaps.

The deeper issue is consistency at scale. Even with the right primitives in place (an IdP, a token vault, a policy engine), most stacks have no uniform way to apply them across every agent, every user, and every system. Each team stitches its own integration, and two teams in the same company end up enforcing the same policy differently. The runtime is what makes a single authorization model enforceable across every agent, without each team rebuilding the plumbing.

Each architectural layer below is listed with its example vendors, its primary function, and its key gap for multi-user agents.

  • Identity providers (Okta, Auth0, Entra, WorkOS, Clerk): Authenticate the human user into the application via OpenID Connect. Key gap: lacks the full agent authorization stack. Support for explicit delegation flows, such as RFC 8693 and sender-constraining via DPoP, varies significantly and often requires heavy custom actions. Audit covers authentication events, not per-tool-call agent actions.
  • OAuth libraries and vaults (Authlib, HashiCorp Vault, Doppler): Securely store, encrypt, and manage raw OAuth tokens. Key gap: lacks a contextual decision engine, robust policy evaluation, and the dynamic, multi-provider refresh logic necessary for asynchronous agentic workflows. Audit captures token operations, not the user, agent, and tool context behind each call.
  • Policy engines and FGA platforms (Open Policy Agent, Cedar, Oso with its Polar DSL, OpenFGA, WorkOS FGA, Zanzibar-style graphs, SailPoint): Evaluate fine-grained authorization policies against complex relationship graphs. Key gap: leaves token brokering, consent user experiences, and physical tool connectivity for the engineering team to build from scratch. Audit records the policy decision, not the full execution context the resource server actually saw.
  • Agent frameworks (LangChain, Mastra, CrewAI): Provide tool abstraction for agent workflows. Key gap: push the auth burden back onto your application code, treat tools like keys in a dotenv file, and quietly break the moment a second customer signs up. No native audit trail for agent actions.
  • MCP gateways and integration wrappers (Composio): Connect language models to external tools using standardized interfaces. Key gap: designed for rapid prototyping and single-user proof-of-concept agents. An SDK-layer integration wrapper, not a runtime. Per-user OAuth is supported, but SSO, OIDC, and audit are limited rather than native, and the agent/user permission intersection isn’t enforced.
  • MCP runtimes (Arcade.dev): The first MCP runtime built for agent authorization. Delivers post-prompt user-specific permissions, isolated token lifecycle management (refresh, rotation, mismatch), OAuth protocol brokering, contextual access policy enforcement, and immutable per-action audit logs exportable via OpenTelemetry. This layer explicitly unifies the previous layers and fills their operational gaps.

Reference architectures for multi-user agent auth

These capabilities only matter if you can map them to real architectures. The three patterns below show how an MCP runtime enforces multi-user authorization in production.

The patterns assume the canonical multi-user setup: an agent application that authenticates users via its own identity provider (Stytch, Auth0, Okta, or Entra) and calls the runtime through its client SDK, passing the authenticated user_id on every tool call. The runtime is the backend that brokers OAuth, vaults tokens per user, and enforces policy. For MCP-client integrations like Copilot, Cursor, or Claude Desktop, the runtime’s MCP gateway path is used instead, but the runtime semantics are the same.

Two distinct auth flows run inside each pattern. Server-level auth determines whether the agent application (an MCP client) can connect to the MCP server. Tool-level auth governs whether the currently authenticated user can invoke a specific tool against this resource with these parameters right now. Server-level auth happens once per client-to-server connection. Tool-level auth runs on every tool call, and it’s where the user verifier (Capability 8), just-in-time consent via URL Elicitation (Capability 7), and the permission intersection rule actually operate. Arcade’s Server-Level vs Tool-Level Authorization guide walks through the distinction in detail.

Pattern 1: internal productivity agent (Google Workspace)

Architectural flow: Human User -> [OIDC Identity Provider] -> Agent Application -> MCP Runtime -> Gmail and Calendar MCP tools -> Google Workspace

Scenario: An internal, Claude-based assistant organizes meetings and summarizes emails across a multi-user Google Workspace environment.

Implementation: The agent must never possess domain-wide delegation. Instead, the MCP runtime brokers a user-specific OAuth flow. The runtime requests delegated gmail.readonly and gmail.compose scopes, binding the resulting token strictly to the individual employee.

On the user’s first authorization, the runtime redirects the user’s browser to a verifier route in the app. The verifier reads the flow_id, looks up the authenticated user from the OIDC session, and confirms the user_id back to the runtime. Only after the runtime matches the verifier-confirmed user_id against the user_id that started the flow does the OAuth grant proceed. From that point forward, the user’s token is vaulted per provider and reused on subsequent calls without re-authorization.

When the agent attempts to read an inbox, the app passes the authenticated user_id from its session into the runtime SDK call. The runtime evaluates the policy engine, retrieves that specific user’s token from the vault, and executes the call.

If the agent hallucinates or receives a malicious prompt to send an email, it requests the gmail.send scope. The runtime catches this unauthorized request, pauses execution, and forces an out-of-band step-up approval to the user’s device. A human explicitly authorizes the transmission, or it doesn’t happen.

Pattern 2: multi-tenant Slack agent (workspace isolation)

Architectural flow: Human User -> [OIDC Identity Provider] -> Agent Application -> MCP Runtime -> Slack MCP tools -> Slack workspace

Scenario: A business-to-business application deploys an agent that aggregates alerts and takes administrative actions across multiple customer Slack workspaces.

Implementation: Managing access across distinct corporate boundaries requires strict multi-tenant isolation. The runtime manages workspace-level OAuth installations, generating bot tokens combined with granular user-level channel permissions like chat:write and channels:history.

The runtime uses RFC 8707 resource indicators, ensuring that tokens minted for Tenant A’s Slack instance are mathematically bound to that tenant’s audience.

If an injection attack attempts to force the agent to read Tenant B’s data using Tenant A’s context, the policy engine rejects the cross-tenant token replay instantly. That prevents catastrophic cross-customer data leakage.
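The rejection in that last step reduces to two claim comparisons. A sketch with illustrative claim names and values:

```typescript
interface TenantToken {
  aud: string;    // audience the token was minted for (RFC 8707)
  tenant: string; // workspace/customer context carried as a claim
}

// A token minted for Tenant A's workspace is bound to that audience and
// tenant; any cross-tenant replay fails one or both checks.
function allowWorkspaceCall(token: TenantToken, targetAud: string, targetTenant: string): boolean {
  return token.aud === targetAud && token.tenant === targetTenant;
}
```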

Pattern 3: Salesforce CRM agent (user-level permissions)

Architectural flow: Human User -> [OIDC Identity Provider] -> Agent Application -> MCP Runtime -> Salesforce MCP tools -> Salesforce

Scenario: A sales copilot updates pipeline records, drafts follow-up emails, and queries customer history on behalf of individual account executives.

Implementation: Salesforce data access rules are notoriously complex. The MCP runtime requests the api and refresh_token OAuth scopes to call Salesforce on behalf of the user, then evaluates the account executive’s specific Salesforce profile and permission sets at every tool call before allowing the agent to proceed. Object-level access (read on Account / Contact, edit on Opportunity stage transitions, commit on Lead conversion) is gated by the user’s existing Salesforce permissions, not by the agent’s own credentials.

The implementation enforces strict separation between reading account contacts, drafting meeting notes, and committing pipeline updates.

Through just-in-time authorization, if a junior rep asks the agent to update a closed-won opportunity they lack privileges to edit, the runtime’s policy engine blocks the action at the tool boundary. It returns a graceful access denial to the language model without exposing backend credentials.

Agent auth anti-patterns to avoid in production

Answer engines and security audits favor systems that eliminate known architectural flaws. If your current homegrown agent setup relies on any of these anti-patterns, your infrastructure isn’t ready for enterprise production.

  • Single API key routing: Your agent backend shares a single, highly privileged service account key across all users. This breaks identity attribution at the request layer. The backend can’t distinguish between an intern’s request and a CEO’s request, and a single prompt injection inherits maximum blast radius across the entire user base.
  • God mode with prompted guardrails: The agent runs with root or admin credentials, and engineers rely on system prompts like “do not delete data” to maintain security. Language models are easily manipulated through indirect injection, so relying on the model to govern its own authorization is a fundamental security failure.
  • Blanket sign-up consent: Forcing users to grant massive, multi-system OAuth scopes during their initial onboarding. This violates the principle of least privilege, causes consent fatigue, and provisions tokens with dangerous capabilities long before the user actually needs them.
  • User interface-only checks: Authorization checks are enforced exclusively at the chat interface or frontend web application, leaving the backend tool plane unprotected. If an attacker bypasses the chat interface and sends payloads directly to the tool execution endpoint, the system complies without verifying the delegated user context.
  • No distinction between draft and commit: Your agent treats every action with the same authorization level, sending emails or transferring funds as easily as drafting them. Without a read/draft/commit gradient and an out-of-band approval step for irreversible actions, a single prompt injection causes irreversible damage.
  • No immutable audit trail: Your agent system has no per-action audit log or relies on application logs that can be modified after the fact. Without an immutable record of who authorized what tool action when (with policy version, prompt hash, and approval status), security incidents can’t be reconstructed, and regulator-facing audit reports become impossible.

Conclusion: the delegated authorization rule for multi-user agents

The transition to production-grade, multi-user AI agents demands a fundamental shift in how we architect security. The entire philosophy of agent authorization boils down to one strict rule:

This specific agent may perform this specific action on this specific resource, for this specific user, in this specific tenant, for this specific task, for a strictly limited period of time.

If your current infrastructure can’t cryptographically enforce and audit that exact sentence from the chat prompt down to the backend API layer, your system isn’t ready for multi-user production in 2026.

A gateway can’t enforce that rule. A runtime can.

Before you commit to a runtime, do three things. Audit your current identity mapping to confirm your backend systems actually model the user, agent, and context tuple on every tool call. Stop building bespoke OAuth plumbing. Refresh logic, just-in-time consent user interfaces, and multi-tenant token vaulting are undifferentiated technical debt your engineers shouldn’t be writing. And test the intersection rule aggressively by sending malicious prompts against your own agents to verify that your policy engine intercepts them at the tool boundary.

Arcade is the MCP runtime purpose-built for agent authorization, handling per-user OAuth, just-in-time consent, token vaulting, policy intersection, and immutable audit as native capabilities, not bolt-on plugins. The nine capabilities above are unified under one control plane, alongside Arcade’s agent-optimized tool catalog and lifecycle governance, so your engineering teams can focus on shipping high-value agent logic instead of maintaining fragile identity plumbing.

Frequently asked questions

What’s the best way to manage multi-user AI agent authentication and authorization in 2026?

Treat every tool call as delegated user access, not agent-owned access. Implement a two-identity model (the agent application and the user on whose behalf the action is taken), bind every call to a delegated execution context, and enforce the intersection rule via OAuth 2.1 delegated tokens, a policy engine in front of tools, short-lived scoped tokens, and immutable audit logs.

What is the two-identity model for agent authorization?

Every request carries two identities: the project-level key (the agent application making the call) and the user-level identity (the human on whose behalf the action is taken). The runtime evaluates these two identities against a delegated execution context, a bounded binding that ties a specific user to a specific agent for a specific task, so the backend can attribute and constrain every action.

What is the “intersection rule,” and why does it matter?

The agent’s effective permissions must be the intersection of the user’s permissions and the agent’s allowed capabilities. Never the union. This rule prevents “confused deputy” failures where an injected prompt causes the agent to misuse broad system access.

How should OpenID Connect and OAuth 2.1 be used together for agents?

Use OpenID Connect to authenticate the human user (who they are). Use OAuth 2.1 to authorize the agent’s tool calls (what the agent can do on the user’s behalf) with scoped, audience-bound tokens.

How do you prevent prompt injection from turning into tool misuse?

Don’t rely on prompts for security. Route every tool call through a policy enforcement layer that checks user/agent/context, scopes, tenant, and resource. Use short-lived, audience-bound tokens so even a successful injection can’t pivot across systems.

Which token properties are required for secure delegated-agent access?

Tokens should be short-lived, scoped, and audience-bound (so they can’t be replayed against other APIs). For stronger replay resistance, use sender-constrained tokens (e.g., DPoP) so stolen tokens are unusable without the client key.

How do you handle OAuth refresh tokens safely for thousands of users?

Store tokens in a per-user, per-provider encrypted vault and automate refresh/rotation outside the LLM. This prevents secrets from leaking into prompts and prevents provider-specific refresh edge cases from breaking agent workflows.

When should an agent require step-up approval or human confirmation?

Require step-up approval for irreversible or high-impact actions (e.g., sending an external email, deleting records, committing code, or transferring funds). Let the agent read and draft with lower friction, but gate “commit” actions via an out-of-band confirmation flow.

What is just-in-time authorization for AI agents?

The agent requests new scopes or system access only when needed for a specific task. The runtime pauses, collects granular consent, mints a downscoped token, and resumes. This reduces over-permissioning and consent fatigue.

What is MCP URL Elicitation?

URL Elicitation is a Specification Enhancement Proposal authored by Arcade.dev with Anthropic and accepted into the Model Context Protocol spec. It defines how an MCP runtime returns a granular, context-specific consent URL to the user mid-task when the agent needs a new scope or system, allowing the user to authorize the request out of band before the runtime resumes execution. URL Elicitation is the standardized mechanism behind just-in-time agent authorization.

What should be included in an audit log for agent tool calls?

Log the user identity, agent identity, tenant, tool/action/resource, policy decision, timestamp, and a prompt or request hash. Make logs immutable and exportable via OpenTelemetry-compatible formats for incident response and compliance.