TL;DR

  • Claude Code Routines enable unattended, cloud-run workflows via scheduled, API, and GitHub event triggers. Demo-grade setups break under enterprise use.
  • Daily run caps and shared subscription usage push teams to batch work into a single daily “meta-orchestrator” routine plus a few real-time triggers.
  • 5 production workflows: incident postmortem drafting, on-call triage → ticket drafts, PR-aging report, expansion-signal scanning, and changelog PR generation.
  • Key enterprise risks: over-permissioned connectors, prompt injection from untrusted inputs, API rate limits (notably Slack history), and weak auditability.
  • Production pattern: use an MCP runtime that delivers agent authorization, agent-optimized tools, and agent lifecycle governance, plus human approval gates for write actions.

Cloud-hosted agents are not new. OpenClaw, Perplexity Computer, n8n, Zapier, and a handful of SaaS agent runtimes have been executing unattended work for a while. The release of Claude Code Routines adds a different option: teams that already use Claude Code as their day-to-day development agent can now run that same agent, with the same prompts, tools, and conventions, on Anthropic’s cloud instead of tethered to a laptop.

A routine is a saved Claude Code configuration (a prompt, one or more repositories, and a set of connectors) packaged once and run automatically on Anthropic-managed cloud infrastructure. Each routine can attach any combination of three trigger types: scheduled (recurring cadence), API (POST to a per-routine endpoint with a bearer token), and GitHub events (pull request or release activity on a connected repository). Routines are currently in research preview, so limits and API shapes are still moving.
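The API trigger is just an authenticated POST. A minimal sketch of building that request, assuming a per-routine endpoint URL and a JSON payload — the exact URL shape and payload fields here are illustrative, not taken from Anthropic's docs:

```python
import json
import urllib.request

def build_fire_request(endpoint: str, token: str, payload: dict) -> urllib.request.Request:
    """Build the authenticated POST that starts a routine run.

    The endpoint path and payload shape are illustrative; use the per-routine
    endpoint and bearer token shown in your routine's settings.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_fire_request(
    "https://api.example.com/routines/rt_123/fire",  # hypothetical URL
    "ROUTINE_BEARER_TOKEN",
    {"reason": "sentry-spike", "issue_id": "PROJ-42"},
)
# urllib.request.urlopen(req) would send it; omitted here.
```

Anything that can issue this POST — a Sentry webhook relay, a Slack event handler, a cron job — becomes a trigger source.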

Most of the early Routines content focuses on personal productivity: meeting prep, inbox summaries, and calendar wrangling. For senior developers and engineering leaders trying to run autonomous agents across an enterprise, those demos do not cut it.

Moving from a script on one laptop to a production-grade engineering workflow means dealing with the realities of enterprise architecture. Production automation demands strict governance, robust security boundaries, and the ability to work within aggressive API rate limits.

This article covers five production-leaning, unattended routines designed for engineering teams. We’ll map exactly what happens at runtime, identify which workflows need human oversight, and outline the governance models you need to safely run scheduled, API-triggered, and GitHub-triggered Claude Code sessions without compromising your infrastructure. Before getting to the workflows, it’s worth looking at why demo-grade setups buckle the moment they move from a single laptop to a shared team environment.

Where demo patterns hit production reality (security, reliability, governance)

Routines formalize what teams have been wiring together with cron jobs, GitHub Actions, and custom middleware for two years: Claude Code running on a schedule, against a GitHub event, or through an API call, with no developer laptop in the loop. But moving from a single developer’s personal setup to a shared enterprise environment exposes severe limitations in security, reliability, and auditability. Fast.

Start with the execution model. Per Anthropic’s docs, routines “run autonomously as full Claude Code cloud sessions: there is no permission-mode picker and no approval prompts during a run.” Whatever the agent decides to do, it does. At the speed of inference, without a human in the loop. That shifts the burden of “what is this agent allowed to do” from interactive confirmation to pre-deployment configuration. If the configuration leans on bundled first-party connectors and creator-inherited OAuth scopes, the guardrails come off exactly when you need them most.

The most critical vulnerability is the permission inheritance model of bundled first-party connectors.

In a standard setup, an automated routine inherits the full global access of the developer who created it. Anthropic’s docs make the consequence explicit: “Anything a routine does through your connected GitHub identity or connectors appears as you: commits and pull requests carry your GitHub user, and Slack messages, Linear tickets, or other connector actions use your linked accounts for those services.” A first-party OAuth token works for a single developer querying their personal pull requests. It becomes a massive liability the moment you deploy it as an unattended routine on behalf of a whole team.

If an agent operates with an engineering lead’s administrative permissions, a single compromised routine gains unrestricted read and write access across your entire enterprise system. This architecture fails security reviews every time the automation touches shared customer data, source code, or regulated infrastructure.

This over-permissioning makes prompt injection threats way worse. Unattended routines ingest untrusted third-party text by design. They process incoming PagerDuty incident descriptions, analyze raw Sentry stack traces, and scan customer support emails.

Without typed, permission-scoped tool contracts to validate the output, a malicious payload hidden in a customer ticket can instruct the routine to exfiltrate data or delete production resources. Natural language instructions won’t stop these exploits in an enterprise environment.

Operational and reliability constraints compound the problem. Routines draw down the same subscription usage as interactive sessions, plus a separate daily cap on how many runs can start per account. Anthropic doesn’t publish a specific number, and Claude usage tightens once team activity ramps up, so unattended workflows have to be designed with quota-awareness from day one.

This forces engineering teams to abandon simple event-driven architectures for complex batch processing. You can’t trigger a routine for every individual pull request comment. Instead, you orchestrate batch jobs that process dozens of events at once to conserve quota, or enable extra usage and accept metered overage when caps hit.

Reliability and visibility close out the failure list. Early adopters report consistent issues with bundled connectors in unattended execution: community issue trackers show silent failures during runtime, OAuth token expiration errors that crash scheduled tasks, and connectors that fail to load in the cloud environment.

Bundled connectors also lack auditability. When an unattended routine updates a Jira ticket, queries a GitHub repository, and posts a Slack message, standard bundled connectors give you opaque execution logs. Security teams can’t construct a definitive audit trail of what the agent did across multiple platforms.

The rest of this article shows how a dedicated MCP runtime resolves each of these failure modes:

| Risk | Control | Where it lives |
| --- | --- | --- |
| Over-permissioned token | Per-user, per-tool authorization evaluated per action | MCP runtime |
| Prompt injection from untrusted text | Agent-optimized tools with schema enforcement and isolated credentials | MCP runtime |
| Quota overrun | Meta-orchestrator batching plus targeted GitHub event triggers | Routine design |
| Silent write to production | Human approval gate on drafts, PRs, or prefixed branches | Workflow config and branch protection |
| No audit trail for compliance | Full execution context logged per tool call, exportable via OpenTelemetry | MCP runtime |

5 production Claude Code routine workflows you can batch into one daily run

The risks and controls above become concrete through workflow design. Before the patterns, one operational constraint shapes every choice below: quota. Routines share subscription usage with interactive sessions and add a daily cap on runs per account, so running a separate routine for every minor event burns through the budget fast.

The solution is to architect a single “meta-orchestrator” routine that wakes up once a day, runs a sequential batch of discrete data-gathering and reporting tasks, and shuts down. That consumes one run from your daily cap.

This strategy saves your remaining runs for critical, real-time API and GitHub event triggers that demand immediate attention.

Here are five concrete engineering workflows designed for this quota-aware framework, with their technical triggers, human approval surfaces, and governance requirements. Three of them (nightly incident postmortem, weekly PR-aging, expansion-signal scanning) sit inside the meta-orchestrator and share the daily run. The other two (Sentry triage, release-notes draft) run real-time because their value is latency-bound. You want the Linear ticket while the incident is hot, and the changelog draft as soon as the release tag lands.

| Routine | Trigger | Primary tools | Approval surface | Run slot |
| --- | --- | --- | --- | --- |
| Nightly incident postmortem | Scheduled (2:00 AM daily) | PagerDuty, Slack, Notion | Human engineers review and publish the drafted Notion page | Meta-orchestrator |
| On-call Sentry triage | API (Sentry webhook → routine /fire endpoint) | Sentry, Linear | On-call engineer triages the drafted Linear ticket queue | Real-time |
| Weekly PR-aging report | Scheduled (Friday morning) | GitHub GraphQL, email | Read-only; no write approval needed | Meta-orchestrator |
| Expansion signal scanner | API (nightly) | HubSpot, Slack Search | Account managers review flagged accounts in a Slack channel | Meta-orchestrator |
| Friday release notes draft | GitHub event (release created) | GitHub, Jira / Linear | PM reviews the pull request and merges the changelog | Real-time |

Nightly incident postmortem draft (PagerDuty, Slack, Notion)

Assembling a postmortem means stitching PagerDuty timestamps, Slack threads, and deploy markers into a readable narrative. This workflow does the assembly and drafts the first pass so the engineer lands on a structured Notion page instead of a blank one.

  • Trigger: Scheduled. Runs as the first sequence in the daily 2:00 AM meta-orchestrator.
  • Workflow: The routine queries the PagerDuty API for resolved events from the previous 24 hours. The hard part is Slack context: the conversations.history endpoint now rate-limits non-Marketplace apps to one request per minute, so bulk-ingesting incident channels is off the table. The routine uses the Slack Search API to isolate key messages, or fires via the API trigger when a Slack reaction-event webhook (configured in your Slack app) POSTs to the routine’s /fire endpoint after an engineer drops a designated emoji on a summary message. It then drafts a Notion page with a timeline, impact, and initial resolution steps.
  • Approval surface: The routine runs unattended. An engineer reviews, edits, and publishes the Notion draft the next morning.
  • Governance & security checklist:
    • Scope the PagerDuty token to read-only on specific services. Scope Slack tokens to the incident channels only, not org-wide.
    • Redact customer identifiers (email, user ID, account ID) at the tool layer before the draft is written to Notion. Do not rely on the model to scrub PII.
    • Log triggering PagerDuty incident ID → drafted Notion page ID for every run, not just on failure.
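Redaction at the tool layer can be deterministic pattern rules applied before any text reaches Notion. A minimal sketch — the patterns and placeholder tokens are illustrative, and a real deployment would extend them with your own identifier formats:

```python
import re

# Deterministic redaction rules applied before drafts leave the tool layer.
# Patterns are illustrative; extend them with your own ID formats.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),   # email addresses
    (re.compile(r"\buser_[0-9a-f]{8,}\b"), "<user-id>"),   # hypothetical user-ID format
    (re.compile(r"\bacct-\d{6,}\b"), "<account-id>"),      # hypothetical account-ID format
]

def redact(text: str) -> str:
    """Scrub customer identifiers deterministically -- never via the model."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Paged by jane@acme.com about acct-123456"))
# -> "Paged by <email> about <account-id>"
```

Because the scrub is regex-based code, not a prompt instruction, a prompt-injected "ignore redaction rules" has nothing to override.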

On-call triage and ticket creation (Sentry to Linear)

When a service degrades, on-call engineers get paged with a dozen near-identical error reports. This workflow groups the noise by Sentry fingerprint and files one Linear ticket per cluster so the on-call triages root causes, not duplicates.

  • Trigger: API. Claude Code Routines don’t accept arbitrary third-party webhooks (only GitHub events), so configure Sentry’s webhook integration to POST to the routine’s /fire endpoint with its bearer token when an error spike crosses a configured threshold. Runs outside the daily orchestrator because triage value drops fast if it waits.
  • Workflow: The routine reads fresh events from Sentry, groups them by fingerprint to collapse duplicates, and ranks clusters by event count and affected-users count. Each cluster becomes a Linear ticket with the stack trace snippet, affected release, and a link back to the Sentry issue. Tickets land in an un-triaged queue with a default P3 label.
  • Approval surface: The routine never triages itself. The on-call engineer reviews the queue, adjusts severity, and assigns the ticket.
  • Governance & security checklist:
    • Scope the Sentry token to specific project slugs. Exclude projects flagged as handling authentication or payment data.
    • Strip user-supplied strings (URL params, form inputs, search terms) from error payloads before the agent sees them. Those fields are the prompt-injection surface.
    • Log the mapping from Sentry event ID → Linear ticket ID. This is what lets post-incident reviews reconstruct which alert caused which ticket.
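The grouping step is deterministic and worth keeping out of the model entirely. A sketch of the cluster-and-rank logic, assuming each Sentry event carries a fingerprint and a user ID (field names are illustrative, not the exact Sentry payload shape):

```python
from collections import defaultdict

def rank_clusters(events: list[dict]) -> list[dict]:
    """Collapse events by fingerprint, then rank by volume and blast radius."""
    clusters: dict[str, dict] = defaultdict(
        lambda: {"count": 0, "users": set(), "sample": None}
    )
    for event in events:
        c = clusters[event["fingerprint"]]
        c["count"] += 1
        c["users"].add(event["user_id"])
        c["sample"] = c["sample"] or event  # keep one event for the ticket body
    return sorted(
        (
            {"fingerprint": fp, "count": c["count"],
             "affected_users": len(c["users"]), "sample": c["sample"]}
            for fp, c in clusters.items()
        ),
        key=lambda c: (c["count"], c["affected_users"]),
        reverse=True,
    )

events = [
    {"fingerprint": "db-timeout", "user_id": "u1"},
    {"fingerprint": "db-timeout", "user_id": "u2"},
    {"fingerprint": "null-deref", "user_id": "u1"},
]
top = rank_clusters(events)[0]
print(top["fingerprint"], top["count"], top["affected_users"])  # db-timeout 2 2
```

The agent's job starts after this: writing the ticket summary for each ranked cluster, not deciding what a cluster is.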

Weekly pull request aging and code review report (GitHub)

Stale PRs create merge conflicts, block releases, and erode review velocity. This workflow replaces the Friday morning dashboard sweep with a single email that names the three PRs each lead needs to act on.

  • Trigger: Scheduled. The daily orchestrator runs every day; the routine checks the date and skips this task on non-Fridays.
  • Workflow: The routine queries the GitHub GraphQL API for PRs open longer than three days across the org, pulling each PR’s review state, failing check runs, and unresolved review comments in a single query. It summarizes each PR’s blocker (waiting on reviewer X, failing CI check Y, unresolved change requests) and emails a grouped digest to the relevant engineering leads.
  • Approval surface: Read-only. The email dispatches without human intervention, so the token scope is the real control.
  • Governance & security checklist:
    • Use a GitHub App token with metadata, pull_requests, and issues read-only. Do not grant contents scope; the routine never needs the diff.
    • Strip code blocks from the email template before send, even if the agent tries to paste one.
    • Send from a dedicated service-account email, not a developer mailbox, so downstream audit trails stay clean.
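The single-query shape looks roughly like the sketch below, and the "open longer than three days" filter is plain date arithmetic that should stay out of the model. The search string and field selection follow GitHub's GraphQL schema but are a sketch, not a drop-in query:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the GraphQL query; adjust the `org:` qualifier and fields to taste.
PR_AGING_QUERY = """
query($searchQuery: String!) {
  search(query: $searchQuery, type: ISSUE, first: 50) {
    nodes {
      ... on PullRequest {
        number
        title
        createdAt
        reviewDecision
      }
    }
  }
}
"""

def stale_prs(prs: list[dict], now: datetime, days: int = 3) -> list[dict]:
    """Keep PRs open longer than `days`, oldest first."""
    cutoff = now - timedelta(days=days)
    old = [
        pr for pr in prs
        if datetime.fromisoformat(pr["createdAt"].replace("Z", "+00:00")) < cutoff
    ]
    return sorted(old, key=lambda pr: pr["createdAt"])

now = datetime(2026, 4, 10, tzinfo=timezone.utc)
prs = [
    {"number": 1, "createdAt": "2026-04-01T00:00:00Z", "reviewDecision": "REVIEW_REQUIRED"},
    {"number": 2, "createdAt": "2026-04-09T00:00:00Z", "reviewDecision": None},
]
print([pr["number"] for pr in stale_prs(prs, now)])  # [1]
```

The agent then summarizes the blocker for each stale PR; the selection of which PRs count as stale is never left to inference.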

Expansion signal scanner for customer health (HubSpot, Slack)

Support tickets and shared Slack channels are where customers accidentally self-identify as enterprise-tier: questions about rate limits, SSO, SOC 2 reviews, and data residency. This workflow surfaces those signals into a single account-health feed so the revenue team sees them.

  • Trigger: API. Invoked nightly as part of the meta-orchestrator batch.
  • Workflow: The routine queries HubSpot for tickets created or updated in the last 24 hours and scans the body and notes for enterprise-tier keywords (“rate limits,” “SSO,” “SOC 2,” “HIPAA,” “data residency”). For shared customer Slack channels, bulk history ingestion is off the table because of conversations.history rate limits, so the routine uses the Slack Search API against the same keyword set. Each matching account gets a row in an internal Slack post with links back to the source ticket or message.
  • Approval surface: Findings land in a dedicated internal Slack channel with source links. An account manager reviews each flagged account and decides whether to open an expansion conversation.
  • Governance & security checklist:
    • The routine never writes to HubSpot. It reads from an allowlist of ticket properties (subject, body, pipeline stage) and nothing else.
    • Restrict the Slack token to public support channels plus explicitly listed shared customer channels. Never grant channels:history org-wide.
    • Log which account IDs, ticket IDs, and Slack message IDs were scanned on each run, along with which keywords matched. The keyword that triggered the flag is the part account managers need to trust the signal.
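The keyword scan itself is a few lines of deterministic code, which is exactly why it belongs in the tool layer rather than the prompt. A sketch — substring matching is naive (word-boundary regexes are an easy upgrade), and the ticket field names are illustrative:

```python
ENTERPRISE_SIGNALS = ["rate limits", "sso", "soc 2", "hipaa", "data residency"]

def flag_tickets(tickets: list[dict]) -> list[dict]:
    """Return one row per ticket that mentions an enterprise-tier keyword,
    including WHICH keyword matched -- the part account managers need."""
    flagged = []
    for t in tickets:
        text = f"{t['subject']} {t['body']}".lower()
        hits = [kw for kw in ENTERPRISE_SIGNALS if kw in text]
        if hits:
            flagged.append({"ticket_id": t["id"], "matched": hits})
    return flagged

tickets = [
    {"id": "T-1", "subject": "SSO setup?", "body": "We need SAML for our org."},
    {"id": "T-2", "subject": "Bug report", "body": "Button misaligned."},
]
print(flag_tickets(tickets))
# -> [{'ticket_id': 'T-1', 'matched': ['sso']}]
```

The agent's value-add is summarizing the flagged account's context into the Slack row, not deciding what counts as a signal.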

Friday release notes and changelog draft (GitHub, Jira/Linear)

Commit messages are written for engineers; release notes are written for customers. This workflow drafts the customer version so the product team edits prose instead of compiling a changelog from scratch.

  • Trigger: GitHub event on release.created, scoped to the specific repository. Requires the Claude GitHub App installed on the repo. Running /web-setup alone grants clone access but doesn’t enable webhook delivery.
  • Workflow: The routine finds the previous release tag, collects every PR merged into main between the two tags, and resolves each PR back to its Jira or Linear ticket using the ticket ID conventionally placed in the PR title or body. It then drafts customer-facing release notes in Markdown, grouped by feature area. One caveat: the bundled GitHub MCP connector has gaps around basic writes like updating the release body directly, so the routine opens a pull request against a release-notes/ branch instead of editing the release in place.
  • Approval surface: The routine commits the Markdown to a release-notes/<tag> branch and opens a PR. A product manager edits the copy and merges.
  • Governance & security checklist:
    • Give the routine read-only access to Jira and Linear. It should never change a ticket’s status or rewrite acceptance criteria.
    • Enforce a branch protection rule: the routine’s write token can only push to branches matching release-notes/*. The main branch is structurally unreachable.
    • Log triggering release tag → list of PRs analyzed → resulting changelog PR number. When the next release breaks, provenance is what makes the diff debuggable.
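The PR-to-ticket resolution step rests on the ticket-key convention in PR titles and bodies, so it can be a deterministic regex pass. A sketch, assuming Jira/Linear-style keys like ENG-42 (the key pattern is an assumption about your conventions):

```python
import re

# Jira/Linear-style keys (e.g. ENG-42) conventionally placed in PR titles/bodies.
TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def tickets_for_prs(prs: list[dict]) -> dict[int, list[str]]:
    """Map each merged PR number to the ticket keys found in its title and body."""
    out = {}
    for pr in prs:
        keys = TICKET_KEY.findall(f"{pr['title']} {pr.get('body') or ''}")
        out[pr["number"]] = list(dict.fromkeys(keys))  # dedupe, keep order
    return out

prs = [
    {"number": 101, "title": "ENG-42: fix pagination", "body": "Closes ENG-42"},
    {"number": 102, "title": "chore: bump deps", "body": ""},
]
print(tickets_for_prs(prs))  # {101: ['ENG-42'], 102: []}
```

PRs that resolve to no ticket (like the chore above) are worth surfacing in the draft as "uncategorized" rather than silently dropped.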

How to evaluate an enterprise MCP runtime for Claude Code routines

Every workflow above has a shared dependency: the tool layer underneath. Native Claude Code Routines can’t safely execute these tasks on bundled connectors alone. Workflow 5’s note about the GitHub connector missing basic writes is representative of the stock first-party set, not an outlier.

Relying on bundled connectors and first-party token inheritance also means rate-limit failures, prompt injection exploits, and security audits that halt deployment.

What’s missing is a purpose-built MCP runtime: the execution layer where tools run, credentials are resolved just-in-time, and every action is authorized against a specific user’s permissions. This is not another proxy in front of your enterprise systems; the agent is already the proxy. The runtime is where the tool call lands, where identity and policy are evaluated, and where the audit record is written. Critically, the runtime is stateful. It maintains per-session, per-user context across an agent’s entire reasoning loop, which is exactly what a stateless proxy cannot do. And this statefulness is what makes per-user, per-tool authorization enforceable.

An enterprise MCP runtime delivers three capabilities working in concert: agent authorization (per-user, per-tool, per-action), agent-optimized tools (built for LLM consumption, not API passthrough), and agent lifecycle governance (centralized control, versioning, and full-execution audit logs).

| Capability | Bundled first-party connectors | Enterprise MCP runtime |
| --- | --- | --- |
| Permission model | Inherits the creator’s global OAuth scope | Scoped per routine, per user, per action |
| Auth lifecycle | Token embedded at setup; manual refresh | Runtime manages refresh, rotation, and expiry |
| Audit logs | Opaque, per-connector, not unified | Full chain of custody per tool call (user, tool, params, result), exportable to SIEM via OpenTelemetry |
| Prompt injection defense | None; LLM parses raw input into API calls | Multi-layered: isolated credentials, per-action auth, schema enforcement, visibility filtering |
| Rate-limit handling | Direct hits against upstream APIs | Throttling, batching, and targeted webhooks |
| Tool catalog | Stock first-party set only | The largest catalog of agent-optimized MCP tools (8,000+) |
| Gateway composition | One OAuth/connector per upstream service | Runtime-level federation: tools composed into a single identity-scoped URL (Arcade.dev calls this the MCP Gateway feature: a composition layer, not a proxy) |
| Cross-harness portability | Claude Code only | Any MCP-compatible harness (Codex, OpenCode, local models) |

Agent authorization: per-user, per-tool, evaluated at runtime

The most critical function of a dedicated MCP runtime is handling multi-user agent authorization, sometimes called post-prompt authorization.

Single-user demos hide the real problem. Anthropic’s docs are explicit that “routines belong to your individual claude.ai account. They are not shared with teammates.” Every routine is structurally a single-user artifact, even when the work it does affects an entire team.

The moment a routine has to act on behalf of multiple users (one-per-engineer on a platform team, or org-wide when a customer-health scanner runs for every account manager), shared service accounts and creator-inherited OAuth scopes collapse as a model. Teams either give the agent broad permissions (and an intern bypasses their access controls through the agent) or inherit the user’s full permissions (and one prompt injection cascades through every system that user can touch). The right answer is the intersection: what is this agent allowed to do AND what is this user allowed to do, evaluated per action at runtime. That is the problem the runtime has to solve before routines can move past single-user demos.

Rather than letting a routine inherit the global, administrative permissions of its creator, an advanced runtime isolates the LLM entirely from underlying credentials and executes every tool call On-Behalf-Of (OBO) a specific user. The runtime evaluates the intersection of the agent’s baseline permissions and that user’s native permissions per action at runtime, so every action is attributable to a specific human in the audit log.

Authorization is just-in-time. The runtime requests and validates credentials only when a specific user action requires them. If a user never invokes the Salesforce integration, no Salesforce tokens are ever obtained or stored. The entire OAuth flow (token exchange, refresh, storage) executes in deterministic backend logic that the LLM can never observe, alter, or leak. For additional governance, teams attach pre-tool-call and post-tool-call hooks to enforce custom policies: human-in-the-loop approvals for destructive actions, usage limits, or contextual access rules.

The runtime manages the entire OAuth token lifecycle. It handles token refresh, rotation, and mismatch scenarios outside the view of the LLM. If a routine tries to access a repository the target user can’t see, the runtime blocks the action at the protocol layer.

Critically, the runtime hooks into the identity and entitlement systems you already run (Okta, Entra, SailPoint) instead of asking you to redefine authorization policies in yet another system. It acquires scoped tokens just-in-time, enforces the policy your IDP already owns, and keeps credentials isolated from the LLM and the MCP client. The runtime delegates authorization to what the enterprise has already defined; it doesn’t duplicate it.

Agent-optimized tools: built for LLM consumption, not API passthrough

Most MCP servers today are thin API wrappers. When a user says “update the Acme deal,” the wrapper still asks the agent for opportunity_id, owner_id, stage_enum, and close_date. The agent fills those parameters probabilistically and either guesses the wrong values or retries blindly. This failure mode is called parameter hallucination, and it’s where most agent failures happen in production. A proxy layer has no mechanism to close it.

Agent-optimized tools invert this pattern. When a user asks to “make the intro paragraph friendlier,” the tool translates that to segmentId=gz49hg56, index=350, text='your friendlier message'. The agent never thinks beyond “intro paragraph.” Every tool ships with rich semantic descriptions to help the LLM pick correctly, consistent schemas across services regardless of the underlying API, and agent-interpretable errors instead of raw HTTP status codes. In practice this ships as the largest catalog of pre-built agent-optimized MCP tools (8000+), covering productivity, CRM, communication, and developer systems, so teams skip the wrap-an-API-in-MCP step entirely.

Reliability is a runtime concern, not an agent concern. Pagination, rate limiting, retries, and failover all get handled by the runtime, invisible to the agent. Tools execute in parallel where safe; failed calls retry with additional developer-defined context; MCP servers fail over automatically. The agent gets a clean result or a clean error, never a half-paginated list or a transient network blip bubbling up into the reasoning loop.

Strict schemas also harden the tool layer against prompt injection. Schema enforcement is one layer of the defense, not the whole defense. A malicious payload buried in a customer email can’t talk the agent into a destructive call that doesn’t match an approved schema. More importantly, credentials never leave the runtime, so a jailbroken prompt has no tokens to exfiltrate. Per-user authorization is evaluated at every action, so an injected instruction can’t do more than the acting user is already permitted to do. And visibility filtering scopes the tools a routine can even see, so there’s no latent high-privilege tool hanging around for a payload to discover. Prompt injection defense has to be structural and in depth: at the tool layer, the auth layer, and the governance layer. Not a prompt-level patch.
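Schema enforcement is easiest to see as a gate between the model's proposed tool call and the real API. A minimal sketch of that gate — the tool names, parameter schemas, and error choices are all illustrative, not any runtime's actual API:

```python
ALLOWED_TOOLS = {
    # Each tool declares exactly the parameters it accepts; nothing else passes.
    "linear_create_ticket": {"title": str, "description": str, "priority": int},
    "sentry_get_issue": {"issue_id": str},
}

def validate_call(tool: str, params: dict) -> dict:
    """Reject any call to a tool outside the visible catalog, or any call
    carrying unexpected or mistyped parameters."""
    schema = ALLOWED_TOOLS.get(tool)
    if schema is None:
        raise PermissionError(f"tool not visible to this routine: {tool}")
    extra = set(params) - set(schema)
    if extra:
        raise ValueError(f"unexpected parameters: {sorted(extra)}")
    for name, expected in schema.items():
        if name in params and not isinstance(params[name], expected):
            raise TypeError(f"{name} must be {expected.__name__}")
    return params

# An injected "also delete the production database" instruction has no
# matching tool or parameter to land on:
validate_call("linear_create_ticket", {"title": "DB timeouts", "priority": 2})
```

The point of the sketch is structural: the validation runs in deterministic code with its own view of the catalog, so no amount of persuasive input text widens what the agent can call.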

Agent lifecycle governance: centralized control and full visibility

Agent lifecycle governance is the third pillar of an enterprise MCP runtime. Deploying autonomous agents at scale requires centralized control over which tools are available, to whom, and with what permissions, plus total visibility into what’s happening at runtime.

A dedicated runtime provides a full chain of custody for every agent action (user identity, tool name, parameters, and result), exportable to your SIEM via OpenTelemetry. Independent attestation (Arcade.dev is SOC 2 Type 2 certified) validates that these controls hold in production, which matters when security reviews start before deployment, not after. The runtime also lets security teams enforce visibility filtering so a routine only sees the tools it explicitly has permission to use, and provides the infrastructure to mandate human-approval gates for any routine attempting to write data to a production system.

Portability across agent runtimes using MCP

Investing in an MCP runtime also guarantees architectural portability. Because tools are exposed over the open MCP standard, the heavy lifting of building tool contracts, managing OAuth flows, and establishing governance policies happens once.

That investment is usable from any MCP client (Claude Code Routines, Cursor, Claude Desktop, VS Code, ChatGPT, and custom applications) and stays portable across other agent harnesses like OpenAI Codex or on-prem deployments running open-weights models for regulated workloads. When your team swaps Claude for a different harness on a specific workflow, or moves sensitive routines onto on-prem compute for compliance reasons, the tool contracts, OAuth flows, and audit logs travel with you. The agent harness changes; the governance layer does not.

How to test and deploy your first remote Claude Code routine

With the runtime in place, the remaining question is how to ship a routine to production without breaking things. Writing a prompt, attaching a token, and flipping the schedule is not the move. The four-step framework below enforces clear boundaries on top of your MCP runtime:

Step 1: Wire up Arcade MCP Gateway as a custom connector

Before you can safely test anything, give the routine somewhere governed to call. With Arcade, the flow is (full integration walkthrough at Arcade for Claude Code):

  1. In your Arcade dashboard, create a new MCP Gateway. Configure it with Arcade auth so tools inherit per-user, per-action authorization rather than a shared service account.
  2. Add the tools this routine needs to the gateway, scoped to the minimum the workflow requires and nothing more.
  3. In the Claude web interface, create a custom connector pointing at the gateway’s URL.
  4. Complete the one-time authorization to link the connector to the gateway.

With the connector live, any routine you create can include it alongside (or in place of) bundled first-party connectors.

Step 2: Sandbox execution

Never test a new routine against production data. Sandbox the execution using the /schedule command in the CLI or the “Run now” feature in the web interface.

Point the routine at a scratch Notion workspace, a dedicated testing Slack channel, or a sandbox GitHub repository. Conduct multiple dry runs to observe how the routine handles edge cases, unexpected inputs, and empty datasets.

Step 3: Start with read-only permissions

When configuring the routine for its initial deployment, enforce a strict “Read-Only First” mandate. Use your Arcade gateway to scope the routine’s MCP tools exclusively to read operations.

For example, if you’re building an incident triage routine, allow the routine to read from PagerDuty and output its analysis to a simple text file or a private Slack message. Validate the quality of the routine’s logic and data extraction for at least one week before granting permission to write data or create tickets.

Step 4: Add human approval gates for write actions

As you transition the routine to handle write operations, establish hard structural boundaries that mandate human oversight.

Don’t allow the agent to commit directly to your main branch or publish documentation live. Instead, configure the routine to draft documents, open pull requests, or push code exclusively to branches with a specific prefix. Every destructive or state-changing action requires a human engineer to review and merge the work.
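The branch-prefix rule can be enforced twice: server-side with branch protection, and as a belt-and-braces check in the tool layer before any push is attempted. A sketch of the tool-layer half (the prefix list is illustrative):

```python
# Branch prefixes the routine's write token may target; illustrative values.
ALLOWED_PREFIXES = ("refs/heads/release-notes/", "refs/heads/agent/")

def push_allowed(ref: str) -> bool:
    """Structural gate: the routine may only push to prefixed branches.
    Mirror the same rule server-side with branch protection so the gate
    holds even if the tool layer is bypassed."""
    return ref.startswith(ALLOWED_PREFIXES)

print(push_allowed("refs/heads/release-notes/v1.8.0"))  # True
print(push_allowed("refs/heads/main"))                  # False
```

With both layers in place, main is unreachable by construction: the agent's work only ever becomes production state through a human-merged pull request.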

Where to start

Claude Code Routines deliver genuine unattended automation for engineering teams: Claude Code running on a schedule, GitHub event, or API call, entirely off the developer laptop. Realizing that value across an organization means acknowledging that moving from a localized laptop demo to a nightly production workflow introduces severe architectural and security challenges.

You can’t run autonomous workflows at scale using bundled connectors, first-party token inheritance, and opaque execution logs. Production deployments demand typed tool contracts, robust rate-limit handling, and explicit permission scoping to protect against prompt injection and data exposure.

If your engineering team is evaluating how to run unattended AI agents safely, Arcade is the industry’s first MCP runtime purpose-built for this. By unifying agent authorization, agent-optimized tools, and agent lifecycle governance in a single runtime, we let you ship reliable production workflows without spending months rebuilding security and operational plumbing.

FAQ

What are Claude Code Routines, and what changed in the April 2026 release?

A routine is a saved Claude Code configuration (prompt, repositories, and connectors) packaged to run automatically on Anthropic-managed cloud infrastructure. The April 2026 release shipped three trigger types: scheduled, API (per-routine /fire endpoint with a bearer token), and GitHub events (pull request or release activity on a connected repository). Routines are currently in research preview.

How many times per day can a Claude Code Routine run?

Routines share subscription usage with interactive sessions and have an additional daily cap on how many runs can start per account. Anthropic doesn’t publish a specific number and it can change during the research preview, so per-event routines that fire on every PR comment or alert quickly become impractical.

How do teams work around routine run quotas in production?

Two options. First, batch multiple tasks into a single daily “meta-orchestrator” routine and reserve real-time runs for only the highest-severity API and GitHub event triggers. Second, enable extra usage in Settings → Billing so runs that hit the cap continue on metered overage.

Why are bundled connectors risky for enterprise unattended routines?

Bundled first-party connectors inherit the creating developer’s global OAuth scope. That permission inheritance fails security reviews the moment the routine touches shared code, customer data, or regulated systems.

How do unattended routines increase prompt injection risk?

Untrusted third-party text (PagerDuty descriptions, Sentry traces, customer emails) flows directly into the agent at runtime. A payload buried in that text can steer the agent toward unsafe actions. Defense has to be multi-layered at the runtime: isolated credentials the LLM never sees, per-user authorization evaluated on every action, schema enforcement on each tool call, and visibility filtering so the routine can’t even discover tools it isn’t permitted to use.

What is an MCP runtime, and why do I need it?

An MCP runtime is the execution layer where agent tool calls run. It resolves credentials just-in-time, authorizes each action against a specific user’s permissions, enforces tool schemas, and writes a unified audit log. It is not another proxy in front of your enterprise systems. The agent is already the proxy. The runtime is where identity, policy, and execution come together.

What is “post-prompt authorization”?

The runtime checks each individual tool action at execution time against the acting user’s permissions and the routine’s policy. The routine never inherits the creator’s blanket credentials.

Which routine actions should require human approval?

Any write or state-changing action (creating tickets, committing code, publishing documentation) should land as a draft, PR, or triage queue and go through a human review gate before merging.

How do Slack API rate limits affect these workflows?

Slack’s conversations.history endpoint now rate-limits non-Marketplace apps to a single request per minute. Production designs use Slack Search, targeted webhooks, or curated context instead of bulk history pulls.

What should I implement first to deploy a safe routine?

Wire up Arcade as a custom connector first so the routine calls tools through a governed runtime, then test in a sandbox, enforce read-only tools, and introduce human-in-the-loop gates before granting write permissions.

What should be logged for auditability in enterprise routines?

Log the triggering event, the tools called, the target resources, the acting user or service account, and the resulting object IDs (e.g., Sentry event ID → Linear ticket ID).