
MCP Has Already Won
Critics say MCP is overkill, but the data tells a different story. Arcade.dev CEO Alex Salazar breaks down why MCP has already won the protocol war.

Arcade.dev has been selected for the 2026 Enterprise Tech 30 list by Wing Venture Capital, which recognizes the most promising private companies across the enterprise technology stack.

MCP makes it easy to go from “agent” to “agent that takes action.” The trap is that success compounds: every new system becomes a new server, every team ships “just one more tool,” and soon your integration surface is too large to reason about, too inconsistent to secure, and too messy to operate. Meanwhile, the model gets blamed for failure modes that are actually integration design problems. Tool definitions balloon. Selection accuracy drops. Context gets eaten before anyone types a prompt.
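A rough way to see that context cost: every tool definition is serialized JSON that rides along with every request, before the user types a word. A minimal sketch (the schemas and the tokens-per-character heuristic below are illustrative assumptions, not any particular provider's format):

```python
import json

# Hypothetical tool definitions in a function-calling-style JSON schema.
# Real schemas are usually far larger once descriptions and enums are added.
TOOLS = [
    {
        "name": f"crm_action_{i}",
        "description": "Read or update a record in the CRM for the current user.",
        "parameters": {
            "type": "object",
            "properties": {
                "record_id": {"type": "string", "description": "ID of the record."},
                "fields": {"type": "object", "description": "Fields to update."},
            },
            "required": ["record_id"],
        },
    }
    for i in range(30)  # roughly the tool count where teams report trouble
]

def estimated_tokens(obj) -> int:
    """Crude heuristic: about 1 token per 4 characters of serialized JSON."""
    return len(json.dumps(obj)) // 4

overhead = sum(estimated_tokens(t) for t in TOOLS)
print(f"{len(TOOLS)} tools consume roughly {overhead} tokens before any prompt")
```

Even with these modest schemas, thirty tools cost thousands of tokens per request — and that bill is paid again on every turn of the conversation.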
Most conversations about AI agents still start the same way: models, prompts, frameworks, followed by an incredible-looking demo. Then someone asks, “Okay… when can it ship to production?” That’s where things get a little awkward. The naked truth in the fading demo afterglow is that agents are apps. Which means they need identity, permissions, real integrations, and a way to behave predictably when something goes sideways. Without these components, any agent can dazzle a boardroom, but it won’t survive production.
Right now, somewhere in San Francisco, a foundation model company is losing money serving your API call. OpenAI spent $8.67 billion on inference in the first nine months of 2025—nearly double their revenue for the same period. Sam Altman publicly admitted they lose money on $200-per-month ChatGPT Pro subscriptions. Anthropic burns 70% of every dollar they bring in. These companies are pricing their products below cost, subsidized by the largest concentration of venture capital in technology history.

The agent ecosystem has a terminology problem that masks a real architectural choice. "Tools" and "skills" get used interchangeably in marketing decks and conference talks, but they represent fundamentally different approaches to extending agent capabilities. Understanding this distinction is the difference between building agents that work in demos versus agents that work in production. But here's the uncomfortable truth that gets lost in the semantic debates…

I was recently in Amsterdam meeting with some of the largest enterprises, and they all raised the same challenge: how do you give AI agents access to more tools without everything falling apart? The issue is that as soon as they hit 20-30 tools, token costs become untenable and selection accuracy plummets. The pain has been so acute that many teams have been attempting (unsuccessfully) to build their own workarounds with RAG pipelines, only to hit performance walls. That's why I'm excited about…
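The shape of that RAG-over-tools workaround is worth seeing: instead of sending all 20-30 tool definitions on every request, you retrieve only the few that match the user's query. A minimal sketch — the catalog is hypothetical, and word overlap stands in for the embedding similarity a real pipeline would use:

```python
from collections import Counter

# Hypothetical tool catalog: tool name -> short description.
CATALOG = {
    "slack_post_message": "Post a message to a Slack channel",
    "jira_create_issue": "Create a new issue in a Jira project",
    "gdrive_search_files": "Search files in Google Drive by name or content",
    "salesforce_update_deal": "Update a deal record in Salesforce",
    "netsuite_get_revenue": "Fetch revenue figures from NetSuite",
}

def top_k_tools(query: str, k: int = 2) -> list[str]:
    """Score each tool description by word overlap with the query.
    A production system would use embedding similarity, but the
    retrieve-then-expose shape is the same."""
    query_words = Counter(query.lower().split())
    def score(name: str) -> int:
        return sum(query_words[w] for w in CATALOG[name].lower().split())
    return sorted(CATALOG, key=score, reverse=True)[:k]

print(top_k_tools("create a jira issue for the failing build"))
```

Only the retrieved subset is passed to the model, which keeps token costs flat as the catalog grows — though, as the teams above found, retrieval quality itself becomes the new bottleneck.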

The biggest unlock for production AI agents isn't better models—it's accepting that agents are just applications. That sounds obvious. Maybe even boring. But this one realization cuts through years of confusion, eliminates entire categories of "new" security problems, and makes multi-user agent deployment actually feasible. Here's why it matters.

The Non-Human Identity Trap

When agents started hitting the enterprise conversation around 2023, identity companies rushed to define the problem.

Your agent needs to pull data from Google Drive, post a summary to Slack, and create a Jira ticket. Simple request. But whose credentials does it use? Should it have permission to delete your entire Drive folder? This authorization problem kills agent demos before they reach production. It's not about users logging into your agent (LangGraph Platform handles that). It's about your agent accessing other services on behalf of those users. If you're building real agents, you've hit this wall.
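The core of acting on behalf of a user is resolving a credential that belongs to that user, scoped to what they actually granted — never a shared service account. A minimal sketch, with a hypothetical in-memory token store standing in for a real OAuth flow:

```python
from dataclasses import dataclass

@dataclass
class Token:
    access_token: str
    scopes: frozenset

# Hypothetical per-user token store. A real deployment would back this
# with an OAuth authorization-code flow and refresh tokens, not a dict.
TOKEN_STORE = {
    ("alice", "google_drive"): Token("ya29.example", frozenset({"drive.readonly"})),
}

def call_tool(user_id: str, service: str, required_scope: str) -> str:
    """Resolve the calling user's own credential for the service and
    refuse the call if that scope was never granted."""
    token = TOKEN_STORE.get((user_id, service))
    if token is None:
        raise PermissionError(f"{user_id} has not connected {service}")
    if required_scope not in token.scopes:
        raise PermissionError(f"token lacks scope {required_scope!r}")
    return f"GET {service} as {user_id} using {token.access_token[:8]}..."

print(call_tool("alice", "google_drive", "drive.readonly"))
```

Note what the scope check buys you: the agent that summarizes Alice's Drive folder holds a read-only token, so "delete the entire folder" fails at the credential layer no matter what the model decides to do.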
Remember that moment when you realized your phone could do more than make calls? Today feels like that—but bigger. Arcade.dev and Lithic just unlocked true agentic commerce: AI agents that can browse, compare, and actually complete purchases. This isn't another chatbot that helps you shop. This is autonomous AI that shops for you.

The Agentic Commerce Problem We All Pretended Didn't Exist

Here's the dirty secret: Every "agentic commerce" demo you've seen stops at checkout. Why? Because nobody…
Your AI can summarize documents you feed it, answer questions about your uploaded PDFs, and explain concepts from its training data. But ask it to pull your actual Q4 revenue from NetSuite, check real customer satisfaction scores, or update a deal in Salesforce? Suddenly it's just guessing—or worse, hallucinating numbers that sound plausible but aren't your data. This disconnect between AI's intelligence and its ability to access real data and take action is why less than 30% of AI projects have reached production.

As AI agents and LLM-based applications become increasingly sophisticated, developers face unprecedented challenges in securing these autonomous systems. The intersection of artificial intelligence with identity management has created a complex landscape where traditional security paradigms prove inadequate. This report examines the fundamental questions developers are grappling with as they attempt to build secure, scalable AI systems in this rapidly evolving space.

Reconceptualizing Identity…
AI agents often need to access multiple services and data sources on behalf of users. This introduces unique authentication and authorization challenges that go beyond typical single sign-on (SSO) for human users. Unlike a standard web app, an AI agent might operate without a user interface and even make autonomous decisions. To keep these agents secure and effective, it's critical to use best practices like least-privilege access and just-in-time authentication, and to understand where traditional approaches fall short.
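Least privilege and just-in-time authentication combine naturally: instead of handing the agent one long-lived, broadly scoped credential, mint a short-lived token with the narrowest scope only at the moment an action runs. A minimal sketch — the action-to-scope policy and token shape here are illustrative assumptions:

```python
import time

# Hypothetical policy: each agent action maps to the minimum scope it needs.
LEAST_PRIVILEGE = {
    "summarize_doc": "drive.readonly",
    "post_update": "chat.write",
}

def mint_token(action: str, ttl_seconds: int = 300) -> dict:
    """Just-in-time credential: issued only when the action actually runs,
    carrying the narrowest scope and a short expiry, instead of a
    long-lived token with every scope the agent might ever need."""
    scope = LEAST_PRIVILEGE.get(action)
    if scope is None:
        raise PermissionError(f"no policy allows action {action!r}")
    return {"scope": scope, "expires_at": time.time() + ttl_seconds}

token = mint_token("summarize_doc")
print(token["scope"])
```

Because the token expires in minutes and names a single scope, a leaked credential or a misbehaving agent can do far less damage than it could with a standing API key.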

AI agents promise to transform how we interact with technology, but at what cost to our privacy? As these digital assistants gain the power to act on our behalf, they're raising fundamental questions about security that can no longer be ignored.

The Security Gap in AI Action

Signal President Meredith Whittaker recently warned about the security and privacy challenges of agentic AI, describing it as "putting your brain in a jar." Her concerns highlight a critical reality: for AI agents to be useful…

(And why this time the automation hype is real) The tech industry is great at overpromising automation. RPA convinced enterprises to spend billions on tools that broke whenever processes changed. Now AI is flooding the market with similar claims. But beneath the hype, something different is happening with AI agents. They're succeeding precisely where RPA failed - by bringing true adaptability to unstructured business processes. Here's the reality behind the promises.

What is an AI Agent?

ChatGPT can't send emails, order food, or book flights. It can write SQL, but it can't query databases or work with the results. AI can't connect to the real world. It can't authenticate to access your accounts or use your data. This disconnect is partly why less than 30% of AI projects go to production. The biggest opportunity in AI today isn't better AI models—it's enabling those models to take real actions. Developers need secure connections between AI and authenticated services, user…