BetterBrain
Technical

Your AI Was Stuck Behind a Wall. MCP Is the Door.

April 2026·5 min read

You've probably had that moment. You're deep in a conversation with an LLM, it's giving you genuinely sharp analysis, and then you ask it something simple, "what's the weather like in London right now?", and it falls flat on its face.

That's because your model is stuck behind a wall. It can think, but it can't reach anything. Not your files, not your calendar, not your company's database, not the live web. Everything it knows is frozen in time from when it was trained.

Enter MCP. Model Context Protocol. The name explains itself quite well: Model: your LLM. Context: the extra information that helps it make better decisions. Protocol: an agreed set of rules for how that communication works. It's an open standard, built by Anthropic and driven by the community, that gives models a universal way to connect to tools and live data.

The analogy I keep coming back to is USB-C. Remember when every device had its own proprietary cable and you needed a drawer full of chargers? MCP does for AI what USB-C did for hardware.

Beyond standardisation, MCP addresses two key challenges. First, models can't see what you don't show them. Your private APIs, internal databases, local files: none of that exists to the model unless you manually feed it in. MCP lets the model pull what it needs, when it needs it. Second, context windows are expensive. Cramming everything into a prompt isn't just impractical, it's costly. MCP flips the approach: instead of pushing the world to the model, the model reaches out for exactly what's relevant.

The architecture is client-server. Your app runs an MCP client (the messenger), which talks to one or more MCP servers (the capabilities). Each server can expose three things. Resources: data the model can read, roughly a GET request: a database query, a file, an API response. Tools: actions the model can take, more like a POST: updating a record, modifying a file, sending a message. This is where your assistant stops being passive and starts being useful. Prompts: reusable templates you can share across your network.
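Under the hood, client and server speak JSON-RPC 2.0, and tool invocation goes over the `tools/call` method. Here is a sketch of what that exchange might look like on the wire; the tool name `get_weather` and its arguments are invented for illustration:

```typescript
// What an MCP client might send to invoke a tool. MCP uses JSON-RPC 2.0
// as its message format; "tools/call" is the method for tool invocation.
// The "get_weather" tool and its arguments are hypothetical.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_weather",           // which tool the model chose
    arguments: { city: "London" }, // inputs matching the tool's schema
  },
};

// The matching response carries the tool's output back as content blocks.
const response = {
  jsonrpc: "2.0",
  id: 1, // same id as the request it answers
  result: {
    content: [{ type: "text", text: "14°C, light rain" }],
  },
};

console.log(JSON.stringify(request));
```

The model never sees this plumbing; the client and server exchange these messages, and the tool result lands in the model's context as ordinary text.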

For transport, you've got two options. STDIO for anything local: CLI tools, desktop apps. And Streamable HTTP for anything over the web. That's basically the whole decision.

In practice, you can spin up multiple MCP servers at once: one for web search, one for file access, one for your internal API. The model figures out which tool it needs and calls it. Real examples include live Google search, weather data, reading and writing local files, or querying a private company API that no public model would ever have access to.

Can you build one? Yes, and it's quicker than you'd think. Install the MCP SDK and Zod for input validation, create a server, register your tools and resources with a name, description, and handler, pick a transport, connect, and you're running. A working server fits in under 60 lines.

In a nutshell, your AI was stuck behind a wall. MCP is the door. What you connect it to is up to you.

Want to learn more?

See how BetterBrain puts these ideas into practice.