🧠 What is an MCP Server? (My Simple Understanding)
I’m not an expert in AI, but I recently came across something cool while watching a video on the Chai aur Code YouTube channel: https://youtu.be/dZyQNy3-HjU?si=zkQ6BspDUpcHk-qB. So I thought I’d try to explain it in my own words.
🛠️ What’s the Problem?
If you’ve ever used ChatGPT or any AI coding tool, you probably gave it a prompt like:
“Hey, generate a Python API with login and register routes.”
It gives you code — great!
But then what?
- You still have to copy-paste it into your project
- Install stuff manually
- Fix any broken imports
- And AI has no idea what’s already in your code
This is where I got curious…
What if AI tools could actually read your codebase, understand what’s there, and then make changes directly like a teammate?
💡 That’s where MCP comes in.
MCP = Model Context Protocol
It’s not software, not an app you install.
It’s a standard — basically a rulebook for how AI tools (like Cursor or Copilot Workspace) can talk to your files, run commands, and understand your whole setup.
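To make “rulebook” a bit more concrete: under the hood, MCP messages are plain JSON-RPC. Here’s a simplified sketch of the kind of request an AI tool might send to run a command; I wrote this from memory as an illustration, so don’t treat the exact field values as spec-perfect.

```ts
// A simplified example of the kind of JSON-RPC request an MCP client
// (e.g. Cursor) might send to an MCP server. Illustrative, not copied
// from the official spec.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",            // "please run one of your tools"
  params: {
    name: "run_command",           // hypothetical tool name
    arguments: { command: "npm install express" },
  },
};

console.log(JSON.stringify(toolCallRequest, null, 2));
```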
💬 Let Me Explain It My Way:
Imagine you’re working on a Node.js project.
You open Cursor (which looks like VS Code), and say:
“Create an Express server with one `/health` endpoint.”
Without MCP (Old Style):
- The AI gives you some sample code
- You copy it
- Paste it manually
- Create the file yourself
- Maybe forget to install `express`
Basically: AI suggests, you do the work.
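For reference, the snippet the AI hands you in that old flow looks something like this (and you still have to create the file and install `express` yourself):

```ts
// The kind of snippet the AI suggests and you paste in manually.
// Assumes you've already run: npm install express
import express from "express";

const app = express();

// The /health endpoint from the prompt
app.get("/health", (req, res) => {
  res.json({ status: "ok" });
});

app.listen(3000, () => {
  console.log("Server listening on http://localhost:3000");
});
```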
With MCP:
- The AI sees your current project structure
- It checks if `express` is installed
- Creates the file in the right place
- Adds the route
- Even runs `npm install express` for you
🤯 Feels like a real junior developer working with you.
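To show what I mean, here’s a rough sketch of that flow written out as a list of tool calls. The tool names (`read_file`, `run_command`, `write_file`) are ones I made up for illustration; real MCP servers expose their own names.

```ts
// Hypothetical sequence of MCP tool calls for the "Express server with a
// /health endpoint" request. Tool names and arguments are made up.
const plannedCalls = [
  { tool: "read_file",   args: { path: "package.json" } },           // is express already installed?
  { tool: "run_command", args: { command: "npm install express" } }, // if not, install it
  { tool: "write_file",  args: { path: "src/server.js", content: "/* Express app with /health */" } },
];

for (const call of plannedCalls) {
  console.log(`AI -> MCP server: ${call.tool}`, call.args);
}
```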
📦 So what exactly is an MCP server?
Think of it like the “middleman” between your AI model (like Claude or GPT-4) and your local environment.
It knows how to:
- Read files in your repo
- Edit and write code
- Run shell commands
- Check logs or errors
- Even connect to tools like Slack, GitHub, Figma, etc.
👉 Cursor, Continue.dev, Copilot Workspace — these tools use something like an MCP server behind the scenes.
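Out of curiosity, I looked at what building one yourself roughly looks like. This is a minimal sketch assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`); the API is still evolving, so double-check the current docs before copying it.

```ts
// Minimal MCP server sketch, assuming the official TypeScript SDK
// (@modelcontextprotocol/sdk). Method names may differ across SDK versions.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { readFile } from "node:fs/promises";

const server = new McpServer({ name: "my-repo-helper", version: "0.1.0" });

// One tool: let the model read a file from the repo.
server.tool(
  "read_repo_file",
  { path: z.string() },
  async ({ path }) => ({
    content: [{ type: "text", text: await readFile(path, "utf8") }],
  })
);

// Talk to the client (Cursor, Claude Desktop, etc.) over stdio.
await server.connect(new StdioServerTransport());
```

The shape is the interesting part: the server just registers small capabilities, and the client (and its model) decides when to call them.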
🧰 What kind of things can MCP servers handle?
MCP uses “tools” like:
- File Tools -> create, delete, rename, edit files
- Shell Tools -> run terminal commands
- Docs Tools -> search documentation
- Web Tools -> make API calls
Think of these like small plugins.
One tool just creates files. Another reads logs. Another checks errors.
Like microservices — but for AI.
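If it helps, you can picture each tool as nothing more than a small, single-purpose function with a name. Here’s a framework-free sketch of that idea (real MCP tools also declare a schema for their inputs):

```ts
// Framework-free sketch: each "tool" is just a small, single-purpose function.
import { writeFile } from "node:fs/promises";
import { exec } from "node:child_process";
import { promisify } from "node:util";

const sh = promisify(exec);

const tools = {
  // File tool: create a file
  create_file: async (path: string, content: string) => writeFile(path, content),

  // Shell tool: run a terminal command and return its output
  run_command: async (command: string) => (await sh(command)).stdout,

  // Web tool: make an API call
  http_get: async (url: string) => (await fetch(url)).text(),
};

// The AI never runs these directly; it asks the MCP server, which runs them.
console.log(Object.keys(tools)); // [ 'create_file', 'run_command', 'http_get' ]
```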
💡 Why is this needed?
Right now, when AI interacts with your environment, there’s no common format or rulebook.
Everyone builds their own custom system.
MCP solves this by saying:
“Let’s all use the same protocol, so AI tools work with any editor or backend.”
Kinda like how REST APIs standardized web services, MCP does the same thing for AI + developer tools.
🔐 Bonus Example: Security Context
Let’s say your LLM is trying to fix an error in your repo.
You don’t want it accessing your company’s Slack or GitHub issues unless you allow it.
MCP defines context boundaries — like:
- What data the AI is allowed to see
- What actions it can take (read-only? write?)
- Which tools are available
- How to “talk” to those tools
So it’s safe, limited, and predictable — not some random script with full access.
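As a mental model, you can think of that boundary as a small permissions config that the server (or the editor) enforces. This is my own illustration of the idea, not an official MCP config format:

```ts
// Illustration only: a made-up permissions object showing the kind of
// boundaries an MCP setup can enforce. Not an official config format.
const contextBoundary = {
  allowedTools: ["read_file", "search_docs"],   // which tools the model may call
  fileAccess: {
    mode: "read-only" as const,                 // read-only vs. read-write
    roots: ["./src", "./docs"],                 // only these folders are visible
  },
  network: { allowed: false },                  // no Slack/GitHub calls unless you enable them
};

function canUseTool(tool: string): boolean {
  return contextBoundary.allowedTools.includes(tool);
}

console.log(canUseTool("run_command")); // false: shell access was never granted
```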
🧠 MCP Has 5 Main Context Types
Just for nerds like me who want to go a bit deeper:
- Tools — Functions like `createFile`, `readLogs`
- Resources — Actual files (e.g., logs, CSVs)
- Sampling — When the server needs the AI model to generate something for it
- Prompts — Templates you give to the model
- Protocols — Rules about how AI communicates
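Here’s the little cheat-sheet I keep in my head for the five types; the literal values are just my own examples, not anything from the spec:

```ts
// Personal cheat-sheet for the five context types; values are illustrative.
const mcpContextTypes = {
  tools:     ["createFile", "readLogs"],                              // functions the model can call
  resources: ["file:///var/log/app.log", "file:///data/users.csv"],   // actual data the model can read
  sampling:  "the server asking the AI model to generate something for it",
  prompts:   "reusable templates, e.g. 'Review this diff: {{diff}}'",
  protocols: "the rules for how all of the above gets exchanged",
};

console.log(mcpContextTypes.tools);
```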
📌 Summary (in simple words)
- MCP is a rulebook, not a tool
- It helps AI tools act like a real assistant, not just a chatbot
- You’ll see it behind tools like Cursor, Copilot Workspace, Continue.dev
- It’s made by Anthropic, and still evolving
🎥 Credit Where It’s Due
I learned about all this from a great video by Chai aur Code: https://youtu.be/dZyQNy3-HjU?si=zkQ6BspDUpcHk-qB.
I’m still exploring this space and just wanted to share what I understood.
Let me know if I explained it well — or if something is missing!