
Model Context Protocol (MCP) — Custom Tool for AI

I’ve been experimenting with AI assistants and recently built a small backend using the Model Context Protocol (MCP). Following the excellent MCP tutorial linked below, I documented what I learned while building a working MCP server and connecting it to Claude Desktop.

What MCP Let Me Do

In short, MCP allowed me to extend an LLM with real-world capabilities. Instead of treating the model as a closed box, MCP provides a protocol to expose tools and resources the model can call during conversations.

Key concepts I used:

  • MCP server — the backend that hosts tools and resources.
  • MCP client — the AI interface (e.g., Claude Desktop) that talks to the server.
  • Tools — Python functions decorated as MCP tools (I implemented get_latest_news).
  • Resources & prompts — files and templates that the model can reference when composing responses.

Quick setup steps

The tutorial made setup straightforward. My minimal workflow was:

  1. Add the dependencies to a requirements.txt file:

     fastapi
     uvicorn
     python-dotenv
     httpx
     beautifulsoup4
     mcp

  2. Install the packages:

     pip install -r requirements.txt

  3. Run the MCP server:

     python main.py

After these steps, the server is ready for the client to connect.

What main.py does (high level)

  • Imports necessary libraries (FastMCP, httpx, BeautifulSoup, etc.).
  • Creates an MCP server instance: mcp = FastMCP("tech_news").
  • Registers a tool with @mcp.tool() — my get_latest_news function that accepts a source name and returns headlines.
  • Uses helper code (fetch_news) to request web pages and parse them with BeautifulSoup.
  • Starts the server with mcp.run(transport='stdio') when executed directly.

In practice, calling the tool from the client runs the corresponding Python function on the server and returns structured results to the model. It’s a clean way to safely provide external data to the LLM.

Connecting to Claude Desktop

To make the client use my server, I added a simple entry in claude_desktop_config.json under the Developer settings:

{
  "mcpServers": {
    "tech_news_server": {
      "command": "python",
      "args": ["main.py"]
    }
  }
}

After restarting Claude Desktop, a small hammer icon appeared in the prompt area — indicating my MCP server was available as a tool. I could then ask Claude to fetch the latest headlines and the model would call my get_latest_news tool.

Takeaways

This tiny project taught me a lot about safely extending LLMs with external logic. A few things stood out:

  • MCP is an elegant, lightweight way to expose deterministic tools to a model.
  • Tooling like httpx + BeautifulSoup lets you provide live web data to models without giving them raw web access.
  • Keeping the server simple and well-scoped makes it easier to verify outputs and control behavior.

If you want to try it, the tutorial I followed is a great starting point:

Model Context Protocol (MCP) — an end-to-end tutorial (hands-on)

Feel free to reach out if you want the main.py snippet or a sandboxed example to try locally.

Repository

You can find the project source code in the repo:

This post is licensed under CC BY 4.0 by the author.