AI agents are only as good as the context provided to them. Even the most advanced model won't be very helpful if it doesn't have access to the data or tools it needs to get more information. That's why tools and resources are crucial for any AI agent.
I’ve noticed that I keep repeating the same tasks over and over again: writing similar prompts or developing the same tools repeatedly. There’s a fundamental principle in software engineering called DRY, which stands for “Don’t Repeat Yourself”.
So, I started wondering if there’s a way to avoid duplicating all this work. Fortunately, the GenAI industry already has a solution in place. MCP (Model Context Protocol) is an open-source protocol that enables the connection of AI applications to external tools and data sources. Its main goal is to standardise such interactions, similar to how REST APIs standardised communication between web applications and backend servers.
With MCP, you can easily integrate third-party tools like GitHub, Stripe or even LinkedIn into your AI agent without having to build tools yourself.
You can find the list of MCP servers in this curated repository. However, it’s important to note that you should only use trusted MCP servers to avoid potential issues.
Similarly, if you want to expose your tools to customers (i.e. allow them to access your product through their LLM agents), you can simply build an MCP server. Then customers will be able to integrate with it from their LLM agents, AI assistants, desktop apps or IDEs. It’s really convenient.
MCP fundamentally solves the problem of repetitive work. Imagine you have M applications and N tools. Without MCP, you would need to build M * N integrations to connect them all.
With MCP and standardisation, you can reduce this number to just M + N: for example, 10 applications and 20 tools would require 200 bespoke integrations, but only 30 MCP ones.

In this article, I will use MCP to develop a toolkit for analysts. After reading this article, you will:
- learn how MCP actually works under the hood,
- build your first MCP server with useful tools,
- leverage the capabilities of your own MCP server and reference servers in local AI tools (like Cursor or Claude Desktop),
- launch a remote MCP server that can be accessed by the community.
In the following article, we will take it a step further and learn how to integrate MCP servers into your AI agents.
That’s a lot to cover, so let’s get started.
MCP architecture
I think it’s worth understanding the basic principles before jumping into practice, since that will help us use the tools more effectively. So let’s discuss the fundamentals of this protocol.
Components
This protocol uses a client-server architecture:
- Server is an external program that exposes capabilities through the MCP protocol.
- Host is the user-facing application (like the Claude Desktop app, AI IDEs such as Cursor or Lovable, or custom LLM agents). The host is responsible for managing MCP clients and maintaining connections to servers.
- Client is a component of the user-facing app that maintains a one-to-one connection with a single MCP server. They communicate through messages defined by the MCP protocol.

MCP gives the LLM access to three types of capabilities: tools, resources and prompts.
- Tools are functions that the LLM can execute, such as getting the current time in a city or converting money from one currency to another.
- Resources are read-only data or context exposed by the server, such as a knowledge base or a change log.
- Prompts are pre-defined templates for AI interactions.

MCP allows you to write servers and tools in many different languages. In this article, we will be using the Python SDK.
Lifecycle
Now that we know the main components defined in MCP, let’s see how the full lifecycle of interaction between the MCP client and server works.
The first step is initialisation. The client connects to the server, they exchange protocol versions and capabilities, and, finally, the client confirms via a notification that initialisation has been completed.

Then, we move to the message exchange phase.
- The client might start the interaction with discovery. MCP supports dynamic feature discovery: the client can ask the server for a list of supported tools with a request like tools/list and will get the list of exposed tools in response. This allows the client to adapt when working with different MCP servers.
- The client can also invoke capabilities (call a tool or access a resource). In this case, it can get back from the server not only a response but also progress notifications.

Finally, the client initiates the termination of the connection by sending a request to the server.
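To make this more concrete, here is roughly what the exchange looks like on the wire. These JSON-RPC messages are a simplified illustration (the protocol version string and client name are examples; in practice, the SDKs construct these messages for you):
The client opens the session:
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2025-03-26", "capabilities": {}, "clientInfo": {"name": "example-client", "version": "1.0.0"}}}
Then it confirms that initialisation is complete:
{"jsonrpc": "2.0", "method": "notifications/initialized"}
And discovery is just another request:
{"jsonrpc": "2.0", "id": 2, "method": "tools/list"}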
Transport
If we dive a little bit deeper into the MCP architecture, it’s also worth discussing transport. The transport defines how messages are sent and received between the client and server.
At its core, MCP uses the JSON-RPC protocol. There are two transport options:
- stdio (standard input and output) for cases when client and server are running on the same machine,
- HTTP + SSE (Server-Sent Events) or Streamable HTTP for cases when they need to communicate over a network. The primary difference between these two approaches lies in whether the connection is stateful (HTTP + SSE) or can also be stateless (Streamable HTTP), which can be crucial for certain applications.
When running our server locally, we will use standard I/O as transport. The client will launch the server as a subprocess, and they will use standard input and output to communicate.
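To see the stdio transport and the lifecycle together, here is a minimal client sketch based on the official Python SDK (assuming the server lives in server.py and its dependencies are installed in the current environment):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# The client launches the server as a subprocess and talks to it over stdin/stdout
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialisation: version and capability exchange + "initialized" notification
            await session.initialize()
            # Discovery: sends the tools/list request under the hood
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
    # Leaving the context managers terminates the connection

asyncio.run(main())

On the server side, switching to a network transport is, as far as the SDK goes, a matter of passing a different transport to mcp.run() (for example, transport="streamable-http").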
With that, we’ve covered all the theory and are ready to move on to building our first MCP server.
Creating your toolkit as a local MCP server
I would like to build a server with some standard tools I use frequently, and also leverage all the MCP capabilities we discussed above:
- a prompt template for querying our ClickHouse database that outlines both the data schema and the nuances of SQL syntax (it's tedious to repeat them every time),
- tools to query the database and get information about recent GitHub PRs,
- our changelog as resources.
You can find the full code in the repository; below, I will show only the main server code, omitting the business logic.
We will use the Python SDK for MCP. Creating an MCP server is pretty straightforward. Let's start with a skeleton: we import the MCP package, initialise the server object and run the server when the program is executed directly (not imported).
from mcp.server.fastmcp import FastMCP
from mcp_server.prompts import CLICKHOUSE_PROMPT_TEMPLATE
from mcp_server.tools import execute_query, get_databases, get_table_schema, get_recent_prs, get_pr_details
from mcp_server.resources.change_log import get_available_periods, get_period_changelog
import os

# Create an MCP server
mcp = FastMCP("Analyst Toolkit")

# Run the server
if __name__ == "__main__":
    mcp.run()
Now, we need to add the capabilities. We will do this by decorating functions. We will also write detailed docstrings and include type annotations to ensure that the LLM has all the necessary information to use them properly.
@mcp.prompt()
def sql_query_prompt(question: str) -> str:
    """Create a SQL query prompt"""
    return CLICKHOUSE_PROMPT_TEMPLATE.format(question=question)
Next, we will define tools similarly.
# ClickHouse tools
@mcp.tool()
def execute_sql_query(query: str) -> str:
    """
    Execute a SQL query on the ClickHouse database.

    Args:
        query: SQL query string to execute against ClickHouse

    Returns:
        Query results as tab-separated text if successful, or error message if query fails
    """
    return execute_query(query)

@mcp.tool()
def list_databases() -> str:
    """
    List all databases in the ClickHouse server.

    Returns:
        Tab-separated text containing the list of databases
    """
    return get_databases()

@mcp.tool()
def describe_table(table_name: str) -> str:
    """
    Get the schema of a specific table in the ClickHouse database.

    Args:
        table_name: Name of the table to describe

    Returns:
        Tab-separated text containing the table schema information
    """
    return get_table_schema(table_name)

# GitHub tools
@mcp.tool()
def get_github_prs(repo_url: str, days: int = 7) -> str:
    """
    Get a list of PRs from the last N days.

    Args:
        repo_url: GitHub repository URL or owner/repo format
        days: Number of days to look back (default: 7)

    Returns:
        JSON string containing list of PR information, or error message
    """
    import json
    token = os.getenv('GITHUB_TOKEN')
    result = get_recent_prs(repo_url, days, token)
    return json.dumps(result, indent=2)

@mcp.tool()
def get_github_pr_details(repo_url: str, pr_identifier: str) -> str:
    """
    Get detailed information about a specific PR.

    Args:
        repo_url: GitHub repository URL or owner/repo format
        pr_identifier: Either PR number or PR URL

    Returns:
        JSON string containing detailed PR information, or error message
    """
    import json
    token = os.getenv('GITHUB_TOKEN')
    result = get_pr_details(repo_url, pr_identifier, token)
    return json.dumps(result, indent=2)
Now, it's time to add resources. I've added two methods: one to list the available changelog periods, and another to extract the changelog for a specific period. Note that resources are addressed by URIs.
@mcp.resource("changelog://periods")
def changelog_periods() -> str:
    """
    List all available change log periods.

    Returns:
        Markdown formatted list of available time periods
    """
    return get_available_periods()

@mcp.resource("changelog://{period}")
def changelog_for_period(period: str) -> str:
    """
    Get change log for a specific time period.

    Args:
        period: The time period identifier (e.g., "2025_q1" or "2025 Q2")

    Returns:
        Markdown formatted change log for the specified period
    """
    return get_period_changelog(period)
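Before plugging the server into an AI tool, it can be useful to smoke-test all three capability types programmatically. Here is a sketch extending the client from the transport section (the tool, resource and prompt names match the server above; the SQL query and question are just examples):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["server.py"])

async def smoke_test():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call a tool
            result = await session.call_tool("execute_sql_query", {"query": "select 1"})
            print(result.content)
            # Read a resource by its URI
            changelog = await session.read_resource("changelog://periods")
            print(changelog.contents)
            # Render a prompt template
            prompt = await session.get_prompt("sql_query_prompt", {"question": "How many active users did we have last month?"})
            print(prompt.messages)

asyncio.run(smoke_test())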
That’s it for the code. The last step is setting up the environment. I will use the uv package manager, which is recommended in the MCP documentation.
If you don’t have it installed, you can get it from PyPI.
pip install uv
Then, we can initialise a uv project, create and activate the virtual environment and, finally, install all the required packages.
uv init --name mcp-analyst-toolkit  # initialise a uv project
uv venv                             # create the virtual environment
source .venv/bin/activate           # activate the environment
uv add "mcp[cli]" requests pandas   # add dependencies
uv pip install -e .                 # install the mcp_server package
Now, we can run the MCP server locally. I will use the developer mode, since it also launches the MCP Inspector, which is really useful for debugging.
mcp dev server.py
# Starting MCP inspector...
# ⚙️ Proxy server listening on 127.0.0.1:6277
# 🔑 Session token: <...>
# Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
# 🔗 Open inspector with token pre-filled:
# http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<...>
# 🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
Now, we have our server and MCP Inspector running locally. Essentially, MCP Inspector is a handy implementation of the MCP client designed for debugging. Let's use the Inspector to test how our server works. The inspector lets us see all the capabilities the server exposes and call its tools. I started with feature discovery, requesting the server to share the list of tools. The client sent the tools/list request we discussed earlier, as you can see in the history log at the bottom of the screen. Then, I executed a simple SQL query (select 1) and got the tool call result back.

Great! Our first MCP server is up and running locally. So, it’s time to start using it in practice.
Using MCP servers in AI tools
As we discussed, the power of MCP servers lies in standardisation, which allows them to work with different AI tools. I will integrate my tools into Claude Desktop. Since Anthropic developed MCP, I expect their desktop client to have the best support for this protocol. However, you can use other clients, like Cursor or Windsurf (see the list of example clients).
I would like not only to utilise my tools, but also to leverage the work of others. There are a lot of MCP servers developed by the community that we can use instead of reinventing the wheel when we need common functions. However, keep in mind that MCP servers can access your system, so use only trusted implementations. I will use two reference servers (implemented to demonstrate the capabilities of the MCP protocol and official SDKs):
- Filesystem — allows working with local files,
- Fetch — helps LLMs retrieve the content of webpages and convert it from HTML to markdown for better readability.
Now, let's move on to the setup. You can follow the detailed instructions on how to set up Claude Desktop here. All these tools have configuration files where you can specify the MCP servers. For Claude Desktop, this file is located at:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json,
- Windows: %APPDATA%\Claude\claude_desktop_config.json.
Let's update the config to include three servers:
- For analyst_toolkit (our MCP server implementation), I've specified the uv command, the path to the repository and the command to run the server. Also, I've added a GITHUB_TOKEN environment variable to use for GitHub authentication.
- For the reference servers, I've just copied the configs from the documentation. Since they are implemented in different languages (TypeScript and Python), different commands (npx and uvx) are needed.
{
  "mcpServers": {
    "analyst_toolkit": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/github/mcp-analyst-toolkit/src/mcp_server",
        "run",
        "server.py"
      ],
      "env": {
        "GITHUB_TOKEN": "your_github_token"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/marie/Desktop",
        "/Users/marie/Documents/github"
      ]
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
That’s it. Now, we just need to restart the Claude Desktop client, and we will have access to all tools and prompt templates.

Let’s try using the prompt template and ask the LLM to visualise high-level KPIs.
Question: Could you please show the number of active customers and revenue by month since the beginning of 2024? Please, create a visualisation to look at dynamics and save the image in Desktop folder.
We described the task at a fairly high level without providing much detail about the data schema or ClickHouse dialect. Still, since all this information is captured in our prompt template, the LLM managed to compose a correct SQL query.
select
    toStartOfMonth(s.action_date) as month,
    uniqExact(s.user_id) as active_customers,
    sum(s.revenue) as total_revenue
from ecommerce.sessions as s
inner join ecommerce.users as u
    on s.user_id = u.user_id
where s.action_date >= '2024-01-01'
    and u.is_active = 1
group by toStartOfMonth(s.action_date)
order by month
format TabSeparatedWithNames
Then, the agent used our execute_sql_query tool to get the results, composed an HTML page with visualisations, and leveraged the write_file tool from the Filesystem MCP server to save the results as an HTML file.
The final report looks really good.

One limitation of the current prompt template implementation is that you have to select it manually. The LLM can’t automatically choose to use the template even when it’s appropriate for the task. We’ll try to address this in our AI agent implementation in the upcoming article.
Another use case is trying out GitHub tools by asking about recent updates in the llama-cookbook repository from the past month. The agent completed this task successfully and provided us with a detailed summary.

So, we’ve learned how to work with the local MCP servers. Let’s discuss what to do if we want to share our tools more broadly.
Working with a remote MCP server
We will use Gradio and HuggingFace Spaces for hosting a public MCP server. Gradio has a built-in integration with MCP, making server creation really simple. This is all the code needed to build the UI and launch the MCP server.
import gradio as gr
from statsmodels.stats.proportion import confint_proportions_2indep

def calculate_ci(count1: int, n1: int, count2: int, n2: int):
    """
    Calculate 95% confidence interval for the difference of two independent proportions.

    Args:
        count1 (int): Number of successes in group 1
        n1 (int): Total sample size in group 1
        count2 (int): Number of successes in group 2
        n2 (int): Total sample size in group 2

    Returns:
        str: Formatted string containing group proportions, difference, and 95% confidence interval
    """
    try:
        p1 = count1 / n1
        p2 = count2 / n2
        diff = p1 - p2
        ci_low, ci_high = confint_proportions_2indep(count1, n1, count2, n2)
        return f"""Group 1: {p1:.3f} | Group 2: {p2:.3f} | Difference: {diff:.3f}
95% CI: [{ci_low:.3f}, {ci_high:.3f}]"""
    except Exception as e:
        return f"Error: {str(e)}"

# Simple interface
demo = gr.Interface(
    fn=calculate_ci,
    inputs=[
        gr.Number(label="Group 1 successes", value=85, precision=0),
        gr.Number(label="Group 1 total", value=100, precision=0),
        gr.Number(label="Group 2 successes", value=92, precision=0),
        gr.Number(label="Group 2 total", value=100, precision=0)
    ],
    outputs="text",
    title="A/B Test Confidence Interval",
    description="Calculate 95% CI for difference of two proportions"
)

# Launch the Gradio web interface with the MCP server enabled
if __name__ == "__main__":
    demo.launch(mcp_server=True)
I've created a single function that calculates the confidence interval for the difference of two independent proportions. It might be helpful when analysing A/B test results.
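If you want to sanity-check the statistics before exposing them as a tool, you can call the same statsmodels function directly. A quick check using the default values from the UI above:

from statsmodels.stats.proportion import confint_proportions_2indep

# 85/100 successes in group 1 vs 92/100 in group 2
low, high = confint_proportions_2indep(85, 100, 92, 100)
print(f"95% CI for p1 - p2: [{low:.3f}, {high:.3f}]")
# If the interval does not contain 0, the difference is statistically
# significant at the 5% level.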
Next, we can push this code to HuggingFace Spaces to get a server running. I’ve covered how to do it step-by-step in one of my previous articles. For this example, I created this Space — https://huggingface.co/spaces/miptgirl/ab_tests. It has a clean UI and exposes MCP tools.

Next, we can add the server to our Claude Desktop configuration like this. We are using mcp-remote as the transport this time, since we're now connecting to a remote server.
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://miptgirl-ab-tests.hf.space/gradio_api/mcp/sse",
        "--transport",
        "sse-only"
      ]
    }
  }
}
Let’s test it with a simple A/B test analysis question. It works well. The LLM can now make thoughtful judgments based on statistical significance.

You can also use the Gradio integration to build an MCP client (see the documentation).
And that’s it! We now know how to share our tools with a wider audience.
Summary
In this article, we’ve explored the MCP protocol and its capabilities. Let’s briefly recap the main points:
- MCP (Model Context Protocol) is a protocol developed by Anthropic that aims to standardise communication between AI agents and tools. This approach reduces the number of integrations needed from M * N to M + N. The MCP protocol uses a client-server architecture.
- MCP servers expose capabilities (such as resources, tools and prompt templates). You can easily build your own MCP servers using SDKs or use servers developed by the community.
- MCP clients are part of user-facing apps (hosts) responsible for establishing a one-to-one connection with a server. There are many available apps compatible with MCP, such as Claude Desktop, Cursor or Windsurf.
Thank you for reading. I hope this article was insightful. Remember Einstein’s advice: “The important thing is not to stop questioning. Curiosity has its own reason for existing.” May your curiosity lead you to your next great insight.
Reference
This article is inspired by the “MCP: Build Rich-Context AI Apps with Anthropic” short course from DeepLearning.AI and the MCP course by Hugging Face.