Model Context Protocol (MCP)
Why MCP
Models are only as good as the context provided to them. MCP is an open protocol that standardizes how LLM applications connect to and work with tools and data sources.
Without MCP, AI development would be fragmented: every app would need its own
- custom implementation
- custom prompt logic
- custom tool calls
- custom data access

With MCP, AI development is standardized around servers such as:
- data store MCP server: query, fetch, and modify data, handle data streams
- CRM MCP server
- version control MCP server
- etc.
MCP servers are reusable across AI applications:
- Service providers can build their own MCP implementation: an MCP server that wraps access to their service.
- MCP servers ship with tool schemas and functions already defined.
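For a concrete sense of what such a tool schema looks like, here is a hand-written illustration for the search_papers tool built later in this post; the real schema is generated by the server, so the exact shape below is an assumption:

{
  "name": "search_papers",
  "description": "Search for papers on arXiv based on a topic and store their information.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "topic": {"type": "string"},
      "max_results": {"type": "integer", "default": 5}
    },
    "required": ["topic"]
  }
}

Any MCP-compatible application can discover this schema at runtime and hand it to its LLM, with no custom integration code.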
MCP Architecture
MCP is based on a client-server architecture with three parts:
- MCP client
- MCP protocol
- MCP server

- Host: an LLM application that wants to access data through MCP.
- Server: a program that exposes capabilities (tools, resources, prompt templates) through MCP.
- Client: maintains a 1:1 connection with a server, inside the host application.
- Tools: functions that can be invoked by the client, for example:
- retrieve/search
- send a message
- update DB records
- Resources: read-only data exposed by the server; the client can choose to include it in the context, for example:
- files
- DB records
- API responses
- Prompt templates: predefined templates for AI interactions (battle-tested, evaluated prompts) that take the overhead of writing effective prompts for the most common tasks away from the developer, for example:
- document QA
- transcript summary
- output as JSON
MCP transports:
- stdio
- HTTP + SSE
- Streamable HTTP (supports stateful and stateless connections) - recommended
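With FastMCP (used below), the transport is a single argument to mcp.run(). A minimal sketch, assuming the transport names from the Python mcp SDK:

# stdio: the client launches the server as a subprocess (used later in this post)
mcp.run(transport='stdio')

# streamable HTTP: the server listens over HTTP and can serve stateful or stateless sessions
mcp.run(transport='streamable-http')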
Create an MCP Server
This server needs to handle two main requests from the client:
- listing all the tools
- executing a particular tool
There are two ways to create an MCP server:
- Low-level implementation: you directly define and handle the various request types (ListToolsRequest and CallToolRequest). This approach lets you customize every aspect of your server (see the sketch after this list).
- High-level implementation using FastMCP: FastMCP is a high-level interface that makes building MCP servers faster and simpler. You just focus on defining the tools as functions, and FastMCP handles all the protocol details.
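For contrast, here is a minimal sketch of the low-level approach, assuming the Server class from the Python mcp SDK; every schema and dispatch step that FastMCP automates is written by hand:

import asyncio
import mcp.types as types
from mcp.server.lowlevel import Server
from mcp.server.stdio import stdio_server

server = Server("research")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    # Handles ListToolsRequest: the tool schema is written by hand
    return [types.Tool(
        name="search_papers",
        description="Search for papers on arXiv based on a topic.",
        inputSchema={
            "type": "object",
            "properties": {"topic": {"type": "string"}},
            "required": ["topic"],
        },
    )]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    # Handles CallToolRequest: dispatching to the right function is up to you
    if name == "search_papers":
        return [types.TextContent(type="text", text=f"Searching for: {arguments['topic']}")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())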
To create your MCP server using FastMCP, you initialize a FastMCP server labeled mcp and decorate the tool functions with @mcp.tool(). FastMCP automatically generates the necessary MCP schema from the type hints and docstrings. The magic command %%writefile mcp_project/research_server.py does not execute the code; it saves the content of the cell to the server file research_server.py in the mcp_project directory.
%%writefile mcp_project/research_server.py
import arxiv
import json
import os
from typing import List

from mcp.server.fastmcp import FastMCP

PAPER_DIR = "papers"

# Initialize FastMCP server
mcp = FastMCP("research")


@mcp.tool()
def search_papers(topic: str, max_results: int = 5) -> List[str]:
    """
    Search for papers on arXiv based on a topic and store their information.

    Args:
        topic: The topic to search for
        max_results: Maximum number of results to retrieve (default: 5)

    Returns:
        List of paper IDs found in the search
    """
    # Use arxiv to find the papers
    client = arxiv.Client()

    # Search for the most relevant articles matching the queried topic
    search = arxiv.Search(
        query=topic,
        max_results=max_results,
        sort_by=arxiv.SortCriterion.Relevance
    )

    papers = client.results(search)

    # Create directory for this topic
    path = os.path.join(PAPER_DIR, topic.lower().replace(" ", "_"))
    os.makedirs(path, exist_ok=True)

    file_path = os.path.join(path, "papers_info.json")

    # Try to load existing papers info
    try:
        with open(file_path, "r") as json_file:
            papers_info = json.load(json_file)
    except (FileNotFoundError, json.JSONDecodeError):
        papers_info = {}

    # Process each paper and add to papers_info
    paper_ids = []
    for paper in papers:
        paper_ids.append(paper.get_short_id())
        paper_info = {
            'title': paper.title,
            'authors': [author.name for author in paper.authors],
            'summary': paper.summary,
            'pdf_url': paper.pdf_url,
            'published': str(paper.published.date())
        }
        papers_info[paper.get_short_id()] = paper_info

    # Save updated papers_info to json file
    with open(file_path, "w") as json_file:
        json.dump(papers_info, json_file, indent=2)

    print(f"Results are saved in: {file_path}")

    return paper_ids


@mcp.tool()
def extract_info(paper_id: str) -> str:
    """
    Search for information about a specific paper across all topic directories.

    Args:
        paper_id: The ID of the paper to look for

    Returns:
        JSON string with paper information if found, error message if not found
    """
    for item in os.listdir(PAPER_DIR):
        item_path = os.path.join(PAPER_DIR, item)
        if os.path.isdir(item_path):
            file_path = os.path.join(item_path, "papers_info.json")
            if os.path.isfile(file_path):
                try:
                    with open(file_path, "r") as json_file:
                        papers_info = json.load(json_file)
                        if paper_id in papers_info:
                            return json.dumps(papers_info[paper_id], indent=2)
                except (FileNotFoundError, json.JSONDecodeError) as e:
                    print(f"Error reading {file_path}: {str(e)}")
                    continue

    return f"There's no saved information related to paper {paper_id}."


if __name__ == "__main__":
    # Initialize and run the server over stdio
    mcp.run(transport='stdio')
Useful references:
- Example of a low-level server
- Advanced usage of the low-level server
- MCP Inspector
- FastMCP
- Quickstart for server developers
Terminal Instructions
- To open the terminal, run the cell below.
- Navigate to the project directory and initialize it with uv:
  cd L4/mcp_project
  uv init
- Create a virtual environment and activate it:
  uv venv
  source .venv/bin/activate
- Install dependencies:
  uv add mcp arxiv
- Launch the inspector:
  npx @modelcontextprotocol/inspector uv run research_server.py
- If you get a message asking "need to install the following packages", type: y
- You will get a message saying that the inspector is up and running at a specific address. To open the inspector, click on that address; the inspector will open in another tab.
- In the inspector UI, make sure to follow the video. You will need to specify the Inspector Proxy Address under configuration. Please check the "Inspector UI Instructions" below and run the last cell (after the terminal) to get the Inspector Proxy Address for your configuration.
- If you tested the tool and would like to access the papers folder: 1) click on the File option in the top menu of the notebook, 2) click on Open, and 3) click on L4 -> mcp_project.
- Once you're done with the inspector UI, close the inspector by typing Ctrl+C in the terminal below.
MCP Client
The client below is a command-line chatbot: it launches the research server over stdio, lists the server's tools, exposes them to Claude, and routes any tool calls through the MCP session.
%%writefile mcp_project/mcp_chatbot.py
from dotenv import load_dotenv
from anthropic import Anthropic
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
from typing import List
import asyncio
import nest_asyncio

nest_asyncio.apply()

load_dotenv()


class MCP_ChatBot:

    def __init__(self):
        # Initialize session and client objects
        self.session: ClientSession = None
        self.anthropic = Anthropic()
        self.available_tools: List[dict] = []

    async def process_query(self, query):
        messages = [{'role': 'user', 'content': query}]
        response = self.anthropic.messages.create(
            max_tokens=2024,
            model='claude-3-7-sonnet-20250219',
            tools=self.available_tools,  # tools exposed to the LLM
            messages=messages
        )
        process_query = True
        while process_query:
            assistant_content = []
            for content in response.content:
                if content.type == 'text':
                    print(content.text)
                    assistant_content.append(content)
                    if len(response.content) == 1:
                        process_query = False
                elif content.type == 'tool_use':
                    assistant_content.append(content)
                    messages.append({'role': 'assistant', 'content': assistant_content})
                    tool_id = content.id
                    tool_args = content.input
                    tool_name = content.name
                    print(f"Calling tool {tool_name} with args {tool_args}")

                    # Tool invocation through the client session
                    # (a local execute_tool(tool_name, tool_args) is no longer needed)
                    result = await self.session.call_tool(tool_name, arguments=tool_args)
                    messages.append({
                        "role": "user",
                        "content": [
                            {
                                "type": "tool_result",
                                "tool_use_id": tool_id,
                                "content": result.content
                            }
                        ]
                    })
                    response = self.anthropic.messages.create(
                        max_tokens=2024,
                        model='claude-3-7-sonnet-20250219',
                        tools=self.available_tools,
                        messages=messages
                    )

                    if len(response.content) == 1 and response.content[0].type == "text":
                        print(response.content[0].text)
                        process_query = False

    async def chat_loop(self):
        """Run an interactive chat loop"""
        print("\nMCP Chatbot Started!")
        print("Type your queries or 'quit' to exit.")
        while True:
            try:
                query = input("\nQuery: ").strip()
                if query.lower() == 'quit':
                    break
                await self.process_query(query)
                print("\n")
            except Exception as e:
                print(f"\nError: {str(e)}")

    async def connect_to_server_and_run(self):
        # Create server parameters for stdio connection
        server_params = StdioServerParameters(
            command="uv",  # Executable
            args=["run", "research_server.py"],  # Optional command line arguments
            env=None,  # Optional environment variables
        )
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                self.session = session
                # Initialize the connection
                await session.initialize()

                # List available tools
                response = await session.list_tools()
                tools = response.tools
                print("\nConnected to server with tools:", [tool.name for tool in tools])

                self.available_tools = [{
                    "name": tool.name,
                    "description": tool.description,
                    "input_schema": tool.inputSchema
                } for tool in response.tools]

                await self.chat_loop()


async def main():
    chatbot = MCP_ChatBot()
    await chatbot.connect_to_server_and_run()


if __name__ == "__main__":
    asyncio.run(main())
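To try the chatbot, run it from the same mcp_project directory in the terminal. This assumes the uv environment created above, plus the extra client dependencies (uv add anthropic python-dotenv nest_asyncio) and an ANTHROPIC_API_KEY in a .env file:

uv run mcp_chatbot.py

Since the client uses the stdio transport, it spawns the research server itself via uv run research_server.py; no separately running server is needed.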
Add Prompts and Resource Features
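Tools were enough for the server above, but the same FastMCP decorator pattern covers the other two primitives. A minimal sketch extending research_server.py (it reuses the os import and PAPER_DIR from that file), assuming the @mcp.resource and @mcp.prompt decorators from the Python mcp SDK; the resource URI and prompt wording are illustrative:

@mcp.resource("papers://folders")
def get_available_folders() -> str:
    """List the topic folders available under PAPER_DIR."""
    # Read-only data the client can choose to pull into the context
    folders = [f for f in os.listdir(PAPER_DIR)
               if os.path.isdir(os.path.join(PAPER_DIR, f))]
    return "\n".join(folders) if folders else "No topics saved yet."

@mcp.prompt()
def generate_search_prompt(topic: str, num_papers: int = 5) -> str:
    """Generate a reusable prompt for searching and summarizing papers."""
    return (f"Search for {num_papers} academic papers about '{topic}' "
            f"using the search_papers tool, then summarize each paper "
            f"and output the result as structured notes.")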