Model Context Protocol (MCP) servers expose structured capabilities to AI clients. In this guide we will build a minimal Rust implementation that speaks MCP over standard input and standard output, so it can plug into any compliant client without socket plumbing. Along the way we will peek at the JSON-RPC envelope MCP relies on, map it to strongly typed Rust structures, and wire enough handlers to exercise the loop with real requests.
Prerequisites
You need the stable Rust toolchain (installed via rustup) and a recent MCP client for local testing. The sample code uses serde for JSON handling.
Understanding the MCP Envelope
MCP builds on JSON-RPC 2.0. Every message the client sends contains the jsonrpc version, a correlating id, a method name, and, optionally, a params object. Replies echo the id and set either a result payload or an error map. Because MCP does not change the framing, we can decode these envelopes with Serde and focus on routing by method name.
- get_capabilities tells the client what the server can do.
- Custom methods (for example call_tool) deliver the actual work.
- Notifications omit the id, so the server must guard against null identifiers.
Understanding these rules early prevents subtle bugs when you expand the handler set.
Bootstrapping the Project
Create a new binary crate and add dependencies for serde and serde_json:
```sh
cargo new mcp-stdio-server
cd mcp-stdio-server
cargo add serde --features derive
cargo add serde_json
```
We will keep the entire prototype in src/main.rs.
Protocol Building Blocks
At minimum an MCP server must answer the get_capabilities request and echo metadata in the JSON-RPC style envelope. Define a few data structures to deserialize the incoming frames and craft responses.
```rust
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Request {
    // Notifications omit the id entirely, so fall back to JSON null
    // instead of failing deserialization.
    #[serde(default)]
    pub id: serde_json::Value,
    pub method: String,
    #[serde(default)]
    pub params: serde_json::Value,
}

#[derive(Serialize)]
struct Response<T> {
    pub jsonrpc: &'static str,
    pub id: serde_json::Value,
    pub result: T,
}

#[derive(Serialize)]
struct Capabilities {
    pub server: ServerInfo,
}

#[derive(Serialize)]
struct ServerInfo {
    pub name: &'static str,
    pub version: &'static str,
    pub description: &'static str,
}
```
We intentionally keep these types small and derive Deserialize or Serialize directly. Real MCP interactions include nested data (tools, prompts, input schema), so building in this shape-first approach makes it easy to extend later. Notice that Request.id stays as a serde_json::Value; MCP allows both integers and strings, so preserving the raw JSON avoids lossy conversions.
Handling Requests Over stdio
With the types in place we can turn stdin into a stream of JSON frames. The stdio transport keeps the binary portable: you can run it as a standalone process or as a child managed by the client. The loop below shows four important responsibilities:
- Read the next line from stdin, skipping empty keepalives.
- Decode JSON into a Request, returning friendly diagnostics on failure.
- Match the method name and build a response payload.
- Serialize the reply and flush stdout so the client does not block.
We treat malformed input as recoverable by logging to stderr and continuing. That keeps long-lived sessions resilient during client upgrades.
```rust
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    let mut stderr = io::stderr();

    for line in stdin.lock().lines() {
        let line = match line {
            Ok(line) if !line.trim().is_empty() => line,
            Ok(_) => continue,
            Err(err) => {
                writeln!(stderr, "{{\"error\":\"stdin read failed: {}\"}}", err)?;
                continue;
            }
        };

        let request: Request = match serde_json::from_str(&line) {
            Ok(req) => req,
            Err(err) => {
                writeln!(stderr, "{{\"error\":\"invalid JSON: {}\"}}", err)?;
                continue;
            }
        };

        match request.method.as_str() {
            "get_capabilities" => {
                let response = Response {
                    jsonrpc: "2.0",
                    id: request.id,
                    result: Capabilities {
                        server: ServerInfo {
                            name: "rust-mcp-stdio",
                            version: env!("CARGO_PKG_VERSION"),
                            description: "Minimal MCP server over stdio",
                        },
                    },
                };
                let serialized = serde_json::to_string(&response)
                    .expect("serialization must succeed");
                writeln!(stdout, "{}", serialized)?;
                stdout.flush()?;
            }
            "ping" => {
                let response = Response {
                    jsonrpc: "2.0",
                    id: request.id,
                    result: serde_json::json!({ "ok": true }),
                };
                let serialized = serde_json::to_string(&response)
                    .expect("serialization must succeed");
                writeln!(stdout, "{}", serialized)?;
                stdout.flush()?;
            }
            unsupported => {
                writeln!(stderr, "{{\"warn\":\"unsupported method: {}\"}}", unsupported)?;
            }
        }
    }

    Ok(())
}
```
This implementation assumes the client sends each JSON request on a single line. MCP clients typically buffer writes, so flushing stdout after every response keeps them in sync.
Adding Your First Tool
A capability-only server is useful for smoke tests, but the protocol shines when you expose tools. We will implement an uppercase tool that accepts a single string and returns the uppercased version. The key steps are: validate expectations, transform the payload, and mirror the id in the response so the client can correlate.
```rust
"call_tool" => {
    let input = request
        .params
        .get("payload")
        .and_then(|value| value.as_str())
        .unwrap_or("");
    let response = Response {
        jsonrpc: "2.0",
        id: request.id,
        result: serde_json::json!({
            "tool": "uppercase",
            "output": input.to_ascii_uppercase(),
        }),
    };
    let serialized = serde_json::to_string(&response)
        .expect("serialization must succeed");
    writeln!(stdout, "{}", serialized)?;
    stdout.flush()?;
}
```
The unwrap_or("") keeps the example compact while still producing a valid response even when the client forgets to send parameters. In a production server you would return a JSON-RPC error object instead.
Running the Server
Build and run the binary, then pipe a handcrafted JSON request to verify the handshake.
```sh
cargo run --quiet <<'EOF'
{"jsonrpc":"2.0","id":1,"method":"get_capabilities"}
EOF
```
You should see a single-line JSON response describing the server:
```json
{"jsonrpc":"2.0","id":1,"result":{"server":{"name":"rust-mcp-stdio","version":"0.1.0","description":"Minimal MCP server over stdio"}}}
```
Now exercise the tool branch:
```sh
cargo run --quiet <<'EOF'
{"jsonrpc":"2.0","id":"tool-1","method":"call_tool","params":{"payload":"hello"}}
EOF
```
The reply should echo the identifier and show the transformed payload:
```json
{"jsonrpc":"2.0","id":"tool-1","result":{"tool":"uppercase","output":"HELLO"}}
```
Once these basics work you can add more methods by expanding the match arms. Each handler can return its own strongly typed payload while still fitting inside the shared Response<T> wrapper.
Automating a Smoke Test
It helps to guard the transport loop with a regression test. Add a small integration test under tests/smoke.rs that spawns the binary, writes a request to stdin, and asserts on the stdout JSON. The snippet below uses assert_cmd (added as a dev-dependency with cargo add assert_cmd --dev) and serde_json to keep the assertion ergonomic:
```rust
use assert_cmd::Command;
use serde_json::Value;

#[test]
fn returns_capabilities() {
    let mut cmd = Command::cargo_bin("mcp-stdio-server").unwrap();
    let output = cmd
        .write_stdin("{\"jsonrpc\":\"2.0\",\"id\":42,\"method\":\"get_capabilities\"}\n")
        .assert()
        .success()
        .get_output()
        .stdout
        .clone();
    let json: Value = serde_json::from_slice(&output).unwrap();
    assert_eq!(json["result"]["server"]["name"], "rust-mcp-stdio");
}
```
A guard like this gives you confidence that refactors keep the contract intact.
Where to Go Next
- Advertise tools and prompts by implementing list_tools and returning schemas so clients can validate user input before sending it.
- Swap the manual loop for Tokio or async-std when you need concurrency, but keep the same JSON framing so the client contract stays intact.
- Add structured logging with tracing to capture spans per request and pipe them to stderr for easier debugging in long sessions.
- Consider persisting configuration to disk and offering a shutdown method for graceful teardown when the client exits.
With this foundation you can iterate toward a production-grade MCP server that still speaks the simplest transport possible: standard IO. When the need arises you can layer on richer transports, authentication, or observability without rewriting the core protocol handling you built here.