Vibe Coding my first MCP Server (for Apache Kafka)
If you’ve ever found yourself switching between multiple Kafka monitoring tools just to answer a simple question about consumer lag or topic health, you’re not alone. Most teams end up building custom dashboards or memorizing complex CLI commands to get basic insights from their streaming infrastructure.
This blog walks through building a Kafka MCP integration that lets you query your Kafka clusters using natural language through AI assistants. You’ll simply ask Claude to check your topic health or analyze consumer patterns. We will build this end to end.
Model Context Protocol (MCP)
What is MCP (in the context of REST)
LLMs are useful, but without context it becomes increasingly difficult to do meaningful tasks with them. All of us remember the time when ChatGPT couldn’t connect to the internet, so every answer you got was based entirely on historical training data.
MCP is an open protocol that standardizes how applications provide context to LLMs. What are applications here? They are ANY software tools (or APIs) you can think of. Just as you can build a REST API for any product on the market, you can create an MCP component for it.
Why is this important? Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
The REST way to do it vs the MCP way
Traditional REST-based AI tool integration.
The above diagram depicts how you can extract data from various systems (weather, databases, email, etc.).
Here are some drawbacks of this approach in the AI world:
- Multiple endpoints
- Auth management
- Context loss
- Error handling
- Complexity
- Schema mismatches
MCP based AI tool integration
Some benefits include:
- Persistent sessions
- Tool discovery
- Context sharing
I would highly encourage completing the MCP course from Anthropic (freely available).
Complete Guide: Setting Up an MCP Server and Clients for Apache Kafka
What does it mean to build an MCP server for anything? Let us take the example of a Kafka broker. As a Kafka admin or user, you always want to know things like:
- “Which consumer groups are falling behind and by how much?”
- “Are there any network bottlenecks in the cluster?”
- “Show me recent connection errors or timeouts”
- “Which consumer groups are experiencing rebalancing issues?”
- “Alert me if consumer lag exceeds 10,000 messages”
- “Which topics contain PII data?”
The traditional way to do this would be to revisit logs, use command-line tools, collect JMX metrics, wire up Prometheus + Grafana, follow manual processes, and so on.
What MCP does is transform these into natural language conversations where users can simply ask “Show me consumer lag for orders topic” and get instant, comprehensive analysis. Example below (you will build this in the tutorial :) )
Why Kafka? My first aim was to create an MCP server for Snowflake, but the Snowflake product team obviously beat me to it. There is already an MCP server for Cortex functions.
Also, there is a plain-vanilla MCP server by Isaac Wasserman that provides database interaction with Snowflake. This server enables running SQL queries via tools and exposes data insights and schema context as resources.
Hence Kafka. In this blog, we’ll dive deep into creating a comprehensive Kafka MCP integration that lets AI tools like Claude interact directly with your Kafka infrastructure.
Note: This is, of course, just a tutorial to get me (and now hopefully you) started. The nuts and bolts aren’t tight ;)
🏗️ Overall Architecture Overview
🧩 Core Components Breakdown
🔄 How It All Works: End-to-End Flow
🚀 Getting Started: Your Complete Setup Guide
🎯 Expected Demo Output
🔧 Key Implementation Files
🎬 Wrapping Up
📁 Core Components and High-Level Architecture
Below are the key components we will be interacting with, along with a detailed explanation of each.
🧩 Core Components Breakdown
1. MCP Server
The heart of our integration — this is where the magic happens. The server implements the MCP protocol and manages all interactions between AI clients and Kafka.
Key responsibilities:
- Server Framework (MCP SDK)
- Transport Layer (stdio/HTTP)
- Tool Registry
- Resource Registry
- Error Handling
- Configuration Management
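To make the Server Framework, Tool Registry, and Transport Layer responsibilities concrete, here is a minimal sketch of how a tool could be registered using the official TypeScript MCP SDK (@modelcontextprotocol/sdk). Treat it as an illustration rather than the repository's exact code: the checkTopicHealth helper is a placeholder for the Kafka integration layer described in the next section.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListToolsRequestSchema, CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "kafka-mcp", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools an AI client can discover via tools/list
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "check_topic_health",
      description: "Check partition and replication health for a Kafka topic",
      inputSchema: {
        type: "object",
        properties: { topic: { type: "string" } },
        required: ["topic"],
      },
    },
  ],
}));

// Execute a tool when the client sends tools/call
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "check_topic_health") {
    const { topic } = request.params.arguments as { topic: string };
    const report = await checkTopicHealth(topic); // hypothetical helper from the Kafka integration layer
    return { content: [{ type: "text", text: report }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// stdio transport: Claude Desktop launches the server as a child process
await server.connect(new StdioServerTransport());
The stdio transport is what lets Claude Desktop spawn the server as a child process, which is exactly how we wire things up in claude_desktop_config.json later on.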
2. Kafka Integration Layer
This layer connects to the entire Kafka ecosystem and reflects the typical Kafka setup in any organisation.
Components include:
- Kafka Admin Client (cluster management)
- Kafka Consumer Client (message reading)
- Kafka Producer Client (message publishing)
- Schema Registry Client (schema management)
- Connect API Client (connector management)
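Since this is a Node.js project, a natural choice for these clients is the kafkajs library. That is an assumption on my part (the repository may use a different client), and the Schema Registry and Connect clients would be separate HTTP-based clients that I have omitted here. A stripped-down integration layer might look like this:
import { Kafka } from "kafkajs";

// Brokers come from the same KAFKA_BROKERS env var we later pass in via the Claude Desktop config
const kafka = new Kafka({
  clientId: "kafka-mcp",
  brokers: (process.env.KAFKA_BROKERS ?? "localhost:29092").split(","),
});

export const admin = kafka.admin();       // cluster management: topics, offsets, consumer groups
export const producer = kafka.producer(); // message publishing
export const consumer = kafka.consumer({ groupId: "kafka-mcp-inspector" }); // message reading

export async function connectAll() {
  await Promise.all([admin.connect(), producer.connect(), consumer.connect()]);
}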
3. Tool Implementations
These are the actual MCP tools that AI can invoke — think of them as Kafka superpowers for your AI assistant!
Available tools:
- Consumer Lag Analysis
- Topic Health Checks
- Message Inspection
- Broker Status Monitoring
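To give a flavour of what one of these tools does under the hood, here is a rough sketch of consumer lag analysis built on the kafkajs admin client. Again, the library choice is an assumption, and the module and function names are illustrative rather than the repository's actual code.
import { admin } from "./kafka"; // hypothetical module: the integration layer sketched above

// Roughly what an "analyze_consumer_lag" tool could compute for a topic + consumer group
export async function analyzeConsumerLag(topic: string, groupId: string) {
  const latest = await admin.fetchTopicOffsets(topic); // high watermark per partition
  const committed = await admin.fetchOffsets({ groupId, topics: [topic] }); // committed offsets (kafkajs v2 shape)

  const committedByPartition = new Map<number, number>();
  for (const t of committed) {
    for (const p of t.partitions) {
      committedByPartition.set(p.partition, Number(p.offset));
    }
  }

  // Lag = latest offset minus committed offset; a committed offset of -1 means "never committed"
  return latest.map((p) => {
    const committedOffset = committedByPartition.get(p.partition) ?? -1;
    const lag = committedOffset < 0 ? Number(p.high) : Number(p.high) - committedOffset;
    return { partition: p.partition, latest: p.high, committed: committedOffset, lag };
  });
}
Wrapped in an MCP tool handler, the returned array is what gets summarised into the text Claude shows you.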
🔄 How It All Works: End-to-End Flow
Let’s walk through what happens when you ask Claude to check your Kafka setup:
Step 1: Initialization & Connection
Step 2: Tool Discovery
Step 3: AI Tool Execution
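Because the diagrams can't show the wire format, here is roughly what Steps 2 and 3 look like on the wire. MCP speaks JSON-RPC 2.0, and the exact tool list depends on what your server registers; the entries below simply mirror the tools we test later.
Client -> Server (Step 2, tools/list request):
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

Server -> Client (response, abbreviated):
{ "jsonrpc": "2.0", "id": 1, "result": { "tools": [ { "name": "check_topic_health", "description": "Check partition and replication health for a Kafka topic", "inputSchema": { "type": "object", "properties": { "topic": { "type": "string" } }, "required": ["topic"] } } ] } }

Client -> Server (Step 3, tools/call request):
{ "jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": { "name": "check_topic_health", "arguments": { "topic": "orders" } } }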
🚀 Getting Started: Your Complete Setup Guide
Ready to build this yourself? Here’s everything you need to get up and running. This is an end-to-end package, so you don’t need to worry about anything except the prerequisites below.
Vibe coding is an emerging approach to software development heavily reliant on artificial intelligence (AI). Coined by AI researcher Andrej Karpathy in early 2025, it describes a process where developers primarily use natural language prompts — speaking or typing in plain language — to instruct AI tools to generate, refine, and debug code.
A lot of this blog is vibe coded, so feel free to use your LLM of choice if you are stuck anywhere, want to make it better, or just want to explore more ideas!
Prerequisites
Before we dive in, make sure you have:
- Docker Desktop running
- Node.js installed
- Claude Desktop installed (for testing)
Phase 1: Complete Setup
Let us first understand the folder structure that this utility creates.
Step 1: Clean Rebuild (Essential for preventing those annoying timeouts!)
First, let’s get our project structure ready. All the code files are available in my GitHub repository.
# Navigate to your project root
cd C:\path\to\your\kafka-mcp-integration

# Clean and rebuild MCP Server
cd mcp-server
if (Test-Path "dist") { Remove-Item -Recurse -Force "dist" }
if (Test-Path "node_modules") { Remove-Item -Recurse -Force "node_modules" }
npm install
npm run build

# Clean and rebuild MCP Client
cd ..\mcp-client
if (Test-Path "dist") { Remove-Item -Recurse -Force "dist" }
if (Test-Path "node_modules") { Remove-Item -Recurse -Force "node_modules" }
npm install
npm run build

# Return to project root
cd ..
Step 2: Run Setup Script
# Use the setup script from the repository
./scripts/setup.ps1
Step 3: Start Kafka Infrastructure
# Fire up Kafka using our startup script
./scripts/start-kafka.ps1
# Wait for Kafka to be ready (patience is key!)
Start-Sleep -Seconds 45
Step 4: Verify Kafka Connection (This step is critical!)
# Test Kafka connection with the correct port
docker exec kafka kafka-topics --list --bootstrap-server localhost:29092
# Should return empty list initially (no error means success)
Phase 2: Create Example Data
Step 5: Create Test Topics
# Create orders topic
docker exec kafka kafka-topics --create --topic orders --bootstrap-server localhost:29092 --partitions 3 --replication-factor 1

# Create payments topic
docker exec kafka kafka-topics --create --topic payments --bootstrap-server localhost:29092 --partitions 2 --replication-factor 1

# Create users topic
docker exec kafka kafka-topics --create --topic users --bootstrap-server localhost:29092 --partitions 2 --replication-factor 1

# Verify topics created
docker exec kafka kafka-topics --list --bootstrap-server localhost:29092
Step 6: Produce Test Messages
# Produce some order messages
docker exec kafka bash -c 'echo "{\"order_id\":\"order_001\",\"customer_id\":\"cust_123\",\"amount\":99.99,\"timestamp\":\"2025-07-15T18:30:00Z\"}" | kafka-console-producer --topic orders --bootstrap-server localhost:29092'

docker exec kafka bash -c 'echo "{\"order_id\":\"order_002\",\"customer_id\":\"cust_456\",\"amount\":249.50,\"timestamp\":\"2025-07-15T18:31:00Z\"}" | kafka-console-producer --topic orders --bootstrap-server localhost:29092'

# Verify messages were produced
docker exec kafka kafka-console-consumer --topic orders --bootstrap-server localhost:29092 --from-beginning --timeout-ms 5000
Phase 3: Test MCP Server
Step 7: Start MCP Server
# Use our MCP server startup script
./scripts/start-mcp-server.ps1
Step 8: Test Tools with MCP Client
# Open new terminal window
cd mcp-client

# Start MCP client
node dist/index.js

# Test 1: Check topic health
Enter tool name: check_topic_health
Enter arguments: {"topic": "orders"}# Test 2: Analyze consumer lag
Enter tool name: analyze_consumer_lag
Enter arguments: {"topic": "orders"}# Test 3: Inspect messages
Enter tool name: inspect_messages
Enter arguments: {"topic": "orders", "limit": 3}
Phase 4: Claude Desktop Integration
Step 9: Configure Claude Desktop
# Navigate to Claude config directory
cd %APPDATA%\Claude

# Create/edit configuration file
notepad claude_desktop_config.json
Paste this configuration:
{
"mcpServers": {
"kafka-mcp": {
"command": "node",
"args": [
"C:\\path\\to\\your\\kafka-mcp-integration\\mcp-server\\dist\\index.js"
],
"env": {
"KAFKA_BROKERS": "localhost:29092",
"NODE_ENV": "production"
}
}
}
}
Step 10: Test Claude Desktop Integration
Restart Claude Desktop and try these conversations:
Test Conversation 1:
“How is the health of my orders topic?”
Test Conversation 2:
“Are there any consumer lag issues with my order processing?”
Test Conversation 3:
“Show me some recent orders and tell me if there are any patterns”
🎯 Expected Demo Output
When everything is working perfectly, you should see something like this:
MCP Client Testing:
{
"content": [
{
"type": "text",
"text": "Topic health check for \"orders\":\nOverall Health: healthy\nHealthy Partitions: 3/3\nTotal Partitions: 3\nReplication Factor: 1"
}
]
}
Claude Desktop Integration:
You: "How is my orders topic doing?"Claude: "I'll check the health of your orders topic for you."[Claude calls check_topic_health tool]Claude: "Your orders topic is in good health! It has 3 partitions, all of which are healthy and functioning properly. However, I notice it has a replication factor of 1, which means if your broker goes down, you could lose data. For production use, I'd recommend increasing the replication factor to at least 2."
🔧 Key Implementation Files
All the implementation details are available in the GitHub repository. Here’s what you’ll find:
- /mcp-server/ - Complete MCP server implementation
- /mcp-client/ - Test client for development
- /scripts/ - Setup and management scripts
- /docker/ - Docker configuration files
- /docs/ - Additional documentation and examples
🌟 What’s Next?
This integration opens up so many possibilities! Here are some ideas for extending it:
- Advanced Analytics: Add ML-based anomaly detection for streaming data
- Multi-Cloud Support: Extend to support Confluent Cloud, Amazon MSK
- Custom Dashboards: Build Streamlit apps that use the MCP integration
- Alerting Integration: Connect to Slack, PagerDuty, or other notification systems
🎬 Wrapping Up
That’s a wrap! We’ve built a Kafka MCP integration end to end. You can find all the code and additional examples in the [GitHub repository](https://github.com/SudhenduP/kafka-mcp-integration).
