Model Context Protocol (MCP)
Koveria's Model Context Protocol (MCP) gives LLMs the ability to act on the real world. Without MCP, an LLM can only generate text. With MCP, it can read files, call APIs, send emails, process PDFs, validate data, and perform precise calculations — all through a standardized tool-calling interface.
The Core Idea: LLM Function Calling
Every MCP tool is automatically exposed as a native function call to the LLM. The LLM decides which tools to invoke, when to invoke them, and with what arguments — based on the conversation context.
Without MCP
User: "What is the total with 19% tax on my invoice for €1,234.56?"
LLM: "That would be approximately €1,469.12"
← Approximation. No guarantee of correctness.
An LLM cannot reliably do decimal arithmetic.
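The contrast is easy to reproduce in plain Python: the standard-library `decimal` module computes the tax exactly (a standalone illustration, independent of Koveria):

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent 1234.56 exactly; Decimal works in base 10.
net = Decimal("1234.56")
gross = (net * Decimal("1.19")).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(gross)  # → 1469.13
```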
With MCP
The LLM sees available tools and autonomously decides to call them:
```json
{
  "tool_calls": [
    {
      "function": {
        "name": "finance.calculate_compound_interest",
        "arguments": { "principal": 1234.56, "rate": 0.19 }
      }
    }
  ]
}
```
← Tool returns: { "result": "1469.13", "precision": "decimal" }
LLM: "The total with 19% tax is exactly €1,469.13."
← Precise Decimal arithmetic. Guaranteed correct.
The LLM chose the right tool, constructed the correct arguments, and used the precise result. This is the power of MCP.
How It Works
Key insight: The agent code is a thin orchestration layer. The LLM makes the decisions — which tools to call, in what order, and how to interpret the results.
Quick Example: Invoice Processing
Here's how an LLM-powered agent uses MCP tools to fully automate invoice processing:
```python
from koveria import Agent, agent_action


class InvoiceAgent(Agent):
    @agent_action(action_type="process_invoice")
    async def process_invoice(self, user_message: str) -> str:
        # 1. Gather all available MCP tools
        tools = (
            await self.mcp.pdf_processing.fetch_tools()
            + await self.mcp.json_yaml.fetch_tools()
            + await self.mcp.finance.fetch_tools()
            + await self.mcp.validation.fetch_tools()
            + await self.mcp.email_send.fetch_tools()
            + await self.mcp.cloud_storage.fetch_tools()
        )

        # 2. Let the LLM decide which tools to call
        messages = [
            {"role": "system", "content": """You are an invoice processing agent.
Use the available tools to: extract PDF text, parse structured data,
validate calculations, and send confirmation emails."""},
            {"role": "user", "content": user_message},
        ]

        # 3. LLM tool-calling loop
        response = await self.llm.complete(
            messages=messages,
            model="gpt-4o",
            tools=tools,
            tool_choice="auto",  # LLM decides autonomously
        )

        # 4. Execute tool calls chosen by the LLM
        while response.tool_calls:
            # Record the assistant turn that requested the tool calls, so the
            # tool results below attach to it in the conversation history.
            messages.append({"role": "assistant", "tool_calls": response.tool_calls})
            for tool_call in response.tool_calls:
                server, tool = tool_call.function.name.split(".", 1)
                result = await self.mcp.call(
                    server=server,
                    tool=tool,
                    **tool_call.function.arguments,
                )
                messages.append(
                    {"role": "tool", "tool_call_id": tool_call.id, "content": str(result)}
                )
            response = await self.llm.complete(
                messages=messages, model="gpt-4o", tools=tools
            )

        return response.content
```
What happens at runtime:
- The LLM receives tool schemas for PDF extraction, JSON parsing, finance calculations, validation, email sending, and cloud storage
- Given "Process invoice #12345 from inbox", the LLM autonomously chains:
  1. `cloud_storage.read_file` → reads the PDF from storage
  2. `pdf_processing.extract_text` → extracts text from the PDF
  3. `json_yaml.parse_json` → parses structured invoice data
  4. `finance.calculate_compound_interest` → validates the total
  5. `validation.is_email` → validates the customer's email address
  6. `email_send.send_email` → sends a confirmation
- The agent code just orchestrates — the LLM plans all the work
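The `server.tool` routing in step 4 of the example can be pictured as a dictionary dispatch keyed on the split name (the handler below is hypothetical, standing in for a real MCP server):

```python
# Hypothetical handler standing in for a real MCP server tool.
def extract_text(path: str) -> str:
    return f"text of {path}"

HANDLERS = {("pdf_processing", "extract_text"): extract_text}

def dispatch(qualified_name: str, **arguments):
    """Split 'server.tool' exactly as the agent loop does, then route the call."""
    server, tool = qualified_name.split(".", 1)
    return HANDLERS[(server, tool)](**arguments)

print(dispatch("pdf_processing.extract_text", path="/invoices/12345.pdf"))
# → text of /invoices/12345.pdf
```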
> [!TIP]
> For deterministic workflows where tool order is fixed, you can also call MCP tools directly with `await self.mcp.finance.calculate_compound_interest(...)`. But LLM-driven tool calling is the primary pattern — it enables autonomy, multi-step reasoning, and graceful error recovery.
MCP Architecture Overview
MCP Service Classification
Koveria uses a three-dimensional classification to organize MCPs:
1. Execution Model (How it runs)
| Execution | Latency | Use Cases | Examples |
|---|---|---|---|
| Local | <1ms | Lightweight operations, utilities | JSON parsing, date formatting, math |
| Remote | 5-50ms | Heavy processing, external APIs | PDF extraction, web scraping, email |
2. Service Category (What it provides)
| Category | Description | Examples |
|---|---|---|
| stdlib | Standard library utilities (local) | JSON, DateTime, Math, Validation |
| platform | Core infrastructure services (remote) | Cloud Storage, HTTP Client, Email |
| integration | Third-party connectors | Stripe, Salesforce, Slack |
| custom | Organization-specific | Your custom business logic |
3. Ownership (Who maintains it)
| Ownership | Provider | Examples |
|---|---|---|
| core | Koveria (built-in) | All stdlib + platform MCPs |
| community | Community marketplace | Popular integrations |
| org | Your organization (private) | Internal tools |
Example Classification:
- JSON/YAML MCP = Core Standard Library (Local)
- HTTP Client MCP = Core Platform Service (Remote)
- Stripe MCP = Community Integration (Remote)
The 13 Core MCP Services
Koveria includes 13 foundational MCP services out of the box — all exposed to the LLM as native function calls:
Standard Library MCPs (Local, <1ms)
These tools solve problems that LLMs cannot handle natively — precise arithmetic, reliable data parsing, deterministic validation:
- JSON/YAML MCP — Parse, validate, query, merge JSON/YAML data
- DateTime MCP — Current time, date arithmetic, timezone conversion, formatting
- Math MCP — Safe mathematical expression evaluation, descriptive statistics
- Text Processing MCP — Accurate token counting, text extraction, slug generation
- Validation MCP — Email, URL, phone, credit card pattern validation
- Finance MCP — Decimal-precision compound interest, present value, amortization
- Hashing MCP — Cryptographic hashing (SHA-256, MD5, HMAC)
- Encoding MCP — Base64, URL encoding/decoding
- Business Date MCP — Business day calculations
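As a rough illustration of the kind of deterministic check these stdlib tools encapsulate, here is a simplified email validator in plain Python (the regex is an assumption for illustration, not Koveria's actual Validation MCP rule set):

```python
import re

# Simplified pattern: local part, "@", domain labels, dotted TLD of 2+ letters.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_email(value: str) -> bool:
    """Deterministic yes/no answer, unlike an LLM's best guess."""
    return bool(EMAIL_RE.match(value))

print(is_email("billing@example.com"))  # → True
print(is_email("not-an-email"))         # → False
```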
Platform MCPs (Remote, 5-50ms)
These tools give the LLM access to external systems it cannot reach on its own:
- Cloud Storage MCP — Read, write, list files in S3/GCS/NFS
- HTTP Client MCP — Make HTTP requests to external APIs
- Email Send MCP — Send transactional emails via SendGrid/AWS SES
- PDF Processing MCP — Extract text and metadata from PDFs
See: Standard Library Overview | Platform Services Overview
LLM Tool-Calling Patterns
Pattern 1: LLM-Driven (Autonomous) — Primary Pattern
The LLM decides which tools to call based on user intent:
```python
# Fetch all available tool schemas
tools = await self.mcp.weather.fetch_tools()

# LLM sees tool schemas and decides autonomously
response = await self.llm.complete(
    messages=messages,
    tools=tools,
    tool_choice="auto",  # LLM decides
)
```
The LLM might respond with:
```json
{
  "tool_calls": [
    {
      "function": {
        "name": "weather.get_current_weather",
        "arguments": {"city": "Berlin"}
      }
    }
  ]
}
```
Pattern 2: Direct Invocation (Deterministic) — For Fixed Workflows
For predictable, performance-critical paths where tool order is predetermined:
```python
# Agent code decides exactly what to call
weather = await self.mcp.weather.get_current_weather(city="Berlin")
```
> [!IMPORTANT]
> LLM-driven invocation is the primary pattern. Direct invocation is appropriate for fixed workflows (cron jobs, batch processing) where the LLM's reasoning isn't needed. For user-facing interactions, always let the LLM choose.
See: MCP as Native LLM Tools for the complete tool-calling architecture
Configuration Hierarchy
MCPs support multi-level configuration for maximum flexibility:
```
Platform Defaults (least specific)
        ↓
Organization Settings
        ↓
Workspace Settings
        ↓
Team Settings
        ↓
User Settings (most specific)
```
Example: HTTP Client MCP Configuration
```yaml
# Organization-wide (Admin GUI)
org_mcp_config:
  http_client:
    allowed_domains:
      - "*.example.com"
      - "api.stripe.com"
    max_request_size_mb: 10
    timeout_seconds: 30
```

```yaml
# Team-level override (Customer Portal)
team_mcp_config:
  http_client:
    allowed_domains:
      - "api.internal-tool.com"  # Additional domain for this team
```
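One plausible way to read this layering is a most-specific-wins merge. The sketch below is an illustration only: the merge semantics (scalar override, list extension) and the sample values are assumptions, not Koveria's documented resolution algorithm.

```python
def resolve(*layers: dict) -> dict:
    """Merge config layers from least to most specific. Later layers override
    scalars; list values are concatenated so teams can extend org allow-lists."""
    merged: dict = {}
    for layer in layers:
        for key, value in layer.items():
            if isinstance(value, list) and isinstance(merged.get(key), list):
                merged[key] = merged[key] + value
            else:
                merged[key] = value
    return merged

org = {"allowed_domains": ["*.example.com", "api.stripe.com"], "timeout_seconds": 30}
team = {"allowed_domains": ["api.internal-tool.com"], "timeout_seconds": 10}
print(resolve(org, team))
```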
Benefits:
- Security by default — Org admin sets boundaries
- Team flexibility — Teams customize within limits
- Cost attribution — Track usage per team
- Audit trail — All config changes logged
Building Custom MCPs
Koveria provides a developer-first SDK for building custom MCP services. Every custom MCP automatically becomes available as an LLM function call.
Quick Example: Stripe Payment MCP
```python
from koveria.mcp import MCPService, mcp_tool, mcp_config
from koveria.mcp.types import MCPExecutionModel, MCPCategory, MCPOwnership


@mcp_config.api_key(
    name="stripe_api_key",
    description="Stripe API key",
    secret=True,  # Stored in Vault
)
class StripeMCP(MCPService):
    """Stripe payment integration."""

    name = "stripe"
    display_name = "Stripe Payments"
    version = "1.0.0"

    __execution__ = MCPExecutionModel.REMOTE
    __category__ = MCPCategory.INTEGRATION
    __ownership__ = MCPOwnership.ORG

    @mcp_tool(description="Create a payment intent")
    async def create_payment(
        self,
        amount: int,
        currency: str = "usd",
        customer_id: str | None = None,
    ) -> dict:
        """Create a Stripe payment intent."""
        return await self.stripe.PaymentIntent.create(
            amount=amount,
            currency=currency,
            customer=customer_id,
        )
```
That's it! The platform automatically:
- ✅ Generates a JSON function-calling schema for the LLM
- ✅ Handles secret storage in Vault
- ✅ Builds and deploys the Docker image
- ✅ Makes the tool available to any agent's LLM via `fetch_tools()`
The LLM can now autonomously call:
```json
{
  "function": {
    "name": "stripe.create_payment",
    "arguments": {"amount": 1000, "currency": "usd"}
  }
}
```
See: Build Your First MCP
Use Cases
1. Compliance Automation
Problem: Compliance assessments require gathering evidence from documentation, code, and APIs — a perfect task for an LLM with tools.
Solution: Give the LLM access to cloud storage, JSON parsing, and HTTP tools. It autonomously:
```jsonc
// LLM tool calls (autonomous sequence):
{"function": {"name": "cloud_storage.read_file", "arguments": {"path": "/agents/agent-123/README.md"}}}
{"function": {"name": "cloud_storage.list_directory", "arguments": {"path": "/agents/agent-123/src", "pattern": "*.py"}}}
{"function": {"name": "json_yaml.parse_yaml", "arguments": {"yaml_string": "...koveria.yaml contents..."}}}
```
The LLM reads the evidence, infers compliance answers, and fills 70% of questionnaire fields automatically.
2. Customer Support Automation
Problem: Support agents need to check multiple systems (CRM, ticketing, knowledge base).
Solution: Give the LLM access to HTTP Client, JSON, and Text Processing MCP tools:
```jsonc
// LLM tool calls (autonomous sequence):
{"function": {"name": "http_client.get", "arguments": {"url": "https://api.zendesk.com/tickets/42"}}}
{"function": {"name": "json_yaml.parse_json", "arguments": {"json_string": "...response..."}}}
{"function": {"name": "http_client.post", "arguments": {"url": "https://kb.internal.com/search", "data": {"query": "billing dispute"}}}}
```
The LLM reads the ticket, searches the knowledge base, generates a response, and updates the ticket — all autonomously.
3. Financial Report Generation
Problem: Monthly financial reports require data from multiple sources, precise calculations, and formatting.
Solution: Give the LLM access to Cloud Storage, Finance, Math, and DateTime tools:
```jsonc
// LLM tool calls (autonomous sequence):
{"function": {"name": "cloud_storage.read_file", "arguments": {"path": "/finance/transactions-2026-02.csv"}}}
{"function": {"name": "math.evaluate", "arguments": {"expression": "sum([1234.56, 2345.67, 3456.78])"}}}
{"function": {"name": "finance.calculate_compound_interest", "arguments": {"principal": 7037.01, "rate": 0.19}}}
{"function": {"name": "datetime.format", "arguments": {"date_string": "2026-02-28", "format": "%B %d, %Y"}}}
```
The LLM reads transaction data, calculates totals with Decimal precision, formats dates, and generates the report.
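The arithmetic the LLM delegates here can be checked offline with the standard-library `decimal` module (a standalone illustration using the transaction amounts from the sequence above; it is not Koveria's Finance MCP implementation):

```python
from decimal import Decimal, ROUND_HALF_UP

# Sum the month's transactions exactly, then apply 19% as in the finance call.
transactions = [Decimal("1234.56"), Decimal("2345.67"), Decimal("3456.78")]
principal = sum(transactions)
total = (principal * Decimal("1.19")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP
)
print(principal, total)  # → 7037.01 8374.04
```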
Security & Compliance
Multi-Layer Security
1. Row-Level Security (RLS)
- All MCP configurations scoped by `org_id`
- Database-enforced tenant isolation
2. Secrets Management
- API keys stored in Vault (never in code)
- Automatic rotation policies
3. Rate Limiting
- Per-org, per-workspace, per-team quotas
- Prevent abuse and cost overruns
4. Audit Logging
- All MCP calls logged with metadata
- 7-year retention for compliance
5. Network Isolation
- Teams cannot access each other's MCP data
- NetworkPolicy enforcement in Kubernetes
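Layer 3's per-team quotas can be sketched as a token bucket. This is an illustration of the rate-limiting concept only, not Koveria's actual limiter:

```python
import time

class TokenBucket:
    """Minimal per-team limiter sketch: roughly `capacity` calls per `refill_seconds`."""

    def __init__(self, capacity: int, refill_seconds: float):
        self.capacity = capacity
        self.refill_rate = capacity / refill_seconds  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_seconds=60)
print(bucket.allow(), bucket.allow(), bucket.allow())  # → True True False
```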
Cost Attribution
Every MCP tool call is tracked for cost attribution:
```json
{
  "timestamp": "2026-01-03T10:30:00Z",
  "org_id": "org-001",
  "team_id": "team-sales",
  "agent_id": "agent-invoice-processor",
  "mcp_name": "pdf_processing",
  "mcp_tool": "extract_text",
  "cost_usd": 0.05,
  "duration_ms": 234
}
```
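Given records of this shape, per-team attribution is a simple roll-up (the records below are made-up sample data for illustration):

```python
from collections import defaultdict

# Made-up sample records shaped like the log entry above (extra fields omitted).
records = [
    {"team_id": "team-sales", "mcp_name": "pdf_processing", "cost_usd": 0.05},
    {"team_id": "team-sales", "mcp_name": "email_send", "cost_usd": 0.01},
    {"team_id": "team-support", "mcp_name": "http_client", "cost_usd": 0.02},
]

totals: defaultdict = defaultdict(float)
for record in records:
    totals[record["team_id"]] += record["cost_usd"]

print({team: round(cost, 2) for team, cost in totals.items()})
# → {'team-sales': 0.06, 'team-support': 0.02}
```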
Getting Started
For Developers: Build Your First MCP
Time: 10 minutes
```shell
# 1. Initialize new MCP project
koveria mcp init weather-api

# 2. Add @mcp_tool decorators in main.py

# 3. Start local dev server
koveria mcp dev

# 4. Tool is immediately available to the LLM via fetch_tools()
```
See: Build Your First MCP
For Agent Developers: Enable LLM Tool Calling
Time: 5 minutes
```python
# Fetch tool schemas from any MCP
tools = await self.mcp.weather.fetch_tools()

# Pass to LLM — it decides autonomously
response = await self.llm.complete(
    messages=messages,
    tools=tools,
    tool_choice="auto",
)
```
See: MCP as Native LLM Tools | LLM Tool-Calling Tutorial
Documentation Navigation
Core Concepts
- What is MCP? — Conceptual introduction
- Execution Models — Local vs Remote
- MCP as Native LLM Tools — Tool-calling bridge
- Classification System — 3D classification
- Manifest Files — koveria.yaml schema
Service Catalogs
- Standard Library MCPs — 9 local services (<1ms)
- Platform MCPs — 4 remote services (5-50ms)
Developer Guides
- Build Your First MCP — Step-by-step tutorial
- LLM Tool-Calling Tutorial — MCP-powered LLM agents
Reference
- CLI Commands — `koveria mcp` CLI
- Tool Passport API — LLM schema generation
- Manifest Schema — koveria.yaml schema
Last Updated: March 1, 2026
Version: 2.0.0 (LLM-First)