Designing MCP Servers as a "Backend for MCP" (BFMCP), Similar to BFF
Prerequisites
The following assumes a product that meets either of the following conditions:
- Medium-to-large scale products
- Products that also provide a backend for UI / API
Out-of-scope Cases
- Cases where a single external service API is exposed as-is as an MCP tool
(e.g., Google Drive / Slack / Notion, etc.)
For such cases, the design principles of an MCP wrapper around an external API are more appropriate than those of a "Backend for MCP."
Clarifying MCP Responsibilities
First, let's clarify the responsibilities that MCP undertakes.
In essence, MCP is a "contract" that stands between the LLM and the outside world, and it has the following roles:
- Definition of available tools and resources
- Explicit definition of input/output schemas
- Limiting executable operations
- Validation of requests from the LLM
- Normalization and returning of execution results
The important point is that all of these are responsibilities of the boundary layer, not the application logic.
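The boundary responsibilities above can be sketched as an explicit tool contract. The following is a minimal illustration in TypeScript that does not use the official MCP SDK; the `search_orders` tool and all of its fields are hypothetical names chosen for this example:

```typescript
// Minimal sketch of an MCP-style tool contract (illustrative, not the real SDK).
// The definition makes explicit what the LLM may call and with what shape.
type JsonSchema = Record<string, unknown>;

interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: JsonSchema;   // what the LLM is allowed to send
  outputSchema: JsonSchema;  // what the server promises to return
}

const searchOrders: ToolDefinition = {
  name: "search_orders",
  description: "Read-only search over orders. Never mutates data.",
  inputSchema: {
    type: "object",
    properties: {
      customerId: { type: "string" },
      limit: { type: "integer", maximum: 50 },
    },
    required: ["customerId"],
  },
  outputSchema: {
    type: "object",
    properties: {
      orders: { type: "array", items: { type: "object" } },
    },
  },
};
```

Because the contract lives at the boundary, the schemas can be validated and versioned independently of the application logic behind them.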
Common Anti-patterns
The following are configurations that are common pitfalls in the early stages of MCP adoption:
- Exposing existing backend APIs directly as MCP tools
- Each tool individually accessing DBs or external APIs
- Authorization and validation logic being scattered across different tools
This configuration tends to cause the following problems:
- Security policies become fragmented
- Constraints for the LLM and constraints for general APIs become mixed
- It becomes difficult to keep up with MCP-specific specification changes
The "Backend for MCP" Approach
Therefore, the design I would like to propose is to:
Decouple the MCP server as a "Backend for MCP"
This positions the MCP server as follows:
- A separate layer from the regular application backend
- Providing an interface specifically for the LLM
- Consolidating MCP responsibilities in one place
The architectural image is as follows:
LLM
↓ (MCP)
MCP Server (Backend for MCP)
↓
Existing Backend / DB / External API
Responsibilities to Consolidate in the MCP Server
1. Centralized Management of Tool and Resource Definitions
- Which tools to expose to the LLM
- Which arguments to allow
- Which operations are dangerous and should be excluded
By consolidating these into the MCP server,
you can see at a glance "what the LLM is allowed to do."
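One way to make "what the LLM is allowed to do" visible in one place is a central registry that every tool must pass through. This is a sketch under assumptions: the tool names and the registry API are hypothetical, and dangerous operations are excluded simply by never being registered.

```typescript
// Sketch of a central tool registry (hypothetical names).
// Only explicitly registered tools are reachable from the LLM.
interface ToolEntry {
  name: string;
  readOnly: boolean;
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

const toolRegistry = new Map<string, ToolEntry>();

function registerTool(entry: ToolEntry): void {
  toolRegistry.set(entry.name, entry);
}

registerTool({
  name: "get_invoice",
  readOnly: true,
  handler: async (args) => ({ invoiceId: args["invoiceId"], status: "paid" }),
});

// A dangerous operation like "delete_customer" is intentionally NOT registered,
// so the registry itself documents the entire exposed surface.
function listExposedTools(): string[] {
  return [...toolRegistry.keys()];
}
```

Reviewing the registry file in code review then becomes equivalent to reviewing the LLM's entire capability surface.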
2. Validation and Guardrails
- Schema validation
- Rate limiting
- Read-only constraints
- Permission control per organization/user
LLMs are smart, but they cannot be trusted.
Guardrails must always be placed on the server side as a basic principle.
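The guardrails above can be composed into a single server-side check that runs before any handler. The sketch below is illustrative: the in-memory rate limiter, the trivial shape check standing in for full schema validation, and the example policy of rejecting all write tools are all assumptions for this example.

```typescript
// Sketch of server-side guardrails: every call from the LLM passes through
// these checks before any tool handler runs.
interface CallContext { userId: string; }

class RateLimiter {
  private counts = new Map<string, number>();
  constructor(private maxCallsPerWindow: number) {}
  allow(userId: string): boolean {
    const n = (this.counts.get(userId) ?? 0) + 1;
    this.counts.set(userId, n);
    return n <= this.maxCallsPerWindow;
  }
}

const limiter = new RateLimiter(5);

function guardCall(
  ctx: CallContext,
  toolReadOnly: boolean,
  args: unknown,
): { ok: true } | { ok: false; reason: string } {
  // 1. Schema validation (here: a trivial shape check stands in for
  //    full JSON Schema validation).
  if (typeof args !== "object" || args === null) {
    return { ok: false, reason: "arguments must be an object" };
  }
  // 2. Rate limiting per user.
  if (!limiter.allow(ctx.userId)) {
    return { ok: false, reason: "rate limit exceeded" };
  }
  // 3. Read-only constraint (example policy: no write tools at all).
  if (!toolReadOnly) {
    return { ok: false, reason: "write operations are not permitted" };
  }
  return { ok: true };
}
```

The key property is that these checks live in one layer: a tool handler can never be reached without passing them, regardless of what the LLM sends.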
3. Thin Wrapping of Execution Logic
The MCP server itself does not need to hold heavy business logic.
- Calling existing APIs
- Assembling queries
- Formatting results
By focusing on being a "translation layer" for such tasks,
you can reduce the degree of coupling with existing systems.
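A translation-layer handler might look like the following sketch. The backend call is a stand-in (in practice it would be an HTTP request to the existing API), and all names and field shapes are hypothetical:

```typescript
// Sketch of a thin "translation layer" tool handler.
// The MCP server holds no business logic: it calls the existing backend
// and reshapes the response into an LLM-friendly form.
interface BackendOrder { id: string; total_cents: number; created_at: string; }

// Stand-in for a call to the existing backend API (would be fetch() in practice).
async function fetchOrdersFromBackend(customerId: string): Promise<BackendOrder[]> {
  return [{ id: "ord_1", total_cents: 1250, created_at: "2024-01-05" }];
}

// The tool handler: call the backend, then normalize the result.
async function searchOrdersTool(args: { customerId: string }) {
  const raw = await fetchOrdersFromBackend(args.customerId);
  return {
    orders: raw.map((o) => ({
      orderId: o.id,
      total: `$${(o.total_cents / 100).toFixed(2)}`, // human-readable amount
      date: o.created_at,
    })),
  };
}
```

Because the handler only translates between the backend's shape and the LLM's shape, changes to backend internals stay contained behind the existing API.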
Summary
It is natural to design an MCP server not as a simple tool execution platform, but as:
- A place where MCP responsibilities are consolidated
- A backend dedicated to the LLM
Just as Backend for Frontend (BFF)
provides a backend optimized for each UI,
Backend for MCP serves as a gateway optimized for the LLM.