The Ghost in the Machine: Bridging Isolated Projects with MCP

It’s 11:00 PM. You’re deep in the zone, working on Project A. You need to deploy a quick fix, but there’s a catch: the deployment documentation isn’t in Project A. It lives in Project B, a centralized “operations” repo located 3 levels up and 2 branches over in your file system.

You find yourself doing the “Developer Shuffle”—tabbing out of your IDE, navigating through a labyrinth of directories, cat-ing a markdown file in Project B, and then trying to hold those instructions in your head while you context-switch back to Project A.

Even with a powerful AI agent at your side, the agent is blind. It only knows what’s in your current folder. It’s like having a world-class chef who isn’t allowed to look in the pantry.

This is the problem of the Isolated Context, and the Model Context Protocol (MCP) is the bridge that finally solves it.

The Core Concept: What is MCP?#

The Model Context Protocol is an open standard that allows AI agents to connect to external data sources and tools. Instead of the agent being a “brain in a jar” limited to the files you’ve manually uploaded or opened, MCP gives the agent “eyes and hands” to interact with your local environment, databases, and third-party APIs.

In our scenario, we use an MCP server to bridge two separate local projects, allowing an agent working on Project A to “see” and “read” the documentation in Project B as if it were right there in the working directory.

TIP

More info about MCP can be found at modelcontextprotocol.io

How to Build Your First MCP Bridge#

If you want to create a custom connection between your projects, the most flexible approach is the TypeScript SDK. It lets you define exactly how the agent should interact with your files.

1. Initializing MCP#

First, we bootstrap a project using the official MCP starter. Navigate to a neutral directory (not inside A or B) and run:

Terminal window
npx @modelcontextprotocol/create-server my-docs-bridge
cd my-docs-bridge
npm install

2. Defining Resources and Tools#

In src/index.ts, define how the agent “sees” Project B. We have two options (a sketch of the resource-based variant follows this list):

  1. Resources: Best for static data. We can expose specific files in Project B as URI-based resources (e.g., docs://project-b/deploy.md).
  2. Tools: Best for dynamic actions. We can create a tool called get_deployment_docs that takes a project name as an argument and returns the relevant file content from Project B.
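
For completeness, here is a minimal sketch of the resource-based approach, assuming a server instance like the one created in step 3 (declaring resources: {} in its capabilities instead of tools: {}); the docs:// URI and file path are illustrative:

import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import fs from "fs/promises";

const DEPLOY_DOC_URI = "docs://project-b/deploy.md";
const DEPLOY_DOC_PATH = "/absolute/path/to/Project-B/docs/deploy.md";

// Advertise a fixed Project B document as a URI-addressable resource.
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [{
    uri: DEPLOY_DOC_URI,
    name: "Project B deployment guide",
    mimeType: "text/markdown",
  }],
}));

// Return the file content when the client reads that URI.
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri !== DEPLOY_DOC_URI) {
    throw new Error(`Unknown resource: ${request.params.uri}`);
  }
  const text = await fs.readFile(DEPLOY_DOC_PATH, "utf-8");
  return {
    contents: [{ uri: DEPLOY_DOC_URI, mimeType: "text/markdown", text }],
  };
});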

3. Implementing the Logic#

Here is a simplified logic structure for our server using the TypeScript SDK. This example uses a tool-based approach to fetch documentation:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import fs from "fs/promises";
import path from "path";

// Root of the docs we are bridging. Hardcoded for clarity here;
// see the environment-variable tip in step 4 for a more reusable setup.
const DOCS_ROOT = "/absolute/path/to/Project-B/docs";

const server = new Server({
  name: "project-b-bridge",
  version: "1.0.0",
}, {
  capabilities: {
    tools: {},
  },
});

// 1. List the tools available to the agent
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "read_project_b_docs",
    description: "Read deployment documentation for a specific project from Project B",
    inputSchema: {
      type: "object",
      properties: {
        filename: { type: "string", description: "e.g., project-a-deploy.md" },
      },
      required: ["filename"],
    },
  }],
}));

// 2. Handle the tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "read_project_b_docs") {
    const { filename } = (request.params.arguments ?? {}) as { filename: string };
    const docPath = path.resolve(DOCS_ROOT, filename);
    // Refuse paths that escape the docs directory (e.g. "../secrets.txt").
    if (!docPath.startsWith(DOCS_ROOT + path.sep)) {
      return {
        content: [{ type: "text", text: "Error: filename escapes the docs directory" }],
        isError: true,
      };
    }
    try {
      const content = await fs.readFile(docPath, "utf-8");
      return { content: [{ type: "text", text: content }] };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: "text", text: `Error reading docs: ${message}` }],
        isError: true,
      };
    }
  }
  throw new Error("Tool not found");
});

const transport = new StdioServerTransport();
await server.connect(transport);
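
Before wiring the server into a client, it is worth a quick smoke test. Assuming the starter’s default npm run build script, we can compile the server and explore its tools interactively with the MCP Inspector:

Terminal window
npm run build
npx @modelcontextprotocol/inspector node build/index.js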

4. Connecting the Agent (the “Client”)#

The MCP server runs as a separate process, so we need to tell our AI client how to start it. Taking Gemini CLI as an example: it looks for MCP server configurations in a JSON file, usually located at ~/.gemini/settings.json.

Add the new bridge server to the mcpServers object:

{
  "mcpServers": {
    "docs-bridge": {
      "command": "node",
      "args": [
        "/absolute/path/to/my-docs-bridge/build/index.js"
      ],
      "env": {
        "PROJECT_B_PATH": "/absolute/path/to/Project-B/docs",
        "NODE_ENV": "production"
      }
    }
  }
}
Key Integration Tips for the CLI
  • Use Absolute Paths: The Gemini CLI runs from various working directories. Always use absolute paths for the node executable (if not in global PATH) and the index.js file to avoid “file not found” errors during the handshake.
  • Environment Variables: Note the env block above. It is better practice to pass the directory of Project B as an environment variable rather than hardcoding it in the TypeScript source. This keeps the MCP server generic and reusable (a sketch follows this list).
  • Permissions: Ensure the user running the Gemini CLI has read permissions for the Project B directory.
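
Following the environment-variable tip, the hardcoded DOCS_ROOT constant from step 3 could be replaced with a lookup like this (a sketch; PROJECT_B_PATH matches the env block in the configuration above):

// Read the docs root from the environment instead of hardcoding it,
// and fail fast at startup if the variable is missing.
const DOCS_ROOT = process.env.PROJECT_B_PATH;
if (!DOCS_ROOT) {
  throw new Error("PROJECT_B_PATH environment variable is not set");
}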

5. Verification#

To verify the connection, restart the Gemini CLI session and run a diagnostic command (such as /mcp) if available, or simply ask:

“List the available tools from my MCP servers.”

If configured correctly, we should see read_project_b_docs appear in the list of capabilities. This confirms the CLI has successfully “attached” to our local documentation bridge.

6. Context File#

Even with an MCP server running, an AI agent needs a “reason” to use it. Without a hint in Project A, the agent might spend time looking for a Dockerfile or a docker-compose.yml inside Project A’s directory when asked to deploy it. When that search fails, it might hallucinate a deployment strategy from general knowledge instead of using our specific, centralized docs.

It is therefore highly recommended to keep a note in the agent’s context file (GEMINI.md, in the case of Gemini) and frame it as an instruction pointer for the agent, like this:

Deployment
----------
Deployment logic for this project is centralized in Project B. To understand how to deploy Project A:
1. Use the `read_project_b_docs` tool.
2. Request the file: `project-a-deployment.md`.
3. Follow the instructions regarding the Docker configuration defined there.

Result#

When we start a session in Project A:

  1. The agent reads GEMINI.md.
  2. It identifies that deployment is “external.”
  3. It sees it has a tool (read_project_b_docs) that matches the instruction.
  4. It proactively fetches the documentation from Project B before we even ask a specific deployment question.

This creates a seamless “web” of knowledge across our local repositories. We keep Project A clean of duplicate docs, but we give the agent the “map” it needs to find the truth in Project B.

Scaling to an Entire Ecosystem#

What if we have dozens of projects in a single root directory? We don’t want to write a custom tool for every pair of projects. Instead, we can scale using a Generic Filesystem MCP.

Better still, there is no need to reinvent the wheel: there is an official, open-source Filesystem MCP Server maintained as a reference implementation by the Model Context Protocol team.

The @modelcontextprotocol/server-filesystem (often referred to simply as the Filesystem MCP) is exactly what you are looking for. It is designed to bridge the gap for agents that need access to directories outside their current working path.

Key Features
  • Security (Allowed Directories): You explicitly whitelist directories it can touch. It will block any attempt to “escape” those folders (no ../ attacks).
  • Comprehensive Tooling: It comes pre-built with tools like read_file, list_directory, search_files (recursive glob search), and get_file_info.
  • Recursive Search: The agent can search for deployment files across all project subfolders in one go.

Taking Gemini CLI as an example, we can skip the manual coding and simply install and configure this pre-built server.

1. Installing the Package#

We can run it directly using npx (which ensures we always have the latest version) or install it globally:

Terminal window
npm install -g @modelcontextprotocol/server-filesystem

2. Configuring Gemini CLI#

Update settings.json to use the official server. The most important part here is passing the centralized project root as an argument:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/centralized/projects/root",
        "/path/to/another/relevant/dir"
      ]
    }
  }
}
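
As with the custom bridge, restart the Gemini CLI session and ask the agent to list its tools; if the server is attached correctly, read_file, list_directory, and search_files should all appear among its capabilities.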

(To be continued…)
