The Value of Context
Exploring how context management through reusable prompts, project knowledge bases, and MCP integration creates more effective workflows with language models
Context is a valuable tool in achieving desired outcomes. This is true for many things we do, both professionally and personally. Context helps align our thoughts and focus our efforts. It allows us to make well-informed decisions, or at the very least the best decisions we can with the information available. And this is especially true when working with LLMs.
In previous blog posts, I detailed my approach to working with Claude, Gemini, and ChatGPT. It really was a progressive process: first becoming more comfortable working with AI, then learning how best to get the assistant to do what I wanted. Reusable prompt templates were very helpful and reduced the need to continually repeat myself, but ultimately the approach that benefited me the most was understanding that context is a valuable tool.
Sometimes I would achieve this by providing URLs to articles, uploading audio transcriptions, or copying and pasting from a previous chat. These tools are still something I use, but my workflow and process have become more sophisticated. For instance, when I know my context window is running low and need to migrate from one chat to another, I have a chat migration prompt that pulls the important points out of the current chat, so I can immediately start a fresh chat with established context and guidelines.
But the next step in this process was adopting projects, a feature both OpenAI and Anthropic provide in their chat interfaces. You can load up projects with knowledge base documents, and this allows the assistant to pull from that information. In some cases, I might even store prompts in the knowledge base and direct the assistant to execute them. But often I have specific documents that I find important to add to different projects. For instance, I have writing guidelines for my writing workspace, and my resume reference and job search log templates for my Job Search workspace. Documents are also often created in the process of developing ideas, so in my Software Engineering project I might have development roadmaps, implementation details, and research that I've created in collaboration with the assistant, all saved to project knowledge.
But where this really took off for me was with the use of remote MCP servers. Model Context Protocol (MCP) servers allow AI assistants to access external data sources, creating persistent context repositories that enhance every conversation. The servers provide a list of tools that allow the assistant to interact with the given repository or application and act on your behalf. The first server I installed was Linear's official MCP server. This became not just a way of managing the projects I was working on, but also a place to store lots of context in the form of project descriptions, issues, and project documents.
In fact, this is my primary use of Linear. I certainly have a few traditional development projects, the kind you'd typically associate with project management software. But the majority of the projects I have in Linear are research and knowledge repositories.
These are often used in various chat conversations with Claude to reference context that may pertain to the current goal. For example, in writing this blog post, I'll have Claude reference writing voice samples and writing standards documents for proofreading and refinement. I write the blog post, then Claude becomes an editor. Using these documents, it learns how I tend to iterate over thoughts and ideas, and it can proofread and edit effectively without altering my voice and intent. This is especially useful in situations where I don't like the way I worded something and want suggestions for a better way to put it.
The samples include my original writings and the approved edited versions. They also show how I tend to provide feedback on edits I don't like, such as crossing out words or whole sentences, either replacing them or putting notes in parentheses.
I also have various frameworks for the way I approach problem-solving and software engineering, and notes on AI collaboration techniques. A recent framework addition was inspired by the talk "The New Code" by Sean Grove of OpenAI, which is worth a watch if you're interested in getting the most out of coding with AI agents. In it, he argues that specs are the new code. Based on the concepts in this talk, I refined the approach I was already using: doing research, choosing appropriate technologies or patterns, and, based on these documents, creating implementation prompts. Using the ideas from this talk, as well as others around context-driven coding, I established a way of specifying what I want to build. The resulting document is useful both to me when implementing features or projects and to Claude Code when executing it, and it also serves as a historical record of the feature or phase of the application being built.
After populating Linear with context projects, I realized there were a few items missing from the official MCP server. I wanted to be able to create and update project documents, but the official server provided no tools for this, so I would have had to add them to Linear manually. Document creation and updating was available through the GraphQL API, however, so I began exploring what it would take to build a custom MCP server to fill this need.
Building an MCP server involves creating a standardized interface that AI assistants can interact with. It's not too difficult. The official MCP documentation site provides clear documentation and examples, and Hugging Face offers a free MCP course. There are several SDKs available in various languages (I chose Go). The time investment varies depending on complexity, but basic functionality can be implemented in a few hours to a day. It's also worth noting that there are plenty of custom open source MCP servers out there, as well as several official ones, ready to use.
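To give a feel for how small the core can be, here is a bare-bones sketch of the stdio transport using only the standard library rather than an SDK. It reads newline-delimited JSON-RPC messages and answers the two lifecycle methods a client sends first; the protocol version string and response shapes reflect my reading of the MCP spec, so treat them as assumptions to verify against the official docs.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type request struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      *int            `json:"id,omitempty"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
}

// handle dispatches one JSON-RPC request to a result. Only the two
// methods needed to come up as an empty server are sketched here.
func handle(req request) (any, bool) {
	switch req.Method {
	case "initialize":
		return map[string]any{
			"protocolVersion": "2024-11-05",
			"capabilities":    map[string]any{"tools": map[string]any{}},
			"serverInfo":      map[string]any{"name": "context-server", "version": "0.1.0"},
		}, true
	case "tools/list":
		return map[string]any{"tools": []any{}}, true
	default:
		return nil, false
	}
}

// serve reads newline-delimited JSON-RPC messages (the stdio transport)
// and writes responses for the requests it recognizes.
func serve(in io.Reader, out io.Writer) {
	scanner := bufio.NewScanner(in)
	enc := json.NewEncoder(out)
	for scanner.Scan() {
		var req request
		if err := json.Unmarshal(scanner.Bytes(), &req); err != nil {
			continue
		}
		result, ok := handle(req)
		if !ok || req.ID == nil { // notifications and unknown methods get no reply
			continue
		}
		enc.Encode(map[string]any{"jsonrpc": "2.0", "id": *req.ID, "result": result})
	}
}

func main() {
	fmt.Fprintln(os.Stderr, "context-server listening on stdio")
	serve(os.Stdin, os.Stdout)
}
```

In practice an SDK handles this plumbing for you, and your time goes into writing the tool handlers themselves.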
I figured if I was going to build an MCP server, why stop at just the missing document tools? What else could I do? What else was possible? I decided to integrate my calendar as well. In my previous post "Building Reliable AI Workflows," I gave the example of schedule extraction from images: I would provide Claude with an image, and Claude would produce an iCal document. Originally, I was then importing this document into my calendar app. Now I provide Claude the images and the events are added directly to my calendar.
What else could I do? I decided to integrate my email, and this allowed me to create workflows involving communication about project status with clients. From here I was curious what I could do with the context I was creating through these integrations, so I added a knowledge graph, which includes background entity extraction (people, projects, concepts, etc.). How well does all of this work in practice? It's still very early, and this is exploration and experimentation, after all.
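The core data structure behind such a graph is small: entities as nodes, typed relations as edges. This is a toy in-memory sketch, not my actual implementation, with illustrative entity kinds and relation names.

```go
package main

import "fmt"

// Entity is a node in the knowledge graph: a person, project, concept, etc.
type Entity struct {
	Name string
	Kind string
}

type edge struct {
	relation string
	to       string
}

// Graph stores entities and typed edges between them, keyed by name.
type Graph struct {
	entities map[string]Entity
	edges    map[string][]edge
}

func NewGraph() *Graph {
	return &Graph{entities: map[string]Entity{}, edges: map[string][]edge{}}
}

// Add records an extracted entity, e.g. from a background pass over email.
func (g *Graph) Add(name, kind string) {
	g.entities[name] = Entity{Name: name, Kind: kind}
}

// Relate links two entities with a named relation ("works_on", "mentions", ...).
func (g *Graph) Relate(from, relation, to string) {
	g.edges[from] = append(g.edges[from], edge{relation: relation, to: to})
}

// Related returns the names of entities reachable via the given relation.
func (g *Graph) Related(name, relation string) []string {
	var out []string
	for _, e := range g.edges[name] {
		if e.relation == relation {
			out = append(out, e.to)
		}
	}
	return out
}

func main() {
	g := NewGraph()
	g.Add("Alice", "person")
	g.Add("MCP Server", "project")
	g.Relate("Alice", "works_on", "MCP Server")
	fmt.Println(g.Related("Alice", "works_on"))
}
```

The value comes from the background extraction step populating a structure like this automatically, so later conversations can query relationships that were only ever implied in emails and chats.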
And what started as one server has now become three: one focused on core productivity needs (email, calendar, notifications, and the missing Linear tools), one for messaging applications (Slack and Discord), and another for working with locally running LLMs.
But the point isn't so much that exploring what's possible with MCP servers is fun, though it is. My focus is really on providing more context about my workflows and creating greater opportunities for success. The hope is that all these tools will feed back into my knowledge graph, providing more context and down the road increasing my ability to achieve desired outcomes with greater ease.
Enjoyed this article? I regularly share thoughts on component architecture and AI-enhanced workflows.