I want to discuss something that I feel is too often overlooked or downplayed. In our excitement over AI (mine included), we rarely address the deficiencies of these models and the products built on top of them. I believe these systems make me more productive, allow me to explore new ideas quickly, and, quite frankly, are a lot of fun.
But these tools are not the product. They, in and of themselves, are not universally transformative. Let's be clear: they are amazing, and though much of the hype surrounding them is arguably unjustified or just plain wrong, I don't believe the excitement is. But this is not the product. These tools are not usable by everyone. They are for power users and those with technical expertise, and the same goes for many applications that have incorporated the technology.
The Programming Parallel
This reminds me of expectations from the past. It was once thought that everyone would learn how to program. That was not the case; the barrier to entry was, and is, too high. Though there has certainly been opportunity to enter the industry for those with the will and the desire, the primary barrier is that most people do not want to code. The same is true of AI assistants. Most people do not tinker, explore, configure, or automate, nor do they want to. And this cannot be the expectation for these products. History has shown us that this will not succeed.
The Reality
Over the past several months, I've documented my journey with AI tools: from finding initial value through building reliable workflows to practical implementation examples and advanced context management. What I haven't emphasized enough is how much technical exploration, iteration, and frankly failure happened along the way.
The structured approach I developed required months of iteration to get reliable results. My system architecture demonstrates the technical complexity behind what appears to be "simple" AI chat. The workflow examples show the validation systems needed for quality output. The context management through MCP servers, knowledge graphs, and custom integrations represents significant technical investment.
The Infrastructure Behind "Simple" AI Chat
Looking at my Linear projects reveals the true complexity. The Writing Standards & Voice Integration document runs thousands of words, specifying prohibited language, authentic voice characteristics, and quality frameworks. The Professional Networking Message Samples document shows multiple iterations of the same message type, with detailed analysis of what works and what doesn't.
This isn't just "prompt engineering." It's building a comprehensive system for reliable AI collaboration.
Who's getting the most out of these tools
I love to tinker, explore, automate and configure. Claude, Gemini and ChatGPT have been incredible tools to add to my tool belt. And though I continue to find ways to achieve greater consistency and make my workflow more efficient, the truth is there are many failures along the way.
I encourage everyone to learn to collaborate with AI assistants. However, one of the most valuable lessons is understanding the limitations of the technology, both in your specific use case and generally. Who exactly do I think should take the opportunity to learn these tools? Everyone. Realistically, though, that is not going to happen.
The real barrier isn't that AI is too complex. It's that most people don't want to engage with the process required to get the most out of it. They want tools that work consistently without complexity; something that they may feel they don't even have with most current non-AI applications. They don't want to document writing samples or create user context to provide to an app they're using.
I Heart Claude + MCP
I heart Claude + MCP, but the current generation of these tools requires a systematic approach. They're powerful for those willing to invest the time and energy.
My automation chain: Audio Hijack → Hazel cleanup → Shortcuts processing → DEVONthink archival. I have inserted Claude into this chain, using the following MCP servers:
- Linear – Project management and documentation where Claude can create issues, update project documents, and maintain context across my engineering and research work
- Fastmail – Email integration allowing Claude to search messages, compose responses, and coordinate project communications directly from chat
- GitHub – Repository management where Claude can review code and help with development workflows
- Airtable – CRM and data management where Claude can track contacts, update records, and maintain relationship context
- Filesystem – I have a directory called Claude, where I have Claude deposit various documents generated from research, audio transcripts, implementation specs, and plans
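For readers curious how servers like these get wired up: Claude Desktop reads MCP server definitions from a `claude_desktop_config.json` file. The sketch below is illustrative rather than my exact configuration; the filesystem server package is the official reference implementation, while the Linear entry uses a placeholder package name and a dummy API key to show the general shape:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Claude"
      ]
    },
    "linear": {
      "command": "npx",
      "args": ["-y", "your-linear-mcp-server"],
      "env": {
        "LINEAR_API_KEY": "YOUR_KEY_HERE"
      }
    }
  }
}
```

Each entry maps a server name to a command Claude Desktop launches at startup; the tools those servers expose then become available in chat. This is exactly the kind of setup step that power users enjoy and everyone else bounces off of.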
This works for me because I enjoy building these systems. Most people don't.
That being said, I'm having a lot of fun and have found real value in AI. For those with the technical inclination and patience for iteration, these tools offer genuine productivity gains. But this is not the product.
Moving Forward
The excitement around AI isn't misplaced. These tools represent genuine technological advancement. But we need realistic expectations about who will actually adopt them in their current form. Building better products means acknowledging that most people want tools that simply work.
I don't know what the product looks like that we could not create previously. I don't think it's a matter of embedding LLMs into your application and hiding the complexity. It has to simply work and do something that was not achievable before. And it may not be recognizable as the conversational assistants we have now. But I do believe that product will come. Until then, we'll keep tinkering, exploring and iterating. It's the only way to get there.
This post builds on concepts from my AI workflow series: How to Find Value in AI, Building Reliable AI Workflows, Practical AI Workflow Examples, and The Value of Context.