Learning to Work with Claude
I've really enjoyed the past several months working with AI, and Claude in particular. There have been successes and failures, moments when I've been incredibly annoyed, and times when I've laughed at the output I received.
I developed complex system prompts, prompt templates, and validation protocols—all in an effort to get the best output possible. I am relatively new to all of this, so I want to continue growing my knowledge and understanding. I recently reviewed two videos from Anthropic: "Prompting 101" and "Prompting for Agents". What I learned is that my current approach is over-engineered for the models I'm using today—namely Sonnet 4 and Opus 4.1.
I work with and test many models, including several locally runnable ones. Some techniques I've discussed in previous blog posts may still have merit for certain models. But it's definitely time to move forward in how I work with the models I primarily use day-to-day.
I want to get the most value out of these systems to increase the quality of my work and productivity. When I realize I have catching up to do and adjustments to make, I try to put them into practice as quickly as possible.
In doing so, I've made my system prompt far less verbose and more maintainable. I went from around 1,500 words to 150 words, quite a difference. The same goes for my prompt templates; in some cases, I've eliminated them altogether. This has also allowed me to streamline my context documents.
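To make that concrete, here's a minimal sketch of what a slimmed-down setup can look like with the Anthropic Python SDK. The system prompt text and the model ID below are illustrative placeholders, not my actual prompt.

```python
import anthropic

# A lean system prompt: role, context, and boundaries in a few sentences,
# rather than pages of step-by-step instructions and formatting rules.
SYSTEM_PROMPT = (
    "You are a senior front-end engineer reviewing component code. "
    "Favor readable, idiomatic TypeScript and explain trade-offs briefly. "
    "If you're unsure about project-specific conventions, say so and ask."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whichever Sonnet or Opus release you use
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Review this component for accessibility issues."}],
)
print(message.content[0].text)
```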
Current frontier models have advanced beyond needing explicit chain-of-thought direction or extensive few-shot prompting. As long as they have sufficient context, clear direction, and boundaries, we should "let Claude be Claude."
As one of the speakers put it: "Finally, you can let Claude be Claude, and essentially what this means is that Claude is great at being an agent already. You don't have to do a ton of work at the very beginning, so I would recommend just trying out your system with sort of a bare-bones prompt and bare-bones tools and seeing where it goes wrong and then working from there. Don't sort of assume that Claude can't do it ahead of time, because Claude often will surprise you with how good it is."
My prompts were too rigid and weren't allowing the model to work to the best of its ability.
This doesn't mean prompts should always be short or that guidance isn't necessary. But our focus needs to shift toward trusting what these models can do.
That's not the same as trusting they'll always give perfect output or never hallucinate. But techniques that were previously essential shouldn't be our primary focus.
Here's the refined prompt structure from those videos, the same ordered template Anthropic uses in its prompt engineering guidance:

1. Task context
2. Tone context
3. Background data, documents, and images
4. Detailed task description and rules
5. Examples
6. Conversation history
7. The immediate task or request
8. Thinking step by step
9. Output formatting
10. A prefilled response, if any
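As a rough sketch, here's that ordering laid out as a single prompt skeleton. The XML-style tag names are my own shorthand for clearly delimited sections, not labels mandated by the videos:

```python
# Skeleton following the ordering above; the placeholders ({documents},
# {examples}, {history}, {request}) stand in for content filled at run time.
PROMPT_TEMPLATE = """\
<task_context>You are helping triage bug reports for a web app.</task_context>
<tone_context>Be concise and direct.</tone_context>
<background>{documents}</background>
<rules>Only reference information found in the background documents.</rules>
<examples>{examples}</examples>
<history>{history}</history>

{request}

Think through the report step by step before answering.
Respond as JSON with keys "severity" and "summary".
"""
```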
What's particularly valuable is how they distinguished between general prompting and agent-specific prompting for tools and automated tasks.
The talks also provided practical anti-hallucination techniques:

- Give the model explicit permission to say "I don't know" instead of guessing.
- Ask it to ground answers in direct quotes from the provided documents.
- Have it verify claims against its sources or tool results before answering.
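In practice, these land in the prompt as a short block of grounding rules. The wording below is my own phrasing of those techniques, not lifted from the talks:

```python
# Illustrative grounding rules appended to a system prompt.
GROUNDING_RULES = """\
- If you are not sure of an answer, say "I don't know" rather than guessing.
- Before answering, quote the exact passage from the provided documents that
  supports your answer; if no passage supports it, say so.
- Only make claims you can trace back to the documents or to tool results.
"""
```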
Some of these I was already using intuitively, but seeing them laid out in this new context significantly sharpened how I approach prompting.
Rather than diving deep into techniques here, I encourage you to watch those Anthropic videos yourself. Whether you're adjusting existing approaches or learning to prompt for the first time, seeing guidance directly from the team that builds these models is what will help you the most.
Enjoyed this article? I regularly share thoughts on component architecture and AI-enhanced workflows.