How to Find Value in AI

AI Workflows
Strategic Planning
Software Engineering
Career Development
Quality Control
Law Horne
June 21, 2025
8 min read

I'm sure we've all played around with AI chat to see what it can tell us, how it reacts to what we say, maybe created some silly stories or songs. But how do you go from that to practical application? What are the real benefits? How can AI actually help you get work done, and how do you get there?

What I've found is that you have to find use cases that make sense, at least one to start with, and learn how to interact with AI effectively. Then it becomes easier to see how you can expand into more use cases and integrate it into your workflow. But here's what surprised me: despite all the talk about "natural language" interaction with computers, it still takes a technical mindset to approach this technology effectively. You need to think methodically about how to apply it in ways that actually work for you.

My wariness toward AI wasn't based on the idea that it lacked useful applications, but on the fact that most of the discussion around it focused on fears of displacement, blanket skepticism, or over-hyped marketing. My hope was, and still is, that the discussion moves beyond this and into practical application. For me, job applications turned out to be perfect.

My Use Case

I'm not a fan of searching for a job, and I don't enjoy writing resumes or cover letters; they often feel like a chore. Did I communicate the necessary information? Should I rework this section? Am I being accurate? There's a constant back and forth. And then there's the fact that I rarely think to document my achievements at work, so I spend hours trying to recall details from months ago.

This challenge made job applications a natural starting point for structured AI exploration. When a friend suggested I try AI for job application assistance, I decided to approach it methodically rather than casually.

From Basic to Structured

My first few attempts at using AI for job applications were fairly basic: essentially "Here's my resume content; tailor it to this pasted-in job description." What I found was that errors and contrived experience abounded. Not only did the AI make mistakes, but I did as well.

I then decided to compose a prompt that I could reuse. This improved things, but I still needed to substantially edit the output. A tip I read suggested asking the assistant which terms and phrases are commonly overused in resumes, then instructing it to avoid them. This helped, but some of those words and phrases would still slip through.

Eventually I asked the assistant to improve my prompt and include quality checks to weed out this language and other issues. But to really make this work, I decided to create a resume master template.
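
To make the idea concrete, here's a minimal sketch of the kind of quality check I mean. The phrase list and function are hypothetical illustrations, not my actual prompt or tooling; the point is that a final pass can mechanically flag overused resume language before a draft goes anywhere:

```python
# Hypothetical quality-check pass: flag overused resume phrases
# that tend to slip into AI-tailored drafts.
OVERUSED_PHRASES = [
    "results-driven",
    "synergy",
    "proven track record",
    "dynamic",
    "leveraged",
]

def flag_overused(text: str) -> list[str]:
    """Return the overused phrases found in a draft, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in OVERUSED_PHRASES if phrase in lowered]

draft = "Results-driven engineer who leveraged cloud tooling."
print(flag_overused(draft))  # ['results-driven', 'leveraged']
```

In practice I asked the assistant itself to perform this kind of check as part of the prompt, but the logic is the same: a fixed checklist applied to every draft, every time.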

Providing a Good Foundation

This started out as my existing resume, but I soon realized it could be improved so that I wouldn't need to drastically alter the content for each application. This was largely because my resume was weak at that point.

I initiated a new chat, pasted in my resume, and offered additional context such as my volunteer work in the deaf/HoH community, my interest in photography and how it allowed me to learn more about people, and my career transition story. I then asked the assistant to conduct an interview process to inquire about timelines, people involved, and the specific details behind each accomplishment on my resume. I also instructed the assistant that if something wasn't clear to ask clarifying questions.

This turned into a single intensive session over the course of a few hours. I could take breaks and return when I was ready, which allowed me to think through my responses thoroughly. The assistant walked through each role step by step, asking detailed questions about technical challenges, quantifiable achievements, and specific responsibilities I might have forgotten or understated.

From this comprehensive interview process, I created an updated master resume and iterated on it until I was sufficiently comfortable with the content. I then completed the job prompt template, which would always reference this master resume and run quality checks.
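
The structure of that template can be pictured as a simple fill-in-the-blanks assembly. This is an illustrative sketch, not my actual template; the master resume text and checklist wording here are placeholders:

```python
# Illustrative sketch of a reusable job-application prompt template
# that always references the master resume and ends with quality checks.
MASTER_RESUME = "(interview-derived master resume text goes here)"

PROMPT_TEMPLATE = """You are helping tailor a resume to a job posting.

Master resume (the only source of truth -- do not invent experience):
{resume}

Job description:
{job}

Quality checks before responding:
1. Every claim must trace back to the master resume.
2. Avoid overused resume phrases.
3. Flag anything you were unsure about instead of guessing.
"""

def build_prompt(job_description: str) -> str:
    """Assemble the reusable prompt for one specific job posting."""
    return PROMPT_TEMPLATE.format(resume=MASTER_RESUME, job=job_description)

prompt = build_prompt("Senior engineer, macOS automation tooling.")
```

The design choice that mattered most was making the master resume the single source of truth: the tailoring prompt never contains resume content of its own, so updating the master updates every future application.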

Why Job Applications Made Sense

What made job applications particularly valuable as a training ground was the consequences involved. Unlike casual experimentation, job application materials directly impact your professional future. This forced me to develop quality control frameworks and structured approaches rather than accepting whatever the system produced.

The process taught me several critical principles:

Quality Control is Essential: System outputs require structured verification, especially for professional materials where accuracy matters.

Template Development Prevents Repetition: Instead of solving the same prompting challenges repeatedly, building reusable templates creates consistent, reliable results.

Iterative Improvement Works: Each round of feedback and refinement improved both my prompting skills and the AI's output quality.

Context Depth Matters: The interview process revealed that providing comprehensive background information dramatically improved output relevance and accuracy.

From Individual Projects to Integrated Workflows

It's worth noting that, at the time, all of this lived in an individual project with job search-oriented instructions. There was a system-level prompt, but it dealt more with general communication style. By then I had developed several other projects, each with its own instructions:

  • Workflow System – This dealt with automation based on my ecosystem preferences (macOS, Shortcuts, Audio Hijack, Hazel, DEVONthink)
  • Note-Taking Workflow – I designed templates for various note-taking contexts and overall documentation standards
  • Prompt Engineering – This is where I explored improving my prompting skills and often created new reusable prompt templates
[Figure: Multi-persona AI system architecture showing the Memory Keeper coordinating specialized persona clusters. Caption: Strategic context distribution across specialized AI personas.]

The Need for Consolidation

It was from these individual projects that my multi-persona system prompt was born. Keeping these steadily growing project instructions consistent was becoming difficult, so I explored the value of combining them. I didn't know whether this was a good idea, but it turned out others were already using the technique.
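
The consolidation itself is nothing more exotic than folding each project's instructions under a named persona heading in one shared prompt. The project names, headings, and instruction text below are illustrative placeholders, not my actual system prompt:

```python
# Illustrative consolidation: merge per-project instructions into one
# system prompt, one named persona section per former project.
projects = {
    "Workflow Automator": "Automate using macOS, Shortcuts, Hazel, and DEVONthink.",
    "Note Keeper": "Apply documentation standards and note-taking templates.",
    "Prompt Engineer": "Refine prompts and build reusable prompt templates.",
}

def consolidate(shared_context: str, personas: dict[str, str]) -> str:
    """Build a single system prompt from shared context plus persona sections."""
    sections = [f"## {name}\n{instructions}" for name, instructions in personas.items()]
    return shared_context + "\n\n" + "\n\n".join(sections)

system_prompt = consolidate("You are a multi-persona assistant.", projects)
```

The payoff is the same as in software: one place to edit shared context, with each persona's specialized instructions kept distinct but consistent.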

Platform Evaluation

I created a rough draft in ChatGPT, passed it to Claude, and then did the final refinement in Gemini. Why this specific process? I was actively evaluating all three at the time and appreciated the output I was getting from each one, and each pass seemed to clean up the draft further.

Platform-Specific Strengths I Discovered:

ChatGPT was the initial place where the projects were predominantly developed, so it made sense that the rough draft came from there.

Claude just had the best output overall, both technically and creatively, which is why it became my preferred assistant. Additionally, once the system prompt reached a certain size, the full prompt could no longer be given to ChatGPT due to character limits, and Gemini does not support a system prompt that affects the default chat experience.

Gemini seemed to exhibit technical proficiency, so it worked well at making the final draft of the prompt concise but clear and complete.

After that first finished draft, most of the personas and my basic user context (background, tech stack, communication style, job search status, interests, etc.) were in place, and I named it Mnemosyne - Your dedicated memory and context keeper. I liked the way it sounded, and it could be shortened nicely to "Mnem," which gave it a bit of personality.

Insights for AI Adoption

From this foundation-building experience, several patterns emerged that I believe are valuable for other senior engineers considering structured AI adoption:

Choose Use Cases That Matter

Job applications worked as a training ground because the stakes were real. Poor output had professional consequences, which forced structured improvement rather than casual acceptance of AI limitations.

Apply Engineering Discipline

The progression from individual prompts to reusable templates to integrated systems follows familiar software development patterns. Quality control frameworks, structured testing, and iterative improvement all translate well to AI collaboration.

Platform Evaluation Matters

Different AI systems have different strengths. Rather than defaulting to one platform, methodical evaluation across multiple systems for specific use cases leads to better outcomes.

Build Structured Knowledge Management

The evolution from individual projects to consolidated systems reflects the same thinking that leads to better software architecture: reducing duplication, improving maintainability, and creating reusable components.

Looking Forward

This foundation-building period, roughly two months of structured development, established the groundwork for more sophisticated applications. The quality control frameworks, template systems, and platform evaluation methodology created a reliable base for tackling more complex challenges.

The next step was applying this approach to production-level work: building a professional website using AI-enhanced development workflows while maintaining the same quality standards established during the job application phase.

This is Part 1 of a series on AI adoption. Part 2 will cover system architecture and Part 3 will show practical workflow examples.

Enjoyed this article? I regularly share thoughts on component architecture and AI-enhanced workflows.