The moment most health practitioners see AI work — really work — something shifts. The "oh wow" hits. And then, almost immediately, a new obsession takes over: I need to figure this out.
So they prompt. And prompt again. The output isn't quite right, so they refine. The chat gets long. The model starts drifting, biased by everything said before it. The outputs get worse. They keep going anyway — convinced that mastery is one better prompt away.
I've watched this happen from the advisor's seat. A client I used to work with — sharp, experienced, genuinely good at their work — would occasionally send me an AI-generated response mid-strategy and say: do it this way now. No discussion. No sanity check against what we'd been building. Just a new answer, treated as the answer.
The irony? This same person would never practice that way. They don't take one study and call it gospel. They cross-reference. They consider the full picture. They apply judgment.
But AI walks into the room, and that critical faculty goes quiet.
Here's what we'll cover in this issue:
- Why the goal was never fluency, but output
- What confident evaluation actually looks like
- Why confidence is about expanding capacity, not cutting costs
The Goal Was Never Fluency. It Was Output.
Most practitioners have confused AI literacy with AI results.
Literacy says: I need to understand how this works.
Results say: I need to know if this is working for me.
Those aren't the same thing. And chasing the first one is quietly eating the time you'd need to get the second one.
Daniel Kahneman spent a career distinguishing between fast thinking — intuitive, pattern-matching, automatic — and slow thinking — deliberate, evaluative, critical. Clinical training is almost entirely slow thinking. You were taught to pause, question, cross-reference, and consider the patient in front of you before acting.
AI, for most people, triggers fast thinking. The output looks good. It's articulate. It sounds confident. So they take it and run — or worse, they spend hours trying to force it into something better through sheer prompting volume.
Neither approach uses the skill you already have.
The move: treat AI output the way you treat a lab result. It's data. It's useful data. But it requires your judgment before it becomes a decision.
What Confident Evaluation Actually Looks Like
Earlier this year, I built an AI agent to help manage Facebook ad strategy for a company of mine.
I didn't try to master ad buying. I didn't spend hours in the weeds of every output. I checked in every three days, reviewed what the agent surfaced, applied my judgment to the recommendations, and made the adjustments it flagged.
We just had our biggest sales month ever. $43K.
I wasn't in the details. I was evaluating. There's a difference.
Try this prompt: "Here is the output you gave me: [paste output]. Here is the goal I'm working toward: [describe goal]. Tell me where this output serves the goal, where it doesn't, and what you'd change."
That single habit — bringing your output back to your actual goal — is worth more than a hundred hours of prompt engineering. It keeps the AI honest. It keeps you in the evaluator seat instead of the operator seat. And it produces better results, faster, because you're not chasing refinement inside a degrading context window.
You already do this in your practice. You're just not doing it here yet.
Confidence Isn't About Saving Money. It's About Expanding Capacity.
Here's where a lot of practitioners get tripped up on AI: they see it as a cost-cutting tool.
If AI can do this, I don't need that person anymore.
That's the wrong first question.
The right question is:
If my team could produce twice the output with the same hours, what becomes possible?
Replacing headcount is a shrinking move. Expanding capacity is a growth move. And you can't get to the second one until you're confident enough in your own judgment about AI output to direct others — a team member, a contractor, an agent — toward producing it for you.
That's where the real leverage lives. Not in you spending more hours prompting. In you being clear enough about what good looks like, so that someone or something else can do the work while you evaluate the result.
You can't delegate what you can't evaluate. And you can't evaluate if you're still in mastery mode.
If you're ready to stop prompting your way through strategy and start building something that actually runs, let's talk.
And one more thing — I've got two things coming that I want to make sure land in the right hands:
🎙️ Ops & Om Podcast — launching soon. Business of health, systems, and what it actually takes to build leverage as a practitioner.
🏫 The School of Doza — a weekly membership for practitioners who want to learn, build, and grow without hiring a full team.
Which of these interests you? (Use the poll below — one click)

Baldomero Garza. Find me on X, LinkedIn, Instagram, or book a 1:1.
If you found this issue helpful, please forward it to a friend or colleague who might benefit from it. Sharing is caring!
Did someone forward you this issue? Don’t miss out on future insights—subscribe to the Ops & Om newsletter here!
