Prompt Engineering Doesn’t Matter Anymore: Here’s Why
When your students graduate, their AI colleagues will write their own prompts. Equip future clinicians with the judgment to supervise AI colleagues and the ethical spine to decide when to say “no”.
Dear Medical Educator,
I’ve noticed a growing obsession with “prompt engineering” — carefully building the perfect string of words to get the best results from a language model. There are courses, cheat sheets, and even job titles built around this.
However, crafting very detailed prompts has become less and less necessary. I've been meaning to share this with you for a while, and a new article written by medical students in Academic Medicine gave me the perfect opportunity. They share my belief that we might be heading in the wrong direction:
Teaching students how to write well-structured prompts is not the future. It’s a distraction.
Here’s the gist:
Prompt engineering is becoming obsolete, fast.
Why? Because AI models are getting better at doing it themselves.
The authors cite recent research on techniques in which models generate and refine their own prompts internally. I briefly described how reasoning models work, in plain terms, in one of my previous posts:
Because of how they are designed, reasoning models such as OpenAI's o1-preview, o1, and o3 (unlike GPT models) don't need detailed prompts; they need clear intent. The “magic prompt” era is fading.
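To make the contrast concrete, here is a minimal sketch in Python using the OpenAI SDK. The model names, prompts, and task are purely illustrative assumptions, not a recommendation of specific models:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Non-reasoning models have traditionally been given heavily engineered prompts:
# a role, step-by-step instructions, and strict format constraints.
engineered_prompt = (
    "You are an experienced medical educator. "
    "Write ONE multiple-choice question on heart failure for final-year students. "
    "Follow these steps: 1) pick a learning objective, 2) write a clinical vignette, "
    "3) write one correct answer and four plausible distractors, "
    "4) return the result as: Stem / Options A-E / Correct answer / Rationale."
)
gpt_response = client.chat.completions.create(
    model="gpt-4o",  # illustrative non-reasoning model
    messages=[{"role": "user", "content": engineered_prompt}],
)

# A reasoning model is usually given the intent only;
# it plans the intermediate steps itself.
intent_prompt = (
    "Write one well-constructed multiple-choice question on heart failure "
    "for final-year medical students, with a rationale for the correct answer."
)
reasoning_response = client.chat.completions.create(
    model="o1",  # illustrative reasoning model
    messages=[{"role": "user", "content": intent_prompt}],
)

print(gpt_response.choices[0].message.content)
print(reasoning_response.choices[0].message.content)
```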
They argue that medical education shouldn't focus on teaching students how to prompt, but on something deeper: AI literacy.
That includes:
Understanding how AI works (even at a basic level)
Knowing its limits and biases
Navigating ethical concerns
Working alongside AI specialists
Staying flexible as tools evolve
What does all this mean for us, practically?
If you’re using AI, here’s what to focus on:
1. Know your model. Different models behave differently, so don't use the same approach for all of them. If you give very detailed instructions to a reasoning model, it may perform worse. If you give an underspecified prompt to a non-reasoning model, it might try to fill in the gaps and fail.
2. Don't add a prompt engineering course to the curriculum; it has no future. If you're using a non-reasoning model like GPT-4o, let a reasoning model write the prompt for you (a sketch of this follows the list). In the near future, hybrid models (e.g., Claude 3.7 Sonnet, Gemini 2.5 Pro) that know when to use reasoning and when not to will be common, and they'll handle this for you.
3. Focus on foundational knowledge. Ethics, bias, regulation, model limitations — these will age better than any prompt tip.
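Here is a minimal sketch of point 2, again in Python with the OpenAI SDK and with illustrative model names and an illustrative task: a reasoning model drafts the detailed prompt, and a non-reasoning model then executes it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

task = "Create a script concordance test item on suspected pulmonary embolism."

# Step 1: ask a reasoning model to draft a detailed, well-structured prompt.
draft = client.chat.completions.create(
    model="o1",  # illustrative reasoning model
    messages=[{
        "role": "user",
        "content": (
            "Write a detailed, well-structured prompt that a general-purpose "
            f"chat model could follow to complete this task: {task}"
        ),
    }],
)
generated_prompt = draft.choices[0].message.content

# Step 2: hand the generated prompt to a non-reasoning model to do the actual work.
result = client.chat.completions.create(
    model="gpt-4o",  # illustrative non-reasoning model
    messages=[{"role": "user", "content": generated_prompt}],
)

print(result.choices[0].message.content)
```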
Don't forget:
By the time your students graduate, their AI colleagues (AI agents) will write their own prompts. Equip future clinicians with the judgment to supervise those colleagues, and the ethical spine to decide when to say “no”.
Yavuz Selim Kıyak, MD, PhD (aka MedEdFlamingo)
Follow the flamingo on X (Twitter) at @MedEdFlamingo for daily content.
Subscribe to the flamingo’s YouTube channel.
LinkedIn is another option to follow.
Who is the flamingo?
Related #MedEd reading:
Çiçek, F. E., Ülker, M., Özer, M., & Kıyak, Y. S. (2024). ChatGPT versus expert feedback on clinical reasoning questions and their effect on learning: a randomized controlled trial. Postgraduate Medical Journal, qgae170.
Kıyak, Y. S., & Kononowicz, A. A. (2024). Case-based MCQ generator: a custom ChatGPT based on published prompts in the literature for automatic item generation. Medical Teacher, 46(8), 1018-1020. https://www.tandfonline.com/doi/full/10.1080/0142159X.2024.2314723
Kıyak, Y. S., & Emekli, E. (2024). ChatGPT prompts for generating multiple-choice questions in medical education and evidence on their validity: a literature review. Postgraduate Medical Journal, 100(1189), 858-865. https://academic.oup.com/pmj/advance-article/doi/10.1093/postmj/qgae065/7688383
Kıyak, Y. S., & Emekli, E. (2024). A Prompt for Generating Script Concordance Test Using ChatGPT, Claude, and Llama Large Language Model Chatbots. Revista Española de Educación Médica, 5(3). https://revistas.um.es/edumed/article/view/612381