Thinking Patterns For AI Prompts (Library): a fantastic library of real-life scenarios, distilled from hundreds of real conversations, that makes AI interactions genuinely useful and valuable by making people think more, not less. Kindly put together by Tey Bannerman.
At first, it might look like one of the many prompt template guides out there. But what I like about it is that it’s not just another pile of copy-paste prompts. It’s a collection of good defaults, principles, common mistakes, and templates that *have* to be customized to be useful.
As Tey writes, “prompts are useful once you know what you need — and completely useless if you don’t. We’ve been teaching people the grammar of prompting without the vocabulary of thinking.”
Tey’s library flags where human judgment matters and which blind spots AI answers have, and it actively makes us consider implications that we often forget or skip when writing prompts manually.
I keep thinking that these considerations should really be raised by the AI itself, without humans relying on manually modified prompt templates.
This is already happening: many AI products proactively ask questions, clarify missing details, raise red flags, and request more context. I’m hopeful we’ll see this emerge more broadly sooner than we think.
Tey has also published a “Human in the Loop” framework for designing human-AI oversight in practice (attached), which maps consequences against what we are optimizing for. Definitely worth a look, too.
Article: https://teybannerman.com/ai/2025/08/25/human-in-the-loop-framework.html
PDF: https://teybannerman.com/images/human-in-the-loop-framework-by-tey-bannerman.pdf