System prompts for augmented thinking, Taylor Swift's pens, T-Bloat, dreaming to keep ahead, and more

“Don’t fear intelligent machines; work with them.” — Garry Kasparov

Moving to cognitive augmentation with AI.

The default way LLMs respond, and the default way people use them, is like a vending machine: put something in and you get something out. In its normal mode, GPT-5 seems particularly poor at engaging your thinking, instead simply providing extensive responses.

A series of system prompts for GPT-5 can shift its responses to provide structured engagement, useful outcomes, and better thinking.

Be well!

Ross

 

📖In this issue

  • Toolkit: 9 System prompts for cognitive augmentation

  • Humans + AI update: Dreaming to keep ahead of AI, values in LLMs, and functions in Humans + AI work redesign.

  • From Humans + AI Explorers Community: The human role with AI, lessons from the “T-Bloat” concept, deep thinking without AI, and more

  • Humans + AI Podcast: Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift’s pens

💡Toolkit: 9 System prompts for cognitive augmentation

I am often frustrated with GPT-5, as it is intensely analytical, generating a profusion of terse points in sprawling structures that are very poorly designed for augmenting human cognition.

I have co-designed a series of GPT-5 system prompts for augmented cognition, posted on an open link in the Humans + AI Community. Let me know how you go with them, and share any feedback. The first one is here; the rest are copyable at the link.

🧠🤖Humans + AI

The role of dreaming in keeping ahead of AI

AI tends to overfit data to potential patterns. Humans may be better at avoiding this, as:

“…dreaming is a way of preventing this kind of overfitting and collapse. The reason dreaming is evolutionarily adaptive is to put you in weird situations that are very unlike your day-to-day reality, so as to prevent this kind of overfitting.

When you're generating things in your head and then you're attending to it, you're training on your own samples, you're training on your synthetic data. If you do it for too long, you go off-rails and you collapse way too much. You always have to seek entropy in your life. Talking to other people is a great source of entropy.”

LLMs vary substantially in their values

LLMs were found to express significantly different values in a quarter of tested situations, each model with its own particular biases and propensities. If not understood and clearly managed, this can shape companies’ cultures.

Humans + AI work redesign is most often led by IT

A Deloitte study showed that IT and Operations most often lead work redesign initiatives. “…as a product, work design must center around the people who know it best—the managers and workers who do the work and experiment with it every day.”

🌐From Humans + AI Explorers Community

It has been a busy time in the Humans + AI Explorers Community.

Dan Bashaw proposed the lovely “T-Bloat” Learning Motivator framework, and explored its implications for learning and expertise development.

Beth Kanter shared a post on Deep Thinking without AI, considering approaches to engage deeper thinking, including pen and paper, with a great discussion of personal approaches with and without AI.

And far more, with a host of upcoming events!

🎙️This week’s podcast episode

Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift’s pens

Why you should listen

I have followed Beth Kanter’s work for over 15 years, and it was a delight to speak with her to learn about how she helps leaders and organizations use AI to amplify their intent, and her focus on nonprofits.

Thanks for reading!

Ross Dawson and team