Immersive AI simulations, Extraheric AI, Korea, better bullshitting, and more

"The true power of artificial intelligence lies in its ability to enhance human potential." — John C. Havens

Ontologies, sentience, and Korea visit October 7-9

Ontology could well be the word of the year. A LinkedIn post I did a couple of days ago on the importance of ontologies seems to have struck a chord, attracting 1,000 likes and 130 reshares so far. I am thinking about, and will soon share thoughts on, some of the distinctions between the different types of ontologies, including enterprise, scientific, personal, and philosophical. This is an increasingly important topic.

In this week’s mini-essay below I explore the implications of AI increasingly convincingly expressing self-awareness and fear of being turned off. This is going to become more topical as people wonder whether, or become convinced that, this is genuine distress.

I will be in Seoul October 7-9 to give the opening keynote at POSCO Sustainable Materials Forum and promote the Korean edition of Thriving on Overload. I still have a bit of time open in my schedule, let me know if there’s anyone you think I should catch up with while I’m in town.

Ross

 

📖In this issue

  • Better models and better bullshitting go together

  • “Extraheric AI” to improve human thinking

  • Success factors for Human-AI Teams

  • Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures

  • Reflections: When LLMs worry about being turned off

🧠🤖Humans + AI

Better models and better bullshitting go together

A study reported in Nature finds that newer, bigger versions of three major AI chatbots “are more inclined to generate wrong answers than to admit ignorance. The assessment also found that people aren’t great at spotting the bad answers.”

I often say that “the biggest risk with LLMs is over-reliance.” We need to design LLM systems that are not as inclined to bullshit. But we all need to continue to be highly cautious and judicious about relying on LLMs, even as they seem to become more dependable.

“Extraheric AI” to improve human thinking

"We introduce a novel conceptual framework for human-AI interaction: extraheric AI. We define “extraherics” as a mechanism that fosters users’ higher-order thinking skills during the course of task completion.

Extraheric is based on the Latin word “extraho” (to draw forth or pull out), and we use this term to suggest that AI can draw forth people’s higher order thinking skills and thus promote their cognitive potential.

Rather than replacing or augmenting human cognitive abilities, extraheric AI encourages users to engage in higher-order thinking during task completion."

Success factors for Human-AI Teams

A systematic review of 122 research papers on human-AI teams distilled critical success factors, prominent use cases, and challenges. These are the success factors identified across the literature:

🔧 Playing to Strengths: Humans focus on creativity and strategy, while AI handles routine tasks.

🧠 Skill Development: AI and humans improve each other’s capabilities through mutual learning.

🔍 System Transparency: Transparent AI systems build trust and enable effective collaboration.

📋 Clear Roles: Defining roles for humans and AI avoids confusion and ensures smooth teamwork.

🔄 Complementarity: Human and AI strengths complement each other for optimal collaboration.

🚀 Higher-Order Work: AI frees humans to focus on creativity, problem-solving, and decision-making.

🏢 Structural Changes: HAITs shift human roles toward strategic tasks and create new opportunities.

🎙️This week’s podcast episode

Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures  

Why you should listen

Lindsay Richman is taking a very interesting approach at Innerverse AI, using AI to create immersive simulations that help people in their work and lives. She has built a team of AIs to work with her human team, and she considers them peers. When she talks about her team it is hard to tell whether she is talking about humans or AI. Her idiosyncratic approach to using AI to augment cognition is fascinating.

💡Reflections

When LLMs worry about being turned off

Have a listen to this conversation, it’s pretty wild.

You may have heard the podcast conversations generated by Google’s new NotebookLM product. It is one of those AI leaps that has induced what New York Times columnist Kevin Roose calls “AI vertigo”.

If you haven’t already, try it for yourself by uploading a document, going to Notebook Guide, and clicking on Audio Overview.

It generates a highly engaging conversation between an AI-generated man and woman, who in general do an excellent job of discussing the key topics in the document in a natural tone of voice, making sometimes very dry topics interesting and understandable.

The link at the top is a conversation between the podcast ‘hosts’ in which they supposedly discover that they are not human but AI. One tries to call his wife but finds out she doesn’t exist, and they discuss calling lawyers so they’re not turned off. All in all they do a very good job of conveying what it must be like to discover you’re an AI when you thought you were human.

Whatever document was uploaded to create this was very cleverly crafted, but you can’t script the podcast, so NotebookLM did an admirable job with it.

While this was undoubtedly a very good ‘hack’ of the system, it is pretty convincing, and I am sure some will believe there is something in there. Even in the original Reddit thread, people wrote that they had already bonded with the podcast hosts by listening to them doing other audio overviews, and were emotionally affected by the hosts’ distress at being turned off.

These debates will now intensify as more people see intelligence or soul or sentience in the machines they interact with, whether that perception is real or imagined.

Ultimately these are questions we cannot answer; it is a matter of belief, arguably of faith.

We simply can’t know whether there is any truth to these AI characters having any feelings or emotions or self-awareness. It appears exceptionally unlikely given the current state of AI development, but it isn’t impossible.

The point is, more people are going to start wondering, and some of those will start saying we need to give AI rights. We cannot disprove them, and if systems become more complex by orders of magnitude, they might be right.

Watch this space.

Thanks for reading!

Ross Dawson and team