20 top AI charts, global attitudes to AI, network activation, anthropomorphizing AI, and more
“Humans need and want more time to interact with each other. I think AI coming about and replacing routine jobs is pushing us to do what we should be doing anyway: the creation of more humanistic service jobs.” — Kai-Fu Lee.
The pace of AI progress continues to be staggering. Among this week’s notable news, Meta’s Llama 3 pushes LLM capabilities further, with an even more capable release promised soon, and the ease, power, and realism of deepfake video creation has leaped forward.
In this world we need to understand how AI is developing, but more importantly, focus on how AI can complement humans. The future is Humans + AI.
Ross
📖In this issue
Top 20 charts from the AI Index report
Human cognition in strategic decision-making
Differing global attitudes to AI - from excitement to nervousness
Next level realistic video from single image and audio clip
New models, new robots, and AI hardware fails
Céline Schillinger on network activation, curious conversations, podcasting for connection, and creative freedom
Why we should anthropomorphize AI
💡 Insights
20 top charts from the Stanford AI Index Report 2024
The Stanford Human-Centered AI Institute’s annual AI Index Report is the best single review of the state of the space. The 500-page report takes a while to get through, so we compiled the 20 most interesting charts showing key aspects of the development of AI.
👩🤖Humans + AI update
AI to create a positive future of work
“The current labour market is a waste of human potential…
In an ideal world, people's survival shouldn’t be tied to their ability and willingness to work on the currently available jobs. Fountains of talent and creativity are sitting tucked away in the back room, waiting to be used.”
The human cognition involved in strategic decision-making
“We fundamentally think that strategic decisions are not amenable (solely) to AI-based cognition or computational decision making…. Most importantly, AI-based models are based on backward-looking data and prediction rather than any form of forward-looking theory-based causal logic (that is based on directed experimentation, problem solving, and new data). We claim that emphasizing or relying on prediction alone is a debilitating limitation for strategic decision making.”
Different global attitudes to AI
What is often neglected in the AI debate is how widely attitudes differ around the world. The English-speaking nations, led by Australia and the UK, are the most nervous. Almost all of the most excited nations are rapidly maturing economies.
🔥Hot news in AI
Meta releases Llama 3, now a fifth GPT-4-class LLM, with the promise of bigger models soon - Meta
OpenAI’s Assistants have expanded capabilities and can now ingest 10,000 files, up from 20 - OpenAI
The much-hyped Humane AI pin has been slammed by reviewers on launch - ZDNet
Boston Dynamics has upgraded its Atlas robot to electric, with improved capabilities
Microsoft has announced VASA-1, which creates extremely realistic video from a single portrait photo and an audio clip
🎙️Latest podcast episode
Céline Schillinger on network activation, curious conversations, podcasting for connection, and creative freedom
What you will learn
Exploring the journey from entrepreneurial beginnings to corporate transformation
The shock of transitioning to a large pharmaceutical company’s culture
The power of forming an employee network to instigate positive change
Challenging traditional hierarchies with network activation
Leveraging digital tools and volunteer networks for organizational innovation
Embracing agency, networking, and community for future-ready organizations
Personal practices for amplifying individual capabilities and fostering connections
💡Resources and insights
Why we should treat AI like a human
Leading AI proponent and author Ethan Mollick writes:
“I know these are real risks, and to be clear, when I say an AI “thinks,” “learns,” “understands,” “decides,” or “feels,” I’m speaking metaphorically. Current AI systems don’t have a consciousness, emotions, a sense of self, or physical sensations. So why take the risk? Because as imperfect as the analogy is, working with AI is easiest if you think of it like an alien person rather than a human-built machine. And I think that is important to get across, even with the risks of anthropomorphism.”
Thanks for reading!
Ross Dawson and team