The day I heard about "Co-Intelligence," I preordered it. I’d been following Ethan Mollick on LinkedIn (and his Substack newsletter, One Useful Thing) for months, and he'd quickly become one of my favorite voices in the AI space.
My hope was that his book would feel similar to his newsletter. I was not disappointed.
Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies the intersection of entrepreneurship, innovation, and AI. And he has that rare ability to unpack complex topics and explore big questions — *What will AI mean for work? For education? How will it alter our relationship to truth, productivity, connection, and purpose?* — with clarity and even levity.
Rarer still is how he blends philosophy with practicality. He tackles tough questions, but this is not an academic or esoteric work. Much like his newsletter, "Co-Intelligence" is full of real-world examples, often drawing on Mollick's own AI conversations to advance the book while underscoring key points.
Here are just some of my Kindle highlights, and I hope they encourage you to pick up a copy. To quote Mollick:
“... Serious discussions need to start in many places, and soon. We can’t wait for decisions to be made for us, and the world is advancing too fast to remain passive.”
Top takeaways from “Co-Intelligence”
On AI + work:
For workers, these fluid categories [of AI tasks, AI-assisted tasks, and human tasks] mean the impact of AI will be felt gradually, as we adapt to its increasing powers, rather than in a single disruption.
This is the paradox of knowledge acquisition in the age of AI: we may think we don’t need to work to memorize and amass basic skills, or build up a storehouse of fundamental knowledge—after all, this is what the AI is good at … But the path to expertise requires a grounding in facts. The issue is that in order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we need subject matter expertise.
The key is to keep humans firmly in the loop—to use AI as an assistive tool, not as a crutch.
On innovation:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. (Amara’s Law, named after futurist Roy Amara)
Fundamental truth about innovation: it is expensive for organizations and companies but cheap for individuals doing their job.
All the usual ways in which organizations try to respond to new technologies don’t work well for AI. They are all far too centralized and far too slow … Individual workers, who are keenly aware of their problems and can experiment a lot with alternate ways of solving them, are far more likely to find powerful and targeted uses.
On societal implications:
The burden of knowledge is increasing, in that there is too much to know before a new scientist has enough expertise to start doing research themselves ... Thus, we have the paradox of our Golden Age of science. More research is being published by more scientists than ever, but the result is actually slowing progress!
Even if AI development were paused, the impact of AI on how we live, work, and learn is going to be huge, and warrants considerable discussion.
There is a sense of poetic irony in the fact that as we move toward a future characterized by greater technological sophistication, we find ourselves contemplating deeply human questions about identity, purpose, and connection. To that extent, AI is a mirror, reflecting back at us our best and worst qualities.
On generative AI + us:
[Focusing] on apocalyptic events robs most of us of agency and responsibility. If we think that way, AI becomes a thing a handful of companies either builds or doesn’t build, and no one outside of a few dozen Silicon Valley executives and top government officials really has any say over what happens next.
The thing about a widely applicable technology is that decisions about how it is used are not limited to a small group of people. Many people in organizations will play a role in shaping what AI means for their team, their customers, their students, their environment.
Today’s decisions about how AI reflects human values and enhances human potential will reverberate for generations.
The book is a fantastic primer on where generative AI is today, and where AI might be headed tomorrow (including four possible scenarios). But if I had to distill the key takeaways, I would choose these three:
- Invite AI to everything. The path to true understanding and “co-intelligence” is experience and experimentation. Individuals, not companies, will likely unlock the most valuable use cases. The ways we work with AI will continually evolve as the technology, and society, change.
- Embrace expertise. AI will give all of us access to more knowledge, abilities, and even intelligence than we have on our own. But there are no shortcuts to expertise. Years of experience, hard work, and yes, broad knowledge of a subject are necessary to evaluate AI outputs and to add value in an age when intelligence has been commoditized.
- Get active in AI’s future. AI is a mirror — the best and the worst of us. Whether AI is transformative or destructive is up to us (and it will likely be a mix of both). And there are too many important questions looming to wait or sit on the sidelines.
You can pick up a copy of this instant New York Times bestseller here. (Not sponsored or affiliate — just strongly recommended.)
Subscribe for more growth, fewer growing pains.
Ever wonder how other growing companies make the tough decisions — like how much to spend on marketing, when it’s time to expand your org chart, or when and where to use AI?
Us, too. So we decided to create growthcurve: A newsletter dedicated to practical advice for growth without the growing pains.
Get frameworks, templates, and advice to smooth out the bumps along the way.