Genesis
The cycle of creation enters a new phase
Where are the philosophers?
In the final years of his life, Henry Kissinger repeatedly put this question to his friend Craig Mundie. For Kissinger, the implications of the AI age were so profound that they needed to be debated by experts trained to think about the nature of knowledge, ethics and meaning.
Rather than wait around for the philosophers, Kissinger, Mundie and former Google CEO Eric Schmidt wrote a book that guides us to the philosophical questions we should be examining. Genesis: Artificial Intelligence, Hope and the Human Spirit explains why AI will transform society, gives us glimpses of the coming marvels and spells out the choices we need to make about the future of our species.
Answers To Questions We Never Asked
The scientific method has driven human progress since the Renaissance—knowledge and understanding advanced together.
AI will change that. Not because AI “thinks differently” from humans, but in part because it processes information millions of times faster. In addition, machines not only take in more information than humans, but also digest it in more forms and dimensions. Think of the human senses—they direct a very limited view of reality into our brains. Dogs, by contrast, with their expansive sense of smell, experience a vastly different world from ours even though they occupy the same physical space. Machines, with data fed to them via sensors, can therefore “experience” the world in a multi-dimensional way that we never will.
Over time these advantages will lead AI to move from helping us answer questions to generating its own hypotheses, then running simulations and using robotics to conduct experiments in the physical world to test them. The results could be miraculous—the creation of new sustainable fuels or geo-engineering processes to remove carbon from the atmosphere. Mundie believes AI will allow us to develop power from nuclear fusion “within the next few years”. A similarly transformational array of breakthroughs will take place in medical science.
These are AI’s “gifts” to humanity. The catch is that we won’t necessarily understand how they work. There will be a wedge between capabilities and comprehension. Here’s Mundie on just one of the foundational issues this raises:
What is education going to look like in a world where literally every person, from the youngest age, is going to be afforded the opportunity to have an unlimited Socratic teacher? Then what does it mean to go to school at any level?
Work And Dignity When Intelligence Is Commoditized
Mundie sees AI as “commoditizing intelligence” with the impact felt first in entry-level knowledge work, but quickly percolating throughout the economy. Manual labor might be less impacted initially, but eventually robotics technology will transform those jobs as well.
Of course, it’s not news that this will create major economic uncertainties. In past technological revolutions labor was also displaced at scale. This increased productivity and drove up real wages. As those with higher earnings spent their money, a whole new economy of jobs arose.
This could happen again. However, if it doesn’t, we’re headed for a society of abundance where control of this wealth is concentrated in fewer and fewer hands. To be sure, some AI leaders have talked about the need to redistribute wealth more broadly, but I doubt they’ll take the political risk of pushing for changes to government policy. For instance, in this November’s election California citizens will vote on a 5% one-time tax on billionaires to fund health care. This has met with intense opposition from business leaders threatening to leave the state. Many Democrats oppose it as well, including Governor Newsom.
I suspect a better starting point is voluntary efforts, where AI leaders can signal commitment to the problem while avoiding political battles. Capitalists For Shared Income might provide a template. It is a non-profit that has channeled private donations into a permanent endowment. The endowment invests in the stock market and pays out a portion of its wealth each year to those living in poverty. Perhaps AI leaders who are serious about addressing wealth disparities could allocate a portion of their profits to initiatives like this?
Wealth distribution is tricky and important, but there are deeper issues still, which Mundie believes are vital to think through. For instance, how do we define human dignity in a world where work is completely transformed?
Whether it was raising kids or working in the workplace or the fields…how you did that …was a huge part of what your dignity attached to. And work is going to get redefined… But if you look in our country how often do you turn on the news and hear anybody talking about what we’re going to do to prepare for this?
Why is a definition of dignity important? Machines will increasingly take on human qualities—telling jokes, recounting personal histories, expressing emotions. Mundie believes dignity—which humans possess but machines do not—is one way to permanently distinguish the two. However, if human dignity no longer derives substantially from work, we need a new way to conceive of it. He offers suggestions, but the book’s value is that it prompts us to develop our own definition. In thinking this through, we might challenge ourselves to move beyond treating AI as a tool and ask whether it is more usefully thought of as a new entity, one with which we can partner to manage our future.
A New Species?
Conceiving of AI machines as a new species might sound hard to stomach, but take a step back. If prediction markets had existed when Lucy walked the African grasslands three million years ago, what would the odds have been that her descendants would emerge as a new species? And not just a new species, but one so comprehensively dominant that it could actually change the planet’s climate? Pretty low, I suspect. (And a nice convex payout if you had put money on it.)
Is it then incomprehensible that an AI with the capabilities Mundie expects will also effectively become a new type of species? Framed that way, I don’t think so. What makes it hard to grasp is the compressed time frame from which this AI entity will have emerged. But remember, on an evolutionary time scale, Homo sapiens’ dominance emerged in the blink of an eye. That’s how non-linear processes work. Extended periods of seemingly little change and then an explosion. An AI species taking shape in what seems like the blink of a blink may just be the extension of this.
This raises Mundie’s final question. How should humans co-exist with this new “species”? What would a human-machine symbiosis look like? Or should we instead merge, trying to blend the best of machines and best of people?
These are uncomfortable, weird and scary questions. Kissinger, Mundie and Schmidt have had the courage to ask them. Have the courage to read their book. You might discover you don’t need the philosophers to find the answers.
Listen to my interview with Craig Mundie on Top Traders Unplugged “Ideas Lab” podcast.

