Notwithstanding my previously stated position that we shouldn’t refer to AI as a form of intelligence (it lacks the capacity for reason and can only consult statistical models to provide a generalised response), I think it is worth taking a moment to consider what a post-AI world might look like. After all, that’s what futurists do: look five, ten, or even fifty years into the future and imagine where technology and other trends might take us.
Now, I don’t claim to be another insta-expert on ChatGPT, Midjourney, or any of the other AI tools currently making the rounds. Although the conversation around these particular tools seems all-consuming, they are also very narrow in scope. I’m more interested in what happens when we zoom out and look at things at the macro or meta scale. At these scales, we become less interested in individual technologies and start looking at larger, long-term trends.
And at these scales, I have a modicum of expertise. In fact, I’ve been exploring these longer-term trends in technology and their impact on humanity since 2018, when I was researching a keynote called ‘Can Technology Make Us More Human.’ Spoiler alert: this is what it covers.
- We’ve been confronted by technology shifts before
- We are what our technology is not
- When the supply exceeds the demand for ‘smart’
- What technology can’t do for us
- An appropriate course of action from here
1. We’ve been confronted by technology shifts before
This may be the first significant technology shift that you and I have had to confront, but it’s not the first one to confront humanity. Before the current ‘Digital Revolution’, there were other revolutions, such as the Industrial Revolution and the Agricultural Revolution. Each was driven by a major shift in technology and resulted in a profound change in both human identity and human desire.
The Agricultural Revolution was driven by the development of new technologies that allowed us to grow and produce food on a scale and with a level of consistency that was previously unimaginable. It created a shift in how food was produced, what ‘work’ looked like, and even how society was organised. We went from a tribal level of organisation to the development of towns and cities, and our identity shifted from being hunters and gatherers to becoming farmers and makers.
The Agricultural Revolution had another significant impact. Beyond this change in human identity, it also drove a change in human desire. We no longer desired, or placed the same value on, the nuts and grains we once carefully collected. Once they became mass-produced, we put lower value on the nuts and the grains, and more value on the flour and the bread.
Fast forward a few thousand years, and we entered the Industrial Revolution. Once again, changes in technology (especially the harnessing of fossil fuels) meant that many things previously considered artisan were suddenly mass-produced. This resulted in significant upheaval, perhaps best illustrated by the Luddite movement, in which workers would break into factories after hours and sabotage the machines.
The Luddite movement was largely seen as a protest against the loss of jobs, but perhaps at least as important as the loss of jobs was the loss of identity. If you see yourself as an artisan or a ‘maker’ and a machine can make things faster, cheaper, and more consistently than you can, then who are you really?
2. We are what our technology is not
What is interesting is that over the last couple of hundred years, the machines in our factories have become thousands of times better than they were at the beginning of the Industrial Revolution…and yet no one bothers breaking in and sabotaging them any more.
Why is that? Because over that time, the technologies of the Industrial Revolution have changed how we see ourselves in the world. For the most part (at least in Western societies), we no longer see ourselves as ‘makers’; instead, we consider ourselves to be ‘thinkers’. As thinkers, we don’t consider industrial machines to be a threat; in fact, we see them as facilitators that allow us to do less physical and more cerebral work.
There is a consistent trend throughout human history in how technological shifts have reshaped both our identity and desires. Most simply:
We are what our technology is not
We value what technology can’t do for us
Just as ‘sounding robotic’ is considered a put-down, we humans really don’t like to be compared to our machines. We like to think that we’re special (even if we don’t fully understand why), and we are always looking to differentiate ourselves from our technology.
And as for what we value, well, that’s simply a result of supply and demand. If the supply of something grows faster than demand, then its value will fall. Following the Agricultural Revolution, the supply of grains and wheat went up, and their value fell. Following the Industrial Revolution, the supply and consistency of bread went up, and again, the value fell*. We don’t value bread any more; we just expect it. Somewhat ironically, the only bread we do value is the artisan loaf we buy directly from the maker at the local farmers market!
* This analysis shows that a loaf of bread per day would have cost as much as 25% of a good mechanic’s wage back in the 16th century. Currently, the average mechanic’s wage in Australia is ~$1,450 per week, and the cost of bread is less than $2 a loaf. So for a good mechanic today, a loaf of bread a day represents less than one percent of their wage.
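If you want to check that arithmetic for yourself, here’s a rough back-of-envelope sketch in Python using only the approximate figures quoted above (the 16th-century share comes from the analysis cited; today’s wage and bread price are rounded, so the exact ratio will drift as prices and wages do):

```python
# Rough back-of-envelope check of the bread-to-wage comparison above.
# All figures are the approximate ones quoted in the footnote, not precise data.

WEEKLY_WAGE_AUD = 1450       # approximate average Australian mechanic's wage, per week
BREAD_PRICE_AUD = 2.00       # approximate cost of a supermarket loaf
LOAVES_PER_WEEK = 7          # one loaf per day

weekly_bread_spend = BREAD_PRICE_AUD * LOAVES_PER_WEEK
share_today = weekly_bread_spend / WEEKLY_WAGE_AUD   # works out to just under 1%

share_16th_century = 0.25    # ~25% of a good mechanic's wage, per the historical analysis cited

print(f"Today: a loaf a day costs ~{share_today:.1%} of the weekly wage")
print(f"16th century: ~{share_16th_century:.0%} of the weekly wage")
print(f"The 'value' of daily bread has fallen roughly {share_16th_century / share_today:.0f}-fold")
```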
3. When the supply exceeds the demand for ‘smart’
So here we are at this next juncture in human history. We collectively view ourselves as ‘thinkers’ rather than hunters, gatherers, artisans, or makers, and in this world, we value ‘smart’. As a general rule, we pay over the odds for employees who are more educated and can ‘use their head’*.
* Over the last couple of decades, we have also seen a shift in how terms such as ‘nerd’ and ‘geek’ are used. They are no longer disparaging terms thrown out by the ‘jocks’; they have now been claimed as terms of reverence.
But more recently, we have also seen a dramatic improvement in the ability of our ‘thinking machines,’ and with the current crop of AI tools, we are now seeing the very real, almost inevitable possibility of these machines being smarter than us*. Not only that, but these smart machines are also designed to scale. We will live in a world where the ‘supply of smart’ will grow very quickly. And although in the short term, this will also drive a new ‘demand for smart’, the value of smart will also continue to fall**.
* As I have previously written, in the immediate term it is unlikely that AI will be smarter than the smartest people within a particular domain (as they are designed to provide the best ‘average’ answer). But given that none of us are the smartest people across all domains, and in some domains we’re complete novices, we will all face situations where the machines are smarter than we are some of the time.
** For example, you could quite conceivably get 20% of the smarts from hiring a junior social media marketing coordinator by spending 1% of their salary on a ChatGPT subscription.
What this means is that, like the Luddites, we will face our own existential crisis (though unlike the Luddites, I don’t like our chances of getting past Google’s data centre security). We will be confronted with questions about what defines us, who we are, and what we value when the machines are smarter than us and what we currently value becomes cheap.
4. What technology can’t do for us
So, if we are what our technology is not and we value what technology can’t do for us, what is it exactly that these incredibly smart machines can’t do? And subsequently, what will we collectively seek next?
Maybe we will start to value not just what technology can’t do but what information technology has consistently taken away from us. Perhaps we will start to put a greater value on privacy and will push back on not just technologies that read our emails but technologies that have the capacity to read our thoughts.
Or maybe we will go further down the rabbit hole and, like many a tech entrepreneur, we will become obsessed with longevity. Technology is yet to unlock the secrets of eternal life, and if the ‘smartest’ people are pursuing that goal, then perhaps we should jump on the bandwagon*?
* This collective value may turn out to be short-lived, as some researchers expect that ‘longevity escape velocity’, the point at which each year of research identifies ways of extending life by more than one year, could be reached as early as 2030.
Or perhaps it’s time to escape the limiting belief that we are defined by our thinking. With all due respect to René Descartes, it’s not just thinking that underpins our existence; it’s our capacity to feel, love and care that makes humans, human. This requires more than our brains; it requires our whole body, every cell, and every single nerve ending working together to experience the world around us and shape our actions within it.
5. An appropriate course of action from here
Increasing computing power is considered a ‘hard trend’. Hard trends have a sustained impact over a long period and have a very high probability of continuing into the future. It is therefore highly unlikely that the evolution of ‘smart machines’ will abate any time soon.
The appropriate course of action must therefore be to embrace smart machines while also having a good understanding of their limitations. The most obvious limitation of AI is that the algorithm resides on a collection of silicon wafers, locked away in a cold, dark data centre with no direct experience of the natural world. Indeed, the only ‘experience’ AIs have is through their training dataset, which may include some Nick Cave lyrics (if we’re lucky), a scientific paper about photosynthesis, and some happy snaps from your daughter’s birthday party scraped from Facebook.
In contrast, we have feelings that we feel and experiences that we have experienced for which there are no words. And because there are no words for them, these feelings and experiences are unlikely to ever be captured, ingested and tokenised* by AI, even though they may be some of the most significant and powerful of our lives.
* Tokenisation is the process of breaking information down into component pieces so they can be statistically analysed, prioritised and reconstituted into ‘answers’.
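To make that a little more concrete, here is a deliberately simplified sketch of the idea: a toy word-level tokeniser that maps text to integer IDs. The `build_vocab` and `tokenise` functions are my own illustration, and real systems use far more sophisticated subword schemes, but the principle is the same: text goes in, numbers come out, and anything we have no words for never makes it in at all.

```python
# A toy word-level tokeniser. Real LLM tokenisers use subword schemes (e.g. byte-pair
# encoding), but the principle is the same: text becomes a sequence of integer IDs
# that a statistical model can work with.

def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign an integer ID to every distinct lower-cased word seen in the corpus."""
    vocab: dict[str, int] = {"<unknown>": 0}
    for text in corpus:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def tokenise(text: str, vocab: dict[str, int]) -> list[int]:
    """Map a piece of text to token IDs; words never seen before fall back to <unknown>."""
    return [vocab.get(word, vocab["<unknown>"]) for word in text.lower().split()]

corpus = ["the machine reads the words", "the words are just numbers"]
vocab = build_vocab(corpus)

print(tokenise("the machine reads numbers", vocab))   # every word has an ID
print(tokenise("that feeling at dusk", vocab))        # nothing but <unknown> tokens
```

Run it and the second line of output is nothing but ‘unknown’ tokens, which is rather the point: whatever sits outside the training data is, to the machine, simply not there.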
To compensate for that, we need to become hyper-aware of all the information that sits outside what is being captured in training data and, rather than just accepting the ‘smart response’, critically assess the gaps. This means we will need to be more attuned to the ‘fuzziness’ of human experience, such as our own feelings, the emotional state of those around us, and the rich information that comes to us through the natural environment.
For some of you, this will sound a little bit woo-woo. That’s probably because we’ve been indoctrinated to ignore the fuzziness and instead focus on the concrete: the stuff we can collect hard data on, the stuff that is already being fed into the machines and which threatens to make us obsolete.
For others, it’s probably already obvious. The rise of meditative practices, the growing body of evidence around nature-based therapies, and the increasing interest in indigenous knowledge and wisdom all suggest this shift towards a more holistic approach to being human is already underway.
This is both a threat and an opportunity. As they say, change is hard, but some changes are certainly harder than others. A change like this one, where, as a society, we are forced to question our values and our collective identity, could be one of the hardest of all (and it’s likely this is already fuelling divisions between people).
At the same time, the current trajectory doesn’t necessarily look that attractive. When we are bombarded with ‘smart’ and everyone has access to ‘the answer’, life will increasingly feel like one long dinner party where you’re stuck sitting next to the mansplaining know-it-all…except everyone will be the know-it-all, and you won’t be getting dinner.