In 1948, Norbert Wiener wrote a supplemental note in Cybernetics, his foundational book on neural networks, systems design, and robotics.
In the note, he ponders whether humans would ever be capable of building a machine that could play chess. He concludes that, yes, it would be possible to build a machine that could beat the average human, but never the very best.
Forty-nine years later, when I was 11 years old, IBM’s Deep Blue beat Garry Kasparov, the world champion.
The vast majority of my life has been spent in a world where computers were significantly better than even the very best humans at playing chess.
So when I hear people say Large Language Models and modern AI will “never” be able to best humans at some given task, I reactively think, “Just you wait.”
Machines would never fly. Humans would never get to space. Computers would never play chess. And AI will never be truly creative.
Or so they said, and so they still say.
I don’t know what it will mean as AI improves—nobody knows what happens next. The Wright Brothers invented the airplane and were shocked by what people did with it.
We’ll all be surprised by what AI does, and what people do with it.
But we shouldn’t be surprised that it will get better, faster, and smarter than it is today.
So, the question is, what can we do to prepare for whatever happens?
How can we keep moving forward, knowing that history never ends, and that we’re all living in someone else’s past?
First, we pay attention—we observe what the people building these tools do, say, and make.
Then, we experiment—we test the tools and find their strengths and weaknesses.
Finally, we adapt—we integrate what works and what helps into our processes, and we mitigate the risks and hazards of both using and not using the tools.
And we never ignore.