Viewpoint

L. Graeme Smith: There is nothing inevitable about the future: On AI and the race to replace humanity

If we want a more human-centered future, we must insist upon it
A woman interacts with a robot at the Barbican exhibition centre in London, Wednesday, May 15, 2019. Frank Augstein/AP Photo.

Imagine your obsolescence, and imagine the thing that will bring it. Are you a writer, an artist, a creative-class hanger-on? Does it look, then, like a simple chatbox inviting you in for conversation, swinging open the digital door and offering you some tea, its black, blinking cursor an open(AI) invitation for you to initiate your relationship?

Sure, depending on your expectations, the first burbling responses that gurgle forth are more rudimentary than deep. It’s okay. You’re just getting to know each other. Give it a few (years? months? weeks?) and those babbles will burst into brilliance. Just a little more training. You’ll have a new Nabokov in no time, and you’ll just be you, struggling to keep up. You can always count on a murderer for a fancy prose style.

Is this it, then? 

A nuke, a nothingburger, an end to all things. ChatGPT has, at the least, announced itself with some fanfare. Whether or not its algorithms will in fact end the old world and birth the new, its arrival has, if nothing else, generated a buzz. 

Here at The Hub we have pitched our own voice to the hum. Here is Rudyard Griffiths explaining why everything has in fact changed, especially for the professional laptop class. He cites several articles generated in conversation with ChatGPT—framed, in our editorial voice, as exciting experiments with the future—and published to our pages. The foreword to each reads: 

At The Hub, we firmly believe that forward-looking optimism is an important part of creating a better future for Canada. It’s easy to embrace knee-jerk negativity and luddism, but that doesn’t help us build a better country. We’re determined to embrace the best parts of technology… even when it creeps us out.

But if this system is of a piece with the “best parts of technology”, the case is not made. Every new and obviously moral social theory, scientific endeavour, and technology has been promoted as such by its proponents. That does not make it true. Something “creeping you out” isn’t a perfectly reliable indicator that it is bad, per se, but it is a good indication that it deserves a sober second look.

And this new and improving AI technology capable of generating words, images, and audio representations of anything we can imagine is worth a cautious approach. Profits and efficiencies will surely be found. But these algorithms, feeding and shaping themselves on the data and detritus of every aspect of our lives, will reflect, in funhouse mirror fashion, ourselves back to us. Our creativity and accumulated knowledge, yes, but all of our sins and degradations too. 

Do not be shocked, then, if you upload some selfies and receive back child porn. Or if the phishing emails in your inbox are ever more sophisticated and convincing. Or if outright copyright theft becomes unavoidable. If you thought we had a problem with misinformation before, it should worry us that the algorithm has a tendency towards confidently making things up. The list goes on.

For all the assumptions that these prototypes will surely improve with time, their particular flaws ironed out as we go, it is worth understanding that the world’s leading AI companies fundamentally cannot control their AIs, and whatever safeguards are built in have, to this point, been easily bypassed.

Now, as for the people professionally threatened by these developments: I happen to think that those of us in the “sense-making” industries such as the media take ourselves, in general, far too seriously. We marvel at our own self-importance, delight in self-indulgence. I do not begrudge a hearty “Good!” shouted by those outside the laptop class as they see it slam headlong into the same digital disruptions that have crippled so many elsewhere already. But if overemphasis of our importance is one mistake, total diminishment is another. Preserving checks on power is all the more important when society is gifted expansive, potentially revolutionary new powers. Actual human judgment shouldn’t be taken for granted in this regard.

I will admit that much of my own “knee-jerk negativity” is in reaction to the idea of eliminating human participation in art and culture entirely, or diminishing it to the point of pointlessness. To what extent AI-generated output will ever completely overtake our culture is an open question. But a future with fewer artists and writers and musicians honing their craft and finding their expression through the struggle of the creative process is a worse one. The process of creating art shapes the artist as much as the artist shapes the art, and for the better. I am certain that eliminating struggle from the creative process altogether will only diminish the heights of what we can collectively achieve. I am admittedly and without apology a romantic about such things, and I believe the more incentives we have for real, critical engagement that requires discipline and mastery, the better.

Our culture is already stuck in iterative loops of stagnation. Perhaps these technologies will remain merely “creative caddies” helping us to reach new and better summits of achievement in these fields (or, more likely, create new fields entirely), but I suspect that outsourcing the hard parts of the process to a machine mind that is merely cannibalizing the collected IP of humanity will result in an abundance of bland, iterative content and a dearth of actual transcendence. And even more worryingly, we will convince ourselves otherwise. We see faces everywhere already. ChatGPT, author of the Quixote. Maybe that’s enough to sustain us. I’m not convinced.

Whatever the merits of this particular argument, it is, ultimately, beside the point I wish to make, which is simply that there is an actual argument to be had about all of this. There are better and worse outcomes for our world, and almost nothing is inevitable.

Whether the consequences of this new technological moment are good or bad (surely some of both), we do not need to consign ourselves from the outset to the myth of perpetual progress, to the belief that every new development must be embraced and encouraged and acquiesced to. Our ability to do something does not necessitate our doing so. 

“And remember, in a machine learning revolution of the means of production, there will be no limit on the scale or rate of change,” writes Griffiths. 

There will be no limits only if we preclude them from the outset. These technologies are tools, and ones we ourselves can decide whether, and how, to use.

Life is always about recognizing and choosing between tradeoffs. We get, then, what we prioritize. We live, for instance, in ugly grey concrete cities because that is what we have built. It could be otherwise. Such decline is a choice. If we want a more human-centered future, we must insist upon it.

Hard choices between profits and protections, efficiencies and safeguards, are made all the time. Restraint is not impossible.

We could clear-cut the Amazon and empty our oceans and pillage the earth of every resource for short-term profit, but we (mostly) don’t. We could simply kill off our poor and vulnerable instead of providing them with the support and resources they need, but we–wait, whoops.

We could replace Steph Curry with a robot, but where’s the thrill in that? We could, with the help of AI analysis, win every chess game we play, but does that really make you a winner? We could outsource every op-ed in The Hub to ChatGPT, but what would we really gain, and what would we lose in the process? 

The point being, again, that there is almost nothing inevitable about the world we inhabit and the shape it takes. What is good about what we have cultivated can be lost. What is bad can be corrected. 

Creating art and culture is as old and fundamental a human experience as there is. Excising ourselves almost entirely from the creative experience and delegating that aspect of our existence to our AI overlords is certain to have some consequences. So will blundering ahead and adopting every AI technology before fundamentally solving the alignment problem.

I’m only saying we should take a second to think it all over before we surrender without a fight. It may be easy to embrace knee-jerk luddism, but it’s even easier to get swept up in the uncritical promise of a grand new dawn, just over the horizon, or to simply lie down and let the wheels of history roll on over us.

Grand, beautiful, lasting, human-centered things are mostly laborious, expensive, inefficient, unprofitable, and require tremendous sacrifice. And yet we have done them anyway and can do so again. If the future is beginning to look a little dystopian to our eyes, then it is up to us to change it.
