
Howard Anglin: Now that the dystopian future is here, it may be too late to object


ChatGPT, the deep-learning program that recently surprised a lot of people with its demonstration of how close a machine can come to sounding like an unusually pedantic Wikipedia editor, has unnerved some white-collar workers who suddenly face technological redundancy. 

Until now, if you worked in what is (mostly undeservedly) called the “knowledge” industry, the machines had always come for other people’s jobs. But now that a computer program can synthesise information faster and write better than most people whose bullshit jobs consist of one or both of those tasks, hundreds of thousands of their jobs are likely to disappear as quickly as CEOs can figure out how to ask ChatGPT to draft a pink slip. 

Sure, a few former wordsmiths and middle managers will be retained to perform quality assurance, at least for a little while longer, but the blow to their collective professional egos will leave a permanent scar on the lower-upper-middle class. Six-plus years of post-secondary education, and now you’re editing a robot. It’ll take a lot of post-work Proseccos to soothe that ignominy.  

Next in the hierarchy of irrelevance will be the credentialed professions. People who grew up before the machines took over our lives may balk at trusting something as personal as their legal rights or their health to a computer, but we’re not far from the time when specialised programs will provide legal analysis and medical diagnoses as reliably as humans. Knowing lawyers and doctors, they will fight hard to protect their guild privileges, but you can only hold back a much cheaper competitor for so long. 

And after lawyers, why not judges? A judge in Colombia has already admitted to consulting ChatGPT in a case that asked whether an autistic minor was entitled to health insurance coverage. The judge, who insisted that the final judgement was his alone, told a local news outlet that he had asked the chatbot: “Is an autistic minor exonerated from paying fees for their therapies?” to which it answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” One hopes that the use of “exonerate” for “exempt” and other mistakes are errors of translation and not indicative of the quality of Colombian adjudication. 

The judge reportedly said that deep-learning programs should not replace judges, but that they can assist them, and that “by asking questions to the application we do not stop being judges, thinking beings.” He has a point. If ChatGPT can do legal research as reliably as, and much faster than, a law clerk, why not use it instead? We accept that a judge can ask a clerk, who may be of middling acumen or indifferent work ethic, to draft him a memo on a point of law, so why not the no-less reliable ChatGPT? 

I’ll go further. Contra the honourable judge, if a future deep-learning program can produce judgements that are as reliable as an old-fashioned flesh and blood judge, what would be the objection to replacing him and relieving the good burghers of Cartagena of the burden of paying his salary? 

The question reminds me of an illuminating binary I thought of a few years ago. “Illuminating binary” is my term for a simple yes/no question, the answer to which reveals a much larger set of assumptions, prejudices, and systemic preferences. The most famous illuminating binary was posed by Isaac Foot, the late Liberal MP and all-round nonconformist, who used to say: “I judge a man by one thing, which side would he have liked his ancestors to fight on at Marston Moor?” From the answer to that question, he believed he could discern the outlines of a man’s personality and his political philosophy. For what it’s worth, I hope my ancestors would have fought at the side of Prince Rupert of the Rhine, the heroic polymath, and his poodle, Boy.

So, I pose the question: if you were put on trial while innocent of a crime, would you prefer to be judged by a machine that is able to determine guilt with 99 percent accuracy, or by a marginally more fallible human judge?

The answer to me is obvious: I would rather face the higher risk of being judged wrongly by someone I can look in the eye and appeal to as a fellow human—someone with whom I can potentially reason after the fact, who may one day change his mind and show remorse—than face better odds with an impassive AI program.

I say that this is obvious to me, but I know that there are people who would just as certainly choose the machine, and I don’t believe either kind of person can really understand the other. At the simplest level, the divide has something to do with the relative primacy of reason and sentiment in how different people look at the world, but I think there is a deeper value in play: how much do we value efficiency? 

Efficiency comes in many forms. Accuracy is a type of efficiency, one which is clearly implicated in my hypothetical question. Time-saving is another, which is also relevant in an overloaded justice system. When these types of efficiency are considered from a personal perspective, they can be subsumed under the label of “convenience.” 

For some time, convenience has been the dominant motivation in our lives. Virtually all the major changes in the way we live over the last century have been motivated by convenience, and none of us is immune to its appeal. We’ve all succumbed to the siren of convenience in one way or another. From vacuum cleaners to Apple Watches, we’ve gradually accepted that anything that saves us time or provides us with more accurate feedback about the minutiae of our lives is a convenience worth adopting, without regard for where this parade of convenience is leading us.  

We got a glimpse of the end game recently courtesy of those reptilian high priests of rationalism at Davos, and it is terrifying. When I first saw the presentation by Duke University law professor Nita Farahany about the office of the near future, I had to check twice to make sure it wasn’t a Babylon Bee satire. You may think it’s neat that your phone can not only track how many steps you take in a day and measure your gait and balance, but wait till you see where Big Tech is taking that technology next.

The presentation begins with a short video, and if you haven’t seen it, please stop reading and take two minutes to watch it. It’s a cartoon scenario set in an office where you and your boss can both monitor your brain activity in real-time to measure your productivity, flag times of stress, deter inappropriate thoughts about co-workers, and, well, the possible intrusions are limited only by your employer’s rapacious amorality. The video ends with a worker being removed by security guards because his brain patterns mimic those of a colleague who has been caught defrauding the company. It makes Bentham’s panopticon look like the Unabomber’s cabin.

I have no hesitation in saying that what Professor Farahany is celebrating—and she is very clear about the fact that she is celebrating it—is evil. The glee with which she tells the audience that everything in the video is already possible, and that “after all, what you think, what you feel—it’s all just data” is demonic. No, professor, our brain activity is not “just data.” What a sociopathically reductive and dehumanised way of looking at the life of an embodied soul. Her office of the future-present is to humanity what a Tamagotchi is to pet ownership. 

Professor Farahany’s vision of the distant future (“within our lifetime”) is brain implants that bridge the human-technological divide to allow AI technology to “decode” our “complex thoughts” and provide reinforcement of “good” behaviour and deterrence of “bad” thoughts and activities. For now, though, the brain surveillance in the video uses “consumer wearable devices,” which she cheerfully describes as “like Fitbits for your brain.” Later she describes data monitoring through a “simple wearable watch.” If you own an Apple Watch, that’s your cue to crush it with a hammer.

After describing the productivity increases and health warnings that brain monitoring will make possible, Professor Farahany stresses that “I am giving you the positive use cases because what I don’t want the reaction to be is ‘let’s ban this.’” It’s the one time that she shows a hint of emotion: she really wants this future to happen. She is prepared to concede that this technology “has a dystopian possibility”—to which any sane person would respond: actually, it has no non-dystopian possibility—but she is imploring you to ignore the warnings from every sci-fi story ever written and trust her and her corporate bosses this time. Think of all the convenience!

We wanted convenience, and now we have it—or rather it has us.

If the video doesn’t set off your internal alarm, then I don’t know what to say. Maybe you’re one of those weirdos who looks forward to being judged by a deep-learning chatbot. But how much can any of us really object to a future of real-time brain monitoring? Didn’t we vote for this future with every purchase of every new technological breakthrough? Didn’t we make it inevitable when we never once said, maybe I won’t get a smartphone, or put the children in front of an iPad to keep them quiet, or log on to the nursery app?

We wanted convenience, and now we have it—or rather it has us. We have become slaves to convenience. In the name of efficiency, time-saving, and productivity, we have sleep-walked into an inhuman nightmare. Now that the dystopian future is here, it may be too late to object, but I’ll do it anyway. Sorry, Professor Farahany: let’s ban this.

Watching the video from Davos, two thoughts came to mind. First, why did the audience not riot, pelting the grinning harbinger of progress with the mini-quiches and crustless sandwiches laid out by catering? (Speaking of which, when did we stop booing bad performances? Does anyone other than the loggionisti at La Scala still do it? It’s time to bring it back.) And second, how did we get here?

I suspect the answer to the second question provides an answer to the first. I said earlier that we are all culpable. Unless you are reading this article on watermarked paper hand-copied by your scribe, you are partially to blame for what’s coming next—you and your addiction to the idea of convenience. We’ve each played our part in the progression from the cotton gin to the internal combustion engine, from the cathode ray tube to the microprocessor, from the cell phone to the smart thermostat, and from the Fitbit to real-time brain monitoring by our employers. The audience was in no position to object to the video: it was the future they had already bought into. Literally. 

That doesn’t mean, of course, we can’t still be surprised to learn where we were heading all along. Like Mike in The Sun Also Rises, who went bankrupt “gradually, then suddenly,” we went to sleep one day chuffed at being able to read email on our watches and woke up to our emotions being monitored at work by computer programs that can reproduce what we are visualising in our mind. 

The story began innocently enough. Early household technologies were marketed as time-saving conveniences for harried housewives. Instead of rolling up their sleeves and slopping about in soapy tubs with washboards, the woman of the future would be able to lounge on her divan, primly dressed and pertly coiffed, reading about the next breakthrough in home convenience. In the U.K., houses are still advertised as having all “mod cons”—“modern conveniences.”

Of course, the promised life of leisure never materialised. It turns out that our schedules abhor a vacuum almost as much as 1950s housewives loved them. We have never lived more convenient lives, and we’ve never been busier. This is one reason I am skeptical of promises of four-day work weeks and fully automated luxury communism—just think how exhausted we’d be by all that extra “free” time.

Apple CEO Tim Cook speaks in front of images of the Apple Watch during an announcement of new products at the Apple Worldwide Developers Conference Monday, June 4, 2018, in San Jose, Calif. Marcio Jose Sanchez/AP Photo.

We are overwhelmed with convenience. We wake up to convenient alarms, we drive cars packed with conveniences—music, GPS, cruise control, lane control, automatic braking, self-parking—through streets filled with drivers with one eye on the road and the other on their convenient smartphones. If we work from home, our meetings conveniently come to us via Zoom. We use convenient word processing programs that allow us to type and retype documents a hundred times. Just think, our grandparents had to make do with typing a document once and living with the consequences. The poor fools. 

All day we are, conveniently, reachable by email and pop-up messages on our monitors, calls and texts on our personal devices, and haptic notifications on our watches. And when the work day is over, we have the convenience of sitting passively in front of a screen as an algorithm chooses a show that we will like, or at least one we won’t dislike enough to turn off, while we scroll absently on a second device, chatting with people we don’t have to make the effort to see. Or we listen to music selected for us by yet another app. 

Our kitchens are full of household conveniences—microwaves, air fryers, food processors, blenders, convection ovens—but we’ve never eaten more prepared food and takeout. Never mind that it takes less time to make a meal than it does to deliver one, we are just too tired at the end of the day to bother. And that is assuming we had planned ahead and stopped by the “convenience” store to buy ingredients. Our children are mesmerised by mind-altering social media programs run by hostile foreign governments (but hey, it keeps them quiet), and when we go to bed we are pacified by conveniently soporific apps. 

Convenience is addictive. Once we got used to being able to receive messages from anywhere, the idea of waiting until we got home to check an answering machine became mentally intolerable. We tell ourselves that all this convenience is making our lives easier, but we are most anxious when a convenience fails. Tapping a credit card is only marginally faster than pushing four buttons, and using a card is only a few seconds faster than an exchange of cash. Yet we’ve got to the point where we roll our eyes if the card machine asks us to manually swipe and enter our PIN. What was the height of convenience a few years ago is now an inconvenience.

The incessant nerve-jangling mental stimulus, sleeplessness, and anxiety are obvious signs of our addiction to technological convenience, but they aren’t the only problems with it. Convenience makes life more antiseptic, more regular, and more boring. It detaches us from what we own, so our possessions no longer have a direct connection to our neighbourhoods, or even our countries. How much of what you own was made by someone whose name you know? It leaves us with fewer things we can touch and take apart, and more things that break easily and have to be replaced, not fixed, when they do. 

It is more convenient to be able to stream movies and music or to read books on a portable screen, but the price of that convenience is dependence. You don’t actually own that movie or song you just paid for, and if Big Tech later decides that it offends the sensitivity monitors in its corporate relations department, they can edit, censor, or disappear it without your permission. Netflix and Disney have already been caught bowdlerising old movies on their streaming services, and movies can be removed from your collection without your permission as a result of copyright disputes and regional licensing disparities. CDs and DVDs may be more inconvenient, but at least you own what you paid for. 

The idea that “you will own nothing and you’ll be happy,” which became the unofficial public motto of the WEF’s Great Reset, is a triple lie. First, you will not be happy. The more humans try to adapt to life in a machine world, the less happy they are. Second, not owning something doesn’t mean you don’t have to pay for it. Never has an apparent renunciation of worldly goods been so expensive. Finally, you won’t stop owning things. Our lives have never been so cluttered with cheap goods and digital subscriptions. Not owning anything turns out to mean spending a lot of money on things you still need but no longer control.

Our addiction to convenience has left most of us incapable of productive leisure.

We are obsessed with saving time so that we can … what? Spend more time wasting time? The old dream was of more leisure time in which we were all free to participate in the goods of civilisation. A mass leisure class would be able to live the lives of Renaissance princes. We would have more time to read Great Books, play musical instruments, learn languages, paint, draw, hunt, and master the art of conversation. But who does any of that?

Our addiction to convenience has left most of us incapable of productive leisure—if most of us ever were capable of it. We have been habituated to constant external stimulation, which is the enemy of reflection. We bore too easily to read a book without unconsciously reaching for our phones to make sure we haven’t missed any news, no matter how trivial, inane, or irrelevant. 

I can’t prove it, but I suspect the new conveniences are such potent distractions because they meet an evolutionary need. We are designed for struggle: against nature, against time, against each other. A life of leisure is not natural; it’s something we must adapt to if it isn’t to slip into a life of idleness. We have to cultivate an aptitude for leisure to avoid succumbing to the temptations of ephemeral pleasure. Since we introduced the screen into our homes, this has become much harder. The old cranks who railed against the evils of the “idiot box” weren’t wrong.

At least television and radio used to sign off. This may come as a surprise to anyone under 40, but there was a time when the television day would end. All the channels would go off the air, usually after playing the national anthem around midnight (the time differed by jurisdiction). Until the late 1950s, the BBC was only permitted to air 12 hours on weekdays and 8 hours on weekend days. Programming also ended abruptly at 6 pm for an hour—the so-called “toddler’s truce”—to allow parents to put their young children to bed. (There was a similar mandated break from 2 to 4 on Sundays so that children could do their Bible reading.) Now we have ubiquitous screens designed to catch and hold our attention. It is easy to satisfy atavistic instincts by consuming and expressing outrage over social media, finding in politics and ideology the agonistic outlets our ancestors found in tribal warfare. Social media companies and phone companies know this—heck, it’s their business model—and so far no governments have dared step in to regulate them the way we restrict other addictive products. And I can’t blame them. Would you try taking a toy away from an angry ape?

I don’t have a solution. I would support a Right to Inconvenience, a Charter of Inefficiency, but I don’t pretend it would be a political winner. There is something about the way we are wired to adapt to convenience that makes the denial of a technology we just adopted feel like intolerable deprivation. If the price of being able to count our steps is our employers being able to scan our brainwaves, I suspect most people will shrug and accept the intrusion as the price of convenience. 

Tap by tap, twitch by twitch, we are building the cage in which we will live the rest of our convenient lives. It will be convenient for us, convenient for our employers, and convenient for corporations—a perfect win-win-win situation. Only our humanity will be lost.

Patrick Luciani: Thomas Piketty is still wrong


Stephen Hawking’s A Brief History of Time once held the unofficial prize for the least-read book ever to become a bestseller. That prize—measured by the so-called Hawking Index—now belongs to the French economist Thomas Piketty’s Capital in the Twenty-First Century. The average reader managed to get to page 26 of this 700-page book before throwing in the towel. I believe the two sets of readers stopped for different reasons. Hawking’s readers wanted to know about the universe’s origins but found the book difficult to understand. Piketty’s readers weren’t interested in understanding rising wealth or income inequality but in confirming their preconceived idea that growing inequality is bad for the world and that capitalism is the cause. For many, it is enough to keep Piketty’s book nearby as a talisman. And the tedium of the translation from the French didn’t help. 

Piketty’s book was an enormous commercial success, selling over one and a half million copies, driven partly by endorsements from Paul Krugman of The New York Times and Joseph Stiglitz, both winners of the Nobel Prize in economics. If they liked the book, why bother reading it? Since its publication ten years ago, economists haven’t been kind to Piketty’s analysis or conclusions. Many studies counter Piketty’s interpretation of his data and his explanations for wealth and income disparities. The economist Tyler Cowen contends that wealth inequality tends to disappear once increasing land values are accounted for in places such as London and San Francisco. The same would probably apply to Toronto and Vancouver.

Piketty has never effectively responded to his critics. Chicago economist Deirdre McCloskey, in an extended essay, concludes that Piketty’s social theme is an “ethic of envy” and a defence of the idea that governments are the force behind economic prosperity. She concludes that his “economics is flawed from start to finish.”

Piketty has now followed up with his latest book, A Brief History of Equality, published last year. Instead of convincing his reader that markets don’t work, Piketty doubles down, insisting the state must crush income and wealth inequality with drastic socialist policies. The book reads more like a manifesto than an economic analysis. Piketty calls for “Participatory Socialism,” which includes high marginal tax rates and deep restrictions on inheritances and private property. He claims high progressive tax rates worked in the ’70s and will work again because they did not affect innovation and productivity (contrary to the economic logic of a tradeoff between equality and efficiency). Colonialism and slavery, he argues, built American wealth, not free markets. Without irony, he proclaims that were it not for the Soviet Union and the international communist movement, the West may not have “accepted Social Security, progressive income taxes, decolonization, and civil rights.”

A powerful rebuke to Piketty’s work comes in a new study, The Myth of American Inequality: How Government Biases Policy Debate, written by Phil Gramm, Robert Ekelund, and John Early. The book makes a strong case that everything we know about income inequality and poverty in the U.S. is wrong, and it answers Piketty’s primary claim that the rich benefit at the expense of the poor. Piketty claims that the trend toward greater inequality in the first half of the last century was interrupted by two world wars, and that inequality then started to rise again in the 1950s. 

The Myth shows that this isn’t the case in the U.S.: once transfer payments—the majority of income for the bottom 40 percent of the population—are counted, much of Piketty’s inequality disappears. 

The authors also note that Piketty exaggerates the income of high earners by imputing unrealized capital gains. When both factors are considered, they conclude that income inequality today is lower than it was in 1947. As for Piketty’s call for a return to the high marginal tax rates of the 1960s, when the top rate was 91 percent, only 447 of 71 million tax filers actually paid that rate. High tax rates collect very little revenue.

Revenue from top earners rose dramatically as the top rate fell to 28 percent in the late ’80s. On the question of the superrich paying their fair share, the authors of The Myth of American Inequality remind us that Bill Gates created new economic value and the benefits that follow. His wealth comes from owning 7 percent of Microsoft, with the rest held by pension and mutual funds. Another 150,000 people worldwide earn good incomes working for the company, creating further wealth. 

Piketty is in no mood to explain or justify his position to his critics, insisting that inequality is a moral issue, even more important than poverty. Here he tangles with the moral philosopher Harry Frankfurt, who argues that the real problem is ensuring the poor have enough, not fretting about the rich. For Piketty, the poor are of concern only in the abstract. 

A fair-minded reader looking to make sense of the problem of inequality could do worse than picking up a copy of The Myth of American Inequality. Unfortunately, the odds that people will change their minds are depressingly low. Once opinions are formed, they tend to stay that way. We look for reassurance, not enlightenment.