
Harry Rakowski: Dealing with the obesity epidemic: New drugs could be a gamechanger


Weight gain has been a common occurrence during the pandemic. Many people felt that the virus infected their clothes and made them “shrink in their closet” as they exercised less because of social isolation and ate more due to stress. 

The trend of gaining weight is not new. Obesity rates started climbing in the late 1970s and corresponded to the rise of eating out and consuming fast foods and high-carb meals. The Centers for Disease Control and Prevention estimated that in 2000, about 30.5 percent of Americans were obese and by 2018, about 42.4 percent had reached that weight level. Interestingly, the greatest weight gain occurred in people’s mid-20s to mid-30s, with an average weight gain of 17.6 pounds in that decade of life. Subsequent weight gain averaged a pound a year.

The epidemic of obesity was also driven by an addiction to supersized portions of foods with high fat, salt, and sugar starting in childhood. We continue to have more sedentary work and spend more time on devices and less on exercise. It is not surprising that this trend has been associated with growing rates of diabetes along with its health risks and complications. 

You can determine whether you are at your ideal weight by calculating your body mass index (BMI). BMI represents your weight indexed to your height and provides a measure of whether you are in the healthy range of 18.5 to 24.9. A BMI of 25 to 29.9 indicates being overweight, over 30 is in the obesity range, and over 40 defines severe obesity.
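As a rough illustration of the calculation and the thresholds above (the function names here are my own, and metric units are assumed), BMI is simply weight in kilograms divided by the square of height in metres:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Map a BMI value to the ranges described in the article."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"
    if value < 30:
        return "overweight"
    if value < 40:
        return "obese"
    return "severely obese"

# Example: a 70 kg person who is 1.75 m tall has a BMI of about 22.9,
# which falls in the healthy range.
```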

Obesity is now recognized as a disease with very negative and expensive public health outcomes. In addition to diabetes, it leads to a much higher risk of hypertension, stroke, breast and colon cancer, and degenerative arthritis. 

Obesity is caused by an interplay of genetic and environmental causes. Brain imaging studies have shown that there is an important genetic link in obesity affecting parts of the brain involved in impulse control, reward processing, and how we determine whether we are full when we eat. Physicians who simply advise and shame obese people by saying “all you have to do is eat less and exercise more” aren’t providing ideal advice. Managing obesity is challenging and requires a combination of eating better, exercising more, reducing stress eating, and when necessary using newer, safer medication. 

Diets often simply lead to yo-yo-ing, with alternating weight loss and weight gain driven by the feelings of deprivation that come with being on a “diet”.

While not overeating remains important, eating “healthy” is superior to dieting. A Mediterranean diet of fruits, vegetables, beans, nuts, fresh fish, and olive oil has been shown to control weight and increase life expectancy. 

A routine combining 30 minutes of cardiovascular exercise with resistance training and stretching will also increase wellness and prevent frailty.

It is of course tempting to just be able to take a pill to fix the problem. In the 1990s, Fen-Phen, a combination of two drugs, was approved with excellent weight loss benefits. The drug worked by increasing brain satiety and reducing appetite through its effects on the brain chemicals serotonin and dopamine. Unfortunately, in rare cases the drugs caused life-threatening high pressures in the arteries to the lungs by constricting blood vessels. They could also stimulate receptors in heart valves, leading to scarring resembling the effects of rheumatic valve disease. This led to the withdrawal of the combination and over $20 billion in lawsuits in the U.S. alone. Other drugs, such as rimonabant, also had to be stopped because of excessive side effects such as depression and suicidal thoughts.

That often left bariatric surgery as the best way to reduce stomach size and tell the brain that you were full earlier. The surgery could be done by an abdominal incision with stomach stapling or bypass, or by placing a stomach-constricting band laparoscopically. These operations, while useful, again carried risk, and the laparoscopic version was not always covered by insurance.

The hottest current treatment is the use of drugs such as semaglutide and tirzepatide that treat diabetes and have very few serious side effects. While they have been popularized by stars and influencers posting about taking them inappropriately for minor weight loss, their greatest role is in managing true obesity. The drugs were approved for use in diabetics to reduce blood sugar and were found to have the additional benefit of significant weight loss.

Newer randomized trials focusing on weight loss alone showed about a 17 percent reduction in weight with semaglutide (Ozempic/Wegovy) and more than 20 percent with tirzepatide (Mounjaro), levels typically achieved by bariatric surgery. Only about 5 percent had to stop the drugs for intolerable side effects such as nausea, compared to 1-2 percent of the placebo control group. Rare, more serious side effects include pancreatic inflammation. 

The drugs work by mimicking gut hormones (GLP-1, and also GIP in the case of tirzepatide) that are secreted after eating food. While the two drugs’ modes of action differ, they both increase insulin production, reduce insulin resistance, delay stomach emptying, and fool the brain into thinking you are full sooner.

These drugs have become so pervasively popular in reducing weight that overuse by those who are not obese has led to shortages of the drugs for diabetics who need them most. Access for those who are truly obese has also been limited by cost. While U.S. insurers will pay for the drugs when used for diabetes, too many still refuse to pay for obesity treatment, falsely arguing that obesity is simply a lifestyle choice rather than a true disease. Since the U.S. drug cost is about $12,000 per year, few uninsured people can afford it. The price in Canada is about one-third of that.

Obesity remains an all-too-common disease with multifactorial causes and is often resistant to intervention. For those with a BMI over 30, new drug use can help jump-start a comprehensive plan for successful, long-term weight loss. These drugs shouldn’t be abused as social drugs. Insurers need to pay for their use when the level of obesity creates long-term risks and treatment will save both lives and downstream costs. Pharma companies also have to temper their greed, lower prices, and provide free drugs to clinics caring for the under-serviced who often need them most.

The treatment of obesity requires compassion, education, better lifestyle choices, and affordable new drug therapy. Complex problems require thoughtful integrated solutions. Taking an important new drug can help break the frustration of not being able to overcome obesity. It isn’t a magic bullet and the effect wears off when you stop the drug. Its use needs to be complemented by eating less unhealthy addictive food that damages your microbiome and impairs gut health.

While obesity is a disease and genetics plays a role in its development, our behaviour modulates the effects of genetics. We often use food as an anti-anxiety drug. While we now have drugs that can safely help treat obesity, they won’t work without also working on our emotional health and what drives our addiction to the wrong foods. 

Howard Anglin: Now that the dystopian future is here, it may be too late to object


ChatGPT, the deep-learning program that recently surprised a lot of people with its demonstration of how close a machine can come to sounding like an unusually pedantic Wikipedia editor, has unnerved some white-collar workers who suddenly face technological redundancy. 

Until now, if you worked in what is (mostly undeservedly) called the “knowledge” industry, the machines had always come for other people’s jobs. But now that a computer program can synthesise information faster and write better than most people whose bullshit jobs consist of one or both of those tasks, hundreds of thousands of their jobs are likely to disappear as quickly as CEOs can figure out how to ask ChatGPT to draft a pink slip. 

Sure, a few former wordsmiths and middle managers will be retained to perform quality assurance, at least for a little while longer, but the blow to their collective professional egos will leave a permanent scar on the lower-upper-middle class. Six-plus years of post-secondary education, and now you’re editing a robot. It’ll take a lot of post-work Proseccos to soothe that ignominy.  

Next in the hierarchy of irrelevance will be the credentialed professions. People who grew up before the machines took over our lives may balk at trusting something as personal as their legal rights or their health to a computer, but we’re not far from the time when specialised programs will provide legal analysis and medical diagnoses as reliably as humans. Knowing lawyers and doctors, they will fight hard to protect their guild privileges, but you can only hold back a much cheaper competitor for so long. 

And after lawyers, why not judges? A judge in Colombia has already admitted to consulting ChatGPT in a case that asked whether an autistic minor was entitled to health insurance coverage. The judge, who insisted that the final judgement was his alone, told a local news outlet that he had asked the chatbot: “Is an autistic minor exonerated from paying fees for their therapies?” to which it answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” One hopes that the use of “exonerate” for “exempt” and other mistakes are errors of translation and not indicative of the quality of Colombian adjudication.

The judge reportedly said that deep-learning programs should not replace judges, but that they can assist them, and that “by asking questions to the application we do not stop being judges, thinking beings.” He has a point. If ChatGPT can do legal research as reliably as, and much faster than, a law clerk, why not use it instead? We accept that a judge can ask a clerk, who may be of middling acumen or indifferent work ethic, to draft him a memo on a point of law, so why not the no-less-reliable ChatGPT?

I’ll go further. Contra the honourable judge, if a future deep-learning program can produce judgements that are as reliable as an old-fashioned flesh and blood judge, what would be the objection to replacing him and relieving the good burghers of Cartagena of the burden of paying his salary? 

The question reminds me of an illuminating binary I thought of a few years ago. “Illuminating binary” is my term for a simple yes/no question, the answer to which reveals a much larger set of assumptions, prejudices, and systemic preferences. The most famous illuminating binary was posed by Isaac Foot, the late Liberal MP and all-round nonconformist, who used to say: “I judge a man by one thing, which side would he have liked his ancestors to fight on at Marston Moor?” From the answer to that question, he believed he could discern the outlines of a man’s personality and his political philosophy. (For what it’s worth, I hope my ancestors would have fought at the side of Prince Rupert of the Rhine, the heroic polymath, and his poodle, Boy.)

So, I pose the question: if you were put on trial while innocent of a crime, would you prefer to be judged by a machine that is able to determine guilt with 99 percent accuracy, or by a marginally more fallible human judge?

The answer to me is obvious: I would rather face the higher risk of being judged wrongly by someone I can look in the eye and appeal to as a fellow human—someone with whom I can potentially reason after the fact, who may one day change his mind and show remorse—than face better odds with an impassive AI program.

I say that this is obvious to me, but I know that there are people who would just as certainly choose the machine, and I don’t believe either kind of person can really understand the other. At the simplest level, the divide has something to do with the relative primacy of reason and sentiment in how different people look at the world, but I think there is a deeper value in play: how much do we value efficiency? 

Efficiency comes in many forms. Accuracy is a type of efficiency, one which is clearly implicated in my hypothetical question. Time-saving is another, which is also relevant in an overloaded justice system. When these types of efficiency are considered from a personal perspective, they can be subsumed under the label of “convenience.” 

For some time, convenience has been the dominant motivation in our lives. Virtually all the major changes in the way we live over the last century have been motivated by convenience, and none of us is immune to its appeal. We’ve all succumbed to the siren of convenience in one way or another. From vacuum cleaners to Apple Watches, we’ve gradually accepted that anything that saves us time or provides us with more accurate feedback about the minutiae of our lives is a convenience worth adopting, without regard for where this parade of convenience is leading us.  

We got a glimpse of the end game recently courtesy of those reptilian high priests of rationalism at Davos, and it is terrifying. When I first saw the presentation by Duke University law professor Nita Farahany about the office of the near future, I had to check twice to make sure it wasn’t a Babylon Bee satire. You may think it’s neat that your phone can not only track how many steps you take in a day and measure your gait and balance, but wait till you see where Big Tech is taking that technology next.

The presentation begins with a short video, and if you haven’t seen it, please stop reading and take two minutes to watch it. It’s a cartoon scenario set in an office where you and your boss can both monitor your brain activity in real-time to measure your productivity, flag times of stress, deter inappropriate thoughts about co-workers, and, well, the possible intrusions are limited only by your employer’s rapacious amorality. The video ends with a worker being removed by security guards because his brain patterns mimic those of a colleague who has been caught defrauding the company. It makes Bentham’s panopticon look like the Unabomber’s cabin.

I have no hesitation in saying that what Professor Farahany is celebrating—and she is very clear about the fact that she is celebrating it—is evil. The glee with which she tells the audience that everything in the video is already possible, and that “after all, what you think, what you feel—it’s all just data” is demonic. No, professor, our brain activity is not “just data.” What a sociopathically reductive and dehumanised way of looking at the life of an embodied soul. Her office of the future-present is to humanity what a Tamagotchi is to pet ownership. 

Professor Farahany’s vision of the distant future (“within our lifetime”) is brain implants that bridge the human-technological divide to allow AI technology to “decode” our “complex thoughts” and provide reinforcement of “good” behaviour and deterrence of “bad” thoughts and activities. For now, though, the brain surveillance in the video uses “consumer wearable devices,” which she cheerfully describes as “like Fitbits for your brain.” Later she describes data monitoring through a “simple wearable watch.” If you own an Apple Watch, that’s your cue to crush it with a hammer.

After describing the productivity increases and health warnings that brain monitoring will make possible, Professor Farahany stresses that “I am giving you the positive use cases because what I don’t want the reaction to be is ‘let’s ban this.’” It’s the one time that she shows a hint of emotion: she really wants this future to happen. She is prepared to concede that this technology “has a dystopian possibility”—to which any sane person would respond: actually, it has no non-dystopian possibility—but she is imploring you to ignore the warnings from every sci-fi story ever written and trust her and her corporate bosses this time. Think of all the convenience!


If the video doesn’t set off your internal alarm, then I don’t know what to say. Maybe you’re one of those weirdos who looks forward to being judged by a deep-learning chatbot. But how much can any of us really object to a future of real-time brain monitoring? Didn’t we vote for this future with every purchase of every new technological breakthrough? Didn’t we make it inevitable when we never once said, maybe I won’t get a smartphone, or put the children in front of an iPad to keep them quiet, or log on to the nursery app?

We wanted convenience, and now we have it—or rather it has us. We have become slaves to convenience. In the name of efficiency, time-saving, and productivity, we have sleep-walked into an inhuman nightmare. Now that the dystopian future is here, it may be too late to object, but I’ll do it anyway. Sorry, Professor Farahany: let’s ban this.

Watching the video from Davos, two thoughts came to mind. First, why did the audience not riot, pelting the grinning harbinger of progress with the mini-quiches and crustless sandwiches laid out by catering? (Speaking of which, when did we stop booing bad performances, does anyone other than the loggionisti at La Scala still do this? It’s time to bring it back.) And second, how did we get here?

I suspect the answer to the second question provides an answer to the first. I said earlier that we are all culpable. Unless you are reading this article on watermarked paper hand-copied by your scribe, you are partially to blame for what’s coming next—you and your addiction to the idea of convenience. We’ve each played our part in the progression from the cotton gin to the internal combustion engine, from the cathode ray tube to the microprocessor, from the cell phone to the smart thermostat, and from the Fitbit to real-time brain monitoring by our employers. The audience was in no position to object to the video: it was the future they had already bought into. Literally. 

That doesn’t mean, of course, we can’t still be surprised to learn where we were heading all along. Like Mike in The Sun Also Rises, who went bankrupt “gradually, then suddenly,” we went to sleep one day chuffed at being able to read email on our watches and woke up to our emotions being monitored at work by computer programs that can reproduce what we are visualising in our mind. 

The story began innocently enough. Early household technologies were marketed as time-saving conveniences for harried housewives. Instead of rolling up their sleeves and slopping about in soapy tubs with washboards, the woman of the future would be able to lounge on her divan, primly dressed and pertly coiffed, reading about the next breakthrough in home convenience. In the U.K., houses are still advertised as having all “mod cons”—“modern conveniences.”

Of course, the promised life of leisure never materialised. It turns out that our schedules abhor a vacuum almost as much as 1950s housewives loved them. We have never lived more convenient lives, and we’ve never been busier. This is one reason I am skeptical of promises of four-day work weeks and fully automated luxury communism—just think how exhausted we’d be by all that extra “free” time.


We are overwhelmed with convenience. We wake up to convenient alarms, we drive cars packed with conveniences—music, GPS, cruise control, lane control, automatic braking, self-parking—through streets filled with drivers with one eye on the road and the other on their convenient smartphones. If we work from home, our meetings conveniently come to us via Zoom. We use convenient word processing programs that allow us to type and retype documents a hundred times. Just think, our grandparents had to make do with typing a document once and living with the consequences. The poor fools. 

All day we are, conveniently, reachable by email and pop-up messages on our monitors, calls and texts on our personal devices, and haptic notifications on our watches. And when the work day is over, we have the convenience of sitting passively in front of a screen as an algorithm chooses a show that we will like, or at least one we won’t dislike enough to turn off, while we scroll absently on a second device, chatting with people we don’t have to make the effort to see. Or we listen to music selected for us by yet another app. 

Our kitchens are full of household conveniences—microwaves, air fryers, food processors, blenders, convection ovens—but we’ve never eaten more prepared food and takeout. Never mind that it takes less time to make a meal than it does to deliver one, we are just too tired at the end of the day to bother. And that is assuming we had planned ahead and stopped by the “convenience” store to buy ingredients. Our children are mesmerised by mind-altering social media programs run by hostile foreign governments (but hey, it keeps them quiet), and when we go to bed we are pacified by conveniently soporific apps. 

Convenience is addictive. Once we got used to being able to receive messages from anywhere, the idea of waiting until we got home to check an answering machine became mentally intolerable. We tell ourselves that all this convenience is making our lives easier, but we are most anxious when a convenience fails. Tapping a credit card is only marginally faster than pushing four buttons, and using a card is only a few seconds faster than an exchange of cash. Yet we’ve got to the point where we roll our eyes if the card machine asks us to manually swipe and enter our PIN. What was the height of convenience a few years ago is now an inconvenience.

The incessant nerve-jangling mental stimulus, sleeplessness, and anxiety are obvious signs of our addiction to technological convenience, but they aren’t the only problems with it. Convenience makes life more antiseptic, more regular, and more boring. It detaches us from what we own, so our possessions no longer have a direct connection to our neighbourhoods, or even our countries. How much of what you own was made by someone whose name you know? It leaves us with fewer things we can touch and take apart, and more things that break easily, and have to be replaced, not fixed, when they break. 

It is more convenient to be able to stream movies and music or to read books on a portable screen, but the price of that convenience is dependence. You don’t actually own that movie or song you just paid for, and if Big Tech later decides that it offends the sensitivity monitors in its corporate relations department, they can edit, censor, or disappear it without your permission. Netflix and Disney have already been caught bowdlerising old movies on their streaming services, and movies can be removed from your collection without your permission as a result of copyright disputes and regional licensing disparities. CDs and DVDs may be more inconvenient, but at least you own what you paid for. 

The idea that “you will own nothing and you’ll be happy,” which became the unofficial public motto of the WEF’s Great Reset, is a triple lie. First, you will not be happy. The more humans try to adapt to life in a machine world, the less happy they are. Second, not owning something doesn’t mean you don’t have to pay for it. Never has an apparent renunciation of worldly goods been so expensive. Finally, you won’t stop owning things. Our lives have never been so cluttered with cheap goods and digital subscriptions. Not owning anything turns out to mean spending a lot of money on things you still need but no longer control.


We are obsessed with saving time so that we can … what? Spend more time wasting time? The old dream was of more leisure time in which we were all free to participate in the goods of civilisation. A mass leisure class would be able to live the lives of Renaissance princes. We would have more time to read Great Books, play musical instruments, learn languages, paint, draw, hunt, and master the art of conversation. But who does any of that?

Our addiction to convenience has left most of us incapable of productive leisure—if most of us were ever capable of it. We have been habituated to constant external stimulation, which is the enemy of reflection. We become bored too easily to read a book without unconsciously reaching for our phones to make sure we haven’t missed any news, no matter how trivial, inane, or irrelevant.

I can’t prove it, but I suspect the new conveniences are such potent distractions because they meet an evolutionary need. We are designed for struggle: against nature, against time, against each other. A life of leisure is not natural; it’s something we must adapt to if it isn’t to slip into a life of idleness. We have to cultivate an aptitude for leisure to avoid succumbing to the temptations of ephemeral pleasure. Since we introduced the screen into our homes, this has become much harder. The old cranks who railed against the evils of the “idiot box” weren’t wrong.

At least television and radio used to sign off. This may come as a surprise to anyone under 40, but there was a time when the television day would end. All the channels would go off the air, usually after playing the national anthem around midnight (the time differed by jurisdiction). Until the late 1950s, the BBC was only permitted to air 12 hours on weekdays and 8 hours on weekend days. Programming also ended abruptly at 6 pm for an hour—the so-called “toddler’s truce”—to allow parents to put their young children to bed. (There was a similar mandated break from 2 to 4 on Sundays so that children could do their Bible reading.) Now we have ubiquitous screens designed to catch and hold our attention. It is easy to satisfy atavistic instincts by consuming and expressing outrage over social media, finding in politics and ideology the agonistic outlets our ancestors found in tribal warfare. Social media companies and phone companies know this—heck, it’s their business model—and so far no governments have dared step in to regulate them, the way we restrict other addictive products. And I can’t blame them. Would you try taking a toy away from an angry ape?

I don’t have a solution. I would support a Right to Inconvenience, a Charter of Inefficiency, but I don’t pretend it would be a political winner. There is something about the way we are wired to adapt to convenience that makes the denial of a technology we just adopted feel like intolerable deprivation. If the price of being able to count our steps is our employers being able to scan our brainwaves, I suspect most people will shrug and accept the intrusion as the price of convenience. 

Tap by tap, twitch by twitch, we are building the cage in which we will live the rest of our convenient lives. It will be convenient for us, convenient for our employers, and convenient for corporations—a perfect win-win-win situation. Only our humanity will be lost.