The Liberals’ flagship legislative announcement for 2024, the Online Harms Act, has had a bumpy journey through Parliament. Sold by Justice Minister Arif Virani as a comprehensive four-part overhaul that would update internet regulation and criminal laws to protect victims, children, and minorities, the bill was immediately criticized by civil liberties groups from across the political spectrum, including my own, for its draconian approach to regulating and criminalizing speech.
The bill proposed to create a new regulator to police “harmful” content on social media (part one); to beef up existing criminal law provisions governing hate speech, including by creating a new criminal offence for any crime “motivated by hatred” that could carry a sentence of up to life in prison (part two); to give the Canadian Human Rights Commission the power to fine people for online hate speech (part three); and to create mandatory reporting requirements for child pornography on platforms (part four).
In the face of widespread opprobrium, Virani announced earlier this month that the Liberals will split parts one and four off from parts two and three, with the former pair forming a new bill that will be prioritized, presumably to give these comparatively less divisive parts a better chance at passing.
But in a committee study this week, in which I participated as a witness representing the Canadian Constitution Foundation, I made clear that even the first of the two new bills still poses a grave risk to free speech. While part four is uncontroversial and could likely pass with unanimous consent from all parties, part one would create a digital safety commissioner with vast, unchecked powers to police “harmful content,” at an estimated cost of $200 million over the next five years. This would create such enormous regulatory risks for platforms that they would likely feel they have no choice but to censor large amounts of protected speech on social media.
Unfortunately, a bill being pushed by Conservative MP Michelle Rempel Garner as an alternative is not much better, and is possibly worse. Bill C-412 appears much wider in scope: it would apply to “an online service or application” (by my reading, virtually the whole internet), in contrast to part one of C-63, which would apply only to social media platforms and some pornographic websites. Bill C-412 would impose a new duty on platforms essentially to create a cordon sanitaire, or parallel sanitized internet, for anyone under the age of 18, violating the constitutional rights and freedoms of teenagers.
Platforms would be obligated to mitigate the risk that minors would be exposed not only to content that most people would agree is harmful to kids and that could be relatively easily filtered out, such as advertisements for cannabis or sexual abuse imagery, but also to much less obviously harmful content, including content “harmful to [a minor’s] dignity,” content that “invades their privacy,” or content that might promote “anxiety” or “loneliness.” These are incredibly vague and variable terms.
Fines for non-compliance, which would be applied by the CRTC, would range up to $25 million, much higher than the $10 million proposed under the Online Harms Act. Individuals or their parents could sue for even greater damages if they could show they had suffered “serious harm” from a failure to mitigate, including “substantial damage to reputation or relationships.” The scope of regulatory risk is so broad that one can’t rule out platforms simply banning access to minors altogether.
Australia has just passed a law banning anyone under 16 from social media. There, too, questions remain about how platforms are supposed to implement the required “reasonable steps” to keep minors out.
Bill C-412 would also forbid a minor from holding an account “without verifying the contact information for any of the user’s parents.” In addition to creating privacy risks, this means that minors would need the explicit participation and permission of a parent before accessing the internet. That could create a precarious situation for teenagers seeking access to valuable information. Do we really want a teenager in an abusive household, looking for resources on how to get help and uncomfortable asking a teacher at school, to have to get a parent to verify their information before using any app or social media platform? Do we really want to prevent a minor living in a strict religious household where the internet is forbidden from seeking different perspectives on a computer at their school library? Do we really want 17-year-old college students to be unable to do proper academic research because they can’t access the full internet?
Kids have rights too. The Supreme Court has recognized that 16-year-olds who have religious objections to life-saving blood transfusions can make their own life-or-death medical decisions, so it’s difficult to see how the government could prevent them from using the full internet. While there’s no question that predators, scammers, and bad actors exist online, there are also wonderful mentors, a world of rich information, and helpful peers. I should know: when I was a 14-year-old suffering from anorexia, it was other girls with eating disorders, whom I met on online message boards, who saved me from my harrowing loneliness and convinced me that recovery was possible. A few remain real-life friends.
As Greg Lukianoff recently wrote in relation to the U.S.’s newly proposed Kids Online Safety Act, which suffers from similar problems, “when you are trying to solve a problem that relates to expression or, as here, expressive platforms, you should exhaust every possible solution before focusing on top-down legislative solutions that will almost always have unintended consequences.”
Before moving forward with heavy-handed regulation that will have untold negative consequences for rights, governments could focus on better enforcing existing criminal laws against the real threats that kids face online from predators, and on applying appropriate penalties where warranted. Sometimes the cure is worse than the disease.