
Joanna Baron: The government doubles down on censoring the internet

Commentary

The internet is an ugly place. A few days ago, my colleagues gave me a heads-up that a Twitter account was calling me out as a [sic] “Caucozoidic Jew” in charge of a “Jew charity.” I take the word “Caucozoidic” to mean I’m a fair-skinned Jew. The latter part is a bit more confusing because the charity I direct has no religious or ethnic affiliations. I laughed and blocked the account, but the Liberals’ newly proposed law suggests that not acting on these kinds of tweets could leave Elon Musk on the hook for 6 percent of X’s global revenues. 

That’s just one example of the absurdities that could result if Minister of Justice Arif Virani’s revamped Online Harms Act passes in Parliament. Bill C-63 is aimed at regulating a wide swath of undesirable online conduct, from child sexual exploitation material—already criminalized—to the more amorphous “content that foments hatred.”

The Liberals’ decision to deal with this range of conduct in one fell swoop perhaps distracts from the reality that the Online Harms Act is a profoundly anti-free expression bill that threatens draconian penalties for online speech, chilling legitimate expression by the mere spectre of a complaint to the Canadian Human Rights Commission or the new Digital Safety Commission of Canada.

The bill beefs up criminal penalties for instances of hate speech and creates a new standalone “offence motivated by hatred.” The hate crime of “advocating genocide,” previously punishable by up to five years’ imprisonment, now carries the possibility of life imprisonment. Advocating genocide is evil, but it’s stunning to think someone in a free society could spend life in prison for their words.

Previously, a finding that a crime was motivated by hate could be considered an aggravating factor in sentencing. Now, an “offence motivated by hatred” is a separate offence that police can charge and prosecutors can pursue. Virani proudly touted this new standalone offence as allowing offences motivated by hate to be charged “on the front end,” suggesting the government is signalling to law enforcement to seek out, charge, and prosecute more such crimes.

Even worse, the bill provides a preventative criminal restraint on suspected future speech: anyone, with the attorney general’s consent, can ask a judge to order a 12-month “recognizance to keep the peace” if they have reasonable grounds to suspect that someone might commit hate speech in the future. If the person targeted agrees to the recognizance, they can be subject to major restrictions on liberty, such as giving a bodily sample, refraining from drugs and alcohol, and wearing an ankle monitor. If they refuse, they can be imprisoned.

The bill also revives a civil remedy for communicating alleged instances of hate speech by resurrecting the dreaded section 13 of the Canadian Human Rights Act, specifying that such instances constitute discrimination and are liable to be investigated by the Canadian Human Rights Tribunal. The tribunal will expand to as many as 20 government-appointed bureaucrats tasked with policing allegations of harmful speech. Even if most alleged instances are dismissed as not meeting the threshold of hate speech, the penalties for individuals found liable—up to $50,000 paid to the government plus $20,000 paid to the victim—are severe enough that we can expect the new regime to lead to large amounts of backpedalling and self-censorship by people accused of crossing the line. We will also see more people punished for their speech, considering that section 13 creates a civil offence that need only be proved on a “balance of probabilities,” a much lower bar than the criminal law’s threshold of “beyond a reasonable doubt.”

The bill makes extensive use of what has been called “jawboning”—delegating to and pressuring social media platforms to themselves take steps to police their users. Platforms are tasked with a “duty to act responsibly” and minimize harms to users. They must provide a mechanism for users to flag “harmful content,” which is defined as including speech that “foments hatred.” As a sop to free-expression concerns, the bill clarifies that platforms may limit users’ free expression—just not “disproportionately.” If platforms don’t comply with the bill’s stipulations, they are on the hook for $10M in fines or 6 percent of global revenues—whichever is higher.

The press conference outlining Bill C-63 was led off, somewhat unusually, not by the Minister but by a woman whose toddler was the victim of tragic sexual abuse that was filmed and disseminated online. Her harrowing testimony foregrounded some of the bill’s priorities, which are obviously laudable. That said, viewing or distributing child sexual exploitation material is already strictly criminalized. It’s also good news that the only harmful content that must be removed within 24 hours is child sexual exploitation material and revenge porn; the earlier iteration of the bill, which died on the order paper in 2021, also required alleged hate speech to be taken down within 24 hours. This positive change is not surprising, considering that Germany’s attempts to impose takedown requirements on alleged hate speech have resulted in over-enforcement by platforms and a chill on edgy-but-legal speech.

But the Liberals’ decision to highlight the woman’s story as the kernel of the bill’s motivation reveals their broader strategy to merge two very different types of social ills which merit two different legislative responses. Child sexual exploitation is evil and should be subject to a strict zero-tolerance approach. Platforms have increasingly sophisticated algorithms for detecting and flagging it, and police have specialized training in investigating it. It is indeed appropriate to act decisively to ensure the physical and psychological safety of children online, but that should not be tied to laws that severely restrict speech.

A person works on a tablet computer in Ottawa on Wednesday, April 19, 2023. Sean Kilpatrick/The Canadian Press.

Online hate speech is categorically different, amorphous, and unavoidably subjective. The bill adopts the Supreme Court’s definition of hate speech, described as speech that is “likely to foment detestation or vilification.” But “detestation” is really just a synonym for “hate,” and “vilification” is also a highly subjective concept. On the Left, calling someone a TERF (Trans Exclusionary Radical Feminist) is a form of vilification, while on the Right, some wear the term as a badge of honour. Just last week, Toronto Star columnist Shree Paradkar penned a morally abhorrent column defending Hamas as a legitimate governing force and was promptly called a terror apologist on X. Did Paradkar’s critics vilify her? Conversely, did the Toronto Star vilify Jews and Israelis? It’s in the eye of the beholder.

Though these statements arguably constitute vilification, it’s unlikely that any of these allegations would end up being investigated as hate crimes. Nevertheless, it’s also difficult to put your finger on the line that separates them from criminal hate speech, which, as I mentioned, now carries possible penalties of life imprisonment in the case of advocating genocide and, for social media companies, potentially millions of dollars in fines. This lack of clarity poses a real threat to online discourse, which should brook passionate and adversarial disagreements. When these kinds of sanctions are in play, everyone has an incentive to err on the side of caution.

The Liberals have dealt with both, though, in the same heavy-handed manner. It seems to be the only mode this government knows.

Joanna Baron is Executive Director of the Canadian Constitution Foundation, a legal charity that protects constitutional freedoms in courts of law and public opinion. Previously, she was the founding National Director of the Runnymede Society and a criminal defence litigator in Toronto. She studied Classics at St John's College in…

Michael Geist: Red flags abound in new online harms legislation

Commentary

After years of delay, the government tabled Bill C-63, the Online Harms Act, this week. The bill is really three-in-one: the Online Harms Act, which creates new duties for internet companies and a sprawling new enforcement system; changes to the Criminal Code and Canadian Human Rights Act that meet longstanding requests from groups to increase penalties and enforcement against hate but that will raise expression concerns and invite a flood of complaints; and an expansion of mandatory reporting of child pornography to ensure that it includes social media companies.

This post will seek to unpack some of the key provisions, but with a 100+ page bill, this will require multiple posts and analysis. My immediate response to the government materials was that the bill is significantly different from the 2021 consultation and that many of the worst fears—born of years of poorly thought-out digital policy—have not been realized. Once I worked through the bill itself, concerns about the enormous power vested in the new Digital Safety Commission, which has the feel of a new CRTC funded by the tech companies, began to grow.

At a high level, I offer several takeaways. First, even with some of the concerns identified below, this is better than what the government had planned back in 2021. That online harms consultation envisioned measures such as takedowns without due process, automated reporting to law enforcement, and website blocking. Those measures are largely gone, replaced by an approach that emphasizes three duties: a duty to act responsibly, a duty to make certain content inaccessible, and a duty to protect children. That is a much narrower approach and draws heavily from the expert panel formed after the failed 2021 consultation.

Second, there are at least three big red flags in the bill. The first flag involves the definitions for harms such as inciting violence, hatred, and bullying. As someone who comes from a community that has faced relentless antisemitism and real threats in recent months, I think we need some measures to combat online harms. However, there is a risk that the definitions may be interpreted in an overbroad manner, with implications for freedom of expression.

The second flag—related to the first—is the incredible power vested in the Digital Safety Commission, which will have primary responsibility for enforcing the law. The breadth of powers is remarkable: rulings on making content inaccessible, investigation powers, hearings that under certain circumstances can be closed to the public, establishing regulations and codes of conduct, and the power to levy penalties of up to 6 percent of the global revenues of services caught by the law. There is an awful lot there, and questions about Commission oversight and accountability will be essential.

The third flag is that the provisions involving the Criminal Code and Canadian Human Rights Act require careful study as they feature penalties that go as high as life in prison and open the door to a tidal wave of hate speech-related complaints.

Finally, this feels like the first internet regulation bill from this government that is driven primarily by policy rather than by the demands of lobby groups or a desire to settle scores with big tech. After the battles over Bills C-11 and C-18, it is difficult to transition to a policy space where experts and stakeholders debate the best policy rather than participating in the consultation theatre of the past few years. It notably does not include Bill S-210-style age verification or website blocking. There will need to be adjustments in Bill C-63, particularly efforts to tighten up definitions and ensure effective means to watch the watchers, but perhaps that will come through a genuine welcoming of constructive criticism rather than the discouraging, hostile processes of recent years.

Now to the bill with a mini-FAQ.

Which services are caught by the bill?

The bill covers social media services, defined as “a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content.” The Act adds that this includes adult content services and live streaming services. The service must meet a certain threshold of users in Canada for the law to apply (the threshold to be determined).

What duties do these services face?

As noted above, there are three duties: a duty to act responsibly, a duty to make certain content inaccessible, and a duty to protect children. The duty to act responsibly is the most extensive and it focuses on “measures that are adequate to mitigate the risk that users of the service will be exposed to harmful content on the service.” The Digital Safety Commission will be empowered to rule on whether companies have met this duty. Requirements include offering the ability to block users and flag content. The services must maintain available contacts and submit a digital safety plan to the Commission for review. There are detailed rules on what must be included in the plan. The services must also make their data available to researchers, which can be valuable but also raises potential privacy and security risks. The Commission would be responsible for accrediting researchers.

A duty to make certain content inaccessible focuses on two kinds of content: content that sexually victimizes a child or revictimizes a survivor, and intimate content communicated without consent. The service must respond to flagged content and render it inaccessible within 24 hours. A notification and review process follows.

A duty to protect children requires services to “integrate into a regulated service that it operates any design features respecting the protection of children, such as age-appropriate design, that are provided for by regulations.” There are few details available at this stage in the legislation about what this means.

A man uses a computer keyboard in Toronto in this Sunday, Oct. 9, 2023 photo illustration. Graeme Roy/The Canadian Press.

What harms are covered by the bill?

There are seven: sexually victimizing children, bullying, inducing a child to harm themselves, extremism/terrorism, inciting violence, fomenting hatred, and intimate content communicated without consent, including deepfakes.

How are these defined?

The definitions are where there may be concerns in some instances. They are as follows:

Intimate content communicated without consent. This involves visual recordings involving nudity or sexually explicit activity where the person had a reasonable expectation of privacy and did not consent to the communication of the recording.

Content that foments hatred. This refers to content that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination, within the meaning of the Canadian Human Rights Act, and that, given the context in which it is communicated, is likely to foment detestation or vilification of an individual or group of individuals on the basis of such a prohibited ground.

Note that content has to reach a certain threshold to “foment hatred”—content does not express detestation or vilification solely because it expresses disdain or dislike or it discredits, humiliates, hurts, or offends.

Content that incites violence. This means content that actively encourages a person to commit—or that actively threatens the commission of—an act of physical violence against a person or an act that causes property damage, and that, given the context in which it is communicated, could cause a person to commit an act that could cause

(a) serious bodily harm to a person;

(b) a person’s life to be endangered; or

(c) serious interference with or serious disruption of an essential service, facility, or system.

Content that incites violent extremism or terrorism. This means content that actively encourages a person to commit—or that actively threatens the commission of—for a political, religious, or ideological purpose, an act of physical violence against a person or an act that causes property damage, with the intention of intimidating or denouncing the public or any section of the public or of compelling a person, government or domestic or international organization to do or to refrain from doing any act, and that, given the context in which it is communicated, could cause a person to commit an act that could cause

(a) serious bodily harm to a person;

(b) a person’s life to be endangered; or

(c) a serious risk to the health or safety of the public or any section of the public.

Content that induces a child to harm themselves. This refers to content that advocates self-harm, disordered eating, or dying by suicide or that counsels a person to commit or engage in any of those acts, and that, given the context in which it is communicated, could cause a child to inflict injury on themselves, to have an eating disorder or to die by suicide.

Content used to bully a child. This means content, or an aggregate of content, that, given the context in which it is communicated, could cause serious harm to a child’s physical or mental health, if it is reasonable to suspect that the content or the aggregate of content is communicated for the purpose of threatening, intimidating or humiliating the child.

Content that sexually victimizes a child or revictimizes a survivor. This is a very long definition that includes multiple visual representations.

These are all obvious harms. The challenge will be to ensure that there is an appropriate balance between freedom of expression and safeguarding against such harms. There are clearly risks that these definitions could chill some speech and a close examination of each definition will be needed.

Emma Williamson, 11, plays on the internet on Wednesday, Nov. 29, 2006 at her home in Toronto. Nathan Denette/CP Photo.

How will the law be enforced?

This is the biggest red flag in the bill in my view. Enforcement lies with the new Digital Safety Commission, an entity appointed by the government with between three and five commissioners, including a chair and vice-chair. The Commission’s powers are incredibly broad-ranging. It can issue rulings on making content inaccessible, conduct investigations, demand any information it wants from regulated services, hold hearings that under certain circumstances can be closed to the public (the default is open), establish regulations and codes of conduct, issue compliance orders, and levy penalties of up to 6 percent of the global revenues of services caught by the law for compliance violations. Failure to abide by Commission orders can result in penalties of up to 8 percent of global revenues. The scope of the regulations covers a wide range of issues.

The law says the Commission must consider privacy, freedom of expression, and equality rights, among other issues. Yet the Commission is not subject to any legal or technical rules of evidence; the law instead directs it to act informally and expeditiously, an approach that seems inconsistent with the breadth of its powers.

In addition to the Commission, there are two other bodies: the Digital Safety Ombudsperson, who is responsible for supporting users, and the Digital Safety Office, which supports the Commission and Ombudsperson.

Who pays for all this?

Potentially the tech companies. The Act includes the power to establish regulations that would require the services caught by the Act to fund the costs of the Commission, Ombudsperson, and Office.

What about the Criminal Code and Human Rights Act provisions?

There are several new provisions designed to increase the penalties for online hate. This includes longer potential prison terms under the Criminal Code, including life in prison for advocating or promoting genocide. There are also expanded rules within the Canadian Human Rights Act that open the door to an influx of complaints on communicating hate speech (note that this does not include linking or private communications) with penalties as high as $20,000. These provisions will likely be a lightning rod over concerns about the chilling of speech and overloading the Human Rights Commission with online hate-related complaints.

And the mandatory reporting of child pornography?

These provisions expand the definition of Internet services caught by the reporting requirements.

This column originally appeared on michaelgeist.ca.

Michael Geist

Dr. Michael Geist is a law professor at the University of Ottawa where he holds the Canada Research Chair in Internet and E-commerce Law and is a member of the Centre for Law, Technology and Society.
