On the same day Parliament recessed for the summer, Minister of Justice David Lametti tabled a new “online harms” bill.
The bill brings back section 13 of the Canada Human Rights Act, which had previously been repealed in 2013. But it does even more than that: It essentially proposes to put bureaucrats in charge of parsing and policing speech on the internet. It creates a civil remedy with cash fines of up to $50,000 for engaging in speech that is “likely to foment vilification or detestation.” It also empowers private individuals or organizations to initiate complaints, with no financial consequences for complainants whose complaints are unsuccessful.
The bill, which may not be taken up in the current parliamentary session due to a potential election, was tabled against a backdrop of significant “concept creep” over the past several years about what constitutes harmful speech, and more precisely, where the perceived line between healthy criticism and encouraging violence or other acts of hate ultimately lies. In terms of this crucial balance, the proposed legislation could put a chill on speech that’s better described as unpopular than hateful.
The Criminal Code already contains prohibitions against hate speech, as well as against counselling suicide and advocating genocide, all of which carry procedural protections such as proof beyond a reasonable doubt and the presumption of innocence, as well as the requirement that the attorney general approve a charge before it can be pursued. Not so with section 13, which could see these hefty fines imposed on a much more lenient “balance of probabilities” standard.
The bill also creates a peace-bond mechanism for anticipated instances of hate speech, allowing an individual to apply for a court order prophylactically — an awkward and practically unwieldy expansion of an already inchoate law.
The upshot: the bill would grant the government significant new powers to tackle hate speech using vague and highly subjective definitions that could include ideas and language that are contestable yet ought to be permissible in a free and pluralistic society.
The tension this legislation creates is best described through two examples.
First, take the argument that erupted over University of Toronto professor Jordan Peterson in 2016. Peterson was formally reprimanded by the university for arguing that amendments to the Ontario Human Rights Code adding gender identity as a protected ground of discrimination would place him at risk of being fined or jailed for failing to use non-binary pronouns. Most legal experts agreed this was incorrect — failure to use pronouns alone would probably not amount to discrimination.
However, the fallout from Peterson’s incident showed he had a point. The university, in a formal letter of reprimand, alleged that Peterson’s statements resulted in “threats of assault, injury and death” to trans members of the U of T community. In other words, Peterson’s dissent was ratcheted up to the level of inciting actual violence through a questionable series of inferences which certainly didn’t rely on any police reports, campus occurrences, or evidence of any risk of violence.
The line between words and violence had vanished.
The second context is the growing debate about Islamophobia and the tension between speech that actively encourages hateful acts on one hand and speech that reflects permissible criticism on the other. There’s a fine yet highly important line between prosecuting anti-Muslim hatred and enabling debatable yet legitimate criticism of a religion’s ideas.
Islam itself is a set of religious beliefs and propositions, some good, some bad, like every other religion. Just as in a free society individuals should be free to critique secular causes, there ought to be plenty of room to criticize and challenge religious ideas across different faith traditions including Islam.
Anti-Muslim hate is despicable and should be prosecuted. Full stop. But open critique of the Islamic faith must be tolerated, especially on the part of Muslim moderates and reformers, who are themselves frequently accused of encouraging hate. Just look at the case of Muslim reformer Maajid Nawaz, who was called an “anti-Muslim extremist” by the Southern Poverty Law Center — and later won a multi-million dollar libel settlement against it.
The language in the updated section 13 of the CHRA describes hate speech as speech which “is likely to foment detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination.” This language is based on the Supreme Court of Canada’s 1990 decision in Taylor, which concluded that section 13 covered only “unusually strong and deep-felt emotions of detestation, calumny and vilification,” and therefore posed little risk of capturing speech based merely on subjective opinions about offensiveness.
Three dissenting judges, including future chief justice Beverley McLachlin, disagreed, finding that the words “hatred” and “contempt” were “vague, subjective and susceptible of a wide range of meanings,” and that this vagueness “extends the scope of s. 13(1) to cover expression presenting little threat of fostering hatred or discrimination.”
The same comments are apposite today, but with the added reality of exponentially more public expression to sift through — on Twitter, TikTok, and whatever app will pop up to replace Parler — than was in circulation in 1990, and thus exponentially more opportunities for frivolous and expensive complaints.
At a Department of Justice briefing about the new bill, groups representing minority and racialized communities chimed in with a common complaint: the cynical timing of the bill. It was dropped like a finale high kick just before Parliament recessed for the summer, amid widespread expectation of a fall election.
If minority communities were being weaponized as part of a wedge politics issue, they understandably weren’t happy with it. Why now? Department officials at the briefing had no answers.
To compound matters, further measures — including the rumoured establishment of a new social media regulator, appeal board, and tribunal which would enforce takedown measures against platforms — were deferred pending a summer “consultation” to be conducted by Heritage Minister Steven Guilbeault.
Guilbeault himself has clearly yet to reveal the full impact of the legislation — he admitted at the Banff Forum recently that the forthcoming proposal would be “even more contentious” than his beleaguered Bill C-10, which aims to regulate internet content.
Bill C-36 will not make the problem of hate speech go away. It will instead drive hate speech underground, away from the places where bigotry and prejudice can be confronted and corrected, and make it harder to track.
In the meantime, the bill’s vague and subjective language would deter healthy discourse on difficult topics. Even if it dies on the order paper this fall, voters shouldn’t forget this.