Ottawa is trying to regulate AI chatbots—and going about it the exact wrong way

Commentary

Minister of Artificial Intelligence and Digital Innovation Evan Solomon in the House of Commons in Ottawa, Mar. 13, 2026. Justin Tang/The Canadian Press.

Why the Online Harms Act is a poor tool for the job

AI Minister Evan Solomon summoned OpenAI executives to Ottawa at the end of February to explain why the company, despite having flagged the account of Jesse Van Rootselaar, the Tumbler Ridge shooter who killed eight people earlier this month, did not alert police. The company concluded that the account activity did not meet its standard of an “imminent and credible risk of serious physical harm to others.”

After the meeting, Solomon expressed disappointment with OpenAI, saying the company had not presented “substantial new safety protocols.” Justice Minister Sean Fraser said the government expects OpenAI to make changes, or else it will step in to regulate artificial intelligence companies.

OpenAI followed up with pledges to do more, but attention quickly turned to the Online Harms Act as a potential regulatory solution. The Online Harms Act, or Bill C-63, died on the order paper last year, but is expected to return in some form in the coming months.

Given that the act is tailor-made to address online harms, it isn’t surprising that some would suggest that it could be expanded to cover AI chatbots. Indeed, the government quickly summoned its online harms expert advisory panel to assist with the issue.

Yet the law was deliberately designed to avoid doing what politicians want the AI companies to do, as it expressly exempted private communications and proactive monitoring from its scope. Applying the Online Harms Act to AI chatbots would not simply extend existing online safety rules to a new technology. It would require dismantling core privacy safeguards, which were added after the government’s earlier online harms proposal faced widespread criticism for encouraging platform monitoring and rapid reporting to law enforcement.

In effect, proposals to use online harms to regulate AI chatbots risk reviving many of the same surveillance concerns that forced the government back to the drawing board just a few years ago.

The Online Harms Act was crafted to regulate social media platforms, not all digital services. Section 2 defines a social media service as a “website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content.”

Regulated services under the bill were defined as social media services that reached a certain threshold of users. The legislative focus was therefore on large-scale dissemination and amplification, namely platforms where harmful content can rapidly reach broad audiences through sharing and recommendation systems.

None of this fits with an AI chatbot. Interactions with chatbots such as ChatGPT do not involve user-to-user communication or public dissemination. A prompt entered into a chatbot is typically visible only to the individual user and the provider. There is no audience exposure risk, the central concern animating the Online Harms Act framework.

In fact, the bill reinforced this limitation through an explicit privacy safeguard. Section 6(1) provides that the act’s duties do not apply in respect of any private messaging feature of a regulated service. Section 6(2) defines private messaging as communications sent to a limited number of users selected by the sender rather than to a potentially unlimited audience.

This exclusion reflects a clear policy boundary: the government chose to regulate publicly amplified harms while leaving interpersonal digital communications outside the regime. Chatbot interactions align far more closely with private messaging than with social media publishing, since they involve one-to-one exchanges rather than public distribution. Bringing chatbot prompts within the Online Harms Act would therefore require narrowing or effectively bypassing the statute’s privacy protections.

Moreover, Section 7(1) states that nothing in the legislation requires an operator to proactively search content communicated on the service in order to identify harmful content (subject to a narrow exception involving child sexual victimization materials). The current push to apply the Online Harms Act to AI chatbots moves in precisely the opposite direction. Identifying potentially dangerous behaviour from AI chatbot interactions would almost inevitably require analysis of prompts and conversational patterns within private exchanges. In practical terms, it would introduce monitoring into the very environments the act was structured to avoid regulating.

Neither of these safeguards is there by accident. Both are cited in the Department of Justice’s Charter analysis to justify the bill’s compliance with the Charter of Rights and Freedoms. And both trace back to the government’s 2021 online harms consultation, which sparked widespread criticism after it floated proactive monitoring requirements and mandatory reporting to law enforcement within tight timelines.

Critics warned that requiring platforms to actively monitor user communications and rapidly report potentially unlawful content risked creating incentives for over-reporting and expanded surveillance of lawful expression. The consultation was widely viewed as blurring the line between addressing harmful public content and deputizing platforms as agents of law enforcement.

Applying the Online Harms Act to AI chatbot conversations now risks reopening the very issues policymakers previously sought to avoid. In fact, it is difficult to see the difference between a prompt entered into an AI chatbot and similar content entered into a search query or included in a text message or email. If proactive monitoring of searches, emails, or texts is subject to privacy safeguards, so too should be AI chatbot engagement.

The Online Harms Act failed in large measure because it sought to cover too much, layering in Criminal Code and Human Rights Act provisions alongside the platform liability elements. Expanding the bill to include AI chatbots runs the same risk. There is a role for AI chatbot regulation, but it isn’t an expanded Online Harms Act. Instead, the starting point should be specific, transparency-focused legislation that requires full disclosure of user safety policies and of how they are implemented and enforced.

A version of this post was originally published at michaelgeist.ca.

Michael Geist

Michael Geist holds the Canada Research Chair in Internet and E-commerce Law at the University of Ottawa, Faculty of Law.
