Chase Tribble: As AI reshapes the world, Canada’s policymakers must embrace its powerful potential

Commentary

Panel members at the Grand Palais during the Artificial Intelligence Action Summit in Paris, Feb. 11, 2025. Michel Euler/AP Photo.

While we must protect against its inherent risks, AI can be a revolutionary tool when used in the proper context

The Liberal government’s speech from the throne was filled with many lofty promises. One of the more impactful was to balance the operating budget over the next three years, in part by “deploying technology to improve public sector productivity.”

A hint of what this might entail came in a surprising addition to Mark Carney’s cabinet: his slimmed-down list of appointments included one with a new portfolio, the first-ever minister of artificial intelligence and digital innovation.

AI is rapidly transforming how people work, and public policy and the functioning of government should be no exception. Recent headlines show governments experimenting with AI tools, prompting both enthusiasm and scrutiny.

Ex-New York governor and current New York City mayoral candidate Andrew Cuomo was called out for using ChatGPT to help write his housing plan, which sparked a media cycle questioning the plan’s validity and created a headache for his policy team. The U.K. technology secretary used ChatGPT to help him understand tech concepts, a fact uncovered by a journalist’s freedom-of-information request. And perhaps most prominently, some strikingly nonsensical elements of President Trump’s initial global tariff policy appeared to rest on back-of-the-napkin math generated by AI.

Globally, governments are exploring ambitious applications. The United Arab Emirates (UAE), for instance, recently announced its intention to use AI to draft legislation, marking a world-first initiative expected to speed up lawmaking by 70 percent. However, this has sparked intense debate among legal experts, who warn that accuracy, accountability, and human oversight must remain paramount. AI has become a part of the policymaking process, but what factors should people consider when they use these sorts of tools?

The risks

While ChatGPT and other AI tools are being used as a primary source by some policy developers, policymakers must understand that these tools are not all-knowing entities. They commonly make mistakes and have several limitations that need to be considered.

Large language models (LLMs) are the basis of any ChatGPT-like tool. These models are trained on copious amounts of text drawn from many sources, and they use the patterns learned from that data to answer the “prompts,” or questions, that users type into the tool.

AI tools are prone to various issues when researching or producing early drafts of written products. One prominent example is the “AI hallucination,” in which the tool presents made-up or misleading information as fact. Nor is the information LLMs draw on always of the highest standard; in some cases, AI tools end up recycling content that other AI systems generated. Robert F. Kennedy Jr.’s “Make America Healthy Again” report, for example, made waves for appearing to cite sources that don’t actually exist and were likely hallucinated by AI. In another instance, a British Columbia lawyer submitted two legal cases that ChatGPT had simply invented. Tools like ChatGPT cannot guarantee that their outputs are accurate; they draw on their training data, which may or may not be reliable.

The potential

Although these tools may be prone to error, we should not discount their potential altogether. With appropriate human oversight, they can be very useful for increasing productivity in policy development. Instead of sifting through several articles to find one nugget of helpful information (we have all been there), LLMs can help us find what we need faster, and if prompted, they can also cite the sources where they found it.

Policy developers can use AI for idea generation. As policymakers think through how to improve Canada, using LLMs can provide some starting points. While their output shouldn’t be blindly taken at face value, prompts like “What are other countries doing to solve the housing crisis?” or “What are other countries doing to speed up the permitting process?” can be great ways for policymakers to find jumping-off points for their research.

AI is already proving its practical benefits in these and many related areas. Consider, for instance, how private companies are using it to navigate the confusing intricacies of Trump’s tariffs, analyzing their supply chains and ensuring compliance. There’s no reason governments can’t do the same.

An important consideration is whether policymakers should use AI in policy development if the public is still unsure about these tools. According to the 2024 Edelman Trust Barometer Canada Report, only 31 percent of Canadians trust AI, and 63 percent believe that government regulators lack adequate understanding of emerging technologies to regulate them effectively. Given these stats, would Canadians feel comfortable with policymakers using AI to develop legislation, or would this further sow distrust in our institutions? If policymakers use AI in legislative processes, they must communicate its role clearly, emphasizing that AI supports, not substitutes, human judgment.

While AI has several shortcomings in the policy development space, AI tools also have the potential to boost policy development and government decision-making, and to significantly increase government efficiency. If the government built AI tools in-house, for example, it could use them to improve program delivery and services. It would be vital, however, to maintain transparency about how the AI was built and to make data security a top priority.

The U.K. government recently used AI to help process public consultations and streamline reviews so that policy decisions can be made faster. Promising areas to consider include clearing immigration backlogs by speeding up application reviews, or deploying AI chatbots to help people figure out which permitting rules apply to them.

AI is here to stay and will be used in many areas of human life, including policymaking. LLMs and AI are tools, and policymakers should view them as such, using them to explore new ideas, synthesize information, and accelerate drafting.

These tools should not be treated as entities that hold the answers to all of life’s questions and never make a wrong call, but as tools that can help spark ideas for the people who design policy. Properly understood, they are a first step, not a final authority. Ensuring that facts and statements come from reputable sources is paramount. Policymaking demands human oversight, transparent sourcing, and public accountability, especially at a time of declining trust in institutions.

Used wisely, AI can be an ally. Misused, it risks deepening skepticism in an already fragile system.

Chase Tribble

Chase Tribble is a senior consultant at Counsel Public Affairs who served in the 2019 and 2021 Conservative war rooms.
