In 2022, Air Canada’s chatbot promised a grieving passenger, Jake Moffatt, a bereavement fare refund. When Moffatt tried to collect, the airline refused to pay and mounted a legal defence that belonged in a science fiction novel rather than a courtroom.
The airline’s argument? That the chatbot was a “separate legal entity,” and that the company, therefore, was not responsible for its actions. It essentially argued that its own customer service tool was a rogue freelancer over whom it had no control.
The Civil Resolution Tribunal called this defence “remarkable”—legal politeness for “absolute nonsense”—and ruled Air Canada fully liable.
While the financial penalty was negligible, the strategic warning shot was deafening. We are witnessing a fundamental shift in artificial intelligence, moving from generative AI (bots that write poems and essays) to agentic AI (bots that take action). The promise is seductive: digital employees that can book flights, negotiate refunds, and manage supply chains while the C-suite sleeps.
But as any military commander knows, autonomy without governance is not efficiency; it is mutiny. By handing software the “keys to the car,” organizations are also handing it the ability to crash.
The liability gap
The Air Canada case was a tragedy for the customer, but a “Chevy Tahoe” incident in California turned the risk into a farce with terrifying implications. A dealership connected a chatbot to its sales page, and a tech-savvy user tricked it into agreeing to sell a $76,000 Chevy Tahoe for one dollar, even coaxing the bot to declare, “and that is a legally binding offer.”
While that specific instance was a prank, the vulnerability is systemic. If that bot had been connected to a DocuSign API or an automated billing system, the dealership would have been legally exposed. In an agentic future, bots do not just talk; they execute. If they execute on a hallucination, the human leadership foots the bill.
The “protocol droid” vs. the junior officer
So, does this mean we kill the bots? Absolutely not. The efficiency gains are too massive to ignore. But we need to stop treating AI like a “protocol droid” (C-3PO), an infallible machine, and start treating it like a “junior officer” who needs supervision.
If I were advising the C-suite on this, I would implement the “Maker-Checker” principle, a standard governance model in banking and the federal government: you never let the same person both initiate a wire transfer and approve it, because doing so violates segregation of duties. The same discipline translates directly to agentic AI, as the sketch below shows.
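In software terms, maker-checker means the agent can only draft an action; releasing it requires a separate approver. Here is a minimal sketch of that gate in Python. It is illustrative only: the names (MakerCheckerGate, ProposedAction, binds_company) are hypothetical, and a simple dollar threshold stands in for whatever review trigger a real business would use.

```python
from dataclasses import dataclass, field
from enum import Enum
import uuid

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ProposedAction:
    """An action drafted by the AI agent (the 'maker'); nothing executes while PENDING."""
    description: str
    binds_company: bool          # does this create a legal or financial commitment?
    amount_usd: float = 0.0
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    status: Status = Status.PENDING

class MakerCheckerGate:
    """Segregation of duties: the agent that drafts an action can never approve it."""

    def __init__(self, review_threshold_usd: float):
        self.review_threshold_usd = review_threshold_usd
        self.pending: dict[str, ProposedAction] = {}

    def propose(self, action: ProposedAction) -> ProposedAction:
        # Anything that commits the company above the threshold waits for a
        # human checker; read-only or trivial actions flow straight through.
        if action.binds_company and action.amount_usd >= self.review_threshold_usd:
            self.pending[action.action_id] = action
        else:
            action.status = Status.APPROVED
        return action

    def review(self, action_id: str, checker: str, approve: bool) -> ProposedAction:
        # Only a checker, never the proposing agent, can release a pending action.
        action = self.pending.pop(action_id)
        action.status = Status.APPROVED if approve else Status.REJECTED
        print(f"{checker}: {action.status.value} -> {action.description}")
        return action

# The chatbot "agrees" to a bereavement refund; the gate parks it for a human.
gate = MakerCheckerGate(review_threshold_usd=50.0)
refund = gate.propose(ProposedAction(
    description="Refund bereavement fare",
    binds_company=True,
    amount_usd=650.0,
))
assert refund.status is Status.PENDING  # no money moves until a checker signs off
gate.review(refund.action_id, checker="supervisor@example.com", approve=True)
```

The threshold is the policy knob: routine lookups flow through untouched, while anything that commits the company legally or financially sits in a queue that only a human can clear.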
Comments (2)
Assets = Liabilities + Owners’ Equity. Adherence to this simple equation has encouraged corporations and institutions alike to hide behind walls of legalese, denying liability in order to protect shareholder interests. Shareholders in this case are both corporations and citizens. Citizens almost inevitably lose; shareholders, much less so. Transferring liability to digital agents buys time to deflect liability while shareholders transfer ownership.