
Paul W. Bennett: AI is taking over our classrooms. What are we going to do about it?

Commentary

A student on their first day back to school at an elementary school in Montreal, Aug. 29, 2024. Christinne Muschi/The Canadian Press.

Generative artificial intelligence (AI) is moving at lightning speed, and it is becoming clear, once again, that ed-tech innovation is way ahead of policy, particularly in our K-12 provincial education systems. There’s plenty of hype, driven by missionaries embracing the “Big Reset” and the futurist vision of the World Economic Forum. Here in Canada, leading apostles such as C21 Canada’s CEO Academy and Robert Martellacci’s Toronto-based MindShare Learning have heralded “chatbots,” or large language models (LLMs) such as ChatGPT, as magical tools.

The gradual commercial deployment of these LLMs in schools creates plenty of buzz among ed-tech enthusiasts. Its most vocal Canadian evangelist is Tom D’Amico, director of education at the Ottawa Catholic School Board (OCSB). The initial hype has only grown since June 2024, when American “edupreneur” Salman Khan, founder of Khan Academy, published his latest book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing).

Whether it’s proclaiming a “Big Shift” technology-led reform philosophy or promoting the continued use of cell phones in class, ed-tech evangelists tend to plunge in, adopting every new tech toy and panacea. Unconditional support is offered, often without any recognition of the latest innovation’s impact on student achievement, or of how it aggravates the so-called “digital divide” between urban and rural schools, or between affluent and disadvantaged communities.

Confronting the potential hazards

With a new school year ahead, AI is now what American education critic Peter Greene aptly described as a “juggernaut” sweeping through North American K-12 education. Teachers everywhere are awakening to its implications and hazards and, finding little guidance, are now desperate for help in coping with the changing realities.

Sound advice and guidance are hard to find, with school systems still reeling from post-pandemic learning setbacks. While LLMs do introduce new and exciting possibilities in teaching and learning, the absence of guardrails is a serious and legitimate concern. Some will find the Common Sense Media research note, “ChatGPT and Beyond,” a useful short primer on how to cope with AI in schools.

One of the few organizations that has emerged to answer these major concerns is Cognitive Resonance, founded by Dr. Ben Riley, former executive director of Deans for Impact. Its initial publication, “Education Hazards of Generative AI,” is an indispensable source of guidance for superintendents, program consultants, principals, and teachers. It explains what’s actually happening and highlights areas of concern such as the uncritical consumption of AI “hallucinations,” the spreading of “deep fakes,” invasions of privacy, and the normalization of plagiarism, or passing off AI-generated work as one’s own.

Fumbling for direction and filling the policy void 

School systems have been slow in responding to the explosion of generative AI and, almost two years after the arrival of ChatGPT, still do not have comprehensive policies addressing its profound implications and potential harms. In a survey of 924 American educators, conducted for the Education Week Research Center in November and December 2023, 79 percent said their school districts did not have clear policies on the use of AI tools. That’s consistent with the general pattern across Canada.

Getting on top of the AI movement is now an urgent priority. South of the border, a few education experts, such as Bree Dusseault, managing director of the U.S. Center for Reinventing Public Education, are registering their concerns. Even those, like Dusseault, committed to tapping into a potential educational game-changer concede that district leaders are simply “overwhelmed and overloaded” with post-pandemic conditions affecting students and teachers.

A belated call to action in Ontario 

Ontario’s Ministry of Education has now been called on to intervene to help navigate the uncharted territory of AI in schools. Not a single one of the province’s 72 school boards, as of February 2024, had a formal and fully evolved policy on the responsible use of these rapidly evolving technologies. That prompted one Ontario school trustee, Markus de Domenico of the Toronto Catholic District School Board, to propose a motion on February 15 calling for a provincial strategy on the use of AI in education.

Speaking up for concerned educators and parents, De Domenico claimed that AI was “both exciting and frightening” in its implications. School boards, principals, and teachers, he said, needed direction and the resources to implement a school-level policy. A comprehensive policy would include a provincial committee to help boards understand best practices, a conference exploring key issues around AI use, and ongoing support and strategies for educators.

Guidance on AI was critical at this juncture, De Domenico said, because while it’s a valuable tool for teachers, AI tools can also be used by students to circumvent class assessments and submit plagiarized written answers and project work. It was vital, therefore, to prepare students for “an AI-influenced world” and “understand the dangers that AI can present to students and our entire system.”

Early adoption experiments: The case of the Ottawa Catholic School Board

A few Canadian school districts, looking to be cutting-edge, have embraced AI with an almost missionary zeal. While waiting for provincial advice on AI use in the classroom, the Ottawa Catholic School Board began adopting AI and running pilot projects in early 2024 to “see what works,” in the words of Director D’Amico.

While some teachers in the Ottawa Catholic system and elsewhere would like tools such as ChatGPT banned, D’Amico spouted the global ed-tech gospel: “We’re not in an age, in 2024, where we don’t want our kids using technology,” he told the Toronto Star. Instead, he says teachers and educators may need to change how they assess students so it’s not easy to cheat. He does recognize, it might be added, that AI literacy is critical to bridging the digital divide and might ultimately put students on a level playing field.

Looking south for direction

American state AI policy guidelines do provide a policy framework and a bit of a lifeline to school districts, principals, and teachers scrambling to keep up with AI incursions into K-12 education. In California, the home of Silicon Valley, for example, the state guidelines promote AI literacy, safeguard student safety and privacy, and advise staff and students to be aware of the potential inaccuracies and biases of AI.

Given the rapidity of technological change, the first generation of generative AI guides for educators, at all levels, are attracting criticism and encountering teacher resistance. The Chicago Public Schools’ guide to generative AI, published in August 2024, is a prime example of what can go wrong when educational thinking at the top is impaired by digital fuzziness and driven mostly by global economic imperatives.

Peter Greene, widely known for his CURMUDGUCATION blog, pounced upon the Chicago Public Schools guide as a “terrible,” “awful,” and “no good” AI guide exemplifying the worst excesses of tech-driven, wrong-headed, bureaucratic thinking. His stinging commentary called into question the guide’s explanation of what generative AI is and how it works, then went deeper, analyzing in detail a few of the proposed applications in various grade levels and subject areas.

Some Chicago Public Schools exemplars, according to Greene, are potentially implementable; others are pointless. In many cases, teachers will realize they are far better off using tried and tested curricula and assignments. Most importantly, the output in the form of written work or artistic creations “requires careful review,” and teachers are left on their own to sort that out. What’s clear is that plagiarism is being normalized by default.

Address the latest ed-tech policy lag 

Global technology promoters and generative AI evangelists are stumbling forward and leaving it to teachers to sort out, just as they did with the proliferation of smartphones, sanctioned by BYOD (Bring Your Own Device) policies and abetted by fifteen years of ineffective guidelines restricting cellphone use in class. School board guidelines advising teachers to “use AI ethically” will be of little help when they are scrambling to cope on a day-to-day basis.

Provincial policy is urgently needed alongside realistic and implementable school board guidelines. In the wake of the pandemic, embracing AI guidelines like those of the Ottawa Catholic School Board is totally unreasonable because they expect teachers to retool all their assignments and monitor, on their own, whether assignments and projects are generated by machines rather than students. It’s time to provide guardrails as an aid to navigation in the magical world of AI in education.

Paul W. Bennett

Paul W. Bennett, Ed.D., is Senior Fellow, Education Policy, Macdonald-Laurier Institute. He’s based in Halifax, NS, where he serves as Director, Schoolhouse Institute, Adjunct Professor of Education, Saint Mary’s University, and Chair of researchED Canada.
