Education systems are being asked to make long-term decisions in a short-term atmosphere. AI tools have moved into schools faster than most systems can properly evaluate them. Many educators feel pressure to adopt technologies they do not fully understand, while policymakers often oscillate between hype and fear. We believed the field needed a serious, accessible book that slows the conversation down.
This is not simply a technology story. It is a story about the future of teaching, the meaning of learning, student well-being, data rights, equity and educators’ professional status. If schools wait until everything is clear, they will have waited too long. But if they rush blindly, they risk embedding systems that may undermine the very values education is meant to protect.
So we wrote the book as a timely intervention: to help educators think critically, ask better questions and act with confidence rather than confusion.
By “contested terrain” we mean that AI is never just software. It carries assumptions about efficiency, authority, intelligence, behaviour and value. Every AI system is designed by someone, funded by someone, trained on particular data and introduced for particular reasons. That means it is inherently political and social, not merely technical.
In education, this matters enormously. One group may see AI as a route to personalized tutoring and improved access. Another may see it as a mechanism for surveillance, standardization, cost-cutting or corporate influence over public schooling. A third, such as an Indigenous community, may see AI as yet another tool of colonialism and oppression. Each of these perspectives can contain truth.
Calling AI contested terrain reminds us that schools still have agency. We are not passive recipients of technological change. Communities can decide what kinds of tools align with their values, what boundaries are needed, what should be rejected and where human judgment must remain paramount.
The real question is never “Should we have AI?” but rather “What kind of AI, for what purpose, under whose control and with what consequences?”
Both, and neither. We reject the false choice between optimism and pessimism. What we advocate is critical hope.
There are genuine possibilities here. AI can support differentiated instruction, help create resources quickly, provide translation assistance, strengthen accessibility for some learners and reduce certain administrative burdens. Used wisely, it may give teachers more time for the deeply human work that matters most.
But there are equally real risks. AI can weaken independent thinking if overused. It can normalize plagiarism and shortcut learning. It can amplify bias, erode privacy, increase dependency on vendors and create a two-tier system in which some students learn to think while others learn to prompt.
Education needs both discernment and imagination. Our position is that AI’s future in schools is still open — and educators must help shape it.
Do not surrender your agency. Too often, educators are treated as the last people to be consulted about technologies that profoundly affect their work. Vendors market solutions, governments issue directives, commentators proclaim revolutions — and teachers are expected to adapt. That model is backward.
Teachers understand learning, motivation, developmental readiness, relationships and classroom complexity in ways no algorithm can. School leaders understand culture, trust and change management. Students understand how these tools are actually used. These voices must be central, not peripheral. We worked closely with students, especially Black student leaders in the Toronto District School Board, who helped shape our thinking.
Our message is simple: AI should serve education, not the other way around. If a tool strengthens human flourishing, expands inclusion, deepens learning and respects dignity, it deserves consideration. If it weakens these things, it should be challenged or refused.
The future of schooling must remain a democratic choice shaped by educators and communities, not a commercial inevitability.


Artificial intelligence has entered education with unusual speed and extraordinary claims. It is described as a “revolutionary force” that will personalize learning, reduce teacher workload, improve assessment and transform schools. At the same time, it has generated serious anxieties about cheating, bias, surveillance, privacy, de-skilling and the erosion of teacher-student relationships. AI Unplugged was written to help educators navigate this moment with clarity rather than panic, and with judgment rather than surrender.
The book argues that education is first and foremost a human enterprise. Though focused on K-12 schools, it is equally relevant to higher education. The authors suggest that schools are not simply delivery systems for content; they are places where identity is formed, belonging is nurtured and civic values are practised. AI may support some dimensions of this work, but it cannot replace the relational, ethical and developmental role of teachers and schools. That is why the book consistently resists the language of inevitability. AI is not destiny. It is a toolset whose impact depends entirely on governance, design, values and use.
Across 13 chapters, the authors examine both the structural questions and the practical realities. They explore the politics of AI adoption, design justice, teacher agency, student voice, assessment, special education, future competencies and the need for AI literacy. They also draw on student research and practitioner experience, ensuring the debate is grounded in real classrooms rather than abstract theory.
What distinguishes AI Unplugged is that it avoids two common traps: uncritical techno-optimism and reflexive rejectionism. Instead, it proposes a third stance: critical hope. This means being open to innovation while remaining alert to power, inequality, unintended consequences and the deeper purposes of education. In a sector often pushed to react quickly, this book invites educators to think deeply.
For teachers, leaders, policymakers, unions and graduate students, AI Unplugged is both a warning and an invitation: a warning against handing the future of education to technology markets, and an invitation to shape that future through professional wisdom, democratic choice and moral courage.