The assumption that learning outcomes remain constant across technological eras. We tend to treat educational goals as stable targets, as if what an educated person knows and can do is determined by some timeless standard rather than by the demands of a particular historical moment. AI disrupts that assumption by disrupting labour itself, and education has always followed labour.
Consider navigation. A generation ago, a competent sailor needed to read a compass, interpret a paper chart, calculate bearing and drift by hand. Those were genuine skills, hard-won and consequential. Today, the competent sailor reads GPS, monitors digital weather systems, integrates real-time data from multiple instruments simultaneously. The underlying goal, getting safely from one place to another while understanding the sea, has not changed. But the specific capabilities required have shifted substantially. No one argues that modern sailors should forgo GPS to preserve the integrity of traditional chart-reading.
Education faces the same logic. AI does not simply add a new tool to an existing skill set. It reorganizes what competence looks like across almost every domain. The writing instructor who insists students master five-paragraph structure before thinking about argument is training sailors to read charts that GPS has largely replaced. The question is no longer which traditional skills to protect but which new capabilities to develop, and how to identify them before the labor market makes the gap painfully obvious.
The book documents this pattern through what I call the great disruption: AI functioned as a stress test on educational infrastructure, revealing not just outdated assessments but outdated assumptions about what students need to learn. The skills that matter now involve operating with AI, directing it, evaluating its outputs, understanding where it succeeds and where it fails. Those are learnable. They are teachable. But we cannot teach them while defending curricula designed for a world that is receding.
Less like collaboration between two authors, and more like conducting an orchestra playing a score you partly composed and partly inherited.
The philosophical point is worth stating plainly. Cognition has never been a purely individual act. We think with language we did not invent, within conceptual frameworks built by others, using tools that extend our reach in every direction. What AI changes is the scale and intimacy of that extension. When a student works with an AI system on a complex analysis, the thinking genuinely flows across both agents. The boundary of where the student’s mind ends and the tool begins is not sharp.
What this demands is a capability the book calls Extended Executive Cognition: the ability to decompose a complex task, decide which components benefit from AI’s particular strengths and which require human judgment, sequence the work intelligently, evaluate what comes back, and integrate the pieces into something coherent. That is not a simple skill. It requires knowing your own cognitive process well enough to make sound decisions about what to delegate. It requires understanding AI’s tendencies well enough to anticipate where it will produce something useful and where it will produce something plausible-sounding but wrong. It requires the metacognitive discipline to stay genuinely in charge rather than ratifying whatever the system generates.
Think of it as a new form of literacy. Reading and writing were once specialized skills that transformed what kinds of thinking were possible. Operating effectively within distributed human-AI cognitive systems is the analogous literacy for this moment. The student who can decompose an ill-structured problem, allocate subtasks strategically, evaluate AI outputs with genuine critical discernment, and synthesize the results into something that reflects real intellectual judgment is the student who has mastered something genuinely new and genuinely valuable.
Academic integrity, in this frame, is not primarily about detecting tool use. It is about whether genuine judgment was exercised. You cannot hide behind the tool. You own the intellectual direction, the framing, and the quality of the outcome. The tool did not choose the question. You did. That is the standard that matters.
Probably not the course itself, at least not soon. The Carnegie unit, the credit hour, the semester-length sequence organized around a single instructor and a defined content area are structures that persist because they solve real administrative problems. Institutions need a common currency for measuring learning, transferring credit, staffing classrooms, and scheduling facilities. Those pressures do not disappear because AI arrived. The container remains for a considerable time.
What will change, and is already changing, is what happens inside that container. The content and pedagogy of courses will shift toward what the book calls the second coming of progressivism. Progressive education, the tradition of Dewey and his successors that emphasized authentic problem-solving over procedural drill, has always been theoretically compelling and practically difficult. The reason is simple: genuine inquiry-based learning at scale requires extraordinary amounts of individualized scaffolding. A room of thirty students pursuing thirty different investigations needs a level of responsive support that one instructor cannot reasonably provide.
AI dissolves or at least greatly reduces that constraint. When every student has access to a patient, knowledgeable system that can respond to their specific question at the moment they have it, the logistics of project-based learning change fundamentally. The instructor’s role shifts from information delivery, which AI handles more efficiently, toward something more like intellectual coaching: helping students identify meaningful problems, evaluate the quality of their own reasoning, and develop the metacognitive habits that make sustained independent inquiry possible.
So the course remains. But a course in, say, introductory economics increasingly looks less like a sequence of lectures on supply curves and elasticity, and more like a structured investigation into a real economic question, with AI providing the procedural support that once required massive instructor time or simply went unprovided. The progressive dream was always about developing students capable of genuine inquiry. AI gives that dream its first real means of delivery.
Intent. The capacity to decide what is worth doing, and to take genuine responsibility for that decision.
AI is extraordinarily capable within the space of pattern recognition and generation. It can produce competent arguments, synthesize research, execute analyses, generate options. What it cannot do is originate the question. It cannot determine what matters, in the full human sense of that phrase, the sense that involves values, commitments, and situated judgment about what kind of world we are trying to build.
But I want to add something more active than just intent. The skill education must cultivate is the ability to drive AI rather than ride as a passenger. The distinction is not trivial. Most people who use AI regularly are essentially passengers: they receive what the system offers, evaluate it superficially, and proceed. Driving means something different. It means knowing where you are going before you start, formulating the question with enough precision that the system can be genuinely useful, maintaining critical distance from the output, redirecting when the system drifts, and integrating the results into a larger purpose that you, not the system, defined.
Think of it as learning to ride a dragon. The dragon is powerful, possibly as intelligent as you in certain domains, capable of feats far beyond your unaided capacity. But the dragon needs direction. Without it, the dragon goes somewhere, possibly somewhere interesting, but not necessarily where you intended. The rider who knows where to go, who understands the dragon’s nature well enough to guide rather than merely hang on, that person accomplishes something that neither could manage alone.
This is not a mystical capacity. It develops through practice, through sustained engagement with genuine intellectual problems, through being held accountable for the quality of one’s intellectual direction. It develops, in other words, through exactly the kind of education the book describes: complex challenges, authentic stakes, metacognitive attention to one’s own thinking process, and the pedagogical insistence that you must know what you are trying to accomplish before the tool can help you accomplish it.
Alexander M. Sidorkin


Alexander M. Sidorkin’s AI-Enhanced Pedagogies is a provocative and intellectually ambitious book that attempts something many current publications on artificial intelligence and education avoid: it reframes the rise of AI not merely as a technological challenge but as a philosophical and structural critique of modern education itself. Rather than focusing on classroom tips or tool adoption, Sidorkin argues that generative AI has exposed deep weaknesses in the prevailing educational model, weaknesses that long predate the arrival of intelligent machines.
The book’s central thesis is clear: AI did not break education; it exposed its fragility. According to Sidorkin, modern education relies on assumptions that are becoming harder to uphold in a world where machines can generate text, analyze data, and provide explanations instantly. These assumptions include strict curricular sequences, narrow meritocratic ideals, and outdated theories of learning focused on individual effort and knowledge replication. The book, therefore, aims to develop a theory of education fit for a future where humans and intelligent systems learn together, not apart.
Structurally, the book is divided into four sections. Sidorkin’s first section explores the philosophical foundations of education, analyzing how AI challenges longstanding ideas about authorship, originality, and intellectual effort. He then critiques common myths regarding merit and achievement in education before introducing the concept of “shared cognition,” a model where human thinking is extended through collaboration with intelligent tools. Finally, he addresses implications for curriculum, assessment, and institutional design, arguing that education needs to transition from linear curricula to more adaptable systems that promote creativity, discernment, and intellectual courage.
One of the book’s major strengths lies in its willingness to move beyond the usual alarmism surrounding generative AI. While many discussions focus on academic integrity crises or fears of automation, Sidorkin adopts a more expansive perspective: AI, he suggests, expands human intelligence rather than replacing it. This argument places the book within a long tradition of scholarship on “extended cognition,” suggesting that technologies, from writing to calculators, have always reshaped the boundaries of human thinking. In this respect, the book offers a refreshing counterpoint to narratives that frame AI primarily as a threat to educational values.
Equally commendable is Sidorkin’s philosophical ambition. Few contemporary books in the AI-and-education field attempt to engage so directly with foundational questions about authorship, merit, and shared cognition. These questions push the reader beyond policy debates and toward deeper reflection about the purposes of education itself.
Yet the book’s greatest strength is also, at times, its weakness. Sidorkin’s argument operates largely at a theoretical level, and readers seeking concrete guidance on how universities or schools might implement AI-enhanced pedagogies may find the discussion somewhat abstract. Although the final sections address curriculum and assessment reform, these proposals occasionally remain more conceptual than operational. Faculty members confronting immediate classroom dilemmas (how to redesign assignments, evaluate AI-assisted work, or teach AI literacy) may wish for more practical examples.
A second limitation is that the critique of traditional education sometimes feels overstated. While it is certainly true that curricula and assessment structures have become rigid in many institutions, education has always been more pluralistic and adaptive than the book occasionally suggests. Many fields already employ project-based learning, collaborative inquiry, and authentic assessment. The narrative of systemic fragility would therefore benefit from acknowledging these ongoing, if limited, innovations.
Nevertheless, these criticisms do little to diminish the book’s intellectual value. Sidorkin’s work stands out precisely because it attempts to rethink education at the level of first principles. By questioning the assumptions underlying meritocracy, authorship, and cognitive labour, the book challenges educators to reconsider what human learning should mean in a world where intelligent machines are ubiquitous.
In the end, AI-Enhanced Pedagogies is less a handbook for immediate reform than a philosophical provocation. It invites readers to move beyond short-term anxieties about plagiarism or automation and instead ask a deeper question: if AI changes how knowledge is produced, what should education ultimately cultivate in human beings?
For scholars of education, learning sciences researchers, and academic leaders grappling with the implications of generative AI, Sidorkin’s book offers an engaging and stimulating contribution to an emerging conversation. One may not agree with all of its claims, but it succeeds in doing something rare by forcing the reader to think again about the very foundations of education in the age of intelligent machines.