In highly decentralized universities, institutional inertia is not an accidental byproduct of scale but one of the principal structural impediments to AI integration. This is because faculty are being asked to adopt technologies that appear to destabilize their domain expertise while simultaneously demanding substantial time for upskilling and pedagogical redesign. From the faculty vantage point, the rational question is straightforward: Why invest scarce time in mastering systems that may commodify core competencies or reconfigure authorship, assessment and disciplinary authority? In this sense, decentralization magnifies the adoption dilemma, since consensus-driven governance slows coordinated movement while individual departments bear the perceived risk.

Both internal and external pressures therefore become catalytic: accreditation expectations, employer demand for AI-literate graduates, shifting research workflows, and competitive positioning among peer institutions all function as structural accelerants. Research-intensive departments, in particular, will encounter organic integration because the ecology of research and publication is already transforming through AI-mediated literature review, drafting assistance and data analysis, which alters expectations of productivity, collaboration and methodological transparency. Scholars who wish to remain relevant in grant competitions, interdisciplinary consortia and high-impact journals will increasingly confront the necessity of engaging these systems critically rather than abstaining.

At my own institution, Lindenwood University, we have operationalized this reality by requiring each program to conduct a formal review of how AI is reshaping the professions for which students are being prepared, including analysis of evolving job descriptions, emergent skill taxonomies and employer expectations. This reframes adoption not as administrative enthusiasm but as fiduciary responsibility to student futures.
Within such a model, central leadership articulates shared principles and timelines, yet departments retain agency over curricular design, thereby aligning coordinated pressure with localized autonomy. The result is neither coercive uniformity nor passive drift, but a structured reckoning in which disciplinary communities must interrogate how their knowledge practices are already being reshaped.
Ethical governance mechanisms are unquestionably necessary, yet in practice many institutions deploy them primarily as instruments of risk aversion rather than as enablers of responsible innovation, which can inadvertently stall meaningful experimentation. Ethics becomes rhetorically powerful in faculty debates, but it is often invoked without analytical precision, conflating legal liability, institutional compliance, professional standards and philosophical commitments into a generalized cautionary stance.

It is important to distinguish clearly between responsible use within an educational context and the upstream legal controversies concerning how commercial models are trained, because litigation to date has targeted companies’ training practices rather than universities’ pedagogical deployment of tools. Institutions must therefore ask what they actually mean when they invoke ethical restraint: Are they seeking to remain compliant with federal and state data privacy mandates to protect student information under established regulations, to preserve disciplinary integrity or to avoid reputational risk? Each of these is legitimate, yet each requires a differentiated governance response rather than blanket prohibition.

Data privacy boards, procurement review committees and algorithmic impact assessments can function as trust-building infrastructure when they articulate transparent criteria for adoption, delineate acceptable and unacceptable uses, and establish human oversight requirements. However, when these bodies default to prohibition without proportional analysis, they risk reinforcing resistance under the banner of prudence. Faculty concern about disciplinary misuse should be engaged substantively, especially in high-stakes fields such as health sciences or legal education, yet that engagement should culminate in calibrated guardrails rather than categorical abstention.
Responsible governance must therefore be developmental rather than defensive, clarifying institutional values while permitting iterative pilots that generate evidence about efficacy and harm. Ethics, properly understood, is not a veto but a framework for disciplined experimentation that aligns innovation with institutional mission.
Intergenerational mentorship models are indispensable because the most respected figures within departments — those whom students regard as intellectual exemplars and career guides — are often senior faculty who, according to multiple adoption studies, are statistically less likely to be habitual users of AI tools. This creates a credibility paradox: Students seek guidance from mentors whose disciplinary wisdom is profound, yet whose fluency with emergent technologies may be limited.

A viable solution requires a genuinely bidirectional model in which junior faculty, postdoctoral scholars and technologically adept graduate students share operational knowledge of tools and workflows, while senior scholars transmit epistemological rigor, methodological standards and the tacit knowledge that cannot be automated. Structured co-teaching labs, compensated mentorship pairings and collaborative curriculum studios can formalize this exchange, producing co-authored pedagogical artifacts that integrate tool literacy with disciplinary depth.

In designing such systems, it is crucial to clarify the distinction between academic autonomy and academic freedom — concepts that are frequently conflated in debates about AI. Academic freedom protects the right to pursue inquiry and express scholarly conclusions without institutional censorship; it does not entail exemption from curricular evolution, technological literacy or institutional expectations concerning student preparedness. When AI adoption is framed as an infringement on freedom, the discourse risks mischaracterizing professional responsibility as external coercion. The objective is not to mandate uniform classroom practices but to ensure faculty possess sufficient understanding of the technologies reshaping their disciplines to make informed, autonomous decisions about their pedagogical use.
By cultivating reciprocal mentorship and reaffirming the true contours of academic freedom, institutions can move beyond defensive postures toward a model in which wisdom and innovation reinforce rather than undermine one another.
James Hutson


In The Adoption of Artificial Intelligence and Inertia in Higher Education, James Hutson delivers a compelling and systematic exploration of why higher education institutions continue to resist the adoption of AI, even as AI’s pedagogical and administrative benefits become increasingly evident.
Situated at the intersection of sociology, educational technology and organizational change, the book offers a nuanced diagnosis of the complex interplay between technological potential and institutional resistance, and — importantly — suggests pathways for navigating this tension.
The central argument is that inertia in higher education is not a simple matter of ignorance or denial, but a multifaceted phenomenon shaped by leadership dynamics, cultural norms, professional identity and structural pressures. Hutson contends that despite compelling evidence of AI’s capacity to enhance personalized learning, streamline administrative processes and expand access, adoption remains uneven because institutions are embedded in deep-rooted socio-cultural and organizational systems that favor continuity over disruption.
Framing the resistance
The opening chapters set the theoretical backdrop by placing AI adoption in historical and organizational context. Hutson frames resistance in higher education as a form of institutional inertia — a concept borrowed from both sociology and organizational theory that emphasizes the power of established routines, norms and power structures to slow or block change. Rather than presenting resistance as irrational, he describes it as a logical response to perceived threats: to job security, professional autonomy and established disciplinary identities. This framing is one of the book’s greatest strengths, as it avoids techno-utopian rhetoric and acknowledges the legitimate human and institutional concerns at stake.
Multidimensional analysis
Hutson’s analysis unfolds across several dimensions.
Empirical and theoretical rigor
Hutson anchors his arguments in a blend of qualitative and quantitative data, as well as case studies drawn from U.S. and international higher education contexts. This mixed-methods approach lends credibility to his claims and allows readers to see how abstract concepts play out in real institutional settings. Theoretical frameworks are introduced with clarity, and each chapter builds toward actionable insights rather than remaining at the level of abstraction.
The concluding chapters synthesize these strands into strategic frameworks for sustainable AI adoption, including mentorship models, leadership continuity planning, and recommendations for bridging generational divides among faculty. Although the book does not offer a one-size-fits-all solution, it provides a rich set of heuristics for institutional leaders and scholars seeking to rethink technology adoption in complex social systems.
Critique and limitations
Despite its many strengths, the book has a few limitations worth noting.
Overall, The Adoption of Artificial Intelligence and Inertia in Higher Education is a thoughtful, well-researched and highly relevant contribution to the literature on technological change in higher education. It avoids simplistic narratives about innovation and resistance, offering instead a sophisticated account of how deep-seated cultural and structural factors shape technological futures in academe.