For those who aren’t familiar with the popular sci-fi series Star Trek, the Borg are a group of cyborgs who aim to achieve perfection through the assimilation of the technologies and knowledge of the other alien species they encounter. Resistance to them is very often futile, unless you are lucky enough to be on board the starship USS Enterprise.
So when we consider higher education, are AI technologies ultimately going to assimilate the best of us, rendering us obsolete drone workers?
The short answer is: No.
Although new educational technologies are presented as dynamic forces that are here to disrupt many years of stasis and inertia, the reality is that their development has a long, rich history with recurring themes. As Martin Weller recounts in his introduction to “25 Years of Edtech,” our sector suffers from a fair amount of historical amnesia.
When we step back and take the longer view, though, we can see that the AI technologies emerging today are part of a larger cycle of innovation and change — and in that sense we can say the adoption of AI in education is probably inevitable and resistance is likely futile.
Indeed, the arguments for using AI technologies in areas of “commodity” technology are already compelling. AI-informed chatbots have been gaining significant traction in the university admissions space, working 24/7 to answer boring and repetitive questions in a responsive manner. Perhaps tomorrow they will field equally routine questions about assignment deadlines and course materials, and is that really a problem?
Again: No.
What this longer view shows us is that the central role of the teacher has persisted alongside the successful history of technology adoption.
The reality is that AI technologies are not here to assimilate us, despite what the marketeers and edtech columnists would like to suggest. These technologies are mostly not clever enough. But the bigger truth is that this longer cycle of edtech history shows it’s actually us who are the Borg, and inevitably we will assimilate the best of the AI technologies that appear.
From that vantage point, the idea of the cyborg teacher could be framed as a compelling and exciting future rather than a threatening one. Adoption of AI is something we can pursue intentionally, not something done to us by external forces.
According to Neil Selwyn, author of “Should Robots Replace Teachers?,” distinguishing between “human teachers” and “robot teachers” is not a matter of people versus machines.
Instead, he says, “We are concerned with how different sets of people are entwined with machines and software in increasingly complex and closely connected ways.”
That’s not to say no resistance will occur. While AI technologies offer some truly exciting ways to reconceptualize the role of the teacher, that reconceptualization is itself highly disruptive.
Where might we see resistance?
- Where parts of jobs disappear and, instead of that freed time going into better teaching for more students, wages and hours get cut.
- Where the use of AI causes active harm to groups of students because of bias or unfair, inaccurate decision making.
- Where the use of AI is not transparent, open and explicable.
- Where we continue to be wary of experimentation because the cost of getting it wrong is too high to bear.
- Where external regulators or the different power relationships within our sector make us more conservative in our approaches.
- Where we’ve gutted our ability to support experimentation by hollowing out our digital capabilities through outsourcing.
- Where we’ve not invested in digitally knowledgeable senior leaders in institutions, despite all our talk of digital transformation.
Tackling these areas of resistance will require ongoing investment in people and culture change — the assimilation of new knowledge and ways of working — as well as new technology.