If you’re unsure what messages to give faculty about artificial intelligence (AI), you’re not alone.
First, stop and take a breath.
Second, stall or pass on the inevitable offers from consulting companies selling expertise on how to leverage AI in your university or college.
As educators, it’s our job to be curious and to systematically research and test innovations, sorting valid tools from inept ones. The same applies to the (not so) new AI technology that is taking the world by storm due to its easy user interface.
We also need to strongly resist the “catching up” dogma, or the formulation of knee-jerk responses. Education has wider purposes than the constant adoption of technologies, especially when they come with a high price tag and slick marketing.
While we’re always being told that we’re too slow to change, and that the private sector is coming to eat our lunch, we are simultaneously reminded of all the changes we’ve accommodated. How many pithy references to universities adapting to the invention of the electronic calculator have you read recently as part of the AI change discourse?
Next, and it cannot be stressed enough, is DO NOT START AN EDTECH ARMS RACE, especially in areas such as academic integrity. The only winners there are edtech companies.
Heading down this path will increase your operational costs and compound your administrative challenges. The volume of student appeals and academic integrity investigations will rise in inverse proportion to how trusted and fairly treated students feel.
And don’t think that wrapping the veracity of your credentialling in a veneer of technological validation will serve you in the long run. The more of our core business that we outsource today, the faster the unbundling of our institutions will be tomorrow.
However, the robots are not just coming. They’re already here.
The AI du jour is ChatGPT.
We seem to have lost our collective minds freaking out about students using it, while at the same time making spectacularly bad integrity decisions ourselves. Arguably, for the previous three years it was AI proctoring in the spotlight, with pandemic horror stories of racial and disability bias playing out against a backdrop of widening inequity and polarization in the world.
That these technologies will come and go is an absolute certainty. What we need are the critical faculties and the capacity to respond, and we need to remember that new AI technology (indeed any new technology) will be an academic and people issue first.
So do not ask your CIOs or IT departments to lead the institutional response.
Resist the temptation to go backward, too. More in-class assignments and hand-written essays are certainly not the answer to something like ChatGPT. We can do better than reverting to less accessible forms of already questionable assessment practices.
The specific detail of what we need to say and do will vary based on what new AI technology we’re confronted with, but some big messages will hold true in most cases:
- Human teachers remain fundamental to education. Our imagination and creativity cannot be fully replaced by AI.
- Ethical applications of AI in education do exist but we need to be appropriately sceptical and consider which groups have been (or could be) harmed by these technologies.
- AI technologies are not going away and are part of a larger cycle of technology change. We need to build sustainable capacity to respond appropriately, not panic wildly on a two- to three-year cycle.
- Confrontation with these new technologies will force us to examine what we do and why we do it, and this is healthy. We should embrace these moments as opportunities to learn about ourselves and change our practice, as much as learning about the new shiny thing.
- Opportunities to work with our new robot colleagues exist. AI technologies will change our roles and our work, but there are more generative and exciting possibilities to be explored than “teacher efficiency” and increasingly precarious working conditions.
Invest in people and trust first, technology second
Beyond high-level messaging, we then need to actually do the work to create a safe climate for exploration and open spaces for dialogue and critical questions. Can you bring together your university community to hear both concerns and creative ideas for exploration and change? Symposiums, guest speakers, workshops … We tend to be rather good at this kind of thing anyway, so let’s lean into those strengths.
Take the following three steps:
- Recognize that some of your faculty may be anxious and provide reassurance. Do they have concerns for their jobs? Are they worried about academic integrity? Do they feel the institution is lagging behind? This is another way in which these moments tell us something about how we are today, so maybe also have a think about what these concerns mean for institutional capacity to change.
- Resist the rush to formulate policy and start with permissive guidelines that create space for exploration and play, with the appropriate guardrails around academic standards and ethical behaviours. Where possible, work to co-create these guidelines with students. Maybe this is also the time to check that you have some policies on the ethical use of students’ personal data on which to base your guardrails.
- Finally, consider how digitally literate your colleagues are and whether that might be another people area you need to invest in. What kind of digital leadership do you need on the academic side of the house? A senior role? A visible presence in your teaching and learning centre?
Keep in mind this nugget of wisdom from Neil Selwyn, author of Should Robots Replace Teachers?: “AI is not an imperative that education needs to adapt to, catch up with or be reshaped around. Instead AI presents the education community with a series of choices and decisions. It is crucial that these choices and decisions are engaged with by as many people as possible.”