AI is not new, and it will continue to reshape how teaching, learning, assessment, student support services and financial and operational systems work.
There are issues, but that is the case with all innovation and change. The challenge is to embrace the opportunity AI affords without deploying it in ways so disruptive or inappropriate that the investment becomes a costly mistake.
In the UK, the Russell Group of leading universities issued a set of principles for the deployment of AI across its 24 member universities. The five principles are:
- Universities will support students and staff to become AI-literate
- Staff should be equipped to support students in using generative AI tools effectively and appropriately in their learning experience
- Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access
- Universities will ensure that academic rigour and integrity are upheld
- Universities will work collaboratively to share best practices as the technology and its application in education evolve
Other university and college systems have responded in similar ways, although several are more deeply concerned with academic integrity and plagiarism than is evident here. Some institutions initially forbade students to use large language models for their work, blocking access to these technologies through their network controls.
What should be the framework for an approach to AI deployment in Canadian colleges, polytechnics, Indigenous institutes and universities? Here are some suggested guidelines.
Ethical AI
Colleges, polytechnics, Indigenous institutes and universities should adopt an approach to AI based on both ethical principles and sustainability. There are a range of frameworks for ethical AI, all of which have these features in common:
- Inclusive: Significant efforts are made to ensure all students have access to and support for their use of AI, rather than AI serving only the privileged. To make this effective, transparency of AI and exposure of bias within AI systems are essential. The intention of deploying AI should be to make education more accessible to all, not less so.
- Empathic and human-centred: Although accuracy and appropriateness of responses are critical, AI systems intended to interact with people should be empathic, warm and genuine. They must be able to respond not just accurately but in a tone and manner that reflects the user's identity and becomes increasingly sensitive to user needs.
- Transparent and explainable: Transparency enables users to understand how an AI system is developed, trained, operated and deployed so they can make more informed choices and judgements about the outputs such systems produce. Users need to be able to understand how an AI or analytics system reached its conclusions: what were its sources of information, and how was it trained to use and interpret those sources? AI systems must be continually trained and improved to reduce the incidence of incorrect and nonsensical material and references they produce.
- Robust, secure and safe: To function, AI systems need access to significant datasets, including personal data about students, their backgrounds, performance and interaction with college or university systems. Such AI systems need to be able to withstand cybersecurity threats and be safe for students and staff to use. Colleges and universities are a major target for such attacks.
- Accountable: Organizations and individuals are expected to ensure the proper functioning, throughout their lifecycle, of the AI systems they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks. They must demonstrate this through their actions and decision-making processes (for example, by documenting key decisions throughout the AI system lifecycle, or by conducting or allowing audits where justified). AI systems must also meet the regulatory and legal requirements that all university or college staff must meet, for example concerning disabilities, exceptionalities and privacy.
- Sustainable: AI is a major consumer of energy; by some estimates, the CO2 emissions associated with AI systems and the cloud data centres that support them rival those of the aviation industry.
Other concerns that must be addressed
Impact on work: Deploying AI in a post-secondary institution needs to be seen as a way to support instructional staff, student service staff and administrators. The risk is that institutions will see AI as able to completely replace functions and activities, despite the known risks of AI systems. Unions will want assurances that AI is being deployed to support and expand the work of colleagues rather than replace them. Indeed, the emerging evidence is that the deployment of AI both changes the way in which existing employees work and adds new work opportunities. This should be the intention.
Bias and inequality: The larger concern will be with bias and the impact AI can have on equity and inclusion. AI systems have been trained using material "scraped" from the Internet and other significant data sources. The predominance of the northern hemisphere in these materials, and the lack of material from key sources (Indigenous knowledge, and knowledge and expertise from ethnic minorities) in favour of dominant views (especially in science), are all problematic. In any deployment, especially of analytics, rigorous efforts must be made to uncover and respond to bias. Otherwise, AI will contribute to growing inequality rather than enabling equity. Institutions need to find ways to add new materials to AI systems that help reduce bias and increase access to Indigenous knowledge.
Recognition: Authors, musicians, artists and other creatives have expressed significant concerns that their work is being shared and distributed, even though it is copyrighted or made available under a Creative Commons licence (which requires acknowledgement of the source). Some legal challenges are also under way. Post-secondary institutions, in their work educating students and faculty in the use of AI systems, need to be mindful of these concerns and address them, encouraging all users to seek out and fully acknowledge source material.
Costs: There are real costs to the deployment of AI. Not only do specific applications require payment (for example, Microsoft 365 charges each user US$30/month for adding Copilot to its suite of products), but they also take time to learn and have consequences for cloud storage costs and CO2 emission management. Full cost calculations are needed to ensure that deployment produces a return on investment.
The future
Each day about 30 new applications of AI are released, some of which duplicate the work of other products. It is becoming difficult to keep track. Although post-secondary institutions need to encourage experimentation and exploration, they also should base deployment decisions on the principle that such a deployment will increase trust in evidence, sources, processes and outcomes. The return on investment might be thought of not just in terms of enhanced productivity but in terms of deepening trust in the work of the institution.