An Interview with Peter Scott, President and Chief Executive Officer of the Commonwealth of Learning
Professor Peter Scott is the President and Chief Executive Officer of the Commonwealth of Learning (COL). His career reflects a deep commitment to open and distance learning, characterized by innovative leadership in a range of academic and administrative roles. Peter began his career in academia as a researcher and lecturer at the University of Sheffield, UK. His expertise in open learning innovation was further honed during a 20-year tenure at The Open University, UK, where he directed The Knowledge Media Institute, specializing in artificial intelligence and education. In 2015, Peter joined Australia’s University of Technology Sydney as Pro Vice-Chancellor, after which he became President of Athabasca University in Alberta, Canada, in 2022. Across all his roles, Peter has been a pioneering force in educational change, contributing significantly to the global transformation of education through technology.
Q: How will artificial intelligence (AI) change teaching and learning in colleges and universities in the next 3-5 years?
Peter: I should begin by setting the context that the Commonwealth of Learning is an inter-governmental organization focused on the sustainable development of education through innovation in the 56 Commonwealth nations. We work in a landscape of chronic learning inequity, with significant gaps in how the rich and poor can access learning and the potential positive change it offers. I'm describing this inequity as chronic because it is longstanding and even baked into our conventional educational systems and approaches.
Sadly, the equity gap in learning is worsening under mounting climate, social and political shocks. For instance, the world of work is changing quickly, with “time in a job” dropping consistently year-on-year (e.g., in the US, the Bureau of Labor Statistics reports that median job tenure in 2024 was down to 4.2 years for men and 3.6 years for women). And each year, worker and employer surveys report that the gap between the skills workers have and the skills employers need is widening. Just as post-COVID workers have been slow to return to a physical workplace, post-COVID learners have seen some better ways to learn. We can try to force them back, or we can help them forward.
Fortunately, there are already things offering forward momentum through these challenges: increased access to technology, online networks, powerful mobile devices and more. For example, year-on-year, more countries are creating online and “open universities” providing access beyond conventional tertiary institutions to those who cannot get to a classroom, fit a fixed timetable, or stop work and take on a huge learning debt to reskill. “Open,” for us, is already a big disruptor. Can AI join those positive disruptors?
Yes, absolutely. How, you asked?
Well, it is our hope that it will do so “in the open,” with models, agents and bots that are valuable partnership services and shared tools to empower conventional institutions and teachers to change and adapt to new needs, as learners and work are already changing.
We have to remind ourselves that AI is not something that just appeared from nowhere as the large language model-based generative AI (GenAI) of today. This has been cooking for a long time! Intelligent tutoring systems modelled as “replacement teachers” were being developed from the late 1970s. Virtual learning environments, which began to empower teachers rather than replace them, followed in the early 2000s. We also started to dig deeply into student behaviour with machine-learning-based learning analytics in some pioneering work at the Open University in the UK in the 2010s. Now, with this generation of GenAI, we can see colleges and universities experimenting at scale with all those historical ideas and getting them to work: virtual tutoring, chatbots that support in-depth learning and formative assessment, adaptive and personalized learning, delivering and grading assessments, and automating a lot of administrative tasks, especially in the admissions process. Carefully deployed, the new GenAI — with good prompts and quality assurance — is powerful in saving time in existing processes and streamlining current learning workflows. There are still plenty of challenges in framing these as sensible services, and concerns about how smart learners will use them to jump past poorly designed assessments, but exposing poor practice in teaching and learning is not in itself a bad thing.
What is still missing is a coherent, integrated approach to using AI to accelerate learning openness beyond those simple improvements to our learning “business as usual.” What I am most interested in is those use cases that offer to sustainably change inequity in our systems to let us reach beyond our current practices to a greater number of learners and empower them, regardless of their background and initial skill level. That said, AI on its own is not enough of a game changer given the challenges we face. We need to link the use of AI to a commitment to open, accessible education where the AI is used ethically and responsibly to achieve outcomes that matter in a local context.
Q: Do you have any examples of what this might look like?
Peter: Absolutely. The Commonwealth of Learning is rolling out a wide variety of pilot GenAI services to our member nations, generally focused on ensuring that humans remain in the loop of anything new and that the technology is used to empower rather than replace. Initially, of course, these have focused on simply saving time and increasing productivity in places where teachers are precious resources. For instance, in Ghana and India, we've been supporting the AI-powered generation of open educational resources, where the AI can act as a learning-design support to mentor teachers in creating subject-specific resources. And in the Pacific, we recently rolled out a course to speed up lesson-plan production for classroom teachers. In one recent workshop on this course, 40 STEM educators generated 457 good lesson plans in a single session, of which 100 met our highest quality-assurance standard and were published. In areas where teacher time is costly and already very stretched, freeing that valuable time for use elsewhere is very impactful.
A more exciting option for our context is that offered by open-source AI models that can be created and then shared for on-device and offline deployment. For many countries, routine access to the expensive, compute-intensive network resources that training and running this generation of GenAI requires is not viable. Indeed, for many years now, we've been deploying offline resources such as our Aptus server, which can operate as a pre-loaded resource hub where Internet access is unavailable or unreliable. Pre-loading services onto devices like this — or even standalone PCs or (in future) mobiles — to act as an on-device AI service is a very promising direction. We've already successfully integrated models from the LLaMA, Falcon, Qwen and Microsoft Phi series into local offline services to support various educational applications in remote contexts.
Whatever services we've been creating, we have some clear foundational principles:
- A real person — a tutor, teacher, or teaching assistant — needs to be in the loop of an AI service. This is not just for technical help but also for engaging learners with the materials, skills and resources their interaction with the AI produces.
- Any AI-generated material requires quality assurance and critical evaluation — since outcomes are not always correct, reliable and thorough. Results must be carefully refined and, even now, checked by a qualified human. Indeed, humans being in the loop is still the best way to offer appropriate guardrails against unsound or potentially dangerous outcomes.
- AI is not a panacea; it is part of the answer. We still need to imagine and design an integrated, engaged learning experience that will achieve a transformational outcome. As many are now recognizing, we need new learning designs to make the best use of new AI affordances.
You will see more in the coming years from the Commonwealth of Learning as we push the boundaries of learning, leveraging AI as a change agent.
Q: What’s getting in the way of the more extensive use of AI in higher education?
Peter: As we work across our varied Commonwealth countries, we see several challenges:
- Many governments do not have ready access to the AI knowledge, skills and capabilities they need. They are subject to a great deal of vendor hype, which gets in the way of good policy-making grounded in an in-depth understanding of both AI's capabilities and its limitations. Also, policy development generally lags behind practice, so governments (and the governance bodies of institutions) always feel they are playing catch-up.
- Institutional policies are still behind this curve in every country. Some colleges or universities push the use of AI while others constrain it. Much work is required to help institutions share experiences, collaborate on good practices and work together on common solutions.
- The AI landscape is changing fast. With each generation of tools and each iteration offering major improvements almost daily, the pace of change can be overwhelming.
- AI developers think they know what teachers and students want, but they generally don’t partner with them to create the tools. This is a problem — it leads to a particular understanding of learning being embedded in a tool rather than a range of learning options being available when the tool is deployed. Think of a behaviourist approach to mathematics versus a constructivist or collectivist approach: teachers and students need choices.
- Current AI offers for education are still very instructor- and institution-centric. Our experience over the last 50 years reminds us to focus more on the learner and the learner’s journey. We must help learners develop the skills to take charge of their own learning. This is the major innovation needed to transform higher education.
- AI, like many technologies, currently requires expensive access to high-quality broadband and the latest end-user devices. But affordable, high-quality broadband or the technology needed to fully leverage AI is not universal, so creating on-device and offline open-source resources will be key.
- And finally, ethics and responsibility must be at the forefront of our thinking in our policy and support of these new tools. A responsible AI licensing perspective is needed: to care about how models are trained and about the ownership and citation of the sources of training data; to understand and manage inherent biases and privacy risks; and to take even more care in quality assuring the outputs and their use.
The message of all of this is clear:
- We need better open access in the emerging industry with clearer regulatory drivers for accessible AI models, training and prompts.
- We must improve sharing and collaboration to create effective learning experiences co-designed with learners and teachers, not just developers.
- We need innovative and imaginative combined approaches to leverage policy, practice and technologies, including AI.
- And that last bit is critical: we need more agile regulation and policy frameworks to secure the personal, social, economic and community impacts we intend in a rapidly changing world.
It is an interesting time. Change is happening quickly, so we need to ensure that the disruption it creates to our conventional thinking has a positive trajectory. And particularly to ensure that it can help us to finally reach those that our conventional systems continue to leave behind.