How AI Could Change the Way We Teach and Learn: A Look into the Future

An Interview with Brian Lamb, Director, Learning Technology & Innovation at Thompson Rivers University


A longtime educator and technologist, Brian is a leading voice in open education and digital learning. Over his 25-year career, he has championed inclusive, human-centered approaches to educational technology and has been instrumental in exploring how tools like AI can support equity, creativity, and learner engagement across post-secondary education.

 

Q: What do you see happening right now with respect to AI and teaching and learning?

Brian: I struggle to find a coherent narrative. Since generative AI tools became widely available in something like their current form, I have encountered a great deal of enthusiasm and optimism from some parts of my university: “This is going to revolutionize our practice and take us to some new frontier that we cannot even begin to imagine!” Often in the same room at the same time, someone else will suggest that AI “will be the ruin of our profession and will rot our brains and curve our spines.”

On a personal level, I have really enjoyed some of my encounters with these tools. I find a large language model interaction to be a very strange and interesting, almost uncanny, experience. I do enjoy playing with the edge cases and seeing how to bend AI and how it responds. I have been troubled too. This is shaped by how I've experienced digital technology in general over the last 25 years. I think all the gains in terms of networking and resource development and analysis tools have come at a cost. Some of our technology use and understanding is no longer fit for purpose. My attention span is shorter than it was 20 years ago. At least some of that is due to my media habits, the way I work and the way I do problem-solving now. I'm worried about the effects from this wave of technology on cognition.

A lot of the interactions I have with faculty here are focused on the assessments they have designed for their courses — they are worried that those assessments are now completely obsolete. They say, “Give me a tool to make it go back to the way it was so I don't have to deal with the problem anymore.” And I am sympathetic to that. This is not an easy time for a lot of people, for all sorts of reasons. Some of the faculty who are contacting me are precariously employed, they don't get a lot of time for preparation and design, and they may have very large teaching loads. My job as a learning technologist is to serve faculty at my institution starting from where they are and their reality. I don't ever want to sound dismissive of their concerns. It's literally my job to help them navigate technology. We can't preach. Instead, we have to partner, engage and create alliances to navigate to the “next” place.

We really do need to question some of the transactional structures that lie at the core of how we assess and validate learning. That will be a massive transformation at the core of curriculum and practice. It's not something you can fix with a few workshops or Zoom sessions. We need to explore the long-term implications in collaborative and engaging ways.

 

Q: What do you see happening with AI in the near term — say the next 24-36 months? Some think that artificial general intelligence will begin to appear in this window, but others are not so sure. Whatever happens, AI is getting better, faster, smarter. What are the implications of this for teaching and learning?

Brian: I think some people are going to do some really interesting things. There are going to be examples of relatively low-budget applications where people use these AI tools in clever ways. I already see immensely impressive work on strategies for inclusion of diverse learners. Very soon there should be no excuse not to have accessible practices across all online learning, because the costs of things like captioning and alt text are crashing. And that's a great thing.

I am also sure there will be examples of people taking some of these tools and inventively creating persona-imbued tutors with all sorts of backgrounds and perspectives, highly capable of teaching and interacting with students in seemingly authentic ways. I saw a demonstration of this recently — it was impressive work and done without any real budget.

On the more negative side, I am still waiting for the larger vendors to start putting out product improvements that genuinely impress me. I have been lukewarm about what I've been seeing from the major vendors in terms of how they're incorporating AI into effective teaching and learning. I don't think they've figured out how their education business models work. The most powerful AI tools are still quite expensive on a per capita basis.

I think there'll be some careless and destructive use of AI in admissions, in financial aid, in grading and assessment. And I worry that rolling out some of this before we've really got a handle on it and really have our heads around how we feel about it might prove to be a mistake. I think some people are going to be harmed and when this occurs, those people are usually the most vulnerable. People will lose their jobs to AI in our sector. And again, we might realize too late that we lost some real judgment and capacity when we made these moves. I think it's going to be a potent mix of effects.

The cost associated with this stuff will have to come from somewhere — it is not as if colleges and universities have a lot of cash floating around. We cannot afford a lot of the technologies that are about to appear. Budgets are already tight and workloads high. Finding the means to adapt will be immensely difficult.

 

Q: Think now about the long term — three to five years out. What do you see AI enabling by then? Think about AGI arriving in this time window.

Brian: I am not immersed enough or literate enough to assess the cutting edge of scholarship and research in this area. I will accept the question, but I'll believe in AGI when I see it. Let's put it that way.

I have a hard time imagining that our culture will be better off because of it. Usually the bold visionaries around this stuff say things like, well, of course we're going to have to have universal basic income. And of course we're going to need a massive realignment of society or else it will collapse. And honestly, just looking at the trend lines of the last 20 years, I cannot see a more generous social safety net evolving in the Western world, much less in the developing world, than we have now. More likely, social and economic safety will decline rather than improve.

When I look at who stands to benefit from even the current wave of AI development, these do not strike me as socially progressive people. Elon Musk said recently that the fatal flaw at the heart of Western society is that we have too much empathy. I can put myself in the place of imagining a world where a lot of the toil and a lot of the stupidities and inefficiencies that bog down our daily lives are eliminated by effective and low-cost AI. And maybe that allows us to have revolutionary change in, you know, energy technology and our response to climate and health issues. I suppose it's possible. It doesn't align with any of the historic realities that I've seen, but maybe that's just my lack of imagination.

I look at where we were when I entered this field, about 25 years ago, when the Internet was becoming widely available to people and worldwide adoption was happening. My first couple of years were as a teacher in Mexico with the Tec de Monterrey system, and they had their own satellites and a very powerful Internet network for the time. As a relatively new educator, being able to connect with other educators worldwide and get lesson plans and resources and readings every day was invaluable. And having my students connecting with people around the world felt incredibly exciting. Coming back to Canada, I encountered the emergence of self-publishing and blogs and wikis. I saw Wikipedia’s rise as an almost utopian technology, you know, that a worldwide collection of volunteers would come together to build something so impressive. I was very enthusiastic. I had what I now realize to be a shockingly naive point of view. I thought, “How could a technology that connects people worldwide be anything but a good thing?”

I saw that we would soon have automatic language translation, and all these ways to connect with and understand one another. How could we not be more enlightened? We were going to have unlimited access to free knowledge.

Even then I was uneasy as I watched the Facebooks of the world encroach on what had been a decentralized and more democratically owned network space. And now we watch the triumph of disinformation and we're seeing how AI sort of feeds into that. I hope I'm wrong, but I look at the people who are most well-positioned to profit from whatever's coming and how they're motivated, and it worries me. We’ve seen AI directly applied to some of the most vicious and inhumane practices in recent years.

I think the prospect of unlimited profit will direct the development of AI in some really perverse ways, and be very harmful. I hope I'm wrong. Thankfully, I frequently am.

 

Q: Any final thoughts?

Brian: I would just like to say, for all the pessimism, I don't think we have the option of sitting this out. I'm not a prohibitionist on AI and I don't boycott the technology — I try it constantly. I do find it very useful for certain things, especially administrative tasks. And I can see why people get excited. I perform little experiments where I have it take on personas and try various improvisational scenarios with it and things like that. Those kinds of strange interactions — having this incredibly powerful language machine that even the people who know it best don't quite understand or can't quite predict — are pretty cool.

For example, I'm not a programmer, but I've been playing with these tools to try programming, partly because I really don't understand its fundamentals. At one point the AI was guiding me through installing a Python library, and I was getting an error because I didn't know what I was doing. I cut and pasted the error I was getting. The AI immediately said, “Oh you clearly have a level of understanding that is very different than what I was assuming!” It then completely revamped its instructions and started being much more basic, more literal and prescriptive about what it was doing. And I just thought “Wow!” When a teacher does that — picks up on my struggle and then recalibrates their level of instruction to me that fluidly — I consider that to be strong teaching.

I wish I were reading more from colleagues around the world, sharing recipes and outcomes of the things they're doing in detail — sharing their prompts and their process rather than just telling me how this is going to change my life. How are you using it? How do we do this better? I need more sharing and less preaching!

 

 




teachonline.ca by http://contactnorth.ca is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
