An Interview with David Wiley, Chief Academic Officer, Lumen Learning
An interview with David Wiley, Chief Academic Officer of Lumen Learning, which he co-founded. A distinguished teacher and researcher, David is one of the founders of the open educational resources movement. He is adjunct faculty in Brigham Young University's graduate program in Instructional Psychology and Technology, where he was previously a tenured Associate Professor, and Director of The Brad D. Smith Student Incubator in the Center for Innovation and Entrepreneurship at Marshall University.
Q: What do you see right now in terms of colleges and universities using artificial intelligence (AI) for teaching, learning and assessment?
David: It seems to me like there's very much a sense with AI in education, at least from the supply side – from the institutions and the faculty and administration – that AI is a tool that can help us do all the things we already do faster, cheaper, better.
I would characterize what's happening now as being the “horseless carriage” era of AI in education. By that, I mean the only reference we have for thinking about AI is what we've known and done before. And the things that people are trying to do, or the things that I'm seeing people try to do, are all things that we've done before – they're just using AI to do them a little faster, a little less expensive, hopefully a little better quality. For example, they’re still producing static learning materials like textbooks, but writing them faster. Or they’re still creating multiple choice assessments with diagnostic feedback for every distractor but doing it faster and less expensively.
I haven't seen much of an appetite yet for a large-scale reimagining of what could happen or what might be possible, especially now that AI is operating largely at a post-doctoral level.
I think the zone of proximal development is true for our imaginations as much as it is for learning: there's only so much you can imagine on your own. When you have somebody who has a little different perspective that can show you a concrete example of something AI can do that would never have occurred to you before, then that can stretch your imagination out in the direction of that example.
I spend a lot of my time just trawling X and LinkedIn, looking for examples of things people are doing that showcase the power of AI as an innovation resource. Almost none of the examples are in education – I’m just trying to see new examples of new things people are thinking about to try to push my own imagination bubble out further. To slide my ZPD a little to the right. And I really think this is the only constraint – it’s just hard for us to imagine genuinely new things. The affordances of AI are just so new that we don't really know what AI is capable of yet. We just don't know what to do with these affordances. We don't have enough experience with them yet. Hence the “horseless carriage.”
I also think faculty are stuck playing defense. All the oxygen in the room – all their brain power – is being used up thinking about how to combat AI cheating and academic misconduct. It doesn’t leave any room for asking “how can I make more innovative uses of AI in teaching and learning?”
So the effect students are having on the adoption of AI in education – and I acknowledge it is not what they're trying to do – is to unwittingly slow down progress with AI. They're causing faculty and administrators to spend all their time trying to figure out how to stop them from cheating, which leaves them no time to imagine the new things they could do.
Yesterday, after a keynote, I had this question in the Q&A: “What recommendations would you have for university policy around AI?” And I said, just don't make one! I gave the example of the early days of the Internet. In 1993, many of us got our first modems and got online for the first time. Now imagine if in 1993 or 1994, we set university-wide policy about how the Internet could be used in education. We would have messed that up royally! And at colleges and universities, once policies are written down, it takes a miracle to change them. Given the speed of AI development, I think it is much too soon to set use policies in place.
Q: When you look in the longer term – say 2030 and beyond when Artificial General Intelligence is likely to be with us – what do you see happening then?
David: I believe the majority of people involved in higher education, especially in the USA, don't actually understand what is propping up the higher education system.
I think there's literally only one thing propping it up, and that is the fact that many job descriptions say that a diploma or bachelor's degree is required to be eligible to apply. If you took that requirement off the table – if the President signed an executive order saying that it's illegal to require a college diploma or degree for any job application – I think the bottom would fall out of the US higher education market and enrollment would plummet. The ecosystem would be in crisis.
This link between credentials and jobs is paramount. Why else would you invest four, or five, or six years and go a hundred thousand dollars or more into debt unless you thought that investment was going to pay off in terms of better job prospects? I’m sure some people would still go.
But I really don't think we appreciate the degree to which all the madness around the costs and structure of higher education is propped up in this way. Why would anyone be willing to do all these things unless, at the end of that rainbow, there's a promise of a better job for having this credential? If that promise of a better job somehow became untethered from that credential, then I think a lot of bad things would happen for higher education.
I think challenges around the relevance and value of higher education are going to be compounded by AI. Many tasks performed by people in many jobs are going to be satisfactorily accomplished by AI agents (as we can already see). For example, if you've got a team of salespeople who are doing cold calling, outbound lead generation, you're going to replace most, if not all, of those people with AI agents, as companies are already doing.
The whole ecosystem of higher education as we understand it now has to change.
Other parts of the ecosystem are changing dramatically, and ecosystems have to stay in balance or bad things happen. Imagine if higher education hadn’t changed in response to the Internet. Imagine if we still stood in lines with cards to register for classes. I think many people don’t appreciate how much about higher education changed because of the Internet. It will change more because of AI.
Our institutions sit between students, the qualification, and the job. As jobs change to take account of AI and the qualifications required for workers to partner with AI agents change, that diploma or degree needs to be transformed to match the demand. And that’s a problem, because historically our colleges and universities have been less aware of their interdependency with employers. It’s also a problem inasmuch as higher education’s monopoly on the credential that leads to high-paying employment has allowed it to ignore students’ desires and wishes. Students have nowhere else to go, so why be responsive to them?
We need to ask: what would it mean for a college or university to really be student focused (in the way the best organizations are customer-focused) and to be able to demonstrate to students that their learning and activities in class are actually important and meaningful and relevant to them? That they will be learning things that will help them at work and in society and in their lives?
I’ll now get to your question: what does higher education look like five or 10 years from now? If our colleges and universities can’t or won’t adapt, I think we’ll see competition in the form of unbundled services. What would it mean if you had one provider who was doing instruction, another provider that was doing assessment, a separate provider that was doing the credentialing?
Students could secure their learning from any source and be assessed and credentialed at any time – isn’t that the obvious direction of travel for AI and our ecosystem? This would create an opportunity for employers to step in in some way: for on-the-job training to count toward degree or diploma requirements, and for the expansion of the kinds of apprenticeships that community colleges do a really nice job with and that four-year schools typically shy away from. These unbundled services might be offered by entirely new entrants responding to customer demand, or they might be offered by incumbent colleges and universities, or both.
Q: You mentioned the affordances of AI (the possibilities) – tell me more about that.
David: I think the affordance of generative AI that is really unique in the context of education is that generative AI removes “access to expertise” as an obstacle to learning. So when it's midnight, and you're doing your homework, and you've got a question, and you need somebody who knows enough to be able to answer it for you, and you look around, and of course, your teacher's not online, and your study group people aren't available. You get stuck because you don't have access to the expertise you need right then.
Within higher education, where in the system does the provision of expertise “live”? It mostly lives with faculty. Given the trajectory AI is on, what role is left for faculty in the future? Do some universities become pure research institutions or think tanks? Maybe the faculty role becomes more focused on getting to know students. Building relationships with them. Caring about their success and demonstrating that care. Engaging with them and inspiring them through supportive and active coaching, guidance, and encouragement.
Honestly, I would be thrilled about the prospect of spending more of my time doing that and less time standing in front of a large group lecturing.
Q: Any final thoughts?
David: I have to admit, five years ago, I was a total non-believer in AI and its potential. I was a fan of work like Dreyfus’ What Computers Can't Do and just saw hype everywhere. I thought the claims for AI would never turn into anything, at least in my lifetime.
And now in 2025, I have completely eaten my words and had to change my beliefs about AI, because they're not about beliefs anymore – I can see it with my own eyes.
AI is already helping me improve my students’ experiences (according to them). And AI can make so much more of a difference in teaching, learning, and assessment. We’re only barely scratching the surface now. It’s like it’s 1993 and the Internet all over again.
I’m incredibly enthusiastic about and excited for the future.