This article isn’t about the problems AI produces or the issues we face when it goes wrong. Numerous accounts of the weaknesses and risks of today’s AI systems can be found elsewhere. This article focuses on what will be developed when AI works, and how these developments will influence learning and development.
While leaving room for the AI sceptics — sure, it might all fail — we argue there is already enough evidence to suggest its influence will be far broader and more transformative than most people expect. AI will change how we learn, what we learn and why we learn.
Here are five ways it’ll help personalize the learning experience.
1. Beyond Books / Disposable Learning Resources
In the near future, we will no longer need to produce or pay for textbooks. Although there are still some limitations, we have already seen how well generative AI (GAI) can create new content. When seeded with authoritative contextual information, such as an operating manual or set of procedures, GAI can take the place of a textbook, in-person help desk or set of reference materials. Context-free generative AI may “hallucinate,” but context-bound AI will stay on track (although it may interpret text more literally than we intend).
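The grounding step described above — seeding a model with authoritative context before it answers — can be sketched in miniature. The Python below is a toy illustration, not any vendor's API: `retrieve` and `grounded_prompt` are hypothetical names, and the word-overlap "retrieval" stands in for the semantic search a real system would use.

```python
# Toy illustration of "context-bound" generation: before a model answers a
# question, retrieve the most relevant passage from an authoritative source
# (here, a mock operating manual) and prepend it to the prompt. The model
# call itself is omitted; the point is the grounding step.

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

def grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt instructing the model to answer only from the context."""
    context = retrieve(question, passages)
    return (f"Answer using ONLY this context:\n{context}\n\n"
            f"Question: {question}")

manual = [
    "To reset the device, hold the power button for ten seconds.",
    "The warranty covers defects for two years from purchase.",
]

prompt = grounded_prompt("How do I reset the device?", manual)
```

A production system would embed the manual into a vector index and retrieve by semantic similarity, but the principle — the answer is constrained by supplied context rather than the model's open-ended training data — is the same.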
This is just the beginning. GAI is transforming the act of reading from one where we follow the content in whatever format the author has created for us, to one where we interact with the content by asking questions, having a conversation or seeking advice. Google’s new LearnLM automated tutor is specifically designed to support a conversational mode of learning.
Although some people prefer to read books cover to cover, it is easy to see the advantages of the new system. A GAI tutor can be trained on work it would take us hundreds of years to read. It can find information it might take us days to look up. And it can present it to us directly in response to whatever question we might be posing at the moment.
These new resources pose a challenge, however. We can no longer assign students a body of content to learn and then test how well they have remembered it; students can simply ask the AI. This has already made traditional tests and exams irrelevant. What to do? In Nevada, students are being challenged to write better papers than AI can. Students will become fluent at working with an AI resource on any subject. Our definition of learning will shift from knowing what the answer is to knowing how to ask the question, and to defining what makes one response better than another.
Another challenge concerns the data we provide these AI tutors. What do we mean when we say we will train them with “authoritative” resources? China has deployed a large language model based on President Xi Jinping’s political philosophy. It says it is “secure and reliable.” What would prevent something similar from happening in western democracies? How can educators be sure that data provided by governments, corporations and publishers can be trusted?
2. No More Tests / Working With, Against and Around AI
Even though new technologies, such as digital badges, are bridging the gap between employer requirements and employee skills and competences, learner assessment remains a difficult, time-consuming and expensive process. And it’s not even clear we are assessing the right things in the first place!
Artificial intelligence will first have an impact on the generation and management of skills testing records. Already, AI systems are being used to match potential employees with new work opportunities through the use of skills profiles. Projects like Micromissions in Canada and the Mastery Transcript Consortium in the U.S. are examples of this.
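As a rough illustration of how such matching might work, consider scoring the overlap between a candidate's recorded skills (from digital badges, say) and a role's requirements. This is a deliberately simplified sketch: the skill names and the `match_score` function are hypothetical, and real systems use far richer skill taxonomies, levels and weightings than plain sets.

```python
# Hypothetical sketch of skills-profile matching: score how well a
# candidate's recorded skills cover a role's stated requirements.

def match_score(candidate_skills: set[str], role_skills: set[str]) -> float:
    """Fraction of the role's required skills that the candidate holds."""
    if not role_skills:
        return 1.0  # a role with no requirements is trivially matched
    return len(candidate_skills & role_skills) / len(role_skills)

candidate = {"welding", "blueprint reading", "safety procedures"}
role = {"welding", "safety procedures", "crane operation"}

score = match_score(candidate, role)  # 2 of 3 requirements met
```

Even this crude score shows why machine matching scales where manual résumé screening does not: the same function can rank one candidate against thousands of roles, or thousands of candidates against one role.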
At a certain point, however, we will be able to stop testing people entirely. Artificial intelligence will be able to assess their skills and capacities by directly observing them at work. Just as a master carpenter can quickly tell whether a person is a skilled woodworker, so too will AI be able to watch a student at work and draw the same sort of conclusion. This capacity has been demonstrated in projects such as the MuFIN multimodal feedback generation project at Carnegie Mellon.
How will these systems know what to look for? We could tell them what we want successful candidates to achieve by constructing skills profiles for different occupations. But AI is capable of a much more detailed examination of performance data than human researchers, and will capture many aspects of performance we simply don’t know how to test for (that’s why we supervise professionals like pilots and surgeons on the job before giving them a licence to operate).
It is very likely that as AI provides more and more detailed assessments, we will find that the skills we need do not resemble the skills we are taught today. Just as AI can detect new proteins or create new materials, it will be able to identify new skills and competencies. We'll encounter skills that are completely unfamiliar to us, much as critical thinking would have been a mysterious concept in the workplace of the 1920s.
The major concern with AI-based assessment will not be how accurate it is. We’ll know it’s accurate — more accurate than any human-based assessment could be. The concern will be how pervasive and fair it is. Employers, however, won’t wait. Human resource professionals already look at social media. Governments already scan communications for evidence of threats. It won’t be hard to spot potential future engineers in a crowd. People will want the right to be able to practise and develop a skill in private before allowing AI to pass judgement. And they’ll want to be able to control who is allowed to see their assessments, and who is not.
3. Your AI Tutor Gets You
As mentioned above, AI companies are already beginning to market AI-based tutoring systems that are more interactive and conversational. Tomorrow's tutoring systems won't just know all about the content; they'll know all about you. And they'll use this knowledge to shape an interaction that responds to your every need and inclination.
Companies like TutorOcean in Ottawa are already launching AI-based services that use computers to supplement and eventually replace human-based learning support. Leading the way are specialty services such as leadership development and language learning. Much more is coming.
The need for connection in learning is well documented. A common complaint among students is that they feel invisible and neglected by the institution. Even when learning communities and social media are available, it takes a lot of skill to navigate them and connect with colleagues and mentors. And these environments are not always friendly or safe.
So in addition to learning everything about what you need to learn, AI tutors will learn everything they can about you. Right now we use huge cloud-based AI services, but the AI of the future will be located on your device. It will have access to what you're doing and where you are. Like a fitness wristband, it will monitor your heart rate and blood pressure, and use cameras and microphones to sense your surroundings. And like Microsoft's new Recall feature in Copilot, it will remember everything you ever said, ever saw, ever did.
No doubt many people will feel uncomfortable with this. But provided the data collected is encrypted and kept under our control, we can feel reasonably sure it stays private. More and more, we'll feel comfortable storing our personal data on our devices, just as we already store email messages, finances and personal calendars. It will be hard to resist using tools that take advantage of that data to help us in tangible ways.
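The privacy claim above rests on the data being encrypted under a key that only the owner holds, on the owner's device. The sketch below illustrates the idea with a toy XOR stream cipher built from Python's standard library; it is illustration-grade only, and any real application should use a vetted library such as libsodium or Python's `cryptography` package instead.

```python
# Toy-grade sketch of keeping personal data private on-device: encrypt it
# under a key that never leaves the owner's device. NOT real cryptography;
# use a vetted library (libsodium, python 'cryptography') in practice.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from key + nonce via SHA-256."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Prepend a fresh random nonce, then XOR the plaintext with the stream."""
    nonce = secrets.token_bytes(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Split off the nonce, regenerate the stream and XOR it back out."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, stream))

key = secrets.token_bytes(32)          # stays on the owner's device
record = b"resting heart rate: 62 bpm"
assert decrypt(key, encrypt(key, record)) == record
```

The design point is the key's location: so long as the key never leaves the device, a cloud service that syncs the ciphertext can store the data without being able to read it.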
The result will be an AI tutor that understands you better than any human tutor you ever had. We are just learning, for example, about how emotion recognition can enhance learning support. Frustrated? AI will ease off. Engaged? AI will pull you forward. Future AI tools will understand your learning goals better than you do, and if managed properly, will serve your interests first.
4. Multimodal Miracle / Can’t Get it Past the AI
Misinformation is a major problem in today’s information-filled world. Fake news proliferates on social media, AI-based content farms fill search engines with dubious content, fake science journals obscure research and “pink slime” news sites undercut journalism. It’s too much for most people to manage on their own.
What AI has been missing to this point is what data science calls a "single source of truth." Without it, an image generator doesn't know that hands (normally) have four fingers and a thumb, or that you can't put glue on pizza. In limited contexts, AI can be required to use specific texts, as mentioned above. But meaning and truth are based on much more than text.
When AI becomes multi-modal, however, the effect is almost miraculous. Multi-modal AI can collect data not only from text but also from audio, video and any other modality you can name. It can absorb much more information from these modalities while focusing its attention on patterns in the data more complex than any human can comprehend.
What this means over time is that although it may be easy to fool humans (who struggle with cognitive biases and preconceptions), AI does not share these weaknesses. Humans often cannot spot AI-generated fake photographs, but AI can. It will be very difficult to get misinformation past an AI that has access to scientific databases, sensor data, satellite photographs, historical records and the weather outside the window.
If there are risks in untrustworthy AI, there are probably greater risks in trustworthy AI. We may lose our direct attachment to truth and falsity through the senses. We may lose our common sense — flawed as it is — about what is credible and what is not. As with any mass media, should our AI systems be manipulated, it may not be possible to spot the misrepresentation.
As we develop powerful AI tutors, we will need to emphasize developing the human, not just the knowledge. Ensuring we are not learning about the world through misinformation will be enormously helpful, but our AI tutors will also need to understand how humans develop skills, instincts and common sense. Even as our lives become more and more technology-based, it will become more and more important to work with our hands and to be outside in nature, in order to stay grounded.
5. Thinking it Through with AI
Above we referenced AI on the device. But what if that device is the human brain? We are very close to making that leap.
An AI is not likely to be able to simply read your mind. Most of our cognitive capacity exists as potential (or dispositions) rather than as explicit representations or memories. But when prompted, our thoughts coalesce into sensory experiences — our inner voice, for example — and we can use these to "speak" to a computer and "hear" its response.
Such systems exist today: a brain implant that restores bilingual capacity in a paralyzed man, for example, or an AI tool that uses brain scans to recreate images a person saw. There are also devices that allow the blind to see and the deaf to hear. As these technologies become more capable, we can imagine a day when our entire stream of consciousness becomes available to AI – and that AI can talk back.
Right now the emphasis is on sensory modality, as it should be, but connect a personal AI tutor with such a system and a range of possibilities emerges. Think of thought as an interface device. No need to type in prompts, just think of them.
And these thoughts won’t be isolated. Just as today we can use (AI-assisted) social media to converse with our friends, we will be able to have conversations simply by thinking them. This is clearly a skill that will need to be learned (although our AI tutors will train us in a safe personal space, and will stand by as protectors to block misinformation or abusive behaviour, just as Akismet blocks spam today), and something we will need to be able to turn off.
Learning itself won’t suddenly be transformed into instant knowledge — this isn’t The Matrix. We will still need practice and experience. But when we can capture one person’s experience and use the data to create another person’s training, our learning will feel much more practical than theoretical. Much of the knowledge we think we need today — mathematics and grammar, for example — will simply be available as a “sensation.” The important things will revolve around questions of values, judgements and intent.
A New Copernican Revolution
Most objections to things like AI tutors are based on the idea that there are things humans can do — have goals, feel emotions, care about friends — that are alien to artificial intelligence. There is no doubt that each of our embodied experiences is unique. That’s why AI that gets as close as possible to them is so powerful.
And when AI does get that close, we could begin to think of it as essentially the same as other people. This may feel like a stretch, but we already assign intent to our computers and treat our pets as family. It won’t matter how a computer’s response is produced, just as it doesn’t matter how a teacher’s care is produced. Humans are results-oriented that way.
Before the Copernican revolution, we thought we as humans lived at the centre of the universe. We have since learned that even though each person has their own unique place in the world, there is nothing special about their place. It’s the same with these other human capacities. Our thoughts, feelings and emotions are personal and unique, and that’s why they’re important. But thoughts, feelings and emotions can be produced in any number of ways, by any number of things that are not human. The more we learn to use AI, the more we will reassess our place in the universe, for better or worse.