A flood of AI-generated content
It’s a fact that AI is capable of producing convincing and often useful text and other content. Yes, there are concerns about the quality, accuracy and even the environmental impact of that content, but there’s no doubt AI can produce it and that it is already widespread.
It is legitimate to ask now whether any piece of content was authored either entirely or in part by a machine (for the record: this analysis is 100% human-authored). AI-generated content is flooding search engines and review websites. It’s even making its way into academic journals.
Some AI-generated content is relatively easy to identify, like the images of three-fingered humans, choppy and oddly accented audio, or unnatural expressions in text. But a lot of it fools even human readers, and to date AI itself has been unable to reliably tell the difference.
Not only are AI tools like Cramly and EssayGenius cheap and easy to use to create authoring content, they’re also– very popular. Surveys are beginning to show widespread use and a growing belief that tools like ChatGPT are more effective than human tutors. A similar impact is being felt in the workplace; GitHub, for example, reports that nearly all software developers have used AI assistants.
Generative AI and its critics
Not everybody is enthusiastic about the rise of AI content generation, and critics have raised valid concerns. Ben Williamson, Senior Lecturer in Digital Education at the University of Edinburgh for example, has authored a comprehensive list of arguments against the development and use of generative AI, especially in schools.
He and others, argue that AI is being irresponsibly developed, that it captures and magnifies bias and prejudice, that is often factually inaccurate, that it is based on surveillance and violates personal privacy, that it violates copyright laws, and that reliance on AI could lead to a diminishment of students’ cognitive processes, problem-solving abilities and critical thinking.
Others, such as Gary Marcus, professor emeritus of psychology and neural science at New York University, argue that the technology is fundamentally flawed and describes its current popularity as a bubble that will soon collapse. Using artificial neural networks, generative AI depends on an approach called “deep learning,” an approach Marcus says is unable to cope with reasoning and abstraction. Generative AI will be a business failure, he argues.
These criticisms cannot be disregarded, and it’s important to take them seriously. But what if they’re wrong? No technology has been without its critics, especially at the outset, and for every failure like cold fusion, there have been a dozen successes.
Today’s generative AI has already outperformed the sceptics’ expectations. Even without equaling human intellect, generative AI is already successful and having an impact in the marketplace. There’s no reason to believe it will simply go away.
Disposable learning resources
As the cost and time required to produce a learning resource decreases, that resource can be produced on-demand to meet a specific learning need.
- Some resources will be full-length texts, but these are needed only in exceptional circumstances. The more likely use of AI will be to request shorter, more specific learning resources.
- Although AI is often depicted as a tool that course designers and instructors can use to generate content, it’s so cheap and easy to use there’s no reason students can’t generate resources for themselves.
The upshot is that there will be no reason to prepare learning resources in advance of a specific need. Today, parents no longer buy dictionaries or encyclopedia sets for their children, allowing them to rely on Google and Wikipedia. Tomorrow, instead of textbooks, they will use some new online service to produce learning materials as needed.
It will take some time, but the entire textbook publishing industry will be repurposed into one based on collecting and managing data for on-demand learning resource production. Publishers are already preparing for this day: they’re amassing data collections, negotiating agreements with AI companies, and developing distribution channels.
How AI gets its facts right
Obviously, no AI-generated learning system will be worth developing if users cannot trust the content will be factually accurate. The tendency of popular applications like ChatGPT to “hallucinate” has not been reassuring, and the industry will need to rebuild trust.
AI hallucinations have their basis in the data used to train the models that make predictions. Early AI systems relied on publicly available content from the web produced by services such as Common Crawl in which volume, rather than accuracy, was the core concern. Developers have learned the answer is often found not in more data but in better data.
In areas such as health care and pharmacology, where accuracy is essential, the emphasis has been on data-centric foundation models based on, for example, obtaining
and processing high-quality clinical data records. In well-established domains, these models can be trained using artificial data generated by rule-based training models.
Future AI, such as Elicit’s high accuracy mode, will be built with direct connections to data stores. Amazon’s Bedrock provides this function as a service. The search service Perplexity provides accurate citations and references. As Nvidia CEO Jensen Huang says: “AI that understands the laws of physics” is the next wave of the technology.
Additionally, AI models can be generated with built-in constraints. A good example is found in software development models limited by programming language structure and syntax. Similarly, generative AI for architecture and engineering will be limited to known properties of chemistry and physics, and data synthesized from actual research and instrumentation.
Ultimately, the development of trustworthy AI depends on a combination of three intersecting domains: data integrity and accuracy, models that are known to reliably draw inferences from that data, and trustworthy process for the production and use of AI. It’s possible — likely, even — that no AI will be 100% reliable. But then, no human is either.
Context
Teaching depends on context. Teachers know who their students are, where they are in the curriculum, what they struggle with and where they succeed, even what they had for lunch and whether they get tired in the afternoon. A teacher will know why a student is learning about something specific, and how it may play a role in that student’s longer- term ambitions and hopes.
Similarly, any decision and learning support tool in the workplace will also depend on context. Information that may be relevant in a health care environment may be useless in an engineering context. There will be differences in vocabulary, questions that need to be asked, standards of evidence, definitions of success, the tools used, relevant legislation, and the people involved.
Any AI system will require context in order to be useful. At a minimum, it needs to be given a question or offered a prompt. But more broadly, the domain of prompt engineering defines the practice of describing to an AI what is needed, what things to take into account and what sort of output is required.
The more context is provided, the more specific the output of an AI can be — and in that context, the more helpful and accurate. AI companies use hidden prompts to put limits on what an AI model will produce. There’s even a form of attack, called prompt injection, that may mislead or misdirect an AI.
To provide personalized learning — something promised by no small number of learning providers — it will be essential to discover and apply the learning context. This will be a major undertaking. It is very unlikely that any commercial off-the-shelf (COTS) will be
able to provide this. Personalized learning technology will depend on the direct involvement of the learners themselves.
Changing reading from reading to interaction
Proponents of AI in education have long emphasized the role of learning analytics in assessing learners and recommending appropriate content. This approach frames learning as a discovery problem: find the right resource, and the learner will succeed. While content recommendation will remain useful, it is unlikely to be the centerpiece of AI in education for long. Instead, AI will increasingly support a more interactive and adaptive model of learning. Rather than passively reading content, learners will engage with AI systems that ask clarifying questions, provide just-in-time feedback, and generate explanations tailored to individual needs. These systems will be responsive, conversational, and context-aware — shifting the learning process from consumption to collaboration. The result will not be a replacement for teaching, but a transformation of how learners interact with knowledge itself.