Stop Looking for the Right Answers about AI – Look for the Right Questions

Over the past few years, we have seen artificial intelligence burst onto the scene to the point where every conversation about learning technology is a conversation about AI. As with anything new and potentially transformational, people have questions. But are they asking the right questions?

1. Using AI to cheat

People are asking:
How do we stop students from using AI to cheat?

It’s not that cheating is new. There’s a long history of students smuggling crib notes into an exam or paying a friend or online service to write an essay. The problem now is that it’s so widespread and so easy to do. Why would anyone write an essay from scratch when the computer can do a better job of it for just a few cents in just a few minutes?

But this is the wrong question to ask. If it’s that easy to write an essay, why are we asking students to do it at all? It’s like requiring students to draw straight lines without a ruler. True, they’ll never learn to draw freehand if they keep using rulers, but is that a skill students really need? Rulers are easy to use and cost just a few cents.

The right question to ask is:
What should we be asking students to do?

Barring a complete social and market collapse, the world of the future will contain AI. So, we should be asking what students can do with AI, or where their use of AI doesn’t matter. How can we make assessment AI-agnostic?

2. Incorrect answers and misinformation

People are asking:
How can we be sure AI isn’t misleading students with incorrect answers and even misinformation? What guardrails are in place to prevent harmful information?

It is true that AI is currently much less reliable than textbooks that have been vetted or news that is reported by professional journalists, but this is the wrong question to ask.

The right question to ask is:
How can we help students protect themselves?

There are ways to ensure we are not being misled. Students can develop skills like critical thinking and the scientific method to ask basic questions such as: What is the evidence? How do I test this? Learning these skills helps not only with AI misinformation but also with its very human equivalent.

3. Bias in AI

People are asking:
What about the bias in AI training data? How can we be sure AI is not perpetuating harmful stereotypes and creating further disadvantages for at-risk students?

It’s not just that AI reflects the bias in the training data; it also appears to magnify and reinforce these harmful stereotypes.

But this is the wrong question to ask. Bias is the result of a hasty and often irrelevant generalization, one to which humans are prone as well. It’s a fallacious form of inference, one that tends toward negative perceptions and has harmful consequences in society. Simply correcting AI for bias in the input data does nothing to address the source of the problem.
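To see how that magnification can happen, consider a minimal, hypothetical sketch (the word pairing and the 70/30 split are invented purely for illustration). A model that always picks the most likely continuation turns a modest skew in its training data into an absolute rule:

```python
from collections import Counter

# Hypothetical illustration: a 70/30 skew in the training data becomes
# a 100/0 rule when the model always chooses the most likely option.
training_pronouns = ["she"] * 7 + ["he"] * 3  # pronouns seen after "nurse"
counts = Counter(training_pronouns)

most_likely = counts.most_common(1)[0][0]
print("Training data:", dict(counts))  # {'she': 7, 'he': 3}
print("Model output: ", most_likely)   # always 'she'
```

The input data were 70 per cent skewed; the output is 100 per cent skewed. Rebalancing the counts treats the symptom, not the habit of drawing the generalization in the first place.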

The right question to ask is:
What is the proper basis for drawing generalizations, and how can we educate ourselves (and coincidentally, AI) to know the difference between this and harmful bias?

Related to this, we could ask: What generalizations are we missing because we’re so focused on the irrelevant ones? Generalization is an enormously useful tool, but jumping to harmful conclusions based on narrow and unrepresentative data is an all-too-human failing.

4. Cognitive deficits

People are asking:
Is the use of AI creating a cognitive deficit in students, making it harder for them to learn critical thinking and creative skills?

There have been several studies recently showing that this is the case. After all, if we want to learn a skill, we have to practise it, and the use of AI instead of our own brains robs us of that practice.

But this is the wrong question to ask. Every new technology replaces a human ability with a mechanical one. When we started to ride horses, we stopped running so much, but we still went faster. When we started using roof shingles, we lost the ability to thatch our own roof, but our homes were much drier. When we started using calculators, we stopped doing so much math in our heads, but we can now perform millions of calculations in a second.

The right question to ask is:
What skills should we be teaching instead of the ones AI can do for us?

We will want some people to learn all the skills, just to ensure AI is doing the job properly, but it’s not necessary to teach everyone these skills. Maybe the skill of the future will be “how to ask good questions” or “how to describe our experiences” so we can make the most of automated reasoning.

5. Privacy and security

People are asking:
Can we use AI in education without violating students’ security and privacy?

AI requires a lot of data to do its work, and this data is often obtained from students, with or without their consent, by harvesting their content, deploying surveillance technology and inferring their intentions and emotions. This can feel like a violation of privacy and, in some cases, can leave students open to criminal activity, harassment and abuse.

But this is the wrong question to ask. This is not to say the harms that are caused are irrelevant. Personal privacy and security are important. But society cannot function without exchanges of information, and we cannot function in society without being able to trust others. This is especially true in education. Privacy and security carried to extremes make society unworkable, and that makes education unworkable. So, we need to look beyond privacy and security to promote comfort and safety.

The right question to ask is:
How do we create, deploy and enforce a system of consent for the sharing and use of individual data?

We need to ask: Who owns and controls the data each of us creates, and what is allowed to be done with that data once others have access to it? Both these questions relate to the matter of consent or, as we sometimes read, “nothing about us without us.” When the entire economy is essentially based on research about human beings, we need to ask: How do we apply research ethics to the entire economy?
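As a thought experiment, here is what one machine-readable consent record might look like (a hypothetical sketch: every field name is invented, and real consent frameworks are far more involved):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical sketch of a consent record a student could own and revoke.
@dataclass
class ConsentRecord:
    owner: str                      # who the data is about, and who controls it
    data_category: str              # e.g. "coursework", "activity logs"
    permitted_uses: List[str] = field(default_factory=list)
    expires: Optional[date] = None  # consent can lapse
    revocable: bool = True          # "nothing about us without us"

record = ConsentRecord(
    owner="student-1234",
    data_category="coursework",
    permitted_uses=["assessment", "feedback"],
    expires=date(2026, 6, 30),
)
# Any use not listed in permitted_uses would require asking again.
print(record.permitted_uses)
```

The point of such a structure is that consent becomes explicit, inspectable and revocable, rather than buried in a terms-of-service document.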

6. Copyright content

People are asking:
Has generative AI been trained with pirated copyrighted content?

But this is the wrong question to ask. It’s like asking whether you are using copyrighted books to hold the door open. It doesn’t matter; the authors and publishers have no control over how you use the book. What they can control is whether you’ve copied the book (or otherwise redistributed it) without permission.

AI doesn’t contain copies of books, any more than your spell-checker contains copies of books. The AI learned from the books, but all it learned was how to order words, just the way a spell-checker has learned to order letters.
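A minimal sketch may make this concrete (the training sentence is invented, and real systems are vastly more sophisticated). A simple bigram model stores only word-transition counts learned from its training text, never the text itself:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Learn which word tends to follow which: counts, not copies."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

model = train_bigrams("the cat sat on the mat because the cat was tired")
# The model holds statistics such as {'the': {'cat': 2, 'mat': 1}},
# enough to suggest a likely next word, but not a copy of the sentence.
print(model["the"].most_common(1))  # [('cat', 2)]
```

What survives training is a statistical picture of how words tend to follow one another, not a library of the works the model read.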

The right question to ask is:
Are you using AI to produce copies of copyrighted works?

Here we use the same law we would use for any other tool. If I photocopy Harry Potter, I have violated copyright. It would be the same if I did it with AI.

7. Compensating authors and creators

People are asking:
Shouldn’t authors and creators be compensated when their work is used to train generative AI?

This, too, is the wrong question to ask. With a few exceptions, such as Anthropic’s use of pirated books, the content used to train AI has been paid for. The real question is whether we are responsible for paying authors and creators more.

The question of whether people are paid fairly by large corporations is an important one. It’s a question we should ask when buying shirts made in Bangladesh, oranges shipped from South Africa, burgers from McDonald’s or music streamed on Spotify. The use of content by AI isn’t a special case. It is “business as usual,” as the courts have found.

The right question to ask is:
Are the revenues from AI being distributed fairly?

Right now, there is little to distribute: AI companies are losing massive amounts of money each day. But in a possible future in which AI is profitable, we may well want to consider the potential for supporting Fair Trade AI.

8. Depending on large companies

People are asking:
What risks are created by depending on a few large companies like OpenAI, Microsoft and Meta to create and manage the content and data used to educate the next generation?

Relying on a limited number of major corporations can resemble a monopoly, where profit becomes the primary focus. This often results in higher prices and diminished product quality. And having large companies control the educational content opens the door to propaganda and inappropriate influence.

We have been facing this same question for generations in everything from publishing to railways to food production — but it’s the wrong question to ask. The push toward centralization and concentration of power and wealth has been a constant force in society and is not unique to either education or AI. Simply blocking or breaking up large companies has never been the solution. We need effective alternatives, and this means decentralization and democracy.

The right question to ask is:
How can we do these things ourselves?

How can we create and manage our own education, both individually and as a community? How can we develop and manage our own form of AI based on community participation and consent? Replacing large companies with more human-centred alternatives that work equally well is not trivial, as the open-source software community has learned. These are core questions of self-governance, and there may be a role for technology to play, or we may have to do it human-to-human.

9. Inequality

People are asking:
As we depend on more and more sophisticated tools, are we increasing the digital divide and inequality in society?

With each new technology, we make it harder for people living at a subsistence level to catch up with those in wealthy societies. Perhaps we should spend more time making sure everyone has electricity before creating computers that can autogenerate cat pictures.

It is true that a huge technology divide still exists between rich and poor. But slowing the development of new technology does nothing to change that. If anything, slowing down makes it harder to increase productivity and make new technology more affordable. The real issues occur when the development of new technology doesn’t increase productivity in ways that matter — or in other words, cases where tech solutionism fails.

The right question to ask is:
How do we improve the distribution of wealth in society?

What’s the best way — with or without AI — to ensure everyone has access to low-cost energy and drinking water, decent shelter, clean communities and all the other benefits that define a good quality of life? These are, in the main, social questions that speak more about how we govern ourselves than which tools we use.

10. Environment

People are asking:
Where will we get the energy and resources we need to sustain development and growth in AI across society?

An AI data centre can consume as much power as a small city and entire lakes’ worth of water. How can we waste resources like this when the world is on the verge of an environmental crisis?

This is the wrong question to ask. Yes, there are environmental issues, but AI is a small part of them. Worldwide, we consume a comparable amount of energy enjoying our morning cup of coffee or tea, yet no one is calling for coffee production to be shut down. Agriculture consumes far more water than technology. If we really wanted to make a difference, we’d stop eating beef.

The right question to ask is:
How do we develop a sustainable economy that includes AI along with all the other things we enjoy today?

In the case of energy, for example, the question is not how much energy AI consumes, but rather, why we’re burning coal and oil to produce it. Similarly, we should be asking how we can reduce our dependence on natural freshwater sources generally. How can agriculture be more efficient?

11. Human in the loop

People are asking:
How can we make sure we don’t depend on AI for decisions humans should be making, like grading and promotion?

How do we ensure we’re taking a student’s special circumstances into account or making exceptions when we feel a better and fairer decision might result?

This question assumes we would be replacing all human decision-making with AI decision-making. But more to the point, it assumes that all human decision-making is better than AI decision-making. Neither assumption is correct. What we should be working toward is a way to achieve the best decision-making possible, and this results only when humans and AI work together.

The right question to ask is:
How can humans and AI make the best decisions together?

We’re looking for an approach in which, for example, an AI might correct a human who is being unfair, biased or prejudiced, and a human might step in when an AI isn’t taking a person’s individual circumstances into consideration. Educators should look into “human-AI teaming” (HAT) to find the best way forward.
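As a purely illustrative sketch (the function, threshold and numbers are all invented), one simple form of such teaming routes routine cases to the AI and flagged cases to a human:

```python
# Hypothetical sketch of human-AI teaming in grading: the AI proposes,
# flags anything uncertain or unusual, and a human reviews flagged cases.
def team_decision(ai_score, ai_confidence, special_circumstances, human_review):
    if special_circumstances or ai_confidence < 0.8:
        # Human decides, with the AI's suggestion as input rather than a verdict.
        return human_review(ai_score)
    return ai_score  # routine case: the AI's suggestion stands

# Example: a reviewer who adjusts the AI's suggested score for a student
# with documented special circumstances.
final = team_decision(
    ai_score=72,
    ai_confidence=0.6,
    special_circumstances=True,
    human_review=lambda suggested: suggested + 5,
)
print(final)  # 77
```

Real teaming is much richer than a single threshold, but even this toy version has decisions flowing in both directions.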

12. Now your turn!

Every example in this list works the same way. Begin with the question people are asking, then look for the question that actually matters.

There are still questions missing — ones that only make sense in your context, your institution, your practice. Here’s where you add them.

People are asking:
____________________________?

The right question to ask is:
____________________________?
