Sci-Fi Scenarios for Teaching and Learning that Could Become Reality by 2035 - Stephen Downes

This post is part of Sci-Fi Scenarios, the foresight series on TeachOnline.ca in which leaders in education and technology respond to five “sci-fi–sounding but plausible” AI futures.

We asked contributors to review five AI-driven scenarios for higher education (2025–2035), pick the one they find most compelling and explain why, and then add one future scenario of their own.

Below is the response from Contact North | Contact Nord Research Associate Stephen Downes, one of the leaders we invited to contribute.

 

From the list: Stephen Downes' #1 pick 

Autonomous feedback engines are AI systems that give rigorous, outcome-aligned feedback to student work, instantly. 


Key features

  • Structured feedback aligned with rubrics and learning outcomes
  • Challenge Mode: feedback that probes assumptions and deepens thinking
  • Tailored revision advice and reflection prompts
  • Seamless LMS or platform integration

 

High likelihood 

  • GPT-4 and successors already provide effective writing and code feedback
  • Platforms like Gradescope and FeedbackFruits show viable early use cases
  • Faculty burnout and demand for scalable assessment support are rising

 

Strategic value

  • Enhances learning through iterative, actionable feedback cycles
  • Ensures students get timely support even in large classes
  • Supports equity and academic integrity with transparent, consistent feedback 

 

Why it matters

Autonomous feedback tools help close gaps in student performance by providing consistent, high-quality insight at scale, especially where faculty bandwidth is limited.

 

Autonomous feedback: It's already here

The idea of automated feedback — including automated assignment and essay grading — has been around for several years, and studies have demonstrated the effectiveness of these systems.

Assigning a grade is in essence a classification problem — whether the assignment is an ‘A’, ‘B’, ‘C’, etc. — and adding a rubric simply makes it a classification problem across several dimensions. AI systems are adept at this sort of task, and studies suggest automated grading algorithms perform it more consistently than human evaluators.
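To make the classification framing concrete, here is a minimal sketch (not a method described in this article) that treats each rubric dimension as its own small text classifier trained on previously graded work. It assumes scikit-learn is available; the essays, dimensions and grades are invented toy data.

```python
# A minimal sketch: rubric grading as several independent text-classification
# problems, one per rubric dimension. All example data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past, human-graded submissions (toy data).
essays = [
    "The argument is clearly structured and cites three primary sources.",
    "Some claims are unsupported and the conclusion repeats the introduction.",
    "A concise, well-evidenced analysis with an original framing.",
    "The essay wanders and provides no evidence for its central claim.",
]
rubric_grades = {                      # one label set per rubric dimension
    "argument": ["A", "C", "A", "D"],
    "evidence": ["B", "C", "A", "D"],
}

# One classifier per dimension: grading as classification across dimensions.
models = {
    dim: make_pipeline(TfidfVectorizer(),
                       LogisticRegression(max_iter=1000)).fit(essays, labels)
    for dim, labels in rubric_grades.items()
}

new_essay = "A structured argument, though only one source is cited."
for dim, model in models.items():
    print(dim, "->", model.predict([new_essay])[0])
```

In practice a system of this kind would need far more training data per dimension and careful validation against human graders; the point here is only the shape of the problem.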

Similarly, we’ve already seen devices that offer feedback on physical performance, now in use by professional athletes. Sensors, either in the equipment or in video monitoring, can detect the slightest variation from effective performance and point the way to improvements in swing, position or push.

We’re seeing these systems developed and put into use not because they’re cheaper — although they often are — and not because of factors like instructor burnout, but because they are effective. Prompt and accurate feedback at the point of performance is an effective learning tool.

The more rigorously defined a performance space is, the easier it is to obtain precise automated feedback. Conversely, when the performance space and success criteria are less well defined, systems that give rigorous, outcome-aligned feedback to student work become less and less effective.

This becomes apparent when we ask why an AI graded a student paper as an ‘A’ or a ‘B’. It is one thing to say that “the paper was most similar to other papers that were given an ‘A’ grade” and quite another thing to say that “these were the facets of the paper most relevant to having received an ‘A’ grade.” Similarity can predict outcomes, but it doesn’t explain them.
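The gap between prediction and explanation can be seen in a small, hypothetical sketch: a similarity-based grader (here, TF-IDF plus cosine similarity from scikit-learn, with invented papers and grades) can report which past paper a submission most resembles, but nothing in the computation identifies the facets of the work that merit the grade.

```python
# A similarity-based grader predicts a grade from resemblance to past work,
# but offers no account of which facets earned that grade. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_papers = [
    "A tightly argued essay with primary sources and a clear thesis.",   # graded A
    "A rambling summary with no citations and a vague conclusion.",      # graded C
]
past_grades = ["A", "C"]

vectorizer = TfidfVectorizer().fit(past_papers)
past_vecs = vectorizer.transform(past_papers)

new_paper = "A clear thesis supported by primary sources throughout."
sims = cosine_similarity(vectorizer.transform([new_paper]), past_vecs)[0]
best = sims.argmax()

# The only "justification" available is resemblance to a past paper.
print(f"Predicted grade: {past_grades[best]} "
      f"(most similar to paper #{best}, similarity {sims[best]:.2f})")
```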

 

A worthy addition to the list: Community-Based Automated Assessment

Community-based automated assessment is a paradigm shift from fixed rubrics to fluid, socially contextual performance evaluation.

 

What it is

It’s arguable that we don’t actually know what leads to expertise in most subjects. Techniques like learning outcomes and grading rubrics capture, at best, an introductory, novice-level model of performance, and very little beyond that.

There’s an old saying aspiring writers learn: “You have to know what the rules are before you can learn how to break them.” Rubrics and learning outcomes can help people learn the rules but they’re silent on the subject of where and how to break them.

In disciplines that require innovation and creativity, automated assessment seems destined to fail. Creativity and innovation generate outliers, which do not resemble successful performances of the past. It’s easy to imagine an AI explaining the assignment of a ‘D’ to Ernest Hemingway on the basis that “his sentences are too short.”

At a certain point, educational designers will recognize that quality and success depend as much on how something is received by an audience as on its content and performance. This varies by context and over time, and can’t be predicted by a model or algorithm.

Any advanced form of automated assessment will therefore need to access these contextual factors. This will be accomplished by community-based automated assessment.


How it works

Community-based automated assessment combines multiple types of data to deliver a dynamic, context-aware judgment of performance (a rough combining sketch follows the lists below), including:

  • Expert performance data: Comparisons to past works rated as "high quality"
  • Expressed needs analysis: Mining community signals for areas of emerging relevance
  • Reception data: Gauging the response to an individual's work (e.g., comments, reactions, engagements)

Moreover, it shifts focus: 

  • From grading individual works to evaluating overall presence and contribution
  • From static rubrics to emergent use-case specific evaluation
  • From producing records to supporting real-time decision-making (e.g., hiring, project inclusion, compensation)
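As a purely illustrative sketch (not a system proposed in this article), the three data types listed above might be blended into a single context-sensitive score along these lines; every name, value and weight below is invented, and in a real community-based system the weights themselves would shift with context and over time.

```python
# Hypothetical blending of three signal types into one context-aware score:
# expert-performance similarity, expressed community needs, and reception.
from dataclasses import dataclass

@dataclass
class ContributionSignals:
    expert_similarity: float   # 0..1, resemblance to work the community rates highly
    needs_relevance: float     # 0..1, match to topics the community is asking about
    reception: float           # 0..1, normalized comments, reactions, engagements

def community_assessment(sig: ContributionSignals,
                         weights=(0.4, 0.3, 0.3)) -> float:
    """Blend the three signals into a single score; fixed weights are a
    simplification — in practice they would vary by context and over time."""
    w_expert, w_needs, w_reception = weights
    return (w_expert * sig.expert_similarity
            + w_needs * sig.needs_relevance
            + w_reception * sig.reception)

# Example: a contribution that is moderately expert-like but highly relevant
# and well received outscores a polished piece that the community ignores.
print(community_assessment(ContributionSignals(0.6, 0.9, 0.8)))  # 0.75
print(community_assessment(ContributionSignals(0.9, 0.2, 0.1)))  # 0.45
```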

 

Strategic value

Education institutions have long held a monopoly on credentialing. But community-based systems are changing that — and fast.

Educators will need to shift from teaching to rubrics, to supporting learner-defined pathways, where meaning is created through engagement, reflection and context. Curricula will become more fluid, responsive and embedded in communities of practice.

Faculty roles may shift from content delivery to mentorship, curatorship and active fieldwork. Fields of study may morph into clusters of interest and emergent challenges, where knowledge development is ongoing and visibly networked.

 

Contributor profile: Stephen Downes


Stephen Downes is a Canadian scholar and pioneer in online and networked learning. He has been a Senior Research Officer at Canada’s National Research Council (NRC) within the Digital Technologies Research Centre since November 2001, and is a Research Associate with Contact North | Contact Nord.

He holds a BA and MA in Philosophy from the University of Calgary and completed PhD-level work at the University of Alberta, focusing on epistemology, philosophy of mind and the philosophy of science.

Downes has been instrumental in shaping online learning since the mid-1990s, notably developing one of Canada’s first academic MUDs, pioneering learning objects and personal learning environments, and co-designing the first MOOC: a connectivist, open course delivered in 2008 alongside George Siemens.

Active in research and thought leadership, Downes has developed LMS software, content syndication tools and personal learning environment software. His proficiencies span online learning, digital media, pedagogy, connectivism, open educational resources, blockchain credentialing and AI in education.

Downes has authored the influential and award-winning daily newsletter OLDaily since 2001 and has done more than 500 keynote and other presentations in dozens of countries around the world. He has taught at the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College.
