Stephen Downes
Publisher and Editor of OLDaily
Contact North I Contact Nord Research Associate
October 2017
Education is change, and change is at once our greatest strength and our greatest challenge.
Change happens in society gradually and imperceptibly, but change happens to individuals abruptly and without warning. It is as though we wake up one morning to find that the world we once knew has disappeared, while all around us people proceed with their lives as usual, as though nothing had happened.
This essay is about change. It is about the change we see individually in our homes and in our workplaces, and about the larger changes sweeping through society.
This essay is also about technology, where we think of 'technology' in its broadest sense. It's about information, computation, automation and analytics. It sets these against a milieu of broad social and cultural change, where we are afforded both the chance to redesign our system of education from the ground up, and the need to preserve what is important and valuable and desirable in the system we already have.
This essay is addressed to both the teachers of today and to the students of tomorrow. It is addressed to policy makers and pundits, to technology designers and developers, and to those who by virtue of office or inclination have the voice to speak to the future, to inform the world of what we can do and what we want to do.
To change the world, one must get inside change, and look outward at all the possibilities that change affords, and then choose.
A ROADMAP
When things change, so do the possibilities they afford in our lives and work.
In the sections that follow, we will explore a selection of the major strands of innovation and technology impacting teaching and learning today and over the next five years, with an eye toward the longer term and the challenges ahead. We will also, through the same process, explore the varieties of change, with an eye toward comprehending how we perceive change, how change unfolds, how we influence change, and how change influences us.
In the next section, we discuss the technology environment. This discussion is to set the stage, to explore not only what we can possibly achieve, but also where the limits lie. Technological change is not limited to the world of education, and the ways technology impacts other domains will overlap into teaching and learning. We will see in this section that the pace and impact of change is uneven.
We follow this with a discussion of possibilities. When things change, so do the possibilities they afford in our lives and work. We look at these possibilities in education. We begin with the idea of the interface, which is the surface of technology and our point of direct contact. Beyond this, we look at what we can make, the environments that help us make, and the influence a rising culture of creativity has on our understanding of teaching and learning.
In the next section, we see how some of these changes in society over time are also reflected in technology. Both have waxed and waned from centralization to decentralization and back. The major trends affecting technology as a whole reflect this eternal cycle. Servers and services were centralized, then distributed, and now virtualized. The Internet replaced centralized platforms with distributed networks, only to be overwhelmed with centralized services like Google and Facebook. As we contemplate the swing back toward decentralization we need to ask ourselves the urgent question: where will we be when the cycle stops?
Change proceeds not through cause and effect - not through intervention and management - but through a process of what can be called 'spreading activation'.
There's no single answer, of course, and in the next section we examine the nature of change in the complex and chaotic future we will find ourselves in. Change proceeds not through cause and effect - not through intervention and management - but through a process of what can be called 'spreading activation'. Think of how a virus sweeps through a primary school population, or how an idea like the fidget spinner becomes all the rage (then disappears), and you have the idea. Change becomes a process of communication, and the processes of communication become central to education. This section examines that idea, looking at issues of control and privacy, new languages and media, interactivity, and presence.
In dynamic and complex systems, to learn is not to be able to remember, but to be able to recognize. When we look at the future, we typically look at the immediate and the short term, but we need to step back, take the longer view, and look at the patterns of change as they project themselves in constellations of resources. What do we see? We see new and evolving types of resources, new understandings of value and commerce, new benefits in services and performance support. What matters less and less over time is the content of our resources, and more and more how they're organized.
Finally, in the manner of the snake eating its tail, we turn to the question of learning analytics, and ask how we will measure and comprehend teaching and learning in the future. When we understand what we could do with the new technology and the new learning environments arrayed before us, it's hard not to be disappointed by the limited scope of learning analytics, which as a whole promise nothing more than an enhancement of the education system as it already exists. Who speaks for us: ourselves, as we existed in the past? Or the hopes, dreams and ambitions of the future?
The future is complex, it is challenging, and if we allow it to be, it is amazing.
We create the future through our own individual actions, but the future of learning is not an extension of the past. Each day, it is created anew. It weaves, warps and winds through one of the many variations of change. The future is complex, it is challenging, and if we allow it to be, it is amazing.
THE TECHNOLOGY ENVIRONMENT
Technology in education is often far messier than the neat dreams and (commercial) aspirations of Silicon Valley.
We have to acknowledge, from the outset, that technology is neither neutral (it never has been) nor an unqualified good. Technology needs to be understood, as Selwyn (2013) suggests, as a “knot of social, political, economic and cultural agendas that are riddled with complications, contradictions and conflicts” (p. 6). Technology in education is often far messier than the neat dreams and (commercial) aspirations of Silicon Valley. This realization is a necessary antidote for the “mythical hope” we often have regarding (educational) technology.
Technology is sometimes thought of as an amplifier of our hopes and dreams.
Technology is sometimes thought of as an amplifier of our hopes and dreams. Mark Zuckerberg, for example, looks to it to “develop the social infrastructure to give people the power to build a global community that works for all of us” in his recent Facebook Manifesto. At the same time, it can be used to foster hatred and pathology, as the everyday use of online communications by criminals and terrorists goes to show. Technology can free us, or be an instrument of state control, as Morozov (2012) argues.
The line between what is possible and what is not possible is an important one, and the limits of technology are limits that exist independently of our hopes and aspirations.
By the same token, the limits of technology define the limits of what we can do. We cannot teleport anything larger than a photon, we cannot implant knowledge directly into the human mind (yet), and we cannot create energy and resources out of nothing. The line between what is possible and what is not possible is an important one, and the limits of technology are limits that exist independently of our hopes and aspirations. So, in one sense, what we can do with technology is in our control, but what we can't do with technology is out of our hands.
And so, when a technology crosses that line from impossibility to possibility, there is an air of inevitability about it. We may want to put it back into the bottle, but once something is known in society it is very difficult for it to become unknown. We can manage and control the use and dispersion of ideas and inventions, but we cannot make them impossible again. Possibility, like time, is a one-way street.
Internet Access
Most people in Canada have Internet access. This access, though, is uneven. The same is true in most developed countries around the world. There is a rural-urban split, where in the worst case rural customers must rely on dial-up access, while in the best case urban customers have essentially unlimited bandwidth and speed. In some places, including places served by Contact North | Contact Nord, Internet is still not available at all.
Worldwide, access speeds are better for developed nations. In the developing world Internet access is generally available in urban centres, but poor to non-existent in slums and in rural communities. Even where it exists, it is slower than in the developed world (Internet Speedtest, which ranks nations based on actual access speed measurements, shows this clearly).
The cost of Internet access is also uneven and varies a lot around the world. Cost depends to a degree on wealth - the poorest countries in the world, like Ethiopia, pay the most for Internet access. But otherwise the cost of Internet access depends much more on local availability, the
number of providers, government policy, and ability to pay. When evaluated against income per capita, we see that people in the poorest regions have the least ability to pay.
nearly 60 percent of the world's people are still offline.
The benefits of widespread Internet access have also been unevenly distributed. As the World Bank notes, “nearly 60 percent of the world's people are still offline.” These people do not participate in the benefits at all. And because taking advantage of Internet access requires public and private investment, “Not surprisingly, the better educated, well connected, and more capable have received most of the benefits—circumscribing the gains from the digital revolution.”
Mobile access has grown significantly over the last decade, especially in less developed parts of the world (as documented, for example, by Pew). There are now more connected mobile devices in the world than there are people (some handy stats from Cisco) though only about two thirds of the people have access to mobile phones. About half of mobile phone traffic is smartphone traffic. Access continues to increase, with the greatest growth in Africa and the Middle East. In Canada, meanwhile, we continue to pay some of the highest rates in the world for mobile access, as the CRTC found.
Though we still think of the Internet as 'the wire', in fact most of the backbone is now fibre optics, and most individual access is wireless. It's only in high-density environments, like offices and schools, that we see the traditional wired (Ethernet) network predominate. The capacity to deliver higher and higher speeds, for both wired and wireless access, has risen steadily over the last decade and will continue to rise at the same rate over the next.
Most people will not feel the impact of this in the short term, though. Mobile 5G technology has been developed and promises to deliver access speeds of 1 gigabit per second (1 Gbps), but the standards aren't in place and deployment won't begin until 2020 at the earliest, which for most of us means another 5-10 years to wait (IEEE report on 5G). In the meantime, we continue to consolidate on fibre to the home and on 4G/LTE mobile. As the wave of change moves slowly through society, it arrives suddenly, one person at a time.
the relative disparity in Internet access is not going to change over the next five years, nor probably even over the next generation.
More to the point, the relative disparity in Internet access is not going to change over the next five years, nor probably even over the next generation. While the capacity to provide cheap and high quality Internet to everyone in the world exists, the will to provide it does not. The same political and economic considerations that prevent us from providing water and electricity to everyone in need will prevail with respect to the Internet. This, both inside and outside Canada, is a pressing social issue.
Power and Memory
Computer processing power and memory storage are increasing at the same pace as Internet access. This is most clear in the availability and size of flash memory and hard drives. It is now common to find reasonably priced 64 gigabyte flash memory drives and cards, while 128 gigabyte versions can be found (I have one in my phone). Meanwhile, terabyte hard drives have become commonplace and affordable, and a 1 terabyte drive would now be considered 'small'. Solid state flash memory, while more expensive, is much faster, more reliable, and widely available (look for 'SSD' hard drives).
Computers themselves come in all shapes and sizes, from $14 Raspberry Pi devices to powerful workstations costing thousands of dollars. To continue increasing computing power exponentially, computers now have more than one processing unit, or 'core'. Where we used to talk about processing speed in 'megahertz' (or, today, gigahertz) other factors come into play, so it is more common to speak of chip generations. Your computer will also have sub-processors to perform specific functions, such as audio or video rendering (here's a guide to processors).
Big changes are coming for both memory and processors…
Big changes are coming for both memory and processors, and at a certain point it's hard to think of them as electronics any more. Traditional computers store data by recording it magnetically. Solid state storage, such as flash memory, stores data by storing a charge in an electronic 'gate'. They are electronic because they store data using electricity. Computers of the future, by contrast, will store data by manipulating molecules themselves. This makes them much smaller, and they will use much less power.
None of this will impact most people for years. Your next computer will be much more powerful than your current computer, but it won't be dramatically different, and you won't be buying new computers so frequently. The current computer upgrade cycle is about five years (says Intel). If you have an older computer (defined as, say, not running Windows 10) then you will likely upgrade to a newer computer (running Windows 10 or something very similar). Your next computer will still be a 64 bit computer and it will be smaller (most likely a laptop or tablet) than your current computer.
Microcomputing
Because processing power and storage have advanced so rapidly, very powerful small scale computers are now affordable and are in the process of becoming ubiquitous.
The biggest changes aren't happening at the top end - our computers can already do almost anything we ask of them today. They're happening at the small end. Because processing power and storage have advanced so rapidly, very powerful small-scale computers are now affordable and are in the process of becoming ubiquitous. And this is what will have the greatest impact over the next five years.
Some of this technology is already available. The rapidly rising popularity of drones, for example, is enabled by affordable computer chips that balance a platform with four (or more) spinning rotors (Forbes writes about AI on a chip). Another example is virtual reality (VR). Google amazed the world with a cardboard viewer, but what makes VR work is a high-resolution computer display you can wear in a mask (here are some).
Another example is the MagicBand. It's a bracelet Disney hands out to hotel and resort guests. It gives you access to rides, automatically takes photos, and helps them run the park. It may seem creepy, but it's very convenient. The bracelets, along with the rest of the technologies in this list, communicate with other services using short-range communication called RFID as well as long range wireless Internet (story on Gizmodo).
We don't think much about these small things but they are becoming pervasive.
We don't think much about these small things but they are becoming pervasive. We had just graduated to chips on our debit and credit cards when we began to receive cards we could simply tap to pay instantly. Or, one day we had to wire our computers to the video projection system, and the next we could simply use the (Barco) ClickShare to wirelessly connect in about ten seconds.
Internet of Things
The development of this sort of technology is what is leading a lot of people to talk about the Internet of Things (IoT). The IoT isn't any single technology but rather is made up of a combination of many small things containing computers, sensors or data inputs, and wireless communication. There are standards and protocols for all of these, but beyond that, there are few limits (and even fewer laws) on what they can do.
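To make the idea concrete, here is a minimal sketch of the pattern these small devices follow: take a reading, package it as data, and report it over the network to a collection service. It is written in Python purely for illustration; the endpoint URL, device identifier and field names are invented placeholders, not any real product's protocol.

```python
import json
import urllib.request

# A small device packages a sensor reading as JSON and reports it to a
# collection service over HTTP. Everything named here is a placeholder.
reading = {
    "device_id": "thermostat-42",          # hypothetical device identifier
    "temperature_c": 21.5,
    "timestamp": "2017-10-01T08:30:00Z",
}

request = urllib.request.Request(
    "https://iot.example.com/readings",    # placeholder endpoint, not a real service
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

try:
    with urllib.request.urlopen(request, timeout=5) as response:
        print("collector responded with status", response.status)
except OSError as error:
    print("no collector reachable (placeholder endpoint):", error)
```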
Combine these trends together and it's possible to paint a fairly clear picture of the technology market for the next five years and even further. While our computers won't change a lot, we will interact more frequently with small specialized devices. We already have smart scales, thermostats, fridges and more. Next we'll have things like bracelets (or fitbits) that interact with these devices for us, and we'll be able to leave our smartcards at home.
Eventually we'll just carry our computers on our wrists (or embedded in our bodies, a.k.a. insertables). Our computers will interact with the world on an as-needed basis, and we will use whatever interface or device is handy to work with them. It's not hard to imagine. Think of the (Lenovo) IdeaCentre Stick, for example, which is a Windows computer smaller than your phone. You simply plug it into any handy TV and use a wireless keyboard and mouse. In 10 years, this computer will be the size of a quarter and you'll just wear it.
Eventually we'll just carry our computers on our wrists (or embedded in our bodies…)
There will also be a sensation of losing control. In the current cultural and legal environment, we do not own the software that powers the Internet of Things; at best, we license it. This has again been a gradual change that some warn could lead to a new Internet feudalism. What happens when the personal data collected by things like our vacuum cleaners becomes a commodity sold to the highest bidder? What happens when global positioning and internal sensors are altered by hackers? Suddenly we're vulnerable.
We will need to negotiate a social contract between ourselves and our devices.
We will need to negotiate a social contract between ourselves and our devices. They will do everything from heating our homes to driving our cars; we need to be able to trust them, and we need assurances that they are trustworthy. Current civil and criminal mechanisms are ad hoc and insufficient to afford real protection. Some nations (including, probably, Canada) will develop regulatory regimes. The coming debate over the next decade will centre on the values and principles we want to base these on.
Constant Change
You'll feel these changes pushing you: should students wear ID MagicBand bracelets in school? Is it appropriate to allow them to connect their tablet to the classroom video screen? Is it appropriate to use augmented reality in the hallway? Should you use commercially available VR units or create your own?
You also feel these changes pushing at you when they come with potential side effects. The Internet of Things, for example, has been sarcastically called the Internet of Broken Things, after hackers took control of baby monitors and security cameras and used them to send spam and viruses to unsuspecting users (described here). And it's great to have plug-and-play wireless devices, but less great when other people use them to spy on you (like these key-loggers at Carleton).
Yet when you look back after five years, you may remark on how little things have actually changed. That's the slow pace of society-wide change, and you'll find then as now that most of the changes in technology you are expecting are still in front of you. There won't be holograms or quantum computers or DNA memory. Not yet. All this lies in the future.
At the same time, this long-term future is the future for your students. While we still live in a world where 'fast Internet' means we can use both audio and video in a Skype conference, our students will live in a world where 'fast' is meaningless – everything will just work. They won't have to save computer memory, they won't worry about download size, and they will expect there to be a computer chip in every object (and will be puzzled when things like magazines and soup cans don't swipe properly, like this child).
When we teach these students, it's hard to fight the temptation to teach them for a world that no longer exists. It's even harder not to teach them for conditions that apply today… We need to use the technologies of today to teach for the world of tomorrow.
When we teach these students, it's hard to fight the temptation to teach them for a world that no longer exists. It's even harder not to teach them for conditions that apply today. The world we are preparing them for, however, is literally a next-generation world. We need to use the technologies of today to teach for the world of tomorrow.
WHAT WE CAN DO
We tend to think of change as being about what things there are and what events have happened, but change is really about what we can do that we couldn't do before, and sometimes what we can't do that we used to be able to do.
Affordances
There's a technical concept in education and the social sciences called 'affordance'. The idea of an affordance is that it is something you can do with a tool that you couldn't do without the tool (we say that the tool “affords” this opportunity). Affordances are not just the things the tool was designed to enable; they also include the novel uses people come up with in the process of using it.
We want to focus in this section not simply on what we can do with new technology, because this will occupy us for the rest of the essay, but rather on what we can create with new technology, and how this is changing.
That's what this section is about. Take photography, for example. We tend to think of the change from film photography to digital photography as a change in the essence of photography, in the substance of photography. We talk about the result, the product - how digital photos compare with their analog counterparts, for example.
The real change brought about by digital photography has little to do with the nature of the photo. That's why 'digital photo frames' never really caught on. No, what digital photography did was to bring the art and science of photography to everyone. Cameras and lenses may still be expensive (though, increasingly, they are very cheap) but the cost of capturing an image, editing it, and sharing it has been reduced to near zero.
Our relation to food changed when millions of people started tweeting their evening meal.
The implications here are far broader than the technology of the camera and the photograph themselves. Social, cultural and political change follows. Our relation to food changed when millions of people began tweeting their evening meal. Internet memes featuring cats, known as LOLcats, changed the way we talk to each other. Candid photos have sunk politicians, launched political movements, and spurred the rise of citizen journalism.
The same is true for video, to an even greater extent. With the exception of a few very dedicated hobbyists, video production was once almost exclusively the domain of professionals. The equipment was expensive, film (and later, videotape) was expensive, and production required dedicated facilities and years of training. Even in the days of the Super 8, video production was not for the faint of heart.
It took the powerful computers, cheap data storage and fast Internet connection speeds described in the previous section to make it possible, but as I write the largest sites on the Internet are sites like Instagram and YouTube, photo and video hosting sites respectively. Social network sites like Facebook and Twitter thrive largely because people can share photos and videos with their family and friends.
Because of these changes, we are constantly being challenged to redefine our rights and responsibilities. What happens when teachers lose their tempers (some examples) and end up on YouTube? On the one hand they demand privacy; on the other, they are required to be more accountable. Public officials, police officers, bad drivers, drunken barroom brawlers - all risk being recorded and put up for display on YouTube.
We are entering an era of constant renegotiations of our relations with each other.
We are entering an era of constant renegotiations of our relations with each other. This will not change in the short term, nor even the long term, because what we can create continues to evolve and to grow. The most recent disputes centre around things like drone photography, facial recognition (here), monkey selfie photography (here), and computer-generated images of perfection (here).
Evolving Interface
Through the last decade and into the next decade our capacity to create is influenced by two major factors: first, the evolving interface, and second, what I'll call the 'cognitive capacity' of a computer system. We'll look at each in turn.
The interface is the environment where you do your creating. If you are writing an essay, for example, the interface will be your word processor. If you are working with photos, the interface might be Photoshop, while video editing tools help people create and edit video.
Today, interfaces range from very good to very limited. Commercial applications like Photoshop contain expressive feature-rich interfaces that give the user a great deal of creative flexibility. Others, such as Google's search, succeed through simplicity. At the same time, in education, students and teachers alike continue to work with a relatively cumbersome and inflexible environment in the Learning Management System (LMS). Complaints include inadequate reporting, restrictive rights management, clumsy workflows, difficult integration, and inadequate security.
It's hard to recognize, but these interfaces have become a lot better over time.
It's hard to recognize, but these interfaces have become a lot better over time. Again, it's the sort of change that happens very slowly over time. You don't notice it at all until you upgrade to a new version (and then all the controls are in the wrong place and you have controls that do things you've never heard of). But look at how the desktop has changed over time. There's no reason to suppose that interfaces will not continue to improve, and that the learning management system (or its successor) will adapt to meet actual needs.
The interface doesn't have to be a computer screen. In my lifetime the interface for cooking food has undergone a dramatic transformation (from gas to electric to microwave, and today we're looking at printed food and induction heating). A company called Ikea became a worldwide powerhouse by creating an interface enabling us to assemble our own furniture. The proliferation of home renovation and lifestyle shows is no accident; we are more than ever able to shape our own environments.
Through the last few decades one of the main emphases in education has been on what we can create digitally, and the major advances have been in the interface. Students were invited to learn computer programming with tools such as Scratch and the idea that they could learn by creating became the hallmark of constructionism and the cornerstone of Seymour Papert's revolutionary philosophy (read Mindstorms here). We also taught them to write online, form groups and communities, create animations and slide shows, make digital music, and create video.
We need also to keep in mind that the interaction with interfaces has two sides, and that the human side of the equation continues to evolve as well. We learn how to interact with technology at an early age; the best example of this is how an infant thinks of a magazine page as a broken iPad. In the same way, we will learn to use multi-touch and gesture-based interfaces of the future - but the infants will learn most quickly, and the rest of us will feel left behind (and will worry that the young are losing important skills like penmanship and keyboarding). Our institutions need to adapt.
In a world where everyone uses computers, should everyone learn computer programming? Or, in a world where the idea of pedagogy is giving way to learning design and learner self-management, should interface design be driven by pedagogy?
Additionally, our skills evolve depending on what's needed both to critically engage with the interface and to meet the needs of a changing society. For example, in a world where everyone uses computers, should everyone learn computer programming? Or, in a world where the idea of pedagogy is giving way to learning design and learner self-management, should interface design be driven by pedagogy?
The interface is a language, and interface design is the science of that language. No individual will define the grammar and the semantics, but all of us will need to learn this new literacy, to both understand what interfaces are trying to do, and to express ourselves through whatever interfaces we have available. Ultimately, language - and therefore interface - is use, and use shapes cognition.
Assembly, Fabrication
The more revolutionary interfaces will be non-digital (or perhaps more accurately, non-screen).
The more revolutionary interfaces will be non-digital (or perhaps more accurately, non-screen). This is a phenomenon that is already in progress (though, again, it moves in slow waves through society only to arrive all of a sudden in each person's life).
The earliest interfaces were assembly interfaces. These consist of sets of pieces or parts that are assembled (or plugged into each other) in novel configurations. Of these, the classic is an old standby, the Lego block, which is enjoying a digital renaissance with motors and computers and more. Assembly interfaces have a long history, from crystal radio sets (in kit form) to home-built computers.
The real change comes when we enjoy popular access to fabrication interfaces. These are systems or devices that enable us to actually transform materials into new shapes or compositions. The Easy-Bake Oven was one of the first of these for kids (mix some powder together, get a cupcake). We've long had shaping tools, such as automated engraving systems, and extrusion tools, which we use to make vinyl siding and pipes. The 3D printer (such as the MakerBot) is a layering tool, building objects from multiple thin films of material.
…the core skill is in design…
With these it becomes apparent that the core skill is in design, and the same sort of environment is employed in everything from plastics to software design to component assembly. Whether it's AutoCAD, Photoshop, video editing, Microsoft Word, or Eclipse, the design interface consists of a main work area, where you manipulate the object in question, and a set of tools that offer different sorts of manipulations. These tools allow you to input samples, components, algorithms or other resources, and output designs and software that can be used by machines of any shape and description.
This basic design environment will not change over the next five or ten years. How could it? But the way we create will be dramatically changed by changes in each of these three major areas: the work area, the tools area, and the export area. Let's look at each of them briefly.
Environments
The work area is today a flat surface on a screen where you see a two-dimensional representation of the object being created. Two new dimensions will be added. First, the killer application for immersive virtual reality will not be games (though they will be hugely popular) nor virtual reality video (let's call that 'tridio'), it will be design. These will be especially useful in the sciences, medicine, the arts and architecture, but virtually no domain will be untouched by immersive design studios.
Second, these immersive environments will be multi-user. There will not just be one set of hands or tools working on the design, there will be multiple hands and tools, working together in real time, communicating with each other in real time, usually through audio (though of course the whole environment will be a floating sketch pad), creating ideas and concepts through a process of stigmergy rather than argument or inference.
…we will be drawn increasingly into large, complex multi-user projects when the overall scope is beyond any individual comprehension or ambition.
These taken together change the subjective feel of creativity. Today, it can be a lonely task. But we will be drawn increasingly into large, complex multi-user projects when the overall scope is beyond any
individual comprehension or ambition. We need to consider how we will govern ourselves in such processes, and how to direct such work toward progressive ideas (as in Wikipedia) and away from destructive endeavours (as in 4chan).
The tools area will be enhanced with entire categories of tools. Today we have digital representations of most of the formative processes described above – combining, converting, shaping, extruding and layering. On top of this, automated systems (or macros) will execute complex combinations of operations with these tools. And their capacities will be enhanced – it will be possible, for example, to import text that is automatically translated as it is imported. Or to import audio that is automatically transcribed.
The tools area will also be enhanced with what we might call diagnostics or
analytics. We see this already in our word processor's attempts to correct our spelling or improve our grammar, or in PowerPoint's attempts to suggest slide designs. In the future, we will have access to representations of readability or semantic density, pacing and flow, overall concept and design. These tools will also tell us how productive we have been today, and may even tell us when to take a break.
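As one concrete example of the kind of diagnostic being described - offered only as an illustration, not a claim about how any particular product works - the classic Flesch reading-ease formula can be computed in a few lines of Python. The syllable counter below is a rough heuristic; real writing tools use much richer models.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of vowels, ignore a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    # Flesch Reading Ease: higher scores indicate easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```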
The export area will extend the usefulness of our work. It will translate concepts and ideas into machine instructions that can be imported into almost any device and executed. This is precisely what happens in automated production systems today. The only difference in five years is that these instructions will be sent into any device that has a computer processor, which as we discussed above, means almost any device.
To call this area the 'export area' is probably to understate its importance. One of the major affordances of digital technology has been the possibility of sharing our ideas and creations directly with each other. We'll look at this in the next section.
Computing as Creating
George Siemens recently wrote, “A stunning period of web innovation occurred between 2000-2005: delicious, myspace, many blog platforms, flickr, wikis, etc. The gates were opened and everyone was a content creator and everything was subject to user creation. Everything was a possible social artifact. Take and share a picture. Post your thoughts on a blog. Tag and share valuable resources. The web had its velveteen rabbit moment and became real to people who had previously been unable to easily share their creative artifacts.”
It's easy to think of a new technology as simply replacing what we already do with some form of automation, but real change comes when it enables us to do things we could not do before.
The SAMR model stands for 'Substitution, Augmentation, Modification and Redefinition' and the idea is that it helps us understand the process through which a new technology enters the environment (explore the idea by viewing the presentations on Ruben R. Puentedura's Weblog). It's easy to think of a new technology as simply replacing what we already do with
some form of automation, but real change comes when it enables us to do things we could not do before.
In an education system focused on the future, therefore, the core of learning is found not in what is defined in the curriculum, but in how teachers help students discover new possibilities from familiar things, and then from new things.
In an education system focused on the future, therefore, the core of learning is found not in what is defined in the curriculum, but in how teachers help students discover new possibilities from familiar things, and then from new things. It is, to my mind, transformation from an idea of education defined as acquisition of skills or progression along a learning path to one characterized by exploration, discovery and finally creativity.
This is the why of Seymour Papert's constructionism. It isn't simply that we are helping students become better mathematicians or better computer programmers (though it is that). It isn't even that we want them to think like a mathematician or like a scientist or like an engineer (though it is that too). It's that we want students to see beyond what is possible in today's world, with today's technology, and acquire the ability to learn skills, conceptualize and design in future environments that do not exist today.
The more 'virtual' we can get in learning today, the better.
The more 'virtual' we can get in learning today, the better. We want students to think with their hands and their bodies as well as with their minds, to place themselves in imaginary worlds and environments and solve problems, undertake projects and create new solutions. Even before these immersive environments arrive in our schools and our workplaces we can prepare through hands-on activities in the arts, engineering and sciences, through drama and improvisation, and through design activities in mathematics, computer science, and language arts.
“Consider our curriculum as a self-contained coherent resource,” writes Siemens. “The goal of education? Teach this container to the students. What happens when you add artifact creation? If someone comes along and says, 'what about the power structure and the bias that underpins this content?' Bam. It's a new course. Someone creates a video reacting to a lecture I delivered? Bam. It's a new course.” It's not the simple fact of being virtual or non-virtual that makes it better. It's the element of adding creativity to the curriculum, and to the extent this is enabled by virtual media, virtual media are better.
The technologies that will be important in the future will be those that help us create new things, in new ways.
The future of virtual learning is so often depicted in terms of the distribution of learning resources. This focus is a mistake. The technologies that will be important in the future will be those that help us create new things, in new ways.
THE OLD AND THE NEW
Change is not just the new, it is invariably a combination of the old and new.
Change is not just the new, it is invariably a combination of the old and new. Change does not simply arrive from nowhere, it emerges as a result of a growing unease with existing practice. History, as Hegel said, is a process of thesis and antithesis clashing and combining to produce a synthesis. New knowledge, as Kuhn said, results from enough unexplained phenomena emerging to challenge normal science and produce a paradigm shift.
Marshall McLuhan famously wrote of the tetrad of media effects: when contemplating a new or emergent medium, we should ask, what does the new medium enhance, what does it make obsolete, what does it retrieve, what does it become when pushed to its limit? These questions give people hope that the old will endure through the new. They allow us to speak of how the spoken word of the campfire is retrieved by radio, how the way a printed work feels will never be replaced, or to point to the recent resurgence of vinyl record albums as the reversal of the ascetic nature of digital music.
We are tempted to say “it's not better just because it's new,” but we forget that the reason it's new, the reason it came into existence at all, is because it's better, if not for you, then at least for someone.
This, though, would be a mistake. The old, once it is displaced by the new, is gone. Talk radio of the 2000s in no way resembles pop radio of the 1970s, which in turn in no way resembles radio theatre of the 1950s. We may feel nostalgia for the printed book or the vinyl album, but we will continue with eBooks and digital audio because they are so much more useful and so much cheaper than the alternatives. We are tempted to say “it's not better just because it's new,” but we forget that the reason it's new, the reason it came into existence at all, is because it's better, if not for you, then at least for someone.
To understand the relation between the old and the new, it is important to understand the benefits of the new, the values it supports that brought it into existence. In business writing, for example, benefits are understood as lower cost or greater revenue, and these in turn are realized through greater efficiency or improved outcomes. In education, a benefit may
be understood as improved grades or greater retention. There are also social benefits to education, which often explain why governments and community groups invest time and energy into change.
Normal Education
Near the beginning of the Internet era people like Marc Prensky wrote of the 'digital native', people who would grow and act and learn and think differently because of the affordances created by new technology. This, he argued, would force all educators to change. Yet, he notes, “A frequent objection I hear from Digital Immigrant educators is 'this approach is great for facts, but it wouldn't work for my subject.'” Nonsense, he says.
We have since learned that the sweeping changes predicted by Prensky were not generational but were more social and cultural, and depended more on exposure to the new technology than on age. Nor were the changes as sweeping as predicted, and in particular we've learned that multi-tasking is in many respects a bad idea. And we know that “the messy reality of the use of digital devices by children is highly variable and far more nuanced than is denoted by the simple dialectic of digital natives vs. digital immigrants.”
Having said that, it remains true that in the traditional education system, we are still firmly entrenched in the pre-digital age even though researchers and developers continually proclaim this or that paradigm shift. This will not change in the next five years, though there will be (as there have been over the last two decades) proclamations that we are gradually moving toward the edge. We still study in classrooms (whether virtual or off-line), we still study subjects in cohorts or classes, we still employ texts and workbooks, and we still submit assignments and write tests.
The static structures that used to define education have shifted; we are in an era of changing boundaries between formal, non-formal, informal and post-formal education.
And yet, despite the conservative nature of the educational system, we have conclusively and irreversibly entered the digital society, and students have permanently changed, as has society. The static structures that used to define education have shifted; we are in an era of changing boundaries between formal, non-formal, informal and post-formal education. People change jobs and even careers more frequently, learning is defined as lifelong, and corporate learning is shifting from the classroom to the 70-20-10 model emphasizing experience, coaching and mentoring.
People change slowly, and institutions even more so, so 'normal education' will still be the norm in five years. But it will be less of a norm than it is today. When we look at the core elements of 'normal' education - classrooms, cohorts, textbooks, and assessment - we see the beginnings of change pushing us gradually to that point where 'normal education' is no longer viable. Let's look at each of these four areas.
Classrooms
It has long been a running criticism of educational technology that the first thing anyone does with a new tool is to recreate a traditional lecture theatre with rows of seats and a stage on which a teacher or professor will pontificate. That's what we did when we were creating multi-user environments in the 1990s, that's what Second Life designers did, and that's what we still see today in applications like Adobe Connect, Blackboard Collaborate, Big Blue Button, or WebEx.
We sometimes think that these classrooms will be replaced with virtual games or simulations where learning is more like an adventure or a contest. But in addition to being difficult to develop, these environments are in many ways too specific for educational purposes. A simulation might be an excellent way to learn how to fly a C-Series jet, but we won't need one of these in every classroom. Instead, as discussed in the previous section, we will use classroom spaces more and more for exploration, discovery and creation.
These environments don't exist yet. Teachers and learners everywhere have experienced the difficulty of collaborating in a learning environment. The tools aren't there, communications are too difficult, and there isn't a creative space. Collaborative authoring tools, such as Google Docs, can be used, but there is no sense of interactivity and cooperation. It's hard even to imagine writing or drawing or making video collectively, because they have traditionally been individual tasks.
Outside the domain of education, we can see where the future leads. Collaborative and cooperative work environments have become mainstream. We can find good examples in GitHub, a multi-user environment that enables authors to share their code, supports collaborative development, and lets them create new versions, or 'forks', of existing software. Another example is Slack, which is a multi-user project management and communications environment.
In time, our synchronous instructional environments will begin to look more like collaborative work environments.
In time, our synchronous instructional environments will begin to look more like collaborative work environments. The emphasis will be on helping students manage products and create cooperatively. Online conferencing will emphasize dialogue and discussion, rather than presentation. The conferences themselves will be recorded and stored and will be usable
by others as learning resources. In the longer run, the learning function being described here will be incorporated into production software used in offices and work environments.
These were all functions supported by traditional classroom spaces, so the physical environments used to support them will be repurposed. There will be common areas designed for comfort and serendipity, meeting rooms, and labs with specialized equipment and facilities. We shouldn't be thinking, for example, how Google Home can enhance existing classrooms. We should be building, one classroom at a time, active learning spaces, with an eye to supporting not only existing student populations, but also lifelong learners throughout the community.
Cohorts
One of the primary tasks of educational institutions is to place cohorts of learners together. The reason a person wants to study at MIT or Yale or Oxford is not only the quality of the instructor or the facilities, it is the quality of the person next to them in class. Look at what MIT does with the Media Lab (watch them work here), or what Stanford does with its innovators' programs (they share their stories here). Waterloo engineering students work in teams devoted to rocketry or robots or alternate fuels.
These processes work exceptionally well for the small number of students who have the opportunity to study at these institutions, but the formation of cohorts in online learning is mostly a matter of signing up for online classes and hoping for the best. Moreover, in large-scale online learning, such as the Massive Open Online Course (MOOC), cohort-formation is even more haphazard. Coursera has introduced 'automated cohorts' (read about it at Duke) but we still don't see the multi-discipline teams that can approach new problems in new domains from all directions.
There are sweeping changes being planned to replace traditional cohorts…
There are sweeping changes being planned to replace traditional cohorts - and, indeed, traditional education - with a model (defined here) of competency-based education (CBE). Various models are proposed, but the core idea is to replace traditional curricula with a set of measurable skills such that the instructional delivery can be personalized, with resources and activities centred on these competencies. It is touted to increase efficiency and reduce costs. Technology companies and education publishers see opportunities in CBE for new markets, replacing traditional classrooms with automated systems.
We'll discuss competencies at greater length below. For now, it is sufficient to observe that the model of progression would shift from the cohort, where students are assigned grades relative to their achievement within the cohort, to a model based on mastery, where advancement is contingent on success at earlier levels. This changes the public dialogue on education. Think, for example, of student-instructor ratios, of the administration required to oversee and map a different configuration of the cohort, of staff requirements and quality assurance, and of funding regimes.
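To picture the mastery model in the simplest possible terms - this is a toy sketch, not any vendor's or institution's actual CBE implementation - advancement can be represented as a set of competencies with prerequisites, where a learner may attempt a competency only after mastering the ones before it. The competency names and the 0.8 mastery threshold are invented for the example.

```python
# Each competency lists its prerequisites and the learner's latest score
# (None means not yet attempted). All names and numbers are illustrative.
competencies = {
    "fractions":        {"prereqs": [],                      "score": 0.92},
    "ratios":           {"prereqs": ["fractions"],           "score": 0.74},
    "linear_equations": {"prereqs": ["fractions", "ratios"], "score": None},
}

MASTERY_THRESHOLD = 0.8   # assumed cut-off for 'mastered'

def mastered(name):
    score = competencies[name]["score"]
    return score is not None and score >= MASTERY_THRESHOLD

def can_attempt(name):
    # A learner may attempt a competency only when all prerequisites are mastered.
    return all(mastered(p) for p in competencies[name]["prereqs"])

for name in competencies:
    status = ("mastered" if mastered(name)
              else "available" if can_attempt(name)
              else "locked")
    print(name, "->", status)
```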
And, despite the loss of the cohort, we will still want students to be able to work together, and to actually work together. A range of groupings will
be desired: some uniting people who are working on the same topic at the same level, and others based on bringing together people from different disciplines, and at different levels, to work on common projects. The challenge for online learning environments in the next decade will be to solve the problem of how to bring people from diverse disciplines together in this way.
New environments dedicated to new models of cohort formation are likely in the short-term future
There are many models to choose from. The same problem exists on traditional campuses, and unofficial activities such as fraternities and campus clubs have always filled the gap. New environments dedicated to new models of cohort formation are likely in the short-term future. Though these will be the far more significant outcome of our rethinking of cohorts, they will be all but obscured by the loud, and often heated, debate over personalized learning and competency-based curricula.
Textbooks
There is still a market for books, though these are more frequently in the form of eBooks. That this is the case shows how little the production of learning resources has changed even two decades into the digital revolution. Supplementing books online are learning resources in the form of course packages, learning objects, audio and video recordings, and other media intended to transmit content from the author to the learner. No matter how you look at it, the revolution in learning resources has not yet occurred.
It's hard to imagine the $120 textbook surviving in this media environment whether online or off.
In the wider Internet, however, a full-blown crisis has emerged. Traditional news media are facing disappearing business models and a crisis in confidence. Online publishers are being challenged and often overtaken by marketing and fake news sites. People are abandoning record stores in favour of streaming media services and cable companies find themselves competing against Hulu and Netflix. It's hard to imagine the $120 textbook surviving in this media environment whether online or off.
We can safely predict that academic media will follow in the short term, which is good news for subscription services like EBSCO (view them here) and Lynda (here). But the crisis is far from resolved. We have yet to see the emergence of a 'YouTube for learning' or a 'Facebook for learning', but YouTube and Facebook themselves have become important learning platforms in their own right. These or their successors will play an increasingly major role in the production and distribution of learning resources. What do publishers do then?
Traditional texts are also challenged by the idea of open educational resources (OER). These come in two major flavours: first, open access replacements for traditional textbooks, and second, smaller resources intended to support or supplement existing instruction. In either case, the attraction of OER is not only that they reduce costs and facilitate access, but also that they can be (with appropriate licensing) reused and modified by instructors and students alike. OER have attracted the interest of major foundations, such as William and Flora Hewlett, and of bodies like UNESCO (which defined the term in 2002).
Future technologies will emerge at the points of conflict: a business model not based on subscriptions or advertising but on something else, mechanisms for trust not based on marketing and distribution but on something else, personalization based less on tracking and surveillance and more on learning goals and preferences.
Probably more significantly in the long run, the textbooks themselves are beginning to change. The webpage was the first iteration of a new form of content in which images, interactivity and multimedia could be embedded. This is in the process of being extended. Technology such as the Jupyter Notebook allows people to embed computer code into books such that readers can change the content by changing the code. The Actionable Data Book initiative foresees an environment where live data can be streamed into texts.
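The kind of interactivity being described can be pictured with a small Python cell of the sort a reader might find embedded in such a text. This is an invented illustration, not drawn from the Actionable Data Book specification: the reader changes a number, re-runs the cell, and the 'book' recalculates its own example.

```python
# An illustrative notebook-style cell: change the parameters and re-run
# to see the text's worked example update itself.
principal = 1000.0      # initial amount invested
rate = 0.05             # annual interest rate (try changing this)
years = 10

for year in range(1, years + 1):
    principal *= (1 + rate)
    print(f"year {year:2d}: {principal:,.2f}")
```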
As they become more interactive, digital textbooks will replace many of the functions of the learning management system.
As they become more interactive, digital textbooks will replace many of the functions of the learning management system. By combining activities and feedback, they will become, in a sense, automated tutoring systems. Consider how Codecademy teaches Python, for example. Readers progress step-by-step through the instruction, writing software right on the web page and getting feedback as they learn.
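A rough sketch of that pattern - deliberately simplified, and not a description of Codecademy's actual platform - looks like this: the learner's submission is run against a small set of checks, and the feedback depends on the result. The exercise, tests and messages are all invented for the example.

```python
# A toy exercise-plus-feedback loop of the kind an interactive text might embed.
exercise = {
    "prompt": "Write a function square(n) that returns n * n.",
    "tests": [(2, 4), (5, 25), (-3, 9)],
}

learner_code = """
def square(n):
    return n * n
"""

namespace = {}
exec(learner_code, namespace)          # run the learner's submission

failures = [(n, expected) for n, expected in exercise["tests"]
            if namespace["square"](n) != expected]

if failures:
    print("Not quite. For input", failures[0][0], "expected", failures[0][1])
else:
    print("Correct! Moving on to the next step.")
```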
People still write and publish paper books, a fact that seems a bit surprising in the digital age. They have been able to do so because, until now, digital books have been merely copies of paper books. But this may be the last generation of paper books, as electronic replacements can do so much more, for less money, and for more people.
Pedagogy
We have traditionally thought of pedagogy as the core of teaching and learning, and many readers would have wanted this text to begin with pedagogy, rather than burying it in the middle of the fourth chapter. But in a world where people teach themselves, what are we to make of a concept the core of which is focused on how to teach others?
Through the history of learning and instructional technology our approach to pedagogy has been largely influenced by the processes and theories of distance education. These, in turn, have been influenced by the work of people like M.G. Moore, who described instruction as a process of transaction or interaction, and people like Robert Gagne, who spoke of events, processes or learning activities.
These result in an understanding of pedagogy as instructional design, which has in turn been developed over the last two decades as the underlying science of online learning. The role of 'instructional designer' has been recast as a profession, and the tools and mechanisms of instructional design have been defined variously as a set of technology standards (for example, IMS Learning Design) and applications (such as the Learning Activity Management System (LAMS)).
Some of the core technology debates over the last two decades have also been pedagogical debates. For example, when evaluating the wisdom of designing learning technology as a content delivery system, we are also debating whether to endorse a form of instructivism, as defined by people like Kirschner, Sweller, Clark and Willingham. These authors, in turn, point to the “failure” of progressive, discovery-based, or constructivist methodologies, contra the design philosophy informing learning systems such as Moodle.
The practice, and indeed, the art, of pedagogy, has been replaced by a technology and a science.
Of course, none of this is what was intended by the concept of pedagogy as originally conceived: the study of the method and practice of teaching. But we can't go back to the days when our understanding of learning was limited by the conception of a teacher managing the learning of a collection of students in a classroom. The practice, and indeed, the art, of pedagogy, has been replaced by a technology and a science.
We can seek in vain to return to that former understanding, or, moving forward, we can seek to identify those core values that underlie the why of pedagogy. What did it matter what a teacher did? What were the outcomes we hoped for? How does our current understanding of instructional design meet those, or fall short? What can we develop in the future to address these issues?
Assessment
The early days of online learning saw the production of digital test and quiz creation software like Hot Potatoes. Version 6 of the software was released in 2013 and its enduring popularity shows that not much has changed in the intervening decades. Instead, technology has been developed to reinforce the existing model through proctoring and identity verification, through plagiarism detection, and through grading support.
Identity verification has always been a challenge for online learning. Even in the days of videoconferencing students were learning how to fool remote teachers and proctors. Distance and online learning programs relied on testing centres. MOOC vendors relied on honour codes and the scanning of identity documents. Modern identity systems are using such things as biometrics and keystroke recognition. But if users benefit by cheating the system, how much can we rely on the system?
Teachers and professors have also been engaged in an ongoing conflict with students who purchase essays, copy answers, or otherwise indulge in academic dishonesty. From the early days of the Internet, essay writing services were advertised online and this naturally led to the development of plagiarism detection systems such as TurnItIn, which launched in 1997 (view them here). The capacity of both sides has increased over the years and today students are using intelligent systems that paraphrase essays and institutions are deploying systems that catch them.
How can we design assessment systems that accurately and honestly measure a student's achievement?
The conflict here is easy to see: how can we design assessment systems that accurately and honestly measure a student's achievement? Even more to the point, how can we create incentives for honest academic behaviour? This problem is faced not only in academia, but in any system where provenance and originality are in question. For example, subscribers complain about people reposting other people's images on content sharing sites like Imgur. Biologists complain about “unethical name creation based on other people's work.” They are in the same position as instructors reading the same stock essay submitted by students as 'The Light of Reason in Pride and Prejudice' for the eighth time.
If change emerges from conflict then there is abundant opportunity for change today, and ample opportunity to study the environment to see where that change will originate. In this section, we identified numerous conflicts but few solutions. In all of these cases, the conflict arises as instructors and developers attempt to use new technology to replicate existing practice.
Digital classrooms, online cohorts, eBooks and videos, and online assessment have all been successful, to a degree. And yet, to a degree, they have not. Nobody seems to be satisfied with the model of online classrooms, courses, textbooks and tests. People are trying MOOCs, they are trying social learning, they are trying performance support, and they are trying simulations and games. In the next few sections we will look at how digital media and learning technology are evolving to meet these and similar challenges.
Institutions and Credentials
Employment will not be stable and changing personal and social conditions could alter their circumstances abruptly.
Expectations are changing. Young people today are less likely to prepare for a single career that will last a lifetime. The jobs aren't there. This does not mean they won't specialize; it is difficult to become a professional musician or doctor or baseball player otherwise. But the next generation of jobs won't be professions. Employment will not be stable and changing personal and social conditions could alter their circumstances abruptly. We already see a rising tide of part-time and self-employment in Canada.
As a result, institutions are changing. The level of public support for the traditional university experience no longer exists in many countries (in some countries, where resources are scarce, it has never existed). In New Mexico recently the governor vetoed all higher education funding.
In Canada the situation is less dramatic, but the pattern of a long-term decline in government support (and corresponding rise in tuition revenue) is also present.
As a result, students (and governments, and employers) are looking for more focused delivery of learning and instruction, and are increasingly favouring private or unofficial education providers. We can see in numerous publications (like, say, the Horizon Report) predictions reflecting these trends. They represent a disaggregation of the traditional degree, breaking it into component parts. “Processes for assessing nuanced skills at a personal level are needed,” they write. Combine this with the observation that “to be profitable privatisation depends on standardisation to scale, reducing cost prohibitive services and the ability to meet unique needs,” and a movement toward things like competencies, badges and microcredentials seems inevitable.
At a certain point, it becomes irresponsible to train students for jobs and employment when this is not the economic future they can expect upon graduation.
This results in a change in the nature of students' relationship with educational institutions. Part-time students, for example, are increasingly young (see here, p. 8). Watch for a sharp increase in the number of part-time students as a percentage in official figures, and an even sharper increase if we include non-official and informal learning opportunities.
And students are looking for different things, not just career preparation. At a certain point, it becomes irresponsible to train students for jobs and employment when this is not the economic future they can expect upon graduation.
Credentials earn careers, but competencies earn gigs. So in the near future competencies will mean more than credentials. Academic engagements will be short-term, focused, increasingly online, and more and more frequently offered by private companies offering validation in the form of portfolios, work experience (think internships) and industry-specific certificates. Even in traditional industries, positions are defined in terms of competency models rather than degrees or credentials required. This shift from credentials to competencies is probably one of the most significant changes taking place in education today, yet there is little sign it is being reflected in institutional policy or practice.
It's not about the technology… It is a social and cultural shift from the idea of education as a good in itself to education as a means to some other objective, end, or purpose.
It's not about the technology. It's not about using Mozilla Backpack to support a badge infrastructure to recognize learning achievements, nor is it about Advanced Distributed Learning's Competencies and Skills Systems (CASS) initiative for corporate and military human resource management, though these are important ongoing initiatives. It is a social and cultural shift from the idea of education as a good in itself to education as a means to some other objective, end, or purpose.
A TURN OF THE WHEEL
Change is cyclical, but not all change is cyclical, and one day every cycle stops. These three great truisms underlie the foundations of religion, of mathematics, of economics and of nature itself. The sun rises and sets, the Moon waxes and wanes, the seasons change, what is old is new again.
Digital technology also has its own great axes around which fashions and flavours rotate. One of the most important of these is the question of whether computing resources should be centralized or distributed. In the early days, computers were by necessity centralized, since they occupied entire rooms. Workers would access computers through remote terminals, which were really nothing more than typewriters and computer screens or printers.
The microcomputer revolution of the 1980s was the turn of that wheel. Instead of dumb terminals we began to use devices that could function independently. They began as game consoles and smart typewriters and eventually became the Apple Macintosh and the IBM PC. Sure, computers could be networked, but there was no need, and most computers came without any such capacity (the first modems connected to serial ports, like a mouse; read this for a sense of how complicated it was).
The Internet represents another turn of the wheel. At first it was just a way to send data from one computer to another, and the real work was performed by applications like web browsers (Netscape and Internet Explorer, for example) and plug-ins like Shockwave and Flash. The arrival of Web 2.0 in the early 2000s began to change that. Applications like Flickr and YouTube and MySpace began to do things for us online. People stopped working on their own web pages and joined these online communities and social networking sites.
…it may be less obvious to most just how centralized today's Internet has become… It is difficult enough even to understand the Internet today, let alone predict where it will go.
This much everyone knows. But it may be less obvious to most just how centralized today's Internet has become. As we push toward what is now called cloud computing it seems as though most of the Internet resides on just a few services owned by Amazon, Google, IBM, Facebook, and to a lesser degree Microsoft Azure, NTT, Alibaba, Apple and Yahoo (see more major cloud platforms here). This market, which has been growing steadily for the last ten years, is incredibly complex. It is difficult enough even to understand the Internet today, let alone predict where it will go.
Virtualization
Let's begin with the concept of virtualization. Take your desktop or laptop computer and think about it as a self-contained unit, which it is. It has an operating system, computer programs, and data. All this is stored in the computer's memory or on its hard drive. Make an exact clone of all that data. Now you have two versions of your computer – the original, and an image of the original. Take this image and place it inside a program that emulates your computer's hardware. Now your computer has been virtualized – you have one computer running inside another computer.
We've had virtualization software for decades. Apple users depended on Parallels to run Windows applications. People who used Linux used WINE to read Microsoft Word documents or play Flight Simulator. Hobbyists used VMware to run Linux computers on their Windows machines, and today VMware and Oracle's VirtualBox are the major competitors. Almost all websites today run on virtual computers hosted inside (for example) a Rackspace environment.
In the last few years cloud technology has evolved from virtual computers to virtual containers.
In the last few years cloud technology has evolved from virtual computers (or virtual servers) to virtual containers. The idea of a container is that it is a self-contained application (like, say, a database) running in its own isolated environment, lighter-weight than a full virtual machine because containers share the host's operating system. The prototypical container service is called Docker, and it uses the metaphor of shipping containers to describe its service. So now, instead of running on a single machine, web services are created by assembling containers together.
This gives the service more stability, because an individual container crash won't crash the web service as a whole. And it provides flexibility. Websites can expand according to demand by adding more containers. Requests to individual containers are handled by load balancers and the whole system is run inside a cloud service provider that responds to demand as needed – Amazon, for example, calls its version the Elastic Compute Cloud.
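As a small illustration of the container idea, here is a sketch that assumes the docker Python SDK (the 'docker' package) and a local Docker daemon are available; the image and container name are just examples, and a real web service would assemble several such containers behind a load balancer.

    # Treat one component of a web service (here, a Redis data store) as its own container.
    import docker

    client = docker.from_env()   # connect to the local Docker daemon

    # Start the component in isolation; if it crashes, the rest of the service keeps running.
    db = client.containers.run("redis:alpine", detach=True, name="course-data-store")
    print(db.name, db.status)    # e.g. 'course-data-store created'

    # Scaling up under load is, in principle, a matter of starting more containers like this
    # one and letting a load balancer spread requests across them.
    db.stop()
    db.remove()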
When we think of Massive Open Online Courses (MOOCs), we need to think of them in this context. Previously, online courses were run in learning management systems that operated like traditional websites. In larger institutions, they were virtualized, of course. But the Stanford and MIT MOOCs were run in elastic cloud computing environments using technology like Docker, which allowed them to scale just as much as was needed. All the innovation in these MOOCs was behind the scenes, while on the front end they looked like very traditional and unimaginative online courses.
In the short term, this approach is on the verge of dominating computing as we know it. In the long term, however, the wheel turns.
Services
Let's talk first about the short term. Container-type services work because stand-alone versions of individual applications can communicate with each other. This has already happened with the traditional components of web-based applications: databases, business logic, and user interface. In enterprise applications, we also have middleware which connects web applications with enterprise services such as human resources, financial or infrastructure management systems. The Learning Management system fits into this ecosystem.
In the next few years we will read a lot about the flowering of cloud computing services that have become available.
In the next few years we will read a lot about the flowering of cloud computing services that have become available. There is already an impressive library of services on any major provider. A good example of this is the collection of virtualized analytics and artificial intelligence services (collectively known as MLaaS – Machine Learning as a Service). Here's an article from Hacker News on how to set up your own MLaaS environment. Web applications can access services that make predictions, recognize trends, organize and categorize data, do voice recognition and biometrics, and any number of other advanced functions. Visit IBM Watson's demo page to get an idea of the possibilities.
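To give a sense of what 'accessing a service' looks like in practice, here is a minimal sketch using Python's requests library; the endpoint URL, API key and response format are placeholders, since each vendor defines its own, but the pattern (send data, get a prediction back) is common to them all.

    # Ask a (hypothetical) cloud machine-learning service to classify a piece of student writing.
    import requests

    API_URL = "https://ml.example.com/v1/classify"   # placeholder; each provider has its own API
    API_KEY = "your-api-key"

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": "I found the second unit much harder than the first."},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())   # e.g. {"sentiment": "negative", "confidence": 0.87}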
Platforms
What will this look like to someone using one of these applications? We already have a sense from services like GMail or Google Docs. While websites of the past relied on a flow or a navigation through a series of hyperlinked pages, modern web applications keep the user in a single window and feed them a steady stream of content. This window, meanwhile, becomes a platform through which the user can access other content or services.
The online learning experience also changes, though in general this change has lagged the development of web applications. Instead of paging through content as though it were a book (the classic 'page turner') some modern e-learning looks more like an online game or web application.
But change is slow. As Diane Valenti recently wrote, “the vast majority of my clients are still teeing up page-turner e-learning courses. And with the exception of PowerPoint slides replacing flip charts, instructor-led training that hasn't changed much since the dawn of time.”
What we are seeing in the short term are efforts to embed more interactive applications into traditional web pages. The early MOOCs, for example, embedded video lectures into their webpages. OpenEdX featured a method to use iframes to embed interactive JavaScript applications (see the documentation here).
On another front, learning (and especially learning in the lower grades) experienced a proliferation of one-page apps designed for mobile devices. There are far too many of these to attempt to describe – you can get a sense by looking at Kahoot, Popplet, Explain Everything and Padlet. Some bloggers (such as Richard Byrne) have made their careers finding and describing new entries in this unending parade of apps.
The End of the App
The day of the app is coming to an end…
The day of the app is coming to an end, however, for a variety of reasons. Developers and purchasers are learning to avoid proprietary environments and closed app stores. But even more to the point, these stand-alone apps don't talk to each other; they communicate only through the Apple or Google platform, which means developers can't build clusters of inter-related apps to support a service.
A good example of this is the evolution of audio and video libraries. Before radio and television, music and visual presentations took place locally (in concert halls, salons and living rooms). But soon we were able to watch and listen on 'dumb terminals' (called television sets and radios) from centralized locations. The cost of this went from 'free' to ad-supported to expensive cable subscriptions. And when improvements in media and technology eventually made it better to watch and listen in the home we went through a quick succession of media, from vinyl to tape to CD/DVD to MP3/4, and we were back to the distributed network. iTunes probably represents the pinnacle of that era, creating and managing libraries of content on your computer. Then along came services like Netflix and Spotify and the consumption of streaming media became much simpler, and we began again to rely on centralized services, which is where we are now. The app (and the app store) are like MP3 recordings and iTunes. It's easier and more efficient to access apps from a seemingly endless cloud-based library than it is to buy them individually and store them on our computer.
…instead of consuming content one page after another, users will be invited to visit a learning application where they will access content, activities and services on an as-needed basis.
So, over the next few years, what we will see is that these apps (or, more accurately, their functionality) will be virtualized. They will run in their own containers and will be able to be embedded into larger applications like a learning technology platform. We will access them when we need to and forget about them otherwise. And as a result, the learning experience will change as well – instead of consuming content one page after another, users will be invited to visit a learning application where they will access content, activities and services on an as-needed basis.
This change has already started happening, though the excitement over MOOCs has obscured this. Because the MOOCs created by Stanford and MIT represent, on the surface, a step backward in learning technology, it's still easy to view the state of the art as dominated by page-turners and online video. But we are also seeing companies like Desire2Learn insist that their product be called a 'learning platform' rather than a 'learning management system'. This is a symptom of the changing environment.
Toward Decentralization
The migration from stand- alone app to virtualized service is just beginning.
The wheel will turn, but it will not turn within the next five years. The migration from stand-alone app to virtualized service is just beginning. It may be a decade before the reverse flow can be detected.
But let me talk for just a moment about why this flow will eventually reverse. The advantages of centralization today are stability and flexibility. Virtual applications can be stacked together in multiple configurations, all in an environment that supports cost-effective delivery and efficient organization. As these advantages begin to wane, the push will be on again for a more decentralized application environment.
Some of these pressures are already being felt – we are at the very leading edge of change here. People are beginning to see the danger inherent in collecting data and services into the hands of a small number of providers. It makes the system vulnerable to attack, it makes data (even containerized data) less secure, and it gives these companies more opportunity (and more temptation) to sell customer data to governments and corporations (so much so the U.S. government decided to level the playing field by allowing Internet service providers to do the same – story here).
Another problem is that these data centres are massive consumers of energy. How much is massive? In 2014 U.S. data centres consumed some 70 billion kilowatt-hours, about two percent of total electricity consumption. Here's the data. This might be manageable if the consumption were distributed, as it is in homes and cars, but it's happening in relatively few locations. This creates what we call 'hyperscale' computing infrastructure. The on-site infrastructure is distributed (that's how Google achieved scale) but these facilities require massive quantities of power and bandwidth to manage the information, run the computers, and cool them.
At a certain point – what Gladwell would call the 'tipping point' – it becomes easier to manage virtualization and container architecture on home computers rather than in data centres; this happens when processors become fast enough and storage becomes sufficiently cheap. We are close to that capacity now (my expensive laptop can handle it, but my mobile phone cannot). It will take more than five years for the cycle to shift. The early work is already happening.
And, eventually, the wheel stops turning. Eventually, one alternative or the other becomes unsustainable. We reach what is essentially a singularity, where all options collapse into a single point, a single decision. Left or right. Forward or backward. “The die,” as Caesar famously said, “is cast.” The Rubicon has been crossed and cannot be uncrossed.
We have alternated for years between centralized and decentralized systems. It is easy to predict that we will continue to alternate in this way. But why would it continue? At a certain point, once you break something apart, it does not reform. Expect the centralized systems to collapse in on themselves at some point. Not just the massive data centres, but also the huge companies that operate them, and the enormous economies that sustain them. It takes a perfect storm, but perfect storms happen.
A Two-Fold Strategy
From a practical point of view, what this means is that educators and developers should adopt a two-fold strategy.
Apps will be seen as increasingly expensive, and because they do not communicate well with learning infrastructure they will be more difficult to integrate with other learning activities.
On the one hand, it makes sense to migrate from an app-based model to a cloud-based model as soon as possible. Apps will be seen as increasingly expensive, and because they do not communicate well with learning infrastructure they will be more difficult to integrate with other learning activities. Governments, larger institutions and boards should consider establishing or investing in cloud environments or infrastructures.
This will need to be supported with the necessary infrastructure. Cloud computing demands bandwidth. Already streaming services are creating a significant load on academic computing environments. Predicting and managing student use of bandwidth is becoming a major focus (and the subject of research studies). This won't change; it affects the entire Internet and will get worse before it gets better.
Institutions should also look at how they can provide cloud-based content and services.
Institutions should also look at how they can provide cloud-based content and services. Unless there is a real need to store video locally, why not simply upload it to YouTube? Or perhaps offer an on-demand audio streaming library hosted in the Amazon cloud, a service that would be inexpensive while it is less widely used and would scale immediately if there were a peak in demand. Instead of purchasing computing systems and large-scale enterprise applications, institutions should consider cloud-based alternatives. This especially applies to experimental things like open online courses, virtual environments, games and simulations, and machine learning.
In the longer term, educators and institutions should be watching for the pendulum to begin to swing the other way. Centralized resources and services will begin to disaggregate gradually from cloud provider to department or board to institution to individual. As I suggested, we are already seeing some early indications of this. Through the rest of this paper we'll discuss a number of them, but one example will serve for this section.
Right now, energy production is almost entirely centralized. We depend on a network of large power plants – oil and coal, nuclear and hydro powered. Even where these supplies are renewable (and in Ontario, most such supplies are) the centralization of production creates risks. We saw it once during the ice storm of 1998 and again during the east coast blackout of 2003. As a result of this, and as a result of the need to develop solar and wind power, our energy production is becoming more distributed. Rooftops covered with solar cells are appearing across the province. In time, individual households will produce their own power, and the energy grid will be used as a backup system and to power major infrastructure like transit systems.
What is the educational technology equivalent of the solar cell?
What is the educational technology equivalent of the solar cell? This is probably the core question facing people making long-range predictions about educational technology. At some point, we will each have our own personal learning assistant, probably. But the route from here to there is far from clear.
SPREADING ACTIVATION
As the number of members increases, the usefulness of the network increases roughly with the square of its size.
Some time around 1980 Bob Metcalfe, the inventor of the Ethernet, was trying to explain how the value of networks increases as they grow. With only a few members – a two-person telephone network, for example – networks have limited value. But as the number of members increases, the usefulness of the network increases roughly with the square of its size. A telephone that connects you to ten people is a hundred times more useful than a telephone that connects you to one.
This value is created by a type of change called 'spreading activation'. If you need to call ten people it can take quite a bit of time. But if you call two people, and they call two people, and so on, and so on… then change happens a lot more quickly. Not as quickly as a broadcast, where the same message is sent at the same time to many people, but broadcasting requires a lot of resources. Networking is more efficient, and it also means that the message can originate from any point in the network, not just a single source.
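A quick back-of-the-envelope sketch in Python (the numbers are arbitrary) shows why this matters: reaching a thousand people one call at a time takes a thousand calls by the original sender, while a chain in which each newly informed person calls two others reaches them all in a handful of rounds.

    people_to_reach = 1000

    # Spreading activation: each newly informed person passes the message to two more people.
    informed, newly_informed, rounds = 1, 1, 0
    while informed < people_to_reach:
        newly_informed *= 2          # everyone informed in the last round calls two others
        informed += newly_informed
        rounds += 1

    print(f"Direct contact: {people_to_reach} calls by a single sender.")
    print(f"Spreading activation: everyone reached after {rounds} rounds of calls.")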
Conferencing and Communication
When the Internet was first developed this efficiency was found in conferencing and communications tools. There were two major flavours: person-to-person, which became e-mail, and person-to-group, which became Usenet, both invented in the 1970s. It quickly became possible to send e-mails to a group of people as well; this became the list server, which was distributed as Listserv in the 1980s. Usenet combined with e-mail very easily so by the time the web was created the foundation for communications and conferencing systems was well established.
E-mail and Usenet were both asynchronous, which meant that sender and receiver could use the system at different times. But there was also a need for synchronous communication, so that sender and receiver could have a conversation. Internet users have been able to send direct one-to-one messages to other users since the early days. Synchronous one-to-many communication was realized with the invention of Internet Relay Chat (IRC) and the multi-user dungeon (MUD) in the 1980s. These evolved into things like ICQ and AOL Instant Messenger, games like World of Warcraft, and on the telephone side, the short message service, or SMS.
These create the framework for all conferencing and communications systems today. We can think of them within a framework along the following dimensions:
- Granularity – network messages are either 'one-to-one' or 'one-to-many'. Because anyone can send a message in a network, the latter is often called 'many-to-many'. The 'many' in these can vary from just a few people to large populations, though as the 'many' gets larger, it begins to resemble broadcasting, and becomes less like networking. This becomes an important point later.
- Synchronicity – the interaction can be happening in real time, like a telephone call or a chat, in which case it is called 'synchronous', or there may be some lag time, in which case it becomes 'asynchronous'. As this lag time increases the communication begins to resemble 'publishing', especially if the audience is more public.
- Privacy – if the message is (in theory) confidential to its participants, then it is private. If the message is proprietary to one or more of those participants, then it is protected (for example, by copyright law). Messages may be accessible to a limited community of people (or 'in-house') or may be generally accessible, in which case they are 'public' or 'open'. If the message may be resent by people to other people, it is 'open' or 'free'.
- Media – the earliest Internet messages were text messages, but other media quickly followed. A medium can be characterized by the 'transport layer' – was it sent using an Internet protocol, by telephone, or by carrier pigeon? It can be characterized by its 'encoding' – is it ASCII or UTF-8? Is it in XML, HTML or JSON? And it can be characterized by its 'presentation' – is it text, audio, or video? Is it a live recording, an animation or a simulation?
Trends in Communication: Control, T-Shirts
These dimensions define the technology of messaging and communication. With respect to predictions about future technology, they are to a certain degree limiting. There aren't really any alternatives in each of these dimensions beyond those already listed. Our modern Internet conferencing and communications technologies all fall within this framework.
Social media, for example, such as LinkedIn, Facebook and Twitter, all began as instances of short (or relatively short) text messaging to small groups of people (for example, lists of 'friends') just as e-mail lists or instant messaging services did. Over time, they all expanded media types to include images, audio, and video. YouTube, Second Life and MySpace all had similar attributes, but started with a focus on different media (video, simulation and audio, respectively).
The history of online collaboration and communication tools is itself a history of spreading activation. Once a part of the Internet becomes a platform, the capacity to collaborate and communicate spreads across all four of these dimensions. It's a gradual process, which for a time makes prediction relatively easy, at least, until the range of possibilities within these dimensions fills out. At that point, the network appears to be completely engaged, and it becomes difficult to predict a successor.
We will see a greater emphasis on being able to control granularity and privacy, to control who receives the message we send, and who we receive messages from.
Over the next five years, then, from this perspective, we will observe a spreading activation of collaboration and communications technologies. We will see a greater emphasis on being able to control granularity and privacy, to control who receives the message we send, and who we receive messages from. We'll also see people try to push the bounds of media: can we communicate using objects (as in 3D printing), can we communicate using clothing (as in smart shirts).
New Dimensions in Communication: Context and Bots
Far more important in an assessment of the future of collaboration and communication is a consideration of some of the additional dimensions that have not been explored yet. Here are just a few:
- Context – this is the time, place and situation in which collaboration and communication occur. Different contexts require different kinds of approaches to communication. At first, we were limited to desktop communication and then the mobile phone enabled mobile communication (with a host of social consequences). Current developments are focusing on hands free (so we can interact while driving, for example) so we're seeing numerous audio interfaces.
- Characters – the early days of online gaming always included NPCs (non-player characters) that drove storylines and posed obstacles. Microsoft brought us Clippy (which you can still add to your website). E-Commerce brought us the phone tree and automated responses, which people hated. Today we have chatbots (or simply 'bots'), which interact with us in almost every corner of the network. As these become more sophisticated – and they will – they will become more useful and more accepted as well.
- Platform – each generational change in platform creates a new layer in the network, which in turn sees collaboration and communications media spread anew. As the cloud became popular, for example, cloud communications became possible, which is why we have cloud- based social networks (such as Facebook) instead of website-based social networks or conferencing sites. The next social media service (Facebook, or whatever replaces Facebook) will draw from cloud services, adding input, tools and export services as described above.
- Function – this is the least defined dimension and therefore the one with the most potential for innovation. We have seen collaboration and communications tools used for everything from business management to software development to criminal syndicates to modern warfare. The software is now used to provide automated counselling and support, student advice, medical information, and travel advice. This movement has been gradual – spreading activation is not instantaneous – but has been steady and all encompassing.
The software is now used to provide automated counselling and support, student advice, medical information, and travel advice.
Better, Easier and More Useful
…the use of collaboration and communications technology will become better, easier, and therefore, more useful.
Where we should see the most impact in the short term is that the use of collaboration and communications technology will become better, easier, and therefore, more useful.
The technology will become better because of the already-noted improvements in computer technology in general. More powerful computers and greater bandwidth mean that we can share full-screen live video experiences with each other, and are well on the way to being able to share 3D and immersive experiences. The new technology will also increase the computational capacity of collaboration and communications systems, giving us a wider array of tools, so we can do things like (for example) draw diagrams in mid-air in a shared augmented reality experience (here's a video demo). As discussed above, these developments will be incremental, but will arrive very suddenly for individuals when they do arrive.
Where we'll get the most out of advancing technology is that all these functions will be so much easier. Making a video call is simple. Select a person's name from a list and select 'call'. Multi-user video conferences and video broadcasts are only a little harder. And things we used to think of as impractical, like sharing a live online gaming experience with friends, require only inexpensive and widely used digital signal processors.
Our tools will be doing a lot more of the organizing for us. Today, for example, we need to go out and look for friends, or organize ourselves into teams or cohorts. Social automation tools will manage this for us, and will also assemble the tools and resources we need, and configure the space where we interact (or play, or work, or whatever). Social media automation is already a major industry (see top vendors here) but has mostly been used by advertisers and marketers. This degree of control will eventually reach the everyday user.
The range of things that can't be done online will shrink as these technologies spread into all aspects of human life.
And all this makes it more useful. We will be using digital conferencing and communications technologies for almost everything that is today done in person. The range of things that can't be done online will shrink as these technologies spread into all aspects of human life. It seems difficult to imagine getting a haircut virtually (or by a bot) but we already live in a world where the technology for remote computer-assisted surgery already exists (see a news report from 2014) and is used.
From Medium to Environment
Teachers have a lot on their plate when it comes to online collaboration and communication. There are new tools every day, and there are new uses every day, some of which are harmful. Many of the skills students need have not changed, but they have not been translated well into new technology.
Teachers need to talk about the online world as a place where we converse and collaborate, not as some other sort of news media or textbook publisher.
Consider conversational skills, for example. These involve knowledge of give and take in a conversation, polite discourse, listening skeptically, staying on topic, not calling people names, and a host of similar competencies we teach explicitly in kindergarten and implicitly (usually through modeling and correction) in older grades. Yet by thinking of the Internet as a publishing platform, we have made it almost impossible to transfer these skills to digital media. Teachers need to talk about the online world as a place where we converse and collaborate, not as some other sort of news media or textbook publisher.
Face-to-face doesn't go away, even if it isn't actually face-to-face any more.
Structuring their own work in classes and with students across these different dimensions would help as well. What are the norms for one-to-one communication in a video environment? How should people present themselves when podcasting? These are skills that need to be learned through practice, not simply because we have new media that make these forms of communication possible, but because any future media are also going to fall in among these modalities. Face-to-face doesn't go away, even if it isn't actually face-to-face any more. Most importantly, students need to see appropriate behaviour modeled in these environments.
Interactivity and Presence
A final note on conferencing and communication tools. Over the last two decades two key elements have been identified with respect to online learning: interactivity, and presence.
'Interactivity' is at the core of M.G. Moore's transactional distance theory (nice diagram here), and in particular interactions between student and teacher, and student and student. We understand this interaction in terms of structure, dialogue and autonomy. All three of these are transformed as conferencing and communications technology change.
- Structure shifts from linear to multidimensional. Interaction, if you will, becomes less like 'telling a story' and more like 'painting a picture'. Students need to learn to communicate across multiple modalities, making art, design, and presentation skills all the more important in a digital environment.
- Dialogue shifts from text-based to multi-media. We gradually transform from communicating in words to communicating using a rich toolset of icons, graphics, memes, idioms, and actions. It is becoming more important to ensure students have a strong grounding not only in semantics but in general literacies.
- Autonomy shifts from demand-driven to needs-driven. What this means is that the locus of external control is no longer based on authority figures like parents and teachers, but is increasingly driven by the student's self-perception, identity formation, and understanding of his or her place in society. The teacher must change roles from being one who tests and evaluates students to one who acts as an advocate and support person when they face tests imposed on them by the wider community.
The teacher must change roles from being one who tests and evaluates students to one who acts as an advocate and support person when they face tests imposed on them by the wider community.
'Presence' is at the core of Archer, Anderson and Garrison's Community of Inquiry (CoI) model (described here). There are three major types of presence: social presence, cognitive presence and teaching presence.
- Social presence is the idea that you're talking to a 'real person' and is based on the ability of people to 'project' themselves into a community. Projection is the idea that we see ourselves, place ourselves, or express ourselves in external objects, including other people (Freud), media (McLuhan) and networks (Siemens).
Who we are depends on what we project, and vice versa. So our very identity changes as collaboration and communications technology change. Different communities will identify themselves in different ways. We need to learn to look beyond traditional categorizations and learn to see people as they are in different media.
- Cognitive presence is the extent to which participants can 'construct meaning' together. Construction is the name given to the creation of models, theories, frameworks, or other structures that inform our understanding of the world. These too change as the technologies change, as new forms and structures become available.
Computational technology is, in important respects, more expressive than language and mathematics, and so in our models we can incorporate ideas of function, change, contingency, and covariance. Beyond that, we have now developed new ways of making models and these (found largely in artificial intelligence) represent some of the more significant trends in the years to come; more on these below.
The community has properties, and types of knowledge, that go beyond what the individual can produce on his or her own.
- Teaching presence is the ability of the teachers to facilitate interactivity (and here we can refer back to Moore). But it also leads us to the idea that, through interaction, the whole can be somehow greater than the parts. The community has properties, and types of knowledge, that go beyond what the individual can produce on his or her own.
A CONSTELLATION OF RESOURCES
Stare into chaos long enough and you begin to see things.
Stare into chaos long enough and you begin to see things. Stare into static, or into clouds, or into the night sky filled with stars, and patterns begin to emerge. The patterns form and reform as the chaos folds and unfolds. Nobody would say that Orion is really there, drawing his bow in the heavens. No great bear really circumnavigates the pole each year. But we see them clearly, and the stars, clouds and static are as real as you and I.
What we are talking about here is the idea that patterns arise out of complex phenomena. It's a two-way street: from the substrate, the patterns emerge out of more basic media, and from above, something needs to recognize the patterns as this or that – an archer, a bear or a fluffy bunny. As the substrate moves, so does the pattern. When we see a storm front, we are recognizing a pattern in a complex mass of air and mist. As it approaches, it is not a thing that is approaching (though it really seems that way) but rather nothing more than a change in the underlying conditions.
Patterns
The Internet has created a galaxy of resources, a massive and complex ecosystem of people, devices, websites, resources, contents and services. Because of the interlinked nature of the web and its contents, it creates a chaotic network. We usually look at the web through the large end of the telescope, zeroing in on a single resource or a single web page, trying to understand what's happening everywhere by examining each one in turn. But we need to look through the small end of the telescope into the wider night sky.
What would we see?
Today, we would see a web dominated by publishers. We've talked about the major platforms above – sites like Google and Facebook and LinkedIn. Meanwhile the resources per se are dominated by content publishers.
We have books and eBooks published by the likes of Macmillan, Pearson, Wiley and McGraw-Hill. In academic publishing we have Reed-Elsevier, Springer, Taylor & Francis and Sage. In commercial media we have Disney, Comcast, Fox, Bertelsmann and Viacom. In Canada, we have Bell-Globe Media and Rogers.
Most content we read and share online is produced by, and owned by, a media company, and more often than not, one of the media giants.
It's a sky of giants, each one larger than we can imagine, interconnected and interlinked with each other, and supported by advertising networks, content distribution networks, sponsors and corporate interests, political parties, national media, and companies you may not even know, like Advance Publications (which owns the Cleveland Plain Dealer, Wired, Ars Technica and (part of) Reddit). Most content we read and share online is produced by, and owned by, a media company, and more often than not, one of the media giants.
Types of Resource Bases
Another way of looking at online resources and collections is to look at the types of resource bases that exist. In the field of learning and teaching there are the following types of resources:
- Curricula – governments and institutions maintain libraries of content and curricula developed for public education, for example, the Western and Northern Canadian Protocol and the Ontario Ministry of Education curriculum (see it here). There are also industry standards, such as the Association for Computing Machinery (ACM) K–12 Computer Science Standards.
- eBooks – each of the publishers listed above has an extensive eBook collection. In addition there are numerous collections of open eBooks, beginning with the original Project Gutenberg and continuing with the Personal E-Books for Learning (PEBL) project, which is developing an 'Internet of Books', and Open eBooks.
- Publication Databases – these range from commercial services such as Scopus and EBSCO to semi-commercial services such as Scribd, ResearchGate and Academia, to open access databases, such as Arxiv and the OER Knowledge Cloud.
- Courses – there are course libraries, such as the self-study materials at Alison or Khan Academy, MOOCs at Coursera, EdX and FutureLearn, video courses, audio courses, Contact North | Contact Nord's own studyonline.ca, and numerous other libraries offered by institutions large and small.
- Educational Resources – these include open educational resources (OERs) and community educational resources (CERs, which are available to specific learning communities) such as are found at OER Commons, MERLOT, and many other sites. The new eCampusOntario also features an OER library.
- Data – governments and many institutions offer raw data either through open access or by subscription. You can find numerous resources on the U.S. Department of Education's Project Open Data, for example. Ontario has limited open data resources (you can search through just over 2,000 sources here).
- Applications – all the major 'app stores' (Google, Apple and Microsoft) have educational applications, and many learning and development institutions have created their own apps (mostly to provide access to the other resources listed here).
...a massive and complex ecosystem of people, devices, websites, resources, contents and services.
The picture that should emerge from this short description is a galaxy of resources, a massive and complex ecosystem of people, devices, websites, resources, contents and services. The changes that emerge from this ecosystem are massive system-wide changes and can only be recognized when viewed at some distance from the source.
New Business Models
When viewed as a whole, these resources are becoming more widely available, available in multiple formats, cheaper, easier to find, and easier to use. Some categories (such as music and video, news media, and articles) are leading other more specialized categories. There are isolated exceptions and counter-currents: magazines demanding that we turn off our ad blocker (for example) or websites announcing the introduction of 'membership fees' (for example). But it would be a mistake to focus on these.
They won't be free. The average price won't be zero. But there will be many free resources, and the average price will be so low that attractive business models will remain elusive for publishers. The media companies listed above have a common struggle: it is getting harder and harder to make money selling media year over year (see the data here). Consumers are being pressured to view more advertising and to pay more for Internet access, and there has been considerable resistance to both.
Both internet content and internet access are in the process of becoming essentially unlimited.
Both Internet content and Internet access are in the process of becoming essentially unlimited. Absent an artificial bottleneck or perception of scarcity, the value of both is nearing zero (it will never reach zero, of course). Through the last two decades the 'two orders of magnitude' rule of thumb has been useful – something that cost $100 then should cost $1 now; something that cost $1 then should cost 1 cent now. Compare paying $5 for unlimited music streaming with $15 for one record album. Compare $7 a month for the family to watch unlimited movies with going to the cinema once a week.
The next few years will see the pressures on both sides increase. On the one hand, the price of media will be pushed down bit by bit. Educational institutions should continue to push vendors on price, because they have market momentum on their side (for example, by creating consortia). Do not sign long-term deals for content and avoid content bundles at anything like market prices. Meanwhile, continue to exert price pressure by producing and distributing open educational resources and open access publications.
Meanwhile, expect the major media companies to push back where they can. The pressure on governments to lower taxes and enforce market constraints (through copyrights, limitation of competition, and regulation of content) will not let up. Access providers will seek to bundle content with bandwidth, making demand for one to subsidize the other (though this would effectively split the Internet into competing content distribution networks).
Educational institutions face both sets of pressures, because they are at once major consumers and major producers of content (especially if we consider classroom instruction to be a type of content).
…expect the major media companies to push back where they can.
On the one hand, institutions are expected to lower their costs, especially for content – this is (despite much protestation) one of the major motivations for online and digital learning. These costs include content licensing, and we see institutions beginning to push back against publishers. And they also include teaching salaries, which is reflected in pressures on class sizes and (especially in higher education) the increasing use of low-paid adjunct or sessional lecturers.
Resources as a Service
On the other hand, there is the galaxy of resources to consider. We haven't yet begun to really tap into these resources to support and supplement learning, but we will. It's a long transition, but if you stand in the right place, you can see the galaxy moving.
In the last few decades (beginning in the 1970s, really) we've seen a broad shift in learning and pedagogy from content transmission to active learning. To use these terms loosely, what this means is that in learning there has been less of an emphasis on instruction, and more of an emphasis on practice and reflection. Volumes of books and papers have been written about this transition, for and against, but the movement has been steady if slow.
In the corporate world we talk about the transition from classroom instruction to performance support (here's a quick guide). This concept is useful for educators in general. The idea of performance support is that, on the job, a person has a task to perform (answer sales calls, repair some equipment, write a report). Increasingly, these tasks depend on more and more complex knowledge, and moreover, knowledge that changes on a constant basis. Instead of taking time off work to take a class, employees are looking for learning support right on the job, so they can acquire knowledge as they need it (examples from Regina Qu'Appelle Health Region, Coca-Cola, and Scotland Deanery).
This is where the resources come into play. You may have had the experience of having to repair some plumbing in the home and using YouTube to learn how to do that (learn the basics here). Or you may be installing a virtual server on Amazon Web Services and needed to consult Stack Overflow for help (available here). Or perhaps you need to purchase a crossover and need an independent review (such as at Car & Driver). These are all examples of resource databases supporting performance support.
Performance Support
It will take time for pedagogies and delivery systems to adapt to new models of performance support though we will begin to see the beginnings of this in the next few years.
It will take time for pedagogies and delivery systems to adapt to new models of performance support though we will begin to see the beginnings of this in the next few years. We'll see changes in two major areas:
How we find the resources – right now we have to search on Google or YouTube (but not Facebook, where search is terrible). There has been a lot of discussion about the discoverability of learning resources, which is usually taken to mean discovering them through search. This is one of the major motivations for tagging resources with metadata (as the Smithsonian well knows).
For many cases, though, search is not ideal. It takes time. You need to know what you want. There are too many resources to sort. This is why companies want to replace search with content recommendation. So we'll see more of things like Google Now (here's how to use it) and YouTube recommendations (which relies on advanced artificial intelligence). We'll talk more about analytics, recommendations and personalization below.
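The difference between the two approaches can be sketched in a few lines of Python; the resources and tags below are invented, and real recommenders use far richer signals, but the contrast is the same: search needs the learner to know what to ask for, while recommendation starts from what the learner has already used.

    # Invented catalogue: each resource is described by a set of tags.
    resources = {
        "Intro to fractions (video)": {"math", "fractions", "video"},
        "Fraction word problems":     {"math", "fractions", "practice"},
        "Photosynthesis explained":   {"biology", "video"},
    }

    def search(query_tags):
        # Return resources that match everything the learner asked for.
        return [title for title, tags in resources.items() if query_tags <= tags]

    def recommend(viewed_title):
        # Rank the remaining resources by how much they overlap with what was just used.
        viewed = resources[viewed_title]
        scored = sorted(
            ((len(tags & viewed) / len(tags | viewed), title)
             for title, tags in resources.items() if title != viewed_title),
            reverse=True,
        )
        return [title for score, title in scored if score > 0]

    print(search({"fractions"}))                    # the learner must know the right query
    print(recommend("Intro to fractions (video)"))  # the system suggests what to look at next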
Meanwhile, what companies are looking for is performance support. This is the idea that, at the place and time you need a resource, it is available to you. A simple example of performance support is a recipe on the side of a cereal box. An advanced example is an app that can scan the equipment that you're working on and provide up-to-date instructions. Here's a patent for such a system. Here's a description of different types of performance support.
The content of the resource is much less important than the willingness to use it.
But ultimately - and importantly - learning is not a discovery problem. Providing instructions will not help people who don't read instructions. Providing support will not help people who don't know how to learn. The content of the resource is much less important than the willingness to use it.
What kind of resources?
This is the more interesting question. The majority of resource providers continue to support the presentation mode. They see resources – including learning resources – as things people consume, like video or text. This content is useful, no doubt, but it represents only a small part of the potential of performance support. We can expand from documents to templates to assistive technology to specially designed cloud applications that scaffold performance.
It is here we should evaluate the idea that new learning media will include virtual reality, augmented reality, games and gamification, simulations, and related technologies that have been predicted with increasing frequency in the education and consumer technology press. How long have we been reading predictions like this: “technologies such as virtual reality allows students to experience the pyramids of Egypt through virtual reality headsets, from their classrooms?” The question is – after you “experience the Pyramids”, what then?
“The best 'content' is other people.”
To be effective, these technologies (and here I include games, simulations, virtual and augmented reality as a set) need to be interactive. Unless participants actually do something, they will be no more effective than television. Just as importantly, providers will have to combat the loneliness factor. “The best 'content' is other people. When people get together in online games they may fight dragons or shoot lasers — but they are being entertained primarily by the other players.” This is as true in education as in gaming. Finally, what we do in these environments must be real. It must actually matter.
Just as we need to reframe our understanding of learning from content transmission to active learning, we need to reframe our understanding of learning resources from content to be consumed to tools and materials that enable a person to assemble, fabricate and design. When working with resources, the question to ask is not “what did you learn from this” but rather “what can you do with this.” A text isn't something to read but is rather something to fact-check. A video isn't something to watch but rather something to edit.
It doesn't matter what it is. The content will matter less and less. Being able to draw from and feed into the galaxy of resources becomes the new literacy, and the tools that support and enable this are the new printing presses.
WHO SPEAKS FOR US?
Change is not about what drives us but about what attracts us. We read about how technology drives change, how economics drive change, or how demographics drive change, and there are deep analyses into the consequences of these. We are advised to talk about what we need instead of what we want (for example, here). But it is what we want, in the end, that moves us forward.
Learning Analytics
Probably the most meaningful change in the learning technology market over the last five years has been the emergence of learning analytics. It has been significant enough that any number of futurists still have learning analytics on their radar screen. Learning analytics is “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.”
The nature of the data and what we mean by 'optimizing learning' are clearly significant here.
Notice how the introduction of analytics here is presented in terms of drivers. Google's Marissa Mayer suggests that there are three major factors contributing to the rise of analytics: speed, scale and sensors. Data comes at us a lot more rapidly than it used to, in much greater quantities, and from different and new types of sources. As George Siemens and Phil Long say (in EDUCAUSE), “these three elements create a situation in which existing data-management and decision-making approaches simply are not feasible.”
Just as is the case for learning management systems themselves, the model of learning analytics assumes that the goals of the institution would be the same after the change as before, as though the attractors remain unchanged. Here's Siemens and Long again (from six years ago!) describing the major elements of learning analytics:
- Course-level: learning trails, social network analysis, discourse analysis
- Educational data-mining: predictive modeling, clustering, pattern mining
- Intelligent curriculum: the development of semantically defined curricular resources
- Adaptive content: adaptive sequence of content based on learner behaviour, recommender systems
- Adaptive learning: the adaptive learning process (social interactions, learning activity, learner support, not only content)
Let's look at each of these five elements in turn:
Social network and discourse analysis involve tracking and analyzing patterns of interaction in online networks, including the path people take through learning materials (example), the networks they create with each other, and how they talk to each other (example).
The first step here is to help us visualize what's happening (for example, Meerkat-Ed (based on Meerkat)). We can often see significant patterns in the data for ourselves, for example, when people stop progressing after a certain assignment, or when a student hasn't formed any study groups or networks. These dashboards will become much more common over the next five years. More recent work is combining social network analysis with data modeling (for example, studying interactions to identify problem-solving techniques).
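To make this concrete, here is a minimal sketch of the kind of interaction analysis such dashboards build on, using the networkx library and invented reply data: build a graph of who replies to whom in a discussion forum, rank students by how connected they are, and flag those with no interactions at all.

```python
# A minimal sketch of social network analysis on forum interactions.
# The reply data is invented; a real system would pull it from an LMS.
import networkx as nx

replies = [               # (student who replied, student replied to)
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("dan", "bob"), ("alice", "carol"),
]
students = {"alice", "bob", "carol", "dan", "erin"}

G = nx.DiGraph()
G.add_nodes_from(students)
G.add_edges_from(replies)

# Degree centrality: roughly, how connected each student is.
centrality = nx.degree_centrality(G)

# Flag students with no recorded interactions -- the kind of signal
# an instructor dashboard might surface.
isolated = [s for s in students if G.degree(s) == 0]

print(sorted(centrality.items(), key=lambda kv: -kv[1]))
print("No recorded interactions:", isolated)
```

Even this toy version surfaces the two patterns mentioned above: who sits at the centre of the network, and who has not formed any connections.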
But why should these analyses begin and end with students? In a learning management system, perhaps, it's hard to do anything else. But in the future, educators will want to be able to see how already successful experts and professionals organize and manage themselves. How does a master plumber solve a problem, as compared to a novice? Experts don't just have more knowledge, they see things differently. How will this inform how we analyze and inform student learning? We don't know yet, nor will we know in five years, but the research is already underway.
Predictive modeling, clustering and pattern mining are attempts to take the next step and to use the data to make predictions. Proponents are able to assemble an impressive array of case studies (found here) demonstrating how these tools improve student success. This technology has found its way into learning management systems (such as D2L's Brightspace).
There are two basic approaches here: those that are based on explicit data models (a.k.a. supervised), and those that allow the algorithms to create their own models (unsupervised). In recent years, unsupervised learning has come to be associated with deep learning (systems with more layers between inputs and outputs allowing them to create their own models), and this is where a lot of the research has focused.
The difference, practically, is that predictions in supervised learning will be in terms we already understand (“if a person doesn't go to class they will likely fail”), while predictions in deep learning will be opaque (“people in cluster F are 86% more likely to fail”). This creates an onus on the designers of deep learning algorithms to explain their predictions; otherwise it's difficult to suggest practical remedies. This will be a source of continuing debate over the coming years (see, for example, p. 89).
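As a rough illustration of the interpretable end of that spectrum, here is a minimal sketch of a supervised model whose predictions come in terms we can read directly; the feature names, numbers and threshold are invented for the example, not drawn from any real dataset.

```python
# A minimal sketch of the supervised, interpretable end of the spectrum:
# predict failure risk from simple features. All data here is made up.
from sklearn.linear_model import LogisticRegression

# Features per student: [classes_missed, assignments_submitted]
X = [[0, 10], [1, 9], [2, 8], [8, 3], [10, 1], [7, 4], [3, 7], [9, 2]]
y = [0, 0, 0, 1, 1, 1, 0, 1]   # 1 = failed the course

model = LogisticRegression().fit(X, y)

# The coefficients are readable: a positive weight on classes_missed
# says "missing class raises the predicted risk of failure" -- a claim
# an instructor can act on. A deep model offers no such direct reading.
print(dict(zip(["classes_missed", "assignments_submitted"], model.coef_[0])))
print("Risk for a student who missed 6 classes and submitted 5 assignments:",
      model.predict_proba([[6, 5]])[0][1])
```

A deep network trained on the same records might predict just as well or better, but its output would be a cluster membership or a risk score without this kind of human-readable rationale.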
Semantically defined curricular resources are intended to group and organize learning objectives and learning resources by the topics they cover. This concept was first expressed in the idea of learning objects, whereby related resources could be clustered to form (and reform) course packages. Mapping these resources is an onerous task and tools are still lacking, leading to (for example) social and community-based curriculum mapping exercises, such as the Aachen Catalogue of Learning Objectives (ACLO).
It has long been the objective of educational technology to perform this task automatically, and there is a rich literature under the heading of 'automated metadata'. But the objective of semantically-defined learning resources runs deeper. Proponents envision a linking of all elements in the learning process according to their semantic properties.
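To give a sense of what such a description might contain, here is an illustrative sketch of a metadata record for a single resource; the field names are hypothetical, loosely inspired by learning object metadata categories rather than drawn from any specific schema, and the URL is a placeholder.

```python
# An illustrative (not standard-conformant) learning resource metadata record.
# Field names are hypothetical, loosely modeled on learning object metadata
# categories; a real schema would be far larger and more formally defined.
resource_metadata = {
    "general": {
        "title": "Introduction to Fractions",
        "language": "en",
        "keywords": ["mathematics", "fractions", "grade 4"],
    },
    "educational": {
        "intended_audience": "learner",
        "typical_age_range": "9-10",
        "learning_objective": "compare fractions with unlike denominators",
    },
    "technical": {
        "format": "text/html",
        "location": "https://example.org/resources/fractions-intro",
    },
}
```

Filling in even this small a record for thousands of resources, consistently and by hand, is exactly the burden described below.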
This was all envisioned in the 1990s (and arguably goes back to Bloom's taxonomy) with the first learning object metadata frameworks. The fact that, two decades later, this still lies in the future attests to the difficulty of constructing such a system, and suggests where learning analytics may play a role. Creating a semantic description of a resource is a difficult and possibly impossible task.
First, it takes so much time and human resources that people mostly just don't do it. This is what happened to learning object metadata. Norm Friesen, for example, found that only a small number of core metadata elements are actually used. Here's a summary and links to other reports. And second, people interpret semantic elements differently depending on context. Just this year we have another survey (available here) pointing to tens of thousands of mistakes in metadata.
An algorithmic approach, using artificial intelligence, may be the only viable approach. But this entails far more than simple string-matching. Extracting the semantics of a resource entails looking at context, it entails comprehending visual patterns in images or videos, it entails tracking how a resource is actually used, and more.
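By way of illustration only, here is a minimal sketch of one narrow slice of automated metadata: pulling candidate keywords out of invented resource text with TF-IDF. As the paragraph above notes, real semantic extraction needs far more than term statistics, so treat this as a sketch of the simplest possible starting point.

```python
# A minimal sketch of one narrow slice of 'automated metadata':
# extracting candidate keywords from resource text with TF-IDF.
# Real semantic extraction, as noted above, needs far more than term counts.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = {   # invented learning resources
    "fractions": "Fractions name parts of a whole. Compare fractions "
                 "with unlike denominators using equivalent fractions.",
    "photosynthesis": "Photosynthesis converts light energy into chemical "
                      "energy. Plants use chlorophyll to absorb light.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents.values())
terms = vectorizer.get_feature_names_out()

for name, row in zip(documents, matrix.toarray()):
    # Keep the three highest-weighted terms as candidate keywords.
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(name, "->", [t for _, t in top])
```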
Adaptive content describes the capacity of a learning resource to present different information based on the learning context. When people talk of 'adaptive content' they generally refer to mechanisms of selecting and presenting resources in varying sequences depending on circumstances or learner characteristics. Again, this idea has been around for a long time; here's a presentation I gave in 2004 referring back to things like Firefly (1996-1998) and launch.com (1999-2001). Services we see today, like Netflix or Google Music, use very similar mechanisms.
These systems were all based on what we now call collaborative filtering (described here, for example). If you need a selection of resources (a music playlist, say), the system looks at what people like you thought of resources like this at this time of day or in these circumstances (“your workout music is front and centre as you walk into the gym”). This can be a massive calculation and doesn't work at all without user feedback or tracking, so different algorithms look at different ways to make the process more efficient through network analysis, data-mining and resource semantics.
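Here is a minimal sketch of that core idea, with invented ratings: find the user whose rating vector looks most like yours, then recommend what they rated highly that you have not yet seen.

```python
# A minimal sketch of user-based collaborative filtering on invented ratings.
# Rows are users, columns are resources; 0 means "not rated".
import numpy as np

resources = ["video_a", "quiz_b", "article_c", "sim_d"]
ratings = np.array([
    [5, 4, 2, 0],   # alice
    [4, 5, 1, 5],   # bob   (tastes much like alice)
    [1, 0, 5, 4],   # carol
])
users = ["alice", "bob", "carol"]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = 0  # recommend for alice
# Find the most similar other user by cosine similarity of rating vectors.
similarities = [(cosine(ratings[target], ratings[i]), i)
                for i in range(len(users)) if i != target]
_, neighbour = max(similarities)

# Recommend items the neighbour rated highly that alice hasn't rated.
suggestions = [resources[j] for j in range(len(resources))
               if ratings[target, j] == 0 and ratings[neighbour, j] >= 4]
print("Recommend to alice:", suggestions)
```

Real systems weight across many neighbours, cope with very sparse data, and fold in context such as time and place, but the underlying move is this same comparison of rating vectors.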
Recommender systems, even sophisticated systems, have difficulty getting beyond broad stereotypes. For me, for example, when it launched, Google Music's recommender was useless; it looked at my age (58) and fed me a steady diet of 1970s rock. Recommenders can also jump to conclusions; I made the mistake recently of looking at a bunch of 'photo of the day' posts on Medium, and now its “daily three” consists of nothing but “photo of the day” pages from the past. This can have more ominous consequences, as for example when analytics engines embody damaging or hateful stereotypes. We saw how easily this can happen when Microsoft's chatbot 'Tay' released a slew of racist and sexist tweets.
Recommender systems also fall prey to what has become known as the filter bubble. As first described by Eli Pariser in 2011, this is a phenomenon whereby your recommendation engine narrows your field of vision to only those sources and opinions you already agree with. “The danger of these filters is that you think you are getting a representative view of the world and you are really, really not, and you don't know it,” he explains. The filter bubble arguably had a major impact on the most recent U.S. election. And it impacts learning, making it more difficult to see alternative perspectives and making it harder to learn critical literacy.
Adaptive learning, finally, is what many writers are now calling 'personalized learning'. We can think of it as content recommendation systems being employed to inform all aspects of learning: social interactions, learning activity, learner support, and more. The key is that adaptive learning systems respond in real time to student performance. An adaptive learning system will attempt to maximize characteristics such as flow. If the learning activity is too easy, it doesn't teach, and if it's too hard, it doesn't teach. Just like a good computer game, learning technology wants to hit a sweet spot that is both challenging and rewarding.
First defined by psychologist Mihaly Csikszentmihalyi in the 1970s, 'flow' has four characteristics that also define adaptive learning (quoted from Baron):
- Have concrete goals with manageable rules.
- Demand actions to achieve goals that fit within the person's capabilities.
- Have clear and timely feedback on performance and goal accomplishment.
- Diminish extraneous distraction, thus facilitating concentration.
These principles inform a lot of contemporary work on pedagogy, ranging from cognitive load theory to game-based learning design to immersive learning theory.
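To illustrate the 'sweet spot' idea, here is a toy sketch of an adaptive loop that nudges item difficulty up when the learner succeeds too easily and down when they struggle, keeping the recent success rate near a target band; the update rule and all the numbers are invented for illustration, not drawn from any real adaptive learning product.

```python
# A toy sketch of keeping a learner in the 'sweet spot': adjust item
# difficulty so the recent success rate stays near a target band.
# The learner model, update rule and constants are invented.
import random

def run_session(learner_skill=0.6, items=30, target=0.7, step=0.05):
    difficulty = 0.5
    history = []
    for _ in range(items):
        # Chance of success falls as difficulty exceeds skill (toy model).
        p_success = max(0.05, min(0.95, 0.5 + (learner_skill - difficulty)))
        success = random.random() < p_success
        history.append(success)

        recent = history[-5:]
        rate = sum(recent) / len(recent)
        if rate > target:          # too easy: raise the challenge
            difficulty += step
        elif rate < target - 0.2:  # too hard: ease off
            difficulty -= step
        difficulty = max(0.0, min(1.0, difficulty))
    return difficulty, sum(history) / len(history)

random.seed(1)
final_difficulty, overall_rate = run_session()
print(f"final difficulty {final_difficulty:.2f}, success rate {overall_rate:.2f}")
```

A real adaptive learning system models the learner far more richly than a single skill number, but the feedback loop between performance and challenge is the essential mechanism.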
The Limits of Analytics
What is perhaps the most stunning revelation in our study of learning analytics is how low its aspirations are. Compared with the range of possibilities and ambitions realizable in education today, how tame the ambitions of things like discourse analysis and predictive modelling seem to be. It is as though, at best, the conception of learning analytics does no more than support and enhance the traditional model.
Deploying adaptive learning as instructional technology is an order of magnitude more difficult. Beware the vendors! What is presented as adaptive learning is often in fact no deeper than content recommendation and learning sequence design. Knewton, for example, positions itself as an adaptive learning system. But beyond recommending content, what does it do? There are some key questions (as posed, for example, by Monica Bulger) that need to be asked:
- Does the software actually learn about the student, or is it merely responsive, using a decision tree or set of rules to make recommendations?
- How good is the content, and how wide is the selection of content resources being drawn from?
- What impact does the technology have on social interactions? Does it help a student learn how to learn from other people?
- What is the evidence that differentiated learning improves outcomes (compare, for example, the claims made by adaptive learning with the debate questioning the effectiveness of learning styles)?
- What is the evidence that data-driven instruction improves outcomes?
- What are the implications of personalized learning on personal privacy and data security?
- Finally, what do personalized learning systems optimize for?
This last of Bulger's questions may be the most important of all. Adaptive learning is teleological, or goal-directed. It defines certain outcomes that the system is intended to fulfill. But how are these outcomes defined? Is learning simply the acquisition of a skill or the remembering of some piece of knowledge?
It is easy to identify the drivers forcing changes in learning technology, and especially learning management systems. It looks like we are being pushed toward learning analytics. But we are being pushed in no particular direction at all. Learning analytics is just the tool that is getting us there.
Change in a chaotic system is not defined by drivers. And if we do not address the question of where we want to go – of what our attractors are – we will find ourselves headed in some direction by default. What might that look like? It might look like learning systems that reinforce stereotypes, that generate knowing but uncritical students, that reduce knowledge and learning to data rather than society, or to any number of unforeseen and possibly undesirable consequences.
Responding to Change
In the year 2000 Tony Bates published his influential book Managing Technological Change (buy it, read a review). It was at once a handbook for administrators responding to the drivers and imperatives of the millennial educational institution, and a guiding beacon pointing the way to the preservation of institutional values as they navigated the turbulence.
Most of all, though, it was about responding to change rather than creating it. The discussion of operating costs and revenues, the description of post-Fordist organization, even the assessments of quality in online learning: all these point to what administrators should do in the face of technological and social pressures, and not to why they should be doing them. This is assumed; the goals of the institution would be the same after the change as before.
The history of the LMS (diagrammed here) mirrors this. The early systems like Blackboard and WebCT were developed to support college and university instruction and in so doing replicated institutional structures and goals. They were organized into courses and lessons, students were led through learning materials, and the objective was to complete study and succeed at a test or assessment. The evolution of corporate human resources, learning and talent management systems is the same (and is diagrammed here).
We discussed the SAMR model previously. What we are seeing here is the creation of the first step or two of this model. Drivers push change, but they do not push in any particular direction, so the default is to keep doing what we have always done. If we look for reasons to justify the change, the change itself becomes the justification: we are responding to technology because technology is changing. Or the university may be facing economic pressures, and we may raise tuition, cut staff or even departments (what Bryan Alexander calls 'the Queen sacrifice'), but the justification always lies in the economic pressures themselves.
This is changing, and this change may be one of the most significant in the field of learning technology over the next five years. The bottom is dropping out of the LMS market (other metaphors: the LMS market is melting, it is in steep decline, the market is shifting, the market is losing ground, companies miss expectations or are withdrawing, or DOA). It is no longer enough to provide course materials, lead people through them, and then test them on the other side. What's key is that this is as true for institutions as it is for technology vendors.
As a trend, this should be understood in its proper proportions. It would be premature to predict that learning management systems, or the companies that produce them, will disappear over the next few years. Contracts exist and are still being signed, customers exist and still look to the LMS, and so the outlook is for contraction and consolidation, not extinction. SuccessFactors just launched in my office, for example. But we're also seeing them losing business (such as Wal-Mart) to next-generation vendors like Workday.
Who Speaks for Us?
In his powerful closing chapter to Cosmos, Carl Sagan asked the question, "Who speaks for Earth?" We could now equally well apply the same question to learning, education and analytics. How do we embody this in our learning and design: "compassion for others, love for our children, a desire to learn from history and experience, and a great, soaring passionate intelligence -- the clear tools for our continued survival and prosperity"? Who speaks for what our students will want, need and have in the future?
In this essay we looked not only at education and technology, but also at the processes of progress and change in society (and in ourselves). We studied how we perceive change, how we are impacted by change, and how we create change. We learned that, once we change, we can't go back, and that the great cycles of change that define us sometimes stop. We learned that in the complex future we face, change becomes less about cause and effect and more about networking and communication. Which is why the question "who speaks for us?" is so important.
The topic of educational technology is often discussed in the same context as educational reform, where the tools of analytics and learning management are used to rationalize and reorganize the educational system. It is often discussed in the context of privatization, of the development of educational marketplaces, of business imperatives and sustainability, and the imperative of personal responsibility. These are often the logics of educational technology, and indeed may be supported by many readers of this essay, but they need not be.
We can have this society that we want, a society where each person is able to rise to his or her fullest potential, where they may express themselves fully and without reservation through art, writing, athletics, invention, or even through their avocations or lifestyle, where they are able to form networks of meaningful and rewarding relationships with their peers, with their mentors, and with their students and children.
But we have to say, of this vision or any other, this is what we want, this thing, which is not defined by what was, by what is counted, by what people say must be, but is defined instead by the same inner voice that speaks to us all, and tells us to imagine, and tells us it will be amazing, if we would only listen.