What's AI's impact on synchronous online learning?

Although many other developments attract far more attention, one of the notable edtech changes in higher education in recent years is the greater use of video conferencing technology.

Many HEIs had this technology in their edtech suites before the pandemic, but usage for learning and teaching was minimal and largely for online distance learning programmes with a synchronous element.

During the pandemic, many people began to use these technologies in teaching and learning for the very first time and have continued to do so. Arguably this is one of the most significant consequences of that period, yet it garners very little attention.

What has undoubtedly garnered attention, arguably gobbling up most of the bandwidth in discussions of higher education learning and teaching, is AI. Much of the focus has been on generative AI, but it’s important to note that AI has permeated a range of education technologies in less obvious ways, and did so well before the last 9-12 months of AI intensity. It goes without saying that it will continue to do so, in both obvious and less obvious ways.

AI’s current and potential future impact on education continues to be hugely debated. I don’t want to wade into that bigger debate here, but rather to explore the impact it is having on synchronous online learning, through the primary vehicle for that: video conferencing technologies.

For those of us working in edtech, online learning and learning & teaching, it’s worth reflecting on developments here and the merits, or otherwise, of AI’s increasing incorporation. There’s much to consider, but here I’ll cover four main impacts.

1. Video and audio quality

One of the key ways AI is being used in video conferencing technologies is to improve video and audio quality, through things like suppressing background noise, enhancing speech and improving video compression. This isn’t a use of AI that gets a whole lot of attention, but it is a notable impact.
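The products themselves use trained neural models for this, but the underlying idea of estimating a noise profile and attenuating it can be illustrated with a much older technique, spectral gating. The sketch below is a toy illustration only, with names and parameters of my own invention, not how any vendor’s suppression actually works:

```python
import numpy as np

def spectral_gate(signal, noise_sample, attenuation=0.1):
    """Crude noise suppression: estimate the noise floor per
    frequency bin from a noise-only sample, then attenuate any
    bin of the signal that doesn't rise clearly above it."""
    spectrum = np.fft.rfft(signal)
    noise_floor = np.abs(np.fft.rfft(noise_sample, n=len(signal)))
    mask = np.where(np.abs(spectrum) > 2 * noise_floor, 1.0, attenuation)
    return np.fft.irfft(spectrum * mask, n=len(signal))

# Synthetic demo: a steady 440 Hz "voice" buried in white noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * rng.standard_normal(8000)
cleaned = spectral_gate(tone + noise, noise)
```

Modern AI-based suppression learns what speech looks like rather than relying on a fixed noise estimate, which is why it copes with babble, keyboards and barking dogs in a way a fixed gate never could.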

Audio and video quality matters for learning, most obviously for the intelligibility and comprehension of what is being communicated. This is felt more acutely when people are communicating in a language that you as a student are less familiar with or less confident in understanding.

But poor audio quality might hamper a learning experience in other ways too. A few years ago an interesting study was conducted on the impact of audio quality on the perceived trustworthiness and credibility of research being disseminated.

What the researchers found was that “people evaluated the research and researcher less favorably when the audio quality was low”. It doesn’t seem unreasonable to suggest there is at least a potential for similar negative perceptions around credibility in an educational setting.

Some studies have also shown poor audio and video quality to be fatiguing and to increase listening effort. Ultimately, learning shouldn’t be a frictionless process, but we want the friction to be productive, and to minimise unnecessary friction that simply makes it harder for everyone to participate in and access an educational experience.

2. Recaps, reviews, revisiting and revision

An interesting recent development is personalised AI-generated meeting summaries. This feature forms a component of “Intelligent recap” in Microsoft Teams and of Zoom IQ.
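How these summaries are produced is a black box to the user, but it’s worth appreciating the distance between today’s LLM-based summarisation and what came before. As a deliberately naive point of comparison, a toy rather than anything resembling how Intelligent recap or Zoom IQ work, here is a classic frequency-based extractive summariser:

```python
import re
from collections import Counter

def extractive_summary(transcript: str, n_sentences: int = 2) -> str:
    """Toy extractive summary: keep the sentences whose words are
    most frequent across the whole transcript, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))

    def score(s):
        words = re.findall(r"[a-z']+", s.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:n_sentences])
    return " ".join(sentences[i] for i in keep)
```

An extractive approach can only quote the transcript back at you; the generative approaches in these products can rephrase, condense and personalise, which is exactly what makes them both more useful and more debatable as learning tools.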

This feature inevitably has some implications for learning and teaching, but are all of them positive? Well, AI-generated summaries of online synchronous learning & teaching activities might in some senses be considered analogous to lecture capture. Both (if we put lecture attendance to one side!) are supplemental and provide a basis for recapping, reviewing, revisiting and revision.

However, it’s very debatable whether providing those things in and of itself inherently creates better conditions for learning. It still comes down to what a learner does with them, and too often edtech developments and features feel driven by a desire to increase convenience or efficiency.

In his opinion piece The Tyranny of Convenience, Tim Wu notes that

“convenience — that is, more efficient and easier ways of doing personal tasks — has emerged as perhaps the most powerful force shaping our individual lives and our economies”

Whilst in commerce or other sectors convenience might be more desirable, in learning there’s a more sophisticated relationship between ease and difficulty. Toning down difficulty through features predicated on increasing convenience may actually diminish their value in relation to learning.

This is a common tension with tech products that span business and education use: features borne primarily of improving business efficiency are then transposed to education.

This feature might offer benefits, such as summarising points more clearly and succinctly in a way that better supports comprehension and retention. However, it’s not going to imbue students with the kind of self-regulation skills so critical to independent study.

Nor will it necessarily address the issue of students often choosing ineffective study strategies: these summaries can still just be re-read and highlighted, or used for pre-assessment cramming.

A useful direction for these AI-generated recaps would be translation into formats that support self-testing and retrieval practice, or that lead to other effective study strategies.
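In its simplest possible form, that translation could be as little as turning each summary sentence into a fill-in-the-blank prompt. The sketch below is a crude stand-in of my own: it blanks the longest word in each sentence, where a real system would need a model to identify the genuinely key term:

```python
import re

def cloze_cards(summary: str) -> list[tuple[str, str]]:
    """Turn each sentence of a summary into a (prompt, answer)
    pair by blanking out its longest word -- a crude stand-in
    for identifying the key term to test."""
    cards = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = re.findall(r"[A-Za-z]{5,}", sentence)
        if not words:
            continue  # nothing substantial enough to blank out
        answer = max(words, key=len)
        prompt = sentence.replace(answer, "_____", 1)
        cards.append((prompt, answer))
    return cards
```

Delivered back to students as spaced prompts rather than a document to re-read, the same underlying content nudges towards retrieval practice instead of passive review.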

It’s also worth adding that AI-generated summaries aren’t the only component of this feature set; Intelligent recap in MS Teams, for example, also includes AI-generated chapters and personalised time markers.

It’s hard to argue that these things aren’t going to be useful to some people in some contexts because, as Aurora Hartley at Nielsen Norman Group highlighted:

“video is mostly an inefficient, sequential-access medium for transmitting information: people cannot easily choose which frames to attend to”

These AI-powered features arguably help to mitigate this somewhat.

3. Displays and ‘dissolving the screen’

One development many are familiar with is how AI is allowing for more options and sophistication in displays.

Whilst we might most strongly associate AI-enabled virtual backgrounds with more whimsical applications, possibilities such as using slides as a virtual background, with the speaker appearing directly in front of them, are an interesting development for education.

Greater possibilities need to be navigated with care, and this is an area in which consulting Richard Mayer’s large body of work on multimedia learning can provide wisdom on how to utilise such features.

Being able to appear alongside slides as a virtual background certainly intersects with Mayer’s embodiment principle. Mayer explains the rationale for the embodiment principle like this:

“A high-embodiment onscreen instructor can serve as a positive social cue that primes a sense of social partnership in the learner, causing the learner to try harder to understand the instructional message and thereby learn more deeply”

…and in other work highlights how gestures when speaking might, to use Mayer’s terminology, “foster generative processing”.

Greater options do, however, bring a greater risk of overload and overstimulation, which can be counterproductive.

Broadly speaking, though, I think we should welcome the greater sophistication of display that AI is enabling in video conferencing technologies.

Other features in this vein include Microsoft Teams’ Together mode, which places participants alongside each other in one shared space. Whilst it still feels in need of refinement, it’s an interesting attempt to create a greater sense of togetherness and connectedness online.

Doug Lemov talked about a similar idea in relation to primary and secondary education, framing it as “dissolving the screen”. The premise is how to connect with students and build relationships from afar. Although “dissolving the screen” is much more about teachers’ actual behaviours and interactions online, displays play a role.

A facet of this idea is “dissolving slides”: purposefully switching between a slide display and a student multi-picture display to support the different activities within a session.

Whether through AI or not, it would be good to see developments that make switching between displays and sharing easier, along with innovations in student displays that might in some small way help support interaction.

4. The rise of the AI assistants

Recently Class Technologies announced that they are introducing an AI teaching assistant feature to their video conferencing platform. The assistant will be able to field questions, provide descriptions of highlighted portions of spoken transcripts and, with a nod to my second point, create a study guide: essentially supplementary material created during or after a session.

Whilst Class Technologies is one of the specifically education-focussed video conferencing solutions, there are other assistants, such as Copilot in Teams and OtterPilot.

It’s inevitable that we’ll see more and more AI assistants in both synchronous and asynchronous online learning technologies. There have already been a raft of announcements of these being integrated into the products of leading higher education edtech companies, such as Canvas.

Given the benefits sometimes derived from having two people facilitate synchronous online sessions, an AI-powered assistant may help educators manage an array of student responses and interactions in a more timely way. This, coupled with features like the seating charts used by Class Technologies, potentially goes some way towards supporting the facilitation of synchronous sessions, particularly as the student-educator ratio gets higher.

It’s worth remembering that there have already been notable examples of AI assistants being used in higher education, such as the one developed by Ashok Goel and his team at the Georgia Institute of Technology in the US.

There are also technologies, such as Packback, that use AI assistants to help refine students’ responses to discussion and to moderate them.

This is going to be an area of significance in terms of educational technology in the years to come and one in which we would benefit from an increasing amount of research as it pertains to synchronous online learning.

Keeping up with innovations in synchronous online learning

The increasing use of video conferencing technology in general, along with the growing number of online distance courses being developed, means the possibilities offered by an expanding array of AI-enabled features and developments are worth noting.

In general, video conferencing platforms have centred on business use, but education is increasingly coming into view, manifested in companies and products like Class Technologies, Engageli and Minerva Forum.

HEIs that use video conferencing technologies extensively would be wise to keep abreast of product developments, and of the new products emerging, that may support innovation and different approaches to synchronous online learning.

Amongst edtech products that really matter, this space is arguably one of the most interesting, as both AI and increasing use fuel a growing stream of developments and feature additions.