Are AI-driven EdTech developments improving learning in higher education?

EdTech product development is a perennial activity, but there’s no doubt that AI has accelerated this recently and brought much greater dynamism to the product development space.

AI features are reshaping established EdTech products used by universities, while a wide array of consumer AI technologies are being adopted personally by students and staff alike. Like it or not, AI developments are having a significant influence on teaching and learning in higher education through both EdTech and consumer technology.

The maelstrom that artificial intelligence has created can make it difficult to form a clear, top-level view of how it is shaping EdTech and whether AI-driven product developments are likely to have a significant impact on learning. It’s that last question that I want to explore here. In the past few years, companies have had to define their AI strategies and decide how AI-driven features are incorporated into their products in ways that, they hope, differentiate them and deliver greater benefits to universities, students and other users. At the same time, major AI companies are seeking to position their products towards education and universities.

Developments span the spectrum of teaching and learning activity, with differing rationales and areas of focus. It’s still early days, but this is a view on some of what we’ve seen so far, and whether these developments are really likely to have a significant impact on learning.

Generative AI in course design and setup

Inevitably, a significant amount of focus has been on the generative aspect of AI, which aligns with the initial work of designing, creating and setting up formal courses of higher education study. Fundamentally, this work is about thinking, ideating, generating, curating, structuring, producing and constructing. In higher education, it combines the creation of formal elements such as learning outcomes and assessments with the development of learning materials, content, and teaching activities.

There is clear utility in using generative AI to support this process, and it is a workflow EdTech companies have sought to develop features for. Perhaps one of the most clearly aligned EdTech tools here is TeacherMatic, which offers a range of what it calls “AI generators” for teachers. These include module descriptions, slide decks, glossaries, FAQs and learning objectives.

The value proposition of many of these features is efficiency in generating, editing and curating the foundational assets of a course, such as descriptions, learning outcomes, syllabuses, summative assignment briefs, reading materials and learning content. A focus on tools that help teachers generate material is all well and good, but a key question is whether a dedicated generator delivers anything so distinctive that it sets it apart from a general-purpose generative AI tool. The latter invariably offers greater agency to define and converse about what gets edited or generated.

One of the issues with EdTech and AI product development at present is the tendency to focus too heavily on productivity hacks rather than elevating teaching and learning. This risks limiting the potential value considerably. What would arguably be more valuable than a feature that simply generates and edits key course materials would be those that more directly seek to elevate and positively influence the experience and outcomes of students. That entails not just a high-level identification of practical challenges, but also engagement with the more complex psycho-social dimensions of higher education learning.

So, rather than generating assets, how might generative AI be used to create more accessible, readable versions of course materials in plain, understandable language? How might these be remixed to highlight the value of achieving learning outcomes and undertaking activities, both in terms of developing what Wiggins and McTighe called “enduring understandings” and in their more instrumental, professionally oriented post-study value? How might content be made more relatable to common student interests, goals and motivations? In their book How Learning Works, Ambrose et al. highlight the importance of students’ efficacy expectations in driving motivation, one aspect of which is confidence: confidence that the tasks and activities they are asked to undertake are valid in helping them reach their goals.

The potential of AI lies not simply in accelerating the mechanics of course design, but in elevating it and shaping environments that make meaningful student experiences more likely. Yet I am not convinced that EdTech-specific tools for design and creation currently offer major advantages over a general-purpose generative AI tool equipped with a solid foundation of knowledge, experience and well-informed heuristics on designing effective learning experiences.

AI for virtual learning environment (VLE) setup and consistency

While I think the promise of AI lies more in elevating design than in churning out assets, the more practical layer is not irrelevant. In higher education, one of the practical aspects of course setup involves the virtual learning environment (VLE). If you teach a module at a university, you will have a module course site or shell that needs to be structured, designed and populated with course materials, content and activities.

VLE providers and other companies have been seeking to leverage AI to support this practical work. Blackboard, from Anthology, has an auto-generate module feature within its AI Design Assistant, but don’t be misled by the terminology: this generates content containers within a module site or page. There is also Coursemagic, which positions itself as the course builder for any VLE and integrates with them to offer similar functionality, such as creating course structures and module plans.

However, it strikes me that no one is seriously looking at AI as a means of tackling a major, long-standing issue, which is unnecessary inconsistency in the user experience across module sites or shells. This is one of the main issues raised by students in VLE reviews. Students largely want academics to use the VLE in a predictable way where divergence is not necessary.

Product developments that either directly enable or nudge people towards consistency, based on solid UX heuristics for the benefit of students, are arguably more valuable than a proliferation of tools for creating content within module shells. A bold step for companies, though one that would probably be a tricky sell in universities, would be an agentic AI approach that crawls course shells and flags or addresses issues relating to inconsistency and baseline quality in the student platform experience. To a certain extent, a tool like Blackboard Ally is a precursor to this in respect of accessibility, and an extension of that idea could also focus on usability.

AI and the future of discussion forums

Beyond the fundamentals of course design and formation, there have been significant product developments centred around teaching and learning activities. For VLE companies, this has meant enhancements to existing, well-used functionality such as discussion forums and quizzes.

Developments in discussion forums include discussion question ideation, incorporated into D2L Lumi Ideas, and discussion summaries for educators in Canvas. The latter is more of an educator efficiency feature, and while support in developing good discussion questions is valuable, there is still a long way for VLE companies to go in addressing the challenges of making discussion forums rich contributors to student learning.

Other technologies are, and have been, using AI to support discussion activities. Packback Discussions is an EdTech tool that aims to tackle some of these challenges and has existed well before the generative AI boom. FeedbackFruits also offers an AI Discussions Coach that supports students in improving the quality of their posts and encouraging interaction.

The perennial challenge of discussion forums is encouraging student participation and interaction. The fact is, there is no simple step-by-step recipe that guarantees success in every scenario. A perceived lack of value in engaging with these activities, and in engaging with fellow students rather than only with the educator, is a fundamental issue, one that relates to earlier points about communicating and conveying value. Some of the tools mentioned can help with another challenge, namely student confidence and ability to contribute, but it would also be interesting to see how AI could model responses and provide exemplars to help reduce barriers to participation.

Discussion forums, in their current guise, are arguably one of the areas where an educator efficiency focus could really help. Online educators, in particular, often have to drive the success of forums by regularly acknowledging contributions, offering feedback and moving conversations forward. With small cohorts this is manageable, but in larger ones educators must be strategic, orchestrating activity while being unable to respond to everything. Tools that assist with this, offering a more qualitative than quantitative analytics approach, may prove valuable.

AI, however, presents a more fundamental challenge to discussion forums. In his paper on the three types of interaction, Michael G. Moore outlined learner-to-content, learner-to-instructor and learner-to-learner as the key distinctions in a distance learning context. I would suggest that, with AI advancements, there is now a fourth: learner-to-synthetic. This does not spell the end of the discussion forum, but it may further challenge the value proposition of engaging in discussions for some learners, who can interact with an AI chatbot without the demands and unease of real social exchange.

A middle way is to incorporate AI teaching assistants into discussion forums, an approach already adopted in some contexts, the most notable being Georgia Tech’s Jill Watson. No VLE company has yet developed an equivalent within their products, but clearly this is an avenue with the potential to enhance the discussion forum paradigm. That said, if UK higher education were the Enchantment Under the Sea Dance audience, it might currently greet such developments the way that crowd greeted Johnny B. Goode with an over-the-top extended guitar solo.

Quizzes, synthetic interaction and new learning paradigms

Quizzes are another area where AI product developments have emerged, but, like discussions, it feels like a space facing more fundamental disruption. Is there still a place for multiple-choice questions in an AI world? That said, several products now support quiz question generation. On one level this is positive (if I had a penny for every poor MCQ I’ve seen, I probably wouldn’t be writing this), but the issue remains similar to weighing up a sausage: it’s not clear what has gone into it. This circles back to a broader challenge for EdTech companies: do you take a parameter-bounded AI feature approach, or a conversational, expressive one? Put another way, what makes a single-purpose quiz tool more effective than a general-purpose generative AI tool that allows a dialogue to define what you want?

One way I see EdTech companies trying to make these solutions more education-focused is through what I call (in polite company) the “Bloom slider”. This is Bloom’s taxonomy reinvented as a microwave-style setting, from defrost through to incinerate.

Just as an aside, this era of AI EdTech developments has seen Benjamin Bloom referenced liberally, whether for his taxonomy or his 2 sigma problem. I genuinely hope Bloom wasn’t the sort of person who was overly concerned about “legacy”. He has had a serious work of classifying educational objectives translated into a cheerful clip-art rainbow pyramid, and over the years it’s been dumbed down into a sort of CBeebies version for learning and teaching. Then there is his 2 sigma problem paper, which has been seized upon as if it were Dan Brown’s Da Vinci Code for education, and is now used as the justification for every AI tutoring feature going.

Anyway, I digress. These quiz-related developments may be a natural evolution at this stage, but as AI advances, I am not convinced of the long-term future of “selected response” activities with predetermined feedback.

Another interesting, and newer, direction is the emergence of features that sit squarely in the learner-to-synthetic interaction paradigm. The AI Conversations feature within Blackboard is one such example, offering role-play with an AI persona or Socratic dialogue. Instructure’s recently announced LLM-enabled Assignment feature for Canvas is another, and the first product of its tie-up with OpenAI.

These innovations have added to the repertoire of features within VLEs and present a new type of learning interaction in core EdTech. One wonders, though, whether the dialogic element of AI might be better deployed in a more embedded rather than direct approach. Of course, you can build a direct feature like role-play or Socratic dialogue to align with specific pedagogical strategies. However, the potential for AI dialogue across a VLE to support reflection and the development of metacognitive abilities might be more powerful, despite being a more ambient approach. This is something particularly relevant to the next area I’d like to cover.

Assessment and feedback: efficiency or deeper learning?

Assessment and feedback has inevitably been a major focus for AI in EdTech. Themes seen in other areas, such as support with ideation and tackling workload through efficiencies, are evident here too.

A number of companies have emerged whose products are now used in UK higher education. These include AI-powered marking and feedback tools such as Graide and Keath.AI. These tools are clearly pitched towards reducing the time it takes to mark student work while enhancing feedback. There has also been a clear focus on AI supporting the generation of marking and evaluation rubrics.

Given that assessment and feedback are regularly flagged through student feedback mechanisms like the National Student Survey (NSS) as being, in some cases, suboptimal, the use of AI to address this makes sense. This product development focus also aligns closely with the time pressures faced by educators.

Both D2L, through Lumi for Brightspace, and Anthology, through their Blackboard AVA Feedback Assistant, have introduced AI feedback features. An interesting element of this is the modification of tone, something that could also be a valuable feature for instructors responding in discussion forums.

The potential to gain more time to provide feedback is positive, but only if that time is used effectively. More does not necessarily mean better, and the relationship between feedback and learning is already a complex one. That said, going back to the NSS, the perception of value that comes with receiving more rigorous feedback may increase student satisfaction, even if it does not always drive learning.

Returning to my earlier point about dialogue with AI being more embedded, functionality that allows students to engage in dialogue with AI about their feedback could arguably be more influential than AI polishing unidirectional comments. Considerable work has been done on feedback dialogue, and while it is not simply a post-assignment activity, the use of conversational AI to support students’ understanding of feedback, and how they act on it, could be valuable for their learning and study skills.

The extent to which current developments in assessment and feedback represent a step change in educational outcomes is debatable. Is AI making marking quicker without enhancing the quality of evaluation? I often point to No More Marking, which uses comparative judgement, as an example of at least trying to do something different, not just to speed up marking, but to help improve human judgement. I am not sure we are seeing much else at that end of the spectrum.

Similarly, is the quality rather than the quantity of feedback being improved, and what can feedback-specific tools do beyond helping educators strike the right tone? Also, given that assessment and feedback should be used to drive student learning and outcomes, what role might they play in supporting metacognition and helping students to develop their metacognitive abilities?

The promise and pitfalls of AI in education technology

AI developments abound at the moment, and this has been far from a comprehensive sweep of all AI activity in EdTech. My overall judgement of the current landscape is that we are in a phase where developments are largely focused on efficiencies, ideation and generation within existing practices. I have seen only minimal evidence of more sophisticated features that reflect a deeper understanding of the complexities of learning and teaching, though it is still early days.

A challenge I have raised repeatedly is the extent to which EdTech companies and their products can genuinely offer distinct benefits over general-purpose generative AI tools for many of the functions they are developing. When dedicated features display a lack of sophistication in their understanding of learning and teaching, and when they contain certain ‘black box’ elements, this does little to inspire confidence.

While some EdTech companies position their advantage as offering a prompt-free interface that does not rely on user prompting skills, this also needs to be underpinned by confidence that the company has a nuanced and sophisticated understanding of learning and teaching. Only then can their features claim to offer real benefits over a general-purpose tool. EdTech does not have deep reserves of credibility here, and when it does attempt to flex its educational research credentials, it often does so in a narrow and unsophisticated way, leaning heavily on singular pieces of work such as Bloom’s taxonomy or Ebbinghaus’s forgetting curve. At present, in many cases, I would prefer to use my own knowledge and experience in conjunction with a general-purpose AI tool, rather than something tightly parameter-bound from an EdTech provider.

This question may become even more pressing if universities begin to adopt and license tools wholesale from the major AI players, such as OpenAI and Anthropic, particularly if they do so in a way that effectively “gives permission” for them to be used more formally in teaching and learning. There have already been moves in this direction, with LSE offering students and staff free access to Claude for Education, and the University of Oxford providing the same with ChatGPT Edu.

Humans are complex, and learning is a complex process with behavioural, psychological and social dynamics and influences. While there is a body of knowledge on learning, it requires wisdom to interpret and apply judiciously when designing learning experiences. When EdTech flattens this out or fails to engage meaningfully with it, particularly when developing features focused on learning rather than purely practical tasks, there is a risk those features will add little value.

The possibilities AI presents for product development are significant and potentially paradigm-altering. But, to quote Daisy Christodoulou on EdTech:

“if we persist with faulty ideas about how humans think and learn, we will just extend a century-long cycle of hype and disillusionment.”

I would broaden this beyond faulty ideas to also include one-dimensional understandings. This is the challenge for EdTech, not only in terms of the efficacy of its products, but also in relation to the competition posed by more general-purpose AI tools.
