The Dispatch
The evidence:
it’s complicated
The real impact of edtech on students isn’t as simple as it seems, with many layers of complex issues to unpack.
By Claudia Civinini
“A lot of technology can be effective, but not a lot of technology can be implemented effectively.”
The issue of research evidence in edtech can be thorny.
It doesn’t help that the market is crowded with tools and products, some of which don’t follow the best pedagogical practices.
WGU Labs Research Director, Dr Betheny Gross, says: “There are gazillions of apps – that’s one of the problems. Some of them are fabulous. Some of them are horrible. […] There is a lot of incentive to produce technologies…
“What we can guarantee is that there will be crummy tools coming into the marketplace in the future – the thing that we’d like to also guarantee is that educators will have a way to sort through it all. But that’s not the reality right now.”
A UNESCO report published in 2023 warns that impartial evidence on the impact of educational technology is “in short supply”: “there is little robust evidence on digital technology’s added value in education: technology evolves faster than it is possible to evaluate it.”
Most of the available evidence comes from the richest countries. In the United Kingdom, 7 percent of education technology companies had conducted randomised controlled trials, and 12 percent had used third-party certification. A survey of teachers and administrators in 17 US states showed that only 11 percent requested peer-reviewed evidence prior to adoption.
The report adds that while technology is a “lifeline for many”, opening up opportunities for learners with disabilities and playing a fundamental role during the COVID pandemic, it still “excludes many more”.
On AR and VR, the report says these technologies can facilitate practical learning in fields such as medicine, engineering, science and vocational education, and it cites studies finding that they can improve students’ attitudes towards certain subjects, foster motivation and help students develop communication and interpersonal skills.
In a 2021 report on the impact of educational technology, drawing on data from Visible Learning, Professor John Hattie’s synthesis of meta-analyses on which interventions have the largest effect on student learning, Dr Hattie and Dr Arran Hamilton explain that by that time more than 15,000 individual quantitative studies had examined the impact of technology on learning outcomes. These are studies that seek to understand whether students’ achievement, for example on standardised tests, improved as a result of an intervention.
The good news, according to Dr Hattie and Dr Hamilton, is that only two technologies seem to have a detrimental impact: social media as a learning tool (due to distraction and the potential for cyberbullying) and phones in the classroom (again due to distraction). Overconsumption of TV outside of school hours was also found to be detrimental.
Of the other 26 types of tech interventions analysed, none reverses learning, and 11 fall within the group able to accelerate student learning. None, however, falls among the interventions that yield a “disproportionate potential for high return on investment in terms of improving learning outcomes”. The authors call this “unfortunate”, because the Visible Learning research points to many interventions that do achieve that high potential to positively impact learning outcomes.
However, the authors say, the available data suggests the following: the use of technology is likely more beneficial for elementary and college students, although the reason is not fully understood; tech has an above-average impact for students with special learning needs, especially intelligent tutoring systems in maths and online guided reading programmes; and there are benefits in using tech for feedback.
Dr Gross explains that not all technology tools necessarily aim to raise outcomes. “Some are about increasing access, others about managing costs. There is a range of outcomes that technology seeks to resolve.”
One important gatekeeper of whether tools are effective is how they are adopted and used, along with the capabilities of the users in the learning environment. “A lot of technology can be effective, but not a lot of technology can be implemented effectively,” Dr Gross says. “That’s the part of the puzzle that we spend the most of our time around.”
For example, Dr Gross recalls, WGU Labs did research on a tool supported by several evaluations, including randomised controlled trials, that showed its efficacy in helping students complete their coursework. “We tested it in a different environment and it just completely flamed,” she explains. That’s because, in that environment, students already had other similar tools and routines that helped them to focus on their assignments. “Lo and behold, this tool, as clean and cool as it was, just ended up confusing things. This is a clear case where a tool is effective in some scenarios but not in others.”
And that’s the crux of the evidence: it’s complicated.
“You may see bigger effects of the tools depending on where it’s implemented and who’s using it,” says Dr Stephanie Reeves, Senior Research Scientist at WGU Labs.
One of WGU Labs’ findings in this regard is that faculty and students are experiencing high levels of technology fatigue. Dr Reeves explains that implementing new tech requires being mindful of whether a new tool is actually needed, in order to avoid overwhelming students and faculty. “You could have a technology that's really effective, but maybe it won't benefit the students if they're already experiencing that technology fatigue.”