Why EdTech impact has moved from “nice to have” to “non-negotiable”

Education institutions are no longer asking, “Does this product look good?”

They are asking, “Does this product actually improve something we care about?”

That shift matters. Schools, colleges, universities, and training providers are under pressure to justify every spend. Budgets are tight, staff are stretched, learners need better support, and AI has made technology decisions even more complex. At the same time, the evidence on EdTech is mixed. UNESCO’s 2023 Global Education Monitoring Report states that some technology can improve some types of learning, but impact is not automatic and the short- and long-term costs of digital technology are often underestimated. The report also argues that technology should support learning, not replace the human connection at the heart of education.

That is why impact has come to the front. Education institutions have seen enough shiny platforms, dashboards, and AI tools to know that usage does not always mean value. A tool can be used often and still not improve learning. A platform can reduce admin in one team but increase workload elsewhere. A product can look powerful in a demo but fail once it meets real classrooms, procurement teams, timetables, IT systems, and tired teachers on a Thursday afternoon.

The OECD makes a similar point. Its 2025 working paper on digital technology and student learning found that access to technology alone does not guarantee educational gain. The paper argues that successful digitalisation needs pedagogical solutions, not just technical ones. That line should probably be printed on every EdTech pitch deck.

Who is looking at EdTech impact?

There are several organisations shaping the conversation around EdTech evidence and impact.

Globally, UNESCO looks at technology in education through the lens of access, equity, quality, system management, and appropriate use. Its work is important because it warns against assuming that technology is always positive. The World Bank also publishes EdTech research and evidence, particularly for developing countries, and its EdTech work focuses on discovering, documenting, generating, and analysing evidence-based technology solutions in education.

The OECD looks closely at student outcomes, digital learning, skills, and the conditions needed for technology to work. Its work is useful because it avoids both extremes. It does not say technology is pointless, but it also does not pretend that devices or platforms alone will raise outcomes.

The Education Endowment Foundation in the UK has produced guidance on using digital technology to improve learning. More recently, the EEF has also warned that EdTech can widen disadvantage gaps if it is not implemented well. That is a key point for schools and companies. The same product can support one group and leave another behind if access, design, training, and implementation are not considered.

In England, the Department for Education has also moved further into this area. Its EdTech Impact Testbed work is designed to support evidence-based development and purchasing, and it has identified the need for evidence that shows measurable impact and value for schools and colleges. Parliamentary answers also describe the testbed as a way to generate evidence around tools including AI, with focus areas such as staff workload, pupil outcomes, and inclusivity.

The Chartered College of Teaching is involved in the EdTech Evidence Board, which is taking an evidence-based approach to evaluating the effectiveness and impact of EdTech products. The board reviews evidence submitted by suppliers against defined criteria developed with experts, suppliers, schools, and colleges.

Jisc plays an important role in higher education. Its work on digital transformation focuses on leadership, investment, secure infrastructure, stakeholder engagement, and digitally capable staff and students. Jisc also runs digital experience insight surveys that help institutions use data to inform strategy, operations, investment, and return on investment.

There are also specialist platforms and organisations such as EdTech Impact, EduEvidence, BESA, ImpactEd, and EdTech Europe. EdTech Impact describes itself as an independent review platform helping educators compare more than 1,500 EdTech solutions using customer reviews. EduEvidence and related research around the Multiple EdTech Impact Index are pushing the sector towards broader impact measurement across efficacy, effectiveness, ethics, equity, and environment.

Why measuring impact is no longer optional

For EdTech companies, measuring impact is not just a research exercise. It is now part of sales, product, customer success, renewals, fundraising, and trust.

Education institutions need evidence because they are accountable to learners, staff, governors, boards, regulators, parents, government bodies, and funders. If they buy a product, they need to show why it was worth the money and what changed because of it. That is especially true when products claim to improve outcomes, reduce workload, increase engagement, support inclusion, or bring AI into teaching and learning.

Not measuring impact creates several risks. The first is commercial. If a company cannot show value, renewals become harder. Expansion becomes harder. Procurement becomes harder. The second is product risk. Without evidence, teams build based on opinion, loud customer requests, or internal assumptions. The third is ethical. If a product affects learners, staff, data, or decisions, the company needs to understand who benefits, who does not, and whether any group is being left behind. UNESCO has been clear that technology should be adopted based on evidence showing it is appropriate, equitable, scalable, and sustainable.

The fourth risk is trust. EdTech buyers have become more careful. They have seen tools overclaim. They have seen platforms fail to embed. They have seen technology increase workload when it was meant to reduce it. A company that cannot explain its impact is asking education institutions to take a leap of faith. At the moment, many institutions are not in a leaping mood.

What types of impact are measured?

Impact in EdTech is not one thing. This is where many companies get stuck. They treat impact as if it only means improved test scores. That is one type of impact, but it is not the only one.

Learning impact looks at whether the product improves knowledge, skills, confidence, attainment, assessment outcomes, progression, completion, or learner performance. This is often the most visible type of impact, especially for schools and assessment-focused tools.

Teaching impact looks at whether the product improves teacher practice, reduces workload, supports planning, improves feedback, saves time, or helps teachers make better decisions. This is increasingly important because teacher workload is one of the main reasons institutions consider technology in the first place.

Engagement impact looks at whether learners use the tool, complete tasks, return regularly, participate more, or show stronger motivation. This is useful, but it needs care. Engagement is not the same as learning. A learner can click a lot and learn very little. Lovely dashboard, questionable impact.

Operational impact looks at whether the product saves time, reduces manual work, improves processes, reduces support tickets, improves communication, or supports better data flows. In higher education, this can be just as important as learning impact, especially for admissions, student support, assessment, and retention tools.

Equity impact looks at who benefits. Does the product work for learners with SEND? Does it support students from lower-income backgrounds? Does it improve access for rural learners, international students, multilingual learners, or adult learners? The Multiple EdTech Impact Index includes equity as one of its core domains, alongside efficacy, effectiveness, ethics, and environment.

Ethical impact looks at data protection, privacy, responsible AI, transparency, safeguarding, bias, and user rights. This is becoming much more important as AI products move into schools and universities. Digital Futures for Children argues that schools should only procure EdTech that upholds children’s rights, complies with data protection, and is independently shown to benefit education.

Commercial impact matters too. Education institutions and companies both need to understand return on investment. That might mean renewal rates, implementation success, reduced staff cost, improved retention, reduced dropout, stronger learner progression, or better use of staff time.

What does a successful impact measurement look like?

A successful impact measurement starts with a clear claim. Not “we improve education,” because that means everything and nothing. A stronger claim would be “we reduce teacher marking time for formative assessment,” or “we improve learner completion in online CPD,” or “we increase student engagement during the first semester.”

From there, the company needs a theory of change. This explains how the product is expected to create the outcome. What problem does it solve? Who uses it? What behaviour changes? What short-term signal should appear first? What longer-term result should follow?

The next step is baseline data. Without a starting point, impact becomes guesswork. A school or university needs to know what was happening before the product was introduced. That might include attendance, attainment, completion, staff workload, support tickets, engagement rates, or student satisfaction.

Good impact measurement then uses both quantitative and qualitative data. Numbers show scale. Interviews and case studies explain why something happened. For example, a product may show improved completion rates, but user interviews might reveal that the real driver was better nudges, simpler onboarding, or more relevant content.

The strongest evidence usually includes a comparison. That might be a control group, matched comparison group, staged rollout, or before-and-after evaluation. Formal randomised controlled trials are useful in some cases, but they are not always realistic for every EdTech company, especially early-stage companies. The key is to match the evidence method to the maturity of the product, the claim being made, and the decision being taken.
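The value of pairing a baseline with a comparison group can be shown with a simple difference-in-differences calculation. The sketch below is purely illustrative: the group names and completion rates are hypothetical, not drawn from any real study.

```python
# Minimal difference-in-differences sketch on hypothetical pilot data.
# "pilot" classes used the product; "comparison" classes did not.
# Values are illustrative average completion rates (%) before and after rollout.

baseline = {"pilot": 62.0, "comparison": 61.0}   # pre-rollout averages
followup = {"pilot": 74.0, "comparison": 66.0}   # post-rollout averages

def diff_in_diff(baseline, followup):
    """Change in the pilot group minus change in the comparison group.

    Subtracting the comparison group's change strips out whatever shift
    affected both groups (a new term, a policy change, better attendance)
    and leaves the part of the improvement more plausibly attributable
    to the product.
    """
    pilot_change = followup["pilot"] - baseline["pilot"]
    comparison_change = followup["comparison"] - baseline["comparison"]
    return pilot_change - comparison_change

print(diff_in_diff(baseline, followup))  # pilot +12, comparison +5 -> 7.0
```

Without the comparison group, the pilot's raw 12-point gain would be claimed; with it, only the 7 points the comparison group did not also achieve are attributed to the product.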

The What Works Clearinghouse in the United States is useful here because it reviews education research and rates the strength of evidence. Its resources explain evidence levels such as strong, moderate, or minimal, helping decision makers understand how much confidence to place in a study.

The grey areas of measuring EdTech impact

This is where it gets messy. Impact in education is hard to measure because education itself is complex.

The first grey area is attribution. If learner outcomes improve, was it because of the product, the teacher, a curriculum change, a new policy, more funding, better attendance, or a very determined head of department with strong coffee? EdTech rarely works in isolation.

The second grey area is context. A product may work well in one school, college, university, or country, and fail somewhere else. Implementation quality matters. Staff training matters. Leadership matters. Infrastructure matters. Culture matters. Jisc’s work on digital transformation in higher education highlights that success depends on leadership, investment, infrastructure, stakeholder engagement, and digital capability, not just the tool itself.

The third grey area is what counts as impact. Is saving teachers 20 minutes a day impact? Yes. Is improving student confidence impact? Often, yes. Is increasing logins impact? Maybe. Is making a dashboard look busy impact? Probably not.

The fourth grey area is time. Some outcomes appear quickly, such as reduced admin or increased usage. Others take longer, such as improved attainment, retention, progression, or institutional change. Measuring too early can miss real value. Measuring too late can make it hard to know what caused the change.

The fifth grey area is equity. A product may improve average outcomes but widen gaps between groups. That is why average impact is not enough. Institutions need to know who benefits and who does not.

The sixth grey area is evidence quality. Vendor case studies can be useful, but they are not the same as independent evidence. Customer testimonials can show value, but they do not prove causality. Engagement data can show behaviour, but not necessarily learning. This is why education institutions are becoming more careful about the kind of evidence they accept.

How education institutions can verify impact

Education institutions should not simply ask whether a product has evidence. They should ask what kind of evidence, in what context, and for which users.

A good starting point is to ask the company to define its impact claim clearly. What exactly does the product improve? For whom? Under what conditions? Over what period of time?

Institutions should then ask whether the evidence comes from similar settings. A study from a well-funded private school may not translate to a large further education college. Evidence from one country may not fully apply to another. A higher education tool tested with one faculty may not work the same across the whole university.

They should also look at implementation requirements. What training is needed? How much staff time does it require? Does it integrate with existing systems? What data is needed? Who owns the data? What support is available?

For AI-based tools, institutions should go further. They should ask about data privacy, bias, transparency, model limitations, human oversight, and whether the tool supports learning or simply produces outputs. The OECD’s recent work on digital technologies and learning stresses that educational gain depends on pedagogy and thoughtful design, not technology access alone.

The best institutions run structured pilots. They define success before implementation, collect baseline data, involve users, measure adoption and outcomes, and review whether the product is worth scaling. This protects the institution and gives the EdTech company better data too.
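The pilot discipline described above, defining success before implementation and reviewing results against it, can be sketched in a few lines. The metric names and thresholds here are hypothetical examples of what an institution might pre-agree with a supplier.

```python
# Sketch of a structured pilot review: success criteria are fixed
# before the pilot starts, then checked against measured results.
# Metric names and thresholds below are hypothetical.

success_criteria = {
    "weekly_active_users_pct": 60.0,      # adoption: % of cohort active each week
    "completion_rate_pct": 70.0,          # outcome: % completing the course
    "teacher_hours_saved_per_week": 1.0,  # workload: self-reported time saved
}

pilot_results = {
    "weekly_active_users_pct": 68.0,
    "completion_rate_pct": 64.0,
    "teacher_hours_saved_per_week": 1.5,
}

def review_pilot(criteria, results):
    """Return (met, missed) metric lists so the scaling decision is explicit."""
    met = [m for m, target in criteria.items() if results.get(m, 0.0) >= target]
    missed = [m for m in criteria if m not in met]
    return met, missed

met, missed = review_pilot(success_criteria, pilot_results)
print("Met:", met)       # adoption and workload targets reached
print("Missed:", missed) # completion target not reached
```

Writing the criteria down first prevents the common failure mode where whatever the pilot happened to improve is retrofitted as the goal.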

How EdTech companies can navigate impact

For EdTech companies, impact should not sit in a forgotten PDF that sales remembers only when procurement asks for it.

It should be part of product strategy.

The first step is to be precise about what kind of impact the company is trying to create. Learning outcomes, staff workload, engagement, retention, access, operational efficiency, equity, or commercial return. Pick the claim and build evidence around it.

The second step is to build measurement into the customer journey. That means working with customer success, product, data, and research teams to collect useful evidence from implementation onwards. The best impact data often sits across systems such as CRM, LMS, support tickets, product analytics, assessment data, and user feedback.
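Because that evidence sits in separate systems, the practical first task is usually joining records by learner. The sketch below shows the idea with hypothetical ids and fields: usage from product analytics joined to outcomes from an assessment system, with unmatched learners surfaced rather than silently dropped.

```python
# Sketch of stitching impact evidence across systems: product analytics
# (usage) joined to institutional assessment data (outcomes) by learner id.
# All ids, counts, and fields are hypothetical.

usage = {  # from product analytics: sessions in the first semester
    "stu-001": 24, "stu-002": 3, "stu-003": 18,
}
outcomes = {  # from the institution's assessment system: module passed?
    "stu-001": True, "stu-002": False, "stu-003": True, "stu-004": True,
}

def join_usage_outcomes(usage, outcomes):
    """Inner-join the two sources on learner id.

    Learners present in only one source are returned separately, because
    who is missing (non-users, withdrawals, data gaps) is itself evidence
    about adoption and equity.
    """
    joined = {sid: (usage[sid], outcomes[sid]) for sid in usage if sid in outcomes}
    unmatched = sorted(set(usage) ^ set(outcomes))
    return joined, unmatched

joined, unmatched = join_usage_outcomes(usage, outcomes)
print(len(joined), "matched;", "unmatched:", unmatched)
```

In practice this join runs across CRM, LMS, and analytics exports, but the principle is the same: usage only becomes impact evidence once it is linked to an outcome for the same learner.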

The third step is to partner with customers properly. Schools and universities do not just want to be used as case studies. They want useful insight. If a company can help an institution understand its own data, identify what is working, and improve implementation, that becomes a deeper partnership.

The fourth step is to avoid overclaiming. It is better to say “early evidence suggests” than to make sweeping claims that cannot be defended. Trust is built through careful language.

The fifth step is to develop different levels of evidence as the company grows. Early-stage companies might begin with logic models, user research, pilots, and case studies. More mature companies can move towards independent evaluations, comparison studies, and published research. The evidence should mature with the product.

FAQs
Why is impact important in EdTech?

Impact matters because education institutions are not buying technology for the sake of technology. They need to know whether a product improves learning, saves time, supports staff, improves engagement, reduces admin, or helps learners progress. The OECD has found that access to technology alone does not guarantee educational gain, which is why institutions now look much more closely at evidence and outcomes.

Why has EdTech impact become such a big topic?

Impact has moved up the agenda because education budgets are tight, staff workload is high, and institutions are under pressure to justify spend. UNESCO’s 2023 Global Education Monitoring Report looks at technology through the lenses of relevance, equity, scalability, and sustainability, which shows how much the conversation has moved beyond “does it work?” towards “does it work fairly, safely, and at scale?”

What organisations are looking at EdTech impact?

Several organisations are shaping the evidence and impact conversation. UNESCO looks at global technology use in education. The OECD reviews digital tools and learning outcomes. The Education Endowment Foundation provides evidence guidance for schools. The Department for Education in England has explored an EdTech Impact Testbed to test products in schools and colleges and generate evidence of impact.

What types of impact should EdTech companies measure?

EdTech companies can measure several types of impact. This might include learning outcomes, learner engagement, teacher workload, student retention, completion rates, operational efficiency, accessibility, equity, and return on investment. The right measure depends on the product and the claim being made. A tool that supports assessment should not measure impact in the same way as a platform designed to reduce admin or improve student wellbeing.

Is usage data the same as impact?

No. Usage data can show whether people are logging in, clicking, completing activities, or returning to the product. That is useful, but it does not prove learning impact. A product can be used often and still fail to improve outcomes. Usage data becomes more meaningful when it is connected to a clear goal, such as improved completion, reduced workload, better feedback, or stronger learner progress.

What does good impact evidence look like?

Good evidence starts with a clear claim. For example, “this tool reduces teacher marking time” is much stronger than “this tool improves education.” Strong evidence usually includes baseline data, a clear method, user feedback, and a comparison where possible. The DfE’s EdTech Impact Testbed work is a good example of the sector moving towards more structured testing in real education settings.

Do all EdTech companies need randomised controlled trials?

No. Randomised controlled trials can be useful, but they are not always realistic, especially for early-stage companies. What matters is that the evidence matches the maturity of the product and the size of the claim. Early-stage companies can begin with pilots, user research, case studies, implementation data, and small comparison studies. As the product grows, the evidence should become stronger and more independent.

What are the grey areas of measuring impact?

The biggest grey areas are attribution, context, timing, and equity. It can be hard to prove that one product caused an improvement because education settings are complex. A product may work well in one school or university but not another. Some impact appears quickly, while other impact takes months or years. Average results can also hide whether some groups benefit more than others.

How can education institutions verify impact claims?

Institutions should ask what the product claims to improve, who it has worked for, and whether the evidence comes from a similar context. They should also ask about implementation support, data privacy, accessibility, and whether results have been independently reviewed. A strong pilot should define success before the product is introduced and gather both data and user feedback during the process.

Why is not measuring impact risky for EdTech companies?

Not measuring impact makes renewals, procurement, fundraising, and customer trust harder. It also means product decisions are based on assumptions rather than evidence. UNESCO notes that good, impartial evidence on EdTech impact is still in short supply, and that many companies have not conducted stronger forms of evaluation. That creates a clear opportunity for companies that take impact seriously.

How should EdTech companies start measuring impact?

Start small and be specific. Choose one clear impact claim, define what success looks like, collect baseline data, and work with customers to measure change. Impact should sit across product, customer success, sales, and research, not in one forgotten document that gets dusted off when procurement asks for it.