By nature, humans resist change, especially when it feels like a threat. And such has been the case with every technological advancement we have faced. Today, it would be difficult to argue that the printing press was detrimental to the advancement of human knowledge. Yet that is exactly how it was perceived at the time of its initial development and proliferation. This knee-jerk reaction is entirely natural - with incredible opportunity comes glaring risk. The question facing educators is how best to engage with it, so as to boost learning potential while maintaining intellectual integrity.
Though we cannot predict future patterns, history tells us one thing for sure: technological shifts don’t reverse. You can’t uninvent a technology. Just as the printing press, the calculator and the computer have remained, it is safe to say that artificial intelligence is here to stay. According to Oxford University Press, eight out of ten students already use AI in their schoolwork and homework. Now, more than three years on from the launch of ChatGPT, AI tools are no secret - they are immediately accessible to all, and improving rapidly too.
This presents educational providers with a dilemma: how to navigate this uncharted territory? One instinctive reaction might be to ban artificial intelligence altogether, but history shows that prohibition is a futile policy. And why do so, given the extent of the opportunity AI offers? This is not a choice between adoption and rejection, but between proactive, deliberate integration and reactive damage control.
The advantages that AI offers hardly need to be stated. When approached strategically, it can offer transformative opportunity.
Firstly, students can benefit from personalised learning, catalysing cognitive processes and boosting understanding by leaps and bounds. Rather than entire classes advancing at the same pace - leaving some students behind and others under-stretched - AI systems can provide immediate, tailored feedback and suggest how to improve. Around-the-clock support also reduces frustration cycles and maintains momentum beyond school hours. In addition, feedback loops become much shorter: traditional written feedback can take days or even weeks, whereas AI tools mark work and reveal misconceptions instantly, closing the gap between error and correction.
Secondly, for teachers, AI brings a giant leap in efficiency. The goal here is not the automation of teaching, but the expansion of professional capacity. Offloading routine marking and admin work to AI can be the ultimate time-saver, freeing time for high-quality lesson planning and teaching, along with better pastoral support and greater teacher satisfaction. Furthermore, AI-generated insights provide in-depth analysis of student performance at no extra cost in teachers’ time, allowing for more informed intervention and enabling teachers to proactively target weak areas in class.
Nevertheless, the risks are substantial. Used well, AI strengthens learning; used poorly, it can weaken cognitive function.
Misinformation and ‘hallucinations’ are a common phenomenon. Large Language Models (LLMs) - some of the most common forms of AI - are trained on textbooks, essays and academic sources, such that their output mimics expert responses. But the information they produce is not always accurate. Many users mistake the fluency and confidence with which AI responses are presented for genuine epistemic credibility, and these fabricated or distorted outputs can have serious consequences in academic and professional contexts. To make matters worse, recent reports show that fewer than half of students are confident in identifying such inaccuracies.
With regard to cognition, research shows that, used improperly, LLMs can endanger student learning and memory retention. For instance, AI tends to simplify the presentation of material. Though this may appear beneficial, it reduces the mental effort demanded of the student. The absence of mental challenge - what researchers call ‘desirable difficulty’ - can lead to diminished long-term retention. An alternative perspective is offered by Self-Determination Theory, which holds that autonomy, competence, and relatedness are the three key drivers of human motivation and meaningful learning. As students offload more and more work to AI, growing reliance on LLMs erodes their sense of all three. This has detrimental effects not just on learning, but also on young people’s wider self-esteem and motivation.
Lastly, one often overlooked risk presented by AI is its compounding of socio-economic inequality. On a global scale, the proliferation of AI exacerbates ‘data colonialism’ between the global north and the global south. Unlike traditional colonialism, which centred on the extraction of natural resources, data colonialism operates through the commodification of human life via digitisation. More developed regions benefit from advanced use of AI, while less developed regions labour to produce the technology, often with minimal gain. In the consumption and production of AI, the rich get richer and the poor get poorer.
That said, none of these risks justifies disengagement - nor have the risks of any past advancement. But they do demand strategic management.
The most important shift schools must make is from prohibition to policy. Outright bans have never been as productive or long-lasting as smart regulation. Schools should establish clear guidance that regulates use and fosters accountability.
Another crucial aspect to consider is how AI use actually aligns with pedagogy. The likes of ChatGPT and Claude are not designed for education. Any AI platform schools use must promote thinking, not shortcut it. Both students and teachers should also be taught AI literacy - not just how to use AI, but how to question and cross-examine its output. Related to this is staff development: teachers need training and clarity on the subject, as it is too often the case that students’ technological knowledge is more advanced than that of their teachers. Schools should consider appointing a dedicated AI lead to coordinate policy, research tools thoroughly, and ensure alignment with privacy standards.
For governing bodies as well as schools, it may be worth seriously considering more significant systemic change. With information so readily available, it may be wise for education and assessment to place greater emphasis on critical thinking, evaluation and judgement, rather than memorisation alone. But that is a longer-term reflection.
Every generation believes that its technological steps are unprecedented. This is materially true, yet one could argue that every development is essentially the same type of advancement under a different label. Similar opportunities, risks and precautions apply - and through responsible management, each has brought success. The printing press did not end scholarship; it expanded it. The internet did not eliminate learning; it transformed access. Artificial intelligence represents another inflection point. How we harness it now may determine whether human intelligence is strengthened or sidelined in the generations to come.