Neil Selwyn’s article on the future of AI in education provides a high-level critique of the potential unrestrained application of AI technology in education and society. One issue that Selwyn briefly touches on is the accessibility of AI technology in a global educational context; this topic alone merits thorough consideration and ongoing monitoring. As Selwyn notes, large-scale AI development is dominated by a few major players with narrow interests and a belief that the technology is value-neutral, despite substantial evidence to the contrary. Also concerning is the field’s strong focus on reducing human existence to data, particularly the potential to propagate or accelerate existing social biases through reductionist perspectives applied to that data.
This is particularly concerning for efforts to decolonize education and to create space for inclusion and for the perspectives of BIPOC, LGBTQ2S+, neurodivergent, and other vulnerable peoples. AI-driven education tools applied without substantial ethical and social input could produce dystopian outcomes: data-driven analytics could further stratify and isolate such groups, and generalized applications that homogenize educational approaches could erase underrepresented voices.
I’ve recently worked with our hospital’s research institute on a project to develop fundamental research education for community hospitals. One of the core principles made clear in that program is the vital importance of ethical review. Given the potentially devastating risks that unbridled applications of AI in education technology pose to vulnerable and underrepresented peoples, an essential form of change management, one that should be implemented through regulatory mechanisms, is the rigorous application of ethical review before AI tools are released into the public sphere. This would likely slow the advancement of AI, which I see as a positive outcome: it would counteract the perverse incentives that drive desperate attempts to secure first-mover advantage in a market. While this opposes the principle of creating a sense of urgency to foster change, the approach intentionally questions why there is a sense of urgency in AI development and whether the desired change merits broad application. Deep investigations into change readiness are needed to determine the psychological ability to change and to ensure the requisite capacity, capability, and competency to implement such change safely at a societal level, with broad engagement of the groups affected.
AI may very well become an advantageous tool to empower decolonization and give voice to those historically denied power in education and beyond. However, the current approach to AI safety, value alignment, and societal implications could also open a dystopian Pandora’s box.
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4). https://doi.org/10.1111/ejed.12532
Hello there CBH,
Your analysis offers a compelling critique of AI’s role in education, particularly in its potential to reinforce systemic inequities rather than democratize learning. Several key points stand out in your reflection on Neil Selwyn’s work, including the dominance of a few major players in AI development, the risks of uncritical application of AI-driven tools, and the pressing need for ethical review in AI integration. Your emphasis on decolonization and inclusion is particularly crucial, as AI’s data-driven analytics could inadvertently exacerbate educational stratification if not carefully regulated.
One particularly striking idea in your response is the concern that AI’s reductive tendencies may silence underrepresented voices. The push for efficiency and data-driven decision-making in education could, as you suggest, homogenize learning approaches, making it harder for diverse perspectives, such as those from BIPOC, LGBTQ2S+, and neurodivergent communities, to shape educational discourse. Your call for rigorous ethical oversight before AI tools are implemented aligns with broader discussions on change management, urging a reassessment of the urgency surrounding AI deployment. This is a critical point because many AI-driven educational reforms are presented as inevitable, creating a false sense of urgency that prioritizes speed over careful implementation. Effective change management requires evaluating not just how quickly AI should be integrated but whether and how it aligns with pedagogical values. As you highlight, slowing down AI adoption to ensure ethical review is not an obstacle to progress but a necessary safeguard against harmful, market-driven implementations that could deepen systemic inequities.
As someone who works at the K-12 level, I see firsthand both the potential and the pitfalls of AI in education. While AI promises to personalize learning, its application in real classrooms often raises questions about who benefits and who is left behind. AI-powered tools, particularly in assessment and monitoring, have already demonstrated biases against marginalized students, reinforcing inequities rather than dismantling them. This resonates with your concern about AI-driven education leading to increased stratification rather than inclusion. Additionally, the argument that AI can support decolonization is compelling—but only if it is intentionally designed with ethical, inclusive frameworks in mind. Without proactive intervention, AI could just as easily amplify dominant narratives rather than disrupt them.
Your insights also connect with my reflection on “AI theatre,” a term coined by Selwyn (2022) to describe how AI in education is often more about branding than substantive innovation. AI-driven personalization, for example, is frequently marketed as a revolutionary shift in education, yet many of these systems operate on rigid, pre-programmed responses rather than genuine adaptability. This creates an illusion of progress while maintaining traditional power structures in education. Similarly, AI-powered grading tools are often promoted as impartial, yet they inherit the biases present in their training data. AI theatre, in this sense, is not just about misleading marketing; it represents a deeper issue where AI is used to reinforce existing inequalities under the guise of progress. The same AI that claims to enhance individualized learning can also perpetuate pre-existing biases if not carefully scrutinized. By drawing attention to this phenomenon, you underscore the need for critical engagement rather than passive adoption of AI in educational spaces.
Moreover, you highlight the environmental costs of AI, a factor that is often overlooked in discussions about AI’s future in education. The computational demands of AI systems contribute to significant ecological concerns, making it imperative to balance technological advancement with sustainability. This issue further reinforces the need for ethical governance, not only in terms of equity and accessibility but also in relation to AI’s long-term environmental impact.
Rather than rejecting AI, I agree with your call for critical engagement. Open-source AI, increased educator involvement in AI design, and a strong push for digital literacy among both students and teachers are crucial steps toward ensuring AI serves educational values rather than market-driven incentives. As you point out, ethical governance and fair practices must shape AI adoption in education if we are to avoid the dystopian outcomes that unregulated AI could bring. A thoughtful, change-managed approach, one that questions urgency, emphasizes ethical oversight, and includes diverse voices, can ensure that AI is leveraged as a tool for inclusion rather than exclusion.
Hi CBH and Joan,
What a great discussion about both the possibilities and pitfalls of AI in education. As Joan summarizes, what is still often missing in these conversations and tool implementations is a “thoughtful, change-managed approach, one that questions urgency, emphasizes ethical oversight, and includes diverse voices” that can ensure AI is leveraged as a tool for inclusion rather than exclusion. CBH, you highlight that there should be “deep investigations into change readiness” that really dig into questions around urgency, climate impacts, and the “rigorous application of ethical review” of these tools. I think we are all really struggling with how to ensure these conversations and approaches to change happen in our organizations. What kinds of strategies do you think might work in your different contexts?
Thank you very much, Joan, for your thoughtful analysis and feedback. I agree with your concerns about stratification and the need for scrutiny. You mentioned open-source AI, which is something I genuinely struggle with from ethical and risk perspectives. Democratizing technology can provide incredible opportunities for growth, equity, and access. Yet AI is a technology different in kind from any previously developed, and in the context of dual-use or “double-edged sword” dynamics it presents realistically horrifying possibilities. AI models trained for protein folding or the design of beneficial medical therapies can also be repurposed to engineer potentially lethal pathogens for germ warfare. Having that capability uncontrolled and available to nefarious actors, alongside publicly available gene-editing tools like CRISPR, is unsettling. That said, I have no answer to this problem: powerful dual-use technologies (e.g., nuclear fission) have been put to horrible uses even when tightly regulated. Ethical stewardship of open-source AI is essential, but how can it be achieved?
Michelle, regarding your question about strategies to address these challenges, I’ll add a reply to this message that will also function as my Unit 3 Activity 1 submission.