In education, AI has become a terrain of struggle between technocratic utopians and those who oppose AI's uncritical deployment. Underestimating the risks of hasty deployment is irresponsible. We need awareness and critical knowledge as the grounds for better understanding.
Where do the pressures to deploy Artificial Intelligence (AI) in higher education, despite its risks, come from? Is it the influence of corporate communication? Is it the compulsion “to do more with less” in response to governmental cuts to public education? Has the experience of the digital “revolution” in education, with its ideological illusions and disappointments, already been forgotten?
This pressure results from a dangerous mix: an interplay of ideology and ignorance. To argue my point, I will use the dramatis personae of the Good, the Bad and the Ignorant. In my plot, the Good epitomises those who seek to avoid potential damage, the Bad those who seek to subordinate education to technocratic visions, and the Ignorant those who ignore the warnings.
The Ignorant doesn’t know, and doesn’t want to know, about the problems with new technologies if these problems make life more complicated. The Ignorant prefers to ignore the myth of technological determinism out of laziness rather than interest, and resolves the problem of data privacy with the argument “if you have nothing to hide, you have nothing to worry about”.
As the Ignorant presumably impersonates a rather large constituency, my suggestion is that the best chance for the Good to fight off the influence of the Bad consists in convincing the Ignorant of the serious risks associated with the hasty and uncritical deployment of AI in education. What is needed, in other words, is for the Good to popularize and disseminate the insights of research in the tradition of critical AI studies, by scholars such as Jonathan Roberge and Michael Castelle, Pieter Verdegem and Simon Lindgren.
The dramatis personae of my narrative are ideal types I have used to foster professional awareness and responsibility in connecting available knowledge to moral choices and political responsibilities. Although we are all, more or less, in need of more reliable knowledge, the important distinction here is between those who seek critical knowledge and those who reject or ignore it.
This knowledge constitutes the grounds for our moral choices, and our moral choices define our relative position in the politics of education. Once we learn this, we may also learn to develop more effective educational and pedagogical strategies to oppose the effects of technocentrism and of the manipulation of colleagues and students into blind trust in new technologies.
Know the risks!
In the European Union, the digitalization of education has posed fundamental problems of data rights and ownership that are still largely unresolved. With the rapid development of AI, these problems have been neglected and bypassed, because AI is developed mostly by companies outside the EU and too many school managers, teachers and students now use this technology with a carefree attitude.
Too many don’t know, or don’t care, that the large language models (LLMs) behind this technology are trained through methods that raise many privacy concerns. After all, granting free access to this technology to millions of people is a convenient method to advance corporate LLM training. What this means, however, is that when we think we are using AI, the company owning its LLM is actually using us.
AI is quickly becoming a fundamental element in the working infrastructure of virtually every industry. School managers in higher education, and especially in universities of applied sciences, urge students and teachers to use AI, presumably in the belief that, by doing so, their institution will deliver more competent and competitive “human resources” to industry.
Supporting this belief is not only the desire to increase the employability of their graduates, but also concern about the future of their school, since a low employability rate may eventually lead to decreased funding and ultimately job cuts. As with the digitalization of education before it, the name of the game with AI is adapt or perish. As a result, the introduction of AI in higher education is more responsive to corporate hype than to the warnings of independent research.
Simplifying to the extreme, I summarize these warnings below in relation to the institutional, educational and epistemic dimensions of the impact of AI and related technologies (e.g. ChatGPT, Bard, Gemini) on higher education.
First, and in relation to higher education as an institution, the uncritical deployment of AI enhances the influence, ideology and visions of the groups that control this technology. Higher education has been dependent on corporate software for at least three decades. The introduction of AI marks the culmination of that process but also a qualitative change in the relationship between technology, teaching and learning. It marks the moment in the history of teaching and learning in which corporate computational technology acquires fundamental cognitive functions with educational and pedagogical implications.
The delegation of these functions to AI is far from innocent, as it forces formal education into an even deeper relationship with technological determinism, deepening its subordination to the ‘needs’ and ‘revolutionary’ transformations that corporate giants enforce on societies. The call for efficiency is only the tip of this ideological iceberg. Arguments such as “saving time, more effectively serving students, and more efficiently identifying plagiarists” are instrumental to the introduction of AI surveillance of students and staff, transforming US universities into what scholars Mark Swartz and Kelly McElroy called the “academicon”.
In the mainstream discussion about the “ethics” of AI, simplified and uncritical representations of what is needed for the effective – and ethical! – embedding of ethical frameworks in AI technology over-represent the perspective of the engineering sciences and of corporate managerialism. They allow the tech industry to hijack the research agenda on AI ethics and to use the notions of trust and trustworthiness as buzzwords, ultimately performing “ethics washing” and making the whole discussion about AI ethical guidelines and codes of conduct rather useless.
In this way, the uncritical adoption of AI risks naturalizing the instrumental notion of corporate ethics across the entire institution of higher education. After the digitalization, privatization and managerialization of higher education, the uncritical deployment of “trustworthy” AI is the next step in the effort to subordinate the whole institution to the social purposes and visions associated with the neoliberal and technocratic imaginary of a fully automated social order in the service of the market – while keeping those same purposes, visions and imaginaries unquestioned and beyond the reach of transformative education.
Second, in relation to education proper, scholars such as Simon Sweeney and Debby Cotton, Peter Cotton and Reuben Shipway, among many others, have argued that AI facilitates cheating, disrupts conventional processes of evaluation and, on the bright side, creates an urgent need to re-think the nature and role of evaluation in higher education. But this re-thinking requires time! Until then, the uncritical deployment of AI in higher education is bound to be problematic in many respects.
Even putting the serious concerns about surveillance aside, school managers’ exhortation to use AI whenever possible rests on an implicit “performative” pedagogy and on learning strategies that prioritize results over process: performative rather than interpretative competences.
This pedagogy hinders the development of critical competences and undermines the pedagogical role of professional educators in the formation of critical subjectivities. The same implicit pedagogy, moreover, encourages cognitive dependency on AI and related technologies while inhibiting the acquisition of the critical competences necessary for the “critical subjectivities” of democratic societies.
Third, the uncritical deployment of AI poses epistemic risks because AI is an “epistemic technology”: a technology designed, developed and deployed to be used in relation to contexts, content and operations associated with the creation of knowledge. At the origin of these risks is the fact that the creation of knowledge is influential in a variety of formative processes: from personal identities to social order and everything in between. The creation of knowledge through algorithmic computation, for example, is based on a radical form of instrumental rationality that is very different from, and potentially dangerous for, the rationality required by the processes of deliberation that are so fundamental in democratic regimes.
On a collective level, the problem with the epistemic influence of AI relates not only to the spread of disinformation, but also to the deployment of instrumental rationality and the logic of computational efficiency in the resolution of complex social problems.
This solutionism is not innocent: it hides neoliberal and technocratic visions of a social order based on “post-political dogma” and dehumanization. And one does not need to be familiar with Jürgen Habermas’ work on knowledge and human interests to understand that the primacy of computational epistemics privileges control over understanding, and tyranny over democracy.
Enforcing the uncritical deployment of AI in higher education is the wrong response to the financial strangulation of democratic higher education. The exhortation to “do more with less” is dangerous because it appears innocent, and it remains dangerously unchallenged if the warnings issued by critical scholarship are ignored. If those responsible ignore these warnings and have no interest in the broader context of the politics of education, it is understandable that they believe they have no choice.
The Bad and the Technocratic Myth vs. Democratic Education
Neglecting the growing corpus of critical studies on the risks of AI in higher education leads to a dangerous form of ignorance, one that strengthens the influence of technological determinism on the politics of education and the agenda of people like Elon Musk, Mark Zuckerberg and others who seek to dismantle democracy and bring about the social order of techno-neofeudalism.
For the Bad, the defunding of higher education and the uncritical deployment of AI are opportunities to promote the technocratic myth in education, to undermine the formation of critical subjectivities and to complicate the learning of critical competences.
The Bad, in other words, treats defunding and AI as a unique opportunity to roll back democratic education and to enforce at least three ideas associated with the technocratic myth. First, the idea that technological progress is not a ‘social’ but a ‘natural’ process, deprived of political connotations and ideologically neutral: a force against which resistance is futile and adaptation the only ‘rational’ strategy, one that operates as a selective mechanism in the ‘survival of the fittest’.
Second, the idea that technological progress is the answer to social problems and that, therefore, social change should be subordinated to technological progress and to the visions of corporate leaders and technocrats, allegedly capable of providing optimal solutions to social problems. Finally, the idea that technological advancement is a measure of ‘moral authority’, an indirect result of the Calvinist idea that earthly achievements are indicators of God’s predilection.
These ideas are appealing in times fraught with uncertainty and fear because they come with a promise of reassurance: to address the compelling problems of the present effectively, to resolve and eliminate political conflict, and ultimately to bring about the safety of a post-political surveillance capitalism.
The influence of these ideas contributes to naturalizing, and thus bringing about, the vision of a fully automated and depoliticized social order reminiscent of that regime of “unfreedom” in which the possibilities of reducing inequalities and increasing social justice and democratic participation are suppressed by the ideology and the “productive apparatus” that philosopher and political theorist Herbert Marcuse described in One-Dimensional Man.
Here, as presumably elsewhere, when ideology mixes with ignorance, the result for democracy can be lethal.
What has to be done?
The influence of undemocratic visions and the technocratic myth that supports them in the politics of education is formidable but not yet hegemonic. Resistance is far from futile and the struggle to oppose the uncritical deployment of AI in higher education is an important one.
To oppose the influence of the technocratic utopia and defuse the influence of the Bad, the Good must bring the Ignorant over to her side. This means at least three things. First, learn more about the critical contributions in AI studies. Second, share this knowledge with colleagues and, whenever possible, with students. Third, find the time and energy to explain to those among us who prefer to ignore the risks why responsible educators and school managers cannot simply dismiss them.
Last but not least, the Good should not underestimate the legal relevance of EU regulation, which considers the use of AI in education “high risk” and which constitutes a preliminary but nevertheless compelling source of recommendations and, possibly, sanctions.
Matteo Stocchetti, PhD, is a docent in political communication at the University of Helsinki and Åbo Akademi University, and principal lecturer at Arcada UAS.
Article image: Anna Shvets / Pexels.com