Artificial Intelligence (AI) has become a focal point of debate in recent years, fueled by both real-world developments and speculative narratives. Renowned scientists, executives, and visionaries have voiced concerns about AI’s implications. Physicist Stephen Hawking cautioned that AI could mark the end of the human race, philosopher Nick Bostrom warned of the advent of a superintelligence, and entrepreneur Elon Musk went so far as to liken developing AI to summoning a demon. Just last year, an open statement declaring that “mitigating the risk of extinction from AI should be a global priority” was signed by influential figures such as Sam Altman, Demis Hassabis, Geoffrey Hinton, Bill Gates, and hundreds of others. There can be no doubt that navigating the implications of AI demands global attention and concerted effort. However, whether statements like these are more than a marketing ploy that distracts from real-world issues concerning AI, such as privacy rights, copyright concerns, and labor conditions, remains a subject of debate. This applies in particular to science and academia, where I argue for a human-centric approach.
As cautionary tales, these predictions and warnings about the potential dangers of AI echo themes found in science fiction, where AI systems threaten humanity’s existence. This is particularly noticeable in stories where a powerful AI takes control, such as Skynet in The Terminator or the Machine God in The Matrix. Science fiction as a genre mirrors fears, deeply ingrained in human culture throughout history, that technological advancement will outpace human control. However, these warnings also serve as metaphors for broader concerns about inhumane superordinate structures, ideologies, or institutions (Hermann, 2023). In this context, Skynet and the Machine God represent manifestations of current and primeval anxieties about totalitarian, oppressive, and exploitative systems of governance beyond AI per se, systems in which dissent or opposition is impossible. Fictional AI then provides the perfect canvas for projecting these anxieties.
Let me offer two examples to illustrate this point, one real and one speculative. First, in the 2020 series adaptation of Aldous Huxley’s classic “Brave New World,” the totalitarian world government is no longer represented by people, as in the original book, but by an AI system called Indra, created by humans to save the world. AI has evidently become a timely metaphor for unaccountable systems. Second, imagine that Franz Kafka’s “Der Process” (The Trial), in which the bank clerk Josef K. is prosecuted by the authorities without ever learning his alleged offense, were adapted into a new film. I am quite sure that the opaque bureaucratic judicial apparatus pushing Josef K. around would nowadays be portrayed as an inscrutable AI system. These examples show that fictional AI can address both real fears of uncontrolled technological development and broader societal concerns about power structures and control mechanisms that extend beyond AI as such.
How does this relate to the topic of this special issue, which tackles the question of what role AI can and should play in academia? The concern about the potential domination of powerful AI can also be read as a metaphor for the domain of science itself. In a 2016 article on future predictions in “Scientific American,” science fiction author Kim Stanley Robinson asserted that “Science itself is the artificial intelligence we fear will take over: collective, abstract, mechanical, extending far beyond individual human senses” (Robinson, 2024), a claim he has since repeated in interviews. This notion may apply not only to the realm of science but also to “academia” as the institutional framework for scientific endeavors, as well as to “research” as the processes by which science is conducted.
If the warnings of powerful AI taking over, whether depicted in fictional narratives or observed in reality, are read as metaphors for the current state of science and its institutions, then they reflect an unease about the trajectory of science and technology. This apprehension suggests that science and technology have acquired a momentum independent of human control, prioritizing their own interests in power and profit over the advancement of human well-being. One might think of the unrestrained development of the atomic bomb, of biological weapons, of the potential of genetic engineering, and, ironically, of AI itself. As noted above, the very individuals who warn against the unrestricted development of AI are often the ones advancing it without restraint, a phenomenon that Lee Vinsel has termed “criti-hype.”
Returning to the realm of practical implications, the metaphorical science-as-AI apparatus is now confronted with real-world AI applications. In light of these considerations, exploring the implications of AI in academia becomes not just a question of technological integration but also a deeper reflection on the evolving nature of scientific inquiry and its impact on human knowledge and society. The question thus arises: how should we approach new tools in science to ensure that academia operates less like an AI beyond human control and more like a tool for advancing humanity and providing a fair environment for researchers, students, and employees?
It is essential to ensure that science aligns with human values and follows human choices, rather than pursuing research solely for its own sake or accelerating processes merely in the name of efficiency. This entails prioritizing human ethical considerations in the development and use of AI technologies within scientific domains.
Here are a few examples of what this approach might involve. Rather than simply speeding up flawed processes, a more prudent approach is to use AI to identify and address the flaws themselves. If the allocation of study places, the assignment of seminars, or the grading of students is unfair or inefficient, AI can help pinpoint the issues and propose solutions. Consider, for instance, a student from a non-academic or less privileged background who writes a well-structured and coherent essay, albeit one lacking fancy vocabulary. An AI grading tool might assign it a lower grade than an essay by another student who uses more sophisticated language. Such a discrepancy would penalize students for their socioeconomic background rather than the quality of their writing, perpetuating existing inequalities in education and the academic system. Conversely, employing an AI tool to uncover these ultimately very human biases, and then working to overcome them, could make academia fairer, more diverse, and less classist in the long run. By refining procedures in this manner, AI can contribute to a more equitable academic landscape that benefits all stakeholders.
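To make this concrete, here is a minimal sketch of what such a bias audit might look like. The field names (lexical_sophistication, rubric_score, assigned_grade) and the threshold are assumptions chosen for illustration only, not features of any existing grading system.

```python
# Hypothetical sketch of a vocabulary-bias audit for AI-assisted essay grading.
# All field names and thresholds are illustrative assumptions.
from statistics import mean

def audit_vocabulary_bias(records, split=0.5):
    """Compare grades of essays with comparable rubric (content) scores but
    plainer vs. more sophisticated vocabulary. A large positive gap suggests
    the grader rewards wording rather than the quality of the argument."""
    plain = [r for r in records if r["lexical_sophistication"] < split]
    fancy = [r for r in records if r["lexical_sophistication"] >= split]
    # Only compare essays whose content quality is roughly the same.
    shared = {round(r["rubric_score"]) for r in plain} & \
             {round(r["rubric_score"]) for r in fancy}
    plain_grades = [r["assigned_grade"] for r in plain if round(r["rubric_score"]) in shared]
    fancy_grades = [r["assigned_grade"] for r in fancy if round(r["rubric_score"]) in shared]
    if not plain_grades or not fancy_grades:
        return None  # not enough comparable essays to draw any conclusion
    return {
        "grade_gap": mean(fancy_grades) - mean(plain_grades),
        "n_plain": len(plain_grades),
        "n_fancy": len(fancy_grades),
    }

# Two essays with identical content scores but different wording styles:
sample = [
    {"lexical_sophistication": 0.3, "rubric_score": 8, "assigned_grade": 72},
    {"lexical_sophistication": 0.8, "rubric_score": 8, "assigned_grade": 85},
]
print(audit_vocabulary_bias(sample))  # a 13-point gap flags a possible bias
```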
Additionally, researchers, educators, program managers, and administrators need to continuously improve their proficiency in using AI ethically and effectively. This requires understanding both the technical aspects of AI and its social implications, including data privacy and bias mitigation. Educators should integrate AI-driven tools into teaching practices transparently, while program managers and administrators must consider the societal impacts of AI deployment. By investing in ongoing training on AI and ethics, individuals can ensure the responsible integration of AI technologies. A concrete example is the use of generative AI tools such as ChatGPT in education, which are used anyway despite concerns about copyright and privacy. Instead of banning them, proactive engagement and discussion of biases and ethical guidelines are essential. Educators can leverage generative AI to create personalized learning materials, discuss its ethical and legal implications, and ensure transparency in data usage. This promotes critical thinking and responsible technology use, enhancing the learning experience.
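As a purely illustrative sketch of what “transparency in data usage” could mean in practice, the snippet below builds a prompt for a practice exercise without including any personal student data and keeps a record of exactly what was sent. The call_model callable is a placeholder for whatever generation service an institution actually uses, not a reference to a specific API.

```python
# Hypothetical sketch of transparent use of a generative AI tool for
# personalized exercises. `call_model` stands in for an institution's
# text-generation service; it is an assumption, not a real API.

def generate_personalized_exercise(topic, difficulty, call_model):
    """Build a prompt for a practice exercise and keep a record of exactly
    what was sent, so the data usage can be disclosed to students."""
    prompt = (
        f"Create a short practice exercise on '{topic}' "
        f"at a {difficulty} level, followed by a worked solution."
    )
    # No personal student data goes into the prompt: only topic and level.
    disclosure = {"prompt_sent": prompt, "personal_data_included": False}
    return call_model(prompt), disclosure

# Usage with a stand-in model for demonstration purposes:
exercise, log = generate_personalized_exercise(
    "confidence intervals", "introductory",
    call_model=lambda p: f"[model output for: {p}]",
)
print(exercise)
print(log)
```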
In conclusion, the guiding principle for integrating AI technologies within academia should not be merely to automate existing, and possibly flawed, processes, but rather to strive for systemic improvement grounded in principles of justice and humanity. By embracing this ethos, academia can leverage the transformative power of AI to advance knowledge, promote ethical inquiry, and foster societal well-being. In doing so, the humans within academia and research can strive to implement real-world AI in a responsible and transparent manner, thereby challenging the depiction of science as an uncontrollable, superordinate AI system found in science fiction narratives.