The Impact of AI on Academia

This blog post examines AI's growing influence in higher education, highlighting both its transformative potential and significant risks. While AI offers powerful tools for research, personalized learning, and administrative efficiency, it threatens to undermine critical thinking skills as students increasingly rely on AI for essays, problem-solving, and even forming opinions. The post warns that AI-generated content can flood academia with convincing but inaccurate information, reinforce existing biases, and exacerbate educational inequalities based on access to technology. Ultimately, it argues that as academic institutions adopt AI, they must prioritize teaching students digital literacy and critical analysis skills to ensure AI serves as a tool for enlightenment rather than control.

Artificial Intelligence is rapidly reshaping the academic landscape, offering powerful tools that streamline research, personalize learning, and expand access to knowledge. AI can analyze massive datasets in a fraction of the time it would take humans, opening new doors in fields like medicine, climate science, and linguistics. It also offers unprecedented language translation tools, speech-to-text systems, and AI tutors that make education more accessible to students with diverse needs and backgrounds.

But as with any technology, the benefits of AI come with significant risks, particularly when it comes to critical thinking, the integrity and rapid spread of information, and broader societal consequences.

Undermining Critical Thinking and Deep Learning

Despite its promise, AI poses a real threat to critical thinking. Students increasingly turn to AI tools for writing essays, solving problems, and even forming opinions, sometimes without fully understanding the underlying material. This outsourcing of cognitive effort can stunt the development of crucial skills like reading comprehension, critical analysis, synthesis, and independent reasoning. AI can provide instant answers, but higher education should be about learning how to ask the right questions. Students who depend on AI risk losing opportunities to develop problem-solving strategies, reflect on real problems, and devise innovative solutions.

The problem isn’t limited to students. In recent years, academic journals and conferences have faced incidents in which individuals submitted AI-generated fake research papers, produced with tools like ChatGPT, that slipped through peer review and were published, exposing weaknesses in the academic vetting process. AI systems are also prone to “hallucinations”: factually inaccurate or entirely fabricated output, including fake citations, invented studies, and even made-up legal cases. When AI is used to create convincing but meaningless content, it can flood academia with low-quality or misleading information that appears legitimate, undermining trust in scholarly publishing and damaging institutional credibility.

Indeed, AI-generated content, while often articulate, can blur the line between truth and fiction. Generative models can produce convincing but inaccurate or biased information, and they are not capable of independently evaluating the truth or ethical content of a claim; students without strong media literacy skills may struggle to evaluate credibility. The result may be a generation less equipped to navigate misinformation or question dominant narratives.

Reinforcing Bias and Exacerbating Inequality

Another issue is access: the availability of advanced AI tools varies with students’ economic status and digital literacy, giving those with better access to technology yet another unfair academic advantage and exacerbating existing inequalities in education. Further, AI systems are trained on large datasets that often contain historical biases. If not carefully vetted, they can reproduce or amplify stereotypes in content and recommendations.

Even Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics and known as the “Godfather of AI,” says he is increasingly concerned about AI’s rapid development. He has warned about the uneven distribution of AI’s benefits and, most recently, about the dangers of approaching AI with a profit-driven mindset, arguing that it raises the likelihood of an AI takeover or of bad actors co-opting the technology for dangerous ends such as mass surveillance.

This highlights the urgent need for updated policies, AI detection tools, better education around the ethical (and appropriate) use of generative technologies, and perhaps more (not less) regulation by the federal government.

A Double-Edged Sword for Academic Operations: Administrative Efficiency & Faculty Disempowerment

According to a poll by CollegeVine, 86% of university leaders agreed that AI presents a “massive opportunity to transform higher education.” 

Proponents suggest that AI presents an incredible opportunity for schools to reimagine how they operate: streamlining costly administrative processes like alumni engagement, lowering costs, and redirecting resources back to their core missions. Universities that adopt AI-driven grading, feedback, or tutoring systems to cut costs, however, risk marginalizing faculty expertise, reducing the richness of pedagogy, and replacing meaningful human interaction with impersonal automation.

For instance, the California State University system announced a landmark initiative to make its 23 campuses “the nation’s first and largest AI-powered public university system.” The announcement was met with pushback, with CSU faculty criticizing the undertaking as antithetical to the university system’s mission “to prepare significant numbers of educated, responsible people” to contribute to the state’s economy, culture, and future. Some argued that the initiative was launched without adequate consultation, raising concerns about academic freedom, data privacy, and the potential for AI to exacerbate existing inequalities. The California Faculty Association (CFA) filed an unfair labor practice charge, asserting that CSU failed to confer with faculty before implementing the AI tools, potentially violating labor agreements and affecting faculty workload and intellectual property rights.

During a time of budget cuts, it is important to ask whether investment in AI is more important than investment in people. 

A Broader Societal Concern: AI and Authoritarianism

AI’s misuse isn’t limited to classrooms. In the hands of authoritarian regimes or extremist movements, AI can be weaponized to manipulate public opinion, surveil dissent, and reinforce harmful ideologies. The very same tools that tailor educational content can also micro-target propaganda. The mass automation of persuasive messaging, coupled with a weakened ability to think critically, creates fertile ground for fascist ideologies to spread.

As academic institutions increasingly adopt AI, it’s essential that they also emphasize digital literacy, ethical AI use, and the preservation of democratic values. If universities teach students how to use AI, they must also teach them how to question it. We live in an age of misinformation, in which powerful actors have weaponized technology to manipulate truth and reality for political ends. The need for academic institutions to teach the critical, rigorous analytical skills required to question claims and discern truth from falsehood should be unequivocally acknowledged.

At this early phase, it is difficult to determine whether AI is inherently good or bad; perhaps it simply reflects the values of those who build and deploy it. In academia, its promise is vast, but so are the risks, and those risks demand a critical, examined approach to integrating AI into our systems of knowledge production. By fostering critical thinking and ethical awareness, educators can help ensure AI serves as a tool for enlightenment rather than a catalyst for control.

Sources:

CSU Administrators Impose Greater Use of A.I., California Faculty Association (Feb. 13, 2025), https://www.calfac.org/csu-administrators-impose-greater-use-of-a-i/

Ron Carucci, In The Age Of AI, Critical Thinking Is More Needed Than Ever, Forbes (Feb. 6, 2024), https://www.forbes.com/sites/roncarucci/2024/02/06/in-the-age-of-ai-critical-thinking-is-more-needed-than-ever/

Erik Cliburn, A Growing Influence: The Power of AI in Academia, Insight Into Diversity (Jan. 7, 2025), https://www.insightintodiversity.com/a-growing-influence-the-power-of-ai-in-academia/

Aniya Greene-Santos, Does AI Have a Bias Problem?, neaToday (Feb. 22, 2024), https://www.nea.org/nea-today/all-news-articles/does-ai-have-bias-problem

Kathiann Kowalski, Artificial intelligence is making it hard to tell truth from fiction, Science News Explores (May 9, 2024), https://www.snexplores.org/article/artificial-intelligence-ai-deepfakes-trust-information

Martha Lincoln and Martha Kenney, CSU says its ‘AI-powered university’ is good for higher education. But is it?, San Francisco Chronicle (Feb. 13, 2025), https://www.sfchronicle.com/opinion/openforum/article/csu-ai-university-education-20158671.php

Beatrice Nolan, ‘Godfather of AI’ says AI is like a cute tiger cub—unless you know it won’t turn on you, you should worry, Fortune (Apr. 28, 2025), https://fortune.com/article/geoffrey-hinton-ai-godfather-tiger-cub/

Cathleen O’Grady, Low-quality papers are surging by exploiting public data sets and AI, Science (May 14, 2025), https://www.science.org/content/article/low-quality-papers-are-surging-exploiting-public-data-sets-and-ai

New research integrity AI tool added to Springer Nature’s growing portfolio, Springer Nature Press Release (Apr. 7, 2025).

Chunpeng Zhai, Santoso Wibowo & Lily D. Li, The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review, Smart Learning Environments 11:28 (2024).
