Indiana University has long been a leader in educational technology. From high-performance computing to innovative digital learning tools, IU has built a reputation that many institutions strive to emulate. After my PhD years, a friend who found a job at an Ivy League university once confided her surprise at how limited the resources were there compared to Bloomington. She said she missed IU’s rich infrastructure for teaching and research — and I understood why. IU’s commitment to technological advancement remains one of its defining strengths.

That legacy was on full display during AI Research Day, held just two days ago. The event brought together researchers from across disciplines — education, medicine, climate science, public health, robotics, and beyond — to deliver two-minute lightning talks on how artificial intelligence is reshaping their fields. The presentations were as inspiring as they were unsettling.

Even with IU’s impressive technological resources, I couldn’t help but think about the sheer scale of private companies’ capabilities — and the growing disparity between academic and corporate access to AI infrastructure. As universities face tightening budgets, can they still compete as engines of ethical innovation?

For centuries, universities have been centers of intellectual development and, in recent decades, stewards of ethical research practices. Private companies, by contrast, often operate under very different values — driven primarily by profit and market power. This tension became clear when a representative from a large language model company presented on how their tools support student learning and creativity.

Her examples were carefully curated: a handful of studies highlighting positive impacts, with little attention to the risks. I couldn’t resist asking about broader consequences — on the economy, employment, and society at large. To her credit, she acknowledged potential job displacement and economic disruption. But those are only part of the picture.

I use large language models daily myself — including for revising this very post (trust me, you wouldn’t have wanted to read the original draft!). I’m not opposed to them. Yet I remain concerned about the pace and scope of their expansion. Energy and water consumption, overreliance on them for knowledge generation, and insufficient understanding of their limitations — these issues deserve urgent attention.

During AI Research Day’s breakout sessions, discussions frequently turned toward large language models and their growing influence. At my table, we explored ideas like developing localized, purpose-built GPTs — AI systems with domain-specific expertise that could “collaborate” to address complex, real-world problems. The conversations were imaginative and hopeful, but also grounded in ethical awareness.

In one ethics session, we grappled with questions around student overreliance on AI, the emerging trend of young users forming “friendships” with chatbots, environmental costs, social biases in training data, and more. These dialogues reminded me that our challenge isn’t just to integrate AI — it’s to do so without losing sight of the values that make us human.

At IU, the atmosphere is electric. Scholars across disciplines are using AI to solve problems in their fields, and the energy is contagious. As an educator, I feel both inspired and cautious. Each day, I think about how to help my students use generative AI effectively, efficiently, and ethically. I ask hard questions. I invite reflection. And sometimes, I appeal to emotion — reminding them that technology should serve learning, not replace it.

The AI movement at IU is not just about innovation; it’s about responsibility. These conversations must continue — because the future of education depends on how thoughtfully we navigate this new frontier.
