Students in engineering and computer science are significantly more likely to integrate artificial intelligence into coursework than are humanities and social sciences students. But regardless of their major, many students report uncertainty about when and how AI use is appropriate. These are among the instructive findings of our study of how students at Virginia Tech use generative AI tools. Most report experimenting with AI, but usage patterns reveal an AI proficiency gap.
This raises a challenge for institutions: if AI competence will soon be foundational to the modern workplace – as fundamental as knowing how to use spreadsheets or conduct online research – can we afford to let proficiency depend on major, instructor or personal curiosity? Regardless of their field, our students will encounter AI in industry. It is already integrated into transportation systems, urban planning, environmental management and public health. Our graduates must be ready.
But what does “AI readiness” mean?
I began reframing that question after attending a global technology gathering of 148,000 attendees and more than 4,000 companies in Las Vegas this January. At CES (Consumer Electronics Show) 2026, leaders from Nvidia, AMD and OpenAI described the future of AI. I saw robots playing table tennis and AI systems embedded in everything from mobility platforms to health devices.
One idea stood out.
Three essential components for AI success
A keynote speaker, Roland Busch, president and CEO of Siemens AG, described three essential components for success in the AI era: technology, domain know-how and partnerships. That framework has reshaped how I think about AI proficiency – and how I design my courses.
Technology is the obvious starting point. Students must understand what AI systems can – and can’t – do and how to use them. They must be familiar with GenAI models, data pipelines and emerging tools. But technical knowledge alone is insufficient.
Domain know-how is equally important.
I am a geospatial data scientist. My work focuses on place-based challenges such as transportation systems, smart cities and environmental analysis. AI models may be powerful, but they do not “understand” the nuances of geographic context, complex land use and transportation patterns, or human behaviour across space and time. In fact, my research highlights what I call geographic bias: AI systems often perform better in dense, data-rich urban areas than in rural regions, where data is sparse. Without domain expertise, those gaps can go unnoticed – and uncorrected.
This is why AI proficiency cannot simply mean “learning the tool”. Students must develop deep knowledge in their field and learn how AI interacts with that knowledge. Computer scientists cannot fully solve transportation challenges without transportation expertise. Environmental scientists cannot rely on models without understanding ecological context. Domain knowledge is not being replaced by AI; it is becoming more important.
The third component – partnerships – may be the most transformative.
To tackle pressing global challenges, universities must build productive partnerships across disciplines and beyond campus, including collaboration with communities and industry. Historically, collaboration across disciplines has been difficult. Each field has its own terminology, assumptions and methodologies. But AI is lowering those communication barriers. When I encounter unfamiliar technical concepts, I can use AI tools to help translate and clarify them. Likewise, computer scientists can use AI to better understand domain-specific problems.
This does not replace human collaboration. It strengthens it.
Meaningful problems are solved by people working together – for people. AI can facilitate those partnerships, but it is not a substitute for them. In my classroom, I now place greater emphasis on collaborative, project-based work that integrates technical skills with domain challenges and interdisciplinary dialogue.
Access and ethics in AI use
At the same time, we must carefully consider the ethical use of AI, including access to AI services. Access to AI tools can be uneven. Many advanced systems require paid subscriptions, and costs can quickly accumulate. While $20 (£15) per month may seem manageable for some, it is not trivial for all students. Universities should expand institutional access to advanced AI infrastructure so that proficiency does not depend on personal financial capacity.
The ethical dimension of AI use is equally critical. Empirical research – including work in my own field – demonstrates that AI outputs are neither objective nor unbiased. Bias can manifest politically, socially and geographically. Students must learn not only how to generate results but also how to question them.
Ultimately, AI literacy is not about chasing the latest tool; it is about cultivating the capacity to integrate technology with expertise and human networks. Universities are well positioned to lead. At Virginia Tech, our institutional motto, “Ut Prosim (That I May Serve)”, emphasises solving real-world problems through collaboration and service. The three components I observed at CES – technology, domain knowledge and partnerships – align naturally with that ethos and with our institution’s emphasis on experiential learning, where students “learn by doing”.
The question is not whether to bring AI into the classroom – it is already there – but how we prepare students to engage with it meaningfully. Technology matters. But without domain understanding and strong partnerships, it is insufficient.
Junghwan Kim is an assistant professor in the department of geography and director of the Smart Cities for Good research group at Virginia Tech.