TEACHER VOICE: My students are bombarded with negative ideas about AI, and now they are afraid

Since ChatGPT was released in November 2022, educators have been reflecting on its implications for education. Some lean toward apocalyptic predictions about the end of learning; others remain cautiously optimistic.

It took my students longer than expected to discover generative AI. When I asked them about ChatGPT in February 2023, many had never heard of it.

But some have caught up, and now our college’s Office of Academic Integrity is busier than ever handling AI-related misconduct cases. The need for policies comes up in every college meeting, but I’ve noticed a troubling response among students that educators aren’t considering: fear.

Students are bombarded with negative ideas about AI. Punitive policies reinforce this fear without considering the potential educational benefits of these technologies—nor that students will need them in their professional lives. Our role as educators is not to intimidate them, but to encourage critical thinking and prepare students for a job market that will use AI.

Course descriptions include prohibitions on the use of AI. Professors tell students they are not allowed to use it. And students regularly read stories about peers being put on academic probation for using Grammarly. When students constantly feel distrusted, it creates a hostile learning environment.


Many of my students haven’t even played around with ChatGPT because they’re afraid of being accused of plagiarism. This avoidance creates a paradox: students are expected to be familiar with these modern tools upon graduation, but are discouraged from engaging with them during their education.

I suspect that the profile of my students makes them more susceptible to AI anxiety. Most are Hispanic and female and are taking courses in translation and interpreting. The predominantly male, predominantly white “tech bros” in Silicon Valley who are creating AI look nothing like them, and my students internalize the idea that AI is not for them and not something they need to know about. I was not surprised that the only male student in my class last semester was also the only student who was excited about ChatGPT from the start.

Failure to develop AI skills in Hispanic students may reduce their confidence and interest in using these technologies. Their fearful reactions will exacerbate already troubling disparities between Hispanic and non-Hispanic students; the college attainment gap between Hispanic and white students widened between 2018 and 2021.

The stakes are high. Much like the internet boom, AI will revolutionize everyday activities and certainly knowledge jobs. To prepare our students for these changes, we need to help them understand what AI is and encourage them to explore the capabilities of large language models like ChatGPT.

I decided to tackle the problem head-on. I asked my students to write speeches on a current topic. But first, I asked them what they thought about AI. I was shocked by the extent of their misunderstanding: many believed AI to be an all-knowing, knowledge-producing machine connected to the Internet.

After I gave a short presentation on AI, they expressed surprise that large language models are based on predictions rather than direct knowledge. Their curiosity was piqued and they wanted to learn how to use AI effectively.

After they had written their speeches without AI, I asked them to proofread their drafts with ChatGPT and then report back to me. Again, they were surprised – this time by how much ChatGPT could improve their writing. I was happy (and even proud) that they also viewed the result critically, with comments like “That didn’t sound like me” or “Those parts of the story were made up.”

Was the activity perfect? Of course not. Writing effective prompts proved challenging, and I noticed a clear correlation between students’ literacy levels and the quality of their prompts.

Students struggling with college-level writing weren’t getting far with prompts like “Make it sound more fluid,” but even this basic activity was enough to spark curiosity and critical thinking about AI.

Individual activities like these are great, but without institutional support and guidance, efforts to promote AI literacy will not be successful.

The provost of my college has set up an Artificial Intelligence Committee to develop policies for the college. It includes professors from a variety of disciplines (including myself), other staff, and, most importantly, students.

Over several meetings, we worked together to identify the most important aspects to consider and explored specific topics such as AI competency, privacy and security, AI detectors, and bias.

We created a document broken down into key points that everyone could understand. The draft was then distributed to faculty and other committees to gather feedback.

Initially, we were concerned that disseminating the guidelines to too many stakeholders would complicate the process, but this step proved crucial. Feedback from professors in fields such as history and philosophy strengthened the guidelines and added valuable perspectives. This collaborative approach also helped increase institutional adoption, as everyone’s input was valued.


Underfunded public institutions like mine face significant challenges in integrating AI into education. While AI offers incredible opportunities for educators, implementing those opportunities requires significant institutional investment.

Asking the underpaid faculty in my department to take the time to learn how to use AI and incorporate it into their teaching seems unethical to me. But incorporating AI into our knowledge production activities can significantly improve student outcomes.

If this happens only at wealthy institutions, the achievement gap between students will widen.

Furthermore, if only students at wealthy institutions and companies shape and use AI, the biases inherent in these large language models will only deepen.

If we want to provide equal educational opportunities for all students in our classrooms, institutions serving minorities must not be left behind in the introduction of AI.

Cristina Lozano Argüelles is an assistant professor of interpretation and bilingualism at John Jay College of the City University of New York, where she researches the cognitive and social dimensions of language acquisition.

This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.
