Arizona State University has contributed feedback that aided in the development of OpenAI’s ChatGPT Edu. Photo: Matt York/Associated Press

Inside Universities’ Love-Hate Relationship With ChatGPT

As college students across the country settle in for their first day of classes, they’re getting used to a new section on the syllabus: a course’s generative AI policy. 

Institutions such as Cornell University and Columbia University are letting professors decide the extent to which tools like OpenAI’s ChatGPT are allowed in coursework, or whether they are banned outright.

It’s a question professors and institutions have been grappling with for almost two years since the debut of ChatGPT sowed panic in the sector, giving students the ability to whip up an assignment—not necessarily accurate—with a handful of prompts. 

Now, after training, task forces, and trial and error, universities say they have a better grip on what AI should and shouldn’t do in the classroom, though it remains a learning process with many unanswered questions. And even when they settle on a specific policy, it might not be easy to enforce.

Colorado State University history professor Jonathan Rees plans to tell his students they cannot use AI tools to write essays for his class. He’s aware he might not be able to tell if they do, but he believes the quality of AI-generated essays will be so poor that they’ll receive a low mark anyway.

“The policy I picked is ‘Don’t use AI,’” he said. “It’s going to be a bad paper. So if you use AI, you’re almost certainly going to fail.” 

It’s a different story at Arizona State University, which earlier this year signed up for OpenAI’s ChatGPT Enterprise product. Since then, it has contributed feedback that aided in the development of OpenAI’s ChatGPT Edu, which has similar controls to ChatGPT Enterprise but costs less than half as much, OpenAI said. The product can handle an array of educational and administrative tasks, from personalized tutoring to helping faculty write grant applications and draft feedback, said Lev Gonick, ASU’s chief information officer.

Unlike in Rees’s class at Colorado State, students in a freshman English class at ASU last semester were asked to use AI for an assignment: write something in their own voice, generate another version with an AI tool, then compare the two to better understand their own writing voices, Gonick said.

Mira Murati, chief technology officer at OpenAI. Photo: Patrick T. Fallon/Agence France-Presse/Getty Images

Gonick said the university didn’t create a new academic-integrity policy specifically for AI; students who misuse it will be disciplined under the existing policy, much as other forms of plagiarism are. He said he is counting on OpenAI and others to release tools that will help professors detect whether content was generated by AI and help students cite its use.

“Obviously we need to work through important challenges,” said OpenAI Chief Technology Officer Mira Murati. 

Those challenges include the fact that, so far, there’s no easy way for professors to detect whether an assignment submitted by a student was generated by ChatGPT.

The Wall Street Journal previously reported that OpenAI has developed a method to reliably detect ChatGPT-generated text, but the tool has been mired in internal debate and has yet to be released.

OpenAI said it isn’t satisfied that the tool is sufficiently reliable and is still weighing the best approach to identifying AI-generated content, considering watermarking and metadata among other solutions.

The lack of AI detection tools is also forcing some universities to reimagine the way they test and assess learning. 

Kavita Bala, dean of the Bowers College of Computing and Information Science at Cornell University. Photo: Cornell University

“In some courses we forbid its use,” Kavita Bala, dean of the Bowers College of Computing and Information Science at Cornell University, said of the AI tools. In courses where the tools are allowed, professors may ask, “How creative can you be?” Bala said.

The university has also made a private version of Microsoft Copilot, which is built on OpenAI’s GPT models, available to students, faculty and staff.

Ramayya Krishnan, dean of Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, said it will also be important for students to cite their use of AI, much as they cite sources in a research paper.

Other professors are seizing the moment, recognizing in the emergence of ChatGPT an opportunity to dig deep into higher education’s purpose of combining inquiry and scholarship with preparation for an ever-changing world.

Vishal Misra, a computer-science professor at Columbia University and vice dean of computing and AI at Columbia Engineering, said he hands students an AI-generated answer to a coding problem and asks them to identify its errors. The exercise serves the additional purpose, he said, of teaching students not to blindly trust AI.

Misra said he’s been working to develop AI tutoring tools that guide students toward the right answers rather than spoon-feeding them. The tools could also help professors create more relevant exam questions, he said.

He plans to work with Columbia’s Center for Teaching and Learning over the fall semester to study whether these tools help or hinder the learning process.

“That’s still something the university wants to measure further and study,” Misra said. “My own personal bias is that they’re useful.”

Source: wsj.com
