Committee guides use of generative AI
Faculty and staff group developed guidance on ethics and responsibilities related to this new wave of technology.
A committee of faculty members and staff worked throughout the summer to develop guidance on the emerging technology known as generative artificial intelligence (AI). The result is a comprehensive list of resources on the Office of the Provost’s website, including training modules for instructors, with guidelines for use in University operations coming later this semester.
Generative AI describes systems, such as chatbots, that are capable of creating text, images, video, audio, code and other media in response to queries. Humans can interact with these tools in a seemingly natural way, but the AI models cannot reliably distinguish fact from falsehood or fantasy.
To address the challenges and opportunities created by these tools, Provost Christopher Clemens charged the group in May to develop guidance for the use of generative AI by students, in research and in teaching. Stan Ahalt, dean of the School of Data Science and Society, and Mark McNeilly, professor of the practice at the Kenan-Flagler Business School, co-chair the committee.
Training for instructors will start Oct. 4, and self-paced recorded modules about teaching and ensuring academic integrity with AI are available online now. Todd Cherner, a clinical associate professor and program director of the School of Education’s master’s program in educational innovation, technology and entrepreneurship, led the committee members’ recordings of the faculty modules.
“The training and the modules and the guidance that we developed over the summer can give faculty and staff and students a place to start thinking about how to effectively utilize AI, specifically what ethical considerations are important for courses, for their research and for students,” said Dana Riger, clinical assistant professor in the School of Education. “Faculty should be vigilant about data, privacy and bias and be prepared to guide students in navigating the technology responsibly, because students will be looking to us to help guide them.”
Two committee members — Daniel Anderson, Carolina Digital Humanities director and English professor, and Dayna Durbin, the undergraduate teaching and learning librarian at University Libraries — also designed modules for students to complete in Canvas. With the modules, students can learn how to use and cite conversations with ChatGPT or other large language models appropriately. Additionally, the Writing Center has posted tips for students on how to use generative AI in academic writing.
“There’s a lot of exciting capability and possibilities coming out of using these tools, but there are also substantial risks with respect to accuracy of information,” said Kristi Nickodem, the Robert W. Bradshaw Jr. Distinguished Term Assistant Professor of Public Law and Government in the School of Government. “Students need to be really careful about fact-checking behind those tools and understanding the limits of accuracy and depth of research when it comes to using these tools.”
During one committee meeting, six students shared feedback on what guidance would be most helpful. The committee agreed to a request by Katie Heath, senior vice president of the Graduate and Professional Student Government, that graduate students have access to the faculty training, because many graduate students grade assignments.
Other groups on campus are also looking at how generative AI will change higher education. The Center for Faculty Excellence has hosted a series of conversations on the topic and has shared resources for instructors.
“There’s a genuine interest among some researchers to actually research generative AI, not just use it as a tool or to do things, but to understand it better and potentially develop it as a research platform for many different projects,” said Eric Everett, professor at the Adams School of Dentistry and research integrity officer at the University.