
USF linguistics expert pivots course in response to his new study on artificial intelligence chatbots

By: Cassidy Delamarter, University Communications and Marketing

A USF assistant professor is using results from his new study on artificial intelligence chatbots to rework how he assigns homework in his applied linguistics course.

“For instance, instead of asking students to produce short summaries on different readings, something that ChatGPT can do quite easily, I'm giving my students more integrated, hands-on assignments that ask them to blend traditional academic readings with personalized, experiential learning projects,” said Matthew Kessler, assistant professor in the Department of World Languages.

For example, in one of the assignments, Kessler’s students will be required to use a mobile app of their choice to learn a foreign language and document the ways they use the app to immerse themselves in the language for five weeks. 

These changes are inspired by Kessler’s new research, just published in a journal hosted on ScienceDirect, which revealed that even experts from the world’s top linguistics journals have difficulty differentiating AI-generated writing from human writing.

Working alongside J. Elliott Casal, assistant professor of applied linguistics at the University of Memphis, Kessler tasked 72 experts in linguistics with reviewing a variety of research abstracts to determine whether they were written by AI or humans.

“We thought if anybody is going to be able to identify human-produced writing, it should be people in linguistics who’ve spent their careers studying patterns in language and other aspects of human communication,” Kessler said.

The findings revealed otherwise. Although the experts offered rationales for their judgments, such as identifying certain linguistic and stylistic features, they were largely unsuccessful, with an overall positive identification rate of less than 39 percent.

“What was more interesting was when we asked them why they decided something was written by AI or a human,” Kessler said. “They shared very logical reasons, but again and again, they were not accurate or consistent.”

Based on this, Kessler and Casal concluded ChatGPT can write short genres just as well as most humans, if not better in some cases, given that AI typically does not make grammatical errors. 

The silver lining for human authors lies in longer forms of writing. “For longer texts, AI has been known to hallucinate and make up content, making it easier to identify that it was generated by AI,” Kessler said. 

USF sophomore Max Ungrey is taking Kessler’s course through the Judy Genshaft Honors College. He says he occasionally uses ChatGPT before raising his hand with a question in class.

“Obviously ChatGPT is detrimental if used as a replacement for learning, rather than a tool. I’ve absolutely noticed changes in school due to ChatGPT, both due to my own use and the use of others,” Ungrey said. “I can certainly see myself using ChatGPT and other language models in day-to-day work. For example, some web browsers have built-in AI that can summarize large bodies of text, and I think I will end up using tools like that.”

Pointing to experiences like Ungrey’s, Kessler hopes the study will start a bigger conversation about establishing the ethics and guidelines needed for the use of AI in research and education.
