NextFin News — On Friday, September 12, 2025, high school and college educators across the United States said that students' widespread use of artificial intelligence (AI) tools has forced schools to rethink traditional assignments and assessments. The shift comes as educators struggle to define what constitutes cheating in an era of pervasive AI assistance.
Casey Cuny, an English teacher at Valencia High School in Southern California and the 2024 California Teacher of the Year, described the situation as "the worst cheating" he has seen in his 23-year career. Any take-home writing assignment, he noted, must now be assumed to have been aided by an AI chatbot. To counter this, Cuny has moved most writing assignments into class, using software to monitor and restrict students' laptop activity. He also integrates AI into lessons to teach students how to use it as a study aid rather than a cheating tool.
Similarly, Kelly Gibson, a high school teacher in rural Oregon, has shifted from traditional take-home essays to in-class writing and verbal assessments, requiring students to discuss their understanding of readings to reduce AI-assisted cheating.
Students report that AI tools like ChatGPT are often their first resource for brainstorming essay ideas or summarizing complex texts. A common English assignment analyzing social class in "The Great Gatsby," for example, can quickly be supported by AI-generated outlines and quotes. Yet students like Lily Brown, a psychology major at an East Coast liberal arts college, are uncertain where the line between legitimate AI use and cheating lies: many course syllabi prohibit AI-generated writing but offer no clear guidance on what AI assistance is permissible.
AI policies vary widely even within the same schools, with some educators allowing AI-powered grammar checkers like Grammarly while others ban them due to their rewriting capabilities. Valencia High School 11th grader Jolie Lahey highlighted this inconsistency, noting that some teachers have strict "No AI" rules despite the tool's usefulness.
Over the summer, several universities, including the University of California, Berkeley, and Carnegie Mellon University, convened AI task forces to develop clearer guidelines. Berkeley instructed faculty to include explicit AI use policies in their syllabi, ranging from outright bans to permitted uses, to reduce misuse. At Carnegie Mellon, AI-related academic integrity violations have surged, often involving unintentional misuse, such as a student running work through an AI translation tool without realizing the output altered their original language enough to trigger AI detection software.
Rebekah Fitzsimmons, chair of the AI faculty advising committee at Carnegie Mellon's Heinz College, explained that enforcing academic integrity is complicated by the subtlety of AI assistance and the difficulty of detecting it. Faculty are cautious about accusing students unfairly, while students fear false accusations with no way to prove their innocence.
Educators nationwide acknowledge that traditional take-home essays and tests are becoming obsolete due to AI's capabilities. Instead, they are adopting new assessment methods such as in-class writing, oral presentations, and interactive discussions to better evaluate student learning and maintain academic honesty.
The rise of AI in education has also sparked a focus on "AI literacy," encouraging students to learn how to use AI tools responsibly and effectively. This approach aims to balance AI's educational benefits with the need to uphold academic standards.
These developments reflect a broader transformation in education as schools adapt to the realities of AI technology, seeking to clarify academic integrity definitions and redesign assessments to fit the digital age.
Source: ABC News, reporting by Jocelyn Gecker, September 12, 2025.
