Higher education has been navigating the complexities of the digital information age for more than 20 years. As artificial intelligence (AI) emerges as a transformative force, faculty are once again adapting.
Over the last decade, AI has advanced rapidly, and awareness and use of AI systems have grown along with it.
“We know everyone is using it,” half-jokes David Imhoof, professor of history at Susquehanna, “not because we’re ‘catching’ them but because we know everyone is using it.”
At Susquehanna’s Break Through career networking conference in 2024, nearly every student at the AI in the Workplace panel raised a hand when asked whether they had ever used an AI platform.
In November 2023, the Pew Research Center asked U.S. teens ages 13 to 17 about their awareness and use of AI. It found that 67% were familiar with ChatGPT, arguably the best-known generative AI platform, and 19% of those teens said they had used ChatGPT to help with their schoolwork.
Of the teens who have heard of ChatGPT, most (69%) say it’s acceptable to use the platform to research new things. The perception of acceptability declines when it comes to solving math problems (39%) or writing an essay (20%).
AI’s application in the classroom is fraught with ethical issues more complicated than just saying, “Alexa” or “Hey Siri.”
Susquehanna University’s Center for Teaching and Learning has tackled this topic head-on with a series of professional development sessions aimed at educating faculty on the mechanics of AI and how they can manage the use of it in their classrooms.
Susquehanna does not yet have a generalized, university-wide policy regarding the use of AI. Instead, Nabeel Siddiqui, assistant professor of digital media and director of the Center for Teaching and Learning, has encouraged faculty members to tailor their own policies to their classrooms. For Amanda Lenig ’07, department chair and associate professor of graphic design, that means nurturing a culture of transparency.
“It’s part of the industry now, so I believe our job as professors is to teach students to be discerning in how they choose to use AI and to be accountable for that choice,” she says. AI can be an advantage in the field of graphic design. As Lenig explains, a task that once took hours or days, such as creating a cardboard sword for an advertising campaign for the television series Storage Wars (an example from one of her assignments), can now be done in a matter of minutes with the generative AI tools in Adobe Firefly.
The ethics come into play, Lenig says, at the heart of the assignment.
“If the assignment was to create a custom or hand-done illustration, then using AI to create that illustration would be unethical,” Lenig reasons. “If the assignment was to create an ad campaign concept and execute that concept visually where stock photography could have been a method, then using AI would allow the student to create the perfect image for their campaign in a much quicker turnaround time.”
Lenig’s approach is shared by others across the sciences and humanities. Siddiqui allows his students to use AI, up to a point. If he suspects a student is relying too heavily on AI, he will consult with the student about it.
What Siddiqui, in his position with the Center for Teaching and Learning, does not encourage is the default use of AI detectors, which typically treat repeated words and phrasing as a sign that a text was AI-generated. Such detectors, he says, can be problematic.
According to a 2023 article published in the International Journal for Educational Integrity, an evaluation of 14 AI-detection tools found them neither accurate nor reliable: all scored below 80% accuracy, and only five scored above 70%. Studies have also shown AI detectors to be biased against nonnative English speakers.
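To see why such shallow signals misfire, consider a deliberately crude, hypothetical repetition heuristic, sketched below in Python. The scoring function and threshold are invented for illustration; commercial detectors are more sophisticated, but the failure mode is similar.

```python
from collections import Counter

def repetition_score(text: str) -> float:
    """Fraction of words that are repeats of earlier words.

    A crude stand-in for the 'repetitive phrasing' signal some
    AI detectors are said to rely on; purely illustrative.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(words)

# A hypothetical cutoff; real tools tune such values, yet the 2023
# study cited above found none of the 14 tools reached 80% accuracy.
THRESHOLD = 0.35

def looks_ai_generated(text: str) -> bool:
    return repetition_score(text) > THRESHOLD
```

A writer with a smaller working English vocabulary naturally repeats more words, so a signal like this flags nonnative speakers disproportionately, consistent with the bias the studies above describe.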
“The reason a student doesn’t cheat isn’t because they didn’t have access; it’s because they found it ethically problematic,” Siddiqui says. “When a student does make the decision to violate academic integrity policies, there are larger issues that are occurring, in which case it is even more important to be able to talk to that student to determine what is going on.”
Instead of relying on detectors or banning the use of AI altogether, some faculty members are integrating AI into their assignments. During the pandemic, Mike Ozlanski ’05, department head and Allen C. Tressler associate professor of accounting in the Sigmund Weis School of Business, migrated his tests and quizzes to an online setting out of necessity. He has since moved back to the “old-fashioned” way of doing things — in class with a pencil and paper.
“I did this because ChatGPT (in January 2023) earned, on average, a passing grade on these assessments, so I needed a way to assess how well my students — not AI — know accounting concepts,” he says. “I’ve also received informal feedback from students that many prefer taking paper-based assessments.”
However, he hasn’t altogether abandoned ChatGPT in his classes.
“I tell students they can use the tool to help them troubleshoot homework problems with the expectation they can still successfully navigate quizzes and exams,” he explains. “I also highlight that ChatGPT can create multiple-choice and true-false questions about course topics. So, they could use ChatGPT to help them prepare for these assessments.”
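As a rough illustration of the study aid Ozlanski describes, a student could script the same request through OpenAI’s Python SDK. The model name, topic and prompt below are assumptions for illustration, not details from his course.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; the topic and question counts are illustrative.
prompt = (
    "Write five multiple-choice questions and five true/false questions "
    "about accrual accounting, with an answer key at the end."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same request typed into the ChatGPT web interface yields equivalent practice questions; the script simply makes the exercise repeatable.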
In another course, Ozlanski shares copies of ChatGPT output related to course projects and asks his students to critique them.
“We discuss the strengths and weaknesses of the AI output. Then, it is their job to ensure their analysis is better than the chat,” he says. “ChatGPT could be a starting point for their analysis, but they are ultimately responsible for the quality of their submissions, including accurate citations from credible sources.”
Ozlanski’s students must also acknowledge in their papers if they used AI as part of their analysis.
Anusha Veluswamy, visiting assistant professor of mathematical science, has had the students in her 400-level artificial intelligence course predict the incidence of gestational diabetes by running an AI-driven statistical analysis on provided data sets. She also uses AI-assisted grading.
“I load my answer key into the AI platform and first submit a test exam to confirm accuracy,” Veluswamy says. “The platform links directly to Canvas so students can easily submit their exams through a platform they are already familiar with.”
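An exercise of the kind Veluswamy describes might, in its simplest form, look like the sketch below. The file name, column names and the choice of scikit-learn’s logistic regression are all assumptions for illustration; the article does not specify which tools or data sets her students used.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical file and column names; the article does not name
# the data sets the students were given.
df = pd.read_csv("gestational_diabetes.csv")
X = df.drop(columns=["diagnosis"])  # clinical measurements
y = df["diagnosis"]                 # 1 = gestational diabetes, 0 = not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Logistic regression is a common first model for a binary clinical outcome like this, which is why it stands in here.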
While not necessarily “AI-proofing” his assignments, Imhoof has always designed them in a way that makes them difficult to complete with AI.
“I do a lot of very narrowly focused assignments, so if students follow the assignment, it’s not easy for them to type something into ChatGPT and just get an answer,” Imhoof adds. “For example, I may ask my students to use specific documents to analyze an assigned topic because I’m less interested in their ability to gather information than I am in their ability to provide insight.”
Imhoof has also brought AI into the classroom through his Europe, Money and the World course. In it, students use ChatGPT to help them think through certain parts of a paper, though not to write it for them. They also learn how much the wording of a prompt shapes the response a generative AI platform returns.
“We explored different ways ChatGPT could explain the process of decolonization — like a graduate student would, like a 15-year-old would, and like a stand-up comic would,” Imhoof explains. “Needless to say, they especially liked that last one.”
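The classroom exercise Imhoof describes is easy to reproduce. The sketch below, again using OpenAI’s Python SDK under the same assumptions as the earlier example, varies only the persona in the prompt to show how much the framing changes the answer.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three personas from Imhoof's class; the prompt wording is assumed.
personas = ["a graduate student", "a 15-year-old", "a stand-up comic"]

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Explain the process of decolonization as {persona} would.",
        }],
    )
    print(f"--- as {persona} ---")
    print(response.choices[0].message.content)
```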
At Break Through, Joseph Morante ’21, a data analyst with Bloomberg, and Robert Masters ’20, a solutions analyst with Deloitte, spoke with students about the use of AI in the workplace.
Morante highlighted the various misconceptions surrounding AI, particularly the fear of job displacement. He and Masters pointed to career pathways, from coding to software design to prompt engineering, that they believe will be created or expanded as AI grows.
“With any advancing technology there is a fear to it, yes, and some jobs may go, but these advancements in technology are creating more opportunities for jobs and more skills to be learned,” Morante says.
As AI platforms multiply and become more sophisticated, higher education will adapt as it has in the past to computers, the internet and smartphones. What educators like Imhoof, Lenig, Siddiqui and Veluswamy look forward to is using AI to instill in students what they have always sought: the ability to think creatively and critically to analyze issues and make effective decisions.
“When we teach people to be graphic designers, we’re teaching them to be critical thinkers and decision makers,” Lenig emphasizes. “While AI is certainly another tool in a student’s tool chest, AI doesn’t change what has always been central to our mission as educators.”
“Rather than reacting to generative AI like ChatGPT as a threat, instructors need to realize that our students will be working in a world that will feature AI in most jobs. We should, therefore, teach students how to use this technology effectively to enhance their critical thinking skills,” Imhoof says. “We have a unique opportunity to demonstrate how a liberal arts school like Susquehanna is the perfect place to figure out how to use AI as an extension of our skills, not as a replacement for them.”