
The new calculator. A better spell checker. The death of artistic individuality. Conversations surrounding artificial intelligence (AI) and ChatGPT have taken the national stage, as evidenced by the recent WGA strike and Biden’s new Executive Order on safe AI. Hamilton College now finds itself at a technological and social tipping point. At a college so focused on quality writing, it is vital to understand where Hamilton’s students and faculty stand on AI.
Currently, Hamilton has no college-wide regulation on the use of AI tools like ChatGPT. Academic departments have not taken a united stance for or against ChatGPT, leaving students and professors to wonder to what extent it can be used, what qualifies as cheating and whether others are using AI to assist with their work.
The Spectator spoke with professors and students about their thoughts on AI and its place at Hamilton. In sharing their honest opinions, many members of the community requested anonymity if quoted. Those requests reflect an apprehension toward the free discourse needed to reach a consensus on a matter that, for now, may invite retribution.
Much of the uncertainty comes from professors’ resistance to AI. One anonymous student ’26 explained: “I have professors on both sides of the spectrum. One professor says if we touch [ChatGPT], they’ll report it to the Honor Court. Another says…we are encouraged to use it; we just need to cite it.” Some students feel that this disconnect between professors creates a moral gray area around academic honesty. The Spectator asked a second anonymous student ’26 if they still use ChatGPT for assignments where it is not allowed, and they admitted: “I’ll use it to help brainstorm. Everyone does — at least my friends do.”
One anonymous STEM student ’24 shared an anecdote from a computer science course. “A bunch of kids used AI one night for our homework. The professor gave an in-class quiz the next day on the same set of code, and those same people couldn’t do it.” In light of new AI technologies, more in-person assessments may become the norm. “I think all STEM classes should just give more in-person closed-note exams,” he continued. That student still uses ChatGPT as a tool when coding outside the classroom. “I do feel like it’s an easier Google search. If I really can’t figure out a piece of code, I can look it up and use it as a reference. Maybe that’s not good, but I don’t use it to do my work for me,” the student said. “Using it just to do an assignment for you is really bad. It’s lazy.”
A history concentrator ’24 had a similar perspective. When asked if using ChatGPT is cheating, he responded, “If you are literally copying it, that’s unethical.” On the topic of Writing Intensive courses, he noted some of the harms of ChatGPT: “On the one hand, professors might have to take into account that people are using it. But the class itself was designed to help you develop as a writer, so you’re just sabotaging yourself.”
Stella O’Brien ’24 sees artificial intelligence as just another tool. When asked if she used ChatGPT, she said, “Yeah, all the time. I don’t think it counts as cheating when I use it to edit my grammar. It’s the same as Grammarly or spellcheck or any of those things.”
Assistant Professor of Government Ashley Gorham focuses on “hacktivism” and artificial intelligence in her research. In her classes, assignments have students revise a policy memo written by ChatGPT, editing the AI’s version to make it more personal and then reflecting on what the platform missed or misconstrued. Gorham said that she wants to help students understand the “hallucinations of ChatGPT,” or how to catch when the software confidently asserts things that are wrong.
Gorham also thinks more philosophically about the issues ChatGPT is posing within academia. In her opinion, ChatGPT forces us to question our academic motives. For example, Gorham asked, “Why do students take classes? Why do they write papers? Do they want to know something, or do they want to appear to know it?” She believes it raises important questions for professors as well, like “Why do we evaluate students the way we do? What are we trying to accomplish?” Although each professor might have their own responses to these questions, it seems that some Hamilton students are fine with only appearing to know what they are writing about. Another anonymous student ’24 explained, “I use ChatGPT on assignments that, to be honest, I wouldn’t have spent much time on regardless.”
Despite its commitment to an open curriculum, Hamilton requires its students to take three Writing Intensive courses while enrolled. According to the College’s website, Writing Intensive courses are important because they help students “learn to write in a clear, organized, and effective way.” AI’s ability to write clear and unique academic papers for students calls into question Hamilton’s claim to produce excellent and developed writers.
Miriam Lerner ’24, a Writing Center tutor, said, “Each person provides a different manner of writing, both through their smaller writing choices and through their ideas, both of which are, in part, formed from their lived experiences (including what they read, their conversations, etc.).” Lerner expressed doubt about AI’s ability to fully reproduce human writing: “Though AI can generate sentences that follow correct sentence structure and generate informed ideas, its writing lacks the humanity which makes writing truly meaningful.”
Rachel Budd ’25 started working at the Writing Center this past fall. She observes that the use of ChatGPT is becoming more prevalent on Hamilton’s campus, but says it does not affect her work as a tutor. “Something written by AI versus something written by a parent or a friend is the same to me. It’s not up to us to decide whether or not the work submitted is their own.”
While it is not the job of Writing Center tutors to decide if a student’s work is their own, the Hamilton College Honor Code states that academic dishonesty includes “the submission of work as one’s own that has been prepared by another person.” Whether artificial intelligence qualifies as ‘another person’ is unclear under Hamilton’s policy.
Since the invention of the internet, a variety of new technologies have emerged — both as helpful contextual tools and as a means of helping idle students do less work. Wikipedia, Google Translate, Grammarly, SparkNotes and Chegg, among others, have aided students for years. Another anonymous student ’25 admitted: “I’ve been using Shmoop and Google Translate since I was in middle school. To me, ChatGPT is just another tool to help me stay on top of all the work I have at Hamilton.”
One student ’25 reported that they and their roommates found ChatGPT so helpful that they purchased ChatGPT Plus for $20 a month. “We use it like crazy,” they said. “It doesn’t write our essays for us, but it’s really good for reorganizing notes into outlines. The paid version can search the Internet, so it’s helpful for research.” This paid version of ChatGPT has more advanced features, and some students find it more accurate. But at $240 a year, the subscription could contribute to economic inequality within higher education.
ChatGPT’s subscription cost may further serve as an economic barrier because many students use the tool to secure jobs. One anonymous student ’25 “uses it for cover letter templates and stuff,” hinting at ChatGPT’s professional utility. People in the workforce also rely on artificial intelligence to succeed in their respective jobs. Recent graduate Camille Donaghey ’23 currently works as a sales associate at Tomo. Donaghey said she uses ChatGPT every day while researching different residential areas throughout the nation. Donaghey explained, “I use ChatGPT to get a lot of information quickly about an area…It’s quicker and easier to get all the info from one platform rather than Googling different towns and trying to figure out which is better. Obviously you have to take what it’s saying with a grain of salt, but it has been really helpful for me so far.”
Although there are discrepancies in how Hamilton students use ChatGPT, almost every student interviewed had a personal understanding of its limits. Of the eleven students interviewed, none said that they would use ChatGPT to write an entire assignment with the intention to turn it in, regardless of their professor’s policies. Even off the record, The Spectator was hard-pressed to find any student who would admit that they had copied more than a few sentences from ChatGPT. With the varying opinions of Hamilton students and professors, perhaps this self-regulated use of AI is as close to a Hamilton policy as the College will get — for now.
**ChatGPT wrote the title of this article.**