
While the use of artificial intelligence in academic settings remains a topic of debate, two Hamilton professors are using AI technology in innovative ways to enhance their students’ learning experience.
Ty Seidule, visiting professor of history and executive director of Common Ground, is teaching a class this semester called “Common Ground,” in which students learn about the history of free speech in America through practicums. Seidule calls the course “a laboratory of civil discourse” where students learn to disagree agreeably through exercises focused on the U.S. Constitution, including role-playing games and an artificial intelligence large language model (AI LLM) that emulates the founding fathers.
Partnering with Lisa McFall, director of digital initiatives, scholarship, and collaboration, and Douglas Higgins, digital scholarship technologist, Seidule is developing an AI LLM for his class so students can debate the founding fathers.
Seidule first thought of this idea in a discussion with President Steven Tepper about the What If Initiative. When the question of “What if we could talk to Alexander Hamilton?” was brought up, Seidule reached out to McFall and Higgins in Library & IT Services (LITS) to bring the idea to life.
The first trial in this process involved building a persona within ChatGPT. The LITS team initially created personas for Alexander Hamilton and Thomas Jefferson. McFall says the team entered, “Pretend that you are Thomas Jefferson,” into ChatGPT and worked from there to fine-tune the persona based on the knowledge the historical figure would have had in that time period.
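For readers curious what persona prompting looks like in practice, the approach described above can be sketched in a few lines of code. This is a minimal illustration, not the LITS team's actual implementation: the helper function, its wording, and the idea of constraining answers to the figure's lifetime are assumptions based on the article's description.

```python
def build_persona_messages(figure: str, question: str) -> list[dict]:
    """Build a chat-message list that asks a language model to role-play
    a historical figure, limited to knowledge from the figure's own era.
    (Illustrative only; the actual class personas were tuned by hand.)"""
    system = (
        f"Pretend that you are {figure}. Answer in the first person, "
        f"using only knowledge available to {figure} during their lifetime."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The resulting messages would then be sent to a chat model, e.g. with
# the OpenAI Python client (requires an API key, so not run here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4o",  # model choice is an assumption
#       messages=build_persona_messages(
#           "Thomas Jefferson", "What do you think a civil liberty is?"
#       ),
#   )
```

Fine-tuning a persona, as the team did, then amounts to iterating on that system text and, as described below, grounding it in the figure's own writings.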
The team ran a workshop for Common Ground student ambassadors to test the prototype before introducing it to the class. McFall says they have been working with Seidule and his students to introduce additional founding fathers by training AI on writings done by these historical figures and knowledge of the Constitutional Convention of 1787.
One Common Ground senior fellow, Paige McKenzie ’25, reflected on the benefits of this learning model. “Having students debate with the chatbot, ask questions, and even challenge their own perspectives based on how these figures might respond in today’s world is an excellent way to encourage critical thinking and reflection. This process not only helps students understand history but also invites them to rethink how historical figures’ beliefs and actions would play out in the modern context,” she said. “This type of learning experience ties into Common Ground’s mission, where we emphasize the importance of engaging in productive dialogues and learning to disagree in healthy, constructive ways,” McKenzie added.
Toward the end of the semester, McFall and Higgins will visit the class to teach students how to prompt answers from the AI tools. Students will then use the technology to simulate discourse with the founding fathers, engaging the AI-generated chatbots with questions such as, “What do you think a civil liberty is?”
Seidule says he looks forward to his students gaining an understanding of how the founding fathers held differing opinions on many issues of their time, such as slavery and the Second Amendment right to bear arms. The last week of the semester will be dedicated to reflecting on the use of the chatbots and capturing the lessons learned in the process.
“I had no expertise in [AI-generated chatbots] prior to this,” says Seidule. “It’s been great to have a long-term partnership with LITS throughout this process.”
The development of the founding father AI chatbots has allowed for “a combination of history and cutting edge technology, which is very cool,” says Seidule. “Students are excited about using AI in this way.”
Meanwhile, Margaret Gentry, the Margaret Bundy Scott Professor of Women’s and Gender Studies, is using AI tools in her Women’s and Gender Studies class to analyze how AI visually portrays different social groups in response to prompts.
“I’m trying to get students to analyze what goes into the creation of the material that AI produces (voices, images, written descriptions) and what it is producing back,” she said.
She leads her class in an exercise in which she uses ChatGPT or another AI image-generation tool to input a request such as “Create images of beautiful women.” When the images appear, the class analyzes them in a discussion centered on societal beauty standards, gender roles, and biases. Gentry noted that when prompted to display “beautiful women,” ChatGPT produces images exclusively of young, thin, light-skinned women wearing V-necks that show cleavage. When prompted with “Create images of handsome men,” however, it produces images of men across a wider age range, all of them fully clothed.
“We talk about the fact that a higher percentage of the images that go into AI for women come from sexualized material, or pornography, than happens for men,” she said.
She challenges the class to talk through questions such as, “What do these images tell us about how society views beauty?” and “What does that tell us about the images AI is drawing from to create these images?” The class also discusses how AI-generated images reinforce existing stereotypes, or create new ones, for the people consuming AI output.
Students have the option of writing a follow-up paper responding to one of several possible prompts, such as replicating the class exercise with a different social category, analyzing how AI-generated images affect women and people in marginalized communities, or studying how AI perpetuates biases.
Gentry also examines AI in her seminar on aging, where the class looks at robots and caregiving AI voice assistants for the elderly. The class discusses why these robotic and digital assistants usually use a female voice, and how that connects to the societal expectation that care is provided by women.
Gentry says these exercises have sparked insightful class discussions and have provided her and her students with a deeper understanding of the ethical issues surrounding the use of AI.
“When we use image creators or read an article that has used image creators, it gives a visual reality to it,” she said. “I recognize the benefits of AI, but I’m also deeply concerned about how we are replicating certain kinds of stereotypes and how that limits us.”