Dartmouth College went all-in on AI. Then came the tension.

Dartmouth leaders are not mandating the use of AI, yet critics on campus say they’ve had little opportunity to adapt to what they see as a massive change to how the college works. At stake, they say, is the tight-knit academic culture Dartmouth has nurtured in rural New Hampshire over nearly three centuries as a school known for its focus on the liberal arts, its small student body, and its deeply personal style of instruction. It’s the only Ivy League institution with “college,” rather than “university,” in its name.
“There is no escaping [AI], and they have to figure out how to use it wisely,” said Charles Fadel, the Boston-based founder of the nonprofit Center for Curriculum Redesign. “The hard part is to accept that you are going to lose something.”
The recent efforts at Dartmouth — touted as the birthplace of AI, thanks to a seminal 1956 research conference on campus — largely began with James Dobson. The English professor was appointed as the special adviser to the provost on AI and drafted a report last year on the adoption of the technology. It recommended investing in data infrastructure and a partnership with a company like Anthropic — creator of the chatbot Claude — that would gradually integrate the tech into most facets of campus life.
The report also underscored how far Dartmouth has to go. Dobson’s own department has integrated AI into most first-year writing courses, where students sometimes compare close readings of scholarly articles to artificial intelligence-generated summaries. But he said the majority of faculty still ban the use of generative AI in their syllabuses, a “totally unenforceable” measure, he added. And a survey showed that over half of participating professors had not changed their assessments to reflect AI as of last summer.
For roughly a year now, some Dartmouth professors have been on edge as administrators faced criticism for accommodating the Trump administration’s efforts to overhaul universities. The college is the only Ivy League school not facing federal allegations of antisemitism, and Dartmouth president Sian Beilock chose not to sign a letter from universities last spring deriding the government’s interference in higher education.
Sian Beilock, president of Dartmouth College, spoke during a panel at the Globe Summit in Boston in November. Ben Pennington/for The Boston Globe
“Faculty feel in many ways under pressure politically right now. There is a sense of a lack of autonomy everywhere in the US,” Dobson said. “It’s unsettled us.”
Now some professors are unnerved by the incursion of AI, worried that Dartmouth is giving in to pressure from tech companies that hope to market their products to students, without fully understanding the long-term ramifications.
Those corporations are in an “arms race” to provide technology on campuses, said Sydney Saubestre, a senior policy analyst at New America’s Open Technology Institute.
The two-year deal with Anthropic — as well as Amazon Web Services, which gives Dartmouth access to its AI platform Bedrock — was officially announced days before the December break. It dictates that Dartmouth — which has nearly 7,000 students and more than 700 faculty members — pay an undisclosed sum annually to the company in exchange for 7,300 “Claude for Education” licenses. A pilot rollout of 150 licenses began in mid-February.
In January, the Dartmouth provost called the deal a “bargain” at a faculty meeting, two professors told the Globe. Dartmouth spokesperson Jana Barnello reiterated in a statement that the agreement came with “a steep educational discount, using central funding” that does not come from the academic budget.
“With limited federal funding and research budgets under pressure everywhere, if we can provide access more efficiently and more cheaply for faculty, that’s a win,” said Dean Madden, the college’s vice provost for research.
Students at Dartmouth are mixed about the proliferation of AI on campus.
Dartmouth sophomore Owen Gallagher said the technology is “more of an asset than a detriment” in his engineering courses.
But others worry about the harmful effects of AI, including its impact on the environment or how it might limit their ability to learn at a pricey institution.
Sophomore Pari Sidana said she has “been trying to avoid using AI as much as possible” in her government and English courses.
“What I’m getting out of a reading is very different from what a chatbot is telling me,” Sidana said.
Dartmouth chose Anthropic among the “flavors” of generative chatbots available because it has touted itself as being a leader in “the ethical use of AI,” Madden said.
The Dartmouth Green in May 2024. Cheryl Senter
But the announcement of the partnership came shortly after Anthropic settled a class action lawsuit accusing the company of training its Claude models on copyrighted books, agreeing to pay $3,000 per book to claimants, including about 130 authors who work at Dartmouth, the college’s student newspaper reported. Beilock, the president, is among those whose work was allegedly lifted by the company.
“This was a decision that was made for problematic reasons that have something to do with courting the donor class, pleasing the board, and the tech euphoria we are living in right now where everybody is acting like you either jump on the bandwagon, or you’re left behind in the brave new world,” said Mary K. Coffey, a Dartmouth art history professor and a claimant in the Anthropic lawsuit.
Barnello, the spokesperson, said the partnership “does not mean Dartmouth endorses every decision the company has made,” but rather “means we see strategic value in shaping how AI develops in education, rather than being shaped by it.”
Students and faculty across campus have free access to Anthropic’s Claude, Microsoft Copilot, and OpenAI’s GPT models, and professors who implement AI into their classrooms can apply for $1,000 grants.
“It’s capturing our everyday lives on campus in a way that is remarkable,” said Molly Geidel, a women’s studies professor who started a group that is resisting the speedy adoption of AI on campus. “It’s all totally optional. Yet it feels like this over-the-top promotion from these upper administrators.”
Peter Chin, an engineering professor who is co-chairing the faculty leadership group on AI, said that by striking partnerships and taking a lead in teaching students about AI, Dartmouth is helping to better prepare its students.
“It’s incumbent upon us as a school to think about how you should use these tools,” said Chin, who is also the father of a junior at Dartmouth. He added that the goal is to use AI to “enhance their ability and their cognition, never as a replacement.”
One example is Evergreen, a chatbot that Dartmouth students and faculty are developing to help students seeking mental health support find resources and navigate campus life, said Lisa Marsch, the founding director of Dartmouth’s Center for Technology and Behavioral Health and the project lead.
Although the app is still two years from launching, it is already at the center of a controversy.
The student newspaper, The Dartmouth, reported in January that the communications office approached Teddy Roberts — a student who works on Evergreen — about publishing a November op-ed that praised the AI project. Roberts later told The Dartmouth that communications officers edited the piece before submission and that Evergreen paid him for writing the op-ed.
Members of the Dartmouth College Marching Band performed outside Harvard Stadium in November before the Big Green’s matchup against Harvard. Erin Clark/Globe Staff
“The story in our mind was that the college was not transparent with us and took advantage of our goodwill,” Charlotte Hampton, The Dartmouth editor-in-chief, said in an interview.
The Dartmouth affixed an editor’s note to the November op-ed that said it “no longer meets our editorial standards.”
Roberts declined multiple requests for comment. Barnello said the handling of the Evergreen article reflected the college’s mandate to support and promote official programming.
Marsch added that after the story ran, Evergreen saw a bump in applications for its undergraduate research staff, which currently totals about 130 students. She sees that reaction as evidence that students want to be part of figuring out how to implement this technology.
“Dartmouth is really trying to figure this out and lean into it,” Marsch said, “and not just pretend it’s not there.”
Aidan Ryan can be reached at [email protected]. Follow him @aidanfitzryan. Diti Kohli can be reached at [email protected]. Follow her @ditikohli_.




