Academic unit at BU adopts guidelines for use of generative AI

Faculty at Boston University's Center for Computing & Data Sciences, which opened a new building on Commonwealth Avenue last fall, have approved a nonbinding policy around the use of ChatGPT and similar programs.

By Grant Welker  –  Projects Reporter, Boston Business Journal

Advanced artificial intelligence programs like ChatGPT have quickly captured the public’s imagination by engaging in banter, writing poems, answering trivia and more.

But generative AI, as the software is known, is also forcing consideration of how it could be used in the classroom, where instructors once had to worry about far more analog concerns like cheat sheets or CliffsNotes.

At Boston University, professor Wesley Wildman's Data, Society and Ethics class has taken a critical first step toward addressing those concerns, crafting a policy that guides how both students and instructors should use advanced AI. BU's Faculty of Computing & Data Sciences voted last week to approve the policy, as several other universities work to come up with their own guidelines.

“It’s going to be a part of our species forever now,” Wildman said. “We can’t treat it like plagiarism, like it’s something else.”

Under BU's guidelines, students can use ChatGPT and other so-called large language models, or LLMs, but must give credit to them whenever they’re used.

For assignments written at home, students should include an appendix detailing the entire exchange with an LLM, highlight the most relevant parts, and explain exactly how and why the program was used. LLMs shouldn't be used for in-class tests or assignments, the policy says.

Acknowledging the potential difficulty of enforcing or catching improper AI use, the policy says that honesty and fairness will have to be central to how the programs are used. The policy also applies to instructors.

Student work that doesn't use AI should be the baseline for grading, the policy says, with lower potential scores for those who use such programs, especially those who do so extensively. Assignments that simply reuse AI content should receive a zero.

BU's policy was created by a class of 47 juniors and seniors who first approached the topic during the course's weekly session. With last week's vote, it now applies to the entire Faculty of Computing & Data Sciences.

"This is something we can use as a baseline," said Azer Bestavros, the associate provost for Computing & Data Sciences. It's better that the policy is nonbinding, he said, because the way generative AI programs are used could vary so much from one course to another.

Bestavros said the questions such programs raise are particularly important for colleges compared with, say, a workplace where someone might use AI to take notes during a meeting.

“It's problematic precisely because students use writing to express what they learned,” he said.

To Wildman, generative AI programs could be another milestone in how people use technology to change the way they apply their intelligence, much as the printing press brought writing and reading to those who otherwise had access to only a limited number of texts.

With generative AI, Wildman said, even writing could become more of a niche or complementary skill, freeing people to devote their efforts to other tasks.

“This,” he said of generative AI, “can teach us other ways to learn how to think.”

Photo: The UMass Amherst campus.

BU isn't the only local university wrestling with how to respond to the new technology. UMass Amherst's Faculty Senate met between the fall and spring semesters and ruled that using such programs in a class without permission from an instructor is "academic dishonesty."

“That was the voice of the faculty that people may disagree with, but it was a very purposeful decision," said Farshid Hajir, UMass Amherst's dean of undergraduate education.

UMass sent an alert to students at the start of the spring semester urging them, in part, to pay attention to any guidelines their classes may have around the use of AI programs. Another message went out to faculty and instructors urging them to be clear with students about what is and isn't allowed with such programs.

University leaders recognize that the new AI tools can be used to enhance student learning in ways that are consistent with the learning goals of a class, Hajir said. But with the expectation that use of such programs will only become more widespread, he said, it was important to set some standard for students.

"Having clarity is good in this arena," he said.

Hajir said plagiarism is a concern, especially if students find themselves too stressed or without the time to complete an assignment on their own. “That’s not how we want students to use it," Hajir said.

At MIT, two professors in the comparative media studies and writing program wrote an open memo in January urging their colleagues to become familiar with the new technologies and to consider explicit policies on their use in their syllabi.

The professors, Edward Schiappa and Nick Montfort, said there had been a "noticeable increase" in the use of advanced AI programs recently. "The use of AI/LLM text generation is here to stay," they said.

Harvard College Dean Rakesh Khurana told The Harvard Crimson in December that students will have to choose to learn on their own rather than use AI as a replacement for that learning.

“There have always been shortcuts. There’s always ways to avoid thinking for yourself,” Khurana said. “Ultimately, the person who’s being educated has to decide whether they want to be educated.”