Harvard University To Experiment With Chatbots This Fall

David J. Malan, a computer science professor at Harvard University who teaches an extremely popular introductory course on the subject, will be using an AI bot this fall.

Malan will attempt to set the parameters of the bot so that it is useful only in certain areas, a clear effort to thwart academic dishonesty.

As Malan told the Harvard Crimson: “Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually.”

Malan’s workload is so intense that even though he has dozens of teaching assistants, there simply isn’t enough time for him to devote to students taking the class in person, let alone the millions taking the course online. One consistent fear among professors about using artificial intelligence in the classroom has been an acceleration of plagiarism, because the programs have become capable of writing at a level that occasionally mimics human complexity of thought.

Researchers from Harvard University recently signed a petition calling for a pause on any artificial intelligence models more advanced than GPT-4, the newest model from OpenAI, the company behind ChatGPT.

This caution seems to align with Malan’s goal of making the bot one tool among many that students can access. Eventually, Malan wants to use artificial intelligence to assist him in grading student work so that he can direct his teaching assistants’ and teaching fellows’ attention toward working with students personally.

In a statement delivered to the Harvard Crimson, Malan argued that programs like ChatGPT and GitHub Copilot are too helpful for the purposes of his class. Malan also noted in his email that students have always been able to access information in creative ways, and that there is no better way to teach ethics than to let students grapple with questions of how to use technology responsibly in a real-world setting.

Malan and his team are currently beta-testing the program during summer school classes and hope to deploy a fully realized version in the fall semester.

ChatGPT and other programs like it are large language models; that is, they are programs trained on vast amounts of text from the internet to predict the next word that should follow a given prompt. They are generally capable of writing human-like sentences, and as they are trained and refined on more data and feedback, they become more effective at mimicking our communication.
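For illustration only, and not a description of how ChatGPT or Harvard’s bot is actually built, the toy Python sketch below shows the core idea of next-word prediction using a simple bigram model over a tiny, made-up corpus; real large language models use neural networks trained on vastly more text.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the web-scale text real models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat" (appears twice after "the")
print(predict_next("cat"))  # -> "sat" ("sat" and "ate" tie; first seen wins)
```

The same principle, predicting the most likely continuation of a sequence, scales up in real systems to whole sentences and paragraphs rather than single words from a handful of examples.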

In just a couple of years, there has been an explosion of chatbot technology, and not just in the academic sector. There have been attempts to use AI to create music, write television programs, and, in at least one case, preach a sermon.

These rapid developments have prompted technology leaders such as Tesla’s Elon Musk to call for a six-month pause in the development of advanced artificial intelligence systems. Musk and other technology magnates signed a petition published by the Future of Life Institute, an organization dedicated to researching the social impact of technology.

As the Boston Globe reported, the petition primarily concerns itself with creating safeguards for advanced artificial intelligence programs and establishing oversight to ensure these developments do not outpace our ability to control them.
