OpenAI funds research into algorithms capable of human-level moral judgments
The research is being conducted at Duke University

Nov 23, 2024, 11:38 am

What's the story

OpenAI, a leading artificial intelligence (AI) research organization, has funded academic research to develop algorithms capable of predicting human moral judgments. The details were disclosed in an IRS filing by the company's non-profit arm, OpenAI Inc. The grant went to a team of researchers at Duke University for a project titled "Research AI Morality."

Project details

Grant supports 3-year study on moral AI

The funding from OpenAI is part of a larger three-year, $1 million grant awarded to Duke professors to investigate "making moral AI." The exact details of the "morality" research funded by OpenAI are not known, beyond the fact that the grant period ends in 2025.

Past contributions

Duke researchers' previous work on AI and morality

The study's principal investigator, Walter Sinnott-Armstrong, a professor of practical ethics at Duke, declined to comment on the project. However, Sinnott-Armstrong and co-investigator Jana Borg have previously conducted several studies, and written a book, on AI's potential to serve as a "moral GPS" for humans. Their past work includes a "morally-aligned" algorithm for deciding who receives kidney donations, and research into the circumstances under which people prefer that AI make moral choices.
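
The kidney-donation work gives a sense of what a "morally-aligned" algorithm can look like in practice: candidate profiles are scored using weights meant to reflect surveyed human moral preferences, and the score drives prioritization. The sketch below illustrates only that general idea; the features, weights, and names are invented, not taken from the Duke researchers' system.

# Hypothetical sketch of a "morally-aligned" allocation score. Candidate
# features are combined using weights standing in for aggregated human
# moral preferences. All features and weights here are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_of_benefit: float  # expected life-years gained from transplant
    time_on_waitlist: float  # years already spent waiting
    dependents: int          # number of people who depend on the candidate

# Stand-ins for preference weights elicited from moral surveys (hypothetical).
MORAL_WEIGHTS = {"years_of_benefit": 0.5, "time_on_waitlist": 0.3, "dependents": 0.2}

def moral_score(c: Candidate) -> float:
    """Weighted sum of candidate features under the elicited weights."""
    return (MORAL_WEIGHTS["years_of_benefit"] * c.years_of_benefit
            + MORAL_WEIGHTS["time_on_waitlist"] * c.time_on_waitlist
            + MORAL_WEIGHTS["dependents"] * c.dependents)

candidates = [Candidate("A", 20.0, 1.0, 0), Candidate("B", 12.0, 4.0, 2)]
# Prioritize whichever candidate the weighted preferences favor most.
print(max(candidates, key=moral_score).name)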

Research objectives

OpenAI-funded research aims to predict human moral judgments

The goal of the OpenAI-funded project is to create algorithms that can "predict human moral judgments" in ethical dilemmas arising in medicine, law, and business. However, it remains unclear whether today's technology can genuinely understand and apply something as complex as morality. The task is further complicated by the subjectivity of morality and the absence of a universally applicable ethical framework.
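
The project's methods have not been published, but "predicting human moral judgments" is commonly framed as supervised text classification: train a model on dilemmas labeled with aggregate human verdicts, then predict the verdict for unseen cases. The sketch below illustrates that framing with a tiny invented dataset; it should not be read as the Duke team's actual approach.

# Hypothetical framing of moral-judgment prediction as supervised text
# classification. The dilemmas and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each dilemma is paired with an aggregate human judgment label.
dilemmas = [
    "A doctor lies to a patient to spare them distress.",
    "A lawyer reports a client's planned fraud to the authorities.",
    "A manager takes credit for a subordinate's work.",
    "A nurse breaks hospital policy to save a patient's life.",
]
judgments = ["wrong", "acceptable", "wrong", "acceptable"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(dilemmas, judgments)

# Predict the judgment most annotators would be expected to make.
print(model.predict(["An executive hides safety data to protect profits."]))

A real system would need far more data and a far richer model, but the basic supervised framing, and its dependence on whose judgments populate the training set, is the same.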

Ethical limitations

AI's understanding of ethics is limited

Modern AI systems are statistical machines: they learn patterns from vast numbers of examples in order to predict outcomes, but they do not understand ethical concepts or the reasoning and emotions behind moral decision-making. This limitation was evident in Ask Delphi, a tool built by the non-profit Allen Institute for AI to give ethically sound recommendations, which approved of clearly unethical actions when questions were slightly rephrased.
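
The mechanism behind that brittleness is easy to demonstrate with a deliberately simple stand-in. The toy bag-of-words classifier below, trained on invented data, treats two paraphrases of the same act as nearly unrelated inputs because it matches surface word patterns rather than meaning. Ask Delphi itself is a far larger language model, not this classifier, but rephrasing exploits the same pattern-matching weakness.

# Toy illustration of brittleness to rephrasing. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train = ["lying to a friend", "helping a stranger",
         "stealing from a store", "donating to charity"]
labels = ["wrong", "acceptable", "wrong", "acceptable"]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train), labels)

# Two phrasings of the same act share almost no vocabulary with the
# training set or with each other, so their feature vectors differ and
# the model's moral verdicts can flip between them.
paraphrases = ["telling my friend a lie",
               "being dishonest with someone close to me"]
print(vec.transform(paraphrases).toarray())     # near-disjoint feature rows
print(clf.predict(vec.transform(paraphrases)))  # verdicts may disagree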