“The mediator between the head and the hands must be the heart,” says Maria, the working-class advocate turned machine in Fritz Lang’s 1927 film, “Metropolis.”
Considered the earliest film depiction of a robot, Maria lends her name to West Virginia University professor Jaime Banks’ latest research.
“Some have interpreted this sentiment to represent a moral reconciliation between the masters of Metropolis and the rebelling labor class, which is reflective of master-laborer relationships between humans and machines,” said Banks, an assistant professor in the Department of Communication Studies. “Others think the line shows Maria herself is the mediating heart, having been both human and machine (metaphorically, the mind and the hands). In either case, it’s a fitting name for a study investigating how people may see robots as moral beings.”
The MARIA Project, or Moral Agency in Robot-human InterActions, is funded by a three-year, $730,000 award from the U.S. Air Force Office of Scientific Research. Through the project, Banks is investigating people’s ideas about machines’ moral agency, meaning the ability to consider right and wrong and act on that consideration. The project grew out of her extensive work on player-avatar relationships in video games.
“In player-avatar relationships, there is a spectrum of moral agency between ‘I am in charge of what happens in the game’ and ‘My avatar is in charge according to their objectives and backstories,’” Banks said. “Although players do a lot of arguably immoral stuff in games, a lot of times when people don’t want to feel like they’ve done it, they engage in this distancing where they say, ‘I didn’t do that. My avatar did that.’ But I started to wonder how this jockeying of moral responsibility might change when we can’t take full control over our machines and may not have similar options in accepting or offloading responsibility in our joint activities.”
Over the next three years, Banks will conduct 11 studies that combine survey, experimental and observational data about people’s in-person interactions with a full-size humanoid robot.
“The studies will focus on discovering how some of our intuitive processes play a role in our perception of moral agency, including where our ideas about robots come from, how our natural reactions to robot behaviors govern our perceptions, and the effects of those perceptions,” Banks said.
This research will support the AFOSR Trust and Influence Program’s goal, per the program’s website, of advancing basic understanding of “properly calibrated trust” in mixed human-machine teams.
“Although most robots aren’t yet fully autonomous and integrated into everyday life, they’re on their way—think about Siri or Alexa or the Roomba. Science fiction has a pretty good track record of telling us what’s coming next,” Banks said. “If films like ‘Ex Machina’ or games like ‘Detroit: Become Human’ are on the right track, it’s important to understand the dynamics of how we see them as ‘good’ or ‘bad’ and what those perceptions mean for how we may or may not trust them and see them as partners in our daily activities.”