Imagine if adaptive traffic lights started communicating with each other to regulate traffic better, or if an autonomous agent (an advanced form of artificial intelligence) told a supply chain manager the best time to ship an order. In a new study from the International Institute of Information Technology Bangalore (IIITB), researchers have developed a model called Computational Transcendence which can help autonomous agents perform such tasks responsibly.
Autonomous agents are systems or entities like drones, robots, vehicles etc., which can perform tasks and operate independently without any human intervention. Their applications lie in diverse domains including healthcare, agriculture, and mobility. Since the actions of such agents directly impact humans and other agents in the system, the researchers say the agents must act responsibly, understanding the implications of their actions on others.
The study, ‘Computational Transcendence: A Model for Emergent Responsible Agency in Multi-Agent Systems’, was authored by Srinath Srinivasa, professor and dean (R&D), IIITB, and Jayati Deshmukh, a research scholar at IIITB between 2020 and 2022, and was published in AI and Ethics this August.
“While the decision making of autonomous agents has matured, they still face problems when they have to interact with each other. There was an incident in San Francisco in the U.S. where many autonomous cars were trying to park at the same time, which led to collisions and traffic jams. It is difficult to hardcode responsible behaviour into agents, as such behaviours are typically modelled as external reinforcements,” said Prof. Srinivasa.
He further explained: “The innate responsibility in human beings is a result of our elastic identity, where we identify ourselves with things bigger than us. We experimented with many approaches with the agents, and our unique transcendental approach worked the best. This model endows agents with an elastic sense of self, enabling them to identify with other external entities of the system, which could be other agents or abstract notions. The agents will then consider these entities as part of their transcended sense of self and act responsibly.”
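One way to picture this elastic sense of self is as a utility an agent maximises that blends its own payoff with the payoffs of entities it identifies with, weighted by its degree of identification. The following is a hypothetical sketch of that idea; the function name, the dictionary-based formulation, and the example numbers are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical sketch of an "elastic identity" utility, inspired by the
# article's description of Computational Transcendence. The names and
# formulation here are illustrative, not the authors' exact equations.

def transcended_utility(own_payoff, identified_payoffs, identification):
    """Blend an agent's own payoff with payoffs of entities it identifies with.

    identified_payoffs: dict mapping entity name -> that entity's payoff
    identification: dict mapping entity name -> weight in [0, 1], where 0
        means no identification and 1 means the agent treats the entity's
        payoff as fully its own.
    """
    total = own_payoff
    for entity, payoff in identified_payoffs.items():
        total += identification.get(entity, 0.0) * payoff
    return total

# A purely self-interested agent ignores its neighbour's loss...
selfish = transcended_utility(10.0, {"neighbour": -8.0}, {"neighbour": 0.0})
# ...while a transcended agent internalises part of that loss, so an
# action that harms the neighbour looks less attractive to it.
caring = transcended_utility(10.0, {"neighbour": -8.0}, {"neighbour": 0.5})
print(selfish, caring)  # 10.0 6.0
```

Under this reading, responsibility emerges because harming an entity the agent identifies with directly lowers the agent's own perceived utility, rather than being imposed as an external constraint.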
So, what happens when these agents act responsibly? “Consider the adaptive traffic lights which we see in many cities, including Bengaluru,” says Prof. Srinivasa. “They have some sense of synchronisation and work effectively at the intersection where they are located. But they are not that effective on a larger scale, which would be the synchronisation of all such lights in the city. With a little sense of transcendence, the traffic lights will be able to communicate with their immediate neighbours and coordinate better,” he said.
He narrated another example from supply chains. “Suppliers usually ship orders in bulk to reduce costs. They face the dilemma of either making the customer wait for their products until more orders pile up, or shipping immediately and compromising on profits. An autonomous agent with transcendence can help the supplier arrive at a win-win situation for both the supplier and the customer.”
The authors believe this model offers a promising direction of research that can help design and build intrinsically responsible autonomous agents, which act responsibly because of their larger-than-self identity and not merely to satisfy constraints or obligations.
source/content: thehindu.com (headline edited)