In the 2004 dystopian action movie “I, Robot,” the main character (played by Will Smith) harboured a great deal of resentment towards advanced robotic assistants because of their inability to make complex moral decisions. In fact, as you find out through the course of the film, a robot had chosen to save his life, rather than that of a young girl, based on a logical calculation of their respective chances of survival in a catastrophic car accident. The point was simple: the decision-making power of robots will always be flawed because they lack the emotional capacity to make nuanced moral choices.
While a decade ago considering moral theory as it relates to robotics might have seemed like a futuristic thought experiment, today it has become a reality: the advent of self-driving cars presents unique moral challenges, particularly around the decisions a robotic car should make in the event of a crash.
The fact of the matter is that while self-driving cars promise more efficient traffic systems, fewer accidents and lower emissions, even robots will get into accidents. Autonomous vehicles will have to decide how to respond to those accidents, and in doing so decide who might be injured in them: passengers or pedestrians.
It is a moral dilemma currently facing the autonomous vehicle industry, and one whose resolution will need to be programmed into forthcoming self-driving cars.
In the split second of an automobile accident, a driver may make any number of instantaneous moral decisions, particularly related to the safety of passengers, pedestrians, and self. Not only that, but repeat the same accident scenario 1,000 times with 1,000 different people, and you might get 1,000 different outcomes, each person making unique instinctual choices (as far as it’s possible to make “choices” in such instances).
In fact, as you read this you might think your immediate response would be naturally altruistic, that you would look to save others before yourself; or perhaps you’re more concerned about you and yours, thinking of personal safety first and foremost. Say what you will about one’s innate propensity towards either end, these are decisions that we make, and thus they’ll need to be decisions that autonomous vehicles make as well.
But here’s the rub: according to new research from the University of California, when people were asked whether self-interest or the public good should predominate when programming moral principles into self-driving cars, most approved of the concept of a self-driving car sacrificing a passenger (or passengers) to save others, yet those same people would rather not purchase or ride in such vehicles. Or, to put it another way, “participants were less likely to purchase a self-driving car that would sacrifice them and their passengers.”
“Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs,” the study authors said.
Simply put, people like the idea of self-sacrifice, but they have trouble when it might be demanded of them.
Going forward it will be interesting to see how the automotive and technology industries meet these unique challenges, finding ways to effectively meet the seemingly incompatible objectives of: responding consistently, not causing public outrage, and not alienating buyers. Given the outcome of the study mentioned above, finding an algorithm that aligns with complex and nuanced (and not to mention, fluctuating) human values will be challenging indeed.
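To make the tension in the study concrete, here is a minimal, entirely hypothetical sketch of the two policies at odds. The function names, labels, and numbers are invented for illustration; no manufacturer’s actual crash logic looks like this.

```python
# Hypothetical toy model of two crash-response policies an AV could be
# programmed with. All names and numbers are invented for this sketch.

def utilitarian_choice(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Minimize total expected harm, even at the passengers' expense."""
    if pedestrians_at_risk > passengers_at_risk:
        return "protect_pedestrians"
    return "protect_passengers"

def self_protective_choice(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Always protect the occupants, regardless of who is outside."""
    return "protect_passengers"

# The study's finding in miniature: with 1 passenger and 3 pedestrians,
# the two policies disagree -- and buyers preferred the second.
print(utilitarian_choice(1, 3))      # -> protect_pedestrians
print(self_protective_choice(1, 3))  # -> protect_passengers
```

The point of the sketch is that each policy is trivially consistent on its own; the hard part the study identifies is that the public endorses the first function while preferring to ride in cars running the second.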
As brain-controlled robots enter everyday life, an article published in Science states that now is the time to take action and put in place guidelines that ensure the safe and beneficial use of direct brain-machine interaction.
Accountability, responsibility, privacy and security are all key when considering ethical dimensions of this emerging field.
If a semi-autonomous robot did not have a reliable control or override mechanism, a person might be considered negligent if they used it to pick up a baby, but not for other less risky activities. The authors propose that any semi-autonomous system should include a form of veto control—an emergency stop— to help overcome some of the inherent weaknesses of direct brain-machine interaction.
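The veto control the authors propose can be sketched in a few lines. This is an illustrative toy, not a real robotics control stack; the class and method names are assumptions made for the example.

```python
import threading

class VetoController:
    """Toy sketch of the proposed 'veto control': every decoded brain
    command is executed only if a human-reachable emergency stop has
    not been triggered. Names are hypothetical, for illustration only."""

    def __init__(self) -> None:
        # threading.Event gives a simple, thread-safe latch for the stop.
        self._stopped = threading.Event()

    def emergency_stop(self) -> None:
        """Human override: veto all further commands."""
        self._stopped.set()

    def execute(self, command: str) -> str:
        if self._stopped.is_set():
            return "vetoed"          # command blocked by the override
        return f"executing {command}"

robot = VetoController()
print(robot.execute("grasp_cup"))    # -> executing grasp_cup
robot.emergency_stop()
print(robot.execute("grasp_cup"))    # -> vetoed
```

The design choice worth noting is that the stop is a latch, not a per-command check by the decoder itself: once tripped, it blocks everything until a human resets the system, which is exactly the property an override needs when the decoded brain signal cannot be fully trusted.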
Professor John Donoghue, director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, said: “Although we still don’t fully understand how the brain works, we are moving closer to being able to reliably decode certain brain signals. We shouldn’t be complacent about what this could mean for society. We must carefully consider the consequences of living alongside semi-intelligent brain-controlled machines and we should be ready with mechanisms to ensure their safe and ethical use.”
“We don’t want to overstate the risks nor build false hope for those who could benefit from neurotechnology,” he added. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field.”
An electroencephalography (EEG) cap measures brain activity on a study participant. (Image courtesy of the Wyss Center.)
Protecting biological data recorded by brain-machine interfaces (BMIs) is another area of concern. Security solutions should include data encryption, information hiding and network security. Guidelines for patient data protection already exist for clinical studies but these standards differ across countries and may not apply as rigorously to purely human laboratory research.
Professor Niels Birbaumer, senior research fellow at the Wyss Center, said: “The protection of sensitive neuronal data from people with complete paralysis who use a BMI as their only means of communication is particularly important. Successful calibration of their BMI depends on brain responses to personal questions provided by the family (for example, “Your daughter’s name is Emily?”). Strict data protection must be applied to all people involved; this includes protecting the personal information asked in questions as well as the protection of neuronal data to ensure the device functions correctly.”
The possibility of ‘brainjacking’ – the malicious manipulation of brain implants – is a serious consideration, say the authors. While BMI systems that restore movement or communication to paralysed people may not seem an appealing target, that could depend on the status of the user: a paralysed politician, for example, might be at increased risk of a malicious attack as brain readout improves.
Please read the source story:
We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.
Great article: Will Democracy Survive Big Data and Artificial Intelligence?