When Morals Meet Machines: Should Self-Driving Cars Favour Passengers or Pedestrians in a Crash?

In the 2004 dystopian action movie “I, Robot,” the main character (played by Will Smith) harboured a great deal of resentment towards advanced robotic assistants because of their inability to make complex moral decisions. As you find out over the course of the film, a robot had chosen to save his life, rather than that of a young girl, based on a logical calculation of their respective chances of survival during a catastrophic car accident. The point was simple: the decision-making power of robots will always be flawed because they lack the emotional capacity to make nuanced moral choices.

While a decade ago considering moral theory as it relates to robotics might have seemed like a futuristic thought experiment, today it is a reality. The advent of self-driving cars presents unique moral challenges, particularly around what decisions robotic cars should make in the event of a crash.

The fact of the matter is that while self-driving cars promise more efficient traffic systems, fewer accidents and lower emissions, even robots will get into accidents. Autonomous vehicles will have to decide how to respond to those accidents, including who might be injured in them: passengers or pedestrians.

It is a moral dilemma currently facing the autonomous vehicle industry, and one that will need to be resolved and programmed into forthcoming self-driving cars.

In the split second of an automobile accident a driver may make many instantaneous moral decisions, particularly related to the safety of passengers, pedestrians, and self. Not only that, but repeat the same accident scenario 1000 times with 1000 different people, and you might get 1000 different outcomes, each person making unique instinctual choices (as far as it’s possible to make “choices” in such instances).

In fact, as you read this you might think your immediate response would be naturally altruistic, that you would look to save others before yourself; or perhaps you’re more concerned about you and yours, thinking of personal safety first and foremost. Say what you will about one’s innate propensity towards either end, these are decisions that we make, and thus they’ll need to be decisions that autonomous vehicles make as well.

But here’s the rub: according to new research from the University of California, when people were asked whether self-interest or the public good should predominate when programming moral principles into self-driving cars, most approved of the concept of a self-driving car potentially sacrificing a passenger (or passengers) to save others, yet those same people would rather not purchase or ride in such vehicles. Or, to put it another way, “participants were less likely to purchase a self-driving car that would sacrifice them and their passengers.”

“Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs,” the study authors said.

Simply put, people like the idea of self-sacrifice, but they have trouble when it might be demanded of them.
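The two attitudes described in the study can be caricatured as two decision rules. The toy sketch below is purely illustrative, not anything an AV maker has published; the function names, the “swerve”/“stay_course” actions, and the idea of reducing the dilemma to casualty counts are all assumptions made for the example:

```python
# Toy contrast between a "utilitarian" and a "self-protective" policy
# for the crash dilemma discussed above. Here "swerve" endangers the
# car's own passengers, while "stay_course" endangers pedestrians.

def utilitarian_policy(n_passengers: int, n_pedestrians: int) -> str:
    # Minimise total casualties: sacrifice the smaller group,
    # even when that group is the car's own occupants.
    return "swerve" if n_passengers < n_pedestrians else "stay_course"

def self_protective_policy(n_passengers: int, n_pedestrians: int) -> str:
    # Protect the occupants at all costs, whatever the count outside.
    return "stay_course"

print(utilitarian_policy(1, 5))       # one passenger vs five pedestrians
print(self_protective_policy(1, 5))   # same scenario, occupant-first rule
```

The study’s finding maps directly onto this contrast: participants approved of others riding in cars running the first rule, but preferred to buy cars running the second.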

Going forward it will be interesting to see how the automotive and technology industries meet these unique challenges, finding ways to satisfy the seemingly incompatible objectives of responding consistently, not causing public outrage, and not alienating buyers. Given the outcome of the study mentioned above, finding an algorithm that aligns with complex, nuanced (and, not to mention, fluctuating) human values will be challenging indeed.

Source: Thetelecomblog.com


The end of meteoric telecommunications services’ growth

For those who have been part of the telecommunications industry over the last 20 years it has been a wonderful ride. Extraordinary growth in fixed-line telecommunications from 1998-2001, as the internet developed, was followed by the explosion in mobile from 2002-2009.

Since 2009, things have been rather different in most markets. Indeed, as the figure below shows, aggregate annual growth from the sixty-eight operators in our group has fallen from over 5% in 2011 to a steady 2-3% from 2013 onwards with no signs of a strong rebound on the horizon.

Aggregate revenue and growth rate for the sixty-eight players, 2009-2016


Source: Company accounts; STL Partners analysis

Some regional differences but growth is slowing for operators everywhere

Operators in Europe and other mature slow-growth markets have focused on building operations in faster growth regions – Telenor, Vodafone, Telefonica, Deutsche Telekom, Orange and others have all expanded into ‘developing’ markets. The problem is that these developing markets have matured fast from a telecommunications perspective and, although operators hope that growing wealth will translate into higher telecommunications spend, there is little evidence that this is happening in a meaningful way – see the figure below. It seems that, once low ARPUs have been established in a developing market, intense competition makes raising prices difficult even if mobile broadband demand is rising fast.

Revenue split and growth rates by region, 2009-2016


Source: Company accounts; STL Partners analysis

It is worth noting that the figure above reflects where operators are headquartered so, for example, Europe (at a steady -2% growth rate over the period) contains the revenues generated in other markets by the multi-national operators mentioned above. This is true in other markets too. So, for example, Asia-Pacific would include the revenues from Sprint as it is owned by Softbank which is headquartered in Japan.

Nevertheless, because well over 80% of revenues generated in each region are from operators headquartered there, STL Partners is confident that these regional splits and growth rates are relatively accurate and show that all regions have slowed substantially and are now operating in the -2% to 5% per year growth range.
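The aggregate growth rate discussed above is simply the year-on-year change in summed revenues across the operator group. The sketch below shows the calculation with made-up figures; the operator names and revenue numbers are invented for illustration and are not STL Partners data:

```python
# Illustrative aggregate year-on-year growth across an operator group.
# Revenues are hypothetical, in billions of a common currency.
revenues = {
    "operator_a": {2015: 40.0, 2016: 41.2},
    "operator_b": {2015: 25.0, 2016: 24.5},
    "operator_c": {2015: 35.0, 2016: 36.4},
}

def aggregate_growth(revenues, y0, y1):
    # Sum across all operators in each year, then take the percent change.
    total0 = sum(op[y0] for op in revenues.values())
    total1 = sum(op[y1] for op in revenues.values())
    return (total1 - total0) / total0 * 100

print(f"{aggregate_growth(revenues, 2015, 2016):.1f}%")  # → 2.1%
```

Note how one shrinking operator (operator_b) is masked in the aggregate: group-level growth of 2-3% can coexist with flat or negative growth for individual players, which is exactly the regional pattern the figures above describe.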

Can mobile content plays help telcos return to growth?

As mobile markets become increasingly competitive, telcos are looking at mobile content plays as a way to differentiate their offerings. The mobile content proposition is finally coming into its own, as the spread of 4G networks means high-bandwidth uses such as video streaming are becoming a reality.

But mobile operators have traditionally offered very little in the way of content. So how should they approach a content play, and more importantly how can they use content to grow mobile ARPU to replace dwindling revenues as voice and SMS decline?

Mobile content plays are currently perhaps more of a ‘holding’ strategy, offering telcos the opportunity to cement customer loyalty and cut churn. Nevertheless, they are a low-cost option for telcos seeking a low-risk differentiator in mature markets.

Differentiation in a mature market – the UK

Competition is strong in the UK mobile market, with four major players and a wide range of MVNOs, meaning some level of differentiation is essential. Former market leader Vodafone has fallen to a distant third place behind key rival Telefónica’s O2, and EE, the result of the merger of Orange and T-Mobile’s UK units. EE has based its marketing drive on its network coverage and reliability, as it holds a disproportionately large share of the UK’s mobile spectrum following the merger of the two legacy networks.

The desire to consolidate is now growing in the market, beginning with the formation of EE, which was subsequently acquired by fixed-line giant BT, creating a powerful new player in the multiplay sector.

Vodafone UK’s third-party content play

Vodafone in the UK has recently developed a third-party services strategy, offering free subscriptions to Now TV, Spotify, or Sky Sports Mobile App on contracts with over 10 GB data allowances.

Vodafone’s data volumes have seen strong growth in the past few years, reflecting a Europe-wide trend as smartphone penetration climbs and data becomes cheaper. This has been key in stabilising and even stimulating growth in Vodafone’s UK mobile ARPU in the past few years:

Vodafone UK data use and total mobile ARPU, 2011-2016


Source: Vodafone

However, there is serious doubt as to whether this stability is sustainable, as prices fall and competition continues to increase. Total monthly mobile ARPU in the UK is in decline, falling 1.9% per year on average between 2012 and 2015.
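Assuming the 1.9% average annual decline compounds, the cumulative drop over the 2012-2015 period is a quick calculation:

```python
# Compound the reported 1.9% average annual ARPU decline over the
# three years from 2012 to 2015 (compounding is an assumption here;
# the source only states the annual average).
annual_decline = 0.019
years = 3
cumulative = 1 - (1 - annual_decline) ** years
print(f"{cumulative:.1%}")  # → 5.6%
```

A roughly 5.6% erosion in three years is the headwind any content-driven ARPU uplift has to outrun before it contributes net growth.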

Content plays can stabilise market share, but the growth argument is weak

As prices in the UK stay low and ARPU shows little sign of sustained growth, there remains serious doubt about the longer-term effectiveness of Vodafone’s content strategy. The telco’s third-party play will attract a small number of new customers and may cut churn, but will not necessarily translate data use into ARPU growth – particularly as the revenue stream is split with Sky and Spotify.

Vodafone UK has already fallen behind its two key rivals O2 and EE, and the OTT strategy is not a strong enough differentiator to allow Vodafone to achieve the growth it needs to seriously challenge the market leaders. This, coupled with the disruptive influence of 3 UK and a growing quad-play sector, puts Vodafone in a precarious position in a highly competitive market.

Sky chooses not to leverage its exclusive content

UK satellite TV giant Sky has launched its own MVNO offering discounts to its triple-play customers. Sky in the UK has grown to be the second-largest triple-play operator on the back of a strong exclusive sport offering. However, the operator’s recently-launched MVNO rather bafflingly failed to offer customers any of this sport content, choosing instead a “safe” offering of discounts for existing fixed subscribers.

Sky’s strategy copies that of UK cable operator Virgin Media, which has grown a respectable mobile subscriber base of some 3 million. However, the vast majority of these will be quad-play subscribers who take Virgin Media’s fixed voice, broadband and TV offerings. Virgin has been happy to slowly build this mobile subscriber base as a way to boost total ARPU, but shows little ambition for disrupting the “big four” players beyond its existing fixed customer base.

Virgin’s decision not to develop a mobile content play makes sense as it owns hardly any sport rights or exclusive video content. However, Sky’s decision to follow this strategy makes less sense. Sky’s brand is already synonymous with exclusive sport content, so it would seem to make sense to base a mobile content play upon this. Sky has yet to release any data on the success of its mobile offering, but we do not expect to see any eye-catching growth.

Virgin owner Liberty Global has signed partnerships with Vodafone elsewhere in Europe, and a tie-up in the UK has been touted (see our previous blog post, Time for Vodafone to revisit a Virgin Media deal?). But any such deal with a rival of Sky would certainly end Vodafone’s Sky Sports offering, without bringing Vodafone anything in the way of content to replace it.

Who’s Responsible if a Brain-Controlled Robot Drops a Baby?

As brain-controlled robots enter everyday life, an article published in Science states that now is the time to take action and put in place guidelines that ensure the safe and beneficial use of direct brain-machine interaction.

Accountability, responsibility, privacy and security are all key when considering ethical dimensions of this emerging field.

If a semi-autonomous robot did not have a reliable control or override mechanism, a person might be considered negligent if they used it to pick up a baby, but not for other less risky activities. The authors propose that any semi-autonomous system should include a form of veto control—an emergency stop— to help overcome some of the inherent weaknesses of direct brain-machine interaction.

Professor John Donoghue, director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, said: “Although we still don’t fully understand how the brain works, we are moving closer to being able to reliably decode certain brain signals. We shouldn’t be complacent about what this could mean for society. We must carefully consider the consequences of living alongside semi-intelligent brain-controlled machines and we should be ready with mechanisms to ensure their safe and ethical use.”

“We don’t want to overstate the risks nor build false hope for those who could benefit from neurotechnology,” he added. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field.”

An electroencephalography (EEG) cap measures brain activity on a study participant. (Image courtesy of the Wyss Center.)

Protecting biological data recorded by brain-machine interfaces (BMIs) is another area of concern. Security solutions should include data encryption, information hiding and network security. Guidelines for patient data protection already exist for clinical studies but these standards differ across countries and may not apply as rigorously to purely human laboratory research.

Professor Niels Birbaumer, senior research fellow at the Wyss Center said: “The protection of sensitive neuronal data from people with complete paralysis who use a BMI as their only means of communication, is particularly important. Successful calibration of their BMI depends on brain responses to personal questions provided by the family (for example, “Your daughter’s name is Emily?”). Strict data protection must be applied to all people involved, this includes protecting the personal information asked in questions as well as the protection of neuronal data to ensure the device functions correctly.”

The possibility of ‘brainjacking’ – the malicious manipulation of brain implants – is a serious consideration, say the authors. While BMI systems that restore movement or communication to paralysed people may not seem an appealing target, this could depend on the status of the user – a paralysed politician, for example, might be at increased risk of a malicious attack as brain readout improves.

Research on Unvoiced Speech Communications using Smartphones

Back in 2003 NTT Docomo generated a lot of news on this topic. Their research paper, “Unvoiced speech recognition using EMG – mime speech recognition,” was the first step in trying to find a way to speak silently while the other party hears a voice. It is probably the most-cited paper on this topic.

NASA was working in this area around the same time, referring to the approach as ‘Subvocal Speech’. While originally intended for astronauts’ suits, the technology was also expected to find other commercial uses. NASA’s work, however, effectively covered only a limited number of words.

For both of the approaches above, there is little recent information. While it has been easy to recognize certain characters, recognizing whole speech takes a lot more effort. It’s also a challenge to play back the user’s own voice, rather than a robotic voice, to the other party.

To give a sense of how big a challenge this is, look at the YouTube videos with automatic caption generation. Even when you can understand what the person is saying, it’s always a challenge for the machine. You can read more about the challenge here.

Another term used in this research has been ‘lip reading’. While the initial approaches to lip reading were the same as the others, attaching sensors to facial muscles (see here), the newer approaches look at exploiting the smartphone camera instead.

Many researchers have achieved reasonable success using cameras for lip reading (see here and here) but researchers from Google’s AI division DeepMind and the University of Oxford have used artificial intelligence to create the most accurate lip-reading software ever.

Now the challenges with using a smartphone camera for speech recognition will be high-speed data connectivity and the ability to see lip movement clearly. In an indoor environment these can be solved with Wi-Fi connectivity and looking at the camera, but it may be trickier outdoors, or when not looking at the camera while driving. Who knows, this may be a killer use-case for 5G.

source: Link