from Alex Thomas via SHTFplan.com,
As the left-leaning establishment continues to declare that the era of “self-driving” cars is upon us, many Americans have been left wondering what the privacy implications of such a tremendous change in society will end up being.
Now, with the very real possibility that, in the event of an accident, these cars would literally decide who lives and who dies, Americans have even more to worry about when it comes to handing control of their vehicle over to a supercomputer.
According to multiple reports, the cars themselves are being designed to make so-called “moral” decisions, which means, in other words, that the programming could essentially allow a car full of people to crash rather than, say, strike a school bus.
Consider this hypothetical:
It’s a bright, sunny day and you’re alone in your spanking new self-driving vehicle, sprinting along the two-lane Tunnel of Trees on M-119 high above Lake Michigan north of Harbor Springs. You’re sitting back, enjoying the view. You’re looking out through the trees, trying to get a glimpse of the crystal blue water below you, moving along at the 45-mile-an-hour speed limit.
As you approach a rise in the road, heading south, a charter tour bus appears, driving north, one driven by a human, and it veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.
Does the car:
A. Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?
B. Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the people on board?
C. Hit the bus, possibly killing you as well as the driver and people on the bus?
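Strip away the scenery and the three options above reduce to a grim cost comparison. A purely utilitarian controller, of the sort the article worries about, might boil down to a few lines of code. The following is a hypothetical sketch only; the option names and casualty estimates are invented for illustration, and no real autonomous-vehicle system is being quoted here:

```python
# Hypothetical "fewest casualties" decision rule. All names and numbers
# below are invented for illustration; they describe the A/B/C scenario,
# not any actual manufacturer's software.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected death toll."""
    return min(options, key=lambda o: o["expected_deaths"])

options = [
    # A: possibly kills you, the lone passenger
    {"name": "swerve_into_trees", "expected_deaths": 1},
    # B: bus driver plus some riders go into the trees
    {"name": "evade_into_oncoming_lane", "expected_deaths": 4},
    # C: head-on collision, you plus bus occupants
    {"name": "hit_the_bus", "expected_deaths": 10},
]

print(choose_maneuver(options)["name"])  # → swerve_into_trees
```

Under this kind of logic, the lone occupant of the car is sacrificed every time the numbers favor it, which is precisely the outcome the survey respondents discussed below said they would not buy.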
The article goes on to say that the above question is no longer theoretical, as upwards of $80 billion has been invested in the sector, with companies such as Google, Uber, and Tesla all racing to get their self-driving cars onto a road near you.
Interestingly, the USA Today report, as well as those quoted within it, seems outwardly worried that the questions surrounding self-driving cars will cause Americans to shun them and continue driving cars operated by actual humans, who can choose whether they wish to save their own family or someone else’s if an accident takes place. Clearly there is an agenda at play here.
The article continued:
Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to make.
Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”
But what if the smaller thing is a child?
How that question gets answered may be important to the development and acceptance of self-driving cars.
Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year which found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest people possible regardless of whether they were passengers or people outside of the car, they were ALL less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”
This truly is a scary situation, especially when you consider that we are now directly talking about allowing either private companies or the government to decide whether we or our families are worthy of being saved in a car accident.