Today’s article is about self-driving cars. It’s not so much about whether autonomous cars are ethical to have in the first place, but about whether such a car could be programmed to respond to all the various ethical situations that would confront it on a day-to-day basis. For example, how should it respond when faced with a choice that seems to leave no option but to crash into something, and the car has to decide what it will hit?
Now, the article itself was about ethics, and it raises quite a few good points, but I do not wish to go into the detail the article did on every point. There are simply two main things I want to touch on: the beneficial use of a car “neural network”, and the Trolley Problem (or, what to do in a no-win situation).
First off, something not touched on in the article itself, at least not as I am going to present it. The article spoke about a hypothetical tree branch in the middle of a road, and an autonomous car possibly simply stopping to avoid breaking traffic laws regarding crossing the center line. This could, however, lead another car, driven by a human, to hit the stopped car. What do we do about situations like this, where legal issues conflict with safety issues? Well, I cannot speak to every situation, but there is something I have long thought would be very helpful here. What if every car came equipped with a device that allowed it to detect the cars around it and communicate with them, relaying information such as velocity, current driving plans, whether or not it is autonomously driven, and any obstacles encountered? If this were the case, then scenarios like the branch would be much easier to deal with. The car could search the area for other cars, and if none are found, it is simply allowed to cross the center line and go around. If there is a car coming from the opposite direction, the blocked car can communicate with it and determine a way for both to pass safely, and if there are cars behind, the same would happen. Each car would know everything it needed to know about the surrounding cars. The beauty is that the cars wouldn’t need to know any information about the people in the cars, such as whom the car is registered to or what the license plate is. That wouldn’t be important. It would function vaguely like the scene from the film “Bee Movie” where the two main leads walk into the middle of the street without looking, and the cars simply merge around them.
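To make the idea concrete, here is a minimal sketch of what such an anonymous car-to-car broadcast and a branch-in-the-road decision might look like. Everything here is hypothetical: the message fields, the intent labels, and the `plan_around_obstacle` logic are illustrative assumptions, not any real vehicle-to-vehicle protocol.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VehicleBroadcast:
    """Hypothetical V2V message: driving state only, no owner identity."""
    session_id: str            # anonymous rotating token, not a license plate
    velocity_mps: float
    heading_deg: float         # relative to the blocked car (0 = same direction)
    is_autonomous: bool
    intent: str                # e.g. "continue", "yield", "cross_center_line"
    obstacles: List[str] = field(default_factory=list)

def plan_around_obstacle(nearby: List[VehicleBroadcast]) -> str:
    """Decide how a car blocked by a branch handles the center line."""
    # Oncoming traffic is roughly 180 degrees from our own heading.
    oncoming = [v for v in nearby if abs(v.heading_deg - 180.0) < 45.0]
    if not oncoming:
        # No one coming the other way: safe to cross the line and go around.
        return "cross_center_line"
    if all(v.intent == "yield" for v in oncoming):
        # Oncoming cars have agreed to wait, so we may proceed.
        return "cross_center_line"
    # Otherwise hold position and renegotiate on the next broadcast cycle.
    return "stop_and_wait"
```

In this sketch the negotiation is trivial (everyone either yields or doesn’t), but it captures the key point from the paragraph above: the decision uses only driving state, never the identity of the people in the cars.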
Another interesting ramification is that if the police are looking for a specific car, like “a black Ford Explorer”, they would be able to query the network and find all the black Ford Explorers in the area, rather than simply putting a message on the road and hoping someone reports having seen it.
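Such a query might work something like the sketch below, assuming cars optionally share a coarse description (color, make, model) with law enforcement while still withholding plates and owner identity. The data shape and the `search_network` function are my own invented illustration, not a real system.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DescribedVehicle:
    session_id: str                      # anonymous token, no plate or owner
    description: Tuple[str, str, str]    # coarse (color, make, model)
    last_seen_area: str

def search_network(vehicles: List[DescribedVehicle],
                   color: str, make: str, model: str) -> List[DescribedVehicle]:
    """Hypothetical police query: match coarse descriptions, case-insensitively."""
    wanted = (color.lower(), make.lower(), model.lower())
    return [v for v in vehicles
            if tuple(part.lower() for part in v.description) == wanted]
```

A query for a “black Ford Explorer” would then return every nearby match at once, instead of waiting on eyewitness reports.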
The second thing that interested me in the article was the Trolley Problem, which essentially puts one in the situation of either allowing people to die, or performing actions which cause other, less numerous, people to die. How does one make such a decision? I want to first point out the similarities between this and the film “I, Robot”. One of the major driving forces in that movie is that Will Smith’s character was saved from drowning by a robot, but in the process the robot was forced to allow a child to die. His character strongly believed that the robot made the wrong call, and this illustrates the problem robots are faced with. They are unable, by their very nature, to feel emotions or to make decisions based on things like the human instinct to protect a child. Therefore a computer will have to make decisions numerically, as in “I, Robot”, which is what the article was concerned about. I’m more concerned, honestly, with how this will affect insurance. If a car is making decisions based on its programming, then when it crashes the fault lies not with the driver but with the programmer. This could seriously change how insurance is sold, with “malpractice insurance” perhaps becoming just as much a necessity for car programmers as it is for doctors. I’m not sure how this will play out in the long run, but it is interesting to think about.
Automated cars are an exciting part of our future, and they are coming, no matter what anyone wants. All we can do is try to anticipate what they will mean for everyone, and plan accordingly. The Bible says in Romans 13:1 that we are to “Be good citizens…”. Figuring out how we as a country are going to deal with the many changes happening to driving is a part of that. I hope that we will be able to form good laws which correctly deal with all the issues involved.