Self-driving or autonomous cars, such as those developed by Waymo (formerly Google's self-driving car project), are powered by computing systems and algorithms aided by tools such as navigational maps, cameras, sensors, and GPS. Their software allows them to navigate roads and detect surrounding objects such as other vehicles, people, and obstructions. The development of autonomous cars has generated not only excitement over their potential contribution to transportation safety but also heated debate and discussion.
The primary problem with self-driving cars is that they are not completely autonomous… yet. Most of the vehicles currently being tested for eventual release into the market have features that reduce the need for human input, but all of them still require a person behind the wheel to take over when things go awry. Unfortunately, the recent fatal crashes involving the testing of self-driving cars highlight their biggest flaw: as the automation technology gets better, human drivers feel more secure and, therefore, less vigilant when monitoring their self-driving car. When drivers place their complete trust in the technology to keep them safe, they may fail to react in time when that technology falters.
One of the ethical questions that keeps coming up in public discourse is this: should self-driving cars be developed to the point where they are expected to prevent fatalities entirely before they are allowed on the streets, or is it enough that they merely reduce their likelihood? Some people argue that since the computing technology behind self-driving cars is inescapably tied to life-and-death consequences, drivers are better off equipping their cars with conventional driver and passenger safety tools. They believe that an attentive human driver is still the best safety feature of any vehicle and that automation can eventually cause many drivers to disengage mentally and, thus, cause more accidents and deaths.
The same people claim that allowing this type of new technology on the streets is a frightening prospect that should absolutely involve more regulation. For instance, lobbyists from the Human Driving Association believe that self-driving vehicles should not be tested on public streets unless there is an accompanying driver monitoring system. This monitoring system may entail either a second human driver or a separate system that ensures the driver's attention stays constantly on the task. These lobbyists also argue that rigorous safety standards are the key to making autonomous cars work for the public.
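To make the idea of a driver monitoring system concrete, here is a minimal sketch of the logic such a system might follow. Everything in it is hypothetical: the class name, the attention-timeout threshold, and the idea of feeding it an "eyes on road" signal are illustrative assumptions, not a description of any real vehicle's implementation.

```python
class DriverMonitor:
    """Toy driver-monitoring loop (illustrative only).

    Tracks the last moment the driver was detected paying attention and
    escalates from "ok" to "warning" to "alert" as the lapse grows.
    """

    def __init__(self, timeout_s: float = 2.0):
        # Hypothetical threshold: how long the driver may look away
        # before the system alerts (real thresholds would be tuned).
        self.timeout_s = timeout_s
        self.last_attentive = 0.0

    def update(self, eyes_on_road: bool, now: float) -> str:
        if eyes_on_road:
            self.last_attentive = now
            return "ok"
        if now - self.last_attentive > self.timeout_s:
            return "alert"    # e.g. sound a chime or slow the car
        return "warning"      # brief lapse, not yet past the threshold


monitor = DriverMonitor()
monitor.update(True, 0.0)    # driver attentive -> "ok"
monitor.update(False, 1.0)   # brief glance away -> "warning"
monitor.update(False, 3.0)   # sustained lapse -> "alert"
```

In a real system the `eyes_on_road` signal would come from an in-cabin camera and gaze-estimation model; the point of the sketch is only the escalation logic that lobbyists want running alongside the automation.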
The transition from limited to total automation may not have arrived yet, but it is not an impossibility. In fact, testing is still ongoing despite the roadblocks (pun intended) caused by bans handed out in some cities after previous testing activities led to several accidents and fatalities. More and more safety-related updates are being rolled out, such as redundant systems and the ability to function in harsh weather conditions. Other promising updates include localization systems, perception systems, and planning systems. Incidentally, some of these tools are increasingly becoming available in regular vehicles as well.
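The three subsystems named above typically run together in a repeating control cycle: localization estimates where the car is, perception turns sensor data into detected obstacles, and planning chooses an action. The sketch below shows that flow with deliberately simplistic stand-ins; every function, field name, and threshold is a hypothetical assumption for illustration, not how any production stack works.

```python
def localize(gps_fix):
    """Estimate the car's position. Here we simply trust the GPS fix;
    real systems fuse GPS with maps, lidar, and odometry."""
    return gps_fix


def perceive(sensor_readings):
    """Turn raw readings into detected obstacles, keeping only
    detections above an (arbitrary) confidence threshold."""
    return [r for r in sensor_readings if r["confidence"] > 0.5]


def plan(position, obstacles):
    """Pick a simple action: brake if an obstacle is within 10 units
    ahead of the car's position, otherwise keep driving."""
    for obs in obstacles:
        if abs(obs["x"] - position[0]) < 10:
            return "brake"
    return "drive"


def control_cycle(gps_fix, sensor_readings):
    """One pass through localization -> perception -> planning."""
    position = localize(gps_fix)
    obstacles = perceive(sensor_readings)
    return plan(position, obstacles)


# A nearby, confident detection triggers braking;
# a distant or low-confidence one does not.
control_cycle((0, 0), [{"x": 5, "confidence": 0.9}])    # -> "brake"
control_cycle((0, 0), [{"x": 50, "confidence": 0.9}])   # -> "drive"
```

The design point is the separation of concerns: each stage can be upgraded (better maps, better sensors, better planners) without rewriting the others, which is why these updates can roll out incrementally to both autonomous and regular vehicles.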