How will we ever know if autonomous cars are safe? And, do we even know if the cars we drive today are safe?
I suppose they are safe enough, given how much time most of us spend on the road. But the arrival of autonomous cars presents a new opportunity for car companies and government regulators to reconsider what safety really means, and even to work together on an ongoing basis as these standards evolve.
I continue my conversation with Amitai Bin-Nun, vice president of Secure America’s Future Energy (SAFE), a nonpartisan, action-oriented organization which is committed to combating the economic and national security threat posed by America’s dependence on oil.
Our conversation continues to explore the complex relationship between autonomous car manufacturers, government regulators and the consumer's need for safety.
Edward Bernardon: It seems like there is a need for cooperation – an ongoing dialogue between manufacturers of autonomous vehicles, customers and the government?
Amitai Bin-Nun: Absolutely. I think that’s true on many fronts. First, autonomous vehicles never have to make the same mistake twice. Most of us learn to drive at some point, and we make the mistakes every novice driver makes. There is a reason the crash risk is higher for 18 to 24-year-olds.
The good news is that although autonomous vehicles may start out like novice drivers, they learn quickly and never make the same mistake twice. The more they learn, the faster they advance. They don’t forget.
Also, the more we can get this industry to work together to help train autonomous vehicles, the quicker we learn and the faster we find the pathways to safety.
Autonomous car legislation
Edward Bernardon: If you take a manufacturer of autonomous cars, their first vehicle learns something and shares that information with vehicles two through 2,000 and beyond. And since you have multiple manufacturers with thousands of vehicles, wouldn’t it be advantageous if you could share all this information? Of course, companies work hard for the knowledge they gain and may not want to share it.
Could the government accelerate the path to safety by requiring companies to share data? Can that learning, or the sharing of what is learned about safety, be regulated? How do you make that happen?
Amitai Bin-Nun: Sure. I think part of it happens organically. I don’t expect the more prominent companies to open the doors of their servers and share with every actor in this space; that’s not how capitalism works. However, a certain amount of cooperation does happen.
Most self-driving cars are built with open source tools produced and released by technology companies. The companies in this space also participate in standards bodies, where safety can be benchmarked. Additionally, in Silicon Valley, much information is shared informally among engineers.
The government has a role when its safety regulators say there needs to be a standard around self-driving cars. That starts a process where they reach out to standards bodies like the Society of Automotive Engineers and ask them to help create standards for self-driving cars.
Everyone wants to participate, because you want your vision of self-driving cars reflected in that standard. So the government can help accelerate this cooperation by giving clarity on when and what it wants to regulate, and by engaging in collaboration and research.
Edward Bernardon: Let’s say you wave a magic wand and get the key people in government, as well as the top mobility service providers and autonomous vehicle manufacturers – their CEOs and top-level technical people – in one room to discuss the topics of highest importance for self-driving car safety.
What are the main topics they should address to make positive progress in this discussion of autonomous vehicles and government regulations?
Amitai Bin-Nun: The most critical issue for this industry to coalesce around is establishing a universal language for safety. I don’t mean a common technical approach – building cars the same way or sharing the same tools and processes; those will come in time. However, there needs to be a common way to talk about safety, and about how safe you want your vehicles to be before they are deployed.
This safety discussion is essential for a couple of reasons. First, self-driving cars receive much scrutiny. Second, once they are on the road, every accident involving these vehicles makes headlines. Even though there are hundreds of fatal crashes involving conventional cars each day, even minor accidents involving self-driving cars get more attention.
So how do you, as a company board, decide to put your self-driving cars on the road? Even though your cars might be safe, or even safer than a human driver, they will never be perfect. At what point do you pull the trigger and say, “I’m ready to launch”?
Also, I think this process is not just technological, but sociological as well. The more we can agree on a common language for what that trigger point is, the more it will benefit public perception and the companies that are in the enviable and challenging position of making that decision.
Edward Bernardon: It seems like an AV company, when ready to launch its car, will probably say, “my car is safe.” What is the universal language for defining safety? How would you describe a safe autonomous vehicle?
Amitai Bin-Nun: Autonomous vehicle safety is something the industry is grappling with, because we already have 50 years’ worth of rules on the books about how safe a conventional car is – how it performs in a crash test, how the brakes react, how taut the seatbelts are, and so on. However, these results tell you nothing about the safety of a self-driving car, because we need data on how well the software performs when the vehicle is on the road.
Technically, you would define the safety of self-driving cars through simulation and the mastery of various driving scenarios. Ultimately, though, we need self-driving vehicles to be at least as safe as a human driver and to keep getting safer. To be able to say, ethically, that it’s safer than the driver it is replacing is a powerful statement.
You can make a strong case that replacing the human driver with software-driven cars that make better decisions and get into far fewer crashes positively impacts society. So the multi-billion-dollar question is: how do you know when you’ve reached that point?
When is this technology safer than the human driver? Moreover, what are the technical tools and methods we need to implement to ensure we are at this point?
That’s something we have yet to answer.
This concludes part two, in a series of six blogs, of Ed Bernardon’s interview with Amitai Bin-Nun, vice president of Secure America’s Future Energy (SAFE), on the future of autonomous cars and government regulations. Continue to part three of our conversation.
About the author
Edward Bernardon is vice president of strategic automotive initiatives for the Specialized Engineering Software business segment of Siemens PLM Software, a business unit of the Siemens Industry Automation Division. Bernardon joined the company when Siemens acquired Vistagy, Inc. in December 2011. During his 17-year tenure with Vistagy, Bernardon assumed the roles of vice president of sales, and later business development, for all specialized engineering software products. Prior to Vistagy, Bernardon directed the Automation and Design Technology Group at the Charles Stark Draper Laboratory, formerly the Massachusetts Institute of Technology (MIT) Instrumentation Laboratory, which developed new manufacturing processes, automated equipment and complementary design software tools. Bernardon received a degree in mechanical engineering from Purdue University, and later received a master’s degree from the Massachusetts Institute of Technology and an MBA from Butler University. He also holds numerous patents in the areas of automated manufacturing systems, robotics and laser technologies.