Thought Leadership

Learning autonomous car safety lessons from aerospace

By Ed Bernardon

Autonomous cars hitting the road for testing isn’t new. We frequently see news stories about this testing and the work automakers are doing to prepare these cars for the road.

What we don’t hear about, even though it’s a crucial part of the conversation, is how autonomous car safety systems will be able to recognize driver inattention, or the way this inattention could play a role in how these cars operate.

There’s plenty of research happening to identify safe, effective answers to this dilemma. Bryan Reimer is a research scientist at the Massachusetts Institute of Technology, and I recently spoke with him about these efforts and about his experience in this field of study. He’s involved in numerous research efforts on the relationship between autonomous car safety and driver inattention. He’s also the associate director of the New England University Transportation Center, and he created three consortiums dedicated to researching this relationship, including the Advanced Vehicle Technology consortium.


Bryan Reimer is a research scientist at the Massachusetts Institute of Technology, where he focuses on the interaction between human attention and autonomous car safety. Bryan also founded three consortiums studying the connection between driver attention and autonomous cars.

We began our conversation by discussing how Bryan decided to be involved with this area of research and how automation has changed the way drivers act on the road. We discussed the handoff from autonomous cars to driver and how autonomous car safety systems could detect potential driver inattention. And, we discussed the state of how much we should trust these safety systems and the kind of training drivers could be required to take in the future when autonomous cars receive significant updates.

Here, we talk about the ways in which the auto industry could learn from the aerospace industry about automating systems, and the skills engineers will need moving into this autonomous future.

ED BERNARDON: At the beginning, you mentioned the aerospace industry and how autopilots evolved. They made some mistakes early on, but in the long run, it seems like the autopilot works well.

What can the automotive industry take from the aerospace experience, and what can’t it take because of fundamental differences?

BRYAN REIMER: First of all, we have to accept that the aviation industry provides us with some overarching guidance and understanding: it doesn’t necessarily provide us with answers. In essence, we must understand the assumptions behind key components from aviation and assess whether they apply here.

The big difference is that aviation tends to have highly skilled, highly trained operators in an operating domain where things happen in minutes. Cars have low-skill operators, and things happen in fractions of a second.

There are things we can think about, though. One of the overarching ones is that with increased automation, operators get less experience with manual control activities, which means that even experienced airline pilots become novices at flying by hand. The FAA has acknowledged this.

It’s clear that car drivers, as they engage more automation, have less capability to respond when they need to. Think about that from a systems perspective. The more you offload to automation, the more humans tend to do one of two things: find something else to do, or fall asleep.

It’s very hard to oversee a highly automated system. This is a vigilance problem. So a lot of the perspectives here are really key to be solved — or I shouldn’t even say solved — but managed effectively through true human systems engineering, where a human is a key component of the design process and you are building the automation to be a strategic teammate with the human.

In essence, we should leverage machine intelligence, which is not especially flexible but is good at making moment-to-moment control decisions, together with human intelligence, which is far more flexible in nature and capable of adapting in real time to situations it hasn’t observed before.

We must try to leverage the best of both worlds to make more effective long-term decisions.

ED BERNARDON: So taking all these issues, training, the automation, the human interface, in order to be successful, do engineers have to be skilled in all of these areas? Or, will the teams responsible for automation technology have to work more closely with interface developers?

BRYAN REIMER: There’s obviously a range. Some companies are clearly better than others. But unfortunately, throughout a good part of the auto industry (not everywhere, but a good part), you can come up with numerous case studies where engineering is just too siloed.

The human perceives the broad situation around them and has to react to it, which makes it difficult to silo the engineering process: the person responsible for overseeing the machine sees everything as one ubiquitous whole.

Case in point: many cars have lane departure warning systems, lane keeping assistance systems, and other lane centering systems such as pilot assist and autopilot. It’s quite possible that in many organizations a different engineering team is responsible for the characteristics of each of these systems, because they’re highly complex technological systems. The human, on the other hand, who is responsible for leveraging and utilizing these systems, can’t necessarily differentiate among all their characteristics on a moment-to-moment basis.

In essence, how do the separate teams working on these complex technological systems come together to ensure that confusion, typically called mode confusion, is minimized at the operational level?


Final thoughts


Educating about automation. My discussion with Bryan has certainly helped us appreciate that it’s not all about autonomous car technology: it’s also about how this technology fits into a system involving drivers and passengers. Training will inevitably play a part in understanding the software at the heart of autonomy, much like any software we learn to use today, especially if we want to get the most out of that software and the systems it controls.

But car dealers aren’t in the education business. The vehicles they’ve sold are complex, but it’s a complexity customers understand and adapt to without much instruction. As driving systems become more complex, and when you need more in-depth knowledge to understand system limitations to get the most from your car (and, more importantly, stay safe), will car dealers step up to the challenge? Will new businesses pop up to provide training services? Or will the car itself be able to go into training mode for private, self-paced training when updates occur? These are all interesting questions to start asking now.

Asking the hard questions. Bryan and his team at MIT are taking action and addressing many tough questions in the AVT consortium. By pulling together a group with diverse experience, the consortium can facilitate necessary, open conversations about this disruptive technology. Bryan thinks the consortium is a great way to leverage incredible expertise on the industry side to educate government, consumers and industry all together.

In the end, Bryan thinks people are learning automation’s capabilities and making reasonable, cohesive decisions about when to take control; the task now is to study the cases where those decisions weren’t as reasonable and ask where the breakdown occurred. He also said it’s impressive “how drivers are really developing some kind of predictive awareness over the operating characteristics of the automation.”

But what is reasonable enough? These are the kinds of questions the industry desperately needs to understand and answer, and the AVT is playing a big role in getting these answers.

Software’s role. Finally, the last part of this conversation is about what engineering software companies can do to support this development.

We’ll likely have clearer answers in the future. For now, Bryan thinks, as many others do, that the software industry needs to build tools that support simulation earlier in the development cycle. But these simulations aren’t only for mechanical and electrical systems: the system also includes humans.

Software tools must be capable of understanding the limitations of a human’s ability to handle a task in an emergency driving situation, a worthy challenge for companies providing simulation-based engineering software, including us at Siemens PLM Software.

This concludes my interview series with Bryan Reimer. Special thanks to Bryan for taking the time to speak with me about this. 

About the author
Edward Bernardon is vice president of strategic automotive initiatives for the Specialized Engineering Software business segment of Siemens PLM Software, a business unit of the Siemens Industry Automation Division. Bernardon joined the company when Siemens acquired Vistagy, Inc. in December 2011. During his 17-year tenure with Vistagy, Bernardon assumed the roles of vice president of sales and, later, business development for all specialized engineering software products. Prior to Vistagy, Bernardon directed the Automation and Design Technology Group at the Charles Stark Draper Laboratory, formerly the Massachusetts Institute of Technology (MIT) Instrumentation Laboratory, which developed new manufacturing processes, automated equipment and complementary design software tools. Bernardon received a degree in mechanical engineering from Purdue University, and later received an M.S. from the Massachusetts Institute of Technology and an MBA from Butler University. He also holds numerous patents in the areas of automated manufacturing systems, robotics and laser technologies.


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/2018/02/07/learning-autonomous-car-safety-lessons-from-aerospace/