Katherine Sheriff: Pioneering the Legal Framework of AI

By Ed Bernardon

Katherine Sheriff, Associate at Davis Wright Tremaine

When does a machine become a person in the eyes of the law? As AI becomes more advanced, we’ll continue to see it more frequently in our everyday lives. Nowhere is that more clear, or complicated, than on our roads. The roads of the future are going to be different, and AI technology will have to fit into a legal system that we’ve created specifically for the humans behind the wheel.

The legal framework for AI is shaped by our early experiences with the technology, but it also has to depend heavily on simulations. It’s about contemplating every possible what-if scenario and preparing for it so that safety remains paramount. With that in mind, the legal questions start to roll in. Who bears responsibility for traffic violations or accidents involving automated vehicles? How do we reap the benefits of AI responsibly while minimizing the risks?

In this episode of the Women Driving the Future series, Ed Bernardon interviews Katherine Sheriff. Katherine is an Associate at Davis Wright Tremaine, and she’s long been a pioneer in the area of AI law. When she was on the show two years ago, AI in transportation was a far-away dream. Today, that’s changed so much that even university courses have been shaped around this rapidly growing field.

Today, we’ll talk about the evolution of AI law in recent years, the need to educate consumers about autonomous vehicles and terminology, and the increasing complexity of the legal system as AI continues to grow.

Some Questions I Ask:

  • What is AI law, and why do we need it?
  • What is agency, and does an AI machine possess it?
  • What is a global framework for automated driving, and is the United States part of that effort?
  • Do consumers really understand what autonomous cars can and can’t do?
  • Is there an industry-accepted nomenclature for describing automated driving features?
  • Could robots ever have legal personhood?

What You’ll Learn in this Episode:

  • How AI law has evolved over the past two years
  • A “Law 101” tour of criminal, tort, contract, privacy, and IP law
  • Why standardized nomenclature like “driver assist” versus “driver replace” matters
  • The difference between easy cases and hard cases in AI liability
  • How the trolley problem applies to autonomous vehicles

Ed Bernardon: You open the app to your favorite pizza place, tap to order dinner, and after a long day you’re excited that the autonomous pizza delivery vehicle will be at your door in about 30 minutes. But the order never shows. The autonomous vehicle delivering your pizza was in an accident. It tangled with someone on a bike; luckily that person will be okay, but they’re in the hospital and they plan legal action seeking compensation for medical bills, time away from work, and other expenses.

So, here are some questions to ponder: Who’s responsible for this accident? Is it the pizza restaurant? The car manufacturer? The programmer of the autonomous car brain? The local government, since they built the infrastructure? Does fault lie with the AI-powered autonomous driver, or with some combination of the human actors that helped put the autonomous car on the road?

Our technological society is quickly heading toward automation that addresses convenience, economy, safety, inclusivity, and environmental concerns. We’re making machines that are smart, that make decisions on our behalf, and that are becoming more like partners than machines we own. How do we ensure safe, legal, and productive paths for all who will inevitably benefit from these autonomous thinking machines that transport us and our goods, giving us more time, freedom, and opportunities to move where we need to go? How do we find a common language to describe levels of automation, and how do we apply our laws to these machines?

Welcome to the Future Car Podcast. I’m your host, Ed Bernardon, VP Strategic Automotive Initiatives at Siemens Digital Industries Software. My guest today has experienced firsthand the importance of the connection between physical mobility and personal freedom, and that experience has steered the direction of her rather unique career path. I’m happy to welcome Katherine Sheriff back to the Future Car Podcast. Katherine is an Associate at Davis Wright Tremaine, and she’s a true pioneer in the area of AI law. Two years ago, she joined us for a conversation about the importance of developing a legal framework for autonomous vehicles. Since then, this field has exploded as the utilization of AI in transportation and elsewhere continues to rise. We talk about how AI law has evolved over the past two years, the need for consumer education and an AI nomenclature that is common to all, and the legal complications that result when machines start to act, or even become, more like people.

Welcome to the Future Car Podcast, Katherine.

Katherine Sheriff: Thank you so much, Ed. Very happy to return.

Ed Bernardon: Since you were with us a few years back, I think it’s worthwhile to take a step back; maybe you could remind the listeners of the history of how you got involved in AI and autonomous vehicle law, because if I recall, it had something to do with your grandfather and his loss of mobility and independence.

Katherine Sheriff: It did, Ed. And I believe that when we spoke before, I told you about my pawpaw, my late grandfather, who was a Marine. And he was very independent; he was someone who was never going to retire. I believe we discussed how he had talked about retiring, and then decided to drive trucks, and then decided to own service stations, and then kept buying different motor homes and things like that. So, just lots of different things that he enjoyed doing, until he developed idiopathic pulmonary fibrosis and was confined to a motorized chair. It was very difficult to see him lose the ability to drive. He used to really like to take me to LongHorn Steakhouse, and he couldn’t do that anymore, or visit me at school or anything like that. I watched him lose part of himself, and it really hurt. And that’s not unique to my family, and that’s certainly not unique to my grandfather. We have an entire world of people who are aging. And so, we have to think about mobility as a means of access to life, really. We have to think about it in terms of who has the ability to participate, and what we are doing to design a world that is accessible to the most people possible. So, that’s really what it’s about for me; that’s what it’s always been about.

Ed Bernardon: So, really, there’s this passion for designing a world where transportation is accessible, or really even beyond that, where transportation can allow people to do the things they want, even as they get older or develop an illness or whatever it might be. There’s the passion for that, and I suppose a passion for the law, that have come together and brought you into this world of AI law. What is AI law? Why do we need it? Why is it so important?

Katherine Sheriff: It’s really interesting, because legal liability considerations involve the ability to find blame, right? And that’s relatively straightforward, and I say relatively because, even without thinking about artificial intelligence, it’s still pretty complex. So, I don’t want the lawyers listening to this to think she’s oversimplifying it. No, I’m not. But if you look at the liability world and the traditional classes that you take in law school, and then you throw innovation and artificial intelligence and robotics into the mix, it’s really interesting; it brings up a lot of issues of agency. And I think at the moment, we’re really at the foothills of this proverbial mountain.

Ed Bernardon: Before you go on, what is agency?

Katherine Sheriff: So, agency is the idea that a principal has responsibility for its agent. Think about it in an employee and employer context. You might have a flower shop, and you have a flower delivery driver who is your employee. If you go from state to state – we’ll just use the United States for right now – there are different laws that permit someone to recover from you – the employer, the flower shop owner – should the employee do something that injures someone else while delivering flowers. So, that’s the idea of agency. But that’s not the only example; you could also think of it in terms that are even more removed: animals, for instance. Animals really are not our agents; however, I would be liable if, say, my dog bit you, Ed.

Ed Bernardon: I’m not coming over to your house, that’s for sure, based on that.

Katherine Sheriff: No, you’re fine. But agency, generally speaking, can be used synonymously, I think, with responsibility. So, for non-lawyer listeners, I think that would be the best way to understand it.

Ed Bernardon: Right. So, an AI machine – does it possess agency? Is an autonomous car truly independent? And AI law then starts to try to define that to some extent.

Katherine Sheriff: It does. And I think you’d be hard-pressed to find anyone who says, “Yes, an autonomous vehicle is truly independent.” A lot of what I’ve been working on lately involves making sure that we’re using the right words to even describe an autonomous vehicle, and making sure we’re describing the functions properly. But more directly to your point, the idea that AI possesses agency itself is the idea of legal personhood. And it’s laughable in some circles; in others, it’s just a really fun thought experiment, similar to, say, the trolley problem. So, it’s useful in the sense that at some point, when we have more advanced AI, these issues will certainly come up. But right now, even as developed as the technology is, we’re just not at that point.

Ed Bernardon: It’s been a couple of years since you were on the podcast. What would you say, have been the biggest things that have changed or evolved over the past couple of years?

Katherine Sheriff: From a technology standpoint, I believe that the advances are continuing to progress. But because I’m coming from the legal perspective, I’ll speak to that. And there I would say that the major advances are really on the EU side. There’s a lot going on in the United States, but the United Nations working parties – WP1, the Global Forum for Road Traffic Safety, and WP29, the World Forum for Harmonization of Vehicle Regulations – are both working, in different ways, to develop a global framework for automated driving. And it’s extremely exciting; it’s very involved. There are players and stakeholders all over the world working on this. And that’s actually the reason that I went back to my original research and spent a great deal of time, for lack of a better word, just updating it to reflect all of this work that’s happening. It’s really exciting.

Ed Bernardon: What do you mean by a global framework for autonomous driving?

Katherine Sheriff: The framework is a set of laws. And really, they’re going to vary by country at the national level – say, among different nation-states in the EU. It’s going to vary on many different levels, but the idea is that we’re going to treat technology in a way that recognizes that the entire point is to move people and move goods. And that doesn’t mean within the confines of one nation; it means around the world. AI technology is automating fleets for more streamlined supply chains, automating warehouses for sometimes even safer conditions for warehouse employees, and things like that. And then, of course, moving people like my pawpaw, if he were still with us. All of those things are going to have to be subject to some level of agreement – noting that, yes, there will be many variances. But there will have to be some level of agreement in order to permit the operation of technology moving all around the world.

Ed Bernardon: You mentioned, though, that it was European-based. Is the United States part of this effort?

Katherine Sheriff: The United States is part of the effort. And in fact, most of the tech companies are US-based. So, for instance, Waymo is very involved. There’s a professor I follow, Bryant Walker Smith; he’s from the University of South Carolina, and he also works with the Brookings Institution, and he has a great deal of scholarship on autonomous vehicles. He’s actually the professor I studied; I read everything that he wrote when I was in law school. He actually participates in these working party sessions. You have to be a representative of one of the member states to be in the formal meetings. Of course, as it’s the United Nations, it’s very regimented who’s actually there. But there are a lot of informal meetings, and – much like the way the United States works with agency rulemaking – there are a lot of opportunities to submit thoughts and to contribute in ways that do not involve actual attendance or participation in the formal sessions. Those are the types of ways that industry is participating, and the US is there in those ways.

Ed Bernardon: As we talk about the laws coming into the mainstream, and really the technology as well, my nephew – who’s soon to graduate from Cornell Law School – sent me a syllabus for a course called Law of Autonomous Vehicles. So, it seems that even in our law schools this whole idea is becoming mainstream, because these courses are appearing on college campuses.

Katherine Sheriff: And I think that’s becoming more and more prevalent. And I do have to sort of give a plug for Emory Law School, because that’s my alma mater.

I graduated from law school a few years ago, and I don’t have a STEM background; I’m self-taught. I was really interested in what my little sister was doing. She’s an engineer, and I really wanted to understand the connection: how does old law apply to new ideas? And I had the opportunity at Emory to take a course on law and technology taught by Professor Mark Goldfeder, who certainly deserves a plug. And between his tutelage and mentoring from some of the other professors at the school – David Partlett is one, Frank Vandall another – I was able to develop these ideas very quickly, in an exponential way, and I was encouraged to write, and that’s what I did. That’s what I did in law school. And I started speaking on this subject in law school as well. So, I have to note that the courses are certainly growing in number, and definitely developing really cool topics, not just in the context of mobility but also AI in the context of the arts and different things. Really neat syllabi that I’ve seen thus far.

Ed Bernardon: Well, it sounds like you’ve been on both sides of the lectern, right? So, you’ve been actually speaking and teaching about this. And I want to read to you a little excerpt here from the syllabus, it says, “Do we need automotive laws to regulate these autonomous vehicles? Or is it more appropriate to reason by analogy, and regulate these systems by existing statute?” So, imagine you were teaching this course, how would you answer that question?

Katherine Sheriff: That’s a really good question. And, by the way, I definitely would like to meet your nephew at some point, because I know that he took the time to read my paper, and I really appreciate that. I think the way to answer that question is to reframe it first. The question posed is: does the technology break the doctrine? That’s really the way academics like to say it. I would say no. For some aspects, I think we really need to rethink some things, but at the base, a lot of existing law really will apply. Now, that doesn’t mean that existing regulation…

Ed Bernardon: Because you do say some. It’s trying to figure out which some.

Katherine Sheriff: Right. Well, I mean, that’s why we get paid the big bucks. So, the example I would give is the Federal Motor Vehicle Safety Standards, which govern everything about vehicles on public roads in the United States. They are decided upon and promulgated by the US Department of Transportation through NHTSA, the National Highway Traffic Safety Administration, an executive agency. The idea is that under these FMVSS rules, there are certain ways that seats have to be in the vehicle, for instance. They have to be in certain designated spots; they have to be tested in certain ways. There are rules on the driveshaft and the designation of the driver’s seat. So, now imagine that you don’t have a driver, and that some of these safety standards are meant to protect the “driver” from impact with the steering wheel, maybe. But let’s say you don’t need a steering wheel either. That’s why NHTSA and the Department of Transportation have really been moving pretty quickly, I think, in terms of the federal regulations. A lot of people will look at me and say, “No, they’re dinosaurs,” but you know what? They keep people safe.

Ed Bernardon: One would think that when it comes to safety with an autonomous vehicle, there are two things you want to keep safe. Well, at least two things. Certainly, there are still the passengers. So, whatever systems you have – you still may need airbags or whatever it might be for the people that aren’t driving – you still need all that, and you certainly want to make sure that this other component, the AI brain, is also protecting those passengers. But in addition, you also have to protect the AI brain that’s in the car, so that it doesn’t get damaged, I would think. Not to mention that you’ll want to make sure that it’s adequate.

Katherine Sheriff: Well, sure. And I think that goes to your automatic over-the-air embedded updates; I imagine that you’re referring to cybersecurity efforts, because that goes to software. But let me toss in a third thing there and tie it back to what the EU is doing: let’s also protect everyone else around the vehicle. Let’s protect the world in which the vehicle is operating. By that I mean, that’s why the global automated driving framework is so important. The idea is that we all have similar expectations about how something works. Let’s use a stop sign, for instance. If a stop sign looks like a stop sign in Prague and in Paris, or at least such that AI can recognize it, great, because it is far less likely that you’re going to have accidents if you know what to expect. And when I say you, the proverbial you, I’m referring to the AI too. That also goes to law enforcement, it goes to all of the laws that are governing the mobility of the vehicles, and it goes to the other drivers. It’s a very oversimplified example, I know. But if a stop sign is a stop sign, then it’s going to be easier for other drivers and non-autonomous vehicles, or vehicles with lower-level AI systems, to react appropriately and safely to the higher-level automated vehicles on the road.

Ed Bernardon: I want to jump in here and eventually talk about what the technology is, how we describe it, and how people can come to understand it and trust it. But before we do that – because we have a lot of people from the technical side listening to this podcast – I think we need a little bit of law education, like a Law 101. There’s tort law, criminal law, product liability law, data privacy. What are those, and why do we need AI law on top of all that?

Katherine Sheriff: So, the reason we need AI law on top of all that – well, I’ll start backward. It goes to the question in your nephew’s syllabus, and that is: does old law apply to new tricks, basically? And the answer is yes and no. People hate that, because when lawyers go to law school, we’re taught to say maybe. And honestly, it’s not a cop-out; it is just to be as accurate as possible. If you’re a good lawyer, that’s what you’re always trying to do, and frankly, the answer’s always maybe.

Ed Bernardon: As accurate as possible, I like that. One never thinks of law and accuracy together; it’s an interesting combination.

Katherine Sheriff: Yeah. So, to answer your question, here’s the 101. Let’s start with criminal law, because that’s what people know best, at least.

Ed Bernardon: Yeah, seems simple.

Katherine Sheriff: Alright, so criminal law is about punishing for what we define as crimes. Let’s say there’s a car accident caused by drinking and driving, which is something extremely unfortunate. Well, we have, in our society, deemed drinking and driving a crime. So, the person can be punished by the state or whatever government they’re under, and that can include fines and jail time. That’s how we generally look at criminal law.

Now, let’s take the same example, the same car accident, and apply tort law. What about the private side? Tort law is considered more of a private law or civil law issue. And by civil law, I mean it doesn’t involve jail time; it’s more between people, not between people and governments – so, private actors. The idea behind tort law is basically that you can compensate people for harms. So, if the driver that was driving under the influence harms someone and goes to jail, or pays a fine or whatever, that’s great from a societal standpoint, because we want to deter unsafe actions like DUI, right? But what about the vehicle that’s messed up? Let’s just say that nobody died – let’s try to keep it not insanely sad – so let’s say the vehicle is totaled. The state isn’t going to pay for that, so that’s where tort law comes in. And that means you can sue someone – there are lots of rules around this – but you can sue someone to get money when they hurt you or your property.

Contract law also underlies all of that, and it goes to the agreements that we all make between ourselves. There can be contracts with government actors, but generally, contract law pertains to private actors agreeing with each other. So, in this example, it would be your insurance contract, your agreement. Maybe you have an insurance premium that you’ve agreed to, and maybe the person that hit you does too. There are different rules, so many rules, in insurance agreements, but that also factors in and governs who gets what and how, who gets reimbursed, and indemnity clauses and things like that.

Privacy law goes to the protection of data. It’s very important in the healthcare industry, of course, but now it’s more and more important in the AI world, because AI is trained on data. Data is the food for machine learning; it’s not going to grow or develop without data. And data comes from people – or the most valuable data comes from people. I think people are more and more realizing the value of their data, and they’re being a little tighter about giving it away quite so easily. So, that’s where privacy law comes in.

And then the other one you’d mentioned was IP – intellectual property – which is idea protection, basically. We can tack on environmental law, which is about what the car is doing to the world and how we’re protecting the air that we breathe. So, there are just so, so many facets of law. And my expertise is – it’s funny to say expertise; you actually have to be qualified as an expert in the United States – so my specialization is in products. It’s really interesting to take a look at this from a general perspective, because that sort of range is important when you’re looking at innovation. You’re looking at how something applies from a top-down approach: what’s the big picture here?

Ed Bernardon: One of the things I noticed that’s common through many of the definitions you gave: you use the word people a lot, you use the word someone a lot, you even used the word breathe at one point. But an AI machine, or something powered by AI – like an autonomous vehicle – is it a person? Is it not a person? It certainly is not breathing, but yet it’s out there doing things – good things, and potentially causing harm, as a person could. And I would imagine that that’s where many of the challenges for AI law exist.

Katherine Sheriff: I think that you’re right, Ed. And I’ll go back – and again, not a cop-out, just accurate – I’ll go back to something I said a little bit earlier, and that is, we’re not there yet. By that, I mean the current AI is not fully independent. Now, there are independent decisions that are made, and by that I mean they are occurring in an unsupervised way. Your tech audience is far better versed in this than I am, but when you provide a data set, or you provide a system of instructions – an algorithm – to a system, and you say, “Go forth and find XYZ,” you either give examples or not – supervised or unsupervised – and the machine learning basically uses all of the data to teach itself. It’s really fascinating, but you know what? It comes from someone, and when I say someone, I mean a human being, a person. So, we’re not at the point that decisions made by AI are so far removed, in my opinion, that we would not be able to attribute an action to a human actor, whether that be the programmer or the owner. There are a lot of iffy questions about who that actor is, but from where I’m standing, I think it’s still a human actor.
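
[Editor’s note: For readers who want to see the supervised/unsupervised distinction Katherine sketches here in concrete terms, below is a minimal illustration in Python using scikit-learn. The toy “sensor readings,” the labels, and the framing as a braking decision are invented for this example; it is not how any production vehicle stack works.]

```python
# Minimal sketch of supervised vs. unsupervised learning.
# The "sensor readings" [speed_mph, distance_to_obstacle_m] are invented.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

readings = [[10, 50], [12, 45], [60, 5], [55, 8], [11, 48], [58, 6]]

# Supervised: we provide examples WITH answers (0 = cruise, 1 = brake),
# and the model learns to map readings to those human-given labels.
labels = [0, 0, 1, 1, 0, 1]
clf = LogisticRegression().fit(readings, labels)
print(clf.predict([[57, 7]]))   # generalizes from the labeled examples

# Unsupervised: no answers provided; the algorithm groups the readings
# on its own, "teaching itself" the structure of the data.
km = KMeans(n_clusters=2, n_init=10).fit(readings)
print(km.labels_)               # clusters discovered without labels
```

[In both cases, as Katherine notes, a human chose the data, the algorithm, and the objective, which is why the resulting decisions can still be traced back to human actors.]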

Ed Bernardon: Well, one thing that’s come up a couple of times in our conversation already is this whole idea of accuracy and how law is intertwined with accuracy. And in order to establish accuracy, we have to accurately define what an autonomous car is, or certainly what people perceive it to be. There’s a lot of news about autonomous cars these days; everybody’s talking about them. But do you really think that people understand what it is that autonomous cars can and can’t do, what their limitations are?

Katherine Sheriff: I think some do, but I think the number is much lower than it should be. As a case in point, without naming specific examples, there have been tragedies involving “autonomous” vehicles, or automated systems, or ADAS systems, where people who were actually well educated did not understand what the vehicle was meant to do. There’s always been this really interesting pattern in the uptake and consumer adoption of technology: an initial lack of trust, then a lot of hype and excitement, followed by over-reliance, followed by misuse based on that over-reliance. So, I think the answer to your question is, maybe some consumers do, but certainly not enough.

Ed Bernardon: If you don’t understand the limitations of what you’re using, you certainly can’t build trust. And there are two sides to it, right? If you under-trust, you might not use the technology. If you over-trust, you may use it improperly, and then put yourself and the passengers – whoever it might be – in danger. Do you think there’s enough capability there now that people can start to trust these machines? Do you think we’ll ever reach the level of trust that’s really needed, the trust consumers have in cars today?

Katherine Sheriff: I think that we’re on the right track, and there are certain initiatives that I find extremely admirable. For instance, back in May, the SAE endorsed a joint effort by AAA, Consumer Reports, and some other organizations on these “Clearing the Confusion” recommendations. And it actually went toward updating the SAE standard for ADAS systems – different definitions like lane-keeping assistance, driver control assistance, collision warning, collision intervention, parking assistance. I’m sure I’m missing a bunch of them, but there are a lot of misconceptions about what is meant by, let’s say, full self-driving. What is full self-driving? What is meant by that?

Ed Bernardon: Right. We don’t even have a common language.

Katherine Sheriff: Exactly. So, the idea is that generally, consumers do not understand ADAS features, or what the features are meant to do.

Ed Bernardon: The idea is that you need a language that sets expectations. So, for instance, if you were to walk up to a car dealer and they say, “Oh, this vehicle is equipped with ADAS,” or “this one is fully autonomous,” or “this one has an autopilot,” it should be clear to the consumer what that means so that their expectations are set properly. And then they can build trust based on those expectations. Does that nomenclature exist, and do you think people understand it enough that they could actually make a purchase or interact with autonomous cars or ADAS cars in the right way?

Katherine Sheriff: I think the nomenclature does not exist on a broad level yet. However, there are multiple efforts, one in which the United States is featured, where AAA, as well as Consumer Reports and others, got together, and over and over they found that ADAS means driver assist, not driver replace. So, you have to emphasize the importance of the human in the loop and of driver monitoring systems, touch on the risk involved with Level 3 automation, and really state unequivocally that drivers can understand driver assistance and driver replacement, but when you mix those two, it is very difficult to get humans back in the loop, depending on the length of time out of the loop. This goes to driver monitoring, really. Are they even paying attention? That’s where people get hurt. So, really educating the driver is something that happens well before the vehicle purchase. And it’s the responsibility not only of automakers and tech companies, but of media and nonprofits and government. That’s why the efforts by the Insurance Institute for Highway Safety, Consumer Reports, and AAA on these “Clearing the Confusion” recommendations, which SAE has taken in stride, are, I think, representative of some of the biggest steps toward establishing some words that we can all use. Again, not to replace automaker proprietary systems or package names, but rather to clarify the functionality: what do we mean by driver assist versus driver replace? Are you building a Waymo driver? They’re not carmakers; they’re creating a driver. Or do you have a vehicle with a very enhanced autopilot? There’s a really big difference, and the difference, as we’ve seen, can be between life and death.
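
[Editor’s note: The escalation logic Katherine describes – the longer a driver is out of the loop, the harder it is to hand control back – is often framed as a driver-monitoring watchdog. Below is a purely hypothetical sketch; the class name, thresholds, and actions are invented for illustration and do not reflect any specific automaker’s system.]

```python
# Hypothetical driver-monitoring watchdog: escalate as time out of the
# loop grows. All thresholds and actions are invented for illustration.
from dataclasses import dataclass

@dataclass
class DriverMonitor:
    seconds_inattentive: float = 0.0

    def update(self, eyes_on_road: bool, dt: float) -> str:
        """Advance the monitor by dt seconds; return the escalation step."""
        if eyes_on_road:
            self.seconds_inattentive = 0.0
        else:
            self.seconds_inattentive += dt
        if self.seconds_inattentive > 10:
            return "initiate minimal-risk maneuver"  # e.g., slow and stop
        if self.seconds_inattentive > 4:
            return "audible takeover warning"
        if self.seconds_inattentive > 2:
            return "visual alert"
        return "normal operation"

monitor = DriverMonitor()
print(monitor.update(eyes_on_road=False, dt=5.0))  # "audible takeover warning"
```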

Ed Bernardon: I suppose there’s a difference between something that legally defines what a system is – it’s an ADAS system, or it’s an autonomous system – versus a product name: “This is our Autopilot; it is an ADAS system.” Is someone establishing that when you describe your system, which may be fully or partially autonomous, you have to use these terms? Does that exist today?

Katherine Sheriff: You’re asking, Is there a federal mandate to use certain words?

Ed Bernardon: Yeah.

Katherine Sheriff: No, there’s none.

Ed Bernardon: Well, okay, so maybe the next level down, is there an industry-accepted way of describing it?

Katherine Sheriff: I think we’re getting there. That’s what we really need, and there are many automakers and many tech companies that are signing on to this idea of standardized nomenclature. And that’s why it’s so exciting: people agree that expectation setting and transparency are both absolutely vital to making sure that we’re keeping consumers safe. And from a bottom-line standpoint – from the automaker standpoint, from the software and AI innovator standpoint – this is all well and good and very exciting, but if you’re anything like me, you’re doing this probably to change the world, and in order to do that, and to stay in business, you will have to generate some sort of profit, right? Well, guess what! That’s going to be really, really tough if no one trusts you. Trust is very delicate, and once lost, it’s very difficult to build again. I think we can take a page from the OEMs, as slow as OEMs are in terms of vehicle manufacturing and decision making. In the tech world, it’s iterative: you break it, you fix it, you do it again. But when you’re putting people in mobile bullets – I think that’s been the analogy before – when you’re putting human beings in some apparatus that could potentially kill them, it is extremely important to take a measured approach. And I think that’s why these OEMs – the good ones, of course – have built so much of their business models on client trust, working toward this transparent nature that engenders not only more consumer buying but also brand loyalty. People understand what they’re getting and what it does. And when there are fewer surprises, you can also count on fewer media fires to put out, frankly.

Ed Bernardon: The media fires certainly occurred, especially with the early accidents. We all remember, several years back, the accident with a Tesla on Autopilot, where someone took that to mean they could take their hands completely off the wheel, although I believe the instructions said you have to touch the wheel every so often. Think about the word autopilot, though: in an airplane it means hands off the controls, and many people probably believed that was the case when driving a car as well. So, it seems these types of terms need to be better defined, or at least universally accepted to mean one thing or another.

Katherine Sheriff: Absolutely. And as a next step, drivers may need specific training for partial automation, to understand what automation is available, how it works, and how to take over when the feature is no longer available.

Ed Bernardon: Are companies taking the steps to provide that education?

Katherine Sheriff: Yes, I would say the vast majority are. And really, after these accidents and these tragedies, I do think there is a general consensus that it’s not only necessary to use the same words to describe the same functions, but it’s also just good business, honestly. It leaves a bad taste in someone’s mouth if you tell them a product does something it doesn’t. I keep thinking about toys that I had when I was a little kid. There were Nintendo moon shoes, and you put in all these rubber bands, and on the commercial – maybe it was just me as a child, but remembering back to the commercial now – these kids were basically on a trampoline.

Ed Bernardon: Trampoline shoes in effect.

Katherine Sheriff: Right, but they weren’t exactly like that. And the rubber bands that sort of made them moon shoes – if you jumped on them too much, they might snap. So, when I was trying to look like the kid in the commercial, that wasn’t the safest way for me to use these moon shoes. Now – I don’t know why that came to mind – but let’s just sort of apply that to a vehicle. There are a lot of reasons to generate hype and to excite investors in new technology.

Ed Bernardon: But that’s probably the reason why you don’t want the vehicle manufacturers to train you to use a product. Certainly it’s helpful for them to give you instructions on how to use the product you just purchased. But today, a driver takes courses independently of the car companies, and they get their license independently too, of course, from the government that allows them to be on the road. Do you see a change in how things are licensed with autonomous vehicles?

Katherine Sheriff: I don’t, at least for the near future, because the same skills are required to drive a standard vehicle, a standard Level 0, say…

Ed Bernardon: You’ll need that for an autonomous car, at least that.

Katherine Sheriff: Right, because you have to always be able to take over, because right now you cannot purchase a fully autonomous vehicle. Let me repeat that, so everyone knows: you cannot purchase a fully autonomous vehicle.

Ed Bernardon: But if you could, wouldn’t you, as the driver of an autonomous vehicle – where most of the time you’re not driving, but maybe you need to know when to take over – wouldn’t you need training for that?

Katherine Sheriff: Yeah, that training is very important. Now, I think the onus for that sort of training may actually be on the carmakers, as opposed to the government. They can work together, but let’s just talk about resources. Setting aside industry capture, I think industry is extremely important in terms of educating government, because from a resource standpoint and an expertise standpoint, it’s just impossible for our government leaders to know everything that industry knows. And so, that’s why it’s so important for industry stakeholders to contribute to these projects and initiatives that are aimed at consumer safety. I think that is really where the tie-in, the collaboration, comes in.

Ed Bernardon: There is an organization out there called Partners for Automated Vehicle Education or PAVE. Do you see them playing a role in something like this? That’s an independent organization.

Katherine Sheriff: I think they’re playing a huge role, and I’m a big fan of PAVE. They actually had a really cool talk – I think it’s available on YouTube – about humans and automation and managing risk. I think they interviewed David Harkey from the Insurance Institute for Highway Safety, someone from Travelers Insurance – that might have been Andrew Woods – and Russ Martin, who’s with the Governors Highway Safety Association. And they all discussed consumer education and driver monitoring, and this guidance that was issued, these “Clearing the Confusion” recommendations. That goes to making sure that not only are we focusing on driver monitoring, warning drivers, and making sure drivers understand the vehicle’s operational design domain, but also talking about how manufacturers and dealerships and sales staff can collaborate behind the scenes and get behind educational websites – My Car Does What is an example. I don’t have the URL, but it’s called My Car Does What, and it’s a really neat website that exists to educate consumers about what their car does. You can put in whatever make and model you have, I think, and look at the features, and there’s a realistic, accurate portrayal of the functionality there. And it’s free. And it’s in terms that people without specialized training can understand, and that’s really the important point.

Ed Bernardon: You recently wrote a paper, “Defining Autonomy in the Context of Tort Liability: Is Machine Learning Indicative of Robotic Responsibility?”, which gets into how autonomous vehicles and AI are intertwined with the law. And I want to dive into that if we can. You mentioned in your paper that there are three categories of easy cases, cases of general agreement. Can you explain what these are and why everyone agrees on them?

Katherine Sheriff: Yes, I’ll just keep it pretty general and then I’ll direct everyone to my paper. Let me do this from a 10,000-foot view.

Ed Bernardon: Sounds great, a great place to start.

Katherine Sheriff: There’s the law, and then there’s legal philosophy, and this paper sort of ties both together. There’s a really famous legal philosopher, H.L.A. Hart, and then there’s also Ronald Dworkin. And they had these ideas that judges basically look at the law and apply legal rules to cases in different ways, and that legal rules can apply differently in different contexts. That is the idea of open texture. Open texture pretty much means that words have meanings, but these meanings are often vague. So, I’ll give you the example that Hart really likes, which is basically: no vehicles in the park. Yeah, that’s what it is – no vehicles in the park. That rule is probably meant to protect park-goers from getting run over, or, let’s say, from unruly children on motorbikes. But does it apply to roller skates? Does it apply to bicycles? Does it apply to a bronze statue of some sort of military vehicle that’s there – one that might be functional if it were not cast in bronze, but it’s not right now? How does that work? So, with that idea in mind – that’s your huge 10,000-foot view – we take that idea and distill it down to apply it to law. There’s legal philosophy, and then we have all of these laws in all of the different areas that we talked about earlier. And easy cases are those in which a judge just takes a set of facts and applies a rule as is. There’s not really a lot of interpretation going on – maybe a little bit, but it’s not…

Ed Bernardon: Straightforward. There’s no question how it applies.

Katherine Sheriff: Yes. Like, “Oh, should we make new rules for AI?” No – for instance, maybe not.

Ed Bernardon: Don’t need it.

Katherine Sheriff: Right.

Ed Bernardon: What are the easy ones that apply to autonomous vehicles and AI?

Katherine Sheriff: I think, basically, anything below full autonomy, in my opinion, would probably fall closer to the easy-cases line, because you can just use the doctrine that exists. You can take the legal framework and put it on top of new technology. We’ve been doing this for years and years. We had horses to start out with; now we have cars. A long time ago, no one had conceived of an elevator. So, a lot of the laws that we have can still apply; you just have to contemplate and conceptualize the new technology in light of the law. Where it becomes a little bit difficult is where the hard cases come in. By hard cases, I mean those where the judges have to use some discretion in applying a rule – for any rule, there are always going to be cases where it’s not clear how the rule applies. I mean those that may actually require us to take another look at the laws that we’re trying to apply to AI and to these special autonomous vehicle cases.

Ed Bernardon: Give us an example of a hard case.

Katherine Sheriff: A hard case could be anything where the vehicle is in the general AI category and it is making decisions on its own – so far removed from the original programmer that the decision made can’t be handled in the same discretionary model. Let’s say that we’re using an agency idea up until the point where the AI takes on some sort of legal personhood, for instance, which we talked about earlier. So, it’s actually acting on its own. Now, again, we’re not there yet, but that would be a hard case, because it would actually require a rethinking of some legal rules. Another idea is: what do you do when a vehicle makes a decision on its own and actually injures someone? Is there insurance that’s carried by the autonomous vehicle itself? That’s something that’s been discussed. So, it’s really interesting, and it’s sort of like the trolley problem in that it is not exactly applicable to everyday life right now. We’re not there yet, but there are really interesting ideas that may come into play, and the newest iteration of my research really focuses on what these ideas are, and then, toward the end, what we are going to do right now. And right now, it’s really all about consumer education. It’s about making sure that we understand what we currently have, and that we are describing the technology we currently have in a way that we can all agree on and that keeps everyone safe.

Ed Bernardon: You mentioned the trolley problem – maybe you can explain that in a second. But it’s related to taking an AI machine, an autonomous car, and releasing it into the world. There are going to be unpredictable situations that pop up that it really isn’t pre-programmed to handle; however, it has the capacity, to some level, to make decisions on what to do. And I think the trolley problem is one example of this.

Katherine Sheriff: I think you’re totally right. For everyone: the trolley problem is not just for autonomous vehicles; it’s a philosophical problem about ethics. The idea is all about what you choose if you’re driving a trolley and you come to a fork in the track, and you have the option to injure five people or one person – you can go left or you can go right. There are lots of different variations on the trolley problem, but the general idea is that there has to be a choice. And so, the interesting thing about autonomous vehicles – and it goes back to consumer education and transparency – is this: suppose that you can train the vehicle to make a particular decision, to value, say, five people over one. Or look at some of the research on algorithmic morality, for instance. It’s really interesting to look at, not necessarily from the programming standpoint – from where I sit, that’s interesting too, but that’s not what I do. What’s interesting to me is what consumers need to know about this. From a transparency standpoint, in a future world in which we are literally training the AI to make one decision over another – or even perhaps we just know that it’s more inclined to make one decision over another – and again, this is in the futuristic world in which we’re working with general AI – is that in the handbook that you give the consumer? Does the consumer want to purchase a vehicle with more aggressive tendencies, for instance? One that’s not going to stop at so many stop signs – or rather, not stop for so long at stop signs? These are all questions that are currently being debated, and it’s just fascinating.
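
[Editor’s note: The “value five people over one” policy Katherine describes as a thought experiment can be written down as a simple cost-minimizing choice. The sketch below is purely hypothetical – the function, the options, and the idea of a tunable weight are invented for illustration, not drawn from any real autonomous driving system.]

```python
# Purely hypothetical sketch of an outcome-weighting decision policy.
# Real AV planners do not reduce ethics to a one-line cost function;
# this only makes the thought experiment concrete.
def choose_maneuver(options, harm_weight=1.0):
    """options: list of (maneuver_name, expected_people_harmed).
    Returns the maneuver with the lowest weighted expected harm."""
    return min(options, key=lambda o: o[1] * harm_weight)

maneuver, harmed = choose_maneuver([("stay_course", 5), ("swerve", 1)])
print(maneuver)  # "swerve": the one-person outcome minimizes the cost
```

[Whether such a weighting exists, who sets it, and whether it belongs in the consumer handbook is exactly the transparency question raised here.]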

Ed Bernardon: Well, yes – do you give the consumer the ability to do that? I think the four-way stop is a great example. I grew up in the Midwest, with lots of four-way stops, and you become very expert in handling them. As you approach a four-way stop, you’ll see a car in one of the other lanes starting to come to a stop, but you can feel your own car rock back on its base, and when you feel that versus when you actually see the other car stop, you know whether you were first or not. And it’s great for navigating those intersections. Now, here in the Boston area, things are a little bit different. There aren’t as many four-way stops, and some people are aggressive and just jump right out there, while others seem to want to wait forever. So now you’ve got two people waiting forever, nothing’s happening, and traffic’s backing up. There is no right or wrong way, but there is human interaction there – eye-to-eye contact or whatever it might be – that might be almost impossible to program into the machine, at least for now. So, how do you know the level of aggressiveness to program in, or should that be a choice of the person who purchases the vehicle?

Katherine Sheriff: Exactly. Maybe it should be, maybe it shouldn’t be. Maybe these are decisions that are beyond the consumer’s purview, honestly. Some people don’t want to know how the sausage is made, but they need to know…

Ed Bernardon: If it tastes good.

Katherine Sheriff: Yeah, they need to know that it’s sausage, for instance. And so, I think a lot of the effort, at least in terms of naming functional options and features of ADAS systems and automated driving, is focused on: “Is this a sausage or is it not? What does this actually do? Is this edible? Can I eat it, or is there a choking hazard?” Just really high-level stuff, as opposed to getting into fuzzy driving decisions. But from an informed-consent, warning standpoint, it really is interesting to think about, from an academic perspective, how much consumers would want to know – and really, would it matter?

Ed Bernardon: Or maybe you could give that choice to the consumer. In other words, you could create a personality. There’s the personality of an aggressive but legal driver, and then there’s a less aggressive, maybe more passive driver – again, perfectly legal. Maybe with autonomous cars, you can dial in the flavor that you want.

Katherine Sheriff: I think that’s something that’s actually been discussed and I see no problem with that, from a high-level perspective.

Ed Bernardon: We wouldn’t want the road rage on that knob, though.

Katherine Sheriff: We have to keep it within the confines of acceptable driving; there’s always the imperative of responsibility. But I really like the idea, and it’s certainly not a new one; it’s one that has been discussed. The big debate just keeps coming back to: what do we tell the people that are buying this? And what do they really want to know, need to know – what’s going to impact the safe use of the product? So, there’s personalization and then there’s safety, and I think those are two disparate ideas.

Ed Bernardon: In our discussion, we continually talk about people and the law, because laws are made for people. But what does that really mean? Can an autonomous car, an AI machine, really be considered a person? Does it have free will or a soul? Because without that, you’re not really a person. If you’re pre-programmed to do certain things, you’re obviously not a person. So, could you really apply laws to it that apply to people?

Katherine Sheriff: I mean, I think that’s an excellent point, and that’s really the debate that’s occurring. I was fortunate to do an e-workshop on the paper. The purpose of the e-workshop was to discuss papers on robotic liability – this was back in December, and it was hosted by the University of York in the UK. There were many different takes on robotic liability, but all touched very briefly on legal personhood. And the overwhelming consensus was two things, really. First, legal personhood is not completely relevant at present, though it’s certainly an interesting idea. And second, no – right now, the current AI that we have does not have legal personhood. These autonomous vehicles do not have legal personhood. And here’s why, if you want a little bit of the background behind the legal philosophy. There are different ideas about legal personhood, about what makes consciousness consciousness, and about what makes us able to be liable. That’s really what legal personhood comes down to: how can you really be responsible for a choice? Sometimes it’s about deliberation – are you making a deliberate choice? Sometimes it’s about whether you’re doing so consciously. So, there are different categories and different characteristics, I would say, that go to legal personhood. With that said, you could argue that an AI system has one or some, but currently, because the actions are still tied to the programmer, because the algorithm is still dictated by the originator – the human – we’re just not to the point that there’s enough independent action; we’re not at the general AI point. So, I think that, no, there is no legal personhood right now, but there are very interesting and really cool ideas about all of this. My favorite discussion, I would say, and a nice overview, is Professor Pagallo’s book – it’s actually called The Laws of Robots – and my copy is in tatters because I’ve read it 20 times.

Ed Bernardon: That’s a sign of a good book.

Katherine Sheriff: It’s fantastic. The best explanation I’ve heard of all of these ideas.

Ed Bernardon: So, a car can be autonomous, and the AI brain can make that car autonomous. But the AI brain doesn’t really make its own decisions based on – I don’t know if you want to call it a soul – what it feels. For instance, going back to your trolley problem, let’s say the choice is between one maneuver that might injure a cat and another maneuver that might injure a dog. It doesn’t really say to itself, “Well, this AI brain is a dog person.” It’s always that program inside that’s making decisions based on what the LIDAR image is, or what the camera or radar is telling it – not whether it’s a cat person or a dog person.

Katherine Sheriff: Yeah, basically. And scholars have been debating this increasingly over the last decades: whether the legal systems and frameworks we have should actually contemplate and apply personhood to robots – generally speaking, to autonomous artificial agents, like you’re saying. Are they choosing the cat or the dog based on a program, versus this car being a cat car or a dog car? But…

Ed Bernardon: Maybe that’s something you can dial in as well.

Katherine Sheriff: Absolutely. I think, really, though, we have to look at the fact that there are legal systems that grant independent legal personhood, or constitutional personhood. We’re looking at human rights, and so really, you’re getting into a whole lot of areas that go well beyond liability – talking about robotic rights and things like that. It is an Alice in Wonderland wormhole, if you will.

Ed Bernardon: Well, hopefully, we won’t wait two years to have you back on the Future Car Podcast. And maybe we can title your next episode with us Alice in Wonderland and the Wormhole of Autonomous Cars.

Katherine Sheriff: Yes, absolutely. It’s really fun to talk about, it really is. But going back to really pushing consumer education and standardized nomenclature: I would say stay tuned. I think you’re probably going to see some things happen in the United States federal government, and with WP29 and WP1 – they’re all over these initiatives. Watch the SAE; it’s really exciting. And I think what’s going to happen is that it’s going to make for a safer pathway to get autonomous vehicles to market, and, in the interim, to understand what we do with the varying levels of autonomy that are all on the roads at the same time. Let’s not knock ADAS systems – ADAS systems are awesome. We just have to know what they are and what they do.

Ed Bernardon: And if you keep up your work, we know that the laws will keep up with all that; it won’t be the technology just rushing out there on its own. Well, listen, I want to end with our traditional rapid-fire question session, where I’m going to ask you a series of easy-to-answer questions. You can answer them in one line or more, or you can say pass if you want. Are you ready to go?

Katherine Sheriff: Let’s do it.

Ed Bernardon: All right. What is the first car you ever owned?

Katherine Sheriff: Plymouth Laser, and it stalled at red lights.

Ed Bernardon: Plymouth Laser. Did you name the Laser?

Katherine Sheriff: No, I didn’t. I didn’t have it for very long. The next one was a Grand Prix GT, and it was very fast. But the Plymouth Laser was the first car, and the poor thing stalled at red lights. And yeah, I didn’t name it.

Ed Bernardon: The Laser certainly didn’t make a big mark in history, that’s for sure. Did you pass your driver’s test on the first try?

Katherine Sheriff: I did.

Ed Bernardon: Have you ever gotten a speeding ticket?

Katherine Sheriff: Yes. That’s just a hard yes.

Ed Bernardon: If this is a hard yes, you have to tell me your best speeding ticket story.

Katherine Sheriff: Oh, no. Well, I’ll put it this way: I don’t think I got out of any of them. I had a couple of super speeders, and I feel terrible – I hope that children, not children, but impressionable young minds are not listening to this. I was not being very smart. I was in college – I went to the University of Georgia for undergrad – and going from Atlanta to school there was a long stretch of straight road called 316; people in Georgia might be familiar. There were a lot of speed traps there, because many people went very fast, and that Grand Prix GT – which was the second car, but sort of felt like the first one – accelerated very quickly and was very exciting. I deserved the super speeder tickets and all the fines that went with them.

Ed Bernardon: And you learned your lesson, and you never did it again.

Katherine Sheriff: I did.

Ed Bernardon: So, we don’t condone speeding here on the Future Car Podcast. If you had an autonomous car that was a living room on wheels and you were on a five-hour car ride, what would be in your autonomous living room on wheels?

Katherine Sheriff: Honestly, right now, it’s probably a different answer than what I gave a couple of years ago; I’d have to go back and check. But right now, the fun thing that I’m doing is playing the Just Dance game on Nintendo Switch. So, I would want a Switch with a monitor and just my little hand controller. And then, of course, I’d probably have my laptop as well, if I wanted to be productive, but I’d probably want to just dance the whole time.

Ed Bernardon: Dance and play video games and maybe work on the side. What person, living or not, would you want to spend that five-hour car ride with?

Katherine Sheriff: Oh, that’s easy. That would be my dad. That’s very easy. I would love to talk to him about all of the things that I’ve learned and all the things that his kids are doing, and why we’re in a vehicle playing Just Dance video game, and how this all came to pass.

Ed Bernardon: If you could have any car in the world today, except the Plymouth Laser, of course, what would it be?

Katherine Sheriff: I generally like larger cars, but I think that I would actually want the little BMW Z3 because it’s zippy and I would just go along the California coastline.

Ed Bernardon: What car best describes your personality?

Katherine Sheriff: Unfortunately, even though I’m very much an environmentalist, I would say an SUV, because it is fun and functional. And as soon as we have hybrid and fully electric SUVs all over the place – awesome. But yeah, I’m very much an SUV.

Ed Bernardon: What do you do to relax?

Katherine Sheriff: Other than play Just Dance? I sing. I am actually a yoga instructor, I’ve been certified for a few years, and I love to run.

Ed Bernardon: What do you wish you were better at?

Katherine Sheriff: Saying no.

Ed Bernardon: Greatest talent not related to anything you do at work?

Katherine Sheriff: I have really excellent foot-eye coordination.

Ed Bernardon: Foot-eye coordination, interesting.

Katherine Sheriff: For soccer and different little dance games and things like that.

Ed Bernardon: What’s your favorite city?

Katherine Sheriff: Oh, Atlanta. It’s still Atlanta, I can’t help it.

Ed Bernardon: Atlanta. You dropped the A.

Katherine Sheriff: Atlanta? Yes, that’s where my Southern comes out.

Ed Bernardon: If you could uninvent one thing, what would it be?

Katherine Sheriff: Robocalling.

Ed Bernardon: If you could magically invent one thing, what would it be?

Katherine Sheriff: I would like to fly. So, some sort of human flight mechanism.

Ed Bernardon: Alright. And here are the last two questions. These last few questions are going to bring it all together – everything about law, everything about robots and AI, and all that. If within the next 100 years the possibility existed for robots to be on the Supreme Court, would you accept a nomination to that court if all the other eight were robots?

Katherine Sheriff: It depends on if they were female robots? I’m just kidding. Yes, of course. Absolutely. I would accept a nomination at any point. They could be puppies and I would accept the nomination.

Ed Bernardon: Katherine, thank you so much for joining us on the Future Car podcast. I can’t wait till we have you back again.

Katherine Sheriff: Thank you so much, Ed, the pleasure is mine and I cannot wait to visit again.

Katherine Sheriff, Associate Attorney at Davis Wright Tremaine

Katherine is an associate attorney in Davis Wright Tremaine’s Technology Group. She devotes her legal practice to identifying areas of opportunity and potential challenges in emerging technology sectors, particularly in the dynamic field of autonomous vehicles. Katherine began her legal career as a litigator with a practice focused on product liability and insurance litigation issues related to liability and regulatory compliance. Today, she puts her passion for emerging technologies and mitigating risk to work for her clients in order to help bring breakthroughs in science and technology to market.

Ed Bernardon, Vice President Strategic Automotive Initiatives – Host

Ed is currently VP Strategic Automotive Initiatives at Siemens Digital Industries Software. His responsibilities include strategic planning and business development in the areas of design of autonomous/connected vehicles, lightweight automotive structures, and interiors. He is also responsible for Future Car thought leadership, which includes hosting the Future Car Podcast and developing cross-divisional projects. Previously, he was a founding member of VISTAGY, which developed lightweight-structure and automotive interior design software and was acquired by Siemens in 2011; before that, he directed the Automation and Design Technology Group at MIT Draper Laboratory. Ed holds an M.S. in mechanical engineering from MIT, a B.S. in mechanical engineering from Purdue, and an MBA from Butler.

If you like this Podcast, you might also like:

The Future Car Podcast

Transportation plays a big part in our everyday lives, and with autonomous and electric cars, micro-mobility, and air taxis, to name a few, mobility is changing at a rate never before seen. On the Siemens Future Car Podcast we interview the industry leaders creating our transportation future, informing our listeners in an entertaining way about the evolving mobility landscape and the people helping us realize it. Guests range from C-level OEM executives, mobility startup founders/CEOs, pioneers in AI law, Formula 1 drivers and engineers, Smart Cities architects, and government regulators, to many more. Tune in to learn what will be in your mobility future.

This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/podcasts/the-future-car/katherine-sheriff-pioneering-the-legal-framework-of-ai/