Host Sean Harris speaks with Daniel Hinkle, the Senior State Affairs Counsel at the American Association for Justice, about robot cars and the future of auto liability.
Sean: Hello and welcome to Civilly Speaking, OAJ’s monthly podcast on practical and timely legal issues, I am your host, Sean Harris. We are very pleased today to have with us Daniel Hinkle. Daniel is the state affairs counsel at the American Association for Justice. Daniel Hinkle, welcome to Civilly Speaking.
Daniel: Thank you so much Sean. I really appreciate being here.
Sean: Our topic today is robot cars, and when we say robot cars, that conjures up lots of different ideas. What are we talking about when we're talking about robot cars?
Daniel: Well, the way it's been framed by the engineers behind this is that we're talking about a vehicle that has an automated driving system installed in it. What's an automated driving system? An automated driving system is a system that is capable of monitoring the roadway environment and responding appropriately within its given domain. So really, when we're talking about a robot car, what we're talking about is a car equipped with a system that is capable of driving. That system is, for all intents and purposes, the driver when it is engaged.
Sean: And when you use the term system, I gather that doesn't necessarily mean, you know, person in car, no hands on the wheel and no control. There are different kinds of levels?
Daniel: Absolutely right, and so there are different kinds of levels and different kinds of designs, but there are a couple that really stand out in people's minds. One approach to building a robot car is what I will call the Tesla approach, where you are going to continue to have the steering wheel, the brakes, and the accelerator pedals. A human being is going to be able to drive that car as much or as little as they want, so it will be a dual use vehicle. You will have this automated driving system installed on it that you can turn on, and then, depending on the level of it: if it's a level 3 system, which is the way the Society of Automotive Engineers termed it and people might be familiar with this, a human being needs to be ready to take control if there's an alarm essentially saying, hey, we don't know what's going on, we need you to take back over. The way that that's been described to me is that it's like a fire alarm. You don't need to be sitting around in your office waiting for a fire alarm to go off so that you can go do what you need to do in the event of a fire. You can go about your business, you can be reading a book, but you need to be at least awake enough, available enough, to respond if the fire alarm goes off. That's one approach, and that's what level 3 would be. Now, if you move to a level 4 or level 5 vehicle, these are vehicles where, if the fire alarm goes off, essentially a robot will take control of the system and deliver you safely out of the building, to continue the fire alarm analogy. So there's a separate system within the vehicle that, even if there is a failure, could deliver you to what they call a minimal risk condition without your involvement.
Now, in the case of the Tesla example, that doesn't mean the human won't be able to take over control of the car; they still could take control if they chose to. And this sort of handoff between when the automated driving system is in control of the car and when the human is in control is a very complicated, difficult tradeoff that a lot of manufacturers think cannot be done safely or successfully. So a lot of companies are not going to pursue this type of system, and those companies are building what they're calling an automated driving system dedicated vehicle. This is the approach of Waymo, the Google spinoff. Their goal is to remove the human driver from the equation entirely. They want to build a system that is entirely self-contained, where the vehicle is only ever being driven by an automated driving system. So this means they are not going to have any of these level 3 vehicles. There is never going to be a situation where there's a human fallback. They're trying to design their system so that even if the main automated driving system were to fail, they would have a fallback system that could at least pull the car off to the side of the road, or achieve some sort of minimally safe position to deliver the car to, or you essentially have to wait for someone else to come get you, I guess.
Sean: So this idea, the level 4 and level 5, the kind of completely autonomous vehicle, that's not here yet?
Daniel: It's not here yet, although I will say that Google's Waymo claims that it is, out in Phoenix. Just this past week they said that they are putting their human driver into the back seat of their vehicle, and essentially that human operator, the one that's supposed to be monitoring the roadway environment and making sure the vehicle is driving safely, their only role at this point is to press a button to pull the car off to the side of the road if something goes wrong. So they're not fully here yet, but Waymo is claiming that they are very, very close to delivering this type of technology.
Sean: And do you have a sense of what kind of time frame we're looking at as far as when we might see one of these out in the wild?
Daniel: Well, that I think is going to be very context dependent. The reason why Waymo is saying that they're very close to doing this is because they are doing it in the outskirts of Phoenix, where it never rains, it's always sunny, and the weather and traffic conditions are relatively light in the suburbs they're testing in. So in these very narrow, geofenced conditions, specific only to that area, you might see these level 4 vehicles being put on the road in the next couple of years. Now, how long until they can make them run at night, make them run when it's raining or in the snow, and really build them out around the country and deliver that sort of technology? That's an unknown, because they really do have a technological barrier to making them drive safely in these different weather conditions or in different cities with different driving conditions. So on that side of the equation, in this sort of ride hailing context, we're going to see some very narrow deployments in the next couple of years, and then it's kind of unknown how quickly that scales out across the country. But on the other side, the Tesla side, or the General Motors or Ford side, where these companies are going to be building essentially dual use vehicles that have an automated driving system installed in them, Tesla claims that they are building all of their new vehicles today with the hardware necessary to allow this sort of level 3 or level 4, or even potentially, eventually, level 5 deployment, where you can just push a button and allow the car that you own to drive itself, or rather allow the algorithm developed by the company to drive the vehicle. Tesla is building them right now. Ford says it's going to have something like this delivered by 2021. General Motors is building their electric vehicle, the Bolt.
They say they have a special facility already set up that is building these vehicles as fast as they can. The only thing they are waiting on, on that side of the equation, is that they don't have the software systems necessary to do what they want to do with the level of safety required before they feel comfortable saying: we're responsible for monitoring the roadway environment and responding appropriately; you, the human being who might have had to get us onto the interstate or might have had to drive some of the time, can read your book now. That sort of technology is maybe 3, 4, or 5 years out, and even when it does come out, we're not sure how extensive it's going to be, so it's unclear how quickly or widely deployed that sort of system might be.
Sean: And I gather that, you know, if we think of emerging technology in the last couple of years, things like Google Glass, which was supposedly the wave of the future, wearable computers, didn't catch on. I suspect there's no guarantee that robot cars even catch on in the first place.
Daniel: That's absolutely right. There is still considerable public skepticism about robot cars. This is occasionally fueled by a crash involving what was described as a robot car or a driverless car. So the Tesla collision down in Florida, that was not an example of a truly driverless car; that was actually just a driving assist system. But the way Tesla had marketed it, and the way they had failed to correct the record whenever their vehicles were described as self-driving or driverless, fueled this impression that it was a robot car. And then, when you have this catastrophic collision down in Florida, where a semi tractor-trailer pulled across the road but the Tesla driving system didn't even slow down or hit the brakes or do anything to try to avoid the collision, that led a lot of people to feel that this technology was unsafe and was inherently never going to be there. This is going to be fueled every single time we see a robot car collision going forward, and if the American public doesn't accept this as a safety technology, then it's hard for me to imagine it ever taking off beyond maybe some narrow contexts. So I do think that's still a possibility out there. Google Glass is a good example. Another example might be the Segway, where maybe it's used in some cities or by tour buses for some narrow touring purposes, but we never really figured out a way to make it safe enough and convenient enough that people ever really felt like using it.
Sean: I always thought those Segways were great, but every time I talk about wanting to ride one, my kids think it's dorky.
Daniel: I lived in downtown DC for a while, and we'd see the people out on the Segway tours around the Mall pretty much every single day. They look like they're fun, but you've got to wear a helmet and you're sort of just standing three feet taller than everyone else. It's kind of weird.
Sean: Well, and you mentioned the idea that it's safer might not catch on, or people might not buy it, but of course that's how the auto companies are selling it, right? That this is safety first?
Daniel: Absolutely, and I think that is borne out by the history of the automobile industry. The automobile industry loves to talk about safety, but anytime you try to pin them down on the fact that their vehicles are not safe, they shift the conversation. You can go back: I've got The Struggle for Auto Safety and Ralph Nader's Unsafe at Any Speed sitting on my desk right now, and I'm going through the history of the way the automobile industry has tried to change the conversation about safety, to put it on the driver, or to put it on the highway system, or to put it on anyone else except for the people who were designing these unsafe vehicles, rather than saying no, it's their responsibility and they're at fault. Right now, this technology is not proven. We have no way of believing that this technology is going to be safer, and yet you're absolutely right: whenever NHTSA gets up in front of everybody, or whenever an auto manufacturer gets up and starts talking about this, they talk about how human drivers are the problem, and how, if we can remove human drivers by putting a robot in charge, we will somehow miraculously reduce the number of collisions. That statement is bullshit, not only because we don't know that they can do that, but also because the history of the automobile industry indicates that they won't do that. We currently have more outstanding recalls on vehicles on the road than in any other period in our history.
Just some of the most recent scandals, say the GM ignition switch, are an example of how manufacturers routinely put profits over safety, and that's going to continue to happen with driverless cars. Driverless cars require a near constant investment of resources to keep them up to date, to keep pushing updates to them, to keep their maps current, and any reduction in that investment is potentially dangerous. So when they talk about reducing collisions by ninety percent, that is envisioning a future where they don't care about profits, which I don't believe will ever come to be.
Daniel: Does that make sense? I was kind of rambling there.
Sean: Yes, unfortunately that makes complete sense. And so, if we're talking about safety from a trial lawyer's perspective, how do you envision the future of auto liability cases going forward, where theoretically, or at least the way it's been presented, "there is no driver to hold negligent"?
Daniel: When they frame this as a driverless car or a self-driving system, they're displacing the agency there, and I think we're going to have to fight to maintain it. These are vehicles that are being driven by an automated driving system. That automated driving system is under the control of the company that put it on the road. The company with control is the company at fault when that automated driving system commits some driving behavior that causes a collision, that leads it to drive in a negligent manner. So I think our biggest argument here is to hold them accountable the way we hold any other driver accountable. If they run a red light, or if they drive in a negligent manner that causes a collision: there is a duty, as a driver, to drive as a reasonable person would under the circumstances and avoid a collision; they breached that duty by releasing this algorithm, by putting this algorithm in their cars when it could not adhere to that duty; that breach of duty caused a harm; and therefore they're liable for the damages it caused. I think that's the sort of argument we should be working on. Now, this is not exclusive of a products liability argument as well. The problem is, if we had to fight all of these as, say, a design defect claim or some other theory of liability, you would eventually have to show exactly what went wrong, and that process of showing exactly what went wrong, rather than that something went wrong, can be extremely expensive and difficult to comprehend, if it can be done at all. One of the unique issues involved here is that the algorithm determining how the vehicle drives down the road is inscrutable.
When they release it, they can't predict exactly how it's going to act in every situation, and even after it does, say, run through a red light, they can't go to the code and say, aha, this is the reason why it screwed up, we'll fix it and we'll be done with it. We aren't going to be able to get that level of precision as to what went wrong, and so holding them liable under a theory of negligence, as the driver, is a position I think we should be fighting for, both in the legislatures and eventually in the courtrooms when these cases get started.
Sean: Well, and we were talking about this earlier, this situation certainly harkens back in my mind to the idea of strict products liability, right? Holding them accountable based on the defective design and nothing more.
Daniel: Absolutely right, and there's a way I think we can talk about this in the context of products liability, using concepts like res ipsa loquitur, the idea that they had control over this process and something went wrong, so we should be able to hold them accountable because they were the only ones who could have originated that harm; or the consumer expectations test, and how the consumer expected the vehicle to behave in a situation and it did not meet that expectation. This sort of strict products liability framework would totally work. The problem is that strict products liability has been eroded in the majority of states at this point, to the extent that it would make it very, very difficult to recover. It might make it cost prohibitive to recover in a large number of the collisions that occur on the streets that this technology could be involved in.
Sean: We've been using the term robot car primarily, as compared to driverless or self-driving, and I gather that is a deliberate word choice on your part.
Daniel: I like the term robot car almost out of a process of elimination, because when we talk about driverless cars or self-driving cars or autonomous cars (autonomous means self-governing), what we're doing is isolating the vehicle technology, the automated driving system, from the people, the corporation, that have control over that technology. I think the emphasis should be placed on the people with control rather than on the technology itself, and I like robot car because people think of robots as doing someone else's bidding; they're working for someone else. I guess the most technically accurate way to talk about this would be to say a vehicle equipped with an automated driving system, or a vehicle controlled by a company through an automated driving system, but that doesn't just roll off the tongue.
Daniel: So robot car is what we sort of fall back on.
Sean: Words matter, right? I like the idea, as you say, that by having a driver, whether that's a human being or a company that is writing the code and implementing the technology, there is someone to hold responsible, as compared to: well, it's a driverless car, there's no driver at all.
Daniel: Right, and I think you get into these questions like, let's say a driverless car gets into a collision, who's liable? We would never ask that if we said, oh, a company is driving the car and it gets into a collision, who's liable? Of course, then the answer becomes inevitable just by the way you phrase the question. So I think a lot of the confusion out there about who's liable in a driverless car collision comes from the fact that we are framing it as a driverless car collision, and that's not an accurate representation of the way I understand the technology is going to be implemented.
Sean: Well, Daniel Hinkle, this is fascinating stuff, and I am sure there is more to come in the future on this topic. Thanks very much for joining us here on Civilly Speaking.
Daniel: Sean, thanks again for having me I really appreciate having the opportunity to speak with you today.