In this episode of Civilly Speaking, host Sean Harris talks with Julian Gomez, an attorney from McAllen, Texas. Julian provides insightful information on where we stand with crash avoidance technologies and autonomous vehicles today. He also shares his thoughts on theories of liability and what the future looks like for attorneys who handle car crash cases.

Sean: Hello, I’m your host, Sean Harris, and this is episode 56 of Civilly Speaking, brought to you by the Ohio Association for Justice. Today is February 24th, and I’m here with our guest, Julian Gomez. Julian is an attorney in McAllen, Texas, and our topic today is robot cars. Julian, thanks very much for joining us here on Civilly Speaking.

Julian: Oh, thanks for having me.

Sean: Now, when trial lawyers these days hear about robot cars, lots of us, frankly, get scared, because it has the potential to be a gigantic shift in the practice. From your perspective, are robot cars the end of car accidents and the end of our practices as we know them?

Julian: That’s a great question, Sean. I wish that I had a magic ball and could foretell the future, because if I did, I would be in the Bahamas right now on my 150-foot yacht. So the real answer is I don’t know. But if you look at the data that’s really coming out right now, it seems to suggest that car wrecks are going to be around for quite some time, that robot cars, what you would call level five autonomy, are really very far away, and that for at least my lifetime, and arguably into my child’s lifetime, there will still be car wrecks. And with car wrecks come attorneys.

Sean: Now, there are a lot of terms that we hear talked about in the context of robot cars or self-driving cars, and we should probably clear up those definitions at the beginning: crash avoidance technology versus autonomous vehicle. What’s the difference, and what do those terms mean?

Julian: I think that’s also a perfect way to start. The Society of Automotive Engineers has come up with six levels of autonomy, zero through five. Zero would be something like what we kind of grew up driving. I learned to drive on, I don’t know, probably a 1985 or 1990 Dodge pickup truck. It’s just a regular vehicle with no real technologies that would prevent a crash.

Sean: Mine was a 1984 Oldsmobile Cutlass Supreme.

Julian: See, we grew up in the same generation, and we both learned to drive on level zero autonomy vehicles. On the opposite end of that spectrum, you’ve got level five autonomy, and that is a car that drives itself all by itself, with no human interaction, all the time, under all conditions. That’s an autonomous vehicle. There’s a dividing line, really, between level two and level three. Everything from level zero, one, and two is a vehicle that’s got crash avoidance technologies, and that’s basically a vehicle equipped with technologies or components that work together as a system, without direct human input, to avoid crashes. And levels three, four, and five, that’s a vehicle that can actually perform all of the functions a human driver would perform, without human input. The difference between three, four, and five is that three can do it under limited circumstances, four can do it under expanded circumstances, and five can do it under pretty much all circumstances. There are no level five cars out there right now, and at least commercially available to you and me, there probably isn’t a level three vehicle either. So everything that’s out there in the marketplace right now is level two, and that’s the Teslas, the Mercedes, the Audis. There could maybe be some argument that some of those, like the General Motors Super Cruise, are touching into level three. But the reality is, we’re all still probably at level two, in crash avoidance technologies, and the only autonomous vehicles on the road are vehicles that are being tested.

Sean: And I would imagine that a lot of the confusion in these names comes primarily from marketing, not from the engineering side.

Julian: Again, you’re hitting the nail on the head. I’ve tried to come up with a way to describe some of the marketing that’s going on. The companies that are selling these technologies, the automobile manufacturers, the component parts manufacturers, I think manufacturers and retailers in general, have kind of figured out that they can strategically misinform consumers and there’s no real consequence to it. Instead, we call it puffery. So if you call something self-driving and it doesn’t really drive itself, well, we didn’t really mean that it drove itself, because we gave you a little warning a little bit later on that said you still have to drive, even though we called it self-driving. But when the consumer hears self-driving, all they think is that it drives itself, irrespective of the warning. And so manufacturers, retailers, companies, corporations have figured out, let’s strategically misinform the public as to the capabilities of these vehicles by giving them catchy names, and that’s why the public thinks that these cars do more than what they really do.

Sean: Where are we today with the technology, as far as what’s commercially available? And why don’t we really have autonomous vehicles today?

Julian: Well, let’s kind of break that down into bite-sized pieces. To understand the technologies we have today, it’s maybe important to understand the types of crashes we have. If you close your eyes and picture what an automobile looks like, and then you picture the interior of the automobile, that interior, the cage that surrounds the area where we sit, we call that the occupant compartment, or the safety canopy, and the reality is that it’s a rectangle. Right? It’s a rectangular cube, and as a rectangular cube, there are only a few ways you can bring force to bear on that cube. You can bring force to bear from the back. You can bring force to bear on the sides. You can bring force to bear on the top, or the roof. Or you can bring force to bear on the front of the vehicle. So I think the easiest way to figure out where the technology is today is to classify crash avoidance technologies by the types of crashes they are trying to avoid. You have roof impact avoidance. That’s something like electronic stability control, which has been standard in vehicles for several years now and prevents an automobile from rolling over and bringing force to bear on the roof. You have rear impact avoidance systems, and that’s like blind spot detection, or the little sensors on your car that keep you from hitting things as you back up. You have side impact avoidance. Those are technologies like lane departure warning or lane keeping, which keep your automobile in a lane of traffic when you are driving, or warn you when you’re coming out of a lane of traffic. And then what I would really consider the granddaddy, if you will, of crash avoidance technologies: forward impact avoidance. That’s forward collision avoidance, adaptive cruise control, the vehicle sensing what is in front of it, making a calculation, and deciding whether to accelerate, decelerate, or brake to avoid a collision in front of it. So that’s kind of where we are today. And then, why don’t we have autonomous vehicles yet? It’s really more complicated than anyone thought it was going to be, and you’ve got what I would consider some of the leaders in the industry, right? You’ve got Waymo, who probably has the fewest public disengagements. Let me digress for just a second. The public, you and I, consumers, everybody listening to this podcast, we really don’t know how good the level three and four vehicles being tested right now are. We don’t know how effective they are or aren’t, because the manufacturers control that data. They’re only letting out what they would like to let out. The only real exception to that is out in California, where they measure disengagements. But disengagement, even in California, is really kind of a loosey-goosey term, and what Waymo might call a disengagement is different than what General Motors might call a disengagement.

Sean: And forgive me, when you use the word disengagement, tell us what you mean by that.

Julian: That would be where a human has to take over the vehicle, and that’s, you know, that’s a great point. Right now, with maybe one or two exceptions, all of the vehicles being tested out there still have a human, literally behind the steering wheel or the joystick, who can take over if they need to. Right? So NHTSA, the National Highway Traffic Safety Administration, just granted the first exemption in the United States to a company called Nuro, and if you watch the news, that’s the little vehicle that kind of looks like it’s got four doors that open up vertically, where you could put your groceries, or where you could put Domino’s Pizza. They just granted the first kind of exception for those vehicles to hit the road, because up until now, if you had a car, it had to have a steering wheel, it had to have brakes, it had to have the items that were required by the Federal Motor Vehicle Safety Standards. So there was still a human in these vehicles who was charged with taking over the driving if the vehicle had a problem, and that’s what a disengagement is. But Waymo could classify a disengagement as, oh, well, he just had to pump the brakes, we don’t call that a disengagement, whereas General Motors says, well, he had to pump the brakes, we are calling that a disengagement. There isn’t an official definition of what a disengagement is. But we were talking about why there aren’t autonomous vehicles yet, and Waymo, who at least according to California had the fewest disengagements in 2018, and as of today the 2019 data isn’t out, they’re coming out saying things like: we overestimated what the vehicles could do. It was more complicated than we anticipated. We’re not going to have autonomous vehicles anytime soon, maybe never. Right? This is the CEO of Waymo coming out and publicly saying that, and others have kind of said the same stuff. And again, they say that, but at the same time, they’re marketing kind of in a different way, and that’s real interesting. But at the end of the day, it’s really a complicated issue, and it’s complicated because we’re humans and we are subjective, right? We’re emotional beings, and a computer is objective. It’s a switch at the end of the day. The electricity is either running through it or it’s not. It’s a binary system, a zero or a one. And that interaction between the human and the computer, because that’s all the robot car, the autonomous vehicle, really is, is really complicated. If you remove the human, right? You’ve got autonomous vehicles already out there in closed systems. That’s not that hard to do. That’s the monorail at Disney World. That’s, you know, you go to most airports, right? There’s a train that takes you between terminals, and there’s not a human being necessarily driving those. That’s the computer driving itself in a closed system, because you don’t have that human-computer interaction. But when you put humans in cars on the same roads and you try to make them play nice with each other, it’s not always so easy.

Sean: I’ve heard stories from some of the initial testing where, for example, when the computer sees a stop sign that has mud or a sticker or something on it, it doesn’t recognize it as a stop sign.

Julian: You’re absolutely right, but it doesn’t even have to be that nefarious, right? That’s somebody intentionally doing something with the stop sign. In fact, there was a news article that came out today, or over the weekend, where McAfee was able to trick a Tesla into speeding by changing the speed limit on a sign. The Tesla saw the sign and said, oh, well, the speed limit is 100 miles per hour here, or 80 miles per hour, or 50 instead of 40, and the vehicle was tricked into speeding. So it can be nefarious like that, but it doesn’t even have to be nefarious. You guys are in Ohio. It snows. You get some snow on a stop sign and now it’s white; you can’t really see that it’s red. Maybe the shape is obscured because of snow or ice on it, and the vehicle can’t tell what’s there. It really is just a complicated issue, and in essence, the term artificial intelligence or machine learning is again maybe disingenuous, in that the programmer has to have essentially anticipated all of the outcomes so that they could program the algorithm how to act when it sees them. And as human beings, you know, we do dumb and unforeseeable stuff all the time, but we kind of grew up with that. Last night on Instagram, I saw a picture. I love to fish, right? I love to go fishing, and I follow this Instagram account called the Qualified Captain, and they show people at the boat ramp doing just, you know, dumb stuff all the time. They showed a picture of a guy driving a little station wagon with a boat, and I don’t mean a little canoe, a boat, on top of the roof of the station wagon, with his mountain bike right next to it. Who would anticipate that? I recently saw another picture, again on the same account, of a guy sitting in the trunk of a car holding a dolly with a refrigerator on it. How do you anticipate that? And if you can’t anticipate it, it’s hard to program.

Sean: So because we’re lawyers and we deal with folks who’ve been injured in these situations, what are the theories of liability out there now for dealing with these cars?

Julian: So, remembering again that we’re still at level zero, one, or two, which is crash avoidance technology, there are two main theories in existence. And if you’ll allow me to digress just a second here: any time as an attorney you’ve got a case with catastrophic injuries, if the negligent defendant does not have enough coverage to make the plaintiff whole, you need to be looking at products liability theories, every single time. You know, there’s a statistic being thrown around in this robot car world that ninety-four percent of catastrophic car wrecks are caused by human error. I think that’s bullshit, but that aside, it’s a misused statistic, because while ninety-four percent of wrecks may be caused by human error, assuming that’s true, that doesn’t mean that ninety-four percent of injuries are caused by human error. Right? Seatbelts fail, airbags fail, seats fail, tires fail, the occupant compartment fails. No matter what the wreck is, if you’ve got catastrophic injuries, you need to be investigating the case for a products liability theory. And within products liability with respect to crash avoidance technologies, there are two main theories: there’s failure to equip the vehicle with a crash avoidance technology, and then you’ve got failure of the actual crash avoidance technology.

Sean: And do we suspect that Congress will want to stick its nose into this in the future?

Julian: There is probably no doubt that Congress will want to stick its nose into this area, because at least right now, there is no regulation. There’s no law directly on point. And you’ve really got to hand it to a guy by the name of Lee Brown, who was the president of the Attorney Information Exchange Group, which is kind of the national automotive products liability group. He saw this issue before anyone else did, and he put together a little committee, and I was fortunate enough to be on that committee. He started sending us to Washington to visit with NHTSA, the National Highway Traffic Safety Administration, to ensure that NHTSA did not regulate in this space. And then AAJ, the American Association for Justice, got involved and really took over the lobbying on this point, and has done an excellent job of keeping consumer rights available, not only at the federal level, right? No federal preemption, no federal immunity. And then a young man by the name of Daniel Hinkle, who’s there with AAJ, has done a great job of teaching the realities behind these vehicles to the state trial lawyers associations. So, by and large, consumers still have both of these theories of liability available today, and hopefully, even if Congress does stick its nose into this area, we’ll still be able to find justice in America’s courtrooms, consistent with the 7th Amendment.

Sean: Yeah, and that’s kind of been my concern going forward: that every car crash case would become purely a products liability case, which is, you know, expert intensive and expensive and all the things that come with that. But it’s just as reasonable, subject to who’s in the room when the statute is drawn up, to define the driver as the company, right? The manufacturer, as it is that the driver would be a human being.

Julian: Absolutely. So I clerked for a couple of federal judges, and in between those two clerkships, I had the honor of working for Fred Baron in Dallas. I remember asking Fred, this is 20 years ago, right, I was like, do you think that asbestos cases are still going to be around? And Fred assured me. He said, there will be mesothelioma cases for the rest of your legal career, and while I certainly haven’t reached the end of my legal career, he’s right. There are still mesothelioma cases out there today, and it appears there will be mesothelioma cases for the foreseeable future. There will be, I’m relatively confident, negligence cases with cars for quite some time to come. The problem is just too complex; we’re not going to have these Jetsons-type ideas, where the cars are just driving themselves, anytime really soon. And so even if there’s a crash avoidance technology right now, the way the car manufacturers are defending those cases is to say, well, we may have called it self-driving, but that doesn’t mean that it drives itself; you’re still responsible, as the owner of the vehicle, to intervene. So I think consumers are going to have not only the products liability theories as means of recovery when there are catastrophic-type injuries, but you will still have negligence cases in the run-of-the-mill kind of car wreck case, where you don’t have that level of damages, because potentially the driver was negligent in the way they were operating the system. And I think the system itself can be defective, too, because if you look at the Uber wreck that happened out in Arizona, what wound up happening was the safety driver was watching, I believe, The Voice, the television show on NBC. Uber had turned the brakes off on that vehicle and was relying on the safety driver to intervene if the vehicle needed to brake, but the safety driver was lulled into thinking that the car was driving itself, and, well, then bad things happen.

Sean: Right.

Julian: And so that lulling. Right?

Sean: So, bottom line, of course, the question everybody wants to know the answer to is: where are we headed? What’s coming? What should we expect?

Julian: Well, I do think that the problem is complex, and I do think that it is still a long way off. We are going to get to level three through level five autonomy eventually. I just don’t think it’s going to come as quickly as everyone initially said, and again, if you look at some of the leaders in the field, they’re walking back what’s going to be realistic. Our roads are probably going to have to be re-engineered. The vehicles are going to have to be re-engineered. You’ll probably see it in limited rollouts. So that Nuro vehicle, at least according to reports, they’re going to roll it out here in Texas, in Houston, in a very limited area. The vehicle is going to drive at very low speeds, and they’re going to test it. Right? But the first truly autonomous vehicle, a level three vehicle, that hit the road, and I can’t remember if it’s Sweden or Switzerland, but it was one of the S countries in Europe, is an 18-wheeler type of vehicle, a commercial truck, that drives from one plant to another plant, about 300 yards on the highway, and it goes a whopping three miles per hour. I’m serious. Go Google it. You’ll see the video; it’s a white truck with some little green markings on it. You’re going to see limited, closed-system-type rollouts of level three autonomous vehicles, and then as the technology matures, you might see level four. And then maybe, maybe way down the road, you know, ten, fifteen, twenty, thirty years from now, you’ll see level five type technology: a vehicle that can drive itself from South Texas to, let’s say I wanted to go skiing, Colorado, right? One that would be able to handle the heat, the wind, the rain, and then get into snow and ice. Just think of your windshield when you’re driving in snow. Well, at least you’ve got windshield wipers, right? You’ve got defrost. How does a camera, and there are manufacturers who are limiting their technology only to cameras, see through something like that? I think a better system is a redundant system, one that’s got cameras, that’s got radar, that’s got LiDAR. The problem is going to be really tough, and it’s eventually going to come, but that’s down the road. And the other problem that I think is limiting it, that we didn’t really talk about, is hacking. We talked a little bit about how McAfee was recently able to trick that Tesla. You just experienced the Democratic primary in Iowa. There is not one computer system that I’m aware of that has not had some type of failure, and when you’ve got human lives on the line, they’re going to have to really test these systems for a while before that comes. But eventually, in places maybe like Manhattan, you won’t be able to drive; you’ll have to go in an autonomous car. It’s coming. It’s not an if, it’s a when.

Sean: And we should say all this assumes they get the technology right at some point, and that people will accept it. The classic example is Google Glass, which was very advanced, but people didn’t like it and didn’t trust it.

Julian: That’s a great point. So I love my mother, right? I mean, I love my mom, and we’ve talked about some of this stuff, and she’s like, but I don’t want no computer driving me. Getting buy-in from the public is going to be difficult. And a lot of times we only really think of ourselves as a state or as a country, but I was recently in Costa Rica for a meeting, and it kind of brought to my attention that if we think the US is ten, fifteen, twenty, thirty years from this technology really working, Costa Rica is double that, triple that. And so with trade agreements like we just had with the new NAFTA, right, with the USMCA, you have commercial vehicles that cross our borders and go from Mexico into the United States. Well, if all you have are autonomous vehicles on the road operating at that time, you’ve got to then harmonize all of the regulations between those three countries, and that’s just those three. What if you have somebody coming from the south into Mexico, right? It’s a really complex problem that is quite a ways away.

Sean: Well, Julian, thank you very much for joining us here on Civilly Speaking. This is fascinating.

Julian: Well, Sean, I appreciate you letting me visit with you guys today, and if you have any other questions, feel free to give me a call or shoot me an e-mail, and I’ll help you guys out any way we can. Let me add one thing that I would really implore of your members. That organization, AIEG, it is our mantra that we do not take other people’s cases. But if you are considering bringing a failure to equip case, please, please contact AIEG. There is a preemption strike force that we’ve put together. It’s got a lot of briefing, and we will help you, with no obligation, no asking for a part of your case, get past that, because if we as attorneys get bad rulings, it affects other consumers down the road. You’ve got some great members there in Ohio, like James Lowe, one of my mentors, an excellent, excellent attorney. And these folks at AIEG will help you fight these issues so that you get justice for your clients and preserve this theory of liability for other folks down the road.

Sean: Thanks again, Julian, and if you like our show and want to learn more, check out civillyspeaking.com and please leave us a review on iTunes, and we’ll see you here on the next episode of Civilly Speaking.