In this episode of Civilly Speaking, host Sean Harris talks with Hans Nilges, an attorney from Massillon, OH. Hans believes plaintiff attorneys are lagging behind the other great professions when it comes to decision theory. Throughout the episode, Hans makes his case for evidence-based legal decision making.

Ad: Today’s podcast is brought to you by NFP Structured Settlements. NFP Structured Settlements is pleased to have been a Diamond Sponsor of the Ohio Association for Justice for the past 10 years. NFP is your trusted advisor and partner in all aspects of case settlement planning, including structured settlements and trust services. For more information, visit NFP. Passionate Advocates, Proven Approach.

Sean: Hello, I’m your host, Sean Harris, and this is episode 52 of Civilly Speaking brought to you by the Ohio Association for Justice. Today is August 21st, and I’m here with our guest, Hans Nilges from Massillon. Hans, thanks very much for joining us here on Civilly Speaking.

Hans: Thank you very much.

Sean: Although perhaps, given our topic today, I was joking that instead of Civilly Speaking we should rename the show Statistically Speaking. For our listeners, our topic today is decision theory and evidence-based legal decision making. That’s a mouthful. What does that mean?

Hans: Well, sure. So when I proposed the title, it really should have been something like the case for evidence-based legal decision making. It’s not that this is something that’s been figured out and can just be rolled out, but what I’m trying to do is start, or maybe join if I’m not aware of one that already exists, a dialogue in our profession that this is something we should be moving toward. It’s also something that I’ll be talking about further at our OAJ Winter Convention. But to answer your question, what is it that I’m talking about? I believe that ours is really the last of the great professions that still relies nearly entirely on individual expert evaluations rather than empirical studies, data, algorithms, things like that. The business world has been using them, Wall Street has been using them forever, and even the medical profession moved toward them in the 80s and 90s through the evidence-based medicine revolution that took place. The way we evaluate a case, we take into account our past experience with that type of case, or with opposing counsel or the defendant, the cases we find and read, our own abilities, our opinion about how a jury pool or a judge is going to rule, anecdotes from colleagues, and really our gut. That’s what it all comes down to. When you amalgamate all of that, what you’re really doing is basing the decision on a gut reaction driven by your experiences. And that’s the way things were done in medicine, for example, for most of its history, until evidence-based decision making in the medical field started coming around.
So, there’s been a lot of interesting work in this field of decision making, and decision making under conditions of uncertainty, and that’s the world where we live every day, whether we’re making a decision about which case to take, or whether to settle or take a case to trial. Not to filibuster, but an interesting introduction to the topic is Daniel Kahneman’s book Thinking, Fast and Slow, which is a bestseller and a really interesting book that got me started thinking about this. He has tons of papers on it with his colleague Amos Tversky; it’s really interesting stuff. Pulling a couple of examples from that book, he discusses a study of two hundred eighty-four experts who made their living commenting on political and economic trends, the pundits we see on TV. The researchers gathered eighty thousand predictions from those people, who were asked to rate the probabilities of different events, and the reality is that they failed pretty miserably. In fact, a monkey throwing darts at a dartboard would have done better.

Sean: huh.

Hans: So, I guess that is similar to the way we’re making our decisions, again, based on our own expertise and research and understanding of a topic. But studies show that when you’re basing it just on that, it can be deeply flawed.

Sean: Is it too simple to say that decision theory attempts to make what was otherwise subjective more objective?

Hans: Yeah. That’s exactly the goal. Right? So, again, the world we live in is one where we’re making decisions under uncertainty. We don’t know how a judge is gonna rule. We don’t know whether or not a case is going to turn out well. We’re always having to make predictions, and the reality is that perfect prediction is never going to be possible. The world’s too complex; no algorithm or computer program is ever going to be 100 percent correct. The idea is that if you’re eliminating biases and confronting your own limitations in making those predictions, you’ll be able to improve your decisions. So one way is incorporating statistics into your decision making. In another study, 14 psychologists were asked to make predictions about how students would do in school that year. The psychologists interviewed the students, looked at their history of grades and several aptitude tests, and read a personal statement by each of them, and then they made their clinical predictions about how the students would perform in school for the year. The researchers ran another test concurrent with that, a statistical prediction based just on prior grades and one aptitude test, and the statistical model beat the psychologists 11 out of 14 times. It just drives home the point that experts aren’t necessarily the best people to be making predictions about how things are going to turn out, and those studies have been picked up on, and the same result has been consistently shown to be true in all sorts of areas, from predicting the success of medical procedures to survival rates and things like that.

Sean: What types of data or data points would you, in a perfect world, want to have for lawyers in our profession to be able to use this type of decision theory?

Hans: That is, I will say, probably the data that is available, right? I know we’ll get to the perfect world, but the data that’s available now, that’s the big impediment, I think, to the legal profession being able to fully move into this world, so to speak. As you know, most cases are going to be resolved by settlement, and most of those settlements are going to be wrapped up in a confidentiality agreement, or lawyers just aren’t sharing the information. So first of all, building a data pool is the first big challenge. Right? We need to figure out a way as a profession to share that information while protecting attorney-client privilege, respecting any confidentiality agreements or orders to seal cases, and overcoming the reluctance to share information that people may feel is competitive, because we’re all competitors. So the first thing we need to do is figure out a way to get the data pool, right? It’s going to take a lot of cooperation, and I’ll tell you that insurance companies already do this. Big insurance companies that handle millions and millions of claims a year already have this information. They’re using it against us, quite frankly. They’re able to do the predictive modeling and make sure that they’re making the right decisions in their cases by using the data, and I think that our profession gets put at a disadvantage because of that by big companies and insurance companies. So what data points? I think it’s really infinite, Sean. Everything that goes into evaluating a case. Take it by case type: how much is the average auto accident case worth under all of these different factors? What’s the average award? You can get down to, what do juries do in Akron, Ohio federal courts?
You know, Judge Adams always tells us how conservative his juries are, but what’s the truth? In what types of cases? Are they conservative in criminal matters but not certain types of civil matters? These are all things that, if we had that information, we could use to really make better decisions, rather than just saying, oh, I asked Joe Smith, and he had a jury trial there in front of Judge Adams and it didn’t work out so well, just as an example. So the more data we have, the better the decisions we’ll be able to make for our clients. But again, the first step is cooperating and getting that data pool.

Sean: Well, and that’s a good point, I hadn’t thought of it that way. When you say that we know the insurance companies are collecting this data, everybody, at least in the personal injury context, has heard about Colossus and these programs that aggregate data and spit out an answer. Now, of course, on their side they tend to be hamstrung by those numbers, but I think what you’re suggesting is that the data, rather than hamstringing us, gives us a better predictor.

Hans: Yeah, that’s absolutely right. It helps ensure that we’re making the right decisions for our clients. So this goes into another part of this, and it’s very interesting if people start reading on it, because there are so many biases and flawed heuristics people use in making decisions. But going to making better decisions and using the data to make sure that we are: studies have shown, under prospect theory, that when people have a bird in hand, they will tend to take less than the expected value of the actual case once you take into account the probabilities of risk or loss. Our general human instinct is to take the bird in hand, even when, looking at the actual risk involved, it’s not the right decision to make. If we have the data to say, hey, these are the actual probabilities of success, then you can run your expected value and make sure you’re not settling for less than a case is worth. If you can run a reliable expected-value calculation, then, one, you’re not selling out cases, and on the other hand, you can have confidence that if a settlement meets or exceeds expected value, it’s a good, solid decision. It’s something we have started incorporating into our practice here. Of course, the limit on it is that we have to pick all subjective probabilities. We vet them. We argue about them. What are the subjective probabilities? Because guess what? We don’t have the data to know what the true answers are. Right? But we use a program called TreeAge Pro, where we run a decision tree model, compound all of the risk probabilities, and come up with an expected value.
And that is, in my opinion, helping us make better decisions when it comes to settlement, because, one, we’re not taking less than our expected value, that’s for sure, and on the other hand, we’re hopefully not being stupid and pushing a case farther than it should go. If the settlement far exceeds expected value, why not settle it?
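The bird-in-hand comparison Hans describes can be sketched in a few lines of Python. This is a minimal illustration only; the 60/40 probabilities, the $100,000 verdict, and the $50,000 offer are all hypothetical numbers invented for the example, not figures from any real case.

```python
# A minimal sketch of an expected-value comparison for a settlement
# decision. All probabilities and dollar amounts are hypothetical.

def expected_value(outcomes):
    """Expected value of a list of (probability, payout) outcomes."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * payout for p, payout in outcomes)

# Hypothetical trial outcomes: a 60% chance of a $100,000 verdict,
# and a 40% chance of losing and recovering nothing.
trial_ev = expected_value([(0.60, 100_000), (0.40, 0)])

settlement_offer = 50_000  # the hypothetical "bird in hand"

print(trial_ev)                      # ≈ 60000
print(settlement_offer < trial_ev)   # True: the sure offer is below EV
```

Prospect theory predicts many people take the sure $50,000 anyway, which is exactly the instinct the calculation is meant to check.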

Sean: I want to follow up on something you said there. You talked about a decision tree app.

Hans: Yeah.

Sean: Tell us about that again and how you use it.

Hans: Sure. It’s a program that’s actually primarily used in the medical profession, for these same reasons, because they’re trying to make evidence-based decisions, but some law firms have started using it in the legal context. What you do is map out all of the different pinch points in your case. In my world as an FLSA collective and class action lawyer, those are typically: first, is my case going to get conditionally certified, and what’s the probability of that? Do I have a Rule 23 allegation, and what’s the probability of getting that certified? Then, what’s the risk of summary judgment or decertification? What are my chances of getting liquidated damages? What are my chances of proving willfulness? You take all of these things, assign probabilities throughout the tree, compound all those risk levels, and come up with an expected-value calculation. The numbers you plug in, obviously, are your damages, but those are fairly objective, right? A case is worth what it’s worth, at least in my world. In personal injury, punitive damages and compensatory damages for emotional distress and so on are a lot more subjective, and that’s where having data on what juries are actually awarding in punitive and compensatory damages would be extremely valuable, because there we’re still just making gut predictions.
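The compounded tree calculation Hans walks through, the kind of thing a tool like TreeAge Pro automates, can be sketched roughly as follows. The pinch points mirror his FLSA examples, but every probability and the damages figure are hypothetical, chosen purely for illustration.

```python
# A rough sketch of a compounded decision-tree expected value for a
# hypothetical FLSA case. All probabilities and damages are invented.

# Subjective probabilities at each "pinch point" in the case.
p_conditional_cert = 0.80   # case gets conditionally certified
p_survive_decert   = 0.70   # survives decertification / summary judgment
p_liquidated       = 0.50   # liquidated (double) damages awarded

base_damages = 200_000      # hypothetical unpaid wages if plaintiffs win

# Compound the probabilities along the winning branch of the tree.
p_win = p_conditional_cert * p_survive_decert

# The winning branch splits again on liquidated damages:
# doubled damages with probability p_liquidated, single otherwise.
ev = p_win * (p_liquidated * base_damages * 2 +
              (1 - p_liquidated) * base_damages)

print(ev)   # ≈ 168000
```

Because each stage multiplies in, a case that looks strong at every individual step (80%, 70%) can still carry an expected value well below the full damages figure, which is the point of running the tree before settlement talks.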

Sean: Yeah. You know, that’s funny. One of my concerns is that when we go to CLE seminars, even though OAJ has a policy that speakers aren’t supposed to share verdicts, they do from time to time, and I’m always concerned about that because it’s not representative.

Hans: Correct.

Sean: It’s one small piece of data, and you can’t necessarily extrapolate from it. And by the way, they’re self-selecting the good results and not telling you all the bad results.

Hans: That’s right. I mean, look at all of our websites, right? We all have our big verdicts or settlements on there. Right? But nobody has the case you settled for twenty-five hundred dollars after litigating it for nine months.

Sean: Right.

Hans: But those happen, unfortunately, right? And so that is a danger. The law of small numbers is extremely dangerous. If you’re looking at a very unrepresentative, small population and saying, this is what I’m going to base my predictions about winning at trial on, that’s extremely dangerous. I agree with that, and, unfortunately, we hear that from potential clients and our own clients.

Sean: Yeah.

Hans: They all read the million-dollar verdicts and, you know, in my world, I’m like, hey, I’m sorry, you made $10 an hour part time it’s going to take a lot, a lot of years of front pay to get you to a million dollars you know, because people’s views and perceptions do get skewed. So, yeah, having all the data, the good, the bad and the ugly is something that we as a profession on the plaintiff’s side need to start sharing with each other.

Sean: Yeah, bad data is still valuable data.

Hans: Yeah, absolutely, because we can all find out how many million-dollar settlements people have by looking at their websites.

Sean: Right.

Hans: But the average stuff, and the bad results. What if we did this study and found out that certain types of cases everybody pooh-poohs as not worth taking are actually really valuable when you look at them in context: the time from signing up the client to getting payment, the hours you have into the case, the availability of that type of claim, the amount of money it could turn into? We might be leaving a lot of money on the table. Or other cases where there’s a general perception that they’re highly valuable, but when you really run the numbers, it looks like they don’t have the punch you think they do. It’s all really valuable information that we should be sharing with each other, and everybody can use it.

Sean: Well, and I suppose that raises the question, then: how do we collect this data?

Hans: Yeah, that’s the question. The only general answer I have is that an organization like ours is the place to start, and it’s just going to take voluntary participation by members willing to step up and take part in that process. It’s not an uncomplicated thing to do, but the first step down the road is getting agreement, getting consensus, that this is the right thing to do, and the next step is figuring out how to overcome the thousand obstacles between us and the endpoint, which is actionable data. I think OAJ in particular is a great place to start, because collectively we have the big data that we need, but individually we don’t.

Sean: Well Hans, thanks very much for joining us here on Civilly Speaking. This is fascinating stuff.

Hans: Sean, thank you very much for letting me get on my soapbox about this. It’s a fun topic for me, but I do think the profession is moving toward evidence-based legal decision making. That’s where we need to go. I think we’re lagging far behind all of the other great professions, and we need to catch up.

Sean: Yeah, and for our listeners out there, if you like our show and want to learn more, check us out and please leave us a review on iTunes, and we’ll see you here on the next episode of Civilly Speaking.