Autonomous driving technology is advancing at a breakneck pace. Are laws and ethics ready?
Professor Bryant Walker Smith is one of the world’s leading legal experts on transportation technologies. His research focuses on issues of risk and trust in new technologies, especially automated driving systems and unmanned aerial systems. As automated vehicle technology moves toward Levels 4 and 5 of automation, the idea of risk analysis is front and center.
In this conversation with Jonathan Negretti, part of the Legal Beagle Podcast, Smith discusses how minimal risk conditions (MRC) factor into the future of this driving technology.
Should there be better standards regarding expected risk? More simply put, might it be time to rethink the relationship between MRC and automated driving systems, to allow for environmental considerations?
This interview has been edited for clarity.
Jonathan Negretti: I caught a comment that you made about the levels of automation. There are six levels of automation, according to SAE International. You argue that those aren’t accurate in some way. What do you mean by that?
Bryant Walker Smith: In 2012, I was one of a few people who sat down and wrote the first version of those levels. We borrowed from the Germans. We were really trying to develop a common language. That was the key from the beginning. We weren’t being normative. We were being descriptive. It was a dictionary.
There are lots of things I would have done differently in hindsight. But I think we did develop a useful vocabulary. To give you an example, Level 3 automated driving is frequently critiqued, along with the levels themselves.
As much as people push back against these definitional documents, they do provide the language necessary to discuss these topics. In the same way that we might say, “Murder is bad,” we would also say we’re glad that the dictionary defines murder, because we have a term that we can agree on — a common meaning.
Now, unfortunately, the levels have been widely misconstrued by people who claim expertise in the field. That’s on them. They are plenty capable of reading a 35-page document and understanding it. For everybody else, the public and regulators, that’s on us — the authors. We need to more effectively communicate what these levels do, what the divisions among them are, and how all of the supporting concepts fit in.
Jonathan Negretti: Could you boil down autonomous driving levels to some basic definitions or principles, for someone who maybe doesn’t have the level of expertise that you do? What are the six levels of vehicle automation, 0 to 5, according to SAE?
Bryant Walker Smith: Level 0 is not just your father’s father’s Oldsmobile, it’s also a lot of the vehicles that are still on the road today. It is where you are driving full-stop, without qualification.
I would contrast that with the other levels that are assisted driving — Levels 1 and 2. At Level 1, you’re driving, but you’re assisted with either steering or speed. At Level 2, you’re driving, but you’re assisted with both steering and speed. Now, each of these levels is assisted driving.
They exclude, as a technical matter, emergency systems that intervene only momentarily — everything from anti-lock brakes up to crash avoidance. They include adaptive cruise control and active lane keeping — the kinds of systems that are on many of the new vehicles that someone could buy today. The key for each of these levels is they work — unless and until they don’t. And that’s why, in defining these, it’s important to emphasize that you, the human, are driving, even though these systems might assist.
SAE J3016, the definitions document, does contrast these assisted driving features with the automated driving features, which are Levels 3, 4, and 5.
At Level 3, you’re not driving, but you will need to drive, if prompted, in order to maintain safety. The key examples here are the kinds of features that a few automakers have promised to introduce imminently: under certain low-speed freeway conditions, the motor vehicle would continue traveling with the other vehicles, and the human driver could disengage, look down, or look away. But at the point that the vehicle started moving faster again, the system would be about to leave what we call its operational design domain, or ODD for short. Essentially it would be the point at which the human was going to have to start driving again. The system would give a warning, and the human would be expected to engage. Now, if the human did not, the system would hopefully make that very uncomfortable. You might be sitting on a freeway, and the system might even try to reduce risk by pulling off to the side, if possible.
But Level 3 does not entail the expectation — that is, the promise — that the system could reliably achieve what we would call a minimal risk condition.
Contrast that with Level 4, where the system does reliably achieve a minimal risk condition, where the manufacturer in effect promises that even if a human does not resume actively driving, the system will be able to take risk out of the situation, to a level where we would say, “Yeah, that’s good enough. You’re on the side of the road. You’re not in an active lane of traffic under most circumstances.”
Now, at Level 4, you’re not driving, but either one of two situations apply:
- The first is you will need to drive, if prompted, in order to reach your destination. This is in a vehicle that you can drive.
- Or, you will not be able to reach every destination. This is in a vehicle that you can’t drive, so like a low-speed shuttle that has no steering wheel.
You can’t take it from Phoenix, Arizona to Columbia, South Carolina. But as long as it remains within its ODD, the neighborhood that it’s circulating in, it will reliably achieve a minimal risk condition, and the human will never need to drive.
Level 5 is where we start mixing our axes and we say, “Well, it’s Level 4 everywhere.” You’re not driving, and you can reach any destination that a human could reasonably expect to reach.
Now I’ll just note that I’ve been using the word “driving” here, and already I’m contradicting our own definitions document, where driving is given a much broader meaning. In fact, law gives driving a much broader meaning. When I say drive, what I’m talking about is what SAE J3016 calls performing the dynamic driving task. That’s doing all the things necessary for real-time driving: steering, braking, paying attention, responding to events.
Those are the levels. I think those are really helpful in some situations. But I think they’re less helpful for a lot of general-purpose conversations about automated driving. Rather than talk about levels, it’s more useful to say, “Look, are we talking about an assistance feature, or are we talking about an automated driving feature?” That’s the really key divide for most purposes. Second, where can the system operate? I also talk about types of trips and types of vehicles in addition to the levels, which describe types of vehicle features.
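To make the divisions Smith describes easier to scan, here is a minimal sketch in Python. It is only an illustrative shorthand of the conversation above; the class, field, and function names are my own and are not taken from SAE J3016.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SaeLevel:
    level: int
    summary: str
    human_performs_ddt: bool   # is the human performing the dynamic driving task?
    system_achieves_mrc: bool  # is the system expected to achieve a minimal risk condition?


# Shorthand summary of the six levels as described in the interview (illustrative only).
SAE_LEVELS = [
    SaeLevel(0, "No automation: the human drives, without qualification", True, False),
    SaeLevel(1, "Assisted driving: human drives, assisted with steering OR speed", True, False),
    SaeLevel(2, "Assisted driving: human drives, assisted with steering AND speed", True, False),
    SaeLevel(3, "Automated driving: human must drive, if prompted, to maintain safety", False, False),
    SaeLevel(4, "Automated driving: system reliably achieves a minimal risk condition", False, True),
    SaeLevel(5, "Level 4 everywhere: any destination a human could reasonably reach", False, True),
]


def is_automated_driving(level: SaeLevel) -> bool:
    """The key divide for most purposes: assistance versus automated driving."""
    return not level.human_performs_ddt


if __name__ == "__main__":
    for lvl in SAE_LEVELS:
        kind = "automated" if is_automated_driving(lvl) else "assisted/manual"
        print(f"Level {lvl.level} ({kind}): {lvl.summary}")
```

In this shorthand, the assistance-versus-automation divide separates Levels 0 through 2 from Levels 3 through 5, and the minimal risk condition expectation is what separates Level 3 from Level 4, as Smith explains later in the conversation.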
Jonathan Negretti: Are the Waymo vehicles in Chandler, Arizona operating under Level 4, based on the way you just described Level 4?
Bryant Walker Smith: As Waymo describes its system, these are vehicles that are able to drive within their zone — their operational design domain — without a human driver who remains in the loop and intervenes in real time, or who is even expected to intervene in real time.
Waymo does have an extensive monitoring network, including people who can provide input into the vehicle. For example, if there’s a hazard that the system needs to navigate around, a remote assistant could suggest a path, help identify a scenario, or communicate with the people inside. But, in Waymo’s characterization, that person is not driving. They are not observing the road in real time and, based on that, actively steering or braking, whether directly or remotely. Therefore, that person, or those people, do not qualify as remote drivers. Therefore, that is an example of Level 4 automated driving.
Now even there, that’s subject to some challenge. This is a field full of hyperbole. Tesla felt the need to call its driver assistance system full self-driving. Waymo responded by calling its automated system “fully autonomous.” This “full” word is just going to modify everything!
I think there are ways where we would question that. You’re limited in your domain. Clearly you rely on a human who does a lot, even if it’s not driving. There are these details that I think we need to be more transparent about in truly understanding how each of these systems operate.
Jonathan Negretti: Let’s talk about J3016, the minimal risk conditions (MRC), and the possibility of additional conditions, the attainable and the expected MRC. Can you explain MRC and then talk to me about why you think we need to have attainable and expected MRC?
Bryant Walker Smith: Minimal risk condition is the fancy term that we use to say, “Where do you go when you can’t keep driving?”
Humans can achieve a minimal risk condition. You blow out a tire, you pull to the side of the road to change the tire. There’s a big snowstorm and you can’t see? You pull off the freeway and wait at the rest stop. There is a crash up ahead — this is where it gets a little tricky — you can’t pull off, you’re sitting in traffic. You put on your flashers and you wait for the crash to clear. These are all at least arguably minimal risk conditions: the things we do when we either cannot or should not complete the trip that we’re on.
That’s the key concept of minimal risk condition. It’s important in two different ways. One is describing where a vehicle ends up when its automated driving system — this is expected at Level 4 or 5 — needs to achieve a minimal risk condition. That could be because the human who was expected to drive, to complete the trip, has decided to fall asleep. The vehicle says, “I can achieve reasonable safety, but I can’t complete the trip, so I have to go somewhere.” Or, when there is a failure in one of the systems, or an issue in the environment that prevents the vehicle from continuing. There’s a deer strike, and it needs to get off the road. That’s the first: describing what that situation is — under the circumstances, what is the most reasonable thing to do?
Under those circumstances, it might be to drive to your maintenance depot at slow speeds. It might be to get off at the next exit. It might be to pull over to the shoulder now, or it might be that there is no shoulder and you have to stop. This is a very context-dependent condition. It depends on what the system actually can do.
So that’s the first way that it’s useful — in describing under the circumstances the least bad option: this isn’t our first choice, but what’s the least bad thing that we can do right now to reduce the risk of a crash?
The second way that minimal risk condition is used is to delineate Level 3 from Level 4. At Level 4 we say the automated driving system always achieves a minimal risk condition. At Level 3, the human driver is expected to achieve that minimal risk condition.
In other words, if you are on a congested freeway and suddenly the vehicle can’t continue, it alerts the driver and the human driver would be expected to pull it over to the shoulder. If the system reliably can achieve this, or the manufacturer promises the system can reliably achieve this, it’s Level 4. If not, it’s Level 3.
Now, the problem comes when we mash these two different uses of minimal risk condition together. In some circumstances, it might be the least bad thing for a vehicle to stop in an active lane of traffic. You can’t stop on a shoulder. There’s a blizzard. You just have to stop in your lane.
But, under other circumstances, that would not be acceptable. You’re driving down a freeway at 70 miles an hour and the automated driving system says, “I shouldn’t continue.” The minimal risk condition can’t be just stopping on I-95 as cars are whizzing by on either side.
If we define minimal risk condition to be the least bad thing that the automated driving system is capable of doing, then the least bad thing that a Level 3 or a less capable system might be capable of is to simply stop in the lane. If, then, definitionally, the minimal risk condition is the thing that the system can do, that becomes the minimal risk condition. If that becomes the minimal risk condition, that means the system can achieve it. This is the power of low expectations. If the system can achieve it, then that makes it not a Level 3 system, but a Level 4 system.
It would be like a sign at the carnival that said, “To get on this ride you have to be as tall as you are.” You know, everybody could ride. That would definitionally be correct, but it wouldn’t be very helpful for describing the safety goal of ensuring that the person who gets on the carnival ride is tall enough.
When we use minimal risk condition in these two ways, we’re basically saying that the minimal risk condition is what the system does, but a minimal risk condition is also a certain expectation for what the system should do — the descriptive versus the normative. We need to distinguish those two concepts.
Jonathan Negretti: That is how we dive into attainable and expected MRCs. Can you explain that a bit more?
Bryant Walker Smith: Attainable minimal risk condition is what the system can actually do under the circumstances. That accounts not only for environmental constraints and vehicle constraints, but also for the constraints of the automated driving system itself.
For example, if the automated driving system were to crash and lose its only power source, then all that automated driving system might be able to do is come to a stop in its travel path. It might not even know where the lanes are. That would be the safest thing that system could do under the circumstances, and that would be the attainable minimal risk condition.
But that can’t be what we expect the minimal risk condition to be, because that would mean that an automated driving system that has no power backups is let off the hook compared to one that has five power backups.
And, so, expected minimal risk condition would be what we say the automated driving system should be capable of doing given constraints in the environment and in the rest of the vehicle, but without regard to limitations in the automated driving system itself. Meaning, if a driveshaft breaks, you stop in the lane. If your LIDAR gets knocked off, you stop in the lane, too. But that reflects a failure of a Level 4 automated driving system, because we would expect the system to be able to move the car to the shoulder. If it loses its sensors, it cannot do that, even though it should for the purposes of the expected minimal risk condition.
The expected is normative: “What should the automated driving system do under the circumstances?”
The attainable is the descriptive: “What can this particular system, with its limitations, actually achieve under the circumstances?”
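As a way to hold the two ideas apart, here is a rough sketch, again in Python and again with invented names rather than terms proposed by SAE or by Smith: the expected MRC is computed without regard to limitations of the automated driving system itself, the attainable MRC includes those limitations, and a gap between the two marks a failure of the system rather than a lowered bar.

```python
from enum import IntEnum


class MrcOutcome(IntEnum):
    """Illustrative hierarchy of end states, from least to most desirable."""
    STOP_IN_LANE = 0
    PULL_TO_SHOULDER = 1
    EXIT_ROADWAY = 2
    DRIVE_TO_DEPOT = 3


def expected_mrc(environment_allows_shoulder: bool, vehicle_can_move: bool) -> MrcOutcome:
    # Normative: what the system *should* achieve given the environment and the
    # rest of the vehicle, ignoring limits of the automated driving system itself.
    if not vehicle_can_move:              # e.g. a broken driveshaft
        return MrcOutcome.STOP_IN_LANE
    if environment_allows_shoulder:
        return MrcOutcome.PULL_TO_SHOULDER
    return MrcOutcome.STOP_IN_LANE        # e.g. a blizzard with no usable shoulder


def attainable_mrc(expected: MrcOutcome, ads_can_steer_and_perceive: bool) -> MrcOutcome:
    # Descriptive: what this particular system, with its own limitations, can actually do.
    if not ads_can_steer_and_perceive:    # e.g. lost power or a knocked-off sensor
        return MrcOutcome.STOP_IN_LANE
    return expected


def is_system_failure(expected: MrcOutcome, attainable: MrcOutcome) -> bool:
    # If the system cannot attain what it should be expected to attain,
    # that is a failure of the (still Level 4) system, not a lowered bar.
    return attainable < expected


if __name__ == "__main__":
    exp = expected_mrc(environment_allows_shoulder=True, vehicle_can_move=True)
    att = attainable_mrc(exp, ads_can_steer_and_perceive=False)
    print("expected:", exp.name, "| attainable:", att.name, "| failure:", is_system_failure(exp, att))
```

The point of the sketch is only the asymmetry Smith describes: stopping in the lane can be an acceptable expected outcome when the environment or the rest of the vehicle forces it, but not when the shortfall comes from the automated driving system’s own limitations.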
Jonathan Negretti: Under the normative thought of expected MRC, are we deciding what these expectations are?
Bryant Walker Smith: J3016 always strives to be non-normative, but there are certain normative underpinnings that we have to accept. What is a car? What is a system? What is observing? There are things that need some content — some minimum floor. If we are going to distinguish Level 3 and Level 4 with reference to minimal risk condition, we do need to define that minimal floor.
Fortunately, what SAE and even ISO are doing at this point is, early on, working on a taxonomy of minimal risk conditions and possibly a hierarchy, or at least a set of language to describe different conditions — like pull to the side of the road, drive to the service depot, stop in the lane, go to a hotel for the night, drive to the emergency room right away. Stop, do not pass go. This would be useful for supplying content into minimal risk condition.
Who ultimately decides? There are some engineering judgments that need to be made here. One of those is distinguishing, frankly, between the automated driving system and the rest of the vehicle. If you share an actuator, is that part of the automated driving system or is that part of the vehicle? Where do you draw that line?
The other is determining, under the circumstances, what would be achievable. As a matter of physics, as a matter of safety, a vehicle that cannot power its motor cannot accelerate. A vehicle that cannot turn cannot change its path. An automated driving system, though, that loses confidence in its determinations might still be able to make those decisions, but with less of a safety margin — with less confidence that the environment that it’s perceiving is the environment that actually exists. So, there are lots of judgement calls, unfortunately, embedded in terminology, in these levels, and even in my proposal.
Jonathan Negretti: Is there, in your opinion, a goal to make this imperfect system perfect? Or is that just not realistic?
Bryant Walker Smith: None of these systems are going to be perfect. We don’t even know what that means, unfortunately. You know, the history of technology — and the progress of law — is replacing one set of problems with a new set of problems, and just really hoping that the new set in aggregate is less than the old set.
Cars were supposed to be the green environmental technology of 100 years ago. They were our solution to pollution — the pollution being horses. The average horse produces 25 pounds of manure [daily]. New York City had 100,000 horses. You can do the math: roughly 2.5 million pounds of manure a day. So cars came along. They were supposed to solve pollution. And then they didn’t.
The same with automation. We’ll introduce problems. Things will mess up. There will be unforeseeable issues. So much of the difficulty in designing these systems, and even in regulating them, comes in the long tail of the unforeseeable — the things that we can’t really predict but that could turn out to be the biggest problems in the future. Lack of availability is one: if we have a system-wide shutdown, if everyone’s trying to evacuate a flood and suddenly all the automated vehicles shut down. That’s a problem.
Unanticipated hazards of technologies — concerns, real or ungrounded, about active sensors: all of these are part of the uncertainty that will confront these systems. So, absolutely not perfection.
To bring this back to the levels, J3016 does not define these levels by perfection. Meaning, a manufacturer that promises its system will achieve a minimal risk condition represents that its system is Level 4. That system remains Level 4 even if the system fails to achieve a minimal risk condition. Even if it drives itself off of a bridge, it’s still Level 4. It’s just a Level 4 system that has failed.
Jonathan Negretti: Is the goal to make this uniform across all the technology that is being utilized by these different makers in these different vehicles? Right now, if you look at crashes, really that’s just a failure of the interpretation of an MRC by the human being. Your interpretation of your MRC is different than mine, and then we crash. There may be other factors, and I would argue you’re right. But we have different interpretations of what is minimal risk. Is the future of this, Professor, to see uniformity and a standard? Is that what J3016 is trying to do?
Bryant Walker Smith: Let’s break that into two pieces — each of which is really important. The first would be minimal risk condition itself. This is a condition that is static. It’s not the dynamic, on-the-road, moving condition. It’s not one person saying, “My minimal risk is to just go straight through this traffic signal,” and another person saying, “My minimal risk is to accelerate through the yellow.” That would be their estimation of risk, which is another function that automated driving systems will necessarily perform — making judgments about actual risk and then judgments about whether that actual risk is acceptable. There, I agree that so many crashes result from a disconnect between perceived risk and actual risk — particularly among humans, who are somewhat bad at that.
When we’re talking about minimal risk condition, it’s once a trip should not be completed: where do we go, what do we do? The other part is how we get there. How do we pull off to the side of the road? The end states are what the MRC describes.
The second part you brought up is standardization. J3016 is not a standard. That is developed through SAE, through a standard-setting process by a standard-setting committee. It is, however, not a standard with the expectation that it is followed, but a recommended practice, with the idea that “It would be really nice if everybody in industry followed this.” Even the foundational definitions document doesn’t yet claim to set domain over the field.
I think it would be really great if everybody agreed on a common language and used it correctly, and stopped summarizing it incorrectly. We’re not there yet. I think we’re still struggling with that. One of the difficulties is sometimes terms just aren’t a great fit, and sometimes we have to step away from the terms and just say what we mean. What do we mean when we’re talking about a vehicle that does X, Y, and Z?
I don’t have the expectation that suddenly we’re going to reach even linguistic standardization anytime soon. I hope we at least reach more linguistic discipline in ensuring that we’re not actively fostering miscommunication within or across domains.
As for the next step, which would be the more substantive standardization, that’s the work of other SAE documents that are beginning to set the stage for a more substantive expectation. We might even say at some point standards — that is, what the systems should do. That’s also the work of policymakers who will eventually set normative expectations, whether at the federal level — including through some of the National Highway Traffic Safety Administration’s inchoate rulemaking efforts — or the state level, through some pretty basic expectations being placed on these systems. That’s where we’re going to supply more of the content for what reasonable driving means.
Jonathan Negretti: I think your class worked on an e-scooter, dockless mobility project (docklessmobility.org), where you talked to cities and policymakers about how to properly govern e-scooters in their jurisdictions. I feel like this is all happening after the fact, like the scooters got dumped on city streets, and safety and consumer protection were an afterthought. People started running into each other — flying off of them on sidewalks. All sorts of catastrophic injuries started to occur. And then cities started to say, “Hey, we need to change things. We should re-look at the standards of how we police this and, ultimately, how we enforce this.” Do you see that happening with autonomous technology in vehicles?
Bryant Walker Smith: Oh, yeah. I think you could say the same thing about basically every technology, including cars. They just got dumped on roads and we’re still struggling to figure out what to do about that.
I think it’s unfortunate with e-scooters that they get squeezed. Vulnerable road users — active mobility users — are metaphorically and literally at the margins. Vehicles get 60 feet of pavement. Everybody else fights over the six feet of the curb. As a result, it’s less that they pose a particular safety problem and more that there’s no space for them. They’re endangered if they’re on the road, and they endanger others if they’re on the sidewalk. I think that’s a real tragedy caused principally by the dominance of the car, and not by the emergence of the e-scooter.
I really agree with the undercurrent of your comment that we need to be thinking not just about how to get automated vehicles, but about how to use them as a tool to really unlock the policy goals that we have, whether that is increased safety, increased mobility, increased environmental performance, community, or autonomy. Any of these could be helped by automation. They could also be threatened by automation. And so, I think it is incumbent on policymakers to define the goal and then think about how technologies, including automated driving, as well as lower-hanging fruit and non-technological interventions, can really help achieve those goals.