Part of the challenge I have in teaching extremely bright business students in the final year of their undergraduate studies is that many of them are considering heading to law school. So as juniors and seniors, they often try to develop what they see as the lawyer’s mindset.

It’s a great mindset, and one that will both challenge and help them as they grapple with emerging issues such as legal liability for self-driving cars. Established models for analyzing risk don’t always work when applied to the self-driving car landscape, and far too often we fall back on the silly faux moral dilemma of “Oh! Should the car swerve and kill its passengers or stay on course and kill the pedestrians?!”

Simplistic clichés aside, any analysis of legal liability for self-driving cars begins with the notion of control. What does it mean to be “driving” a self-driving car? Well, it means that you’re in control of it. Maybe the best analogy is someone “piloting” a drone. You have the controller in your hand, and even if you set the drone on some auto-course (imagine you’re asking the drone to fly over the mountain where you’re skiing), you still possess all of the instrumentalities of control (remember that half-hour in law school on res ipsa loquitur).

So if you’re ultimately the one in control of the self-driving car, more or less normal tort analysis works. If there is an accident and you are at fault (even if you’re not “driving”), then you’re liable. Period. Whether you were actually behind the wheel or merely the owner/operator of the self-driving car, the analysis will be the standard tort analysis used today.

What is really interesting about legal liability surrounding self-driving cars is product liability – and this is the topic that always piques student interest in my classes.

A self-driving car is like a car on top of a car. Bear with me as I unpack this.

It is like a regular car in that the actual product (the car itself) needs to be free from defects, safe to drive, etc. No different from any other car you would drive today. But the self-driving piece layers basically another entire machine on top of your car: the software and associated computer systems.

In the future, there will absolutely be a lot of product liability cases against the manufacturers of the systems that drive self-driving cars where the car itself performed perfectly, but the self-driving technology failed, was defective, deviated from industry standards, etc. – any number of things that can give rise to a product liability lawsuit.

This is what’s sometimes difficult to appreciate because, for such a long time, we have thought of cars in a very linear way. We understood that if a car manufacturer allows a defective car to get on the road, there is liability. To date, the legal system has established that the component parts a car manufacturer builds or buys, all put together and formed into a sellable car, fall under that manufacturer’s liability. The car company can’t really successfully argue, “Sorry – we didn’t make the brakes we installed in that model, so it’s not our fault and we shouldn’t be held liable.”

But as cars have become more complex, the line between what the car manufacturer may or may not be liable for has blurred.

Ultimately, this is going to be really hard to figure out, so we are going to see plaintiffs in these suits try to choose where to impose liability. What this means is that since the actual car and the systems that make it self-driving are going to become increasingly intertwined and arguably inseparable, the courts will be tested by smart lawyers seeking to justifiably dig into as many deep corporate pockets as possible. It’s going to take years to build a solid precedential foundation upon which the courts can decide whether to lump together or separate liability for injuries caused by self-driving cars – injuries to the occupants of that car, to other drivers, and often to innocent, vulnerable members of the public.

Which takes us back to that facile moral dilemma of which group of people the self-driving car should kill. The correct answer is none. Artificial intelligence, and anything that self-driving car system developers try to pass off as good AI, will need to act as a reasonable human does behind the wheel. There’s a false notion that self-driving cars should do better than we collectively have for many decades behind the wheel. But that expectation can quickly become a dangerous legal fiction, one that leads to silly moral narratives.

For years to come, plaintiffs, their lawyers, car and systems manufacturers, and the courts will define and test the elasticity of the limits of legal liability for self-driving cars. While the process will surely be interesting to observe, what is certain is that standards of reasonableness will continue to evolve, setting the bar of appropriate responsibility for everyone involved, from drivers to manufacturers to the public.