The increasing use of machine learning in business opens up interesting legal questions. If your ML system makes errors that harm a customer, are you legally liable? Could you be negligent if you deploy an ML system that’s not thoroughly checked, even though the decisions that cause harm are made by a computer and not a human? Who is responsible for the actions of an inscrutable ML system when the unintended consequences of design decisions cannot be easily foreseen?
See also Interpretable and explainable models.
Selbst, A. D. (2020). Negligence and AI’s human users. Boston University Law Review, 100, 1315–1376. https://ssrn.com/abstract=3350508
In tort law, your actions are negligent if they injure someone else, even without intent to injure, provided a reasonable person could have foreseen the injury. (Basically “well, you should have known!” in legal form.) But suppose you build a fancy machine learning system for decision support, say to help doctors diagnose cases. If the system does something wrong, can anyone be held responsible? Establishing negligence would be difficult:
Because machine learning’s whole purpose is to detect patterns humans would not notice, it is very hard to tell in advance that the ML system is wrong. How were you supposed to know that these five million neural network weights would lead to a misdiagnosis five years from now?
Humans are not good at working together with a system they must constantly supervise. Drivers of mostly-but-not-completely autonomous cars find it difficult to exercise adequate supervision of the vehicle in case it does something wrong. If a doctor can trust a diagnosis system most of the time, but has to be constantly vigilant for the few cases where it will misfire, is it reasonable to expect the doctor to do a good job?
Computers are also insecure, so you must consider whether the effects of security flaws were foreseeable.
Fundamentally, these problems are hard to solve. Even an interpretable and explainable model can find relationships that a human cannot easily check, making its harms difficult to foresee. Selbst suggests that liability law cannot solve this on its own, and that some kind of regulation of ML may be necessary.
Páez, A. (2021). Negligent algorithmic discrimination. Law and Contemporary Problems, 84(3), 19–33. https://scholarship.law.duke.edu/lcp/vol84/iss3/3
The first point to consider is that we are discussing algorithmic discrimination in 2021, not in 2012. By now, there is plenty of evidence—some of it presented in Part II—that the models used in the different stages of hiring decisions are very likely to be biased. In a sense, discrimination has become foreseeable by default, thus making these systems intrinsically harmful.
In cases like hiring, where algorithmic discrimination is well studied and widely documented, using a biased algorithm should be considered negligence. The existing legal standards of disparate treatment and disparate impact are hard to prove: disparate treatment requires showing the bias is intentional, while disparate impact requires showing the practice disadvantages a protected group and is not necessary for valid hiring reasons.
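Disparate impact is often screened quantitatively. As a minimal sketch (the specific rule and the group names and numbers below are illustrative, not from Páez's paper), the EEOC's "four-fifths rule" flags a hiring practice when a protected group's selection rate falls below 80% of the most-selected group's rate:

```python
# Hedged sketch of the EEOC four-fifths rule for screening disparate impact.
# Group names and applicant counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_flags(rates: dict) -> dict:
    """Return groups whose selection rate is below 4/5 of the best group's
    rate, mapped to their rate ratio."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < 0.8}

rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}
flagged = four_fifths_flags(rates)
# group_b's ratio is 0.30 / 0.50 = 0.6, below the 0.8 threshold, so it is flagged
```

A screen like this only raises a presumption of disparate impact; the employer can still rebut it by showing the practice is necessary for valid hiring reasons, which is exactly the evidentiary difficulty the paper points to.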