Interpretable and explainable models

Alex Reinhart – Updated November 6, 2019

When discussing data ethics and how algorithms should be used to make decisions about people, one criterion that often comes up is that models should be “interpretable” or “explainable.” An inscrutable algorithm that produces accurate predictions but whose predictions cannot be examined or explained leaves no avenue for due process; an algorithm that gives predictions based on clear rules can be challenged when one of the rules is obviously wrong or biased.

But how do we produce explainable models, and is explainabaility enough?
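As an illustrative sketch (not from the original text), one simple route to a model with inspectable rules is a shallow decision tree; scikit-learn's `export_text` prints the learned if/then thresholds so each prediction can be traced to an explicit rule that could be examined or challenged:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a deliberately shallow tree: small enough for a human to read every rule.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned decision rules as plain text,
# e.g. splits on petal measurements leading to a predicted class.
rules = export_text(tree, feature_names=iris.feature_names)
print(rules)
```

Of course, a readable rule list is only a starting point: the rules being visible does not by itself tell us whether they are fair or correct.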

See also Privacy and surveillance for more on the power dynamics of possessing and using data, and Machine learning and law on legal implications.