Recommended Reads

Moving beyond “algorithmic bias is a data problem”

Read Paper

This paper explores how model design choices can cause or exacerbate algorithmic bias, notwithstanding the common view that bias problems in machine learning systems stem predominantly from data. The author cites two important factors that constrain our ability to curb bias solely by improving the quality or scope of training data: the inherent messiness of real-world data and the limits on our ability to anticipate which model features will introduce bias. Model designers should therefore consider how choices such as the length of model training or the use of differential privacy techniques can affect model accuracy for groups underrepresented in the data.

Sara Hooker, Patterns
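
The kind of disaggregated evaluation the author recommends can be sketched in a few lines. The snippet below is a hypothetical illustration, not code from the paper; the group labels and toy predictions are invented for the example.

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, group):
    """Accuracy broken out by subgroup, so that aggregate metrics cannot hide gaps."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    return {str(g): float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

# Toy data: aggregate accuracy is 70%, which hides that the smaller group "B"
# is never classified correctly.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "A", "A", "A", "B", "B", "B"]
print(per_group_accuracy(y_true, y_pred, group))  # {'A': 1.0, 'B': 0.0}
```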

AI in Financial Services

Read Report

This report examines broad implications of using AI in financial services. While recognizing the potentially significant benefits of AI for the financial system, the report argues that four types of challenges increase the importance of model transparency: data quality issues; model opacity; increased complexity in technology supply chains; and the scale of AI systems’ effects. The report suggests that model transparency has two distinct components: system transparency, where stakeholders have access to information about an AI system’s logic; and process transparency, where stakeholders have information about an AI system’s design, development, and deployment.

Florian Ostmann and Cosmina Dorobantu, The Alan Turing Institute

Psychological Foundations of Explainability and Interpretability in Artificial Intelligence

Read Paper

This paper defines and differentiates between the concepts of explainability and interpretability for AI/ML systems. The author uses explainability to refer to the ability to describe the process that leads to an AI/ML algorithm’s output, and argues that it is of greater use to model developers and data scientists than interpretability. Interpretability refers to the ability to contextualize the model’s output based on its use case(s), value to the user, and other real-world factors, and is important to the users and regulators of AI/ML systems. The author argues that the recent proliferation of explainability technologies has resulted in comparatively little attention being paid to interpretability, which will be critical for emerging debates on how to regulate AI/ML systems.

David A. Broniatowski, National Institute of Standards and Technology

The Input Fallacy

Read Paper

This paper critiques traditional approaches to fair lending that restrict certain inputs, such as protected class information (race, gender, etc.), or that require identifying the inputs that cause disparities. Based on a simulation of algorithmic lending using mortgage lending data, the author argues that focusing on inputs fails to address core discrimination concerns. She also proposes an alternative fair lending framework that addresses the needs of algorithmic lenders and recognizes the potential limitations of explaining complex models.

Talia Gillis, Minnesota Law Review
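
As a rough illustration of why input restrictions alone cannot settle these questions (a synthetic sketch, not a reproduction of the paper's mortgage simulation): even when the protected attribute is dropped from the model, a correlated feature such as income lets group disparities reappear in the lending decisions. All variables and parameters below are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: a protected attribute and an income feature correlated with it.
group = rng.integers(0, 2, size=n)                      # 0 = majority, 1 = minority
income = rng.normal(loc=50 - 10 * group, scale=8, size=n)

# Repayment is driven by income alone; the protected attribute is never used directly.
repaid = (income + rng.normal(scale=5, size=n) > 45).astype(int)

# "Input-blind" lender: the protected attribute is excluded from the model's features.
model = LogisticRegression().fit(income.reshape(-1, 1), repaid)
approve = model.predict(income.reshape(-1, 1))

# Approval rates still differ sharply by group, because income acts as a proxy.
for g in (0, 1):
    print(f"group {g}: approval rate = {approve[group == g].mean():.2f}")
```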

What Does Building a Fair AI Really Entail?

Read Article

This article analyzes AI fairness both as essential in itself and as a way to address the issue of trust in AI systems. The author advocates an interdisciplinary approach in which computer science and the social sciences work together, and outlines three recommendations: (1) train managers to act as “devil’s advocates” by evaluating algorithmic decision-making using common sense and intuitive notions of what is right and wrong; (2) require leaders to articulate their companies’ values and moral norms to help inform compromises between utility and human values in AI deployment; and (3) hold data scientists and organizational leaders jointly responsible for evaluating the fairness of AI models against both technical definitions and broader company values.

David De Cremer, Harvard Business Review

AI in Banking: Where It Works and Where It Doesn’t

Read Article

This article outlines the ways in which banks are adopting AI and describes the growing competitive pressure on these institutions to adopt AI technologies. Relevant use cases range from well-established applications like fraud detection to emerging uses like lending, where AI has the potential to improve the accuracy and fairness of models but poses more significant risks to consumers, firms, and investors.

Penny Crosman, American Banker

The Tensions Between Explainable AI and Good Public Policy

Read Article

The author considers the complexity of using algorithmic decision-making in policy-sensitive areas, such as determining criminal bail and sentences or adjudicating welfare benefits claims, and argues that advances in explainability techniques are necessary, but not sufficient, for resolving key questions about such decisions. She contends that the inherent complexity of the most powerful AI models, together with our inability to reduce law and regulation to clearly stated optimization goals for an algorithm, reinforces the need for transparent governance by model users, especially when those users are government agencies.

Diane Coyle, Brookings Institution

Affirmative Algorithms: The Legal Grounds for Fairness as Awareness

Read Paper

The authors explore the application of modern antidiscrimination law to algorithmic fairness techniques and find those techniques incompatible with equal protection jurisprudence, which demands “individualized consideration” and bars formal, quantitative weights for race regardless of purpose. They look to government-contracting cases as an alternative grounding for algorithmic fairness, because those cases permit explicit, quantitative race-based remedies for historical discrimination by the actor. This doctrinal path is narrower, however: it requires that any adjustment be calibrated to the entity’s own responsibility for the historical discrimination causing present-day disparities. The authors argue that these cases provide a legally viable path for algorithmic fairness under current constitutional doctrine, but they call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to the specific causes and mechanisms of bias.

Daniel E. Ho and Alice Xiang, University of Chicago Law Review

Machine Learning Interpretability: A Survey on Methods and Metrics

Read Article

This article surveys approaches for achieving interpretability in machine learning models and considers societal impacts of interpretability in sensitive, audited, and regulated deployments. The authors also propose metrics for measuring the quality of an explanation.

Diego Carvalho, Eduardo Pereira, and Jaime Cardoso, Electronics

Underspecification Presents Challenges for Credibility in Modern Machine Learning

Read Paper

This paper explores the problem of “underspecification” – a statistical phenomenon that arises when a training process can produce many different models that perform equally well on held-out data, so that the data alone do not determine which model is learned. The team of authors from Google examined case studies in computer vision, medical imaging, natural language processing, and medical genomics, and found that model performance varied because of underspecification across a variety of ML pipelines. As a result, training processes that can produce sound models often produce poor ones instead, and the difference between the two will not be apparent until the model is deployed and has to generalize to non-training data. Based on these findings, the authors call for greater rigor in specifying model requirements and for stress testing models before they are approved for use.

Alexander D’Amour et al., arXiv.org
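
A hypothetical sketch of the phenomenon, not code from the paper: predictors that differ only in their random seed can score almost identically on held-out data from the training distribution yet diverge on a perturbed “stress test” set. The dataset, architecture, and perturbation below are invented for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic task with many redundant features, a crude stand-in for a rich ML pipeline.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for deployment-time distribution shift: perturb the held-out features.
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=1.5, size=X_test.shape)

for seed in (1, 2, 3):
    # Identical data, architecture, and hyperparameters; only the random seed varies.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed={seed}  in-distribution acc={model.score(X_test, y_test):.3f}  "
          f"shifted acc={model.score(X_shifted, y_test):.3f}")
```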
