Category: AI/Machine Learning

How Privacy-Enhanced Technologies Can Make Financial Crime Compliance More Effective

Read Article

One prominent potential application of federated machine learning is detecting financial crime risks across multiple institutions that cannot share data with one another due to confidentiality and other regulatory restrictions. This article examines the recent growth of financial crime alongside the shortcomings of existing financial crime compliance and monitoring systems. The author describes how privacy-enhancing technologies such as federated machine learning could help overcome information-sharing restrictions in financial crime compliance and monitoring.

Alon Kaufman, ABA Banking Journal
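The core idea behind federated learning in the articles above is that institutions exchange model parameters rather than raw data. The sketch below is a minimal, hypothetical illustration of federated averaging (FedAvg) with three fictional banks and invented toy data; it is not the system described in any of these pieces.

```python
import math

def local_sgd(weights, data, lr=0.1, epochs=5):
    """One institution trains a logistic model on its own private data.
    Only the resulting weights leave the institution, never the data."""
    w = list(weights)
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(suspicious)
            grad = p - y                      # logistic-loss gradient factor
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
    return w

def federated_average(global_w, institution_data, rounds=10):
    """FedAvg: each round, every institution trains locally, then a
    coordinator averages the weights (weighted by local data size)."""
    for _ in range(rounds):
        local_ws = [local_sgd(global_w, d) for d in institution_data]
        sizes = [len(d) for d in institution_data]
        total = sum(sizes)
        global_w = [
            sum(n * w[i] for n, w in zip(sizes, local_ws)) / total
            for i in range(len(global_w))
        ]
    return global_w

# Invented toy data: feature vector is [bias, normalized_amount];
# label 1 marks a transaction flagged as suspicious.
bank_a = [([1.0, 0.10], 0), ([1.0, 0.90], 1), ([1.0, 0.20], 0)]
bank_b = [([1.0, 0.80], 1), ([1.0, 0.30], 0)]
bank_c = [([1.0, 0.95], 1), ([1.0, 0.15], 0), ([1.0, 0.85], 1)]

w = federated_average([0.0, 0.0], [bank_a, bank_b, bank_c])

def score(x):
    """Risk score from the jointly learned model."""
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
```

After training, the shared model scores large transactions as riskier than small ones, even though no bank ever saw another bank's records. Real deployments add secure aggregation and differential privacy on top of this basic loop, since raw weight updates can still leak information.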

Federated Learning: Challenges, Methods, and Future Directions

Read Survey

This survey delves into challenges of federated machine learning beyond the potential security issues that could affect adoption in industries like financial services. For example, the authors consider how asymmetric data and communications systems might make building networks between heterogeneous institutions difficult and increase the costs of uploading and downloading models or portions of models. These considerations may be especially important in underserved and emerging markets.

Tian Li et al.

Towards Federated Graph Learning for Collaborative Financial Crimes Detection

Read Paper

This paper describes the efforts of a team of researchers to develop a federated AML model for the UK Financial Conduct Authority’s Global Anti-Money Laundering and Financial Crime TechSprint. The model was trained on data from several financial institutions and outperformed a conventional AML model in detecting potentially suspicious activity by 20%.

Toyotaro Suzumura et al.

Measuring Algorithmic Fairness

Read Paper

This article examines alternative fairness metrics from conceptual and normative perspectives with particular attention paid to predictive parity and error rate ratios. The article also questions the common view that anti-discrimination law prevents model developers from using race, gender, or other protected characteristics to improve the fairness and accuracy of the algorithms that they design.

Deborah Hellman, Virginia Law Review

Fairness Definitions Explained

Read Paper

This paper maps twenty definitions of fairness for algorithmic classification problems, explains the rationale for each definition, and applies them in the context of a single case study. This analysis demonstrates that the same fact pattern can be considered fair or unfair depending on the definition being applied.

Sahil Verma and Julia Rubin, ACM/IEEE International Workshop on Software Fairness
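The tension that both fairness entries above describe can be shown with a few lines of arithmetic. The sketch below uses invented confusion matrices for two hypothetical demographic groups (not figures from either paper): predictive parity is satisfied while error rates diverge, so the same classifier is "fair" or "unfair" depending on which definition is applied.

```python
# Hypothetical confusion matrices for two groups with different base
# rates; all numbers are invented for illustration only.
group_a = {"tp": 40, "fp": 10, "fn": 10, "tn": 40}
group_b = {"tp": 12, "fp": 3, "fn": 8, "tn": 77}

def ppv(m):
    """Positive predictive value; predictive parity requires this
    to be equal across groups."""
    return m["tp"] / (m["tp"] + m["fp"])

def fnr(m):
    """False negative rate; error-rate balance compares this across groups."""
    return m["fn"] / (m["fn"] + m["tp"])

def fpr(m):
    """False positive rate; the other half of error-rate balance."""
    return m["fp"] / (m["fp"] + m["tn"])

# Predictive parity: PPV is 0.8 for both groups -> "fair" by this metric.
# Error-rate balance: group B's FNR (0.4) is double group A's (0.2),
# and their FPRs differ too -> "unfair" by this metric.
```

When the groups' base rates differ, a well-known impossibility result means these criteria generally cannot all hold at once, which is why the choice of definition, not just the model, determines the fairness verdict.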

Financial Inclusion and Alternative Credit Scoring: Role of Big Data and Machine Learning in Fintech

Read Paper

This research paper analyzes loan-level data from a large Indian fintech firm to test whether unstructured digital data can substitute for traditional credit bureau scores. The researchers found that evaluating creditworthiness based on social and mobile footprints can potentially expand credit access. Variables found to significantly improve default prediction and outperform credit bureau scores include the number and types of apps installed, metrics of the applicant’s social connectivity, and measures of borrowers’ “deep social footprints” derived from call logs.

Sumit Agarwal, Shashwat Alok, Pulak Ghosh, and Sudip Gupta

If Then: How the Simulmatics Corporation Invented the Future

Read Book

Historian Jill Lepore tells the story of the Simulmatics Corporation as a case study in the Cold War origins of data science and of the technological, market, and political debates that shape our “data-mad” times. The company’s efforts throughout the 1960s to build a business on the power of prediction raise important questions about how its work affected democratic institutions, personal behavior, and conceptions of privacy.

Jill Lepore, Liveright Publishing

Should We Trust Algorithms?

Read Article

This article addresses concerns about the ethical use of AI algorithms given their prominence in so many facets of daily life. The author argues that distinguishing the trustworthiness of claims made about an algorithm from those made by an algorithm can improve how we evaluate individual algorithms or uses and promote “intelligent transparency.” He proposes a four-part framework inspired by pharmaceutical development for evaluating the trustworthiness of algorithms.

David Spiegelhalter, Harvard Data Science Review

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson from an Explainable AI Competition

Read Article

This report from an explainable AI competition raises the question whether model developers need to rely on “black box” machine learning techniques or can meet their needs using more interpretable forms of machine learning.

Cynthia Rudin and Joanna Radin, Harvard Data Science Review

Our Weird Behavior During the Pandemic Is Messing with AI Models

Read Article

Using online retail search data, this article explores how dramatic shifts in consumer behavior during the quarantine affected algorithms used to manage inventory, sell ads, and screen for fraud. It underscores the role of informed and timely governance – including human intervention – to ensure algorithm performance.

Will Douglas Heaven, MIT Technology Review