
Automated discovery of trade-off between utility, privacy and fairness in machine learning models

Bogdan Ficiu, Neil D. Lawrence, Andrei Paleyes
3rd Workshop on Bias and Fairness in AI (BIAS), ECML 2023, 2023.

Abstract

Machine learning models are deployed as a central component in decision making and policy operations with direct impact on individuals’ lives. In order to act ethically and comply with government regulations, these models need to make fair decisions and protect the users’ privacy. However, such requirements can come with a decrease in models’ performance compared to their potentially biased, privacy-leaking counterparts. Thus a trade-off between fairness, privacy and performance of ML models emerges, and practitioners need a way of quantifying this trade-off to enable deployment decisions. In this work we interpret this trade-off as a multi-objective optimization problem, and propose PFairDP, a pipeline that uses Bayesian optimization for discovery of Pareto-optimal points between fairness, privacy and utility of ML models. We show how PFairDP can be used to replicate known results that were achieved through a manual constraint-setting process. We further demonstrate the effectiveness of PFairDP with experiments on multiple models and datasets.
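The core idea of framing the trade-off as multi-objective optimization is to search for Pareto-optimal configurations rather than fixing constraints by hand. A minimal sketch of the Pareto dominance filter underlying such a search is below; the scoring scheme and names are illustrative assumptions, not PFairDP's actual implementation, which additionally uses Bayesian optimization to propose candidate configurations.

```python
# Hypothetical sketch: each candidate model configuration is scored on
# utility (higher is better), privacy cost (lower is better, e.g. the
# differential-privacy budget epsilon) and unfairness (lower is better,
# e.g. demographic parity difference). We keep only the points that no
# other point dominates. This is an illustration, not PFairDP's API.

def dominates(a, b):
    """True if point a dominates point b: a is no worse on every
    objective and strictly better on at least one.
    Points are tuples (utility, privacy_cost, unfairness)."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better


def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]


# Illustrative candidates (values are made up):
candidates = [
    (0.92, 8.0, 0.20),  # accurate, but privacy-leaking and biased
    (0.85, 1.0, 0.05),  # strong privacy and fairness, lower accuracy
    (0.80, 2.0, 0.10),  # dominated by the previous point on all axes
]
front = pareto_front(candidates)
```

Here the third candidate is excluded from the front because the second one is better on all three objectives; the remaining points expose the actual trade-off a practitioner must choose along.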

This site last compiled Fri, 06 Dec 2024 20:39:33 +0000
Copyright © Neil D. Lawrence 2024. All rights reserved.