Princeton University users: to view a senior thesis while away from campus, connect to the campus network via the GlobalProtect virtual private network (VPN). Unaffiliated researchers: please note that requests for copies are handled manually by staff and take time to process.
 

Publication:

Approaching Equalized Odds by Actively Forgetting in Deep-Learning Networks


Files

final_thesis___actual_final-10.pdf (2.52 MB), embargo until 2026-07-01

Date

2025-04-28

Abstract

This work explores how Machine Learning (ML) models can learn what and how to forget. We develop a deep-learning image-classification Convolutional Neural Network (CNN) trained incrementally by class on the RAF-DB dataset. Our model addresses the Catastrophic Forgetting Problem (CFP) while simultaneously learning to actively forget biases. We propose an iterative, chained learning process for ML models that measures how well knowledge is retained and, in parallel, how effectively biased data is forgotten. Our contributions are threefold: (1) We construct a novel process that uses active forgetting to mitigate biases; (2) We present a nonlinear learning algorithm that emulates human-like behavior; and (3) We present a schema for machines to avoid the CFP in parallel with active forgetting. To the best of our knowledge, combining a continual-learning approach with natural active-forgetting algorithms to mitigate biases during training is unexplored. The results apply to real-world scenarios such as security measures and healthcare data. In particular, we find that using active forgetting alongside regularization and replay methods improves the adaptability, efficiency, and fairness of machine learners in dynamic real-world settings. Finally, we discuss ethical considerations and potential future directions.
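
Since the thesis itself is embargoed, the following is only a minimal sketch, assuming PyTorch, of the general techniques the abstract names: class-incremental training with experience replay, a weight-anchoring regularization penalty, and a gradient-ascent "active forgetting" (unlearning) step on samples flagged as biased. The SmallCNN architecture, the train_task function, all hyperparameters, and the gradient-ascent forgetting heuristic are illustrative assumptions, not the authors' actual method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallCNN(nn.Module):
        # Tiny stand-in classifier; RAF-DB labels seven basic emotions.
        def __init__(self, num_classes=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def train_task(model, opt, task_batches, replay_batch=None,
                   biased_batch=None, anchor=None, reg_lambda=1.0,
                   forget_lr=0.01):
        model.train()
        for x, y in task_batches:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # Replay: rehearse stored examples of earlier classes so new
            # training does not overwrite them (mitigates the CFP).
            if replay_batch is not None:
                rx, ry = replay_batch
                loss = loss + F.cross_entropy(model(rx), ry)
            # Regularization: an L2 anchor toward the previous task's
            # weights, a simplified stand-in for EWC-style penalties.
            if anchor is not None:
                loss = loss + reg_lambda * sum(
                    (p - a).pow(2).sum()
                    for p, a in zip(model.parameters(), anchor))
            loss.backward()
            opt.step()
        # "Active forgetting": one gradient-ascent step on samples flagged
        # as biased, a common unlearning heuristic that raises the model's
        # loss on exactly those examples.
        if biased_batch is not None:
            bx, by = biased_batch
            opt.zero_grad()
            F.cross_entropy(model(bx), by).backward()
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is not None:
                        p += forget_lr * p.grad  # ascend, not descend

    # Illustrative run on synthetic 32x32 images standing in for RAF-DB crops.
    model = SmallCNN()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    batches = [(torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,)))]
    replay = (torch.randn(4, 3, 32, 32), torch.randint(2, 4, (4,)))
    biased = (torch.randn(4, 3, 32, 32), torch.randint(0, 2, (4,)))
    anchor = [p.detach().clone() for p in model.parameters()]
    train_task(model, opt, batches, replay, biased, anchor)

In a full class-incremental pipeline, one would presumably call train_task once per new emotion class, refreshing the replay buffer and weight anchor between tasks and measuring retention and forgetting after each increment, as the abstract describes.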
