Advisor: Chazelle, Bernard
Authors: Doğan, Irmak; Elsheikh, Rahma
Date deposited: 2025-08-07
Date issued: 2025-04-28
URI: https://theses-dissertations.princeton.edu/handle/88435/dsp01zs25xc91c
Title: Approaching Equalized Odds by Actively Forgetting in Deep-Learning Networks
Series: Princeton University Senior Theses
Language: en-US

Abstract: This work explores how Machine Learning (ML) models can learn what and how to forget. We develop a deep-learning image-classification Convolutional Neural Network (CNN) trained incrementally by class on the RAF-DB database. Our model addresses the Catastrophic Forgetting Problem (CFP) while simultaneously learning to actively forget biases. We propose an iterative, chained learning process for ML models that measures how well knowledge is retained and, in parallel, how effectively biased data is forgotten. Our contributions are threefold: (1) we construct a novel process that uses active forgetting to mitigate biases; (2) we present a nonlinear learning algorithm that emulates human-like behavior; and (3) we present a schema for machines to avoid the CFP while actively forgetting. To the best of our knowledge, combining a continual-learning approach with natural active-forgetting algorithms to mitigate biases during training is unexplored. The results are applicable to real-world scenarios such as security measures and healthcare data. In particular, we find that using active forgetting alongside regularization and replay methods improves the adaptability, efficiency, and fairness of machine learners in dynamic real-world settings. Finally, we discuss ethical considerations and potential future directions.