Fayyad, Salam; Hussein, Ahmed W.
2025-08-04; 2025-08-04; 2025-04-04
https://theses-dissertations.princeton.edu/handle/88435/dsp01v118rh98h

Abstract: This study examines the algorithmic biases embedded in AI content moderation systems and their impact on the Middle East and North Africa (MENA) region, focusing on Arab and Palestinian digital rights. It analyzes how prominent platforms such as Meta, Twitter (X), Google, and YouTube employ systems that excessively moderate Arabic and Palestinian voices, leading to economic and informational inequality. Drawing on research from Human Rights Watch, Amnesty International, 7amleh, and Safiya Noble, the study stresses the lack of accountability in AI governance frameworks. It also analyzes the EU's regulatory approach in the AI Act alongside the more fragmented American approach, pinpointing overlaps, contradictions, and gaps between them. In response to those gaps, the paper proposes active social responsibility policies addressing the ethics of government investment, bias in governance datasets, transparency, independent audits, international cooperative governance frameworks, diversity policies, and structural ethical bias. The aim is to strengthen the case for a responsive and comprehensive global AI governance system that ensures fairness, accountability, and equity for those disproportionately affected by biased content moderation policies.

Language: en-US
Title: BIAS IN THE MACHINE: AI, CONTENT MODERATION, AND THE ALGORITHMIC MARGINALIZATION OF MENA COMMUNITIES
Princeton University Senior Theses