Noa's research examines how private actors' AI governance affects users' rights. Her current focus is on digital linguistic disparities: she analyzes how Natural Language Processing (NLP) and Large Language Models (LLMs) offer far-reaching opportunities while also reinforcing offline hierarchies and restricting participation for speakers of digitally marginalized languages. Her earlier research explored content visibility reduction, a powerful AI-driven moderation strategy that digital platforms use to limit content exposure without outright removal. As currently applied, this moderation approach effectively raises the normative thresholds for permissible content, lacks transparency and accountability, and poses significant human rights concerns. Noa's research identified ways to address these challenges while harnessing the flexibility of content reduction to benefit the digital sphere.