The “coded gaze” is a term coined by researcher Joy Buolamwini to describe algorithmic bias in technology, particularly in facial recognition systems. When the biases of the programmers who create the code are reflected in how a system identifies and interprets faces, the result is often discriminatory outcomes for demographics such as people of color and women.
Buolamwini discovered this phenomenon when she installed software meant to overlay Serena Williams’s face onto her own reflection, and the software couldn’t “see” her until she put on a white mask. She was alarmed by this lack of machine neutrality, not only because of the inconvenience and demoralization it caused her while coding, but also because the phenomenon amplifies inequities. According to Masha Hamilton’s “The Coded Gaze: Unpacking Biases in Algorithms That Perpetuate Inequity,” “[T]he data-based rules humans were using to create algorithms were dangerous… and damaging people’s lives by defining which schools admitted them, who hired them, what medical care they received, or even whether they were arrested.” In the same article, Buolamwini also brings to the surface the idea that “Data is a function of our history… So the past dwells within our algorithms. It is showing us the inequalities that have been there.” Given that history, and in light of the recent repeal of diversity, equity, and inclusion initiatives, it is all the more important to remove bias from these systems wherever possible.
Vanderbilt students can play a vital role in combating the coded gaze by working across disciplines to create more equitable technology. One approach is to foster collaboration among students studying computer science, sociology, and ethics so that AI systems are developed with fairness in mind. By integrating social awareness into technical coursework, students can examine how biased data influences AI decision-making and learn strategies to counteract it. For example, a partnership between the School of Engineering and the Department of Sociology could lead to research initiatives that evaluate bias in existing algorithms and develop frameworks for more inclusive datasets.
Another avenue is utilizing Vanderbilt’s innovation spaces, such as The Wond’ry, to launch student-led projects focused on ethical AI. By participating in hackathons or interdisciplinary research teams, students can create new technologies that prioritize fairness. For instance, a project could focus on improving facial recognition systems by ensuring datasets include a diverse range of faces, reducing the risk of misidentification. These initiatives not only provide hands-on experience but also challenge students to think critically about the ethical implications of their work.
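To make the dataset-diversity idea concrete, the short Python sketch below audits how demographic groups are represented in a face-image dataset before it is used for training. This is a minimal sketch under stated assumptions: the records, field names (such as “skin_type,” here loosely following the Fitzpatrick scale used in Gender Shades), and values are hypothetical placeholders, not part of any real dataset or tool.

```python
from collections import Counter

# Hypothetical metadata records for a face-image dataset; the field names
# and values (Fitzpatrick-style skin-type categories) are illustrative only.
dataset = [
    {"image_id": "img_001", "gender": "female", "skin_type": "V"},
    {"image_id": "img_002", "gender": "male",   "skin_type": "II"},
    {"image_id": "img_003", "gender": "female", "skin_type": "I"},
    # ... thousands more entries in a real dataset
]

def audit_composition(records, attribute):
    """Print how each demographic group is represented for one attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for group, n in sorted(counts.items()):
        print(f"{attribute}={group}: {n} images ({n / total:.1%})")

# Run the audit on each sensitive attribute before training a model,
# so underrepresented groups are visible early rather than after deployment.
audit_composition(dataset, "gender")
audit_composition(dataset, "skin_type")
```

A check like this does not fix bias by itself, but it makes skewed representation visible early, when rebalancing the dataset is still cheap.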
Additionally, students can engage in policy discussions to advocate for AI transparency and accountability. By working with the law school and the political science department, students can push for regulations that require companies to audit their algorithms for bias before deployment. Joy Buolamwini has emphasized the need for such oversight, arguing that without accountability, biased algorithms will continue to reinforce existing inequalities. Her Gender Shades research, for example, found that every gender classifier it audited performed worst on darker-skinned women, with error rates of up to 34.7%. As future leaders in technology, business, and policy, Vanderbilt students have a responsibility to ensure that the systems they help create serve all communities fairly.
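As one illustration of what such an audit could look like, the sketch below follows the disaggregated-evaluation idea behind Gender Shades: instead of reporting a single overall accuracy, it computes error rates separately for each intersectional subgroup. The records, group names, and predictions are hypothetical placeholders, not Buolamwini’s actual data or code.

```python
from collections import defaultdict

# Hypothetical audit records pairing a classifier's predicted gender with
# the ground-truth label and the subject's intersectional subgroup.
results = [
    {"group": "darker_female",  "true": "female", "pred": "male"},
    {"group": "darker_male",    "true": "male",   "pred": "male"},
    {"group": "lighter_female", "true": "female", "pred": "female"},
    {"group": "lighter_male",   "true": "male",   "pred": "male"},
    # ... one record per test image in a real audit
]

def error_rates_by_group(records):
    """Disaggregate misclassification rates by subgroup, since a single
    aggregate accuracy figure can hide large disparities between groups."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["pred"] != r["true"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

for group, rate in sorted(error_rates_by_group(results).items()):
    print(f"{group}: {rate:.1%} error rate")
```

Reporting results per subgroup, rather than one aggregate number, is precisely what allowed Gender Shades to expose disparities that overall accuracy had concealed, and it is the kind of audit a pre-deployment regulation could require.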
Education and awareness are also crucial. Hosting campus-wide discussions, guest lectures, or workshops on algorithmic bias can help students from all disciplines recognize the real-world consequences of the coded gaze. By encouraging open dialogue, Vanderbilt can cultivate a generation of leaders who are not only technologically skilled but also socially conscious.
The coded gaze is not an abstract issue — it affects real people in tangible ways, from wrongful arrests due to biased facial recognition to discriminatory hiring algorithms that reinforce workplace inequalities. As Buolamwini states, “Data is a function of our history,” but history does not have to define the future. By using an interdisciplinary approach, Vanderbilt students can work to dismantle algorithmic bias and create AI that reflects the values of fairness and inclusivity.
References
Buolamwini, Joy, and Timnit Gebru. “Gender Shades.” Gender Shades, 2018, gendershades.org.
Hamilton, Masha. “The Coded Gaze: Unpacking Biases in Algorithms That Perpetuate Inequity.” The Rockefeller Foundation, 13 Oct. 2020, www.rockefellerfoundation.org/grantee-impact-stories/unpacking-biases-in-algorithms-that-perpetuate-inequity/.