Artificial intelligence has significant potential to improve many areas of life and society. However, as AI systems become more widely adopted, it is crucial that they are fair, unbiased, and inclusive. Racial and ethnic minorities continue to face discrimination from AI in areas like facial recognition. The AI community must address these critical issues of unfairness and make progress toward AI that serves all groups equally well.
To help work towards fairer AI, Google recently released and open-sourced the Monk Skin Tone (MST) Scale, a skin tone rating system for evaluating how well AI models perform across different skin tones. Developed by Harvard sociologist Dr. Ellis Monk, the MST Scale provides a nuanced categorization of skin tones to help combat unfairness in computer vision systems. By making the MST Scale and the Monk Skin Tone Examples (MST-E) dataset freely accessible, Google hopes to help researchers and companies build AI systems that are unbiased, equitable, and serve a diverse range of users.
"A lot of the time people feel that they are lumped together into racial categories: the Black category, white category, the Asian category, etc., but in this there’s all this difference. You need a much more fine-grain complex understanding that will really do justice to this distinction between a broad racial category and all these phenotypic differences across these categories." – Dr. Ellis Monk
The MST Scale presents a noteworthy contrast to the widely used Fitzpatrick Scale in dermatology. The latter was developed to assess how different skin types respond to ultraviolet radiation, primarily to gauge skin cancer risk. Over time, the Fitzpatrick Scale was adopted outside its intended medical context, including in tech, to approximate skin tones for various applications. However, its limitations in representing a broad array of black and brown skin tones are well documented, particularly in the context of ML fairness.
In contrast, the MST Scale, conceived with a broader perspective on skin tone diversity, is proving to be a more inclusive and representative tool for developing technologies that interface with skin tone. It's important to note, though, that the Fitzpatrick Scale remains relevant in its original domain—clinical dermatology and related research.
The impact of this open-source scale on the AI community could be far-reaching. The MST Scale provides a standardized metric for assessing the performance and bias of ML models across diverse skin tones, thereby promoting fairness and equity in technology that serves a global user base. Furthermore, it guides developers in creating more balanced datasets, which are fundamental to fair AI systems.
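To make the idea of a standardized metric concrete, here is a minimal sketch of how an evaluation might be disaggregated by MST bucket. The helper function, the record format, and the sample data are all hypothetical illustrations, not part of Google's released tooling; the only assumption drawn from the scale itself is that tones are bucketed from 1 (lightest) to 10 (darkest).

```python
from collections import defaultdict

def accuracy_by_skin_tone(records):
    """Compute per-MST-bucket accuracy from (mst_tone, is_correct) pairs.

    records: iterable of (mst_tone, is_correct), where mst_tone is an
    MST bucket from 1 (lightest) to 10 (darkest).
    """
    totals = defaultdict(lambda: [0, 0])  # tone -> [num_correct, num_total]
    for tone, correct in records:
        totals[tone][0] += int(correct)
        totals[tone][1] += 1
    return {tone: c / n for tone, (c, n) in sorted(totals.items())}

# Fabricated evaluation records for illustration only.
records = [(1, True), (1, True), (5, True), (5, False), (10, True), (10, False)]
per_tone = accuracy_by_skin_tone(records)
# A simple fairness signal: the gap between best- and worst-served tones.
gap = max(per_tone.values()) - min(per_tone.values())
```

Reporting the per-tone breakdown and the worst-case gap, rather than a single aggregate accuracy, is what makes disparities across skin tones visible in the first place.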
Open access to the MST Scale will hopefully inspire more developers to make inclusivity a priority, helping ensure that future AI technologies respect and serve all users equitably. However, the MST Scale is not a comprehensive solution and is still under active development. More work is needed to refine the scale for global representation and to fully explore its possibilities and limitations.
AI has the opportunity to greatly improve the human condition, but only if it is fair and beneficial to all of humanity. The MST Scale is one critical step towards that goal, but much work remains. Researchers must continue the difficult but important work of addressing bias and unfairness in AI to create systems that serve and empower all groups of people equally regardless of their race, ethnicity or other attributes. Overall, the MST Scale demonstrates how the AI community can collaborate to envision and build a future with inclusive AI.