Meta releases a dataset to probe computer vision models for biases

Continuing its commitment to open source initiatives, Meta has introduced FACET, a new AI benchmark designed to assess the fairness of models that classify and detect objects, including people, in photos and videos.

Comprising 32,000 images featuring 50,000 people, all labeled by human annotators, FACET (short for “FAirness in Computer Vision EvaluaTion”) covers a wide range of attributes, including occupations like “basketball player,” “disc jockey,” and “doctor,” as well as demographic and physical characteristics. Labeling at this scale makes it possible to evaluate biases associated with each of these attributes, and with combinations of them.

Meta’s goal in releasing FACET is to empower researchers and practitioners to assess and address disparities in their AI models and monitor the effectiveness of fairness mitigations. The company encourages the use of FACET to benchmark fairness in various vision and multimodal tasks.

While benchmarks for detecting biases in computer vision algorithms are not new, Meta claims that FACET offers a more thorough assessment than its predecessors. It can answer questions such as whether models classify people differently depending on their perceived gender presentation, or whether such biases are magnified by attributes like hair type.
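In practice, this kind of audit boils down to grouping an evaluation set by a demographic attribute, computing a per-group metric such as recall for a given class, and reporting the gap between the best- and worst-served groups. The sketch below illustrates the idea in Python; the record fields (label, prediction, gender_presentation) are illustrative placeholders, not FACET’s actual annotation schema.

```python
from collections import defaultdict

def recall_by_group(records, attribute, target_class):
    """Per-group recall for one class: of all people annotated as
    target_class, what fraction did the model predict correctly,
    split by the given demographic attribute?"""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        if r["label"] != target_class:
            continue
        totals[r[attribute]] += 1
        if r["prediction"] == target_class:
            hits[r[attribute]] += 1
    return {g: hits[g] / n for g, n in totals.items() if n > 0}

# Toy records; real FACET annotations use a different schema.
records = [
    {"label": "doctor", "prediction": "doctor", "gender_presentation": "masculine"},
    {"label": "doctor", "prediction": "nurse", "gender_presentation": "feminine"},
    {"label": "doctor", "prediction": "doctor", "gender_presentation": "feminine"},
]

recalls = recall_by_group(records, "gender_presentation", "doctor")
# Disparity: the gap between the best- and worst-served groups.
disparity = max(recalls.values()) - min(recalls.values())
print(recalls, f"disparity={disparity:.2f}")
```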

To create FACET, Meta’s annotators labeled images for demographic and physical attributes and occupation classes, and these labels were combined with data from Segment Anything 1 Billion (SA-1B), a dataset Meta built to train computer vision models. The images in FACET come from a third-party photo provider, but it remains unclear whether the people pictured were informed that their images would be used this way, or how the annotation teams were recruited and compensated.

Historically, many annotators working on AI datasets have come from low-income regions, raising concerns about fair compensation and working conditions. Meta says that FACET’s annotators were trained experts sourced from several geographic regions and were paid an hourly wage set according to their country.

While FACET is a valuable resource for assessing bias in AI models, it is not without limitations. Meta acknowledges that the dataset may not fully capture real-world concepts and demographic groups, and that its professions and attributes may grow stale as the world changes. Nevertheless, Meta has released the dataset along with a web-based dataset explorer tool, on the condition that developers use FACET only for evaluation, testing, and benchmarking, not for training computer vision models.
