
Sony AI Launches FHIBE — the First Global Fairness Benchmark for Computer Vision

When artificial intelligence “looks” at the world, it sees not only shapes and colors; it also sees our biases. Computer vision algorithms trained on billions of images have become a mirror of society, with all its distortions, stereotypes, and inequalities. Now, for the first time, there is a tool that can show just how skewed that gaze is.

Sony AI’s research division has introduced the Fair Human-Centric Image Benchmark (FHIBE), the world’s first open dataset for evaluating fairness in computer vision, collected on a voluntary basis and reflecting the real cultural, ethnic, and gender diversity of humanity. It contains thousands of images of people, classified and annotated with geographic, social, and occupational differences in mind.

The goal of FHIBE is not simply to test how well algorithms recognize faces, emotions, or poses, but to understand how fairly they do so. The benchmark lets researchers measure systemic bias in models: for example, the tendency to associate certain professions, emotions, or visual contexts with a specific gender or ethnic group.
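To make the idea concrete, here is a minimal sketch of the kind of per-group disparity check a benchmark like FHIBE enables. The group labels, the record format, and the "accuracy gap" metric are all illustrative assumptions for this example, not Sony AI's actual API or methodology.

```python
# Hypothetical sketch: comparing a vision model's accuracy across
# demographic groups. A large gap between groups is one simple
# proxy for the systemic bias described above.

from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_gap(records):
    """Largest accuracy difference between any two groups."""
    acc = per_group_accuracy(records)
    return max(acc.values()) - min(acc.values())

# Toy predictions: the model labels the same occupation less
# accurately for group_b than for group_a.
records = [
    ("group_a", "doctor", "doctor"),
    ("group_a", "doctor", "doctor"),
    ("group_a", "nurse",  "doctor"),
    ("group_b", "nurse",  "doctor"),
    ("group_b", "nurse",  "doctor"),
    ("group_b", "doctor", "doctor"),
]
print(per_group_accuracy(records))  # per-group accuracy
print(accuracy_gap(records))        # disparity between groups
```

A real evaluation would use annotated benchmark images and a trained model rather than hand-written tuples, but the shape of the question, "does performance differ by group?", is the same.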


According to Sony AI, FHIBE will serve as a kind of “litmus test” for the industry, against which developers can evaluate their models not only for accuracy but also for ethical compliance. For the first time, fairness moves beyond political declarations to become a quality metric on par with precision and speed.

This is a step that could change the rules of the game: if AI once “learned to see” the world, it must now learn to see it fairly.

More details: https://ai.sony/
