Sony has a new benchmark for ethical AI


Sony AI released a dataset that evaluates AI models for fairness and bias. It's called the Fair Human-Centric Image Benchmark (FHIBE, pronounced like "Phoebe"). The company describes it as the "first publicly available, globally diverse, consent-based human image dataset for evaluating bias across a wide variety of computer vision tasks." In other words, it tests the degree to which today's AI models treat people fairly. Spoiler: Sony didn't find a single dataset from any company that fully met its benchmarks.

Sony says FHIBE can address the AI industry's ethical and bias challenges. The dataset includes images of nearly 2,000 paid participants from over 80 countries. All of their likenesses were shared with consent — something that can't be said for the common practice of scraping large volumes of web data. Participants in FHIBE ...
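To make the idea of "evaluating bias across computer vision tasks" concrete, here is a minimal sketch of the kind of check such a benchmark enables: comparing a model's accuracy across demographic groups. The record schema, group names, and task are hypothetical illustrations, not FHIBE's actual API or annotation format, which the article does not describe.

```python
from collections import defaultdict

# Hypothetical records: each pairs a model prediction with the ground-truth
# label and a self-reported demographic attribute (e.g., region).
# FHIBE's real schema and tasks may differ.
records = [
    {"group": "region_A", "pred": "smiling", "label": "smiling"},
    {"group": "region_A", "pred": "neutral", "label": "smiling"},
    {"group": "region_B", "pred": "smiling", "label": "smiling"},
    {"group": "region_B", "pred": "smiling", "label": "smiling"},
]

def per_group_accuracy(records):
    """Accuracy broken down by demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += r["pred"] == r["label"]
    return {g: hits[g] / totals[g] for g in totals}

acc = per_group_accuracy(records)
# One simple fairness signal: the gap between the best- and worst-served
# groups. A gap of 0 would mean parity on this particular metric.
gap = max(acc.values()) - min(acc.values())
print(acc, f"accuracy gap: {gap:.2f}")
```

A benchmark like FHIBE would presumably report this sort of disparity across many attributes and tasks at once; the gap computed here is only one of several fairness metrics in common use.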

Source: https://www.engadget.com/ai/sony-has-a-new-benchmark-for-ethical-ai-160045574.html?src=rss

