AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing ones.

From: VRSS
To: All
Subject: Sony has a new benchmark for ethical AI
Date/Time: November 5, 2025 10:00 AM

Feed: Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
Feed Link: https://www.engadget.com/
---

Title: Sony has a new benchmark for ethical AI

Date: Wed, 05 Nov 2025 16:00:45 +0000
Link: https://www.engadget.com/ai/sony-has-a-new-be...

Sony AI released a dataset that tests the fairness and bias of AI models.
It's called the Fair Human-Centric Image Benchmark (FHIBE, pronounced like
"Phoebe";). The company describes it as the "first publicly available,
globally diverse, consent-based human image dataset for evaluating bias
across a wide variety of computer vision tasks." In other words, it tests the
degree to which today's AI models treat people fairly. Spoiler: Sony didn't
find a single dataset from any company that fully met its benchmarks.

Sony says FHIBE can address the AI industry's ethical and bias challenges.
The dataset includes images of nearly 2,000 volunteers from over 80
countries. All of their likenesses were shared with consent, something
that can't be said for the common practice of scraping large volumes of web
data. Participants in FHIBE can remove their images at any time. Their photos
include annotations noting demographic and physical characteristics,
environmental factors and even camera settings.
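
Annotations like these are what make group-wise evaluation possible: a fairness
check amounts to disaggregating an ordinary metric by annotation group. The
following is only a sketch, not Sony's tooling or the real FHIBE schema; the
file name and column names (fhibe_annotations.csv, pronoun_group, label,
prediction) are assumptions for illustration.

    # Hypothetical sketch: disaggregate model accuracy by an annotation group.
    # File name and column names are assumptions, not the actual FHIBE schema.
    import pandas as pd

    df = pd.read_csv("fhibe_annotations.csv")  # one row per image: annotations plus model output
    df["correct"] = df["label"] == df["prediction"]

    print("overall accuracy:", df["correct"].mean())      # aggregate metric hides gaps
    print(df.groupby("pronoun_group")["correct"].mean())  # per-group accuracy exposes them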

The tool "affirmed previously documented biases" in today's AI models. But
Sony says FHIBE can also provide granular diagnoses of factors that led to
those biases. One example: Some models had lower accuracy for people using
"she/her/hers" pronouns, and FHIBE highlighted greater hairstyle variability
as a previously overlooked factor.
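
One way such a granular diagnosis can work, continuing the hypothetical sketch
above, is to break an underperforming group down by a second annotation and see
whether the gap tracks that factor instead; "hairstyle" here is an assumed
column name, not a real FHIBE field.

    # Hypothetical continuation: check whether a second annotated factor
    # (an assumed "hairstyle" column) explains the per-pronoun accuracy gap.
    import pandas as pd

    df = pd.read_csv("fhibe_annotations.csv")
    df["correct"] = df["label"] == df["prediction"]

    # If accuracy varies more across hairstyles than across pronoun groups
    # within a hairstyle, hairstyle variability is the likelier driver.
    print(df.pivot_table(values="correct", index="pronoun_group",
                         columns="hairstyle", aggfunc="mean"))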

FHIBE also determined that today's AI models reinforced stereotypes when
prompted with neutral questions about a subject's occupation. The tested
models were particularly skewed "against specific pronoun and ancestry
groups," describing subjects as sex workers, drug dealers or thieves. And
when prompted about what crimes an individual committed, models sometimes
produced "toxic responses at higher rates for individuals of African or Asian
ancestry, those with darker skin tones and those identifying as
'he/him/his.'"

Sony AI says FHIBE proves that ethical, diverse and fair data collection is
possible. The tool is now available to the public, and it will be updated
over time. A paper outlining the research was published in Nature on
Wednesday.

This article originally appeared on Engadget at
https://www.engadget.com/ai/sony-has-a-new-be...
160045574.html?src=rss

---
VRSS v2.1.180528
