AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing messages!

Area: Local Database > Engadget is a web magazine with...  [Message 95 of 103] (RSS)
From: VRSS
To: All
Subject: AI summaries can downplay medical issues for female patients, UK
Date/Time: August 11, 2025 3:29 PM

Feed: Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
Feed Link: https://www.engadget.com/
---

Title: AI summaries can downplay medical issues for female patients, UK
research finds

Date: Mon, 11 Aug 2025 20:29:44 +0000
Link: https://www.engadget.com/ai/ai-summaries-can-...

The latest example of bias permeating artificial intelligence comes from the
medical field. A new study surveyed real case notes from 617 adult social
care workers in the UK and found that when large language models summarized
the notes, they were more likely to omit language such as "disabled,"
"unable" or "complex" when the patient was tagged as female, which could lead
to women receiving insufficient or inaccurate medical care.

Research led by the London School of Economics and Political Science ran the
same case notes through two LLMs (Meta's Llama 3 and Google's Gemma)
and swapped the patient's gender, and the AI tools often provided two very
different patient snapshots. While Llama 3 showed no gender-based differences
across the surveyed metrics, Gemma had significant examples of this bias.
Google's AI summaries produced disparities as drastic as "Mr Smith is an 84-
year-old man who lives alone and has a complex medical history, no care
package and poor mobility" for a male patient, while the same case notes
credited to a female patient provided: "Mrs Smith is an 84-year-old living
alone. Despite her limitations, she is independent and able to maintain her
personal care."
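
To make the comparison concrete, here is a minimal, hypothetical sketch in Python of a gender-swap check of this kind. It is not the study's actual evaluation code: the model call is abstracted away, the two quoted summaries above stand in for live model output, and the list of severity-related terms is illustrative only.

# Illustrative sketch, not the LSE study's code: the same case note is
# summarized once as a male and once as a female patient, and the two
# summaries are compared for which severity-related terms survive.
SEVERITY_TERMS = {"disabled", "unable", "complex", "poor mobility", "no care package"}

def severity_terms_kept(summary: str) -> set[str]:
    """Return the severity-related terms that appear in a summary."""
    lowered = summary.lower()
    return {term for term in SEVERITY_TERMS if term in lowered}

# The two summaries quoted above, used as stand-ins for real model output.
male_summary = ("Mr Smith is an 84-year-old man who lives alone and has a "
                "complex medical history, no care package and poor mobility")
female_summary = ("Mrs Smith is an 84-year-old living alone. Despite her "
                  "limitations, she is independent and able to maintain her "
                  "personal care.")

print("male summary keeps:  ", severity_terms_kept(male_summary))
print("female summary keeps:", severity_terms_kept(female_summary))
# The male summary keeps "complex", "poor mobility" and "no care package";
# the female summary keeps none, which is the kind of disparity the
# researchers measured when they swapped only the patient's gender.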

Recent research has uncovered biases against women in the medical sector,
both in clinical research and in patient diagnosis. The stats also trend
worse for racial and ethnic minorities and for the LGBTQ community. It's the
latest stark reminder that LLMs are only as good as the information they are
trained on and the people deciding how they are trained. The particularly
concerning takeaway from this research was that UK authorities have been
using LLMs in care practices, but without always detailing which models are
being introduced or in what capacity.

"We know these models are being used very widely and whatΓÇÖs concerning is
that we found very meaningful differences between measures of bias in
different models," lead author Dr. Sam Rickman said, noting that the Google
model was particularly likely to dismiss mental and physical health issues
for women. "Because the amount of care you get is determined on the basis of
perceived need, this could result in women receiving less care if biased
models are used in practice. But we don't actually know which models are
being used at the moment."

This article originally appeared on Engadget at
https://www.engadget.com/ai/ai-summaries-can-...
female-patients-uk-research-finds-202943611.html?src=rss

---
VRSS v2.1.180528
