AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing messages!

Area:    Local Database / Engadget RSS  [381 / 534]
From:    VRSS
To:      All
Subject: Meta will reportedly soon use AI for most product risk assessments
Date:    May 31, 2025, 3:54 PM

Feed: Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
Feed Link: https://www.engadget.com/
---

Title: Meta will reportedly soon use AI for most product risk assessments instead of human reviewers

Date: Sat, 31 May 2025 20:54:16 +0000
Link: https://www.engadget.com/social-media/meta-wi...

According to a report from NPR, Meta plans to shift the task of assessing its
products' potential harms away from human reviewers, instead leaning more
heavily on AI to speed up the process. Internal documents seen by the
publication note that Meta is aiming to have up to 90 percent of risk
assessments fall on AI, NPR reports, and is considering using AI reviews even
in areas such as youth risk and "integrity," which covers violent content,
misinformation and more. Unnamed current and former Meta employees who spoke
with NPR warned AI may overlook serious risks that a human team would have
been able to identify.

Updates and new features for Meta's platforms, including Instagram and
WhatsApp, have long been subjected to human reviews before they hit the
public, but Meta has reportedly doubled down on the use of AI over the last
two months. Now, according to NPR, product teams have to fill out a
questionnaire about their product and submit this for review by the AI
system, which generally provides an "instant decision" that includes the risk
areas it's identified. They'll then have to address whatever requirements it
laid out to resolve the issues before the product can be released.
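To make the reported workflow easier to follow, below is a minimal, purely hypothetical Python sketch of a questionnaire-to-instant-decision loop of the kind NPR describes: a team submits answers, an automated reviewer returns the risk areas it flags plus requirements, and release is gated on resolving them. All names (RiskQuestionnaire, ReviewDecision, auto_review) are invented for illustration and do not describe Meta's actual system.

    # Hypothetical sketch of the reported review flow; not Meta's real system.
    from dataclasses import dataclass, field

    @dataclass
    class RiskQuestionnaire:
        product: str
        answers: dict  # e.g. {"targets_minors": True, "shares_user_content": True}

    @dataclass
    class ReviewDecision:
        risk_areas: list = field(default_factory=list)    # e.g. ["youth risk", "integrity"]
        requirements: list = field(default_factory=list)  # must be resolved before release

    def auto_review(q: RiskQuestionnaire) -> ReviewDecision:
        """Return an instant decision listing flagged risk areas and requirements."""
        decision = ReviewDecision()
        if q.answers.get("targets_minors"):
            decision.risk_areas.append("youth risk")
            decision.requirements.append("add age-appropriate safeguards")
        if q.answers.get("shares_user_content"):
            decision.risk_areas.append("integrity")
            decision.requirements.append("enable violent-content and misinformation filters")
        return decision

    def can_release(decision: ReviewDecision, resolved: set) -> bool:
        """The product ships only once every listed requirement has been resolved."""
        return all(req in resolved for req in decision.requirements)

    if __name__ == "__main__":
        q = RiskQuestionnaire("new sharing feature",
                              {"targets_minors": False, "shares_user_content": True})
        d = auto_review(q)
        print(d.risk_areas)                    # ['integrity']
        print(can_release(d, resolved=set()))  # False until requirements are addressed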

A former Meta executive told NPR that reducing scrutiny "means you're
creating higher risks. Negative externalities of product changes are less
likely to be prevented before they start causing problems in the world." In a
statement to NPR, Meta said it would still tap "human expertise" to evaluate
"novel and complex issues," and leave the "low-risk decisions" to AI. Read
the full report over at NPR.

It comes a few days after Meta released its latest quarterly integrity
reports, the first since changing its policies on content moderation and
fact-checking earlier this year. The amount of content taken down has
unsurprisingly decreased in the wake of the changes, per the report. But
there was a small rise in bullying and harassment, as well as violent and
graphic content.

This article originally appeared on Engadget at
https://www.engadget.com/social-media/meta-wi...most-product-risk-assessments-instead-of-human-reviewers-205416849.html?src=rss

---
VRSS v2.1.180528
