AT2k Design BBS Message Area


From: VRSS
To: All
Subject: Facebook sees rise in violent content and harassment after policy changes
Date/Time: May 29, 2025 1:26 PM

Feed: Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
Feed Link: https://www.engadget.com/
---

Title: Facebook sees rise in violent content and harassment after policy
changes

Date: Thu, 29 May 2025 18:26:51 +0000
Link: https://www.engadget.com/social-media/faceboo...

Meta has published the first of its quarterly integrity reports since Mark
Zuckerberg walked back the company's hate speech policies and changed its
approach to content moderation earlier this year. According to the reports,
Facebook saw an uptick in violent content, bullying and harassment despite an
overall decrease in the amount of content taken down by Meta.

The reports mark the first time Meta has shared data about how Zuckerberg's
decision to upend the company's policies has played out on the platform used by
billions of people. Notably, the company is spinning the changes as a
victory, saying that it reduced its mistakes by half while the overall
prevalence of content breaking its rules "largely remained unchanged for most
problem areas."

There are two notable exceptions, however. Violent and graphic content
increased from 0.06%-0.07% at the end of 2024 to 0.09% in the first quarter of
2025. Meta attributed the uptick to "an increase in sharing of violating
content" as well as its own attempts to "reduce enforcement mistakes." Meta
also saw a marked increase in the prevalence of bullying and harassment on
Facebook, which rose from 0.06%-0.07% at the end of 2024 to 0.07%-0.08% at
the start of 2025. Meta says this was due to an unspecified "spike" in
violations in March. (Notably, this is a separate category from the company's
hate speech policies, which were re-written to allow posts targeting
immigrants and LGBTQ people.)

Those may sound like relatively tiny percentages, but even small increases
can be noticeable for a platform like Facebook that sees billions of posts
every day. (Meta describes its prevalence metric as an estimate of how often
rule-breaking content appears on its platform.)
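To make that concrete, here is a back-of-the-envelope calculation in Python.
The daily view count below is a hypothetical placeholder chosen purely for
illustration, not a figure from Meta's report; only the prevalence
percentages come from the article.

    # Rough scale illustration: what a prevalence change of a few
    # hundredths of a percentage point means at Facebook's scale.
    # DAILY_VIEWS is a hypothetical placeholder, not Meta's figure.
    DAILY_VIEWS = 3_000_000_000

    old_prevalence = 0.0007  # 0.07%, high end of the Q4 2024 range
    new_prevalence = 0.0009  # 0.09%, Q1 2025 (violent/graphic content)

    extra_views = (new_prevalence - old_prevalence) * DAILY_VIEWS
    print(f"Additional violating views per day: {extra_views:,.0f}")
    # Prints: Additional violating views per day: 600,000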

The report also underscores just how much less content Meta is taking down
overall since it moved away from proactive enforcement of all but its most
serious policies like child exploitation and terrorist content. Meta's report
shows a significant decrease in the number of Facebook posts removed for
hateful content, for example, with just 3.4 million pieces of content
"actioned" under the policy, the company's lowest figure since 2018. Spam
removals also dropped precipitously from 730 million at the end of 2024 to
just 366 million at the start of 2025. The number of fake accounts removed
also declined notably on Facebook from 1.4 billion to 1 billion (Meta doesn't
provide stats around fake account removals on Instagram.)
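For readers who want those quarter-over-quarter drops as percentages, the
arithmetic on the figures cited above is straightforward:

    # Enforcement declines reported by Meta, Q4 2024 -> Q1 2025,
    # using the figures cited in this article.
    declines = [
        ("Spam removals", 730_000_000, 366_000_000),
        ("Fake-account removals", 1_400_000_000, 1_000_000_000),
    ]
    for label, before, after in declines:
        pct = (before - after) / before * 100
        print(f"{label}: {before:,} -> {after:,} ({pct:.0f}% decline)")
    # Spam removals: 730,000,000 -> 366,000,000 (50% decline)
    # Fake-account removals: 1,400,000,000 -> 1,000,000,000 (29% decline)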

At the same time, Meta claims it's making far fewer content moderation
mistakes, which was one of Zuckerberg's main justifications for his decision
to end proactive moderation. "We saw a roughly 50% reduction in enforcement
mistakes on our platforms in the United States from Q4 2024 to Q1 2025," the
company wrote in an update to its January post announcing its policy changes.
Meta didn't explain how it calculated that figure, but said future reports
would "include metrics on our mistakes so that people can track our
progress."

Meta is acknowledging, however, that there is at least one group for whom some
proactive moderation is still necessary: teens. "At the same time, we remain
committed to ensuring teens on our platforms are having the safest experience
possible," the company wrote. "ThatΓÇÖs why, for teens, weΓÇÖll also continue
to proactively hide other types of harmful content, like bullying." Meta has
been rolling out "teen accounts" for the last several months, which should
make it easier to filter content specifically for younger users.

The company also offered an update on how it's using large language models to
aid in its content moderation efforts. "Upon further testing, we are
beginning to see LLMs operating beyond that of human performance for select
policy areas," Meta writes. "WeΓÇÖre also using LLMs to remove content from
review queues in certain circumstances when weΓÇÖre highly confident it does
not violate our policies."
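Meta doesn't describe the mechanism beyond that quote, but it suggests a
confidence-gated triage step in front of the human review queue. Below is a
minimal sketch of how such a gate might work, with a hypothetical classifier
interface and threshold; none of these names or values come from Meta.

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        violates: bool     # the model's judgment
        confidence: float  # 0.0 to 1.0

    # Hypothetical cutoff; Meta only says "highly confident."
    AUTO_CLEAR_THRESHOLD = 0.99

    def triage(item, classify):
        """Auto-clear an item from the human review queue only when
        the model is highly confident it does not violate policy."""
        verdict = classify(item)
        if not verdict.violates and verdict.confidence >= AUTO_CLEAR_THRESHOLD:
            return "auto-cleared"   # dropped from the review queue
        return "human-review"       # everything else stays queued

    # Stub classifier standing in for the LLM:
    print(triage("post text", lambda item: Verdict(False, 0.995)))  # auto-cleared
    print(triage("post text", lambda item: Verdict(False, 0.80)))   # human-review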

The other major component of Zuckerberg's policy changes was the end of Meta's
fact-checking partnerships in the United States. The company began rolling
out its own version of Community Notes to Facebook, Instagram and Threads
earlier this year, and has since expanded the effort to Reels and Threads
replies. Meta didn't offer any insight into how effective its new crowdsourced
approach to fact-checking might be or how often notes are appearing
on its platform, though it promised updates in the coming months.

This article originally appeared on Engadget at
https://www.engadget.com/social-media/faceboo...
and-harassment-after-policy-changes-182651544.html?src=rss

---
VRSS v2.1.180528
