AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing messages!


Area: Engadget (Local Database) [175 / 351] RSS
From: VRSS
To: All
Subject: Anthropic's Claude AI now has the ability to end 'distressing' conversations
Date: August 17, 2025, 3:14 PM

Feed: Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
Feed Link: https://www.engadget.com/
---

Title: Anthropic's Claude AI now has the ability to end 'distressing' conversations

Date: Sun, 17 Aug 2025 20:14:27 +0000
Link: https://www.engadget.com/ai/anthropics-claude...

Anthropic's latest feature for two of its Claude AI models could be the
beginning of the end for the AI jailbreaking community. The company announced
in a post on its website that the Claude Opus 4 and 4.1 models now have the
power to end a conversation with users. According to Anthropic, this feature
will only be used in "rare, extreme cases of persistently harmful or abusive
user interactions."

Anthropic said the two Claude models can exit harmful conversations, such as
"requests from users for sexual content involving minors and attempts to
solicit information that would enable large-scale violence or acts of terror."
The models will only end a
conversation "as a last resort when multiple attempts at redirection have
failed and hope of a productive interaction has been exhausted," according to
Anthropic. However, Anthropic claims most users won't experience Claude
cutting a conversation short, even when talking about highly controversial
topics, since this feature will be reserved for "extreme edge cases."


When Claude ends a chat, users can no longer send new messages in that
conversation, but they can start a new one immediately. Anthropic added that
an ended conversation won't affect other chats, and users can even go back and
edit or retry previous messages to steer the conversation in a different
direction.

For Anthropic, this move is part of its research program that studies the
idea of AI welfare. While the idea of anthropomorphizing AI models remains an
ongoing debate, the company said the ability to exit a "potentially
distressing interaction" was a low-cost way to manage risks for AI welfare.
Anthropic is still experimenting with this feature and encourages its users
to provide feedback when they encounter such a scenario.

This article originally appeared on Engadget at
https://www.engadget.com/ai/anthropics-claude...
distressing-conversations-201427401.html?src=rss

---

VADV-PHP Copyright © 2002-2025 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.