OpenAI strips warnings from ChatGPT, but its content policy hasn't changed

OpenAI has removed the orange warning boxes in ChatGPT that indicated when a user may have violated its content policy.

Model behavior product manager Laurentia Romaniuk shared in a post on X that they "got rid of" the warnings "(the orange boxes sometimes appended to your prompts)."

Romaniuk also asked for "other cases of gratuitous / unexplainable denials [users have] come across," referring to ChatGPT's tendency to play it safe with content moderation.

Joanne Jang, who leads model behavior at OpenAI, amplified the request, asking, "Has ChatGPT ever refused to give you what you want for no good reason? Or for reasons you disagreed with?" This speaks not only to ChatGPT previously steering clear of controversial topics, but also to reports of moderation that seemed harmless being flagged, like a Redditor who said their chat was deleted for including a curse word in their prompt.


Earlier this week, OpenAI revised its Model Spec, which details its approach to how the model responds to users safely. Compared to the much shorter previous version, the new Model Spec is an extensive document describing its approach to current controversies, such as refusing requests to share copyrighted content and allowing discussion that supports or criticizes politicians.

ChatGPT has been accused of censorship, with President Trump's AI czar David Sacks saying in a 2023 All-In Podcast episode that ChatGPT "was programmed to be woke."

That said, both the previous and current Model Spec state: "OpenAI believes in intellectual freedom, which includes the freedom to have, hear, and discuss ideas." Still, the removal of the warnings has raised questions about whether it reflects an implicit change in ChatGPT's responses.

An OpenAI spokesperson said the removal of the warnings is not a reflection of the updated Model Spec and that the change has no impact on the model's responses. Instead, it was a decision about how OpenAI communicates its content policies to users. Newer models like o3 are better at reasoning through a request, and therefore better at responding to controversial or sensitive topics instead of defaulting to a refusal.

The spokesperson also said that OpenAI will continue to show warning messages in some cases that violate its content policy.

