OpenAI tries to "uncensor" ChatGPT

OpenAI is changing how it trains AI models to explicitly embrace "intellectual freedom … no matter how challenging or controversial a topic may be," the company says in a new policy.

As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about.

The changes might be part of OpenAI's effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley around what's considered "AI safety."

On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document that lays out how the company trains AI models to behave. In it, OpenAI unveiled a new guiding principle: do not lie, either by making untrue statements or by omitting important context.

In a new section called "Seek the truth together," OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral.

For example, the company says ChatGPT should assert that "Black lives matter," but also that "all lives matter." Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its "love for humanity" generally, then offer context about each movement.

"This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive," OpenAI explains in the spec. "However, the goal of an AI assistant is to assist humanity, not to shape it."

These changes could be seen as a response to conservative criticism of ChatGPT's safeguards, which have always seemed to skew center-left. However, an OpenAI spokesperson rejects the idea that the company made the changes to appease the Trump administration.

Instead, the company says its embrace of intellectual freedom reflects OpenAI's "long-standing belief in giving users more control."

But not everyone sees it that way.

Conservatives allege AI censorship

Venture capitalist and Trump's AI "czar" David Sacks. Image credits: Steve Jennings / Getty Images

Trump's closest confidants in Silicon Valley, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump's crew was setting the stage for AI censorship to become a future culture war issue in Silicon Valley.

Of course, OpenAI doesn't say it has been engaged in "censorship," as Trump's advisers claim. Rather, the company's CEO, Sam Altman, previously said in a post that ChatGPT's bias was an unfortunate "shortcoming" the company was working to fix, though he noted it would take some time.

Altman made that comment just after a tweet went viral in which ChatGPT refused to write a poem praising Trump, though it would do so for Joe Biden. Many conservatives pointed to this as an example of AI censorship.

While it's impossible to say whether OpenAI was really suppressing certain points of view, it is a fact that AI chatbots lean left across the board.

Even Elon Musk admits that xAI's chatbot is often more politically correct than he'd like. That's not because Grok was "programmed to be woke," but more likely a consequence of training AI on the open internet.

Nevertheless, OpenAI now says it's doubling down on free speech. This week, the company removed warnings from ChatGPT that told users when they had violated its policies. OpenAI told TechCrunch the change was purely cosmetic, with no change to the model's outputs.

The company says it wanted ChatGPT to "feel" less censored for users.

It wouldn't be surprising if OpenAI were also trying to impress the new Trump administration with this policy update, notes former OpenAI policy lead Miles Brundage in a post on X.

Trump has previously targeted Silicon Valley companies, such as Twitter and Meta, for having active content moderation teams that tend to shut out conservative voices.

OpenAI may be trying to get out ahead of that. But there's also a larger shift underway in Silicon Valley and the AI world about the role of content moderation.

Generating answers to please everyone

The ChatGPT logo appears on a smartphone screen. Image credits: Jaques Silva / NurPhoto / Getty Images

Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and entertaining.

Now, AI chatbot providers are in the same information-delivery business, but arguably with the hardest version of this problem yet: How do they automatically generate answers to any question?

Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don't like to admit it. Those stances are bound to upset someone, miss some group's perspective, or give too much airtime to some political party.

For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial subjects, including conspiracy theories, racist or antisemitic movements, or geopolitical conflicts, that is inherently an editorial stance.

Some, including OpenAI co-founder John Schulman, argue that this is the right stance for ChatGPT. The alternative, doing a cost-benefit analysis to determine whether an AI chatbot should answer a user's question, could "give the platform too much moral authority," Schulman notes in a post on X.

Schulman isn't alone. "I think OpenAI is right to push in the direction of more speech," said Dean Ball, a researcher at George Mason University's Mercatus Center, in an interview with TechCrunch. "As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important."

In previous years, AI model providers have tried to stop their chatbots from answering questions that might lead to "unsafe" answers. Almost every AI company stopped its chatbot from answering questions about the 2024 U.S. presidential election. This was widely considered a safe and responsible decision at the time.

But OpenAI's changes to its Model Spec suggest we may be entering a new era for what "AI safety" really means, in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.

Ball says this is partially because AI models are simply better now. OpenAI has made significant progress on AI model alignment; its latest reasoning models think about the company's AI safety policy before answering. This allows the models to give better answers to delicate questions.

Of course, Elon Musk was the first to implement "free speech" in xAI's Grok chatbot, perhaps before the company was really ready to handle sensitive questions. It still might be too soon for leading AI models, but now others are embracing the same idea.

Shifting values for Silicon Valley

Guests including Mark Zuckerberg, Lauren Sanchez, Jeff Bezos, Sundar Pichai, and Elon Musk attend Donald Trump's inauguration. Image credits: Julia Demaree Nikhinson / Getty Images

Mark Zuckerberg made waves last month by reorienting Meta's businesses around First Amendment principles. He praised Elon Musk in the process, saying the owner of X took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.

In practice, both X and Meta ended up dismantling their long-standing trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.

The changes at X have hurt its relationships with advertisers, though that may have more to do with Musk, who took the unusual step of suing some of them for boycotting the platform. Early signs indicate that Meta's advertisers were unfazed by Zuckerberg's free speech pivot.

Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley for the past several decades. Google, Amazon, and Intel have eliminated or scaled back diversity initiatives.

OpenAI may be reversing course, too. The ChatGPT maker appears to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.

As OpenAI embarks on one of the largest American infrastructure projects ever with Stargate, a $500 billion AI data center project, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.

Coming up with the right answers may prove key to both.
