Bluesky weighs a proposal to let users consent to how their data is used for AI
Speaking at the SXSW conference in Austin on Monday, Bluesky CEO Jay Graber said the social network has been working on a framework for user consent around how their data can be used for generative AI.
The public nature of the Bluesky social network has already allowed others to train their AI systems on user content, as discovered last year when 404 Media came across a dataset built from a million Bluesky posts hosted on Hugging Face.
Bluesky's competitor X, meanwhile, feeds users' posts into sister company xAI to help train its AI chatbot, Grok. Last fall, it changed its privacy policy to allow third parties to train their AI on X users' posts as well. That move, following the U.S. elections that elevated owner Elon Musk's status within the Trump administration, helped fuel another exodus of X users to Bluesky.
As a result, Bluesky, the open source, decentralized alternative to X, has grown to more than 32 million users in just two years.
However, the demand for AI training data means the newer social network has to think through its AI policy, even if it doesn't plan to train its own AI systems on users' posts.
Speaking at SXSW, Graber explained that the company has been engaging with partners to develop a framework for user consent around how they would like their data to be used, or not used, for generative AI.
“We really believe in users’ choice,” said Graber, adding that users would be able to specify how they want their Bluesky content to be used.
“It could be something similar to how websites specify whether they want to be scraped by search engines or not,” she continued.
“Search engines can still scrape websites whether or not you have that, because websites are open on the public internet. But in general, that robots.txt file is respected by many search engines,” she said. “So you need something that’s widely adopted, and you need users, companies, and regulators to get behind that framework. But I think it’s something that could work here.”
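For reference, the robots.txt convention Graber is comparing the proposal to looks like this: a plain-text file at a site's root expressing crawl preferences that compliant crawlers honor voluntarily. Nothing technically enforces it, but major crawlers, including AI-focused ones such as OpenAI's GPTBot, document that they respect these directives:

```
# robots.txt — advisory only; crawlers choose whether to comply
User-agent: GPTBot        # OpenAI's web crawler
Disallow: /               # ask it not to crawl any pages

User-agent: *             # all other crawlers
Allow: /
```

Bluesky's proposal would extend this kind of advisory, honor-system signal from the site level down to individual accounts and posts.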
The proposal, which is currently on GitHub, would involve obtaining user consent at the account level or even the post level, then asking other companies to respect that setting.
“We’re working with others in the space who are concerned about how AI affects how we control our data,” Graber added. “I think it’s a positive direction to take.”