The UK drops the “safety” from its AI body, now called the AI Security Institute, and inks an MOU with Anthropic
The British government is making a hard pivot into boosting its economy and industry with AI, and as part of that, it is pivoting an institution that it founded a little over a year ago for a very different purpose. Today, the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute to the “AI Security Institute.” With that, the body will shift from primarily exploring areas like existential risk and bias in large language models toward a focus on cybersecurity, specifically “strengthening protections against the risks AI poses to national security and crime.”
Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced, but the memorandum of understanding says the two will “explore” using Anthropic’s AI assistant Claude in public services; and Anthropic will aim to contribute to work in scientific research and economic modeling. At the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.
“AI has the potential to transform how governments serve their citizens,” Anthropic co-founder and CEO Dario Amodei said in a statement. “We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents.”
Anthropic is the only company being announced today, coinciding with a busy week of AI activities in Munich and Paris, but it is not the only one working with the government. A series of new tools unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the Secretary of State for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)
The government’s switch-up of the AI Safety Institute, launched a little over a year ago with a lot of fanfare, shouldn’t come as too much of a surprise.
When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words “safety,” “harm,” “existential,” and “threat” did not appear at all in the document.
That was not an oversight. The government’s plan is to kickstart investment in a more modernized economy, using technology, and specifically AI, to do that. It wants to work more closely with Big Tech, and it also wants to build its own homegrown big tech companies. The main messages it has been promoting are development, AI, and more development. Civil servants will get their own AI assistant called “Humphrey,” and they are being encouraged to share data and use AI in other areas to speed up how they work. Consumers will be getting digital wallets for their government documents, and chatbots.
Have the AI safety issues been resolved? Not exactly, but the message seems to be that they can’t be considered at the expense of progress.
The government claims that, despite the name change, the song will remain the same.
“The changes I’m announcing today represent the logical next step in how we approach responsible AI development, helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle said in a statement. “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values, and way of life.”
“The Institute’s focus from the start has been on security, and we have built a team of scientists focused on serious risks to the public,” added Ian Hogarth, who remains chair of the institute. “Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling these risks.”
Further afield, priorities certainly appear to have shifted around the importance of “AI safety.” The biggest risk the AI Safety Institute in the United States is contemplating right now is that it will be dismantled, something U.S. Vice President JD Vance telegraphed earlier this week during his speech in Paris.