Anthropic CEO Dario Amodei
Just after the AI Action Summit in Paris wrapped up, Anthropic co-founder and CEO Dario Amodei called the event a “missed opportunity.” He added that “greater focus and urgency is needed on several topics given the pace at which the technology is progressing” in a statement released on Tuesday.
Anthropic organized a developer-focused event in Paris in partnership with the French startup Dust, and TechCrunch had the opportunity to interview Amodei on stage. During the event, he laid out his line of thinking and defended a third path that is neither pure optimism nor pure criticism on the topics of AI innovation and governance, respectively.
“I used to be a neuroscientist, where I basically looked inside real brains for a living. And now we’re looking inside artificial brains for a living. So, over the coming months, we’re going to have some fascinating advances in the field of interpretability, where we’re really starting to understand how the models operate,” Amodei told TechCrunch.
“But it’s definitely a race. It’s a race between making the models more powerful, which is incredibly fast for us and incredibly fast for others; you can’t really slow down, right? … Our understanding has to keep up with our ability to build things. I think that’s the only way,” he added.
Since the first AI Safety Summit at Bletchley Park in the U.K., the tone of the discussion around AI governance has changed considerably. That is partly due to the current geopolitical landscape.
“I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago,” U.S. Vice President JD Vance said at the AI Action Summit on Tuesday. “I’m here to talk about AI opportunity.”
Interestingly, Amodei tries to avoid this antagonization between safety and opportunity. In fact, he believes an increased focus on safety is an opportunity.
“At the original summit, the U.K. Bletchley Summit, there were a lot of discussions about testing and measurement for various risks. And I don’t think these things slowed the technology down very much,” Amodei said at the Anthropic event. “If anything, doing this kind of measurement has helped us better understand our models, which helps us produce better models.”
And whenever Amodei puts the emphasis on safety, he also likes to remind everyone that Anthropic is still very much focused on building frontier AI models.
“I don’t want to do anything to reduce the promise. We’re providing models every day that people can rely on and that are used to do amazing things. And we definitely should not stop doing that,” he said.
“When people talk a lot about the risks, I kind of get annoyed, and I say, ‘oh, man, no one’s really done a good job of laying out how great this technology could be,’” he added later in the conversation.
DeepSeek’s training costs are “just not accurate”
When the conversation shifted to Chinese LLM maker DeepSeek’s recent models, Amodei downplayed the technical achievements and said he felt the public reaction was “inorganic.”
“Honestly, my reaction was very little. We had seen V3, which is the base model for DeepSeek R1, back in December. And that was an impressive model,” he said. “The model that was released in December was on this kind of very normal cost reduction curve that we’ve seen in our models and other models.”
What was notable is that the model didn’t come out of the “three or four frontier labs” based in the U.S. He listed Google, OpenAI, and Anthropic as some of the frontier labs that generally push the envelope with new model releases.
“And that was a matter of geopolitical concern to me. I never wanted authoritarian governments to dominate this technology,” he said.
As for DeepSeek’s supposed training costs, he dismissed the idea that DeepSeek V3 was 100x cheaper to train compared to training costs in the U.S. “I think [it] is just not accurate and not based on facts,” he said.
Upcoming Claude models with reasoning
While Amodei didn’t announce any new model at Wednesday’s event, he teased some of the company’s upcoming releases, and yes, that includes reasoning capabilities.
“We’re generally focused on trying to make our own take on reasoning models that are better differentiated. We care about making sure we have enough capacity, that the models get smarter, and we care about safety things,” Amodei said.
One of the problems Anthropic is trying to solve is the model selection conundrum. If you have a ChatGPT Plus account, for example, it can be hard to know which model to pick from the model picker for your next message.

The same goes for developers using large language model (LLM) APIs for their own applications: they want to balance accuracy, response speed, and cost.
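To make that trade-off concrete, here is a minimal sketch using the Anthropic Python SDK that switches between a cheaper, faster model and a more capable one with a single flag. The specific model IDs and the prefer_speed parameter are illustrative assumptions for this example, not something recommended in the article.

```python
# Minimal sketch, assuming the `anthropic` Python SDK is installed and an
# ANTHROPIC_API_KEY environment variable is set. The model IDs below are
# examples of a faster/cheaper tier vs. a more capable tier and may change.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str, prefer_speed: bool = False) -> str:
    # The choice developers weigh: lower latency and cost vs. higher accuracy.
    model = "claude-3-5-haiku-20241022" if prefer_speed else "claude-3-5-sonnet-20241022"
    message = client.messages.create(
        model=model,
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(ask("Summarize this contract clause in one sentence.", prefer_speed=True))
```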
“We’ve been a little bit puzzled by the idea that there are normal models and there are reasoning models and that they’re somehow different from each other,” Amodei said. “If I’m talking to you, you don’t have two brains, one of which responds right away while the other waits longer.”
According to him, depending on the input, there should be a smoother transition between pre-trained models such as Claude 3.5 Sonnet or GPT-4o and models trained with reinforcement learning that can produce a chain of thought (CoT), such as OpenAI’s o1 or DeepSeek’s R1.
“We believe these should exist as part of one single continuous entity. And we may not be there yet, but Anthropic really wants to move things in that direction,” Amodei said. “We should have a smoother transition from that to the pre-trained models, rather than ‘here’s thing A and here’s thing B,’” he added.
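Amodei’s point is that this split is a seam users and developers currently have to bridge themselves. Purely as an illustration of that seam, the toy dispatcher below routes prompts that look like multi-step problems to a reasoning-style model and everything else to a fast model; the keyword heuristic and model names are hypothetical placeholders, not Anthropic’s actual approach.

```python
# Hypothetical sketch only: a crude router between a fast model and a
# reasoning model, illustrating the "two brains" split Amodei describes.
FAST_MODEL = "fast-model-placeholder"            # e.g., a pre-trained chat model
REASONING_MODEL = "reasoning-model-placeholder"  # e.g., a CoT-producing model

MULTI_STEP_HINTS = ("prove", "step by step", "derive", "debug", "plan")

def route(prompt: str) -> str:
    """Return which model tier to use for this prompt (placeholder heuristic)."""
    text = prompt.lower()
    looks_hard = len(prompt) > 500 or any(hint in text for hint in MULTI_STEP_HINTS)
    return REASONING_MODEL if looks_hard else FAST_MODEL

if __name__ == "__main__":
    print(route("What's the capital of France?"))      # fast path
    print(route("Prove this step by step: ..."))        # reasoning path
```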
As large AI companies like Anthropic continue to release better models, Amodei believes it will open up great opportunities to disrupt large companies in every industry around the world.
“We’re working with some pharmaceutical companies to use Claude to write clinical studies, and they’ve been able to cut the time it takes to write the clinical study report from 12 weeks to three days,” Amodei said.
“Beyond biomedical, there’s legal, financial, insurance, productivity, software, things around energy. I think there’s going to be, basically, a renaissance of disruptive innovation in the AI application space. And we want to help it, we want to support it all,” he concluded.
Read our full coverage of the AI Action Summit in Paris.