Tesla’s Dojo, a timeline | TechCrunch
Elon Musk doesn’t want Tesla to be just a car manufacturer. He wants Tesla to be an AI company, one that has figured out how to make cars drive themselves.
Crucial to that mission is Dojo, Tesla’s custom-built supercomputer designed to train its Full Self-Driving (FSD) neural networks. FSD isn’t actually fully self-driving; it can perform some automated driving tasks, but still requires an attentive human behind the wheel. But Tesla thinks that with more data, more compute power, and more training, it can cross the threshold from almost self-driving to fully self-driving.
And that’s where Dojo comes into play.
Musk has been teasing Dojo for some time, but the executive ramped up discussion of the supercomputer throughout 2024. Now that we’re in 2025, another supercomputer called Cortex has entered the chat, but Dojo’s importance to Tesla could still be existential: with EV sales slumping, investors want assurance that Tesla can achieve autonomy. Below is a timeline of Dojo mentions and promises.
2019
First Mentions of Dojo
April 22 – At Tesla’s Autonomy Day, the automaker has its AI team onstage to talk about Autopilot and Full Self-Driving, and the AI powering them both. The company shares information about Tesla’s custom-built chips, which are designed specifically for neural networks and self-driving cars.
During the event, Musk teases Dojo, revealing that it’s a supercomputer for training AI. He also notes that all Tesla cars being produced at the time would have all the hardware necessary for full self-driving and would only need a software update.
2020
Musk starts the Dojo roadshow
February 2 – Musk says Tesla will soon have more than a million connected vehicles worldwide with the sensors and compute needed for full self-driving, and touts Dojo’s capabilities:
“Dojo, our training supercomputer, will be able to process vast amounts of video training data and efficiently run hyperspace arrays with a vast number of parameters, plenty of memory and ultra-high bandwidth between cores. More on this later.”
August 14 – Musk reiterates Tesla’s plan to develop a neural network training computer called Dojo “to process large amounts of video data,” calling it “a beast.” He also says the first version of Dojo is “about a year away,” which would put its launch date somewhere around August 2021.
December 31 – Elon says Dojo isn’t needed, but that it will make self-driving better. “It is not enough to be safer than human drivers, Autopilot ultimately needs to be more than 10 times safer than human drivers.”
2021
Tesla makes Dojo official
August 19 – The automaker officially announces Dojo at Tesla’s first AI Day, an event meant to attract engineers to the Tesla AI team. Tesla also introduces its D1 chip, which the automaker says it will use, alongside Nvidia GPUs, to power the Dojo supercomputer. Tesla notes its AI cluster will house 3,000 D1 chips.
October 12 – Tesla releases a whitepaper on Dojo technology, “A Guide to Tesla’s Configurable Floating Point Formats & Arithmetic.” The whitepaper describes a technical standard for a new type of binary floating-point arithmetic that is used in deep learning neural networks and can be implemented “entirely in software, entirely in hardware, or in any combination of software and hardware.”
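As a rough illustration of what “configurable” means here, the Python sketch below decodes an 8-bit float whose exponent bias can be adjusted per tensor. The 1/4/3 sign/exponent/mantissa split and the bias values are assumptions chosen for illustration, not the exact formats defined in Tesla’s whitepaper.

    # Illustrative sketch only: decoding an 8-bit float with a configurable exponent bias.
    # The 1/4/3 field split and the example biases are assumptions, not Tesla's CFloat8 spec.

    def decode_cfloat8(byte: int, exp_bits: int = 4, bias: int = 7) -> float:
        """Decode an 8-bit value as sign / exponent / mantissa with an adjustable bias."""
        mant_bits = 7 - exp_bits
        sign = -1.0 if (byte >> 7) & 1 else 1.0
        exp = (byte >> mant_bits) & ((1 << exp_bits) - 1)
        mant = byte & ((1 << mant_bits) - 1)
        if exp == 0:  # subnormal: no implicit leading 1
            return sign * (mant / (1 << mant_bits)) * 2.0 ** (1 - bias)
        return sign * (1 + mant / (1 << mant_bits)) * 2.0 ** (exp - bias)

    # The same bit pattern maps to different magnitudes as the bias changes,
    # which is the knob a configurable format exposes to match a layer's value range.
    print(decode_cfloat8(0b01001010, bias=7))   # 5.0
    print(decode_cfloat8(0b01001010, bias=11))  # 0.3125

The point of a scheme like this is that narrow 8-bit values can cover very different dynamic ranges depending on how the bias is set, which is useful when different layers of a network produce values of very different scales.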
2022
Tesla reveals Dojo progress
August 12 – Musk says Tesla will “phase in Dojo. Won’t need to buy as many incremental GPUs next year.”
September 30 – At Tesla’s second AI Day, the company reveals that it has installed the first Dojo cabinet, testing 2.2 megawatts of load testing. Tesla says it is building one tile per day (each made up of 25 D1 chips). Tesla demos Dojo onstage running a Stable Diffusion model to create an AI-generated image of a “Cybertruck on Mars.”
Importantly, the company sets a target date for a full Exapod cluster to be completed by Q1 2023, and says it plans to build a total of seven Exapods in Palo Alto.
2023
A “long-shot bet”
April 19 – Musk tells investors during Tesla’s first-quarter earnings call that Dojo “has the potential for an order of magnitude improvement in the cost of training,” and “also has the potential to become a sellable service that we would offer to other companies in the same way that Amazon Web Services offers web services.”
Musk also notes that he would “look at Dojo as kind of a long-shot bet,” but a “bet worth making.”
June 21 – The Tesla AI X account posts that the company’s neural networks are already in customer vehicles. The thread includes a graph with a timeline of Tesla’s current and projected compute power, which places the start of Dojo production at July 2023, though it’s not clear whether this refers to the D1 chips or the supercomputer itself. Musk says that same day that Dojo is already online and running tasks at Tesla data centers.
The post also projects that Tesla’s compute will be among the top five in the world by around February 2024 (there’s no indication this happened) and that Tesla would reach 100 exaflops by October 2024.
July 19 – Tesla notes in its second-quarter earnings report that it has started production of Dojo. Musk also says Tesla plans to spend more than $1 billion on Dojo through 2024.
September 6 – Musk posts on X that Tesla is limited by AI training compute, but that Nvidia and Dojo will fix it. He says managing the data from the roughly 160 billion frames of video Tesla’s cars capture per day is extremely difficult.
2024
Plans to scale
January 24 – During Tesla’s fourth-quarter and full-year earnings call, Musk again acknowledges that Dojo is a high-risk, high-reward project. He also says that Tesla is pursuing “the dual path of Nvidia and Dojo,” that “Dojo is working” and is “doing training jobs.” He notes that Tesla is scaling it up and has “plans for Dojo 1.5, Dojo 2, Dojo 3 and so on.”
January 26 – Tesla announces plans to spend $500 million to build a Dojo supercomputer in Buffalo. Musk then downplays the investment somewhat, posting on X that while $500 million is a large sum, it’s “only equivalent to a 10k H100 system from Nvidia. Tesla will spend more than that on Nvidia hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point.”
April 30 – At TSMC’s North American Technology Symposium, the company says Dojo’s next-generation training tile, the D2 (which puts the entire Dojo tile onto a single silicon wafer, rather than connecting 25 chips to make one tile), is already in production, according to IEEE Spectrum.
May 20 – Musk notes that the rear portion of the Giga Texas factory extension will include the construction of “a super dense, water-cooled supercomputer cluster.”
June 4 – A CNBC report reveals that Musk diverted thousands of Nvidia chips reserved for Tesla to X and xAI. After initially saying the report was false, Musk posts on X that Tesla had no place to send the Nvidia chips to turn them on, due to the continued construction on the south extension of Giga Texas, “so they would have just sat in a warehouse.” He notes that the extension “will house 50k H100s for FSD training.”
He also posts:
“Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, NVidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3B to $4B this year.”
July 1 – Musk reveals on X that current Tesla vehicles may not have the right hardware for the company’s next-generation AI model. He says that the roughly 5x increase in parameter count with the next-generation AI “is very difficult to achieve without upgrading the vehicle inference computer.”
Nvidia supply challenges
July 23 – During Tesla’s second-quarter earnings call, Musk says that demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.”
“I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need,” Musk says. “And we do see a path to being competitive with Nvidia with Dojo.”
A graph from Tesla’s investor deck predicts that Tesla’s AI training capacity will ramp up to roughly 90,000 H100-equivalent GPUs by the end of 2024, up from around 40,000 in June. Later in the day on X, Musk posts that Dojo 1 will have “roughly 8k H100-equivalent of training online by end of year.” He also posts photos of the supercomputer, which appears to use the same refrigerator-like stainless steel exterior as Tesla Cybertrucks.
From Dojo to Cortex
July 30 – AI5 is roughly 18 months away from high-volume production, Musk says in a reply to a post from someone claiming to start a club of “Tesla HW4/AI4 owners angry about being left behind when AI5 comes out.”
August 3 – Musk posts on X that he took a walkthrough of “the Tesla supercompute cluster at Giga Texas (aka Cortex).” He notes that it would be made up of around 100,000 Nvidia H100/H200 GPUs with “massive storage for FSD & Optimus video training.”
August 26 – Musk posts on X a video of Cortex, which he calls “the giant new AI training supercomputer being built at Tesla HQ in Austin to solve real-world AI.”
2025
No Dojo updates in 2025
January 29 – Tesla’s Q4 and full-year 2024 earnings call includes no mention of Dojo. Cortex, Tesla’s new AI training supercluster at the Austin gigafactory, does come up, however. Tesla notes in its shareholder deck that it completed the deployment of Cortex, which is made up of roughly 50,000 Nvidia H100 GPUs.
“Cortex helped enable V13 of FSD (Supervised), which has major safety and comfort improvements thanks to a 4.2x increase in data, higher resolution video inputs … among other improvements,” according to the letter.
During the call, CFO Vaibhav Taneja notes that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13. He says AI-related capital expenditures, including infrastructure, have been “so far, around $5 billion.” For 2025, Taneja says he expects AI-related capex to be flat.
This story originally published on August 10, 2024, and we will update it as new information develops.