Two years after Tesla began using its own artificial intelligence chips in its autonomous vehicles, the company unveiled last Thursday (the 19th) a new component aimed at training AI networks: the D1. The chip will form part of the Dojo supercomputer system, which, according to Elon Musk's expectations, should go into operation as early as 2022.
Ganesh Venkataramanan, senior director of Autopilot hardware at the manufacturer, explains that the processor, built on a 7 nm process, has a die area of 645 mm², 50 billion transistors, 354 nodes based on a 64-bit superscalar CPU with four cores, and up to 362 teraflops of processing power. With so much power, the solution will compete with products from Intel, NVIDIA and Graphcore.
Also according to the executive, 25 D1 units will form a single training block, and 120 blocks together will deliver more than 1 exaflop. That performance places the equipment, which supports the FP32, BFP16, CFP8, INT32, INT16 and INT8 instruction formats, among the fastest in the world. "We will set up our first cabinets soon," Venkataramanan points out.
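Those headline figures are easy to sanity-check. A minimal sketch, assuming the 362-teraflop number quoted above is the peak per-chip throughput:

```python
# Back-of-the-envelope check of the figures quoted in the article.
D1_PEAK_TFLOPS = 362      # peak throughput per D1 chip (from the article)
CHIPS_PER_BLOCK = 25      # D1 chips per training block
BLOCKS = 120              # training blocks in the full Dojo system

# One block: 25 chips x 362 TFLOPS = 9,050 TFLOPS = 9.05 PFLOPS
block_pflops = D1_PEAK_TFLOPS * CHIPS_PER_BLOCK / 1_000

# Full system: 120 blocks, converted from petaflops to exaflops
total_eflops = block_pflops * BLOCKS / 1_000

print(f"one block : {block_pflops:.2f} PFLOPS")   # 9.05 PFLOPS
print(f"full Dojo : {total_eflops:.3f} EFLOPS")   # 1.086 EFLOPS
```

At roughly 1.086 exaflops, the aggregate is indeed "more than 1 exaflop", consistent with the claim above.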
A step forward
D1 processors will help models improve recognition of the wide variety of objects captured by Tesla vehicle cameras, a task that demands extensive computing work. The gains offered by the new chip should allow the company's products to reach a new level of autonomy.
For now, the Full Self-Driving Capability add-on, which costs US$ 10 thousand (about R$ 54 thousand in direct conversion), allows the brand's cars to change lanes, travel along highways, and enter and leave parking spaces to meet their drivers with almost no human intervention; the promise is that they will soon navigate city streets automatically.
To deliver all of that, however, the manufacturer will depend on Dojo's success and its unprecedented bandwidth of up to 10 TB/s. After all, the fleet of more than 1 million vehicles managed by Tesla currently generates a staggering amount of data, which is used precisely to train its neural networks.