Tesla / Deep Learning / Innovation
If you’ve read much about the history of Tesla, you’ll know they’ve got a history of making in-house what other car companies buy from suppliers — not because they are driven by some corporate self-sufficiency dogma, but because for many key components they couldn’t find a supplier who could meet their very, very high standards and do it at a price they thought reasonable.
This has led Tesla to invent or co-invent components from drive shafts to the monster 17-inch touchscreen that they decided their first car just had to have. Innovations in motors and batteries are less surprising in an electric car company — if they couldn’t innovate there, perhaps even Muskian fundraising prowess wouldn’t have sufficed to save them.
All these things are very cool, very impressive, but what is even cooler? Tesla’s “Full Self Driving” (FSD) chips! Yes, there is some non-Tesla IP in them, but most of the die area is given over to two very carefully designed neural processing units (NPUs). These are accelerators for doing inference on deep neural networks, and they are fast (many tera-ops!), efficient (operations per watt at the level of Intel’s Spring Hill inference accelerator, even though the Intel chip is made with a 10 nanometer process versus 14 nanometers for the Tesla FSD) and very much customised for Tesla’s core workload: processing images with very low latency, with the hardware designed to operate at peak efficiency at a batch size of one (many deep learning accelerators need to be loaded up with large batches to give their best results). The power envelope is nice too: a very abstemious 7.5 watts. As is becoming common with other accelerators, Tesla have chosen to quantize their models down to 8 bits.
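Tesla hasn’t published the details of its quantization scheme, but to give a feel for what “quantizing down to 8 bits” means, here is a minimal sketch of symmetric per-tensor int8 quantization, one common approach (the function names and the per-tensor choice are illustrative assumptions, not Tesla’s actual method):

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with one shared scale."""
    scale = np.abs(weights).max() / 127.0  # per-tensor scale (illustrative choice)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

# Toy check: round-trip error is bounded by half the quantization step
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
max_err = np.abs(w - dequantize(q, scale)).max()
```

The payoff is that int8 multiplies are far cheaper in silicon area and energy than float32 ones, at the cost of a small, bounded rounding error per weight — which is why the “extra steps” of calibrating and validating quantized models are worth it for an inference-only chip.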
This hardware is super-impressive, yes, but think about how hard this would be to get done at almost any other firm. Most companies struggle (fail!) to excel at their core competence; Tesla is now doing many things as well as or better than anyone else. It’s true that they have certain advantages when designing NPUs: they are targeting a single application, they know the customer’s needs intimately, and getting people comfortable with the extra steps required by quantization is not an issue (they don’t need to worry about supporting every possible TensorFlow/PyTorch op and use-case). But they are a car company! I love Intel, it is a great place to work, but if you asked us to build a really top-notch car, well, it could take us some time.
Also, this chip is not vaporware, nor is it merely sampling (like nearly every other deep learning inference accelerator). It has been shipping in an actual product (Tesla’s cars) since April, where it displaced an NVIDIA product (which used about 30% more power to do an order of magnitude less work). A car company has managed not just to out-engineer other car companies, but also a chip company, and not in some deeply obscure application area, but smack in the category their investors are counting on for enormous growth.
Maybe great inventions are driven not by a desire for headlines, patents or even market share, but by a burning need to solve a problem.
The excellent article at the link has lots more details on the FSD: https://fuse.wikichip.org/news/2707/inside-teslas-neural-processor-in-the-fsd-chip/