High Performance Compute - Under the Hood
- ICM
- Jul 7
- 7 min read
Written By Real People (all the typos and poor grammar are ours. Don’t blame the robots).

Our High Performance Compute team tends to send out insights and analysis on deep tech for the data centre. But that’s not all we do. That would be tedious. We also look at future tech stuff that will deliver compute closer to home. What if the high-powered processing you need is not accessible via the cloud? What if you need it most in your home or car, or even in your brain? Extreme compute at the edge is driven by the same physics as those mega-clusters at the hyperscale gigawatt data centres.
At ICM HPQC Fund (HPQC), we ask – what new tech will get the most flops out for the fewest watts in? To find the start-ups that could scale up into this edge space, it’s useful to track what the majors are telling us their requirements will be in five years. Best in class (and certainly the most fun to track) is Tesla and sister company Neuralink. In the next two posts, we will look at what Musk is doing to improve compute under the hood, and under the skull.

We’ve talked about Tesla’s ‘Dojo’ training chips – these are architected to train the AI models behind Tesla’s self-driving cars and, in all likelihood, the self-working Optimus robots due from next year on. If you haven’t seen the progress on Optimus yet, check out the new video from Tesla here.
Dojo chips sit in massive data centres training large AI models. The less sexy but possibly more meaningful chip is the Full Self-Driving (FSD) chip Tesla has designed to sit at the core of every Tesla car. Strictly speaking, these are AI inference chips.

Within the FSD hardware, design constraints are very tight. In an electric car (or robot) you want the battery power driving the vehicle, not the AI. So every aspect of the architecture needs to be allergic to waste – minimal heat, minimal wattage. Anything that lowers the power bill while still delivering flops – or rather TOPS (trillions of operations per second) – is a win. Whatever you design also has to work flawlessly in extreme environmental conditions for a decade. Over hill and over dale. This is how Tesla itself describes the FSD design team’s brief:
“Build AI inference chips to run our Full Self-Driving software, considering every small architectural and micro-architectural improvement while squeezing maximum silicon performance-per-watt. Perform floor-planning, timing, and power analyses on the design. Write robust tests and scoreboards to verify functionality and performance. Implement drivers to program and communicate with the chip, focusing on performance optimisation and redundancy. Finally, validate the silicon chip and bring it to mass production in our vehicles.”
There is a lot to unpack in that statement – but just from a design architecture perspective, imagine the challenge of packing raw processing power into as tight an envelope as possible, with enough headroom not just for the current software iteration but for the heavier loads that later software updates will likely bring. The way the Tesla design team must be organised to strike that balance, with all the sub-engineering teams competing for a scarce power budget across the architecture, would be fascinating to watch – fun, challenging, dynamic, and all-consuming if you are a young engineer. Mildly terrifying and exhausting if you are an older designer who cut their teeth at Intel or Broadcom.
Back to the FSD kit – we just got a sneak peek at what Tesla’s new FSD hardware may be able to do. Korean news outlet MK has provided what seems to be a credible glimpse of the AI5, Tesla’s next FSD computer. (If you are driving a new Tesla today, it has an AI4 under the hood.)
MK’s report claims that Tesla is preparing to produce its new AI5 FSD computer with a performance target of 2,000 to 2,500 TOPS. According to the report, Tesla is considering using both Samsung and TSMC to manufacture the hardware (source).
To grasp what that 2,500 TOPS number means, let’s compare it to the fancy new gaming PC you just bought to play F1 driver-sims on. Say it has one of Nvidia’s recently released gaming GPUs in it – an RTX 5080 or an RTX 5090 (roughly $1,500 and $3,000, respectively). The 5080 clocks in at around 1,800 TOPS, while the 5090 pushes a powerful 3,400 TOPS. Those come alongside power draws of 360 and 575 watts, respectively.
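Run the performance-per-watt numbers and you can see why a dedicated inference chip matters in a battery-powered machine. Here is a back-of-the-envelope sketch in Python – note that Tesla has not published an AI5 power figure, so the 100 W below is purely our illustrative assumption:

```python
# Back-of-the-envelope TOPS-per-watt comparison.
# RTX figures are Nvidia's published specs; Tesla has NOT published
# an AI5 power draw, so the 100 W below is purely an illustrative guess.
chips = {
    "RTX 5080": {"tops": 1_800, "watts": 360},
    "RTX 5090": {"tops": 3_400, "watts": 575},
    "AI5 (assumed 100 W)": {"tops": 2_250, "watts": 100},  # midpoint of 2,000-2,500 TOPS
}

for name, spec in chips.items():
    tops_per_watt = spec["tops"] / spec["watts"]
    print(f"{name:>20}: {spec['tops']:>5} TOPS / {spec['watts']:>3} W "
          f"= {tops_per_watt:.1f} TOPS/W")
```

If that assumption is even roughly right, the automotive part would need several times the efficiency of a desktop GPU – which is exactly the allergic-to-waste design brief described above.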
For a dedicated automotive AI chip, landing squarely in the middle of those performance numbers is quite a feat, especially given Tesla’s previous hardware. HW3 clocked in at a measly 144 TOPS, while HW4/AI4, the one in your Tesla today, pulls in at around 500 TOPS – a roughly 3.5x leap over HW3. AI5’s 2,000-2,500 TOPS target would be another 4-5x on top of that.
Tesla’s quest to teach its cars to think has never been short of ambition – or silicon. The need for AI5, this latest in-house computing beast, stems not merely from the demands of today’s Full Self-Driving system, but from the insatiable appetite of tomorrow’s algorithms. As Tesla’s neural networks grow fatter and cleverer, so too must the machinery that feeds and houses them. Bigger brains need bigger memory.
And it’s not just a matter of size. Autonomy is a game played in milliseconds. A human driver might mull over whether that pedestrian is about to jaywalk; Tesla’s FSD must decide, act, and brake before you’ve blinked. If latency were a luxury, the system could ponder its decisions at leisure, serving up a polished reply after tea. Alas, robots do not enjoy such indulgences. The tyranny of real-time constraints makes computational brawn not a bonus, but a necessity.
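To put numbers on that tyranny, here is a quick sketch of how far a car travels while its inference chip is still thinking. The latencies below are illustrative round numbers, not Tesla’s actual figures:

```python
# How far does a car travel while the inference chip is still "thinking"?
# Latencies are illustrative round numbers, not Tesla's actual figures.
speed_kmh = 100
speed_mps = speed_kmh * 1000 / 3600  # ~27.8 metres per second

for latency_ms in (10, 50, 100, 250):
    blind_metres = speed_mps * latency_ms / 1000
    print(f"{latency_ms:>3} ms of latency at {speed_kmh} km/h = "
          f"{blind_metres:.1f} m travelled blind")
```

At highway speed, every extra 100 ms of think-time is nearly three metres of road covered before the brakes get the message.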
Elon Musk’s lieutenants are fond of proclaiming that the road to unsupervised autonomy – and those new robotaxis – is paved with petaflops. More power, they say, and more backup power. Each leap in performance promises to tame yet another “edge case,” those pesky scenarios that fall outside the training data. The goal? Marching steadily from 99% reliability to the industrial-grade succession of 9s beloved by engineers: 99.9%, 99.99%, and eventually, nirvana. But with each decimal comes greater complexity, and with it, the need for not only smarter code, but the hardware muscle to carry it all without breaking stride – or crashing into a lamppost.
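Here is what each extra nine buys, in toy form – under the crude simplifying assumption that every driving decision is independent, which real-world failures certainly are not:

```python
# What each extra "nine" of reliability buys: expected failures per
# million decisions, under the crude (and wrong, but illustrative)
# assumption that every decision is independent.
for nines in (2, 3, 4, 5):
    failure_rate = 10 ** -nines  # 99% reliable -> 1% failure rate, etc.
    print(f"{1 - failure_rate:.3%} reliable -> "
          f"{failure_rate * 1_000_000:,.0f} failures per million decisions")
```

Each nine cuts the failure count tenfold, but the edge cases that remain are, by definition, the rarest and strangest ones – and hunting them down is what eats the extra TOPS.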
That’s where some new startups might become useful to Tesla. As transistor density soars, electrical interconnects hit a bandwidth wall. Co‑packaged optics (CPO)—where silicon photonics and ASICs sit together on an interposer—offer a game‑changing remedy. They promise chip‑to‑chip I/O at terabyte‑scale speeds with nanosecond latency. Market‑leading startups like Celestial AI have already raised US$250 million to commercialise their “Photonic Fabric” that bridges compute and memory dies via light, dramatically slashing power and latency. Newcomers like Mixx could be in the mix. Tesla could license or acquire such tech to outfit AI5/AI6 with integrated optical interconnects.
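Why light wins on power comes down to simple arithmetic: link power is bandwidth times energy per bit, usually quoted in picojoules per bit. The figures below are ballpark numbers from the literature, not specs for any particular product:

```python
# Rough link-power arithmetic: watts = bits per second x joules per bit.
# Energy-per-bit figures are ballpark literature numbers, not vendor
# specs for any particular product.
bits_per_second = 8e12  # 1 terabyte per second of chip-to-chip I/O

links = {
    "electrical SerDes (~5 pJ/bit)": 5e-12,
    "co-packaged optics (~1 pJ/bit)": 1e-12,
}

for name, joules_per_bit in links.items():
    watts = bits_per_second * joules_per_bit
    print(f"{name}: {watts:.0f} W to move 1 TB/s")
```

In a car where every watt spent on compute is a watt not spent on range, shaving tens of watts off the interconnect alone is the sort of saving that gets a chip architect promoted.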

Beyond simple point‑to‑point links, future AI chips will need optical network‑on‑chip (NoC) fabrics to connect multiple AI accelerators and memory banks. Academic concepts such as co‑packaged optics for disaggregated AI systems show that integrating optical waveguides across chiplets can support TB/s bandwidth with sub‑nanosecond delays. Firms like Lightmatter and Ayar Labs (together raising nearly US$1.2 billion) are building tech for optical fabrics at scale. Tesla must either partner with these or build its own photonic R&D team. (Check this out: https://arxiv.org/abs/2303.01744)
The next frontier may be photonic neuromorphic tensor cores—optical accelerators that compute and store on the same substrate. Lab breakthroughs in VCSEL‑driven spiking photonic neuromorphic circuits hint at ultra‑fast, energy‑efficient processing ideal for edge inference. Tesla might collaborate with VCSEL pioneers or fund early‑stage projects to build a niche advantage in optical inference hardware (source).
So next rainy Sunday afternoon when you're idly flicking through colour options for your new Tesla Model S Plaid, spare a thought for not only the accent features you want on the leather trim, but also the engineering genius dedicated to the compute under the hood. That small pack of silicon that saves you from driving headlong into oncoming traffic after one too many gin slings at the office Christmas party. Not that we’re inferring anything…
Next edition – compute in the skull. Prepping yourself for insertion.
Matthew Gould
Portfolio Manager, ICM HPQC Fund
July 2025
Important Note
The information in this article should not be considered an offer or solicitation to deal in ICM HPQC Fund (Registration number T22VC0112B-SF003) (the “Sub-fund”). The information is provided on a general basis for informational purposes only and is not to be relied upon as investment, legal, tax, or other advice. It does not take into account the investment objectives, financial situation, or particular needs of any specific investor. The information presented has been obtained from sources believed to be reliable, but no representation or warranty is given or may be implied that it is accurate or complete. The Investment Manager reserves the right to amend the information contained herein at any time, without notice. Investments in the Sub-fund are subject to investment risks, including the possible loss of the principal amount invested. The value of investments and the income derived therefrom may fall or rise. Past performance is not indicative of future performance. Investors should seek relevant professional advice before making any investment decision. This document is intended solely for institutional investors and accredited investors as defined under the Securities and Futures Act (Cap. 289) of Singapore. This document has not been reviewed by the Monetary Authority of Singapore.
ICM HPQC Fund is a registered Sub-fund of the ICMGF VCC (the VCC), a variable capital company incorporated in the Republic of Singapore. The assets and liabilities of ICM HPQC Fund are segregated from other Sub-funds of the VCC, in accordance with Section 29 of the VCC Act.




