ICM HPQC News Flash - April 2026
- Apr 8
- 13 min read
Updated: Apr 14

The New Tin
Silicon, TSMC, and the Oldest Supply-Chain Crisis in History
By Matt Gould │ HPQC │ April 2026
“Written by humans, please don’t blame the robots for our typos”
When the great palace civilisations of the Bronze Age ran out of tin, empires fell and four centuries of darkness followed. Today, artificial intelligence runs on a different critical material — advanced silicon wafers from a single company in Taiwan. The bottleneck looks uncomfortably familiar.
The recent long weekend gave all of us, one hopes, time for reflection and a chance to catch up on non-essential reading. I found myself deep in the reflective, well-written prose of Professor Eric Cline’s seminal tome, 1177 BC: The Year Civilization Collapsed (Princeton University Press). In it, Cline outlines his theory of why the great civilisations of the Late Bronze Age collapsed. If you are more for listening than reading, I can recommend the long form of his lecture on the same topic, freely available online. Key to Cline’s thesis is the notion that the constriction of just a few resources was enough to trigger much of the collapse. I, for one, favour tin as the prime candidate for society’s implosion. The idea goes like this:
The Setup
The great palace civilisations of the Eastern Mediterranean — Mycenaean Greece, the Hittites, Ugarit, Assyria, and Egypt — were all dependent on bronze. Bronze is an alloy of copper and tin, and while copper was reasonably available around the Mediterranean, tin was extraordinarily rare. The nearest reliable sources were likely in Afghanistan (the Badakhshan region), possibly Anatolia, and perhaps as far as Cornwall in Britain. No, really — people went to Cornwall back then, despite its well-earned reputation for wild, woad-daubed inhabitants who spiked their hair with clay. This meant tin had to travel thousands of miles across multiple kingdoms and sea routes before it reached the smiths who needed it.

The entire military and economic infrastructure of these civilisations — weapons, armour, tools, ship fittings, agricultural implements — ran on bronze. There was no substitute. And bronze needs tin.
The Constraint
These trade networks were intricate and fragile. Tin moved along overland caravan routes and sea lanes involving dozens of middlemen, port cities, and political agreements between kingdoms. The Ugarit tablets — clay administrative records found in modern Syria — give us an almost forensic picture of this system: merchants writing urgent letters about delayed tin shipments, kings negotiating safe passage, warehouses tracking stocks with meticulous precision.
Around 1200 BCE, a confluence of factors — drought, internal rebellions, the migrations of the “Sea Peoples,” and the cascading failure of political stability — began strangling these critical tin trade routes. Ugarit, one of the great hub cities of this network, was destroyed around 1185 BCE. Its last known clay tablet is a desperate letter from its king to the king of Alashiya (Cyprus), reporting enemy ships spotted offshore. It was never sent.

The Collapse
Without tin, you cannot make bronze. Without bronze, armies cannot equip themselves, farmers cannot maintain tools, navies cannot build or repair ships. The palace economies, which ran on tight redistributive systems with virtually no slack, had no fallback. They couldn’t substitute iron — that technology wasn’t yet mature enough. They couldn’t stockpile tin because the whole system assumed continuous supply.
Mycenae fell. The Hittite Empire, one of the great superpowers of the ancient world, simply ceased to exist within a generation. Ugarit was never rebuilt. Egypt survived but entered a long contraction. Greece entered what historians call the “Dark Ages” — a period of roughly 400 years where literacy, monumental architecture, and long-distance trade essentially vanished.
Now, if any of this is sounding spookily prescient for our current civilisation’s dependence on silicon wafers and specialised chips, you would be right. If you were thinking Matt is now going to make an elaborate segue into the current economics of chip scarcity — you would be even more right. (I’m not about to suggest we are on the cusp of a millennium of civilisational collapse and the fall of empires. But it is enormously instructive to draw a comparison between the constraints in the tin trade three thousand years ago and the constraints in chips from TSMC this month. The parallels are there, so come along for the ride.)
A Single Foundry to Rule Them All
The Bronze Age had its tin mines in the mountains of Afghanistan, its trade routes threading thousands of miles across kingdoms and seas, all funnelling a single irreplaceable ingredient to the smiths of the Mediterranean. The AI age has TSMC, which occupies a similarly solitary position as the world’s only credible manufacturer of the advanced logic chips that power artificial intelligence.
This is not, in itself, a new observation. What is new — and what will determine the competitive fortunes of every technology company worth its market capitalisation — is the specific depth of the current bottleneck. We are no longer merely dependent on TSMC in the abstract. We are critically dependent on a single fabrication process: TSMC’s N3 family of 3-nanometre nodes. And demand for that process is, in the language of modern supply-chain analysis, absolutely ripping.
The Convergence
When historians of technology eventually write about this period, they will note with some amusement that the entire AI industry managed to decide, more or less simultaneously, to transition to the same fabrication process. NVIDIA, the undisputed kingmaker of the GPU era, is moving from its 4NP-based Blackwell generation to the N3P-based Rubin. Google’s TPU programme, which powers both internal workloads and a growing portion of Anthropic’s inference, has shifted fully to N3E starting with TPU v7. Amazon’s Trainium3 is on N3P. AMD’s MI350X sits on N3, and its MI400 follows suit for the accelerator tiles. Meta’s MTIA chips are along for the ride.
This is what SemiAnalysis describes as an industry-wide convergence toward TSMC’s N3 family. The timing is remarkable. For years, AI demand was satisfied primarily by older process nodes; Apple iPhones and Qualcomm smartphone chips drove the early N3 ramp. In 2026, AI-related demand — accelerators, host CPUs, and networking silicon combined — is expected to account for just under 60% of total N3 wafer output. By 2027, that figure is modelled at 86%. The remaining silicon — your iPhone, your laptop, your Snapdragon — is, effectively, being squeezed out.
This is the 1200 BCE moment. The palace economies all needed bronze at once, the trade routes could not keep pace, and the redistributive systems ran out of slack. Today’s equivalent is TSMC running its N3 fabrication lines at utilisation rates expected to exceed 100% of nameplate capacity in the second half of 2026. The company is extracting every possible wafer from its existing lines, shifting certain process layers to other fabs to free incremental capacity wherever it can be found. The stone-faced precision of semiconductor manufacturing does not lend itself to improvisation. There is no overtime shift that prints more photons.
How Did We Get Here?
The honest answer is that TSMC did not see the scale of it coming quickly enough to line up the capital required (and, in all likelihood, the human resources to manage such a ramp-up). As investors in our fund will know, we called the greatest AI infrastructure build-out in history back in late 2022, triggered by ChatGPT’s eruption into public consciousness. And yet TSMC’s capital expenditure only exceeded its previous historical peak in 2025 — some three years later. The company spent $40.9bn on capex in 2025, and is expected to push beyond $52bn in 2026 as the full scale of the demand shock has become undeniable.
The reason this lag is so painful is architectural. Advanced semiconductor fabrication requires purpose-built cleanrooms, rather more complex than your average Cornish tin mine, and that cleanroom infrastructure must be tuned and signed off before any equipment can be installed or a single wafer started. From the decision to build to the first usable chip output takes approximately two years. TSMC cannot, in 2026, conjure N3 capacity that was not ordered in 2024. This is not a management failing; it is physics.

The Ugarit tablets told a similar story: merchants writing urgent letters about delayed tin shipments, kings negotiating safe passage, accountants tracking warehouse stocks with forensic precision. It’s all happening again. Hyperscaler 2026 capital expenditure revisions, measured against analyst consensus estimates from just six months prior, tell the tale in numbers that would not look out of place in a Bronze Age tribute ledger. Google has revised its 2026 capex upwards by 94%, from $93bn to $180bn. Amazon has added 53%, lifting its estimate to $200bn. Meta has increased by 29%, Microsoft by 19%. These companies would spend more still if there were more to spend on. The constraint, as SemiAnalysis notes, is not ambition or capital — it is silicon supply.
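For readers who like to check the tribute ledger themselves, the quoted revisions can be sanity-checked in a few lines. This is an illustrative sketch only; the function names are mine, and the dollar figures are the consensus revisions cited above.

```python
def revision_pct(prior_bn: float, revised_bn: float) -> float:
    """Percentage uplift from the prior consensus capex estimate."""
    return (revised_bn / prior_bn - 1) * 100

def implied_prior(revised_bn: float, uplift_pct: float) -> float:
    """Back out the prior estimate implied by a quoted uplift."""
    return revised_bn / (1 + uplift_pct / 100)

# Google: revised from $93bn to $180bn
google_uplift = revision_pct(93, 180)    # roughly 94%
# Amazon: +53% lifting the estimate to $200bn, implying a prior of...
amazon_prior = implied_prior(200, 53)    # roughly $131bn
```

The same two-line check works for the Meta (+29%) and Microsoft (+19%) figures once the corresponding base estimates are in hand.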
Memory: The Copper Problem
The Bronze Age analogy holds an additional layer of instructive misery. Tin was the scarce ingredient, but bronze also required copper, which — while more accessible — was not infinite, and its supply chains came under their own stress when the tin routes broke down. Today’s AI chips have a precisely analogous companion constraint in the form of High Bandwidth Memory, or HBM.
HBM is the specialised memory that sits alongside the compute die in every serious AI accelerator. It is manufactured in three-dimensional stacks and requires, on a per-bit basis, roughly three times the wafer capacity of conventional commodity DRAM. As AI accelerator designs grow more memory-hungry with each generation — NVIDIA’s Rubin Ultra reportedly increases HBM capacity by a factor of four compared to Blackwell Ultra, while Google’s TPUv8AX and Amazon’s Trainium3 are also migrating to denser 12-Hi stacks — the memory supply chain is coming under its own independent pressure.
Again, demand is spiking far beyond anything the supply chain was sized for, because the compute gap is not being bridged: In 2023, AI-related applications accounted for approximately 12% of total DRAM wafer capacity. By 2025, that share had reached 39%. SemiAnalysis models it at 52% for 2026 and 69% for 2027. Each incremental HBM wafer-start, moreover, yields fewer bits than a conventional DRAM wafer-start — meaning the crowding-out effect on commodity DRAM is even larger than the headline figures suggest. Memory prices are rising accordingly, feeding back into handset costs, dampening smartphone demand, which — in a slightly grim twist of irony — may marginally free up N3 wafer starts for AI accelerators. The palace economies would have recognised this dynamic entirely: when tin runs short, the price of copper rises too, and the knock-on effects spread unpredictably through every downstream market.
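The crowding-out arithmetic is worth making explicit. A minimal toy model, assuming the roughly 3x wafer-per-bit penalty for HBM quoted above and normalising a commodity DRAM wafer to one "bit-unit" (the normalisation is my own illustration, not a SemiAnalysis figure):

```python
HBM_WAFERS_PER_BIT = 3.0  # HBM needs ~3x the wafer capacity per bit vs commodity DRAM

def bit_output(total_wafers: float, ai_wafer_share: float) -> dict:
    """Split DRAM wafer starts between HBM and commodity, then convert to bits.
    A commodity wafer yields 1 bit-unit; an HBM wafer yields only 1/3."""
    ai_wafers = total_wafers * ai_wafer_share
    return {
        "hbm_bits": ai_wafers / HBM_WAFERS_PER_BIT,
        "commodity_bits": total_wafers - ai_wafers,
    }

# 2026 modelled share: 52% of DRAM wafer starts diverted to AI/HBM
out = bit_output(total_wafers=100, ai_wafer_share=0.52)
# Commodity bit supply falls 52%, but the diverted wafers return only
# ~17 bit-units of HBM, so total bit output drops to ~65 of the original 100.
```

The point of the sketch is the asymmetry: the headline "52% of wafers" understates the hit to commodity DRAM bits, which is exactly the crowding-out effect described above.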
No Iron in Sight
The Hittites and Mycenaeans could not switch from bronze to iron — the technology, while nascent, was not yet competitive at scale. This is precisely the situation facing TSMC’s rivals today. Told you we could drag an historical analogy out to the max. Intel Foundry and Samsung both possess advanced fabrication capabilities, and both are investing heavily to improve them. Intel, in particular, enjoys the explicit backing of the US government, which would very much like the western world’s dependence on a single Taiwanese manufacturer to be somewhat less existential. Samsung has recently secured Tesla chip programmes and, more significantly, entered NVIDIA’s data centre supply chain — a development that, in semiconductor circles, was received as a genuine milestone. Better late than never.

Yet the fact that these developments count as milestones illustrates the problem. TSMC’s technology lead at the leading edge remains sufficiently commanding that customers are not simply routing around it. They are instead competing ferociously for TSMC’s allocation. In this environment, TSMC plays the role of kingmaker. AI accelerator customers, with their larger die sizes, greater packaging complexity, and multi-year purchase commitments backed by the balance sheets of the world’s most valuable companies, are receiving clear priority over consumer electronics customers in N3 allocation decisions. Those in the smartphone and PC segments who cannot secure sufficient N3 capacity may find themselves extending existing product cycles or migrating directly to TSMC’s next node — N2 — ahead of their original roadmaps. Which is not the worst outcome, unless you needed the product this year.
Supply-Chain Control as Competitive Moat
In this environment, successful procurement is the primary determinant of competitive position. Which brings us, inevitably, to NVIDIA.
If the Bronze Age had a kingdom that understood the tin trade better than everyone else — that secured supply agreements early, cultivated relationships with caravan masters, and built strategic reserves while others were still spending on walls — NVIDIA is that kingdom. We were at NVIDIA GTC a few weeks back. Poorly edited videos of Matt trawling booths for quality merch will be forthcoming. NVIDIA is looking increasingly hegemonic. The company has locked in the majority of its supply requirements for logic wafers, HBM memory, and the advanced CoWoS packaging that binds them together into functioning accelerators. Jensen Huang’s trips to South Korea in 2025, which drew considerable comment at the time, were not primarily about sampling the local cuisine. They were, according to SemiAnalysis, about securing memory supply and laying the groundwork for NVIDIA to offload procurement pressure from its customers, keeping them in the supply chain and out of the arms of rivals.
The strategic implication is stark. In a world where the binding constraint is not ideas, capital, or talent, but silicon allocation, the company with the best supply-chain position wins. Not the company with the best model. Not the company with the most engineers. The one with the wafers. As SemiAnalysis puts it with admirable directness: whichever vendor secures the most silicon supply will ultimately capture the most deployed compute.
Anthropic added a reported $6bn of annualised revenue in the single month of February 2026, driven by adoption of its Claude Code platform; the company’s own estimate is that it could have added more still, had more compute been available. The constraint on ambition is not ambition — it is wafers.
The Outlook: Constrained, But Not Collapsed
It is at this point that the ancient parallel offers some genuine comfort. The Bronze Age collapse was total: the palace systems that could not find tin had no fallback, no surplus, no market mechanism to substitute around the problem. Mycenae did not pivot. Ugarit did not iterate. The Dark Ages lasted four centuries.
The current silicon shortage will not end civilisation, or indeed the AI industry. Power — which just a few years ago was the binding constraint, driving feverish construction of data centres and the signing of breathless long-term power purchase agreements — is now available in excess of near-term silicon supply. The data centres exist. The electricity is there. What is missing is the chip to fill the rack. That is a far more tractable problem than the absence of foundational infrastructure, and one amenable to human ingenuity and capital expenditure in ways that the Bronze Age tin crisis was not.
TSMC’s capex programme will, in time, bring new N3 and N2 capacity online. CoWoS advanced packaging constraints, which previously throttled system assembly, are easing as TSMC’s front-end silicon becomes the dominant bottleneck: there is, as SemiAnalysis notes, little point in over-investing in packaging capacity if there is no front-end wafer supply to support it. The human ingenuity that has, across recorded history, always eventually found a way around resource bottlenecks shows no signs of retiring to a less demanding age.
But for now — for the companies racing to deploy compute, for the AI labs whose ambitions outpace their GPU allocations, for the hyperscalers who are revising their capex estimates upwards each quarter and finding that the money outruns the supply — the constraint is as real as it is unyielding. The tin mine of the modern world is a cleanroom in Hsinchu, Taiwan. The queue outside it stretches from Silicon Valley to Seoul, and the wait time is measured in years.
Jensen Huang knows this. That is why he went to Korea.
Time to Go Around the Traps! (Again, with thanks to Robert Dale at Language Technology. Read his Substack!)
Microsoft is taking over a data centre construction project in Texas next to OpenAI's massive Stargate facility. [Yahoo Finance]
Mistral AI has secured US$830m in debt financing to purchase Nvidia GPUs and build a large-scale AI facility near Paris. [Tech Funding News]
Starcloud has raised US$170m in Series A funding to build cost-competitive orbital data centres using Starship-class spacecraft. [Yahoo Finance]
Global cloud spending reached US$399.6B in 2025, growing 24% annually, with Omdia predicting 27% growth in 2026 driven by AI deployment. [TechRadar]
But S&P Global warned that Middle East tensions and rising energy costs could force tech companies to cut AI infrastructure spending, potentially triggering significant equity market corrections. [Yahoo Finance]
SpaceX has filed for an IPO to fund orbital AI data centres but faces economic and technical challenges that doomed Microsoft's similar undersea project. [Yahoo Finance]
Get some Hot Chips
Arm has launched its first in-house AI chip, the AGI CPU, designed for large-scale data centre workloads with Meta and OpenAI as early adopters. [TechRadar]
Four Chinese universities, including two linked to the military, purchased Super Micro servers containing restricted AI chips despite US export controls on advanced processors to China. [Yahoo Finance]
Huawei has launched the 950PR AI chip with improved CUDA compatibility, securing orders from ByteDance and Alibaba. [Phone World]
And Huawei has launched the Atlas 350 accelerator with 1.56 PFLOPS FP4 compute and 112GB HBM, claiming superior performance to Nvidia's H20. [TechRadar]
IBM has partnered with Arm to develop dual-architecture hardware for streamlining enterprise AI and data-intensive workloads. [ITPro]
Intel has launched its Arc Pro B70 ‘Big Battlemage’ GPU with 32GB VRAM starting at $949, designed primarily for AI workloads. [The Verge]
And Intel's Data Centre chief disputes Arm's claim that agentic AI requires specialised new CPU designs. [The Register]
Micron stock declined despite strong earnings as investors worry Google's AI efficiency algorithm will reduce demand for high-bandwidth memory chips. [Yahoo Finance]
SK Hynix has confidentially filed for a US listing targeting 2026, potentially raising $10-14 billion to close valuation gaps with global peers. [TechCrunch]
And Chinese chipmakers captured nearly 41% of China's AI accelerator server market in 2025, significantly eroding Nvidia's dominance. [Reuters]
In case you missed it, read our recent fund communication from HPQC here which includes the latest portfolio update.
Remember: The HPQC fund’s final close is at the end of this month. For those requesting Top-Ups, please contact scott.duncan@icmltd.co
Sources & Further Reading
Data and analysis in this piece draw primarily on: SemiAnalysis, “The Great AI Silicon Shortage: TSMC N3 Wafer Shortages, Memory Constraints, Datacenter Bottlenecks, Supply Chain Wars Winner” (March 2026), by Ivan Chiam, Myron Xie, Ray Wang and colleagues (newsletter.semianalysis.com).
Historical context: Eric H. Cline, 1177 B.C.: The Year Civilization Collapsed (Princeton University Press, 2014; revised edition 2021). Professor Cline’s Long Now Foundation lecture is freely available at longnow.org/talks/02016-cline/.
Archaeometallurgy of Bronze Age tin: Berger et al., “Isotope systematics and chemical composition of tin ingots from Mochlos (Crete) and other Late Bronze Age sites in the eastern Mediterranean,” PLOS ONE (2019); Radivojević et al., “From Land’s End to the Levant: did Britain’s tin sources transform the Bronze Age in Europe and the Mediterranean?” Antiquity (Cambridge Core, 2024/25).
Hyperscaler capex estimates: Company earnings filings (Google, Amazon, Meta, Microsoft) and Bloomberg consensus data, as reported by SemiAnalysis.

Matthew Gould
Portfolio Manager of the ICM HPQC Fund
MAS Licensed Representative, ICM Global Funds Pte Ltd
April 2026
Important Note:
The information in this article should not be considered an offer or solicitation to deal in ICM HPQC Fund (Registration number T22VC0112B-SF003) (the “Sub-fund”). The information is provided on a general basis for informational purposes only and is not to be relied upon as investment, legal, tax, or other advice. It does not take into account the investment objectives, financial situation, or particular needs of any specific investor. The information presented has been obtained from sources believed to be reliable, but no representation or warranty is given or may be implied that it is accurate or complete. The Investment Manager reserves the right to amend the information contained herein at any time, without notice. Investments in the Sub-fund are subject to investment risks, including the possible loss of the principal amount invested. The value of investments and the income derived therefrom may fall or rise. Past performance is not indicative of future performance. Investors should seek relevant professional advice before making any investment decision. This document is intended solely for institutional investors and accredited investors as defined under the Securities and Futures Act (Cap. 289) of Singapore. This advertisement or publication has not been reviewed by the Monetary Authority of Singapore.
ICM HPQC Fund is a registered Sub-fund of the ICMGF VCC (the VCC), a variable capital company incorporated in the Republic of Singapore. The assets and liabilities of ICM HPQC Fund are segregated from other Sub-funds of the VCC, in accordance with Section 29 of the VCC Act.