ICM HPQC News Flash - July 2025
- ICM
- Jun 27

It’s been a big month for compute infrastructure with a continued focus from governments on building sovereign capabilities. Read more on:
Government and private sector commitments to building more compute infrastructure
An update on our portfolio
Exciting progress in quantum computing
A summary of June’s conferences
A brief overview of in-memory computing
Snap coverage of the most recent moves in the sector

Investments Continue to be Pumped into Computing and It’s Just the Beginning
The need for more data centres is driving massive investment in the sector. In mid-June, Temasek announced it was joining a consortium backed by Microsoft, BlackRock, MGX and Nvidia to invest in and expand AI infrastructure, with a US$30B fund and up to US$100B in total investment potential when including debt financing.
At a government level, the UK announced a £1B commitment to fund a 20x increase in computing power by 2030. In France, President Macron reiterated the country’s commitment to building AI infrastructure via a partnership with Mistral and Nvidia. This follows €112.5B in private-sector investment commitments to France’s AI sector announced in February. Europe is putting its money where its mouth is when it comes to AI sovereignty.
We believe this will mobilise demand for the underlying technologies, giving data centres the capital to invest heavily in new technologies like the ones our portfolio companies are producing. We’ve known there’s been demand pull; now there’s capitalised demand pull.
With almost US$7T expected to go into data centre build outs over the next 5 years, these announcements are likely just the tip of the iceberg.
It’s been a big month of investment for semiconductor manufacturing too with Texas Instruments making “the largest investment in foundational semiconductor manufacturing in US history” with a US$60B commitment to build fabrication facilities in the US, GlobalFoundries announcing a US$16B investment in US chip production, and Micron announcing a planned investment of US$200B in semiconductor manufacturing and R&D.
Despite these massive investments in both data centres and in foundational chip manufacturing, Epoch AI estimates that the world is underinvesting in AI today, and that even US$500B investment commitments like OpenAI’s Stargate project are a drop in the ocean relative to what could justifiably be invested in AI to unlock its full value.
“A big part of the story boils down to the huge amounts of value that could be generated from increased AI automation. In particular, current worldwide labor compensation is in the order of $50T, much of which would be captured by AI via full automation. For such an enormous amount of value, it could be economically justified to make huge upfront investments, speeding up the arrival of full automation” (Epoch AI, June 2025).
Welcoming Diraq to the Portfolio

In June we officially welcomed Diraq to the ICM HPQC Fund (ICM HPQC). Diraq’s technology was invented for the very purpose of scalability by founder Laureate Professor Andrew Dzurak, who realised in the early 2000s that quantum computers would need many millions of physical qubits to deliver on the promise of the technology, and that semiconductor manufacturing would be the only way to achieve the necessary scale. Professor Dzurak has dedicated his life’s work to this technology and has achieved many world-firsts, including the first demonstration of a qubit using a modified transistor back in 2014.

External validation of Diraq’s approach continues: it is one of only 18 companies in the world selected for the DARPA QBI programme, which targets companies that can deliver “computational value exceeding its build and operational costs by the year 2033”; it is working with NVIDIA; and it is developing its silicon chips with GlobalFoundries, one of the most important semiconductor manufacturers in the world.
ICM HPQC has conviction that Diraq’s “spins in silicon” technology may be one of the only ways to build a quantum computer that delivers commercially useful machines cost-effectively and in a form factor that is practical for end-users. With designs to eventually fit billions of qubits on a single chip (really), Diraq challenges the assumption that quantum computers will need to be networked together in a large system. By building a quantum computer as a single system, Diraq aims to deliver commercially relevant systems to the world by the end of the decade.
At Nvidia GTC in June, Diraq announced it had successfully integrated Nvidia GPUs and its silicon quantum processors in a world-first demonstration alongside Quantum Machines.
Learn more about the motivation behind Diraq’s approach in this short conversation between Dr. Bill Jeffrey and Professor Andrew Dzurak. https://www.youtube.com/watch?v=rVwO1adqu9o
This Month in Quantum
Quantum computing hype continues to build. One founder quipped to us that if you got hold of Jensen Huang’s speeches ahead of major tech events, you could make money trading quantum stocks, as they respond quickly (and drastically) to his sentiment.
We’d be remiss not to mention IonQ’s US$1B acquisition of Oxford Ionics. It’s a fascinating example of a company whose own technology and roadmap were widely viewed as unrealistic buying its way into a technology that does have a shot at success. IonQ capitalised on its sentiment-driven valuation peak (i.e. hype) to buy, mostly with stock (limited cash contribution), a company with a much more realistic pathway to fault-tolerant quantum computing. Post-acquisition, IonQ updated its roadmap to forecast a 2-million-physical-qubit system by 2030. The transaction also sets a precedent for exit opportunities for quantum computing startups.

McKinsey has released its latest technology monitor. McKinsey takes the position that “surging investment and faster-than-expected innovation could propel the quantum market to $100 billion in a decade”. The report features ICM HPQC portfolio company Q-CTRL’s recent breakthrough in GPS-denied navigation.
Themes from recent conferences
June has been a busy month for conferences across Europe. Some of these were compute-specific conferences like Imec’s ITF World, Leti Innovation Days and the Quantum Data Centre Alliance Forum, along with some general tech conferences including Viva Tech and London Tech Week.
It’s comforting to see the same themes in ICM HPQC’s thesis echoed across both the technical conferences and the general tech conferences. This includes increasing investment commitments at the data centre scale and in fundamental semiconductor manufacturing to deliver on the future of compute. Quantum computing was also frequently discussed, particularly how it might ease data availability bottlenecks and speed up some parts of AI models. A clear message across all conferences was that AI first and foremost relies on computing power provided through data centres.
Innovation is happening throughout the data centre stack, and big players will take whatever works. NVIDIA's Director of Networking Ashkan Syedi said there was demand for “infinite volume and it was needed yesterday”. However, new products need to be reliable, retrofittable into existing systems, and able to scale up to tens of thousands of units per month.
Finally, AI is facing an energy crisis. Building new compute capabilities is just as much about delivering it in a sustainable and secure way as it is about delivering compute power to drive AI. Countries with nuclear capabilities like France may have a natural advantage as the pressure builds on global energy supply.
Tech Highlight – In-Memory Compute
We’re seeing a lot of exciting technology in the pipeline, and this month we wanted to summarise one technology that is gaining traction: in-memory processing.
Standard computing systems have three components: computation, communication and memory/storage. Because computation and memory normally sit in different components, data must move back and forth between the two when processing information. This data movement dominates performance and is a major system energy bottleneck: in data centres, around 20% of total system energy is spent on data movement, and the share is even higher in AI-specific data centres.

In-memory processing is a computing architecture where computation is done next to, or in the same part of the chip as, where data is stored. It is one approach to removing the so-called “von Neumann bottleneck” by physically unifying computation and memory/storage. It is particularly relevant for applications with large data sets or where quickly accessible memory is important, such as reinforcement learning (RL) and AI inference.
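To make the idea concrete, here is a minimal toy model (our own illustration, not a description of any real chip) contrasting a conventional von Neumann matrix-vector multiply, where every weight is shipped to a central compute unit, with an in-memory-style layout where each memory tile computes its partial results locally and only the input vector and the scalar outputs move. The tile sizes and byte counts are illustrative assumptions.

```python
# Toy model of the von Neumann bottleneck vs. in-memory processing.
# Counts bytes moved between memory and compute for a matrix-vector multiply.

def von_neumann_mv(weights, x):
    """Conventional layout: the whole weight matrix travels to the compute
    unit, so bytes moved scale with the size of the matrix."""
    bytes_moved = sum(len(row) for row in weights) * 4  # 4 bytes per float32
    y = [sum(w * v for w, v in zip(row, x)) for row in weights]
    return y, bytes_moved

def in_memory_mv(weights, x, tile_rows=8):
    """In-memory-style layout: each tile already holds its rows of the matrix
    and computes locally. Only the input vector (broadcast once) and one
    scalar result per row cross the memory/compute boundary."""
    bytes_moved = len(x) * 4  # broadcast the input vector across the tiles
    y = []
    for start in range(0, len(weights), tile_rows):
        for row in weights[start:start + tile_rows]:
            y.append(sum(w * v for w, v in zip(row, x)))
            bytes_moved += 4  # ship one scalar result back per row
    return y, bytes_moved

if __name__ == "__main__":
    W = [[1.0] * 1024 for _ in range(64)]  # 64 x 1024 weight matrix
    x = [1.0] * 1024
    y1, moved_vn = von_neumann_mv(W, x)
    y2, moved_im = in_memory_mv(W, x)
    assert y1 == y2  # identical results; only the data movement differs
    print(f"von Neumann bytes moved: {moved_vn}")
    print(f"in-memory bytes moved:   {moved_im}")
```

In this sketch the conventional layout moves the entire 64 x 1024 matrix, while the in-memory layout moves only the 1024-element input and 64 scalar outputs, which is why keeping weights stationary in memory is so attractive for inference workloads.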

There will be an increasing focus on architectures like in-memory computing as AI needs shift from training to inference over the next decade. A number of startups are working on this, including UPMEM (France), Fractile (UK) and d-Matrix (US).
Flash Snap: Quick Link Roundup of the Latest Plays (In Case You Missed Them)
Epoch AI investigates how speed trades off against cost in language-model inference, and the importance of faster hardware in making inference more profitable. [Epoch AI]
SemiAnalysis explains how the latest hot topic in AI, reinforcement learning (RL), works and why it is the reason AI models are achieving higher benchmark scores at lower cost. It also explains that RL is inference-heavy, a task current chips are not well suited for, and anticipates that future hardware development will focus on chips that prioritise quickly accessible memory over pure compute power. [SemiAnalysis]
TSMC is crowned “King of the Data Centre” in an opinion piece by Semiconductor Engineering, showing that TSMC has essentially a 100% market share of the fundamental semiconductor technologies that make up AI data centres. [Semi Engineering]
Blackstone shared its investment strategy on “the long-term case for data centres”, arguing that long-term fundamentals, including the AI revolution, support strong long-term demand for data centres. [Blackstone]
Bond Capital released a good report on trends in Artificial Intelligence, explaining some of the reasons why demand for compute is exploding. [Bond Capital]
Temasek joins Microsoft, Blackrock and MGX in the AI Infrastructure Partnership, one of the world’s largest efforts to invest in data centres and energy facilities needed to power AI applications. [Reuters]
Mistral launches a new Nvidia-backed infrastructure project to build a 1.4GW data centre campus in Paris. [Data Center Dynamics]
The UK pledges £1B investment for UK tech and AI infrastructure. [Business Matters]
AMD has strengthened its AI position by partnering with startups, acquiring companies, and incorporating OpenAI's feedback into its MI450 chip design. [Yahoo Finance]
AWS has announced a Graviton4 chip upgrade featuring 600 gigabits per second network bandwidth, challenging Nvidia's dominance in AI infrastructure with Project Rainier and Trainium chips. [CNBC]
Amazon announced a US$20B investment in Pennsylvania data centres, including one controversially planned to connect directly to the Susquehanna nuclear power plant. [CNBC]
Amazon has announced a US$10B investment in North Carolina to expand data centres, creating 500 high-skilled jobs and supporting AI and cloud computing advancement. [Verdict]
MLCommons' latest benchmark test revealed that networking efficiency between chips has become increasingly crucial in AI system performance, as demonstrated by Nvidia's 8,192-GPU system and Grace-Blackwell machines. [ZDNet]
Nvidia and Dell have partnered to supply the US Department of Energy's 'Doudna' supercomputer, featuring liquid-cooled servers and Vera Rubin chips, for 2026 deployment. [Reuters]
Nvidia's Blackwell chips have demonstrated record-breaking performance in MLPerf Training benchmarks, achieving up to 2.5 times improvement over previous-generation architecture across AI workloads. [VentureBeat]
Nvidia has excluded China from its forecasts after US export restrictions prevented the chipmaker from selling US$2.5B worth of AI chips to Chinese buyers. [CNN]
TSMC has committed US$165B to establish CoWoS chip packaging facilities in Arizona, reducing US dependence on Taiwan for advanced semiconductor manufacturing. [Newsd]
Important Note:
The information in this article should not be considered an offer or solicitation to deal in ICM HPQC Fund (Registration number T22VC0112B-SF003) (the “Sub-fund”). The information is provided on a general basis for informational purposes only and is not to be relied upon as investment, legal, tax, or other advice. It does not take into account the investment objectives, financial situation, or particular needs of any specific investor. The information presented has been obtained from sources believed to be reliable, but no representation or warranty is given or may be implied that it is accurate or complete. The Investment Manager reserves the right to amend the information contained herein at any time, without notice. Investments in the Sub-fund are subject to investment risks, including the possible loss of the principal amount invested. The value of investments and the income derived therefrom may fall or rise. Past performance is not indicative of future performance. Investors should seek relevant professional advice before making any investment decision. This document is intended solely for institutional investors and accredited investors as defined under the Securities and Futures Act (Cap. 289) of Singapore. This document has not been reviewed by the Monetary Authority of Singapore.
ICM HPQC Fund is a registered Sub-fund of the ICMGF VCC (the VCC), a variable capital company incorporated in the Republic of Singapore. The assets and liabilities of ICM HPQC Fund are segregated from other Sub-funds of the VCC, in accordance with Section 29 of the VCC Act.