
Finserve Chelverton Global Technology Fund – November 2025

In November, the third-quarter earnings season continued with positive fundamentals: among our largest holdings, AMD and Nvidia both reported strong results, beating market expectations. Both benefit significantly from AI demand – AMD from the fact that AI computing requires substantially more CPU compute alongside GPUs, and Nvidia as the clear de facto standard for AI computing with its GPUs, with barriers to entry stemming from its chip-level advantages, networking advantages, and CUDA software ecosystem.

In November, Google launched its Gemini 3 model, which performed well in standard AI benchmarks. This sparked a market debate about Google's TPU as a credible competitor to Nvidia, and Nvidia shares fell during the month – the biggest negative factor for the fund's performance. We own Alphabet/Google in the portfolio, as well as Broadcom (which designs Google's TPU chip) and TSMC (which manufactures both Nvidia's and Broadcom's chips).

While we believe Google's TPU could be sold and leased externally – in which case we would benefit through both Broadcom (our strongest contributor to performance during the month) and Alphabet (the second strongest) – we would first remind investors that supply remains severely constrained across the industry, in terms of both the logic and memory that these chips require. This is one of the reasons we believe Nvidia's pricing power remains intact.

We also remain convinced that the advantage of GPUs is their flexibility: they can be programmed for many different use cases, which matters particularly because we are still early enough in this technology that new use cases are being invented. The general-purpose nature of Nvidia GPUs also means they can run and distribute workloads more efficiently. Nvidia systems can switch between power profiles based on the type of workload being run – AI training has very different power requirements from AI inference. In a power-constrained data center, Nvidia systems would therefore likely deliver much more throughput than an ASIC/TPU-only system.

The second point we would add is that while it may make sense for a company with Meta's scale to run TPUs, for the vast majority of enterprises it is important not to be tied to a specific cloud provider and a specific compute architecture. Comments from across the supply chain continue to point to supply shortages – Nvidia CEO Jensen Huang was reportedly in Taiwan asking TSMC for more capacity, and Elon Musk also noted that chip production is insufficient to meet demand.

Elsewhere, Taiwanese supply chain figures for October still showed strong momentum in demand for AI infrastructure.
