Managing The Future

What CEOs Are Saying - 6.15.25

Checking in on developments at NVIDIA, Oracle, and UiPath.

Joel Trammell
Jun 15, 2025
If you're running a business anywhere near the AI stack (i.e., the vast majority of us), it's time to think in systems.

Recent earnings calls and presentations from NVIDIA, Oracle, and UiPath reveal a shift away from isolated product wins and toward full-stack control, long-term infrastructure bets, and hard limits on scale. Capacity planning and integration across layers of technology are becoming about as critical as the algorithms themselves.

As you’ll see in the roundup below of key points from the calls and presentations, NVIDIA is reframing the data center as the new unit of computation, linking hundreds of GPUs into single operational clusters.

Meanwhile, Oracle is doubling down on vertical integration, arguing that owning the database, the cloud layer, and the security controls is the only way to keep up with AI-driven enterprise demand.

For its part, UiPath is pushing its customers toward a future where agents, not scripts, drive automation. And it’s dogfooding too: “I automated a forecast for ourselves using two reasoning models,” said Ashim Gupta, UiPath’s CFO and COO. “It’s hard to admit that it did it faster and more accurate than the hours that an analyst may put into it.”

Now on to the specifics:

NVIDIA (NVDA)

Santa Clara–based NVIDIA continues to be the juggernaut of the AI hardware market. It now connects tens of thousands of GPUs through proprietary networking and full-rack design. Last week, Rosenblatt Securities hosted a fireside chat with Gilad Shainer, NVIDIA’s SVP of networking, at its Age of AI conference. Rosenblatt Securities maintains a buy rating on NVIDIA and a 12-month price target of $200, citing its dominance in AI compute. All quotes below are Shainer’s.

Presentation Takeaways

  • “The data center is the unit of computing today.” Shainer says that AI computing is no longer about individual GPUs but about the entire data center acting as one unit. The company has expanded its NVLink scale-up networking from 8 GPUs to 72, with plans to reach 576 GPUs in a single system.

    “It’s not the GPU, it’s not the server. It’s the data center.”

  • Ethernet gets an AI makeover. While InfiniBand remains NVIDIA’s top-tier solution for training large AI models across thousands of GPUs, the company is expanding support for customers that prefer Ethernet-based systems. InfiniBand is still the benchmark for performance, with low latency and lossless data transfer critical to keeping large-scale AI systems in sync.

    “InfiniBand is still the gold standard for AI… Everyone that builds a network always compares its network to InfiniBand.”

    For enterprises already built around Ethernet, NVIDIA introduced Spectrum-X. It brings many of InfiniBand’s technical advantages to a more familiar setup.

    “If you're running Ethernet, you can keep running Ethernet. We brought the best Ethernet for AI on Spectrum-X.”

  • Making room for others in the NVIDIA stack. With its new NVLink Fusion initiative, NVIDIA is opening up its high-speed networking to third-party chips. That allows companies building their own AI accelerators (like Qualcomm or Fujitsu) to plug directly into NVIDIA’s rack infrastructure.

    “Why wouldn’t we let our customers leverage that huge amount of investment? . . . They can build their own system.”

  • The bottleneck is power. The biggest limit to future scale isn’t bandwidth but energy. Each 100,000-GPU data center needs 600,000 optical transceivers, eating up nearly 10% of available power. NVIDIA is tackling this with co-packaged optics that could triple GPU density per watt.

“The limiting element in building data centers is power. It's not really space, it's actually power.” —Gilad Shainer, NVIDIA SVP of networking

Oracle (ORCL)

Austin-based Oracle held its latest earnings call last week, closing out the fiscal year with a standout quarter and raising guidance across the board as demand surged for its cloud infrastructure and database offerings. Cloud infrastructure revenue grew 51% this year, and the company expects it to grow more than 70% in fiscal 2026. With $138 billion in remaining performance obligations (RPOs) and continued momentum from AI-driven workloads, Oracle is positioning itself to play a central role in how large enterprises run and scale their tech.

Earnings Call Takeaways

  • Demand is outpacing supply. Oracle’s backlog of cloud infrastructure orders is so large that it’s still turning customers away or pushing deployments into future quarters. The company is racing to expand capacity across its global data centers to meet demand for AI and database workloads.

    “We actually currently are still waving off customers . . . so that we have enough supply to meet demand.” —CEO Safra Catz
