Bill Mazzetti: An Expert in Data Center Development

Lucy Pilgrim, Deputy Head of Editorial
Highlights
  • “Power availability is both the greatest enabler and disabler in data center development today and for the foreseeable,” says Bill Mazzetti, Senior Vice President, Rosendin Electric.
  • “Time will tell whether we’ve overbuilt in the short term, hit the sweet spot between data center megawatt (MW) needs and AI demand, or if we’re still short,” Bill Mazzetti explains.

With over 40 years in the data center industry, Bill Mazzetti, Senior Vice President of US electrical contractor Rosendin Electric, offers unique insight into the sector’s current and future developments and its confluence with artificial intelligence.

Q&A WITH BILL MAZZETTI, SENIOR VICE PRESIDENT, ROSENDIN ELECTRIC

Firstly, what is your take on the current uptick in data center development and how long do you think this will continue?

Bill Mazzetti, Senior Vice President (BM): That’s what everyone is asking us these days! When you look at both the backlog and recent quarterly calls from the hyperscale data center operators, everyone is showing robust expenditure for the next three years. All reports indicate that there will be no break in capital expenditure (CapEx) for the foreseeable future.

The recent DeepSeek announcement, for instance, really set everything off, as many industry players see the company as an artificial intelligence (AI) platform that is not only useful but also represents an improvement to Rev.ai, a speech recognition AI platform, and a new look for the large language model (LLM) stack.

However, it doesn’t break the dynamic shift that the computing industry is experiencing from central processing units (CPUs) to graphics processing units (GPUs).

The announcement also caused a lot of turmoil, but we believe it enables a greater commoditization of AI services. Historically, when commoditization arrives in the technology sector, it actually drives greater demand and further growth, contrary to most industries.

Our clients are certainly voting with their feet and checkbooks right now. We anticipate them doing so for the next several quarters, so we’re optimistic through to 2029.  

We don’t see the “crazy busy forever” trend as indefinite, but we feel that this robust build cycle will continue for the next six years, or two complete leasing, procurement, and development cycles. This will allow hardware and facility deployments to catch up with application usage and the resulting revenue.   

Time will tell whether we’ve overbuilt in the short term, hit the sweet spot between data center megawatt (MW) needs and AI demand, or if we’re still short. Recent interviews with leadership at social media conglomerate Meta and US AI organization OpenAI reinforce this opinion.

Some of us at Rosendin Electric are contrarians as we feel there’s a confluence of several factors that could affect facility deployments.  

First, the industry can’t keep going at this pace indefinitely; we have all been caught up in the “get it at all costs” AI arms race for the past two years.   

Eventually, tech does what tech firms do: it makes things more efficient, drives customization to a commoditized level, sets reference designs and procurement pipelines for a particular business, and drives both adoption and lower costs.

Evidently, none of our clients’ medium- to long-term development forecasts have changed since the announcement.

When you look back at the history of the cloud business, there is a precedent. We’re seeing how the next generation of technology is evolving and advancing organically, as evidenced by the fact that the growth of cloud environments over the past decade has not been linear.

AI is a hardware-centric world in which the hardware, operating systems, network, and applications are more homogeneous, similar to the mainframe technology of the 1990s.

This is contrary to the heterogeneous cloud reality of blended best-in-class and vertically integrated hardware, software, applications, and networks defined by the end user. At best, it’s a major change to known design form factors; at worst, it means massive revisions to existing facilities.

DeepSeek might lower MW and time requirements, but it does not decrease the overall need for AI computing in the short, medium, or long term. It may prove to be a good platform for certain AI workloads, but we’ll see what happens with DeepSeek adoption by large organizations and governments given security and data custody concerns.

There is no doubt that the software is thought-provoking, but we have some concerns about its assertions and whether it will displace other LLMs and existing AI systems.

We also feel that Bloomberg’s coverage of DeepSeek was not entirely forthcoming about the company’s total costs.

Time will tell how accurate that reporting was; to be honest, we doubt that all the costs were stated on an apples-to-apples basis compared with what we’ve seen in North America, the European Union, and the UK.

Several industry thought leaders are skeptical of the stated cost to complete the DeepSeek model, but not of the code and application itself.

The software is head-turning, open-source work, but it won’t be the only AI system employed by our clients. We also don’t believe that it will slow data center development in the next several years.

Will end users or developers be the driving force behind data center facility development, and what will their roles be?

BM: The short answer is that everyone’s busy, and we feel this pace will continue for the next three to five years.  

The building of facilities, and the demand for them, starts with the end user. We’ve seen some swings in the split between end user and developer-delivered facilities; this has been the reality of data center development for the past two decades.

We take a more macro view of the market. For us, it’s about the MWs delivered per year in a given market or region.  

The capital plans of our end users are arranged years in advance, so all of us who work in the space – designers, builders, manufacturers – have pretty good visibility.  

That said, it’s been the busiest time for facility delivery that we’ve seen in the past 15 years, which is driving an all-hands-on-deck approach where both end users and developers are working through the former’s facility needs.    

It also drives a matrixed and nuanced approach to siting facility developments, based on where power is available, where permits and entitlements are not particularly onerous, whether the end user or developer has a footprint to execute, and how relevant historical markets are to the needs of the facility in question. These conditions determine whether developers or end users deliver facilities in a given market.

Today’s reality is that all the usual suspects are aligned with a stable of developers and end users, which tends to focus firms on specific regions or clients.  

We have seen a strong trend over the past two years for many newer, larger developments in the 1+ gigawatt (GW) range to become developer-delivered, which we expect to continue for a number of reasons.  

First, end users can’t scale their staff to face builds of this magnitude as quickly as developers can.

Second, the campuses involved, and likely the power plants that may accompany them, are very capital-intensive. While our end users are successful, we feel they are allocating their capital to technology, building up their GPU base and AI-specific applications.

“Power availability is both the greatest enabler and disabler in data center development today and for the foreseeable”

Bill Mazzetti, Senior Vice President, Rosendin Electric

What is the greatest enabler and biggest concern in data center development?

BM: Power availability is both the greatest enabler and disabler in data center development today and for the foreseeable.   

Sites and areas that have power will attract the lion’s share of these major developments. We’re also seeing power developers entering and partnering in key data center markets, and energy development taking place in the service areas of the utilities that support these large data center advancements.

We’ve also seen alternative energy being embraced by end users in developed areas that are short of power. This is something we’ve been promoting for a while; I guess it just took time for folks to catch on. When you’re starving, your palate is forced to expand!

Our other biggest concerns include project enablers or killers such as regulation, entitlement difficulties, NIMBYism, and a rising tide of negative outlooks toward data center development in some markets.

Indeed, AI data centers aren’t truly different from their generational predecessors; they’re just bigger projects.

Without getting into specifics, overcoming this starts with ground-up education across all project constituents, including state, county, city, and town officials, regulatory agencies, and, most importantly, the people who live in and around the job site.

We’ve seen several instances where developers have tried to ride roughshod over a town to push a development through without diplomacy and sensitivity to any of these parties.  

On the other side of the coin, some areas have become negative and obstructive toward data center development. In that situation, and over the next few years, those areas will need a time-out. 

What are the hottest markets for data center development?

BM: AI markets have been the hottest in the US, partly due to export restrictions on Nvidia’s top-of-the-line systems, but we’re not seeing the same demand outside the US.

Development follows power availability these days, and that availability falls into three categories: the interconnection queue at regional utilities, available power within a utility’s service area, and the ability to develop co-located or near-located power plants alongside a data center campus.

Right now, the hottest markets are a mix of some historic places in Western and Central Texas, including Houston, Amarillo, Midland, and Odessa, as well as Oklahoma and the Phoenix-Tucson corridor. In the Southeast US, other markets include Memphis, Nashville, Montgomery, and Atlanta. There’s also a bunch of legitimate activity in Northern Ohio, Illinois, and Western Pennsylvania.

“There is a strong trend toward retrofitting, as there is simply too much fiber and investment in these older facilities to ignore them, but they require massive cooling system upgrades in the migration from air-based to water-based cooling”

Bill Mazzetti, Senior Vice President, Rosendin Electric

Stakeholders are weighing up the benefits of the increased speed of upgraded facilities and cooling systems versus the longer lead times for greenfield developments using the latest technology. What is your view on this?

BM: What an insightful question. Firstly, cloud tech is not going away, nor is storage. All that data has to virtually live somewhere!

There is a strong trend toward retrofitting, as there is simply too much fiber and investment in these older facilities to ignore them, but they require massive cooling system upgrades in the migration from air-based to water-based cooling.

The rest of the improvements are system churns that we typically see with any technology update, with everything still working on a three megavolt-ampere (MVA) main switchboard (MSB) basis.  

Greenfield developments are the core driver in today’s markets, and we are seeing them get a lot bigger.

Ten years ago, a big data center was 30MW; today, facilities have grown to 100MW, with multiple buildings being delivered in 300MW tranches.

This not only speaks to our point about deal and project flow but also to the deep backlogs on the fulfillment side of the industry. It also means that there aren’t enough MWs on a mature campus to accommodate these larger AI facilities.

Finally, do you believe that data centers can be future-proofed, and if so, how?

BM: Great question. I’m not sure there’s a way to future-proof facilities between a CPU and a GPU environment.

It’s an easier question to answer electrically than mechanically. As mentioned, AI requires water for direct cooling to the cabinet, a technique that harks back to the 1990s, when continuous cooling and thermal storage were required, and a hard pivot away from the air-based cooling of cloud facilities.

Electrically, this is a tactical change: an uninterruptible power supply for the cooling systems is added to the electrical MSB line-up. Mechanically, AI requires a pivot from air-based to water-based cooling.

Due to the amount of evaporative cooling required, this pretty much kills off any open-circuit cooling system.   

Likewise, cooling system tonnage serves the whole data center load and can be delivered via air-side coils, direct connection to an AI system, or a mixture of both.  

This forces a tough balance if one has a strong mix of CPUs and GPUs in the same facility, as the last-mile heating, ventilation, and air conditioning (HVAC) or cooling solutions are diametrically opposed.  

Thankfully, there are some recently released HVAC systems that provide both air- and water-based cooling, and that’s a brilliant start.

Lucy Pilgrim is an in-house writer for North America Outlook Magazine, where she is responsible for interviewing corporate executives and crafting original features for the magazine, corporate brochures, and the digital platform.