As data centres address more expansive and unique challenges, so too must their power distribution equipment meet those performance needs. Server cabinets, racks and individual server units need to be designed for maximum adaptability to the ever-changing power consumption requirements of their unique and demanding environments.

Whether dedicated to supercomputing or Artificial Intelligence (AI), data centres are by their very nature unique in form factor and physical architecture. Sometimes they’ll fit into an existing building on campus, with a retrofit of new infrastructure to support the additional demands placed on the facility’s power and cooling systems. Other times they’re installed in an entirely new facility designed expressly to house the machinery. In both instances, administrators must find custom solutions for delivering power, cooling and networking.

Edge computing, on the other hand, is designed to put applications and data closer to devices and their users, and it brings a different set of challenges from those of the massive data centres used in supercomputing and AI applications. Space is a significant issue in many cases: smaller enclosures mean even less room for power distribution equipment. And because edge computing takes place remotely, you need to validate remote connectivity and possibly remediate any issues remotely.

Data centres require power, and lots of it

It’s as simple as that.

The design of data centres has always required solving two problems: how to feed their power needs, and how to distribute that electrical power once it’s inside the facility.

Some of the world’s largest data centres can each contain many tens of thousands of IT devices and require more than 100 MW of power capacity.
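To put that capacity in perspective, a quick back-of-envelope calculation helps. The device count below is an assumption chosen for illustration, not a figure from any particular facility:

```python
# A rough scale check using assumed figures, not data from any one site.
SITE_CAPACITY_MW = 100       # "more than 100 MW of power capacity"
DEVICE_COUNT = 50_000        # assumed stand-in for "many tens of thousands"

avg_w_per_device = SITE_CAPACITY_MW * 1_000_000 / DEVICE_COUNT
print(f"Average draw per device: {avg_w_per_device:.0f} W")   # -> 2000 W
```

Even averaged across tens of thousands of devices, the per-device draw sits in the kilowatt range, which is why distribution has to be planned well below the facility level.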

With this immense power consumption demand comes the challenge of managing power distribution on a more granular level. Off-the-shelf and semi-custom solutions for remote access, power and white space infrastructure satisfy the needs of most enterprise and SMB data centre applications. More expansive and complex data centres often use similar solutions.

However, the need for ongoing improvements in efficiency and sustainability leads many HPC installations, AI applications, hyperscale data centres and telecom operators to seek novel custom solutions to layout, power density, cooling and connectivity.

It’s a safe assumption that each software workload has its own unique power consumption requirements. If form follows function, then the application drives architectural choices for hardware and its environment. Hyperscalers provide a roadmap for adding more space and more racks for more servers when we think we’ve reached, or are about to hit, our power consumption caps. But supercomputing wants everything physically close together to maximise throughput, AI wants to run on specialised processors, and edge computing is inherently distributed.

In many installations, the space dedicated to processing and computer room air conditioners leaves little room for the equipment that distributes power to the units. A situation like this poses challenges for the deployment of power distribution units (PDUs), necessitating a customised solution.

  • There may be little or no room at the back of the rack for a zero-U PDU, meaning it may have to mount on the side of the rack.
  • With little or no airflow available to cool the PDU, it may have to rely on convection cooling outside the rack.
  • Taller racks with more servers create high outlet-density requirements.
  • The need for high power density in the racks may necessitate PDUs with monitoring capabilities, as in the sketch after this list.
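Where monitored PDUs are deployed, their readings can feed simple capacity checks. Below is a minimal sketch, assuming a hypothetical PDU that publishes per-outlet wattage over a REST endpoint; the address, endpoint path and rack budget are all placeholders, and real intelligent PDUs typically expose SNMP or vendor-specific APIs instead:

```python
# A minimal sketch of rack-level PDU monitoring, assuming a hypothetical
# PDU that exposes per-outlet readings over a JSON REST endpoint. Adapt
# the transport (SNMP, vendor API, etc.) to the actual hardware.
import requests

PDU_HOST = "10.0.0.50"       # hypothetical management address
RACK_BUDGET_W = 17_000       # assumed rack power budget in watts

def poll_outlets(host: str) -> list:
    """Fetch per-outlet readings; the endpoint path is illustrative only."""
    resp = requests.get(f"https://{host}/api/outlets", timeout=5)
    resp.raise_for_status()
    return resp.json()       # e.g. [{"outlet": 1, "watts": 420.5}, ...]

def check_rack_load(outlets: list) -> None:
    """Sum outlet draw and warn when the rack nears its power budget."""
    total_w = sum(o["watts"] for o in outlets)
    headroom_w = RACK_BUDGET_W - total_w
    print(f"Rack load: {total_w:.0f} W, headroom: {headroom_w:.0f} W")
    if headroom_w < 0.1 * RACK_BUDGET_W:
        print("Warning: rack is within 10% of its power budget")

if __name__ == "__main__":
    check_rack_load(poll_outlets(PDU_HOST))
```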

AI poses possible predicaments for PDUs

AI regularly produces incredible accomplishments, but devising and training the underlying algorithms requires enormous amounts of computing power and electricity. A unique aspect of AI applications is the high internal bandwidth between boxes/nodes and the optical connections that carry it, which can be power intensive.

When designing a power distribution plan for an AI facility, you often face similar challenges as you would with a supercomputer facility.

  • You may need a PDU that can help with capacity planning and maximising electrical power utilisation (see the sketch after this list).
  • AI facilities often require custom racks, which demand ingenuity in the placement of PDUs.
  • High-density, higher-power installations test the limitations of standard PDUs.
  • Your power density may go beyond what a C19 or other standard outlet can deliver.
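For illustration, here is a hedged back-of-envelope sketch of the kind of capacity-planning arithmetic involved. The node wattage, feed rating and outlet limit below are assumptions, and a real plan would also account for power factor, redundancy and derating:

```python
# A back-of-envelope capacity-planning sketch for an AI rack. All
# figures are assumptions for illustration, not vendor specifications.
NODE_W = 6_500          # assumed draw of one accelerator node, in watts
RACK_FEED_VA = 34_600   # e.g. a 3-phase 400 V / 50 A feed ~ 34.6 kVA
C19_LIMIT_W = 3_680     # a C19 outlet at 230 V is commonly rated 16 A

# How many nodes fit within the rack feed (treating VA as W for simplicity)?
nodes_per_rack = RACK_FEED_VA // NODE_W
print(f"Nodes per rack within the feed budget: {nodes_per_rack}")  # -> 5

# A 6.5 kW node exceeds a single C19 outlet, so each node needs
# multiple feeds or higher-rated connectors.
feeds_per_node = -(-NODE_W // C19_LIMIT_W)   # ceiling division
print(f"C19 feeds needed per node: {feeds_per_node}")              # -> 2
```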

Gaining an edge with PDUs

Edge computing occurs at or near the user’s physical location or the source of the data. By placing computing services closer to these locations, users benefit from faster, more reliable services. The explosive growth of IoT devices and new applications that require real-time computing power continues to drive edge-computing systems.

Edge computing can occur in harsh environments such as manufacturing facilities, warehouses or outdoor locations, for example, oil rigs and mobile phone towers. These demanding environments may require the edge data centre to operate across wide temperature ranges, which creates a need for environmental sensors. And placement at the data source may demand remote management capabilities and strict remote access control.

Therefore, edge computing offers some distinctive challenges:

  • The need for environmental monitoring as a safeguard against temperature and power extremes outside the operating capabilities of the equipment.
  • A strong case for remotely monitoring power consumption.
  • PDUs with onboard communications capable of scheduling outlet power on and off.
  • PDUs capable of shedding the power load to maximise battery uptime when thresholds are exceeded, as in the sketch after this list.
  • Operating environments that require the PDU to go beyond the usual 0-60 degrees Celsius range.
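Below is a minimal sketch of what that load-shedding logic might look like, assuming a switched PDU whose outlets can be controlled programmatically. The outlet labels, priorities, threshold and switch_outlet() call are invented for illustration, standing in for whatever control interface (SNMP set, REST, serial) the actual unit offers:

```python
# A minimal sketch of threshold-based load shedding for an edge PDU.
# Outlet labels and switch_outlet() are hypothetical placeholders.
OUTLET_LABELS = {1: "core router", 2: "compute node", 3: "signage"}
SHED_ORDER = [3, 2]      # shed lowest-priority outlets first; keep 1 up

def switch_outlet(outlet: int, on: bool) -> None:
    """Placeholder for the PDU's real outlet-switching command."""
    state = "ON" if on else "OFF"
    print(f"Outlet {outlet} ({OUTLET_LABELS[outlet]}) -> {state}")

def shed_load(battery_pct: float, threshold_pct: float = 40.0) -> None:
    """Drop non-critical outlets once battery falls below the threshold."""
    if battery_pct >= threshold_pct:
        return
    for outlet in SHED_ORDER:
        switch_outlet(outlet, on=False)

shed_load(battery_pct=35.0)   # sheds signage, then the compute node
```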

When custom power components are the only real solution

As stated, off-the-shelf and semi-custom solutions for remote access, power and white space infrastructure satisfy the needs of most enterprise and SMB data centre applications. However, the self-imposed drive for ongoing improvements in efficiency and sustainability worldwide has led HPC installations, AI applications, hyperscale data centres, and telecom operators to seek novel custom solutions to layout, power density, cooling, and connectivity. The push for renewable energy sources also influences the use of DC power versus conventional AC power.

Work with a professional partner

Legrand data centre specialists work closely with customers, offering technical pre-sales support at the project design stage through to supervision of installation, testing, commissioning and site acceptance tests. The team also offers operator training, as well as extended after-sales support, including annual maintenance contracts and fast, efficient intervention for emergency calls.

In addition to data centres, Legrand also offers carefully designed solutions for commercial and industrial buildings, hospitals, airports, hotels and various other applications.
