A: Yes. Computer modeling has demonstrated that the data center will function in a wide range of latitudes and climates. While there are limits, such as the Sahara, it will work in the majority of global markets.
A: We are always looking for ways to improve our design; we believe strongly in challenging our own assumptions regularly and in remaining open to input. Keeping the generator's diesel fuel heated to maintain proper viscosity is one energy sink we would like to improve. The most exciting option would be to incorporate methane fuel cells, which could easily replace the generator and significantly reduce our annualized PUE.
A: The following are the specifications of our solutions:
A: Agile principles were, and are, a primary driver behind the design of the Server Dome. The prototype was designed for maximum efficiency, rapid deployment, a small footprint, scalability, extensibility, flexibility, concurrent maintainability, and the ability to respond rapidly to evolving technologies. The modular design accommodates edge, cloud, HPC, Enterprise, and other evolving compute, storage, and network requirements.
A: Designed for concurrent maintainability, the structure can easily be scaled with additional power, airflow, or rack-space modules without disrupting ongoing data center operations. The design also accommodates a heterogeneous equipment load ranging from legacy to state-of-the-art without the need to segregate equipment for power or placement.
A: The solution lifespan includes a capacity plan that promotes aggressive virtualization of network, server, and storage solutions. Depending on programmatic needs, we project a growing capacity plan for 15 years and sustained capacity at that level into the future. The physical structure should have a lifespan of 30+ years. Power and cooling can be upgraded/replaced at any time without impacting the design or operations.
A: The current facility houses high performance computing (HPC) in the same area as the Enterprise computing operations. This reflects the flexibility of a design that can accommodate a highly heterogeneous equipment load without designating special areas for different needs.
A: The only technical barriers encountered with systems design, architecture, and deployment involved the owner/user “fear” associated with a radically new concept for data center design. The strength of the design rests on extensive computational fluid dynamics (CFD) modeling and the supporting data it generated, which demonstrate that the operational outcomes are valid. We also now have nearly four years of operational data that validates our claims about the design.
A: Current annualized PUE metrics are in the 1.13–1.17 range. This metric was achieved in early phases of deployment and has remained consistent. Average annual water utilization for augmentation (evaporative) cooling is 1,500 gallons per 500 kW of IT load.
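PUE is the standard ratio of total facility energy to IT equipment energy, and the reported water figure scales with IT load. A minimal sketch of both calculations; the function names and the sample energy numbers are illustrative, not measured facility data:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def annual_water_gallons(it_load_kw: float, gallons_per_500kw: float = 1500.0) -> float:
    """Scale the reported ~1,500 gal per 500 kW of IT load linearly."""
    return gallons_per_500kw * (it_load_kw / 500.0)

# Illustrative annual energy totals (not facility measurements):
print(round(pue(1_130_000, 1_000_000), 2))  # → 1.13
print(annual_water_gallons(1000))           # → 3000.0
```

At a PUE near 1.13, only about 13% of the facility's energy goes to anything other than the IT load itself, which is what makes the water and overhead figures above so small.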
A: The Server Dome is vendor neutral. There is nothing in the design or operation of the data center that requires specific hardware, network, storage, etc. The configuration of the prototype represents an integrated combination of best-of-breed components to achieve the performance reported. Customers may choose any components that best suit their needs without altering the basic functionality of the design.
A: The Server Dome prototype was designed to support a multi-mission, mission critical, academic healthcare institution with a major research function. As such, most regulatory and compliance challenges have been addressed to produce the final product. We have yet to encounter any issues that are not met by the current design.
A: The design addresses automation, self-service, and customer provisioning. The facility is a remotely operated, lights-out design with few, if any, on-site staff required for normal operations. Other features include the ability for customers to provision electrical drops from a twist-lock busway without the need for an electrician. Rack, network, and compute equipment are completely available for customer configuration, and any data center management system can be deployed if desired. The modular pod design of the facility is an asset for user self-configuration and custom provisioning.
A: Extensive CFD modeling was performed to validate the design.
A: The Server Dome has an annualized PUE of 1.13, uses less than 1,533 gallons of water a year, has minimal maintenance costs, and has had zero downtime in nearly four years of operation.
A: Data center emissions include exhausted heat when it is not re-circulated for heating during cold weather. In addition, diesel generator exhaust is present during prescribed testing cycles or during a loss of utility power.
A: Aggressive timeframes and milestones were co-managed by the core team with incentives for meeting deadlines. The prototype was delivered on time and on budget.
A: Direct current was not used in this design (although nothing precludes this power model). The power distribution described above was chosen because it supports the widest range of current equipment without custom power distribution.
A: Liquid cooling was not used, although the design will accommodate it. 25–40 kW per rack is easily accommodated in the Server Dome without liquid cooling.
A: Heat harvesting is not currently used other than to re-circulate the air during cold weather to heat the facility. However, the heat plume in the dome was designed to be harvested using a heat exchanger when useful or required.
A: A robust sensor array for both air pressure and cold aisle/hot aisle heat is deployed. In addition, all internal and external environmental weather conditions are monitored. Water consumption is also metered.
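A minimal sketch of the kind of threshold check such a sensor array might feed. The sensor names and limits here are hypothetical illustrations, not the facility's actual setpoints:

```python
# Hypothetical alarm limits (assumed for illustration, not facility setpoints):
COLD_AISLE_MAX_F = 80.0       # cold-aisle upper temperature bound
MIN_PRESSURE_DELTA_PA = 5.0   # minimum cold-to-hot aisle pressure differential

def out_of_range(readings: dict) -> list:
    """Return alarm strings for any reading outside its limit."""
    alarms = []
    if readings.get("cold_aisle_temp_f", 0.0) > COLD_AISLE_MAX_F:
        alarms.append("cold aisle over temperature")
    if readings.get("aisle_pressure_delta_pa", MIN_PRESSURE_DELTA_PA) < MIN_PRESSURE_DELTA_PA:
        alarms.append("aisle pressure differential low")
    return alarms

print(out_of_range({"cold_aisle_temp_f": 84.2, "aisle_pressure_delta_pa": 3.1}))
```

Monitoring both temperature and aisle pressure differential is what lets a pressure-driven design like this confirm that cold air is actually reaching the equipment intakes rather than bypassing them.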
A: The Server Dome is a highly modular design, and most of the parts are preconstructed and shipped for assembly on site.
A: The current design uses 52U racks, but the design specifications can accommodate 3,000-pound racks and up to 40 kW per rack.
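Taken together with the reported annualized PUE of 1.13, the 40 kW-per-rack ceiling gives a quick way to estimate total facility draw for a given rack count. A hedged sketch; the rack count below is an example, not a facility figure:

```python
def total_it_load_kw(racks: int, kw_per_rack: float = 40.0) -> float:
    """IT load for a group of racks at the design's 40 kW/rack ceiling."""
    return racks * kw_per_rack

def facility_power_kw(it_load_kw: float, pue: float = 1.13) -> float:
    """Total facility draw implied by the reported annualized PUE."""
    return it_load_kw * pue

# e.g. 10 fully loaded racks (illustrative count):
it_kw = total_it_load_kw(10)  # 400.0 kW of IT load
print(round(facility_power_kw(it_kw), 1))
```

The same two functions also make the efficiency argument concrete: at a PUE of 1.13, each additional 40 kW rack adds only about 5.2 kW of non-IT overhead.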
A: The CIP approach was to factor in the primary location liabilities including seismic exposure, wind exposure, snow/volcanic ash loading, dust/fire exposure, etc. Loss of local or regional infrastructure included providing long-term diesel storage for emergency generators and long-term water storage for evaporative supplemental cooling.
A: To provide adequate data and expertise to support the “heretical” design departure from the industry norm.
Visit us and see for yourself the future of data centers.