FAQ

Q: Will a ServerDome work outside Portland, Oregon?

A: Yes. Computer modeling has demonstrated that the data center will function across a wide range of latitudes and climates. While there are limits (extreme environments such as the Sahara, for example), the design will work in the majority of global markets.

Q: What can be done to make the design more efficient?

A: We are always looking for ways to improve our design; we believe strongly in challenging our own assumptions regularly and remaining open to input. One energy sink we would like to eliminate is heating the generator's diesel fuel to maintain proper viscosity. The most exciting option would be to incorporate methane fuel cells, which could readily replace the generator and reduce our annualized PUE by a significant amount.
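To illustrate why removing a parasitic load matters, the sketch below computes PUE (total facility power divided by IT power) with and without a fuel-heating draw. All load figures are illustrative assumptions, not measured ServerDome values.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative assumptions only -- not measured ServerDome values.
it_load_kw = 1000.0      # assumed IT load
overhead_kw = 150.0      # assumed non-IT overhead (fans, UPS losses, etc.)
fuel_heating_kw = 10.0   # hypothetical diesel fuel-heating draw

before = pue(it_load_kw + overhead_kw + fuel_heating_kw, it_load_kw)
after = pue(it_load_kw + overhead_kw, it_load_kw)
print(f"PUE before: {before:.3f}, after: {after:.3f}")  # 1.160 -> 1.150
```

Even a modest fixed draw shows up directly in the annualized PUE, which is why eliminating it is attractive.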

Q: What would be an example of a solution that you have delivered?

A: The following are the specifications of our delivered solution:

• Mechanical: supply air fan walls; supplemental evaporative cooling; vegetative bioswale for additional passive cooling (no chillers, CRAC units, heaters, or exhaust fans)
• Appearance: monolithic aluminum dome; diameter: 178 ft; height: 48.5 ft
• Internal geometry: modular spoke-and-wheel configuration (10 IT pods and 1 central distribution pod)
• Climatic range: 14 – 105.5 °F; humidity: full ambient range
• Internal operating temperature range: 64.4 – 80.6 °F (18 – 27 °C)
• Airflow: 42,000 – 726,000 CFM; air velocity: 150 – 400 FPM
• Seismic factor: 1.25
• Wind load: 112 mph
• Snow load: 80 psf
• Available power: 4 MW (full build-out); 25 kW/rack average; 40 kW/rack peak; 350 kW/pod
• Capacity: 168 52U racks (8,736 rack units), equivalent to 200 42U racks
• Electrical distribution: 415/240 V; 225 A busways; dual cord; N+1; flywheel UPS
• Tier: 3+ (no single points of failure)
• PUE: annualized 1.13 – 1.17; 1.06 best case
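As a quick sanity check on these figures, the sketch below recomputes the rack-unit capacity and aggregate pod power from the listed numbers. The headroom calculation is illustrative arithmetic; how the 4 MW build-out relates to the per-pod allocation is our assumption.

```python
# Sanity-check arithmetic over the published specifications.
racks, units_per_rack = 168, 52
rack_units = racks * units_per_rack
assert rack_units == 8_736  # matches the stated capacity

pods, kw_per_pod = 10, 350
it_pod_power_kw = pods * kw_per_pod   # 3,500 kW across the 10 IT pods
full_buildout_kw = 4_000              # 4 MW at full build-out
headroom_kw = full_buildout_kw - it_pod_power_kw  # assumed margin
print(f"{rack_units} rack units; {headroom_kw} kW beyond pod allocations")
```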

Q: How have you implemented (or how would you implement) agile data center principles and strategies?

A: Agile principles were, and are, a primary driver behind the design of ServerDome. The prototype was designed for maximum efficiency, rapid deployment, small footprint, scalability, extensibility, flexibility, concurrent maintainability, and the ability to respond rapidly to evolving technologies. The modular design accommodates edge, cloud, HPC, enterprise, and other evolving compute, storage, and network requirements.

Q: What solution do you deliver for ensuring scalability?

A: Designed for concurrent maintainability, the structure can be easily scaled with additional power, airflow, or rack space modules without disrupting ongoing data center operations. The design also accommodates a heterogeneous equipment load, ranging from legacy to state-of-the-art, without the need for special equipment segregation for power or placement.

Q: What is the estimated lifespan of your data center?

A: The solution lifespan includes a capacity plan that promotes aggressive virtualization of network, server, and storage solutions. Depending on programmatic needs, we project capacity growth for 15 years, with capacity sustained at that level thereafter. The physical structure should have a lifespan of 30+ years. Power and cooling can be upgraded or replaced at any time without impacting the design or operations.

Q: Can this HPC data center co-exist or co-locate with IT/ICT infrastructure?

A: The current facility houses high performance computing (HPC) in the same area as the enterprise computing operations. This reflects the flexibility of a design that can accommodate a highly heterogeneous equipment load without specially designated areas for different needs.

Q: What are some of the potential technical barriers that were encountered with systems architecture, design, and deployment?

A: The only technical barriers encountered with systems design, architecture, and deployment concerned the owner/user “fear” associated with a radically new concept for data center design. The strength of the design rests on significant computational fluid dynamics (CFD) modeling and the generation of supporting data to prove that the operational outcomes are valid. We also now have five years of operational data that validates the design's claims.

Q: What are some of the metrics, benchmarks, KPIs, or reporting scorecards of the solution that was delivered?

A: Current annualized PUE metrics are in the 1.13 – 1.17 range. This metric was achieved in the early phases of deployment and has remained consistent. Average annual water usage effectiveness (WUE) for evaporative augmentation is between 0.1 and 0.2 L/kWh.

Performance data during a one-year sample period:
• Outside temperature range: 19 – 103 °F
• Outside humidity range: 13% – 100%
• Wind range: 0 – 54 mph
• Uptime: 100%
• Average cold aisle temperature: 72.5 °F
• Average hot aisle temperature: 83.0 °F
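For reference, both metrics are simple ratios over annual meter totals. The sketch below shows the calculation; the input values are illustrative assumptions chosen to fall within the reported ranges, not actual ServerDome meter data.

```python
# PUE = annual facility energy / annual IT energy
# WUE = annual cooling water (liters) / annual IT energy (kWh)

# Illustrative annual totals -- assumptions, not measured values.
it_energy_kwh = 8_760_000        # e.g., a steady 1 MW IT load for a year
facility_energy_kwh = 10_074_000 # IT load plus fans, UPS losses, etc.
cooling_water_liters = 1_314_000 # evaporative augmentation water

pue = facility_energy_kwh / it_energy_kwh
wue = cooling_water_liters / it_energy_kwh
print(f"PUE = {pue:.2f}, WUE = {wue:.2f} L/kWh")  # PUE = 1.15, WUE = 0.15 L/kWh
```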

Q: Is a ServerDome limited to using a specific vendor for any hardware, software, networks, storage, support, etc.?

A: ServerDome is vendor neutral. There is nothing in the design or operation of the data center that requires specific hardware, network, storage, etc. The configuration of the prototype represents an integrated combination of best-of-breed components to achieve the performance reported. Customers may choose any components that best suit their needs without altering the basic functionality of the design.

Q: How does the ServerDome design address the changing regulatory and compliance landscape?

A: The ServerDome prototype was designed to support a multi-mission, mission critical, academic healthcare institution with a major research function. As such, most regulatory and compliance challenges have been addressed to produce the final product. We have yet to encounter any issues that are not met by the current design.

Q: What processes and solutions are used for automation, provisioning, orchestration, self-service?

A: The design addresses automation, self-service, and customer provisioning. The facility is a remotely operated, lights-out design with few, if any, on-site staff required for normal operations. Other features include the ability for customers to provision electrical drops from a twist-lock busway without the need for an electrician. Rack, network, and compute equipment are fully available for customer configuration, and any data center management system can be deployed if desired. The modular pod design of the facility is an asset for user self-configuration and custom provisioning.

Q: What sandbox or test/dev system was put in place to reduce risk and increase resiliency?

A: Extensive CFD modeling was used to validate the design.

Q: Summarize the key performance data points of the Server Dome.

A: The ServerDome has an annualized PUE of 1.13 and an average WUE of 0.1 L/kWh, has minimized maintenance costs, and has had zero downtime in 5 years of operation.

Q: What is the emission profile of the data center?

A: Data center emissions include exhausted heat when it is not recirculated for facility heating during cold weather. In addition, diesel generator exhaust is present during prescribed testing cycles or during a loss of utility power.

Q: How did/will you meet aggressive time frames and milestones?

A: Aggressive time frames and milestones were co-managed by the core team with incentives for meeting deadlines. The prototype was delivered on time and on budget.

Q: Is direct current (DC) power used for the data center?

A: Direct current was not used in this design, although nothing precludes this power model. The AC distribution described above was chosen because it supports the widest range of current equipment without custom power distribution.

Q: Is liquid cooling used?

A: Liquid cooling was not used, although the design will accommodate it. Loads of 25 – 40 kW per rack are easily accommodated in the ServerDome without liquid cooling.

Q: What forms of heat rejection, heat recovery, heat pumps, or heat harvesting are used?

A: Heat harvesting is not currently used, other than recirculating air during cold weather to heat the facility. However, the heat plume in the dome was designed so it can be harvested with a heat exchanger when useful or required.

Q: What form and scope of sensor network is used for continuous environmental, leak, and unit condition reporting?

A: A robust sensor array for both air pressure and cold aisle/hot aisle temperature is deployed. In addition, all internal and external environmental weather conditions are monitored. Water consumption is also metered.
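As an illustration of how such telemetry might be consumed, the sketch below checks hypothetical sensor readings against the internal operating band cited in the specifications above. The sensor names and alerting logic are our assumptions, not ServerDome's actual monitoring stack.

```python
# Hypothetical telemetry check -- illustrative only, not ServerDome's
# actual monitoring system. Thresholds follow the 64.4 - 80.6 F
# internal operating range cited above.
COLD_AISLE_MIN_F = 64.4
COLD_AISLE_MAX_F = 80.6

def check_cold_aisle(readings: dict[str, float]) -> list[str]:
    """Return alerts for any sensor outside the operating band."""
    alerts = []
    for sensor, temp_f in readings.items():
        if not COLD_AISLE_MIN_F <= temp_f <= COLD_AISLE_MAX_F:
            alerts.append(f"{sensor}: {temp_f:.1f} F out of range")
    return alerts

# Example readings from hypothetical pod sensors.
sample = {"pod1-cold-a": 72.5, "pod2-cold-a": 81.2, "pod3-cold-a": 70.1}
for alert in check_cold_aisle(sample):
    print(alert)  # pod2-cold-a: 81.2 F out of range
```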

Q: Do you use a modular approach in data center construction?

A: The ServerDome is a highly modular design; most of the parts are prefabricated and shipped for on-site assembly.

Q: What is the most extreme rack design specification?

A: The current design uses 52U racks, but the design specifications can accommodate racks of up to 3,000 pounds and 40 kW per rack.

Q: Can you describe your critical infrastructure protection (CIP) approach?

A: The CIP approach factored in the primary location's liabilities, including seismic exposure, wind exposure, snow/volcanic ash loading, dust/fire exposure, etc. Planning for the loss of local or regional infrastructure included long-term diesel storage for the emergency generators and long-term water storage for supplemental evaporative cooling.

Q: What lessons have you learned from your customer engagement?

A: The key lesson is the need to provide adequate data and expertise to support a “heretical” design departure from the industry norm.