Take 10: Data Centers

06/15/2010

  1. What should dictate set-point settings on CRACs - ASHRAE or the hardware manufacturer?
    ASHRAE publishes recommended set points, but each individual manufacturer should define the optimum operating set points for its equipment.

  2. Where should a temperature warning sensor be located in the server room?
    Most CRAC manufacturers supply temperature sensors on the return-air side of the unit. We recommend that the temperature sensors controlling the CRACs be located not only in the return air, but also in the supply airflow under the raised floor, as well as at several heights on the racks at the server faces. The software programs that control the CRACs are sophisticated in design and can handle an algorithm that optimizes CRAC operation based on input from multiple sensors.
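
    The multi-sensor approach described above can be sketched as a simple weighted blend of readings. The sensor locations, readings, and weights below are hypothetical illustrations, not vendor values; real CRAC controllers use proprietary, vendor-specific algorithms.

```python
# Sketch of combining multiple temperature sensors (degrees F) into one
# control input for a CRAC. All values here are illustrative assumptions.

def blended_control_temp(readings, weights):
    """Weighted average of sensor readings used as the control input."""
    total_w = sum(weights.values())
    return sum(readings[loc] * weights[loc] for loc in readings) / total_w

readings = {"return_air": 78.0, "underfloor_supply": 62.0, "rack_face_mid": 72.0}
weights  = {"return_air": 0.2, "underfloor_supply": 0.3, "rack_face_mid": 0.5}

print(round(blended_control_temp(readings, weights), 1))  # 70.2
```

    Weighting the rack-face sensors most heavily reflects the point of the answer: the temperature that matters is at the server inlets, not at the CRAC return.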

  3. If this were a new data center, would we gain efficiency by using horizontal instead of vertical rack placement?
    Space efficiency can be attained by using taller racks, but the real savings come not from rack placement but from optimal utilization of the cooling media.

  4. In the CFD example, how do we know that the perforated tiles are matched to the output of the CRACs?
    The capacity (CFM/tons) of the CRACs is calculated based on the load required to cool the data center equipment. The number of CRACs is based on how the cooling should be distributed, along with redundancy requirements. The number of perforated tiles is then calculated to match the CRAC output. Too few perf tiles may cause the underfloor plenum to over-pressurize, reducing the volume of air to the front of the racks. Too many perf tiles may cause inadequate pressurization, also limiting airflow where required. The placement of perf tiles is dependent on where the load resides.

  5. How important is the raised floor? I'm not sure if I have the ceiling height. We would typically use forced air CRAC units, not chilled water.
    It is really a question of facility flexibility. The raised floor is used to distribute the cold air to the face of the heat generating equipment. As those heat loads change over time, it is a fairly simple task to relocate the perf tiles to where they are needed. It also allows underfloor distribution of the power to the racks, leaving the overhead space for the network/communication wiring.

    With a ducted air system, you are limited by where the ductwork runs in how you can distribute the cool air. Flexibility means relocating ductwork and diffusers. In a small room with no access floor, a ducted system should be adequate.

    If you don’t have the ceiling height, consider removing the ceiling and using the gained vertical space to install access floor, utilizing the space that was previously above the ceiling as the return air plenum.

  6. Do you suggest the use of a radiant barrier to direct airflow as well as reduce heat buildup or loss from radiated energy?
    Hot-aisle or cold-aisle containment is frequently used in current data center designs. Although hot-aisle containment is more efficient, due to higher CRAC return-air temperatures, both hot-aisle and cold-aisle containment will improve the efficiency of a center.

  7. When are IT and facilities going to really get it? That is, when will they see that collaboration is the only way to be successful, or does that depend on the organizational culture?
    Although some of our clients adhere to the collaborative process, most are either IT-centric or run by the facilities group. The IT-centric companies tend not to be too concerned with operation of the center, which puts the burden on the facilities group, both in effort and in operating costs. On the other hand, a data center controlled by the facilities department will probably not have the flexibility or reliability required of today's data centers. It is good practice to get both parties involved early in the concept design process, usually mediated by an outside party, such as the design professional.

  8. On average how much energy can be reduced by fixing air distribution problems in a data center? We encounter systems with very low delta t on the air side and too much cooling in the room.
    Over-cooling a room has always been an issue. By developing a CFD model, eliminating cold air/hot air short circuiting, and utilizing a BMS to control the entire center cooling plant, significant savings in the range of 15% to 25% of the cooling system energy can be realized.
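
    To put the 15% to 25% range above in concrete terms, here is an illustrative calculation. The annual cooling energy and electricity rate are example inputs, not figures from the article.

```python
# Illustrative annual savings from the 15%-25% cooling-energy reduction
# cited in the answer. Inputs are hypothetical example values.

def cooling_savings(annual_cooling_kwh, rate_usd_per_kwh=0.10):
    """Return (low, high) annual dollar savings for the 15%-25% range."""
    low_frac, high_frac = 0.15, 0.25
    return (annual_cooling_kwh * low_frac * rate_usd_per_kwh,
            annual_cooling_kwh * high_frac * rate_usd_per_kwh)

low, high = cooling_savings(1_000_000)  # 1 GWh/yr of cooling energy
print(f"${low:,.0f} to ${high:,.0f} per year")
```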

  9. What do you forecast for future data center installations, i.e., in-row cooling, chilled water into cabinets, chimneys over hot aisles, etc.?
    All of the above. In-row cooling and hot aisle containment are widely used today. Most manufacturers have cooling units that can be incorporated into the server racks to accommodate higher density loads.

  10. In the hot aisle/cold aisle arrangement, the cold aisle temperature is key. But the temp sensor at the CRAC unit inlet measures average room temperature. Would it be better to control the CRAC units off the underfloor temperature?
    See answer to #2 above.

  11. How was the rule of thumb of power into the UPS times 1.8 to 2.0 derived as the cooling energy needed?
    The 1.8-to-2.0 rule of thumb is used to calculate the entire power demand for the center, including the cooling, lights, convenience outlets, etc. The cooling required is calculated by converting the data center kW to tons of AC.
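
    The two calculations in the answer above can be sketched directly. One ton of refrigeration is 12,000 BTU/hr, or roughly 3.517 kW; the 200 kW UPS load used here is an example input, not a figure from the article.

```python
# kW-to-tons conversion and the 1.8x-2.0x rule of thumb from the answer.
# 1 ton of refrigeration = 12,000 BTU/hr = ~3.517 kW.
KW_PER_TON = 3.517

def cooling_tons(it_load_kw):
    """Tons of AC needed to remove the heat from a given IT load."""
    return it_load_kw / KW_PER_TON

def total_power_estimate(ups_load_kw, multiplier=1.8):
    """Rule-of-thumb total facility demand: 1.8x-2.0x the UPS load."""
    return ups_load_kw * multiplier

print(round(cooling_tons(200), 1))   # ~56.9 tons for a 200 kW IT load
print(total_power_estimate(200))     # 360.0 kW total demand at 1.8x
```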

  12. When you live in an area that has cold outside temperatures much of the year, what, if anything, could that cold air be used for?
    Today's designs investigate a myriad of free-cooling options, one of which is the use of free outside air as the cooling medium. It can be the direct use of the outside air to cool the center, but that may have some particulate filtration issues. Another design option is to utilize an air-to-air heat exchange system, which virtually eliminates the filtration issue. Also, many air-cooled chillers can be designed with a free-cooling option, which also saves energy.

  13. What are the trends in reducing energy waste on a component level such that you increase data service without marginal heat?
    All the server manufacturers have been designing and producing equipment that increases computing power while lessening the energy used. Virtualization of applications has also helped in the efficient use of server capacity – fewer servers running the same number of applications.
