
    Thursday
    Aug 4, 2016

    Smart Data Centers

    As we collect more and more data about everything around us, smarter cities and facilities are emerging that operate in more efficient and reliable ways.  The same can be done for data centers: the massive amount of data can be leveraged to reduce risk while optimizing how the many data center systems perform.  If you haven't seen it in action, you can at least read about how Google has introduced AI to control and optimize its cooling systems: DeepMind to take over all of Google data center cooling.

    Cities and companies have for years been monitoring their vehicle fleets and then auditing the results to find improvements.  This data, growing larger every day, informs major decisions about how and where vehicles are deployed, and shows how simple changes can improve drivers' efficiency and reduce accidents and time in traffic.  With more granular monitoring and controls being incorporated into facilities over the last 10 years, the same can be done with data centers.

    For instance, a data center could operate its UPS systems in an eco/bypass mode to avoid double-conversion losses.  However, bypass mode can be considered a risk to reliability, since the reaction time to a fault is usually longer.  Instead, detected weather events or other transients can trigger a switch back to double conversion, avoiding the added reliability risk.
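
    To make the idea concrete, here is a minimal Python sketch of such a control policy.  The Conditions fields and the 5% voltage-deviation threshold are illustrative assumptions, not any vendor's API.

        # Hypothetical sketch: leave eco/bypass mode when a storm or power-quality
        # transient is detected, and return to it once conditions clear.
        from dataclasses import dataclass

        @dataclass
        class Conditions:
            storm_warning: bool       # e.g., from a weather service feed (assumed)
            voltage_deviation: float  # percent deviation from nominal at the utility input

        def choose_ups_mode(c: Conditions, deviation_limit: float = 5.0) -> str:
            """Return 'double_conversion' when risk indicators are present, else 'eco'."""
            if c.storm_warning or abs(c.voltage_deviation) > deviation_limit:
                return "double_conversion"  # accept conversion losses, gain reaction time
            return "eco"                    # bypass mode saves the double-conversion losses

        # Example: a detected storm forces the safer mode
        print(choose_ups_mode(Conditions(storm_warning=True, voltage_deviation=0.5)))
        # -> double_conversion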

    Now, the data that is collected and analyzed can be taken beyond the equipment-level controls that Google's DeepMind is responsible for optimizing.  Network system monitoring can be overlaid with the facilities information to determine how quickly each system can react when changes are made.  Chillers and cooling towers can be studied and tuned so that their energy use matches the operational loads of the data center, even considering how peaks for each system can be reduced.  Integrated temperature monitoring, say with both the actual server sensors and a DCIM solution, can allow fans and pumps not just to ensure that the loads are met but also to alter their settings to just meet the operational needs.
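
    As a rough illustration of "just meeting the need", the sketch below nudges fan speed from the hottest inlet reading across both server sensors and a hypothetical DCIM probe.  The 27°C target, the gain, and the speed limits are assumed values, not a standard.

        # Proportional step: raise fan speed when the hottest inlet exceeds target,
        # lower it when there is headroom, never leaving a 20-100% band.
        def next_fan_speed(current_pct: float, inlet_temps_c: list[float],
                           target_c: float = 27.0, gain: float = 2.0) -> float:
            error = max(inlet_temps_c) - target_c
            return min(100.0, max(20.0, current_pct + gain * error))

        server_sensors = [24.1, 25.3, 26.8]  # from server telemetry (illustrative)
        dcim_sensors   = [25.0, 27.9]        # from a DCIM rack probe (illustrative)
        print(next_fan_speed(60.0, server_sensors + dcim_sensors))  # 61.8: a small nudge up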

    Changing how all equipment operates in a dynamic scenario might seem like setting up the data center temperatures, airflow, and loads to fluctuate in a yo-yo-like fashion.  But that is just reacting to the most current data received.  Making the data center smart means tracking the trends and understanding the minute details of how the many systems respond.  Mechanical controls do not react in the near-instantaneous manner of IT equipment; gauging this lag is crucial, and once it is understood the dynamic changes of a data center can be anticipated much more closely.
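
    One way to gauge that lag, sketched below with NumPy on synthetic data, is to cross-correlate a cooling setpoint change against the trended temperature response; real inputs would come from BMS or DCIM history rather than the invented series here.

        import numpy as np

        samples_per_min = 1                                    # one reading per minute (assumed)
        cooling_step = np.zeros(60); cooling_step[10:] = 1.0   # setpoint change at t=10
        temperature  = np.zeros(60); temperature[18:] = -1.0   # temps respond at t=18

        # The lag is the offset that best aligns the two mean-removed signals
        corr = np.correlate(temperature - temperature.mean(),
                            cooling_step - cooling_step.mean(), mode="full")
        lag_minutes = (np.argmax(np.abs(corr)) - (len(temperature) - 1)) / samples_per_min
        print(f"estimated mechanical lag: {lag_minutes:.0f} min")  # -> 8 min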

    In the years ahead, the buffer temperatures that we apply to a data center can be minimized to achieve greater and greater savings.  Automation of temperatures and of air and water flow can seek the best performance without hampering IT equipment operation.  What we may need to get used to is an AI (Skynet?  HAL 9000?) issuing work orders for us to fulfill, even telling us the exact procedures to follow and the equipment to buy for its own preservation.

    Thursday
    Sep 3, 2015

    Take Advantage of DCIM

    Data Center Infrastructure Management (DCIM) tools take a number of approaches to saving data center managers time and money while protecting uptime.  While the different tools have come from different directions in the industry, such as facilities or communications, they all perform well when used as intended.  Managing assets, capacity, change, power, energy, and environmental conditions are among the ways that DCIM products can help managers save, and when combined the savings can be stretched even further.

    Many DCIM vendors approach customers with the ability to centralize information about their data center.  With that data the DCIM can reveal some of the causes of systematic problems or losses of time and energy.  Many managers see their own spreadsheets and other tracking tools as already handling these issues, so the appeal is lessened.  However, it is when a DCIM product tracks when equipment is due for maintenance, replacement, or other asset-management actions that it surpasses the spreadsheets, which often miss updates and whose version control between groups gets out of hand.
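
    A minimal sketch of that advantage, with invented equipment names and dates, is a single shared record that flags what is due, rather than reconciling spreadsheet copies:

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Asset:
            name: str
            next_maintenance: date
            end_of_life: date

        def due_for_action(assets: list[Asset], today: date) -> list[str]:
            """List assets needing maintenance or replacement as of today."""
            notes = []
            for a in assets:
                if a.end_of_life <= today:
                    notes.append(f"{a.name}: replace (end of life)")
                elif a.next_maintenance <= today:
                    notes.append(f"{a.name}: maintenance due")
            return notes

        fleet = [Asset("UPS-A", date(2015, 8, 1), date(2022, 1, 1)),
                 Asset("CRAC-3", date(2016, 2, 1), date(2015, 9, 1))]
        print(due_for_action(fleet, date(2015, 9, 3)))
        # ['UPS-A: maintenance due', 'CRAC-3: replace (end of life)']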

    Another approach vendors push is energy, and thus cost, savings.  Managers may wonder how this can happen without a follow-up investment in energy-saving measures - an additional cost on top of the DCIM.  A good DCIM tool can generate reports on energy usage and power trends across a power chain, as well as help find stranded power.  The DCIM should also be able to help with power, space, and cooling allocations as the data center changes, sometimes dynamically.  And without investing in replacements or additional equipment, a manager can adjust server locations and data center temperatures to reduce hot spots while running a warmer space, cutting cost by trial and error.
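
    A stranded-power report, for example, can be as simple as comparing provisioned capacity against measured peak draw per rack.  The figures below are invented; a real DCIM would pull them from metered PDUs.

        racks = {
            # name: (provisioned kW, measured peak kW)
            "R01": (8.0, 3.2),
            "R02": (8.0, 7.1),
            "R03": (8.0, 1.4),
        }

        for name, (provisioned, peak) in racks.items():
            stranded = provisioned - peak
            print(f"{name}: {stranded:.1f} kW stranded ({stranded / provisioned:.0%})")
        # R01: 4.8 kW stranded (60%), R02: 0.9 kW (11%), R03: 6.6 kW (82%)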

    Capacity planning is perhaps the most underrated selling point of a DCIM product.  Managers sometimes step into positions where they don't fully know the data center.  They want to know what they have and whether there is unused, stranded, or low-performing equipment.  The DCIM tool should be able to highlight some of these aspects before an audit of the equipment itself, saving the time of scouring the data center to find ways to recapture space.  The same approach also goes for power and cooling, as underused capacity can be regained.  Doing this faster and with fewer resources helps put off the request for a new data center driven by a space, power, or cooling constraint alone.
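
    A hedged sketch of that payoff: given used versus total capacity and an estimated growth rate, project how many months each resource has left before a new build is needed.  All figures here are invented for illustration.

        def months_remaining(used: float, total: float, growth_per_month: float) -> float:
            """Months until `used` reaches `total` at a fixed monthly growth amount."""
            return max(0.0, (total - used) / growth_per_month)

        capacity = {
            "space (racks)":  (180, 220, 2.5),   # used, total, growth/month
            "power (kW)":     (900, 1200, 12.0),
            "cooling (tons)": (260, 340, 5.0),
        }

        for resource, (used, total, growth) in capacity.items():
            print(f"{resource}: ~{months_remaining(used, total, growth):.0f} months left")
        # space: ~16, power: ~25, cooling: ~16 -> space and cooling bind first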

    The value of a DCIM product can also be hard to prove.  Some in the industry tout savings of 50% or more from power adjustments alone; peeling back how it was done sometimes reveals that the DCIM tool only pointed the way toward savings that still required changes and investments.  The time a team saves reviewing its assets can be difficult to quantify when judging the benefits of DCIM reports.  The links to savings are most often made through power or cooling, and sometimes those results look fuzzy next to the cost.  However, knowing where the data center stands when you walk in or out can be a stress relief well worth the price; plus, many products offer remote access capabilities to help with quicker responses.

    Tuesday
    Sep 1, 2015

    Energy Codes & Standards

    In the USA there are two main documents that pertain to energy.  The first is the model code, the International Energy Conservation Code (IECC), developed by the International Code Council; 47 states have adopted it, as well as the District of Columbia, Puerto Rico, and the U.S. Virgin Islands.  The other is Standard 90.1, developed jointly by the American National Standards Institute (ANSI), the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), and the Illuminating Engineering Society of North America (IESNA); Minnesota and Indiana have adopted it.  California has instead developed its own energy code, which exceeds the requirements of both the IECC and Standard 90.1.  Even where a state or commonwealth has adopted one document or the other, local jurisdictions may have their own amendments or may have adopted 90.1 instead.  Although most have chosen the IECC, Standard 90.1 remains important since the IECC accepts 90.1 as a suitable means of showing compliance for commercial buildings.  For data centers, the greater flexibility of 90.1 means that compliance may be met more easily.

    Monday
    Aug 31, 2015

    Location by Incentives

    A lot of people wonder why data centers aren't built only at the Arctic and Antarctic Circles to take advantage of year-round free cooling.  The answer, in a word, is incentives.  Looking more in depth, the words behind that are taxes and power costs.

    The cost of power is one of the strongest drivers behind data center locations.  Oregon has attracted Google, Apple, and Facebook with its low cost of power.  By negotiating tax breaks, a good deal gets even better.  Tax incentives have been numerous and long-lived, ranging from reduced property taxes to breaks on server equipment and business furnishings taxes.  Added up, those savings make up a big share of the overall operational costs at a site.  State and local governments are aware of this and compete to lure data centers by offering better rates, longer terms, negotiating lower power costs, or easing permits and regulations for a smooth process.

    Those governments hope to lure a data center to their area, but the benefits can be murky when weighed against the relatively small number of jobs created and the money returned to the community.  The construction is a one-time event, with additions and modifications usually separated by many years.  The data center may provide high-end tech jobs for the area, but if it becomes a 'lights-out' facility, those may change to security jobs instead.  On the other hand, the infrastructure of the surrounding communities may be upgraded to accommodate the data center's needs: not only power, but water, roads, and other civil projects are undertaken to support the facility.

    Fiber connectivity has been an issue, though less than it used to be as more networks become available.  Comparing the cost of power and tax incentives against the potential savings from reduced cooling, the numbers aren't yet favorable enough to locate a data center at a latitude near 66.6 degrees north or south.

    Tuesday
    Aug 18, 2015

    Water Issues of the Southwest USA

    It seems there are reports about the dire water conditions of the Southwest United States at least once a month.  Each state is vying for more water to support its agricultural, domestic, and power needs.  Water levels in Lake Mead and Lake Powell have dropped enormously, and the evidence can be readily seen in comparison pictures by journalist John Fleck.  Nice graphics of water in the West can be seen at Dean Farrell's site too.

    But the states are moving to ensure that they will have water now and in the future.  While shortages may eventually require Arizona and Nevada to enforce water-saving measures similar to California's, they have been outlining plans based on past legislation:

    • California: due to deals made as far back as 1968, California will get the water it needs in the event of a water shortage, which is now very real.
    • Nevada (Las Vegas): the city has known that the cost of water is going up while availability goes down, and it has cut its water usage by about 30% over the last 10 years.  The water authority has also been installing a new intake system with an inlet at a lower elevation in Lake Mead.
    • Arizona: the state plans to pump water to Phoenix and to agriculture while cutting back elsewhere, such as delaying the replenishment of groundwater reservoirs.

    Many have wondered how this will affect the data center industry, since much of the cooling is done evaporatively with water.  So far the impact has been on operational costs.  Water costs money, and with dry conditions expected to hold or worsen over the next 30 years, those costs are only going to increase.  Although we might not see the results now, the impact of a megadrought - a desert-like drought lasting decades - could shut business doors throughout the Southwest just to save water.

    Cooling towers, the source of most data center water demand, may face a mandatory minimum of six cycles of concentration, a big step up from the typical two.  Scaling and fouling of equipment then become a challenge for facilities operators, likely addressed via filtration and higher levels of chemical treatment.  Another impact is that more cooling tower water will need to come from non-potable (greywater or similar) sources.  These considerations all play into the reliability of the data center facility as well as its operational cost.
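
    The water math behind that step-up is worth seeing.  Makeup water is evaporation plus blowdown, and blowdown scales as evaporation divided by (cycles - 1), so going from two to six cycles cuts makeup water substantially.  The evaporation rate below is an assumed round number, not a measurement.

        def makeup_gpm(evaporation_gpm: float, cycles: float) -> float:
            """Makeup = evaporation + blowdown, with blowdown = evaporation / (COC - 1)."""
            blowdown = evaporation_gpm / (cycles - 1)
            return evaporation_gpm + blowdown

        evap = 100.0  # gpm of evaporation for a given heat load (illustrative)
        for coc in (2, 6):
            print(f"{coc} cycles: {makeup_gpm(evap, coc):.0f} gpm makeup")
        # 2 cycles: 200 gpm; 6 cycles: 120 gpm -> a 40% reduction in water use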