    Friday
    Jun 19, 2015

    Better Modeling, Better Results

    Modeling your data center using Computational Fluid Dynamics (CFD) can help build a better picture of data center performance: hot spots, airflows, pressures, and more.  There are natural concerns about cost and complexity, and about how a CFD model can complement your floor design and work with your Data Center Infrastructure Management (DCIM) tools. 

    CFD modeling can help answer sophisticated 'what-if' scenarios, such as new system deployments, optimizing power, space, and cooling, and the potential results of equipment failures.  Additionally, hot and cold aisle containment can be modeled to see how well a deployment may operate.  Modifications and upgrades in one part of the data center can affect areas that are seemingly unrelated; CFD modeling helps identify those relationships, much like a weather map that changes as the underlying factors are altered. 

    CFD modeling has primarily been used to understand environmental issues and how best to resolve them in order to protect equipment and reduce costs.  As the modeling advances and tracks reality more closely, it opens new options for better construction results when laying out a data center to maximize its space and power. 

    Today most DCIM tools provide a good depiction of the data center in its current state; adding the predictive capability of modeling can provide more accurate answers about possible deployments.  While DCIM gathers and shows the data center's capacities and efficiencies, CFD modeling tools can take that detailed information and turn it into more accurate predictions.  For instance, a DCIM solution can record that a server is being installed at a location and how much power is allocated to it, while the CFD model will show how adjacent servers and cabinets will be affected.
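
    To make the DCIM-to-model handoff concrete, here is a minimal Python sketch of the kind of what-if check described above.  The rack inventory, the inlet_rise_f() heuristic, and the 160 CFM-per-kW rule of thumb stand in for what a real CFD solver would compute numerically; treat the numbers and names as illustrative assumptions, not any vendor's API.

        import copy

        # Hypothetical DCIM export: per-rack IT load (kW) and supply airflow (CFM).
        racks = {
            "A01": {"it_kw": 6.0, "supply_cfm": 1100},
            "A02": {"it_kw": 8.5, "supply_cfm": 1100},
            "A03": {"it_kw": 4.0, "supply_cfm": 1100},
        }

        def inlet_rise_f(rack):
            """Crude stand-in for a CFD result: how far a rack's inlet air climbs
            above the supply temperature when its load outruns its airflow
            (roughly 160 CFM carries 1 kW at a 20 F rise)."""
            needed_cfm = rack["it_kw"] * 160
            shortfall = max(0.0, needed_cfm - rack["supply_cfm"])
            return 20.0 * shortfall / needed_cfm

        def what_if_add_server(racks, rack_id, added_kw):
            """Estimated inlet rise (deg F) for every rack after adding load to one rack."""
            scenario = copy.deepcopy(racks)
            scenario[rack_id]["it_kw"] += added_kw
            return {rid: round(inlet_rise_f(r), 1) for rid, r in scenario.items()}

        print(what_if_add_server(racks, "A02", 3.0))   # {'A01': 0.0, 'A02': 8.0, 'A03': 0.0}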

    In-row cooling units (IRCs) are one of the newer solutions for cooling higher-density rows of racks.  But how many are needed, and where to place them, can be a complex question to frame, let alone solve.  Knowing how the data center has been behaving via DCIM and then applying a model to predict how the IRCs will actually perform can reduce the number of IRCs required and identify the locations that maximize their efficiency. 
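
    Before any detailed modeling, a first-pass IRC count can come from simple capacity arithmetic.  A minimal sketch, assuming a 30 kW per-unit capacity and N+1 redundancy (both illustrative values); a CFD model would refine the count and, more importantly, the placement.

        import math

        def irc_count(row_it_kw, unit_capacity_kw=30.0, redundancy_units=1):
            """First-pass estimate: enough units to absorb the row's heat load,
            plus spare units for redundancy.  Placement still needs modeling."""
            return math.ceil(row_it_kw / unit_capacity_kw) + redundancy_units

        # Example: a 160 kW row with assumed 30 kW units and N+1 redundancy.
        print(irc_count(160))   # -> 7 (6 to carry the load, plus 1 spare)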

    Would a CFD model help with every server deployment?  Not necessarily, but for large-scale changes, installations, and replacements a model can help find better arrangements and locations.  Modeling has its limitations, such as being only a snapshot in time and heavily reliant on the quality of the information provided.  However, CFD modeling does provide a scientific approach to cooling and power management in a data center, improving designs and cooling effectiveness.

    Monday
    May 18, 2015

    Lights Out

    Lighting has been an efficiency target in data centers for a while, as it can run upwards of 1.5 watts per square foot of white space.  While this power density may seem small compared with the computing loads, anything over 1.0 watts per square foot is likely more than needed.  The exception is enough task lighting to allow work in the aisles when required.  But most of the time the lights aren't needed; if you're not doing it already, turn them off when you can. 
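
    The savings are easy to put a number on.  A minimal sketch, assuming a 10,000 sq ft room, $0.10/kWh, and lights that would otherwise run around the clock; all three figures are illustrative assumptions.

        AREA_SQFT = 10_000        # assumed white-space area
        RATE_PER_KWH = 0.10       # assumed electricity cost, $/kWh
        HOURS_PER_YEAR = 8760

        def annual_cost(watts_per_sqft, hours_on=HOURS_PER_YEAR):
            kwh = watts_per_sqft * AREA_SQFT * hours_on / 1000
            return kwh * RATE_PER_KWH

        always_on = annual_cost(1.5)                   # 1.5 W/sq ft, never switched off
        trimmed   = annual_cost(1.0, hours_on=2000)    # 1.0 W/sq ft, on ~2,000 h/yr
        print(f"${always_on:,.0f} vs ${trimmed:,.0f} per year")   # $13,140 vs $2,000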

    The lights in the data center are likely fluorescent, which is more efficient than incandescent, but even among fluorescents there are big differences.  T12 lamps should be replaced with T8, or better yet T5.  The number refers to the tube diameter (in eighths of an inch), and the smaller-diameter lamps have better efficacy (lumen-per-watt output). 
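
    To see what efficacy means in practice, here is a small comparison using rough, typical lumen-per-watt values (assumed ballpark figures, not measurements):

        # Assumed typical efficacies in lumens per watt (ballpark values only).
        EFFICACY = {"T12": 60, "T8": 90, "T5": 100, "LED": 130}

        TARGET_LUMENS = 100_000   # assumed total light output needed for the space

        for lamp, lm_per_w in EFFICACY.items():
            watts = TARGET_LUMENS / lm_per_w
            print(f"{lamp}: ~{watts:,.0f} W for the same light output")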

    LED lighting has caught on as prices have dropped dramatically over the last decade.  LEDs are brighter, easier to set up, control, and replace, and waste little energy as heat.  If you have the choice in a building upgrade, ask for LEDs over the alternatives. 

    More advances are coming, and organic LED (OLED) is already taking off, offering higher output while requiring even less energy.  Heat output drops further as well, making OLEDs a likely candidate to replace your building lighting in the coming years.  But even with the newer technologies, off still uses less energy. 

    Wednesday
    Apr 29, 2015

    Adding Cooling

    There are a number of ways to effectively and efficiently cool a data center.  Below are some considerations when looking to add cooling to your critical space. 

    Strategy: Understand the cooling methodology for your data center: air and/or water; raised floor; hot/cold aisle; containment; perimeter cooling; localized cooling; liquid cooling; etc.  Establishing a new cooling method to be used in combination with an existing system can be tricky, but knowing how each will operate is critical.  How will adding more cooling work in conjunction with the existing cooling systems?

    Know the Load: A fundamental understanding of your cooling needs drives the capacities of the cooling equipment you may need.  The temperatures and humidity levels of the inlet and outlet air will dictate the performance of the equipment.  On paper or in a spreadsheet it may seem you already have adequate cooling, but with different operating temperatures or humidity levels the net cooling for the space changes.  How much load will be added, and how will the cooling match or exceed the maximum required?  (A sizing sketch follows these considerations.)

    Know the Airflow: Airflow to the equipment, bypass, and recirculation all need to be considered.  Keeping the IT equipment cool is a matter of delivering the correct amount of cooling air where it is needed.  Most data centers are designed with excess cooling capacity to counterbalance airflow losses, and as long as those inefficiencies exist, excessive energy consumption will continue.  Improving airflow management is a better way to cool the IT equipment.  And although airflow fundamentals for a data center may be straightforward, implementation of better practices is surprisingly poor.  How can an understanding of the airflow change the cooling required?  Could there still be hot spots and air-starved areas after adding cooling capacity?  (The same sketch below accounts for bypass.)

    Options: Determine whether matching the existing systems will be the most beneficial, and whether your other systems, such as chilled water, can support the additional cooling equipment.  Does your load warrant a newly revamped cooling system or just a temporary portable solution?  Who are the trusted manufacturers, and who offers the best solution for your needs?  Is there a way to allocate the loads differently to aid the cooling systems?  And, ultimately, will the solution solve the cooling problems? 
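
    A minimal sketch of the load and airflow arithmetic behind the questions above, using the common rule of thumb that air at roughly standard conditions carries about 1.08 BTU/hr per CFM per degree F of temperature rise; the 100 kW load, 20 F rise, and 25% bypass in the example are assumed values.

        def sensible_cooling_btuh(cfm, delta_t_f):
            """Sensible cooling carried by an airstream at ~standard conditions."""
            return 1.08 * cfm * delta_t_f

        def required_cfm(it_load_kw, delta_t_f, bypass_fraction=0.0):
            """Airflow needed to carry an IT load at a given temperature rise.
            Bypass air never reaches the equipment, so the supply must be larger."""
            btuh = it_load_kw * 3412            # 1 kW = 3,412 BTU/hr
            cfm_at_equipment = btuh / (1.08 * delta_t_f)
            return cfm_at_equipment / (1.0 - bypass_fraction)

        # Example: 100 kW of IT load, 20 F rise across the equipment, 25% bypass.
        print(f"{required_cfm(100, 20):,.0f} CFM if every CFM reaches the IT gear")
        print(f"{required_cfm(100, 20, bypass_fraction=0.25):,.0f} CFM with 25% bypass")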


    Thursday
    Feb 19, 2015

    3+ Billion Users, 30+ Billion Connected Devices

    Sometime in the next year the world will have about 3 billion internet users.  That number will continue to grow to about 3.5 billion or more by 2018 as access expands to new markets.  India and Southeast Asia will see double-digit growth, assisted by Google, Facebook, and others as they create new ways to expand access.  North America, with its already highly developed markets, will see the least growth. 

    In addition to the growth in users, the growth rate of connected devices means there will be about 30 billion devices on the internet by 2020.  The connected-device market is predicted to expand from around $1.4 trillion to $3 trillion by 2020, so waves of new devices will be hitting markets around the world every year.  Combined with the expansion into new markets, we could easily see more than 30 billion devices, including cars, homes, and other smart devices. 
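
    For a sense of the pace those market figures imply, a quick compound-growth calculation (assuming the $1.4 trillion figure describes roughly today's market, i.e. a 2015 baseline):

        start_value = 1.4e12    # ~2015 market size (assumed baseline year)
        end_value   = 3.0e12    # predicted 2020 market size
        years       = 5

        cagr = (end_value / start_value) ** (1 / years) - 1
        print(f"Implied growth: about {cagr:.0%} per year")   # roughly 16-17% per year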

    This growth in users and devices also means that more data centers will be built or expanded to support the demand.  And with the expansion of cloud-based resources, those enterprise-sized data centers might be located anywhere around the globe.  Already we are looking to take advantage of cooler climates and cleaner sources of electricity.  With the addition of more robust networks, we can look forward to supporting users and devices no matter where they are, with data centers that consume less energy than ever before. 

    Wednesday
    Feb 18, 2015

    Software Defined Data Center

    While the term SDDC (Software Defined Data Center) has been cropping up in discussion more often, the concept behind the catch-all term is still rather vague.  Many have an idea of what they want it to be, and naturally those ideas don't all match.  Perhaps it is best thought of as a fully flexible data center, a virtualized version of everything standing behind what is needed.  But that may be a little too abstract; IT wants a simpler means to roll out services rapidly, and the SDDC is the means to meet the requirements of redundancy, provisioning, and more. 

    In the past there were dedicated infrastructure components for each application, with little sharing.  Efficiency wasn't easily achievable, since assets were overprovisioned to meet needs that might never arrive.  Additionally, changes meant additional infrastructure that could take months to implement.  Then virtualization opened up the applications to share the infrastructure, and IT managers could respond dynamically within days, sometimes sooner.  Now the applications themselves ask the infrastructure for the resources they need.  This is the faster cloud we are progressing toward, with agile applications that work fluidly with the infrastructure - the whole infrastructure - to meet their location, space, and reliability needs. 
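
    As a purely illustrative sketch of 'applications asking for the resources they need,' here is what such a declarative request might look like in Python; the field names and the provision() stub are hypothetical, not any particular SDDC product's API.

        from dataclasses import dataclass

        @dataclass
        class ResourceRequest:
            """What an application declares; the SDDC decides where and how to satisfy it."""
            app_name: str
            vcpus: int
            memory_gb: int
            storage_gb: int
            redundancy: str        # e.g. "N+1" or "2N"
            preferred_region: str  # a locality constraint, not a specific rack

        def provision(request: ResourceRequest) -> None:
            # Stub: a real SDDC controller would choose sites, hosts, and storage
            # that satisfy the declared constraints, then wire up the networking.
            print(f"Placing {request.app_name}: {request.vcpus} vCPU, "
                  f"{request.memory_gb} GB RAM, {request.redundancy}, {request.preferred_region}")

        provision(ResourceRequest("billing-api", vcpus=8, memory_gb=32,
                                  storage_gb=500, redundancy="N+1",
                                  preferred_region="us-east"))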

    However, the physical data centers themselves, with an average age approaching 18 years, need to keep up to allow the smarter, faster cloud to operate.  They may come to be ranked on cost, reliability, security, computing abilities, and more, allowing applications and their users to decide what the balance should be.  In this way, SDDCs become dashboards for users to choose amongst.  Behind the scenes, the equipment will need to remain operational for a facility to maintain its ranking, which may be the biggest disconnect between the software-defined view and actual data center operations over the coming decades.
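
    A minimal sketch of what such a ranking might look like, with made-up facilities, scores, and weights purely for illustration:

        # Hypothetical facilities scored 0-10 on each criterion (illustrative numbers).
        facilities = {
            "DC-East":  {"cost": 6, "reliability": 9, "security": 8, "compute": 7},
            "DC-West":  {"cost": 8, "reliability": 7, "security": 7, "compute": 9},
            "DC-North": {"cost": 9, "reliability": 6, "security": 9, "compute": 6},
        }

        # Weights express what the application (or its users) cares about most.
        weights = {"cost": 0.2, "reliability": 0.4, "security": 0.3, "compute": 0.1}

        def score(metrics):
            return sum(weights[k] * v for k, v in metrics.items())

        ranked = sorted(facilities.items(), key=lambda item: score(item[1]), reverse=True)
        for name, metrics in ranked:
            print(f"{name}: {score(metrics):.1f}")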