    Tuesday, Aug 18, 2015

    Purge those Unused Servers

    In many data centers the motto is to leave servers alone until their owner or manager requests a change.  The result is a common practice of supporting processes and applications with dedicated, unvirtualized servers that may or may not be performing any work. 

    Add to this that most IT groups and data center managers are separated from the finances, meaning that the monthly utility bills are received and paid without much oversight.  The economic pain of paying a higher bill than necessary never registers because the impact is not felt directly. 

    But what if you knew that some of your servers were only 12-20% utilized, and that up to 30% of all of your servers were powered up and sitting idle?  Along with wasting energy and its associated cost, these servers also consume cooling capacity and space in your data center.  Perhaps it is time for a bit of IT housecleaning.

    However, it's not all that obvious or easy.  Getting the approval to go through and remove or virtualize servers costs time and money, and some will want to know what the return on that investment will be.  Gauged against a year of operation, the cost savings will likely win; but also be prepared to bundle the effort with other planned retrofits, changes and upgrades.  Do the work and show how much the savings might be in kWh and $, with conservative estimates of how much time it may take your IT team. 
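    As a rough illustration, the back-of-the-envelope math might look like the sketch below.  Every input (server count, wattage, cooling overhead, utility rate, labor figures) is an assumption to be replaced with your own numbers, not a benchmark.

    # Rough annual savings estimate for decommissioning idle servers.
    # All inputs are illustrative assumptions; substitute your own data.
    idle_servers = 40             # servers found powered up but doing no useful work
    avg_power_kw = 0.35           # average draw per idle server, in kW
    cooling_overhead = 0.5        # extra cooling energy per kWh of IT load
    utility_rate = 0.10           # $ per kWh
    hours_per_year = 8760

    it_kwh = idle_servers * avg_power_kw * hours_per_year
    total_kwh = it_kwh * (1 + cooling_overhead)
    energy_savings = total_kwh * utility_rate

    labor_hours = idle_servers * 4        # conservative decommissioning effort per server
    labor_cost = labor_hours * 75         # loaded hourly rate for the IT team

    print(f"Energy saved: {total_kwh:,.0f} kWh/yr")
    print(f"Utility savings: ${energy_savings:,.0f}/yr vs. one-time labor of ${labor_cost:,.0f}")

    With these assumed numbers the utility savings alone exceed the one-time labor cost within the first year, which is the kind of conservative comparison decision makers tend to respond to.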

    A good place to start is with a basic inspection of the equipment.  It may be surprising, but finding plugged-in servers and storage that are not even connected happens rather frequently.  Correcting these simple mistakes makes for easy victories.  Schedule surveys of the other IT assets to look for less obvious problems.  These surveys can be pushed aside by more urgent events, but keeping them on the schedule prevents them from fading away entirely.  Vendors offer services that can help, which may seem to shift the burden of risking a server outage, but ultimately the IT group will still bear the responsibility. 

    Having an up-to-date inventory can help identify when servers were installed and which group may be responsible for them.  Asking those groups to audit themselves, where they can, is a great way for a group that hasn't made changes in years to reassess what it actually needs.  Groups also change frequently and may not know what they have; a reminder may be just what they need. 
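    A simple pass over an inventory export can surface candidates for those audits.  The sketch below assumes a hypothetical CSV with hostname, install_date and owner columns; the file name, column names and age threshold are placeholders, not a standard format.

    # Flag inventory entries that are old or have no identified owner.
    # The server_inventory.csv layout (hostname, install_date, owner) is hypothetical.
    import csv
    from datetime import datetime, timedelta

    AGE_THRESHOLD = timedelta(days=5 * 365)   # audit anything older than about five years

    with open("server_inventory.csv", newline="") as f:
        for row in csv.DictReader(f):
            installed = datetime.strptime(row["install_date"], "%Y-%m-%d")
            if datetime.now() - installed > AGE_THRESHOLD or not row["owner"].strip():
                print(f"Audit candidate: {row['hostname']} "
                      f"(installed {row['install_date']}, owner: {row['owner'] or 'unknown'})")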

    On the other side of the utilization analysis, loading servers up to 100% can be just as much of a problem.  That introduces bottlenecks, latency and other performance problems that limit effectiveness.  It can be a balancing act, but often there are servers that ramp up to help with loads as needed and shouldn't be counted as unused.  This also comes down to knowing your applications and how they perform on a given class of server. 

    The latest issue is virtual machine 'sprawl', where more instances, images and virtual machines come online simply because they are easy to deploy.  But the impact adds up, as they tie up disk space and energy.  Archiving and consolidation are key to reining in the sprawl. 
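    One way to get a handle on sprawl is to flag virtual machines that have not been powered on recently and review them for archiving or consolidation.  The sketch below works from a hypothetical export of VM names, last power-on dates and disk sizes rather than any particular hypervisor API; the 90-day window is an arbitrary example.

    # Flag VMs with no recent power-on as candidates for archiving.
    # vm_report.csv (name, last_power_on, disk_gb) is a hypothetical export
    # from your virtualization platform.
    import csv
    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=90)
    stale, reclaimable_gb = [], 0.0

    with open("vm_report.csv", newline="") as f:
        for vm in csv.DictReader(f):
            last_on = datetime.strptime(vm["last_power_on"], "%Y-%m-%d")
            if datetime.now() - last_on > STALE_AFTER:
                stale.append(vm["name"])
                reclaimable_gb += float(vm["disk_gb"])

    print(f"{len(stale)} stale VMs, roughly {reclaimable_gb:,.0f} GB of disk tied up")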

    Thursday, Aug 13, 2015

    Liquid Cooling Definitions

    The majority of the industry understands that liquids are used in some or most of the cooling process for most data centers.  However, there have been misunderstandings about what is actually liquid cooled.  ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) released the second edition of Liquid Cooling Guidelines for Datacom Equipment Centers in 2013.  As part of the introduction, the book provides definitions of liquid cooling based on the boundaries that the liquid crosses to get closer to the load.  Liquid cooling, in general, is where a liquid rather than air is used to extract heat from the data center space.  This might mean in-row coolers, adjacent or rear-door cooling, or other means. 

    As the liquid is brought closer to the load, further definitions were given that are becoming standard for the industry:

    1. Liquid-cooled Rack: the liquid is circulated inside a rack or cabinet for cooling;
    2. Liquid-cooled Equipment: liquid is used inside a server or other datacom equipment to remove heat;
    3. Liquid-cooled Electronics: the liquid is used to directly cool the heat source without another medium, such as air.

    It should be mentioned that these definitions cover cooling inside the data center itself and do not include liquid cooling equipment such as chillers or cooling towers.  While that equipment does transfer heat via water or another liquid, it is not typically what defines a data center as liquid cooled. 

    Wednesday, Jul 29, 2015

    Digital Growth of the Health-Care Industry

    Despite some of the recent security scares, the health-care industry is experiencing a data explosion that continues to grow.  With the future expectation of access to health data on demand, storage and retrieval will continue to push both data centers and online security providers.  The Health Information Technology for Economic and Clinical Health (HITECH) Act offers incentives for the use of electronic health records and was passed partly as a stimulus package.  This meant creating and storing digital records, but also using those records in a meaningful way, such as for coordination of care. 

    The Health Insurance Portability and Accountability Act (HIPAA) was introduced to allow people to keep their health insurance as they changed jobs, regardless of pre-existing conditions.  Also included was the idea of Protected Health Information (PHI): information related to health status, payment and other sensitive data.  For data centers and their managers, this is digital information that needs to be protected appropriately as more records go online.  This became even more important when, in 2015, medical facilities began to be penalized through Medicare for not providing electronic records. 

    To support these new needs, data center operations began ramping up to serve health-care providers in four key ways: storage; access; encryption; and backup and recovery.  Along with each comes periodic testing to ensure there are no gaps in service, so that data is neither lost nor allowed to fall into the wrong hands. 

    As digital records grow, they can provide great value to the individual when coupled with other means of tracking one's health, such as wearable devices and other medical tracking, to provide trends and other clear signals.  This may seem cumbersome, as both the individual and the health-care provider need the storage.  But when coupled with smart technology and algorithms to reduce complications and errors, the advantages can be life changing.  Big data analytics and research can more readily push information to the relevant audience, as well as provide sources of more reliable data.

    For data centers, growth may be hard to predict as regulations and trends in data privacy change.  At worst the growth may plateau; at best it may surge forward faster than anticipated, leaving individuals and providers without a secure means to save and retrieve their data.

    Friday, Jul 10, 2015

    Environments to be Monitored

    Monitoring a data center is an important part of managing how it operates and making decisions for the future.  Knowing what is important to measure and monitor is crucial, as a deluge of information and misleading metrics can begin to lead you astray.  About one in three facilities will have an outage of four hours or more, and many of those believed they had no warning.  Proactive monitoring can help protect your facility from 'unexpected' outages due to unknown environmental conditions.

    Air Conditions - Temperature & Humidity: Temperature gradients and fluctuations are important to track throughout a data center.  Operating the data center temperature on empirical data is better than having C-level managers increase cooling just because the data center feels warmer than the hallway outside.  Temperature monitoring leads to understanding performance problems that can lead to shutdown, failure, or damage in older systems.  Know the expected cold and hot aisle temperature averages.  Find out where hot spots are and make plans to address them.  Also watch out for high humidity in your data center.  Unlike temperature, humidity disperses throughout a room, which means there is a room-wide risk of condensation that can cause shorts in sensitive electrical equipment. 
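    A minimal polling check against expected ranges might look like the sketch below.  The thresholds and the stubbed read_sensor() function are illustrative placeholders for whatever your sensors and your own operating envelope actually provide, not a recommendation.

    # Compare aisle readings against an assumed operating envelope and alert.
    # The ranges and the stubbed read_sensor() are placeholders for your
    # actual monitoring hardware and target conditions.
    COLD_AISLE_TEMP_RANGE_C = (18.0, 27.0)    # assumed acceptable inlet temperatures
    RELATIVE_HUMIDITY_RANGE = (20.0, 80.0)    # assumed acceptable %RH band

    def read_sensor(sensor_id):
        """Stub standing in for your monitoring hardware's API."""
        return 24.5, 45.0                     # (temperature in C, relative humidity in %)

    def check(name, value, low, high):
        if not low <= value <= high:
            print(f"ALERT: {name} = {value:.1f}, outside {low}-{high}")

    for sensor_id in ("cold-aisle-1", "cold-aisle-2", "hot-aisle-1"):
        temp_c, rh = read_sensor(sensor_id)
        check(f"{sensor_id} temperature (C)", temp_c, *COLD_AISLE_TEMP_RANGE_C)
        check(f"{sensor_id} humidity (%RH)", rh, *RELATIVE_HUMIDITY_RANGE)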

    Airflow & Quality: Proper airflow monitoring allows better management of data center performance and air cooling efficiency.  This also overlaps with temperature monitoring, as hot spots can sometimes be caused by restricted airflow.  Airflow and temperature should be viewed as early warnings of potential issues that, if not addressed, will lead to unacceptable conditions throughout the data center.  Airborne contaminants, including smoke and other sources, are becoming a greater concern, as recent studies have shown increased failure rates at higher particulate counts.  Smoke detection and suppression systems should be monitored and tested regularly to ensure that actual fire events are stopped quickly before causing catastrophic damage and that false alarms are kept to a minimum.

    Power & Electrical Systems: Power and UPS systems impact reliability more than most other threats, and knowing where anomalies exist is key to prevention.  Monitoring the power systems helps foreshadow the sags and spikes that can cause power-related issues.  This monitoring should extend from the utility to the data center and, if needed, down to the server.  In the same manner, UPS systems should have monitoring in place for all of their components: generators; fuel systems; each battery string (and individual batteries); and all the electrical gear that is expected to function automatically when needed.
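    As a rough sketch of what that foreshadowing could look like, the snippet below flags samples that stray a set percentage from nominal voltage.  The nominal value, tolerance and sample data are all assumptions for illustration; a real system would read from your metering equipment.

    # Flag voltage samples that deviate more than a tolerance from nominal,
    # as a crude stand-in for sag/spike detection.  All values are illustrative.
    NOMINAL_VOLTS = 480.0
    TOLERANCE = 0.05                                # flag anything more than 5% off nominal

    samples = [478.9, 481.2, 452.3, 480.5, 505.7]   # example feed of RMS voltage readings

    for i, volts in enumerate(samples):
        deviation = (volts - NOMINAL_VOLTS) / NOMINAL_VOLTS
        if abs(deviation) > TOLERANCE:
            kind = "sag" if deviation < 0 else "spike"
            print(f"Sample {i}: {volts:.1f} V ({deviation:+.1%}) - possible {kind}")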

    Monitoring Products: If you don't yet have environmental monitoring equipment and it isn't in your budget, it should be given priority.  A single alert about a rapidly developing condition can recover the investment in the crucial minutes it buys to prevent a partial or full outage.  Quality sensors that quickly convey data center conditions can be coupled with controls to determine what changes need to be made to prevent issues before they happen.  However, not all environmental systems should be expected to do this; some, such as leak detection, are reactive rather than proactive. 

    Monitoring Software: Products and sensors often come with a trustworthy monitoring application to give reliable information about your data center conditions.  As the industry evolves, the interfaces can increasingly be customized for your needs, the data can be accessed securely from remote locations, and alerts can be pushed anywhere.  Many consider DCIM part of the ultimate solution to environmental monitoring.  This is only partially true, and only if the DCIM has the capability to connect to the equipment.  The benefits can far exceed the limitations, as a robust DCIM package may be able to optimize power and space as well as overlay the expected cooling conditions. 

    Having a reliable monitoring system is the next step toward being surprised less often by environmental changes.  More than anything, getting a solution in place will allow better visualization of what you have; there are no perfect solutions, and having at least a notional idea of your data center is better than having no clue at all.

    Tuesday, Jul 7, 2015

    Choosing a Contractor

    Many clients are now taking on projects small and large, and getting the right contractor can lead to a successful project outcome.  Here are a few key items to consider.

    Expertise: choosing a good contractor isn't enough if the contractor isn't good with critical facilities and data centers.  Their definitions of quality and redundancy may differ quite significantly.  The expertise shouldn't come just from the company but from the individuals who will actually work on the project.  Even a contractor with data center experience may find a big difference between enterprise-sized greenfield projects and co-location retrofits carried out while maintaining uninterruptible services.  Be specific when asking about what you need.

    References: the past work and accomplishments of many contractors can be impressive, but getting a second opinion is helpful.  Previous customers and other references can tell you more about how the contractor handled issues, communication, timeliness and other details that may make the decision easier or shorten your list of viable candidates.  Repeat clientele can be very helpful references, as a trust may have developed between contractor and client over time that you might also be interested in establishing.  This can be especially helpful when you already know the forecast includes future projects, where having the same contractor on board can lead to a better success rate. 

    Budget: for a new project, cost is often the driver for many decisions, but your money shouldn't be the only consideration.  The contractor should also be financially solid, as having to switch contractors mid-project can be devastating to your budget and schedule.  This should be investigated no matter how big or seemingly successful they appear.

    Subcontractor management: contractors will hire others to complete specific portions of the work, and how those subcontractors are managed and overseen can be another key to a successful project.  Some clients like to be part of the process, from bidding to construction management, and knowing how you and the contractor will each engage the subcontractors can be crucial to schedule and budget.  Knowing how the contractor chooses or recommends a subcontractor can also help you understand how they uphold their reputation for quality outcomes. 

    Time & budget: timing and budget are likely the other largest factors in choosing a contractor.  They can also be the most difficult, as the contractor may agree to one aspect with stipulations on the other, such as not being able to start for six months.  Contractors can also look to leverage the schedule and cost as the project progresses, so learning how schedule slippage and cost overruns are handled is crucial.  If the contractor is coy about how they respond to unplanned problems, or isn't prepared with contingencies, those are risks to staying within budget or on schedule.

    Other considerations: data centers are each unique in their own fashion, and there are many aspects that may require special attention that is new to a contractor.  How the contractor would handle these special issues should be part of the vetting process.  Compliance with regulations or with your own standards should be brought up to see whether the contractor has any experience with requests that might be new to them - especially in your data center environment.