The cutting edge of processor technology is nothing short of amazing, and processing power continues to increase in line with Moore’s law. But one area that has not made the same progress happens to be the one where data centers most desperately need it to: storage.
There is a performance gap between the capabilities of today’s processors and the storage technologies that hold the data for those processors. But because an array doesn’t need to access all of its data all the time, there is a way to balance performance with cost.
Skew: Meeting demand for data
Some data is accessed more than other data—that differential is called “skew,” and it describes how unevenly the data in an array is used. The specific details vary for each array, but generally speaking, 20% of the data stored in an array receives 80% (or more) of the I/O requests.
In a nutshell, skew means that you can dramatically improve the performance of a given workload by speeding up access to the data with the highest access density. With smart software within the array that can detect data access patterns and redistribute data accordingly, you can significantly improve performance without moving all of the data onto solid state devices.
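To see why this works, here is a rough back-of-the-envelope sketch in Python. It assumes a hypothetical 80/20 skew and illustrative latency figures (0.1 ms for SSD, 10 ms for spinning disk); the numbers are for intuition only, not measurements from any real array.

```python
import random

random.seed(42)

def simulate_io(num_requests=10_000):
    """Generate I/O requests where ~80% target the 'hot' 20% of 100 blocks."""
    hits_hot = 0
    for _ in range(num_requests):
        if random.random() < 0.80:
            # 80% of requests go to the hot blocks (0-19)
            random.randrange(0, 20)
            hits_hot += 1
        else:
            # the remaining 20% go to the cold blocks (20-99)
            random.randrange(20, 100)
    return hits_hot / num_requests

# Illustrative (assumed) per-request latencies, in milliseconds.
SSD_LATENCY_MS, HDD_LATENCY_MS = 0.1, 10.0

hot_fraction = simulate_io()
all_hdd = HDD_LATENCY_MS
# Hybrid: hot blocks served from SSD, cold blocks from HDD.
hybrid = hot_fraction * SSD_LATENCY_MS + (1 - hot_fraction) * HDD_LATENCY_MS
print(f"hot fraction: {hot_fraction:.2f}")
print(f"avg latency, all-HDD: {all_hdd:.2f} ms; hybrid: {hybrid:.2f} ms")
```

Under these assumptions, placing just the hot 20% of data on flash cuts the average latency by roughly 80%, which is the whole appeal of exploiting skew.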
That brings us to hybrid flash arrays (HFAs).
HFAs are a simple enough concept: blending spinning disks with SSDs in a single array to boost the array’s performance without paying the premium for an all-flash array. Think of it as a supercharged minivan that gets off the starting line quickly but still has plenty of capacity for people and groceries. But as is always the case with flash, the reality is a little more complicated.
Hybrid flash arrays: For ultimate control
HFAs can use a mix of SSD, high-speed SAS, and/or slower nearline SAS (NL-SAS) drives within the same array, creating a multi-tier storage system. Getting the right mix is a highly custom process: your workload skew shows what you need out of your array, so you know exactly what to put in it. In addition to the I/O requirements, you also need to consider how data moves across tiers and how quickly the array needs to move it. All of it affects the right drive mix.
Whatever goes into an HFA, its success or failure depends upon processing power and software capability within the array to relocate data based upon access patterns. Multi-tier arrays can accommodate a wide variety of workloads, scale to evolving data access patterns, and provide the best price/performance ratio.
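As a rough illustration of what that relocation software does, here is a minimal Python sketch of one possible tiering policy (not any vendor’s actual algorithm): the array tracks access counts per extent and periodically places the hottest extents on the fastest tier that has room. The tier names, capacities, and extent counts are all assumed for the example.

```python
from collections import Counter

# Illustrative tiers: (name, capacity in extents), fastest first.
TIERS = [("ssd", 4), ("sas", 8), ("nl-sas", 16)]

def rebalance(access_counts: Counter) -> dict:
    """Place extents on tiers by access frequency, hottest first."""
    placement = {}
    # Rank extents from most- to least-accessed.
    ranked = [extent for extent, _ in access_counts.most_common()]
    start = 0
    for tier_name, capacity in TIERS:
        for extent in ranked[start:start + capacity]:
            placement[extent] = tier_name
        start += capacity
    return placement

# Synthetic skewed workload: extent0 is hottest, extent11 is coldest.
counts = Counter({f"extent{n}": 100 // (n + 1) for n in range(12)})
placement = rebalance(counts)
print(placement["extent0"])   # the hottest extent lands on the fastest tier
```

A real array does this continuously and at much finer granularity, demoting cooled-off extents as well as promoting hot ones, but the core idea is the same: let measured access patterns, not guesswork, decide where data lives.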
Four ways to measure value
There are four ways to measure the value of an HFA, but none of them are perfect. All depend on your workload analysis and what your customers need from your infrastructure. These four measurement points provide a reference to help you decide which array to use and how to measure its value once it is installed.
How to make the right decision
You’re constantly in the business of replacing servers as they reach the end of their life. But knowing what to invest in to future-proof your data center is tough. It depends almost entirely on analyzing your workload and mapping your current and future needs to the arrays best suited to them. Despite their incredible performance and low maintenance requirements, AFAs are not the be-all and end-all for every organization. A hybrid flash array may be exactly what you need to solve your business needs.
Reach out any time for a consultation. We solve data center problems every day, and we supplement our expertise with innovative partnerships so that we know how best to assist you.
If you like this post, check out The CEO’s Guide to Investing in Flash Storage for an overview of everything business decision makers need to know about data storage in their organizations!