Where and how will all that [Big] Data be stored?


Data volumes are exploding, in part, because many of the analog functions required to monitor and manage the physical world are quickly becoming digital. And that isn’t the only source of this new surge in data. From traditional ERP and CRM applications to emerging distributed mobile applications, the amount of data generated by organizations continues to skyrocket.

Add to that the data from embedded systems and devices (e.g., smartphones, MRI scanners) – often referred to as the Internet of Things. All of this data isn’t just “white noise” being generated by devices – it actually holds value for those who know how to extract it. For example, by searching through that data, businesses are able to provide more relevant content, refine their products, and improve interactions between people and devices.

Traditional storage solutions can’t sustain the volumes of data that organizations are now dealing with – there isn’t enough power, space, or time. The solution is something like a weight loss regimen for data storage: consume less and exercise more. Similarly, the best way to slim down your bloated data storage is with technologies that take a multi-dimensional approach to the problem. The right technologies will not only perform better, but also deliver more value in less space and eliminate the need for extra data copies.

One of the most disruptive technologies for slimming down data storage is solid state or flash storage. For decades, the best way to boost the performance of your applications was to add additional disk drives, or spindles, to your storage environment, even when you didn’t really need the added capacity. With solid state drives (SSDs) that use flash technology, you can get all of that performance in 10% of the physical space required by storage systems powered by hard disk drives (HDDs). Along with the savings on floor space, flash also provides a significant savings on utilities because solid state drives don’t spin, which means they don’t have to be powered or cooled in the same way that HDDs do.

But getting fit is not just about losing weight – it’s also about creating a healthier, more active version of yourself. Flash allows you to slim down the number of arrays and the resources used to maintain them, and it also transforms your storage capabilities. When performance matters most, nothing can beat an all-flash array built on a flash-optimized architecture. Flash can deliver double-digit gigabytes per second of random data throughput. It offers millions of input/output operations per second (IOPS) – orders of magnitude more than even the fastest HDDs can deliver. Fast, predictable access with sub-millisecond latency can improve end-user productivity. The difference between spinning disk drives and flash drives is like the difference between someone going out for a casual jog and a world-class sprinter.
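To put those numbers in perspective, here’s a rough back-of-envelope calculation. The per-drive and per-array figures below are illustrative assumptions for the sake of the math, not vendor specifications:

```python
# Illustrative comparison: how many spinning drives would it take to
# match the random I/O of a single all-flash array?
# Both figures below are ballpark assumptions, not vendor specs.
HDD_IOPS = 200                 # a fast 15K RPM drive, random I/O
FLASH_ARRAY_IOPS = 1_000_000   # a modern all-flash array

spindles_needed = FLASH_ARRAY_IOPS // HDD_IOPS
print(f"HDDs needed to match one flash array: {spindles_needed}")
# → HDDs needed to match one flash array: 5000
```

Even if the exact figures vary by workload and vendor, the gap is wide enough that the conclusion holds: matching flash performance with spindles means buying capacity, floor space, and power you don’t actually need.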

“Growth of data in the enterprise is staggering – more than two times the growth rate of processing power, network capacity, and even mobile phone adoption between now and 2020,” says Dave Pearson, IDC Canada’s research manager for enterprise storage. “Fortunately, not all data is created equal.  Identifying value in your enterprise data can improve your overall data ‘fitness,’ allowing you to put the right workload in the right place in your storage infrastructure.”

It’s also important to recognize that not all flash is created equal. While performance and space savings are immediate benefits, as you move more workloads to flash you can’t sacrifice the tier-1 data availability and scale that you’ve come to expect from mission-critical storage systems. Beyond next-generation storage media, it’s also key to take advantage of data services that eliminate wasted space.

“Advanced data reduction, compression, and deduplication technologies can help with the bloat of large amounts of low value data, with the end product moving into more cost efficient archive tiers.  The low-latency and high IOPS of flash can ensure that target-rich data workloads receive the horsepower they require, while reducing the footprint and energy utilization of today’s data centre,” Pearson says.
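The core idea behind deduplication is simple: fingerprint each block of data and store each unique block only once. A minimal sketch of that idea (a toy model, not any product’s actual implementation) might look like this:

```python
import hashlib

def dedupe(chunks):
    """Toy block-level deduplication: store each unique chunk once,
    and reconstruct the original stream from an ordered list of hashes."""
    store = {}   # hash -> chunk data, stored exactly once
    refs = []    # ordered hashes that reconstruct the logical stream
    for chunk in chunks:
        h = hashlib.sha256(chunk).hexdigest()
        if h not in store:
            store[h] = chunk     # first time we've seen this content
        refs.append(h)           # repeats cost only a reference
    return store, refs

data = [b"header", b"payload", b"payload", b"payload", b"footer"]
store, refs = dedupe(data)
print(len(data), "logical chunks ->", len(store), "stored chunks")
# → 5 logical chunks -> 3 stored chunks

# The original stream is fully recoverable from the references:
assert [store[h] for h in refs] == data
```

Real arrays do this at fixed or variable block granularity and inline with writes, but the space savings come from exactly this principle: repeated content is stored once and referenced many times.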

One of the things that consumes so much enterprise storage space is the multiple copies of data sets that are required across systems. These data sets are used for disaster recovery, test and development, data warehousing, and backups, and they are all based on the same original data set. However, they operate independently of each other and result in copy after copy. When companies rethink these discrete systems and consolidate them onto a highly scalable and accelerated flash array, they can take advantage of space-efficient snapshot mechanisms to take a full-fidelity virtual snapshot of a dataset, and then expose that snapshot to a new application or developer. In reality, no additional copies of the data are ever created, which can drive additional space savings of 6x or more.
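Those space-efficient snapshots typically rely on copy-on-write: a snapshot shares every block with its parent until a block is actually modified, so creating the "copy" is instant and consumes no space up front. Here’s a simplified sketch of the mechanism (class and method names are hypothetical, for illustration only):

```python
class CowVolume:
    """Toy copy-on-write volume: a snapshot shares all blocks with its
    parent and only stores a block once it diverges via a write."""
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}
        self.parent = parent

    def read(self, idx):
        if idx in self.blocks:               # this volume's own copy
            return self.blocks[idx]
        return self.parent.read(idx) if self.parent else None

    def write(self, idx, data):
        self.blocks[idx] = data              # only this block diverges

    def snapshot(self):
        return CowVolume(parent=self)        # zero blocks copied up front

base = CowVolume({0: b"prod-data", 1: b"more-data"})
dev = base.snapshot()              # instant, space-free "copy" for dev/test
dev.write(0, b"test-data")         # only block 0 is now duplicated
assert base.read(0) == b"prod-data" and dev.read(0) == b"test-data"
assert dev.read(1) == b"more-data" # block 1 is still shared with the base
```

A disaster-recovery copy, a test/dev clone, and a backup source can all be exposed this way from one physical dataset, which is where the multi-fold space savings come from.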

When George Carlin talked about stuff, he was really referring to the clutter an individual accumulates during his or her lifetime. But for businesses, stuff isn’t a collection of souvenir shot glasses from every trip you ever took – it’s a valuable resource that can be mined for essential information. If you can perpetually store and effectively utilize all that data, it can be the gateway to better decision-making and more profitable business ventures.

For a more in-depth buyer’s guide, click here.

Michael Bloom is Vice President, Storage Solutions (Canada) for Hewlett Packard Enterprise.
