5 disruptive storage technologies for 2020

Leading-edge technologies, including NVMe, storage class memory, and intent-based storage management, promise to change the way IT organisations store, manage and use data


For decades, storage technology progress was measured primarily in terms of capacity and speed. No longer.

In recent times, those steadfast benchmarks have been augmented, and even superseded, by sophisticated new technologies and methodologies that make storage smarter, more flexible and easier to manage.

Next year promises to bring even greater disruption to the formerly staid storage market, as IT leaders seek more efficient ways of coping with the data tsunami generated by AI, IoT devices and numerous other sources. Here's a look at the five storage technologies that will create the greatest disruption in 2020, as enterprise adoption gains ground.

Software-defined storage

Attracted by the lures of automation, flexibility, increased storage capacity and improved staff efficiency, a growing number of enterprises are considering a transition to software-defined storage (SDS).

SDS separates storage resources from their underlying hardware. Unlike conventional network-attached storage (NAS) or storage area network (SAN) systems, SDS is designed to operate on any industry-standard x86 system. SDS adopters benefit from smarter interactions between workloads and storage, agile storage consumption and real-time scalability.

"SDS technologies virtualise the available storage resources while also providing a simplified storage management interface that represents different storage pools as a unified storage resource," explains Cindy LaChapelle, principal consultant at tech research and consulting firm ISG.

SDS offers abstraction, mobility, virtualisation, and storage resource management and optimisation. The technology also requires managers to shift their view of hardware from the most important element of enterprise storage to a supporting player. In 2020, managers will deploy SDS for various reasons.

"Often, the goal is to improve operating expense (OpEx) by requiring less administrative effort," LaChapelle says. Solid-state drive (SSD) technologies are changing the way organisations use and manage their storage needs, making them prime candidates for a transition to SDS.

"These technologies provide organisations with greater control and configurability to enable the right level of performance and capacity while also optimising utilisation and controlling cost."

Selecting the least disruptive approach to SDS requires a clear and thorough understanding of application requirements for capacity and performance.

Potential adopters also need to honestly assess their organisations' ability to manage an SDS environment. Depending on the level of in-house expertise, an SDS appliance featuring prepackaged software and hardware often provides the best adoption course.

NVMe/NVMe-oF

Early flash storage devices were connected via SATA or SAS, legacy interfaces developed decades ago for hard disk drives (HDDs). NVMe (Non-Volatile Memory express), running on top of the Peripheral Component Interconnect express (PCIe) layer, is a far more powerful communications protocol, targeted specifically at high-speed flash storage systems.

Supporting low-latency commands and parallel queues, NVMe is designed to exploit the performance of high-end SSDs.
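
The practical effect of those parallel queues can be observed even from user space. The Python sketch below compares a single stream of reads against many in-flight reads; the file path is a placeholder assumption, and a real benchmark would use direct I/O against the raw device rather than a file in the page cache.

```python
# Sketch: why queue depth matters. NVMe supports up to 64K queues of
# 64K commands each; keeping more requests in flight is what lets an
# SSD approach its rated throughput. PATH is hypothetical -- point it
# at any large file to try the pattern safely.
import os
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "/tmp/testfile"   # placeholder; use a real file or block device
BLOCK = 4096             # 4 KiB per read
COUNT = 4096             # total requests (16 MiB)

def read_block(fd: int, i: int) -> int:
    return len(os.pread(fd, BLOCK, i * BLOCK))

fd = os.open(PATH, os.O_RDONLY)
try:
    for workers in (1, 16):  # effective queue depth 1 vs. 16
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            done = sum(pool.map(lambda i: read_block(fd, i), range(COUNT)))
        elapsed = time.perf_counter() - start
        print(f"workers={workers:2d}  {done / elapsed / 2**20:8.1f} MiB/s")
finally:
    os.close(fd)
```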

"It not only offers significantly higher performance and lower latencies for existing applications than legacy protocols, but also enables new capabilities for real-time data processing in the data centre, cloud and edge environments," says Yan Huang, an assistant professor of business technologies at Carnegie Mellon University's Tepper School of Business.

"These capabilities can help businesses stand out from their competition in the big data environment." NVMe is particularly valuable for data-driven businesses, especially those that require real-time data analytics or are built upon emerging technologies.

The NVMe protocol is not limited to connecting flash drives; it can also serve as a networking protocol. The arrival of NVMe-oF (NVMe over Fabrics) now allows organisations to create a very high-performance storage network with latencies that rival direct-attached storage (DAS). As a result, flash devices can be shared, when needed, among servers.

Together, NVMe and NVMe-oF represent a leap forward in terms of performance and low latency relative to predecessors, such as SATA and SAS.

"This enables new solutions, applications and use cases that were previously unattainable or cost prohibitive," says Richard Elling, principal architect at storage manufacturer Viking Enterprise Solutions.

A lack of robustness and maturity has so far limited NVMe/NVMe-oF adoption. "With enhancements, such as the newly announced NVMe over TCP, we see the adoption of new applications and use cases accelerating dramatically," Elling notes.
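
On Linux, attaching a remote NVMe/TCP subsystem is handled by the standard nvme-cli tool; the sketch below drives it from Python. The target address, port and subsystem NQN are placeholder assumptions, and the host needs the nvme-tcp kernel module loaded plus root privileges.

```python
# Sketch: discovering and connecting to an NVMe-oF target over TCP
# using Linux's nvme-cli, driven from Python. All target details are
# hypothetical placeholders.
import subprocess

TARGET = "192.0.2.10"                       # hypothetical target address
NQN = "nqn.2019-01.example.com:subsys1"     # hypothetical subsystem NQN

# List the subsystems the target exports
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET, "-s", "4420"],
    check=True)

# Attach one subsystem; it then shows up locally as /dev/nvmeXnY,
# usable like a direct-attached drive
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET, "-s", "4420", "-n", NQN],
    check=True)
```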

"Although experiencing only modest growth in this early adoption period, we now see NVMe and NVMe-oF hitting their stride and accelerating their deployment in 2020."

Computational storage

An approach that allows for some processing to be performed at the storage layer, rather than in main memory by the host CPU, computational storage is attracting the interest of a growing number of IT leaders.

Emerging AI and IoT applications require ever greater amounts of high-performance storage, as well as additional compute resources, yet moving data to the host processor is both costly and inherently inefficient.

"Due to high-performance SSDs, the trend of moving compute closer to the storage has been going on for several years," says Paul von-Stamwitz, senior storage architect at technology incubator Fujitsu Solutions Labs. Observers believe that 2020 will be the year that the method finally enters the IT mainstream.

Computational storage can be used in several different ways, "from using small edge devices to filter data before sending it to the cloud to storage arrays providing data sorting for databases to rack-level systems transforming large datasets for big data applications," von-Stamwitz explains.
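
Conceptually, the filtering von-Stamwitz describes works like the sketch below. The ComputationalDrive class and its scan method are hypothetical stand-ins rather than a real device SDK (vendor APIs vary and the SNIA computational storage interfaces are still maturing), but the division of labour is the point: the predicate runs near the data, so only matching records cross the bus to the host.

```python
# Conceptual sketch of predicate pushdown to a computational drive.
# Everything here is hypothetical illustration, not a product API.
from typing import Callable, Iterator

class ComputationalDrive:
    """Hypothetical device that can run a filter near the data."""

    def __init__(self, records: list[bytes]):
        self._records = records  # data resident on the drive

    def scan(self, predicate: Callable[[bytes], bool]) -> Iterator[bytes]:
        # In a real device this predicate would execute on the drive's
        # own processor; only matches are returned to the host
        return (r for r in self._records if predicate(r))

drive = ComputationalDrive([b"INFO ok", b"ERROR disk", b"INFO ok", b"ERROR net"])

# The host receives 2 records instead of 4; at petabyte scale, that
# avoided data movement is the entire value proposition
errors = list(drive.scan(lambda rec: rec.startswith(b"ERROR")))
print(errors)
```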

NVMe and containers are computational storage's primary enablers. "Therefore, if they have not already done so, IT managers should plan to transition to NVMe- and container-based infrastructures," von-Stamwitz advises. "In addition, managers can identify applications that could benefit most from the improved efficiencies of computational storage and engage with the appropriate vendors," he suggests.

Storage-class memory

Widespread adoption of storage class memory (SCM) has been predicted for several years, and 2020 may be the year it finally happens. While Intel Optane, Toshiba XL-Flash and Samsung Z-SSD memory modules have all been available for some time, their impact hasn't exactly been earth-shattering so far.

"The big difference now is that Intel has gotten their Optane DCPMM persistent memory module version working," says Andy Watson, CTO of enterprise storage software developer Weka.io. "That’s a game-changer."

The Intel device blends the characteristics of fast, yet volatile, DRAM with slower, but persistent, NAND storage. This one-two punch aims to boost users' ability to work with large datasets, providing both the speed of DRAM and the capacity and persistence of NAND.
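
The byte-addressable programming model behind that combination can be approximated with an ordinary memory-mapped file. Real SCM deployments map Optane DCPMM through a DAX-enabled filesystem, typically via Intel's PMDK libraries; the runnable stand-in below only illustrates the access pattern, with a placeholder path.

```python
# Sketch of byte-addressable persistence, approximated with mmap.
# On true persistent memory the file would live on a DAX mount and
# flush() would map to cache-line flush instructions.
import mmap

PATH = "/tmp/pmem-demo"  # placeholder; a DAX mount on real SCM
SIZE = 4096

# Create and size the backing file once
with open(PATH, "wb") as f:
    f.truncate(SIZE)

f = open(PATH, "r+b")
buf = mmap.mmap(f.fileno(), SIZE)

# Update a 5-byte range in place -- no block-sized read/modify/write
# cycle, which is the key contrast with NAND flash
buf[0:5] = b"hello"
buf.flush()  # persist the change

print(bytes(buf[0:5]))
buf.close()
f.close()
```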

SCM is not merely faster than NAND-based flash alternatives; it's roughly 1,000 times faster. "Microsecond latency, not millisecond," Watson says. "It's going to take some time to wrap our collective heads around what this will mean for our applications and our infrastructure," he adds.

SCM's initial big play will be extending memory, Watson predicts, noting that third-party software already allows in-memory applications to use Optane to achieve footprints of up to 768TB.

Data centres planning to adopt SCM will be restricted to servers running Intel's latest-generation Cascade Lake CPUs, a limitation that threatens to mute the technology's immediate impact.

"But the ROI may turn out to be so irresistible that it could drive a wave of data center upgrades to embrace the unfolding opportunities associated with this major sea change," Watson says.

Intent-based storage management

Building on SDS and other recent storage innovations, intent-based storage management is expected to improve the planning, design and implementation of storage architectures in 2020 and beyond, particularly for organisations coping with mission-critical environments.

"Intent-based approaches ... can deliver the same benefits we’ve seen in networking, like rapid scaling, operational agility and adoption of emerging technology, years earlier—for both existing and new applications," says Hal Woods, CTO of enterprise storage software developer Datera.

He adds that the approach can also compress deployment time and administrative effort by orders of magnitude, compared to conventional storage administration, while being far less error prone.

With intent-based storage management, a developer who specifies a desired outcome (such as "I need fast storage") isn't consumed with administrative overhead and can provision containers, microservices or conventional applications more rapidly.

"Infrastructure operators can then manage to the needs of the application and the developer, including performance, availability, efficiency and data placement, and allow the intelligence in the software to optimise the data environment to meet application needs," Woods says.

Additionally, with intent-based storage management, a developer can simply adjust storage policies, rather than spend days manually tuning each array.
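
Woods' "I need fast storage" example maps naturally onto a small policy engine. In the sketch below, every intent class, pool name and threshold is an illustrative assumption, but it captures the division of labour: the developer states an outcome, and the software, not the administrator, chooses the placement.

```python
# Minimal sketch of intent-based provisioning against a hypothetical
# control plane. Intent classes, pools and thresholds are illustrative.
INTENTS = {
    "fast":     {"media": "nvme", "min_iops": 100_000},
    "standard": {"media": "ssd",  "min_iops": 10_000},
    "archive":  {"media": "hdd",  "min_iops": 500},
}

POOLS = [
    {"name": "pool-nvme-a", "media": "nvme", "iops": 400_000, "free_gb": 800},
    {"name": "pool-ssd-b",  "media": "ssd",  "iops": 60_000,  "free_gb": 5_000},
    {"name": "pool-hdd-c",  "media": "hdd",  "iops": 1_200,   "free_gb": 90_000},
]

def provision(intent: str, size_gb: int) -> str:
    """Translate a stated outcome into a concrete placement."""
    spec = INTENTS[intent]
    for pool in POOLS:
        if (pool["media"] == spec["media"]
                and pool["iops"] >= spec["min_iops"]
                and pool["free_gb"] >= size_gb):
            pool["free_gb"] -= size_gb
            return pool["name"]
    raise RuntimeError(f"no pool satisfies intent {intent!r}")

# The developer only ever says what they need, never where it goes
print(provision("fast", 100))  # -> pool-nvme-a
```

Changing a policy means editing the intent definition, after which the system re-optimises placement; no per-array tuning is involved.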

A continuous and autonomous cycle of deployment, consumption, telemetry and analytics, built on SDS technology, makes intent-based storage possible. "The SDS system can then employ AI/ML techniques to continuously ensure the customer-specified intent is being met, and even allow the intent to be non-disruptively adjusted as the AI/ML engine provides feedback on improving the customer's environment," Woods says.

The downside to intent-based storage management, as with any disruptive technology, is that the effort of deployment must be weighed against the promised value. "Intent-based storage is not a one-size-fits-all technology," Woods notes.

"It delivers the greatest value in disaggregated, at-scale, mission critical environments where delivering developer velocity and operational agility will have the largest business impact." For smaller, less critical environments, approaches such as direct-attached storage or a hyper-converged infrastructure are often sufficient, he says.
