By David Marshall
A Contributed Article by Greg Scherer, Vice President & Chief Technology Officer of QLogic
With the arrival of 2016, I'd like to share my thoughts on four key technology and market trends that will be driving the transaction-heavy data center into the New Year: open source technology, faster speeds and feeds, flash and solid state drive (SSD) solutions, and ease of management tools.
1. Open Source Technology
What began with Linux in the early 1990s has exploded with OpenStack, Open Compute Project (OCP), Data Plane Development Kit (DPDK) and applications like Hadoop, memcached, etc. The proliferation of open source technology will be driving IT development in 2016 more than ever. We're taking it seriously in our labs, and think you should too.
Here are three of the most notable open trends to keep your eye on in 2016:
2. Faster Speeds and Feeds
Today, the majority of data center traffic is "east-west," meaning that servers, storage and appliances all need to be interconnected in a manner that facilitates peer-to-peer conversations, as opposed to pushing traffic "north" to have conversations. This is not unlike the transformation that the telephone system went through from "operator-assisted" calls to "direct-dial" peer conversations. From greater use of Storage Area Networks (SANs) that couple virtual servers to their shared storage, to scale-out storage environments like Hadoop/MapReduce, east-west traffic patterns are driving the need for fatter interconnection (bandwidth) and faster interconnection (lower latency).
3. Flash and SSD Storage Solutions
Flash and SSD storage solutions are transforming the data center, and will be a force to contend with in the immediate future as virtualization proliferates and performance-challenged enterprise applications create bottlenecks within the infrastructure. Welcome to the era of flash/SSD.
My conclusion is that flash is THE emerging trend, pushing aside the hard disk drive (HDD) incumbent; this is supported by a plethora of computing benchmarks and statistics. Because we are space-constrained, I'll give you my top three proof points:
The growing prevalence of flash storage is one of the key drivers behind the critical requirement for higher speeds and feeds (my number two trend). Memory-intensive, I/O-starved, transaction-heavy applications beg for microsecond response times and, simply put, are fired up by flash. But flash needs network performance to get the job done right. This leads us to a symbiotic trend: NVMe. NVMe provides a standards-based approach for PCI Express (PCIe) SSD access that significantly improves performance by reducing latency and streamlining the command set. Flash is fast. NVMe can make it even faster.
There are two approaches to NVMe that are looming brightly on the 2016 horizon:
4. Ease of Management
Mission-critical, private cloud and flash-accelerated workloads are transforming the data center, and because of this you're going to see more demands than ever before for performance, Quality of Service (QoS) and infrastructure reliability. Therefore, centralized management is one of the most critical trends for a technology-mature data center to acknowledge in 2016. "Cool technology" as laid out in this blog is only cool if it is easily managed. It must be "transparently" managed, or at least be managed in the context of commonly used tools that are native to the operating system (OS) or application.
QLogic Corp. and Brocade Communications Systems, Inc. demonstrated the first NVMe over Fabrics solution using Fibre Channel (FC) as the fabric.
The demonstration is based on the draft specification of NVMe over Fabrics under definition by the NVM Express, Inc. organization and the draft NVMe over FC (FC-NVMe) standard under definition by T11. The solution was demonstrated at the Gartner Data Center, Infrastructure & Operations Management Conference in Las Vegas.
NVMe provides a standards-based approach for PCIe SSD access that improves performance by reducing latency and streamlining the command set while providing support for security and data protection. NVMe over Fabrics defines a mechanism to utilize these devices in large scale storage deployments and provides investment protection by allowing the latest in innovations and advances in low latency SSD flash to be used over FC fabrics. This enables the NVMe storage devices to be shared, pooled and managed more effectively. The QLogic and Brocade FC-NVMe proof of concept (POC) is aimed at providing a foundation for lower latency and increased performance, while providing improved fabric integration for flash-based storage.
"Next-generation data intensive workloads are utilizing low latency NVMe flash-based storage to meet ever increasing user demand," said Vikram Karvat, VP of products, marketing and planning, QLogic. "By combining the lossless, highly deterministic nature of FC with NVMe, FC-NVMe targets the performance, application response time, and scalability needed for next generation data centers, while leveraging existing FC infrastructures. QLogic is pioneering this effort with industry leaders, which in time, will yield significant operational benefits to data center operators and IT managers."
"The advent of FC-NVMe is critical to the evolution of the data center and Brocade has been at the forefront, being both a key contributor and editor for the T11 FC-NVMe architecture," said Jack Rondoni, VP of storage networking, Brocade. "As the data center continues to evolve, so does FC-based technology. FC plays a vital role in connecting flash-based devices and FC-NVMe will further strengthen this correlation."
"The NVM Express organization is pleased to see the first demonstration of NVMe over Fabrics using FC," said Amber Huffman, president, NVM Express. "We are excited to collaborate with the T11 standards organization to develop a robust standard that works well for FC-based storage environments."
"FC-NVMe extends the reach of existing FC investments, enabling data centers to continue to leverage the tens of billions of dollars of installed FC equipment," said Seamus Crehan, president of Crehan Research, Inc. "The popularity of FC connected flash-based storage continues to grow strongly, ensuring the longevity and persistence of FC connectivity in the data center of the future."
By Sarah Wilson, Dave Raffo, Garry Kranz, Sonia Lelii, Rich Castagna, Andrew Burton, & Carol Sliwa
Find out what's hot and what's not-so-hot on our list of data storage technology trends for the coming year.
Here we go again -- our list of Hot Techs for 2016! For the past 13 years, we've honored the best and brightest technologies of the upcoming year. And, as always, we are proud to present a batch of technologies we believe will make a big impact.
As in years past, our list leans toward practicality -- most of our hot techs are "newish" rather than futuristic, because we want to focus on technologies mature enough to be proven and generally available.
Buckle up and get ready for a ride through our picks for what's hot in storage for 2016.
Copy data management
Managing numerous physical copies of the same data from multiple tools remains expensive, continues to be a management headache and even poses a security threat. That's why copy data management (CDM), which uses a single live clone for backup, archiving, replication and other data services, is one of the storage technology trends poised for stronger adoption in 2016.
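The single-live-clone idea at the heart of CDM can be sketched in miniature: one physical "golden" copy of the data, with each backup, archive or test "copy" a lightweight copy-on-write view over it. This is a hypothetical illustration of the cost model, not any vendor's implementation:

```python
class CopyDataStore:
    """Toy copy-data-management sketch: one physical 'golden' copy, with
    each backup/test/archive 'copy' as a copy-on-write view over it."""

    def __init__(self, golden):
        self._golden = dict(golden)  # the single physical copy: name -> bytes
        self._views = {}             # view name -> overlay of changed items

    def create_view(self, name):
        self._views[name] = {}       # a virtual copy: duplicates no data

    def read(self, view, name):
        # A view sees its own changes first, else the shared golden copy.
        return self._views[view].get(name, self._golden.get(name))

    def write(self, view, name, value):
        self._views[view][name] = value  # only the delta is stored

    def physical_bytes(self):
        golden = sum(len(v) for v in self._golden.values())
        deltas = sum(len(v) for o in self._views.values() for v in o.values())
        return golden + deltas
```

Ten virtual copies of a terabyte cost only the deltas they write, rather than ten terabytes of duplicated data -- which is exactly the expense and sprawl CDM vendors claim to eliminate.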
The market has grown to include startups Cohesity Inc. and Rubrik, which have recently unveiled products, along with traditional vendors such as Catalogic Software, Commvault, Hitachi Data Systems and NetApp. Research firm IDC estimates copy data will cost IT organizations nearly $51 billion by 2018.
Actifio is the pioneer in this space with its copy data virtualization platform that decouples data from infrastructure and consolidates siloed data protection processes.
Cohesity launched its Cohesity Data Platform designed to converge all secondary storage workloads with an Intel-based 2U appliance that serves as a building block for its scale-out architecture. Its Cohesity Open Architecture for Scalable, Intelligent Storage (OASIS) software includes quality of service management to converge analytics, archiving and data protection on a single platform.
Rubrik came out with its data management product in 2015, selling a 2U appliance with built-in software that performs backup, deduplication, compression and version management. Hitachi leverages its Hitachi Data Instance Director (HDID) and the Hitachi Virtual Storage Platform to help reduce copies.
CDM greatly differs from traditional storage management because it streamlines a silo process in which customers use multiple tools from multiple vendors, particularly for data protection.
"Today, there is a bunch of fragmentation in secondary storage," said Cohesity CEO Mohit Aron. "A customer goes and buys a bunch of different products from multiple vendors and somehow has to interface them together manually, managing them through multiple UIs. That becomes a major manageability headache."
Aron acknowledged the evolution of CDM products with varying capabilities.
"Cohesity Data Platform converges all your data protection workflows on one appliance," Aron said. "We have a single pane of glass that can be used to manage all these workloads. The analogy I use is that our infrastructure is similar to what Apple did with the iPhone. We are building the infrastructure and the platform that can deploy some native applications to solve these customer use cases. In the future, we want to expand and have other vendors and even third parties write software on our platform."
"There are three kinds of companies who say they do copy data management," said Ash Ashutosh, founder and CEO of Actifio. "First, there are backup guys. They take snapshot management and put lipstick on it and call it copy data management. Then you have guys who say ‘if you have 14 storage devices, buy ours as the fifteenth.' What we do is different. We're completely independent of infrastructure. We want to manage data from the time it's created across its entire lifecycle. We provide instant access, and manage data to scale, regardless of where it is."
The goal for all these products is to maintain a balance between safe and accessible data by reining in the number of rogue copies of sensitive data created via conventional data protection platforms.
Erasure coding
Growing adoption of object storage, cloud-based backup storage and the emergence of high-capacity hard disk drives (HDDs) have turned up the temperature on erasure coding over the past few years, and it is projected to be one of the hot storage technology trends in 2016. Petabyte- and exabyte-scale data sets make the use of RAID untenable, said George Crump, president of IT analyst firm Storage Switzerland.
"As we move into (using) 6 TB and 8 TB drives, erasure coding is the only technology that can provide data protection feasible for larger volumes of data. If you put high-capacity drives in an array, you're looking at weeks of recovery with RAID. With erasure coding, you're looking at hours," Crump said.
Erasure coding uses a mathematical formula to break data into multiple fragments, and then places each fragment in a different location within a storage array. Redundant data components are added during the process, and a subset of the components is used to reproduce original data should it get corrupted or lost.
The goal of erasure coding is to enable faster drive rebuilds. The process of copying data and scattering it across multiple drives is similar to RAID. However, erasure coding differs from RAID in scale and data longevity. If data gets corrupted or lost, only some of the "erased" fragments are needed to reconstruct the drive. The technique also preserves data integrity by tolerating multiple drive failures without performance degradation.
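The fragment-and-rebuild mechanics described above can be sketched with a toy Reed-Solomon-style code using polynomial interpolation over a small prime field. This is illustrative only; production systems use optimized GF(256) arithmetic so every symbol fits exactly one byte:

```python
# Toy k-of-n erasure code via Lagrange interpolation over GF(257).
P = 257  # a prime just larger than a byte's range of values

def _lagrange_at(points, x0):
    """Evaluate at x0 the unique polynomial through `points`, mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj == xi:
                continue
            num = num * (x0 - xj) % P
            den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

def encode(data, k, n):
    """Split `data` into n fragments; any k of them can rebuild it."""
    data = data + b"\x00" * (-len(data) % k)      # pad to a multiple of k
    frags = {x: [] for x in range(1, n + 1)}
    for i in range(0, len(data), k):
        col = data[i:i + k]                       # k data bytes per stripe
        pts = list(zip(range(1, k + 1), col))     # fragments 1..k hold data,
        for x in range(1, n + 1):                 # fragments k+1..n hold parity
            frags[x].append(col[x - 1] if x <= k else _lagrange_at(pts, x))
    return frags

def decode(frags, k, length):
    """Rebuild the original bytes from any k surviving fragments."""
    survivors = list(frags.items())[:k]
    out = bytearray()
    for c in range(len(survivors[0][1])):
        pts = [(x, vals[c]) for x, vals in survivors]
        out.extend(_lagrange_at(pts, x) for x in range(1, k + 1))
    return bytes(out[:length])
```

With k=3 and n=5, any two of the five fragments can be lost and the data still reconstructs from the rest -- the property that makes rebuilds of multi-terabyte drives tractable compared with a full RAID rebuild.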
Today, the use of erasure coding is considered table stakes for object storage providers, including leading vendors such as Amplidata (acquired by HGST), Caringo, IBM Cleversafe and Scality. But, block and file storage vendors are getting in on the action as well. Hyper-converged array vendor Nutanix in July integrated proprietary EC-X erasure coding in a version upgrade to its Nutanix Operating System. Scale-up vendor Nexenta Systems added support for block and object storage in a version upgrade to its NexentaEdge software in May.
Erasure coding is the core data protection mechanism for cloud-based object storage because it scales to protect vast amounts of data. Thus far, users are moving data to the cloud mostly for specific use cases such as backup and active archiving, a trend that is expected to keep rising.
"Erasure coding is the type of design that's ideal for an object storage system: a scale-out, multi-node storage infrastructure. It is a way to provide RAID-like protection across nodes, instead of contained within a single storage system," Crump said.
Next-generation storage networking
Flash and virtualization are key drivers fueling the rise of next-generation storage networking as a storage technology trend, whether it's Fibre Channel (FC), Ethernet or InfiniBand.
Shipments of 16 Gigabit per second (Gbps) FC switches and adapters should remain hot next year, while 32 Gbps gear starts to warm up. Brocade and Cisco will focus their roadmaps on 32 Gig switches. QLogic got the ball rolling this fall with 16 Gbps/Gen 5 FC adapters that customers can upgrade to 32 Gbps/Gen 6 in 2016.
Vikram Karvat, vice president of products, marketing and planning at QLogic, said flash storage vendors were "banging down the door" for 16 Gbps quad-port FC adapters, capable of delivering 16 lanes of PCI Express 3.0, to address the demands of virtualization, analytics and transaction-heavy workloads.
"This level of performance isn't for everybody, but when you need it, you need it," said Karvat. "Ethernet is very good at certain things. I haven't got a bias one way or the other. But, there are certain workloads that Fibre Channel has been tuned for. It just works."
Casey Quillin, director of SAN, network security and data center appliance market research at Dell'Oro Group, said 16 Gbps FC has largely been a switch story to date because there weren't many 16 Gbps ports on servers or storage arrays. He expects 16 Gbps FC adapters to play "catch up" next year and reach nearly 50% of total FC port shipments by the end of 2016.
Quillin said Brocade is working with FC adapter companies to "make sure the ecosystem is better rounded out" with 32 Gbps than it was with 16 Gbps. But, he still expects the ramp to 32 Gbps to be slower than the migration to 16 Gbps.
The main trend in Ethernet-based storage networking will be 25 Gigabit switch and adapter chips with ports that enable companies to use the same class of cables they deployed with 10 Gigabit Ethernet (10 GbE). The original Ethernet roadmap called for a jump from 10 GbE to 40 GbE, but 40 GbE technology required an upgrade to thicker, more expensive cables.
Networking vendors rallied around standards for new single-lane 25 GbE switch and adapter chips in response to the needs of hyperscale cloud service providers. The ports on the new 25 GbE chips use the same number of pins and lanes on the server PCIe bus as 10 GbE ports do. The roadmap extends to 50 GbE and 100 GbE, with the latter using four lanes of 25 GbE.
"The big advantage of 25 (GbE) to 50 (GbE) is you don't have to replace what you've got to get to 100. It's a much simpler progression of getting higher performance without adding a lot of cost. That's why it's going to take off," said Marc Staimer, president of Dragon Slayer Consulting. "The next gen is going to be 25 (GbE) to 50 (GbE); 40 Gig's going to end up dying on the vine."
Networking options are already available for both speeds. Dan Conde, an enterprise networking analyst at Enterprise Strategy Group, said users are deciding whether to go to 25 GbE or 40 GbE based on vendor support and cost savings.
Meanwhile, InfiniBand continues to focus on high-performance computing (HPC). The current dominant speed is 56 Gbps, but the transition to 100 Gbps should heat up in 2016 fueled by HPC, big data and Web 2.0 applications, according to Kevin Deierling, vice president of marketing at Mellanox Technologies.
Sergis Mushell, a research director at Gartner Inc., said flash will give users reason to upgrade to next-generation storage networking. "Because flash is going to drive higher IOPS, bandwidth and latency are becoming more and more important. If you really want to get the value out of the flash, you need lower latency and higher bandwidth," he said.
Yet, more than higher bandwidth, the most prominent storage networking trend in 2016 could be the emergence of products supporting non-volatile memory express (NVMe) over FC, Ethernet or InfiniBand fabrics, according to Mushell. He said the lighter NVMe protocol layer reduces the command set to address the array and improves performance.
Deierling said the ever-increasing amount of data that must be available in real time will start to drive software-defined flash storage utilizing remote direct memory access (RDMA). He said flash storage needs fast RDMA-capable interconnects, where the higher-speed networking comes into play.
Object storage
We first pronounced object storage a hot technology in 2012, and it's even hotter now. With more complete offerings from vendors and concrete use cases defined, the technology is poised to make a bigger splash among storage technology trends in 2016.
Unlike file systems, object storage systems store data in a flat namespace with unique identifiers that allow data to be retrieved without a server knowing where that data is located. The flat namespace also allows a far greater amount of metadata to be stored than can be stored on a typical file system, making tasks like automation and management simpler for the administrator. These days, the technology is being used for long-term data retention, backup and file sharing.
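The flat-namespace, ID-plus-metadata model is easy to sketch. The following is a minimal, hypothetical in-memory illustration -- real object stores add REST access, replication and erasure coding on top of this shape:

```python
import hashlib

class ObjectStore:
    """Minimal object-store sketch: a flat namespace of unique IDs with
    per-object metadata -- no directories, no paths."""

    def __init__(self):
        self._objects = {}  # object ID -> {"data": ..., "meta": ...}

    def put(self, data, **metadata):
        # Content-derived ID: unique and independent of physical location.
        # (Many real stores use UUIDs or caller-chosen keys instead.)
        oid = hashlib.sha256(data).hexdigest()
        self._objects[oid] = {"data": data, "meta": dict(metadata)}
        return oid

    def get(self, oid):
        return self._objects[oid]["data"]

    def head(self, oid):
        return self._objects[oid]["meta"]

    def find(self, **query):
        # Metadata queries -- the kind of automation and management task
        # that rich object metadata makes simple compared with walking a
        # file-system tree.
        return [oid for oid, obj in self._objects.items()
                if all(obj["meta"].get(k) == v for k, v in query.items())]
```

Note that retrieval needs only the ID, never a path or a server location, which is what lets an object system spread data across nodes transparently.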
Until recently, object storage system options were limited -- most were systems that used a REST-based protocol on proprietary hardware. "Now, object vendors are packaging systems in such a way that traditional IT can take advantage of them," said Crump. "They're providing more protocol access like NFS, CIFS and iSCSI, and they're also providing more cost-effective back ends."
Some of today's vendors are focusing more on the software so that users can select their own hardware for a lower cost and easier integration into the main data center. Object storage software vendor Caringo, for example, in September launched FileFly software, which allows users to move their data between object storage and file systems.
"Broad adoption has to be in the legacy data center, and the legacy data center is seeing what cloud providers are doing and adopting that capability into that use case," Crump said.
This is also demonstrated by HGST's March acquisition of object vendor Amplidata, and IBM's October acquisition of Cleversafe -- signs that legacy vendors realize how important object technology is for backup and archiving strategies.
One of the main drawbacks to object technology is latency introduced due to the amount of metadata. But the most obvious use cases are ones where performance is not a primary concern. In-house file sync and share, for example, is becoming more popular as a means to reduce shadow IT and increase business productivity.
We also saw an increased interest in big data lakes over the past year. The addition of multi-protocol support from many vendors means object storage is now extremely suitable for housing this data because of its low-cost, scalable nature.
"The biggest problem that was holding it back was nobody was going to buy object storage just because it was object storage. It had to solve a problem and now we've better identified what those problems are," Crump said.
Software-defined storage appliances
After two years of non-stop talk about software-defined storage, vendors are realizing even the best storage software still requires good hardware to work.
The pendulum began swinging back to hardware in 2015. We saw startup Savage IO release a hardware array built to run somebody else's storage software. Software-defined storage products such as EMC's ScaleIO and Cloudian HyperStore came out on appliances. Dell came out with its Blue Thunder project that makes its hardware available for other vendors' storage software, and lined up VMware, Microsoft, Nutanix, Nexenta and Red Hat as partners.
SanDisk launched the InfiniFlash IF100, a flash-only array that runs other vendors' software and signed up software-defined storage vendor Nexenta as one of its first partners.
The hardware doesn't even have to be new to be a part of this trend. Curvature Solutions will even sell used storage bundled with DataCore SANsymphony-V, which was software-defined storage before it became cool.
With more hardware options available, vendors' unabashed claims of being software-defined began subsiding. "We're definitely not software-defined storage, since we include a rack-mounted appliance," said Brian Biles, founder and CEO of Datrium, when the startup launched with its DVX Server Flash Storage System in July. When was the last time you heard a storage vendor say that? Datrium does have DVX software, but it only runs on its storage. Still, vendors in the past few years might have tried to position that type of setup as software-defined storage.
Savage IO took the notion of a storage appliance up a few notches. The SavageStor 4800 is a 4U, 48-drive system with 12-core processors that supports Fibre Channel, InfiniBand and solid-state drives. It is designed for high-performance computing, big data analytics and cloud storage. However, Savage IO doesn't develop software -- SavageStor must run either commercial storage management software or open source applications, such as Lustre, OpenStack or CentOS. "This is a Ferrari powertrain you can match up to your software if you need that type of performance," John Fithian, Savage IO's director of business development, said of SavageStor.
The EMC ScaleIO Node and Cloudian HyperStore FL3000 appliances package software originally designed as software-defined storage onto hardware for customers who don't want to build their own storage. And that's apparently most customers.
"The mainstream storage buyer still wants an integrated appliance," said Ashish Nadkarni, IDC program director for enterprise storage and servers. "They want to benefit from software-defined storage, but aren't ready to trade that for the comfort of having it all on one box."
We certainly haven't heard the last of software-defined storage, or software-defined technology in general. But we expect the hardware that actually stores the data to receive its fair share of attention now.
By Chris Mellor
Brocade and QLogic team up for a proof of concept demo
FC-NVMe has been demoed over Fibre Channel by Brocade and QLogic. The system effectively takes Non-Volatile Memory Express, the standardised PCIe bus protocol for solid state memory, and deploys it as a network fabric over Fibre Channel, external to a server's PCIe bus.
The idea seems to be that servers can enjoy access to external Fibre Channel-connected storage arrays, fitted with SSDs, at PCIe bus speeds.
What we have here is essentially a QLogic and Brocade FC-NVMe proof of concept, "aimed at providing a foundation for lower latency and increased performance [and] improved fabric integration for flash-based storage".
The demonstration is based on the draft specification of NVMe over Fabrics, under definition by the NVM Express organisation, and the draft NVMe over Fibre Channel (FC-NVMe) standard under definition by the T11 standards organisation.
What is not understood is how NVMe-class latency and speed can operate through a current 16Gbit/s Fibre Channel link, or, indeed, a coming 32Gbit/s one, which, although twice as fast as the 16Gbit/s link, is still slower than the PCIe bus.
A Brocade spokesperson said: “The demonstration shows the ability for native FC-NVMe and NVMe over FCP traffic to run simultaneously over existing Fibre Channel infrastructure. While this is a proof of concept, performance is similar to current Fibre Channel today, we are driving to reduce transport latency by 30-40 per cent.”
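The bandwidth gap behind that question can be put in rough numbers. These are nominal serial line rates (encoding overhead trims each by a few per cent); a fabric's answer is parallelism across many links and deep NVMe queues, not matching a single PCIe slot link-for-link:

```python
# Back-of-envelope line rates in Gbit/s
fc16 = 14.025            # 16GFC ("Gen 5") serial line rate
fc32 = 2 * fc16          # 32GFC ("Gen 6") doubles it: 28.05 Gbit/s
pcie3_x4 = 4 * 8.0       # PCIe 3.0 x4 slot, 8 GT/s per lane: 32 Gbit/s raw

# Even 32GFC trails a single four-lane PCIe 3.0 SSD, as the article notes.
assert fc32 < pcie3_x4
print(f"32GFC ~{fc32:.2f} Gbit/s vs PCIe 3.0 x4 = {pcie3_x4:.0f} Gbit/s raw")
```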
The NVMeF over FC demo can be seen at the Brocade booth #546 at the Gartner Data Center, Infrastructure and Operations Management Conference, Las Vegas. ®