By Maria Deutscher
Network equipment supplier QLogic is aiming for a comeback with its 16Gb FC "Gen 5" adapters, the first product family to support simultaneous multi-protocol traffic in Xeon E5-2600/1600 v2 environments. Citing independent research, the company reported on Monday that its share of the Fibre Channel market increased by 12 percent in the third quarter for a total of 54 percent.
The Dell'Oro Group's newly published Q3 2013 SAN Report and the Crehan Research Q3 2013 Quarterly Market Share Report both concluded that QLogic increased its lead over its nearest competitor (Emulex) by more than five points, boosting its edge to more than 15.4 percentage points. The company credits much of this growth to the success of its FlexSuite 2600 Series Gen 5 devices, which aim to address the scalability and availability requirements of mission-critical workloads.
"In addition to being the market leader in Fibre Channel host bus adapters (initiator side), QLogic chips are used to deliver 16Gb FC for most storage arrays (target side)," says Wikibon Senior Analyst Stu Miniman. "FC usage remains high in enterprise environments; and with the ability to support both FC and Ethernet from the same product, QLogic has a strong position for SANs no matter which protocol customers choose to deploy."
Speaking on theCUBE at the recently concluded Oracle OpenWorld 2013 conference, QLogic OEM marketing boss David Ard told us that his company is taking a protocol-agnostic approach to ending the great Fibre Channel over Ethernet (FCoE) interconnection debate. QLogic is offering a flexible host bus adapter (HBA) that lets customers switch from one protocol to another without spending an arm and a leg on infrastructure replacements.
The Oracle Sun Storage 16Gb Universal HBA is designed to "enhance the Oracle RAC environment" and give customers the "ability to meet the growing demands of their business and continuing to keep up with the data needs they have in that environment," according to Ard.
To hear the full interview from Oracle OpenWorld 2013, check out the video below.
By David Clark
The evolution of our Southern California freeway system strangely resembles that of the data center. Original freeway designs supported traffic for fewer, smaller vehicles traveling at slower speeds. Expansion accommodated the growing number of bigger, faster cars, but space isn’t limitless, and horrendous traffic jams often cripple the system.
In the data center, complex business applications and growing storage requirements pushed rapid expansion, causing strain and excess pressure. Like our initial freeway upgrades, data centers simply got bigger to solve problems. But data center real estate has its limits too, along with lofty price tags. Multi-processor CPUs and virtualization technologies provided the compute resources required to meet these demanding workload requirements, but data traffic jams still occur as I/O performance struggles to keep up with processor and memory performance, which have historically advanced in step with Moore's Law.
While business applications continue to put more and more usable data at the fingertips of those who need it most, organizations are also generating, capturing and purchasing new data at exponential rates. All this data creates an opportunity for improved efficiency and more-powerful business intelligence, while at the same time contributing to application performance issues.
The recent Storage Acceleration and Performance Technologies study by the Taneja Group* noted that 42 percent of respondents reported I/O performance causing unresponsive applications or application failures. Storage performance negatively affecting IT efficiencies, including maximizing server virtualization initiatives and meeting service-level agreements (SLAs), was identified by half of all respondents as a top driver for deploying storage acceleration technologies.
Time to data, or getting usable data into the hands of employees, can be a critical competitive advantage for organizations. IT professionals know they can’t afford to let business applications sit idle, because the time it takes applications to get to stored information affects nearly every facet of business operations, from customer satisfaction to overall business performance.
Flash Cache as a Server-Based Application Accelerator
Fortunately for IT professionals, you don’t necessarily need real estate for continued expansion in the digital world. In fact, many organizations mandate increased performance from their IT infrastructures without significant investment or expansion. One popular solution for boosting application performance is server-based flash cache, which both accelerates I/O and reduces latency by placing frequently accessed data near the processor. As flash-caching technologies have advanced, IT professionals have used these capabilities to speed application performance in the data center.
Flash-based storage accelerators may operate as an end-point storage device in place of spinning disks or as an intermediate caching device in conjunction with storage. Solutions that use flash cache technology for storage are widely available in the market and include flash-based storage arrays, appliances and server-based SSDs.
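To make the caching idea above concrete, here is a minimal, illustrative model of a server-side read-through cache sitting between an application and a disk array. The latency figures, block naming and LRU eviction policy are assumptions for the sketch, not details of any vendor's product:

```python
# Illustrative model (not any vendor's implementation) of a server-side
# read-through flash cache in front of a spinning-disk array.
from collections import OrderedDict

DISK_LATENCY_US = 5000   # assumed spinning-disk read latency
FLASH_LATENCY_US = 100   # assumed flash read latency

class FlashCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # LRU order: oldest entry first

    def read(self, lba):
        """Return (data, latency_us); cache the block on a miss."""
        if lba in self.blocks:
            self.blocks.move_to_end(lba)          # refresh LRU position
            return self.blocks[lba], FLASH_LATENCY_US
        data = f"disk-block-{lba}"                # stand-in for an array read
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)       # evict least recently used
        self.blocks[lba] = data
        return data, DISK_LATENCY_US

cache = FlashCache(capacity_blocks=2)
_, first = cache.read(7)    # miss: served from disk
_, second = cache.read(7)   # hit: served from flash
print(first, second)        # 5000 100
```

The second read of the same block returns in flash time rather than disk time, which is the entire value proposition of placing frequently accessed data near the processor.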
The same Taneja Group study revealed that server-based storage acceleration solutions are the “most valued” by respondents, which makes complete sense. Placing flash-caching solutions in I/O-intensive servers puts the cache closest to the application and in a position where it is less sensitive to congestion; it also avoids repetitive data creation, processing, and transportation. But server-based flash cache solutions do have their limitations and drawbacks.
The move of mission-critical business applications to clustered, highly virtualized data center models highlights the challenges of implementing server-based flash solutions. Mission-critical database applications, such as Oracle Real Application Clusters (RAC), generate workloads that demand the highest levels of performance from virtualized servers and their associated shared SAN storage infrastructure. Business-intelligence applications doing analytics processing, enterprise resource planning, supply-chain management or business collaboration applications such as enterprise email are all being virtualized to optimize server performance and compute resources.
Indeed, the Taneja Group study confirmed that virtualized data centers have the greatest need for application performance acceleration and that customers are increasingly investigating enterprise caching for potential solutions. But there is a drawback. Nearly all flash cache accelerators have a one-to-one server-based relationship and are not "enterprise ready" to operate in clustered and highly virtualized environments. Looking under the hood, one finds that nearly all flash storage accelerators that appear enterprise ready actually require additional software and system reconfiguration, draining time, money and resources and negating a solution's potential gains.
How "Enterprise Ready" Affects Flash-Based Storage Acceleration
The purpose of flash-based storage acceleration is to improve I/O performance by reducing I/O latency and increasing IOPS performance. To be enterprise ready, solutions should be easy to deploy, maintain existing SAN data protection and compliance policies, and deliver benefits across the widest range of applications in the enterprise. To do so, enterprise-ready flash solutions must be completely transparent to the OS and application and provide caching support on individual servers as well as multi-server clusters, including highly virtualized environments and clustered applications.
Very few companies have the ability to deliver true, enterprise-ready flash solutions. It takes proven technology in the data path combined with the ability to intelligently move active data to a cache, all while leaving standard SAN traffic unimpeded. Optimal operation requires high-performance, low-latency, scalable I/O connectivity between servers and shared SAN storage in clustered and virtualized applications. This is not the domain of typical SSD flash manufacturers, nor of most data center connectivity companies. The marriage of innovative flash-caching technology with battle-hardened data center I/O capabilities would appear to be the path for delivering true enterprise-ready flash cache storage acceleration.
The caching SAN adapter is a new class of server-based storage acceleration that uses flash-based caching to address business-critical application performance requirements. Seamlessly integrating with network-attached Fibre Channel storage, caching SAN adapters use the capacity, availability and mission-critical storage-management functions for which enterprise SANs have historically been deployed. This unique fabric-connected PCIe flash solution boosts the performance of all data center applications by unclogging I/O on the digital data superhighway.
"Today's server-based caching solutions are beginning to break down the I/O performance gap between high-performance servers and slower, mechanical, disk-based arrays," said Arun Taneja, founder and consulting analyst of Taneja Group. "Caching SAN adapters are finally enabling clustered enterprise applications to take advantage of SSD performance acceleration typically found only in individual servers..."
Caching SAN adapters use a combination of features, functions and capabilities for optimizing application performance. Rather than creating a discrete captive cache for each server, combining the flash cache with a SAN host bus adapter (HBA) creates a shared cache resource among multiple servers. The aggregate flash capacity becomes a substantial and scalable central resource available to all servers, eliminating silos of captive flash capacity and reducing the cost of achieving performance gains.
In addition, this new approach uses a simple HBA driver to appear to the host as a standard Fibre Channel adapter. The technology incorporates host-based, intelligent I/O optimization engines that provide integrated storage-network connectivity, a flash interface and the embedded processing required to make all flash management and caching tasks entirely transparent to the host. The only host-resident software required for operation is a standard device driver; all the "heavy lifting" is performed transparently on board the caching SAN adapter by the embedded multicore processor.
Moreover, cache from multiple cards can be clustered to provide a large high-performance cache pool to support the high I/O demand of clustered applications and highly virtualized environments.
Finally, this new approach guarantees cache coherence and eliminates potential cache corruption by establishing a LUN cache owner to which all cache requests for that LUN are directed. The LUN cache owner monitors the state of every cache and is updated as changes occur. Because only one caching SAN adapter is ever actively caching a LUN, all other members must process I/O requests for that LUN through the LUN cache owner, ensuring they all work on the same copy of the data. In this way, cache coherence is guaranteed without the complexity and overhead of coordinating multiple copies of the same data.
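The single-owner rule can be sketched in a few lines. This is an assumption-laden toy model, not QLogic's actual protocol: one adapter owns each LUN's cache, and every other adapter forwards that LUN's I/O to the owner, so only one cached copy of any block ever exists:

```python
# Toy model of the LUN-cache-owner coherence rule described above.
# Class names and the routing scheme are illustrative assumptions.
class CachingAdapter:
    def __init__(self, name):
        self.name = name
        self.cache = {}            # (lun, lba) -> data, only for owned LUNs

    def handle_io(self, fabric, lun, lba, data=None):
        owner = fabric.owner_of(lun)
        if owner is not self:
            # Not the owner: forward the request rather than caching locally.
            return owner.handle_io(fabric, lun, lba, data)
        if data is not None:
            self.cache[(lun, lba)] = data          # write lands at the owner
        return self.cache.get((lun, lba))

class Fabric:
    def __init__(self):
        self.lun_owner = {}        # lun -> owning adapter

    def assign_owner(self, lun, adapter):
        self.lun_owner[lun] = adapter

    def owner_of(self, lun):
        return self.lun_owner[lun]

a, b = CachingAdapter("A"), CachingAdapter("B")
fabric = Fabric()
fabric.assign_owner(lun=0, adapter=a)

b.handle_io(fabric, lun=0, lba=42, data="hot")   # B forwards the write to A
print(a.handle_io(fabric, 0, 42))                # prints hot
print((0, 42) in b.cache)                        # prints False: B never caches LUN 0
```

Because every request for LUN 0 funnels through adapter A, there is never a second, stale copy of the block to reconcile, which is precisely why coherence comes without coordination overhead.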
By clustering caches and enforcing cache coherence through a single LUN cache owner, this implementation of server-based caching addresses all of the concerns of traditional server-based caching and delivers true enterprise-ready and optimized application acceleration.
By Tim Lustig
Driven by the desire for a competitive edge, IT landscapes are rapidly changing to improve compute infrastructure and better serve business needs. The storage network plays a vital role in these environments, and IT administrators know that Fibre Channel inherently delivers what is required from a storage transport technology: performance and reliability. In a recent end-user survey by Enterprise Strategy Group* (ESG), 85 percent of participants responded that they will increase or maintain investment levels in Fibre Channel SANs, citing 'Performance' and 'Reliability' as key drivers over other storage technologies. According to IDC, more than 90 percent of Fortune 1,000 data centers use Fibre Channel as the de facto standard for storage networking. In addition, IDC recently projected a compound annual growth rate of more than 60 percent for enterprise data storage through 2015.
A key factor driving IT transformation and Fibre Channel health is server virtualization. The availability of servers based on Intel's E5 processors, combined with new features in VMware's vSphere 5.1 and Microsoft's Hyper-V hypervisors, has introduced a game-changing compute platform. This platform supports new levels of virtual machine (VM) density, and for the first time, Tier-1 applications that previously required dedicated server hardware can run on virtual servers.
ESG's survey confirms this rapid growth in server virtualization. Forty-one percent of respondents reported that between 51 and 100 percent of the servers in their data centers capable of being virtualized had already been virtualized, and 71 percent planned to reach that range within the next three years. In the same three-year time frame, IDC reports that VM deployments will grow by more than 91 million.
Infrastructure to Support New Technologies Lags Behind
While improved hypervisors and E5-based servers are driving significant deployment in many enterprise data centers, the I/O and network infrastructure to support these new technologies lags far behind. To fully optimize a virtualized data center, servers need maximum I/O capacity to support Tier-1 applications that require higher bandwidth. In addition, increased bandwidth is needed for densely virtualized servers, which aggregate I/O from multiple VMs to the host’s data path. Highly virtualized environments generate a tremendous amount of I/O traffic, magnifying the I/O performance bottleneck issue already present in most enterprises.
As companies move Oracle database applications, Microsoft SQL Server, SAP and other mission-critical applications onto virtualized servers, the robust nature of Fibre Channel becomes necessary for satisfying storage I/O performance and data integrity requirements. Fibre Channel’s credit-based flow control – one of the exclusive features of Fibre Channel that make it so well suited for block-level storage data networks and interconnects – delivers data as fast as the destination buffer is able to receive it, without dropping frames or losing data.
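The credit-based flow control just described can be modeled in a few lines. This is a toy illustration of the buffer-to-buffer credit idea, not the Fibre Channel specification: the sender may only transmit while it holds credits, and the receiver returns a credit (R_RDY in real Fibre Channel) each time it frees a buffer, so no frame is ever dropped:

```python
# Toy model of Fibre Channel-style buffer-to-buffer credit flow control.
# Class names, frame labels and buffer counts are illustrative assumptions.
from collections import deque

class Receiver:
    def __init__(self, buffers):
        self.free = buffers
        self.queue = deque()

    def accept(self, frame):
        assert self.free > 0, "sender violated its credit limit"
        self.free -= 1                 # one buffer consumed by the frame
        self.queue.append(frame)

    def drain_one(self):
        self.queue.popleft()           # frame processed, buffer freed
        self.free += 1
        return 1                       # one credit returned to the sender

class Sender:
    def __init__(self, credits):
        self.credits = credits         # starts equal to receiver buffer count

    def send(self, rx, frame):
        if self.credits == 0:
            return False               # must wait: destination buffers full
        self.credits -= 1
        rx.accept(frame)
        return True

rx = Receiver(buffers=2)
tx = Sender(credits=2)
sent = [tx.send(rx, f) for f in ("f1", "f2", "f3")]
print(sent)                            # [True, True, False]: third frame waits
tx.credits += rx.drain_one()           # receiver frees a buffer, credit comes back
print(tx.send(rx, "f3"))               # True
```

The third frame is held at the sender rather than dropped at the receiver, which is the lossless behavior that makes the transport well suited to block storage.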
Cloud Computing Driving Demand
Cloud architectures are also driving demand for more modern IT landscapes that deliver multi-tenancy, greater bandwidth and faster transactional response times. Multi-tenant infrastructures by nature put additional stress on storage networks for greater reliability, stability and scalability. Within these new IT architectures, storage technology must support granular Quality of Service (QoS) as an essential attribute to avoid I/O bottlenecks and maintain Service Level Agreements (SLAs). Fibre Channel is deterministic by design and can be fine-tuned with capabilities that eliminate network congestion and maximize efficiency and performance to guarantee SLAs.
Cloud computing, plus the latest VMware and Windows hypervisor implementations, is pushing the limits of what storage I/O can handle. The evolving needs of storage networks point directly to Fibre Channel, and more specifically to the advanced capabilities of 16Gb Gen 5 Fibre Channel. Gen 5 Fibre Channel is backward compatible with the huge installed base that represents an incredible investment by the world's largest and most successful companies. Gen 5 Fibre Channel continues to empower end users by delivering architectural flexibility, enabling a more agile, cost-effective and efficient environment. With a strong roadmap on the horizon, Fibre Channel provides confidence that investments in the technology will be preserved for the foreseeable future, while its inherent characteristics play an even greater role in the protocol's long-term viability.
David Ard, OEM Marketing at QLogic, demoed a new collaborative solution with Oracle, FabricCache and Oracle RAC, discussing its performance benefits with theCUBE co-hosts John Furrier and Dave Vellante, live at Oracle OpenWorld 2013. (Full video of demo to follow – see our entire event playlist here.)
The FabricCache solution is designed to "meet the growing demands of the business, and keep up with the data needs," Ard explained. It enables a cluster cache capability on the server side of the environment, which helps increase application speed and performance, allowing teams to do more work in the same time frame.
FabricCache and Oracle RAC allow users to "take advantage of caching all the hot data on the server side," said Ard. As a result, the solution speeds up application performance significantly. During the demo, Ard used a business analytics application to compare the number of transactions per minute the cluster could handle with and without cache enabled, showing a 4x performance increase with the new capability on. "All you're doing is enabling cache, no further tuning on the database," Ard pointed out.
Ard then mentioned this was a write-through cache solution, saying "all the hot data on the SAN will be cached on the server side." When combining Oracle RAC with QLogic FabricCache, "application response time comes down significantly," and companies can do more in less time, which translates to either driving revenue or cutting costs. Ard said that hot data typically represents 20-30 percent of the environment. "That's basically how you scale and determine what type of cache you need."
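Ard's 20-30 percent rule of thumb translates into a simple back-of-the-envelope sizing calculation. The helper function and the example environment below are illustrative assumptions, not a QLogic sizing tool:

```python
# Back-of-the-envelope cache sizing from the 20-30 percent hot-data rule.
def cache_size_needed(total_data_tb, hot_fraction=0.25):
    """Estimate flash cache capacity (TB) for a given total data set."""
    return total_data_tb * hot_fraction

# A hypothetical 40 TB Oracle RAC environment:
low = cache_size_needed(40, hot_fraction=0.20)   # 8.0 TB
high = cache_size_needed(40, hot_fraction=0.30)  # 12.0 TB
print(f"Plan for roughly {low:.0f}-{high:.0f} TB of pooled cache")
```

In other words, scaling the cache means measuring the environment and provisioning enough pooled flash to hold the hot fraction, rather than matching the SAN's full capacity.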
From an end user perspective, "all it is is an HBA, completely transparent, no management layer software they need to install," Ard explained. The combined solution is already available. "We're at a point where customers are starting to roll it out. It gains a lot of attention when you talk about maximizing the investment," as customers do not need to extend their existing infrastructure.
Check out the full interview on theCUBE below, in which David Ard, OEM Marketing at QLogic, demos the collaborative QLogic and Oracle solution of FabricCache and Oracle RAC:
By Chris Mellor
Wanna give your Oracle real application cluster setup a fivefold transactions-per-minute boost? Just add a flash-cached host bus adapter between the Oracle gear and its SAN, says QLogic...preferably its own.
Fibre Channel host bus adapter (HBA) vendor QLogic has a new line of network adapters called FabricCache, which use its Mount Rainier technology to add a caching flash storage pool that stores hot data from a Fibre Channel-connected SAN. Accessing servers thereby get faster access to data.
It's got a demo running at Oracle OpenWorld (exhibit number 2115) with FabricCache used to link a Fibre Channel SAN to an Oracle real application cluster (RAC) set-up. QLogic claims this can bump up the transactions per minute count by a factor of five in Oracle RAC business analytics applications, and provide up to 75 per cent faster transaction response times.
This happens because FabricCache significantly lowers data access latency on a Fibre Channel SAN which it front-ends, according to QLogic.
It might be easier to add a FabricCache 10000 Series adapter to your Oracle RAC-Fibre Channel SAN setup than to stick PCIe flash cards into the servers. That will come down to a trade-off between cost, complexity and performance benefits.
With this demo QLogic is announcing that it now has an alternative to PCIe server flash that deserves a look. Grab a look at a datasheet here (PDF). ®