By Stuart Miniman
Incremental changes in a market rarely lead to disruption. Among networking vendors, moving from one generation of product to the next typically produces only slight shifts in market share as competitors race to ship the latest release.
In 2010, I wrote about the Competitive Positioning of Network Adapters, which forecasted that the transition from 1Gb to 10Gb Ethernet would not see dramatic shifts – Intel and Broadcom were the strong leaders in 1Gb while Emulex and QLogic were leaders in Fibre Channel (FC) but only niche players in Ethernet.
While this standard transition has been happening, macro-trends in IT buying have been shifting. A small number of large cloud vendors now make up a significant piece of the overall addressable market. From 2012 to 2013, the ODM server manufacturers that sell into the cloud providers saw 48% growth to take 13% of the total market. Shrinking margins on hardware were a primary reason behind IBM's sale of its x86 business to Lenovo so that it could focus more on its own cloud offerings.
This shift is the backdrop to a deal that saw QLogic acquire Broadcom's 10Gb, 40Gb and 100Gb Ethernet adapter lines. The move makes QLogic the #2 supplier of Ethernet server and storage connectivity (behind Intel) and allows Broadcom to focus on its switching line and other solutions that don’t gain as much leverage from the adapters.
Adapters and cabling are critical components of the network that are typically overlooked except by those who build the configuration. Cisco’s cabling and optics business is estimated at $2B-3B/year and is now an area where customers and competitors are looking for lower-cost alternatives. Most of the development cost of server and storage adapters is in the software, but users often mistake the boards for commodity widgets. The software and hardware of adapters do not share many common elements with their switch counterparts.
In the last year, QLogic has sold its InfiniBand switch business to Intel and ended development of its FC and Ethernet switch lines; as a result, it no longer competes with either Broadcom or Brocade (whose adapter businesses QLogic acquired). QLogic has strong OEM relationships, including most of the storage (target) design wins for both FC and Ethernet – and there is good leverage between the target and the adapter (initiator). The Broadcom assets allow QLogic to deliver 40Gb and 100Gb Ethernet RDMA, which is important for Microsoft SMB and for new flash solutions such as Dell Fluid Cache for SAN. QLogic will need to move fast to prevent Mellanox, the InfiniBand leader whose Ethernet adapters now make up 15% of its revenue, from gaining any more momentum with RDMA solutions.
On a financial call, new QLogic CEO Prasad Rampalli stated that these moves will allow the company to move from being just a connectivity player to a platform for both enterprise and cloud customers. Intel continues to deliver more of the data center on a single chip, which can be threatening to some technology partners. This gives QLogic an opening to create a differentiated solution that leverages its technology and expands partnerships. Additionally, large cloud providers are more flexible with buying decisions, which is a double-edged sword – it gives an opening to win business based on new features or better cost, but it is a recurring battle.
by Maria Deutscher
QLogic, which earlier this month marked the tenth consecutive year as the largest vendor of Fibre Channel (FC) adapters by market share, is investing heavily to sustain its edge over the competition. Defying the weakening demand for the technology, the Aliso Viejo, California-based manufacturer recently acquired Brocade’s converged adapter business as part of a broad alliance that also encompasses joint development of SAN solutions based on the newly released Gen 6 FC standard.
The agreement was signed off by interim chief exec Jean Hu, who just handed over the reins to former EMC engineering head Prasad Rampalli, a favorite guest on theCUBE with even bolder expansion plans than his predecessor. As we learned on Tuesday, the industry veteran’s first major move as CEO was a milestone patent deal with Broadcom valued at approximately $147 million in cash.
The purchase includes certain 10/40/100Gb Ethernet controller-related assets and non-exclusive licenses to intellectual property relating primarily to the communication chip maker’s programmable NetXtreme II Ethernet controller family. Broadcom will become an ASIC supplier to QLogic in support of the product line, and QLogic also intends to license certain Broadcom Fibre Channel patents for $62 million under a non-exclusive contract that is expected to receive board approval in the near future. The transactions kick off Rampalli’s efforts to diversify his firm into new high-growth segments, leveraging its formidable Fibre Channel portfolio as a launchpad.
"QLogic is well positioned for the flexibility and scalability that is required for data centers today and tomorrow," wrote Wikibon senior analyst Stu Miniman. "There will be continued competitive pressure for design wins and for brand recognition against some big competitors, but if QLogic can continue to focus on both OEM and end-user requirements, there is a lot of growth opportunity."
by Chris Mellor
QLogic has signed a deal with Broadcom to buy some Ethernet assets and license its Fibre Channel IP.
The converged network controller maker is spending $147m in cash to buy certain 10/40/100Gb Ethernet controller-related assets and non-exclusive licenses to intellectual property relating primarily to Broadcom's programmable NetXtreme II Ethernet controller family.
Broadcom will become an ASIC supplier to QLogic in support of the NetXtreme II product line. It is expected also that "QLogic will license certain Broadcom patents under a non-exclusive patent license agreement that will cover QLogic’s Fibre Channel products in exchange for a license fee of $62m."
Stifel Nicolaus MD Aaron Rakers says the implication of the FC IP licensing deal is that it settles any uncertainty over patent spats between Broadcom and QLogic. Broadcom recently launched a suit against QLogic rival Emulex. He mentions in his report that 170 employees will transfer to QLogic.
QLogic has a new CEO, Prasad Rampalli, who assumed that position and the QLogic presidency on 3 February, taking over from interim CEO Jean Hu who steps back to her CFO position. Rampalli left EMC, where he was SVP of the cross-business engineering function, in December last year, to join QLogic.
QLogic thinks it could immediately gain about $45m in fiscal 2015 revenue.
As well as its Fibre Channel HBAs, iSCSI adapters and Converged Network Adapters (for FCOE), QLogic has a 3200 series of Ethernet Adapters (NICs) for TCP/IP offload.
Broadcom says its NetXtreme II 1 Gigabit and 10 Gigabit Ethernet controllers "deliver high performance dual-port, single-chip C-NIC at 1Gbps or 10Gbps rates, without requiring external packet memory ... [and] include on-chip TCP processing and iSCSI Host Bus Adapter (HBA) capabilities."
QLogic is clearly focussing more on Ethernet for future growth. Rakers said: "We believe access to Broadcom controllers could allow QLogic to become more competitive in the cloud market, vs its historical enterprise focus."
In a briefing call to analysts, reported by Rakers, Rampalli said that, in addition to planting a stake as a No 2 player in the Ethernet market, this is a platform for the company to offer more services – focused on the company's positioning for network convergence and the view that services such as dedupe, replication, encryption, acceleration/caching, and other capabilities can be tightly integrated into these offerings.
He added that this move should be considered a strategic decision to redirect the company’s R&D efforts; these new solutions should complement the company’s storage solutions in both FC and FCoE.
by Tim Lustig - QLogic
Enhancements to server and storage technology have created an I/O performance gap in enterprise storage networks that is being addressed by Flash-based cache solutions, including PCIe-based Flash cards.
Flash-based caching decreases this performance gap by reducing I/O latency. With many new Flash cache solutions hitting the market, how can you choose the right solution for your environment? Any enterprise caching solution under consideration should have a few basic features:
When these features are combined, a Flash cache solution can maintain existing storage area network (SAN) data protection and compliance policies to deliver the greatest benefits across the widest range of applications in your enterprise datacenter.
Today, we see server-based caching gaining in popularity because it places the Flash cache closest to the application, "short-stopping" a large percentage of the I/O demand of critical applications, thus lowering latency and improving overall storage performance. Caching at the server positions cache for mission-critical applications where it is insensitive to congestion on the network or storage infrastructure. Effectively reducing the demand on both the storage network and arrays, server-based caching improves overall storage performance for all applications, even those that do not have caching enabled, and extends the useful life of existing storage infrastructure.
Server-based caching requires no upgrades to storage arrays and no additional appliance installation in the data path of critical networks. Better still, storage I/O performance can scale smoothly with increasing application demands. More importantly, server-based caching enables pooled cache to be shared across virtualized and clustered applications: a capability array-based caching and appliance-based caching simply cannot provide.
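To make the "short-stopping" mechanics concrete, a server-side look-aside read cache can be sketched in a few lines of Python. This is a hypothetical illustration, not QLogic's implementation: an in-memory `OrderedDict` stands in for the PCIe Flash card, and the `san_read` callable stands in for a slow read from the backing array.

```python
from collections import OrderedDict

class ServerSideReadCache:
    """Minimal look-aside read cache sketch (illustrative, not QLogic's design).

    Hot blocks are served locally; misses fall through to the SAN
    and populate the cache, with least-recently-used eviction.
    """

    def __init__(self, capacity_blocks, san_read):
        self.capacity = capacity_blocks
        self.san_read = san_read        # callable: block id -> data (slow path)
        self.cache = OrderedDict()      # LRU order: oldest entry first
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.san_read(block_id)         # miss: go out to the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data
```

Every hit is an I/O that never touches the storage network, which is why cached servers reduce load even for applications that are not themselves being cached.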
Caching SAN Adapter
Caching technology typically requires coherence between caches when solutions span multiple physical servers. Traditional captive implementations of server-based Flash caching do not support this capability. While they are very effective at improving the performance of individual servers, providing storage acceleration across clustered server environments or virtualized infrastructures that span multiple physical servers is beyond their reach. This limits the performance benefits of Flash-based caching to a relatively small set of single-server applications.
The caching SAN adapter is a new approach to server-based caching that addresses these drawbacks. Rather than creating a discrete captive-cache for each server, the Flash-based cache can be integrated with a SAN adapter featuring a cache coherent implementation, which utilizes the existing SAN infrastructure to create a shared cache resource distributed over multiple servers. This eliminates the single server limitation for caching and opens caching performance benefits to the high I/O demand of clustered applications and highly virtualized environments.
This approach combines a Fibre Channel host bus adapter (HBA), intelligent caching, and I/O management with connectivity to a server-based PCIe Flash card, delivering an enterprise-ready application performance acceleration solution. It requires no changes to existing server software or infrastructure, and is completely application- and hypervisor-transparent, as well as infrastructure- and storage subsystem-agnostic.
The caching SAN adapter is exceptionally simple to deploy and manage, and transforms single-server, captive cache into a consolidated, shared, performance-enhancing resource across servers. The result is transparent, adapter-based caching that is a dramatically simpler solution, lowering total cost of ownership (TCO) and unleashing shared performance for performance-challenged clustered and virtualized applications in the enterprise.
This approach guarantees cache coherence and precludes potential cache corruption by establishing a single cache owner for each configured logical unit number (LUN). Only one caching adapter in the accelerator cluster is ever actively caching each LUN's traffic. All other members of the accelerator cluster process all I/O requests for each LUN through that LUN's cache owner, so all storage accelerator cluster members work on the same copy of data.
Cache coherence is guaranteed without the complexity and overhead of coordinating multiple copies of the same data. By clustering caches and enforcing cache coherence through a single LUN cache owner, we overcome the shortcomings of traditional server-based caching.
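The single-owner scheme can be sketched as follows. Everything here is an illustrative assumption rather than QLogic's actual protocol: the adapter names, the modulo-based owner assignment, and the in-memory dictionaries simply show how routing every LUN's I/O through one designated owner keeps exactly one cached copy of the data.

```python
class AcceleratorCluster:
    """Sketch of single-owner cache coherence across a cluster of
    caching adapters (hypothetical illustration)."""

    def __init__(self, adapters):
        self.adapters = list(adapters)  # e.g. ["adapter-A", "adapter-B"]
        self.owner = {}                 # lun -> owning adapter
        self.cache = {}                 # (owner, lun, block) -> data

    def _owner_of(self, lun):
        # Deterministic assignment: every member computes the same owner,
        # so a LUN is only ever cached in one place.
        if lun not in self.owner:
            self.owner[lun] = self.adapters[lun % len(self.adapters)]
        return self.owner[lun]

    def write(self, requester, lun, block, data):
        # All cluster members forward I/O to the LUN's single cache owner;
        # with one copy of the data, there is nothing to keep coherent.
        owner = self._owner_of(lun)
        self.cache[(owner, lun, block)] = data

    def read(self, requester, lun, block):
        owner = self._owner_of(lun)
        return self.cache.get((owner, lun, block))
```

Because a write issued from any member lands in the owner's cache, a subsequent read from any other member sees the same copy, which is the coherence guarantee the text describes.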
In the latest installment of SiliconANGLE’s Cube Conversations series, Wikibon Senior Analyst Stu Miniman discusses the highlights of this week’s Open Compute Project (OCP) Summit and shares his take on the latest and greatest in hyperscale, a paradigm-shifting approach to data center design.
The Open Compute Project was created by Facebook in April 2011 to develop standards for the kind of highly scalable and power-efficient hardware it uses in its own cutting-edge facilities. The initiative has come a long way in the past 12 months, Miniman observes, with a growing number of vendors now implementing Open Compute specifications in an effort to differentiate on price and performance. The technology is already starting to change the balance of power in the server market, where original design manufacturers (ODMs) like Quanta are quickly gaining ground against established suppliers such as Dell and IBM.
"Industry reports have shown that these ODMs are now at about 12 percent of overall shipments, which is quite sizable when you think about how many years the traditional vendors have been out there," Miniman notes.
Quanta is shipping its OCP-certified STRATOS S215-X1M2Z servers with QLogic’s QOE2562 8Gb Fibre Channel mezzanine HBA, the industry’s first commercially available FC adapter designed specifically for use in hyperscale environments. The product, which was announced on Day One of the OCP Summit along with a 40GbE network interface controller (NIC) from Mellanox, represents a major advance towards enterprise-readiness.
"This is important to be able to get into large scale environments, especially going beyond the Googles and Facebooks of the world,” Miniman details. “Large banks and other very large enterprises want to adopt this, Fibre Channel is trusted and used in these environments. So QLogic, obviously being the market leader in this space, can transition to this environment."
Meanwhile, Microsoft announced on Wednesday that it is donating its server designs and management software to the Open Compute Project in a bid to gain a more prominent role in the ecosystem. The move underscores the industry-wide shift to commodity hardware, but while this trend is driving new innovations, Miniman points out that the growing focus on individual components is distracting from the data center as a whole. One company that hasn’t lost sight of the big picture is IO, which is rolling out a new OpenStack-based cloud platform that runs on OCP servers.
Miniman advises organizations, especially large enterprises, to keep a close eye on developments in the hyperscale space as Open Compute continues to gain momentum. See his entire commentary in the video below.