Making the Most of Available Storage

Monday, March 20, 2017, by Arthur Cole

Steadily increasing data loads will continue to put pressure on the enterprise to find more storage, but in many cases this can be done through greater flexibility of storage already under management rather than the provisioning of new resources.

No matter how scaled-out, hyperconverged or abstracted data center infrastructure becomes, one element remains the same: Servers need high-speed access to vast amounts of storage.

This creates a problem, however: as more storage comes online, it takes longer to find and retrieve data locked somewhere within its volumes.

To counter this, platform developers are continuously tweaking their designs to make storage more responsive and more suitable to the cloud-facing data environments that are taking on more of the data load. And increasingly, this involves placing high-capacity solutions directly on the server itself.

Intel recently unveiled an addition to its Optane SSD line, the DC P4800X, which the company bills as an extended pooled-memory solution suitable for scale-out, accelerated applications incorporating artificial intelligence and machine learning. Like many memory-class storage devices, the P4800X is designed to provide low latency and alleviate data bottlenecks by improving CPU utilization. But the new device can also be paired with Intel’s Memory Drive Technology, which integrates the drive into the server’s memory subsystem and presents it to operating systems and applications as DRAM. The result is a larger memory pool that lets the enterprise consolidate workloads onto fewer servers.
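The core idea of presenting storage as addressable memory can be illustrated at the application level with a memory map. The sketch below is purely illustrative: Intel's Memory Drive Technology performs this integration transparently below the operating system, not through application-level `mmap`, and the file here merely stands in for an SSD-backed region.

```python
import mmap
import os
import tempfile

# A file standing in for a fast SSD-backed region (illustration only --
# the real technology exposes the drive as system DRAM, not via mmap).
path = os.path.join(tempfile.mkdtemp(), "pool.bin")
size = 1 << 20  # 1 MiB backing region

with open(path, "wb") as f:
    f.truncate(size)  # allocate the backing storage

# Map the storage into the process address space so it can be
# read and written like ordinary memory.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), size)
    mem[0:5] = b"hello"          # write through the mapping
    assert mem[0:5] == b"hello"  # read back as if it were RAM
    mem.close()
```

The appeal for applications is that code written against plain memory semantics gains capacity from the drive without explicit I/O calls.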

Meanwhile, a company called Excelero recently emerged from stealth with a server SAN solution that extends flash storage across web-scale data centers. Its NVMesh software virtualizes available NVMe flash into a single distributed pool, allowing organizations to maintain local performance while preserving centralized management and high utilization rates. The company says it can scale performance linearly while maintaining 100 percent efficiency by shifting data services from centralized CPU architectures to a client-side distribution model. In this way, both converged and disaggregated infrastructure can be combined into a distributed, non-volatile array suitable for IoT, Big Data and other web-scale workloads.
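The client-side distribution model described above means each client computes where a block lives and talks to that drive directly, rather than routing through a central controller. A minimal sketch of the striping logic, with hypothetical names and dict-backed stand-ins for remote NVMe targets (this is not Excelero's API):

```python
BLOCK_SIZE = 4096  # bytes per logical block

class StripedPool:
    """Round-robin striping of logical blocks across backing devices.

    Each "device" here is any dict-like block store; in a real
    deployment these would be RDMA connections to remote NVMe drives,
    and the placement math would run on the client, keeping the
    target-side CPU out of the data path.
    """

    def __init__(self, devices):
        self.devices = devices

    def _locate(self, lba):
        # Logical block address -> (device, device-local block index).
        return self.devices[lba % len(self.devices)], lba // len(self.devices)

    def write(self, lba, data):
        dev, local = self._locate(lba)
        dev[local] = data

    def read(self, lba):
        dev, local = self._locate(lba)
        return dev.get(local, b"\x00" * BLOCK_SIZE)  # unwritten blocks read as zeros

pool = StripedPool([{}, {}, {}])   # three stand-in "drives"
pool.write(0, b"a" * BLOCK_SIZE)
pool.write(1, b"b" * BLOCK_SIZE)
assert pool.read(1) == b"b" * BLOCK_SIZE
```

Because placement is a pure function of the block address, any client can reach the right drive without consulting a coordinator, which is what allows aggregate throughput to grow with the number of drives.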

Of course, few enterprises are willing to give up their legacy SAN arrays just yet, although they are eager to drive more performance from them. Lenovo took a step in this direction recently by teaming up with DataCore to bundle the SANsymphony storage virtualization software onto the Lenovo DX8200D appliance. According to The Register, this provides a turnkey way to integrate existing SANs into a virtual storage infrastructure backed by predictive analytics, automated discovery, real-time monitoring and a host of other enterprise-class functions. The companies promise upwards of a 90 percent reduction in management and support responsibilities for the enterprise, plus a 75 percent reduction in costs and a 10-fold increase in performance.

New ways to employ high-speed interconnects like InfiniBand are also hitting the channel. IaaS provider ProfitBricks has developed a new driver, dubbed IBNBD, that enables RDMA transfers of block I/O over InfiniBand so that the remote device appears to both client and server as a normal block device. According to developer Danil Kipnis, this speeds up database applications by exploiting InfiniBand’s high transfer rates without implementing a complex intermediate transport protocol. As he explained to Linux.com, this gives it an edge over SRP and NVMe over Fabrics (NVMe-oF) in that it allows greater storage flexibility for distributed replication and other functions.
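For comparison, the NVMe-oF path that IBNBD is measured against is already scriptable on a stock Linux host with the `nvme-cli` tool. A hedged sketch, assuming an RDMA-capable fabric and a target at the placeholder address 10.0.0.1 (the NQN below is likewise a placeholder, not a real subsystem name):

```shell
# Load the RDMA transport for the NVMe host stack, then ask the
# target what subsystems it exports (requires nvme-cli).
modprobe nvme-rdma
nvme discover -t rdma -a 10.0.0.1 -s 4420

# Connect to a discovered subsystem by its NQN; the remote namespace
# then shows up locally as an ordinary block device such as /dev/nvme1n1.
nvme connect -t rdma -n nqn.2017-03.com.example:nvme-pool -a 10.0.0.1 -s 4420
```

In both approaches the end result is the same abstraction the article describes: remote flash that the client kernel treats as a local block device.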


By leveraging what it already has through new fabric architectures and distributed pooling software, the enterprise should be able to handle the initial surge of Big Data and IoT workloads – and then tap either greater server-side capacity or the cloud when volumes really kick into high gear in the next decade.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

Copyright 2017 © QuinStreet Inc. All Rights Reserved