How to Manage Storage in a Virtual Desktop Infrastructure
Agencies are finding reduced costs and improvements in agility, data protection, management and security as they deploy virtual desktop infrastructures. However, they also must take steps to ensure that the aggregated demands of VDI do not overwhelm their storage and input/output (IO) networking infrastructures.
Successful VDI deployments require an understanding of the applications an organization uses and of their storage and IO characteristics when running on a physical desktop infrastructure (PDI). A common mistake is to focus only on the storage capacity of the PDI and to apply general rules of thumb to IO and networking activity.
For example, a typical PDI usually does only 10 to 30 IO operations per second (perhaps as many as 50 to 100 IOPS during high-traffic periods such as boot, startup and shutdown). Aggregation causes aggravation when VDI greatly increases the stress on a storage system. The average number of IOPS of a PDI may not be much activity for a typical server. However, an agency that moves 20 physical desktops to VDI may find that it is moving 20 times the usual number of IOPS to a server, along with corresponding network activity. Thus, the server must support 200 to 600 IOPS. Increase the number of virtual desktops to 50 or 100, and the amount of traffic can make a corresponding jump.
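As a rough illustration, the back-of-the-envelope math above can be expressed as a short Python sketch. The per-desktop figures are the assumed steady-state and peak ranges cited in this article, not measurements from any specific environment.

def aggregate_iops(desktops, per_desktop_low=10, per_desktop_high=30,
                   peak_low=50, peak_high=100):
    """Estimate the aggregate IOPS a VDI host must absorb.

    Per-desktop defaults reflect the ranges above: 10-30 IOPS in
    steady state, 50-100 IOPS during boot and shutdown periods.
    """
    steady = (desktops * per_desktop_low, desktops * per_desktop_high)
    peak = (desktops * peak_low, desktops * peak_high)
    return steady, peak

# 20 physical desktops consolidated onto one server: 200-600 IOPS in
# steady state, and roughly 1,000-2,000 IOPS during boot storms.
print(aggregate_iops(20))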
Here are tips agencies should consider as they look to avoid storage and IO challenges with VDI deployments.
Make the Most of Fast Storage
Agencies should consider fast storage systems that support IO optimization as well as storage capacity optimization. For example, read caching and write-acceleration techniques, backed by DRAM, NAND flash solid-state drives (SSDs) and fast hard-disk drives, can improve performance.
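To illustrate why read caching helps, here is a minimal sketch of how a cache hit rate reduces the IOPS the back-end storage must serve. The read mix and hit rate are illustrative assumptions, not figures from any particular product.

def backend_iops(total_iops, read_fraction=0.7, cache_hit_rate=0.8):
    """Estimate IOPS that reach back-end storage behind a read cache.

    Assumes reads served from a DRAM/SSD cache never touch the back
    end, while all writes still do; both parameters are assumptions.
    """
    reads = total_iops * read_fraction
    writes = total_iops - reads
    return reads * (1 - cache_hit_rate) + writes

# 600 IOPS of VDI traffic with a 70% read mix and an 80% hit rate
# leaves roughly 264 IOPS for the back-end array to absorb.
print(backend_iops(600))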
Infrastructure is important, too. Fast storage systems and their associated media and devices, including SSDs, are only as effective as the IO network or interface that connects them and their controllers to the servers.
Avoid Speed Bumps
In general, avoid introducing aggravation into the VDI environment while aggregating or consolidating PDI workloads. Strive to remove complexity and enable productivity rather than simply moving problems from one location to another. Productive VDI environments need fast networks to access fast servers with fast storage, so look for and remove barriers, speed bumps, bottlenecks and other points of instability.
Load-balancing tools are also important within the network, as well as on servers and in storage systems, to avoid aggravation.
Agencies implementing VDI should be careful when using rules of thumb or making decisions based on generic benchmarks or workload simulations that do not reflect their specific environment. A better plan is for an agency to tailor its strategy to its own network, applications and workload.
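One way to tailor sizing to a specific environment is to collect per-desktop IOPS samples from monitoring tools already in place and size against a high percentile rather than a rule of thumb. The sketch below uses Python's standard statistics module; the sample values are made up for illustration.

import statistics

def sizing_target(samples, desktops, percentile=95):
    """Size aggregate IOPS from measured per-desktop samples.

    samples: per-desktop IOPS measurements gathered from the existing
    physical desktops (the values below are illustrative only).
    """
    cut = statistics.quantiles(samples, n=100)[percentile - 1]
    return cut * desktops

measured = [12, 18, 9, 25, 31, 14, 22, 47, 11, 16, 28, 35]
print(sizing_target(measured, desktops=50))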
Implement Efficient Technologies
SSDs can be placed in different locations for various purposes. For example, an SSD can be used in a server for caching reads so that the back-end storage system can focus on write acceleration. Likewise, the back-end storage system can also include SSD devices and PCIe cards for caching and as targets for storing data.
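The server-side read cache described above can be pictured with a minimal least-recently-used sketch. This is only an illustration of the caching idea; real host-side caches add persistence, write policies and coherence with the back-end array.

from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache standing in for a server-side SSD cache."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()

    def read(self, block_id, fetch_from_backend):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # cache hit, no back-end IO
            return self.blocks[block_id]
        data = fetch_from_backend(block_id)     # cache miss, back-end read
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

cache = ReadCache(capacity_blocks=2)
fetch = lambda block_id: "data-for-" + block_id
cache.read("blk1", fetch)
cache.read("blk1", fetch)   # second read is served from the cache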
Leverage data footprint reduction techniques, such as real-time compression, data deduplication, single-instance storage, thin provisioning and linked clones, to maximize use of storage space.
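As a simple illustration of how deduplication and single-instance storage shrink a VDI data footprint, the sketch below hashes blocks of data and stores each unique block only once. The block contents and counts are illustrative assumptions.

import hashlib

def dedupe(blocks):
    """Store each unique block once, keyed by its content hash.

    Returns the unique-block store and a recipe (list of hashes) from
    which the original stream can be rebuilt.
    """
    store, recipe = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)     # single-instance storage
        recipe.append(digest)
    return store, recipe

# Ten cloned desktop images that share the same OS block dedupe to one copy.
blocks = [b"base-os-image-block"] * 10 + [b"user-profile-block"]
store, recipe = dedupe(blocks)
print(len(blocks), "logical blocks ->", len(store), "unique blocks stored")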