At the heart of the cloud sits the data center and your storage ecosystem. This is the central point where all information is gathered, and then distributed to other data centers or to the end-user.
Because of these new initiatives and new ways to deliver data, the data center has been forced to evolve to support more agile and scalable platforms. Part of the conversation revolves around unified computing systems, while the other part revolves around something even more specific: storage. Today’s infrastructure is being tasked with supporting many more applications, users and workloads. Because of this, the storage infrastructure of a data center – especially one that’s cloud-facing – must be adaptable and capable of intelligent data management. So, many storage vendors have evolved their solutions into more efficient systems capable of much more, helping to support these new IT and business demands.
- SSD and flash. There is a growing argument around this technology: will it take over all storage, or is it still a niche player? The truth is that SSD and flash are still somewhat pricey and are really designed to play a specific role within storage. For workloads that require very high IOPS – VDI or database processing, for example – working with SSD or flash systems may be the right move. Organizations can also use flash or SSD to offload heavy cycles from their primary spinning disks. In many cases, a good array can offload 80%-90% of the IOPS from the spinning disks behind the controller.
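To make the offload idea concrete, here is a toy Python sketch of a tiering heuristic – not any vendor's actual algorithm. It places the most IOPS-hungry workloads on a limited flash tier and leaves the rest on spinning disk; the workload names and numbers are purely illustrative.

```python
def place_on_flash(workloads, flash_capacity_gb):
    """Toy tiering heuristic: fill flash with the hottest workloads
    (highest IOPS per GB) until capacity runs out; the rest stay on
    spinning disk."""
    flash, disk, used = [], [], 0
    # Sort by "heat" (IOPS per GB), hottest first.
    for name, iops, size_gb in sorted(
        workloads, key=lambda w: w[1] / w[2], reverse=True
    ):
        if used + size_gb <= flash_capacity_gb:
            flash.append(name)
            used += size_gb
        else:
            disk.append(name)
    return flash, disk

# Illustrative workloads: (name, peak IOPS, size in GB)
workloads = [
    ("vdi-boot-images", 50_000, 400),    # very hot, small footprint
    ("oltp-database",   30_000, 800),    # hot
    ("file-shares",      2_000, 5_000),  # cold bulk data
]
flash, disk = place_on_flash(workloads, flash_capacity_gb=1_200)
```

With these made-up numbers, the two hot workloads land on flash and carry the vast majority of the total IOPS – which is the effect behind the 80%-90% offload figure above.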
- Unified computing systems. Since efficiency plays a big part in any data center environment, many vendors have been integrating storage solutions into a unified computing system. And for good reason, too. Using FlexPod (Cisco, NetApp, VMware) or vBlock (Cisco, EMC, VMware) as validated designs, administrators can launch entire systems which are robust and capable of direct scalability.
- Replication. A big part of cloud computing and storage is the process of data distribution and replication. New storage systems must be capable of not only managing data at the primary site – they must also be able to replicate that information efficiently to other locations. Why? There is a direct need to manage branch offices, remote sites, other data centers and, of course, disaster recovery. Setting up the right replication infrastructure means managing bandwidth, scheduling, and what data is actually pushed out.
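Those three levers – bandwidth, scheduling, and what data gets pushed – can be sketched as a simple replication policy. This is a hypothetical model in Python, not any storage vendor's API; the site name, schedule, and link speed are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class ReplicationJob:
    """Hypothetical replication policy: where to replicate, when to
    run, and a bandwidth cap so the WAN link isn't saturated."""
    target_site: str
    schedule_cron: str                 # e.g. "0 2 * * *" = nightly, off-peak
    bandwidth_mbps: int                # throttle for the WAN link
    changed_blocks_only: bool = True   # push deltas, not full copies

def transfer_time_hours(job, changed_gb):
    """Rough estimate of how long one replication window needs."""
    megabits = changed_gb * 8 * 1000   # GB -> gigabits -> megabits
    seconds = megabits / job.bandwidth_mbps
    return seconds / 3600

dr = ReplicationJob("dr-site", "0 2 * * *", bandwidth_mbps=200)
# 150 GB of changed blocks over a 200 Mbps throttled link:
hours = transfer_time_hours(dr, 150)   # about 1.7 hours
```

A back-of-the-envelope calculation like this is exactly what "managing bandwidth and scheduling" means in practice: the changed data has to fit inside the replication window the schedule allows.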
- Multi-tenancy. Many storage vendors have now applied storage virtualization practices to their product strategy. The idea is simple – with one controller, split up services to give others access to a locked-down instance of storage. Storage manufacturers are logically segmenting physical controllers and delivering private virtual arrays to sub-admins. Now, instead of purchasing separate controllers and arrays for multiple corporate departments, storage administrators can split up one controller and still manage the entire environment.
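A minimal Python sketch of the concept, assuming a simple quota model (the tenant names and capacities are illustrative, not any vendor's implementation): one physical controller is carved into private virtual arrays, and each tenant can only see and fill its own slice.

```python
class StorageController:
    """Toy model of one physical controller logically segmented into
    private virtual arrays, each with its own capacity quota."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.tenants = {}  # tenant name -> {"quota": GB, "used": GB}

    def provision(self, tenant, quota_gb):
        # A new virtual array can't exceed the physical capacity.
        allocated = sum(t["quota"] for t in self.tenants.values())
        if allocated + quota_gb > self.capacity_gb:
            raise ValueError("controller capacity exhausted")
        self.tenants[tenant] = {"quota": quota_gb, "used": 0}

    def write(self, tenant, size_gb):
        # Each tenant is locked to its own virtual array's quota.
        t = self.tenants[tenant]
        if t["used"] + size_gb > t["quota"]:
            raise ValueError(f"{tenant} quota exceeded")
        t["used"] += size_gb

ctrl = StorageController(capacity_gb=100_000)
ctrl.provision("finance", 20_000)      # finance department's slice
ctrl.provision("engineering", 50_000)  # engineering's slice
ctrl.write("finance", 5_000)
```

The point of the sketch is the economics: one controller object, many isolated slices, no per-department hardware purchase.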
- Data deduplication. Control over the actual data within the storage environment has always been a big task as well. Storage resources aren’t only finite – they’re expensive. So, data deduplication can help manage data that sits on the storage array as well as information being used by other systems. For example, instead of storing 100 copies of a 20 MB attachment, the storage array would be intelligent enough to store the file once and create 99 pointers to it. If a change is made to the file, the system is smart enough to log that change and point the modified copy at a new file.
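The attachment example can be sketched as a tiny content-addressed store in Python – a simplified illustration of the principle, not a real array's dedup engine. Identical content is stored once, every logical file is just a pointer to it, and an edited copy gets its own new blob.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical data is kept once,
    and each logical file is a pointer to the shared copy."""

    def __init__(self):
        self.blobs = {}     # content hash -> raw bytes (one physical copy)
        self.pointers = {}  # file name -> content hash (logical reference)

    def put(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        # Store the bytes only if this content has never been seen.
        if digest not in self.blobs:
            self.blobs[digest] = data
        self.pointers[name] = digest

    def get(self, name):
        return self.blobs[self.pointers[name]]

store = DedupStore()
attachment = b"quarterly-report-bytes" * 1000  # stand-in for a 20 MB file
# 100 users "save" the same attachment...
for i in range(100):
    store.put(f"user{i}/report.pdf", attachment)
# ...but only one physical copy exists; the other 99 are pointers.
# If one user edits their copy, a second blob appears for the new content:
store.put("user0/report.pdf", attachment + b" (edited)")
```

Real arrays typically deduplicate at the block level rather than per file, but the hash-and-point mechanism is the same idea.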
In developing future systems, administrators will look for platforms which are highly scalable and can manage large amounts of information. Whether it’s big data, a distributed file system, cloud computing or just the user environment – the storage infrastructure will always play an important role. At MTM, we believe the idea will always revolve around ease of management and control over the data. Reach out to MTM today for help developing a solid storage platform – while always planning for the future, since data growth will be an inevitable part of today’s cloud environment.