Complex Made Simple

Big data means opportunities, not problems, for MENA firms

Stored data is currently increasing by 80% year on year, and Middle East companies need to adopt a cost-effective, modern approach to handle big data.

By Simon Gregory, Business Development Director, CommVault

Businesses in the Middle East are under pressure to manage massive amounts of complex data. Information levels are estimated to be growing at up to 80% year on year, and the biggest challenge comes from the dramatic increase in unstructured data from emerging sources – desktops and laptops, audio/visual files, images, databases, social media and a variety of other data types that are prominent in an organisation but frequently managed in ‘silos’.

This unrelenting growth is a major force driving the ‘big data’ debate, which is further compounded by the universal adoption of virtualisation, the rapid shift to cloud-enabled services, the influx of mobile computing devices, demand for 24×7 operations and increasing consolidation.

Whilst big data brings with it much that is good – new ways to create information that offers real business value – it also presents a new set of challenges for the IT department, as organisations struggle to keep pace with more demanding recovery service levels and collapsing backup windows. This often leads to overloaded networks and a tendency to turn to more costly alternatives.

A fundamental issue here is simply that there isn’t enough time, resource or budget to manage, protect, index and retain massive amounts of unstructured data. The negative side effects of big data – risk, complexity and cost – clearly need to be met head on if the positive benefits are to win out.

Legacy solutions are not ‘fit for purpose’

Unfortunately, legacy data management methods and tools simply aren’t up to the task of controlling the data explosion. Each was originally created to solve an individual challenge, so multiple products have been deployed to manage backup, archive and analytics. The result is complex administration and information silos, which raise upgrade concerns and bring forward the debate over the cost of alternatives versus ongoing maintenance.

Traditional solutions also involve two stages for each protection operation: scan and collection. To perform backup, archive and file-analytic operations, each product must first scan the file system and then collect files or information from it.

Synthetic full backup, de-duplication and virtual tape library (VTL) solutions may have been introduced to reduce repository problems, but a lack of integration means these point solutions fall short in the longer term.

Typically, incremental scan times on large file systems can require more time than the actual data collection. Regularly scheduled full protection operations then exceed backup windows and demand heavy network and server resources. It’s a vicious circle.

Convergence is the way forward

There is an alternative: a unified data management strategy that collapses data collection operations into a single solution, enabling data to be copied, indexed and stored in an intelligent, virtual repository that provides an efficient and scalable foundation for e-discovery, data mining and retention.

Such an approach also enables data analytics and reporting to be performed from the index, helping to classify data and implement archive policies that tier data to lower-cost media. This also serves to reduce the total cost of ownership.
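
Index-driven tiering of this kind can be illustrated with a short sketch. This is not CommVault’s implementation – the `IndexEntry` fields and the age thresholds are hypothetical examples of an archive policy that classifies files from index metadata alone:

```python
from dataclasses import dataclass

# Hypothetical index entry; field names are illustrative, not a product API.
@dataclass
class IndexEntry:
    path: str
    size_bytes: int
    days_since_access: int

def assign_tier(entry: IndexEntry) -> str:
    """Classify an indexed file into a storage tier by access recency.
    The thresholds are arbitrary example policy values."""
    if entry.days_since_access <= 30:
        return "primary"    # fast disk
    if entry.days_since_access <= 365:
        return "nearline"   # cheaper disk or object storage
    return "archive"        # tape or cold cloud tier

# Classify from the index, without touching the file system again.
index = [
    IndexEntry("/data/report.xlsx", 2_000_000, 12),
    IndexEntry("/data/2019/scan.tif", 50_000_000, 400),
]
for e in index:
    print(e.path, "->", assign_tier(e))
```

Because the decision is made against the central index rather than by re-scanning storage, the same metadata can drive reporting and archive moves in one place.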

The advantages here are immediately clear. Built-in intelligent data collection and classification help to reduce scan times, which in turn allows companies to stay within incremental backup windows. A single pass that collects data for backup, archive and reporting together also reduces server load and the number of operations.
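
The single-pass idea can be sketched in a few lines. This is a simplified illustration, not any vendor’s code: one walk of the file system produces a record per file, and each consumer (backup, archive selection, reporting) works from that shared scan instead of running its own:

```python
import os
import tempfile

def single_pass(root, consumers):
    """Scan the tree once; hand each file record to every consumer,
    replacing the separate scans of standalone backup/archive/report tools."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            info = os.stat(path)
            record = {"path": path, "size": info.st_size}
            for consume in consumers:
                consume(record)

# Three illustrative consumers sharing the one scan.
backup, large_files, stats = [], [], {"files": 0, "bytes": 0}

def backup_consumer(rec):
    backup.append(rec["path"])

def archive_consumer(rec):
    if rec["size"] > 1024:          # example archive-candidate policy
        large_files.append(rec["path"])

def report_consumer(rec):
    stats["files"] += 1
    stats["bytes"] += rec["size"]

# Demo on a throwaway directory.
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "small.txt"), "w") as f:
        f.write("x")
    with open(os.path.join(root, "big.bin"), "wb") as f:
        f.write(b"\0" * 2048)
    single_pass(root, [backup_consumer, archive_consumer, report_consumer])

print(stats["files"])  # 2 – both files seen in a single walk
```

With three standalone products, the same tree would be walked three times; here the scan cost is paid once, however many consumers are registered.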

Integration, source-side de-duplication and synthetic full backup then further reduce the network load, whilst a single index immediately breaks down the silos of information.

Instead of moving the pain point, a converged solution creates a single process that can reduce the combined time typically required to back up, archive and report by more than 50% compared with traditional methods, and delivers the simplified management tools required to affordably protect, manage and access data on systems that have become ‘too big’.

Taking control of data mountains with ‘CORE’ strategy

Whilst there are many ways to create big data, organisations that want to take control of the data mountain would be well advised to adopt a ‘Copy Once, Re-use Extensively’ (CORE) strategy if they want to manage big data cost-effectively in the long term. The key benefits of CORE are simple:
• Process data once
• Store data once
• Retain data once
• Search data from one place
• Centralise policy management
• Automate tiering of data while maintaining hardware and storage flexibility
• Synchronise data deletion and automate space reclamation
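
The ‘process once, store once’ half of that list is essentially content-addressed storage, and can be sketched in a toy form. This is an illustration of the principle only, not CommVault’s repository design – the class and method names are invented:

```python
import hashlib

class CoreStore:
    """Toy content-addressed repository: each unique piece of data is
    stored once; further copies only add an index entry, so everything
    is searchable from one place. Illustrative only."""
    def __init__(self):
        self.blocks = {}   # content hash -> data, stored once
        self.index = {}    # logical name -> content hash

    def ingest(self, name: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # copy once
        self.index[name] = digest              # re-use extensively

    def search(self, name: str) -> bytes:
        return self.blocks[self.index[name]]

store = CoreStore()
store.ingest("mail/q1.pst", b"quarterly mail archive")
store.ingest("backup/q1.pst", b"quarterly mail archive")  # duplicate content
print(len(store.blocks))  # 1 – one physical copy behind two references
```

Deleting a logical name would then only drop an index entry; the physical block is reclaimed once no entry references it, which is the synchronised-deletion and space-reclamation point above.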

There is no doubt that many organisations are walking a fine line between over-collection of data, which drives up review costs, and under-collection, which risks missing key information – perhaps located in one of the emerging data sources – a critical issue in today’s world of information on demand, regulation and compliance.

What companies should focus on is a single platform that enables those working with the information to intelligently manage and protect enormous amounts of data across applications, hypervisors, operating systems and infrastructure from a single console.

A policy-driven approach to protecting, storing and recovering vast amounts of data whilst automating administration will always be the best way to maximise IT productivity and reduce overall support costs.

Eliminating manual processes and seamlessly tiering data to physical, virtual and cloud storage helps to decrease administration costs whilst increasing operational efficiencies – enabling IT departments to ‘do more with less’.

A single data store would empower businesses to streamline data preservation and eliminate data redundancy during the review process, which is now considered one of the major causes of skyrocketing data management costs.

The ability to more easily navigate, search and mine data could fundamentally mean that big data is finally viewed as an asset to the business, not a hindrance.