By Simon Gregory, Business Development Director, CommVault
Businesses in the Middle East are under pressure to manage massive amounts of complex data. Information levels are estimated to be growing at up to 80% year on year, and the biggest challenge comes from the dramatic increase in unstructured data from emerging sources - desktops and laptops, audio/visual files, images, databases, social media and a variety of other data types that are prominent in an organisation but frequently managed in 'silos'.
This unrelenting growth is a major force driving the 'big data' debate, which is further compounded by the universal adoption of virtualisation, the rapid shift to cloud-enabled services, the influx of mobile computing devices, demand for 24x7 operations and increasing consolidation.
Whilst big data brings with it much that is good - new ways to create information that offers real business value - it also presents a new set of challenges for the IT department, as organisations struggle to keep pace with more demanding service levels for recovery and collapsing backup windows. This often leads to overloaded networks and a tendency to turn to more costly alternatives.
A fundamental issue here appears to be that there simply isn't the time, resource or budget to manage, protect, index and retain massive amounts of unstructured data. The negative side effects of big data - risk, complexity and cost - clearly need to be met head on if the positive benefits are to win out.
Legacy solutions are not 'fit for purpose'
Unfortunately, legacy data management methods and tools simply aren't up to the task of managing or controlling the data explosion. Originally created to solve individual challenges, they have led to multiple products being deployed to manage backup, archive and analytics. The result is complex administration and information silos, which in turn raise upgrade concerns and bring forward the debate around the cost of alternatives versus ongoing maintenance.
Traditional solutions also involve two stages for each protection operation: scan and collection. To perform backup, archive and file analytic operations, each product must first scan the file system and then collect the relevant files or information.
Synthetic full backups, de-duplication and virtual tape library (VTL) solutions may have been introduced to reduce repository problems, but a lack of integration capabilities causes these solutions to fall short in the longer term.
Typically, incremental scan times on large file systems can require more time than the actual data collection. Regularly scheduled full protection operations then exceed backup windows and demand heavy network and server resources to manage. It's a vicious circle.
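The two-stage pattern above can be illustrated with a minimal sketch (the function and file layout are hypothetical, for illustration only). Note that the scan stage must stat every file in the tree even when only a handful have changed, which is why scan time dominates on large file systems:

```python
import os
import shutil

def incremental_backup(source_root, dest_root, last_run_ts):
    """Sketch of a traditional two-stage protection operation.

    Stage 1 (scan) walks the whole tree and checks every file's
    metadata, regardless of how many files changed.
    Stage 2 (collect) copies only the files the scan flagged.
    """
    # Stage 1: scan - every file is visited, even if unchanged.
    scanned = 0
    changed = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            scanned += 1
            if os.path.getmtime(path) > last_run_ts:
                changed.append(path)

    # Stage 2: collect - copy only what the scan flagged.
    for path in changed:
        rel = os.path.relpath(path, source_root)
        target = os.path.join(dest_root, rel)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(path, target)

    return scanned, len(changed)
```

On a quiet day the collect stage may copy almost nothing, yet the scan count stays equal to the total file population - the fixed cost the article is describing.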
Convergence is the way forward
There is an alternative approach: adopt a unified data management strategy that collapses data collection operations into a single solution, enabling the copying, indexing and storage of data in an intelligent, virtual repository that provides an efficient and scalable foundation for e-Discovery, data mining and retention.
Such an approach also enables data analytics and reporting to be performed from the index, helping to classify data and implement archive policies that tier data down to lower-cost media. This also serves to reduce the total cost of ownership.
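An archive policy of the kind described can be sketched as a simple classification over the index. The tier names and age thresholds below are hypothetical assumptions, not a reference to any particular product's policy engine:

```python
from datetime import datetime, timedelta

# Hypothetical tiers, ordered from newest data to oldest.
TIERS = [
    (timedelta(days=30), "primary"),    # accessed within the last month
    (timedelta(days=365), "nearline"),  # accessed within the last year
    (timedelta.max, "archive"),         # older: move to lowest-cost media
]

def classify(index_entries, now):
    """Assign each indexed item to a storage tier by last-access age.

    index_entries: iterable of (path, last_access_datetime) pairs,
    as might be read from a unified content index.
    """
    placement = {}
    for path, last_access in index_entries:
        age = now - last_access
        for threshold, tier in TIERS:
            if age <= threshold:
                placement[path] = tier
                break
    return placement
```

Because the decision is driven entirely from the index, the policy runs without a second scan of the file system - the point of collapsing collection into a single operation.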
The advantages here are immediately clear.

Steven Bond, Reporter