There’s good news and bad news in the world of cloud storage. The good news: organizations big and small have a plethora of choices when it comes to moving data from on-premises storage systems to the cloud to slash CAPEX, achieve greater agility and give users better opportunities to collaborate. One look at the smorgasbord of options on the market confirms that companies have an embarrassment of riches.
While this presents enterprises with a dizzying array of choices, there’s a dirty little secret that plagues the cloud storage industry. Standing in the way of a more aggressive shift from on-premises storage systems to the cloud are considerable challenges posed by the synchronization, migration and/or backup of files. As more and more organizations are finding out, this is hardly a slam dunk.
When you consider that many enterprises store tens to hundreds of millions of files that are often distributed among a wide variety of storage systems and locations, the process of syncing, migrating and backing up this volume of data – that can reach terabytes or petabytes – is daunting.
Unfortunately, not all files are structured the same. They have unique properties, metadata, versions and permissions. As a result, many organizations are discovering that moving data from NFS, SharePoint or enterprise content management (ECM) platforms to the cloud is a lot harder than anticipated.
The Complexities of Content Management
If organizations only had to deal with a single storage repository for all of their structured and unstructured content, life would be considerably simpler for IT organizations and the user communities they support. But that’s nowhere close to reality today. Significant CAPEX and OPEX investments have been poured into various on-premises storage systems over the decades, often driven by the decentralized needs of individual business units and/or regional offices spanning domestic and international geographies. This has created distributed content silos that, more often than not, suffer from incompatibilities when it comes to sharing files.
Image and portal-based enterprise content management (ECM) systems have made valiant efforts to meet increasing demands for greater collaboration between distributed workforces that have clamored for shared access to files shielded behind the corporate firewall. But the limited success of ECMs has given rise to users taking matters into their own hands. In what might best be described as the “bring your own storage” (BYOS) revolution, users have revolted by turning to more collaborative options made possible by cloud storage systems such as Box, Dropbox, Google Drive, Office 365 and many more. To state the obvious, this has raised security concerns among IT teams responsible for managing and safeguarding company data.
The trifecta of siloed on-premises storage systems, incompatible ECM systems and cloud-based storage services has proven to be a highly complex IT management challenge. Consolidating these systems into a coherent “hybrid” platform solution continues to be an elusive undertaking, especially when you examine the requirements of managing millions of files, folders, permissions, metadata, versions and file locks between systems that don’t speak a common language. To pour salt into the wound, consolidation efforts are compounded by incompatible file names due to “long paths” or illegal characters when migrating content from one system to another.
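The “long path” and illegal-character problems mentioned above are concrete enough to sketch in code. The snippet below is a minimal illustration in Python; the character set and the 260-character path limit are assumptions (they vary by target platform — Windows, SharePoint and each cloud service enforce their own rules), but it shows the kind of pre-flight check a migration tool performs before content is moved:

```python
import re

# Characters commonly rejected by target systems such as Windows or
# SharePoint. This set is illustrative -- the exact rules vary by platform.
ILLEGAL_CHARS = re.compile(r'[<>:"/\\|?*]')
MAX_PATH = 260  # classic Windows MAX_PATH; cloud services have their own caps

def sanitize_name(name: str, replacement: str = "_") -> str:
    """Replace characters the target system rejects and strip a
    trailing dot or space, which some systems also disallow."""
    return ILLEGAL_CHARS.sub(replacement, name).rstrip(". ")

def check_path(path: str) -> list[str]:
    """Return a list of problems that would block migration of this path."""
    problems = []
    if len(path) > MAX_PATH:
        problems.append(f"path exceeds {MAX_PATH} characters ({len(path)})")
    for part in path.split("/"):
        if part and ILLEGAL_CHARS.search(part):
            problems.append(f"illegal characters in segment: {part!r}")
    return problems
```

Run against millions of files, a report from a checker like this tells administrators which names must be remapped before a migration job can succeed.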
The Need for a New Hybrid Content Architecture
Market researchers at Gartner Group have looked long and hard at this problem and, in a recent research note – “Cool Vendors in Content Management 2015” – they concluded, “A hybrid content architecture can help with the simplicity and ease of synchronization/migration of content across multiple content platforms.”
That said, this raises a fundamental question: How can a hybrid content architecture resolve incompatibilities between critical ECM systems, SharePoint, NFS, SAN or NAS systems and automate file migration, synchronization or backups to a cloud service?
The short answer is that a new middleware layer is needed to serve as an intelligent membrane between on-premises and cloud storage systems in order to automate the migration and bi-directional synchronization of files, folder hierarchies, properties, versions, permission mappings, user accounts and metadata. The alternative is engaging expensive third-party integration teams or assigning vital IT staff to what has historically been a time- and resource-intensive endeavor.
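To make the bi-directional synchronization half of this concrete, here is a minimal sketch of how such a middleware layer might decide what to do with each file. The version tokens and the three-way comparison below are illustrative assumptions, not any vendor’s actual algorithm; a production engine would also handle permissions, metadata and file locks:

```python
def plan_sync(local: dict, remote: dict, last_synced: dict) -> dict:
    """Decide per-file actions for bi-directional synchronization.

    Each argument maps a file path to a version token (e.g., a
    modification time or an ETag); `last_synced` is the state recorded
    at the previous sync. This is a simplified three-way comparison,
    not a full conflict-resolution engine.
    """
    actions = {}
    for path in set(local) | set(remote):
        l, r, base = local.get(path), remote.get(path), last_synced.get(path)
        if l == r:
            continue              # already in sync
        if r == base:
            actions[path] = "upload"    # only the local copy changed
        elif l == base:
            actions[path] = "download"  # only the remote copy changed
        else:
            actions[path] = "conflict"  # both sides changed
    return actions
```

The point of the three-way comparison is that a plain two-way diff cannot tell “locally edited” from “remotely deleted”; remembering the last synced state is what makes the direction of each change unambiguous.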
CIOs, enterprise architects, IT managers and storage administrators alike are increasingly aware of this challenge. On the one hand, the advantages of cloud-based storage are well understood: Forklift storage upgrades can be eliminated, CAPEX costs can be slashed, workforce collaboration can be dramatically improved and users can access files and content from a much wider array of computing devices. At the same time, on-premises storage systems aren’t going away by any stretch of the imagination; compliance and security requirements dictate this reality. So, moving forward, hybrid storage environments are rapidly becoming the “new normal.”
Implementing “File Logistics” in the Real World
This concept of a new middleware layer that serves as an intelligent membrane between on-premises and cloud storage systems is not just a theoretical abstraction of the future. It’s being implemented in thousands of organizations across multiple markets today.
A case in point is Teach for America (TFA), a progressive organization that enlists, develops and mobilizes educators of the future. TFA has more than 2,500 users who require access to content that used to be siloed across 50 regional offices.
Because decentralized storage made it difficult to access and share organizational data that used to reside on local Network File Systems, employees began using cloud storage services on their own with no oversight or security from IT. This triggered a decision by TFA’s IT department to migrate its users from on-premises NFS to Box (News - Alert) to give users ownership of their content and enable secure sharing inside and outside of the organization.
What TFA quickly learned, however, is that moving this content manually and organizing it into individual Box accounts became a time-consuming process. Initially, users were allowed to pick and choose what they wanted to move to Box on their own, but some users had more than 10GB of data. What TFA quickly realized is that it needed something that could easily sync its local servers to Box, while maintaining existing permissions and file structures.
At the recommendation of Box, TFA is using an intelligent middleware solution that quickly bridges on-premises and cloud storage services thanks to user home-drive mapping and advanced folder grouping tools. This has allowed TFA to migrate its user base and replicate entire folder hierarchies to Box with a few mouse clicks.
The ability to automate the creation of folders directly from shared drives has saved TFA thousands of man-hours of integration time. Without its new file logistics middleware solution, TFA would have had to move content from its shared folders to Box and then manually recreate folders on the Box side.
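Replicating a folder hierarchy like this boils down to walking the local tree and mirroring each directory on the remote side. The sketch below is a generic illustration: `create_folder(parent_id, name)` is a hypothetical stand-in for whatever folder-creation call the cloud service’s API actually provides:

```python
import os

def replicate_tree(local_root: str, remote_root_id, create_folder) -> dict:
    """Walk a local share and recreate its folder hierarchy remotely.

    `create_folder(parent_id, name) -> new_id` is a stand-in for the
    cloud service's real API call. Returns a mapping of local paths to
    remote folder IDs, which later file uploads can use to place each
    file in the right remote folder.
    """
    mapping = {local_root: remote_root_id}
    for dirpath, dirnames, _filenames in os.walk(local_root):
        dirnames.sort()  # deterministic creation and recursion order
        parent_id = mapping[dirpath]
        for name in dirnames:
            child = os.path.join(dirpath, name)
            mapping[child] = create_folder(parent_id, name)
    return mapping
```

Because the walk is top-down, every directory’s parent is guaranteed to exist remotely before its children are created, which is exactly the property a manual recreation of folders has to enforce by hand.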
Equally important to TFA is its newfound ability to track the state of all migration processes. If an error is encountered during a job, it is automatically traced to the specific folder and file, so the migration can resume from that point rather than start over from the beginning. Doing this manually would again have consumed countless hours.
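Per-file progress tracking of this kind can be sketched as a simple checkpoint file. The `upload` callable and the JSON state format below are illustrative assumptions, not the actual product’s mechanism, but they show why a resumable job beats restarting from scratch:

```python
import json
import os

def migrate(files, upload, state_path="migration_state.json"):
    """Migrate files, checkpointing progress so a failed or interrupted
    job resumes where it left off instead of starting over.

    `upload(path)` stands in for the actual transfer call and is
    assumed to raise an exception on error.
    """
    state = {"done": [], "failed": {}}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)  # resume from the previous run's state
    done = set(state["done"])
    for path in files:
        if path in done:
            continue  # already migrated in a previous run
        try:
            upload(path)
            state["done"].append(path)
            state["failed"].pop(path, None)  # clear any earlier failure
        except Exception as exc:
            # record the specific file that failed, not just "job failed"
            state["failed"][path] = str(exc)
        with open(state_path, "w") as f:
            json.dump(state, f)  # checkpoint after every file
    return state
```

On a rerun, completed files are skipped and only the recorded failures are retried — the behavior that, at TFA’s scale, turns a restarted multi-day job into a short incremental one.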
If enterprises are, indeed, going to manage their rapidly growing volumes of structured and unstructured data with a hybrid content architecture, they’re going to have to find the means to resolve incompatibilities between their legacy ECM systems, SharePoint, NFS, SAN and/or NAS systems when migrating and syncing to a cloud service. A file logistics middleware layer is going to become an increasingly critical enabling technology that will be key to successful hybrid content environments.
About the Author: Steve is a 24-year veteran of the Information Technology field, focusing primarily on web technology, workflow, imaging, and ECM solutions. He has designed, built, and delivered ECM applications for several Fortune 500 companies, including some of the country’s largest insurance and financial companies.
Edited by Maurice Nagle