Data Storage Group, an industry leader in data backup and disaster recovery software, announced today that the United States Patent and Trademark Office has awarded the company US Patent 7,860,843 for the firm's core data deduplication technology. DataStor's innovative software-based approach, known as Adaptive Content Factoring™, is a technological breakthrough that delivers significant advancements and operational efficiencies in data backup and archival storage for organisations ranging from small to medium-sized businesses (SMBs) up to large enterprises.
Today's organisations face significant challenges in meeting long-term data retention requirements while maintaining compliance with the numerous state and federal regulations and guidelines that require firms to keep necessary information available in a usable form. Adding to this challenge is the rapid growth of digital information: documents are richer in content and often reference related works, resulting in a tremendous amount of information to manage. The increasing volume, complexity, and cost of data backup and disaster recovery are causing many firms to rethink traditional data protection strategies, driving the need for innovative and affordable data management approaches that simplify and optimise data storage operations. By eliminating redundant data, deduplication is an essential step in streamlining data backup and archival storage, increasing efficiency and compliance while reducing costs.
Brian Dodd, CEO of Data Storage Group, commented on the issuance of the patent, "For DataStor, this patent recognises our technological contribution to the industry and represents the culmination of years of hard work by a team of dedicated and very talented individuals. We are extremely pleased to receive this patent and to have the associated exclusive rights to offer this core foundation of groundbreaking technology to the industry."
Mike Moore, company co-founder and CTO, explains, "Unlike other, more typical deduplication technologies that chunk data into tiny blocks and require massive indexes to identify and manage common content, our solution decreases backup storage requirements by efficiently identifying and eliminating sub-file redundancies at the source, thereby optimising the data before it is transmitted across networks. This technology has demonstrated substantial improvements in bandwidth utilisation, providing much quicker and more efficient backups – as much as 20 times faster than traditional backups."
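For readers who want a concrete picture, the sketch below illustrates the general source-side deduplication pattern the quote describes: the client hashes each segment of a file locally, asks the backup target which segments it already holds, and transmits only the new ones. The fixed-size segmentation, SHA-256 digests, and the InMemoryServer class are illustrative assumptions for this sketch only, not DataStor's patented Adaptive Content Factoring algorithm.

    import hashlib
    from datetime import datetime, timezone

    SEGMENT_SIZE = 4 * 1024 * 1024  # illustrative fixed-size segments

    class InMemoryServer:
        """Stand-in backup target: each unique segment is stored only once."""
        def __init__(self):
            self.segments = {}   # digest -> bytes
            self.manifests = {}  # path -> list of (timestamp, manifest)

        def has_segment(self, digest):
            return digest in self.segments

        def store_segment(self, digest, data):
            self.segments[digest] = data

        def store_manifest(self, path, manifest):
            stamp = datetime.now(timezone.utc)
            self.manifests.setdefault(path, []).append((stamp, manifest))

    def backup(server, path):
        """Deduplicate at the source: hash each segment locally and send
        only segments the server has not already seen."""
        manifest = []
        with open(path, "rb") as f:
            offset = 0
            while True:
                chunk = f.read(SEGMENT_SIZE)
                if not chunk:
                    break
                digest = hashlib.sha256(chunk).hexdigest()
                if not server.has_segment(digest):
                    # Redundant data never leaves the client machine.
                    server.store_segment(digest, chunk)
                manifest.append((offset, digest))
                offset += len(chunk)
        server.store_manifest(path, manifest)

Because only unseen segments cross the wire, repeated backups of largely unchanged files consume a small fraction of the network bandwidth a full copy would require, which is the effect the quoted speed-up claims refer to.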
By distributing the source-side deduplication process across a network of computers, the power of distributed systems is harnessed for even greater performance and scalability. The approach requires far less compute-intensive infrastructure, and the solution scales across network configurations ranging from laptop computers to large networks of enterprise servers. The technology also delivers a fully integrated virtual file system that lets users easily restore, and even directly access through standard interfaces, data for all managed points in time, empowering SMB and enterprise users to meet their most stringent data storage and retention requirements at an affordable cost.
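The point-in-time access described above can be pictured as a read-only view over stored backup manifests. Continuing the hypothetical InMemoryServer sketch from the previous example, the helper below reassembles a file as it existed at a chosen backup time; a real product would surface this through standard filesystem interfaces rather than a Python function.

    def restore_as_of(server, path, as_of):
        """Reconstruct `path` as it existed at the newest backup taken
        at or before `as_of` (a timezone-aware datetime)."""
        versions = [(stamp, manifest)
                    for stamp, manifest in server.manifests.get(path, [])
                    if stamp <= as_of]
        if not versions:
            raise FileNotFoundError(f"no backup of {path} at or before {as_of}")
        _, manifest = max(versions, key=lambda v: v[0])
        # Reassemble from deduplicated segments; segments shared across
        # many backups are stored, and fetched, only once each.
        return b"".join(server.segments[digest] for _, digest in manifest)

Because every backup is just a manifest of segment references, keeping many points in time costs little more than storing the segments that actually changed between them.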