In the age of data science, the rapidly growing volume of data is a major concern for computing and data storage. Duplicated or redundant data is a central challenge in data science research. Data Deduplication Approaches: Concepts, Strategies, and Challenges shows readers the various methods that can be used to eliminate multiple copies of the same files, as well as duplicated segments or chunks of data within those files. Because duplication keeps growing, deduplication has become an especially useful field of research for storage environments, in particular persistent data storage. The book provides an overview of the concepts and background of data deduplication approaches, then demonstrates in technical detail the strategies and challenges of real-time implementations for big data, data science, data backup, and recovery. It also covers future research directions, case studies, and real-world applications of data deduplication, with a focus on reduced storage, backup, recovery, and reliability.
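The core idea mentioned above, keeping only one copy of each duplicated chunk, can be illustrated with a short sketch. The snippet below is not taken from the book; it is a minimal Python illustration, assuming simple fixed-size chunking and an in-memory dictionary as the chunk store, of how a deduplicating store can index chunks by their SHA-256 fingerprint and keep a per-file "recipe" of digests for reassembly.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size; real systems often use content-defined chunking


def deduplicate(data: bytes, store: dict) -> list:
    """Split data into fixed-size chunks and keep one copy of each unique chunk.

    `store` maps a chunk's SHA-256 digest to its bytes; the returned list of
    digests is the "recipe" needed to reassemble the original data.
    """
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # store only the first copy of each chunk
            store[digest] = chunk
        recipe.append(digest)
    return recipe


def reassemble(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its chunk recipe."""
    return b"".join(store[digest] for digest in recipe)


if __name__ == "__main__":
    store = {}
    original = bytes(CHUNK_SIZE) * 8 + b"unique tail"   # eight identical chunks plus a short tail
    recipe = deduplicate(original, store)
    stored = sum(len(c) for c in store.values())
    assert reassemble(recipe, store) == original
    print(f"original: {len(original)} bytes, stored: {stored} bytes, unique chunks: {len(store)}")
```

Production systems typically replace the fixed-size split with content-defined (variable-size) chunking and a persistent index, but the storage-saving principle is the same.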
Wednesday, September 2, 2020
Data Deduplication Approaches: Concepts, Strategies, and Challenges