    • 1. Granted invention patent
    • Title: System and method for removing overlapping ranges from a flat sorted data structure
    • Publication No.: US08868520B1
    • Publication date: 2014-10-21
    • Application No.: US13409315
    • Filing date: 2012-03-01
    • Inventors: Rohini Raghuwanshi; Ashish Shukla; Praveen Killamsetti
    • IPC: G06F17/30
    • CPC: G06F17/30156; G06F11/1004; G06F11/1453; G06F17/30088; G06F17/30091; G06F17/30159
    • Abstract: A system and method efficiently removes ranges of entries from a flat sorted data structure, such as a fingerprint database, of a storage system. The ranges of entries represent fingerprints that have become stale, i.e., are no longer representative of the current states of the corresponding blocks in the file system, due to various file system operations such as, e.g., deletion of a data block without overwriting its contents. A deduplication module performs an attributes intersect range calculation (AIRC) procedure on the stale fingerprint data structure to compute a set of non-overlapping, latest consistency point (CP) ranges. The output of the AIRC procedure, i.e., the set of non-overlapping, latest-CP ranges, is then used to remove the stale fingerprints associated with each deleted data block from the fingerprint database. (See the sketch after this entry.)
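The abstract names an attributes intersect range calculation (AIRC) but does not spell out its steps. As a rough illustration only, the Python sketch below flattens a set of possibly overlapping stale ranges, each tagged with a consistency point (CP), into non-overlapping sub-ranges attributed to the latest CP covering them. The StaleRange fields and the boundary sweep are assumptions for illustration, not the patented procedure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StaleRange:
    start: int   # first block number covered by the stale entries (assumed field)
    end: int     # last block number covered, inclusive (assumed field)
    cp: int      # consistency point at which the range became stale

def non_overlapping_latest_cp(ranges: List[StaleRange]) -> List[Tuple[int, int, int]]:
    """Flatten possibly overlapping stale ranges into non-overlapping
    sub-ranges, keeping the latest (highest) CP for each sub-range."""
    # Collect every boundary where the set of covering ranges can change.
    points = sorted({r.start for r in ranges} | {r.end + 1 for r in ranges})
    out: List[Tuple[int, int, int]] = []
    for lo, hi in zip(points, points[1:]):
        covering = [r.cp for r in ranges if r.start <= lo and r.end >= hi - 1]
        if covering:
            out.append((lo, hi - 1, max(covering)))  # attribute to latest CP
    # Merge adjacent sub-ranges that share the same CP.
    merged: List[Tuple[int, int, int]] = []
    for lo, hi, cp in out:
        if merged and merged[-1][2] == cp and merged[-1][1] + 1 == lo:
            merged[-1] = (merged[-1][0], hi, cp)
        else:
            merged.append((lo, hi, cp))
    return merged

if __name__ == "__main__":
    stale = [StaleRange(0, 9, cp=5), StaleRange(4, 12, cp=7)]
    print(non_overlapping_latest_cp(stale))
    # [(0, 3, 5), (4, 12, 7)]
```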
    • 3. Granted invention patent
    • Title: Fingerprints datastore and stale fingerprint removal in de-duplication environments
    • Publication No.: US08898119B2
    • Publication date: 2014-11-25
    • Application No.: US12969527
    • Filing date: 2010-12-15
    • Inventors: Alok Sharma; Praveen Killamsetti; Satbir Singh
    • IPC: G06F7/00; G06F17/00; G06F17/30; G06F3/06
    • CPC: G06F17/30156; G06F3/0608; G06F3/0641; G06F3/0683; G06F17/3015
    • Abstract: A storage server is coupled to a storage device that stores blocks of data and generates a fingerprint for each data block stored on the storage device. The storage server creates a fingerprints datastore that is divided into a primary datastore and a secondary datastore. The primary datastore comprises a single entry for each unique fingerprint, and the secondary datastore comprises entries whose fingerprints are identical to an entry in the primary datastore. The storage server merges the entries in a changelog with the entries in the primary datastore to identify duplicate data blocks in the storage device, and frees the identified duplicate blocks. The storage server stores the entries that correspond to the freed data blocks in a third datastore and overwrites the primary datastore with the entries from the merged data that correspond to unique fingerprints, creating an updated primary datastore. (See the sketch after this entry.)
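A minimal sketch of the primary/secondary split described above: one entry per unique fingerprint goes to the primary datastore, and any entry whose fingerprint already appears in the primary goes to the secondary. The Entry record and the FingerprintDatastore class are hypothetical names; real entries would also carry inode, offset, and consistency-point metadata.

```python
from typing import Dict, List, Tuple

# Hypothetical record: (fingerprint, block_id).
Entry = Tuple[str, int]

class FingerprintDatastore:
    """Primary datastore keeps one entry per unique fingerprint; the
    secondary datastore holds entries whose fingerprint already has an
    entry in the primary."""

    def __init__(self) -> None:
        self.primary: Dict[str, Entry] = {}
        self.secondary: List[Entry] = []

    def add(self, fingerprint: str, block_id: int) -> None:
        entry = (fingerprint, block_id)
        if fingerprint not in self.primary:
            self.primary[fingerprint] = entry   # first occurrence of this fingerprint
        else:
            self.secondary.append(entry)        # duplicate fingerprint goes to secondary
```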
    • 4. Granted invention patent
    • Title: Processing data of a file using multiple threads during a deduplication gathering phase
    • Publication No.: US08234250B1
    • Publication date: 2012-07-31
    • Application No.: US12561683
    • Filing date: 2009-09-17
    • Inventors: Alok Sharma; Praveen Killamsetti; Bipul Raj
    • IPC: G06F17/30
    • CPC: G06F17/3015
    • Abstract: A method and apparatus for deduplication of the files of a storage system is described. During a gathering phase, a file may be processed simultaneously by two or more threads to produce and store content identifiers for the file's data blocks. Each file may be sub-divided into multiple sub-portions, each comprising a predetermined number of data blocks, and a thread may be assigned to each sub-portion to process its data blocks. The currently assigned sub-portion of each thread may be recorded and, after a system crash, used to restart each scanner thread at its currently assigned sub-portion, minimizing the number of data blocks that are re-processed. The size of a file sub-portion may be predetermined based on the organization of the inode data structures representing the files (e.g., based on the maximum number of pointers that an indirect block in the inode data structure may contain). (See the sketch after this entry.)
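A minimal sketch of the multi-threaded gathering phase described above: each thread fingerprints the blocks of one file sub-portion and records which sub-portion it is working on, so that after a crash only the sub-portions still marked in-progress need to be re-processed. The block size, the sub-portion size, the in-memory progress map, and the use of SHA-256 as the content identifier are assumptions for illustration.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4096         # bytes per data block (assumed)
BLOCKS_PER_PORTION = 510  # blocks per sub-portion, e.g. one indirect block's pointers (assumed)

progress = {}             # sub-portion checkpoint; persisted to disk in practice

def gather_portion(path: str, portion_idx: int) -> list[str]:
    """Fingerprint every data block in one file sub-portion."""
    progress[portion_idx] = "in-progress"   # checkpoint used for crash restart
    fingerprints = []
    with open(path, "rb") as f:
        f.seek(portion_idx * BLOCKS_PER_PORTION * BLOCK_SIZE)
        for _ in range(BLOCKS_PER_PORTION):
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            fingerprints.append(hashlib.sha256(block).hexdigest())
    progress[portion_idx] = "done"
    return fingerprints

def gather_file(path: str, n_portions: int, workers: int = 4) -> list[str]:
    # Sub-portions still marked "in-progress" after a crash would be re-run.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda i: gather_portion(path, i), range(n_portions))
    return [fp for chunk in results for fp in chunk]
```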
    • 6. Invention patent application
    • Title: Fingerprints datastore and stale fingerprint removal in de-duplication environments
    • Publication No.: US20120158670A1
    • Publication date: 2012-06-21
    • Application No.: US12969527
    • Filing date: 2010-12-15
    • Inventors: Alok Sharma; Praveen Killamsetti; Satbir Singh
    • IPC: G06F17/00
    • CPC: G06F17/30156; G06F3/0608; G06F3/0641; G06F3/0683; G06F17/3015
    • Abstract: A storage server is coupled to a storage device that stores blocks of data and generates a fingerprint for each data block stored on the storage device. The storage server creates a fingerprints datastore that is divided into a primary datastore and a secondary datastore. The primary datastore comprises a single entry for each unique fingerprint, and the secondary datastore comprises entries whose fingerprints are identical to an entry in the primary datastore. The storage server merges the entries in a changelog with the entries in the primary datastore to identify duplicate data blocks in the storage device, and frees the identified duplicate blocks. The storage server stores the entries that correspond to the freed data blocks in a third datastore and overwrites the primary datastore with the entries from the merged data that correspond to unique fingerprints, creating an updated primary datastore. (See the sketch after this entry.)
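Complementing the sketch after entry 3, the following illustrates the changelog-merge step of the same abstract: changelog entries whose fingerprints already exist in the primary datastore identify duplicate blocks that can be freed, their entries are set aside in a third datastore, and the remaining unique entries form the updated primary. Function and field names are hypothetical.

```python
from typing import Dict, List, Tuple

# Hypothetical record: (fingerprint, block_id).
Entry = Tuple[str, int]

def merge_changelog(primary: Dict[str, Entry],
                    changelog: List[Entry]) -> Tuple[Dict[str, Entry], List[Entry], List[int]]:
    """Merge changelog entries into the primary datastore.

    Returns the updated primary (one entry per unique fingerprint), the
    entries moved to a third datastore for the freed blocks, and the ids
    of the blocks that can be freed (they share an existing fingerprint)."""
    new_primary = dict(primary)
    stale: List[Entry] = []
    freed_blocks: List[int] = []
    for fp, block in changelog:
        if fp in new_primary:
            # Duplicate data block: free it and set its (now stale) entry aside.
            freed_blocks.append(block)
            stale.append((fp, block))
        else:
            new_primary[fp] = (fp, block)
    return new_primary, stale, freed_blocks
```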
    • 7. Granted invention patent
    • Title: Scalable deduplication of stored data
    • Publication No.: US08086799B2
    • Publication date: 2011-12-27
    • Application No.: US12190511
    • Filing date: 2008-08-12
    • Inventors: Shishir Mondal; Praveen Killamsetti
    • IPC: G06F12/00
    • CPC: G06F3/0641; G06F3/0608; G06F3/061; G06F3/0689; G06F11/1453
    • Abstract: In a method and apparatus for scalable deduplication, a data set is partitioned into multiple logical partitions, each of which can be deduplicated independently. Each data block of the data set is assigned to exactly one partition, so that any two or more data blocks that are duplicates of each other are always assigned to the same logical partition. A hash algorithm generates a fingerprint of each data block in the volume, and the fingerprints are subsequently used to detect possible duplicate data blocks as part of deduplication. In addition, before deduplication, the fingerprints are used to ensure that duplicate data blocks are sent to the same logical partition: a portion of each data block's fingerprint serves as a partition identifier that determines the partition to which the block is assigned. Once blocks are assigned to partitions, deduplication can be performed on each partition independently. (See the sketch after this entry.)
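A minimal sketch of the partitioning idea in the abstract above: a portion of each block's fingerprint is used as the partition identifier, so duplicate blocks always land in the same logical partition and each partition can then be deduplicated on its own. The partition count and the choice of SHA-256 are assumptions for illustration.

```python
import hashlib

NUM_PARTITIONS = 16   # number of independent deduplication partitions (assumed)

def fingerprint(block: bytes) -> bytes:
    """Hash of the block's contents, used as its fingerprint."""
    return hashlib.sha256(block).digest()

def partition_of(block: bytes) -> int:
    """Use part of the block's fingerprint as the partition identifier,
    so identical blocks always map to the same partition."""
    return fingerprint(block)[0] % NUM_PARTITIONS

# Duplicate blocks hash to the same fingerprint, hence the same partition,
# so each partition can be deduplicated independently afterwards.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
partitions: dict[int, list[bytes]] = {}
for blk in blocks:
    partitions.setdefault(partition_of(blk), []).append(blk)
```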