    • 12. Invention Grant
    • Transferring data between storage media while maintaining host processor access for I/O operations
    • Publication number: US5210865A
    • Publication date: 1993-05-11
    • Application number: US925307
    • Filing date: 1992-08-04
    • Inventors: Scott H. Davis; William L. Goleman; David W. Thiel
    • IPC: G06F11/16; G06F11/20
    • CPC: G06F11/2082; G06F11/16; G06F11/2087
    • A system and method for transferring data from a first storage medium to a second storage medium, each of the storage media being divided into corresponding data blocks, the method comprising steps of: (a) reading data stored in a first data block in the first storage medium, the first data block initially constituting a current data block; (b) comparing data read in the current data block to data stored in a corresponding data block in the second storage medium; (c) if the data compared in step b are identical, reading data stored in a different data block in the first storage medium, the different data block becoming the current data block, and returning to step b; (d) modifying the data stored in one of the storage media such that the data in the current data block is identical to the corresponding data in the second storage medium; and (e) rereading the data in the current data block and returning to step b.
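A minimal Python sketch of the block-by-block synchronization loop described in the claim above; it is not the patented implementation. Modeling each medium as a list of byte blocks, the block indexing, and the copy direction (first medium into second) are illustrative assumptions.

```python
from typing import List

def sync_media(first: List[bytes], second: List[bytes]) -> None:
    """Bring each block of `second` into agreement with `first`, block by block."""
    assert len(first) == len(second), "media are divided into corresponding blocks"
    current = 0
    while current < len(first):
        data = first[current]            # steps (a)/(e): read the current data block
        if data == second[current]:      # step (b): compare with the corresponding block
            current += 1                 # step (c): identical, a different block becomes current
            continue
        second[current] = data           # step (d): make the two blocks identical
        # The loop repeats without advancing, so the block is reread and
        # recompared (step (e)); a host write that lands between the copy and
        # the recheck is caught on the next pass.

# Usage: the second medium catches up with the first while both remain addressable.
a = [b"aaaa", b"bbbb", b"cccc"]
b = [b"aaaa", b"xxxx", b"cccc"]
sync_media(a, b)
assert a == b
```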
    • 16. Invention Grant
    • System for storing pending parity update log entries, calculating new parity, updating the parity block, and removing each entry from the log when update is complete
    • Publication number: US5819109A
    • Publication date: 1998-10-06
    • Application number: US987116
    • Filing date: 1992-12-07
    • Inventor: Scott H. Davis
    • IPC: G06F11/10; G06F11/20; G11B20/18; G06F7/00; G06F7/16
    • CPC: G06F11/1076; G06F11/1008; G06F11/1471; G11B20/1833
    • The present invention is a method of writing data to a storage system using a redundant array of independent/inexpensive disks ("RAID") organization that eliminates the write hole problem of regenerating undetected corrupt data. The invention also overcomes the need for system overhead to synchronize data writes to logical block numbers that map to the same parity block. A log is constructed and used for storing information relating to requested updates or write operations to the data blocks in the multiple disk array. A separate entry is made in the log for each parity block that must be updated as a result of the write operation. Each log entry contains the addresses of the logical block numbers to which data must be written for that operation. After the new data is written to data blocks in the RAID array, a background scrubber operation sequentially reads the next available entry in the log and performs a parity calculation to determine the parity resulting from the write operation. The new parity information is written to the corresponding parity block and the log entry is deleted by the scrubber operation to indicate that the parity block corresponds to the data it represents. In addition, if a system failure occurs during a data write or after the data write but before the associated parity block is written, the original data can be accurately reconstructed using the remaining data blocks and the original parity information that remains in the parity block.
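A minimal Python sketch of the logged parity-update flow summarized in the abstract above; it is not the patented implementation. New data blocks are written first, one log entry records each pending parity update, and a background "scrubber" later recomputes the stripe parity by XOR and removes the entry once the parity block is consistent. The class name, block size, and in-memory lists are illustrative assumptions.

```python
from collections import deque
from functools import reduce
from typing import Dict, List

BLOCK_SIZE = 16  # assumed block size in bytes

class LoggedRaid:
    """Toy model: one parity block per stripe, parity = XOR of the stripe's data blocks."""

    def __init__(self, stripes: int, data_disks: int) -> None:
        self.data: List[List[bytes]] = [
            [bytes(BLOCK_SIZE) for _ in range(data_disks)] for _ in range(stripes)
        ]
        self.parity: List[bytes] = [bytes(BLOCK_SIZE) for _ in range(stripes)]
        # Pending parity updates: (stripe, logical blocks written).
        self.log: deque = deque()

    def write(self, stripe: int, updates: Dict[int, bytes]) -> None:
        """Write the new data blocks, then log the parity update that is now pending."""
        for disk, block in updates.items():
            self.data[stripe][disk] = block
        # One entry per parity block that must be updated, recording which
        # logical blocks the operation wrote.
        self.log.append((stripe, sorted(updates)))

    def scrub_one(self) -> None:
        """Background scrubber step: recompute parity for the oldest logged entry."""
        if not self.log:
            return
        stripe, _written = self.log[0]
        self.parity[stripe] = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), self.data[stripe]
        )
        self.log.popleft()  # entry removed once the parity block matches its data

# Usage: data lands first, the scrubber later brings parity back in step.
raid = LoggedRaid(stripes=4, data_disks=3)
raid.write(0, {1: b"x" * BLOCK_SIZE})
raid.scrub_one()
assert not raid.log
```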