    • 1. Granted invention patent
    • Title: Remote data facility prefetch
    • Publication No.: US06557079B1
    • Publication date: 2003-04-29
    • Application No.: US09468270
    • Filing date: 1999-12-20
    • Inventors: Robert S. Mason, Jr.; Yuval Ofek; Dan Arnon
    • IPC: G06F12/00
    • CPC: G06F12/0862
    • Abstract: A mechanism for optimizing predictive read performance in a data storage system that is connected to a geographically remote data storage system by a data link for remote replication of data in support of data recovery operations. The data storage system initiates a local prefetch and initiates via the data link a remote prefetch by the remote data storage system to retrieve data from storage devices coupled to the local and remote data storage systems, respectively. The remote prefetch read start address is offset from the local prefetch read start address by a programmable track offset value. The programmable track offset value is adjusted to tune the prefetch workload balance between the local and remote data storage systems.
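A minimal Python sketch of the offset scheme this abstract describes; the function names and the concrete offset value are illustrative assumptions, not taken from the patent.

    # Hypothetical sketch: the remote prefetch start address equals the
    # local start address plus a programmable track offset, which is
    # tuned to balance prefetch work between the two systems.
    track_offset = 4  # programmable; raising it shifts more prefetch work remote

    def local_prefetch(start, n):
        print(f"local prefetch: tracks {start}..{start + n - 1}")

    def remote_prefetch(start, n):
        print(f"remote prefetch (via data link): tracks {start}..{start + n - 1}")

    def initiate_prefetch(current_track, n):
        local_start = current_track + 1            # prefetch ahead of the current read
        remote_start = local_start + track_offset  # offset by the programmable value
        local_prefetch(local_start, n)
        remote_prefetch(remote_start, n)

    initiate_prefetch(current_track=100, n=8)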
    • 3. Granted invention patent
    • Title: Dynamically modifying system parameters in data storage system
    • Publication No.: US06487562B1
    • Publication date: 2002-11-26
    • Application No.: US09467354
    • Filing date: 1999-12-20
    • Inventors: Robert S. Mason, Jr.; Yuval Ofek
    • IPC: G06F17/30
    • CPC: G06F3/0629; G06F3/0605; G06F3/0689; Y10S707/99931; Y10S707/99956
    • Abstract: A system and method for dynamically modifying parameters in a data storage system such as a RAID system. Such parameters include QOS (Quality of Service) parameters, which control the speed at which system operations are performed for various parts of a data storage system. The storage devices addressable as logical volumes can be individually controlled and configured for preferred levels of performance and service. The parameters can be changed at any time while the data storage system is in use, with the changes taking effect very quickly. These parameter changes are permanently stored and therefore allow system configurations to be maintained. A user interface (UI) allows a user or system administrator to easily observe and configure system parameters, preferably using a graphical user interface which allows a user to select system changes along a scale from a minimum to a maximum.
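A rough Python sketch of per-logical-volume QOS parameters that change at runtime and are persisted, as the abstract describes; the dictionary layout, file name, and 0..10 scale are invented placeholders.

    import json

    qos = {"vol0": 5, "vol1": 8}  # QOS level per logical volume, on an assumed 0..10 scale

    def set_qos(volume, level, path="qos.json"):
        qos[volume] = max(0, min(10, level))  # clamp to the UI's min..max scale
        with open(path, "w") as f:
            json.dump(qos, f)                 # persist so the configuration survives restarts

    set_qos("vol0", 9)  # takes effect immediately for subsequent operations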
    • 5. Granted invention patent
    • Title: Method for improving mean time to data loss (MTDL) in a fixed content distributed data storage
    • Publication No.: US09305011B2
    • Publication date: 2016-04-05
    • Application No.: US11675224
    • Filing date: 2007-02-15
    • Inventors: Robert S. Mason, Jr.
    • IPC: G06F17/30
    • CPC: G06F21/6209; G06F17/30082; G06F17/30188; G06F17/30197
    • Abstract: An archival storage cluster of preferably symmetric nodes includes a data protection management system that periodically organizes the then-available nodes into one or more protection sets, with each set comprising a set of n nodes, where "n" refers to a configurable "data protection level" (DPL). At the time of its creation, a given protection set is closed in the sense that each then-available node is a member of one, and only one, protection set. When an object is to be stored within the archive, the data protection management system stores the object in a given node of a given protection set and then constrains the distribution of copies of that object to other nodes within the given protection set. As a consequence, all DPL copies of an object are stored within the same protection set, and only that protection set. This scheme significantly improves MTDL for the cluster as a whole, as the data can only be lost if multiple failures occur within nodes of a given protection set. This is far less likely than failures occurring across any random distribution of nodes within the cluster.
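A small Python sketch of the protection-set idea in this abstract: nodes are grouped into disjoint sets of size n (the DPL), and all copies of an object stay inside one set. Node names and the placement policy are illustrative assumptions.

    import random

    def build_protection_sets(nodes, dpl):
        # group the then-available nodes dpl at a time; each grouped node
        # belongs to exactly one protection set (any remainder would wait
        # for the next periodic reorganization)
        return [nodes[i:i + dpl] for i in range(0, len(nodes) - dpl + 1, dpl)]

    def place_object(obj_id, protection_sets):
        pset = random.choice(protection_sets)   # pick one set for this object
        return {node: obj_id for node in pset}  # all DPL copies stay inside that set

    sets = build_protection_sets(["n1", "n2", "n3", "n4", "n5", "n6"], dpl=2)
    print(sets)                   # [['n1', 'n2'], ['n3', 'n4'], ['n5', 'n6']]
    print(place_object("obj-42", sets))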
    • 6. Granted invention patent
    • Title: Method of and system for dynamic automated test case generation and execution
    • Publication No.: US08473913B2
    • Publication date: 2013-06-25
    • Application No.: US11329631
    • Filing date: 2006-01-11
    • Inventors: Jesse A. Noller; Robert S. Mason, Jr.
    • IPC: G06F9/44
    • CPC: G06F11/3688; G06F11/3684
    • Abstract: An automated system randomly generates test cases for hardware or software quality assurance testing. A test case comprises a sequence of discrete, atomic steps (or "building blocks"). A particular test case has a variable number of building blocks. The system takes a set of test actions and links them together to create a much larger library of test cases or "chains." The chains comprise a large number of random-sequence tests that facilitate "chaos-like" or exploratory testing of the overall system under test. Upon execution in the system under test, the test case is considered successful if each building block in the chain executes successfully; if any building block fails, the test case, in its entirety, is considered a failure.
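A minimal Python sketch of the chaining scheme this abstract describes; the building blocks, chain lengths, and failure rate are invented placeholders.

    import random

    def block_a(): return True
    def block_b(): return True
    def block_c(): return random.random() > 0.1  # occasionally fails

    building_blocks = [block_a, block_b, block_c]

    def generate_chain(min_len=3, max_len=10):
        length = random.randint(min_len, max_len)  # variable number of blocks
        return [random.choice(building_blocks) for _ in range(length)]

    def run_chain(chain):
        # all() short-circuits: one failing block fails the whole test case
        return all(block() for block in chain)

    print("PASS" if run_chain(generate_chain()) else "FAIL")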
    • 8. Granted invention patent
    • Title: System and method for optimizing cache write backs to disks
    • Publication No.: US06304946B1
    • Publication date: 2001-10-16
    • Application No.: US09347592
    • Filing date: 1999-07-01
    • Inventors: Robert S. Mason, Jr.
    • IPC: G06F12/08
    • CPC: G06F12/0866; G06F11/1076; G06F12/0804; Y10S707/99932; Y10S707/99937
    • Abstract: A system and method for increasing efficiency in a mass storage system such as a RAID (redundant array of inexpensive disks) array with a cache memory. Multi-host mass storage systems employ a data structure called a write tree. The write tree is stored in cache memory and is used to mark addressable data elements stored in the cache memory that must be written back to disk (referred to as "destaging" or "write-backs"). Disks and disk controllers must scan and traverse the write tree to search for pending write operations. By storing a write tree cache apart from the write tree, system efficiency is greatly increased. The write tree cache consists of a cylinder address as found in the write tree, and a bit mask indicating pending write operations at a specific level of the write tree. The specific disks and disk controllers can then avoid accessing the write tree in cache memory when searching for pending write operations.
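A toy Python sketch of the write-tree-cache entry this abstract describes, pairing a cylinder address with a bit mask of pending writes; the dictionary representation and bit layout are assumptions for illustration.

    write_tree_cache = {}  # cylinder address -> bit mask of pending writes

    def mark_pending(cylinder, slot):
        # set the bit for a pending write at one level of the write tree
        write_tree_cache[cylinder] = write_tree_cache.get(cylinder, 0) | (1 << slot)

    def has_pending(cylinder, slot):
        # controllers test the mask instead of traversing the full tree in cache memory
        return bool(write_tree_cache.get(cylinder, 0) & (1 << slot))

    mark_pending(cylinder=17, slot=3)
    print(has_pending(17, 3), has_pending(17, 4))  # True False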
    • 9. Granted invention patent
    • Title: Versioned file system with fast restore
    • Publication No.: US08799231B2
    • Publication date: 2014-08-05
    • Application No.: US12871198
    • Filing date: 2010-08-30
    • Inventors: Robert S. Mason, Jr.; David M. Shaw; Kevin W. Baughman; Stephen Fridella
    • IPC: G06F17/30
    • CPC: G06F11/1448; G06F17/3023
    • Abstract: A versioned file system comprises a set of structured data representations, such as XML. Each structured data representation corresponds to a "version," and each version comprises a tree of write-once objects rooted at a root directory manifest. Each version in the versioned file system has associated therewith a "borrow window." When it is desired to reconstruct the file system to a point in time (or, more generally, a given state), i.e., to perform a "restore," it is only required to walk (use) a single structured data representation (a tree). During a restore, metadata is pulled back from the cloud first, so users can see the existence of needed files immediately. The remainder of the data is then pulled back from the cloud if/when the user goes to open the file. As a result, the entire file system (or any portion thereof) can be restored to a previous time nearly instantaneously. A "fast" restore is performed if an object being restored exists within a "borrow window" of the version from which the system is restoring.
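A loose Python sketch of the metadata-first restore and the borrow-window check; the abstract does not define how the borrow window is measured, so the version-count comparison and all names below are assumptions.

    def restore(version):
        # stage 1: walk a single version tree and surface metadata first
        for path, meta in version["manifest"].items():
            register_metadata(path, meta)

    def register_metadata(path, meta):
        print(f"restored metadata for {path}; data is fetched when the file is opened")

    def is_fast_restore(obj_version, restore_version, borrow_window):
        # assumed semantics: fast path if the object lies within the
        # borrow window of the version being restored from
        return restore_version - obj_version <= borrow_window

    restore({"manifest": {"/docs/a.txt": {"size": 120}}})
    print(is_fast_restore(obj_version=7, restore_version=9, borrow_window=5))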
    • 10. Granted invention patent
    • Title: Fast primary cluster recovery
    • Publication No.: US07917469B2
    • Publication date: 2011-03-29
    • Application No.: US11936317
    • Filing date: 2007-11-07
    • Inventors: Benjamin K. D. Bernhard; Robert S. Mason, Jr.
    • IPC: G06F17/00
    • CPC: G06F11/08; G06F11/1662; G06F11/2028; G06F11/2094; G06F11/2097
    • Abstract: A cluster recovery process is implemented across a set of distributed archives, where each individual archive is a storage cluster of preferably symmetric nodes. Each node of a cluster typically executes an instance of an application that provides object-based storage of fixed content data and associated metadata. According to the storage method, an association or "link" between a first cluster and a second cluster is first established to facilitate replication. The first cluster is sometimes referred to as a "primary," whereas the second cluster is sometimes referred to as a "replica." Once the link is made, the first cluster's fixed content data and metadata are then replicated from the first cluster to the second cluster, preferably in a continuous manner. Upon a failure of the first cluster, however, a failover operation occurs, and clients of the first cluster are redirected to the second cluster. Upon repair or replacement of the first cluster (a "restore"), the repaired or replaced first cluster resumes authority for servicing the clients of the first cluster. This restore operation preferably occurs in two stages: a "fast recovery" stage that involves preferably "bulk" transfer of the first cluster metadata, followed by a "fail back" stage that involves the transfer of the fixed content data. Upon receipt of the metadata from the second cluster, the repaired or replaced first cluster resumes authority for the clients irrespective of whether the fail back stage has completed or even begun.
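A compact Python sketch of the two-stage restore this abstract describes; the dictionary-based clusters are illustrative stand-ins, and in this synchronous sketch the ordering alone models "authority resumes before fail back completes."

    def restore_primary(replica, primary):
        primary["metadata"] = dict(replica["metadata"])  # stage 1: bulk metadata transfer
        primary["authoritative"] = True                  # authority resumes once metadata arrives
        for obj_id, data in replica["content"].items():  # stage 2: fail back of content data
            primary["content"][obj_id] = data

    replica = {"metadata": {"obj1": {"len": 3}}, "content": {"obj1": b"abc"}}
    primary = {"metadata": {}, "content": {}, "authoritative": False}
    restore_primary(replica, primary)
    print(primary["authoritative"], list(primary["content"]))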