    • 1. Granted invention patent
    • Title: Method, system, apparatus, and computer-readable medium for implementing caching in a storage system
    • Publication number: US08549230B1
    • Publication date: 2013-10-01
    • Application number: US12498599
    • Filing date: 2009-07-07
    • Inventors / applicants: Paresh Chatterjee; Srikumar Subramanian; Srinivasa Rao Vempati; Suresh Grandhi
    • IPC: G06F12/00; G06F13/00; G06F13/28
    • CPC: G06F12/0808; G06F3/0608; G06F3/061; G06F3/0631; G06F3/0665; G06F3/0689; G06F9/5011; G06F12/0866; G06F12/0871; G06F2212/6026
    • Abstract: A method, system, apparatus, and computer-readable medium are provided for implementing caching in a data storage system. According to aspects of the invention, a cache module is provided that utilizes cache lines sized according to a snapshot chunk size or an allocation unit size. The cache module utilizes cache header data structures corresponding to cache lines, each of which is assigned a device and logical block access range when active. The active headers are arranged in a set of hash queues. A free queue corresponds to the list of unused cache headers, and a dirty queue corresponds to a list of unflushed cache headers. The cache header contains sector-level bitmaps of the cache line, specifying at per-sector granularity the bits that are dirty and valid. Flushing is performed by copying the dirty bitmap into a temporary memory location and flushing the bits set in it, while resetting the dirty bitmap and allowing writes to it. A read-ahead algorithm performs read-ahead operations only in the event of a sequential read. (A minimal code sketch of the dirty-bitmap flush appears after this record.)
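The abstract above turns on two data-structure details: per-sector valid/dirty bitmaps in each cache header, and a flush that copies the dirty bitmap aside so new writes can continue while the copied bits are written out. The C sketch below illustrates only that flush mechanism; it is not the patented implementation, and the field names, the 64-sector line size, and the `cache_write_sector`/`cache_flush` functions are illustrative assumptions.

```c
/*
 * Minimal sketch of the dirty/valid bitmap idea from the abstract above.
 * Not the patented implementation: field names, the 64-sector line size,
 * and the flush target (printf) are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define SECTORS_PER_CACHE_LINE 64   /* assumed line size: one 64-bit bitmap word */

struct cache_header {
    uint64_t device_id;      /* device assigned while the header is active       */
    uint64_t start_lba;      /* start of the logical block range the line covers */
    uint64_t valid_bitmap;   /* bit i set: sector i of the line holds valid data */
    uint64_t dirty_bitmap;   /* bit i set: sector i has not been flushed yet     */
};

/* A write marks the touched sector both valid and dirty. */
static void cache_write_sector(struct cache_header *h, unsigned sector)
{
    h->valid_bitmap |= (uint64_t)1 << sector;
    h->dirty_bitmap |= (uint64_t)1 << sector;
}

/*
 * Flush: copy the dirty bitmap into a temporary, clear the live bitmap so
 * new writes can dirty sectors again, then write out only the sectors that
 * were set in the snapshot.
 */
static void cache_flush(struct cache_header *h)
{
    uint64_t snapshot = h->dirty_bitmap;   /* temporary copy to flush from */
    h->dirty_bitmap = 0;                   /* writes may proceed meanwhile */

    for (unsigned s = 0; s < SECTORS_PER_CACHE_LINE; s++) {
        if (snapshot & ((uint64_t)1 << s))
            printf("flush device %llu, LBA %llu\n",
                   (unsigned long long)h->device_id,
                   (unsigned long long)(h->start_lba + s));
    }
}

int main(void)
{
    struct cache_header h = { .device_id = 1, .start_lba = 4096 };
    cache_write_sector(&h, 0);
    cache_write_sector(&h, 3);
    cache_flush(&h);    /* prints LBAs 4096 and 4099 */
    return 0;
}
```

Snapshotting the bitmap before clearing it is what lets new writes mark sectors dirty again while the earlier dirty sectors are still being flushed, which is the behavior the abstract describes.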
    • 2. Granted invention patent
    • Title: Performance in virtual tape libraries
    • Publication number: US08055938B1
    • Publication date: 2011-11-08
    • Application number: US11450653
    • Filing date: 2006-06-09
    • Inventors / applicants: Paresh Chatterjee; Srikumar Subramanian; Suresh Grandhi; Srinivasa Rao Vempati
    • IPC: G06F11/00
    • CPC: G06F11/1084; G06F11/1092; G06F2211/104; G06F2211/1057; G06F2211/1059; G06F2211/108
    • Abstract: A method, system, apparatus, and computer-readable medium are provided for storing data at a virtual tape library (“VTL”) computer or server. According to one method, a VTL computer maintains one or more storage volumes for use by initiators on an array of mass storage devices. Space on each of the volumes is allocated using thin provisioning. The VTL computer may also include a cache memory that is at least the size of a full stripe of the array. Write requests received at the VTL computer are stored in the cache memory until a full stripe of data has been received; the full stripe is then written to the array at once. The array utilized by the VTL computer may include a hot spare mass storage device. When a failed mass storage device is identified, only the portions of the failed device that have been previously written are rebuilt onto the hot spare. The array may be maintained using RAID-5. If one of the mass storage devices in the array fails, any subsequent writes directed to the array may be stored using RAID-0. (A minimal code sketch of the full-stripe write caching appears after this record.)
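The performance idea in the abstract above is to buffer incoming writes in a cache at least one stripe wide and to touch the array only with full-stripe writes. The C sketch below shows that buffering rule in isolation; it is not the patented VTL code, and the stripe geometry, the single global buffer, and the names `vtl_write`/`write_full_stripe` are illustrative assumptions (thin provisioning, the hot-spare rebuild, and the RAID-5/RAID-0 fallback are not modeled).

```c
/*
 * Minimal sketch of the full-stripe write caching described in the abstract
 * above. Not the patented VTL code: the stripe geometry, global buffer, and
 * function names are illustrative assumptions.
 */
#include <stdio.h>
#include <string.h>

#define DATA_DISKS   4
#define CHUNK_BYTES  8                       /* tiny chunks keep the demo short */
#define STRIPE_BYTES (DATA_DISKS * CHUNK_BYTES)

static char   stripe_cache[STRIPE_BYTES];    /* cache holds one full stripe */
static size_t cached;                        /* bytes buffered so far       */

/* Stand-in for issuing one full-stripe write to the array in a single pass. */
static void write_full_stripe(const char *buf)
{
    (void)buf;
    printf("full-stripe write: %d bytes across %d data disks\n",
           STRIPE_BYTES, DATA_DISKS);
}

/* Buffer an incoming write; go to the array only when a stripe is complete. */
static void vtl_write(const char *data, size_t len)
{
    while (len > 0) {
        size_t room = STRIPE_BYTES - cached;
        size_t n = len < room ? len : room;
        memcpy(stripe_cache + cached, data, n);
        cached += n;
        data   += n;
        len    -= n;
        if (cached == STRIPE_BYTES) {        /* stripe complete: flush it */
            write_full_stripe(stripe_cache);
            cached = 0;
        }
    }
}

int main(void)
{
    vtl_write("0123456789abcdef", 16);       /* half a stripe: stays cached    */
    vtl_write("ghijklmnopqrstuv", 16);       /* completes it: one stripe write */
    return 0;
}
```

Writing only complete stripes means each array operation can update every member disk in one pass instead of issuing many small partial-stripe updates.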
    • 4. Granted invention patent
    • Title: Method, system, apparatus, and computer-readable medium for locking and synchronizing input/output operations in a data storage system
    • Publication number: US07562200B1
    • Publication date: 2009-07-14
    • Application number: US11417802
    • Filing date: 2006-05-04
    • Inventors / applicants: Paresh Chatterjee; Srinivasa Rao Vempati; Vijayarankan Muthirisavenugopal; Narayanan Balakrishnan
    • IPC: G06F12/16
    • CPC: G06F3/0665; G06F3/0608; G06F3/0622; G06F3/0623; G06F3/0659; G06F3/0689; G06F11/1451; Y10S707/99938
    • Abstract: A method, system, apparatus, and computer-readable medium are provided for synchronizing I/O operations in a computer system. According to aspects of the invention, multiple reader and writer locks are provided that may be acquired by calling processes at two different granularities. Locks may be acquired for an area of storage equivalent to the logical unit of allocation or for a sub-provision area equivalent to a unit of snapshot read-modify-write. Each lock is represented by a lock data structure that represents the same amount of logical address space as the logical unit of allocation. A request that arrives at the lock data structure is made to wait in a lock wait queue until the request can be honored. Requests that have been honored but have not yet released the lock are maintained in a dispatch queue. When a writer lock is assigned to a lock request, no other readers or writers may be allocated to it. When a reader lock is assigned to a lock request, the lock may also be given to other readers, but not to a writer. A round-robin technique is used to respond to lock requests so that one lock does not starve the other locks. (A minimal code sketch of the grant rule appears after this record.)
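The abstract above describes a per-region reader/writer lock: a reader grant may be shared with other readers, a writer grant is exclusive, and requests that cannot be granted wait in a queue. The C sketch below models only that grant rule; it is not the patented design, the two lock granularities, the dispatch queue, and the round-robin arbitration are not modeled, and the `io_lock`/`try_grant` names are illustrative assumptions.

```c
/*
 * Minimal sketch of the reader/writer grant rule from the abstract above:
 * readers may share a lock, a writer requires exclusivity, and an ungranted
 * request keeps waiting. Not the patented design; names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

enum req_type { READ_REQ, WRITE_REQ };

struct io_lock {
    int  active_readers;    /* requests currently holding a shared reader lock */
    bool writer_active;     /* true while one exclusive writer holds the lock  */
};

/* Try to grant one queued request; false means it keeps waiting in the queue. */
static bool try_grant(struct io_lock *lk, enum req_type t)
{
    if (t == READ_REQ && !lk->writer_active) {
        lk->active_readers++;              /* a reader lock may be shared */
        return true;
    }
    if (t == WRITE_REQ && !lk->writer_active && lk->active_readers == 0) {
        lk->writer_active = true;          /* a writer lock is exclusive  */
        return true;
    }
    return false;
}

int main(void)
{
    struct io_lock lk = {0};
    printf("reader 1 granted: %d\n", try_grant(&lk, READ_REQ));   /* 1          */
    printf("reader 2 granted: %d\n", try_grant(&lk, READ_REQ));   /* 1 (shared) */
    printf("writer   granted: %d\n", try_grant(&lk, WRITE_REQ));  /* 0 (waits)  */
    return 0;
}
```

In the abstract's scheme, a request that cannot be granted (the `false` path above) waits in the lock wait queue, and locks are serviced in round-robin order so that one lock does not starve the others.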