    • 1. Invention application
    • Apparatus and method to manage a data cache
    • Publication: US20060080510A1 (2006-04-13)
    • Application: US10964474 (filed 2004-10-12)
    • Inventors: Michael Benhase, Binny Gill, Thomas Jarvis, Dharmendra Modha
    • IPC: G06F12/00
    • CPC: G06F12/123, G06F12/0866
    • A method is disclosed to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list. The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list. The method demotes tracks from the first LRU list if its size exceeds the target size; otherwise, the method evicts tracks from the second LRU list.
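The adaptive two-list scheme this abstract describes can be sketched in a few lines. All names here (`TwoListCache`, `target_a`, the `high_reuse` hint) are mine, not the patent's; this is a minimal illustration of the target-size adaptation, not the claimed method.

```python
from collections import OrderedDict

class TwoListCache:
    """Sketch of the two-LRU-list scheme in the abstract above.

    List A holds tracks with low reuse potential, list B tracks with
    high reuse potential. A hit on A raises A's target size, a hit on
    B lowers it; tracks are demoted from A while it exceeds its target,
    otherwise evicted from B.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.target_a = capacity // 2    # target size for the low-reuse list
        self.list_a = OrderedDict()      # low reuse potential, LRU -> MRU
        self.list_b = OrderedDict()      # high reuse potential, LRU -> MRU

    def access(self, track, high_reuse=False):
        if track in self.list_a:
            # Hit on a first (low-reuse) track: grow A's target size.
            self.target_a = min(self.capacity, self.target_a + 1)
            self.list_a.move_to_end(track)
        elif track in self.list_b:
            # Hit on a second (high-reuse) track: shrink A's target size.
            self.target_a = max(0, self.target_a - 1)
            self.list_b.move_to_end(track)
        else:
            # Miss: insert according to the caller's reuse hint.
            (self.list_b if high_reuse else self.list_a)[track] = True
            self._evict()

    def _evict(self):
        while len(self.list_a) + len(self.list_b) > self.capacity:
            if len(self.list_a) > self.target_a:
                self.list_a.popitem(last=False)  # demote LRU track from A
            else:
                self.list_b.popitem(last=False)  # otherwise evict from B
```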
    • 4. Invention application
    • Method and system for adaptive back-off and advance for non-volatile storage (NVS) occupancy level management
    • Publication: US20070250660A1 (2007-10-25)
    • Application: US11407797 (filed 2006-04-20)
    • Inventors: Binny Gill, Dharmendra Modha
    • IPC: G06F12/00
    • CPC: G06F12/0804, G06F12/0866, G06F2212/222
    • A technique for determining when to destage write data from a fast non-volatile store (NVS) of a computer system from an upper level to a lower level of storage comprises adaptively varying a destage rate of the NVS according to a current storage occupancy of the NVS; maintaining a high threshold level for the NVS; maintaining a low threshold level that is set a predetermined fixed amount below the high threshold; setting the destage rate of the NVS to zero when the NVS occupancy is below the low threshold; setting the destage rate of the NVS to maximum when the NVS occupancy is above the high threshold; linearly increasing the destage rate of the NVS from zero to maximum as the NVS occupancy goes from the low to the high threshold; and adaptively varying the high threshold in response to a dynamic computer storage workload.
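The destage-rate schedule in this abstract (zero below the low threshold, maximum above the high threshold, linear in between) can be stated directly. The function name and units are illustrative:

```python
def destage_rate(occupancy, low, high, max_rate):
    """Destage-rate schedule described in the abstract: zero below the
    low threshold, maximum above the high threshold, and a linear ramp
    in between. Thresholds are fractions of NVS capacity; the rate
    unit (e.g. tracks/second) is an assumption for illustration."""
    if occupancy <= low:
        return 0.0                 # plenty of room: no destaging pressure
    if occupancy >= high:
        return float(max_rate)     # nearly full: destage as fast as possible
    # Linear interpolation between the two thresholds.
    return max_rate * (occupancy - low) / (high - low)
```

The abstract's final step, adaptively moving the high threshold with the workload, would adjust `high` between calls.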
    • 5. Invention application
    • Wise ordering for writes - combining spatial and temporal locality in write caches for multi-rank storage
    • Publication: US20070220200A1 (2007-09-20)
    • Application: US11384890 (filed 2006-03-20)
    • Inventors: Binny Gill, Dharmendra Modha
    • IPC: G06F13/00, G06F12/00
    • CPC: G06F12/123, G06F12/0804, G06F12/0868, G06F2212/1016, G06F2212/262
    • A storage system has a storage controller for an array of storage disks, the array being ordered in a sequence of write groups. A write cache is shared by the disks. The storage controller temporarily stores write groups in the write cache responsive to write groups being written to their respective arrays. The write groups are assigned to a global queue ordered by age. The controller selects a quantity of write groups for attempted destaging to the arrays responsive to a predetermined high threshold for the global queue and to the sizes and ages of the write groups in the global queue, and allocates the selected quantity among the arrays responsive to the quantities of certain ones of the write groups in the global queue. Write groups are destaged to their respective arrays responsive to the selected allocation quantity for the array and the sequences of the write groups in the arrays.
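A hedged sketch of the allocation step above: the abstract says the selected destage quantity is allocated among arrays "responsive to the quantities of certain ones of the write groups in the global queue". A simple proportional split is my assumption for illustration, not a claim limitation:

```python
def allocate_destages(budget, groups_per_array):
    """Hypothetical proportional allocation of a destage budget among
    arrays, based on how many write groups each array currently holds
    in the global queue. The proportional rule (and rounding) is an
    assumption; the patent only says the split responds to quantities
    of write groups in the global queue."""
    total = sum(groups_per_array.values())
    if total == 0:
        return {array: 0 for array in groups_per_array}
    # Rounding may leave a unit or two unassigned; a real controller
    # would distribute the remainder.
    return {array: round(budget * n / total)
            for array, n in groups_per_array.items()}
```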
    • 6. Invention application
    • Decoupling storage controller cache read replacement from write retirement
    • Publication: US20070118695A1 (2007-05-24)
    • Application: US11282157 (filed 2005-11-18)
    • Inventors: Steven Lowe, Dharmendra Modha, Binny Gill, Joseph Hyde
    • IPC: G06F12/00, G06F13/00
    • CPC: G06F12/123, G06F12/0866, G06F12/127
    • In a data storage controller, accessed tracks are temporarily stored in a cache, with write data being stored in a first cache (such as a volatile cache) and a second cache, and read data being stored in a second cache (such as a non-volatile cache). Corresponding least recently used (LRU) lists are maintained to hold entries identifying the tracks stored in the caches. When the list holding entries for the first cache (the A list) is full, the list is scanned to identify unmodified (read) data which can be discarded from the cache to make room for new data. Prior to or during the scan, modified (write) data entries are moved to the most recently used (MRU) end of the list, allowing the scans to proceed in an efficient manner and reducing the number of times the scan has to skip over modified entries. Optionally, a status bit may be associated with each modified data entry. When a modified entry is moved to the MRU end of the A list without having been requested to be read, its status bit is changed from an initial state (such as 0) to a second state (such as 1), indicating that it is a candidate to be discarded. If the status bit is already set to the second state, it is left unchanged. If a modified track is moved to the MRU end of the A list as a result of being requested to be read, the status bit of the corresponding A list entry is changed back to the first state, preventing the track from being discarded. Thus, write tracks are allowed to remain in the first cache only as long as necessary.
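The A-list scan described above can be sketched as follows; the entry representation (a `(modified, status_bit)` pair per track) and the function name are mine. Modified entries are moved to the MRU end with their status bit set, and the first unmodified entry found is discarded:

```python
from collections import OrderedDict

def scan_a_list(a_list):
    """Sketch of the A-list scan from the abstract. `a_list` maps
    track -> (modified, status_bit) in LRU->MRU order. Modified
    (write) entries are moved to the MRU end with their status bit
    set, marking them as discard candidates on a later pass; the
    first unmodified (read) entry found is discarded and returned."""
    for track, (modified, _status) in list(a_list.items()):
        if modified:
            a_list[track] = (True, 1)   # candidate to be discarded later
            a_list.move_to_end(track)   # skip it cheaply on future scans
        else:
            del a_list[track]           # unmodified read data: discard
            return track
    return None                         # nothing discardable this pass
```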
    • 7. Invention application
    • Apparatus, system, and method for dynamically allocating main memory among a plurality of applications
    • Publication: US20060129782A1 (2006-06-15)
    • Application: US11014529 (filed 2004-12-15)
    • Inventors: Sorav Bansal, Paul McKenney, Dharmendra Modha
    • IPC: G06F12/00
    • CPC: G06F12/121, G06F12/122, G06F12/124, G06F2212/502
    • An apparatus, system, and method are disclosed for dynamically allocating main memory among applications. The apparatus includes a cache memory module configured to maintain a first list and a second list, each list having a plurality of pages, and a resize module configured to resize the cache by adaptively selecting the first or second list and subtracting pages from or adding pages to the selected list. The system includes the apparatus and a cache replacement module configured to adaptively distribute a workload between the first list and the second list. The method includes maintaining a first list and a second list, each list having a plurality of pages, maintaining a cache memory module having a selected size, and resizing the selected size by adaptively selecting the first or second list and adding pages to the selected list to increase the selected size and subtracting pages from the selected list to decrease the selected size.
    • 8. Invention application
    • Method and system of clock with adaptive cache replacement and temporal filtering
    • Publication: US20060069876A1 (2006-03-30)
    • Application: US10955201 (filed 2004-09-30)
    • Inventors: Sorav Bansal, Dharmendra Modha
    • IPC: G06F12/00
    • CPC: G06F12/121, G06F12/126, G06F2212/502
    • A method and system of managing data retrieval in a computer comprising a cache memory and auxiliary memory comprises organizing pages in the cache memory into a first and second clock list, wherein the first clock list comprises pages with short-term utility and the second clock list comprises pages with long-term utility; requesting retrieval of a particular page in the computer; identifying requested pages located in the cache memory as a cache hit; transferring requested pages located in the auxiliary memory to the first clock list; relocating the transferred requested pages into the second clock list upon achieving at least two consecutive cache hits of the transferred requested page; logging a history of pages evicted from the cache memory; and adaptively varying a proportion of pages marked as short and long-term utility to increase a cache hit ratio of the cache memory by utilizing the logged history of evicted pages.
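The temporal filter described above (relocation to the long-term list only after at least two consecutive cache hits) can be sketched without the surrounding CLOCK machinery; the class and attribute names are mine, and eviction, history logging, and the adaptive proportion are omitted:

```python
class TwoClockFilter:
    """Sketch of the temporal filter in the abstract: pages enter the
    short-term list on first fetch from auxiliary memory and are
    promoted to the long-term list only after at least two consecutive
    cache hits. Eviction and the adaptive split are not modeled."""

    def __init__(self):
        self.short_term = {}   # page -> consecutive hit count so far
        self.long_term = set()

    def access(self, page):
        if page in self.long_term:
            return "hit-long"
        if page in self.short_term:
            self.short_term[page] += 1
            if self.short_term[page] >= 2:   # second consecutive hit
                del self.short_term[page]
                self.long_term.add(page)     # relocate to long-term list
            return "hit-short"
        self.short_term[page] = 0            # miss: fetch into short-term list
        return "miss"
```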
    • 9. Invention application
    • Method and system of adaptive replacement cache with temporal filtering
    • Publication: US20050086436A1 (2005-04-21)
    • Application: US10690303 (filed 2003-10-21)
    • Inventor: Dharmendra Modha
    • IPC: G06F12/12, G06F12/00
    • CPC: G06F12/124, G06F12/123, G06F2212/502
    • A method for adaptively managing pages in a cache memory with a variable workload comprises defining a cache memory; organizing the cache into disjoint lists of pages, wherein the lists comprise lists T1, T2, B1, and B2; maintaining a bit that is set to either “S” or “L” for every page in the cache, which indicates whether the page has short-term utility or long-term utility; ensuring that each member page of T1 is marked either as “S” or “L”, wherein each member page of T1 and B1 is marked as “S” and each member page of T2 and B2 is marked as “L”; and maintaining a temporal locality window parameter such that pages that are re-requested within the window are of short-term utility and pages that are re-requested outside the window are of long-term utility, wherein the cache comprises pages that are members of any of lists T1 and T2.
    • 10. Invention application
    • System and method for adaptively managing pages in a memory
    • Publication: US20050235114A1 (2005-10-20)
    • Application: US11151363 (filed 2005-06-13)
    • Inventors: Nimrod Megiddo, Dharmendra Modha
    • IPC: G06F12/00, G06F12/12
    • CPC: G06F12/123, G06F12/122, G06F12/127, G06F2212/502
    • An adaptive replacement cache policy dynamically maintains two lists of pages, a recency list and a frequency list, in addition to a cache directory. The policy keeps these two lists at roughly the same size, the cache size c. Together, the two lists remember twice the number of pages that would fit in the cache. At any time, the policy selects a variable number of the most recent pages to exclude from the two lists. The policy adaptively decides, in response to an evolving workload, how many top pages from each list to maintain in the cache at any given time. It achieves such online, on-the-fly adaptation by using a learning rule that allows the policy to track a workload quickly and effectively.
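The learning rule alluded to in the last sentence, assuming the standard ARC formulation: a hit in the recency ghost list B1 grows the recency target p, a hit in the frequency ghost list B2 shrinks it, with the step scaled by the relative ghost-list sizes (c is the cache size):

```python
def adapt_target(p, c, hit_in_b1, len_b1, len_b2):
    """Standard ARC adaptation rule (a sketch; variable names follow
    the usual ARC presentation). A B1 ghost hit says recency was
    undervalued, so p grows; a B2 ghost hit says frequency was
    undervalued, so p shrinks. Callers only invoke this on a ghost
    hit, so the hit list is nonempty."""
    if hit_in_b1:
        delta = 1 if len_b1 >= len_b2 else len_b2 / len_b1
        return min(c, p + delta)      # grow the recency target, capped at c
    delta = 1 if len_b2 >= len_b1 else len_b1 / len_b2
    return max(0, p - delta)          # shrink the recency target, floored at 0
```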