    • 2. Published patent application
    • Title: Class Dependent Clean and Dirty Policy
    • Publication number: US20130124802A1
    • Publication date: 2013-05-16
    • Application number: US13296119
    • Filing date: 2011-11-14
    • Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, James Roberts, John H. Edmondson
    • IPC: G06F12/08
    • CPC: G06F12/0804
    • Abstract: A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.
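The notification-sorter mechanism in the abstract above can be sketched as a toy Python model. The threshold value, class names, and flush behavior here are illustrative assumptions for exposition, not the patented implementation:

```python
from collections import defaultdict

class NotificationSorter:
    """Toy model of the frame buffer logic's notification sorter:
    dirty-data notifications are grouped by DRAM bank page, and a
    page's dirty lines are written back once the first count
    reaches a threshold."""

    def __init__(self, threshold=4):
        self.threshold = threshold
        # bank page -> [dirty_count, evict_first_count]
        self.entries = defaultdict(lambda: [0, 0])
        self.flushed = []

    def notify(self, bank_page, data_class):
        entry = self.entries[bank_page]
        entry[0] += 1                      # first count: any resident dirty line
        if data_class == "evict_first":
            entry[1] += 1                  # second count: evict_first dirty lines
        if entry[0] >= self.threshold:
            # enough dirty lines resident in this bank page: write them back
            self.flushed.append(bank_page)
            del self.entries[bank_page]

sorter = NotificationSorter(threshold=2)
sorter.notify(0x10, "evict_normal")
sorter.notify(0x10, "evict_first")
print(sorter.flushed)  # [16]
```

Batching write-backs per DRAM bank page this way amortizes the cost of opening a DRAM row, which is why the sorter keys entries on the bank page number rather than on individual cache lines.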
    • 4. Granted patent
    • Title: Using a data cache array as a DRAM load/store buffer
    • Patent number: US08234478B1
    • Grant date: 2012-07-31
    • Application number: US12256400
    • Filing date: 2008-10-22
    • Inventors: James Roberts, David B. Glasco, Patrick R. Marchand, Peter B. Holmqvist, George R. Lynch, John H. Edmondson
    • IPC: G06F12/00, G06F13/00, G06F13/28
    • CPC: G06F12/0895
    • Abstract: One embodiment of the invention sets forth a mechanism for using the L2 cache as a buffer for data associated with read/write commands that are processed by the frame buffer logic. A tag look-up unit tracks the availability of each cache line in the L2 cache, reserves necessary cache lines for the read/write operations and transmits read commands to the frame buffer logic for processing. A data slice scheduler transmits a dirty data notification to the frame buffer logic when data associated with a write command is stored in an SRAM bank. The data slice scheduler schedules accesses to the SRAM banks and gives priority to accesses requested by the frame buffer logic to store or retrieve data associated with read/write commands. This feature allows cache lines reserved for read/write commands that are processed by the frame buffer logic to be made available at the earliest clock cycle.
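The prioritization the data slice scheduler performs can be illustrated with a minimal priority-queue sketch. The two priority levels and the FIFO tie-breaking are assumptions made for the example; the abstract only states that frame-buffer-logic accesses take priority:

```python
import heapq

# Hypothetical priority levels: frame buffer logic accesses outrank
# ordinary client accesses, so reserved cache lines free up sooner.
FB_PRIORITY, CLIENT_PRIORITY = 0, 1

class DataSliceScheduler:
    """Toy scheduler for SRAM-bank accesses: a priority queue in which
    frame-buffer requests always run before client requests."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a priority

    def request(self, priority, bank, op):
        heapq.heappush(self._queue, (priority, self._seq, bank, op))
        self._seq += 1

    def next_access(self):
        _, _, bank, op = heapq.heappop(self._queue)
        return bank, op

sched = DataSliceScheduler()
sched.request(CLIENT_PRIORITY, bank=3, op="read")
sched.request(FB_PRIORITY, bank=1, op="store")   # arrives later, runs first
print(sched.next_access())  # (1, 'store')
```

Servicing frame-buffer requests first is what lets a reserved cache line be released "at the earliest clock cycle" in the abstract's wording: the line's occupant is moved out as soon as the bank is free.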
    • 5. Granted patent
    • Title: Configurable cache occupancy policy
    • Patent number: US08131931B1
    • Grant date: 2012-03-06
    • Application number: US12256378
    • Filing date: 2008-10-22
    • Inventors: James Roberts, David B. Glasco, Patrick R. Marchand, Peter B. Holmqvist, George R. Lynch, John H. Edmondson
    • IPC: G06F12/00
    • CPC: G06F12/121
    • Abstract: One embodiment of the invention is a method for evicting data from an intermediary cache that includes the steps of receiving a command from a client, determining that there is a cache miss relative to the intermediary cache, identifying one or more cache lines within the intermediary cache to store data associated with the command, determining whether any of the data residing in the one or more cache lines includes raster operations data or normal data, and causing the data residing in the one or more cache lines to be evicted or stalling the command based, at least in part, on whether the data includes raster operations data or normal data. Advantageously, the method allows a series of cache eviction policies based on how cached data is categorized and the eviction classes of the data. Consequently, more optimized eviction decisions may be made, leading to fewer command stalls and improved performance.
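The evict-or-stall decision described above can be sketched as a small policy function. The specific mapping used here (raster data evicts, normal data stalls the command) is one plausible reading consistent with raster data's low reuse potential; the abstract itself leaves the mapping configurable:

```python
# Illustrative sketch of a class-dependent eviction decision. The
# "raster" / "normal" labels follow the abstract; the decision table
# is an assumption, not the patented policy.
def handle_miss(candidate_lines):
    """On a cache miss, decide per candidate line whether its resident
    data can be evicted immediately or the incoming command must stall."""
    decisions = {}
    for line, data_class in candidate_lines.items():
        if data_class == "raster":
            # raster operations data has little reuse potential: evict it
            decisions[line] = "evict"
        else:
            # normal data may still be needed: stall the command instead
            decisions[line] = "stall"
    return decisions

print(handle_miss({0: "raster", 1: "normal"}))
# {0: 'evict', 1: 'stall'}
```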
    • 7. Granted patent
    • Title: Compression status bit cache with deterministic isochronous latency
    • Patent number: US08595437B1
    • Grant date: 2013-11-26
    • Application number: US12276147
    • Filing date: 2008-11-21
    • Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, Karan Mehra, James Roberts
    • IPC: G06F12/08
    • CPC: G06F12/0811, G06F12/0804, G06F12/084, G06F2212/302, G06F2212/401
    • Abstract: One embodiment of the present invention sets forth a compression status bit cache with deterministic latency for isochronous memory clients of compressed memory. The compression status bit cache improves overall memory system performance by providing on-chip availability of compression status bits that are used to size and interpret a memory access request to compressed memory. To avoid non-deterministic latency when an isochronous memory client accesses the compression status bit cache, two design features are employed. The first design feature involves bypassing any intermediate cache when the compression status bit cache reads a new cache line in response to a cache read miss, thereby eliminating additional, potentially non-deterministic latencies outside the scope of the compression status bit cache. The second design feature involves maintaining a minimum pool of clean cache lines by opportunistically writing back dirty cache lines and, optionally, temporarily blocking non-critical requests that would dirty already clean cache lines. With clean cache lines available to be overwritten quickly, the compression status bit cache avoids incurring additional miss write back latencies.
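The second design feature, maintaining a minimum pool of clean lines, can be modeled with a short sketch. The pool floor, the blocking rule, and the one-line-per-call write-back are illustrative assumptions; the abstract only specifies the two mechanisms (opportunistic write-back and optional blocking of non-critical dirtying requests):

```python
# Toy model of the "minimum pool of clean lines" policy: opportunistically
# write back dirty lines, and temporarily block non-critical requests that
# would dirty a clean line while the pool sits at its floor.
class CleanPool:
    def __init__(self, lines, min_clean=2):
        self.dirty = {line: False for line in lines}
        self.min_clean = min_clean

    def clean_count(self):
        return sum(not d for d in self.dirty.values())

    def opportunistic_writeback(self):
        # write back one dirty line whenever the clean pool runs low
        if self.clean_count() < self.min_clean:
            for line, is_dirty in self.dirty.items():
                if is_dirty:
                    self.dirty[line] = False  # written back: now clean
                    break

    def write(self, line, critical):
        # a non-critical write that would dirty a clean line is blocked
        # while the clean pool is at its minimum
        if (not critical and not self.dirty[line]
                and self.clean_count() <= self.min_clean):
            return False  # temporarily blocked
        self.dirty[line] = True
        return True

pool = CleanPool(lines=[0, 1], min_clean=2)
print(pool.write(0, critical=False))  # False: would shrink the clean pool
print(pool.write(0, critical=True))   # True: critical writes always proceed
```

Keeping clean lines on hand means a read miss can overwrite a victim immediately instead of first waiting on a write-back, which is how the design keeps the miss latency deterministic for isochronous clients.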