    • 2. Granted Invention Patent
    • Apparatus and method for preventing cache data eviction during an atomic operation
    • Publication No.: US06347360B1
    • Grant Date: 2002-02-12
    • Application No.: US09513033
    • Filing Date: 2000-02-25
    • Inventors: Anuradha N. Moudgal; Belliappa M. Kuttanna; Allan Tzeng
    • IPC: G06F 12/00
    • CPC: G06F 12/126; G06F 12/0831
    • Apparatus and method for protecting cache data from eviction during an atomic operation. The apparatus includes a first request queue, a second request queue, and an atomic address block. The first request queue stores an entry for each cache access request. Each entry includes a first set of address bits and an atomic bit. The first set of address bits represents a first cache address associated with the cache access request and the atomic bit indicates whether the cache access request is associated with the atomic operation. The second request queue stores an entry for each cache eviction request. Each entry of the second request queue includes a second set of address bits indicating a second cache address associated with the cache eviction request. The atomic address block prevents eviction of a third cache address during the atomic operation on the third cache address. During a first clock cycle the atomic address block receives and analyzes a first set of signals representing a first entry of the first request queue to determine whether they represent the atomic operation. If so, the atomic address block sets a third set of address bits to a value representative of the first cache address. During a second clock cycle during which the atomic operation is being executed the atomic address block receives and analyzes a second set of signals representing the second set of address bits to determine whether the second set of address bits represent a same cache address as the third set of address bits. If so, the atomic address block stalls servicing of the second request queue, thus preventing eviction of data from the cache upon which an atomic operation is being performed.
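The abstract's stall mechanism can be illustrated with a small behavioral model. This is only a sketch of the described logic, not the patented circuit; the names `AtomicAddressBlock` and `service_evictions` are illustrative, not from the patent.

```python
from collections import deque


class AtomicAddressBlock:
    """Sketch: latches the cache address of an in-flight atomic
    operation and blocks evictions that target the same address."""

    def __init__(self):
        self.atomic_addr = None  # the "third set of address bits", when latched

    def observe_access(self, addr, is_atomic):
        # First clock cycle: if the access request carries the atomic
        # bit, latch its cache address.
        if is_atomic:
            self.atomic_addr = addr

    def atomic_done(self):
        # Atomic operation completed; evictions may proceed freely.
        self.atomic_addr = None

    def may_evict(self, addr):
        # Second clock cycle: compare the eviction address against the
        # latched atomic address; a match stalls the eviction queue.
        return addr != self.atomic_addr


def service_evictions(block, eviction_queue):
    """Pop eviction requests until the queue stalls on a protected address."""
    evicted = []
    while eviction_queue and block.may_evict(eviction_queue[0]):
        evicted.append(eviction_queue.popleft())
    return evicted
```

In this model, an eviction queue of `[0x80, 0x40, 0xC0]` with an atomic operation latched at `0x40` services only `0x80` and then stalls; once `atomic_done()` is called, the remaining evictions drain.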
    • 4. Granted Invention Patent
    • Apparatus and method for bad address handling
    • Publication No.: US06526485B1
    • Grant Date: 2003-02-25
    • Application No.: US09368008
    • Filing Date: 1999-08-03
    • Inventors: Anuradha N. Moudgal; Belliappa M. Kuttanna
    • IPC: G06F 12/00
    • CPC: G06F 9/3842; G06F 9/383; G06F 9/3861
    • Circuitry including a request queue and a bad address handling circuit. The request queue includes an entry for each outstanding load requesting access to a cache. Each request queue entry includes a valid bit, an issue bit and a flush bit. The state of the valid bit indicates whether or not the associated access request should be issued to the cache. The issue bit indicates whether the load access request has been issued to the cache and the flush bit indicates whether the data retrieved from the cache in response to the request should be loaded into a specified register. The bad address handling circuit responds to a replay load request by manipulating the states of the valid or flush bit of the relevant request queue entry to prevent completion of bad consumer load requests. The bad address handling circuit includes a validation circuit and a flush circuit. The validation circuit alters the state of the valid bit of the relevant request queue entry in response to the replay load request based upon the state of issue bit for that request queue entry. If the issue bit indicates that the load access request has not yet been issued to the cache, then the validation circuit alters the state of the associated valid bit to prevent the issuance of that load access request to the cache. On the other hand, if the bad consumer has already been issued to the cache, then the flush circuit responds by altering the state of the flush bit to prevent the data retrieved from the cache in response to the bad consumer from being loaded into the register file.
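The valid/issue/flush decision the abstract describes reduces to a two-way branch on the issue bit. The sketch below models only that decision; `LoadEntry` and `handle_replay` are hypothetical names chosen for illustration.

```python
from dataclasses import dataclass


@dataclass
class LoadEntry:
    """One request-queue entry for an outstanding load."""
    valid: bool = True    # request should still be issued to the cache
    issued: bool = False  # request has already gone out to the cache
    flush: bool = False   # returned data must not reach the register file


def handle_replay(entry: LoadEntry) -> None:
    """Sketch of the bad-address handling circuit's response to a
    replayed (bad-consumer) load request."""
    if not entry.issued:
        # Validation circuit: the load never reached the cache, so
        # clearing the valid bit prevents it from being issued at all.
        entry.valid = False
    else:
        # Flush circuit: the load is already in flight, so setting the
        # flush bit discards its data before the register-file write.
        entry.flush = True
```

Either path leaves the bad consumer unable to complete: it is squashed before issue, or its returned data is dropped on the way to the register file.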
    • 6. Granted Invention Patent
    • Method and system for efficiently fetching from cache during a cache fill operation
    • Publication No.: US5897654A
    • Grant Date: 1999-04-27
    • Application No.: US881223
    • Filing Date: 1997-06-24
    • Inventors: Lee E. Eisen; Belliappa M. Kuttanna; Soummya Mallick; Rajesh B. Patel
    • IPC: G06F 12/08; G06F 13/00
    • CPC: G06F 12/0859; G06F 12/0862
    • A method and system in a data processing system for efficiently interfacing with cache memory by allowing a fetcher to read from cache memory while a plurality of data words or instructions are being loaded into the cache. A request is made by a bus interface unit to load a plurality of instructions or data words into a cache. In response to each individual instruction or data word being loaded into the cache by the bus interface unit, there is an indication that the individual one of said plurality of instructions or data words is valid. Once a desired instruction or data word has an indication that it is valid, the fetcher is allowed to complete a fetch operation prior to all of the instructions or data words being loaded into cache. In one embodiment, a group of invalid tag bits may be utilized to indicate to the fetcher that individual ones of a group of instructions or data words are valid in cache after being written into cache by the bus interface unit.
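The per-word validity idea in the abstract can be sketched as a cache line with one valid flag per word, so a fetch completes as soon as its word arrives rather than waiting for the whole fill. This is an assumed simplification (the patent speaks of invalid tag bits; the class and method names here are invented for the example).

```python
class FillingCacheLine:
    """Sketch: a cache line filled one word at a time, with a valid
    flag per word so the fetcher need not wait for the whole line."""

    def __init__(self, num_words=8):
        self.data = [None] * num_words
        self.valid = [False] * num_words  # per-word validity

    def bus_fill_word(self, index, word):
        # Bus interface unit loads one word and marks it valid.
        self.data[index] = word
        self.valid[index] = True

    def line_complete(self):
        return all(self.valid)

    def fetch(self, index):
        # Fetcher may complete as soon as its word is valid, even
        # while other words of the same line are still being filled.
        if self.valid[index]:
            return self.data[index]
        return None  # would stall in hardware until the word arrives
```

A fetch of word 2 succeeds immediately after the bus unit writes word 2, while a fetch of a not-yet-filled word still stalls.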
    • 10. Granted Invention Patent
    • Reducing cache misses by snarfing writebacks in non-inclusive memory systems
    • Publication No.: US5909697A
    • Grant Date: 1999-06-01
    • Application No.: US940219
    • Filing Date: 1997-09-30
    • Inventors: Norman M. Hayes; Ricky C. Hetherington; Belliappa M. Kuttanna; Fong Pong; Krishna M. Thatipelli
    • IPC: G06F 12/08; G06F 12/02
    • CPC: G06F 12/0831; G06F 12/0811
    • A non-inclusive multi-level cache memory system is optimized by removing a first cache content from a first cache, so as to provide cache space in the first cache. In response to a cache miss in the first and second caches, the removed first cache content is stored in a second cache. All cache contents that are stored in the second cache are limited to have read-only attributes so that if any copies of the cache contents in the second cache exist in the cache memory system, a processor or equivalent device must seek permission to access the location in which that copy exists, ensuring cache coherency. If the first cache content is required by a processor (e.g., when a cache hit occurs in the second cache for the first cache content), room is again made available, if required, in the first cache by selecting a second cache content from the first cache and moving it to the second cache. The first cache content is then moved from the second cache to the first cache, rendering the first cache available for write access. Limiting the second cache to read-only access reduces the number of status bits per tag that are required to maintain cache coherency. In a cache memory system using a MOESI protocol, the number of status bits per tag is reduced to a single bit for the second cache, reducing tag overhead and minimizing silicon real estate used when placed on-chip to improve cache bandwidth.
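The swap the abstract describes (writable first cache, read-only victim second cache, promotion back on a second-cache hit) can be sketched as a tiny two-level model. This is a loose behavioral illustration under assumed simplifications (dict-based caches, FIFO victim choice, no MOESI state machine); the class name and methods are invented for the example.

```python
class NonInclusiveHierarchy:
    """Sketch: L1 holds writable lines; L2 holds read-only copies
    evicted from L1, needing only a single coherence status bit."""

    def __init__(self, l1_capacity=2):
        self.l1 = {}            # addr -> data, write access allowed
        self.l2 = {}            # addr -> data, read-only victims
        self.cap = l1_capacity

    def _make_room(self):
        # Evict the oldest L1 line; the victim becomes a read-only
        # L2 copy rather than being dropped.
        if len(self.l1) >= self.cap:
            victim, data = next(iter(self.l1.items()))
            del self.l1[victim]
            self.l2[victim] = data

    def access_for_write(self, addr, memory):
        if addr in self.l1:
            return self.l1[addr]
        self._make_room()
        if addr in self.l2:
            # Hit in the read-only L2: promote the line back to L1
            # so write access is again possible.
            self.l1[addr] = self.l2.pop(addr)
        else:
            # Miss in both levels: fill from memory into L1.
            self.l1[addr] = memory[addr]
        return self.l1[addr]
```

Because no line is ever writable in L2, a write always migrates the line back to L1 first, which is what lets the second cache track coherence with a single status bit per tag.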