    • 42. Granted invention patent
    • Dual cache directories with respective queue independently executing its content and allowing staggered write operations
    • US6085288A
    • 2000-07-04
    • US839556
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • G06F12/16; G06F12/08; G06F12/00
    • G06F12/0831
    • A method of storing values in a cache used by a processor of a computer system, the cache having two or more cache directories. An address tag associated with the memory block is written into a first cache directory during an initial processor cycle, the address tag is written into a second cache directory during the next or subsequent processor cycle. Another address tag associated with a different memory block may be read from the second cache directory during the initial processor cycle. Additionally, another address tag associated with yet a different memory block may be read from the first cache directory during the subsequent processor cycle. A write operation for the address tag may be placed into a write queue of the first cache directory, prior to writing the address tag into the first cache directory, and the same write operation may be placed into a write queue of the second cache directory, prior to said step of writing the address tag into the second cache directory; the write queue of the second cache directory executes its contents independently of the write queue of the first cache directory. This staggered writing ability imparts greater flexibility in carrying out write operations for a cache having multiple directories, thereby increasing performance.
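The staggered directory-write scheme described in US6085288A above lends itself to a small simulation. The C++ sketch below is illustrative only (DirectoryCopy, WriteOp and apply_one are invented names, not taken from the patent): the same write operation is placed into two independent write queues, and each directory copy drains its own queue one operation per cycle, so the write retires in copy A during the initial cycle and in copy B during the subsequent cycle while the other copy remains available for reads.

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <iostream>
    #include <vector>

    // One directory copy with its own write queue (names are illustrative).
    struct WriteOp {
        std::size_t set;   // congruence class / directory row being updated
        uint32_t    tag;   // address tag to record there
    };

    struct DirectoryCopy {
        std::vector<uint32_t> tags;        // one tag slot per set, direct-mapped for brevity
        std::deque<WriteOp>   write_queue; // pending writes for this copy only

        explicit DirectoryCopy(std::size_t sets) : tags(sets, 0) {}

        // Each processor cycle a copy retires at most one queued write; the two
        // copies retire independently, which is what permits the stagger.
        void apply_one() {
            if (write_queue.empty()) return;
            WriteOp op = write_queue.front();
            write_queue.pop_front();
            tags[op.set] = op.tag;
        }

        uint32_t read(std::size_t set) const { return tags[set]; }
    };

    int main() {
        DirectoryCopy dir_a(8), dir_b(8);

        // The same write operation is placed into both write queues.
        WriteOp w{3, 0xABCD};
        dir_a.write_queue.push_back(w);
        dir_b.write_queue.push_back(w);

        // Initial cycle: copy A retires the write while copy B, still untouched,
        // can be read for a different memory block's tag.
        dir_a.apply_one();
        std::cout << "cycle 1: tag read from B, set 5 = " << dir_b.read(5) << "\n";

        // Subsequent cycle: copy B retires the same write; A is free for reads.
        dir_b.apply_one();
        std::cout << "cycle 2: set 3 in A/B = " << std::hex
                  << dir_a.read(3) << "/" << dir_b.read(3) << "\n";
        return 0;
    }

The point of the stagger, as the abstract describes, is that while one copy is being updated the other can still answer a read for a different memory block, which is what the two read calls in main() illustrate.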
    • 45. Granted invention patent
    • Apparatus and method of maintaining cache coherency in a multi-processor computer system with global and local recently read states
    • US6018791A
    • 2000-01-25
    • US24307
    • 1998-02-17
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F15/16; G06F12/08; G06F15/177; G06F12/00; G06F13/00
    • G06F12/0811; G06F12/0831
    • A multi-processor computer system with clustered processing units uses a cache coherency protocol having a "recent" coherency state to indicate that a particular cache block containing a valid copy of a value (instruction or data) was the most recently accessed block out of a group of cache blocks in different caches (but at the same cache level) that share valid copies of the value. The "recent" state can advantageously be used to implement optimized memory operations such as intervention, by sourcing the value from the cache block in the "recent" state, as opposed to sourcing the value from system memory (RAM), which would be a slower operation. In an exemplary implementation, the hierarchy has two cache levels supporting a given processing unit cluster; the "recent" state can be applied to a plurality of caches at the first level (each associated with a different processing unit cluster), and the "recent" state can further be applied to one of the caches at the second level.
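A rough sketch of the sourcing preference described in US6018791A above, under assumptions of my own: one L1 and one L2 per processing-unit cluster, with the "Recent" state marking the most recently read copy at each level (the enum and function names are invented for illustration). A read prefers intervention from a Recent copy, first a local L1 copy and then the globally Recent L2 copy, over a fetch from system memory.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Assumed model: one L1 per processing-unit cluster and one L2 per cluster,
    // with "Recent" marking the most recently read copy at each cache level.
    enum class State { Modified, Exclusive, Shared, Invalid, Recent };

    struct Line {
        std::vector<State> l1;  // state of this line in each cluster's L1
        std::vector<State> l2;  // state in each cluster's L2; at most one is Recent
    };

    // On a read by cluster `c`, prefer sourcing by intervention from a Recent
    // copy -- first an L1 inside the same cluster, then the globally Recent L2 --
    // and fall back to system memory (RAM) only when no Recent copy exists.
    std::string source_for_read(const Line& line, std::size_t c) {
        if (line.l1[c] == State::Recent) return "own L1 (R)";
        for (std::size_t i = 0; i < line.l2.size(); ++i)
            if (line.l2[i] == State::Recent)
                return "L2 of cluster " + std::to_string(i) + " (R)";
        return "system memory";
    }

    int main() {
        // Cluster 0 read the value most recently and holds it Recent in its L1
        // and its L2; cluster 1 currently has no valid copy at either level.
        Line line{{State::Recent, State::Invalid}, {State::Recent, State::Invalid}};

        std::cout << "cluster 0 read sourced from: " << source_for_read(line, 0) << "\n";
        std::cout << "cluster 1 read sourced from: " << source_for_read(line, 1) << "\n";
        return 0;
    }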
    • 46. Granted invention patent
    • Software-managed programmable congruence class caching mechanism
    • US6000014A
    • 1999-12-07
    • US834490
    • 1997-04-14
    • Ravi Kumar Arimilli; Leo James Clark; John Steven Dodson; Jerry Don Lewis
    • G06F12/08
    • G06F12/0864
    • A method of providing programmable congruence classes in a cache used by a processor of a computer system is disclosed. Program instructions are loaded in the processor for modifying original addresses of memory blocks in a memory device to produce encoded addresses. A plurality of cache congruence classes is then defined using a mapping function which operates on the encoded addresses, such that the program instructions may be used to arbitrarily assign a given one of the original addresses to a particular one of the cache congruence classes. The program instructions can modify the original addresses by setting a plurality of programmable fields. Application software may provide the program instructions, wherein congruence classes are programmed based on a particular procedure of the application software which is running on the processor, that might otherwise run with excessive "striding" of the cache. Alternatively, operating-system software may monitor allocation of memory blocks in the cache and provides the program instructions to modify the original addresses based on the allocation of the memory blocks, to lessen striding.
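The software-managed encoding in US6000014A above can be illustrated with a toy mapping. The sketch below is a sketch only, with assumed geometry (64-byte lines, 256 congruence classes) and the programmable fields reduced to a shift and an XOR mask loaded by software: with the fields left at zero, a stride of sets times line size piles every access onto one congruence class, while a programmed mask spreads the same original addresses across distinct classes.

    #include <cstdint>
    #include <iostream>

    // Assumed geometry and field shape, for illustration only.
    constexpr unsigned kLineBits = 6;    // 64-byte cache lines
    constexpr unsigned kNumSets  = 256;  // 256 congruence classes

    // Produce the encoded address from the original one. The "programmable
    // fields" are modeled as a shift and an XOR mask that software has loaded:
    // the selected higher address bits are folded into the index bits.
    uint64_t encode(uint64_t original, unsigned fold_shift, uint64_t fold_mask) {
        uint64_t folded = (original >> fold_shift) & fold_mask;
        return original ^ (folded << kLineBits);
    }

    // The ordinary mapping function, applied to the encoded address.
    unsigned congruence_class(uint64_t encoded) {
        return static_cast<unsigned>((encoded >> kLineBits) & (kNumSets - 1));
    }

    int main() {
        // A stride of sets * line size (16 KiB here) lands on one congruence
        // class over and over under the identity encoding (mask 0) -- striding.
        const uint64_t stride = uint64_t(kNumSets) << kLineBits;
        const uint64_t masks[] = {0x00, 0xFF};
        for (uint64_t mask : masks) {
            std::cout << "fold mask 0x" << std::hex << mask << std::dec << ": classes";
            for (int i = 0; i < 4; ++i) {
                uint64_t original = 0x100000 + uint64_t(i) * stride;
                std::cout << " " << congruence_class(encode(original, 14, mask));
            }
            std::cout << "\n";
        }
        return 0;
    }

How the application or operating-system software chooses the field values from observed allocation behaviour is the part the abstract leaves to the program instructions; the sketch only shows the effect of one chosen value.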
    • 47. Granted invention patent
    • Cache-coherency protocol with recently read state for data and instructions
    • US5996049A
    • 1999-11-30
    • US839548
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; John Michael Kaiser; Jerry Don Lewis
    • G06F12/08; G06F12/16
    • G06F12/0831
    • A method of providing instructions and data values to a processing unit in a multi-processor computer system, by expanding the prior-art MESI cache-coherency protocol to include an additional cache-entry state corresponding to a most recently accessed state. Each cache of the processing units has at least one cache line with a block for storing the instruction or data value, and an indication is provided that a cache line having a block which contains the instruction or data value is in a "recently read" state. Each cache entry has three bits to indicate the current state of the cache entry (one of five possible states). A processing unit which desires to access a shared instruction or data value detects transmission of the indication from the cache having the most recently accessed copy, and the instruction or data value is sourced from this cache. Upon sourcing the instruction or data value, the cache that originally contained the most recently accessed copy thereof changes its indication to indicate that its copy is now shared, and the processing unit which accessed the instruction or data value is thereafter indicated as having the cache containing the copy thereof that was most recently accessed. This protocol allows instructions and data values which are shared among several caches to be sourced directly (intervened) by the cache having the most recently accessed copy, without retrieval from system memory (RAM), significantly improving the processing speed of the computer system.
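A minimal sketch of the read-sharing transition in the five-state protocol of US5996049A above (MESI plus the "recently read" state, which fits in the three per-entry state bits the abstract mentions; all identifiers are assumed): on a read, the cache currently holding the line in the R state intervenes and supplies the value, its own copy drops to Shared, and the requester becomes the new R holder, so the value need not come from system memory while some cache still has a recently read copy.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Assumed encoding of the five states named in the abstract (MESI plus a
    // "Recent"/recently-read state).
    enum class State { Modified, Exclusive, Shared, Invalid, Recent };

    const char* name(State s) {
        switch (s) {
            case State::Modified:  return "M";
            case State::Exclusive: return "E";
            case State::Shared:    return "S";
            case State::Invalid:   return "I";
            case State::Recent:    return "R";
        }
        return "?";
    }

    // Read of a shared line by `requester`: if another cache holds the line in
    // the Recent state it intervenes and sources the value, then downgrades to
    // Shared; the requester's copy becomes the new Recent (most recently read)
    // copy. Without a Recent holder the value comes from system memory instead.
    void shared_read(std::vector<State>& caches, std::size_t requester) {
        bool intervened = false;
        for (std::size_t i = 0; i < caches.size(); ++i) {
            if (i != requester && caches[i] == State::Recent) {
                caches[i] = State::Shared;
                intervened = true;
                break;
            }
        }
        caches[requester] = State::Recent;
        std::cout << (intervened ? "sourced by intervention from the R cache"
                                 : "sourced from system memory (RAM)") << "\n";
    }

    int main() {
        // Cache 0 holds the most recently read copy; cache 1 shares it; cache 2
        // does not have the line and now reads it.
        std::vector<State> caches{State::Recent, State::Shared, State::Invalid};
        shared_read(caches, 2);
        for (std::size_t i = 0; i < caches.size(); ++i)
            std::cout << "cache " << i << ": " << name(caches[i]) << "\n";
        return 0;
    }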
    • 48. Granted invention patent
    • Hardware-managed programmable congruence class caching mechanism
    • US5983322A
    • 1999-11-09
    • US839560
    • 1997-04-14
    • Ravi Kumar Arimilli; Leo James Clark; John Steven Dodson; Jerry Don Lewis
    • G06F12/08
    • G06F12/0864
    • A method of providing programmable congruence classes in a cache used by a processor of a computer system is disclosed. A logic unit is connected to the cache for modifying original addresses of memory blocks in a memory device to produce encoded addresses. A plurality of cache congruence classes are then defined using a mapping function which operates on the encoded addresses, such that the logic unit may be used to arbitrarily assign a given one of the original addresses to a particular one of the cache congruence classes. The logic unit can modify the original addresses by setting a plurality of programmable fields. The logic unit also can collect information on cache misses, and modify the original addresses in response to the cache miss information. In this manner, a procedure running on the processor and allocating memory blocks to the cache such that the original addresses, if applied to the mapping function, would result in striding of the cache, runs more efficiently by using the encoded addresses to result in less striding of the cache.
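The hardware-managed variant in US5983322A above replaces program instructions with a logic unit that watches cache misses. The sketch below uses an assumed, much-simplified policy (the IndexLogicUnit name, the 16-miss threshold, and the single fold_shift field are all inventions for illustration): when one congruence class absorbs a burst of misses, the unit reprograms its own field so that higher address bits are folded into the index and the strided access pattern spreads out.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Assumed constants and policy for illustration only.
    constexpr unsigned kLineBits = 6;   // 64-byte cache lines
    constexpr unsigned kNumSets  = 64;  // 64 congruence classes

    // A small stand-in for the patent's logic unit: it owns a programmable field
    // (here a single fold shift), maps original addresses to congruence classes,
    // and watches per-class miss counts so it can reprogram the field itself.
    struct IndexLogicUnit {
        unsigned fold_shift = 0;  // 0 means the identity encoding
        std::vector<uint32_t> misses = std::vector<uint32_t>(kNumSets, 0);

        unsigned map(uint64_t addr) const {
            uint64_t index = addr >> kLineBits;
            if (fold_shift != 0)
                index ^= addr >> (kLineBits + fold_shift);  // fold higher bits in
            return static_cast<unsigned>(index & (kNumSets - 1));
        }

        // Called on each cache miss. If one class absorbs a burst of misses (the
        // striding symptom), pick a new encoding and reset the counters.
        void record_miss(unsigned cls) {
            if (++misses[cls] > 16) {
                fold_shift = 6;
                std::fill(misses.begin(), misses.end(), 0);
            }
        }
    };

    int main() {
        IndexLogicUnit unit;
        // A 4 KiB stride maps every block to the same congruence class under the
        // identity encoding; after enough misses the unit reprograms itself and
        // the remaining accesses spread across distinct classes.
        for (int i = 0; i < 24; ++i) {
            uint64_t addr = 0x40000 + uint64_t(i) * 4096;
            unsigned cls = unit.map(addr);
            unit.record_miss(cls);  // assume every access in this pattern misses
            std::cout << "access " << i << " -> class " << cls << "\n";
        }
        return 0;
    }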
    • 49. Granted invention patent
    • Method and system for front-end gathering of store instructions within a data-processing system
    • US5940611A
    • 1999-08-17
    • US837519
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F9/312; G06F9/38; G06F9/30
    • G06F9/30043; G06F9/3824
    • A method and system for front-end gathering of store instructions within a processor is disclosed. In accordance with the method and system of the present invention, a store queue within a data-processing system includes a front-end queue and a back-end queue. Multiple entries are provided in the back-end queue, and each entry includes an address field, a byte-count field, and a data field. A determination is first made as to whether or not a data field of a first entry of the front-end queue is filled completely. In response to a determination that the data field of the first entry of the front-end queue is not filled completely, another determination is made as to whether or not an address for a store instruction in a subsequent second entry is equal to an address for the store instruction in the first entry plus a byte count in the first entry. In response to a determination that the address for the store instruction in the subsequent second entry is equal to the address for the store instruction in the first entry plus the byte count in the first entry, the store instruction in the subsequent second entry is collapsed into the store instruction in the first entry.
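The gathering check in US5940611A above is easy to state in code. The sketch below is an assumption-laden simplification (StoreEntry, push_store, and the 8-byte data-field width are invented; the patent compares a first front-end entry against a subsequent second entry, which the sketch reduces to the newest queued entry versus the incoming store): if the earlier entry's data field still has room and the incoming address equals that entry's address plus its byte count, the two stores are collapsed into one combined entry instead of occupying two.

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <iostream>
    #include <vector>

    constexpr std::size_t kDataFieldBytes = 8;  // assumed width of one data field

    // One entry of the front-end queue (field names follow the abstract).
    struct StoreEntry {
        uint64_t address;            // address field
        std::size_t byte_count;      // byte-count field
        std::vector<uint8_t> data;   // data field, up to kDataFieldBytes
    };

    // Gathering check: collapse the incoming store into the newest front-end
    // entry when that entry's data field still has room and the incoming address
    // equals the entry's address plus its byte count; otherwise queue it as a
    // separate entry (which would later drain toward the back-end queue).
    void push_store(std::deque<StoreEntry>& front_end, const StoreEntry& incoming) {
        if (!front_end.empty()) {
            StoreEntry& prior = front_end.back();
            bool has_room   = prior.byte_count + incoming.byte_count <= kDataFieldBytes;
            bool contiguous = incoming.address == prior.address + prior.byte_count;
            if (has_room && contiguous) {
                prior.data.insert(prior.data.end(), incoming.data.begin(), incoming.data.end());
                prior.byte_count += incoming.byte_count;
                return;  // collapsed: one combined store instead of two
            }
        }
        front_end.push_back(incoming);
    }

    int main() {
        std::deque<StoreEntry> front_end;
        push_store(front_end, {0x1000, 4, {1, 2, 3, 4}});
        push_store(front_end, {0x1004, 4, {5, 6, 7, 8}});  // contiguous: gathered
        push_store(front_end, {0x2000, 4, {9, 9, 9, 9}});  // not contiguous: new entry

        for (const StoreEntry& e : front_end)
            std::cout << "entry at 0x" << std::hex << e.address << std::dec
                      << ", byte count " << e.byte_count << "\n";
        return 0;
    }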