    • 1. Invention grant
    • Title: Slave cache having sub-line valid bits updated by a master cache
    • Publication number: US5784590A
    • Publication date: 1998-07-21
    • Application number: US618637
    • Application date: 1996-03-19
    • Inventors: Earl T. Cohen; Jay C. Pattin
    • IPC: G06F12/08
    • CPC: G06F12/0848; G06F12/0857; G06F12/0897; G06F12/0831
    • A cache system has a large master cache and smaller slave caches. The slave caches are coupled to the processor's pipelines and are kept small and simple to increase their speed. The master cache is set-associative and performs many of the complex cache management operations for the slave caches, freeing the slaves of these bandwidth-robbing duties. Only the slave caches store sub-line valid bits with all cache lines; the master cache has only full cache lines valid. During a miss from a slave cache, the slave cache sends its sub-line valid bits to the master cache. The slave's sub-line valid bits are loaded into a request pipeline in the master cache. As requests are fulfilled and finish the pipeline, its address is compared to the addresses of all other pending requests in the master's pipeline. If another pending request matches the slave's index and tag, its sub-line valid bits are updated by setting the corresponding sub-line valid bit for the completing request's sub-line. If another pending request matches the slave's index but not the tag, all of the other request's sub-line valid bits are cleared. Thus subline valid bits of pending requests are updated as each request completes the master's pipeline and writes its sub-line to the slave cache.
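The update rule in this abstract lends itself to a short sketch. This is a minimal reading of the described behavior, not the patented implementation: the type `pending_request`, its field names, and the constant `SUBLINES_PER_LINE` are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

#define SUBLINES_PER_LINE 4   /* assumed number of sub-lines per cache line */

/* Hypothetical view of one entry in the master cache's request pipeline. */
typedef struct {
    bool     valid;            /* entry holds a pending slave request              */
    uint32_t index;            /* slave-cache index of the request                 */
    uint32_t tag;              /* address tag of the request                       */
    uint8_t  subline_valid;    /* slave's sub-line valid bits, one per sub-line    */
} pending_request;

/* When one request completes the master's pipeline and writes sub-line
 * `done_subline` of line (done_index, done_tag) into the slave cache,
 * every other pending request is updated:
 *   - same index and same tag   -> set that sub-line's valid bit
 *   - same index, different tag -> the slave line will be replaced,
 *     so clear all of that request's sub-line valid bits.              */
void update_pending_requests(pending_request *pipe, int n_entries,
                             uint32_t done_index, uint32_t done_tag,
                             unsigned done_subline)
{
    for (int i = 0; i < n_entries; i++) {
        if (!pipe[i].valid || pipe[i].index != done_index)
            continue;
        if (pipe[i].tag == done_tag)
            pipe[i].subline_valid |= (uint8_t)(1u << done_subline);
        else
            pipe[i].subline_valid = 0;
    }
}
```

The point the sketch captures is that a completing request only ever touches entries with the same index: matching tags gain a sub-line, mismatching tags lose all of theirs because the slave line is about to be replaced.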
    • 2. Invention grant
    • Title: Master-slave cache system with de-coupled data and tag pipelines and loop-back
    • Publication number: US5692152A
    • Publication date: 1997-11-25
    • Application number: US649115
    • Application date: 1996-05-14
    • Inventors: Earl T. Cohen; Jay C. Pattin
    • IPC: G06F12/08
    • CPC: G06F12/0848; G06F12/0857; G06F12/0897; G06F12/0831
    • A cache system has a large master cache and smaller slave caches. The slave caches are coupled to the processor's pipelines and are kept small and simple to increase their speed. The master cache is set-associative and performs many of the complex cache management operations for the slave caches, freeing the slaves of these bandwidth-robbing duties. The master cache has a tag pipeline for accessing the tag RAM array, and a data pipeline for accessing the data RAM array. The tag pipeline is optimized for fast access of the tag RAM array, while the data pipeline is optimized for overall data transfer bandwidth. The tag pipeline and the data pipeline are bound together for retrieving the first sub-line of a new miss from the slave cache. Subsequent sub-lines only use the data pipeline, freeing the tag pipeline for other operations. Bus snoops and cache management operations can use just the tag pipeline without impacting data bandwidth. Loop-back flows are performed which cancel an intervening flow in the tag pipeline when the index portions of the addresses match.
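A rough way to picture the decoupling is to classify each flow by which pipelines it occupies. The enum `flow_type`, the struct `pipe_usage`, and the reduced loop-back check below are assumptions made for the example, not structures taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flow categories handled by the master cache. */
typedef enum {
    FLOW_MISS_FIRST_SUBLINE,   /* first sub-line of a new slave miss */
    FLOW_MISS_NEXT_SUBLINE,    /* remaining sub-lines of that miss   */
    FLOW_BUS_SNOOP,            /* external snoop                     */
    FLOW_CACHE_MGMT            /* invalidate, zero, etc.             */
} flow_type;

typedef struct {
    bool uses_tag_pipe;        /* occupies the tag-RAM pipeline  */
    bool uses_data_pipe;       /* occupies the data-RAM pipeline */
} pipe_usage;

/* Only the first sub-line of a miss needs both pipelines; later
 * sub-lines stream through the data pipeline alone, and snoops or
 * management flows touch only the tags, as the abstract describes. */
pipe_usage classify_flow(flow_type t)
{
    switch (t) {
    case FLOW_MISS_FIRST_SUBLINE: return (pipe_usage){ true,  true  };
    case FLOW_MISS_NEXT_SUBLINE:  return (pipe_usage){ false, true  };
    case FLOW_BUS_SNOOP:
    case FLOW_CACHE_MGMT:         return (pipe_usage){ true,  false };
    }
    return (pipe_usage){ false, false };
}

/* Loop-back rule, reduced to its core: a completing flow cancels an
 * intervening tag-pipeline flow when the index portions match.      */
bool loopback_cancels(uint32_t completing_index, uint32_t intervening_index)
{
    return completing_index == intervening_index;
}
```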
    • 3. Invention grant
    • Title: Master-slave cache system for instruction and data cache memories
    • Publication number: US5551001A
    • Publication date: 1996-08-27
    • Application number: US267658
    • Application date: 1994-06-29
    • Inventors: Earl T. Cohen; Russell W. Tilleman; Jay C. Pattin; James S. Blomgren
    • IPC: G06F12/08
    • CPC: G06F12/0897; G06F12/0848; G06F12/0857; G06F12/0831
    • A master-slave cache system has a large, set-associative master cache, and two smaller direct-mapped slave caches, a slave instruction cache for supplying instructions to an instruction pipeline of a processor, and a slave data cache for supplying data operands to an execution pipeline of the processor. The master cache and the slave caches are tightly coupled to each other. This tight coupling allows the master cache to perform most cache management operations for the slave caches, freeing the slave caches to supply a high bandwidth of instructions and operands to the processor's pipelines. The master cache contains tags that include valid bits for each slave, allowing the master cache to determine if a line is present and valid in either of the slave caches without interrupting the slave caches. The master cache performs all search operations required by external snooping, cache invalidation, cache data zeroing instructions, and store-to-instruction-stream detection. The master cache interrupts the slave caches only when the search reveals that a line is valid in a slave cache, the master cache causing the slave cache to invalidate the line. A store queue is shared between the master cache and the slave data cache. Store data is written from the store queue directly in to both the slave data cache and the master cache, eliminating the need for the slave data cache to write data through to the master cache. The master-slave cache system also eliminates the need for a second set of address tags for snooping and coherency operations. The master cache can be large and designed for a low miss rate, while the slave caches are designed for the high speed required by the processor's pipelines.
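The per-slave valid bits described above can be pictured as extra inclusion bits in the master's tag entry. The struct `master_tag`, the `invalidate_fn` callback, and `snoop_invalidate` are hypothetical names; the sketch only shows why a snoop rarely interrupts the slave caches.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical master-cache tag entry with per-slave inclusion bits. */
typedef struct {
    bool     line_valid;       /* line valid in the master cache        */
    bool     valid_in_icache;  /* copy also valid in the slave I-cache  */
    bool     valid_in_dcache;  /* copy also valid in the slave D-cache  */
    uint32_t tag;
} master_tag;

/* A callback of this kind would interrupt a slave cache; the design
 * keeps such interruptions rare so the slaves can keep feeding the
 * processor's pipelines.                                              */
typedef void (*invalidate_fn)(uint32_t index);

/* Handle a snoop (or cache-invalidation search) that hits at `index`:
 * the master consults only its own tags, and interrupts a slave only
 * if that slave's valid bit says it actually holds the line.          */
void snoop_invalidate(master_tag *t, uint32_t index,
                      invalidate_fn inval_icache, invalidate_fn inval_dcache)
{
    if (!t->line_valid)
        return;                         /* not cached: slaves untouched  */
    if (t->valid_in_icache) {
        inval_icache(index);            /* slave I-cache drops its copy  */
        t->valid_in_icache = false;
    }
    if (t->valid_in_dcache) {
        inval_dcache(index);            /* slave D-cache drops its copy  */
        t->valid_in_dcache = false;
    }
    t->line_valid = false;              /* invalidate the master's copy  */
}
```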
    • 4. Invention grant
    • Title: Combined store queue for a master-slave cache system
    • Publication number: US5644752A
    • Publication date: 1997-07-01
    • Application number: US350815
    • Application date: 1994-12-07
    • Inventors: Earl T. Cohen; Russell W. Tilleman; Jay C. Pattin
    • IPC: G06F12/08
    • CPC: G06F12/0848; G06F12/0857; G06F12/0897; G06F12/0831
    • A master-slave cache system has a large master cache and smaller slave caches, including a slave data cache for supplying operands to an execution pipeline of a processor. The master cache performs all cache coherency operations, freeing the slaves to supply the processor's pipelines at their maximum bandwidth. A store queue is shared between the master cache and the slave data cache. Store data from the processor's execute pipeline is written from the store queue directly into both the master cache and the slave data cache, eliminating the need for the slave data cache to write data back to the master cache. Additionally, fill data from the master cache to the slave data cache is first written to the store queue. This fill data is available for use while in the store queue because the store queue acts as an extension to the slave data cache. Cache operations, diagnostic stores and TLB entries are also loaded into the store queue. A new store or line fill can be merged into an existing store queue entry. Each entry has valid bits for the master cache, the slave data cache, and the slave's tag. Separate byte enables are provided for the master and slave caches, but a single physical address field in each store queue entry is used.
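A rough model of one store-queue entry and the merge step follows. The names `sq_entry` and `sq_try_merge` and the 8-byte entry width are assumptions; the real queue also handles fills, cache operations, diagnostic stores, and TLB loads, which are omitted here.

```c
#include <stdbool.h>
#include <stdint.h>

#define ENTRY_BYTES 8                   /* assumed width of one store-queue entry */

/* Hypothetical entry mirroring the fields named in the abstract: one
 * physical address, data, separate byte enables for the master and
 * slave caches, and per-destination valid bits.                       */
typedef struct {
    bool     entry_valid;
    uint64_t phys_addr;                 /* single physical address field          */
    uint8_t  data[ENTRY_BYTES];
    uint8_t  master_byte_en;            /* bytes still to be written to master    */
    uint8_t  slave_byte_en;             /* bytes still to be written to slave     */
    bool     master_valid;              /* entry pending for the master cache     */
    bool     slave_valid;               /* entry pending for the slave D-cache    */
    bool     slave_tag_valid;           /* slave tag written for this line        */
} sq_entry;

/* Merge a new store into an existing entry when it targets the same
 * aligned block; otherwise the caller allocates a new entry.  `bytes`
 * and `byte_en` are given in block-aligned byte lanes.                */
bool sq_try_merge(sq_entry *e, uint64_t addr, const uint8_t *bytes, uint8_t byte_en)
{
    if (!e->entry_valid ||
        (e->phys_addr & ~(uint64_t)(ENTRY_BYTES - 1)) !=
        (addr         & ~(uint64_t)(ENTRY_BYTES - 1)))
        return false;
    for (int i = 0; i < ENTRY_BYTES; i++)
        if (byte_en & (1u << i))
            e->data[i] = bytes[i];      /* newer store bytes overwrite older ones */
    e->master_byte_en |= byte_en;       /* both caches still need these bytes     */
    e->slave_byte_en  |= byte_en;
    e->master_valid = e->slave_valid = true;
    return true;
}
```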
    • 5. Invention grant
    • Title: Multi-processor DRAM controller that prioritizes row-miss requests to stale banks
    • Publication number: US5745913A
    • Publication date: 1998-04-28
    • Application number: US691005
    • Application date: 1996-08-05
    • Inventors: Jay C. Pattin; James S. Blomgren
    • IPC: G06F12/02; G06F13/16; G06F12/00
    • CPC: G06F13/1631; G06F12/0215
    • Memory requests from multiple processors are re-ordered to maximize DRAM row hits and minimize row misses. Requests are loaded into a request queue and simultaneously decoded to determine the DRAM bank of the request. The last row address of the decoded DRAM bank is compared to the row address of the new request and a row-hit bit is set in the request queue if the row addresses match. The bank's state machine is consulted to determine if RAS is low or high, and a RAS-low bit in the request queue is set if RAS is low and the row still open. A row counter is reset for every new access but is incremented with a slow clock while the row is open but not being accessed. After a predetermined count, the row is considered "stale". A stale-row bit in the request queue is set if the decoded bank has a stale row. A request prioritizer reviews requests in the request queue and processes row-hit requests first, then row misses which are to a stale row. Lower in priority are row misses to non-stale rows which have been more recently accessed. Requests loaded into the request queue before the cache has determined if a cache hit has occurred are speculative requests and can open a new row when the old row is stale or closed.
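The prioritization described above might look roughly like the sketch below, where `dram_request`, the three-level priority encoding, and the handling of speculative requests are inferred from the abstract alone.

```c
#include <stdbool.h>

/* Hypothetical request-queue entry carrying the decoded per-bank bits
 * described in the abstract.                                           */
typedef struct {
    bool valid;
    bool row_hit;       /* new row address matches the bank's open row   */
    bool ras_low;       /* RAS asserted: the bank's row is still open    */
    bool stale_row;     /* row counter expired: row open but idle        */
    bool speculative;   /* issued before the cache hit/miss is known     */
} dram_request;

/* Lower number = higher priority: row hits first, then row misses that
 * only disturb a stale (or closed) row, then row misses to rows that
 * were more recently accessed.                                          */
static int priority_of(const dram_request *r)
{
    if (r->row_hit)                  return 0;
    if (r->stale_row || !r->ras_low) return 1;  /* stale or closed row   */
    return 2;                                   /* non-stale open row    */
}

/* Pick the next request to issue from the queue. */
int select_next_request(const dram_request *q, int n)
{
    int best = -1, best_prio = 3;
    for (int i = 0; i < n; i++) {
        if (!q[i].valid)
            continue;
        int p = priority_of(&q[i]);
        /* Speculative requests may open a new row only when the old row
         * is stale or closed; otherwise hold them back.                 */
        if (q[i].speculative && !q[i].row_hit && p == 2)
            continue;
        if (p < best_prio) {
            best_prio = p;
            best = i;
        }
    }
    return best;    /* index of the chosen request, or -1 if none */
}
```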
    • 6. Invention grant
    • Title: Emulating operating system calls in an alternate instruction set using a modified code segment descriptor
    • Publication number: US5481684A
    • Publication date: 1996-01-02
    • Application number: US277905
    • Application date: 1994-07-20
    • Inventors: David E. Richter; Jay C. Pattin; James S. Blomgren
    • IPC: G06F9/30; G06F9/318; G06F9/38; G06F9/455; G06F12/02
    • CPC: G06F9/30145; G06F9/30167; G06F9/30174; G06F9/30185; G06F9/30196; G06F9/3822; G06F9/45554; G06F12/0292
    • The CISC architecture is extended to provide for segments that can hold RISC code rather than just CISC code. These new RISC code segments have descriptors that are almost identical to the CISC segment descriptors, and therefore these RISC descriptors may reside in the CISC descriptor tables. The global descriptor table in particular may have CISC code segment descriptors for parts of the operating system that are written in x86 CISC code, while also having RISC code segment descriptors for other parts of the operating system that are written in RISC code. An undefined or reserved bit within the descriptor is used to indicate which instruction set the code in the segment is written in. An existing user program may be written in CISC code, but call a service routine in an operating system that is written in RISC code. Thus existing CISC programs may be executed on a processor that emulates a CISC operating system using RISC code. A processor capable of decoding both the CISC and RISC instruction sets is employed. The switch from CISC to RISC instruction decoding is triggered when control is transferred to a new segment, and the segment descriptor indicates that the code within the segment is written in the alternate instruction set.
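A toy model of the decode-mode switch on a control transfer is sketched below. The bit chosen for `RISC_SEGMENT_BIT` (the descriptor's AVL position) and the direct table lookup are illustrative assumptions, not the encoding the patent claims.

```c
#include <stdint.h>

/* Which decoder the front end should use for the current code segment. */
typedef enum { DECODE_CISC, DECODE_RISC } decode_mode;

/* x86 code segment descriptors are 8 bytes; a bit left available to
 * system software is reused here as an "alternate instruction set"
 * flag.  The position below is illustrative only.                     */
#define RISC_SEGMENT_BIT  (1ull << 52)

static decode_mode mode_for_descriptor(uint64_t descriptor)
{
    return (descriptor & RISC_SEGMENT_BIT) ? DECODE_RISC : DECODE_CISC;
}

/* Model of a far control transfer (e.g. a CISC user program calling an
 * OS service routine): the new segment's descriptor is read from the
 * descriptor table, and the decode mode follows the segment, so the
 * called routine may be RISC code even though the caller is CISC.      */
decode_mode control_transfer(const uint64_t *descriptor_table, unsigned selector_index)
{
    uint64_t desc = descriptor_table[selector_index];
    return mode_for_descriptor(desc);
}
```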