    • 1. Granted invention patent
    • Simplified writeback handling
    • US06477622B1
    • 2002-11-05
    • US09670856
    • 2000-09-26
    • Kevin B. Normoyle; Meera Kasinathan; Rajasekhar Cherabuddi
    • G06F12/00
    • G06F12/0804
    • The main cache of a processor in a multiprocessor computing system is coupled to receive writeback data during writeback operations. In one embodiment, during writeback operations, e.g., for a cache miss, dirty data in the main cache is merged with modified data from an associated write cache, and the resultant writeback data line is loaded into a writeback buffer. The writeback data is also written back into the main cache, and is maintained in the main cache until replaced by new data. Subsequent requests (i.e., snoops) for the data are then serviced from the main cache, rather than from the writeback buffer. In some embodiments, further modifications of the writeback data in the main cache are prevented. The writeback data line in the main cache remains valid until read data for the cache miss is returned, thereby ensuring that the read address reaches the system interface for proper bus ordering before the writeback line is lost. In one embodiment, the writeback operation is paired with the read operation for the cache miss to ensure that upon completion of the read operation, the writeback address has reached the system interface for bus ordering, thereby maintaining cache coherency while allowing requests to be serviced from the main cache.
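The writeback flow described in the abstract above can be sketched in a small model. This is an illustrative simulation under assumed names (`WritebackModel`, `evict`, `snoop` are not from the patent): on eviction, the victim line's dirty data is merged with bytes pending in an associated write cache, the merged line goes into a writeback buffer *and* is written back into the main cache marked clean, so subsequent snoops are serviced from the main cache rather than the buffer.

```python
# Hypothetical sketch of the claimed writeback handling; all names are
# illustrative, not taken from the patent text.

class WritebackModel:
    def __init__(self):
        self.main_cache = {}        # addr -> (line bytes, "dirty" | "clean")
        self.write_cache = {}       # addr -> {offset: modified byte}
        self.writeback_buffer = []  # (addr, line) awaiting transfer to memory

    def evict(self, addr):
        """Writeback operation for a victim line at `addr`."""
        data, _ = self.main_cache[addr]
        line = bytearray(data)
        # Merge modified bytes from the associated write cache.
        for offset, byte in self.write_cache.pop(addr, {}).items():
            line[offset] = byte
        merged = bytes(line)
        self.writeback_buffer.append((addr, merged))
        # The merged line is also written back into the main cache and kept
        # there, marked clean so further modification is prevented; later
        # snoops hit this copy instead of the writeback buffer.
        self.main_cache[addr] = (merged, "clean")
        return merged

    def snoop(self, addr):
        # Snoops are serviced from the main cache, not the writeback buffer.
        return self.main_cache[addr][0]

m = WritebackModel()
m.main_cache[0x100] = (b"\x00" * 4, "dirty")
m.write_cache[0x100] = {1: 0xAB}
merged = m.evict(0x100)
print(merged.hex())          # merged line with byte 1 modified
print(m.snoop(0x100).hex())  # snoop hits the retained main-cache copy
```

The sketch omits the bus-ordering aspect (holding the line valid until the miss's read data returns); it only shows the merge-and-retain data path.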
    • 2. Granted invention patent
    • Method to reduce memory latencies by performing two levels of speculation
    • US06496917B1
    • 2002-12-17
    • US09499264
    • 2000-02-07
    • Rajasekhar Cherabuddi; Kevin B. Normoyle; Brian J. McGee; Meera Kasinathan; Anup Sharma; Sutikshan Bhutani
    • G06F12/00
    • G06F12/0831; G06F2212/2542; G06F2212/507
    • A multiprocessor system includes a plurality of central processing units (CPUs) connected to one another by a system bus. Each CPU includes a cache controller to communicate with its cache, and a primary memory controller to communicate with its primary memory. When there is a cache miss in a CPU, the cache controller routes an address request for primary memory directly to the primary memory via the CPU as a speculative request without accessing the system bus, and also issues the address request to the system bus to facilitate data coherency. The speculative request is queued in the primary memory controller, which in turn retrieves speculative data from a specified primary memory address. The CPU monitors the system bus for a subsequent transaction that requests the specified data in the primary memory. If the subsequent transaction requesting the specified data is a read transaction that corresponds to the speculative address request, the speculative request is validated and becomes non-speculative. If, on the other hand, the subsequent transaction requesting the specified data is a write transaction, the speculative request is canceled.
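The validate-or-cancel step described above can be sketched as follows. This is a minimal model under assumed names (`MemoryController`, `observe_bus` are invented for illustration): a speculative request prefetches data from primary memory immediately; the entry is then validated by a matching READ observed on the system bus, or canceled by a WRITE to the same address.

```python
# Illustrative sketch of two-level speculation, not the patent's implementation.

class MemoryController:
    def __init__(self, memory):
        self.memory = memory
        self.speculative = {}  # addr -> prefetched data, pending validation

    def speculative_request(self, addr):
        # Retrieve data immediately, before bus ordering confirms the access.
        self.speculative[addr] = self.memory[addr]

    def observe_bus(self, kind, addr):
        """Monitor the system bus for the transaction that follows the miss."""
        if addr not in self.speculative:
            return None
        if kind == "READ":
            # Matching read: the request becomes non-speculative; hand back data.
            return self.speculative.pop(addr)
        if kind == "WRITE":
            # A write to the address invalidates the prefetched data.
            del self.speculative[addr]
        return None

mc = MemoryController({0x40: "old"})
mc.speculative_request(0x40)
print(mc.observe_bus("READ", 0x40))  # validated -> prints the prefetched data
```

A WRITE observed instead of the READ would simply drop the entry, so the CPU falls back to a normal, fully ordered memory access.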
    • 8. Granted invention patent
    • Apparatus and method to speculatively initiate primary memory accesses
    • US5761708A
    • 1998-06-02
    • US658874
    • 1996-05-31
    • Rajasekhar Cherabuddi; Anuradha Moudgal; Kevin Normoyle
    • G06F12/08; G06F13/16; G06F13/18
    • G06F13/161; G06F12/0884
    • A central processing unit with an external cache controller and a primary memory controller is used to speculatively initiate primary memory access in order to improve average primary memory access times. The external cache controller processes an address request during an external cache latency period and selectively generates an external cache miss signal or an external cache hit signal. If no other primary memory access demands exist at the beginning of the external cache latency period, the primary memory controller is used to speculatively initiate a primary memory access corresponding to the address request. The speculative primary memory access is completed in response to an external cache miss signal. The speculative primary memory access is aborted if an external cache hit signal is generated or a non-speculative primary memory access demand is generated during the external cache latency period.
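The control flow in the abstract above reduces to a simple decision: start a speculative primary memory access only if the memory controller is idle when the external-cache latency period begins; complete it on a miss, abort it on a hit or a competing demand. A hedged sketch, with all function and variable names assumed for illustration:

```python
# Minimal model of speculative primary-memory initiation; the real mechanism
# is a hardware controller, not a function call.

def handle_request(addr, external_cache, memory, demand_pending):
    speculative_started = not demand_pending  # start only if controller is idle
    hit = addr in external_cache              # resolved after the latency period
    if hit:
        # External cache hit: the speculative access (if any) is aborted.
        return ("cache", external_cache[addr], speculative_started and "aborted")
    if speculative_started:
        # Miss confirms the speculation; the memory access is already in flight,
        # hiding part of the external-cache latency.
        return ("memory", memory[addr], "completed")
    # No speculation was possible; issue a normal (slower) memory access.
    return ("memory", memory[addr], "not started")

cache = {0x10: "cached"}
mem = {0x10: "cached", 0x20: "from-dram"}
print(handle_request(0x20, cache, mem, demand_pending=False))
# -> ('memory', 'from-dram', 'completed')
```

The benefit is that on a miss the DRAM access overlaps the external-cache lookup instead of starting after it.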
    • 9. Granted invention patent
    • Dynamically allocated cache memory for a multi-processor unit
    • US06725336B2
    • 2004-04-20
    • US09838921
    • 2001-04-20
    • Rajasekhar Cherabuddi
    • G06F12/08
    • G06F12/084
    • The resources of a partitioned cache memory are dynamically allocated between two or more processors on a multi-processor unit (MPU). In one embodiment, the MPU includes first and second processors, and the cache memory includes first and second partitions. A cache access circuit selectively transfers data between the cache memory partitions to maximize cache resources. In one mode, both processors are active and may simultaneously execute separate instruction threads. In this mode, the cache access circuit allocates the first cache memory partition as dedicated cache memory for the first processor, and allocates the second cache memory partition as dedicated cache memory for the second processor. In another mode, one processor is active, and the other processor is inactive. In this mode, the cache access circuit allocates both the first and second cache memory partitions as cache memory for the active processor.
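The two allocation modes described above map to a small allocation table. A hedged sketch with an invented interface (`allocate_partitions` and the partition indices are illustrative, not from the patent): when both processors are active, each gets a dedicated partition; when one is inactive, the active processor is given both.

```python
# Illustrative model of mode-dependent cache-partition allocation for a
# two-processor MPU with a two-partition cache.

def allocate_partitions(cpu0_active, cpu1_active):
    """Return {cpu: [partition indices]} for the current activity mode."""
    if cpu0_active and cpu1_active:
        return {0: [0], 1: [1]}   # dedicated partition per processor
    if cpu0_active:
        return {0: [0, 1]}        # active CPU gets the whole cache
    if cpu1_active:
        return {1: [0, 1]}
    return {}                     # no active processor, nothing allocated

print(allocate_partitions(True, True))   # {0: [0], 1: [1]}
print(allocate_partitions(True, False))  # {0: [0, 1]}
```

In hardware this decision would be made by the cache access circuit, which also migrates data between partitions when the mode changes; the sketch covers only the allocation policy.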