    • 1. Granted Patent
    • Mechanism for high performance transfer of speculative request data between levels of cache hierarchy
    • 在高速缓存层级之间高速传输推测请求数据的机制
    • US06532521B1
    • 2003-03-11
    • US09345715
    • 1999-06-30
    • Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; Leo James Clark; John Steven Dodson; Guy Lynn Guthrie; James Stephen Fields, Jr.
    • G06F12/00
    • G06F9/3802, G06F9/30047, G06F9/383, G06F12/0811, G06F12/0862, G06F12/123, G06F12/127
    • A method of operating a processing unit of a computer system by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used: the first prefetch unit is hardware independent and dynamically monitors one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit is aware of the lower-level storage subsystem and sends with the prefetch request an indication that a prefetch value is to be loaded into a lower-level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value. The prefetch limit of cache usage may be established with a maximum number of sets in a congruence class usable by the requesting processing unit. A flag in a directory of the cache may be set to indicate that the prefetch value was retrieved as the result of a prefetch operation. In the implementation wherein the cache is a multi-level cache, a second flag in the cache directory may be set to indicate that the prefetch value has been sourced to an upstream cache. A cache line containing prefetch data can be automatically invalidated after a preset amount of time has passed since the prefetch value was requested.
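The prefetch-limit allocation policy in the abstract above can be sketched in software. This is a hypothetical illustration, not IBM's actual hardware: it models one congruence class (cache set) that caps how many ways prefetched lines may occupy, and, once the cap is met, reuses a way already holding prefetched data rather than displacing demand-fetched lines. The class and method names (`CacheSet`, `allocate_prefetch`) are illustrative.

```python
class CacheSet:
    """One congruence class with a cap on ways usable for prefetched lines."""

    def __init__(self, num_ways, prefetch_limit):
        self.num_ways = num_ways
        self.prefetch_limit = prefetch_limit  # max ways holding prefetch data
        # each way is None (empty) or a dict with a 'tag' and a 'prefetch' flag,
        # mirroring the directory flag the abstract describes
        self.ways = [None] * num_ways

    def allocate_prefetch(self, tag):
        """Allocate a way for a prefetched line, honoring the prefetch limit."""
        prefetch_ways = [i for i, w in enumerate(self.ways)
                         if w is not None and w['prefetch']]
        if len(prefetch_ways) >= self.prefetch_limit:
            # Limit met: reuse the oldest way holding prefetch data (FIFO here;
            # the patent leaves the victim choice to the implementation).
            victim = prefetch_ways[0]
        else:
            # Otherwise prefer an empty way, falling back to way 0.
            empty = [i for i, w in enumerate(self.ways) if w is None]
            victim = empty[0] if empty else 0
        self.ways[victim] = {'tag': tag, 'prefetch': True}
        return victim
```

With 4 ways and a limit of 2, a third prefetch evicts the first prefetched line instead of consuming a third way, leaving the remaining ways free for demand fetches.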
    • 6. Granted Patent
    • Optimized cache allocation algorithm for multiple speculative requests
    • US06393528B1
    • 2002-05-21
    • US09345714
    • 1999-06-30
    • Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; Leo James Clark; John Steven Dodson; Guy Lynn Guthrie; James Stephen Fields, Jr.
    • G06F12/00
    • G06F12/0862, G06F12/127
    • A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower-level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower-level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
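The abstract above notes that each prefetch request can be tagged with the stream ID of its processor stream or the processor ID of the requesting unit, so a cache shared by a cluster can account for prefetches per requester. A minimal sketch of that tagging, with illustrative names (`PrefetchRequest`, `count_by_processor`) that are not from the patent:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class PrefetchRequest:
    """A prefetch request tagged as the abstract describes."""
    address: int
    stream_id: int     # identifies the active processor stream
    processor_id: int  # identifies the requesting processing unit


def count_by_processor(requests):
    """Count outstanding prefetches per processor, so a shared cache can
    enforce a per-requester prefetch limit rather than a global one."""
    return Counter(r.processor_id for r in requests)
```

For example, if processor 0 has two streams prefetching and processor 1 has one, the shared cache can see that processor 0 accounts for two of the three outstanding prefetches and throttle it independently of processor 1.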
    • 9. Granted Patent
    • Method and system for high speed access to a banked cache memory
    • US06553463B1
    • 2003-04-22
    • US09436959
    • 1999-11-09
    • Lakshminarayana Baba Arimilli; Ravi Kumar Arimilli; James Stephen Fields, Jr.; Sanjeev Ghai; Praveen S. Reddy
    • G06F12/00
    • G06F12/0864, G06F12/0846, G06F2212/6082
    • A method and system for high speed data access of a banked cache memory. In accordance with the method and system of the present invention, during a first cycle, in response to receipt of a request address at an access controller, the request address is speculatively transmitted to a banked cache memory, where the speculative transmission has at least one cycle of latency. Concurrently, the request address is snooped in a directory associated with the banked cache memory. Thereafter, during a second cycle the speculatively transmitted request address is distributed to each of multiple banks of memory within the banked cache memory. In addition, the banked cache memory is provided with a bank indication indicating which bank of memory among the multiple banks of memory contains the request address, in response to a bank hit from snooping the directory. Thereafter, data associated with the request address is output from the banked cache memory, in response to the bank indication, such that access time to a high latency remote banked cache memory is minimized.
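The two-cycle access the abstract describes can be modeled functionally: in the first cycle the request address is speculatively broadcast toward the banks while the directory is snooped in parallel, and in the second cycle the directory's bank indication selects which bank drives the data out, so the bank access overlaps the directory lookup instead of waiting for it. A hypothetical sketch with illustrative names (`BankedCache`, `install`, `access`), not the patented circuit:

```python
class BankedCache:
    """Functional model of a banked cache with a directory giving a bank hit."""

    def __init__(self, num_banks):
        self.banks = [{} for _ in range(num_banks)]  # each bank: address -> data
        self.directory = {}                          # address -> bank index

    def install(self, address, data):
        """Place a line in a bank (simple address-interleaved placement)."""
        bank = address % len(self.banks)
        self.banks[bank][address] = data
        self.directory[address] = bank

    def access(self, address):
        # Cycle 1: the address is speculatively transmitted to all banks
        # while the directory is snooped concurrently for a bank hit.
        bank_hit = self.directory.get(address)
        # Cycle 2: the bank indication tells exactly one bank to respond;
        # the speculative broadcast means the banks already have the address.
        if bank_hit is None:
            return None  # miss: no bank indication was produced
        return self.banks[bank_hit][address]
```

The point of the overlap is latency: a naive design would snoop the directory first and only then send the address to the selected bank, adding the directory's latency to every access to the remote banked cache.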