    • 22. Invention Grant
    • Cache allocation policy based on speculative request history
    • Publication No.: US06421762B1
    • Publication Date: 2002-07-16
    • Application No.: US09345713
    • Filing Date: 1999-06-30
    • Inventors: Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; Leo James Clark; John Steven Dodson; Guy Lynn Guthrie; James Stephen Fields, Jr.
    • IPC: G06F 12/00
    • CPC: G06F 12/0862; G06F 12/0897; G06F 12/126; G06F 2212/6024
    • A method of operating a processing unit of a computer system, by issuing an instruction having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value. The prefetch limit of cache usage may be established with a maximum number of sets in a congruence class usable by the requesting processing unit. A flag in a directory of the cache may be set to indicate that the prefetch value was retrieved as the result of a prefetch operation. In the implementation wherein the cache is a multi-level cache, a second flag in the cache directory may be set to indicate that the prefetch value has been sourced to an upstream cache. A cache line containing prefetch data can be automatically invalidated after a preset amount of time has passed since the prefetch value was requested.
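The allocation policy in this abstract — capping how many ways of a congruence class prefetched lines may occupy, and reusing a way already holding an earlier prefetch once that cap is met — can be sketched in software. The C++ fragment below is a minimal illustration only; the class names, the 4-way geometry, and the prefetch limit of two are assumptions, not details taken from the patent.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// One directory entry (way) of a congruence class in a set-associative cache.
struct Line {
    bool     valid      = false;
    bool     prefetched = false;  // directory flag: line was filled by a prefetch
    uint64_t tag        = 0;
};

// A single congruence class with a cap on how many ways prefetches may occupy.
// kWays and kPrefetchLimit are assumed values, not taken from the patent.
class CongruenceClass {
    static constexpr int kWays = 4;
    static constexpr int kPrefetchLimit = 2;
    std::array<Line, kWays> ways_{};

public:
    // Allocate a way for a newly prefetched value. Once the prefetch limit is
    // met, a way holding an earlier prefetch is reused instead of a demand line.
    int allocatePrefetch(uint64_t tag) {
        int prefetchCount = 0, prefetchVictim = -1, firstInvalid = -1;
        for (int i = 0; i < kWays; ++i) {
            if (!ways_[i].valid && firstInvalid < 0) firstInvalid = i;
            if (ways_[i].valid && ways_[i].prefetched) {
                ++prefetchCount;
                prefetchVictim = i;   // candidate: an earlier prefetched line
            }
        }
        int slot = (prefetchCount >= kPrefetchLimit) ? prefetchVictim
                 : (firstInvalid >= 0 ? firstInvalid : 0);
        ways_[slot] = Line{true, true, tag};
        return slot;
    }
};

int main() {
    CongruenceClass set;
    // The third and fourth prefetches reuse a prefetch-occupied way, leaving
    // the remaining ways free for demand-fetched lines.
    for (uint64_t tag = 0x100; tag < 0x104; ++tag)
        std::cout << "prefetch tag 0x" << std::hex << tag << std::dec
                  << " -> way " << set.allocatePrefetch(tag) << "\n";
}
```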
    • 24. Invention Grant
    • Multiprocessor system bus protocol with command and snoop responses for modified-unsolicited cache state
    • Publication No.: US06345343B1
    • Publication Date: 2002-02-05
    • Application No.: US09437178
    • Filing Date: 1999-11-09
    • Inventors: Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; John Steven Dodson; Guy Lynn Guthrie; William John Starke
    • IPC: G06F 12/00
    • CPC: G06F 12/0831
    • A novel cache coherency protocol provides a modified-unsolicited (MU) cache state to indicate that a value held in a cache line has been modified (i.e., is not currently consistent with system memory), but was modified by another processing unit, not by the processing unit associated with the cache that currently contains the value in the MU state, and that the value is held exclusive of any other horizontally adjacent caches. Because the value is exclusively held, it may be modified in that cache without the necessity of issuing a bus transaction to other horizontal caches in the memory hierarchy. The MU state may be applied as a result of a snoop response to a read request. The read request can include a flag to indicate that the requesting cache is capable of utilizing the MU state. Alternatively, a flag may be provided with intervention data to indicate that the requesting cache should utilize the modified-unsolicited state.
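A minimal C++ state-transition sketch of the modified-unsolicited (MU) behaviour described above: the requesting cache installs intervened modified data in the MU state only when both sides have flagged MU capability, and a later store that hits an MU line needs no bus transaction. The enum and function names are invented for illustration and are not the patent's terminology.

```cpp
#include <cassert>
#include <iostream>

// Coherency states of a cache line. ModifiedUnsolicited (MU) marks a line whose
// data was modified by another processor but is now held exclusively here.
enum class CohState { Invalid, Shared, Exclusive, Modified, ModifiedUnsolicited };

// State installed at the requesting cache when modified data arrives by
// intervention: MU if the read request advertised MU capability and the
// sourcing cache granted it, otherwise Shared.
CohState installAfterIntervention(bool requesterSupportsMU, bool sourcerGrantsMU) {
    return (requesterSupportsMU && sourcerGrantsMU) ? CohState::ModifiedUnsolicited
                                                    : CohState::Shared;
}

// A store hitting an MU line (like Modified or Exclusive) can proceed without
// a bus transaction, because no other horizontal cache holds the line.
bool storeNeedsBusTransaction(CohState s) {
    switch (s) {
        case CohState::Modified:
        case CohState::Exclusive:
        case CohState::ModifiedUnsolicited:
            return false;
        default:
            return true;
    }
}

int main() {
    CohState s = installAfterIntervention(/*requesterSupportsMU=*/true,
                                          /*sourcerGrantsMU=*/true);
    assert(s == CohState::ModifiedUnsolicited);
    std::cout << "store needs bus transaction: " << std::boolalpha
              << storeNeedsBusTransaction(s) << "\n";   // prints: false
}
```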
    • 25. Invention Grant
    • Fixed bus tags for SMP buses
    • Publication No.: US06662216B1
    • Publication Date: 2003-12-09
    • Application No.: US08839478
    • Filing Date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F 15/16
    • CPC: G06F 11/349; G06F 12/0831; G06F 2201/885
    • According to a first aspect of the present invention, a data processing system is provided that includes a communication network to which multiple devices are coupled. A first of the multiple devices includes a number of requestors (or queues), which are each permanently assigned a respective one of a number of unique tags. In response to a communication request by a requestor within the first device, a tag assigned to the requestor is transmitted on the communication network in conjunction with the requested communication transaction. According to a second aspect of the present invention, a data processing system includes a cache having a cache directory. A status indication indicative of the status of at least one of a plurality of data entries in the cache is stored in the cache directory. In response to receipt of a cache operation request, a determination is made whether to update the status indication. In response to the determination that the status indication is to be updated, the status indication is copied into a shadow register and updated. The status indication is then written back into the cache directory at a later time. The shadow register thus serves as a virtual cache controller queue that dynamically mimics a cache directory entry without functional latency.
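The first aspect above — each requestor queue owning one permanently assigned tag that accompanies every transaction it issues — maps onto a very small data structure; the shadow-register aspect is sketched separately under entry 28. The C++ below is a sketch under assumed names (`Requestor`, `BusTransaction`) and an arbitrary 8-bit tag width.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// A bus transaction always carries the fixed tag of the queue that issued it,
// so responses can be routed back without any dynamic tag allocation.
struct BusTransaction {
    uint8_t  tag;       // permanently assigned to the issuing requestor
    uint64_t address;
};

// One requestor (queue) inside a device; its tag is fixed at construction
// and is never reassigned.
class Requestor {
    const uint8_t tag_;
public:
    explicit Requestor(uint8_t tag) : tag_(tag) {}
    BusTransaction issue(uint64_t address) const { return {tag_, address}; }
};

int main() {
    // A device with four requestor queues; tags are assigned once, up front.
    std::vector<Requestor> queues;
    for (uint8_t t = 0; t < 4; ++t) queues.emplace_back(t);

    BusTransaction tx = queues[2].issue(0x1000);
    std::cout << "transaction from queue tag " << int(tx.tag)
              << " for address 0x" << std::hex << tx.address << "\n";
    // A response carrying tag 2 is routed straight back to queues[2].
}
```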
    • 26. Invention Grant
    • Bus protocol, bus master and bus snooper for execution of global operations utilizing multiple tokens
    • Publication No.: US06507880B1
    • Publication Date: 2003-01-14
    • Application No.: US09435927
    • Filing Date: 1999-11-09
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • IPC: G06F 13/00
    • CPC: G06F 12/0831
    • In response to a need to initiate a global operation, a bus master within a multiprocessor system issues a combined token and operation request on a bus coupled to the bus master. The combined token and operation request solicits one of a plurality of tokens required to complete the global operation and identifies the global operation to be processed with the token, if granted. Bus snoopers contain a number of snooper queues for global operations equal to the number of global operation tokens employed within the multiprocessor system. A bus snooper, upon detecting a combined token and operation request, begins speculatively processing the operation if the snooper is not already busy. Before completing the operation, the snooper watches for a combined response with a token number acknowledging either the combined request or a subsequent token request from the same processor, which indicates that the originating bus master has been granted a token for completing a global operation. Otherwise, a combined response acknowledging an operation request containing the token number implies release of the granted token.
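A behavioural sketch of the snooper side of this protocol: on a combined token-and-operation request the snooper starts processing speculatively if it has a free queue, then either commits when the combined response shows the token granted to the originating processor, or drops the work when the token is released. All identifiers, the single-queue simplification, and the commit/cancel bookkeeping are assumptions made for illustration, not the patent's design.

```cpp
#include <iostream>

// Snooper-side view of one global-operation slot. The patent uses one snooper
// queue per global-operation token; a single queue is assumed here.
struct SnooperQueue {
    bool busy = false;
    int  sourceProcessor = -1;
    int  tokenNumber = -1;
};

// On a combined token-and-operation request: if the queue is free, begin
// processing the operation speculatively before the token grant is known.
bool snoopCombinedRequest(SnooperQueue& q, int sourceProcessor, int tokenNumber) {
    if (q.busy) return false;                   // already occupied; retry later
    q = {true, sourceProcessor, tokenNumber};   // speculative start
    return true;
}

// On the later combined response: commit the speculative work if the token was
// granted to the same processor, otherwise drop it (the token was released).
void snoopCombinedResponse(SnooperQueue& q, int ackedProcessor, bool tokenGranted) {
    if (!q.busy || ackedProcessor != q.sourceProcessor) return;
    std::cout << (tokenGranted ? "commit" : "cancel")
              << " speculative global operation for processor "
              << q.sourceProcessor << "\n";
    if (!tokenGranted) q = {};                  // free the snooper queue
}

int main() {
    SnooperQueue q;
    snoopCombinedRequest(q, /*sourceProcessor=*/3, /*tokenNumber=*/0);
    snoopCombinedResponse(q, /*ackedProcessor=*/3, /*tokenGranted=*/true);  // commit
}
```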
    • 28. Invention Grant
    • Cache having virtual cache controller queues
    • Publication No.: US06502168B1
    • Publication Date: 2002-12-31
    • Application No.: US09404028
    • Filing Date: 1999-09-23
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F 12/00
    • CPC: G06F 11/349; G06F 12/0831; G06F 2201/885
    • According to the present invention, a data processing system includes a cache having a cache directory. A status indication indicative of the status of at least one of a plurality of data entries in the cache is stored in the cache directory. In response to receipt of a cache operation request, a determination is made whether to update the status indication. In response to the determination that the status indication is to be updated, the status indication is copied into a shadow register and updated. The status indication is then written back into the cache directory at a later time. The shadow register thus serves as a virtual cache controller queue that dynamically mimics a cache directory entry without functional latency.
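The shadow-register mechanism lends itself to a short sketch: the status indication is copied out of the cache directory, updated in the shadow register, observed there by subsequent lookups, and written back to the directory at a later time. The C++ below is a minimal model under assumed names and an assumed bit-set update semantics; it is not the patent's implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

// Minimal cache directory: one status word per cache entry (e.g. coherency bits).
struct Directory {
    std::vector<uint8_t> status;
    explicit Directory(std::size_t entries) : status(entries, 0) {}
};

// Shadow register acting as a virtual cache controller queue: it holds the
// updated status for one directory entry until the entry is written back.
class ShadowRegister {
    std::optional<std::size_t> index_;
    uint8_t value_ = 0;
public:
    // Copy the entry's current status and apply the update in the shadow
    // register (bit-set semantics assumed), instead of rewriting the directory.
    void captureAndUpdate(const Directory& dir, std::size_t index, uint8_t bitsToSet) {
        index_ = index;
        value_ = dir.status[index] | bitsToSet;
    }
    // Lookups consult the shadow copy first, so they observe the new status
    // even before the directory itself has been rewritten.
    uint8_t read(const Directory& dir, std::size_t index) const {
        return (index_ && *index_ == index) ? value_ : dir.status[index];
    }
    // Deferred write-back of the updated status into the directory.
    void writeBack(Directory& dir) {
        if (index_) { dir.status[*index_] = value_; index_.reset(); }
    }
};

int main() {
    Directory dir(8);
    ShadowRegister shadow;
    shadow.captureAndUpdate(dir, 3, /*bitsToSet=*/0x2);   // update held in shadow
    std::cout << "observed status: " << int(shadow.read(dir, 3)) << "\n";  // 2
    std::cout << "directory entry: " << int(dir.status[3]) << "\n";        // still 0
    shadow.writeBack(dir);
    std::cout << "after writeback: " << int(dir.status[3]) << "\n";        // 2
}
```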