    • 42. Granted Patent
    • Bus snooper for SMP execution of global operations utilizing a single token with implied release
    • US06460100B1
    • 2002-10-01
    • US09435929
    • 1999-11-09
    • Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • G06F13/14
    • G06F13/37
    • Only a single snooper queue for global operations within a multiprocessor system is implemented within each bus snooper, controlled by a single token allowing completion of one operation. A bus snooper, upon detecting a combined token and operation request, begins speculatively processing the operation if the snooper is not already busy. The snooper then watches for a combined response acknowledging the combined request or a subsequent token request from the same processor, which indicates that the originating processor has been granted the sole token for completing global operations, before completing the operation. When processing an operation from a combined request and detecting an operation request (only) from a different processor, which indicates that another processor has been granted the token, the snooper suspends processing of the current operation and begins processing the new operation. If the snooper is busy when a combined request is received, the snooper retries the operation portion of the combined request and, upon detecting a subsequent operation request (only) for the operation, begins processing the operation at that time if not busy. Snoop logic for large multiprocessor systems is thus simplified, with conflict reduced to situations in which multiple processors are competing for the token.
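The token-gated snooper behavior described in the abstract above can be sketched as a small state machine. This is a minimal illustrative model, not the patented implementation; the event names, return values, and fields are all hypothetical.

```python
class TokenSnooper:
    """Toy model of a bus snooper with a single global-operation queue,
    gated by a single token, per the US06460100B1 abstract."""

    def __init__(self):
        self.current_op = None   # operation being speculatively processed
        self.owner = None        # processor that issued the current operation
        self.retried = set()     # operations retried because we were busy

    def on_combined_request(self, processor, op):
        """A combined token + operation request appears on the bus."""
        if self.current_op is None:
            # not busy: begin speculative processing
            self.current_op, self.owner = op, processor
            return "accept"
        # busy: retry the operation portion of the combined request
        self.retried.add(op)
        return "retry"

    def on_operation_only(self, processor, op):
        """An operation-only request: that processor has been granted the token."""
        if self.current_op is not None and processor != self.owner:
            # another processor won the token: suspend the current
            # operation and switch to the new one
            self.current_op, self.owner = op, processor
            return "switched"
        if self.current_op is None and op in self.retried:
            # we retried this operation earlier; process it now
            self.retried.discard(op)
            self.current_op, self.owner = op, processor
            return "accept"
        return "ignore"

    def on_token_grant(self, processor):
        """Combined response (or subsequent token request from the same
        processor): the originating processor may complete the operation."""
        if self.current_op is not None and processor == self.owner:
            done, self.current_op, self.owner = self.current_op, None, None
            return ("complete", done)
        return ("ignore", None)
```

In this sketch, conflict is confined to token arbitration: a snooper never queues more than one global operation, and speculation is simply abandoned when a different processor turns out to hold the token.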
    • 43. Granted Patent
    • Method for alternate preferred time delivery of load data
    • US06389529B1
    • 2002-05-14
    • US09344059
    • 1999-06-25
    • Ravi Kumar Arimilli; Lakshminarayanan Baba Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F9/312
    • G06F9/3824; G06F9/3836; G06F9/3838; G06F9/384
    • A system for time-ordered execution of load instructions. More specifically, the system enables just-in-time delivery of data requested by a load instruction. The system consists of a processor, an L1 data cache with a corresponding L1 cache controller, and an instruction processor. The instruction processor manipulates a plurality of architected time dependency fields of a load instruction to create a plurality of dependency fields. Each dependency field holds a relative dependency value which is utilized to order the load instruction in a Relative Time-Ordered Queue (RTOQ) of the L1 cache controller. The load instruction is sent from the RTOQ to the L1 data cache at a particular time so that the requested data is loaded from the L1 data cache at the time specified by one of the dependency fields. The dependency fields are prioritized so that the cycle corresponding to the highest-priority available field is utilized.
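The prioritized dependency fields and the Relative Time-Ordered Queue described above can be sketched as follows. This is a simplified illustrative model under assumed data shapes (a dependency field as a `(priority, cycle)` pair, lower priority number = higher priority); none of these names come from the patent.

```python
import heapq

def schedule_load(dependency_fields, busy_cycles):
    """Pick the delivery cycle for a load: the cycle named by the
    highest-priority dependency field whose cycle is still free."""
    for priority, cycle in sorted(dependency_fields):
        if cycle not in busy_cycles:
            return cycle
    return None  # no preferred cycle is available; issue normally

class RTOQ:
    """Relative Time-Ordered Queue: loads keyed by target delivery cycle."""

    def __init__(self):
        self._heap = []  # min-heap of (cycle, load) pairs

    def enqueue(self, cycle, load):
        heapq.heappush(self._heap, (cycle, load))

    def pop_ready(self, now):
        """Release every load whose delivery cycle has arrived."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready
```

For example, a load whose highest-priority field names a busy cycle falls back to its next field, and the queue then releases loads in cycle order regardless of enqueue order.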
    • 44. Granted Patent
    • Extended cache state with prefetched stream ID information
    • US06360299B1
    • 2002-03-19
    • US09345644
    • 1999-06-30
    • Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; Leo James Clark; John Steven Dodson; Guy Lynn Guthrie; James Stephen Fields, Jr.
    • G06F12/00
    • G06F12/0862; G06F12/121; G06F2212/6028
    • A method of operating a computer system is disclosed in which an instruction having an explicit prefetch request is issued directly from an instruction sequence unit to a prefetch unit of a processing unit. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit. The invention may advantageously associate each prefetch request with a stream ID of an associated processor stream, or a processor ID of the requesting processing unit (the latter feature is particularly useful for caches which are shared by a processing unit cluster). If another prefetch value is requested from the memory hierarchy, and it is determined that a prefetch limit of cache usage has been met by the cache, then a cache line in the cache containing one of the earlier prefetch values is allocated for receiving the other prefetch value.
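The victim-selection behavior at the end of the abstract above — once the cache's prefetch budget is met, a line holding an earlier prefetch value is reallocated for the new one — can be sketched with a toy cache. The class, its fields, and the eviction policy details are assumptions for illustration, not the patented mechanism.

```python
from collections import OrderedDict

class PrefetchAwareCache:
    """Toy cache that tags prefetched lines with a stream ID and, once the
    prefetch limit is met, victimizes an earlier prefetched line."""

    def __init__(self, capacity, prefetch_limit):
        self.capacity = capacity
        self.prefetch_limit = prefetch_limit
        self.lines = OrderedDict()  # addr -> stream_id (None for demand lines)

    def prefetched_count(self):
        return sum(1 for sid in self.lines.values() if sid is not None)

    def install_prefetch(self, addr, stream_id):
        if self.prefetched_count() >= self.prefetch_limit:
            # prefetch limit met: allocate a line holding one of the
            # earlier prefetch values (oldest first), sparing demand lines
            victim = next(a for a, s in self.lines.items() if s is not None)
            del self.lines[victim]
        elif len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # ordinary LRU eviction
        self.lines[addr] = stream_id
```

Tagging each line with its stream ID (or, for a shared cache, the requesting processor's ID) is what lets the replacement decision distinguish prefetched lines from demand-fetched ones.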
    • 45. Granted Patent
    • Merged vertical cache controller mechanism with combined cache controller and snoop queues for in-line caches
    • US06347363B1
    • 2002-02-12
    • US09024316
    • 1998-02-17
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F12/08
    • G06F12/0811; G06F12/0831; G06F12/123
    • Logically in-line caches within a multilevel cache hierarchy are jointly controlled by a single cache controller. By combining the cache controller and snoop logic for different levels within the cache hierarchy, separate queues are not required for each level. During a cache access, the cache directories are looked up in parallel. Data is retrieved from the upper cache on a hit, or from the lower cache if the upper cache misses and the lower cache hits. LRU units may be updated in parallel based on cache directory hits. Alternatively, the lower cache LRU unit may be updated based on cache memory accesses rather than cache directory hits, or the cache hierarchy may be provided with user-selectable modes of operation for both LRU unit update schemes. The merged vertical cache controller mechanism does not require the lower cache memory to be inclusive of the upper cache memory. A novel deallocation scheme and update protocol may be implemented in conjunction with the merged vertical cache controller mechanism.
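The parallel directory lookup and hit-priority rule in the abstract above can be shown with a few lines. This is a deliberately simplified sketch: directories are modeled as address sets and data arrays as dicts, which are illustrative stand-ins, not the hardware structures.

```python
def merged_lookup(addr, l1_dir, l2_dir, l1_data, l2_data):
    """Single merged controller: probe both cache directories in parallel
    (the same cycle, in hardware), then source data from the upper cache
    on a hit, else from the lower cache if it hits."""
    hit_l1 = addr in l1_dir
    hit_l2 = addr in l2_dir
    if hit_l1:
        return ("L1", l1_data[addr])
    if hit_l2:
        return ("L2", l2_data[addr])
    return ("miss", None)
```

Note that because the mechanism does not require inclusion, an address may legitimately hit in the lower directory without being present in the upper one.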
    • 47. Granted Patent
    • Multiprocessor system bus with system controller explicitly updating snooper LRU information
    • US06338124B1
    • 2002-01-08
    • US09368229
    • 1999-08-04
    • Ravi Kumar Arimilli; John Steven Dodson; Guy Lynn Guthrie; Jody B. Joyner; Jerry Don Lewis
    • G06F12/08
    • G06F12/123; G06F12/0831
    • Combined response logic for a bus receives a combined data access and cast-out/deallocate operation initiated by a storage device within a specific level of a storage hierarchy, with the coherency state and LRU position of the cast-out/deallocate victim appended. Snoopers on the bus drive snoop responses to the combined operation with the coherency state and/or LRU position of locally-stored cache lines corresponding to the victim appended. From the coherency state and LRU position information appended to the combined operation and the snoop responses, the combined response logic determines whether an update of the LRU position and/or coherency state of a cache line corresponding to the victim is required within one of the snoopers. If so, the combined response logic selects a snooper storage device to have at least the LRU position of its cache line corresponding to the victim updated, and appends an update command identifying the selected snooper to the combined response. The snooper selected for update may be chosen randomly, selected based on the LRU position of the cache line corresponding to the victim within the respective storage, or selected based on other criteria.
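The selection step in the abstract above — choosing which snooper's LRU information to update explicitly — can be sketched as a small policy function. The abstract permits several policies (random, LRU-position-based, or other criteria); this sketch shows only the LRU-position-based one, and the response encoding (`snooper -> LRU position or None`) is an assumption.

```python
def select_snooper_for_update(snoop_responses):
    """Given snoop responses mapping each snooper ID to its LRU position
    for the victim line (0 = most recently used, None = line not held),
    select the snooper holding the line in the most-recent position, so
    the explicit update can push its copy toward least-recently-used."""
    holders = [(pos, sid) for sid, pos in snoop_responses.items()
               if pos is not None]
    if not holders:
        return None  # no snooper holds the victim; nothing to update
    return min(holders)[1]
```

The combined response logic would then append an update command naming the returned snooper to the combined response it drives back on the bus.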
    • 48. Granted Patent
    • High performance multiprocessor system with modified-unsolicited cache state
    • US06321306B1
    • 2001-11-20
    • US09437179
    • 1999-11-09
    • Ravi Kumar Arimilli; Lakshminarayana Baba Arimilli; John Steven Dodson; Guy Lynn Guthrie; William John Starke
    • G06F12/12
    • G06F12/0831
    • A novel cache coherency protocol provides a modified-unsolicited (MU) cache state to indicate that a value held in a cache line has been modified (i.e., is not currently consistent with system memory), but was modified by another processing unit, not by the processing unit associated with the cache that currently contains the value in the MU state, and that the value is held exclusive of any other horizontally adjacent caches. Because the value is exclusively held, it may be modified in that cache without the necessity of issuing a bus transaction to other horizontal caches in the memory hierarchy. The MU state may be applied as a result of a snoop response to a read request. The read request can include a flag to indicate that the requesting cache is capable of utilizing the MU state. Alternatively, a flag may be provided with intervention data to indicate that the requesting cache should utilize the modified-unsolicited state.
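The conditions under which the modified-unsolicited (MU) state is installed, per the abstract above, can be sketched as a decision function. This is a hedged illustration of where MU sits relative to MESI-style states; the function signature and fallback states are assumptions, not the patent's protocol tables.

```python
def install_state(data_modified_elsewhere, requester_mu_capable, other_sharers):
    """State a requesting cache assigns to a line received via intervention
    in response to its read request, per the MU-state idea."""
    if other_sharers:
        return "S"  # not exclusively held: shared
    if data_modified_elsewhere:
        # dirty value produced by another processor, held exclusively here:
        # MU if the requester flagged MU capability, else demote to shared
        return "MU" if requester_mu_capable else "S"
    return "E"  # clean and exclusive

def silent_store_allowed(state):
    """MU, like M and E, permits modification with no bus transaction,
    because the value is exclusively held."""
    return state in ("M", "E", "MU")
```

The `requester_mu_capable` flag models the abstract's read-request flag indicating the requesting cache can utilize the MU state (the alternative, a flag on the intervention data, would work symmetrically).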
    • 50. Granted Patent
    • Cache coherency protocol having hovering (H) and recent (R) states
    • US06292872B1
    • 2001-09-18
    • US09024609
    • 1998-02-17
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F12/00
    • G06F12/0833
    • A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of processors and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the tag is valid and that the first data item is invalid. Thereafter, the interconnect is snooped to detect a data transfer initiated by another of the plurality of caches, where the data transfer is associated with the address indicated by the address tag and contains a valid second data item. In response to detection of such a data transfer while the coherency indicator is set to the first state, the first data item is replaced by storing the second data item in the first cache in association with the address tag. In addition, the coherency indicator is updated to a second state indicating that the second data item is valid and that the first cache can supply said second data item in response to a request.
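The hovering-state behavior in the abstract above — a valid tag with invalid data that re-validates itself by snooping another cache's data transfer — can be sketched with a toy cache-line class. The class and method names are illustrative; only the H-to-R transition rule comes from the abstract.

```python
class HoverLine:
    """Toy cache line modeling the hovering (H) and recent (R) states of
    US06292872B1: in H, the address tag is valid but the data is invalid;
    a snooped transfer of a valid value for the same address refreshes
    the data and moves the line to R, from which this cache can supply
    the data in response to a request."""

    def __init__(self, tag):
        self.tag = tag
        self.data = None
        self.state = "H"  # tag valid, data invalid

    def snoop_transfer(self, addr, data):
        """Snoop a data transfer initiated by another cache on the interconnect."""
        if self.state == "H" and addr == self.tag:
            # capture the valid value without ever issuing a request
            self.data = data
            self.state = "R"

    def can_intervene(self):
        """In R, this cache may source the data for a snooped request."""
        return self.state == "R"
```

The point of the H state is that the cache re-acquires a current copy passively, from traffic it would have snooped anyway, rather than by issuing its own read.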