    • 61. Invention grant
    • Demand-based larx-reserve protocol for SMP system buses
    • Publication number: US5895495A
    • Grant date: 1999-04-20
    • Application number: US815647
    • Filing date: 1997-03-13
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Derek Edward Williams
    • IPC: G06F9/52; G06F12/08; G06F15/16; G06F15/177; G06F13/16
    • CPC: G06F12/0811
    • A method of handling load-and-reserve instructions in a multi-processor computer system wherein the processing units have multi-level caches. Symmetric multi-processor (SMP) computers use cache coherency to ensure the same values for a given memory address are provided to all processors in the system. Load-and-reserve instructions used, for example, in quick read-and-write operations, can become unnecessarily complicated. The present invention provides a method of accessing values in the computer's memory by loading the value from the memory device into all of said caches, and sending a reserve bus operation from a higher-level cache to the next lower-level cache only when the value is to be cast out of the higher cache, and thereafter casting out the value from the higher cache after sending the reserve bus operation. This procedure is preferably used for all caches in a multi-level cache architecture, i.e., when the value is to be cast out of any given cache, a reserve bus operation is sent from the given cache to the next lower-level cache (i.e., the adjacent cache which lies closer to the bus), but the reserve bus operation is not sent to all lower caches. Any attempt by any other processing unit in the computer system to write to an address of the memory device which is associated with the value will then be forwarded to all higher-level caches. The marking of the block as reserved is removed in response to any such attempt to write to the address.
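A minimal C sketch of the castout-time reservation hand-off this abstract describes, assuming a simple linked chain of cache levels; all type and function names are illustrative, not taken from the patent:

    /* Reservation moves down one level only when the reserved line is cast out. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    typedef struct cache_level {
        int                 level;          /* 1 = closest to the CPU            */
        bool                has_reservation;
        uint64_t            reserved_addr;
        struct cache_level *next_lower;     /* adjacent cache closer to the bus  */
    } cache_level_t;

    /* larx: the loaded line lands in every level, but the reservation is only
     * marked in the highest-level cache that the CPU reads from.               */
    static void larx(cache_level_t *l1, uint64_t addr)
    {
        l1->has_reservation = true;
        l1->reserved_addr   = addr;
    }

    /* Castout of a reserved line: send one "reserve bus operation" to the next
     * lower level only (not to every lower cache), then evict the line.        */
    static void castout(cache_level_t *c, uint64_t addr)
    {
        if (c->has_reservation && c->reserved_addr == addr && c->next_lower) {
            c->next_lower->has_reservation = true;      /* reserve bus op        */
            c->next_lower->reserved_addr   = addr;
            c->has_reservation             = false;
        }
        /* ... the data castout itself would happen here ... */
    }

    /* Snooped store from another processor: walk the hierarchy and clear the
     * reservation wherever it currently resides, so a later stcx. fails.       */
    static void snoop_store(cache_level_t *c, uint64_t addr)
    {
        for (; c != NULL; c = c->next_lower)
            if (c->has_reservation && c->reserved_addr == addr)
                c->has_reservation = false;
    }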
    • 62. Invention grant
    • Cache coherency protocol with tagged intervention of modified values
    • Publication number: US06701416B1
    • Grant date: 2004-03-02
    • Application number: US09024620
    • Filing date: 1998-02-17
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831; G06F12/0811
    • A cache coherency protocol uses a “Tagged” coherency state to track responsibility for writing a modified value back to system memory, allowing intervention of the value without immediately writing it back to system memory, thus increasing memory bandwidth. The Tagged state can migrate across the caches (horizontally) when assigned to a cache line that has most recently loaded the modified value. Historical states relating to the Tagged state may further be used. The invention may also be applied to a multi-processor computer system having clustered processing units, such that the Tagged state can be applied to one of the cache lines in each group of caches that support separate processing unit clusters. Priorities are assigned to different cache states, including the Tagged state, for responding to a request to access a corresponding memory block. Any tagged intervention response can be forwarded only to selected caches that could be affected by the intervention response, using cross-bars. The Tagged protocol can be combined with existing and new cache coherency protocols. The invention further contemplates independent optimization of cache operations using the Tagged state.
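A minimal C sketch of the Tagged-intervention idea in this abstract: the requester that receives the modified value inherits the T state and the deferred write-back duty. State and function names beyond T itself are illustrative assumptions:

    #include <stdio.h>

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED, TAGGED } coh_state_t;

    typedef struct {
        coh_state_t state;
        int         data;
    } cache_line_t;

    /* A read snooped by a cache holding the line MODIFIED or TAGGED: supply the
     * data by intervention and hand the write-back responsibility (T) to the
     * requester; the supplier keeps only a SHARED copy, memory stays stale.    */
    static void tagged_intervention(cache_line_t *supplier, cache_line_t *requester)
    {
        if (supplier->state == MODIFIED || supplier->state == TAGGED) {
            requester->data  = supplier->data;   /* cache-to-cache transfer      */
            requester->state = TAGGED;           /* T migrates to the new owner  */
            supplier->state  = SHARED;           /* no memory write-back yet     */
        }
    }

    /* Only the single cache holding T (or M) ever writes the value back,
     * e.g. when the line is finally evicted.                                   */
    static void evict(cache_line_t *line, int *memory_word)
    {
        if (line->state == TAGGED || line->state == MODIFIED)
            *memory_word = line->data;           /* deferred write-back          */
        line->state = INVALID;
    }

    int main(void)
    {
        int memory_word = 0;
        cache_line_t a = { MODIFIED, 42 }, b = { INVALID, 0 };

        tagged_intervention(&a, &b);   /* b now holds T, memory still stale      */
        evict(&b, &memory_word);       /* write-back happens once, from b        */
        printf("memory_word = %d\n", memory_word);
        return 0;
    }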
    • 63. Invention grant
    • Incremental tag build for hierarchical memory architecture
    • Publication number: US06587926B2
    • Grant date: 2003-07-01
    • Application number: US09903729
    • Filing date: 2001-07-12
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F3/0601; G06F2003/0697
    • A method and system for managing a data access transaction within a hierarchical data storage system. In accordance with the method of the present invention, a data access request is delivered from a source device to multiple data storage devices within the hierarchical data storage system. The data access request includes a source path tag and a target address. At least one device identification tag is added to the source path tag, wherein the at least one device identification tag uniquely identifies a data storage device within each level of the hierarchical data storage system traversed by the data access request such that the data access transaction can be processed in accordance with source path information that is incrementally encoded within the data access request as the data access request traverses the hierarchical data storage system.
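A minimal C sketch of the incremental source-path tag build described above; the tag width, field names, and device IDs are illustrative assumptions:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_LEVELS 4

    typedef struct {
        uint64_t target_addr;
        uint8_t  path_tag[MAX_LEVELS];   /* one device ID per traversed level   */
        int      path_len;
    } access_request_t;

    /* Called by every storage device the request traverses on its way down:
     * append this device's identifier to the source path tag.                  */
    static void append_device_id(access_request_t *req, uint8_t device_id)
    {
        if (req->path_len < MAX_LEVELS)
            req->path_tag[req->path_len++] = device_id;
    }

    int main(void)
    {
        access_request_t req = { .target_addr = 0x1000, .path_len = 0 };

        /* The request is tagged incrementally as it descends, e.g.
         * L1 controller -> L2 controller -> memory controller.                 */
        append_device_id(&req, 0x01);
        append_device_id(&req, 0x07);
        append_device_id(&req, 0x2A);

        printf("addr=0x%llx path=", (unsigned long long)req.target_addr);
        for (int i = 0; i < req.path_len; i++)
            printf("%02X%s", req.path_tag[i], i + 1 < req.path_len ? "." : "\n");
        return 0;
    }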
    • 64. Invention grant
    • Elimination of vertical bus queueing within a hierarchical memory architecture
    • Publication number: US06587925B2
    • Grant date: 2003-07-01
    • Application number: US09903728
    • Filing date: 2001-07-12
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831; G06F12/0811
    • A method and system for processing a split data access transaction within a hierarchical data storage system. In accordance with the method of the present invention, a data access request is delivered from a source device onto an address bus that is shared by a plurality of data storage devices within the hierarchical data storage system, wherein the data access request includes a target address and a source path tag. The source path tag includes at least one device identification tag that uniquely identifies at least one data storage device within each level of the hierarchical data storage system traversed by the data access request. In response to a data access request hit at a given data storage device, a data access response is delivered onto a data bus, wherein the data access response includes the source path tag and the target address.
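A minimal C sketch of the split request/response flow described above, where the data-bus response simply echoes the source path tag and target address so it can be steered back hop by hop instead of being queued at every level; all names and sizes are illustrative assumptions:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define MAX_LEVELS 4

    typedef struct {
        uint64_t target_addr;
        uint8_t  path_tag[MAX_LEVELS];
        int      path_len;
    } access_request_t;

    typedef struct {
        uint64_t target_addr;
        uint8_t  path_tag[MAX_LEVELS];
        int      path_len;
        uint64_t data;
    } access_response_t;

    /* A storage device that detects a hit builds the data-bus response directly
     * from the address-bus request: same address, same path tag, plus the data. */
    static access_response_t respond_to_hit(const access_request_t *req,
                                            uint64_t data)
    {
        access_response_t rsp;
        rsp.target_addr = req->target_addr;
        rsp.path_len    = req->path_len;
        memcpy(rsp.path_tag, req->path_tag, sizeof rsp.path_tag);
        rsp.data        = data;
        return rsp;
    }

    /* Each level on the way back pops the last tag entry and forwards the
     * response to that device, rather than matching it against a queue of
     * outstanding requests.                                                     */
    static bool next_hop(access_response_t *rsp, uint8_t *device_id_out)
    {
        if (rsp->path_len == 0)
            return false;                 /* arrived back at the source device   */
        *device_id_out = rsp->path_tag[--rsp->path_len];
        return true;
    }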
    • 65. Invention grant
    • ECC mechanism for set associative cache array
    • Publication number: US06480975B1
    • Grant date: 2002-11-12
    • Application number: US09024617
    • Filing date: 1998-02-17
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: H02H3/05
    • CPC: G06F11/1064
    • A method of checking for errors in a set associative cache array, by comparing a requested value to values loaded in the cache blocks and determining, concurrently with this comparison, whether the cache blocks collectively contain at least one error (such as a soft error caused by stray radiation). Separate parity checks are performed on each cache block and, if a parity error occurs, an error correction code (ECC) is executed for the entire congruence class, i.e., only one set of ECC bits is used for the combined cache blocks forming the congruence class. The cache operation is retried after ECC execution. The present invention can be applied to a cache directory containing address tags, or to a cache entry array containing the actual instruction and data values. This novel method allows the ECC to perform double-bit error detection as well, but a smaller number of error checking bits is required as compared with the prior art, due to the provision of a single ECC field for the entire congruence class. This construction not only leads to smaller cache array sizes, but also to faster overall operation by avoiding unnecessary ECC circuit operations.
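A minimal C sketch of the lookup flow described above (per-way parity checked alongside the tag compare, one ECC field for the whole congruence class, retry after correction); the parity and ECC routines are placeholders and every name is an illustrative assumption:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    #define WAYS        4
    #define BLOCK_BYTES 32

    typedef struct {
        uint64_t tag[WAYS];
        uint8_t  data[WAYS][BLOCK_BYTES];
        uint8_t  parity[WAYS];        /* one parity bit per way                  */
        uint8_t  ecc[8];              /* one ECC field for the whole class       */
    } congruence_class_t;

    /* Even parity over a byte range (placeholder check).                        */
    static uint8_t even_parity(const uint8_t *p, size_t n)
    {
        uint8_t x = 0;
        while (n--) x ^= *p++;
        x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
        return x & 1u;
    }

    /* Placeholder: a real implementation would use the shared ECC field to fix
     * a single-bit (and detect a double-bit) error anywhere in the class.       */
    static void ecc_correct_class(congruence_class_t *cc) { (void)cc; }

    /* Lookup: tag compare and per-way parity checks run in the same pass; on any
     * parity error, correct the whole class once and retry the access.          */
    static bool lookup(congruence_class_t *cc, uint64_t tag, uint8_t **data_out)
    {
        for (int attempt = 0; attempt < 2; attempt++) {
            bool parity_error = false;
            int  hit_way = -1;

            for (int w = 0; w < WAYS; w++) {
                if (even_parity(cc->data[w], BLOCK_BYTES) != cc->parity[w])
                    parity_error = true;
                if (cc->tag[w] == tag)
                    hit_way = w;
            }
            if (!parity_error) {
                if (hit_way < 0) return false;
                *data_out = cc->data[hit_way];
                return true;
            }
            ecc_correct_class(cc);        /* single ECC field, then retry         */
        }
        return false;
    }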
    • 66. Invention grant
    • Bus protocol and token manager for execution of global operations utilizing a single token with multiple operations with explicit release
    • Publication number: US06442629B1
    • Grant date: 2002-08-27
    • Application number: US09435924
    • Filing date: 1999-11-09
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • IPC: G06F13/38
    • CPC: G06F13/37
    • Serialization of global operations within a multiprocessor system is achieved utilizing a single token, requiring a bus master to acquire the token for completion of one or more global operations to be initiated by that bus master. A combined token and operation request, in which a token request and an operation request are transmitted in a single bus transaction, is employed once for a global operation, to initiate the global operation for the first time. A token manager determines whether the token is available and released and, if available but not released, whether the token is checked out to the bus master originating the combined token and operation request. If the token is available and released, or is available and was last checked out to the bus master originating the combined token and operation request, the token manager acknowledges the token portion of the combined request; otherwise the token manager retries the token portion of the combined request. Snoopers respond to the operation portion of the combined request depending on whether they are busy. If the bus master to which the token was last checked out issues a combined token and operation request with release, or a token request (only) with release followed by an operation request (only) with release, then a combined response acknowledging the combined token and operation request with release, or the operation request (only) with release, implies release of the token.
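A minimal C sketch of the token-manager decision described above; structure and function names are illustrative assumptions, not the patent's terminology:

    #include <stdbool.h>

    typedef enum { ACK, RETRY } token_response_t;

    typedef struct {
        bool checked_out;     /* token currently held                             */
        bool released;        /* holder has explicitly released it                */
        int  holder_id;       /* bus master the token was last checked out to     */
    } token_manager_t;

    /* Combined token-and-operation request: acknowledge the token portion if the
     * token is free (never checked out, or checked out and released), or if it is
     * still checked out to the same bus master that is asking; otherwise retry.  */
    static token_response_t combined_request(token_manager_t *tm, int master_id)
    {
        bool free_token  = !tm->checked_out || tm->released;
        bool same_master = tm->checked_out && !tm->released &&
                           tm->holder_id == master_id;

        if (free_token || same_master) {
            tm->checked_out = true;
            tm->released    = false;
            tm->holder_id   = master_id;
            return ACK;               /* token portion of the combined request acks */
        }
        return RETRY;                 /* token portion is retried                   */
    }

    /* A request flagged "with release" gives the token back once the combined
     * response acknowledges the holder's request.                                  */
    static void release_on_ack(token_manager_t *tm, int master_id)
    {
        if (tm->checked_out && tm->holder_id == master_id)
            tm->released = true;
    }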
    • 67. Invention grant
    • Cache coherency protocol having an imprecise hovering (H) state for instructions and data
    • Publication number: US06415358B1
    • Grant date: 2002-07-02
    • Application number: US09024322
    • Filing date: 1998-02-17
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of processors that are each associated with a respective one of a plurality of caches. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the data item. A coherency indicator in the first cache is set to a first state that indicates that the data item is valid. In response to another of the caches indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator in the first cache is updated to a second state that indicates that the address tag is valid and that the first data item in the first cache is invalid. Thereafter, in response to detection of a data transfer associated with the address indicated by the address tag while the coherency indicator is set to the second state, the first cache is refreshed by replacing the first data item with a second data item in the data transfer and updating the coherency indicator to a third state that indicates that the second data item is valid.
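A minimal C sketch of the hovering-state transitions described above; apart from H itself, the state and function names are illustrative assumptions:

    #include <stdint.h>

    typedef enum { I_STATE, S_STATE, H_STATE } coh_state_t;

    typedef struct {
        uint64_t    addr_tag;
        int         data;
        coh_state_t state;
    } cache_line_t;

    /* Another cache announces an intent to store to this address: keep the
     * address tag valid, mark only the data invalid (drop to H).               */
    static void snoop_store_intent(cache_line_t *line, uint64_t addr)
    {
        if (line->state == S_STATE && line->addr_tag == addr)
            line->state = H_STATE;
    }

    /* A data transfer for the hovered address is later seen on the bus: refresh
     * the line from the transfer and become valid again without issuing a read. */
    static void snoop_data_transfer(cache_line_t *line, uint64_t addr, int data)
    {
        if (line->state == H_STATE && line->addr_tag == addr) {
            line->data  = data;
            line->state = S_STATE;
        }
    }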
    • 69. Invention grant
    • Cache coherency protocols with posted operations
    • Publication number: US06347361B1
    • Grant date: 2002-02-12
    • Application number: US09024587
    • Filing date: 1998-02-17
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • A method of avoiding deadlocks in cache coherency protocol for a multi-processor computer system, by loading a memory value into a plurality of cache blocks, assigning a first coherency state having a higher collision priority to only one of the cache blocks, and assigning one or more additional coherency states having lower collision priorities to all of the remaining cache blocks. Different system bus codes can be used to indicate the priority of conflicting requests (e.g., DClaim operations) to modify the memory value. The invention also allows folding or elimination of redundant DClaim operations, and can be applied in a global versus local manner within a multi-processor computer system having processing units grouped into at least two clusters.
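A minimal C sketch of the collision-priority idea described above, assuming the priority is encoded in the coherency state value of each shared copy; the state names and the folding rule shown are illustrative assumptions, not the patent's own protocol:

    typedef enum {
        ST_INVALID  = 0,
        ST_SHARED   = 1,   /* lower collision priority                            */
        ST_SHARED_P = 2    /* the one copy carrying the higher collision priority */
    } coh_state_t;

    typedef struct {
        coh_state_t state;
        int         dclaim_pending;   /* this cache has a DClaim of its own queued */
    } cache_line_t;

    /* Snooping a competing DClaim for the same line: a copy that outranks the
     * requester keeps its state (the requester gets retried); a lower-priority
     * copy yields, invalidates itself, and folds its now-redundant pending
     * DClaim instead of reissuing it.                                           */
    static void snoop_dclaim(cache_line_t *line, coh_state_t requester_state)
    {
        if (line->state == ST_INVALID)
            return;
        if (line->state > requester_state)
            return;                        /* we win the collision               */
        line->state          = ST_INVALID; /* the snooped DClaim wins            */
        line->dclaim_pending = 0;          /* redundant DClaim is eliminated     */
    }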