    • 41. Granted invention patent
    • Cache coherency protocol with efficient write-through aliasing
    • US6021468A
    • 2000-02-01
    • US992788
    • 1997-12-17
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F12/08
    • G06F12/0811; G06F12/0831
    • A method of maintaining cache coherency in a multi-processor computer system, which avoids unnecessary writing of values to lower level caches in response to write-through store operations. When a write-through store operation is executed by a processing unit, the modified value is stored in its first level (L1) cache, without storing the value in a second level (L2) cache (or other lower level caches), and a new coherency state is assigned to the lower level cache to indicate that the value is held in a shared state in the first level cache but is undefined in the lower level cache. When the value is written to system memory from a store queue, the lower level cache switches to the new coherency state upon snooping the broadcast from the store queue. This approach has the added benefit of avoiding the prior art read-modify-write process that is used to update the lower level cache.
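The write-through flow described for US6021468A can be sketched as a small state model. This is a minimal, hypothetical illustration of the abstract, not the patented implementation: the state name `L1_SHARED_UNDEFINED` and the transition to `INVALID` after the memory broadcast are assumptions made for clarity.

```python
from enum import Enum

class State(Enum):
    INVALID = "I"
    SHARED = "S"
    # Hypothetical name for the new coherency state in the abstract:
    # the value is shared in L1 but undefined at this lower level.
    L1_SHARED_UNDEFINED = "SU"

class Cache:
    def __init__(self):
        self.state = State.INVALID
        self.data = None

def write_through_store(l1, l2, store_queue, value):
    """On a write-through store, update only L1 and tag L2 with the
    transient state; the value is queued for system memory. This avoids
    the read-modify-write of L2 that the abstract calls prior art."""
    l1.data = value
    l1.state = State.SHARED
    l2.state = State.L1_SHARED_UNDEFINED
    store_queue.append(value)

def drain_store_queue(l2, store_queue, memory):
    """When the store queue broadcasts the value to system memory, L2
    snoops the broadcast and leaves the transient state. Moving to
    INVALID is one plausible choice, since L2's copy is undefined."""
    while store_queue:
        memory.append(store_queue.pop(0))
    if l2.state is State.L1_SHARED_UNDEFINED:
        l2.state = State.INVALID
```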
    • 42. Granted invention patent
    • Cache intervention from a cache line exclusively holding an unmodified value
    • US5963974A
    • 1999-10-05
    • US837518
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; John Michael Kaiser; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; John Michael Kaiser; Jerry Don Lewis
    • G06F15/16; G06F12/08; G06F12/00
    • G06F12/0815
    • A method of improving memory latency associated with a read-type operation in a multiprocessor computer system is disclosed. After a value (data or instruction) is loaded from system memory into a cache, the cache is marked as containing an exclusively held, unmodified copy of the value. When a requesting processing unit issues a message indicating that it desires to read the value, the cache transmits a response indicating that it can source the value. The response is transmitted in response to the cache snooping the message from an interconnect which is connected to the requesting processing unit. The response is detected by system logic and forwarded from the system logic to the requesting processing unit. The cache then sources the value to an interconnect which is connected to the requesting processing unit. The system memory detects the message and would normally source the value, but the response informs the memory device that the value is to be sourced by the cache instead. Since the cache latency can be much less than the memory latency, the read performance can be improved substantially with this new protocol.
    • 43. Granted invention patent
    • Method and system for front-end and back-end gathering of store instructions within a data-processing system
    • US5956503A
    • 1999-09-21
    • US834111
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F9/312; G06F9/38; G06F9/30
    • G06F9/30043; G06F9/3824
    • A method and system for front-end and back-end gathering of store instructions within a processor is disclosed. In accordance with the method and system of the present invention, the store queue includes a front-end queue and a back-end queue. In response to a determination that the data field of the first entry of the front-end queue is not filled completely, another determination is made as to whether or not an address for a store instruction in a subsequent second entry is equal to an address for the store instruction in the first entry plus a byte count in the first entry. If so, the store instruction in the subsequent second entry is collapsed into the store instruction in the first entry. Concurrently, in response to a determination that the data field of the last entry of the back-end queue is not filled completely with data, another determination is made as to whether or not an address for a store instruction in a subsequent entry is equal to an address for the store instruction in the last entry plus a byte count in said last entry. If so, the two store instructions are combined into one bus transfer.
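The gathering test described for US5956503A (the next store's address equals the previous entry's address plus its byte count, and the data field still has room) can be sketched as follows. The entry layout and the 16-byte data-field capacity are assumptions made for illustration, not details taken from the patent.

```python
class StoreEntry:
    """One store-queue entry: a starting address plus gathered data bytes."""
    def __init__(self, address, data):
        self.address = address
        self.data = bytearray(data)

    @property
    def byte_count(self):
        return len(self.data)

LINE_SIZE = 16  # assumed capacity of an entry's data field

def try_gather(entry, new_address, new_data):
    """Collapse a subsequent store into `entry` when it is contiguous:
    its address equals entry.address + entry.byte_count. Returns True
    if the store was gathered into the existing entry."""
    if entry.byte_count >= LINE_SIZE:
        return False  # data field already completely filled
    if new_address != entry.address + entry.byte_count:
        return False  # not contiguous with the gathered run
    if entry.byte_count + len(new_data) > LINE_SIZE:
        return False  # would overflow the data field
    entry.data.extend(new_data)
    return True
```

Two gathered stores then leave the queue as a single entry, i.e. one bus transfer instead of two.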
    • 44. Granted invention patent
    • Shared intervention protocol for SMP bus using caches, snooping, tags and prioritizing
    • US5946709A
    • 1999-08-31
    • US839479
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; John Michael Kaiser; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; John Michael Kaiser; Jerry Don Lewis
    • G06F15/16; G06F12/08; G06F12/00
    • G06F12/0831
    • A method of improving memory latency associated with a read-type operation in a multiprocessor computer system is disclosed. After a value (data or instruction) is loaded from system memory into a plurality of caches, one cache is identified as a specific cache which contains an unmodified copy of the value that was most recently read, and that cache is marked as containing the most recently read copy, while the remaining caches are marked as containing shared, unmodified copies of the value. When a requesting processing unit issues a message indicating that it desires to read the value, the specific cache transmits a response indicating that it can source the value. The response is transmitted in response to the cache snooping the message from an interconnect which is connected to the requesting processing unit. The response is detected by system logic and forwarded from the system logic to the requesting processing unit. The specific cache then sources the value to an interconnect which is connected to the requesting processing unit. The system memory detects the message and would normally source the value, but the response informs the memory device that the value is to be sourced by the cache instead. Since the cache latency can be much less than the memory latency, the read performance can be substantially improved with this new protocol.
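A minimal sketch of the shared-intervention idea in US5946709A: exactly one cache is tagged as holding the most recently read, unmodified copy, and only that cache responds to a snooped read, so memory stays out of the transfer. The state label `RECENT` and the simplified bus model are illustrative assumptions, not the patent's terminology.

```python
from enum import Enum

class S(Enum):
    INVALID = "I"
    SHARED = "S"
    RECENT = "R"  # hypothetical tag: unmodified copy read most recently

class SnoopCache:
    def __init__(self, name):
        self.name = name
        self.state = S.INVALID
        self.data = None

def load_from_memory(caches, loader, value):
    """Mark the loading cache as holding the most recently read copy;
    any previous holder of that mark demotes to plain shared."""
    for c in caches:
        if c.state is S.RECENT:
            c.state = S.SHARED
    loader.data = value
    loader.state = S.RECENT

def snoop_read(caches, memory_value):
    """On a snooped read request, only the most-recently-read cache
    intervenes; system memory sources the value only if no cache does."""
    for c in caches:
        if c.state is S.RECENT:
            return c.name, c.data  # cache-to-cache transfer, lower latency
    return "memory", memory_value
```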
    • 45. Granted invention patent
    • Method and system for speculatively sourcing cache memory data prior to upstream cache invalidation within a multiprocessor data-processing system
    • US5924118A
    • 1999-07-13
    • US839542
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F12/08; G06F13/00
    • G06F12/0833; G06F12/0811
    • A method and system for speculatively sourcing data among cache memories within a multiprocessor data-processing system is disclosed. In accordance with the method and system of the present invention, the data-processing system has multiple processing units, each of the processing units including at least one cache memory. In response to a request for data by a first processing unit within the data-processing system, an intervention response is issued from a second processing unit within the data-processing system that contains the requested data. The requested data is then sourced from a secondary cache memory within the second processing unit onto a system data bus concurrently with invalidating a copy of the requested data from a primary cache within the second processing unit. During this time, the second processing unit is also waiting for a combined response to return from all the processing units.
    • 47. Granted invention patent
    • Method and system for back-end gathering of store instructions within a data-processing system
    • US5894569A
    • 1999-04-13
    • US839480
    • 1997-04-14
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F9/312; G06F9/38; G06F7/00
    • G06F9/30043; G06F9/3824
    • A method and system for back-end gathering of store instructions within a processor is disclosed. In accordance with the method and system of the present invention, a store queue within a data-processing system includes a front-end queue and a back-end queue. Multiple entries are provided in the back-end queue, and each entry includes an address field, a byte-count field, and a data field. A determination is first made as to whether or not the data field of the last entry of the back-end queue is completely filled. In response to a determination that the data field of the last entry of the back-end queue is not completely filled, another determination is made as to whether or not an address for a store instruction in a subsequent entry is equal to the address for the store instruction in the last entry plus the byte count in the last entry. In response to a determination that the address for the store instruction in the subsequent entry is equal to the address for the store instruction in the last entry plus the byte count in the last entry, the two store instructions are combined into one bus transfer.
    • 48. Granted invention patent
    • Bus master and bus snooper for execution of global operations utilizing a single token for multiple operations with explicit release
    • US06516368B1
    • 2003-02-04
    • US09435928
    • 1999-11-09
    • Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • G06F13/14
    • G06F12/0831
    • In response to a need to initiate one or more global operations, a bus master within a multiprocessor system issues a combined token and operation request in a single bus transaction on a bus coupled to the bus master. The combined token and operation request solicits a single existing token required to complete the global operations within the multiprocessor system and identifies the first of the global operations to be processed with the token, if granted. Once a bus master is granted the token, no other bus master will be granted the token until the current token owner explicitly requests release. The current token owner repeats the combined token and operation request for each global operation which needs to be initiated and, on the last global operation, issues a combined request with an explicit release. Acknowledgement of the combined request with release implies release of the token for use by other bus masters.
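The combined token-and-operation request with explicit release described for US06516368B1 can be sketched as a toy bus model: a master that holds the single token keeps issuing combined requests, and the last one carries an explicit release. The `Bus` class and its retry convention (returning `None` when the token is held elsewhere) are illustrative assumptions, not the patented bus protocol.

```python
class Bus:
    """Minimal sketch of the single-token protocol for global operations."""
    def __init__(self):
        self.token_owner = None  # at most one master holds the token

    def combined_request(self, master, operation, release=False):
        """A combined token-and-operation request in one bus transaction.
        Grants the token if it is free or already held by `master`; the
        request must be retried (None) while another master holds it."""
        if self.token_owner not in (None, master):
            return None                   # token held elsewhere: retry later
        self.token_owner = master         # grant (or keep) the token
        result = f"{master}:{operation}"  # the global operation proceeds
        if release:                       # explicit release on the last op
            self.token_owner = None
        return result
```

Acknowledging a combined request with release frees the token for the next competing master, matching the abstract's "explicit release" semantics.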
    • 49. Granted invention patent
    • Bus snooper for SMP execution of global operations utilizing a single token with implied release
    • US06460100B1
    • 2002-10-01
    • US09435929
    • 1999-11-09
    • Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • Ravi Kumar Arimilli; John Steven Dodson; Jody B. Joyner; Jerry Don Lewis
    • G06F13/14
    • G06F13/37
    • Only a single snooper queue for global operations within a multiprocessor system is implemented within each bus snooper, controlled by a single token allowing completion of one operation. A bus snooper, upon detecting a combined token and operation request, begins speculatively processing the operation if the snooper is not already busy. The snooper then watches for a combined response acknowledging the combined request or a subsequent token request from the same processor, which indicates that the originating processor has been granted the sole token for completing global operations, before completing the operation. When processing an operation from a combined request and detecting an operation request (only) from a different processor, which indicates that another processor has been granted the token, the snooper suspends processing of the current operation and begins processing the new operation. If the snooper is busy when a combined request is received, the snooper retries the operation portion of the combined request and, upon detecting a subsequent operation request (only) for the operation, begins processing the operation at that time if not busy. Snoop logic for large multiprocessor systems is thus simplified, with conflict reduced to situations in which multiple processors are competing for the token.
    • 50. Granted invention patent
    • Method for alternate preferred time delivery of load data
    • US06389529B1
    • 2002-05-14
    • US09344059
    • 1999-06-25
    • Ravi Kumar Arimilli; Lakshminarayanan Baba Arimilli; John Steven Dodson; Jerry Don Lewis
    • Ravi Kumar Arimilli; Lakshminarayanan Baba Arimilli; John Steven Dodson; Jerry Don Lewis
    • G06F9/312
    • G06F9/3824; G06F9/3836; G06F9/3838; G06F9/384
    • A system for time-ordered execution of load instructions. More specifically, the system enables just-in-time delivery of data requested by a load instruction. The system consists of a processor, an L1 data cache with a corresponding L1 cache controller, and an instruction processor. The instruction processor manipulates a plurality of architected time-dependency fields of a load instruction to create a plurality of dependency fields. The dependency fields hold relative dependency values which are utilized to order the load instruction in a Relative Time-Ordered Queue (RTOQ) of the L1 cache controller. The load instruction is sent from the RTOQ to the L1 data cache at a particular time so that the requested data is loaded from the L1 data cache at the time specified by one of the dependency fields. The dependency fields are prioritized so that the cycle corresponding to the highest-priority available field is utilized.
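The Relative Time-Ordered Queue behavior described for US06389529B1 can be sketched as follows: each load carries prioritized relative delays, the first delay whose target cycle is free is chosen, and loads are released to the L1 data cache when their cycle arrives. The field layout and the busy-cycle bookkeeping are assumptions made for illustration.

```python
import heapq

class RTOQ:
    """Sketch of a Relative Time-Ordered Queue for just-in-time loads."""
    def __init__(self):
        self.heap = []  # (release_cycle, seq, load_id), ordered by cycle
        self.seq = 0    # tie-breaker preserving enqueue order

    def enqueue(self, load_id, now, dependency_fields, busy_cycles):
        """dependency_fields: relative delays in priority order; pick the
        first whose absolute cycle is not already taken."""
        for delay in dependency_fields:
            cycle = now + delay
            if cycle not in busy_cycles:
                busy_cycles.add(cycle)
                heapq.heappush(self.heap, (cycle, self.seq, load_id))
                self.seq += 1
                return cycle
        raise RuntimeError("no available cycle among dependency fields")

    def issue_ready(self, now):
        """Send to the L1 data cache every load whose cycle has arrived."""
        ready = []
        while self.heap and self.heap[0][0] <= now:
            ready.append(heapq.heappop(self.heap)[2])
        return ready
```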