    • 1. Granted invention patent
    • Title: Apparatus and method of handling race conditions in MESI-based multiprocessor system with private caches
    • Publication number: US5551005A
    • Publication date: 1996-08-27
    • Application number: US201854
    • Filing date: 1994-02-25
    • Inventors: Nitin V. Sarangdhar; Wen-Hann Wang; Matthew Fisch
    • IPC: G06F12/08
    • CPC: G06F12/0808; G06F12/0831
    • Abstract: In a computer system having a plurality of processors with internal caches, a method for handling race conditions arising when multiple processors simultaneously write to a particular cache line. Initially, a determination is made as to whether the cache line is in an exclusive, modified, invalid, or shared state. If the cache line is in either the exclusive or modified state, the cache line is written to and then set to the modified state. If the cache line is in the invalid state, a Bus-Read-Invalidate operation is performed. However, if the cache line is in the shared state and multiple processors initiate Bus-Write-Invalidate operations, the invalidation request belonging to the first processor is allowed to complete. Thereupon, the cache line is set to the exclusive state, data is updated, and the cache line is set to the modified state. The second processor receives a second cache line, updates this second cache line, and sets the second cache line to the modified state.
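The write handling described in this abstract amounts to a small per-state routine. The following is a minimal C sketch of that flow, assuming a 64-byte line and treating the bus transactions as stubs; the names (cache_line_t, bus_write_invalidate, bus_read_invalidate) are illustrative and not taken from the patent.

```c
#include <stdbool.h>

/* Hypothetical MESI line states; only the state terms come from the abstract. */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state_t;

typedef struct {
    mesi_state_t state;
    unsigned long tag;
    unsigned char data[64];          /* assumed 64-byte line size */
} cache_line_t;

/* Stubs standing in for the Bus-Write-Invalidate and Bus-Read-Invalidate
 * transactions; a real system would drive the shared bus here. */
static bool bus_write_invalidate(unsigned long addr) { (void)addr; return true; }
static void bus_read_invalidate(unsigned long addr, cache_line_t *line)
{ (void)addr; line->state = EXCLUSIVE; }

/* Local write to a line, following the state-by-state handling in the abstract. */
void cache_write(cache_line_t *line, unsigned long addr,
                 const unsigned char *bytes, int len, int off)
{
    switch (line->state) {
    case EXCLUSIVE:
    case MODIFIED:
        /* Line is owned locally: write it and mark it modified. */
        for (int i = 0; i < len; i++) line->data[off + i] = bytes[i];
        line->state = MODIFIED;
        break;
    case INVALID:
        /* Fetch the line with ownership, then write as in the owned case. */
        bus_read_invalidate(addr, line);
        for (int i = 0; i < len; i++) line->data[off + i] = bytes[i];
        line->state = MODIFIED;
        break;
    case SHARED:
        /* Race case: several caches may invalidate at once.  Only the
         * winner keeps its copy; a loser re-fetches a fresh line. */
        if (bus_write_invalidate(addr))
            line->state = EXCLUSIVE;          /* our invalidation completed first */
        else
            bus_read_invalidate(addr, line);  /* our copy was invalidated; reload */
        for (int i = 0; i < len; i++) line->data[off + i] = bytes[i];
        line->state = MODIFIED;
        break;
    }
}
```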
    • 2. Granted invention patent
    • Title: Method and apparatus for cache memory replacement line identification
    • Publication number: US5809524A
    • Publication date: 1998-09-15
    • Application number: US822044
    • Filing date: 1997-03-24
    • Inventors: Gurbir Singh; Wen-Hann Wang; Michael W. Rhodehamel; John M. Bauer; Nitin V. Sarangdhar
    • IPC: G06F12/08; G06F12/12
    • CPC: G06F12/123; G06F12/0831
    • Abstract: A method and apparatus for cache memory replacement line identification have a cache interface which provides a communication interface between a cache memory and a controller for the cache memory. The interface includes an address bus, a data bus, and a status bus. The address bus transfers requested addresses from the controller to the cache memory. The data bus transfers data associated with requested addresses from the controller to the cache memory, and also transfers replacement line addresses from the cache memory to the controller. The status bus transfers status information associated with the requested addresses from the cache memory to the controller, indicating whether the requested addresses are contained in the cache memory. In one embodiment, the data bus also transfers cache line data associated with a requested address from the cache memory to the controller when the requested address hits the cache memory.
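As a rough illustration of the interface just described, the C sketch below models the address, data, and status paths as fields of one structure and reuses the data path to return either hit data or the replacement line address on a miss; the cache geometry and every name here are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the controller-to-cache interface: separate
 * address, data, and status paths. */
typedef struct {
    uint32_t addr_bus;     /* requested address, controller -> cache        */
    uint64_t data_bus;     /* hit data, or victim line address on a miss    */
    bool     status_hit;   /* status path: does the cache hold the line?    */
} cache_interface_t;

#define NUM_LINES 256                  /* assumed geometry, 64-byte lines   */
static uint32_t tags[NUM_LINES];
static bool     valid[NUM_LINES];

/* One request/response cycle over the modeled interface. */
void cache_lookup(cache_interface_t *bus)
{
    uint32_t line  = bus->addr_bus >> 6;
    uint32_t index = line % NUM_LINES;

    if (valid[index] && tags[index] == line) {
        bus->status_hit = true;
        /* On a hit the data path carries line data back to the controller
         * (the tag stands in for it here). */
        bus->data_bus = tags[index];
    } else {
        bus->status_hit = false;
        /* On a miss the data path instead carries the address of the line
         * chosen for replacement, identifying the victim to the controller.
         * A real design would also report whether the victim is valid. */
        bus->data_bus = (uint64_t)tags[index] << 6;
        tags[index] = line;
        valid[index] = true;
    }
}
```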
    • 3. 发明授权
    • Method and apparatus for transferring information between a processor
and a memory system
    • 用于在处理器和存储器系统之间传送信息的方法和装置
    • US5701503A
    • 1997-12-23
    • US360331
    • 1994-12-21
    • Gurbir SinghWen-Hann WangMichael W. RhodehamelJohn M. BauerNitin V. Sarangdhar
    • Gurbir SinghWen-Hann WangMichael W. RhodehamelJohn M. BauerNitin V. Sarangdhar
    • G06F12/08G06F12/00
    • G06F12/0897G06F12/0831
    • A method and apparatus for transferring information between a processor and a memory system utilizing a chunk write buffer, where read and write requests to the L2 cache memory are controlled by the processor. The cache line associated with each such request is larger than the interface coupling the L2 cache memory and the processor. Read requests are returned from the L2 cache memory to the processor in burst fashion. Write requests are transferred from the processor to the L2 cache memory during clock cycles in which the processor does not require the interface for a read request. Write requests need not be transferred in burst fashion; rather, a portion of the write request corresponding to the size of the interface, referred to as a chunk, is transferred from the processor to the L2 cache memory and stored temporarily in the chunk write buffer. When the processor has transferred the entire cache line to the L2 cache memory, the processor signals the L2 cache memory to transfer the contents of the chunk write buffer into the data array of the cache memory.
    • 一种利用块写入缓冲器在处理器和存储器系统之间传送信息的方法和装置,其中对L2高速缓冲存储器的读取和写入请求由处理器控制。 与每个这样的请求相关联的高速缓存行大于耦合L2高速缓冲存储器和处理器的接口。 读取请求以突发方式从L2高速缓冲存储器返回到处理器。 在处理器不需要读取请求的接口的时钟周期期间,写入请求从处理器传送到L2高速缓冲存储器。 写请求不需要以突发方式传输; 相反,与被称为块的接口的大小相对应的写入请求的一部分从处理器传送到L2高速缓冲存储器,并临时存储在块写入缓冲器中。 当处理器将整个高速缓存行传输到L2高速缓冲存储器时,处理器发信号通知L2缓存存储器,以将块写入缓冲器的内容传送到高速缓冲存储器的数据阵列中。
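A minimal C sketch of the chunk write buffer idea follows, assuming a 32-byte L2 line moved over an 8-byte interface; the function names and sizes are illustrative only.

```c
#include <stdint.h>
#include <string.h>

/* Assumed sizes: a 32-byte L2 line transferred over an 8-byte interface,
 * i.e. four "chunks" per line. */
#define CHUNK_BYTES     8
#define LINE_BYTES      32
#define CHUNKS_PER_LINE (LINE_BYTES / CHUNK_BYTES)

static uint8_t chunk_write_buffer[LINE_BYTES];   /* staging buffer in the L2  */
static uint8_t data_array[1024][LINE_BYTES];     /* simplified L2 data array  */

/* Processor pushes one chunk whenever the interface is not needed for a
 * read; chunks need not arrive back to back (no burst required). */
void l2_accept_chunk(int chunk_index, const uint8_t *chunk)
{
    memcpy(&chunk_write_buffer[chunk_index * CHUNK_BYTES], chunk, CHUNK_BYTES);
}

/* Once the whole line has been transferred, the processor signals the L2
 * to commit the buffered line into the data array in a single step. */
void l2_commit_line(int set_index)
{
    memcpy(data_array[set_index], chunk_write_buffer, LINE_BYTES);
}
```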
    • 4. Granted invention patent
    • Title: Apparatus and method for caching lock conditions in a multi-processor system
    • Publication number: US6006299A
    • Publication date: 1999-12-21
    • Application number: US204592
    • Filing date: 1994-03-01
    • Inventors: Wen-Hann Wang; Konrad K. Lai; Gurbir Singh; Mandar S. Joshi; Nitin V. Sarangdhar; Matthew A. Fisch
    • IPC: G06F9/46; G06F13/08
    • CPC: G06F9/52
    • Abstract: In a computer system, an apparatus for handling lock conditions wherein a first instruction executed by a first processor processes data that is common to a second processor while the second processor is locked from simultaneously executing a second instruction that also processes this same data. A lock bit is set when the first processor begins execution of the first instruction. Thereupon, the second processor is prevented from executing its instruction until the first processor has completed its processing of the shared data. Hence, the second processor queues its request in a buffer. The lock bit is cleared after the first processor has completed execution of its instruction. The first processor then checks the buffer for any outstanding requests. In response to the second processor's queued request, the first processor transmits a signal to the second processor indicating that the data is now not locked.
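The lock handling above can be pictured as a lock bit plus a small buffer of queued requests. The C sketch below is an illustrative model under that reading; the names and the queue depth are invented for the example.

```c
#include <stdbool.h>

#define QUEUE_DEPTH 4

typedef struct {
    bool lock_bit;               /* set while a locked instruction is running */
    int  pending[QUEUE_DEPTH];   /* ids of requesters waiting for the data    */
    int  pending_count;
} lock_state_t;

/* Requester side: returns true if the lock was acquired; otherwise the
 * request is buffered until the current owner releases the lock. */
bool lock_request(lock_state_t *l, int requester_id)
{
    if (!l->lock_bit) {
        l->lock_bit = true;      /* begin the locked operation */
        return true;
    }
    if (l->pending_count < QUEUE_DEPTH)
        l->pending[l->pending_count++] = requester_id;
    return false;
}

/* Owner side: clear the lock bit on completion and signal each queued
 * requester that the data is no longer locked. */
void lock_release(lock_state_t *l, void (*notify)(int requester_id))
{
    l->lock_bit = false;
    for (int i = 0; i < l->pending_count; i++)
        notify(l->pending[i]);
    l->pending_count = 0;
}
```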
    • 5. Granted invention patent
    • Title: Apparatus for maintaining multilevel cache hierarchy coherency in a multiprocessor computer system
    • Publication number: US5715428A
    • Publication date: 1998-02-03
    • Application number: US639719
    • Filing date: 1996-04-29
    • Inventors: Wen-Hann Wang; Konrad K. Lai; Gurbir Singh; Michael W. Rhodehamel; Nitin V. Sarangdhar; John M. Bauer; Mandar S. Joshi; Ashwani K. Gupta
    • IPC: G06F12/08; G06F13/00
    • CPC: G06F12/0831; G06F12/0811
    • Abstract: A computer system comprising a plurality of caching agents with a cache hierarchy, the caching agents sharing memory across a system bus and issuing memory access requests in accordance with a protocol wherein a line of a cache has a present state comprising one of a plurality of line states. The plurality of line states includes a modified (M) state, wherein a line of a first caching agent in M state has data which is more recent than any other copy in the system; an exclusive (E) state, wherein a line in E state in a first caching agent is the only one of the agents in the system which has a copy of the data in a line of the cache, the first caching agent modifying the data in the cache line independent of other said agents coupled to the system bus; a shared (S) state, wherein a line in S state indicates that more than one of the agents has a copy of the data in the line; and an invalid (I) state indicating that the line does not exist in the cache. A read or a write to a line in I state results in a cache miss. The present invention associates states with lines and defines rules governing state transitions. State transitions depend on both processor generated activities and activities by other bus agents, including other processors. Data consistency is guaranteed in systems having multiple levels of cache and shared memory and/or multiple active agents, such that no agent ever reads stale data and actions are serialized as needed.
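To make the "activities by other bus agents" half of these transitions concrete, here is a hypothetical C sketch of how a line in one of the four states (M, E, S, I) might respond to snooped bus activity; it is a simplification, and the names are not from the patent.

```c
/* Illustrative snoop-side state transitions for the M/E/S/I line states. */
typedef enum { I_STATE, S_STATE, E_STATE, M_STATE } line_state_t;

typedef enum { SNOOP_READ, SNOOP_WRITE_INVALIDATE } snoop_op_t;

line_state_t snoop_transition(line_state_t current, snoop_op_t op,
                              int *must_write_back /* out: flush dirty data */)
{
    /* Only an M-state line holds data newer than any other copy, so only
     * it needs to be written back when another agent touches the line. */
    *must_write_back = (current == M_STATE);

    if (op == SNOOP_WRITE_INVALIDATE)
        return I_STATE;            /* another agent is taking ownership      */

    /* Another agent is reading: a modified or exclusive copy drops to
     * shared, since more than one cache may now hold the line. */
    if (current == M_STATE || current == E_STATE)
        return S_STATE;
    return current;                /* S stays S; I stays I (miss elsewhere)  */
}
```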
    • 6. Invention patent application
    • Title: Scalable distributed memory and I/O multiprocessor systems and associated methods
    • Publication number: US20070106833A1
    • Publication date: 2007-05-10
    • Application number: US11422542
    • Filing date: 2006-06-06
    • Inventors: Linda Rankin; Paul Pierce; Gregory Dermer; Wen-Hann Wang; Kai Cheng; Richard Hofsheier; Nitin Borkar
    • IPC: G06F13/00
    • CPC: G06F13/4022; G06F13/4027
    • Abstract: A multiprocessor system comprises at least one processing module, at least one I/O module, and an interconnect network to connect the at least one processing module with the at least one input/output module. In an example embodiment, the interconnect network comprises at least two bridges to send and receive transactions between the input/output modules and the processing module. The interconnect network further comprises at least two crossbar switches to route the transactions over a high bandwidth switch connection. Using embodiments of the interconnect network allows high bandwidth communication between processing modules and I/O modules. Standard processing module hardware can be used with the interconnect network without modifying the BIOS or the operating system. Furthermore, using the interconnect network of embodiments of the present invention is non-invasive to the processor motherboard. The processor memory bus, clock, and reset logic all remain intact.
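A toy C model of the routing idea in this family of filings (see also entries 9 and 10): transactions carry a destination module id, and a crossbar switch forwards each one to the port attached to that module. The structure names and the routing rule are assumptions made for the sketch.

```c
#include <stdint.h>
#include <stdio.h>

/* A transaction exchanged between processing modules and I/O modules. */
typedef struct {
    uint8_t  dest_module;   /* target processing or I/O module id */
    uint32_t addr;
    uint64_t payload;
} transaction_t;

#define NUM_PORTS 4
typedef struct {
    int port_to_module[NUM_PORTS];   /* which module hangs off each port */
} crossbar_t;

/* Route a transaction to the output port whose attached module matches
 * the destination; returns -1 if no port on this switch reaches it. */
int crossbar_route(const crossbar_t *xbar, const transaction_t *t)
{
    for (int port = 0; port < NUM_PORTS; port++)
        if (xbar->port_to_module[port] == t->dest_module)
            return port;
    return -1;
}

int main(void)
{
    crossbar_t xbar = { .port_to_module = { 0, 1, 2, 3 } };
    transaction_t t = { .dest_module = 2, .addr = 0x1000, .payload = 42 };
    printf("route to port %d\n", crossbar_route(&xbar, &t));
    return 0;
}
```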
    • 7. Granted invention patent
    • Title: Cache line pre-load and pre-own based on cache coherence speculation
    • Publication number: US07076613B2
    • Publication date: 2006-07-11
    • Application number: US10761995
    • Filing date: 2004-01-21
    • Inventors: Jih-Kwon Peir; Steve Y. Zhang; Scott H. Robinson; Konrad Lai; Wen-Hann Wang
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in an invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the invalidation history table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
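The invalidation history table can be sketched as a small address-indexed table, as below: invalidations are recorded, and snooped bus addresses are checked against it to drive the speculative pre-load decision. All names and sizes here are illustrative, not from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical invalidation history table (IHT): remembers line addresses
 * invalidated from the local cache, clean or dirty. */
#define IHT_ENTRIES 64

static uint64_t iht[IHT_ENTRIES];
static bool     iht_valid[IHT_ENTRIES];

/* Record a line address at the moment it is invalidated locally. */
void iht_record_invalidation(uint64_t line_addr)
{
    int slot = (int)(line_addr % IHT_ENTRIES);
    iht[slot] = line_addr;
    iht_valid[slot] = true;
}

/* Called for each line address observed on the bus; returns true when the
 * line was previously invalidated here, i.e. it is a candidate to reload
 * (pre-load / pre-own) speculatively into the local cache. */
bool iht_should_preload(uint64_t line_addr)
{
    int slot = (int)(line_addr % IHT_ENTRIES);
    return iht_valid[slot] && iht[slot] == line_addr;
}
```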
    • 8. Granted invention patent
    • Title: Method and apparatus for combining a direct-mapped cache and a multiple-way cache in a cache memory
    • Publication number: US5548742A
    • Publication date: 1996-08-20
    • Application number: US288923
    • Filing date: 1994-08-11
    • Inventors: Wen-Hann Wang; Konrad K. Lai
    • IPC: G06F12/08; G06F13/14
    • CPC: G06F12/0864
    • Abstract: A two-way set-associative cache memory includes both a set array and a data array in one embodiment. The data array comprises multiple elements, each of which can contain a cache line. The set array comprises multiple sets, with each set in the set array corresponding to an element in the data array. Each set in the set array contains information which indicates whether an address received by the cache memory matches the cache line contained in its corresponding element of the data array. The information stored in each set includes a tag and a state. The tag contains a reference to one of the cache lines in the data array. If the tag of a particular set matches the address received by the cache memory, then the cache line associated with that particular set is the requested cache line. The state of a particular set indicates the number of cache lines mapped into that particular set.
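As a deliberately simplified C sketch of the set array / data array split, each set below holds a tag plus a count of mapped lines (the "state"); it does not capture the combined direct-mapped/multiple-way behavior of the actual design, only the tag-and-state lookup the abstract describes. All names and sizes are assumptions.

```c
#include <stdint.h>

/* Hypothetical set-array entry: a tag referencing one data-array line,
 * and a state recording how many lines map into this set. */
#define NUM_SETS   128
#define LINE_SHIFT 6                /* assumed 64-byte lines */

typedef struct {
    uint32_t tag;                   /* reference to a line in the data array   */
    uint8_t  mapped_lines;          /* "state": number of lines mapped to set  */
} set_entry_t;

static set_entry_t set_array[NUM_SETS];
static uint8_t     data_array[NUM_SETS][64];

/* Returns a pointer to the requested line's data on a hit, or 0 on a miss. */
uint8_t *lookup(uint32_t addr)
{
    uint32_t line  = addr >> LINE_SHIFT;
    uint32_t index = line % NUM_SETS;
    set_entry_t *s = &set_array[index];

    if (s->mapped_lines != 0 && s->tag == line)
        return data_array[index];   /* tag match: this set's line is the one */
    return 0;                        /* miss */
}
```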
    • 9. Granted invention patent
    • Title: Scalable distributed memory and I/O multiprocessor system
    • Publication number: US08745306B2
    • Publication date: 2014-06-03
    • Application number: US13590936
    • Filing date: 2012-08-21
    • Inventors: Linda J. Rankin; Paul R. Pierce; Gregory E. Dermer; Wen-Hann Wang; Kai Cheng; Richard H. Hofsheier; Nitin Y. Borkar
    • IPC: G06F13/00
    • CPC: G06F13/4022; G06F13/4027
    • Abstract: A multiprocessor system comprises at least one processing module, at least one I/O module, and an interconnect network to connect the at least one processing module with the at least one input/output module. In an example embodiment, the interconnect network comprises at least two bridges to send and receive transactions between the input/output modules and the processing module. The interconnect network further comprises at least two crossbar switches to route the transactions over a high bandwidth switch connection. Using embodiments of the interconnect network allows high bandwidth communication between processing modules and I/O modules. Standard processing module hardware can be used with the interconnect network without modifying the BIOS or the operating system. Furthermore, using the interconnect network of embodiments of the present invention is non-invasive to the processor motherboard. The processor memory bus, clock, and reset logic all remain intact.
    • 10. Granted invention patent
    • Title: Scalable distributed memory and I/O multiprocessor system
    • Publication number: US07058750B1
    • Publication date: 2006-06-06
    • Application number: US09569100
    • Filing date: 2000-05-10
    • Inventors: Linda J. Rankin; Paul R. Pierce; Gregory E. Dermer; Wen-Hann Wang; Kai Cheng; Richard H. Hofsheier; Nitin Y. Borkar
    • IPC: G06F13/00
    • CPC: G06F13/4022; G06F13/4027
    • Abstract: A multiprocessor system comprises at least one processing module, at least one I/O module, and an interconnect network to connect the at least one processing module with the at least one input/output module. In an example embodiment, the interconnect network comprises at least two bridges to send and receive transactions between the input/output modules and the processing module. The interconnect network further comprises at least two crossbar switches to route the transactions over a high bandwidth switch connection. Using embodiments of the interconnect network allows high bandwidth communication between processing modules and I/O modules. Standard processing module hardware can be used with the interconnect network without modifying the BIOS or the operating system. Furthermore, using the interconnect network of embodiments of the present invention is non-invasive to the processor motherboard. The processor memory bus, clock, and reset logic all remain intact.