    • 1. Invention grant
    • High performance symmetric multiprocessing systems via super-coherent data mechanisms
    • Publication No.: US06785774B2
    • Grant date: 2004-08-31
    • Application No.: US09978362
    • Filing date: 2001-10-16
    • Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams
    • IPC: G06F 12/00
    • CPC: G06F 12/0831
    • Abstract: A multiprocessor data processing system comprising a plurality of processing units; a plurality of caches, each affiliated with one of the processing units; and processing logic that, responsive to receipt of a first system bus response to a coherency operation, causes the requesting processor to execute operations utilizing super-coherent data. The data processing system further includes logic that eventually returns to coherent operations with the other processing units upon occurrence of a predetermined condition. The coherency protocol of the data processing system includes a first coherency state indicating that modification of data within a shared cache line of a second cache of a second processor has been snooped on a system bus of the data processing system. When the cache line is in the first coherency state, subsequent requests for the cache line are issued as a Z1 read on the system bus, and one of two responses is received. If the response to the Z1 read indicates that the first processor should utilize the local data currently available within the cache line, the first coherency state is changed to a second coherency state, which indicates to the first processor that subsequent requests for the cache line should utilize the data within the local cache and not be issued to the system interconnect. Coherency state transitions to the second coherency state are completed via the coherency protocol of the data processing system. Super-coherent data is provided to the processor from the cache line of the local cache whenever the second coherency state is set for the cache line and a request is received.
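The two-state transition described in the abstract above can be sketched as a small state machine. This is a minimal illustrative model, not the patent's implementation; the state names (`REMOTE_MODIFIED`, `SUPER_COHERENT`) and return strings are invented for clarity:

```python
from enum import Enum, auto

class State(Enum):
    SHARED = auto()           # ordinary shared copy
    REMOTE_MODIFIED = auto()  # first state: a remote modification was snooped
    SUPER_COHERENT = auto()   # second state: keep serving the local (stale) data

class CacheLine:
    def __init__(self):
        self.state = State.SHARED

    def snoop_remote_modify(self):
        # Another processor modified its copy of this shared line.
        if self.state is State.SHARED:
            self.state = State.REMOTE_MODIFIED

    def load(self, bus_says_use_local):
        # In REMOTE_MODIFIED the request goes out as a Z1 read; the bus
        # response decides whether we transition to SUPER_COHERENT.
        if self.state is State.REMOTE_MODIFIED:
            if bus_says_use_local:
                self.state = State.SUPER_COHERENT
            return "z1_read"
        if self.state is State.SUPER_COHERENT:
            return "local_data"   # served locally, no bus traffic
        return "normal_read"
```

Once the line reaches `SUPER_COHERENT`, every subsequent load is satisfied from the local cache without touching the interconnect, which is the performance win the abstract claims.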
    • 2. Invention grant
    • Symmetric multiprocessor systems with an independent super-coherent cache directory
    • Publication No.: US06779086B2
    • Grant date: 2004-08-17
    • Application No.: US09978363
    • Filing date: 2001-10-16
    • Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams
    • IPC: G06F 12/00
    • CPC: G06F 12/0831; G06F 12/0817
    • Abstract: A multiprocessor data processing system comprising, in addition to first and second processors having respective first and second caches and a main cache directory affiliated with the first processor's cache, a secondary cache directory of the first cache. The secondary directory contains a subset of cache line addresses from the main cache directory, corresponding to cache lines that are in a first or second coherency state, where the second coherency state indicates to the first processor that requests it issues for a cache line whose address is within the secondary directory should utilize the super-coherent data currently available in the first cache and should not be issued on the system interconnect. Additionally, the cache controller logic includes a clear-on-barrier flag (COBF) associated with the secondary directory, which is set whenever an operation of the first processor is issued to the system interconnect. If a barrier instruction is received by the first processor while the COBF is set, the contents of the secondary directory are immediately flushed and the cache lines are tagged with an invalid state.
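The secondary-directory/COBF interaction above can be modeled in a few lines. A minimal sketch under stated assumptions: the directory is a plain address set, the main directory a dict of state strings, and method names are illustrative, not the patent's:

```python
class SecondaryDirectory:
    """Tracks addresses of lines in the super-coherent states, plus a
    clear-on-barrier flag (COBF). Illustrative model only."""
    def __init__(self, main_directory):
        self.main = main_directory   # addr -> coherency-state string
        self.addrs = set()
        self.cobf = False

    def enter_super_coherent(self, addr):
        self.addrs.add(addr)
        self.main[addr] = "SUPER_COHERENT"

    def bus_operation(self):
        # Any operation issued to the system interconnect sets the COBF.
        self.cobf = True

    def barrier(self):
        # A barrier arriving while COBF is set flushes the secondary
        # directory and invalidates the tracked lines.
        if self.cobf:
            for addr in self.addrs:
                self.main[addr] = "INVALID"
            self.addrs.clear()
            self.cobf = False
```

The point of the COBF is visible here: a barrier with no intervening bus operation is cheap (no flush), while a barrier after bus traffic invalidates every super-coherent line in one pass over the small secondary directory rather than the whole main directory.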
    • 3. Invention grant
    • Super-coherent multiprocessor system bus protocols
    • Publication No.: US06763435B2
    • Grant date: 2004-07-13
    • Application No.: US09978355
    • Filing date: 2001-10-16
    • Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams
    • IPC: G06F 13/14
    • CPC: G06F 12/0831
    • Abstract: A method for improving performance of a multiprocessor data processing system, comprising snooping, by a second processor whose cache contains an updated copy of a shared cache line, a request for data held within the shared cache line on a system bus of the data processing system, and, responsive to the snoop of the request by the second processor, issuing a first response on the system bus indicating that the requesting processor may utilize the data currently stored within the shared cache line of its own cache. When the request is snooped by the second processor and the second processor decides to release a lock on the cache line to the requesting processor, the second processor issues a second response on the system bus indicating that the first processor should utilize new/coherent data, and the second processor then releases the lock to the first processor.
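The two-response protocol above reduces to a simple decision made by the lock-holding snooper. A hedged sketch (the response names are invented; the patent does not specify these encodings):

```python
def snoop_response(releasing_lock):
    """Response chosen by the snooper that holds the modified copy."""
    if releasing_lock:
        return "use_new_coherent_data"       # lock is about to be handed over
    return "use_local_super_coherent_data"   # keep spinning on local data

def requester_action(response):
    """What the requesting processor does with each response."""
    if response == "use_local_super_coherent_data":
        return "read_local_cache"            # no further bus traffic
    return "acquire_lock_and_reread"
```

The benefit is that a processor spinning on a contended lock stops generating bus traffic until the holder actually releases it.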
    • 4. Invention grant
    • Super-coherent data mechanisms for shared caches in a multiprocessing system
    • Publication No.: US06658539B2
    • Grant date: 2003-12-02
    • Application No.: US09978353
    • Filing date: 2001-10-16
    • Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams
    • IPC: G06F 12/00
    • CPC: G06F 12/0831; G06F 12/084
    • Abstract: A method for improving performance of a multiprocessor data processing system having processor groups with shared caches. When a processor within a processor group that shares a cache snoops a modification to a shared cache line in a cache of another processor that is not within the processor group, the coherency state of the shared cache line within the first cache is set to a first coherency state, which indicates that the cache line has been modified by a processor outside the processor group and that the cache line has not yet been updated within the group's cache. When a request for the cache line is later issued by a processor, the request is issued to the system bus or interconnect. If the received response to the request indicates that the processor should utilize super-coherent data, the coherency state of the cache line is set to a processor-specific super-coherency state. This state indicates that subsequent requests for the cache line by the first processor should be provided said super-coherent data, while a subsequent request for the cache line by a next processor in the processor group that has not yet issued a request for the cache line on the system bus may still go to the system bus to request the cache line. The individualized, processor-specific super-coherency states are set individually but are usually changed to another coherency state (e.g., Modified or Invalid) as a group.
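The per-processor super-coherency states described above — set individually, cleared as a group — can be sketched with one flag per group member. Names and the assumption that the bus always answers "use local" are illustrative, not from the patent:

```python
class GroupSharedLine:
    """One line in a cache shared by a processor group, with a
    per-processor super-coherency flag. Illustrative model only."""
    def __init__(self, group_size):
        self.remote_modified = False
        self.super_coherent = [False] * group_size

    def snoop_external_modify(self):
        # A processor outside the group modified the line.
        self.remote_modified = True

    def request(self, proc, bus_says_use_local=True):
        if self.super_coherent[proc]:
            return "local_super_coherent_data"   # this processor opted in
        if self.remote_modified:
            # The first request by *this* processor still goes to the bus.
            if bus_says_use_local:
                self.super_coherent[proc] = True
            return "bus_request"
        return "normal_read"

    def group_transition(self):
        # Flags are set individually but cleared as a group
        # (e.g., when the line moves to Modified or Invalid).
        self.remote_modified = False
        self.super_coherent = [False] * len(self.super_coherent)
```

Note how processor 1's first request still reaches the bus even after processor 0 has gone super-coherent, which is the group-specific behavior the abstract calls out.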
    • 5. Invention grant
    • Dynamic hardware and software performance optimizations for super-coherent SMP systems
    • Publication No.: US06704844B2
    • Grant date: 2004-03-09
    • Application No.: US09978361
    • Filing date: 2001-10-16
    • Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams
    • IPC: G06F 12/10
    • CPC: G06F 12/0831
    • Abstract: A method for increasing performance optimization in a multiprocessor data processing system. A number of predetermined thresholds are provided within the system controller logic and utilized to trigger specific bandwidth-utilization responses. Both address bus and data bus bandwidth utilization are monitored. Responsive to the percentage of data bus bandwidth utilization falling below a first predetermined threshold value, the system controller provides a particular response to a request for a cache line at a snooping processor having the cache line, where the response indicates to the requesting processor that the cache line will be provided. Conversely, if the percentage of data bus bandwidth utilization rises above a second predetermined threshold value, the system controller provides a next response to the request, indicating to any requesting processors that they should utilize the super-coherent data currently within their local caches. Similar operation on the address bus permits the system controller to trigger the issuing of Z1 read requests for modified data in a shared cache line by processors that still have super-coherent data. The method also comprises enabling a load instruction with a plurality of bits that (1) indicate whether the resulting load request may receive super-coherent data and (2) override a coherency state indicating utilization of super-coherent data when said plurality of bits indicates that said load request may not utilize said super-coherent data. Specialized store instructions with appended bits and related functionality are also provided.
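The threshold-driven behavior above amounts to choosing a snoop response based on measured bus utilization. A minimal sketch; the threshold values and response names are illustrative, not taken from the patent:

```python
def data_bus_response(utilization, low=0.30, high=0.80):
    """System-controller response to a cache-line request as a function
    of data bus utilization (0.0-1.0). Thresholds are hypothetical."""
    if utilization < low:
        # Spare bandwidth: intervene and ship the line.
        return "line_will_be_provided"
    if utilization > high:
        # Bus saturated: tell the requester to reuse its local copy.
        return "use_local_super_coherent_data"
    return "normal_snoop_response"
```

The same pattern applied to address bus utilization would gate when processors holding super-coherent data are prompted to issue Z1 reads for the fresh copy.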
    • 7. Invention grant
    • System and method for asynchronously overlapping storage barrier operations with old and new storage operations
    • Publication No.: US06609192B1
    • Grant date: 2003-08-19
    • Application No.: US09588607
    • Filing date: 2000-06-06
    • Inventors: Guy Lynn Guthrie, Ravi Kumar Arimilli, John Steven Dodson, Derek Edward Williams
    • IPC: G06F 9/312
    • CPC: G06F 9/30087; G06F 9/3834; G06F 9/3842
    • Abstract: Disclosed is a multiprocessor data processing system that executes load transactions out of order with respect to a barrier operation. The data processing system includes a memory and a plurality of processors coupled to an interconnect. At least one of the processors includes an instruction sequencing unit for fetching an instruction sequence in program order for execution. The instruction sequence includes first and second load instructions and a barrier instruction, which lies between the first and second load instructions in the instruction sequence. Also included in the processor is a load/store unit (LSU), which has a load request queue (LRQ) that temporarily buffers load requests associated with the first and second load instructions. The LRQ is coupled to a load request arbitration unit, which selects the order in which load requests are issued from the LRQ. A controller then issues the load request associated with the second load instruction to memory before completion of a barrier operation associated with the barrier instruction. Alternatively, load requests are issued out of order with respect to the program order before or after the barrier instruction: the load request arbitration unit selects the request associated with the second load instruction before the request associated with the first load instruction, and the controller issues the request associated with the second load instruction before the request associated with the first load instruction and before issuing the barrier operation.
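The LRQ/arbitration behavior above can be modeled as a queue whose arbitration function is free to pick any buffered load, ahead of the pending barrier. This is a simplified sketch, not the claimed hardware; operation tags and the barrier-last simplification are assumptions:

```python
def issue_sequence(program_order, pick_index):
    """program_order: list of ('load', tag) / ('barrier', tag) entries
    in program order. Buffered loads may issue in any order chosen by
    the arbitration function and ahead of the barrier; in this
    simplified model the barrier operation itself issues last."""
    lrq = [op for op in program_order if op[0] == 'load']
    issued = []
    while lrq:
        issued.append(lrq.pop(pick_index(lrq)))   # arbitration picks a load
    issued.extend(op for op in program_order if op[0] == 'barrier')
    return issued
```

With an arbitration policy that prefers the youngest buffered load, the second load issues before both the first load and the barrier, matching the reordering the abstract describes.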
    • 10. Invention grant
    • System and method for providing multiprocessor speculation within a speculative branch path
    • Publication No.: US06728873B1
    • Grant date: 2004-04-27
    • Application No.: US09588507
    • Filing date: 2000-06-06
    • Inventors: Guy Lynn Guthrie, Ravi Kumar Arimilli, John Steven Dodson, Derek Edward Williams
    • IPC: G06F 9/312
    • CPC: G06F 9/30087; G06F 9/3834; G06F 9/3842
    • Abstract: Disclosed is a method of operation within a processor that enhances speculative branch processing. A speculative execution path contains an instruction sequence that includes a barrier instruction followed by a load instruction. While a barrier operation associated with the barrier instruction is pending, a load request associated with the load instruction is speculatively issued to memory. A flag is set for the load request when it is speculatively issued and reset when an acknowledgment is received for the barrier operation. Data returned by the speculatively issued load request is temporarily held and forwarded to a register or execution unit of the data processing system after the acknowledgment is received. All process results, including data returned by the speculatively issued load instructions, are discarded when the speculative execution path is determined to be incorrect.
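The flag-and-hold mechanism in the abstract above can be sketched as a small tracker: a load issued speculatively past a pending barrier is flagged, its data parked until the barrier acknowledgment arrives, and everything is discarded on a wrong path. Class and field names are illustrative, not the patent's:

```python
class SpeculativeLoadTracker:
    """Illustrative model of the speculative-load flag/hold mechanism."""
    def __init__(self):
        self.flagged = False     # set while the barrier ack is outstanding
        self.held_data = None    # returned data parked, not yet forwarded
        self.register_file = []  # stand-in for the register/execution unit

    def speculative_issue(self, data):
        self.flagged = True
        self.held_data = data    # data returns but is not forwarded yet

    def barrier_ack(self):
        self.flagged = False
        if self.held_data is not None:
            self.register_file.append(self.held_data)  # now safe to forward
            self.held_data = None

    def wrong_path(self):
        # Speculative path resolved as incorrect: discard all results.
        self.flagged = False
        self.held_data = None
```

The key invariant is that held data never reaches the register file before the barrier acknowledgment, so a misprediction can be squashed without architectural side effects.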