    • 111. Granted Invention Patent
    • Local invalidation buses for a highly scalable shared cache memory hierarchy
    • US06813694B2
    • 2004-11-02
    • US10216637
    • 2002-08-08
    • Ravi Kumar Arimilli; Guy Lynn Guthrie
    • G06F 12/08
    • G06F12/0811
    • A set of local invalidation buses for a highly scalable shared cache memory hierarchy is disclosed. A symmetric multiprocessor data processing system includes multiple processing units. Each of the processing units is associated with a level one cache memory. All the level one cache memories are associated with an imprecisely inclusive level two cache memory. In addition, a group of local invalidation buses is connected between all the level one cache memories and the level two cache memory. The imprecisely inclusive level two cache memory includes a tracking means for imprecisely tracking cache line inclusivity of the level one cache memories. Thus, the level two cache memory does not have dedicated inclusivity bits for tracking the cache line inclusivity of each of the associated level one cache memories. The tracking means includes a last_processor_to_store field and a more_than_two_loads field per cache line. When the more_than_two_loads field is asserted, except for a specific cache line in the level one cache memory associated with the processor indicated in the last_processor_to_store field, all cache lines within the level one cache memories that shared identical information with that specific cache line are invalidated via the local invalidation buses connected between all the level one cache memories and the level two cache memory.
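The two per-line fields (last_processor_to_store, more_than_two_loads) and the broadcast invalidation the abstract describes can be sketched as a toy model. All class, method, and variable names here are assumptions for illustration; only the two field names come from the patent text.

```python
class L2Cache:
    """L2 directory tracking L1 inclusivity imprecisely: two fields per
    cache line instead of one inclusivity bit per associated L1 cache."""

    def __init__(self, num_l1s):
        self.num_l1s = num_l1s
        self.l1_holders = {}  # addr -> set of L1 ids holding a copy
        self.meta = {}        # addr -> {"last_store": id, "many_loads": bool}

    def _line(self, addr):
        self.l1_holders.setdefault(addr, set())
        return self.meta.setdefault(addr, {"last_store": None, "many_loads": False})

    def record_load(self, proc, addr):
        m = self._line(addr)
        holders = self.l1_holders[addr]
        holders.add(proc)
        if len(holders) > 2:          # the more_than_two_loads condition
            m["many_loads"] = True

    def record_store(self, proc, addr):
        m = self._line(addr)
        holders = self.l1_holders[addr]
        if m["many_loads"]:
            # Broadcast on the local invalidation buses: every L1 copy is
            # dropped except the one in the cache named by last_store.
            keep = m["last_store"]
            holders.intersection_update({keep} if keep is not None else set())
            m["many_loads"] = False
        m["last_store"] = proc        # last_processor_to_store
        holders.add(proc)
```

The point of the imprecision is visible in `record_store`: the L2 never knows exactly which L1s hold the line, so it over-invalidates via broadcast rather than paying for per-L1 inclusivity bits.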
    • 112. Granted Invention Patent
    • Imprecise snooping based invalidation mechanism
    • US06801984B2
    • 2004-10-05
    • US09895119
    • 2001-06-29
    • Ravi Kumar Arimilli; John Steven Dodson; Guy Lynn Guthrie; Jerry Don Lewis
    • G06F 12/08
    • G06F12/0831
    • A method, system, and processor cache configuration that enables efficient retrieval of valid data in response to an invalidate cache miss at a local processor cache. A cache directory is provided a set of directional bits in addition to the coherency state bits and the address tag. The directional bits provide information that includes a processor cache identification (ID) and routing method. The processor cache ID indicates which processor's operation resulted in the cache line of the local processor changing to the invalid (I) coherency state. The routing method indicates what transmission method to utilize to forward the cache line, from among a local system bus or a switch or broadcast mechanism. Processor/cache directory logic provides responses to requests depending on the values of the directional bits.
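A directory entry extended with the "directional bits" the abstract describes might look like the following. The routing constants and field names are invented for illustration; only the idea of recording the invalidating cache's ID plus a routing method per line comes from the text.

```python
# Hypothetical routing-method encodings for the directional bits.
ROUTE_LOCAL_BUS, ROUTE_SWITCH, ROUTE_BROADCAST = 0, 1, 2

class DirectoryEntry:
    def __init__(self, tag):
        self.tag = tag
        self.state = "I"            # MESI-style coherency state
        self.invalidator_id = None  # which processor's op drove us to I
        self.route = None           # how to forward a request to that cache

    def invalidate(self, by_proc, route):
        # Record who invalidated this line and how to reach their cache.
        self.state = "I"
        self.invalidator_id = by_proc
        self.route = route

def retrieval_target(entry):
    """On a miss to an invalidated line, return (cache id, routing method)
    pointing at the likely holder of valid data, or None to fall back to a
    conventional broadcast snoop."""
    if entry.state == "I" and entry.invalidator_id is not None:
        return entry.invalidator_id, entry.route
    return None
```

The payoff is that a subsequent miss can be steered directly at the cache that caused the invalidation, rather than snooping every cache in the system.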
    • 113. Granted Invention Patent
    • System and method for providing multiprocessor speculation within a speculative branch path
    • US06728873B1
    • 2004-04-27
    • US09588507
    • 2000-06-06
    • Guy Lynn Guthrie; Ravi Kumar Arimilli; John Steven Dodson; Derek Edward Williams
    • G06F 9/312
    • G06F 9/30087; G06F 9/3834; G06F 9/3842
    • Disclosed is a method of operation within a processor, that enhances speculative branch processing. A speculative execution path contains an instruction sequence that includes a barrier instruction followed by a load instruction. While a barrier operation associated with the barrier instruction is pending, a load request associated with the load instruction is speculatively issued to memory. A flag is set for the load request when it is speculatively issued and reset when an acknowledgment is received for the barrier operation. Data which is returned by the speculatively issued load request is temporarily held and forwarded to a register or execution unit of the data processing system after the acknowledgment is received. All process results, including data returned by the speculatively issued load instructions are discarded when the speculative execution path is determined to be incorrect.
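The flag/hold/forward behavior the abstract describes can be modeled as a toy load unit: a load issued while a barrier operation is pending is flagged and its data held; the barrier acknowledgment releases the held data, and a mispredicted path discards it. All names here are illustrative, not taken from the patent.

```python
class LoadUnit:
    def __init__(self, memory):
        self.memory = memory
        self.held = {}        # addr -> data held for flagged speculative loads
        self.registers = {}   # stand-in for the destination register file

    def issue_load(self, addr, barrier_pending):
        data = self.memory[addr]
        if barrier_pending:
            self.held[addr] = data    # flag set: hold, do not forward yet
        else:
            self.registers[addr] = data

    def barrier_ack(self):
        # Acknowledgment received: clear the flags and forward held data.
        self.registers.update(self.held)
        self.held.clear()

    def squash(self):
        # Speculative path resolved as incorrect: discard all held results.
        self.held.clear()
```

The key property is that speculatively loaded data never reaches architected state (the register file) until the barrier acknowledgment arrives, yet the memory request itself was issued early.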
    • 115. Granted Invention Patent
    • Dynamic hardware and software performance optimizations for super-coherent SMP systems
    • US06704844B2
    • 2004-03-09
    • US09978361
    • 2001-10-16
    • Ravi Kumar Arimilli; Guy Lynn Guthrie; William J. Starke; Derek Edward Williams
    • G06F 12/10
    • G06F12/0831
    • A method for increasing performance optimization in a multiprocessor data processing system. A number of predetermined thresholds are provided within a system controller logic and utilized to trigger specific bandwidth utilization responses. Both address bus and data bus bandwidth utilization are monitored. Responsive to the percentage of data bus bandwidth utilization falling below a first predetermined threshold value, the system controller provides a particular response to a request for a cache line at a snooping processor having the cache line, where the response indicates to a requesting processor that the cache line will be provided. Conversely, if the percentage of data bus bandwidth utilization rises above a second predetermined threshold value, the system controller provides a next response to the request that indicates to any requesting processor that it should utilize super-coherent data which is currently within its local cache. Similar operation on the address bus permits the system controller to trigger the issuing of Z1 Read requests for modified data in a shared cache line by processors which still have super-coherent data. The method also comprises enabling a load instruction with a plurality of bits that (1) indicates whether a resulting load request may receive super-coherent data and (2) overrides a coherency state indicating utilization of super-coherent data when said plurality of bits indicates that said load request may not utilize said super-coherent data. Specialized store instructions with appended bits and related functionality are also provided.
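The threshold-driven snoop responses the abstract outlines reduce to a simple decision function. The threshold values and response names below are invented for illustration; the patent specifies only that two predetermined thresholds on data-bus utilization select between the two responses.

```python
LOW_DATA_BUS_THRESHOLD = 0.30   # below this, sourcing the line is cheap
HIGH_DATA_BUS_THRESHOLD = 0.80  # above this, push requesters to local data

def snoop_response(data_bus_utilization):
    """Response a snooping processor's system controller gives to a request
    for a cache line it holds, based on current data-bus utilization."""
    if data_bus_utilization < LOW_DATA_BUS_THRESHOLD:
        return "WILL_PROVIDE_LINE"    # the cache line will be supplied
    if data_bus_utilization > HIGH_DATA_BUS_THRESHOLD:
        return "USE_SUPER_COHERENT"   # reuse the copy in the local cache
    return "NORMAL"
```

Between the two thresholds, ordinary coherence behavior applies; the hysteresis band keeps the controller from oscillating between modes on small load changes.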
    • 118. Granted Invention Patent
    • System bus read data transfers with bus utilization based data ordering
    • US06535957B1
    • 2003-03-18
    • US09436422
    • 1999-11-09
    • Ravi Kumar Arimilli; Vicente Enrique Chung; Guy Lynn Guthrie; Jody Bern Joyner
    • G06F 9/32
    • G06F 9/30043; G06F 9/30185; G06F 9/34; G06F 12/0879
    • A method for selecting an order of data transmittal based on system bus utilization of a data processing system. The method comprises the steps of coupling system components to a processor within the data processing system to effectuate data transfer, dynamically determining, based on current system bus loading, an order in which to retrieve and transmit data from the system component to the processor, and informing the processor of the order selected by issuing to the data bus a plurality of selected order bits concurrent with the transmittal of the data, wherein the selected order bits alert the processor to the order and the data is transmitted in that order. In a preferred embodiment, the system component is a cache and a system monitor monitors the system bus usage/loading. When a read request appears at the cache, the modified cache controller preference order logic or a preference order logic component determines the order in which to transmit the data, wherein the order is selected to substantially optimize data bandwidth when the system bus usage is high and to substantially optimize data latency when system bus usage is low.
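The ordering decision the abstract describes is essentially a choice between sequential beats (bandwidth-friendly under heavy load) and critical-word-first (latency-friendly under light load). The sketch below is a toy version; the encoding of the "selected order bits" as a plain index list is an assumption.

```python
def choose_transfer_order(beats, critical_index, bus_busy):
    """Return (order_bits, data_in_order). The selected order is issued to
    the data bus alongside the data so the processor can reassemble the
    cache line whichever ordering was chosen."""
    if bus_busy:
        order = list(range(len(beats)))               # sequential order
    else:
        order = [critical_index] + [
            i for i in range(len(beats)) if i != critical_index
        ]                                             # critical word first
    return order, [beats[i] for i in order]
```

Sending the order bits with the data is what lets the source vary the ordering per transfer instead of fixing one scheme at design time.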
    • 120. Granted Invention Patent
    • Multiprocessor computer system with sectored cache line system bus protocol mechanism
    • US06484241B2
    • 2002-11-19
    • US09752862
    • 2000-12-28
    • Ravi Kumar Arimilli; John Steven Dodson; Guy Lynn Guthrie
    • G06F 12/00
    • G06F12/0831
    • A method of maintaining coherency in a multiprocessor computer system wherein each processing unit's cache has sectored cache lines. A first cache coherency state is assigned to one of the sectors of a particular cache line, and a second cache coherency state, different from the first cache coherency state, is assigned to the overall cache line while maintaining the first cache coherency state for the first sector. The first cache coherency state may provide an indication that the first sector contains a valid value which is not shared with any other cache (i.e., an exclusive or modified state), and the second cache coherency state may provide an indication that at least one of the sectors in the cache line contains a valid value which is shared with at least one other cache (a shared, recently-read, or tagged state). Other coherency states may be applied to other sectors in the same cache line. Partial intervention may be achieved by issuing a request to retrieve an entire cache line, and sourcing only a first sector of the cache line in response to the request. A second sector of the same cache line may be sourced from a third cache. Other sectors may also be sourced from a system memory device of the computer system as well. Appropriate system bus codes are utilized to transmit cache operations to the system bus and indicate which sectors of the cache line are targets of the cache operation.
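A sectored cache line with per-sector coherency states alongside an overall line state, as the abstract describes, can be sketched minimally as follows. State letters follow MESI-plus conventions ("R" = recently read, "T" = tagged); the class and its methods are an illustration, not the patent's mechanism.

```python
class SectoredLine:
    def __init__(self, num_sectors):
        self.sector_state = ["I"] * num_sectors  # one coherency state per sector
        self.line_state = "I"                    # state for the overall line

    def set_sector(self, idx, state):
        self.sector_state[idx] = state
        # The overall line state may differ from any sector's state: it is
        # shared once any sector holds a value shared with another cache.
        if any(s in ("S", "R", "T") for s in self.sector_state):
            self.line_state = "S"
        elif any(s != "I" for s in self.sector_state):
            self.line_state = "M"    # valid but unshared sectors only

    def sourceable_sectors(self):
        # Partial intervention: only sectors held valid here are sourced;
        # the rest come from another cache or from system memory.
        return [i for i, s in enumerate(self.sector_state) if s != "I"]
```

This is the shape that makes partial intervention possible: a request for the whole line can be satisfied sector by sector from different sources, with the bus codes indicating which sectors each response covers.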