    • 3. Invention application
    • Method for completing full cacheline stores with address-only bus operations
    • Publication number: US20050251623A1
    • Publication date: 2005-11-10
    • Application number: US10825189
    • Filing date: 2004-04-15
    • Inventors: Ravi Arimilli; Guy Guthrie; Hugh Shen; Derek Williams
    • IPC: G06F12/00; G06F12/08
    • CPC: G06F12/0897; G06F12/0804
    • A method and processor system that substantially eliminates data bus operations when completing updates of an entire cache line with a full store queue entry. The store queue within a processor chip is designed with a series of AND gates connecting individual bits of the byte enable bits of a corresponding entry. The AND output is fed to the STQ controller and signals when the entry is full. When full entries are selected for dispatch to the RC machines, the RC machine is signaled that the entry updates the entire cache line. The RC machine obtains write permission to the line, and then the RC machine overwrites the entire cache line. Because the entire cache line is overwritten, the data of the cache line is not retrieved when the request for the cache line misses at the cache or when the data goes stale before write permission is obtained by the RC machine.
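The full-line detection described in the abstract above can be modeled in software. The C sketch below is a rough behavioral analogue under stated assumptions, not the patent's hardware: the 128-byte line size, the struct layout, and the names entry_is_full_line and dispatch_to_rc are all illustrative. It shows the core idea of AND-reducing a store queue entry's byte-enable bits and, when every byte is enabled, overwriting the line without first fetching its old data.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINE_BYTES 128  /* assumed line size; the abstract does not fix one */

/* Hypothetical store-queue entry: gathered store data plus one enable bit per byte. */
struct stq_entry {
    uint8_t data[CACHE_LINE_BYTES];
    uint8_t byte_enable[CACHE_LINE_BYTES / 8];
};

/* Software analogue of the AND-gate tree over the byte-enable bits:
 * true only when every byte of the line has been written by the entry. */
static bool entry_is_full_line(const struct stq_entry *e)
{
    for (size_t i = 0; i < sizeof e->byte_enable; i++)
        if (e->byte_enable[i] != 0xFF)
            return false;
    return true;
}

/* Dispatch path: a full-line entry lets the modeled RC machine obtain write
 * permission and overwrite the whole line without a data bus read; a partial
 * entry would have to merge with the line's existing data (not shown). */
void dispatch_to_rc(const struct stq_entry *e, uint8_t *cache_line)
{
    if (entry_is_full_line(e))
        memcpy(cache_line, e->data, CACHE_LINE_BYTES);
    /* else: fetch the old line data and merge only the enabled bytes */
}
```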
    • 4. Invention application
    • Processor, data processing system, and method for initializing a memory block in a data processing system having multiple coherency domains
    • Publication number: US20070226423A1
    • Publication date: 2007-09-27
    • Application number: US11388001
    • Filing date: 2006-03-23
    • Inventors: Ravi Arimilli; Guy Guthrie; William Starke; Derek Williams
    • IPC: G06F13/28
    • CPC: G06F12/0822; G06F12/084
    • A data processing system includes at least first and second coherency domains, each including at least one processor core and a memory. In response to an initialization operation by a processor core that indicates a target memory block to be initialized, a cache memory in the first coherency domain determines a coherency state of the target memory block with respect to the cache memory. In response to the determination, the cache memory selects a scope of broadcast of an initialization request identifying the target memory block. A narrower scope including the first coherency domain and excluding the second coherency domain is selected in response to a determination of a first coherency state, and a broader scope including the first coherency domain and the second coherency domain is selected in response to a determination of a second coherency state. The cache memory then broadcasts an initialization request with the selected scope. In response to the initialization request, the target memory block is initialized within a memory of the data processing system to an initialization value.
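As a rough illustration of the scope-selection decision in the abstract above, the C sketch below models the choice between a domain-local and a global broadcast. The state names, scope names, and the assumption that a "local only" coherency state permits the narrower scope are simplifications invented for this example; the actual state encoding is defined by the application, not here.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative coherency summary a cache might hold for the target block. */
enum coh_state {
    COH_LOCAL_ONLY,      /* block known not to be cached outside this domain */
    COH_POSSIBLY_REMOTE  /* block may be cached in another coherency domain */
};

enum bcast_scope { SCOPE_LOCAL_DOMAIN, SCOPE_GLOBAL };

/* Pick the broadcast scope for an initialization request: stay inside the
 * first (local) coherency domain when the cache state allows it, otherwise
 * broadcast to both domains. */
static enum bcast_scope select_init_scope(enum coh_state s)
{
    return (s == COH_LOCAL_ONLY) ? SCOPE_LOCAL_DOMAIN : SCOPE_GLOBAL;
}

/* Once the request with the selected scope succeeds, the target block is set
 * to the initialization value (zero here) in system memory. */
static void initialize_block(uint8_t *block, size_t len)
{
    memset(block, 0, len);
}
```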
    • 5. Invention application
    • Data processing system, method and interconnect fabric supporting multiple planes of processing nodes
    • Publication number: US20070081516A1
    • Publication date: 2007-04-12
    • Application number: US11245887
    • Filing date: 2005-10-07
    • Inventors: Ravi Arimilli; Benjiman Goodman; Guy Guthrie; Praveen Reddy; William Starke
    • IPC: H04L12/28
    • CPC: G06F15/16
    • A data processing system includes a first plane including a first plurality of processing nodes, each including multiple processing units, and a second plane including a second plurality of processing nodes, each including multiple processing units. The data processing system also includes a plurality of point-to-point first tier links. Each of the first plurality and second plurality of processing nodes includes one or more first tier links among the plurality of first tier links, where the first tier link(s) within each processing node connect a pair of processing units in the same processing node for communication. The data processing system further includes a plurality of point-to-point second tier links. At least a first of the plurality of second tier links connects processing units in different ones of the first plurality of processing nodes, at least a second of the plurality of second tier links connects processing units in different ones of the second plurality of processing nodes, and at least a third of the plurality of second tier links connects a processing unit in the first plane to a processing unit in the second plane.
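The topology in the abstract above lends itself to a small data-structure sketch. The C fragment below only illustrates the two link tiers (first-tier links inside a node, second-tier links between nodes, including ones that cross planes); the identifier widths and type names are assumptions made for the example, not parameters taken from the application.

```c
#include <stdint.h>

/* Illustrative identifiers: plane -> processing node -> processing unit. */
struct pu_id {
    uint8_t plane;  /* 0 = first plane, 1 = second plane */
    uint8_t node;   /* processing node within the plane */
    uint8_t unit;   /* processing unit within the node */
};

/* A point-to-point link between two processing units. */
struct link {
    struct pu_id a, b;
};

/* First-tier link: both ends in the same node of the same plane. */
static int is_first_tier(const struct link *l)
{
    return l->a.plane == l->b.plane && l->a.node == l->b.node;
}

/* Second-tier link: ends in different nodes; it may stay within one plane
 * or, as in the abstract's third case, connect a unit in the first plane
 * to a unit in the second plane. */
static int is_second_tier(const struct link *l)
{
    return !(l->a.plane == l->b.plane && l->a.node == l->b.node);
}

static int crosses_planes(const struct link *l)
{
    return l->a.plane != l->b.plane;
}
```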
    • 8. Invention application
    • System and method of re-ordering store operations within a processor
    • Publication number: US20060179226A1
    • Publication date: 2006-08-10
    • Application number: US11054450
    • Filing date: 2005-02-09
    • Inventors: Guy Guthrie; Hugh Shen; William Starke; Derek Williams
    • IPC: G06F12/00
    • CPC: G06F9/3834; G06F9/30043; G06F9/3824; G06F12/0817
    • A system and method for re-ordering store operations from a processor core to a store queue. When a store queue receives a new processor-issued store operation from the processor core, a store queue controller allocates a new entry in the store queue. In response to allocating the new entry in the store queue, the store queue controller determines whether or not the new entry is dependent on at least one other valid entry in the store queue. In response to determining the new entry is dependent on at least one other valid entry in the store queue, the store queue controller inhibits requesting of the new entry to the RC dispatch logic until each valid entry on which the new entry is dependent has been successfully dispatched to an RC machine by the RC dispatch logic.
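A behavioral sketch of the dependency gating described in the abstract follows. The per-entry dependency bit vector, the same-cache-line overlap test, the queue depth, and the function names are simplifications assumed for illustration; the real criteria for treating one entry as dependent on another may be broader than the address match used here.

```c
#include <stdbool.h>
#include <stdint.h>

#define STQ_ENTRIES 8  /* assumed queue depth */

struct stq {
    uint64_t addr[STQ_ENTRIES];   /* cache-line address of each entry */
    bool     valid[STQ_ENTRIES];
    /* dep[i] is a bit vector: bit j set means entry i must wait until
     * entry j has been successfully dispatched to an RC machine. */
    uint32_t dep[STQ_ENTRIES];
};

/* Allocate a new entry and record its dependences on existing valid
 * entries that target the same cache line. */
void stq_allocate(struct stq *q, int i, uint64_t line_addr)
{
    q->addr[i]  = line_addr;
    q->valid[i] = true;
    q->dep[i]   = 0;
    for (int j = 0; j < STQ_ENTRIES; j++)
        if (j != i && q->valid[j] && q->addr[j] == line_addr)
            q->dep[i] |= 1u << j;
}

/* An entry may be requested to the RC dispatch logic only once every entry
 * it depends on has dispatched, i.e. its dependency vector is clear. */
bool stq_can_request_dispatch(const struct stq *q, int i)
{
    return q->valid[i] && q->dep[i] == 0;
}

/* When entry j dispatches successfully, release every entry waiting on it. */
void stq_on_dispatched(struct stq *q, int j)
{
    q->valid[j] = false;
    for (int i = 0; i < STQ_ENTRIES; i++)
        q->dep[i] &= ~(1u << j);
}
```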
    • 9. Invention application
    • Reducing number of rejected snoop requests by extending time to respond to snoop request
    • Publication number: US20060184746A1
    • Publication date: 2006-08-17
    • Application number: US11056679
    • Filing date: 2005-02-11
    • Inventors: Guy Guthrie; Hugh Shen; William Starke; Derek Williams
    • IPC: G06F13/28
    • CPC: G06F13/1605; G06F12/0831
    • A cache, system and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. The snoop request is entered in the first available latch of the stall/reorder unit; if the stall/reorder unit is full, the new snoop request is instead forwarded to a second unit configured to ask the requester to retry the snoop request later. Snoop requests have a higher priority than requests from processors, so the arbitration mechanism selects snoop requests over processor requests unless it issues a stall request to the stall/reorder unit. Because snoop requests take priority over processor requests, fewer snoop requests are rejected; because the arbitration mechanism can issue a stall request, the processor is not starved.
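The queue-or-retry behavior of the stall/reorder unit can be sketched as below. The fixed latch count, the boolean accept/retry return, and the stall flag in the arbiter are illustrative assumptions; the abstract does not specify these details.

```c
#include <stdbool.h>
#include <stdint.h>

#define SNOOP_LATCHES 4  /* assumed number of latches in the stall/reorder unit */

struct stall_reorder_unit {
    uint64_t snoop_addr[SNOOP_LATCHES];
    bool     occupied[SNOOP_LATCHES];
};

/* Accept a snoop request into the first free latch; if every latch is
 * occupied, report that the requester should be told to retry later
 * (the role of the "second unit" in the abstract). */
bool sru_accept(struct stall_reorder_unit *u, uint64_t addr)
{
    for (int i = 0; i < SNOOP_LATCHES; i++) {
        if (!u->occupied[i]) {
            u->snoop_addr[i] = addr;
            u->occupied[i] = true;
            return true;   /* accepted */
        }
    }
    return false;          /* full: respond with retry */
}

/* Arbitration: snoops win over processor requests unless a stall request is
 * in effect, which lets processor requests through so the core is not starved.
 * Returns true when the snoop should be serviced this cycle. */
bool arbiter_pick_snoop(bool snoop_pending, bool proc_pending, bool stall_snoops)
{
    if (snoop_pending && !stall_snoops)
        return true;
    return snoop_pending && !proc_pending;
}
```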
    • 10. Invention application
    • System bus read data transfers with data ordering control bits
    • Publication number: US20050193174A1
    • Publication date: 2005-09-01
    • Application number: US11041711
    • Filing date: 2005-01-22
    • Inventors: Ravi Arimilli; Vicente Chung; Guy Guthrie; Jody Joyner
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/0831
    • A method for informing a processor of a selected order of transmission of data to the processor. The method comprises the steps of coupling system components via a data bus to the processor to effectuate data transfer, determining at the system component logic the order in which to transmit data to the processor, and issuing to the data bus a selected order bit concurrent with the data, wherein the selected order bit alerts the processor of the order and the data is transmitted in that order. In a preferred embodiment, the system component is the cache and the method may involve receiving at the cache a preference of ordering for a read address/request from the processor. The preference order logic of the cache controller or a preference order logic component evaluates the preference of ordering desired by comparing the processor preference with other preferences, including cache order preference. One preference order is selected and the data is then retrieved from a cache line of the cache in the order selected.
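A simplified model of the ordering negotiation is sketched below. The two orderings (critical-word-first versus sequential), the beat size, the cache-wins tie-break policy, and the structure carrying the order bit are assumptions chosen to make the example concrete; the abstract only states that one preference is selected and that a selected order bit accompanies the data.

```c
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 128
#define BEAT_BYTES 16
#define NUM_BEATS  (LINE_BYTES / BEAT_BYTES)

/* Illustrative orderings a requester or the cache might prefer. */
enum data_order { ORDER_SEQUENTIAL = 0, ORDER_CRITICAL_WORD_FIRST = 1 };

/* One data beat on the bus, tagged with the selected-order bit. */
struct data_beat {
    uint8_t payload[BEAT_BYTES];
    uint8_t order_bit;  /* tells the processor which ordering was chosen */
};

/* Resolve the processor's preference against the cache's own preference;
 * here the cache simply wins, as one possible policy. */
static enum data_order select_order(enum data_order proc_pref,
                                    enum data_order cache_pref)
{
    (void)proc_pref;
    return cache_pref;
}

/* Read the line out of the cache in the selected order, marking every beat
 * with the order bit so the processor can reassemble the line. */
static void transfer_line(const uint8_t line[LINE_BYTES], unsigned critical_beat,
                          enum data_order order, struct data_beat out[NUM_BEATS])
{
    for (unsigned i = 0; i < NUM_BEATS; i++) {
        unsigned src = (order == ORDER_CRITICAL_WORD_FIRST)
                           ? (critical_beat + i) % NUM_BEATS
                           : i;
        memcpy(out[i].payload, line + src * BEAT_BYTES, BEAT_BYTES);
        out[i].order_bit = (uint8_t)order;
    }
}
```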