    • 1. Granted invention patent
    • Title: Cache memory, processing unit, data processing system and method for assuming a selected invalid coherency state based upon a request source
    • Publication No.: US07237070B2
    • Publication date: 2007-06-26
    • Application No.: US11109085
    • Filing date: 2005-04-19
    • Inventors: Guy L. Guthrie, Aaron C. Sawdey, William J. Starke, Derek Edward Williams
    • IPC: G06F13/00
    • CPC: G06F12/0811; G06F12/0813; G06F12/0831
    • At a first cache memory affiliated with a first processor core, an exclusive memory access operation is received via an interconnect fabric coupling the first cache memory to second and third cache memories respectively affiliated with second and third processor cores. The exclusive memory access operation specifies a target address. In response to receipt of the exclusive memory access operation, the first cache memory detects presence or absence of a source indication indicating that the exclusive memory access operation originated from the second cache memory to which the first cache memory is coupled by a private communication network to which the third cache memory is not coupled. In response to detecting presence of the source indication, a coherency state field of the first cache memory that is associated with the target address is updated to a first data-invalid state. In response to detecting absence of the source indication, the coherency state field of the first cache memory is updated to a different second data-invalid state.
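A minimal C sketch of the state-selection step described in the abstract above, assuming a simple directory entry and a boolean source indication; the state names COH_IN_NEARBY and COH_IN_DISTANT and the snoop_exclusive_access helper are illustrative stand-ins, not the patented implementation.

```c
/* Illustrative sketch only: a directory entry moves to one of two
 * data-invalid states depending on whether a snooped exclusive
 * operation carries a source indication identifying the cache on
 * the shared private network.  All names are hypothetical.        */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum {
    COH_SHARED,      /* line valid and shared                                       */
    COH_IN_NEARBY,   /* data-invalid; line last granted to the private-network peer */
    COH_IN_DISTANT   /* data-invalid; line last granted to some other cache         */
} coh_state_t;

typedef struct {
    uint64_t    tag;    /* target address tag    */
    coh_state_t state;  /* coherency state field */
} dir_entry_t;

/* Snoop handler: an exclusive memory access hit this entry; update
 * the coherency state field based on the source indication.        */
static void snoop_exclusive_access(dir_entry_t *entry, bool source_indication)
{
    entry->state = source_indication ? COH_IN_NEARBY : COH_IN_DISTANT;
}

int main(void)
{
    dir_entry_t entry = { .tag = 0x80001000u, .state = COH_SHARED };

    snoop_exclusive_access(&entry, true);   /* originated from the private-network peer */
    printf("state after nearby exclusive access:  %d\n", entry.state);

    snoop_exclusive_access(&entry, false);  /* originated elsewhere on the fabric       */
    printf("state after distant exclusive access: %d\n", entry.state);
    return 0;
}
```

Keeping two distinct data-invalid states lets a later operation tell whether the line was last taken by the nearby cache or by a distant one, which is the distinction the abstract draws.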
    • 5. Granted invention patent
    • Title: Mode-based castout destination selection
    • Publication No.: US08312220B2
    • Publication date: 2012-11-13
    • Application No.: US12420933
    • Filing date: 2009-04-09
    • Inventors: Guy L. Guthrie, Harmony L. Helterhoff, William J. Starke, Phillip G. Williams, Jeffrey A. Stuecheli
    • IPC: G06F12/08
    • CPC: G06F12/0811; G06F12/12
    • In response to a data request of a first of a plurality of processing units, the first processing unit selects a victim cache line to be castout from the lower level cache of the first processing unit and determines whether a mode is set. If not, the first processing unit issues on the interconnect fabric an LCO command identifying the victim cache line and indicating that a lower level cache is the intended destination. If the mode is set, the first processing unit issues a castout command with an alternative intended destination. In response to a coherence response to the LCO command indicating success of the LCO command, the first processing unit removes the victim cache line from its lower level cache, and the victim cache line is held elsewhere in the data processing system. The mode can be set to inhibit castouts to system memory, for example, for testing.
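The mode-driven branch in the abstract above can be pictured with a short C sketch; it is only an illustration under assumed names (victim_line_t, issue_lateral_castout, issue_castout_alternative), not the patent's actual command set or fabric protocol.

```c
/* Illustrative sketch only: a castout path consults a mode bit to
 * decide which castout command to issue for a victim line.        */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t addr; } victim_line_t;

static void issue_lateral_castout(const victim_line_t *v)
{
    /* LCO command on the interconnect fabric naming a peer
     * lower-level cache as the intended destination.              */
    printf("LCO victim 0x%llx -> peer lower-level cache\n",
           (unsigned long long)v->addr);
}

static void issue_castout_alternative(const victim_line_t *v)
{
    /* Castout command with an alternative intended destination,
     * e.g. when castouts to system memory are inhibited for test. */
    printf("CO  victim 0x%llx -> alternative destination\n",
           (unsigned long long)v->addr);
}

/* On a data request that forces an eviction, choose the castout
 * command according to whether the mode is set.                   */
static void castout_victim(const victim_line_t *v, bool mode_set)
{
    if (!mode_set)
        issue_lateral_castout(v);
    else
        issue_castout_alternative(v);
}

int main(void)
{
    victim_line_t v = { .addr = 0x2000 };
    castout_victim(&v, false);  /* normal operation: lateral castout */
    castout_victim(&v, true);   /* mode set: alternative destination */
    return 0;
}
```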
    • 6. Granted invention patent
    • Title: Victim cache prefetching
    • Publication No.: US08209489B2
    • Publication date: 2012-06-26
    • Application No.: US12256064
    • Filing date: 2008-10-22
    • Inventors: Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli, Phillip G. Williams
    • IPC: G06F12/08
    • CPC: G06F12/0862; G06F12/0897; Y02D10/13
    • A processing unit for a multiprocessor data processing system includes a processor core and a cache hierarchy coupled to the processor core to provide low latency data access. The cache hierarchy includes an upper level cache coupled to the processor core and a lower level victim cache coupled to the upper level cache. In response to a prefetch request of the processor core that misses in the upper level cache, the lower level victim cache determines whether the prefetch request misses in the directory of the lower level victim cache and, if so, allocates a state machine in the lower level victim cache that services the prefetch request by issuing the prefetch request to at least one other processing unit of the multiprocessor data processing system.
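As an illustration of the flow in the abstract above, the following C sketch shows a lower-level victim cache allocating one of a small pool of state machines when a prefetch misses in its directory; the pool size, victim_directory_hit, and issue_prefetch_on_fabric are assumed placeholders, not the real hardware interfaces.

```c
/* Illustrative sketch only: handling a prefetch request that already
 * missed in the upper-level cache.                                  */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PREFETCH_MACHINES 4

typedef struct {
    bool     busy;
    uint64_t addr;
} prefetch_machine_t;

static prefetch_machine_t machines[NUM_PREFETCH_MACHINES];

/* Placeholder for the victim cache's directory lookup. */
static bool victim_directory_hit(uint64_t addr)
{
    (void)addr;
    return false;   /* assume a miss for the example */
}

/* Placeholder for issuing the prefetch to other processing units. */
static void issue_prefetch_on_fabric(uint64_t addr)
{
    printf("prefetch 0x%llx issued on interconnect fabric\n",
           (unsigned long long)addr);
}

/* If the prefetch also misses in the victim cache directory, allocate
 * a free state machine to service it by going out on the fabric.     */
static bool victim_cache_prefetch(uint64_t addr)
{
    if (victim_directory_hit(addr))
        return true;                 /* already present; nothing to do */

    for (int i = 0; i < NUM_PREFETCH_MACHINES; i++) {
        if (!machines[i].busy) {
            machines[i].busy = true;
            machines[i].addr = addr;
            issue_prefetch_on_fabric(addr);
            return true;
        }
    }
    return false;                    /* no machine free; drop or retry */
}

int main(void)
{
    victim_cache_prefetch(0x3000);
    return 0;
}
```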
    • 7. Granted invention patent
    • Title: Fault tolerant encoding of directory states for stuck bits
    • Publication No.: US08205136B2
    • Publication date: 2012-06-19
    • Application No.: US12189808
    • Filing date: 2008-08-12
    • Inventors: Robert H. Bell, Jr., Guy L. Guthrie, William J. Starke
    • IPC: G11C29/00
    • CPC: G11C29/832; G06F11/1064
    • A method of handling a stuck bit in a directory of a cache memory, by defining multiple binary encodings to indicate a defective cache state, detecting an error in a tag stored in a member of the directory (wherein the tag at least includes an address field, a state field and an error-correction field), determining that the error is associated with a stuck bit of the directory member, and writing new state information to the directory member which is selected from one of the binary encodings based on a field location of the stuck bit within the directory member. The multiple binary encodings may include a first binary encoding when the stuck bit is in the address field, a second binary encoding when the stuck bit is in the state field, and a third binary encoding when the stuck bit is in the error-correction field. The new state information may also further be selected based on the value of the stuck bit, e.g., a state bit corresponding to the stuck bit is assigned a bit value from the new state information which matches the value of the stuck bit.
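A rough C sketch of the encoding-selection idea in the abstract above. The 4-bit encoding values, the field enum, and the bit-forcing step for state-field stuck bits are invented for illustration and simplify the patent's "select among encodings" step to directly matching the stuck bit's value.

```c
/* Illustrative sketch only: pick a "defective member" state encoding
 * based on which directory field holds the stuck bit.               */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { FIELD_ADDRESS, FIELD_STATE, FIELD_ECC } dir_field_t;

/* Hypothetical 4-bit encodings that all mean "defective member". */
#define DEFECTIVE_ADDR_ENC  0x8u   /* stuck bit in the address field        */
#define DEFECTIVE_STATE_ENC 0x9u   /* stuck bit in the state field          */
#define DEFECTIVE_ECC_ENC   0xAu   /* stuck bit in the error-correction field */

/* Select the new state information.  For a stuck bit inside the state
 * field itself, force the corresponding bit of the encoding to match
 * the stuck value so the write can actually land.                     */
static uint8_t select_defective_encoding(dir_field_t field,
                                         unsigned stuck_bit_pos,
                                         bool stuck_value)
{
    uint8_t enc;
    switch (field) {
    case FIELD_ADDRESS: enc = DEFECTIVE_ADDR_ENC;  break;
    case FIELD_STATE:   enc = DEFECTIVE_STATE_ENC; break;
    default:            enc = DEFECTIVE_ECC_ENC;   break;
    }
    if (field == FIELD_STATE) {
        if (stuck_value)
            enc |= (uint8_t)(1u << stuck_bit_pos);
        else
            enc &= (uint8_t)~(1u << stuck_bit_pos);
    }
    return enc;
}

int main(void)
{
    printf("encoding: 0x%x\n", select_defective_encoding(FIELD_STATE, 1, true));
    return 0;
}
```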
    • 10. Invention patent application
    • Title: Virtual Barrier Synchronization Cache
    • Publication No.: US20100257317A1
    • Publication date: 2010-10-07
    • Application No.: US12419364
    • Filing date: 2009-04-07
    • Inventors: Ravi K. Arimilli, Guy L. Guthrie, Robert A. Cargnoni, William J. Starke, Derek E. Williams
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/0811; G06F9/522
    • A data processing system includes an interconnect fabric, a system memory coupled to the interconnect fabric and including a virtual barrier synchronization region allocated to storage of virtual barrier synchronization registers (VBSRs), and a plurality of processing units coupled to the interconnect fabric and operable to access the virtual barrier synchronization region of the system memory. Each of the plurality of processing units includes a processor core and a cache memory including a cache array that caches VBSR lines from the virtual barrier synchronization region of the system memory and a cache controller. The cache controller, responsive to a store request from the processor core to update a particular VBSR line, performs a non-blocking update of the cache array in each other of the plurality of processing units contemporaneously holding a copy of the particular VBSR line by transmitting a VBSR update command on the interconnect fabric.
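The update-rather-than-invalidate behaviour described in the abstract above can be mimicked in a few lines of C; the per-unit vbsr_cache array and the broadcast loop are stand-ins for the cache arrays and the VBSR update command on the interconnect fabric, not the actual design.

```c
/* Illustrative sketch only: on a store to a VBSR line, the caching
 * controller pushes the same update to every other unit holding a
 * copy instead of invalidating those copies.                       */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_UNITS       4
#define VBSR_LINE_WORDS 4

typedef struct {
    bool     valid;
    uint64_t words[VBSR_LINE_WORDS];   /* cached copy of one VBSR line */
} vbsr_line_t;

static vbsr_line_t vbsr_cache[NUM_UNITS];  /* one cached copy per processing unit */

/* Store from processor core `src` to word `idx` of the VBSR line:
 * update the local copy, then send a VBSR update command to every
 * other unit that currently holds the line.                        */
static void vbsr_store(int src, int idx, uint64_t value)
{
    vbsr_cache[src].valid = true;
    vbsr_cache[src].words[idx] = value;

    for (int u = 0; u < NUM_UNITS; u++) {
        if (u != src && vbsr_cache[u].valid) {
            vbsr_cache[u].words[idx] = value;   /* non-blocking update, not an invalidate */
            printf("VBSR update command: unit %d word %d <- %llu\n",
                   u, idx, (unsigned long long)value);
        }
    }
}

int main(void)
{
    vbsr_cache[1].valid = true;     /* unit 1 also holds a copy of the line */
    vbsr_store(0, 0, 1);            /* core 0 records arrival at the barrier */
    return 0;
}
```

Updating every resident copy in place keeps the barrier line readable at each unit without the invalidate-and-refetch traffic an ordinary store-invalidate protocol would generate, which is the behaviour the abstract highlights.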