    • 81. Granted invention patent
    • Cache memory, processing unit, data processing system and method for assuming a selected invalid coherency state based upon a request source
    • Publication No.: US07237070B2 (2007-06-26)
    • Application No.: US11109085 (filed 2005-04-19)
    • Inventors: Guy L. Guthrie; Aaron C. Sawdey; William J. Starke; Derek Edward Williams
    • IPC: G06F13/00
    • CPC: G06F12/0811; G06F12/0813; G06F12/0831
    • Abstract: At a first cache memory affiliated with a first processor core, an exclusive memory access operation is received via an interconnect fabric coupling the first cache memory to second and third cache memories respectively affiliated with second and third processor cores. The exclusive memory access operation specifies a target address. In response to receipt of the exclusive memory access operation, the first cache memory detects presence or absence of a source indication indicating that the exclusive memory access operation originated from the second cache memory to which the first cache memory is coupled by a private communication network to which the third cache memory is not coupled. In response to detecting presence of the source indication, a coherency state field of the first cache memory that is associated with the target address is updated to a first data-invalid state. In response to detecting absence of the source indication, the coherency state field of the first cache memory is updated to a different second data-invalid state.
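The mechanism in the abstract above can be sketched as a small simulation: a snooping cache chooses between two data-invalid coherency states depending on whether the exclusive access came from a privately connected peer. The class, state names, and network model here are illustrative assumptions, not the patent's actual encodings.

```python
# Hypothetical sketch: a snooping cache updates its coherency state
# field to one of two data-invalid states based on a "source indication"
# (the requester sits on the same private network). Names are assumed.

class SnoopingCache:
    def __init__(self, cache_id, private_peers):
        self.cache_id = cache_id
        self.private_peers = set(private_peers)  # caches on the private network
        self.state = {}  # target address -> coherency state field

    def snoop_exclusive_access(self, address, source_id):
        # Source indication present: requester is a privately connected cache.
        if source_id in self.private_peers:
            self.state[address] = "INVALID_FIRST"   # first data-invalid state
        else:
            self.state[address] = "INVALID_SECOND"  # second data-invalid state
        return self.state[address]

# Cache 0 shares a private network with cache 1 but not with cache 2.
first = SnoopingCache(0, private_peers=[1])
print(first.snoop_exclusive_access(0x1000, source_id=1))  # INVALID_FIRST
print(first.snoop_exclusive_access(0x2000, source_id=2))  # INVALID_SECOND
```

Distinguishing the two invalid states lets later coherence decisions exploit knowledge of where the valid copy likely resides.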
    • 82. Granted invention patent
    • Selective cache-to-cache lateral castouts
    • Publication No.: US09189403B2 (2015-11-17)
    • Application No.: US12650018 (filed 2009-12-30)
    • Inventors: Guy L. Guthrie; William J. Starke; Jeffrey Stuecheli; Derek E. Williams; Thomas R. Puzak
    • IPC: G06F12/00; G06F12/08; G06F12/12
    • CPC: G06F12/0811; G06F12/12
    • Abstract: A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be castout from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and a castout of the victim cache line to the system memory based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line.
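The castout decision described above can be illustrated with a minimal model: a per-line confidence indicator steers the evicted victim either laterally to a peer lower-level cache (LCO) or down to system memory. The threshold and dictionary-based caches are assumptions for illustration only.

```python
# Illustrative sketch of selective lateral castout: the victim line is
# removed from the first lower level cache either way; the confidence
# indicator decides whether a peer cache or memory receives it.

def castout(lower_cache, peer_cache, memory, victim_addr, confidence, threshold=1):
    line = lower_cache.pop(victim_addr)      # remove victim from first lower level cache
    if confidence >= threshold:              # line likely to be re-referenced soon
        peer_cache[victim_addr] = line       # lateral castout: peer holds the line
        return "LCO"
    memory[victim_addr] = line               # otherwise cast out to system memory
    return "CASTOUT_TO_MEMORY"

l3_a, l3_b, mem = {0x40: "data"}, {}, {}
print(castout(l3_a, l3_b, mem, 0x40, confidence=2))  # LCO
print(0x40 in l3_b, 0x40 in l3_a)                    # True False
```

Keeping high-confidence victims on-chip trades a peer's capacity for avoided memory latency on a likely re-reference.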
    • 88. Granted invention patent
    • Synchronizing access to data in shared memory via upper level cache queuing
    • Publication No.: US08327074B2 (2012-12-04)
    • Application No.: US13445080 (filed 2012-04-12)
    • Inventors: Guy L. Guthrie; William J. Starke; Derek E. Williams
    • IPC: G06F12/00
    • CPC: G06F12/0811; G06F9/3004; G06F9/30072; G06F9/30087; G06F9/3834
    • Abstract: A processing unit includes a store-in lower level cache having reservation logic that determines presence or absence of a reservation and a processor core including a store-through upper level cache, an instruction execution unit, a load unit that, responsive to a hit in the upper level cache on a load-reserve operation generated through execution of a load-reserve instruction by the instruction execution unit, temporarily buffers a load target address of the load-reserve operation, and a flag indicating that the load-reserve operation bound to a value in the upper level cache. If a storage-modifying operation is received that conflicts with the load target address of the load-reserve operation, the processor core sets the flag to a particular state, and, responsive to execution of a store-conditional instruction, transmits an associated store-conditional operation to the lower level cache with a fail indication if the flag is set to the particular state.
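The reservation-flag behavior in the abstract above resembles classic load-reserve/store-conditional handling, and can be modeled roughly as follows. The single-core, dictionary-cache model and method names are simplifying assumptions.

```python
# Rough model: a load-reserve hit in the store-through upper level cache
# buffers the load target address plus a flag; a conflicting store
# clears the flag, and the later store-conditional then carries a fail
# indication down to the lower level cache.

class Core:
    def __init__(self, l1):
        self.l1 = dict(l1)        # store-through upper level cache contents
        self.reserve_addr = None  # buffered load target address
        self.flag_valid = False   # load-reserve bound to an upper-level value

    def load_reserve(self, addr):
        value = self.l1[addr]     # assume a hit in the upper level cache
        self.reserve_addr = addr
        self.flag_valid = True
        return value

    def snoop_store(self, addr):
        # Storage-modifying operation conflicting with the load target
        # address puts the flag into the "fail" state.
        if addr == self.reserve_addr:
            self.flag_valid = False

    def store_conditional(self, addr, value):
        # The fail indication accompanies the operation to the lower cache.
        fail = (addr != self.reserve_addr) or not self.flag_valid
        if not fail:
            self.l1[addr] = value
        return not fail           # True: store-conditional succeeded

core = Core({0x80: 1})
core.load_reserve(0x80)
core.snoop_store(0x80)                  # conflicting store observed
print(core.store_conditional(0x80, 2))  # False: reservation was lost
```

Tracking the reservation in the core, rather than only in the lower-level reservation logic, lets a doomed store-conditional be failed early.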
    • 89. Granted invention patent
    • Data processing system, method and interconnect fabric having a flow governor
    • Publication No.: US08254411B2 (2012-08-28)
    • Application No.: US11055399 (filed 2005-02-10)
    • Inventors: Leo J. Clark; Guy L. Guthrie; William J. Starke
    • IPC: H04J3/16; H04J3/22; G06F13/00; G06F15/173
    • CPC: H04L47/722; H04L47/70; H04L49/109
    • Abstract: A data processing system includes a plurality of local hubs each coupled to a remote hub by a respective one a plurality of point-to-point communication links. Each of the plurality of local hubs queues requests for access to memory blocks for transmission on a respective one of the point-to-point communication links to a shared resource in the remote hub. Each of the plurality of local hubs transmits requests to the remote hub utilizing only a fractional portion of a bandwidth of its respective point-to-point communication link. The fractional portion that is utilized is determined by an allocation policy based at least in part upon a number of the plurality of local hubs and a number of processing units represented by each of the plurality of local hubs. The allocation policy prevents overruns of the shared resource.
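The allocation policy described above can be sketched numerically: each local hub is granted a fixed fraction of its link bandwidth, sized so that the hubs' combined traffic can never overrun the shared resource in the remote hub. The unit-weighted even split below is an assumed policy for illustration, not the patent's exact formula.

```python
# Back-of-envelope flow-governor sketch: each local hub's bandwidth
# fraction is proportional to the processing units it represents, and
# the fractions sum to 1, so the remote hub's shared queue cannot be
# oversubscribed.

from fractions import Fraction

def link_fractions(units_per_hub):
    # units_per_hub[i] = number of processing units represented by hub i
    total = sum(units_per_hub)
    return [Fraction(u, total) for u in units_per_hub]

# Four local hubs, each representing four processing units:
shares = link_fractions([4, 4, 4, 4])
print(shares[0])         # 1/4 of each link's bandwidth
assert sum(shares) == 1  # aggregate exactly fills, never overruns, the shared resource
```

Throttling at the sources, rather than back-pressuring at the remote hub, keeps the shared queue overrun-free without retry traffic.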
    • 90. Published invention application
    • SYNCHRONIZING ACCESS TO DATA IN SHARED MEMORY VIA UPPER LEVEL CACHE QUEUING
    • Publication No.: US20120198167A1 (2012-08-02)
    • Application No.: US13445080 (filed 2012-04-12)
    • Inventors: Guy L. Guthrie; William J. Starke; Derek E. Williams
    • IPC: G06F12/08
    • CPC: G06F12/0811; G06F9/3004; G06F9/30072; G06F9/30087; G06F9/3834
    • Abstract: A processing unit includes a store-in lower level cache having reservation logic that determines presence or absence of a reservation and a processor core including a store-through upper level cache, an instruction execution unit, a load unit that, responsive to a hit in the upper level cache on a load-reserve operation generated through execution of a load-reserve instruction by the instruction execution unit, temporarily buffers a load target address of the load-reserve operation, and a flag indicating that the load-reserve operation bound to a value in the upper level cache. If a storage-modifying operation is received that conflicts with the load target address of the load-reserve operation, the processor core sets the flag to a particular state, and, responsive to execution of a store-conditional instruction, transmits an associated store-conditional operation to the lower level cache with a fail indication if the flag is set to the particular state.