    • 62. Patent application
    • Aggregate Data Processing System Having Multiple Overlapping Synthetic Computers
    • US20110153943A1
    • 2011-06-23
    • US12643800
    • 2009-12-21
    • Guy L. Guthrie; Charles F. Marino; William J. Starke; Derek E. Williams
    • G06F12/00; G06F12/14; G06F12/08
    • G06F12/0813; G06F12/0284; G06F15/167
    • A first SMP computer has first and second processing units and a first system memory pool, a second SMP computer has third and fourth processing units and a second system memory pool, and a third SMP computer has at least fifth and sixth processing units and third, fourth and fifth system memory pools. The fourth system memory pool is inaccessible to the third, fourth and sixth processing units and accessible to at least the second and fifth processing units, and the fifth system memory pool is inaccessible to the first, second and sixth processing units and accessible to at least the fourth and fifth processing units. A first interconnect couples the second processing unit for load-store coherent, ordered access to the fourth system memory pool, and a second interconnect couples the fourth processing unit for load-store coherent, ordered access to the fifth system memory pool.
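The abstract above, in effect, specifies an access matrix: which processing units may perform load-store coherent, ordered access to which system memory pools. Purely as an illustration (the unit and pool numbering follows the abstract, but the C++ names and any entries the abstract does not fix are invented assumptions), a small table can encode that matrix and spot-check the stated constraints:

    #include <array>
    #include <cstdio>

    // Illustrative sketch only: PU1..PU6 are the six processing units and
    // pool1..pool5 the five system memory pools named in the abstract.
    // Entries not dictated by the abstract are arbitrary assumptions.
    constexpr int kUnits = 6, kPools = 5;

    // kAccess[u][p] == true means processing unit u+1 can access pool p+1.
    constexpr std::array<std::array<bool, kPools>, kUnits> kAccess = {{
        // pool:   1      2      3      4      5
        {true,  false, false, false, false},  // PU1 (first SMP)
        {true,  false, false, true,  false},  // PU2 reaches pool4 via the first interconnect
        {false, true,  false, false, false},  // PU3 (second SMP)
        {false, true,  false, false, true },  // PU4 reaches pool5 via the second interconnect
        {false, false, true,  true,  true },  // PU5 (third SMP)
        {false, false, true,  false, false},  // PU6 sees neither pool4 nor pool5
    }};

    bool can_access(int unit, int pool) { return kAccess[unit - 1][pool - 1]; }

    int main() {
        // Spot-check the accessibility rules stated in the abstract.
        std::printf("PU2 -> pool4: %d (expected 1)\n", can_access(2, 4));
        std::printf("PU6 -> pool4: %d (expected 0)\n", can_access(6, 4));
        std::printf("PU4 -> pool5: %d (expected 1)\n", can_access(4, 5));
        std::printf("PU1 -> pool5: %d (expected 0)\n", can_access(1, 5));
    }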
    • 64. Patent application
    • Virtual Barrier Synchronization Cache Castout Election
    • US20100257316A1
    • 2010-10-07
    • US12419343
    • 2009-04-07
    • Ravi K. Arimilli; Guy L. Guthrie; Michael Siegel; William J. Starke; Derek E. Williams
    • G06F12/08; G06F12/00
    • G06F12/0811; G06F9/30101; G06F9/3851; G06F9/522
    • A data processing system includes an interconnect fabric, a system memory coupled to the interconnect fabric and including a virtual barrier synchronization region allocated to storage of virtual barrier synchronization registers (VBSRs), and a plurality of processing units coupled to the interconnect fabric and operable to access the virtual barrier synchronization region. Each of the plurality of processing units includes a processor core and a cache memory including a cache controller and a cache array that caches VBSR lines from the virtual barrier synchronization region of the system memory. The cache controller of a first processing unit, responsive to a memory access request from its processor core that targets a first VBSR line, transfers responsibility for writing back to the virtual barrier synchronization region a second VBSR line contemporaneously held in the cache arrays of first, second and third processing units. The responsibility is transferred via an election held over the interconnect fabric.
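To make the castout election easier to picture, the following is a minimal software sketch under invented assumptions, not the patented hardware mechanism: several caches hold the same VBSR line, and when one of them must cast it out, exactly one holder is elected to write the line back (here simply the lowest-numbered holder, an assumed tie-break rule) while the remaining holders drop their copies.

    #include <cstdio>
    #include <optional>
    #include <vector>

    // Illustrative model: each Cache either holds the VBSR line or not.
    struct Cache {
        int id;
        bool holds_line;
    };

    // Elect a single holder to write the VBSR line back to the virtual
    // barrier synchronization region; the lowest-id rule is an assumption.
    std::optional<int> elect_writer(const std::vector<Cache>& caches) {
        std::optional<int> winner;
        for (const Cache& c : caches) {
            if (c.holds_line && (!winner || c.id < *winner)) winner = c.id;
        }
        return winner;
    }

    int main() {
        std::vector<Cache> caches = {{0, true}, {1, true}, {2, false}, {3, true}};
        if (auto w = elect_writer(caches)) {
            std::printf("cache %d writes the VBSR line back; other holders invalidate\n", *w);
        }
    }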
    • 66. Granted patent
    • Cache-based speculation of stores following synchronizing operations
    • US08683140B2
    • 2014-03-25
    • US13456420
    • 2012-04-26
    • Guy L. Guthrie; William J. Starke; Derek E. Williams
    • G06F12/00
    • G06F12/0837; G06F12/0895
    • A method of processing store requests in a data processing system includes enqueuing a store request in a store queue of a cache memory of the data processing system. The store request identifies a target memory block by a target address and specifies store data. While the store request and a barrier request older than the store request are enqueued in the store queue, a read-claim machine of the cache memory is dispatched to acquire coherence ownership of target memory block of the store request. After coherence ownership of the target memory block is acquired and the barrier request has been retired from the store queue, a cache array of the cache memory is updated with the store data.
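As a rough software analogy for the ordering the abstract describes (all names and the queue model below are assumptions for illustration, not the patent's design): coherence ownership of the store's target block may be acquired while an older barrier is still enqueued, but the cache array is updated only after that barrier has retired.

    #include <cstdio>
    #include <deque>
    #include <unordered_map>

    struct Request {
        bool is_barrier = false;
        unsigned long addr = 0;
        int data = 0;
    };

    int main() {
        std::deque<Request> store_queue;
        std::unordered_map<unsigned long, bool> ownership;   // addr -> coherence ownership held
        std::unordered_map<unsigned long, int> cache_array;  // addr -> cached data

        store_queue.push_back({true});               // older barrier request
        store_queue.push_back({false, 0x1000, 42});  // younger store request

        // Speculative step: a read-claim machine acquires ownership of the
        // store's target block even though the barrier ahead has not retired.
        ownership[0x1000] = true;
        std::printf("ownership of 0x1000 acquired while barrier still enqueued\n");

        // Retire the barrier, then commit the store into the cache array.
        store_queue.pop_front();
        const Request st = store_queue.front();
        if (ownership[st.addr]) {
            cache_array[st.addr] = st.data;
            store_queue.pop_front();
            std::printf("store of %d to 0x%lx committed after barrier retired\n",
                        cache_array[st.addr], st.addr);
        }
    }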
    • 68. Granted patent
    • Performing a partial cache line storage-modifying operation based upon a hint
    • US08332588B2
    • 2012-12-11
    • US13349315
    • 2012-01-12
    • Ravi K. Arimilli; Guy L. Guthrie; William J. Starke; Derek E. Williams
    • G06F12/04
    • G06F12/0822
    • Analyzing pre-processed code includes identifying at least one storage-modifying construct specifying a storage-modifying memory access to a memory hierarchy of a data processing system and determining if more than one granule of a cache line of data containing multiple granules that is targeted by the storage-modifying construct is subsequently referenced by said pre-processed code. Post-processed code including a storage-modifying instruction corresponding to the at least one storage-modifying construct in the pre-processed code is generated and stored. Generating the post-processed code includes marking the storage-modifying instruction with a partial cache line hint indicating that said storage-modifying instruction targets less than a full cache line of data within a memory hierarchy if the analyzing indicates only one granule of the target cache line will be accessed while the cache line is held in the cache memory and otherwise refraining from marking the storage-modifying instruction with the partial cache line hint.
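A hedged sketch of that analysis (the line and granule sizes, names, and address layout below are arbitrary assumptions): a store is marked with the partial cache line hint only when the code that follows references no granule of the targeted line beyond the one being stored.

    #include <cstdio>
    #include <set>
    #include <vector>

    constexpr unsigned long kLineBytes = 128, kGranuleBytes = 32;

    struct StoreOp {
        unsigned long addr;
        bool partial_line_hint = false;
    };

    // Mark each store with the hint when only one granule of its cache line
    // (the store's own granule) appears among the later references.
    void mark_hints(std::vector<StoreOp>& stores,
                    const std::vector<unsigned long>& later_refs) {
        for (StoreOp& st : stores) {
            const unsigned long line = st.addr / kLineBytes;
            std::set<unsigned long> granules = {st.addr / kGranuleBytes};
            for (unsigned long ref : later_refs)
                if (ref / kLineBytes == line) granules.insert(ref / kGranuleBytes);
            st.partial_line_hint = (granules.size() == 1);
        }
    }

    int main() {
        std::vector<StoreOp> stores = {{0x1000}, {0x2000}};
        // 0x1008 falls in the same granule as 0x1000; 0x2040 is a different
        // granule of the line containing 0x2000.
        std::vector<unsigned long> later_refs = {0x1008, 0x2040};
        mark_hints(stores, later_refs);
        for (const StoreOp& st : stores)
            std::printf("store 0x%lx partial-line hint=%d\n", st.addr, st.partial_line_hint);
    }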
    • 69. Patent application
    • PERFORMING A PARTIAL CACHE LINE STORAGE-MODIFYING OPERATION BASED UPON A HINT
    • US20120265938A1
    • 2012-10-18
    • US13349315
    • 2012-01-12
    • Ravi K. Arimilli; Guy L. Guthrie; William J. Starke; Derek E. Williams
    • G06F12/08
    • G06F12/0822
    • Analyzing pre-processed code includes identifying at least one storage-modifying construct specifying a storage-modifying memory access to a memory hierarchy of a data processing system and determining if more than one granule of a cache line of data containing multiple granules that is targeted by the storage-modifying construct is subsequently referenced by said pre-processed code. Post-processed code including a storage-modifying instruction corresponding to the at least one storage-modifying construct in the pre-processed code is generated and stored. Generating the post-processed code includes marking the storage-modifying instruction with a partial cache line hint indicating that said storage-modifying instruction targets less than a full cache line of data within a memory hierarchy if the analyzing indicates only one granule of the target cache line will be accessed while the cache line is held in the cache memory and otherwise refraining from marking the storage-modifying instruction with the partial cache line hint.