    • 1. Granted invention patent
    • Updating partial cache lines in a data processing system
    • Publication No.: US08117390B2
    • Publication date: 2012-02-14
    • Application No.: US12424434
    • Filing date: 2009-04-15
    • Inventors: David W. Cummings, Guy L. Guthrie, Hugh Shen, William J. Starke, Derek E. Williams, Phillip G. Williams
    • IPC: G06F13/00
    • CPC: G06F12/0822; G06F12/0897; G06F2212/507
    • A processing unit for a data processing system includes a processor core having one or more execution units for processing instructions and a register file for storing data accessed in processing of the instructions. The processing unit also includes a multi-level cache hierarchy coupled to and supporting the processor core. The multi-level cache hierarchy includes at least one upper level of cache memory having a lower access latency and at least one lower level of cache memory having a higher access latency. The lower level of cache memory, responsive to receipt of a memory access request that hits only a partial cache line in the lower level cache memory, sources the partial cache line to the at least one upper level cache memory to service the memory access request. The at least one upper level cache memory services the memory access request without caching the partial cache line.
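The abstract above describes servicing a request from a partial cache line without installing it at the upper level. The following is a minimal illustrative sketch of that behavior; all class names, sizes, and structure are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch: on a memory access that hits only a partial cache
# line in the lower-level cache, the lower level sources the partial line
# upward, and the upper level services the request WITHOUT caching it.
# LINE_SIZE and PARTIAL granularity are illustrative assumptions.

LINE_SIZE = 128  # full cache line, bytes (assumed)

class LowerLevelCache:
    def __init__(self):
        # maps line address -> bytes held (may be shorter than LINE_SIZE)
        self.lines = {}

    def lookup(self, line_addr):
        """Return (data, is_partial), or (None, False) on a miss."""
        data = self.lines.get(line_addr)
        if data is None:
            return None, False
        return data, len(data) < LINE_SIZE

class UpperLevelCache:
    def __init__(self, lower):
        self.lower = lower
        self.lines = {}  # only full lines are ever installed here

    def read(self, addr):
        line_addr = addr - (addr % LINE_SIZE)
        offset = addr % LINE_SIZE
        if line_addr in self.lines:               # upper-level hit
            return self.lines[line_addr][offset]
        data, is_partial = self.lower.lookup(line_addr)
        if data is None:
            raise KeyError("miss in both levels (would go to memory)")
        if is_partial:
            # Service the request directly from the sourced partial line,
            # but do NOT install it in the upper level.
            return data[offset]
        self.lines[line_addr] = data              # full line: install normally
        return data[offset]

l2 = LowerLevelCache()
l2.lines[0] = bytes(range(32))        # partial line at address 0
l2.lines[LINE_SIZE] = bytes(LINE_SIZE)  # full line at address 128

l1 = UpperLevelCache(l2)
v = l1.read(5)                 # serviced from the partial line
assert v == 5
assert 0 not in l1.lines       # partial line was not cached in L1
l1.read(LINE_SIZE + 3)
assert LINE_SIZE in l1.lines   # full line was installed as usual
```

The key design point the abstract claims is the `is_partial` branch: the upper level forwards data to the core but keeps partial lines out of its own array.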
    • 4. Granted invention patent
    • Bandwidth of a cache directory by slicing the cache directory into two smaller cache directories and replicating snooping logic for each sliced cache directory
    • Publication No.: US08135910B2
    • Publication date: 2012-03-13
    • Application No.: US11056721
    • Filing date: 2005-02-11
    • Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams, Phillip G. Williams
    • IPC: G06F12/00
    • CPC: G06F12/0831; G06F12/0851
    • A cache, system and method for improving the snoop bandwidth of a cache directory. A cache directory may be sliced into two smaller cache directories each with its own snooping logic. By having two cache directories that can be accessed simultaneously, the bandwidth can be essentially doubled. Furthermore, a “frequency matcher” may shift the cycle speed to a lower speed upon receiving snoop addresses from the interconnect thereby slowing down the rate at which requests are transmitted to the dispatch pipelines. Each dispatch pipeline is coupled to a sliced cache directory and is configured to search the cache directory to determine if data at the received addresses is stored in the cache memory. As a result of slowing down the rate at which requests are transmitted to the dispatch pipelines and accessing the two sliced cache directories simultaneously, the bandwidth or throughput of the cache directory may be improved.
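The slicing idea in the abstract above can be illustrated with a small model: two directory slices, each with its own snoop logic, selected by one address bit, so snoops to different slices proceed in parallel. This is a hedged sketch under assumed names and a simplified cycle model, not the patent's implementation.

```python
# Illustrative model (all names and the cycle accounting are assumptions):
# a cache directory sliced into two smaller directories, each with its own
# snoop logic. One address bit steers each snoop to a slice, so two snoops
# that land in different slices can be looked up simultaneously, roughly
# doubling directory bandwidth.

LINE_SIZE = 128

class DirectorySlice:
    def __init__(self):
        self.entries = set()   # line addresses tracked by this slice
        self.lookups = 0       # per-slice snoop traffic counter

    def snoop(self, line_addr):
        self.lookups += 1
        return line_addr in self.entries

class SlicedDirectory:
    """Two slices selected by one bit of the line address."""
    def __init__(self):
        self.slices = [DirectorySlice(), DirectorySlice()]

    def _slice_for(self, line_addr):
        return self.slices[(line_addr // LINE_SIZE) & 1]

    def install(self, line_addr):
        self._slice_for(line_addr).entries.add(line_addr)

    def snoop_pair(self, addr_a, addr_b):
        """Process two snoops; they complete in one 'cycle' only when
        they map to different slices, otherwise they serialize."""
        sa, sb = self._slice_for(addr_a), self._slice_for(addr_b)
        hits = (sa.snoop(addr_a), sb.snoop(addr_b))
        cycles = 1 if sa is not sb else 2   # slice conflict serializes
        return hits, cycles

d = SlicedDirectory()
d.install(0)             # lands in the even slice
d.install(LINE_SIZE)     # lands in the odd slice
hits, cycles = d.snoop_pair(0, LINE_SIZE)
assert hits == (True, True) and cycles == 1   # different slices: parallel
hits, cycles = d.snoop_pair(0, 2 * LINE_SIZE)
assert cycles == 2                            # same slice: serialized
```

The abstract's "frequency matcher" (slowing the rate at which the interconnect feeds the dispatch pipelines) is omitted here; the model only shows why simultaneous access to two slices can approximately double throughput.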
    • 6. Invention patent application
    • Cache-To-Cache Cast-In
    • Publication No.: US20100153647A1
    • Publication date: 2010-06-17
    • Application No.: US12335975
    • Filing date: 2008-12-16
    • Inventors: Guy L. Guthrie, Alvan W. Ng, Michael S. Siegel, William J. Starke, Derek E. Williams, Phillip G. Williams
    • IPC: G06F12/08
    • CPC: G06F12/0811; G06F12/0804; G06F12/0831; G06F12/0862
    • A data processing system includes a first processing unit and a second processing unit coupled by an interconnect fabric. The first processing unit has a first processor core and associated first upper and first lower level caches, and the second processing unit has a second processor core and associated second upper and lower level caches. In response to a data request, a victim cache line is selected for castout from the first lower level cache. The first processing unit issues on the interconnect fabric a lateral castout (LCO) command that identifies the victim cache line to be castout from the first lower level cache and indicates that a lower level cache is an intended destination. In response to a coherence response indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in the second lower level cache.
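The lateral castout (LCO) protocol described in the abstract above can be sketched as follows. This is a simplified model under assumed names: the interconnect broadcast and coherence response are reduced to a peer scan and a boolean accept, which is not how the real fabric works.

```python
# Hedged sketch of a lateral castout (LCO): when a victim line is evicted
# from one lower-level cache, an LCO command is issued on the interconnect;
# if a peer lower-level cache accepts (the coherence response indicates
# success), the line moves laterally into that peer instead of being
# written back toward memory. All structure here is illustrative.

class LowerLevelCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}          # line address -> data

    def accept_cast_in(self, line_addr, data):
        """Peer-side handler for an LCO command."""
        if len(self.lines) >= self.capacity:
            return False         # coherence response: failure/retry
        self.lines[line_addr] = data
        return True              # coherence response: success

def lateral_castout(source, peers, victim_addr):
    """Evict victim_addr from `source`; try to place it in a peer cache."""
    data = source.lines[victim_addr]
    for peer in peers:           # interconnect broadcast, simplified to a scan
        if peer.accept_cast_in(victim_addr, data):
            # Removed from the source only after a successful response,
            # matching the abstract's ordering.
            del source.lines[victim_addr]
            return peer
    return None                  # no acceptor: would fall back to memory

l2_a = LowerLevelCache(capacity=2)
l2_b = LowerLevelCache(capacity=2)
l2_a.lines[0x100] = b"victim"
dest = lateral_castout(l2_a, [l2_b], 0x100)
assert dest is l2_b
assert 0x100 not in l2_a.lines and l2_b.lines[0x100] == b"victim"
```

The point of the lateral move is that the victim line stays in a same-level cache on the fabric, so a later access can be satisfied cache-to-cache rather than from memory.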