    • 2. Invention application
    • Title: Cache-To-Cache Cast-In
    • Publication No.: US20100153647A1
    • Publication date: 2010-06-17
    • Application No.: US12335975
    • Filing date: 2008-12-16
    • Inventors: Guy L. Guthrie, Alvan W. Ng, Michael S. Siegel, William J. Starke, Derek E. Williams, Phillip G. Williams
    • IPC: G06F12/08
    • Classifications: G06F12/0811, G06F12/0804, G06F12/0831, G06F12/0862
    • Abstract: A data processing system includes a first processing unit and a second processing unit coupled by an interconnect fabric. The first processing unit has a first processor core and associated first upper and first lower level caches, and the second processing unit has a second processor core and associated second upper and lower level caches. In response to a data request, a victim cache line is selected for castout from the first lower level cache. The first processing unit issues on the interconnect fabric a lateral castout (LCO) command that identifies the victim cache line to be castout from the first lower level cache and indicates that a lower level cache is an intended destination. In response to a coherence response indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in the second lower level cache.
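The abstract describes a lateral castout (LCO): instead of writing a victim line back to memory, a lower level cache offers it to a peer lower level cache over the interconnect and completes the move only on a successful coherence response. The following is a minimal Python sketch of that flow under simplifying assumptions; the class and method names are hypothetical, and the coherence protocol is reduced to a single accept/reject reply from the peer.

```python
# Minimal sketch of a lateral castout (LCO), per the abstract above.
# Hypothetical names; coherence is reduced to an accept/reject response.

class L3Cache:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.lines = {}  # address -> data

    def is_full(self):
        return len(self.lines) >= self.capacity

    def accept_castin(self, addr, data):
        """Peer-side handler: accept the victim line if there is room."""
        if self.is_full():
            return False  # coherence response: failure/retry
        self.lines[addr] = data
        return True       # coherence response: success

    def fill(self, addr, data, peer):
        """Install a new line; on a capacity conflict, try an LCO first."""
        if self.is_full():
            # Victim selection (insertion order stands in for LRU here).
            victim_addr, victim_data = next(iter(self.lines.items()))
            if peer.accept_castin(victim_addr, victim_data):
                # LCO succeeded: the line leaves this cache and is
                # now held in the peer lower level cache.
                del self.lines[victim_addr]
            else:
                # LCO failed: fall back to a conventional castout
                # (write-back to memory, not modeled in this sketch).
                del self.lines[victim_addr]
        self.lines[addr] = data

l3_a = L3Cache("L3-A", capacity=2)
l3_b = L3Cache("L3-B", capacity=2)
l3_a.fill(0x100, "A", peer=l3_b)
l3_a.fill(0x200, "B", peer=l3_b)
l3_a.fill(0x300, "C", peer=l3_b)  # full: victim 0x100 is cast in to L3-B
print(sorted(l3_a.lines), sorted(l3_b.lines))
```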
    • 4. Invention grant
    • Title: Lateral cache-to-cache cast-in
    • Publication No.: US08225045B2
    • Publication date: 2012-07-17
    • Application No.: US12335975
    • Filing date: 2008-12-16
    • Inventors: Guy L. Guthrie, Alvan W. Ng, Michael S. Siegel, William J. Starke, Derek E. Williams, Phillip G. Williams
    • IPC: G06F12/00
    • Classifications: G06F12/0811, G06F12/0804, G06F12/0831, G06F12/0862
    • Abstract: A data processing system includes a first processing unit and a second processing unit coupled by an interconnect fabric. The first processing unit has a first processor core and associated first upper and first lower level caches, and the second processing unit has a second processor core and associated second upper and lower level caches. In response to a data request, a victim cache line is selected for castout from the first lower level cache. The first processing unit issues on the interconnect fabric a lateral castout (LCO) command that identifies the victim cache line to be castout from the first lower level cache and indicates that a lower level cache is an intended destination. In response to a coherence response indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in the second lower level cache.
    • 8. Invention grant
    • Title: Bandwidth of a cache directory by slicing the cache directory into two smaller cache directories and replicating snooping logic for each sliced cache directory
    • Publication No.: US08135910B2
    • Publication date: 2012-03-13
    • Application No.: US11056721
    • Filing date: 2005-02-11
    • Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams, Phillip G. Williams
    • IPC: G06F12/00
    • Classifications: G06F12/0831, G06F12/0851
    • Abstract: A cache, system and method for improving the snoop bandwidth of a cache directory. A cache directory may be sliced into two smaller cache directories each with its own snooping logic. By having two cache directories that can be accessed simultaneously, the bandwidth can be essentially doubled. Furthermore, a “frequency matcher” may shift the cycle speed to a lower speed upon receiving snoop addresses from the interconnect thereby slowing down the rate at which requests are transmitted to the dispatch pipelines. Each dispatch pipeline is coupled to a sliced cache directory and is configured to search the cache directory to determine if data at the received addresses is stored in the cache memory. As a result of slowing down the rate at which requests are transmitted to the dispatch pipelines and accessing the two sliced cache directories simultaneously, the bandwidth or throughput of the cache directory may be improved.
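The core idea in this abstract is to split one directory into two address-hashed slices, each with its own replicated snooping logic, so two snoop lookups can be serviced concurrently. Below is a minimal Python sketch of that structure under simplifying assumptions; the names, the slice-select bit, and the pairwise snoop interface are all hypothetical, and the frequency matcher is not modeled.

```python
# Minimal sketch of a sliced cache directory, per the abstract above.
# Hypothetical names; two address-hashed slices, each with its own
# snoop lookup, stand in for the replicated snooping logic.

class DirectorySlice:
    def __init__(self):
        self.entries = set()  # addresses of lines tracked by this slice

    def snoop(self, addr):
        """Replicated snooping logic: directory lookup for one address."""
        return addr in self.entries

class SlicedDirectory:
    def __init__(self):
        self.slices = (DirectorySlice(), DirectorySlice())

    def _select(self, addr):
        # Slice select: one line-address bit (64-byte lines assumed).
        return self.slices[(addr >> 6) & 1]

    def install(self, addr):
        self._select(addr).entries.add(addr)

    def snoop_pair(self, addr_a, addr_b):
        """Two lookups per cycle, conceptually concurrent when the
        addresses hash to different slices."""
        return self._select(addr_a).snoop(addr_a), self._select(addr_b).snoop(addr_b)

d = SlicedDirectory()
d.install(0x0000)  # bit 6 clear -> slice 0
d.install(0x0040)  # bit 6 set   -> slice 1
print(d.snoop_pair(0x0000, 0x0040))  # (True, True), one lookup per slice
```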