    • 21. Granted patent
    • Cache residence prediction
    • Publication number: US07904657B2
    • Publication date: 2011-03-08
    • Application number: US11779749
    • Filing date: 2007-07-18
    • Inventors: Xiaowei Shen, Jaehyuk Huh, Balaram Sinharoy
    • IPC: G06F12/00
    • CPC: G06F12/0831, G06F12/0893, G06F2212/507
    • Abstract: The present invention proposes a novel cache residence prediction mechanism that predicts whether requested data of a cache miss can be found in another cache. The memory controller can use the prediction result to determine if it should immediately initiate a memory access, or initiate no memory access until a cache snoop response shows that the requested data cannot be supplied by a cache. The cache residence prediction mechanism can be implemented at the cache side, the memory side, or both. A cache-side prediction mechanism can predict that data requested by a cache miss can be found in another cache if the cache miss address matches an address tag of a cache line in the requesting cache and the cache line is in an invalid state. A memory-side prediction mechanism can make effective predictions based on observed memory and cache operations that are recorded in a prediction table. (An illustrative sketch follows this entry.)
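The abstract above states the cache-side heuristic concretely: on a miss whose address matches the tag of a line held in the Invalid state, the requester can predict that another cache still holds a valid copy. Below is a minimal Python sketch of that heuristic; it is an illustration only, not the patented implementation, and every name in it (PredictingCache, predict_remote_residence, the simplified MESI states) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    tag: int
    state: str  # simplified MESI state: "M", "E", "S", or "I"

class PredictingCache:
    """Toy set-indexed cache; associativity and replacement are abstracted away."""

    def __init__(self, num_sets: int):
        self.num_sets = num_sets
        self.sets = [[] for _ in range(num_sets)]  # each set holds CacheLine objects

    def _lookup(self, address: int):
        set_index = address % self.num_sets
        tag = address // self.num_sets
        for line in self.sets[set_index]:
            if line.tag == tag:
                return line
        return None

    def predict_remote_residence(self, miss_address: int) -> bool:
        """Cache-side prediction from the abstract: a tag match on a line in the
        Invalid state suggests the line was invalidated by another cache's store,
        so that cache likely still holds the data (favor a cache-to-cache transfer)."""
        line = self._lookup(miss_address)
        return line is not None and line.state == "I"

# Hypothetical usage: a memory controller could consult the prediction to decide
# whether to start the DRAM access immediately or wait for snoop responses.
cache = PredictingCache(num_sets=4)
cache.sets[1].append(CacheLine(tag=5, state="I"))   # address 21 maps to set 1, tag 5
print(cache.predict_remote_residence(21))  # True  -> defer the memory access
print(cache.predict_remote_residence(22))  # False -> initiate the memory access now
```

The memory-side mechanism the abstract also mentions (a prediction table driven by observed memory and cache operations) is not sketched here.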
    • 22. Published patent application
    • Cache Residence Prediction
    • Publication number: US20090024797A1
    • Publication date: 2009-01-22
    • Application number: US11779749
    • Filing date: 2007-07-18
    • Inventors: Xiaowei Shen, Jaehyuk Huh, Balaram Sinharoy
    • IPC: G06F12/08
    • CPC: G06F12/0831, G06F12/0893, G06F2212/507
    • Abstract: The present invention proposes a novel cache residence prediction mechanism that predicts whether requested data of a cache miss can be found in another cache. The memory controller can use the prediction result to determine if it should immediately initiate a memory access, or initiate no memory access until a cache snoop response shows that the requested data cannot be supplied by a cache. The cache residence prediction mechanism can be implemented at the cache side, the memory side, or both. A cache-side prediction mechanism can predict that data requested by a cache miss can be found in another cache if the cache miss address matches an address tag of a cache line in the requesting cache and the cache line is in an invalid state. A memory-side prediction mechanism can make effective predictions based on observed memory and cache operations that are recorded in a prediction table.
    • 25. Published patent application
    • Location-aware cache-to-cache transfers
    • Publication number: US20050240735A1
    • Publication date: 2005-10-27
    • Application number: US10833197
    • Filing date: 2004-04-27
    • Inventors: Xiaowei Shen, Jaehyuk Huh, Balaram Sinharoy
    • IPC: G06F12/00, G06F12/08
    • CPC: G06F12/0831, G06F12/0813, Y02D10/13
    • Abstract: In shared-memory multiprocessor systems, cache interventions from different sourcing caches can result in different cache intervention costs. With location-aware cache coherence, when a cache receives a data request, the cache can determine whether sourcing the data from the cache will result in less cache intervention cost than sourcing the data from another cache. The decision can be made based on appropriate information maintained in the cache or collected from snoop responses from other caches. If the requested data is found in more than one cache, the cache that has or likely has the lowest cache intervention cost is generally responsible for supplying the data. The intervention cost can be measured by performance metrics that include, but are not limited to, communication latency, bandwidth consumption, load balance, and power consumption. (An illustrative sketch follows this entry.)
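As a rough illustration of the selection step described in the abstract above, the Python sketch below picks a sourcing cache from snoop responses by minimizing an estimated intervention cost. The cost model combining latency and link utilization is an assumption made only for this example; the abstract merely lists candidate metrics (latency, bandwidth, load balance, power), and all identifiers here are hypothetical.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class SnoopResponse:
    cache_id: int
    has_line: bool
    latency_cycles: int      # estimated latency from this cache to the requester
    link_utilization: float  # 0.0 .. 1.0, crude bandwidth/load-balance indicator

def intervention_cost(resp: SnoopResponse) -> float:
    # Assumed cost model for illustration: latency inflated by current link load.
    return resp.latency_cycles * (1.0 + resp.link_utilization)

def choose_sourcing_cache(responses: Iterable[SnoopResponse]) -> Optional[int]:
    """Return the id of the cache that should supply the data, or None if no
    cache holds the line (in which case memory must source it)."""
    holders = [r for r in responses if r.has_line]
    if not holders:
        return None
    return min(holders, key=intervention_cost).cache_id

# Example: cache 1 is closer but heavily loaded; cache 2 is farther but idle.
responses = [
    SnoopResponse(cache_id=1, has_line=True,  latency_cycles=40, link_utilization=0.9),
    SnoopResponse(cache_id=2, has_line=True,  latency_cycles=55, link_utilization=0.1),
    SnoopResponse(cache_id=3, has_line=False, latency_cycles=30, link_utilization=0.2),
]
print(choose_sourcing_cache(responses))  # 2  (cost 60.5 beats cache 1's 76.0)
```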
    • 26. Granted patent
    • Location-aware cache-to-cache transfers
    • Publication number: US07676637B2
    • Publication date: 2010-03-09
    • Application number: US10833197
    • Filing date: 2004-04-27
    • Inventors: Xiaowei Shen, Jaehyuk Huh, Balaram Sinharoy
    • IPC: G06F12/00
    • CPC: G06F12/0831, G06F12/0813, Y02D10/13
    • Abstract: In shared-memory multiprocessor systems, cache interventions from different sourcing caches can result in different cache intervention costs. With location-aware cache coherence, when a cache receives a data request, the cache can determine whether sourcing the data from the cache will result in less cache intervention cost than sourcing the data from another cache. The decision can be made based on appropriate information maintained in the cache or collected from snoop responses from other caches. If the requested data is found in more than one cache, the cache that has or likely has the lowest cache intervention cost is generally responsible for supplying the data. The intervention cost can be measured by performance metrics that include, but are not limited to, communication latency, bandwidth consumption, load balance, and power consumption.