    • 4. Granted patent
    • Title: Method and system for providing an improved store-in cache
    • Publication number: US07941728B2
    • Publication date: 2011-05-10
    • Application number: US11683285
    • Filing date: 2007-03-07
    • Inventors: Philip George Emma, Wing K. Luk, Thomas R. Puzak, Vijayalakshmi Srinivasan
    • IPC: H03M13/00
    • CPC: G06F12/0802, G06F11/1064, G06F11/1666
    • Abstract: A system and method of providing a cache system having a store-in policy, affording the advantages of store-in cache operation while simultaneously providing protection against soft errors in locally modified data, which would normally preclude the use of a store-in cache when reliability is paramount. The improved store-in cache mechanism includes a store-in L1 cache; at least one higher-level storage hierarchy; an ancillary store-only cache (ASOC) that holds the most recently stored-to lines of the store-in L1 cache; and a cache controller that controls storing of data to the ASOC and recovering of data from the ASOC, such that the data from the ASOC is used only if parity errors are encountered in the store-in L1 cache.
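The recovery mechanism the abstract describes can be sketched in a few lines: every store is mirrored into a small store-only side cache, and that copy is consulted only when a parity error is found in a dirty L1 line. This is a minimal illustrative model, not the patented design; the class name, the LRU capacity, and the parity flag are all assumptions for the sketch.

```python
# Minimal sketch (assumed names/sizes) of the ASOC idea: a store-in L1
# keeps dirty lines locally, an ancillary store-only cache (ASOC) mirrors
# the most recently stored-to lines, and the ASOC copy is used only when
# a parity error is detected in the L1.

from collections import OrderedDict

class StoreInL1WithASOC:
    def __init__(self, asoc_lines=4):
        self.l1 = {}               # addr -> [data, dirty, parity_ok]
        self.asoc = OrderedDict()  # addr -> data, LRU of recent stores
        self.asoc_lines = asoc_lines

    def store(self, addr, data):
        # Store-in policy: the write stays in L1 (dirty), no write-through.
        self.l1[addr] = [data, True, True]
        # Mirror the stored-to line in the ASOC, evicting the oldest line.
        self.asoc[addr] = data
        self.asoc.move_to_end(addr)
        while len(self.asoc) > self.asoc_lines:
            self.asoc.popitem(last=False)

    def corrupt(self, addr):
        # Model a soft error flipping bits in a resident L1 line.
        if addr in self.l1:
            self.l1[addr][2] = False

    def load(self, addr):
        data, dirty, parity_ok = self.l1[addr]
        if parity_ok:
            return data
        # Parity error in a dirty store-in line: the only good copy of a
        # locally modified line is in the ASOC, so recover from there.
        if dirty and addr in self.asoc:
            good = self.asoc[addr]
            self.l1[addr] = [good, dirty, True]
            return good
        raise RuntimeError("unrecoverable parity error at %#x" % addr)
```

For example, after `store(0x100, 42)` and an injected soft error via `corrupt(0x100)`, `load(0x100)` still returns the stored value by recovering it from the ASOC, which is precisely why a store-in policy becomes viable when reliability matters.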
    • 9. Granted patent
    • Title: Branch history guided instruction/data prefetching
    • Publication number: US06560693B1
    • Publication date: 2003-05-06
    • Application number: US09459739
    • Filing date: 1999-12-10
    • Inventors: Thomas R. Puzak, Allan M. Hartstein, Mark Charney, Daniel A. Prener, Peter H. Oden, Vijayalakshmi Srinivasan
    • IPC: G06F15/00
    • CPC: G06F9/383, G06F9/30047, G06F9/3455, G06F9/3836, G06F9/3844
    • Abstract: A mechanism is described that prefetches instructions and data into the cache using a branch instruction as a prefetch trigger. The prefetch is initiated if the predicted execution path after the branch instruction matches the previously seen execution path. This match of the execution paths is determined using a branch history queue that records the branch outcomes (taken/not taken) of the branches in the program. For each branch in this queue, a branch history mask records the outcomes of the next N branches and serves as an encoding of the execution path following the branch instruction. The branch instruction, along with the mask, is associated with a prefetch address (instruction or data address) and is used for triggering prefetches in the future when the branch is executed again. A mechanism is also described to improve the timeliness of a prefetch by suitably adjusting the value of N after observing the usefulness of the prefetched instructions or data.
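The trigger-and-match scheme in the abstract can be sketched as a small table keyed by the trigger branch: each entry pairs a mask of the next N branch outcomes with a prefetch address, and the prefetch fires only when the currently predicted path matches that mask. The class, table layout, and fixed N below are illustrative assumptions (the patent adjusts N adaptively based on prefetch usefulness).

```python
# Illustrative sketch (assumed data structures) of branch-history-guided
# prefetching: a trigger branch is associated with a mask encoding the
# taken/not-taken outcomes of the next N branches plus a prefetch address;
# the prefetch is issued only when the predicted path matches the mask.

N = 3  # length of the compared path; adjusted dynamically in the patent

class BranchGuidedPrefetcher:
    def __init__(self):
        # trigger branch PC -> (mask of next N outcomes, prefetch address)
        self.table = {}

    def train(self, branch_pc, next_outcomes, prefetch_addr):
        # Record the execution path (1 = taken, 0 = not taken) that
        # followed this branch, and the address worth prefetching then.
        self.table[branch_pc] = (tuple(next_outcomes[:N]), prefetch_addr)

    def on_branch(self, branch_pc, predicted_outcomes):
        # When the branch executes again, issue the prefetch only if the
        # predicted path matches the previously seen path.
        entry = self.table.get(branch_pc)
        if entry and tuple(predicted_outcomes[:N]) == entry[0]:
            return entry[1]  # address to prefetch
        return None
```

The path match is the key filter: training on path `[1, 0, 1]` makes a later prediction of `[1, 0, 1]` at the same branch return the prefetch address, while `[1, 1, 1]` returns nothing, avoiding prefetches down paths that will not be taken.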