    • 51. Granted Invention Patent
    • Techniques for data prefetching using indirect addressing with offset
    • Publication No.: US08161264B2
    • Publication Date: 2012-04-17
    • Application No.: US12024246
    • Filing Date: 2008-02-01
    • Inventors: Ravi K. Arimilli; Balaram Sinharoy; William E. Speight; Lixin Zhang
    • IPC: G06F13/00
    • CPC: G06F12/0862; G06F12/1045; G06F2212/6028
    • A technique for performing data prefetching using indirect addressing includes determining a first memory address of a pointer associated with a data prefetch instruction. Content, that is included in a first data block (e.g., a first cache line) of a memory, at the first memory address is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block (e.g., a second cache line) that includes data at the second memory address is then fetched (e.g., from the memory or another memory). A data prefetch instruction may be indicated by a unique operational code (opcode), a unique extended opcode, or a field (including one or more bits) in an instruction.
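The abstract above walks through a concrete sequence: read the pointer stored at a first address, add the instruction's offset, and prefetch the cache line at the resulting second address. Purely as an illustration, the C++ sketch below mimics that sequence in software using the GCC/Clang `__builtin_prefetch` intrinsic; the helper name and signature are assumptions, since the patented mechanism runs inside the processor's prefetch hardware rather than in application code.

```cpp
#include <cstddef>

// Software analogue of the indirect-addressing prefetch flow (hypothetical helper):
// 1) pointer_addr is the first memory address, where the pointer tied to the
//    data prefetch instruction is stored;
// 2) the content fetched from that address is itself an address;
// 3) adding the instruction's offset yields the first offset memory address;
// 4) that result is used as the second memory address, and the data block
//    (cache line) containing it is prefetched.
inline void indirect_prefetch(const void* const* pointer_addr, std::ptrdiff_t offset) {
    // Step 2: read the pointer value held in the first data block.
    const char* content = static_cast<const char*>(*pointer_addr);
    // Steps 3-4: form the offset address and prefetch the line that holds it
    // (rw = 0 marks a read, locality = 3 asks to keep the line in cache).
    __builtin_prefetch(content + offset, 0, 3);
}
```

In a loop over an array of pointers to records, a call like this issued a few iterations ahead of the consuming code would prefetch the field at `offset` inside each pointed-to record.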
    • 52. Invention Patent Application
    • Assigning Memory to On-Chip Coherence Domains
    • Publication No.: US20110296115A1
    • Publication Date: 2011-12-01
    • Application No.: US12787939
    • Filing Date: 2010-05-26
    • Inventors: William E. Speight; Lixin Zhang
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/0831
    • A mechanism is provided for assigning memory to on-chip cache coherence domains. The mechanism assigns caches within a processing unit to coherence domains. The mechanism then assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives the cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If a cache line is found in a cache within the coherence domain, the cache line is returned to the originating cache by the cache containing the cache line either directly or through the memory controller. If a cache line is not found within the coherence domain, the memory controller accesses the memory to retrieve the cache line.
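To make the lookup-table idea in this abstract concrete (chunks of memory assigned to coherence domains, consulted by the memory controller on a cache miss to decide where to send snoop requests), here is a small C++ sketch. The chunk size, structure name, and map-based layout are assumptions for illustration, not details taken from the patent.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical model of a memory-chunk-to-coherence-domain lookup table.
struct CoherenceDomainMap {
    static constexpr std::uint64_t kChunkBytes = 2ull * 1024 * 1024;  // assumed 2 MiB chunks
    std::unordered_map<std::uint64_t, int> chunk_to_domain;           // chunk index -> domain id

    // Initial assignment of a memory chunk to a coherence domain.
    void assign(std::uint64_t chunk_index, int domain) { chunk_to_domain[chunk_index] = domain; }

    // Reassignment driven by the monitored needs of applications running in the domains.
    void reassign(std::uint64_t chunk_index, int new_domain) { chunk_to_domain[chunk_index] = new_domain; }

    // On a cache miss, map the physical address to the domain whose caches should be
    // snooped; -1 means no assignment, so the controller would go straight to memory.
    int domain_for(std::uint64_t phys_addr) const {
        auto it = chunk_to_domain.find(phys_addr / kChunkBytes);
        return it == chunk_to_domain.end() ? -1 : it->second;
    }
};
```

Restricting snoops to the owning domain is the point of the table: only caches inside that domain need to be probed before the controller falls back to memory.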
    • 55. Granted Invention Patent
    • Dynamic adjustment of prefetch stream priority
    • Publication No.: US07958316B2
    • Publication Date: 2011-06-07
    • Application No.: US12024411
    • Filing Date: 2008-02-01
    • Inventors: William E. Speight; Lixin Zhang
    • IPC: G06F12/12; G06F5/12
    • CPC: G06F12/0862; G06F2212/1041; G06F2212/6024
    • A method, processor, and data processing system for dynamically adjusting a prefetch stream priority based on the consumption rate of the data by the processor. The method includes a prefetch engine issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem. The first prefetch stream has a first assigned priority that determines a relative order for scheduling prefetch requests of the first prefetch stream relative to other prefetch requests of other prefetch streams. Based on the receipt of a processor demand for the data before the data returns to the cache, or return of the data a long time before receipt of the processor demand, logic of the prefetch engine dynamically changes the first assigned priority to a second higher or lower priority, which priority is subsequently utilized to schedule and issue a next prefetch request of the first prefetch stream.
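The adjustment rule in this abstract is essentially a feedback policy: raise a stream's priority when the core demands the data before the prefetch has delivered it, and lower it when the data returns long before the demand arrives. A minimal sketch of that policy follows, with assumed priority bounds and hypothetical method names; the real logic sits inside the prefetch engine.

```cpp
#include <algorithm>

// Hypothetical per-stream priority state for the feedback policy in the abstract.
struct PrefetchStream {
    int priority = 2;  // assumed scale: 0 (lowest) .. 4 (highest)

    // Processor demanded the data before the prefetch returned it: too late, so
    // raise the stream's priority for scheduling its next prefetch request.
    void on_demand_before_return() { priority = std::min(priority + 1, 4); }

    // Data returned a long time before the demand arrived: too early, so lower it.
    void on_return_long_before_demand() { priority = std::max(priority - 1, 0); }
};
// The prefetch engine would then order the stream's next prefetch request ahead of
// or behind requests from other streams according to the updated priority.
```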
    • 57. Granted Invention Patent
    • Branch target address cache
    • Publication No.: US07783870B2
    • Publication Date: 2010-08-24
    • Application No.: US11837893
    • Filing Date: 2007-08-13
    • Inventors: David S. Levitan; William E. Speight; Lixin Zhang
    • IPC: G06F9/38; G06F9/32
    • CPC: G06F9/3804; G06F9/3844
    • A processor includes an execution unit and instruction sequencing logic that fetches instructions from a memory system for execution. The instruction sequencing logic includes branch logic that outputs predicted branch target addresses for use as instruction fetch addresses. The branch logic includes a level one branch target address cache (BTAC) and a level two BTAC each having a respective plurality of entries each associating at least a tag with a predicted branch target address. The branch logic accesses the level one and level two BTACs in parallel with a tag portion of a first instruction fetch address to obtain a first predicted branch target address from the level one BTAC for use as a second instruction fetch address in a first processor clock cycle and a second predicted branch target address from the level two BTAC for use as a third instruction fetch address in a later second processor clock cycle.
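As a rough model of the two-level structure this abstract describes, the C++ sketch below probes a level one and a level two BTAC in parallel with the tag portion of the fetch address; an L1 hit supplies a predicted fetch address that is usable first, while an L2 hit supplies one usable in a later cycle. The tag derivation, map-based storage, and `Prediction` type are illustrative assumptions rather than the patented design.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical two-level branch target address cache (BTAC) model.
struct BranchTargetAddressCache {
    std::unordered_map<std::uint64_t, std::uint64_t> l1;  // tag -> predicted branch target
    std::unordered_map<std::uint64_t, std::uint64_t> l2;

    struct Prediction {
        std::optional<std::uint64_t> next_cycle;   // from the L1 BTAC, available first
        std::optional<std::uint64_t> later_cycle;  // from the L2 BTAC, available in a later cycle
    };

    // Both levels are accessed in parallel with the tag portion of the fetch address.
    Prediction lookup(std::uint64_t fetch_addr) const {
        const std::uint64_t tag = fetch_addr >> 6;  // assumed: drop fetch-block offset bits
        Prediction p;
        if (auto it = l1.find(tag); it != l1.end()) p.next_cycle = it->second;
        if (auto it = l2.find(tag); it != l2.end()) p.later_cycle = it->second;
        return p;
    }
};
```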