    • 52. Invention application
    • Cache Partitioning in Virtualized Environments
    • Publication: US20110055827A1, published 2011-03-03
    • Application: US12546791, filed 2009-08-25
    • Inventors: Jiang Lin, Lixin Zhang
    • IPC: G06F12/08, G06F9/455
    • CPC: G06F9/455, G06F12/084, G06F12/0842, G06F12/0848, G06F12/0864, G06F12/121
    • Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
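The vertical (way) partitioning that the abstract describes can be sketched in simplified form. All names, the way count, and the bitmask encoding below are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of way-based (vertical) cache partitioning:
# each virtual machine ID maps to a subset of ways in a set-associative
# cache; on a miss, the victim line is chosen only from that VM's ways.

NUM_WAYS = 8

# Partition table: virtual ID -> bitmask of ways the VM may occupy
# (standing in for the patent's per-VM vertical partition control).
partition_table = {
    1: 0b00001111,  # VM 1 restricted to ways 0-3
    2: 0b11110000,  # VM 2 restricted to ways 4-7
}

def select_victim_way(virtual_id, lru_order):
    """Pick the least-recently-used way among the ways allowed
    for this virtual ID by the partition table."""
    allowed = partition_table[virtual_id]
    for way in lru_order:  # ways ordered from least to most recently used
        if allowed & (1 << way):
            return way
    raise RuntimeError("no way allocated to this virtual ID")

# A miss by VM 2 must evict from ways 4-7 even if way 0 is globally older.
victim = select_victim_way(2, lru_order=[0, 5, 1, 4, 2, 6, 3, 7])
```

Here `victim` is way 5: way 0 is older but belongs to VM 1's partition, so replacement stays inside VM 2's allotted ways.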
    • 55. Invention application
    • PREDICATION SUPPORTING CODE GENERATION BY INDICATING PATH ASSOCIATIONS OF SYMMETRICALLY PLACED WRITE INSTRUCTIONS
    • Publication: US20090288063A1, published 2009-11-19
    • Application: US12123083, filed 2008-05-19
    • Inventors: Ram Rangan, Mark W. Stephenson, Lixin Zhang
    • IPC: G06F9/44
    • CPC: G06F8/4451
    • Abstract: A predication technique for out-of-order instruction processing provides efficient out-of-order execution with low hardware overhead. A special op-code demarks unified regions of program code that contain predicated instructions that depend on the resolution of a condition. Field(s) or operand(s) associated with the special op-code indicate the number of instructions that follow the op-code and also contain an indication of the association of each instruction with its corresponding conditional path. Each conditional register write in a region has a corresponding register write for each conditional path, with additional register writes inserted by the compiler if symmetry is not already present, forming a coupled set of register writes. Therefore, a unified instruction stream can be decoded and dispatched with the register writes all associated with the same re-name resource, and the conditional register write is resolved by executing the particular instruction specified by the resolved condition.
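The behavior of such a predicated region can be modeled at a high level. The encoding, path tags, and field names below are assumptions made for illustration; a real implementation operates on rename resources in hardware, not on a register dictionary:

```python
# Illustrative model of a predicated region: the region header tags
# each instruction with a conditional path; once the condition
# resolves, only writes on the taken path (or common to both paths)
# take effect, so each coupled set retires exactly one write.

from dataclasses import dataclass

@dataclass
class PredInsn:
    dest: str    # destination register
    value: int   # value to write (stands in for a real computation)
    path: str    # "taken", "not_taken", or "both"

def execute_region(insns, condition_taken, regs):
    """Resolve the region: symmetric writes mean every conditional
    destination is written on each path, so one write per coupled
    set is selected by the resolved condition."""
    wanted = "taken" if condition_taken else "not_taken"
    for insn in insns:
        if insn.path in (wanted, "both"):
            regs[insn.dest] = insn.value
    return regs

# Symmetric writes to r1: the compiler inserted one per path.
region = [
    PredInsn("r1", 10, "taken"),
    PredInsn("r1", 20, "not_taken"),
    PredInsn("r2", 7, "both"),
]
print(execute_region(region, condition_taken=False, regs={}))
# -> {'r1': 20, 'r2': 7}
```

Because every path writes `r1`, the decoder can bind all of the writes to one rename resource up front, which is the point of enforcing symmetry.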
    • 56. Invention application
    • System and Method for Priority-Based Prefetch Requests Scheduling and Throttling
    • Publication: US20090199190A1, published 2009-08-06
    • Application: US12024389, filed 2008-02-01
    • Inventors: Lei Chen, Lixin Zhang
    • IPC: G06F9/46
    • CPC: G06F12/0862, G06F9/3455, G06F9/383, G06F2212/1016
    • Abstract: A method, processor, and data processing system for implementing a framework for priority-based scheduling and throttling of prefetching operations. A prefetch engine (PE) assigns a priority to a first prefetch stream, indicating a relative priority for scheduling prefetch operations of the first prefetch stream. The PE monitors activity within the data processing system and dynamically updates the priority of the first prefetch stream based on the activity (or lack thereof). Low priority streams may be discarded. The PE also schedules prefetching in a priority-based scheduling sequence that corresponds to the priority currently assigned to the scheduled active streams. When there are no prefetches within a prefetch queue, the PE triggers the active streams to provide prefetches for issuing. The PE determines when to throttle prefetching, based on the current usage level of resources relevant to completing the prefetch.
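A minimal sketch of priority-ordered issue with resource-based throttling follows. The class name, the single scalar `resource_usage` input, and the threshold value are assumptions; the patent's engine tracks concrete hardware resources rather than one aggregate number:

```python
import heapq

# Hypothetical sketch of a prefetch engine that issues from the
# highest-priority active stream and throttles issue when the
# resources needed to complete a prefetch are too busy.

class PrefetchEngine:
    def __init__(self, throttle_threshold=0.9):
        self.streams = []  # max-heap via negated priority
        self.throttle_threshold = throttle_threshold

    def add_stream(self, stream_id, priority):
        heapq.heappush(self.streams, (-priority, stream_id))

    def next_prefetch(self, resource_usage):
        """Issue from the highest-priority stream, unless the
        relevant resources are too busy (throttling)."""
        if resource_usage >= self.throttle_threshold:
            return None  # throttled: issue nothing this cycle
        if not self.streams:
            return None  # empty queue: nothing scheduled
        _, stream_id = heapq.heappop(self.streams)
        return stream_id

pe = PrefetchEngine()
pe.add_stream("A", priority=1)
pe.add_stream("B", priority=5)
print(pe.next_prefetch(resource_usage=0.3))   # "B" issues first
print(pe.next_prefetch(resource_usage=0.95))  # None: throttled
```

The heap keyed on negated priority gives the priority-based scheduling sequence the abstract describes; the threshold check models throttling on resource usage.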
    • 57. Invention application
    • Dynamic Adjustment of Prefetch Stream Priority
    • Publication: US20090198907A1, published 2009-08-06
    • Application: US12024411, filed 2008-02-01
    • Inventors: William E. Speight, Lixin Zhang
    • IPC: G06F12/08
    • CPC: G06F12/0862, G06F2212/1041, G06F2212/6024
    • Abstract: A method, processor, and data processing system for dynamically adjusting a prefetch stream priority based on the consumption rate of the data by the processor. The method includes a prefetch engine issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem. The first prefetch stream has a first assigned priority that determines a relative order for scheduling prefetch requests of the first prefetch stream relative to other prefetch requests of other prefetch streams. Based on the receipt of a processor demand for the data before the data returns to the cache, or return of the data a long time before the processor demand is received, logic of the prefetch engine dynamically changes the first assigned priority to a second, higher or lower, priority, which is subsequently utilized to schedule and issue a next prefetch request of the first prefetch stream.
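The adjustment rule the abstract describes can be sketched as a simple comparison of demand time against data-return time. The cycle-count margin, priority bounds, and function name are illustrative assumptions:

```python
# Sketch of consumption-rate-driven priority adjustment: if the
# processor demands the data before the prefetch returns, the stream
# is running behind, so its priority rises; if the data sat unused
# for a long time before the demand, the priority falls.

def adjust_priority(priority, demand_time, return_time,
                    early_margin=100, lo=0, hi=7):
    """Return the stream's new priority given the cycle at which the
    processor demand arrived and the cycle at which the prefetched
    data returned to the cache."""
    if demand_time < return_time:
        return min(priority + 1, hi)  # demand beat the data: raise
    if demand_time - return_time > early_margin:
        return max(priority - 1, lo)  # data idled long before use: lower
    return priority                   # timing was about right: keep

new_priority = adjust_priority(3, demand_time=50, return_time=80)
# demand arrived 30 cycles before the data returned, so the stream's
# next prefetch request is scheduled at the raised priority
```

The updated priority then feeds the priority-based scheduling of the stream's next prefetch request, closing the feedback loop between consumption rate and issue order.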
    • 60. Invention application
    • DATA PROCESSING SYSTEM, PROCESSOR AND METHOD OF DATA PROCESSING HAVING IMPROVED BRANCH TARGET ADDRESS CACHE
    • Publication: US20090049286A1, published 2009-02-19
    • Application: US11837893, filed 2007-08-13
    • Inventors: David S. Levitan, William E. Speight, Lixin Zhang
    • IPC: G06F9/38
    • CPC: G06F9/3804, G06F9/3844
    • Abstract: A processor includes an execution unit and instruction sequencing logic that fetches instructions from a memory system for execution. The instruction sequencing logic includes branch logic that outputs predicted branch target addresses for use as instruction fetch addresses. The branch logic includes a level one branch target address cache (BTAC) and a level two BTAC each having a respective plurality of entries each associating at least a tag with a predicted branch target address. The branch logic accesses the level one and level two BTACs in parallel with a tag portion of a first instruction fetch address to obtain a first predicted branch target address from the level one BTAC for use as a second instruction fetch address in a first processor clock cycle and a second predicted branch target address from the level two BTAC for use as a third instruction fetch address in a later second processor clock cycle.
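The parallel two-level lookup can be modeled very roughly as below. The tag-keyed dictionaries and the class name are assumptions; the essential point is only that both levels are probed with the same fetch-address tag, with the L1 result usable one cycle before the L2 result:

```python
# Illustrative model of a two-level branch target address cache:
# both BTACs are probed in parallel with the tag portion of the
# fetch address; the small L1 BTAC supplies a predicted target for
# the next cycle, the larger L2 BTAC one cycle later.

class TwoLevelBTAC:
    def __init__(self):
        self.l1 = {}  # tag -> predicted branch target (small, fast)
        self.l2 = {}  # tag -> predicted branch target (large, slower)

    def lookup(self, fetch_tag):
        """Probe both levels in parallel; return (l1_target, l2_target),
        where l1_target is available a cycle earlier than l2_target."""
        return self.l1.get(fetch_tag), self.l2.get(fetch_tag)

btac = TwoLevelBTAC()
btac.l2[0x40] = 0x1000  # only the larger L2 BTAC knows this branch
l1_target, l2_target = btac.lookup(0x40)
# l1_target is None, so fetch proceeds sequentially next cycle;
# l2_target arrives a cycle later and redirects fetch to 0x1000
```

An L1 miss with an L2 hit thus costs one cycle of sequential fetch before the redirect, rather than a full misprediction.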