    • 1. Granted invention
    • Branch predictor directed prefetch
    • US07702888B2
    • 2010-04-20
    • US11711925
    • 2007-02-28
    • Marius Evers; Trivikram Krishnamurthy
    • Marius Evers; Trivikram Krishnamurthy
    • G06F9/00
    • G06F9/3804; G06F9/3806; G06F12/0862; G06F2212/6022
    • An apparatus for executing branch predictor directed prefetch operations. During operation, a branch prediction unit may provide an address of a first instruction to the fetch unit. The fetch unit may send a fetch request for the first instruction to the instruction cache to perform a fetch operation. In response to detecting a cache miss corresponding to the first instruction, the fetch unit may execute one or more prefetch operations while the cache miss corresponding to the first instruction is being serviced. The branch prediction unit may provide an address of a predicted next instruction in the instruction stream to the fetch unit. The fetch unit may send a prefetch request for the predicted next instruction to the instruction cache to execute the prefetch operation. The fetch unit may store prefetched instruction data obtained from a next level of memory in the instruction cache or in a prefetch buffer.
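The mechanism in the abstract above can be sketched as a toy model: on an instruction-cache miss, the fetch unit follows the branch predictor's predicted instruction stream and prefetches those addresses into a buffer while the miss is serviced. All class and method names below are illustrative assumptions, not taken from the patent.

```python
# Toy model of branch-predictor-directed prefetch (cf. US07702888B2 abstract).
# Names and the depth-2 prefetch policy are illustrative assumptions.

class ICache:
    def __init__(self, lines):
        self.lines = set(lines)        # addresses currently cached

    def lookup(self, addr):
        return addr in self.lines      # True on hit

    def fill(self, addr):
        self.lines.add(addr)           # line arrives from next memory level


class FetchUnit:
    def __init__(self, icache, predictor):
        self.icache = icache
        self.predictor = predictor     # callable: addr -> predicted next addr
        self.prefetch_buffer = set()

    def fetch(self, addr, prefetch_depth=2):
        if self.icache.lookup(addr):
            return "hit"
        # Cache miss: while it is serviced, walk the predicted instruction
        # stream and issue prefetches for addresses not already cached.
        next_addr = self.predictor(addr)
        for _ in range(prefetch_depth):
            if not self.icache.lookup(next_addr):
                self.prefetch_buffer.add(next_addr)   # data from next level
            next_addr = self.predictor(next_addr)
        self.icache.fill(addr)         # original miss is now serviced
        return "miss"
```

With a straight-line predictor such as `lambda a: a + 4`, a miss at address 8 would leave addresses 12 and 16 sitting in the prefetch buffer by the time the miss completes.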
    • 2. Invention application
    • Branch predictor directed prefetch
    • US20080209173A1
    • 2008-08-28
    • US11711925
    • 2007-02-28
    • Marius Evers; Trivikram Krishnamurthy
    • Marius Evers; Trivikram Krishnamurthy
    • G06F9/38
    • G06F9/3804; G06F9/3806; G06F12/0862; G06F2212/6022
    • An apparatus for executing branch predictor directed prefetch operations. During operation, a branch prediction unit may provide an address of a first instruction to the fetch unit. The fetch unit may send a fetch request for the first instruction to the instruction cache to perform a fetch operation. In response to detecting a cache miss corresponding to the first instruction, the fetch unit may execute one or more prefetch operations while the cache miss corresponding to the first instruction is being serviced. The branch prediction unit may provide an address of a predicted next instruction in the instruction stream to the fetch unit. The fetch unit may send a prefetch request for the predicted next instruction to the instruction cache to execute the prefetch operation. The fetch unit may store prefetched instruction data obtained from a next level of memory in the instruction cache or in a prefetch buffer.
    • 3. Granted invention
    • Processing pipeline having stage-specific thread selection and method thereof
    • US08086825B2
    • 2011-12-27
    • US11967923
    • 2007-12-31
    • Gene Shen; Sean Lie; Marius Evers
    • Gene Shen; Sean Lie; Marius Evers
    • G06F9/38; G06F9/48
    • G06F9/3867; G06F9/3851; G06F9/3891
    • One or more processor cores of a multiple-core processing device each can utilize a processing pipeline having a plurality of execution units (e.g., integer execution units or floating point units) that together share a pre-execution front-end having instruction fetch, decode and dispatch resources. Further, one or more of the processor cores each can implement dispatch resources configured to dispatch multiple instructions in parallel to multiple corresponding execution units via separate dispatch buses. The dispatch resources further can opportunistically decode and dispatch instruction operations from multiple threads in parallel so as to increase the dispatch bandwidth. Moreover, some or all of the stages of the processing pipelines of one or more of the processor cores can be configured to implement independent thread selection for the corresponding stage.
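The key idea in the abstract above, each pipeline stage selecting a thread independently of the other stages, can be sketched as a minimal simulation. The per-stage round-robin arbitration and the stage names are assumptions for the example, not details from the patent.

```python
# Sketch of per-stage independent thread selection (cf. US08086825B2 abstract):
# every stage keeps its own arbitration state and picks which thread to
# advance each cycle, independently of the other stages' choices.

from collections import deque

class Stage:
    def __init__(self, name, n_threads):
        self.name = name
        self.queues = [deque() for _ in range(n_threads)]
        self.rr = 0                    # this stage's own round-robin pointer

    def select(self):
        # Independent thread selection: scan threads starting at this
        # stage's own pointer and take the first one with a ready op.
        n = len(self.queues)
        for i in range(n):
            t = (self.rr + i) % n
            if self.queues[t]:
                self.rr = (t + 1) % n
                return t, self.queues[t].popleft()
        return None

class Pipeline:
    def __init__(self, stage_names, n_threads):
        self.stages = [Stage(s, n_threads) for s in stage_names]

    def cycle(self):
        # Advance stages back-to-front so an op moves one stage per cycle.
        retired = []
        for i in range(len(self.stages) - 1, -1, -1):
            picked = self.stages[i].select()
            if picked is None:
                continue
            thread, op = picked
            if i + 1 < len(self.stages):
                self.stages[i + 1].queues[thread].append(op)
            else:
                retired.append((thread, op))
        return retired
```

Because each `Stage` carries its own `rr` pointer, the fetch stage can be serving thread 1 in the same cycle that the dispatch stage serves thread 0, which is the stage-specific behavior the claims describe.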
    • 4. Granted invention
    • Token based power control mechanism
    • US07818592B2
    • 2010-10-19
    • US11788215
    • 2007-04-18
    • Stephan Meier; Marius Evers
    • Stephan Meier; Marius Evers
    • G06F1/26
    • G06F1/3203; G06F1/3243; G06F1/3287; G06F9/30149; G06F9/3017; G06F9/383; G06F9/3842; G06F9/3851; G06F9/3853; G06F9/3885; G06F9/3891; Y02D10/152; Y02D10/171
    • A token-based power control mechanism for an apparatus including a power controller and a plurality of processing devices. The power controller may detect a power budget allotted for the apparatus. The power controller may convert the allotted power budget into a plurality of power tokens, each power token being a portion of the allotted power budget. The power controller may then assign one or more of the plurality of power tokens to each of the processing devices. The assigned power tokens may determine the power allotted for each of the processing devices. The power controller may receive one or more requests from the plurality of processing devices for one or more additional power tokens. In response to receiving the requests, the power controller may determine whether to change the distribution of power tokens among the processing devices.
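The token mechanism described above can be sketched as a small controller: the power budget is divided into fixed-size tokens, tokens are dealt out to devices, and a request for more tokens makes the controller decide whether to rebalance. The rebalancing policy (take from the most generously funded device, leaving every device at least one token) is an illustrative assumption, not a detail from the patent.

```python
# Minimal sketch of a token-based power controller (cf. US07818592B2
# abstract). Class/method names and the donor policy are assumptions.

class PowerController:
    def __init__(self, budget_watts, token_watts, device_ids):
        self.token_watts = token_watts
        total_tokens = budget_watts // token_watts   # budget -> tokens
        base, extra = divmod(total_tokens, len(device_ids))
        # Initial even assignment; leftover tokens go to the first devices.
        self.tokens = {d: base + (1 if i < extra else 0)
                       for i, d in enumerate(device_ids)}

    def allotted_watts(self, device):
        # Assigned tokens determine the power allotted to the device.
        return self.tokens[device] * self.token_watts

    def request(self, device, n_tokens):
        # A device asks for extra tokens; the controller decides whether
        # to shift tokens away from the richest other device.
        granted = 0
        for _ in range(n_tokens):
            donor = max((d for d in self.tokens if d != device),
                        key=lambda d: self.tokens[d])
            if self.tokens[donor] <= 1:    # keep every device powered
                break
            self.tokens[donor] -= 1
            self.tokens[device] += 1
            granted += 1
        return granted
```

For a 100 W budget in 10 W tokens across two devices, each starts with 50 W; a request for three extra tokens by one device would shift its allotment to 80 W under this policy.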
    • 5. Invention application
    • Token based power control mechanism
    • US20080263373A1
    • 2008-10-23
    • US11788215
    • 2007-04-18
    • Stephan Meier; Marius Evers
    • Stephan Meier; Marius Evers
    • G06F1/26
    • G06F1/3203; G06F1/3243; G06F1/3287; G06F9/30149; G06F9/3017; G06F9/383; G06F9/3842; G06F9/3851; G06F9/3853; G06F9/3885; G06F9/3891; Y02D10/152; Y02D10/171
    • A token-based power control mechanism for an apparatus including a power controller and a plurality of processing devices. The power controller may detect a power budget allotted for the apparatus. The power controller may convert the allotted power budget into a plurality of power tokens, each power token being a portion of the allotted power budget. The power controller may then assign one or more of the plurality of power tokens to each of the processing devices. The assigned power tokens may determine the power allotted for each of the processing devices. The power controller may receive one or more requests from the plurality of processing devices for one or more additional power tokens. In response to receiving the requests, the power controller may determine whether to change the distribution of power tokens among the processing devices.
    • 6. Granted invention
    • Method for selecting transistor threshold voltages in an integrated circuit
    • US07188325B1
    • 2007-03-06
    • US10957848
    • 2004-10-04
    • Marius Evers; Jeffrey E. Trull; Alper Halbutogullari; Robert W. Williams
    • Marius Evers; Jeffrey E. Trull; Alper Halbutogullari; Robert W. Williams
    • G06F17/50
    • G06F17/5036; G06F17/5045
    • In one embodiment, a method for selecting transistor threshold voltages on an integrated circuit may include assigning a first threshold voltage to selected groups of transistors such as cell instances, for example, and determining which of the selected groups of transistors to assign a second threshold voltage that is lower than the first threshold voltage, by iteratively performing a cost/benefit analysis. The method may further include determining which of the selected groups of transistors having a third threshold voltage to assign the first threshold voltage by iteratively performing a cost/benefit analysis. The cost/benefit analysis may include calculating a cost/benefit ratio for each group of the selected groups of transistors. In addition, the cost/benefit analysis may include calculating an upcone benefit and a downcone benefit for groups of transistors coupled to one or more inputs and outputs, respectively.
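The iterative cost/benefit selection described above can be illustrated with a simplified greedy pass: every cell group starts at the high threshold voltage, and groups are moved to the low threshold voltage in order of their benefit-to-cost ratio while a leakage budget holds. The budget-based stopping rule and the sample numbers are assumptions for the example; the patent's actual iteration and upcone/downcone calculations are more involved.

```python
# Simplified illustration of ratio-driven low-Vt assignment
# (cf. US07188325B1 abstract). Stopping rule and data are assumptions.

def assign_low_vt(groups, max_leakage):
    """groups: list of (name, timing_benefit_ps, leakage_cost_nA).
    Greedily move groups to low Vt by benefit/cost ratio while the
    total added leakage stays within max_leakage."""
    chosen, leakage = [], 0
    # Rank every candidate group by its cost/benefit analysis result.
    remaining = sorted(groups, key=lambda g: g[1] / g[2], reverse=True)
    for name, benefit, cost in remaining:
        if leakage + cost <= max_leakage:   # cost side of the analysis
            chosen.append(name)             # benefit justifies the cost
            leakage += cost
    return chosen, leakage
```

With hypothetical groups `("adder", 40, 10)`, `("mux", 5, 8)`, and `("decoder", 30, 5)` and a budget of 16, the decoder (ratio 6) and adder (ratio 4) are switched to low Vt while the mux stays at the higher threshold voltage.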
    • 7. Invention application
    • PROCESSING PIPELINE HAVING STAGE-SPECIFIC THREAD SELECTION AND METHOD THEREOF
    • US20090172362A1
    • 2009-07-02
    • US11967923
    • 2007-12-31
    • Gene Shen; Sean Lie; Marius Evers
    • Gene Shen; Sean Lie; Marius Evers
    • G06F9/30
    • G06F9/3867; G06F9/3851; G06F9/3891
    • One or more processor cores of a multiple-core processing device each can utilize a processing pipeline having a plurality of execution units (e.g., integer execution units or floating point units) that together share a pre-execution front-end having instruction fetch, decode and dispatch resources. Further, one or more of the processor cores each can implement dispatch resources configured to dispatch multiple instructions in parallel to multiple corresponding execution units via separate dispatch buses. The dispatch resources further can opportunistically decode and dispatch instruction operations from multiple threads in parallel so as to increase the dispatch bandwidth. Moreover, some or all of the stages of the processing pipelines of one or more of the processor cores can be configured to implement independent thread selection for the corresponding stage.