    • 31. Invention application
    • METHOD AND APPARATUS FOR PREFETCHING NON-SEQUENTIAL INSTRUCTION ADDRESSES
    • Publication No.: WO2008016849A3
    • Publication date: 2008-04-10
    • Application No.: PCT/US2007074598
    • Filing date: 2007-07-27
    • Applicants: QUALCOMM INC; STEMPEL BRIAN MICHAEL; SARTORIUS THOMAS ANDREW; SMITH RODNEY WAYNE
    • Inventors: STEMPEL BRIAN MICHAEL; SARTORIUS THOMAS ANDREW; SMITH RODNEY WAYNE
    • IPC: G06F9/38
    • CPC: G06F9/3804; G06F9/3806
    • A processor performs a prefetch operation on non-sequential instruction addresses. If a first instruction address misses in an instruction cache and accesses a higher-order memory as part of a fetch operation, and a branch instruction associated with the first instruction address or an address following the first instruction address is detected and predicted taken, a prefetch operation is performed using a predicted branch target address, during the higher-order memory access. If the predicted branch target address hits in the instruction cache during the prefetch operation, associated instructions are not retrieved, to conserve power. If the predicted branch target address misses in the instruction cache during the prefetch operation, a higher-order memory access may be launched, using the predicted branch instruction address. In either case, the first instruction address is re-loaded into the fetch stage pipeline to await the return of instructions from its higher-order memory access.
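The abstract above outlines a fetch-time policy: on an instruction-cache miss, prefetch a predicted-taken branch target while the higher-order memory access for the missing address is still outstanding. The Python sketch below only illustrates that control flow; the `ICache`/`BranchPredictor` toys, the `predicted_taken_target` method, and the `launch_l2_access`/`reload_fetch_stage` callbacks are assumptions for illustration, not details from the patent.

    class ICache:
        """Toy instruction cache: a set of cached addresses."""
        def __init__(self, lines=()):
            self.lines = set(lines)
        def hit(self, addr):
            return addr in self.lines

    class BranchPredictor:
        """Toy predictor: maps a fetch address to a predicted-taken branch target."""
        def __init__(self, taken_targets=None):
            self.taken_targets = dict(taken_targets or {})
        def predicted_taken_target(self, addr):
            return self.taken_targets.get(addr)

    def fetch(addr, icache, predictor, launch_l2_access, reload_fetch_stage):
        if icache.hit(addr):
            return "icache hit"                  # normal fetch, nothing special to do
        launch_l2_access(addr)                   # higher-order memory access for the miss
        target = predictor.predicted_taken_target(addr)
        if target is not None:                   # predicted-taken branch at/after addr
            if icache.hit(target):
                pass                             # prefetch hit: fetch nothing, save power
            else:
                launch_l2_access(target)         # prefetch miss: launch a second access
        reload_fetch_stage(addr)                 # await the first address's instructions
        return "icache miss"

    # Example: 0x40 misses; a branch there is predicted taken to 0x200, which also misses.
    launched = []
    fetch(0x40, ICache(), BranchPredictor({0x40: 0x200}), launched.append, lambda a: None)
    print(launched)                              # [64, 512]: both addresses went to L2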
    • 33. Invention application
    • EFFICIENT MEMORY HIERARCHY MANAGEMENT
    • Publication No.: WO2007085011A3
    • Publication date: 2007-10-04
    • Application No.: PCT/US2007060815
    • Filing date: 2007-01-22
    • Applicants: QUALCOMM INC; MORROW MICHAEL WILLIAM; SARTORIUS THOMAS ANDREW
    • Inventors: MORROW MICHAEL WILLIAM; SARTORIUS THOMAS ANDREW
    • IPC: G06F9/38; G06F12/08
    • CPC: G06F9/3802; G06F12/0848
    • In a processor, there are situations where instructions and some parts of a program may reside in a data cache prior to execution of the program. Hardware and software techniques are provided for fetching an instruction in the data cache after having a miss in an instruction cache to improve the processor's performance. If an instruction is not present in the instruction cache, an instruction fetch address is sent as a data fetch address to the data cache. If there is valid data present in the data cache at the supplied instruction fetch address, the data actually is an instruction and the data cache entry is fetched and supplied as an instruction to the processor complex. An additional bit may be included in an instruction page table to indicate on a miss in the instruction cache that the data cache should be checked for the instruction.
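To illustrate the lookup order the abstract describes (instruction cache, then data cache when a page-table hint bit is set, then higher-order memory), here is a hedged Python sketch; the `PTE.check_dcache` field name, the dictionary-based caches, and the 4 KiB page size are assumptions, not details from the filing.

    from collections import namedtuple

    # Hypothetical instruction-page-table entry carrying the extra "check D-cache" bit.
    PTE = namedtuple("PTE", ["check_dcache"])

    def fetch_instruction(addr, icache, dcache, ipage_table, l2_fetch, page_shift=12):
        if addr in icache:                       # 1) normal hit in the instruction cache
            return icache[addr]
        pte = ipage_table.get(addr >> page_shift, PTE(check_dcache=False))
        if pte.check_dcache and addr in dcache:  # 2) hint bit set: probe the data cache
            return dcache[addr]                  #    the "data" here is really an instruction
        return l2_fetch(addr)                    # 3) fall back to higher-order memory

    # Example: the instruction at 0x1004 still sits in the D-cache and its page is flagged.
    icache, dcache = {}, {0x1004: "add r0, r1, r2"}
    ipage_table = {0x1004 >> 12: PTE(check_dcache=True)}
    print(fetch_instruction(0x1004, icache, dcache, ipage_table, lambda a: "from L2"))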
    • 34. Invention application
    • METHOD AND APPARATUS FOR POWER REDUCTION IN AN HETEROGENEOUSLY-MULTI-PIPELINED PROCESSOR
    • Publication No.: WO2006094196A3
    • Publication date: 2007-02-01
    • Application No.: PCT/US2006007607
    • Filing date: 2006-03-03
    • Applicants: QUALCOMM INC; COLLOPY THOMAS K; SARTORIUS THOMAS ANDREW
    • Inventors: COLLOPY THOMAS K; SARTORIUS THOMAS ANDREW
    • IPC: G06F9/38
    • CPC: G06F9/3867; G06F9/3836; G06F9/3851; G06F9/3857; G06F9/3875; G06F9/3885
    • A processor includes a common instruction decode front end, e.g. fetch and decode stages, and a heterogeneous set of processing pipelines. A lower performance pipeline has fewer stages and may utilize lower speed/power circuitry. A higher performance pipeline has more stages and utilizes faster circuitry. The pipelines share other processor resources, such as an instruction cache, a register file stack, a data cache, a memory interface, and other architected registers within the system. In disclosed examples, the processor is controlled such that processes requiring higher performance run in the higher performance pipeline, whereas those requiring lower performance utilize the lower performance pipeline, in at least some instances while the higher performance pipeline is effectively inactive or even shut-off to minimize power consumption. The configuration of the processor at any given time, that is to say the pipeline(s) currently operating, may be controlled via several different techniques.
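As a rough illustration of routing work between the two pipelines while power-gating the unused high-performance one, the Python sketch below may help; the `needs_high_performance` flag and the `power_on`/`power_off` methods are illustrative assumptions about the control interface, not the patent's mechanism.

    class Pipeline:
        """Toy execution pipeline with a name and a power switch."""
        def __init__(self, name):
            self.name, self.powered = name, True
        def power_on(self):
            self.powered = True
        def power_off(self):
            self.powered = False
        def run(self, work):
            assert self.powered, f"{self.name} pipeline is powered down"
            return f"{work} ran on the {self.name} pipeline"

    def dispatch(work, needs_high_performance, low_pipe, high_pipe):
        # Demanding work goes to the deeper, faster pipeline; otherwise the
        # high-performance pipeline is left inactive (or shut off) to save power.
        if needs_high_performance:
            high_pipe.power_on()
            return high_pipe.run(work)
        high_pipe.power_off()
        return low_pipe.run(work)

    low, high = Pipeline("low-power"), Pipeline("high-performance")
    print(dispatch("audio decode", False, low, high))   # short, low-power pipeline
    print(dispatch("video encode", True, low, high))    # deeper, high-performance pipeline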