    • 1. Invention application
    • Processor instruction retry recovery
    • US20060179207A1
    • 2006-08-10
    • US11055258
    • 2005-02-10
    • Susan Eisen, Hung Le, Michael Mack, Dung Nguyen, Jose Paredes, Scott Swaney
    • G06F12/14
    • G06F12/0888, G06F9/30043, G06F9/3851, G06F9/3863, G06F11/1407
    • Recovery circuits react to errors in a processor core by waiting for an error-free completion of any pending store-conditional instruction or cache-inhibited load before ceasing to checkpoint or back up progress of the processor core. The recovery circuits remove the processor core from the logical configuration of the symmetric multiprocessor system, potentially reducing propagation of errors to other parts of the system. The processor core is reset, and the checkpointed values may be restored to its registers. Rather than simply resuming execution just prior to the instructions that failed to execute correctly the first time, the processor core first operates in a reduced execution mode for a preprogrammed number of instruction groups. If that preprogrammed number of instruction groups executes without error, the processor core is allowed to resume normal execution.
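The recovery sequence above reads naturally as a small state machine. Below is a minimal, hypothetical Python sketch of that flow under the abstract's description: drain any pending synchronizing operations, fence the core off, reset it, restore the checkpoint, then run in reduced mode for a preprogrammed number of instruction groups. All class, field, and method names are illustrative and not taken from the patent.

```python
# Hypothetical model of the recovery flow described in the abstract above.
# Names (Core, RecoveryController, etc.) are illustrative, not from the patent.
from dataclasses import dataclass, field

@dataclass
class Core:
    registers: dict = field(default_factory=dict)
    in_logical_config: bool = True
    reduced_mode: bool = False

class RecoveryController:
    def __init__(self, core, reduced_mode_groups=8):
        self.core = core
        self.reduced_mode_groups = reduced_mode_groups   # preprogrammed group count
        self.checkpoint = dict(core.registers)           # last known-good register values
        self.groups_done = 0

    def on_error(self, pending_sync_ops):
        # 1. Let any pending store-conditional / cache-inhibited load complete
        #    error-free before the checkpoint stops advancing.
        for wait_for_completion in pending_sync_ops:
            wait_for_completion()
        # 2. Fence the core off from the SMP logical configuration so the error
        #    cannot propagate to the rest of the system, then reset the core.
        self.core.in_logical_config = False
        self.core.registers = {}                         # core reset
        # 3. Restore checkpointed values and re-enter execution in reduced mode
        #    for a fixed number of instruction groups.
        self.core.registers = dict(self.checkpoint)
        self.core.in_logical_config = True
        self.core.reduced_mode = True
        self.groups_done = 0

    def on_group_retired(self, error_free):
        # Normal execution resumes only after the preprogrammed number of groups
        # completes without error; a new error would restart recovery.
        if not error_free:
            return "restart_recovery"
        self.groups_done += 1
        if self.groups_done >= self.reduced_mode_groups:
            self.core.reduced_mode = False
        return "ok"
```

In this reading, a caller would invoke on_error when the checkpoint logic flags a fault, and on_group_retired once per completed instruction group while the core is in reduced mode.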
    • 5. Invention application
    • Load Lookahead Prefetch for Microprocessors
    • US20080077776A1
    • 2008-03-27
    • US11950495
    • 2007-12-05
    • Richard Eickemeyer, Hung Le, Dung Nguyen, Benjamin Stolt, Brian Thompto
    • G06F9/38
    • G06F9/3842, G06F9/3804, G06F9/383, G06F9/3838, G06F9/3851
    • The present invention allows a microprocessor to identify and speculatively execute future load instructions during a stall condition. This allows forward progress to be made through the instruction stream during a stall condition that would otherwise leave the microprocessor or thread of execution idle. The data for such future load instructions can be prefetched from a distant cache or main memory, such that when the load instruction is re-executed (non-speculatively executed) after the stall condition expires, its data will either reside in the L1 cache or be en route to the processor, resulting in reduced execution latency. When an extended stall condition is detected, load lookahead prefetch is started, allowing speculative execution of instructions that would normally have been stalled. In this speculative mode, instruction operands may be invalid due to source loads that miss the L1 cache, facilities that are not available in speculative execution mode, or speculative instruction results that are not available via forwarding and are not written to the architected registers. A set of status bits is used to dynamically keep track of the dependencies between instructions in the pipeline, and a bit vector tracks invalid architected facilities with respect to the speculative instruction stream. Both sources of information are used to identify load instructions with valid operands for calculating the load address. If the operands are valid, a load prefetch operation is started to retrieve data from the cache ahead of time so that it is available for the load instruction when it is non-speculatively executed.
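As a rough illustration of the dependency tracking this abstract describes, the Python sketch below walks ahead through the instruction stream during a stall, keeps a set of registers whose values are invalid, and issues a prefetch for any load whose address operands are still valid. The dict-based instruction encoding and the prefetch callback are assumptions for illustration, not the patent's interfaces.

```python
# Simplified sketch of load-lookahead prefetch during a stall.  The instruction
# encoding ("op"/"srcs"/"dest"/"addr") and the prefetch callback are assumed.
def lookahead_prefetch(stream, missed_dest, prefetch):
    invalid = {missed_dest}                 # registers holding untrusted values
    for insn in stream:                     # future instructions examined while stalled
        operands_valid = not (set(insn["srcs"]) & invalid)
        if insn["op"] == "load":
            if operands_valid:
                prefetch(insn["addr"])      # warm the cache before real execution
            # Speculative load data is not written to the architected registers,
            # so the destination is treated as invalid for younger instructions.
            invalid.add(insn["dest"])
        elif not operands_valid and insn.get("dest") is not None:
            invalid.add(insn["dest"])       # propagate invalidity to dependents
    return invalid                          # plays the role of the invalid-facility bit vector
```

When the stall resolves and the instructions are re-executed non-speculatively, the loads whose prefetches were issued here should find their data already in, or on the way to, the L1 cache.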
    • 6. Invention application
    • Method using hazard vector to enhance issue throughput of dependent instructions in a microprocessor
    • US20060179282A1
    • 2006-08-10
    • US11054289
    • 2005-02-09
    • Hung Le, Dung Nguyen, Raymond Yeung
    • G06F9/30
    • G06F9/383, G06F9/3834, G06F9/3838, G06F9/3867
    • A method and related apparatus are provided for a processor having a number of registers, wherein instructions are sequentially issued to move through a sequence of execution stages, from an initial stage to a final write back stage. As a method, an embodiment includes the step of issuing a first instruction, such as an FMA instruction, to move through the sequence of execution stages, the first instruction being directed to a specified one of the registers. The method further includes issuing a second instruction to move through the execution stages, the second instruction being issued after the first instruction has issued, but before the first instruction reaches the final write back stage. The second instruction is likewise directed to the specified register, and comprises either a store instruction or a load instruction, selectively. R and W bits corresponding to the specified register are used to ensure that a store instruction does not read data from, and that a load instruction does not write data to, the specified register before the first instruction reaches the final write back stage.
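The R/W-bit interlock can be pictured as a pair of per-register flags consulted by the issue logic. The sketch below is a loose Python model under that reading; the class and method names are invented for illustration and do not come from the patent.

```python
# Loose model of the per-register R/W hazard bits described in the abstract.
class HazardBits:
    def __init__(self, num_regs):
        self.R = [0] * num_regs   # blocks younger stores from reading the register
        self.W = [0] * num_regs   # blocks younger loads from writing the register

    def issue_first_instruction(self, dest_reg):
        # e.g. an FMA targeting dest_reg: raise both hazards until writeback
        self.R[dest_reg] = 1
        self.W[dest_reg] = 1

    def can_issue_store(self, src_reg):
        # A store must not read the register before the older instruction writes it
        return self.R[src_reg] == 0

    def can_issue_load(self, dest_reg):
        # A load must not overwrite the register before the older instruction writes it
        return self.W[dest_reg] == 0

    def writeback(self, dest_reg):
        # The first instruction reached the final write back stage: clear the hazards
        self.R[dest_reg] = 0
        self.W[dest_reg] = 0
```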
    • 7. Invention application
    • Using a modified value GPR to enhance lookahead prefetch
    • US20060149934A1
    • 2006-07-06
    • US11016206
    • 2004-12-17
    • Richard Eickemeyer, Hung Le, Dung Nguyen, Benjamin Stolt, Brian Thompto
    • G06F9/30
    • G06F9/3842, G06F9/3804, G06F9/383, G06F9/3838
    • The present invention allows a microprocessor to identify and speculatively execute future instructions during a stall condition. This allows forward progress to be made through the instruction stream during a stall condition that would otherwise leave the microprocessor or thread of execution idle. The execution of such future instructions can initiate a prefetch of data or instructions from a distant cache or main memory, or otherwise make forward progress through the instruction stream. In this manner, when the instructions are re-executed (non-speculatively executed) after the stall condition expires, they will execute with a reduced execution latency, e.g. by accessing data prefetched into the L1 cache or en route to the processor, or by executing the target instructions following a speculatively resolved mispredicted branch. In speculative mode, instruction operands may be invalid due to source loads that miss the L1 cache, facilities not available in speculative execution mode, or speculative instruction results that are not available. Dependency and dirty (i.e. invalid result) bits are tracked and used to determine which speculative instructions are valid for execution. A modified value register storage and bit vector are used to improve the availability of speculative results that would otherwise be discarded once they leave the execution pipeline, because they cannot be written to the architected registers. The modified general purpose registers are used to store speculative results when the corresponding instruction reaches writeback, and the modified bit vector tracks the results that have been stored there. Younger speculative instructions that do not bypass directly from older instructions will then use this modified data when the corresponding bit in the modified bit vector indicates the data has been modified. Otherwise, data from the architected registers will be used.
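One way to picture the modified-value GPR is as a shadow register file plus two bit vectors, one marking which registers hold a speculative writeback value and one marking results known to be invalid. The Python sketch below assumes that structure; it is an illustration, not the patent's implementation.

```python
# Illustrative shadow "modified value" register file; structure and names are assumed.
class LookaheadRegisters:
    def __init__(self, num_regs):
        self.arch = [0] * num_regs         # architected GPRs (never written speculatively)
        self.modified = [0] * num_regs     # shadow values captured at speculative writeback
        self.mod_bit = [False] * num_regs  # which registers hold a modified value
        self.dirty = [False] * num_regs    # which results are known to be invalid

    def speculative_writeback(self, reg, value, valid):
        # A speculative result leaves the pipeline: keep it in the shadow file
        # so younger lookahead instructions can still read it.
        self.modified[reg] = value
        self.mod_bit[reg] = True
        self.dirty[reg] = not valid

    def read(self, reg):
        # Younger speculative instructions prefer the modified value when the
        # corresponding bit is set; otherwise they fall back to architected state.
        value = self.modified[reg] if self.mod_bit[reg] else self.arch[reg]
        return value, not self.dirty[reg]   # (value, operand_is_valid)

    def exit_lookahead(self):
        # Leaving speculative mode discards all shadow state.
        self.mod_bit = [False] * len(self.arch)
        self.dirty = [False] * len(self.arch)
```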
    • 8. Invention application
    • Branch lookahead prefetch for microprocessors
    • US20060149933A1
    • 2006-07-06
    • US11016200
    • 2004-12-17
    • Richard Eickemeyer, Hung Le, Dung Nguyen, Benjamin Stolt, Brian Thompto
    • G06F9/30
    • G06F9/3842, G06F9/3861
    • A method of handling program instructions in a microprocessor which reduces delays associated with mispredicted branch instructions, by detecting the occurrence of a stall condition during execution of the program instructions, speculatively executing one or more pending instructions which include at least one branch instruction during the stall condition, and determining the validity of data utilized by the speculative execution. In particular, the method can detect a load instruction miss which results in the stall condition. Dispatch logic determines the validity of the data by marking one or more registers of an instruction dispatch unit to indicate which results of the pending instructions are invalid. The speculative execution of instructions can occur across multiple pipeline stages of the microprocessor, and the validity of the data is tracked during their execution in the multiple pipeline stages while monitoring a dependency of the speculatively executed instructions relative to one another during their execution in the multiple pipeline stages. A branch prediction unit predicts a path of the branch instruction prior to detection of the stall condition, and fetches speculative instructions from the predicted path into an instruction queue. If the speculative execution of the branch instruction indicates it was mispredicted, the speculative instructions are flushed from the pipeline and instruction queue, and the branch prediction information is updated based on results of the speculative execution of the branch instruction. The speculative execution of the instructions occurs without altering any architected facilities of the microprocessor.
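A compact way to see the branch-lookahead idea is as a speculative pass over the pending instructions that resolves whichever branches still have valid operands and trains the predictor early, without touching architected state. The sketch below makes several assumptions: the instruction encoding, the predictor.lookup/update interface, the flush and fetch callbacks, and the 4-byte instruction size are all invented for illustration.

```python
# Assumed-interface sketch of branch lookahead during a load-miss stall.
def branch_lookahead(pending, invalid_regs, predictor, flush, fetch_from):
    for insn in pending:                        # speculative pass: no architected writes
        operands_valid = not (set(insn["srcs"]) & invalid_regs)
        if insn["op"] != "branch":
            if not operands_valid and insn.get("dest") is not None:
                invalid_regs.add(insn["dest"])  # result cannot be trusted downstream
            continue
        if not operands_valid:
            continue                            # this branch cannot be resolved early
        taken = insn["resolve"]()               # speculative resolution of the branch
        if taken != predictor.lookup(insn["pc"]):
            flush()                             # drop wrong-path speculative work
            predictor.update(insn["pc"], taken) # train the predictor ahead of time
            fetch_from(insn["target"] if taken else insn["pc"] + 4)  # assumes 4-byte insns
            break
```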
    • 10. Invention application
    • Instruction group formation and mechanism for SMT dispatch
    • US20060101241A1
    • 2006-05-11
    • US10965143
    • 2004-10-14
    • Brian Curran, Brian Konigsburg, Hung Le, David Luick, Dung Nguyen
    • G06F9/30
    • G06F9/3853, G06F9/30145, G06F9/382, G06F9/3851, G06F9/3885
    • A more efficient method of handling instructions in a computer processor associates resource fields with respective program instructions, wherein the resource fields indicate which of the processor hardware resources are required to carry out the program instructions, calculates resource requirements for merging two or more program instructions based on their resource fields, and determines resource availability for simultaneously executing the merged program instructions based on the calculated resource requirements. Resource vectors indicative of the required resources may be encoded into the resource fields, and the resource fields decoded at a later stage to derive the resource vectors. The resource fields can be stored in the instruction cache associated with the respective program instructions. The processor may operate in a simultaneous multithreading mode with different program instructions belonging to different hardware threads. When the resource availability equals or exceeds the resource requirements for a group of instructions, those instructions can be dispatched simultaneously to the hardware resources. A start bit may be inserted in one of the program instructions to define the instruction group. The hardware resources may in particular be execution units such as a fixed-point unit, a load/store unit, a floating-point unit, or a branch processing unit.
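The resource-vector check at the heart of this abstract reduces to comparing summed per-unit requirements against per-unit availability. Below is a hedged Python sketch of that comparison plus a greedy group-formation pass; the unit names, counts, dict-based encoding, and the greedy policy are assumptions for illustration, not the patent's mechanism.

```python
# Hedged sketch of resource-vector group dispatch with assumed unit names.
from collections import Counter

UNITS = ("FXU", "LSU", "FPU", "BRU")   # fixed-point, load/store, floating-point, branch

def can_dispatch(group, available):
    """group: list of per-instruction resource vectors, e.g. {"FXU": 1};
    available: mapping from unit name to the number of free units."""
    need = Counter()
    for vec in group:
        need.update(vec)
    return all(need[u] <= available.get(u, 0) for u in UNITS)

def form_groups(instructions, available):
    """Greedy group formation: a new group starts (i.e. the next instruction
    would carry the start bit) when adding it would exceed the available units."""
    groups, current = [], []
    for insn in instructions:
        if current and not can_dispatch(current + [insn["resources"]], available):
            groups.append(current)
            current = []
        current.append(insn["resources"])
    if current:
        groups.append(current)
    return groups
```

For example, with available = {"FXU": 2, "LSU": 2, "FPU": 1, "BRU": 1}, two fixed-point instructions and one load/store can share a dispatch group, while a third fixed-point instruction would open a new group.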