    • 11. Granted Invention Patent
    • Title: Branch lookahead prefetch for microprocessors
    • Publication number: US07552318B2
    • Publication date: 2009-06-23
    • Application number: US11016200
    • Filing date: 2004-12-17
    • Inventors: Richard James Eickemeyer; Hung Qui Le; Dung Quoc Nguyen; Benjamin Walter Stolt; Brian William Thompto
    • IPC: G06F9/00
    • CPC: G06F9/3842; G06F9/3861
    • Abstract: A method of handling program instructions in a microprocessor which reduces delays associated with mispredicted branch instructions, by detecting the occurrence of a stall condition during execution of the program instructions, speculatively executing one or more pending instructions which include at least one branch instruction during the stall condition, and determining the validity of data utilized by the speculative execution. Dispatch logic determines the validity of the data by marking one or more registers of an instruction dispatch unit to indicate which results of the pending instructions are invalid. The speculative execution of instructions can occur across multiple pipeline stages of the microprocessor, and the validity of the data is tracked during their execution in the multiple pipeline stages while monitoring a dependency of the speculatively executed instructions relative to one another during their execution in the multiple pipeline stages.
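The validity tracking described in this abstract can be illustrated with a small behavioral model: each register carries a dirty bit, invalidity propagates through register dependencies, and a speculatively executed branch is trusted only if all of its sources are still clean. The Python below is an illustrative sketch only, not the patented dispatch logic; the name LookaheadTracker and the per-register dirty-bit representation are assumptions made for the example.

    class LookaheadTracker:
        def __init__(self, num_regs=32):
            # One dirty bit per register: True means the value produced for this
            # register during lookahead execution is not trustworthy.
            self.dirty = [False] * num_regs

        def execute_speculatively(self, dest, sources, result_available):
            # A result is invalid if any source register is already dirty or the
            # result itself is unavailable (for example, a load that missed the cache).
            invalid = (not result_available) or any(self.dirty[s] for s in sources)
            self.dirty[dest] = invalid
            return not invalid

        def branch_outcome_trusted(self, sources):
            # A speculatively executed branch steers lookahead fetch only if every
            # register it depends on still holds valid data.
            return not any(self.dirty[s] for s in sources)

    # Example: r1 = load (miss, value unavailable), r2 = r1 + r3, then a branch on r2.
    t = LookaheadTracker()
    t.execute_speculatively(dest=1, sources=[], result_available=False)
    t.execute_speculatively(dest=2, sources=[1, 3], result_available=True)
    print(t.branch_outcome_trusted(sources=[2]))   # False: the branch depends on a dirty register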
    • 13. Granted Invention Patent
    • Title: Cache address generation with and without carry-in
    • Publication number: US5940877A
    • Publication date: 1999-08-17
    • Application number: US873783
    • Filing date: 1997-06-12
    • Inventors: Richard James Eickemeyer; Thomas Leo Jeremiah
    • IPC: G06F12/08; G06F12/10; G06F13/00
    • CPC: G06F12/0864; G06F12/1054; G06F2212/6082
    • Abstract: A cache system provides for accessing set-associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses, for reducing snoop busy time, and for responding to MRU misses and cache misses. The cache array is accessed by multiplexing two most-recently-used (MRU) arrays which are addressed and accessed substantially in parallel with effective address generation; the outputs of the MRU arrays are generated, one by assuming a carry-in of zero and the other by assuming a carry-in of one to the least significant bit of the portion of the effective address used to access the MRU arrays. The hit rate in the MRU array is improved by hashing, within an adder, the adder's input operands with predetermined additional operand bits.
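A small behavioral model makes the carry-in trick concrete: two copies of the MRU information are looked up in parallel, one assuming no carry into the index field and one assuming a carry of one, and the real carry, known once the full add completes, selects between them. The Python sketch below is illustrative only; the 7-bit index field at bit position 6 and the function names are assumptions, not values from the patent, and the operand-hashing refinement mentioned in the abstract is omitted.

    INDEX_BITS = 7      # assumed width of the field that indexes the MRU arrays
    INDEX_SHIFT = 6     # assumed position of that field (64-byte cache lines)

    def index_field(value):
        return (value >> INDEX_SHIFT) & ((1 << INDEX_BITS) - 1)

    def mru_lookup(mru_no_carry, mru_with_carry, base, offset):
        # Partial sums of just the index fields, formed without waiting for the
        # full effective-address add: one assumes carry-in 0, the other carry-in 1.
        idx0 = (index_field(base) + index_field(offset)) & ((1 << INDEX_BITS) - 1)
        idx1 = (idx0 + 1) & ((1 << INDEX_BITS) - 1)
        way0 = mru_no_carry[idx0]
        way1 = mru_with_carry[idx1]
        # The carry out of the low-order address bits picks the correct lookup.
        low_mask = (1 << INDEX_SHIFT) - 1
        carry_in = ((base & low_mask) + (offset & low_mask)) >> INDEX_SHIFT
        return (way1 if carry_in else way0), base + offset

    # Both arrays hold the same MRU predictions, duplicated so the two lookups can proceed in parallel.
    mru0 = list(range(1 << INDEX_BITS))
    mru1 = list(range(1 << INDEX_BITS))
    way, ea = mru_lookup(mru0, mru1, base=0x12F8, offset=0x010C)
    print(way, index_field(ea))   # both 80: the selected lookup matches the true index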
    • 16. Granted Invention Patent
    • Title: Load lookahead prefetch for microprocessors
    • Publication number: US07594096B2
    • Publication date: 2009-09-22
    • Application number: US11950495
    • Filing date: 2007-12-05
    • Inventors: Richard James Eickemeyer; Hung Qui Le; Dung Quoc Nguyen; Benjamin Walter Stolt; Brian William Thompto
    • IPC: G06F9/30; G06F9/40; G06F15/00
    • CPC: G06F9/3842; G06F9/3804; G06F9/383; G06F9/3838; G06F9/3851
    • Abstract: The present invention allows a microprocessor to identify and speculatively execute future load instructions during a stall condition. This allows forward progress to be made through the instruction stream during the stall condition, which would otherwise cause the microprocessor or thread of execution to be idle. The data for such future load instructions can be prefetched from a distant cache or main memory such that when the load instruction is re-executed (non-speculatively executed) after the stall condition expires, its data will either reside in the L1 cache or be en route to the processor, resulting in a reduced execution latency. When an extended stall condition is detected, load lookahead prefetch is started, allowing speculative execution of instructions that would normally have been stalled. In this speculative mode, instruction operands may be invalid due to source loads that miss the L1 cache, facilities not available in speculative execution mode, or speculative instruction results that are not available via forwarding and are not written to the architected registers. A set of status bits is used to dynamically keep track of the dependencies between instructions in the pipeline, and a bit vector tracks invalid architected facilities with respect to the speculative instruction stream. Both sources of information are used to identify load instructions with valid operands for calculating the load address. If the operands are valid, then a load prefetch operation is started to retrieve data from the cache ahead of time such that it can be available for the load instruction when it is non-speculatively executed.
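The interplay between the dependency status bits, the invalid-facility bit vector, and the decision to start a prefetch can be sketched as a simple walk over the lookahead instruction stream. The Python below is a rough behavioral sketch under assumed names (load_lookahead, l1_contains, start_prefetch); it is not the patented hardware, which tracks dependencies with per-instruction status bits rather than a Python set.

    def load_lookahead(instructions, l1_contains, start_prefetch):
        invalid = set()                              # registers whose lookahead values are untrustworthy
        for op, dest, sources, addr in instructions:
            operands_valid = invalid.isdisjoint(sources)
            if op == "load":
                if operands_valid:
                    if l1_contains(addr):
                        invalid.discard(dest)        # value available now, result stays valid
                    else:
                        start_prefetch(addr)         # data will be en route when the load is re-executed
                        invalid.add(dest)            # but its value is unknown during lookahead
                else:
                    invalid.add(dest)                # the load address itself cannot be formed
            else:
                # A non-load result is valid only if every source operand was valid.
                (invalid.discard if operands_valid else invalid.add)(dest)
        return invalid

    # Example: a load that misses L1 poisons r1; the dependent add poisons r3;
    # the second load's address register is still valid, so it is also prefetched.
    prefetched = []
    load_lookahead(
        [("load", 1, [2], 0x100), ("add", 3, [1, 4], None), ("load", 5, [6], 0x200)],
        l1_contains=lambda addr: False,
        start_prefetch=prefetched.append,
    )
    print(prefetched)   # [256, 512]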
    • 17. Granted Invention Patent
    • Title: Load lookahead prefetch for microprocessors
    • Publication number: US07444498B2
    • Publication date: 2008-10-28
    • Application number: US11016236
    • Filing date: 2004-12-17
    • Inventors: Richard James Eickemeyer; Hung Qui Le; Dung Quoc Nguyen; Benjamin Walter Stolt; Brian William Thompto
    • IPC: G06F9/30; G06F9/40; G06F15/00; G06F7/38; G06F9/00; G06F9/44; G06F7/00
    • CPC: G06F9/3842; G06F9/3804; G06F9/383; G06F9/3838; G06F9/3851
    • Abstract: The present invention allows a microprocessor to identify and speculatively execute future load instructions during a stall condition. This allows forward progress to be made through the instruction stream during the stall condition, which would otherwise cause the microprocessor or thread of execution to be idle. The data for such future load instructions can be prefetched from a distant cache or main memory such that when the load instruction is re-executed (non-speculatively executed) after the stall condition expires, its data will either reside in the L1 cache or be en route to the processor, resulting in a reduced execution latency. When an extended stall condition is detected, load lookahead prefetch is started, allowing speculative execution of instructions that would normally have been stalled. In this speculative mode, instruction operands may be invalid due to source loads that miss the L1 cache, facilities not available in speculative execution mode, or speculative instruction results that are not available via forwarding and are not written to the architected registers. A set of status bits is used to dynamically keep track of the dependencies between instructions in the pipeline, and a bit vector tracks invalid architected facilities with respect to the speculative instruction stream. Both sources of information are used to identify load instructions with valid operands for calculating the load address. If the operands are valid, then a load prefetch operation is started to retrieve data from the cache ahead of time such that it can be available for the load instruction when it is non-speculatively executed.
    • 18. Invention Patent Application
    • Title: Using a Modified Value GPR to Enhance Lookahead Prefetch
    • Publication number: US20080250230A1
    • Publication date: 2008-10-09
    • Application number: US12061290
    • Filing date: 2008-04-02
    • Inventors: Richard James Eickemeyer; Hung Qui Le; Dung Quoc Nguyen; Benjamin Walter Stolt; Brian William Thompto
    • IPC: G06F9/30
    • CPC: G06F9/3842; G06F9/3804; G06F9/383; G06F9/3838
    • Abstract: The present invention allows a microprocessor to identify and speculatively execute future instructions during a stall condition. This allows forward progress to be made through the instruction stream during the stall condition, which would otherwise cause the microprocessor or thread of execution to be idle. The execution of such future instructions can initiate a prefetch of data or instructions from a distant cache or main memory, or otherwise make forward progress through the instruction stream. In this manner, when the instructions are re-executed (non-speculatively executed) after the stall condition expires, they will execute with a reduced execution latency, e.g. by accessing data prefetched into the L1 cache or en route to the processor, or by executing the target instructions following a speculatively resolved mispredicted branch. In speculative mode, instruction operands may be invalid due to source loads that miss the L1 cache, facilities not available in speculative execution mode, or speculative instruction results that are not available. Dependency and dirty (i.e. invalid result) bits are tracked and used to determine which speculative instructions are valid for execution. A modified value register storage and bit vector are used to improve the availability of speculative results that would otherwise be discarded once they leave the execution pipeline because they cannot be written to the architected registers. The modified general purpose registers are used to store speculative results when the corresponding instruction reaches writeback, and the modified bit vector tracks the results that have been stored there. Younger speculative instructions that do not bypass directly from older instructions will then use this modified data when the corresponding bit in the modified bit vector indicates the data has been modified. Otherwise, data from the architected registers will be used.
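The modified-value mechanism in this application lends itself to a short behavioral sketch: a side copy of the general purpose registers captures speculative writeback values, a bit vector records which entries hold such values, and reads during lookahead prefer the modified copy when its bit is set, falling back to the architected registers otherwise. The Python below is a minimal illustration under assumed names (ModifiedValueGPR, speculative_writeback); it is not the claimed hardware design.

    class ModifiedValueGPR:
        def __init__(self, num_regs=32):
            self.architected = [0] * num_regs        # committed, non-speculative state
            self.modified = [0] * num_regs           # speculative writeback values
            self.modified_bit = [False] * num_regs   # which entries hold a usable speculative value

        def speculative_writeback(self, reg, value, value_valid):
            # At writeback of a lookahead instruction, capture its result instead of
            # discarding it; mark it usable only if the result was valid.
            self.modified[reg] = value
            self.modified_bit[reg] = value_valid

        def read(self, reg):
            # A younger speculative instruction that cannot bypass directly from an
            # older one reads the modified value when its bit is set, otherwise the
            # architected register.
            return self.modified[reg] if self.modified_bit[reg] else self.architected[reg]

        def exit_lookahead(self):
            # Leaving lookahead mode discards all speculative state.
            self.modified_bit = [False] * len(self.modified_bit)

    regs = ModifiedValueGPR()
    regs.architected[4] = 7
    regs.speculative_writeback(reg=4, value=42, value_valid=True)
    print(regs.read(4))   # 42: younger speculative instructions see the modified value
    regs.exit_lookahead()
    print(regs.read(4))   # 7: back to the architected value once lookahead ends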