    • 1. Granted patent
    • Title: Preventing register data flow hazards in an SST processor
    • Publication No.: US07610470B2
    • Publication date: 2009-10-27
    • Application No.: US11703462
    • Filing date: 2007-02-06
    • Inventors: Shailender Chaudhry, Paul Caprioli, Marc Tremblay
    • IPC: G06F9/38
    • CPC: G06F9/30181; G06F9/30189; G06F9/3838; G06F9/3842; G06F9/3851; G06F9/3863
    • Abstract: One embodiment of the present invention provides a system that prevents data hazards during simultaneous speculative threading. The system starts by executing instructions in an execute-ahead mode using a first thread. While executing instructions in the execute-ahead mode, the system maintains dependency information for each register indicating whether the register is subject to an unresolved data dependency. Upon the resolution of a data dependency during execute-ahead mode, the system copies dependency information to a speculative copy of the dependency information. The system then commences execution of the deferred instructions in a deferred mode using a second thread. While executing instructions in the deferred mode, if the speculative copy of the dependency information for a destination register indicates that a write-after-write (WAW) hazard exists with a subsequent non-deferred instruction executed by the first thread in execute-ahead mode, the system uses the second thread to execute the deferred instruction to produce a result and forwards the result to be used by subsequent deferred instructions without committing the result to the architectural state of the destination register. Hence, the system makes the result available to the subsequent deferred instructions without overwriting the result produced by a following non-deferred instruction.
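
The abstract above (US07610470B2) turns on per-register dependency bookkeeping: a speculative copy of the dependency information tells the second thread when a deferred write must not reach architectural state. The C sketch below is a minimal software toy model of that decision, not the patented circuitry; all names (written_ahead, spec_copy, forward_val) are invented for illustration.

```c
/* Toy model (not the patented hardware): per-register flags plus a
 * speculative copy decide whether a deferred-mode write may commit. */
#include <stdbool.h>
#include <stdio.h>

#define NREGS 8

static long arch_reg[NREGS];       /* architectural register values          */
static bool written_ahead[NREGS];  /* set when the execute-ahead thread has  */
                                   /* already written a register             */
static bool spec_copy[NREGS];      /* snapshot taken when the second thread  */
                                   /* begins deferred mode                   */
static long forward_val[NREGS];    /* bypass values for the deferred thread  */
static bool forward_ok[NREGS];

/* Deferred-mode write: commit only if no younger non-deferred write exists. */
static void deferred_write(int rd, long value) {
    if (spec_copy[rd]) {
        /* WAW hazard: a younger non-deferred instruction owns the
         * architectural value, so only forward the result. */
        forward_val[rd] = value;
        forward_ok[rd]  = true;
    } else {
        arch_reg[rd] = value;
    }
}

/* Deferred-mode read: prefer a forwarded value over architectural state. */
static long deferred_read(int rs) {
    return forward_ok[rs] ? forward_val[rs] : arch_reg[rs];
}

int main(void) {
    arch_reg[3] = 111;             /* written by a younger, non-deferred insn */
    written_ahead[3] = true;

    /* second thread starts: snapshot the hazard information */
    for (int r = 0; r < NREGS; r++) spec_copy[r] = written_ahead[r];

    deferred_write(3, 42);         /* older deferred producer of r3           */
    printf("arch r3 = %ld (kept), deferred consumer sees %ld\n",
           arch_reg[3], deferred_read(3));
    return 0;
}
```
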
    • 2. Granted patent
    • Title: Generation of multiple checkpoints in a processor that supports speculative execution
    • Publication No.: US07571304B2
    • Publication date: 2009-08-04
    • Application No.: US11084655
    • Filing date: 2005-03-18
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F15/00; G06F7/38; G06F9/00; G06F9/44
    • CPC: G06F9/3863; G06F9/383; G06F9/3842
    • Abstract: One embodiment of the present invention provides a system which creates multiple checkpoints in a processor that supports speculative-execution. The system starts by issuing instructions for execution in program order during execution of a program in a normal-execution mode. Upon encountering a launch condition during an instruction which causes a processor to enter execute-ahead mode, the system performs an initial checkpoint and commences execution of instructions in execute-ahead mode. Upon encountering a predefined condition during execute-ahead mode, the system generates an additional checkpoint and continues to execute instructions in execute-ahead mode. Generating the additional checkpoint allows the processor to return to the additional checkpoint, instead of the previous checkpoint, if the processor subsequently encounters a condition that requires the processor to return to a checkpoint.
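
US07571304B2 hinges on letting the processor roll back to the most recent of several checkpoints rather than the oldest one. The following toy model, with invented names and a plain array of register-file snapshots standing in for hardware checkpoints, sketches that idea; it is not the patented implementation.

```c
/* Toy model (not the patented hardware): a small stack of register-file
 * snapshots stands in for the processor's checkpoints. */
#include <string.h>
#include <stdio.h>

#define NREGS    8
#define MAX_CKPT 4

static long regs[NREGS];                 /* architectural registers */
static long ckpt[MAX_CKPT][NREGS];       /* checkpointed snapshots  */
static int  nckpt = 0;

static void take_checkpoint(void) {
    if (nckpt < MAX_CKPT)
        memcpy(ckpt[nckpt++], regs, sizeof regs);
}

/* On a failure, roll back to the most recent checkpoint, not the first. */
static void restore_latest_checkpoint(void) {
    if (nckpt > 0)
        memcpy(regs, ckpt[nckpt - 1], sizeof regs);
}

int main(void) {
    regs[1] = 10;
    take_checkpoint();     /* launch condition: enter execute-ahead mode */
    regs[1] = 20;          /* speculative progress                       */
    take_checkpoint();     /* predefined condition: additional checkpoint */
    regs[1] = 99;          /* more speculative progress, then a fault    */
    restore_latest_checkpoint();
    printf("r1 after rollback = %ld (20, not 10)\n", regs[1]);
    return 0;
}
```
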
    • 3. Patent application
    • Title: Method and apparatus for synchronizing threads on a processor that supports transactional memory
    • Publication No.: US20070240158A1
    • Publication date: 2007-10-11
    • Application No.: US11418652
    • Filing date: 2006-05-05
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F9/46
    • CPC: G06F9/52; G06F9/3004; G06F9/30087; G06F9/3834; G06F9/3857
    • Abstract: One embodiment of the present invention provides a system that synchronizes threads on a multi-threaded processor. The system starts by executing instructions from a multi-threaded program using a first thread and a second thread. When the first thread reaches a predetermined location in the multi-threaded program, the first thread executes a Start-Transactional-Execution (STE) instruction to commence transactional execution, wherein the STE instruction specifies a location to branch to if transactional execution fails. During the subsequent transactional execution, the first thread accesses a mailbox location in memory (which is also accessible by the second thread) and then executes instructions that cause the first thread to wait. When the second thread reaches a second predetermined location in the multi-threaded program, the second thread signals the first thread by accessing the mailbox location, which causes the transactional execution of the first thread to fail, thereby causing the first thread to resume non-transactional execution from the location specified in the STE instruction. In this way, the second thread can signal to the first thread without the first thread having to poll a shared variable.
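
The mechanism in US20070240158A1 uses a transaction abort as the wake-up signal: the waiting thread puts the mailbox into its transactional read set, and the signalling thread's write aborts the transaction, dropping the waiter at the fail address given to the STE instruction. The sketch below mimics that pattern with x86 RTM intrinsics (_xbegin/_xend); it assumes a CPU with TSX/RTM enabled, ignores spurious aborts, and is only a conceptual illustration of the signalling idea, not the STE instruction described in the application.

```c
/* Conceptual sketch only; compile with: gcc -mrtm -pthread sketch.c
 * Requires hardware RTM support and does not handle spurious aborts. */
#include <immintrin.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile long mailbox = 0;     /* shared mailbox location */

static void *waiter(void *unused) {
    (void)unused;
    unsigned status = _xbegin();      /* plays the role of the STE instruction */
    if (status == _XBEGIN_STARTED) {
        long seen = mailbox;          /* pull the mailbox into the read set     */
        while (mailbox == seen)       /* "wait" inside the transaction;         */
            ;                         /* a conflicting write aborts us here     */
        _xend();
    }
    /* Abort path == the fail address named by the STE instruction. */
    printf("waiter resumed without polling a shared flag\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);
    sleep(1);                         /* let the waiter enter its transaction   */
    mailbox = 1;                      /* signal: conflicts with the read set    */
    pthread_join(t, NULL);
    return 0;
}
```
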
    • 5. Patent application
    • Title: Method and apparatus for enforcing membar instruction semantics in an execute-ahead processor
    • Publication No.: US20050273583A1
    • Publication date: 2005-12-08
    • Application No.: US11083263
    • Filing date: 2005-03-16
    • Inventors: Paul Caprioli, Shailender Chaudhry, Marc Tremblay
    • IPC: G06F9/00; G06F9/30; G06F9/38; G06F9/45; G06F12/08
    • CPC: G06F9/30087; G06F9/3004; G06F9/3834; G06F9/3836; G06F9/3838; G06F9/384; G06F9/3842; G06F9/3857
    • Abstract: One embodiment of the present invention provides a system that facilitates executing a memory barrier (membar) instruction in an execute-ahead processor, wherein the membar instruction forces buffered loads and stores to complete before allowing a following instruction to be issued. During operation in a normal-execution mode, the processor issues instructions for execution in program order. Upon encountering a membar instruction, the processor determines if the load buffer and store buffer contain unresolved loads and stores. If so, the processor defers the membar instruction and executes subsequent program instructions in execute-ahead mode. In execute-ahead mode, instructions that cannot be executed because of an unresolved data dependency are deferred, and other non-deferred instructions are executed in program order. When all stores and loads that precede the membar instruction have been committed to memory from the store buffer and the load buffer, the processor enters a deferred mode and executes the deferred instructions, including the membar instruction, in program order. If all deferred instructions have been executed, the processor returns to the normal-execution mode and resumes execution from the point where the execute-ahead mode left off.
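
The membar handling in US20050273583A1 boils down to a simple decision: if the load or store buffer still holds unresolved entries, defer the membar and keep issuing younger instructions in execute-ahead mode; once both buffers drain, replay the deferred instructions in program order. The toy model below encodes just that decision with invented counters and mode names; it is a sketch, not the patented pipeline.

```c
/* Toy model (not the patented pipeline): a membar is deferred while the
 * load/store buffers still hold unresolved entries, and replayed once
 * both buffers have drained. */
#include <stdbool.h>
#include <stdio.h>

enum exec_mode { NORMAL, EXECUTE_AHEAD, DEFERRED };

static enum exec_mode mode = NORMAL;
static int  pending_loads  = 2;     /* unresolved entries in the load buffer  */
static int  pending_stores = 1;     /* unresolved entries in the store buffer */
static bool membar_deferred = false;

/* A membar may only complete once all older loads and stores have committed. */
static void issue_membar(void) {
    if (pending_loads || pending_stores) {
        membar_deferred = true;     /* park it in the deferred queue           */
        mode = EXECUTE_AHEAD;       /* keep issuing younger instructions       */
    }
}

/* Called when every load/store older than the membar has committed. */
static void buffers_drained(void) {
    pending_loads = pending_stores = 0;
    if (membar_deferred) {
        mode = DEFERRED;            /* replay deferred instructions in order   */
        membar_deferred = false;
        mode = NORMAL;              /* deferred queue empty: resume normally   */
    }
}

int main(void) {
    issue_membar();
    printf("membar deferred=%d, mode=%d\n", (int)membar_deferred, (int)mode);
    buffers_drained();
    printf("after drain: mode=%d (NORMAL)\n", (int)mode);
    return 0;
}
```
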
    • 6. Granted patent
    • Title: Entering scout-mode when stores encountered during execute-ahead mode exceed the capacity of the store buffer
    • Publication No.: US07484080B2
    • Publication date: 2009-01-27
    • Application No.: US11103912
    • Filing date: 2005-04-11
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F9/00
    • CPC: G06F9/3863; G06F9/383; G06F9/3834; G06F9/3836; G06F9/3838; G06F9/384; G06F9/3842; G06F9/3857; G06F9/3865
    • Abstract: One embodiment of the present invention provides a system that facilitates deferring execution of instructions with unresolved data dependencies as they are issued for execution in program order. During a normal execution mode, the system issues instructions for execution in program order. Upon encountering an unresolved data dependency during execution of an instruction, the system generates a checkpoint that can subsequently be used to return execution of the program to the point of the instruction. Next, the system executes the instruction and subsequent instructions in an execute-ahead mode, wherein instructions that cannot be executed because of an unresolved data dependency are deferred, and wherein other non-deferred instructions are executed in program order. Upon encountering a store during the execute-ahead mode, the system determines if the store buffer is full. If so, the system prefetches a cache line for the store, and defers execution of the store. If the number of stores that are encountered during execute-ahead mode exceeds the capacity of the store buffer, which means that the store buffer will never have additional space to accept additional stores during the execute-ahead mode because the store buffer is gated, the system directly enters the scout mode, without waiting for the deferred queue to eventually fill.
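
The key observation in US07484080B2 is that a gated store buffer can never free space during execute-ahead mode, so once the count of stores encountered exceeds its capacity the processor may as well enter scout mode immediately. A minimal counting sketch of that rule follows; the capacity constant and function names are invented for illustration, and prefetching/deferral of the stores themselves is not modelled.

```c
/* Toy model (not the patented logic): once the number of stores seen in
 * execute-ahead mode exceeds the store-buffer capacity, fall straight into
 * scout mode instead of waiting for the deferred queue to fill. */
#include <stdio.h>

#define STORE_BUFFER_CAPACITY 4

enum exec_mode { NORMAL, EXECUTE_AHEAD, SCOUT };

static enum exec_mode cur_mode = EXECUTE_AHEAD;
static int stores_seen = 0;

static void on_store(void) {
    stores_seen++;
    /* (prefetch the cache line for the store and defer it: not modelled) */
    if (stores_seen > STORE_BUFFER_CAPACITY)
        cur_mode = SCOUT;   /* the gated store buffer can never accept them all */
}

int main(void) {
    for (int i = 0; i < 6; i++) on_store();
    printf("stores=%d, mode=%s\n", stores_seen,
           cur_mode == SCOUT ? "SCOUT" : "EXECUTE_AHEAD");
    return 0;
}
```
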
    • 7. Patent application
    • Title: Preventing register data flow hazards in an SST processor
    • Publication No.: US20080189531A1
    • Publication date: 2008-08-07
    • Application No.: US11703462
    • Filing date: 2007-02-06
    • Inventors: Shailender Chaudhry, Paul Caprioli, Marc Tremblay
    • IPC: G06F9/44
    • CPC: G06F9/30181; G06F9/30189; G06F9/3838; G06F9/3842; G06F9/3851; G06F9/3863
    • Abstract: One embodiment of the present invention provides a system that prevents data hazards during simultaneous speculative threading. The system starts by executing instructions in an execute-ahead mode using a first thread. While executing instructions in the execute-ahead mode, the system maintains dependency information for each register indicating whether the register is subject to an unresolved data dependency. Upon the resolution of a data dependency during execute-ahead mode, the system copies dependency information to a speculative copy of the dependency information. The system then commences execution of the deferred instructions in a deferred mode using a second thread. While executing instructions in the deferred mode, if the speculative copy of the dependency information for a destination register indicates that a write-after-write (WAW) hazard exists with a subsequent non-deferred instruction executed by the first thread in execute-ahead mode, the system uses the second thread to execute the deferred instruction to produce a result and forwards the result to be used by subsequent deferred instructions without committing the result to the architectural state of the destination register. Hence, the system makes the result available to the subsequent deferred instructions without overwriting the result produced by a following non-deferred instruction.
    • 9. Patent application
    • Title: PATCHABLE AND/OR PROGRAMMABLE PRE-DECODE
    • Publication No.: US20070226464A1
    • Publication date: 2007-09-27
    • Application No.: US11277735
    • Filing date: 2006-03-28
    • Inventors: Shailender Chaudhry, Paul Caprioli, Quinn A. Jacobson, Marc Tremblay
    • IPC: G06F9/40
    • CPC: G06F9/30145; G06F9/30174; G06F9/30196; G06F9/382; G06F9/3822; G06F9/3897
    • Abstract: Mechanisms have been developed for providing great flexibility in processor instruction handling, sequencing and execution. In particular, it has been discovered that a programmable pre-decode mechanism can be employed to alter the behavior of a processor. For example, pre-decode hints for sequencing, synchronization or speculation control may altered or mappings of ISA instructions to native instructions or operation sequences may be altered. Such techniques may be employed to adapt a processor implementation (in the field) to varying memory models, implementations or interfaces or to varying memory latencies or timing characteristics. Similarly, such techniques may be employed to adapt a processor implementation to correspond to an extended/adapted instruction set architecture. In some realizations, instruction pre-decode functionality may be adapted at processor run-time to handle or mitigate a timing, concurrency or speculation issue. In some realizations, operation of pre-decode may be reprogrammed post-manufacture, at (or about) initialization, or at run-time.
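
US20070226464A1 describes making the pre-decode stage table-driven so its hints and instruction mappings can be changed after manufacture, at initialization, or at run time. The sketch below models that with an ordinary writable lookup table indexed by opcode; the hint names and the opcode value are hypothetical, and real pre-decode hardware is of course not a C array.

```c
/* Toy model (not the actual hardware): a writable table maps opcodes to
 * pre-decode hints, so behaviour can be re-programmed after manufacture. */
#include <stdint.h>
#include <stdio.h>

enum hint { HINT_NONE, HINT_SYNC, HINT_NO_SPECULATE, HINT_HELPER_SEQUENCE };

static enum hint predecode_table[256];          /* indexed by opcode byte */

/* "Patch" the pre-decode behaviour at initialization or run time. */
static void patch_predecode(uint8_t opcode, enum hint h) {
    predecode_table[opcode] = h;
}

static enum hint predecode(uint8_t opcode) {
    return predecode_table[opcode];
}

int main(void) {
    patch_predecode(0x2B, HINT_NO_SPECULATE);   /* hypothetical opcode     */
    printf("opcode 0x2B hint = %d\n", (int)predecode(0x2B));
    patch_predecode(0x2B, HINT_SYNC);           /* re-programmed later on  */
    printf("opcode 0x2B hint = %d\n", (int)predecode(0x2B));
    return 0;
}
```
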
    • 10. Patent application
    • Title: Selective execution of deferred instructions in a processor that supports speculative execution
    • Publication No.: US20060010309A1
    • Publication date: 2006-01-12
    • Application No.: US11058522
    • Filing date: 2005-02-14
    • Inventors: Shailender Chaudhry, Paul Caprioli, Marc Tremblay
    • IPC: G06F9/00
    • CPC: G06F9/3836; G06F9/30181; G06F9/30189; G06F9/3838; G06F9/3842; G06F9/3857
    • Abstract: One embodiment of the present invention provides a system which selectively executes deferred instructions following a return of a long-latency operation in a processor that supports speculative-execution. During normal-execution mode, the processor issues instructions for execution in program order. When the processor encounters a long-latency operation, such as a load miss, the processor records the long-latency operation in a long-latency scoreboard, wherein each entry in the long-latency scoreboard includes a deferred buffer start index. Upon encountering an unresolved data dependency during execution of an instruction, the processor performs a checkpointing operation and executes subsequent instructions in an execute-ahead mode, wherein instructions that cannot be executed because of the unresolved data dependency are deferred into a deferred buffer, and wherein other non-deferred instructions are executed in program order. Upon encountering a deferred instruction that depends on a long-latency operation within the long-latency scoreboard, the processor updates a deferred buffer start index associated with the long-latency operation to point to position in the deferred buffer occupied by the deferred instruction. When a long-latency operation returns, the processor executes instructions in the deferred buffer starting at the deferred buffer start index for the returning long-latency operation.
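
In US20060010309A1, each long-latency scoreboard entry carries a deferred-buffer start index so that, when the operation returns, replay begins at the first dependent deferred instruction rather than at the head of the buffer. The toy model below captures only that bookkeeping, with invented structure and field names; it is a sketch, not the patented design.

```c
/* Toy model (not the patented structures): a long-latency scoreboard entry
 * records where its first dependent instruction sits in the deferred buffer,
 * so only that slice is replayed when the operation finally returns. */
#include <stdio.h>

#define DEFQ_SIZE 16

static const char *deferred_buf[DEFQ_SIZE];    /* deferred instructions       */
static int defq_tail = 0;

struct ll_entry {                               /* long-latency scoreboard     */
    const char *op;
    int start_index;                            /* -1 until a dependant defers */
};

static void defer_dependent(struct ll_entry *e, const char *insn) {
    if (e->start_index < 0)
        e->start_index = defq_tail;             /* first dependant: remember it */
    deferred_buf[defq_tail++] = insn;
}

/* The long-latency operation returned: replay from its start index. */
static void on_return(const struct ll_entry *e) {
    printf("%s returned, replaying deferred buffer from %d:\n",
           e->op, e->start_index);
    for (int i = e->start_index; i >= 0 && i < defq_tail; i++)
        printf("  %s\n", deferred_buf[i]);
}

int main(void) {
    struct ll_entry miss = { "load r1, [a]  (cache miss)", -1 };
    defer_dependent(&miss, "add r2, r1, 4");
    defer_dependent(&miss, "mul r3, r2, r2");
    on_return(&miss);
    return 0;
}
```
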