    • 2. Granted invention patent
    • Preventing register data flow hazards in an SST processor
    • Publication number: US07610470B2
    • Publication date: 2009-10-27
    • Application number: US11703462
    • Filing date: 2007-02-06
    • Inventors: Shailender Chaudhry, Paul Caprioli, Marc Tremblay
    • IPC: G06F9/38
    • CPC: G06F9/30181, G06F9/30189, G06F9/3838, G06F9/3842, G06F9/3851, G06F9/3863
    • One embodiment of the present invention provides a system that prevents data hazards during simultaneous speculative threading. The system starts by executing instructions in an execute-ahead mode using a first thread. While executing instructions in the execute-ahead mode, the system maintains dependency information for each register indicating whether the register is subject to an unresolved data dependency. Upon the resolution of a data dependency during execute-ahead mode, the system copies dependency information to a speculative copy of the dependency information. The system then commences execution of the deferred instructions in a deferred mode using a second thread. While executing instructions in the deferred mode, if the speculative copy of the dependency information for a destination register indicates that a write-after-write (WAW) hazard exists with a subsequent non-deferred instruction executed by the first thread in execute-ahead mode, the system uses the second thread to execute the deferred instruction to produce a result and forwards the result to be used by subsequent deferred instructions without committing the result to the architectural state of the destination register. Hence, the system makes the result available to the subsequent deferred instructions without overwriting the result produced by a following non-deferred instruction.
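The WAW-avoidance scheme this abstract describes can be sketched as a short software model. The following is an illustrative toy, not the patented circuit: the class name, the `later_write` bits, and the forwarding dictionary are invented stand-ins for the per-register dependency information and result-forwarding paths the patent discusses.

```python
class SSTRegisterFile:
    """Toy model of WAW avoidance during simultaneous speculative threading.
    All names invented; the real design keeps per-register dependency bits
    and a speculative copy rather than these Python containers."""

    def __init__(self, n_regs=8):
        self.arch = [0] * n_regs             # architectural register state
        self.later_write = [False] * n_regs  # reg already written by a younger,
                                             # non-deferred insn (first thread)
        self.forwarded = {}                  # results visible only to deferred insns

    def nondeferred_write(self, reg, value):
        # First thread, execute-ahead mode: commit normally, record the write.
        self.arch[reg] = value
        self.later_write[reg] = True

    def deferred_write(self, reg, value):
        # Second thread, deferred mode: on a WAW hazard, forward the result to
        # later deferred insns without committing it architecturally.
        if self.later_write[reg]:
            self.forwarded[reg] = value
        else:
            self.arch[reg] = value

    def deferred_read(self, reg):
        # Subsequent deferred instructions prefer the forwarded value.
        return self.forwarded.get(reg, self.arch[reg])

rf = SSTRegisterFile()
rf.nondeferred_write(3, 42)      # younger instruction already wrote r3
rf.deferred_write(3, 7)          # older deferred instruction replays
assert rf.arch[3] == 42          # younger result survives: no WAW clobber
assert rf.deferred_read(3) == 7  # deferred consumers still see their value
```

The key point the demo exercises: when the hazard bit is set, the deferred result is forwarded but never committed, so the younger write is not overwritten.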
    • 3. Granted invention patent
    • Generation of multiple checkpoints in a processor that supports speculative execution
    • Publication number: US07571304B2
    • Publication date: 2009-08-04
    • Application number: US11084655
    • Filing date: 2005-03-18
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F15/00, G06F7/38, G06F9/00, G06F9/44
    • CPC: G06F9/3863, G06F9/383, G06F9/3842
    • One embodiment of the present invention provides a system which creates multiple checkpoints in a processor that supports speculative-execution. The system starts by issuing instructions for execution in program order during execution of a program in a normal-execution mode. Upon encountering a launch condition during an instruction which causes a processor to enter execute-ahead mode, the system performs an initial checkpoint and commences execution of instructions in execute-ahead mode. Upon encountering a predefined condition during execute-ahead mode, the system generates an additional checkpoint and continues to execute instructions in execute-ahead mode. Generating the additional checkpoint allows the processor to return to the additional checkpoint, instead of the previous checkpoint, if the processor subsequently encounters a condition that requires the processor to return to a checkpoint.
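The multi-checkpoint idea above reduces to a stack discipline: push a checkpoint on entering execute-ahead mode and again on each predefined condition, and roll back only to the most recent one. A minimal sketch, with invented names (real checkpoints snapshot far more than a register dictionary):

```python
class CheckpointingCore:
    """Toy model of multi-checkpoint speculative execution."""

    def __init__(self):
        self.regs = {}
        self.checkpoints = []   # stack of saved states

    def take_checkpoint(self):
        # Initial checkpoint on entering execute-ahead mode, or an additional
        # one when a predefined condition occurs during execute-ahead mode.
        self.checkpoints.append(dict(self.regs))

    def speculative_write(self, reg, value):
        self.regs[reg] = value

    def rollback(self):
        # Return to the *most recent* checkpoint rather than the initial one,
        # so less speculative work is discarded.
        self.regs = self.checkpoints.pop()

core = CheckpointingCore()
core.speculative_write("r1", 1)
core.take_checkpoint()          # initial checkpoint: enter execute-ahead mode
core.speculative_write("r1", 2)
core.take_checkpoint()          # additional checkpoint (predefined condition)
core.speculative_write("r1", 3)
core.rollback()                 # failure rolls back only to the latest checkpoint
assert core.regs["r1"] == 2     # work done before the second checkpoint survives
```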
    • 4. Granted invention patent
    • Arithmetic early bypass
    • Publication number: US07421465B1
    • Publication date: 2008-09-02
    • Application number: US10932522
    • Filing date: 2004-09-02
    • Inventors: Leonard Dennis Rarick, Murali Krishna Inaganti, Shailender Chaudhry, Paul Caprioli
    • IPC: G06F7/38
    • CPC: G06F7/483, G06F7/5443
    • A value that bypasses some of the computations of an arithmetic operation can be supplied for performance of a dependent arithmetic operation without waiting for the computations of the first operation to complete. During performance of a first arithmetic operation, a value is generated that is viable for use in performing a second arithmetic operation dependent upon the first. The value is utilized both to continue performance of the first arithmetic operation and to commence performance of the second. As part of the continued performance of the first arithmetic operation, the system determines whether the value is to be modified; any such modification is compensated for in the performance of the second arithmetic operation.
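The bypass-then-compensate flow can be illustrated with a toy numeric analogy (invented, not the patented datapath): the "early" value is a raw sum handed to the dependent operation before a final "rounding" step completes, and the dependent result is later adjusted by whatever the rounding changed. The compensation is exact here only because the consumer is additive; a real datapath applies an analogous fix-up at the hardware level.

```python
def add_with_early_bypass(a, b, consumer):
    """Toy model: first operation = add-then-round-to-tens; `consumer` is the
    dependent operation that starts from the early (unrounded) value."""
    early = a + b                       # viable value, available early
    partial = consumer(early)           # dependent operation commences at once
    final = (early + 5) // 10 * 10      # first operation completes: rounding step
    return final, partial + (final - early)  # compensate for the modification

final, dep = add_with_early_bypass(23, 34, lambda x: x + 100)
assert final == 60            # 57 rounded to the nearest ten
assert dep == final + 100     # dependent result matches the non-bypassed math
```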
    • 6. Invention patent application
    • Method and apparatus for synchronizing threads on a processor that supports transactional memory
    • Publication number: US20070240158A1
    • Publication date: 2007-10-11
    • Application number: US11418652
    • Filing date: 2006-05-05
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F9/46
    • CPC: G06F9/52, G06F9/3004, G06F9/30087, G06F9/3834, G06F9/3857
    • One embodiment of the present invention provides a system that synchronizes threads on a multi-threaded processor. The system starts by executing instructions from a multi-threaded program using a first thread and a second thread. When the first thread reaches a predetermined location in the multi-threaded program, the first thread executes a Start-Transactional-Execution (STE) instruction to commence transactional execution, wherein the STE instruction specifies a location to branch to if transactional execution fails. During the subsequent transactional execution, the first thread accesses a mailbox location in memory (which is also accessible by the second thread) and then executes instructions that cause the first thread to wait. When the second thread reaches a second predetermined location in the multi-threaded program, the second thread signals the first thread by accessing the mailbox location, which causes the transactional execution of the first thread to fail, thereby causing the first thread to resume non-transactional execution from the location specified in the STE instruction. In this way, the second thread can signal to the first thread without the first thread having to poll a shared variable.
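The mailbox pattern above can be emulated in software. In the following sketch a `threading.Event` stands in for the hardware transaction abort: the second thread's write to the "mailbox" wakes the first thread, which resumes on its fail-over path without ever polling a shared variable. This is an analogy to the claimed mechanism, not the mechanism itself.

```python
import threading

mailbox_written = threading.Event()  # stand-in for the monitored mailbox line
log = []

def first_thread():
    log.append("txn-start")            # STE: begin transactional execution,
                                       # naming a fail-over branch target
    mailbox_written.wait()             # read the mailbox in the txn, then wait
    log.append("txn-failed->resume")   # abort path: resume non-transactional
                                       # execution at the STE-specified location

def second_thread():
    log.append("signal")
    mailbox_written.set()              # write to the mailbox location

t1 = threading.Thread(target=first_thread)
t2 = threading.Thread(target=second_thread)
t1.start(); t2.start()
t1.join(); t2.join()
assert log[-1] == "txn-failed->resume"  # first thread woke without polling
```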
    • 8. Granted invention patent
    • Method and apparatus for avoiding write-after-write hazards in an execute-ahead processor
    • Publication number: US07213133B2
    • Publication date: 2007-05-01
    • Application number: US10923217
    • Filing date: 2004-08-20
    • Inventors: Paul Caprioli, Shailender Chaudhry
    • IPC: G06F9/30
    • CPC: G06F9/3863, G06F9/30181, G06F9/30189, G06F9/383, G06F9/3834, G06F9/3838, G06F9/3842, G06F9/3857
    • One embodiment of the present invention provides a system that avoids write-after-write (WAW) hazards while speculatively executing instructions. The system starts in a normal execution mode, wherein the system issues instructions for execution in program order. Upon encountering an unresolved data dependency during execution of an instruction, the system generates a checkpoint, defers the instruction, and executes subsequent instructions in an execute-ahead mode. During this execute-ahead mode, instructions that cannot be executed because of unresolved data dependencies are deferred, and other non-deferred instructions are executed in program order. If an unresolved data dependency is resolved during the execute-ahead mode, the system moves into a deferred mode wherein the system executes deferred instructions. While executing a deferred instruction, if dependency information for an associated destination register indicates that a WAW hazard potentially exists with a following non-deferred instruction, the system executes the deferred instruction to produce a result, and forwards the result to be used by subsequent instructions in a pipeline and/or deferred queue for the processor. The system does so without committing the result to the architectural state of the destination register. In this way, the system makes the result available to the subsequent instructions without overwriting a result produced by the following non-deferred instruction, thereby avoiding a WAW hazard.
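The defer/replay flow above can be walked through end to end in a few lines. This sketch uses an invented instruction representation of `(dest, srcs, fn)` tuples and a `pending` dict standing in for an outstanding load; it is a behavioral toy, not the patented pipeline.

```python
def execute_ahead(program, pending):
    """program: list of (dest_reg, src_regs, fn); pending: {reg: value}
    for data not yet available, e.g. an outstanding load."""
    regs = {}
    not_there = set(pending)   # per-register unresolved-dependency bits
    deferred = []              # the deferred queue
    written_ahead = set()      # regs written by younger, non-deferred insns

    # Execute-ahead mode: defer anything whose sources are not there.
    for dest, srcs, fn in program:
        if any(s in not_there for s in srcs):
            deferred.append((dest, srcs, fn))
            not_there.add(dest)           # unavailability propagates to dest
        else:
            regs[dest] = fn(*[regs[s] for s in srcs])
            written_ahead.add(dest)
            not_there.discard(dest)       # a later write makes the reg "there"

    # The dependency resolves: deferred mode, replay in program order.
    regs.update(pending)
    forwards = {}                         # forwarded-but-uncommitted results
    for dest, srcs, fn in deferred:
        result = fn(*[forwards.get(s, regs[s]) for s in srcs])
        if dest in written_ahead:
            forwards[dest] = result       # WAW hazard: forward, don't commit
        else:
            regs[dest] = result
    return regs

regs = execute_ahead(
    [("a", ["rload"], lambda x: x),       # deferred: load not back yet
     ("b", ["a"], lambda x: x + 1),       # deferred: depends on old "a"
     ("a", [], lambda: 100)],             # non-deferred younger write of "a"
    {"rload": 5})
assert regs["a"] == 100  # younger write preserved despite the replayed producer
assert regs["b"] == 6    # consumer saw the forwarded old value (5 + 1)
```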
    • 9. Invention patent application
    • Method and apparatus for suppressing duplicative prefetches for branch target cache lines
    • Publication number: US20060242365A1
    • Publication date: 2006-10-26
    • Application number: US11111654
    • Filing date: 2005-04-20
    • Inventors: Abid Ali, Paul Caprioli, Shailender Chaudhry, Miles Lee
    • IPC: G06F13/00
    • CPC: G06F9/3804, G06F9/3814, G06F9/3816, G06F12/0862
    • A system that suppresses duplicative prefetches for branch target cache lines. During operation, the system fetches a first cache line into a fetch buffer. The system then prefetches a second cache line, which immediately follows the first cache line, into the fetch buffer. If a control transfer instruction in the first cache line has a target instruction located in the second cache line, the system determines whether the control transfer instruction is also located at the end of the first cache line, so that the corresponding delay slot for the control transfer instruction falls at the beginning of the second cache line. If so, the system suppresses a subsequent prefetch for the target cache line containing the target instruction, because the target instruction is located in the second cache line, which has already been prefetched.
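The suppression test reduces to simple address arithmetic. The line and instruction sizes below are assumptions for illustration (64-byte lines, 4-byte fixed-width instructions, as in SPARC-style ISAs with delay slots); the patent itself does not fix these values.

```python
LINE_BYTES = 64   # assumed cache-line size
INSN_BYTES = 4    # assumed fixed instruction width

def line_of(addr):
    return addr // LINE_BYTES

def suppress_target_prefetch(branch_addr, target_addr):
    """True when prefetching the branch target's line would be duplicative:
    the branch is the last instruction of its line (so its delay slot starts
    the next line, which was already sequentially prefetched) and the target
    also lies in that next line."""
    last_slot = (branch_addr % LINE_BYTES) == LINE_BYTES - INSN_BYTES
    target_in_next = line_of(target_addr) == line_of(branch_addr) + 1
    return last_slot and target_in_next

assert suppress_target_prefetch(60, 68)       # branch ends line 0, target in line 1
assert not suppress_target_prefetch(56, 68)   # branch not in the last slot
assert not suppress_target_prefetch(60, 132)  # target beyond the prefetched line
```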
    • 10. Invention patent application
    • Method and apparatus for enforcing membar instruction semantics in an execute-ahead processor
    • Publication number: US20050273583A1
    • Publication date: 2005-12-08
    • Application number: US11083263
    • Filing date: 2005-03-16
    • Inventors: Paul Caprioli, Shailender Chaudhry, Marc Tremblay
    • IPC: G06F9/00, G06F9/30, G06F9/38, G06F9/45, G06F12/08
    • CPC: G06F9/30087, G06F9/3004, G06F9/3834, G06F9/3836, G06F9/3838, G06F9/384, G06F9/3842, G06F9/3857
    • One embodiment of the present invention provides a system that facilitates executing a memory barrier (membar) instruction in an execute-ahead processor, wherein the membar instruction forces buffered loads and stores to complete before allowing a following instruction to be issued. During operation in a normal-execution mode, the processor issues instructions for execution in program order. Upon encountering a membar instruction, the processor determines if the load buffer and store buffer contain unresolved loads and stores. If so, the processor defers the membar instruction and executes subsequent program instructions in execute-ahead mode. In execute-ahead mode, instructions that cannot be executed because of an unresolved data dependency are deferred, and other non-deferred instructions are executed in program order. When all stores and loads that precede the membar instruction have been committed to memory from the store buffer and the load buffer, the processor enters a deferred mode and executes the deferred instructions, including the membar instruction, in program order. If all deferred instructions have been executed, the processor returns to the normal-execution mode and resumes execution from the point where the execute-ahead mode left off.
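The membar handling above is a small mode machine: defer the membar while loads or stores are outstanding, keep issuing younger instructions, then drain and replay. A behavioral sketch with invented structure follows; as a simplification it records younger instructions before the replayed membar, whereas the real design replays all deferred instructions in program order.

```python
class MembarCore:
    """Toy mode machine for deferred membar semantics (names invented)."""

    def __init__(self):
        self.mode = "normal"
        self.load_buffer = []    # outstanding loads
        self.store_buffer = []   # uncommitted stores
        self.deferred = []       # deferred queue
        self.executed = []       # completed instructions, for inspection

    def issue(self, insn):
        if insn == "membar" and (self.load_buffer or self.store_buffer):
            # Unresolved loads/stores: defer the membar, run ahead.
            self.deferred.append(insn)
            self.mode = "execute-ahead"
        else:
            self.executed.append(insn)

    def buffers_drained(self):
        # Every load/store preceding the membar has committed: enter deferred
        # mode, replay the deferred queue, then resume normal execution.
        self.load_buffer.clear()
        self.store_buffer.clear()
        self.executed.extend(self.deferred)
        self.deferred.clear()
        self.mode = "normal"

core = MembarCore()
core.store_buffer.append("st [x]")   # an uncommitted store is outstanding
core.issue("membar")
assert core.mode == "execute-ahead"  # membar deferred, not stalled on
core.issue("add r1")                 # younger instruction still issues
core.buffers_drained()
assert core.mode == "normal"         # deferred mode done, normal mode resumed
assert "membar" in core.executed
```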