    • 62. Invention application
    • ANTI-PREFETCH INSTRUCTION
    • US20090265532A1
    • 2009-10-22
    • US12104159
    • 2008-04-16
    • Paul Caprioli; Sherman H. Yip; Gideon Levinsky
    • Paul Caprioli; Sherman H. Yip; Gideon Levinsky
    • G06F9/38
    • G06F9/3802; G06F9/3004; G06F9/30047; G06F9/30087; G06F9/383; G06F9/3834; G06F9/3842; G06F9/3851; G06F9/3863; G06F9/3867; G06F12/0862
    • Embodiments of the present invention execute an anti-prefetch instruction. These embodiments start by decoding instructions in a decode unit in a processor to prepare the instructions for execution. Upon decoding an anti-prefetch instruction, these embodiments stall the decode unit to prevent decoding subsequent instructions. These embodiments then execute the anti-prefetch instruction, wherein executing the anti-prefetch instruction involves: (1) sending a prefetch request for a cache line in an L1 cache; (2) determining if the prefetch request hits in the L1 cache; (3) if the prefetch request hits in the L1 cache, determining if the cache line contains a predetermined value; and (4) conditionally performing subsequent operations based on whether the prefetch request hits in the L1 cache or the value of the data in the cache line.
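The abstract of US20090265532A1 above describes a hardware decode/execute sequence. The C sketch below is only a rough software model of steps (1) through (4), written under the assumption of a direct-mapped L1 with one data word per line; the types and functions (l1_line_t, l1_prefetch, execute_anti_prefetch) are hypothetical and are not the patented implementation.

```c
/* Minimal software model of the anti-prefetch behaviour described in the
 * abstract of US20090265532A1. All names here are hypothetical; the patent
 * describes a pipeline mechanism, not a C API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define L1_LINES 64

typedef struct {
    bool     valid;
    uint64_t tag;
    uint64_t data;   /* first word of the cache line, for simplicity */
} l1_line_t;

static l1_line_t l1[L1_LINES];

/* Sends a prefetch request and reports whether it hits in the L1.
 * A real L1 would initiate a fill on a miss; this model only reports
 * whether the requested line is already present. */
static bool l1_prefetch(uint64_t addr, l1_line_t **line_out)
{
    l1_line_t *line = &l1[(addr / 64) % L1_LINES];
    *line_out = line;
    return line->valid && line->tag == addr / 64;
}

/* Models steps (1)-(4): prefetch, test for a hit, compare against a
 * predetermined value, and report which conditional path the stalled
 * decode unit should resume on. */
static bool execute_anti_prefetch(uint64_t addr, uint64_t predetermined)
{
    l1_line_t *line;
    bool hit = l1_prefetch(addr, &line);      /* (1), (2) */
    if (hit && line->data == predetermined)   /* (3) */
        return true;                          /* (4) "hit and value matched" path */
    return false;                             /* (4) other path */
}

int main(void)
{
    l1[1].valid = true; l1[1].tag = 1; l1[1].data = 0;   /* line for addr 64 */
    printf("addr 64 -> %s\n",
           execute_anti_prefetch(64, 0) ? "hit, value matched" : "miss or mismatch");
    return 0;
}
```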
    • 66. Invention application
    • METHOD AND APPARATUS FOR IMPROVING TRANSACTIONAL MEMORY COMMIT LATENCY
    • US20090182956A1
    • 2009-07-16
    • US12014217
    • 2008-01-15
    • Paul Caprioli; Martin Karlsson; Sherman H. Yip
    • Paul Caprioli; Martin Karlsson; Sherman H. Yip
    • G06F9/46; G06F12/08
    • G06F12/084; G06F9/30087; G06F9/3834; G06F9/3857; G06F9/467; G06F12/126; G06F2212/1016
    • Embodiments of the present invention provide a system that executes transactions on a processor that supports transactional memory. The system starts by executing the transaction on the processor. During execution of the transactions, the system places stores in a store buffer. In addition, the system sets a stores_encountered indicator when a first store is placed in the store buffer during the transaction. Upon completing the transaction, the system determines if the stores_encountered indicator is set. If so, the system signals a cache to commit the stores placed in the store buffer during the transaction to the cache and then resumes execution of program code following the transaction when the stores have been committed. Otherwise, the system resumes execution of program code following the transaction without signaling the cache.
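As a rough illustration of the commit path described in the abstract of US20090182956A1 above, the C model below buffers stores, sets a stores_encountered flag on the first buffered store, and signals the cache at commit only when that flag is set. The txn_t structure and function names are assumptions made for this sketch, not the hardware design.

```c
/* Minimal software model of the commit path in US20090182956A1. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define STORE_BUF_SIZE 16

typedef struct { unsigned long addr, value; } store_t;

typedef struct {
    store_t store_buf[STORE_BUF_SIZE];
    size_t  n_stores;
    bool    stores_encountered;   /* set when the first store is buffered */
} txn_t;

static void txn_store(txn_t *t, unsigned long addr, unsigned long value)
{
    t->store_buf[t->n_stores++] = (store_t){ addr, value };
    t->stores_encountered = true;
}

/* Stand-in for signalling the cache; here it just drains the buffer. */
static void cache_commit_stores(txn_t *t)
{
    printf("committing %zu buffered store(s) to the cache\n", t->n_stores);
    t->n_stores = 0;
}

static void txn_commit(txn_t *t)
{
    if (t->stores_encountered)    /* signal the cache only if needed ... */
        cache_commit_stores(t);   /* ... and wait until the stores commit */
    /* otherwise resume immediately, skipping the cache handshake */
    printf("resuming program code after the transaction\n");
}

int main(void)
{
    txn_t read_only = {0}, with_store = {0};
    txn_commit(&read_only);             /* no stores: no cache signal */
    txn_store(&with_store, 0x1000, 42);
    txn_commit(&with_store);            /* stores: cache commit first */
    return 0;
}
```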
    • 69. Invention grant
    • Deferring loads and stores when a load buffer or store buffer fills during execute-ahead mode
    • US07293161B1
    • 2007-11-06
    • US11106180
    • 2005-04-13
    • Shailender Chaudhry; Paul Caprioli; Marc Tremblay
    • Shailender Chaudhry; Paul Caprioli; Marc Tremblay
    • G06F9/48
    • G06F9/383; G06F9/30181; G06F9/3814; G06F9/3834; G06F9/3836; G06F9/3838; G06F9/3842; G06F9/3863; G06F12/0862
    • One embodiment of the present invention provides a system that facilitates deferring execution of instructions with unresolved data dependencies as they are issued for execution in program order. During a normal execution mode, the system issues instructions for execution in program order. Upon encountering an unresolved data dependency during execution of an instruction, the system generates a checkpoint that can subsequently be used to return execution of the program to the point of the instruction. Next, the system executes the instruction and subsequent instructions in an execute-ahead mode, wherein instructions that cannot be executed because of an unresolved data dependency are deferred, and wherein other non-deferred instructions are executed in program order. Upon encountering a store during the execute-ahead mode, the system determines if the store buffer is full. If so, the system prefetches a cache line for the store, and defers execution of the store.
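The following C sketch models only the store-handling rule from the abstract of US07293161B1 above: when the store buffer is full during execute-ahead mode, the store's cache line is prefetched and the store itself is deferred. Buffer sizes, structures, and function names are illustrative assumptions, not the patented design.

```c
/* Minimal model of the full-store-buffer rule in US07293161B1. */
#include <stddef.h>
#include <stdio.h>

#define STORE_BUF_SIZE 4
#define DEFER_Q_SIZE   32

typedef struct { unsigned long addr, value; } store_t;

static store_t store_buf[STORE_BUF_SIZE];
static size_t  store_buf_used;

static store_t deferred_q[DEFER_Q_SIZE];
static size_t  deferred_used;

/* Stand-in for issuing a prefetch so the line is ready when the deferred
 * store is eventually replayed. */
static void prefetch_line(unsigned long addr)
{
    printf("prefetching cache line for address 0x%lx\n", addr);
}

static void handle_store_execute_ahead(unsigned long addr, unsigned long value)
{
    if (store_buf_used == STORE_BUF_SIZE) {
        prefetch_line(addr);                                    /* warm the cache */
        deferred_q[deferred_used++] = (store_t){ addr, value }; /* defer the store */
        printf("store buffer full: store to 0x%lx deferred\n", addr);
    } else {
        store_buf[store_buf_used++] = (store_t){ addr, value };
        printf("store to 0x%lx buffered\n", addr);
    }
}

int main(void)
{
    for (unsigned long i = 0; i < 6; i++)
        handle_store_execute_ahead(0x1000 + 64 * i, i);
    return 0;
}
```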
    • 70. Invention grant
    • Mechanism for eliminating the restart penalty when reissuing deferred instructions
    • US07293160B2
    • 2007-11-06
    • US11058521
    • 2005-02-14
    • Shailender Chaudhry; Paul Caprioli; Marc Tremblay
    • Shailender Chaudhry; Paul Caprioli; Marc Tremblay
    • G06F9/30; G06F9/40
    • G06F9/3842; G06F9/3836; G06F9/3838; G06F9/384; G06F9/3857; G06F9/3863
    • One embodiment of the present invention provides a system which facilitates eliminating a restart penalty when reissuing deferred instructions in a processor that supports speculative-execution. During a normal execution mode, the system issues instructions for execution in program order, wherein issuing the instructions involves decoding the instructions. Upon encountering an unresolved data dependency during execution of an instruction, the processor performs a checkpointing operation and executes subsequent instructions in an execute-ahead mode, wherein instructions that cannot be executed because of the unresolved data dependency are deferred, and wherein other non-deferred instructions are executed in program order. When an unresolved data dependency is resolved during execute-ahead mode, the processor begins to execute the deferred instructions in a deferred mode. In doing so, the processor initially issues deferred instructions, which have already been decoded, from a deferred queue. Simultaneously, the processor feeds instructions from a deferred SRAM into the decode unit, and these instructions eventually pass into the deferred queue. In this way, at the start of deferred mode, deferred instructions can issue from the deferred queue without having to pass through the decode unit, thereby providing time for deferred instructions from the deferred SRAM to progress through a decode unit in order to read input values for the decoded instruction, but not to be re-decoded.
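To illustrate the overlap described in the abstract of US07293160B2 above, the C sketch below issues already-decoded instructions from a deferred queue while, in the same loop iteration, an instruction from the deferred SRAM passes through a stand-in decode stage (operand read only) and refills the queue. All names and sizes are hypothetical; real hardware does this with parallel pipeline structures rather than a loop.

```c
/* Minimal model of the deferred-mode start-up overlap in US07293160B2. */
#include <stdio.h>

#define QUEUE_SIZE 4
#define SRAM_SIZE  8

typedef struct { int id; } insn_t;

static insn_t deferred_q[QUEUE_SIZE + SRAM_SIZE];
static int q_head, q_tail;

static insn_t deferred_sram[SRAM_SIZE];
static int sram_next;

/* Issues an already-decoded deferred instruction without re-decoding it. */
static void issue(insn_t i)
{
    printf("issue deferred insn %d (no re-decode)\n", i.id);
}

/* The decode unit is used only to read input values for the
 * already-decoded instruction, not to decode it again. */
static insn_t read_operands(insn_t i)
{
    printf("decode-stage operand read for insn %d\n", i.id);
    return i;
}

int main(void)
{
    /* The queue holds the first few decoded deferred instructions;
     * the rest sit in the deferred SRAM. */
    for (int i = 0; i < QUEUE_SIZE; i++) deferred_q[q_tail++] = (insn_t){ i };
    for (int i = 0; i < SRAM_SIZE; i++)  deferred_sram[i] = (insn_t){ QUEUE_SIZE + i };

    /* Each iteration: issue from the queue and, in parallel, feed one SRAM
     * instruction through the decode stage into the queue, so issue never
     * waits for decode at the start of deferred mode. */
    while (q_head < q_tail || sram_next < SRAM_SIZE) {
        if (q_head < q_tail)
            issue(deferred_q[q_head++]);
        if (sram_next < SRAM_SIZE)
            deferred_q[q_tail++] = read_operands(deferred_sram[sram_next++]);
    }
    return 0;
}
```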