    • 1. Invention application
    • Title: ANTI-PREFETCH INSTRUCTION
    • Publication No.: US20090265532A1
    • Publication date: 2009-10-22
    • Application No.: US12104159
    • Filing date: 2008-04-16
    • Inventors/Applicants: Paul Caprioli, Sherman H. Yip, Gideon Levinsky
    • Main IPC: G06F9/38
    • IPC: G06F9/3802, G06F9/3004, G06F9/30047, G06F9/30087, G06F9/383, G06F9/3834, G06F9/3842, G06F9/3851, G06F9/3863, G06F9/3867, G06F12/0862
    • Abstract: Embodiments of the present invention execute an anti-prefetch instruction. These embodiments start by decoding instructions in a decode unit in a processor to prepare the instructions for execution. Upon decoding an anti-prefetch instruction, these embodiments stall the decode unit to prevent decoding subsequent instructions. These embodiments then execute the anti-prefetch instruction, wherein executing the anti-prefetch instruction involves: (1) sending a prefetch request for a cache line in an L1 cache; (2) determining if the prefetch request hits in the L1 cache; (3) if the prefetch request hits in the L1 cache, determining if the cache line contains a predetermined value; and (4) conditionally performing subsequent operations based on whether the prefetch request hits in the L1 cache or the value of the data in the cache line.
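The numbered steps in the abstract above can be sketched as a behavioral model (in Python, not hardware RTL). The cache contents, the PREDETERMINED sentinel, and the two outcome callbacks are illustrative assumptions, not details from the patent.

```python
PREDETERMINED = 0  # hypothetical sentinel value checked in step (3)

class L1Cache:
    """Toy L1 cache: maps line addresses to data values."""
    def __init__(self, lines):
        self.lines = dict(lines)

    def prefetch(self, addr):
        """Steps (1)/(2): issue a prefetch request; report hit or miss."""
        return addr in self.lines

def execute_anti_prefetch(cache, addr, on_condition_met, on_condition_not_met):
    # The decode unit is stalled while this executes; modeled implicitly
    # by running all steps to completion before returning.
    hit = cache.prefetch(addr)                       # steps (1) and (2)
    if hit and cache.lines[addr] == PREDETERMINED:   # step (3)
        return on_condition_met()                    # step (4), one outcome
    return on_condition_not_met()                    # step (4), other outcome

# Example: a hit on a line holding the sentinel takes the first path.
cache = L1Cache({0x40: 0, 0x80: 7})
result = execute_anti_prefetch(cache, 0x40, lambda: "retry", lambda: "proceed")
```

A miss (or a hit on a line without the sentinel) takes the second path, which is what makes the conditional behavior of step (4) observable.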
    • 2. Granted invention patent
    • Title: Anti-prefetch instruction
    • Publication No.: US08732438B2
    • Grant date: 2014-05-20
    • Application No.: US12104159
    • Filing date: 2008-04-16
    • Inventors/Applicants: Paul Caprioli, Sherman H. Yip, Gideon N. Levinsky
    • Main IPC: G06F9/30
    • IPC: G06F9/3802, G06F9/3004, G06F9/30047, G06F9/30087, G06F9/383, G06F9/3834, G06F9/3842, G06F9/3851, G06F9/3863, G06F9/3867, G06F12/0862
    • Abstract: Embodiments of the present invention execute an anti-prefetch instruction. These embodiments start by decoding instructions in a decode unit in a processor to prepare the instructions for execution. Upon decoding an anti-prefetch instruction, these embodiments stall the decode unit to prevent decoding subsequent instructions. These embodiments then execute the anti-prefetch instruction, wherein executing the anti-prefetch instruction involves: (1) sending a prefetch request for a cache line in an L1 cache; (2) determining if the prefetch request hits in the L1 cache; (3) if the prefetch request hits in the L1 cache, determining if the cache line contains a predetermined value; and (4) conditionally performing subsequent operations based on whether the prefetch request hits in the L1 cache or the value of the data in the cache line.
    • 3. Granted invention patent
    • Title: Pseudo-LRU cache line replacement for a high-speed cache
    • Publication No.: US08364900B2
    • Grant date: 2013-01-29
    • Application No.: US12029889
    • Filing date: 2008-02-12
    • Inventors/Applicants: Paul Caprioli, Sherman H. Yip, Shailender Chaudhry
    • Main IPC: G06F12/00
    • IPC: G06F12/125, G06F12/0864, Y02D10/13
    • Abstract: Embodiments of the present invention provide a system that replaces an entry in a least-recently-used way in a skewed-associative cache. The system starts by receiving a cache line address. The system then generates two or more indices using the cache line address. Next, the system generates two or more intermediate indices using the two or more indices. The system then uses at least one of the two or more indices or the two or more intermediate indices to perform a lookup in one or more lookup tables, wherein the lookup returns a value which identifies a least-recently-used way. Next, the system replaces the entry in the least-recently-used way.
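The replacement flow in the abstract above can be sketched as a simplified model, assuming a 2-way skewed-associative cache with a single pseudo-LRU lookup table. The hash functions, table layout, and set count are illustrative assumptions, not taken from the patent.

```python
NUM_SETS = 8  # sets per way; illustrative size

def skew_hash_0(addr):
    """First skewing function: generates the index for way 0."""
    return addr % NUM_SETS

def skew_hash_1(addr):
    """Second skewing function: generates a different index for way 1."""
    return ((addr // NUM_SETS) ^ addr) % NUM_SETS

class SkewedCache:
    def __init__(self):
        self.ways = [[None] * NUM_SETS for _ in range(2)]
        # Pseudo-LRU lookup table keyed by an intermediate index derived
        # from the per-way indices; each entry identifies the LRU way.
        self.plru = {}

    def replace(self, addr, data):
        i0, i1 = skew_hash_0(addr), skew_hash_1(addr)   # two indices
        key = (i0, i1)                                  # intermediate index
        victim_way = self.plru.get(key, 0)              # table lookup -> LRU way
        index = i0 if victim_way == 0 else i1           # each way has its own index
        self.ways[victim_way][index] = (addr, data)     # replace the LRU entry
        self.plru[key] = 1 - victim_way                 # the other way is now LRU
        return victim_way
```

Repeated replacements for conflicting addresses alternate between the two ways, which is the pseudo-LRU behavior the lookup table exists to provide.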
    • 7. Granted invention patent
    • Title: Mechanism for hardware tracking of return address after tail call elimination of return-type instruction
    • Publication No.: US07610474B2
    • Grant date: 2009-10-27
    • Application No.: US11352147
    • Filing date: 2006-02-10
    • Inventors/Applicants: Paul Caprioli, Sherman H. Yip, Shailender Chaudhry
    • Main IPC: G06F9/00
    • IPC: G06F9/3806, G06F9/3842, G06F9/3861
    • Abstract: A technique maintains return address stack (RAS) content and alignment of a RAS top-of-stack (TOS) pointer upon detection of a tail-call elimination of a return-type instruction. In at least one embodiment of the invention, an apparatus includes a processor pipeline and at least a first return address stack for maintaining a stack of return addresses associated with instruction flow at a first stage of the processor pipeline. The processor pipeline is configured to maintain the first return address stack unchanged in response to detection of a tail-call elimination sequence of one or more instructions associated with a first call-type instruction encountered by the first stage. The processor pipeline is configured to push a return address associated with the first call-type instruction onto the first return address stack otherwise.
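The RAS behavior described above can be sketched behaviorally: a normal call pushes its return address, while a call recognized as part of a tail-call elimination sequence leaves the stack untouched, so the later return still pops the original caller's address. The addresses and the boolean detection flag are illustrative assumptions; the patent's detector logic is not modeled.

```python
class ReturnAddressStack:
    def __init__(self):
        self.stack = []  # top-of-stack is the last element

    def handle_call(self, return_addr, tail_call_eliminated):
        if tail_call_eliminated:
            # Tail-call elimination: the callee will return directly to the
            # original caller, so the RAS and its TOS pointer stay unchanged.
            return
        self.stack.append(return_addr)  # normal call: push return address

    def handle_return(self):
        # Pop the predicted target for a return-type instruction.
        return self.stack.pop() if self.stack else None

ras = ReturnAddressStack()
ras.handle_call(0x1004, tail_call_eliminated=False)  # outer call pushes
ras.handle_call(0x2008, tail_call_eliminated=True)   # eliminated tail call: no push
predicted = ras.handle_return()  # predicts the outer caller's return address
```

Had the second call pushed 0x2008, the return prediction would target the eliminated frame and misalign the TOS pointer; skipping the push is the whole point of the mechanism.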
    • 8. Invention application
    • Title: FACILITATING TRANSACTIONAL EXECUTION IN A PROCESSOR THAT SUPPORTS SIMULTANEOUS SPECULATIVE THREADING
    • Publication No.: US20090254905A1
    • Publication date: 2009-10-08
    • Application No.: US12061554
    • Filing date: 2008-04-02
    • Inventors/Applicants: Sherman H. Yip, Paul Caprioli, Marc Tremblay
    • Main IPC: G06F9/46
    • IPC: G06F9/466, G06F9/3842, G06F9/3851, G06F12/0842
    • Abstract: Embodiments of the present invention provide a system that executes a transaction on a simultaneous speculative threading (SST) processor. In these embodiments, the processor includes a primary strand and a subordinate strand. Upon encountering a transaction with the primary strand while executing instructions non-transactionally, the processor checkpoints the primary strand and executes the transaction with the primary strand while continuing to non-transactionally execute deferred instructions with the subordinate strand. When the subordinate strand non-transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate the first strand ID. When the primary strand transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate a second strand ID.
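The per-cache-line bookkeeping in the abstract above can be sketched schematically. The strand-ID values, the register-state checkpoint format, and the method names are illustrative assumptions; conflict detection and transaction commit/abort are outside this sketch.

```python
SUBORDINATE_ID = 1  # "first strand ID": subordinate, non-transactional accesses
PRIMARY_ID = 2      # "second strand ID": primary, transactional accesses

class SSTProcessor:
    def __init__(self):
        self.line_records = {}  # cache line -> strand ID of last access
        self.checkpoint = None

    def begin_transaction(self, register_state):
        # Checkpoint the primary strand before transactional execution.
        self.checkpoint = dict(register_state)

    def subordinate_access(self, line):
        # Deferred instruction executed non-transactionally during the
        # transaction: tag the line with the first strand ID.
        self.line_records[line] = SUBORDINATE_ID

    def primary_access(self, line):
        # Transactional access by the primary strand: tag the line with
        # the second strand ID.
        self.line_records[line] = PRIMARY_ID
```

Tagging lines by strand ID is what lets the hardware later distinguish transactional from non-transactional footprints when the two strands run concurrently.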
    • 10. Granted invention patent
    • Title: Circuitry and method for accessing an associative cache with parallel determination of data and data availability
    • Publication No.: US07461208B1
    • Grant date: 2008-12-02
    • Application No.: US11155147
    • Filing date: 2005-06-16
    • Inventors/Applicants: Paul Caprioli, Sherman H. Yip, Shailender Chaudhry
    • Main IPC: G06F13/16
    • IPC: G06F12/0864, G06F2212/1016, G06F2212/6082
    • Abstract: A circuit for accessing an associative cache is provided. The circuit includes data selection circuitry and an outcome parallel processing circuit both in communication with the associative cache. The outcome parallel processing circuit is configured to determine whether an accessing of data from the associative cache is one of a cache hit, a cache miss, or a cache mispredict. The circuit further includes a memory in communication with the data selection circuitry and the outcome parallel processing circuit. The memory is configured to store a bank select table, whereby the bank select table is configured to include entries that define a selection of one of a plurality of banks of the associative cache from which to output data. Methods for accessing the associative cache are also described.
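The hit/miss/mispredict classification above can be sketched as a behavioral model: the bank select table predicts which bank holds the data so the data read can start in parallel with tag checking, and the outcome logic then classifies the access. The indexing scheme and the retraining-on-mispredict policy are illustrative assumptions, not details from the patent.

```python
HIT, MISS, MISPREDICT = "hit", "miss", "mispredict"

class AssociativeCache:
    def __init__(self, num_banks, table_size):
        self.banks = [{} for _ in range(num_banks)]  # bank -> {addr: data}
        self.bank_select = [0] * table_size          # predicted bank per index

    def access(self, addr):
        index = addr % len(self.bank_select)
        predicted = self.bank_select[index]
        # Data selection from the predicted bank proceeds in parallel with
        # the outcome determination below.
        data = self.banks[predicted].get(addr)
        actual = next((b for b, bank in enumerate(self.banks)
                       if addr in bank), None)
        if actual is None:
            return MISS, None            # line is in no bank
        if actual == predicted:
            return HIT, data             # predicted bank was correct
        # Wrong bank predicted: classify as mispredict and retrain the table.
        self.bank_select[index] = actual
        return MISPREDICT, self.banks[actual][addr]
```

The three-way outcome matters because a mispredict is cheaper than a miss: the data exists and only the bank selection must be corrected, whereas a miss requires a fill from the next cache level.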