    • 1. Invention application
    • Title: Lookahead mode sequencer
    • Publication No.: US20060184772A1
    • Published: 2006-08-17
    • Application No.: US11055862
    • Filed: 2005-02-11
    • Inventors: Miles Dooley, Scott Frommer, Hung Le, Sheldon Levenstein, Anthony Saporito
    • IPC: G06F9/30
    • CPC: G06F9/3836, G06F9/3855, G06F9/3857, G06F9/3867
    • Abstract: A method, system, and computer program product for enhancing performance of an in-order microprocessor with long stalls. In particular, the mechanism of the present invention provides a data structure for storing data within the processor, including information used by the processor. The data structure includes a group of bits to keep track of which instructions preceded a rejected instruction (and therefore will be allowed to complete) and which instructions follow the rejected instruction. The group of bits comprises a bit indicating whether a reject was a fast or slow reject, and a bit for each cycle that represents a state of an instruction passing through a pipeline. The processor speculatively continues to execute a set bit's corresponding instruction during stalled periods in order to generate addresses that will be needed when the stall period ends and normal dispatch resumes.
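The reject-tracking bit group this abstract describes can be sketched as follows. This is a purely illustrative software model; the class, method, and field names (`RejectTracker`, `stage_bits`, `fast_reject`, and so on) are my own, not taken from the patent, which describes a hardware structure.

```python
class RejectTracker:
    """Illustrative model of the patent's bit group: one bit per pipeline
    stage marking which in-flight instructions preceded a rejected
    instruction (and may complete), plus a fast/slow reject indicator."""

    def __init__(self, pipeline_depth):
        self.depth = pipeline_depth
        self.fast_reject = False                     # fast vs. slow reject
        self.stage_bits = [False] * pipeline_depth   # one bit per cycle/stage

    def reject(self, reject_stage, fast):
        """Record a reject at `reject_stage`. Stages ahead of it keep their
        bits set, so their instructions are allowed to complete."""
        self.fast_reject = fast
        for stage in range(self.depth):
            # Older instructions (closer to completion) may finish; the
            # rejected instruction and everything behind it must replay.
            self.stage_bits[stage] = stage < reject_stage

    def may_complete(self, stage):
        """True if the instruction at `stage` preceded the reject point."""
        return self.stage_bits[stage]
```

During the stall, the processor would keep speculatively executing instructions whose bit is set, so that the addresses they compute are ready when normal dispatch resumes.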
    • 5. Invention application
    • Title: System and method for tracking changes in L1 data cache directory
    • Publication No.: US20060179221A1
    • Published: 2006-08-10
    • Application No.: US11054273
    • Filed: 2005-02-09
    • Inventors: Sheldon Levenstein, Anthony Saporito
    • IPC: G06F12/00
    • CPC: G06F12/0855
    • Abstract: Method, system and computer program product for tracking changes in an L1 data cache directory. A method for tracking changes in an L1 data cache directory determines if data to be written to the L1 data cache is to be written to an address to be changed from an old address to a new address. If it is determined that the data to be written is to be written to an address to be changed, a determination is made whether the data to be written is associated with the old address or the new address. If it is determined that the data is to be written to the new address, the data is allowed to be written to the new address following a prescribed delay after the address to be changed is changed. The method is preferably implemented in a system that provides a Store Queue (STQU) design that includes a Content Addressable Memory (CAM)-based store address tracking mechanism with early and late write CAM ports. The method eliminates time windows and the need for an extra copy of the L1 data cache directory.
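The decision flow in this abstract can be sketched as a small function. This is a behavioral sketch only; the function name, parameters, and return convention are my own illustrative choices, and the real mechanism is the CAM-based hardware store queue the abstract names, not software.

```python
def resolve_store(write_addr, changing, old_addr, new_addr, delay_cycles):
    """Decide when a store to the L1 data cache may proceed, following the
    abstract's flow. Returns (allowed_now, wait_cycles):
    - address not in transition, or still the old address: write immediately
    - targets the new address: wait the prescribed delay after the change."""
    if not changing or write_addr not in (old_addr, new_addr):
        return True, 0          # directory entry is stable for this address
    if write_addr == old_addr:
        return True, 0          # data is associated with the old address
    # Data is associated with the new address: delay the write until the
    # prescribed number of cycles after the directory change completes.
    return False, delay_cycles
```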
    • 8. Granted invention patent
    • Title: Cache set selective power up
    • Publication No.: US08972665B2
    • Published: 2015-03-03
    • Application No.: US13524574
    • Filed: 2012-06-15
    • Inventors: Brian R. Prasky, Anthony Saporito, Aaron Tsai
    • IPC: G06F1/32, G06F21/81, G06F17/30
    • CPC: G06F1/3275, G06F9/3802, G06F12/0864, G06F17/30982, G06F2212/1028, G06F2212/6082, Y02D10/13
    • Abstract: Embodiments of the disclosure include selectively powering up a cache set of a multi-set associative cache by receiving an instruction fetch address and determining that the instruction fetch address corresponds to one of a plurality of entries of a content addressable memory. Based on that determination, the cache set of the multi-set associative cache that contains the cache line referenced by the instruction fetch address is identified, and only a subset of the cache is powered up. Based on the identified cache set not being powered up, the identified cache set of the multi-set associative cache is selectively powered up, and one or more instructions stored in the cache line referenced by the instruction fetch address are transmitted to a processor.
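The set-selective power-up idea can be sketched as below. The CAM is modeled as a plain dictionary and the names (`SetPredictor`, `fetch`, `powered`) are illustrative assumptions of mine; the patent describes hardware, and the fallback-on-miss behavior here is one plausible policy, not necessarily the claimed one.

```python
class SetPredictor:
    """Illustrative model: a CAM-like table maps fetch addresses to the
    cache set holding their line, so only that set need be powered up."""

    def __init__(self):
        self.cam = {}        # fetch address -> set index holding the line
        self.powered = set() # indices of currently powered-up cache sets

    def fetch(self, addr, num_sets):
        """Return the set index for `addr` if the CAM knows it, powering up
        only that set; on a CAM miss, power up every set (fallback)."""
        set_idx = self.cam.get(addr)
        if set_idx is None:
            # No CAM entry: the line's set is unknown, so all sets are
            # powered up for the lookup (assumed fallback policy).
            self.powered.update(range(num_sets))
            return None
        if set_idx not in self.powered:
            self.powered.add(set_idx)   # selectively power up just this set
        return set_idx
```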
    • 9. Granted invention patent
    • Title: Mitigating lookahead branch prediction latency by purposely stalling a branch instruction until a delayed branch prediction is received or a timeout occurs
    • Publication No.: US08874885B2
    • Published: 2014-10-28
    • Application No.: US12029543
    • Filed: 2008-02-12
    • Inventors: James J. Bonanno, David S. Hutton, Brian R. Prasky, Anthony Saporito
    • IPC: G06F9/30, G06F9/38
    • CPC: G06F9/3844, G06F9/3806, G06F9/3836, G06F9/3848
    • Abstract: Embodiments relate to mitigation of lookahead branch prediction latency. An aspect includes receiving an instruction address in an instruction cache for fetching instructions in a microprocessor pipeline. Another aspect includes receiving the instruction address in a branch presence predictor coupled to the microprocessor pipeline. Another aspect includes determining, by the branch presence predictor, presence of a branch instruction in the instructions being fetched, wherein the branch instruction is predictable by the branch target buffer, and any indication of the instruction address not written to the branch target buffer is also not written to the branch presence predictor. Another aspect includes, based on receipt of an indication that the branch instruction is present from the branch presence predictor, holding the branch instruction. Another aspect includes, based on receipt of a branch prediction corresponding to the branch instruction from the branch target buffer, releasing said held branch instruction to the pipeline.
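The hold-until-prediction-or-timeout behavior named in the title can be sketched as a tiny cycle loop. The function name and the callback-based interface are my own illustrative choices; the patent's mechanism is a hardware stall, and the timeout value would be a design parameter.

```python
def hold_branch(prediction_ready, timeout_cycles):
    """Purposely stall a detected branch until its delayed prediction
    arrives, or until `timeout_cycles` expire, whichever comes first.
    `prediction_ready(cycle)` models the branch target buffer's delayed
    response. Returns the cycle at which the branch is released."""
    for cycle in range(timeout_cycles):
        if prediction_ready(cycle):
            return cycle            # prediction arrived: release with it
    return timeout_cycles           # timed out: release without a prediction
```

The trade-off being modeled: holding the branch a few cycles lets a slow but accurate lookahead prediction steer the fetch, while the timeout bounds the worst-case stall if the prediction never arrives.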