    • 2. Published Application (发明申请)
    • Title: CACHE LINE REPLACEMENT TECHNIQUES ALLOWING CHOICE OF LFU OR MFU CACHE LINE REPLACEMENT
    • Publication No.: US20080147982A1
    • Publication Date: 2008-06-19
    • Application No.: US11523485
    • Filing Date: 2006-09-19
    • Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
    • IPC: G06F12/00
    • CPC: H04W12/06, G06F12/122, G06F12/127, H04L63/0853, Y02D10/13
    • Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines among a number of cache lines in a cache, the new state information based at least in part on prior state information for those cache lines. When an access miss occurs in one of the given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the given cache lines based on the new state information and the selected criteria. Additionally, a cache for replacing MFU cache lines is disclosed. The cache comprises MFU circuitry (1) adapted to produce new state information for the given cache lines in response to an access to one of them, and (2) adapted, when a cache miss occurs in one of the given cache lines, to determine from the new state information which of those cache lines is the most frequently used.
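The selectable LFU/MFU replacement described in this abstract can be illustrated with a toy model (a hypothetical Python sketch, not the patented circuitry; all names are illustrative):

```python
# Toy model of selectable LFU/MFU cache line replacement.
# Each line in a set carries a use counter (the "state information");
# on a miss, the victim is the least- or most-frequently-used line,
# depending on the selected replacement criteria.

class CacheSet:
    def __init__(self, num_lines):
        # (tag, use_count) per line; a None tag marks an empty line
        self.lines = [[None, 0] for _ in range(num_lines)]

    def access(self, tag, policy="LFU"):
        """Return True on a hit; on a miss, evict per the chosen policy."""
        for line in self.lines:
            if line[0] == tag:
                line[1] += 1          # new state: bump the use counter
                return True
        # miss: pick a victim by the selected replacement criteria
        if policy == "LFU":
            victim = min(self.lines, key=lambda l: l[1])
        else:                          # "MFU"
            victim = max(self.lines, key=lambda l: l[1])
        victim[0], victim[1] = tag, 1  # install the new line
        return False
```

For example, after two accesses to line A and one to B in a two-way set, a miss under the MFU policy evicts A (the most-used line) rather than B.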
    • 3. Granted Patent (发明授权)
    • Title: Method and apparatus for reducing logic activity in a microprocessor using reduced bit width slices that are enabled or disabled depending on operation width
    • Publication No.: US06948051B2
    • Publication Date: 2005-09-20
    • Application No.: US09855241
    • Filing Date: 2001-05-15
    • Inventors: Jude A. Rivers, Jaime H. Moreno, Vinodh R. Cuppu
    • IPC: G06F9/30, G06F9/302, G06F9/38, G06F9/318
    • CPC: G06F9/3891, G06F9/30014, G06F9/30036, G06F9/30112, G06F9/3012, G06F9/3016, G06F9/30192, G06F9/3887
    • Abstract: A method and apparatus for reducing logic activity in a microprocessor that examines every instruction before it is executed and determines in advance the minimum datapath width (in byte or half-word quantities) necessary to accurately execute the operation. Achieving this requires two major enhancements to a traditional microprocessor pipeline. First, extra logic (potentially an extra pipeline stage for determining an operation's effective bit width — the WD width detection logic) is introduced between the Decode and Execution stages. Second, the traditional Execution stage architecture (including a register file RF and the arithmetic logical unit ALU), instead of being organized as one continuous 32-bit unit, is organized as a collection of multiple slices, where a slice can be of an 8-bit (byte) or 16-bit (double-byte) granularity. Each slice can operate independently of every other slice and includes a portion of the register file, functional unit, and cache memory. Concatenating multiple such slices creates a full-width processor.
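The width-detection step described above can be sketched as follows (a hypothetical Python model of the idea, not the patent's WD logic; `slices_needed` and `add_sliced` are illustrative names, and the extra slice reserved for the carry-out is an assumption):

```python
# Toy model of operation-width detection with 8-bit datapath slices:
# inspect the operands before execution and enable only the slices needed.

SLICE_BITS = 8

def slices_needed(*operands, full_width=32):
    """Smallest number of byte slices that holds every operand (plus carry)."""
    widest = max(max(op.bit_length(), 1) for op in operands) + 1  # +1 for carry out
    n = -(-widest // SLICE_BITS)                # ceiling division
    return min(n, full_width // SLICE_BITS)

def add_sliced(a, b, full_width=32):
    """Run the add on just the enabled slices; the remaining slices stay idle."""
    n = slices_needed(a, b, full_width=full_width)
    mask = (1 << (n * SLICE_BITS)) - 1
    return (a + b) & mask, n                    # result plus slices activated
```

Adding two small operands such as 5 and 7 activates a single byte slice; the other three slices of a 32-bit datapath remain disabled, which is the source of the power savings.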
    • 4. Granted Patent (发明授权)
    • Title: Method and apparatus for history-based movement of shared-data in coherent cache memories of a multiprocessor system using push prefetching
    • Publication No.: US06711651B1
    • Publication Date: 2004-03-23
    • Application No.: US09655642
    • Filing Date: 2000-09-05
    • Inventors: Jaime H. Moreno, Jude A. Rivers, John-David Wellman
    • IPC: G06F13/00
    • CPC: G06F12/0862, G06F12/0815, G06F2212/6024
    • Abstract: A method and apparatus are provided for moving instructions and/or operand data among a plurality of caches in a multiprocessor computer system, wherein each cache belongs to one of the system's processing nodes, so as to provide history-based movement of shared data in coherent cache memories. A plurality of entries are stored in a consume-after-produce (CAP) table attached to each of the caches. Each entry is associated with a plurality of storage elements in one of the caches and records prior usage of those storage elements by each of the processing nodes. Upon a miss by a processing node in its cache, the storage elements that caused the miss are transferred to the cache from main memory or from another cache, and an entry associated with those storage elements is created in the table. A push prefetching engine may be used to create the entry.
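The consume-after-produce history described in this abstract can be modeled with a small sketch (a deliberately simplified, hypothetical Python version of the CAP table; the patent's actual entry format and push engine are more elaborate):

```python
# Toy model of a consume-after-produce (CAP) table: on a miss, record
# which node missed on which line; later, when a producer writes that
# line again, push it to the previously recorded consumers.

from collections import defaultdict

class CAPTable:
    def __init__(self):
        self.consumers = defaultdict(set)   # line address -> nodes that missed on it

    def record_miss(self, node, addr):
        """A miss by `node` on `addr` creates or extends the CAP entry."""
        self.consumers[addr].add(node)

    def push_targets(self, producer, addr):
        """Nodes to push `addr` to after `producer` writes it."""
        return self.consumers[addr] - {producer}
```

The history replaces demand misses with pushes: once nodes 1 and 2 have each missed on a line, a subsequent write by node 0 can push the line to both before they ask for it.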
    • 5. Granted Patent (发明授权)
    • Title: Method and apparatus for reducing encoding needs and ports to shared resources in a processor
    • Publication No.: US06704855B1
    • Publication Date: 2004-03-09
    • Application No.: US09585766
    • Filing Date: 2000-06-02
    • Inventors: Erik R. Altman, Jaime H. Moreno, Mayan Moudgill
    • IPC: G06F9/30
    • CPC: G06F9/30098, G06F9/30141, G06F9/3016, G06F9/3824, G06F9/3853
    • Abstract: The present invention relates to a method for accessing elements of a shared resource for use by consumers that perform actions according to corresponding operations. The method creates a packet of operations to be processed simultaneously, in which the shared-resource elements used by the operations are specified by source and destination identifier fields shared among the operations, such that the total number of elements used by the operations does not exceed the number of identifiers available in the packet. The method reads the elements from the shared resource according to the shared identifier fields specified in the packet, and decodes the number of elements needed by each operation by passing the operations to an operation decoder with a routing scheme defined by the operations' needs. Finally, according to a routing signal from the operation decoder, the method routes the elements to the consumers performing the operations and the resulting values back to the shared resource.
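The shared-identifier packet can be sketched roughly as follows (a hypothetical Python model with an illustrative encoding; the patent's actual packet format, decoder, and routing signals are not reproduced here):

```python
# Toy model: operations in a packet share one small pool of register
# identifiers instead of each carrying its own full source/destination
# fields. Each operation's fields are indices into the shared pool.

def decode_packet(packet, regfile):
    """packet = {"ids": [reg numbers], "ops": [(opcode, src_idx, src_idx, dst_idx)]}"""
    ids = packet["ids"]
    results = {}
    for opcode, a, b, d in packet["ops"]:
        x, y = regfile[ids[a]], regfile[ids[b]]   # read via shared identifiers
        if opcode == "add":
            results[ids[d]] = x + y
        elif opcode == "mul":
            results[ids[d]] = x * y
    regfile.update(results)                        # route results back
    return regfile
```

Here two operations name four registers through one shared four-entry identifier list (both reuse source indices 0 and 1), so the packet needs fewer identifier bits and fewer register-file ports than two independently encoded operations would.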
    • 10. Granted Patent (发明授权)
    • Title: Selective bypassing of a multi-port register file
    • Publication No.: US07051186B2
    • Publication Date: 2006-05-23
    • Application No.: US10230492
    • Filing Date: 2002-08-29
    • Inventors: Sameh Asaad, Jaime H. Moreno, Victor Zyuban
    • IPC: G06F15/82, G06F9/305
    • CPC: G06F9/3826, G06F9/30109
    • Abstract: A multi-port register file may be selectively bypassed such that any element of a result vector is bypassed to the same index of an input vector of a succeeding operation when the element is requested by the succeeding operation at the same index at which it was generated. Alternatively, the results to be placed in the register file may be bypassed to a succeeding operation when the N elements that dynamically compose a vector are requested as inputs to the next operation in exactly the order in which they were generated; that is, for the purposes of bypassing, the N vector elements are treated as a single entity. Similar rules apply to the write-through path.
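Both bypass rules in this abstract reduce to simple index comparisons, sketched below (a hypothetical Python model of the conditions only, not the bypass network itself):

```python
# Toy model of the two bypass rules:
#  - per-element: forward a result element only when the next operation
#    requests it at the same index at which it was produced;
#  - whole-vector: forward all N elements only when they are requested
#    in exactly the order in which they were generated.

def bypass_mask(produced_indices, requested_indices):
    """Per element: True -> take the bypass path, False -> read the register file."""
    return [p == r for p, r in zip(produced_indices, requested_indices)]

def all_elements_bypassable(produced_indices, requested_indices):
    """Vector-as-one-entity rule: bypass only if every element matches in order."""
    return produced_indices == requested_indices
```

For instance, with results produced at indices [0, 1, 2] and the next operation requesting [0, 2, 1], only the element at index 0 may take the bypass path; the other two must be read from the register file.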