    • 1. Granted Invention Patent
    • Dual cache directories with respective queue independently executing its content and allowing staggered write operations
    • Publication number: US6085288A
    • Publication date: 2000-07-04
    • Application number: US839556
    • Filing date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • IPC: G06F12/16; G06F12/08; G06F12/00
    • CPC: G06F12/0831
    • A method of storing values in a cache used by a processor of a computer system, the cache having two or more cache directories. An address tag associated with the memory block is written into a first cache directory during an initial processor cycle, the address tag is written into a second cache directory during the next or subsequent processor cycle. Another address tag associated with a different memory block may be read from the second cache directory during the initial processor cycle. Additionally, another address tag associated with yet a different memory block may be read from the first cache directory during the subsequent processor cycle. A write operation for the address tag may be placed into a write queue of the first cache directory, prior to writing the address tag into the first cache directory, and the same write operation may be placed into a write queue of the second cache directory, prior to said step of writing the address tag into the second cache directory; the write queue of the second cache directory executes its contents independently of the write queue of the first cache directory. This staggered writing ability imparts greater flexibility in carrying out write operations for a cache having multiple directories, thereby increasing performance.
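To make the staggered-write idea above concrete, here is a minimal Python sketch (my illustration, not code from the patent): one write operation is queued in both directory copies, each queue drains independently, and while one copy performs the write the other stays free to service a read. Class and method names such as `CacheDirectory` and `drain_one` are assumptions for the sketch.

```python
from collections import deque

class CacheDirectory:
    """One copy of the cache directory: a tag array plus its own write queue."""
    def __init__(self, n_entries):
        self.tags = [None] * n_entries
        self.write_queue = deque()

    def enqueue_write(self, index, tag):
        self.write_queue.append((index, tag))

    def drain_one(self):
        # Execute at most one queued write this cycle, independently of the other copy.
        if self.write_queue:
            index, tag = self.write_queue.popleft()
            self.tags[index] = tag

    def read(self, index):
        return self.tags[index]

# Two directory copies; every tag write is placed in both write queues.
dir_a, dir_b = CacheDirectory(8), CacheDirectory(8)

def queue_tag_write(index, tag):
    dir_a.enqueue_write(index, tag)
    dir_b.enqueue_write(index, tag)

queue_tag_write(3, 0xBEEF)
dir_a.drain_one()                 # cycle 1: copy A writes the tag
tag_elsewhere = dir_b.read(5)     # ...while copy B still services a read
dir_b.drain_one()                 # cycle 2: copy B catches up; copy A is now readable
assert dir_a.read(3) == dir_b.read(3) == 0xBEEF
```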
    • 2. Granted Invention Patent
    • Dynamic updating of repair mask used for cache defect avoidance
    • Publication number: US6006311A
    • Publication date: 1999-12-21
    • Application number: US839559
    • Filing date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • IPC: G06F13/00; G11C29/00
    • CPC: G11C29/88; G06F12/126; G11C29/76; G06F2212/1032
    • A method of dynamically avoiding defective cache lines in a cache used by a processor of a computer system is disclosed. A repair mask is used, having an array of bit fields each corresponding to a cache line in the cache, and certain bit fields in the repair mask array are initially set to indicate that a group of corresponding cache lines are defective. Thereafter the repair mask is updated by setting additional bit fields in the repair mask array to indicate that an additional group of corresponding cache lines are defective. Access to all defective cache lines is prevented based on the corresponding bit fields in the repair mask array. The initial setting of certain bit fields can take place at fabrication of the cache chip in response to testing of the cache lines. Additionally, the repair mask may be updated each time the computer system is booted in response to testing by the boot procedure. The repair mask may also be updated real-time during program execution in response to detection of an error associated with a particular cache line. Updating in real-time can be accomplished by counting a cumulative number of errors associated with a cache line, and then identifying the cache line as being defective only after a certain number of cumulative errors has occurred.
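A small Python sketch of the run-time updating path described above (my own illustration, not the patent's implementation): each cache line has a cumulative error counter feeding a repair-mask bit array, and a line is marked defective only once its count crosses a threshold; masked lines are then refused on every access. The threshold value and names like `RepairMask` are assumptions.

```python
class RepairMask:
    """One bit per cache line; a set bit marks the line as defective."""
    def __init__(self, n_lines, error_threshold=3):
        self.defective = [False] * n_lines   # some bits may already be set at fabrication or boot
        self.error_count = [0] * n_lines
        self.error_threshold = error_threshold

    def record_error(self, line):
        # Real-time update: only a cumulative run of errors marks the line defective.
        self.error_count[line] += 1
        if self.error_count[line] >= self.error_threshold:
            self.defective[line] = True

    def is_usable(self, line):
        return not self.defective[line]

mask = RepairMask(n_lines=16)
mask.defective[2] = True                     # e.g. set by chip test or the boot procedure
for _ in range(3):                           # three errors observed on line 7 during execution
    mask.record_error(7)
print(mask.is_usable(2), mask.is_usable(7), mask.is_usable(0))   # False False True
```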
    • 4. Granted Invention Patent
    • Cache array defect functional bypassing using repair mask
    • Publication number: US5958068A
    • Publication date: 1999-09-28
    • Application number: US839554
    • Filing date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • IPC: G06F12/16; G06F11/00; G06F12/08
    • CPC: G06F12/126; G06F12/0802; G11C29/846; G06F2212/1032
    • A method of bypassing defects in a cache used by a processor of a computer system. A repair mask has an array of bit fields corresponding to cache lines in the cache, and when a particular cache line in the cache is identified as being defective, a corresponding bit field in the repair mask array is set to indicate that the particular cache line is defective, and further access to the defective cache line is prevented, based on the corresponding bit field in the repair mask array. The repair mask can be used to prevent the defective cache line from ever resulting in a cache hit, and to prevent the defective cache line from ever being chosen as a victim for cache replacement. Using a set associative cache, the defective cache line is thereby effectively removed from its respective congruence class. This approach allows the cache to use all non-defective cache lines without any cache lines being reserved for redundancy.
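The sketch below (illustrative Python, with an invented `SetAssociativeCache` model and a random replacement policy standing in for the real one) shows the two effects described in the abstract: a way flagged in the repair mask can never produce a hit and is never chosen as a replacement victim, so it effectively drops out of its congruence class.

```python
import random

class SetAssociativeCache:
    """One congruence class of a set-associative cache, guarded by repair-mask bits."""
    def __init__(self, ways):
        self.tags = [None] * ways
        self.defective = [False] * ways      # repair-mask bits for this congruence class

    def lookup(self, tag):
        # A defective way never produces a hit, even if its stored tag matches.
        for way in range(len(self.tags)):
            if not self.defective[way] and self.tags[way] == tag:
                return way
        return None

    def choose_victim(self):
        # A defective way is never selected for replacement.
        usable = [w for w in range(len(self.tags)) if not self.defective[w]]
        return random.choice(usable)

    def fill(self, tag):
        self.tags[self.choose_victim()] = tag

congruence_class = SetAssociativeCache(ways=4)
congruence_class.defective[1] = True         # way 1 flagged in the repair mask
for t in (0xA, 0xB, 0xC, 0xD, 0xE):
    congruence_class.fill(t)
assert congruence_class.tags[1] is None      # the defective way was never allocated
```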
    • 5. Granted Invention Patent
    • Multiple cache directories for non-arbitration concurrent accessing of a cache memory
    • Publication number: US5943686A
    • Publication date: 1999-08-24
    • Application number: US834492
    • Filing date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • IPC: G06F12/08; G06F12/00
    • CPC: G06F12/0831; G06F12/0846
    • A method of accessing a cache used by a processor of a computer system, to eliminate arbitration logic which would otherwise be required to handle operations from multiple snooping devices. A plurality of cache directories are provided in the cache, respectively connected directly to a plurality of snooping devices using a plurality of interconnects. An operation from a given snooping device is then handled by using a respective cache directory to issue a response to a respective interconnect. For example, a first cache directory may be connected to a first interconnect on a processor side of the cache, and a second cache directory may be connected to a second interconnect on a system bus side of the cache. This construction allows handling of operations from multiple snooping devices without having to use critical path arbitration logic. Furthermore, this construction allows for improved cache access due to the physical placement of the multiple cache directories.
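As an illustration only (not the patent's hardware), the Python fragment below models two directory copies, each wired to its own interconnect, so a processor-side snoop and a system-bus snoop can be answered at the same time without any shared arbitration logic. Names such as `SnoopDirectory` are assumptions.

```python
class SnoopDirectory:
    """One directory copy connected directly to one interconnect; no shared arbiter."""
    def __init__(self, interconnect, tags):
        self.interconnect = interconnect
        self.tags = set(tags)                # address tags currently held by the cache

    def snoop(self, tag):
        # The response is issued on the same interconnect the operation arrived on.
        return (self.interconnect, "hit" if tag in self.tags else "miss")

held_tags = {0x10, 0x20, 0x30}
processor_side = SnoopDirectory("processor bus", held_tags)
system_side = SnoopDirectory("system bus", held_tags)

# Both snooping devices are serviced concurrently by different directory copies,
# so no arbitration between them is required.
print(processor_side.snoop(0x20))   # ('processor bus', 'hit')
print(system_side.snoop(0x99))      # ('system bus', 'miss')
```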
    • 6. Granted Invention Patent
    • Method for recoverability via redundant cache arrays
    • Publication number: US5883904A
    • Publication date: 1999-03-16
    • Application number: US834491
    • Filing date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • IPC: G06F11/10; G11C29/00
    • CPC: G06F11/1064
    • A method of correcting an erroneous bit field in a cache used by a processor is disclosed. A first array stores a plurality of bit fields, respectively connected to error checking circuits, and a substitute bit field is supplied for a bit field in the first array that is found to be erroneous by the error checking circuits, the substitute bit field being read from a second array which redundantly stores the bit fields. The error checking circuits can be connected to a parity error control unit which reads the substitute bit field from the second array. The parity error control unit forces the cache into a busy mode when any of the error checking circuits indicates that a bit field is erroneous, and maintains the busy mode until the substitute bit field is supplied. The error checking circuits can check the parity of a plurality of subsets of the bit fields in the first array and indicate which of the subsets are erroneous, and the parity error controller then reads only subsets of the bit fields in the second array corresponding to those subsets in the first array that are erroneous.
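Below is a hedged Python sketch of the recovery path (my simplification; even parity over whole fields stands in for the patent's per-subset checks, and names like `RecoverableArray` are invented): a parity mismatch on a read from the primary array causes the substitute field to be supplied from the redundant copy.

```python
def parity(bits):
    """Even parity over a sequence of 0/1 bits."""
    return sum(bits) % 2

class RecoverableArray:
    """Primary bit-field array with stored parity, backed by a redundant copy."""
    def __init__(self, fields):
        self.primary = [list(f) for f in fields]
        self.redundant = [list(f) for f in fields]   # written in lockstep with the primary
        self.parity = [parity(f) for f in fields]

    def read(self, i):
        field = self.primary[i]
        if parity(field) != self.parity[i]:
            # Parity error: the cache would be held in busy mode here until the
            # substitute field is supplied from the redundant array.
            field = self.redundant[i]
        return field

arr = RecoverableArray([(1, 0, 1, 1), (0, 0, 1, 0)])
arr.primary[0][2] ^= 1                               # inject a single-bit error
print(arr.read(0))                                   # [1, 0, 1, 1], supplied from the redundant copy
print(arr.read(1))                                   # clean field, read normally
```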
    • 7. Granted Invention Patent
    • Method for high-speed recoverable directory access
    • Publication number: US5867511A
    • Publication date: 1999-02-02
    • Application number: US834118
    • Filing date: 1997-04-14
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis; Timothy M. Skergan
    • IPC: G06F12/16; G06F11/10; G06F12/08; G06F12/12; G11C29/00
    • CPC: G11C29/88; G06F11/1064; G06F12/0802; G06F12/126
    • A method of determining if a requested memory block of a memory device is contained in a cache used by a processor of a computer system is disclosed. An address associated with the requested memory block is compared to a plurality of address tags stored in a cache directory of the cache, while simultaneously performing error checks on the address tags. Corrected address tags are supplied for any erroneous address tags indicated by the error checks, and any corrected address tags are also compared to the address of the requested memory block. The error check may be a parity check of a portion of the address tag, either the entire portion, or of several subsets having a number of bits smaller than the address tag. The address tags can be stored in a redundant cache directory of the cache, and the corrected address tags supplied by substituting corresponding address tags from the redundant cache directory. By moving error checking out of the critical retrieval path of the cache, the present invention results in improved performance (increased speed).
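A minimal Python sketch of the lookup flow above (illustrative only; a one-bit parity per tag and the helper names are assumptions): every stored tag is compared against the request while its parity is checked, and when parity fails the corrected tag from the redundant directory is compared as well, so error checking stays off the critical compare path.

```python
def parity_ok(tag, stored_parity):
    """True if the tag's even parity matches the parity bit stored alongside it."""
    return bin(tag).count("1") % 2 == stored_parity

def directory_lookup(request_tag, directory, redundant_directory):
    """directory: list of (tag, parity) pairs; redundant_directory: clean tag copies."""
    for way, (tag, stored_parity) in enumerate(directory):
        candidates = [tag]
        if not parity_ok(tag, stored_parity):
            candidates.append(redundant_directory[way])   # substitute the corrected tag
        if request_tag in candidates:
            return way                                    # hit
    return None                                           # miss

redundant = [0b1011, 0b0110]
corrupted = [(0b1011, 1), (0b0111, 0)]   # way 1 has a flipped bit, so its parity check fails

print(directory_lookup(0b0110, corrupted, redundant))     # 1: hit via the corrected tag
```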
    • 8. Granted Invention Patent
    • Scarfing within a hierarchical memory architecture
    • Publication number: US06587924B2
    • Publication date: 2003-07-01
    • Application number: US09903727
    • Filing date: 2001-07-12
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0831; Y10S707/99931
    • A method and system for scarfing data during a data access transaction within a hierarchical data storage system. A data access request is delivered from a source device to a plurality of data storage devices. The access request includes a target address and a source path tag, wherein the source path tag includes a device identification tag that uniquely identifies a data storage device within a given level of the system traversed by the access request. A device identification tag that uniquely identifies the third party transactor within a given memory level is appended to the source path tag such that the third party transactor can scarf returning data without reserving a scarf queue entry.
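The abstract's source-path-tag mechanism can be pictured with the short Python sketch below (my illustration; the request fields, device names, and routing helper are all assumptions, not the patent's protocol): each traversed device, and any third-party transactor that wants the data, appends its ID to the tag, and the returning data is delivered to every ID on the path, so the third party scarfs the data without reserving a scarf queue entry of its own.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AccessRequest:
    """A data access request that accumulates device IDs as it traverses the hierarchy."""
    target_address: int
    source_path_tag: List[str] = field(default_factory=list)

    def append_device(self, device_id: str):
        # Each level traversed (or an interested third-party transactor) appends the ID
        # that uniquely identifies it within that level.
        self.source_path_tag.append(device_id)

def deliver_response(request, data, devices):
    # Returning data is handed to every device recorded in the source path tag.
    for device_id in request.source_path_tag:
        devices[device_id].append((request.target_address, data))

devices = {"L2_cache_0": [], "L3_cache_1": [], "third_party_xactor": []}
req = AccessRequest(target_address=0x1000)
req.append_device("L2_cache_0")           # requesting device
req.append_device("L3_cache_1")           # level traversed on the way down
req.append_device("third_party_xactor")   # third party that wants to scarf the data

deliver_response(req, data=0xCAFE, devices=devices)
print(devices["third_party_xactor"])      # [(4096, 51966)]
```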
    • 10. Granted Invention Patent
    • Cache index based system address bus
    • Publication number: US06477613B1
    • Publication date: 2002-11-05
    • Application number: US09345302
    • Filing date: 1999-06-30
    • Inventors: Ravi Kumar Arimilli; John Steven Dodson; Guy Lynn Guthrie; Jody B. Joyner; Jerry Don Lewis
    • IPC: G06F12/00
    • CPC: G06F12/0811; G06F12/0895; G06F12/0897
    • Following a cache miss by an operation, the address for the operation is transmitted on the bus coupling the cache to lower levels of the storage hierarchy. A portion of the address including the index field is transmitted during a first bus cycle, and may be employed to begin directory lookups in lower level storage devices before the address tag is received. The remainder of the address is transmitted during subsequent bus cycles, which should be in time for address tag comparisons with the congruence class elements. To allow multiple directory lookups to be occurring concurrently in a pipelined directory, a portion of multiple addresses for several data access operations, each portion including the index field for the respective address, may be transmitted during the first bus cycle or staged in consecutive bus cycles, with the remainders of each address—including the cache tags—transmitted during the subsequent bus cycles. This allows directory lookups utilizing the index fields to be processed concurrently within a lower level storage device for multiple operations, with the address tags being provided later, but still timely for tag comparisons at the end of the directory lookup. Where the lower level storage device operates at a higher frequency than the bus, overall latency is reduced and directory bandwidth is more efficiently utilized.
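To illustrate the index-first bus protocol described above (a sketch under my own assumptions about field widths and naming, not the patent's bus definition), the Python fragment below splits an address into an index field sent in the first bus cycle, starts the lower-level directory read with it, and compares the address tag only once it arrives in a later cycle.

```python
INDEX_BITS = 6                      # assumed index field width for the sketch

def split_address(addr):
    """Index field (sent in the first bus cycle) and address tag (sent later)."""
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    return index, tag

def lower_level_lookup(directory, index, receive_tag):
    congruence_class = directory[index]   # bus cycle 1: start the directory read early
    tag = receive_tag()                   # later bus cycle: tag arrives in time for the compare
    return tag in congruence_class

directory = {i: set() for i in range(1 << INDEX_BITS)}
directory[0x2A] = {0x15}                  # pretend this block is held at the lower level

addr = (0x15 << INDEX_BITS) | 0x2A
index, tag = split_address(addr)
print(lower_level_lookup(directory, index, lambda: tag))   # True: hit after the early lookup
```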