    • 3. Granted invention patent
    • Cache error retry technique
    • Publication number: US06108753A
    • Publication date: 2000-08-22
    • Application number: US52457
    • Filing date: 1998-03-31
    • Inventors: Douglas Craig Bossen; Manratha Rajasekharaiah Jaisimha; Avijit Saha; Shih-Hsiung Stephen Tung
    • IPC: G06F11/10; G06F11/14; G06F12/08; G06F9/30
    • CPC: G06F11/141; G06F11/1064; G06F12/0802; G06F2201/81; G06F2201/88
    • A method and apparatus is provided for enhanced error correction processing through a retry mechanism. When an L1 cache instruction line error is detected, either by a parity error detection process or by an ECC (error correcting code) or other process, the disclosed methodology will schedule an automatic retry of the event that caused the line error without re-booting the entire system. Thereafter, if the error remains present after a predetermined number of retries to load the requested data from L1 cache, then a second level of corrective action is undertaken. The second level corrective action includes accessing an alternate memory location, such as the L2 cache for example. If the state of the requested cache line is exclusive or shared, then an artificial L1 miss is generated for use in enabling an L2 access for the requested cache line. If the requested cache line still does not load from the L2 cache, the second level corrective methodology, after a selective number of retries, terminates and a machine check is generated to initiate a more extensive corrective or recovery action procedure. In an exemplary embodiment, a mechanism is illustrated for recovery from transient errors in an L1 cache load operation although the disclosed methodology may also be implemented partially or entirely in software and in any parity or other error detecting application.
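The two-level corrective flow in the US06108753A abstract (retry the failing L1 load a bounded number of times, then force an artificial L1 miss so the line is refilled from L2, and finally raise a machine check) can be sketched as a small C model. This is a minimal sketch under assumed retry limits and stubbed hardware hooks (l1_load, l2_refill_and_load, machine_check), not the patented circuit.

    /* Minimal software model of the two-level retry described in the abstract.
     * Retry limits and the simulated transient fault are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define L1_RETRY_LIMIT 3
    #define L2_RETRY_LIMIT 3

    static int faults_left = 2;          /* simulate a transient fault that clears after 2 tries */

    static bool l1_load(unsigned addr)   /* true if the L1 line loads without a parity/ECC error */
    {
        (void)addr;
        if (faults_left > 0) { faults_left--; return false; }
        return true;
    }

    static bool l2_refill_and_load(unsigned addr)  /* artificial L1 miss + refill from L2 */
    {
        (void)addr;
        return true;                     /* assume the L2 copy is good in this sketch */
    }

    static void machine_check(unsigned addr)
    {
        printf("machine check for line 0x%x: start full recovery\n", addr);
    }

    static void load_with_retry(unsigned addr)
    {
        for (int i = 0; i < L1_RETRY_LIMIT; i++)        /* first level: retry the L1 access */
            if (l1_load(addr)) { printf("L1 retry %d succeeded\n", i + 1); return; }

        for (int i = 0; i < L2_RETRY_LIMIT; i++)        /* second level: refill from L2 */
            if (l2_refill_and_load(addr)) { printf("refilled from L2\n"); return; }

        machine_check(addr);             /* error persists after both levels */
    }

    int main(void)
    {
        load_with_retry(0x1000);
        return 0;
    }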
    • 4. Granted invention patent
    • Pipelined cache memory deallocation and storeback
    • Publication number: US06298417B1
    • Publication date: 2001-10-02
    • Application number: US09196906
    • Filing date: 1998-11-20
    • Inventors: Kin Shing Chan; Dwain Alan Hicks; Michael John Mayfield; Shih-Hsiung Stephen Tung
    • IPC: G06F12/00
    • CPC: G06F12/0859; G06F12/0804
    • A deallocation pipelining circuit for use in a cache memory subsystem. The pipelining circuit is configured to initiate a storeback buffer (SBB) transfer of first line data stored in a first line of a cache memory array if the deallocation pipelining circuit detects a cache miss signal corresponding to the first line and identifies the first line data as modified data. The deallocation pipelining circuit is configured to issue a storeback request signal to a bus interface unit after the completion of the SBB transfer. The circuit initiates a bus interface unit transfer of the first line data after receiving a data acknowledge signal from the bus interface unit. The pipelining circuit is still further configured to deallocate the first line of the cache memory after receiving a request acknowledge signal from the bus interface unit. This deallocation of the first line of the cache memory occurs regardless of a completion status of the bus interface unit transfer whereby a pending fill of the first cache line may proceed prior to completion of the bus interface unit transfer. In one embodiment, the storeback buffer includes first and second segments for storing first and second segment data respectively. In this embodiment, the deallocation pipelining circuit is able to detect the completion of the transfer of the first segment data during the bus interface unit transfer and preferably configured to initiate an SBB transfer of second line data from a second line in the cache memory array in response to the completion of the first segment data transfer. In this manner, the initiation of the second line SBB transfer precedes the completion of the first line bus interface unit transfer.
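The key ordering claimed in the US06298417B1 abstract (copy the dirty victim into a storeback buffer, send a storeback request to the bus interface unit, and deallocate the cache line on the request acknowledge rather than on transfer completion, so the pending fill can start early) can be sketched in C as follows. The structures and function names (struct sbb, biu_request_storeback, biu_transfer_complete) are illustrative assumptions, not the patented circuit.

    /* Sketch of pipelined deallocation and storeback under assumed names. */
    #include <stdbool.h>
    #include <stdio.h>

    struct line { bool valid, modified; unsigned tag; unsigned data; };
    struct sbb  { bool full; unsigned tag; unsigned data; };

    static struct sbb storeback_buffer;

    static void biu_request_storeback(const struct sbb *b)
    {
        /* The BIU acknowledges the request immediately; the data moves later. */
        printf("BIU: storeback request queued for tag 0x%x\n", b->tag);
    }

    static void handle_miss(struct line *victim, unsigned new_tag)
    {
        if (victim->valid && victim->modified) {
            /* SBB transfer: stash the dirty line so the frame can be reused. */
            storeback_buffer.full = true;
            storeback_buffer.tag  = victim->tag;
            storeback_buffer.data = victim->data;
            biu_request_storeback(&storeback_buffer);
        }

        /* Deallocate on request acknowledge, not on transfer completion,
         * so the fill for new_tag can proceed right away. */
        victim->valid = false;
        printf("line deallocated; fill for tag 0x%x may start\n", new_tag);
    }

    static void biu_transfer_complete(void)
    {
        /* The writeback data finally leaves the SBB, possibly many cycles later. */
        printf("BIU: storeback data for tag 0x%x written to memory\n", storeback_buffer.tag);
        storeback_buffer.full = false;
    }

    int main(void)
    {
        struct line victim = { .valid = true, .modified = true, .tag = 0xA, .data = 42 };
        handle_miss(&victim, 0xB);   /* the fill can begin before the next call */
        biu_transfer_complete();
        return 0;
    }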
    • 6. Granted invention patent
    • Method to arbitrate for a cache block
    • Publication number: US06463514B1
    • Publication date: 2002-10-08
    • Application number: US09025605
    • Filing date: 1998-02-18
    • Inventors: David Scott Ray; Shih-Hsiung Stephen Tung; Pei Chun Liu
    • IPC: G06F12/00
    • CPC: G06F12/0851; G06F12/0857
    • A method of arbitrating between cache access circuits (i.e., load/store units) by stalling a first cache access circuit in response to detection of a conflict between a first cache address and a second cache address. The stalling is performed in response to a comparison of one or more subarray selection bits in each of the first and second cache addresses, and further preferably includes a common contention logic unit for both the first and second cache access circuits. The first cache address is retained within the first cache access circuit so that the first cache access circuit does not need to re-generate the first cache address. If the same word (or doubleword) is being accessed by multiple load operations, this condition is not considered contention and both operations are allowed to proceed, even though they are in the same subarray of the interleaved cache.
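A minimal sketch of the contention rule in the US06463514B1 abstract: stall the first access circuit when the two addresses select the same subarray of the interleaved cache, except when both accesses are loads to the same word or doubleword. The bit masks for the subarray selection bits and the doubleword granule are assumed values.

    /* Sketch of the subarray contention check; masks are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define SUBARRAY_MASK  0x00000700u   /* assumed subarray selection bits */
    #define DWORD_MASK     0xFFFFFFF8u   /* addresses matching here hit the same doubleword */

    /* Returns true when the first cache access circuit must stall. */
    static bool must_stall(unsigned addr0, bool load0, unsigned addr1, bool load1)
    {
        bool same_subarray = (addr0 & SUBARRAY_MASK) == (addr1 & SUBARRAY_MASK);
        bool same_dword    = (addr0 & DWORD_MASK)    == (addr1 & DWORD_MASK);

        if (same_subarray && same_dword && load0 && load1)
            return false;                /* two loads of the same doubleword: no contention */
        return same_subarray;            /* otherwise a shared subarray means contention */
    }

    int main(void)
    {
        printf("loads, same dword:    stall=%d\n", must_stall(0x1000, true, 0x1004, true));
        printf("loads, same subarray: stall=%d\n", must_stall(0x1000, true, 0x1010, true));
        printf("different subarrays:  stall=%d\n", must_stall(0x1000, true, 0x1200, false));
        return 0;
    }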
    • 7. Granted invention patent
    • Processor and method of prefetching data based upon a detected stride
    • Publication number: US06430680B1
    • Publication date: 2002-08-06
    • Application number: US09052567
    • Filing date: 1998-03-31
    • Inventors: William Elton Burky; David Andrew Schroter; Shih-Hsiung Stephen Tung; Michael Thomas Vaden
    • IPC: G06F9/00
    • CPC: G06F9/3455; G06F9/3832
    • A processor and method of fetching data within a data processing system are disclosed. According to the method, a first difference between a first load address and a second load address is calculated. In addition, a determination is made whether a second difference between a third load address and the second load address is equal to the first difference. In response to a determination that the first difference and the second difference are equal, a fourth load address, which is generated by adding the third address and the second difference, is transmitted to the memory as a memory fetch address. In an embodiment of the data processing system including a processor having an associated cache, the fourth load address is transmitted to the memory only if the fourth load address is not resident in the cache or the target of an outstanding memory fetch request.
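The stride check in the US06430680B1 abstract maps directly to a few lines of C: confirm that the difference between the third and second load addresses equals the difference between the second and first, then prefetch the third address plus that stride unless the line is already cached or already the target of an outstanding fetch. The cache and fetch-queue lookups below are stubbed as assumptions.

    /* Sketch of stride-based prefetch address generation; stubs are assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool in_cache(unsigned addr)          { (void)addr; return false; }  /* stub */
    static bool fetch_outstanding(unsigned addr) { (void)addr; return false; }  /* stub */

    /* Returns the prefetch (fourth load) address, or 0 if no stride was confirmed. */
    static unsigned maybe_prefetch(unsigned a1, unsigned a2, unsigned a3)
    {
        unsigned d1 = a2 - a1;                   /* first difference  */
        unsigned d2 = a3 - a2;                   /* second difference */

        if (d1 != d2 || d1 == 0)
            return 0;                            /* no consistent stride detected */

        unsigned a4 = a3 + d2;                   /* fourth load address = third + stride */
        if (in_cache(a4) || fetch_outstanding(a4))
            return 0;                            /* already resident or already requested */
        return a4;                               /* send to memory as a fetch address */
    }

    int main(void)
    {
        printf("stride hit:  prefetch 0x%x\n", maybe_prefetch(0x1000, 0x1040, 0x1080)); /* 0x10c0 */
        printf("stride miss: prefetch 0x%x\n", maybe_prefetch(0x1000, 0x1040, 0x10c0)); /* 0 */
        return 0;
    }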
    • 9. Granted invention patent
    • Efficient store machine in cache based microprocessor
    • Publication number: US06446170B1
    • Publication date: 2002-09-03
    • Application number: US09232239
    • Filing date: 1999-01-19
    • Inventors: Kin Shing Chan; Dwain Alan Hicks; Michael John Mayfield; Shih-Hsiung Stephen Tung
    • IPC: G06F12/00
    • CPC: G06F9/3824
    • A method of retiring operations to a cache. Initially, a first operation is queued in a stack such as the store queue of a retire unit. The first operation is then copied, in a first transfer, to a latch referred to as the miss latch in response to a resource conflict that prevents the first operation from accessing the cache. The first operation is maintained in the stack for the duration of the resource conflict. When the resource conflict is resolved, the cache is accessed, in a first cache access, with the first operation from the stack. Preferably, the first operation is removed from the stack when the resource conflict is resolved and the first cache access is initiated. In the preferred embodiment, the first operation is maintained in the miss latch until the first cache access results in a cache hit. One embodiment of the invention further includes accessing the cache, in a first miss access, with the first operation from the miss latch in response to a cache miss that resulted from the first cache access. In a presently preferred embodiment, a second access is executed to access the cache with a second operation queued in the stack in response to a cache hit resulting from the first cache access. The first and second cache accesses preferably occur in consecutive cycles. Typically, the first and second operations are store operations that are queued in the stack in program order. In one embodiment the first operation is removed from the stack upon resolving of the resource conflict.
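The retire flow in the US06446170B1 abstract can be sketched as a small C model: a store blocked by a resource conflict is copied into a "miss latch" while remaining in the store queue, the queued copy makes the first cache access once the conflict clears, and the miss-latch copy replays the access if that access misses. The queue, latch, and hit/miss simulation below are illustrative assumptions, not the patented circuit.

    /* Sketch of store retirement with a miss latch; names and timing are assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    struct store { unsigned addr; unsigned data; };

    static struct store miss_latch;
    static bool miss_latch_valid = false;

    static bool cache_access(const struct store *s)   /* true on a cache hit (stub) */
    {
        static int calls = 0;
        printf("cache access: addr 0x%x data %u\n", s->addr, s->data);
        return ++calls > 1;            /* simulate: the first access misses, later ones hit */
    }

    static void retire_store(struct store s, bool resource_conflict)
    {
        if (resource_conflict) {
            miss_latch = s;            /* first transfer: copy the store into the miss latch */
            miss_latch_valid = true;
            printf("conflict: store 0x%x parked in miss latch, kept in queue\n", s.addr);
            return;                    /* the store stays queued until the conflict clears */
        }

        if (!cache_access(&s) && miss_latch_valid) {
            /* first cache access missed: replay from the miss latch until it hits */
            while (!cache_access(&miss_latch))
                ;
        }
        miss_latch_valid = false;      /* hit achieved; the latch can take the next store */
    }

    int main(void)
    {
        struct store s = { .addr = 0x2000, .data = 7 };
        retire_store(s, true);         /* blocked by a resource conflict */
        retire_store(s, false);        /* conflict resolved: access the cache from the queue */
        return 0;
    }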