    • 4. Granted invention patent
    • Title: Pipelined cache memory deallocation and storeback
    • Publication number: US06298417B1
    • Publication date: 2001-10-02
    • Application number: US09196906
    • Filing date: 1998-11-20
    • Inventors: Kin Shing Chan; Dwain Alan Hicks; Michael John Mayfield; Shih-Hsiung Stephen Tung
    • IPC: G06F12/00
    • CPC: G06F12/0859; G06F12/0804
    • Abstract: A deallocation pipelining circuit for use in a cache memory subsystem. The pipelining circuit is configured to initiate a storeback buffer (SBB) transfer of first line data stored in a first line of a cache memory array if the deallocation pipelining circuit detects a cache miss signal corresponding to the first line and identifies the first line data as modified data. The deallocation pipelining circuit is configured to issue a storeback request signal to a bus interface unit after the completion of the SBB transfer. The circuit initiates a bus interface unit transfer of the first line data after receiving a data acknowledge signal from the bus interface unit. The pipelining circuit is still further configured to deallocate the first line of the cache memory after receiving a request acknowledge signal from the bus interface unit. This deallocation of the first line of the cache memory occurs regardless of a completion status of the bus interface unit transfer, whereby a pending fill of the first cache line may proceed prior to completion of the bus interface unit transfer. In one embodiment, the storeback buffer includes first and second segments for storing first and second segment data respectively. In this embodiment, the deallocation pipelining circuit is able to detect the completion of the transfer of the first segment data during the bus interface unit transfer and is preferably configured to initiate an SBB transfer of second line data from a second line in the cache memory array in response to the completion of the first segment data transfer. In this manner, the initiation of the second line SBB transfer precedes the completion of the first line bus interface unit transfer.
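Read as a sequence, the abstract describes: copy the modified line into the storeback buffer, issue a storeback request to the bus interface unit, start the bus transfer once the data is acknowledged, and deallocate the line as soon as the request is acknowledged, without waiting for the bus transfer to finish. The following is a minimal behavioral sketch of that ordering only; the class, method, and field names (BusInterfaceUnit, storeback_request, start_transfer, the dictionary-based line and SBB) are assumptions made for illustration, not structures taken from the patent.

```python
# Minimal behavioral sketch of the storeback/deallocation ordering described in
# the abstract. All class, signal, and field names are illustrative assumptions.

class BusInterfaceUnit:
    """Stand-in BIU: acknowledges the storeback request and accepts the data,
    while the bus write itself stays "in flight" in a background queue."""
    def __init__(self):
        self.in_flight = []

    def storeback_request(self, tag):
        return "data_ack"                    # BIU is ready to take the line data

    def start_transfer(self, tag, data):
        self.in_flight.append((tag, data))   # bus write proceeds in the background
        return "request_ack"


def deallocate_modified_line(line, sbb, biu):
    """Pipelined deallocation: the cache line is freed as soon as the BIU has
    acknowledged the storeback, so a pending fill of that line can begin before
    the bus transfer of the old (modified) data has finished."""
    sbb["data"] = line["data"]                                 # SBB transfer
    if biu.storeback_request(line["tag"]) == "data_ack":       # storeback request -> data ack
        if biu.start_transfer(line["tag"], sbb["data"]) == "request_ack":
            line["valid"] = False                              # deallocate now, regardless of
                                                               # bus-transfer completion status


biu = BusInterfaceUnit()
line = {"tag": 0x40, "data": b"old modified data", "state": "modified", "valid": True}
sbb = {"data": None}
deallocate_modified_line(line, sbb, biu)
print(line["valid"], biu.in_flight)    # line is freed while the write is still pending
```

The property the sketch tries to capture is that line["valid"] is cleared while the write still sits in biu.in_flight, which is what allows a pending fill of that line to begin before the bus transfer completes.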
    • 5. Granted invention patent
    • Title: Method and system for buffering condition code data in a data processing system having out-of-order and speculative instruction execution
    • Publication number: US5974240A
    • Publication date: 1999-10-26
    • Application number: US480999
    • Filing date: 1995-06-07
    • Inventors: Kin Shing Chan
    • IPC: G06F9/32; G06F9/38
    • CPC: G06F9/30094; G06F9/3836; G06F9/384; G06F9/3842; G06F9/3857; G06F9/3863
    • Abstract: In response to dispatching a condition register modifying instruction to an execution unit, a condition register rename buffer is associated with such a condition register modifying instruction. The instruction is then executed in the execution unit. Following the execution of the condition register modifying instruction, condition register data is set in the condition register rename buffer to reflect the result of such instruction execution. Additionally, an indicator is set to indicate the condition register data is valid. At the time for completing the condition register modifying instruction, the condition register data is transferred from the condition register rename buffer to the architected condition register, thereby permitting condition register modifying instructions to be dispatched, executed, and finished before the condition register is available to complete each condition register modifying instruction.
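Viewed as a data flow, the abstract amounts to: allocate a rename buffer at dispatch, write the condition-code result into it (and mark it valid) at execution, and copy it to the architected condition register only at in-order completion. The sketch below is a simplified Python model under assumed names (CRRenameBuffer, dispatch, execute, complete, and the example encoding of the condition-register field are all invented for illustration).

```python
# Simplified sketch of the condition-register rename-buffer flow described in
# the abstract. Names and encodings are illustrative assumptions.

class CRRenameBuffer:
    def __init__(self):
        self.data = None
        self.valid = False      # set once the executing instruction produces CR data


architected_cr = {"value": 0}
rename_buffers = []             # in program order: one buffer per dispatched CR-modifying op


def dispatch(instr):
    buf = CRRenameBuffer()      # associate a rename buffer with the instruction
    rename_buffers.append((instr, buf))
    return buf


def execute(instr, buf, result_bits):
    # Execution may finish out of order; the result only touches the rename buffer.
    buf.data = result_bits
    buf.valid = True


def complete():
    # At completion (in program order) the oldest valid buffer is transferred
    # to the architected condition register.
    if rename_buffers and rename_buffers[0][1].valid:
        _, buf = rename_buffers.pop(0)
        architected_cr["value"] = buf.data


b = dispatch("cmpw r3, r4")
execute("cmpw r3, r4", b, 0b0100)   # "equal" bit; purely illustrative encoding
complete()
print(bin(architected_cr["value"]))
```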
    • 6. Granted invention patent
    • Title: Methods and system for predecoding instructions in a superscalar data processing system
    • Publication number: US5828895A
    • Publication date: 1998-10-27
    • Application number: US531882
    • Filing date: 1995-09-20
    • Inventors: Kin Shing Chan; Ravindra Kumar Nair
    • IPC: G06F9/30; G06F9/32; G06F9/38
    • CPC: G06F9/382; G06F9/30094; G06F9/30163; G06F9/30167; G06F9/3836; G06F9/3838; G06F9/384; G06F9/3857; G06F9/3885
    • Abstract: In response to reloading an instruction from main memory for storing in an instruction cache in a superscalar data processing system, a particular instruction category in which the instruction belongs is selected from multiple instruction categories. Types of data processing system resources required for instruction execution and a quantity of each type of resource required are determined. Thereafter, a plurality of predecode bits are calculated, wherein the predecode bits represent the particular instruction category in which the instruction belongs and the type and quantity of each data processing system resource required for execution of the instruction. Thereafter, the instruction and the predecode bits are stored in the instruction cache. The predecode bits enable the dispatch unit to efficiently, and without fully decoding the instruction at dispatch time, select an execution unit for executing the instruction and determine whether the data processing system resources required for execution of the instruction are available before the dispatch unit dispatches the instruction.
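One way to picture the mechanism is as a function run at cache-reload time that maps each instruction to a category plus a required-resource vector, with the resulting bits stored next to the instruction so the dispatch unit can choose an execution unit and check resource availability without fully decoding. The sketch below is only illustrative; the opcode-to-category table, the execution-unit names (fxu, lsu, bru), and the bit layout are invented for the example and are not the encodings defined by the patent.

```python
# Illustrative sketch of predecoding at instruction-cache reload time. The
# opcode table, unit names, and bit layout are invented for this example.

CATEGORY = {"add": 0b00, "lwz": 0b01, "stw": 0b01, "bc": 0b10}   # fixed-point / load-store / branch
RESOURCES = {"add": {"fxu": 1}, "lwz": {"lsu": 1}, "stw": {"lsu": 1}, "bc": {"bru": 1}}


def predecode(mnemonic):
    """Compute predecode bits: two category bits plus one bit per required
    execution-unit type (invented layout)."""
    cat = CATEGORY[mnemonic]
    res = RESOURCES[mnemonic]
    return cat << 3 | res.get("fxu", 0) << 2 | res.get("lsu", 0) << 1 | res.get("bru", 0)


icache = {}

def reload_line(address, mnemonics):
    # On a reload from main memory, store each instruction together with its
    # predecode bits so dispatch never has to decode it fully.
    icache[address] = [(m, predecode(m)) for m in mnemonics]


def can_dispatch(bits, free_units):
    # Dispatch-time check using only the predecode bits.
    fxu, lsu, bru = (bits >> 2) & 1, (bits >> 1) & 1, bits & 1
    return free_units["fxu"] >= fxu and free_units["lsu"] >= lsu and free_units["bru"] >= bru


reload_line(0x100, ["add", "lwz", "bc"])
print([can_dispatch(b, {"fxu": 1, "lsu": 0, "bru": 1}) for _, b in icache[0x100]])
```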
    • 8. Granted invention patent
    • Title: Efficient store machine in cache based microprocessor
    • Publication number: US06446170B1
    • Publication date: 2002-09-03
    • Application number: US09232239
    • Filing date: 1999-01-19
    • Inventors: Kin Shing Chan; Dwain Alan Hicks; Michael John Mayfield; Shih-Hsiung Stephen Tung
    • IPC: G06F12/00
    • CPC: G06F9/3824
    • Abstract: A method of retiring operations to a cache. Initially, a first operation is queued in a stack such as the store queue of a retire unit. The first operation is then copied, in a first transfer, to a latch referred to as the miss latch in response to a resource conflict that prevents the first operation from accessing the cache. The first operation is maintained in the stack for the duration of the resource conflict. When the resource conflict is resolved, the cache is accessed, in a first cache access, with the first operation from the stack. Preferably, the first operation is removed from the stack when the resource conflict is resolved and the first cache access is initiated. In the preferred embodiment, the first operation is maintained in the miss latch until the first cache access results in a cache hit. One embodiment of the invention further includes accessing the cache, in a first miss access, with the first operation from the miss latch in response to a cache miss that resulted from the first cache access. In a presently preferred embodiment, a second access is executed to access the cache with a second operation queued in the stack in response to a cache hit resulting from the first cache access. The first and second cache accesses preferably occur in consecutive cycles. Typically, the first and second operations are store operations that are queued in the stack in program order. In one embodiment the first operation is removed from the stack upon resolving of the resource conflict.
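The retirement flow in the abstract can be modeled as a store queue plus a single miss latch: a store blocked by a resource conflict is copied into the latch but kept in the queue; once the conflict clears, the cache is accessed from the queue and the store remains in the miss latch until that access hits, so a following queued store can access the cache in the next cycle. A minimal sketch under assumed names (store_queue, miss_latch, retire_next_store, and the dictionary-based cache are illustrative):

```python
# Minimal sketch of the store-retirement flow described in the abstract. The
# queue, miss-latch, and cache interfaces here are illustrative assumptions.

from collections import deque

store_queue = deque()       # stores queued in program order by the retire unit
miss_latch = None           # holds a store blocked by a conflict or awaiting a hit


def retire_next_store(cache, resource_conflict):
    """Try to retire the oldest queued store to the cache."""
    global miss_latch
    if not store_queue:
        return "idle"

    op = store_queue[0]
    if resource_conflict:
        miss_latch = op                 # copy to the miss latch, but keep the
        return "blocked"                # operation queued for the conflict's duration

    store_queue.popleft()               # conflict resolved: access the cache from the queue
    miss_latch = op                     # held here until the access results in a hit
    if cache.get(op["addr"]) is not None:
        cache[op["addr"]] = op["data"]  # cache hit: the store completes
        miss_latch = None
        return "hit"
    return "miss"                       # a later miss access would replay from miss_latch


cache = {0x80: b"\x00"}
store_queue.extend([{"addr": 0x80, "data": b"\xaa"}, {"addr": 0x84, "data": b"\xbb"}])
print(retire_next_store(cache, resource_conflict=True))    # blocked
print(retire_next_store(cache, resource_conflict=False))   # hit; the next store can go next cycle
```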
    • 9. Granted invention patent
    • Title: Method and system for pre-fetch cache interrogation using snoop port
    • Publication number: US06202128B1
    • Publication date: 2001-03-13
    • Application number: US09038422
    • Filing date: 1998-03-11
    • Inventors: Kin Shing Chan; Dwain Alan Hicks; Peichun Peter Liu; Michael John Mayfield; Shih-Hsiung Stephen Tung
    • IPC: G06F12/08
    • CPC: G06F12/0862; G06F12/0831; G06F12/1054
    • Abstract: An interleaved data cache array which is divided into two subarrays is provided for utilization within a data processing system. Each subarray includes a plurality of cache lines wherein each cache line includes a selected block of data, a parity field, a content addressable field containing a portion of an effective address (ECAM) for the selected block of data, a second content addressable field containing a real address (RCAM) for the selected block of data, and a data status field. Separate effective address ports (EA) and a real address port (RA) permit parallel access to the cache without conflict in separate subarrays, and a subarray arbitration logic circuit is provided for attempted simultaneous access of a single subarray by both the effective address port (EA) and the real address port (RA). A normal word line is provided and activated by either the effective address port or the real address port through the subarray arbitration. An existing Real Address (RA) cache snoop port is used to check whether a pre-fetching stream's line access is a true cache hit or not. The snoop read access uses a (33-bit) real address to access the data cache without occupying a data port during testing of the pre-fetching stream hits. Therefore, the two Effective Address (EA) accesses and an RCAM snoop access can access the data cache simultaneously, thereby increasing pre-fetching performance.
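The main point of the abstract is port usage: every cache line carries both a partial effective-address tag (ECAM) and a real-address tag (RCAM), and the otherwise idle real-address snoop port can check whether a prefetch stream's next line is already resident without occupying either effective-address data port. The sketch below illustrates that lookup split only; the line layout, addresses, and function names are assumptions made for the example.

```python
# Sketch of using the real-address (RCAM) snoop port to interrogate the cache
# for a prefetch stream. Line layout and function names are illustrative.

cache_lines = [
    # each line: partial effective-address tag (ECAM), real-address tag (RCAM), data
    {"ecam": 0x1F0, "rcam": 0x0004_2F80, "data": b"line A"},
    {"ecam": 0x2A0, "rcam": 0x0009_1C40, "data": b"line B"},
]


def snoop_prefetch_hit(real_addr):
    """RCAM snoop-port lookup: True if the prefetch stream's next line is
    already cached. Uses only the real address, so the two effective-address
    ports stay free for normal loads/stores in the same cycle."""
    return any(line["rcam"] == real_addr for line in cache_lines)


def ea_port_access(effective_tag):
    """Normal effective-address port access (runs in parallel with the snoop)."""
    for line in cache_lines:
        if line["ecam"] == effective_tag:
            return line["data"]
    return None


# Same cycle: two EA accesses plus one RCAM snoop for the prefetcher.
print(ea_port_access(0x1F0), ea_port_access(0x2A0), snoop_prefetch_hit(0x0009_1C40))
```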
    • 10. Granted invention patent
    • Title: Method and apparatus for reconstructing the address of the next instruction to be completed in a pipelined processor
    • Publication number: US06185674B2
    • Publication date: 2001-02-06
    • Application number: US08417421
    • Filing date: 1995-04-05
    • Inventors: Kin Shing Chan; Chiao-Mei Chuang; Alessandro Marchioro
    • IPC: G06F9/00
    • CPC: G06F9/322
    • Abstract: A computer processing unit is provided that includes an apparatus for generating an address of the next instruction to be completed. The apparatus includes a first table for storing a plurality of entries each corresponding to a dispatched instruction, each entry comprising an identifier that identifies the corresponding instruction and a status bit that indicates if the corresponding instruction is completed; a second table for storing a plurality of entries each corresponding to dispatched branch instructions, each entry comprising the same identifier stored in the first table, a target address of the dispatched branch instruction, and a resolution status field that indicates at least if the corresponding branch instruction has been resolved taken or has been resolved not taken; and program counter update logic that, in each machine cycle, updates a program counter to store and output the address of the next instruction to be completed according to the entries stored in the first table and the second table. Because the first and second tables employ efficient identification tags to identify instructions that modify the control flow of the execution pipeline and the target address of such instructions, the computer processing unit of the present invention need not store the full address of each instruction in the execution pipeline to update the program counter as is conventional, and thus saves real estate that may be used for other circuitry.
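The apparatus boils down to two small tables indexed by an instruction identifier rather than by a full address: a completion table with a finished bit per dispatched instruction, and a branch table holding the target address and taken/not-taken resolution of each dispatched branch. Each cycle, the program counter either advances sequentially or is redirected through the branch table. The following is a rough sketch of that update rule; the table layouts, field names, and the fixed 4-byte instruction size are assumptions made for illustration.

```python
# Rough sketch of reconstructing the next-to-complete address from two
# identifier-indexed tables, as described in the abstract. Field names and the
# fixed instruction size are assumptions.

INSTR_SIZE = 4

completion_table = {}   # tag -> {"finished": bool}
branch_table = {}       # tag -> {"target": int, "resolved_taken": bool or None}
program_counter = 0x1000


def dispatch(tag, is_branch=False, target=None):
    completion_table[tag] = {"finished": False}
    if is_branch:
        branch_table[tag] = {"target": target, "resolved_taken": None}


def finish(tag, taken=None):
    completion_table[tag]["finished"] = True
    if tag in branch_table:
        branch_table[tag]["resolved_taken"] = taken


def update_program_counter(oldest_tag):
    """Per-cycle PC update: advance sequentially, or redirect to the branch
    target, using only the small identifier-indexed tables (no full per-
    instruction address is carried through the pipeline)."""
    global program_counter
    entry = completion_table.get(oldest_tag)
    if not entry or not entry["finished"]:
        return program_counter                      # oldest instruction not done yet
    branch = branch_table.get(oldest_tag)
    if branch and branch["resolved_taken"]:
        program_counter = branch["target"]          # completed taken branch
    else:
        program_counter += INSTR_SIZE               # sequential completion
    return program_counter


dispatch(1)
dispatch(2, is_branch=True, target=0x2000)
finish(1)
finish(2, taken=True)
print(hex(update_program_counter(1)), hex(update_program_counter(2)))
```

The sketch keeps only a tag per pipeline entry, which mirrors the abstract's stated saving: the full address of every in-flight instruction never needs to be stored just to keep the completion-point program counter current.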