    • 1. Invention Grant
    • Title: Cache unit information replacement apparatus
    • Publication No.: US4314331A (published 1982-02-02)
    • Application No.: US968048 (filed 1978-12-11)
    • Inventors: Marion G. Porter; Robert W. Norman, Jr.; Charles P. Ryan
    • IPC: G06F12/08; G06F12/12; G06F9/30
    • CPC: G06F12/126
    • Abstract: A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals specifying the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes detection apparatus for detecting a conflict condition resulting in an improper assignment. The detection apparatus, upon detecting such a condition, advances the replacement circuits forward for assigning the next sequential group of locations or level, in addition to inhibiting it from making its normal location assignment. It also inhibits the directory circuits from writing the information required for making the location assignment and prevents the information which produced the conflict from being written into the cache store when received from memory. (A minimal sketch of this replacement scheme follows this entry.)
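The replacement behaviour described in the abstract above can be pictured with a small model: a sequential (round-robin) pointer hands out cache levels, and a detected conflict both suppresses the assignment and advances the pointer past the affected group. This is a minimal, hypothetical Python sketch written for this listing; the class and method names are invented, and nothing here is taken from the patent's actual circuits.

```python
class RoundRobinReplacer:
    """Toy model of sequential cache-level assignment with a conflict skip."""

    def __init__(self, num_levels):
        self.num_levels = num_levels
        self.next_level = 0  # sequential replacement pointer

    def assign(self, conflict_detected):
        """Return the level to replace, or None when the assignment is inhibited."""
        if conflict_detected:
            # Conflict: skip this assignment entirely (the conflicting block is
            # never written into the cache store) and move the pointer forward.
            self.next_level = (self.next_level + 1) % self.num_levels
            return None
        level = self.next_level
        self.next_level = (self.next_level + 1) % self.num_levels
        return level


if __name__ == "__main__":
    replacer = RoundRobinReplacer(num_levels=4)
    print(replacer.assign(conflict_detected=False))  # 0
    print(replacer.assign(conflict_detected=True))   # None -- assignment inhibited
    print(replacer.assign(conflict_detected=False))  # 2
```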
    • 2. Invention Grant
    • Title: Cache apparatus for enabling overlap of instruction fetch operations
    • Publication No.: US4313158A (published 1982-01-26)
    • Application No.: US968049 (filed 1978-12-11)
    • Inventors: Marion G. Porter; Charles P. Ryan
    • IPC: G06F9/38; G06F12/08; G06F9/00
    • CPC: G06F9/3802; G06F12/0859
    • Abstract: A data processing system comprises a data processing unit coupled to a cache unit which couples to a main store. The cache unit includes a cache store organized into a plurality of levels, each for storing blocks of information in the form of data and instructions. The cache unit further includes control apparatus, an instruction buffer for storing instructions received from the main store, and a transit block buffer comprising a plurality of locations for storing read commands. The control apparatus includes a plurality of groups of bit storage elements corresponding to the number of transit buffer locations. Each group includes at least a pair of instruction fetch indicator elements which are operatively connected to control the writing of first and second blocks of instructions into the instruction buffer. Each time a read command specifying the fetching of instructions of either a first or second block is received from the processing unit, the flag storage element associated with the transit block buffer location into which the read command is loaded is set to a binary ONE state, while the corresponding flag storage elements associated with the other locations storing outstanding read commands specifying instruction fetches are reset to binary ZEROS. This permits only those instructions received from the main store in response to that read command to be loaded into a specified section of the instruction buffer, enabling overlap in processing several commands specifying instruction fetch operations. (A minimal sketch of this flag scheme follows this entry.)
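To make the flag scheme in the abstract above concrete, the sketch below models a transit block buffer in which loading a new instruction-fetch read command sets that slot's flag and clears the flags of all other outstanding instruction fetches, so only the newest fetch is allowed to fill the instruction buffer when its data returns. This is an illustrative Python sketch under assumed names; it is not the patent's hardware.

```python
class TransitBlockBuffer:
    """Toy model of per-slot instruction-fetch flags for overlapped fetches."""

    def __init__(self, num_slots):
        self.commands = [None] * num_slots      # outstanding read commands
        self.ifetch_flag = [False] * num_slots  # True = newest instruction fetch

    def load_read_command(self, slot, command, is_instruction_fetch):
        self.commands[slot] = command
        if is_instruction_fetch:
            # Only the most recently issued instruction fetch keeps its flag set,
            # so data returning for older fetches cannot overwrite the buffer.
            self.ifetch_flag = [False] * len(self.ifetch_flag)
            self.ifetch_flag[slot] = True

    def on_block_return(self, slot, block, instruction_buffer):
        """Called when the main store returns the block for an outstanding read."""
        if self.ifetch_flag[slot]:
            instruction_buffer.extend(block)    # load only the newest fetch's block
        self.commands[slot] = None
        self.ifetch_flag[slot] = False


if __name__ == "__main__":
    tbb = TransitBlockBuffer(num_slots=2)
    ibuf = []
    tbb.load_read_command(0, "fetch block A", is_instruction_fetch=True)
    tbb.load_read_command(1, "fetch block B", is_instruction_fetch=True)
    tbb.on_block_return(0, ["A0", "A1"], ibuf)  # stale fetch: ignored
    tbb.on_block_return(1, ["B0", "B1"], ibuf)  # newest fetch: loaded
    print(ibuf)                                 # ['B0', 'B1']
```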
    • 3. Invention Grant
    • Title: Apparatus for cache clearing
    • Publication No.: US4471429A (published 1984-09-11)
    • Application No.: US342370 (filed 1982-01-25)
    • Inventors: Marion G. Porter; Charles P. Ryan; James L. King
    • IPC: G06F12/08; G06F13/00
    • CPC: G06F12/0822
    • Abstract: A cache clearing apparatus for a multiprocessor data processing system having a cache unit and a duplicate directory associated with each processor. The duplicate directory, which reflects the contents of the cache directory within its associated cache unit, is connected to that cache directory through a system controller unit. Commands affecting information segments within the main memory are transferred by the system controller unit to each of the duplicate directories to determine if the affected information segment is stored in the cache memory of its associated cache unit. If the information segment is stored therein, the duplicate directory issues a clear command through the system controller to clear the information segment from the associated cache unit. (A minimal sketch of this clearing flow follows this entry.)
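The clearing flow in the abstract above amounts to: a memory-modifying command is broadcast to every processor's duplicate directory, and any directory that finds the affected segment issues a clear to its own cache unit. The Python sketch below models that flow with invented class and function names; it is an illustration, not the patented circuitry.

```python
class CacheUnit:
    """Toy cache unit that can be told to clear (invalidate) a segment."""

    def __init__(self, segments):
        self.segments = set(segments)

    def clear(self, segment):
        self.segments.discard(segment)


class DuplicateDirectory:
    """Mirrors the tags of its associated cache unit's directory."""

    def __init__(self, cache_unit):
        self.cache_unit = cache_unit
        self.entries = set(cache_unit.segments)

    def handle_command(self, segment):
        # If the affected segment is present, clear it from the cache unit
        # and from this duplicate directory.
        if segment in self.entries:
            self.entries.discard(segment)
            self.cache_unit.clear(segment)


def system_controller_broadcast(duplicate_directories, segment):
    """Forward a command affecting `segment` to every duplicate directory."""
    for directory in duplicate_directories:
        directory.handle_command(segment)


if __name__ == "__main__":
    caches = [CacheUnit({"S1", "S2"}), CacheUnit({"S2", "S3"})]
    directories = [DuplicateDirectory(c) for c in caches]
    system_controller_broadcast(directories, "S2")  # a write touching segment S2
    print([sorted(c.segments) for c in caches])     # [['S1'], ['S3']]
```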
    • 4. Invention Grant
    • Title: Cache unit with transit block buffer apparatus
    • Publication No.: US4217640A (published 1980-08-12)
    • Application No.: US968522 (filed 1978-12-11)
    • Inventors: Marion G. Porter; Charles P. Ryan; William A. Shelly
    • IPC: G06F12/08; G06F13/00
    • CPC: G06F12/0855
    • Abstract: A data processing system comprises a data processing unit coupled to a cache unit which couples to a main store. The cache unit includes a cache store organized into a plurality of levels, each for storing a number of blocks of information in the form of data and instructions. Directories associated with the cache store contain addresses and level control information indicating which blocks of information reside in the cache store. The cache unit further includes control apparatus and a transit block buffer comprising a number of sections, each having a plurality of locations for storing read commands and the transit block addresses associated with them. A corresponding number of valid bit storage elements are included, each of which is set to a binary ONE state when a read command and its associated transit block address are loaded into the corresponding buffer location. Comparison circuits, coupled to the transit block buffer, compare the transit block address of each outstanding read command stored in the transit block buffer section with the address of each read or write command received from the processing unit. When there is a conflict, the comparison circuits generate an output signal which conditions the control apparatus to hold or stop further processing of the command by the cache unit and the operation of the processing unit. The hold lasts until the valid bit storage element of the location storing the outstanding read command is reset to a binary ZERO, indicating that execution of the read command has completed. (A minimal sketch of this conflict hold follows this entry.)
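The conflict hold described above can be summarised as: compare each incoming command's block address against every valid (outstanding) entry in the transit block buffer and stall while any match exists. Below is a minimal Python sketch with invented names, intended only to illustrate that comparison-and-hold logic.

```python
class TransitBuffer:
    """Toy transit block buffer with valid bits and an address-conflict check."""

    def __init__(self, num_locations):
        self.valid = [False] * num_locations
        self.block_addr = [None] * num_locations

    def issue_read(self, slot, block_addr):
        """Record an outstanding read command for the given block address."""
        self.valid[slot] = True
        self.block_addr[slot] = block_addr

    def complete_read(self, slot):
        """Main store has delivered the block: clear the valid bit."""
        self.valid[slot] = False

    def must_hold(self, block_addr):
        """True while any outstanding read targets the same block address,
        i.e. while the new command (and the processor) must be held."""
        return any(v and a == block_addr
                   for v, a in zip(self.valid, self.block_addr))


if __name__ == "__main__":
    tb = TransitBuffer(num_locations=4)
    tb.issue_read(0, block_addr=0x40)
    print(tb.must_hold(0x40))  # True  -- conflict, hold the command
    tb.complete_read(0)
    print(tb.must_hold(0x40))  # False -- hold released
```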
    • 5. Invention Grant
    • Title: Method and system for cache miss prediction based on previous cache access requests
    • Publication No.: US5495591A (published 1996-02-27)
    • Application No.: US906618 (filed 1992-06-30)
    • Inventor: Charles P. Ryan
    • IPC: G06F12/08
    • CPC: G06F12/0862; G06F2212/6026; Y10S707/99936
    • Abstract: For a data processing system which employs a cache memory, the disclosure includes both a method for lowering the cache miss ratio for requested operands and an example of special purpose apparatus for practicing the method. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns among them. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard-wired for speed and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining whether the outputs from at least two subtraction circuits are the same, indicating a pattern which yields information that can be combined with an address in the stack to develop a predictive address. The efficiency of the apparatus is improved by placing a series of "select pattern" values, representing the search order for trying patterns, into a register stack and providing logic circuitry by which the most recently found "select pattern" value is placed at the top of the stack with the remaining "select pattern" values pushed down accordingly. (A minimal sketch of this pattern search follows this entry.)
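A compact way to see the mechanism above is: keep a FIFO stack of recent miss addresses, try a prioritised list of index patterns, declare a hit when two displacements computed from the stack agree, prefetch the newest miss plus that displacement, and move the successful pattern to the front of the search order. The Python sketch below does exactly that with invented names and an assumed pattern set; it is an illustration, not the patented circuit.

```python
def predict_next_miss(miss_stack, select_patterns):
    """Return a predictive prefetch address, or None if no pattern matches.

    miss_stack:      recent cache-miss addresses, newest first (the FIFO miss stack).
    select_patterns: ordered list of (i, j, k) index triples to try; a pattern
                     matches when miss[i]-miss[j] equals miss[j]-miss[k].
                     The matching pattern is moved to the front of the list,
                     modelling the "select pattern" register stack behaviour.
    """
    for n, (i, j, k) in enumerate(select_patterns):
        if k >= len(miss_stack):
            continue
        d1 = miss_stack[i] - miss_stack[j]
        d2 = miss_stack[j] - miss_stack[k]
        if d1 == d2 and d1 != 0:
            select_patterns.insert(0, select_patterns.pop(n))  # move-to-front
            return miss_stack[0] + d1  # predictive address to prefetch
    return None


if __name__ == "__main__":
    misses = [0x130, 0x120, 0x110, 0x100]   # newest first: stride of 0x10
    patterns = [(0, 2, 3), (0, 1, 2)]       # assumed search order
    print(hex(predict_next_miss(misses, patterns)))  # 0x140
    print(patterns)                          # matching pattern moved to the front
```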
    • 6. Invention Grant
    • Title: Method and apparatus for exhaustively testing interactions among multiple processors
    • Publication No.: US6249880B1 (published 2001-06-19)
    • Application No.: US09156378 (filed 1998-09-17)
    • Inventors: William A. Shelly; Charles P. Ryan
    • IPC: H02H3/05
    • CPC: G06F11/24; G06F11/2242
    • Abstract: Interactions among multiple processors (92) are exhaustively tested. A master processor (92) retrieves test information for a set of tests from a test table (148). It then enters a series of nested loops, with one loop for each of the tested processors (92). A cycle delay count for each of the tested processors (92) is incremented (152, 162, 172) through a range specified in the test table entry. For each combination of cycle delay count loop indices, a single test is executed (176). In each such test (176), the master processor (92) sets up (182) each of the other processors (92) being tested. This setup (182) specifies the delay count and the code for that processor (92) to execute. When each processor (92) has been set up (182), it waits (192) for a synchronize interrupt (278). When all processors (92) have been set up (182), the master processor (92) issues (191) the synchronize interrupt signal (276). Each processor (92) then starts traces (193) and delays (194) the specified number of cycles. After the delay, the processor (92) executes its test code (195). (A minimal sketch of the nested-delay test loop follows this entry.)
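The test driver described above is essentially a Cartesian product over per-processor delay ranges: for every combination of delay counts, each tested processor is set up with its delay and code, released together, and run after delaying its assigned number of cycles. The Python sketch below models that control flow; the table layout, names, and callback are assumptions made for illustration, not the patent's implementation.

```python
from itertools import product


def run_interaction_tests(test_entry, processors, run_on_processor):
    """Sweep every combination of per-processor cycle delays for one test-table entry.

    test_entry: {"delay_ranges": [(lo, hi), ...], "code": {cpu: code, ...}}
    processors: identifiers of the tested processors (driven by a master).
    run_on_processor(cpu, delay, code): models "wait for the synchronize
    interrupt, delay `delay` cycles, then execute `code`".
    """
    ranges = [range(lo, hi + 1) for lo, hi in test_entry["delay_ranges"]]
    # One nested loop per tested processor: itertools.product enumerates every
    # combination of delay counts, so the interaction window is swept exhaustively.
    for delays in product(*ranges):
        for cpu, delay in zip(processors, delays):   # setup phase for one test
            run_on_processor(cpu, delay, test_entry["code"][cpu])


if __name__ == "__main__":
    entry = {"delay_ranges": [(0, 2), (0, 2)],
             "code": {"P0": "store X", "P1": "load X"}}
    run_interaction_tests(entry, ["P0", "P1"],
                          lambda cpu, delay, code: print(cpu, "delay", delay, "->", code))
```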
    • 8. Invention Grant
    • Title: Controllably operable method and apparatus for predicting addresses of future operand requests by examination of addresses of prior cache misses
    • Publication No.: US5694572A (published 1997-12-02)
    • Application No.: US841687 (filed 1992-02-26)
    • Inventor: Charles P. Ryan
    • IPC: G06F12/10; G06F9/38; G06F12/08; G06F12/12; G06F13/00
    • CPC: G06F9/383; G06F12/0862; G06F9/3832; G06F2212/6026
    • Abstract: In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns among them. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard-wired for speed and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining whether the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The cache miss prediction mechanism is selectively enabled only during the cache "in-rush" following a process change, to increase the recovery rate; thereafter it is disabled, based upon the timing-out of a timer or reaching a hit ratio threshold, so that normal procedures allow the hit ratio to stabilize at a higher percentage than if the cache miss prediction mechanism were operated continuously. (A minimal sketch of this selective enable follows this entry.)
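The gating policy described above (enable prediction only during the post-process-change "in-rush", then disable on a timer or once the hit ratio recovers) can be modelled in a few lines. This is a hypothetical Python sketch; the window length, threshold, and names are assumptions made for illustration.

```python
class InRushPredictionGate:
    """Toy model of enabling miss prediction only during the cache in-rush."""

    def __init__(self, window_cycles, hit_ratio_threshold):
        self.window_cycles = window_cycles          # timer length (assumed)
        self.hit_ratio_threshold = hit_ratio_threshold
        self.cycles_left = 0                        # 0 means prediction disabled

    def on_process_change(self):
        """A process switch starts the in-rush window: enable prediction."""
        self.cycles_left = self.window_cycles

    def tick(self, current_hit_ratio):
        """Advance one cycle; return True while prediction should stay enabled."""
        if self.cycles_left > 0:
            self.cycles_left -= 1
            if current_hit_ratio >= self.hit_ratio_threshold:
                self.cycles_left = 0                # hit ratio recovered: disable early
        return self.cycles_left > 0


if __name__ == "__main__":
    gate = InRushPredictionGate(window_cycles=1000, hit_ratio_threshold=0.95)
    gate.on_process_change()
    print(gate.tick(0.60))  # True  -- still in-rush, prediction enabled
    print(gate.tick(0.97))  # False -- threshold reached, prediction disabled
```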
    • 9. Invention Grant
    • Title: Cache miss prediction method and apparatus for use with a paged main memory in a data processing system
    • Publication No.: US5450561A (published 1995-09-12)
    • Application No.: US921825 (filed 1992-07-29)
    • Inventor: Charles P. Ryan
    • IPC: G06F12/08; G06F12/02
    • CPC: G06F12/0862; G06F2212/6026
    • Abstract: In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns among them. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard-wired for speed and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining whether the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The efficiency of the apparatus when operating in an environment incorporating a paged main memory is improved, according to the invention, by the addition of logic circuitry which inhibits the prefetch if a page boundary would be encountered. (A minimal sketch of this page-boundary check follows this entry.)
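The added safeguard above reduces to a single check: prefetch only when the predicted address falls on the same page as the miss that produced it. A minimal Python sketch follows; the page size is an assumed illustrative value, not one specified in the abstract.

```python
PAGE_SIZE_WORDS = 1024  # assumed page size, for illustration only


def should_prefetch(miss_address, predicted_address, page_size=PAGE_SIZE_WORDS):
    """Allow the prefetch only if it does not cross a page boundary."""
    return (miss_address // page_size) == (predicted_address // page_size)


if __name__ == "__main__":
    print(should_prefetch(1000, 1010))  # True  -- same page, prefetch proceeds
    print(should_prefetch(1020, 1030))  # False -- crosses the boundary, inhibited
```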
    • 10. Invention Grant
    • Title: Controlling cache predictive prefetching based on cache hit ratio trend
    • Publication No.: US5367656A (published 1994-11-22)
    • Application No.: US850713 (filed 1992-03-13)
    • Inventor: Charles P. Ryan
    • IPC: G06F12/08; G06F13/00
    • CPC: G06F12/0862; G06F2212/6026
    • Abstract: In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first-in, first-out miss stack, and the stored addresses are searched for displacement patterns among them. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard-wired for speed and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining whether the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The cache miss prediction mechanism is adaptively and selectively enabled by an adaptive circuit that develops a short-term operand cache hit ratio history and responds to ratio-improving and ratio-deteriorating trends by enabling and disabling the cache miss prediction mechanism accordingly. (A minimal sketch of this trend-based gating follows this entry.)
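Following the abstract's wording literally (an improving hit-ratio trend enables prediction, a deteriorating trend disables it), the trend-based gate can be sketched as a short sliding window over recent hit-ratio samples. The window length and names below are assumptions made for this illustration, not details taken from the patent.

```python
class TrendBasedPredictionGate:
    """Toy model of adaptive gating driven by the short-term hit-ratio trend."""

    def __init__(self, window=8):
        self.window = window     # number of recent samples kept (assumed)
        self.history = []
        self.enabled = True

    def update(self, hit_ratio):
        """Record one hit-ratio sample and return whether prediction is enabled."""
        self.history.append(hit_ratio)
        self.history = self.history[-self.window:]
        if len(self.history) >= 2:
            # Improving trend (newest sample at least as good as the oldest in
            # the window) -> enable; deteriorating trend -> disable.
            self.enabled = self.history[-1] >= self.history[0]
        return self.enabled


if __name__ == "__main__":
    gate = TrendBasedPredictionGate(window=4)
    for sample in (0.90, 0.92, 0.94, 0.89):
        print(sample, gate.update(sample))  # last sample flips the gate off
```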