    • 1. Invention Grant
    • Method and apparatus for exhaustively testing interactions among multiple processors
    • US06249880B1
    • 2001-06-19
    • US09156378
    • 1998-09-17
    • William A. Shelly; Charles P. Ryan
    • William A. Shelly; Charles P. Ryan
    • H02H3/05
    • G06F11/24; G06F11/2242
    • Interactions among multiple processors (92) are exhaustively tested. A master processor (92) retrieves test information for a set of tests from a test table (148). It then enters a series of embedded loops, with one loop for each of the tested processors (92). A cycle delay count for each of the tested processors (92) is incremented (152, 162, 172) through a range specified in the test table entry. For each combination of cycle delay count loop indices, a single test is executed (176). In each such test (176), the master processor (92) sets up (182) each of the other processors (92) being tested. This setup (182) specifies the delay count and the code for that processor (92) to execute. When each processor (92) is setup (182), it waits (192) for a synchronize interrupt (278). When all processors (92) have been setup (182), the master processor (92) issues (191) the synchronize interrupt signal (276). Each processor (92) then starts traces (193) and delays (194) the specified number of cycles. After the delay, the processor (92) executes its test code (195).
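The delay-sweep structure described in the abstract above — one nested loop per tested processor, with one test run per combination of cycle-delay counts — can be sketched in Python. All names here are illustrative, not taken from the patent:

```python
from itertools import product

def run_exhaustive_test(test_entry, execute):
    """Sweep every combination of per-processor cycle-delay counts.

    test_entry maps each tested processor id to a range of delay counts;
    execute(delays) stands in for one test run. In the patent, the master
    processor would set up each tested processor with its delay and code,
    then broadcast a synchronize interrupt before each run.
    """
    procs = sorted(test_entry)
    results = []
    # One nested loop per tested processor, flattened with itertools.product.
    for delays in product(*(test_entry[p] for p in procs)):
        combo = dict(zip(procs, delays))
        execute(combo)          # a single test for this delay combination
        results.append(combo)
    return results

# Example: three processors, each swept over delays 0..2 -> 27 combinations.
runs = run_exhaustive_test({0: range(3), 1: range(3), 2: range(3)},
                           lambda d: None)
```

Flattening the embedded loops with `itertools.product` keeps the sketch short while preserving the exhaustive enumeration order.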
    • 3. Invention Grant
    • Controllably operable method and apparatus for predicting addresses of future operand requests by examination of addresses of prior cache misses
    • US5694572A
    • 1997-12-02
    • US841687
    • 1992-02-26
    • Charles P. Ryan
    • Charles P. Ryan
    • G06F12/10; G06F9/38; G06F12/08; G06F12/12; G06F13/00
    • G06F9/383; G06F12/0862; G06F9/3832; G06F2212/6026
    • In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The cache miss prediction mechanism is only selectively enabled during cache "in-rush" following a process change to increase the recovery rate; thereafter, it is disabled, based upon timing out a timer or reaching a hit ratio threshold, so that normal procedures allow the hit ratio to stabilize at a higher percentage than if the cache miss prediction mechanism were operated continuously.
    • 4. Invention Grant
    • Cache miss prediction method and apparatus for use with a paged main memory in a data processing system
    • US5450561A
    • 1995-09-12
    • US921825
    • 1992-07-29
    • Charles P. Ryan
    • Charles P. Ryan
    • G06F12/08; G06F12/02
    • G06F12/0862; G06F2212/6026
    • In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The efficiency of the apparatus operating in an environment incorporating a paged main memory is improved, according to the invention, by the addition of logic circuitry which serves to inhibit prefetch if a page boundary would be encountered.
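The page-boundary refinement this patent adds to the base predictor amounts to a single check: issue the prefetch only when the predicted address lies in the same page as the miss that produced it. A sketch, with an assumed 4 KiB page size (the patent does not fix one):

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def prefetch_allowed(miss_addr, predicted_addr, page_size=PAGE_SIZE):
    """Inhibit prefetch when the predictive address crosses a page boundary.

    Mirrors the added logic circuitry described in the abstract; the real
    mechanism is hard-wired, not a software test.
    """
    return (miss_addr // page_size) == (predicted_addr // page_size)

same_page = prefetch_allowed(0x1000, 0x1F80)  # both in the same page
crosses   = prefetch_allowed(0x1FC0, 0x2040)  # prediction lands on next page
```

In hardware this reduces to comparing the page-number bits of the two addresses, which is why a small amount of added logic suffices.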
    • 5. Invention Grant
    • Controlling cache predictive prefetching based on cache hit ratio trend
    • US5367656A
    • 1994-11-22
    • US850713
    • 1992-03-13
    • Charles P. Ryan
    • Charles P. Ryan
    • G06F12/08; G06F13/00
    • G06F12/0862; G06F2212/6026
    • In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address. The cache miss prediction mechanism is adaptively and selectively enabled by an adaptive circuit that develops a short term operand cache hit ratio history and responds to ratio-improving and ratio-deteriorating trends by enabling and disabling the cache miss prediction mechanism accordingly.
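The adaptive control this patent adds — a short-term hit-ratio history whose trend gates the predictor — can be modelled with a small sliding window. The window size and the particular trend test (mean of the newer half versus the older half) are illustrative assumptions:

```python
class AdaptivePrefetchControl:
    """Enable/disable the miss predictor from a short-term hit-ratio trend."""

    def __init__(self, window=4):
        self.window = window
        self.history = []
        self.enabled = True

    def update(self, hit_ratio):
        """Record one hit-ratio sample; return whether prediction is enabled."""
        self.history.append(hit_ratio)
        self.history = self.history[-self.window:]
        if len(self.history) == self.window:
            half = self.window // 2
            older = sum(self.history[:half]) / half
            newer = sum(self.history[half:]) / half
            # improving trend keeps the predictor on; deteriorating turns it off
            self.enabled = newer >= older
        return self.enabled

ctl = AdaptivePrefetchControl()
for r in (0.80, 0.82, 0.85, 0.88):
    improving = ctl.update(r)       # rising ratio: predictor stays enabled
for r in (0.85, 0.80):
    deteriorating = ctl.update(r)   # falling ratio: predictor disabled
```

The hardware version develops the same history with counters rather than floating-point averages, but the control decision is the same shape.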
    • 6. Invention Grant
    • Instruction buffer associated with a cache memory unit
    • US4521850A
    • 1985-06-04
    • US433569
    • 1982-10-04
    • John E. Wilhite; William A. Shelly; Charles P. Ryan
    • John E. Wilhite; William A. Shelly; Charles P. Ryan
    • G06F9/38; G06F9/12
    • G06F9/3804
    • Apparatus and method for providing an improved instruction buffer associated with a cache memory unit. The instruction buffer is utilized to transmit to the control unit of the central processing unit a requested sequence of data groups. In the current invention, the instruction buffer can store two sequences of data groups. The instruction buffer can store the data group sequence for the procedure currently in execution by the data processing unit and can simultaneously store data groups to which transfer, either conditional or unconditional, has been identified in the sequence currently being executed. In addition, the instruction buffer provides signals for use by the central processing unit defining the status of the instruction buffer.
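The two-sequence behaviour described in the abstract — one slot for the procedure in execution, one for an identified transfer target, plus status signals — can be sketched as a small data structure. Field and method names are assumptions for illustration:

```python
class InstructionBuffer:
    """Sketch of an instruction buffer holding two data-group sequences."""

    def __init__(self):
        self.current = []   # sequence for the procedure in execution
        self.target = []    # sequence at a detected transfer address

    def fill_current(self, groups):
        self.current = list(groups)

    def fill_target(self, groups):
        # filled while the current sequence executes, once a conditional or
        # unconditional transfer has been identified in it
        self.target = list(groups)

    def take_transfer(self):
        """On a taken transfer, the target sequence becomes current."""
        self.current, self.target = self.target, []

    def status(self):
        # the status signals the buffer reports to the central processing unit
        return {"current_valid": bool(self.current),
                "target_valid": bool(self.target)}

buf = InstructionBuffer()
buf.fill_current(["i0", "i1"])
buf.fill_target(["t0", "t1"])
buf.take_transfer()
```

Prefetching the transfer target in parallel is what lets a taken branch proceed without waiting on the cache, which is the benefit the abstract claims.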
    • 8. Invention Grant
    • Method and system for cache miss prediction based on previous cache access requests
    • US5495591A
    • 1996-02-27
    • US906618
    • 1992-06-30
    • Charles P. Ryan
    • Charles P. Ryan
    • G06F12/08
    • G06F12/0862; G06F2212/6026; Y10S707/99936
    • For a data processing system which employs a cache memory, the disclosure includes both a method for lowering the cache miss ratio for requested operands and an example of special purpose apparatus for practicing the method. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern which yields information which can be combined with an address in the stack to develop a predictive address. The efficiency of the apparatus is improved by placing a series of "select pattern" values representing the search order for trying patterns into a register stack and providing logic circuitry by which the most recently found "select pattern" value is placed at the top of the stack with the remaining "select pattern" values pushed down accordingly.
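The refinement this patent adds is an ordering heuristic: the "select pattern" values fix the order in which displacement patterns are tried, and the most recently successful pattern is moved to the top of the register stack so recurring strides are found sooner. That is a move-to-front update, sketched here with a Python list standing in for the register stack:

```python
def promote_pattern(select_patterns, found_index):
    """Move the most recently matched "select pattern" to the top.

    The remaining patterns are pushed down accordingly, as in the abstract.
    The list mutation models the register-stack shift in software.
    """
    pattern = select_patterns.pop(found_index)
    select_patterns.insert(0, pattern)
    return select_patterns

order = ["A", "B", "C", "D"]     # illustrative pattern names
promote_pattern(order, 2)        # pattern "C" just produced a hit
```

Move-to-front is a classic self-organizing-list policy: if the miss stream keeps exhibiting the same stride, the matching pattern is tested first on every subsequent search.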
    • 9. Invention Grant
    • Cache unit information replacement apparatus
    • US4314331A
    • 1982-02-02
    • US968048
    • 1978-12-11
    • Marion G. Porter; Robert W. Norman, Jr.; Charles P. Ryan
    • Marion G. Porter; Robert W. Norman, Jr.; Charles P. Ryan
    • G06F12/08; G06F12/12; G06F9/30
    • G06F12/126
    • A cache unit includes a cache store organized into a number of levels to provide fast access to instructions and data words. Directory circuits, associated with the cache store, contain address information identifying those instructions and data words stored in the cache store. The cache unit has at least one instruction register for storing address and level signals for specifying the location of the next instruction to be fetched and transferred to the processing unit. Replacement circuits are included which, during normal operation, assign cache locations sequentially for replacing old information with new information. The cache unit further includes detection apparatus for detecting a conflict condition resulting in an improper assignment. The detection apparatus, upon detecting such a condition, advances the replacement circuits forward to assign the next sequential group of locations or level, inhibiting them from making the normal location assignment. It also inhibits the directory circuits from writing the information required for making the location assignment and prevents the information which produced the conflict from being written into the cache store when received from memory.
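The replacement behaviour in the abstract — sequential (round-robin) level assignment that is advanced past a detected conflict, with the conflicting fill suppressed — can be sketched as a small state machine. This models only the assignment sequencing, not the directory or detection circuits:

```python
class SequentialReplacer:
    """Round-robin cache-level assignment with conflict skip (a sketch)."""

    def __init__(self, levels=4):
        self.levels = levels
        self.next_level = 0

    def assign(self, conflict=False):
        """Return the level to replace, or None when a conflict inhibits it."""
        if conflict:
            # advance without assigning; the write into the cache store and
            # the directory update are both suppressed for this fill
            self.next_level = (self.next_level + 1) % self.levels
            return None
        level = self.next_level
        self.next_level = (self.next_level + 1) % self.levels
        return level

r = SequentialReplacer()
first = r.assign()                  # normal operation assigns level 0
skipped = r.assign(conflict=True)   # conflict: no assignment, pointer advances
after = r.assign()                  # sequence resumes at level 2
```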
    • 10. Invention Grant
    • Balanced access to prevent gateword dominance in a multiprocessor write-into-cache environment
    • US06868483B2
    • 2005-03-15
    • US10256289
    • 2002-09-26
    • Wayne R. Buzby; Charles P. Ryan
    • Wayne R. Buzby; Charles P. Ryan
    • G06F9/46; G06F12/00; G06F12/08
    • G06F9/52; G06F12/0815
    • In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; at least four processors having respective private caches, with the first and second private caches being coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches being coupled to the second shared cache and to one another via a second internal bus; a method and apparatus are disclosed for preventing hogging of ownership of a gateword stored in the main memory which governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use. A gateword OPEN command then broadcasts a gateword interrupt to set the flag in each processor, delays long enough to ensure that the flags have all been set, writes an OPEN value into the gateword and flushes the gateword to main memory. A gateword access command executed by a requesting processor checks its gate control flag, and if set, starts a fixed time delay after which normal execution continues.
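The CLOSE/OPEN/access protocol in the abstract can be modelled as a toy state machine: CLOSE takes ownership, OPEN broadcasts an interrupt that sets every other processor's gate control flag, and a flagged requester waits a fixed delay before retrying, which is what gives distant processors a balanced chance at the gateword. Delays and names are illustrative; the patent's timing comes from the hardware:

```python
import threading
import time

class GatewordSketch:
    """Toy software model of the balanced gateword CLOSE/OPEN protocol."""

    OPEN, CLOSED = 0, 1

    def __init__(self, n_procs):
        self.value = self.OPEN
        self.flags = [False] * n_procs  # per-processor gate control flags
        self.lock = threading.Lock()

    def close(self, proc):
        """Gateword CLOSE: try to take ownership; True on success."""
        with self.lock:
            if self.value == self.OPEN:
                self.value = self.CLOSED
                return True
            return False

    def open(self, proc):
        """Gateword OPEN: broadcast the interrupt, then release the gateword."""
        with self.lock:
            for i in range(len(self.flags)):
                if i != proc:
                    self.flags[i] = True   # gateword interrupt sets the flag
            self.value = self.OPEN         # written and flushed to main memory

    def access(self, proc, delay=0.001):
        """Gateword access: a flagged requester delays, then retries CLOSE."""
        if self.flags[proc]:
            time.sleep(delay)              # fixed balancing delay
            self.flags[proc] = False
        return self.close(proc)

g = GatewordSketch(4)
g.close(0)               # processor 0 owns the gateword
blocked = g.close(1)     # other processors cannot acquire it
g.open(0)                # broadcast sets flags in processors 1..3, then opens
acquired = g.access(1)   # processor 1 delays, then wins the gateword
```

The fixed delay after OPEN is the balancing step: without it, the processor closest to the releasing one would reacquire the gateword before remote processors ever saw it open.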