    • 1. Granted Invention Patent
    • Facility to allow fast execution of and, or, and test instructions
    • US06233675B1
    • 2001-05-15
    • US09276315
    • 1999-03-25
    • Kenneth K. Munson; Peter C. Mills
    • Kenneth K. Munson; Peter C. Mills
    • G06F9/305
    • G06F9/30029; G06F9/30094; G06F9/3867
    • Improvements are made in how microprocessors execute AND, OR, and TEST instructions when the operands of the instruction are equal. AND/OR/TEST instructions with equal operands are used to set flags based on the contents of the single operand without explicitly performing the actual AND/OR/TEST command. By setting these flags directly, this mechanism allows these instructions to be paired with preceding dependent instructions simply by using the flags set by the AND/OR/TEST for the previous instruction. An architecture that hardwires the implementation into the microprocessor through logic gates is preferred. This will result in increased speed while reducing power consumption. Further, a full-sized ALU is not needed in order to execute the AND/OR/TEST instruction with equal operands. As this is a more direct procedure, a pipeline with a reduced capability ALU can be utilized.
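The equal-operand shortcut described in this abstract can be sketched in software. The model below is a minimal illustration only: it assumes an 8-bit operand and x86-style flag semantics (ZF/SF/PF derived from the result, CF/OF cleared by AND/OR/TEST); the function names and flag layout are assumptions, not details from the patent.

```python
# Sketch of the equal-operand shortcut: since x AND x == x and x OR x == x,
# the condition flags can be derived directly from the single operand x,
# with no full ALU pass. WIDTH and the flag set are illustrative assumptions.
WIDTH = 8
MASK = (1 << WIDTH) - 1

def flags_from_operand(x):
    """Derive the flags directly from the operand (the patented shortcut)."""
    x &= MASK
    return {
        "ZF": x == 0,                              # zero flag
        "SF": bool(x >> (WIDTH - 1)),              # sign flag (top bit)
        "PF": bin(x & 0xFF).count("1") % 2 == 0,   # even parity of low byte
        "CF": False,                               # AND/OR/TEST clear carry
        "OF": False,                               # ...and overflow
    }

def flags_via_alu(op, x):
    """Reference model: actually run the ALU operation, then read flags."""
    r = op(x & MASK, x & MASK) & MASK
    return flags_from_operand(r)  # flags of the computed result

# The shortcut agrees with a real AND/OR with equal operands for every value.
for x in range(256):
    assert flags_from_operand(x) == flags_via_alu(lambda a, b: a & b, x)
    assert flags_from_operand(x) == flags_via_alu(lambda a, b: a | b, x)
```

Because the result of the operation is just the operand itself, the flag logic reduces to a handful of gates, which is why the abstract notes a full-sized ALU is unnecessary for this case.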
    • 2. Granted Invention Patent
    • Parallel processing instructions routed through plural differing capacity units of operand address generators coupled to multi-ported memory and ALUs
    • US06341343B2
    • 2002-01-22
    • US09842107
    • 2001-04-26
    • Kenneth K. Munson
    • Kenneth K. Munson
    • G06F9/38
    • G06F9/3824; G06F9/3867; G06F9/3885
    • Three parallel instruction processing pipelines of a microprocessor share two data memory ports for obtaining operands and writing back results. Since a significant proportion of the instructions of a typical computer program do not require reading operands from the memory, the probability is high that at least one of any three program instructions to be executed at the same time need not fetch an operand from memory. The two memory ports are thus connected at any given time with the two of the three pipelines which are processing instructions that require memory access, the pipeline without access to the memory processing an instruction that does not need it. To do so, the added third pipeline need not have all the same resources as the other two pipelines, so its stages are made to have a reduced capability in order to save space and reduce power consumption. The stages of the three pipelines are also dynamically interchanged in response to the specific combination of three instructions being processed at the same time, in order to increase the rate of processing a large number of instructions.
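The two-ports-for-three-pipelines scheme above can be modelled as a small arbitration function. This is a behavioural sketch only: the function name, the fixed port numbering, and raising on the all-three case (where a real machine would stall) are assumptions, not details from the patent.

```python
def assign_ports(needs_memory):
    """Map two data-memory ports onto three pipeline slots.

    needs_memory: three booleans, one per pipeline instruction, True if
    that instruction must access memory this cycle. Returns a list whose
    i-th entry is the port number assigned to pipeline i, or None for a
    pipeline whose instruction needs no memory access. The abstract relies
    on the third case (all three need memory) being rare; a real machine
    would stall there, and this sketch raises instead.
    """
    if sum(needs_memory) > 2:
        raise RuntimeError("stall: more memory instructions than ports")
    ports = iter([0, 1])
    return [next(ports) if need else None for need in needs_memory]
```

For example, `assign_ports([True, False, True])` routes port 0 to the first pipeline and port 1 to the third, leaving the middle pipeline (which needs no memory access this cycle) unconnected.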
    • 3. Granted Invention Patent
    • Branch prediction mechanism
    • US06263427B1
    • 2001-07-17
    • US09146995
    • 1998-09-04
    • Sean P. Cummins; Kenneth K. Munson
    • Sean P. Cummins; Kenneth K. Munson
    • G06F9/38
    • G06F9/3806
    • A branch prediction mechanism for predicting the outcome and the branch target address of the next possible branch instruction of a current instruction. Each entry of the branch target buffer (“BTB”) of the present invention provides a next possible branch instruction address and the corresponding branch target address. By checking the TAG portion of each entry of the BTB against the current instruction address, the branch prediction mechanism can predict the next possible branch instruction and the corresponding branch target address.
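The lookup-by-current-address behaviour can be sketched with a toy direct-mapped table. The entry count, tag scheme, and method names below are illustrative assumptions; the patent's actual BTB organisation may differ.

```python
class BranchTargetBuffer:
    """Toy direct-mapped BTB: each entry is tagged with the address of a
    fetched instruction and predicts the address of the next branch that
    will be encountered, along with that branch's target address."""

    def __init__(self, entries=64):
        self.entries = entries
        # Each slot holds (tag, next_branch_addr, target_addr) or None.
        self.table = [None] * entries

    def predict(self, pc):
        """Look up the current instruction address; on a tag match, return
        the predicted (next branch address, branch target address)."""
        slot = self.table[pc % self.entries]
        if slot is not None and slot[0] == pc // self.entries:
            return slot[1], slot[2]
        return None  # BTB miss: fetch falls through sequentially

    def update(self, pc, branch_addr, target_addr):
        """Record a resolved branch against the instruction address pc."""
        self.table[pc % self.entries] = (pc // self.entries,
                                         branch_addr, target_addr)
```

A usage sketch: after resolving that the instruction at `0x44` is a taken branch to `0x100`, calling `update(0x40, 0x44, 0x100)` lets a later `predict(0x40)` steer fetch toward `0x100` before the branch is even decoded.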
    • 4. Granted Invention Patent
    • Execution of data dependent arithmetic instructions in multi-pipeline processors
    • US06263424B1
    • 2001-07-17
    • US09128164
    • 1998-08-03
    • Dzung X. Tran; Kenneth K. Munson
    • Dzung X. Tran; Kenneth K. Munson
    • G06F9/302
    • G06F9/3001; G06F9/30094
    • A single chip microprocessor has at least two parallel pipelines that each have multiple processing stages, one of which is an instruction execution stage with a full functioned arithmetic logic unit (ALU). The ALU of one pipeline includes an adder that has the usual two input ports while the adder of the ALU of the other pipeline has at least one extra input port. Two successive arithmetically data dependent instructions are executed by the larger adder alone, while the smaller adder is used as part of a logic circuit that determines the carry bit for the instruction execution result obtained from the larger adder. The smaller adder is thus efficiently used, in an operation where it would otherwise be idle. The additional logic circuitry necessary to determine the carry bit is thus minimized. This additional logic circuitry uses carry bit outputs of both adders, plus the number of adder inputs where the data is inverted in order to execute the instructions, to determine the ultimate carry bit of the instruction execution data.
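At the architectural level, the dependent-add fusion described above can be modelled as follows. This sketch reproduces only the visible behaviour (one three-input addition plus a reconstructed carry flag for the second instruction); the patent's gate-level carry logic, which combines both adders' carry-outs with the count of inverted inputs, is not modelled here.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def fused_add_with_carry(a, b, c):
    """Execute r1 = a + b; r2 = r1 + c as one fused operation.

    The 'large' three-input adder produces (a + b + c) directly, which is
    valid because (a + b + c) mod 2^W == (((a + b) mod 2^W) + c) mod 2^W,
    while the carry flag of the second architectural instruction is
    reconstructed separately (the job of the smaller adder in the patent).
    """
    r1 = (a + b) & MASK           # result the first instruction would write
    r2 = (a + b + c) & MASK       # produced in one pass by the 3-input adder
    carry = (r1 + c) >> WIDTH     # carry flag of the dependent second add
    return r2, carry

# Fused execution matches sequential execution of the two instructions.
for a, b, c in [(0xFF, 0x01, 0x01), (0x80, 0x70, 0x20), (1, 2, 3)]:
    r1 = (a + b) & MASK
    assert fused_add_with_carry(a, b, c) == ((r1 + c) & MASK, (r1 + c) >> WIDTH)
```

The subtle point the abstract addresses is the carry: the three-input sum alone does not directly yield the carry-out of the second two-input add, which is why extra logic (reusing the otherwise idle smaller adder) is needed.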
    • 6. Granted Invention Patent
    • Executing multiple instructions in multi-pipelined processor by dynamically switching memory ports of fewer number than the pipeline
    • US06304954B1
    • 2001-10-16
    • US09151634
    • 1998-09-11
    • Kenneth K. Munson
    • Kenneth K. Munson
    • G06F9/34
    • G06F9/3824; G06F9/3867; G06F9/3885
    • Three parallel instruction processing pipelines of a microprocessor share two data memory ports for obtaining operands and writing back results. Since a significant proportion of the instructions of a typical computer program do not require reading operands from the memory, the probability is high that at least one of any three program instructions to be executed at the same time need not fetch an operand from memory. The two memory ports are thus connected at any given time with the two of the three pipelines which are processing instructions that require memory access, the pipeline without access to the memory processing an instruction that does not need it. To do so, the added third pipeline need not have all the same resources as the other two pipelines, so its stages are made to have a reduced capability in order to save space and reduce power consumption. The stages of the three pipelines are also dynamically interchanged in response to the specific combination of three instructions being processed at the same time, in order to increase the rate of processing a large number of instructions.
    • 7. Granted Invention Patent
    • Instruction cache address generation technique having reduced delays in fetching missed data
    • US06223257B1
    • 2001-04-24
    • US09310659
    • 1999-05-12
    • Sean P. Cummins; Kenneth K. Munson; Christopher I. W. Norrie; Matthew D. Ornes
    • Sean P. Cummins; Kenneth K. Munson; Christopher I. W. Norrie; Matthew D. Ornes
    • G06F12/12
    • G06F9/3802; G06F12/0859
    • A technique and system for reading instruction data from a cache memory with minimum delays. Addresses are calculated and applied to the cache memory in two or more cycles by a pipelined address generation circuit. While data at one address is being retrieved, the next address is being calculated. It is presumed, when calculating the next address, that the current address will return all the data it is addressing. In response to a miss signal received from the cache when no data at the current address is in the cache, the missed data is read from a main system memory and accessed with improved speed. In a system where the cache memory and processor operate at a higher clock frequency than the main system memory, new data is obtained from the main memory during only periodically occurring cache clock cycles. A missed cache memory address is regenerated in a manner to access such new data during the same cache clock cycle that it first becomes available from the main memory. This eliminates the occurrence of penalty delay cycles that reduce the rate at which instructions are issued in existing processors, and thus improves the speed of operation of the processors.
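The replay-on-miss behaviour can be sketched at the functional level. Both classes below and the `lookup`/`fill` interface are hypothetical; the timing detail that makes the patent interesting (regenerating the missed address in the very cache clock cycle its data first arrives from main memory) is collapsed into a simple retry loop.

```python
class ToyCache:
    """Minimal cache model for the sketch: a dict of address -> data,
    backed by a 'main memory' dict that fill() copies lines from."""

    def __init__(self, backing):
        self.backing = backing   # models main system memory
        self.lines = {}          # lines currently resident in the cache

    def lookup(self, addr):
        return self.lines.get(addr)   # None signals a cache miss

    def fill(self, addr):
        self.lines[addr] = self.backing[addr]

def fetch_stream(cache, start, n_lines, line_words=4):
    """Pipelined-style fetch: the next address is always computed assuming
    the current access hits. On a miss, the line is brought in from main
    memory and the same address is regenerated and replayed."""
    out, addr = [], start
    while len(out) < n_lines:
        data = cache.lookup(addr)
        if data is None:          # miss: fetch the line from main memory,
            cache.fill(addr)      # then regenerate the missed address
            continue              # and replay it
        out.append(data)
        addr += line_words        # speculative next sequential address
    return out
```

In hardware the point is that the regenerated address arrives without penalty delay cycles; the software retry loop above only shows that the fetch stream comes out correct and in order despite the misses.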