    • 1. Granted invention patent
    • Title: Branch prediction mechanism
    • Publication number: US06263427B1
    • Publication date: 2001-07-17
    • Application number: US09146995
    • Filing date: 1998-09-04
    • Inventors: Sean P. Cummins; Kenneth K. Munson
    • Applicants: Sean P. Cummins; Kenneth K. Munson
    • IPC: G06F9/38
    • CPC: G06F9/3806
    • Abstract: A branch prediction mechanism for predicting the outcome and the branch target address of the next possible branch instruction following a current instruction. Each entry of the branch target buffer ("BTB") of the present invention provides a next possible branch instruction address and the corresponding branch target address. By checking the TAG portion of each BTB entry against the current instruction address, the branch prediction mechanism can predict the next possible branch instruction and the corresponding branch target address.
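The abstract above describes a branch target buffer whose entries are tagged by a current instruction address and store both the address of the next possible branch instruction and its predicted target. The following is a minimal, illustrative C sketch of such a tag-checked BTB lookup, assuming a small direct-mapped table; the table size, field names, and index/tag derivation are assumptions made for illustration and are not taken from the patent.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define BTB_ENTRIES 64   /* illustrative size, not from the patent */

    /* One BTB entry: tagged by a (current) instruction address, holding the
       address of the next possible branch instruction after that point plus
       the predicted target of that branch. */
    typedef struct {
        bool     valid;
        uint32_t tag;               /* upper bits of the instruction address */
        uint32_t next_branch_addr;  /* next possible branch instruction address */
        uint32_t branch_target;     /* predicted branch target address */
    } btb_entry_t;

    static btb_entry_t btb[BTB_ENTRIES];

    /* Look up the BTB with the current instruction address.  On a tag match,
       the prediction (next branch address and its target) is returned. */
    bool btb_lookup(uint32_t current_pc,
                    uint32_t *next_branch_addr,
                    uint32_t *branch_target)
    {
        size_t   index = (current_pc >> 2) % BTB_ENTRIES;  /* word-aligned PCs */
        uint32_t tag   = current_pc / (BTB_ENTRIES * 4);

        if (btb[index].valid && btb[index].tag == tag) {
            *next_branch_addr = btb[index].next_branch_addr;
            *branch_target    = btb[index].branch_target;
            return true;    /* prediction available for this address */
        }
        return false;       /* no prediction; fall back to sequential fetch */
    }

    /* When a branch resolves, record it so that later fetches of the
       preceding instruction address can predict it. */
    void btb_update(uint32_t current_pc, uint32_t branch_addr, uint32_t target)
    {
        size_t index = (current_pc >> 2) % BTB_ENTRIES;

        btb[index].valid            = true;
        btb[index].tag              = current_pc / (BTB_ENTRIES * 4);
        btb[index].next_branch_addr = branch_addr;
        btb[index].branch_target    = target;
    }

A fetch unit would call btb_lookup with each instruction address it issues; on a match it can redirect fetch to the predicted target as soon as the predicted branch address is reached, without waiting for the branch to decode.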
    • 2. Granted invention patent
    • Title: Instruction cache address generation technique having reduced delays in fetching missed data
    • Publication number: US06223257B1
    • Publication date: 2001-04-24
    • Application number: US09310659
    • Filing date: 1999-05-12
    • Inventors: Sean P. Cummins; Kenneth K. Munson; Christopher I. W. Norrie; Matthew D. Ornes
    • Applicants: Sean P. Cummins; Kenneth K. Munson; Christopher I. W. Norrie; Matthew D. Ornes
    • IPC: G06F12/12
    • CPC: G06F9/3802; G06F12/0859
    • Abstract: A technique and system for reading instruction data from a cache memory with minimum delays. Addresses are calculated and applied to the cache memory in two or more cycles by a pipelined address generation circuit. While data at one address is being retrieved, the next address is being calculated; when calculating the next address, it is presumed that the current address will return all the data it is addressing. In response to a miss signal received from the cache when no data at the current address is in the cache, the missed data is read from a main system memory and accessed with improved speed. In a system where the cache memory and processor operate at a higher clock frequency than the main system memory, new data is obtained from the main memory only during periodically occurring cache clock cycles. A missed cache memory address is regenerated so as to access such new data during the same cache clock cycle in which it first becomes available from the main memory. This eliminates the penalty delay cycles that reduce the rate at which instructions are issued in existing processors, and thus improves the speed of operation of the processors.
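The second abstract describes pipelined instruction-fetch address generation: while the cache is accessed with one address, the next address is computed on the assumption of a hit, and on a miss the missed address is regenerated so that it is applied when the data first arrives from the slower main memory. The following C sketch is a rough cycle-level illustration of that idea only; the fetch width, clock ratio, miss latency, and the toy icache_probe() hit function are assumptions for illustration and do not reflect the patent's actual circuit or timing.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative parameters (assumptions, not taken from the patent). */
    #define FETCH_WIDTH      4   /* instruction words fetched per cache access */
    #define CLOCK_RATIO      3   /* cache clock cycles per main-memory cycle */
    #define MISS_LATENCY_MEM 2   /* main-memory cycles until missed data returns */

    static uint32_t filled_addr = UINT32_MAX;  /* line most recently filled from memory */

    /* Toy hit function: addresses with bit 6 set miss unless just filled. */
    static bool icache_probe(uint32_t addr)
    {
        return (addr & 0x40) == 0 || addr == filled_addr;
    }

    int main(void)
    {
        uint32_t fetch_pc   = 0;               /* address applied to the cache this cycle */
        uint32_t next_pc    = FETCH_WIDTH * 4; /* computed while fetch_pc is being accessed */
        int      miss_timer = 0;               /* cache cycles until missed data arrives */
        uint32_t miss_pc    = 0;

        for (int cycle = 0; cycle < 20; cycle++) {
            if (miss_timer > 0) {
                /* Waiting on main memory; count down in cache clock cycles. */
                if (--miss_timer == 0) {
                    filled_addr = miss_pc;          /* model the cache line fill */
                    /* Regenerate the missed address in the cycle the data
                       arrives, so fetch resumes without extra penalty cycles. */
                    fetch_pc = miss_pc;
                    next_pc  = fetch_pc + FETCH_WIDTH * 4;
                    printf("cycle %2d: refetch 0x%08x (miss data arrives)\n",
                           cycle, (unsigned)fetch_pc);
                }
                continue;
            }

            /* Pipelined address generation: apply fetch_pc to the cache and,
               in the same cycle, compute next_pc assuming fetch_pc will hit. */
            bool hit = icache_probe(fetch_pc);
            printf("cycle %2d: access 0x%08x -> %s\n",
                   cycle, (unsigned)fetch_pc, hit ? "hit" : "miss");

            if (hit) {
                fetch_pc = next_pc;
                next_pc  = fetch_pc + FETCH_WIDTH * 4;
            } else {
                /* Miss: the data must come from the slower main memory, which
                   delivers only on periodically occurring cache cycles. */
                miss_pc    = fetch_pc;
                miss_timer = MISS_LATENCY_MEM * CLOCK_RATIO;
            }
        }
        return 0;
    }

Running the sketch prints a fetch trace in which sequential addresses hit, a miss stalls fetch for a fixed number of cache cycles, and the missed address is re-applied in the cycle the fill completes rather than after additional bookkeeping cycles.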