    • 2. Granted invention patent
    • Title: Cache predictor for simultaneous multi-threaded processor system supporting multiple transactions
    • Publication number: US07039768B2
    • Publication date: 2006-05-02
    • Application number: US10424487
    • Filing date: 2003-04-25
    • Inventors: Gregory William Alexander, David Stephen Levitan, Balaram Sinharoy
    • IPC: G06F12/00
    • CPC: G06F12/0864, G06F12/1054, G06F2212/6082
    • Abstract: A set-associative I-cache that enables early cache hit prediction and correct way selection when the processor is executing instructions of multiple threads having similar EAs. Each way of the I-cache comprises an EA Directory (EA Dir), which includes a series of thread valid bits that are individually assigned to one of the multiple threads. Particular ones of the thread valid bits are set in each EA Dir to indicate when an instruction block of the thread is cached within the particular way with which the EA Dir is associated. When a cache line request for a particular thread is received, a cache hit is predicted when the EA of the request matches the EA in the EA Dir, and the cache line is selected from the way associated with the EA Dir that has the thread valid bit for that thread set. Early way selection is thus achieved, since the way selection only requires a check of the thread valid bits. (A sketch of this way-selection check follows this entry.)
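The following is a minimal software sketch of the way-selection check described in the abstract above. All names (EADirEntry, ICacheSetModel, predict_way) and the 2-way / 2-thread sizing are illustrative assumptions, not the patent's actual hardware, which would evaluate the thread valid bits in parallel combinational logic.

```python
# Minimal software model of the early way-selection idea described above.
# All names are illustrative; real hardware checks the ways in parallel.

class EADirEntry:
    def __init__(self, num_threads):
        self.ea_tag = None                          # effective-address tag for this way
        self.thread_valid = [False] * num_threads   # one valid bit per thread

class ICacheSetModel:
    """One set of a set-associative I-cache with an EA Dir per way."""
    def __init__(self, num_ways=2, num_threads=2):
        self.ways = [EADirEntry(num_threads) for _ in range(num_ways)]

    def fill(self, way, ea_tag, thread):
        entry = self.ways[way]
        entry.ea_tag = ea_tag
        entry.thread_valid = [False] * len(entry.thread_valid)
        entry.thread_valid[thread] = True           # mark which thread cached this block

    def predict_way(self, ea_tag, thread):
        # Early prediction: a hit is predicted for the way whose EA Dir matches
        # the request EA *and* has the requesting thread's valid bit set.
        for way, entry in enumerate(self.ways):
            if entry.ea_tag == ea_tag and entry.thread_valid[thread]:
                return way                          # predicted hit in this way
        return None                                 # predicted miss

set0 = ICacheSetModel()
set0.fill(way=0, ea_tag=0x4000, thread=0)
set0.fill(way=1, ea_tag=0x4000, thread=1)           # same EA, different thread
print(set0.predict_way(0x4000, thread=1))           # -> 1: the thread bit disambiguates the way
```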
    • 3. Granted invention patent
    • Title: Simultaneous multithread processor with result data delay path to adjust pipeline length for input to respective thread
    • Publication number: US07000233B2
    • Publication date: 2006-02-14
    • Application number: US10422653
    • Filing date: 2003-04-21
    • Inventors: David Stephen Levitan, Balaram Sinharoy
    • IPC: G06F9/46
    • CPC: G06F9/3867, G06F9/30189, G06F9/3851
    • Abstract: An SMT system has a single-thread mode and an SMT mode. Instructions are alternately selected from two threads every clock cycle and loaded into the IFAR in a three-cycle pipeline of the IFU. If a branch-predicted-taken instruction is detected in the branch prediction circuit in stage three of the pipeline, then in single-thread mode a calculated address from the branch prediction circuit is loaded into the IFAR on the next clock cycle. If the branch prediction circuit detects a branch predicted taken in SMT mode, then the selected instruction address is loaded into the IFAR on the first clock cycle following the branch-predicted-taken detection, and the calculated target address is fed back and loaded into the IFAR on the second clock cycle following that detection. The feedback delay effectively switches the pipeline from three stages to four stages. (A sketch of this mode-dependent feedback timing follows this entry.)
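Below is a toy cycle trace of the mode-dependent IFAR feedback described above. The function name and the cycle bookkeeping are assumptions made for illustration; the point is only that SMT mode delays the computed target by one extra cycle, effectively lengthening the path from three stages to four.

```python
# Toy cycle model of the IFAR feedback path described above. "Stage 3" is where
# branch prediction resolves; the mode decides whether the computed target is fed
# back on the next cycle (single-thread) or one cycle later (SMT). Illustrative only.

def ifar_trace(smt_mode, predict_taken_cycle, target):
    trace = {}
    if not smt_mode:
        # Single-thread: the target reaches the IFAR on the next cycle (3-stage path).
        trace[predict_taken_cycle + 1] = hex(target)
    else:
        # SMT: the other thread's selected fetch address is loaded first, then the
        # computed target one cycle later (effectively a 4-stage path).
        trace[predict_taken_cycle + 1] = "other-thread fetch address"
        trace[predict_taken_cycle + 2] = hex(target)
    return trace

print(ifar_trace(smt_mode=False, predict_taken_cycle=3, target=0x1234))
print(ifar_trace(smt_mode=True,  predict_taken_cycle=3, target=0x1234))
```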
    • 6. Granted invention patent
    • Title: Apparatus and method of branch prediction utilizing a comparison of a branch history table to an aliasing table
    • Publication number: US06484256B1
    • Publication date: 2002-11-19
    • Application number: US09370680
    • Filing date: 1999-08-09
    • Inventors: David Stephen Levitan, Balaram Sinharoy
    • IPC: G06F9/00
    • CPC: G06F9/3806, G06F9/3848
    • Abstract: Improved conditional branch instruction prediction by detecting branch aliasing in a branch history table. Each entry in an aliasing table is associated with only one of a plurality of conditional branch instructions tracked by the branch history table. Prior to executing a conditional branch instruction, the outcome of its execution is predicted using the branch history table entry associated with the instruction; the outcome is also predicted using the aliasing table entry associated with the instruction. Branch aliasing is detected by comparing the prediction made using the branch history table with the prediction made using the aliasing table. If the two predictions differ, branch aliasing is determined to have occurred, and the aliasing-table prediction is used to predict the outcome of the conditional branch instruction. (A sketch of this comparison follows this entry.)
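A minimal sketch of the comparison described above, assuming a 1-bit branch history table indexed by low address bits and a tagged aliasing table whose entry belongs to a single branch at a time. The table sizes, update policy, and all names are illustrative, not the patent's implementation.

```python
# Illustrative model of the aliasing check described above. The BHT is indexed by
# low address bits and may alias multiple branches; the aliasing-table entry also
# records a tag, so it is tied to exactly one branch.

BHT_SIZE = 16
ALIAS_SIZE = 16

bht = [True] * BHT_SIZE                        # shared 1-bit taken/not-taken predictors
alias_tbl = [{"tag": None, "taken": True} for _ in range(ALIAS_SIZE)]

def predict(branch_addr):
    bht_pred = bht[branch_addr % BHT_SIZE]
    entry = alias_tbl[branch_addr % ALIAS_SIZE]
    if entry["tag"] == branch_addr:            # entry belongs to exactly this branch
        alias_pred = entry["taken"]
        if alias_pred != bht_pred:
            # Predictions disagree: the BHT entry is aliased by another branch,
            # so trust the per-branch aliasing-table prediction.
            return alias_pred
    return bht_pred

def update(branch_addr, taken):
    bht[branch_addr % BHT_SIZE] = taken
    alias_tbl[branch_addr % ALIAS_SIZE] = {"tag": branch_addr, "taken": taken}

# Example: branch 0x100 is not-taken, but branch 0x110 aliases the same BHT slot
# and flips it to taken; the tagged aliasing-table entry catches the disagreement.
update(0x100, taken=False)
bht[0x110 % BHT_SIZE] = True                   # aliasing branch disturbs the shared slot
print(predict(0x100))                          # -> False: aliasing detected, alias table wins
```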
    • 8. Granted invention patent
    • Title: Method and system of addressing which minimize memory utilized to store logical addresses by storing high order bits within a register
    • Publication number: US5765221A
    • Publication date: 1998-06-09
    • Application number: US767568
    • Filing date: 1996-12-16
    • Inventors: Paul Charles Rossbach, Chin-Cheng Kau, David Stephen Levitan
    • IPC: G06F9/32, G06F9/355, G06F9/38, G06F12/04
    • CPC: G06F9/342, G06F9/30094, G06F9/32, G06F9/321, G06F9/324, G06F9/3557, G06F9/3802
    • Abstract: An improved method of addressing within a pipelined processor having an address bit width of m+n bits is disclosed, which includes storing m high-order bits corresponding to a first range of addresses that encompasses a selected plurality of data executing within the pipelined processor. The n low-order bits of the addresses associated with each of the selected plurality of data are also stored. After determining the address of a subsequent datum to be executed within the processor, the subsequent datum is fetched. In response to fetching a subsequent datum having an address outside the first range of addresses, a status register is set to the first of two states to indicate that an update to the first address register is required. In response to the status register being set to the second of the two states, the subsequent datum is dispatched for execution within the pipelined processor. The n low-order bits of the subsequent datum are then stored, so the memory required to store addresses of instructions executing within the pipelined processor is decreased. (A sketch of this high/low-bit split follows this entry.)
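A small model of the address-compression idea above, assuming n = 12 low-order bits per datum and a single shared high-order register; the field names and the policy of clearing the tracked low bits when the range changes are simplifications made for illustration.

```python
# Toy model of the address-compression scheme described above: the m high-order
# bits are kept once in a shared register, each in-flight instruction stores only
# its n low-order bits, and a status flag requests a register update when a fetch
# leaves the current address range.

N_LOW_BITS = 12                                  # n low-order bits kept per datum
LOW_MASK = (1 << N_LOW_BITS) - 1

class AddressTracker:
    def __init__(self):
        self.high_reg = None                     # shared m high-order bits
        self.needs_update = False                # status flag: update required
        self.low_bits = []                       # per-datum n low-order bits

    def fetch(self, addr):
        high = addr >> N_LOW_BITS
        if high != self.high_reg:
            self.needs_update = True             # address fell outside the current range
            self.high_reg = high
            self.low_bits.clear()                # simplification: restart the tracked window
        else:
            self.needs_update = False            # second state: safe to dispatch
        self.low_bits.append(addr & LOW_MASK)    # only n bits stored per instruction

    def full_address(self, index):
        return (self.high_reg << N_LOW_BITS) | self.low_bits[index]

t = AddressTracker()
t.fetch(0x0040_1000)
t.fetch(0x0040_1004)
print(hex(t.full_address(1)))                    # -> 0x401004, rebuilt from the shared high bits
```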
    • 10. Granted invention patent
    • Title: Fencing off instruction buffer until re-circulation of rejected preceding and branch instructions to avoid mispredict flush
    • Publication number: US07254700B2
    • Publication date: 2007-08-07
    • Application number: US11056512
    • Filing date: 2005-02-11
    • Inventors: David Stephen Levitan, Brian William Thompto
    • IPC: G06F9/38
    • CPC: G06F9/3804, G06F9/3814, G06F9/3842, G06F9/3844, G06F9/3861
    • Abstract: Systems and methods for handling the combination of a wrong branch prediction and an instruction rejection in a digital processor are disclosed. More particularly, hardware and software are disclosed for detecting the condition where a branch instruction was mispredicted and an instruction that preceded the branch instruction is rejected after the branch instruction has executed. When the condition is detected, the branch instruction and the rejected instruction are recirculated for execution. Until the branch instruction is re-executed, control circuitry can prevent instructions from being received into the instruction buffer that feeds the processor's execution units by fencing the instruction buffer off from the fetcher. The instruction fetcher may continue fetching instructions along the branch target path into a local cache until the fence is dropped. (A sketch of this fence follows this entry.)
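An illustrative state machine for the fence described above. The class and method names are invented for the sketch, and the choice to drain the local cache into the instruction buffer when the fence drops is an assumption; the abstract only says that fetch may continue into a local cache until the fence is dropped.

```python
# Illustrative state machine for the fence described above: when an older
# instruction is rejected after a mispredicted branch has executed, both are
# recirculated and the instruction buffer stops accepting fetched instructions
# until the branch re-executes. The fetcher keeps filling a local cache meanwhile.

class FrontEndModel:
    def __init__(self):
        self.fence_up = False
        self.ibuffer = []
        self.local_cache = []

    def on_reject_after_branch(self, branch, rejected):
        # Condition from the abstract: a preceding instruction is rejected after a
        # mispredicted branch executed. Recirculate both and raise the fence.
        self.fence_up = True
        return [rejected, branch]                # recirculate, older instruction first

    def deliver_fetched(self, insn):
        if self.fence_up:
            self.local_cache.append(insn)        # fetch continues past the fence
        else:
            self.ibuffer.append(insn)            # normal path into the instruction buffer

    def on_branch_reexecuted(self):
        self.fence_up = False                    # drop the fence
        self.ibuffer.extend(self.local_cache)    # assumption: buffered fetches now flow in
        self.local_cache.clear()

fe = FrontEndModel()
redo = fe.on_reject_after_branch(branch="br@0x20", rejected="ld@0x1c")
fe.deliver_fetched("insn@target")                # held in the local cache while fenced
fe.on_branch_reexecuted()
print(redo, fe.ibuffer)
```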