    • 1. Granted invention patent
    • Method and apparatus for reducing register file access times in pipelined processors
    • Publication No.: US06934830B2
    • Publication Date: 2005-08-23
    • Application No.: US10259721
    • Filing Date: 2002-09-26
    • Inventors: Sudarshan Kadambi, Adam R. Talcott, Wayne I. Yamamoto
    • IPC: G06F9/30, G06F9/38
    • CPC: G06F9/30138, G06F9/3824, G06F9/3857
    • One embodiment of the present invention provides a system that reduces the time required to access registers from a register file within a processor. During operation, the system receives an instruction to be executed, wherein the instruction identifies at least one operand to be accessed from the register file. Next, the system looks up the operands in a register pane, wherein the register pane is smaller and faster than the register file and contains copies of a subset of registers from the register file. If the lookup is successful, the system retrieves the operands from the register pane to execute the instruction. Otherwise, if the lookup is not successful, the system retrieves the operands from the register file, and stores the operands into the register pane. This triggers the system to reissue the instruction to be executed again, so that the re-issued instruction retrieves the operands from the register pane.
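The abstract above describes a lookup-miss-reissue flow around a small "register pane" that caches copies of a few registers. Below is a minimal Python sketch of that flow; the class and function names, the pane capacity, and the eviction policy are illustrative assumptions, not details taken from the patent.

    class RegisterPane:
        """A small, fast structure holding copies of a subset of registers."""
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.copies = {}                  # register name -> cached value

        def lookup(self, reg):
            return self.copies.get(reg)       # None signals a pane miss

        def fill(self, reg, value):
            if len(self.copies) >= self.capacity:
                self.copies.pop(next(iter(self.copies)))  # evict the oldest copy
            self.copies[reg] = value


    def execute(instruction, register_file, pane):
        """Read operands from the pane; on a miss, fill the pane and reissue."""
        operands = []
        for reg in instruction["sources"]:
            value = pane.lookup(reg)
            if value is None:
                # Pane miss: fetch from the (slower) register file, install the
                # copy in the pane, then reissue so the retry hits in the pane.
                pane.fill(reg, register_file[reg])
                return execute(instruction, register_file, pane)  # reissue
            operands.append(value)
        return sum(operands)                  # stand-in for the real operation


    register_file = {"r1": 10, "r2": 32}
    pane = RegisterPane()
    print(execute({"sources": ["r1", "r2"]}, register_file, pane))  # -> 42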
    • 3. Invention patent application
    • Efficient On-Chip Accelerator Interfaces to Reduce Software Overhead
    • Publication No.: US20080222383A1
    • Publication Date: 2008-09-11
    • Application No.: US11684358
    • Filing Date: 2007-03-09
    • Inventors: Lawrence A. Spracklen, Santosh G. Abraham, Adam R. Talcott
    • IPC: G06F9/34
    • CPC: G06F12/1027, G06F12/1036, G06F2212/1024, G06F2212/683
    • In one embodiment, a processor comprises execution circuitry and a translation lookaside buffer (TLB) coupled to the execution circuitry. The execution circuitry is configured to execute a store instruction having a data operand; and the execution circuitry is configured to generate a virtual address as part of executing the store instruction. The TLB is coupled to receive the virtual address and configured to translate the virtual address to a first physical address. Additionally, the TLB is coupled to receive the data operand and to translate the data operand to a second physical address. A hardware accelerator is also contemplated in various embodiments, as is a processor coupled to the hardware accelerator, a method, and a computer readable medium storing instructions which, when executed, implement a portion of the method.
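The key point in the abstract above is that the TLB translates both the store's target address and the store's data operand (for example, a pointer being handed to an accelerator). The following Python model is an illustrative sketch of that double translation; the names and the 4 KB page size are assumptions, not details from the patent.

    PAGE_SIZE = 4096

    class TLB:
        def __init__(self, mappings):
            self.mappings = dict(mappings)    # virtual page number -> physical page number

        def translate(self, virtual_address):
            vpn, offset = divmod(virtual_address, PAGE_SIZE)
            return self.mappings[vpn] * PAGE_SIZE + offset


    def execute_store(store_va, data_operand, tlb):
        """Translate the store's target address and its pointer-valued data operand."""
        first_pa = tlb.translate(store_va)        # where the store writes
        second_pa = tlb.translate(data_operand)   # pointer passed along, e.g., to an accelerator
        return first_pa, second_pa


    tlb = TLB({0x10: 0x80, 0x20: 0x90})
    print(execute_store(0x10000 + 0x18, 0x20000 + 0x40, tlb))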
    • 5. Granted invention patent
    • Method and apparatus for branch target prediction
    • Publication No.: US5938761A
    • Publication Date: 1999-08-17
    • Application No.: US976826
    • Filing Date: 1997-11-24
    • Inventors: Sanjay Patel, Adam R. Talcott, Rajasekhar Cherabuddi
    • IPC: G06F9/38, G06F9/32
    • CPC: G06F9/3806
    • One embodiment of the present invention provides a method and an apparatus for predicting the target of a branch instruction. This method and apparatus operate by using a translation lookaside buffer (TLB) to store page numbers for predicted branch target addresses. In this embodiment, a branch target address table stores a small index to a location in the translation lookaside buffer, and this index is used to retrieve a page number from the location in the translation lookaside buffer. This page number is used as the page number portion of a predicted branch target address. Thus, a small index into a translation lookaside buffer can be stored in a predicted branch target address table instead of a larger page number for the predicted branch target address. This technique effectively reduces the size of a predicted branch target table by eliminating much of the space that is presently wasted on storing redundant page numbers. Another embodiment maintains coherence between the branch target address table and the translation lookaside buffer. This makes it possible to detect a miss in the translation lookaside buffer at least one cycle earlier by examining the branch target address table.
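The space-saving idea in the abstract above is that the branch target table stores a small TLB index plus a page offset instead of a full page number. A minimal Python sketch of that lookup path follows; the structure names, table layout, and page size are illustrative assumptions rather than the patent's actual organization.

    PAGE_SIZE = 4096

    class TLB:
        def __init__(self):
            self.entries = []                 # index -> (virtual page, physical page)

        def insert(self, vpage, ppage):
            self.entries.append((vpage, ppage))
            return len(self.entries) - 1      # index handed to the branch target table


    class BranchTargetTable:
        def __init__(self):
            self.table = {}                   # branch PC -> (TLB index, page offset)

        def record(self, branch_pc, tlb_index, offset):
            self.table[branch_pc] = (tlb_index, offset)

        def predict(self, branch_pc, tlb):
            tlb_index, offset = self.table[branch_pc]
            vpage, _ = tlb.entries[tlb_index]  # page number comes from the TLB entry
            return vpage * PAGE_SIZE + offset


    tlb = TLB()
    idx = tlb.insert(vpage=0x400, ppage=0x80)
    btt = BranchTargetTable()
    btt.record(branch_pc=0x1000, tlb_index=idx, offset=0x24)
    print(hex(btt.predict(0x1000, tlb)))      # -> 0x400024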
    • 6. Granted invention patent
    • Selection from multiple fetch addresses generated concurrently including predicted and actual target by control-flow instructions in current and previous instruction bundles
    • Publication No.: US5935238A
    • Publication Date: 1999-08-10
    • Application No.: US878759
    • Filing Date: 1997-06-19
    • Inventors: Adam R. Talcott, Ramesh K. Panwar
    • IPC: G06F9/38, G06F9/32
    • CPC: G06F9/3806, G06F9/30054, G06F9/3861
    • A microprocessor is provided with an instruction fetch mechanism that simultaneously predicts multiple control-flow instructions. The instruction fetch unit is further capable of handling multiple types of control-flow instructions. The instruction fetch unit uses predecode data and branch prediction data to select the next instruction fetch bundle address. If a branch misprediction is detected, a corrected branch target address is selected as the next fetch bundle address. If no branch misprediction occurs and the current fetch bundle includes a taken control-flow instruction, then the next fetch bundle address is selected based on the type of control-flow instruction detected. If the first taken control-flow instruction is a return instruction, a return address from the return address stack is selected as the next fetch bundle address. If the first taken control-flow instruction is an unconditional branch or predicted taken conditional branch, a predicted branch target address is selected as the next fetch bundle address. If no branch misprediction is detected and the current fetch bundle does not include a taken control-flow instruction, then a sequential address is selected as the next fetch bundle address.
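The abstract above lays out a priority order for choosing the next fetch bundle address. The short Python function below sketches that selection order; the bundle dictionary fields and the 32-byte bundle size are illustrative assumptions, not details from the patent.

    BUNDLE_BYTES = 32     # assumed fetch bundle size

    def next_fetch_address(current_pc, bundle, mispredict_target, return_stack):
        """Pick the next fetch bundle address in the priority order from the abstract."""
        if mispredict_target is not None:
            return mispredict_target                  # corrected branch target wins
        taken = bundle.get("first_taken")             # first taken control-flow op, if any
        if taken is None:
            return current_pc + BUNDLE_BYTES          # sequential fetch
        if taken["kind"] == "return":
            return return_stack.pop()                 # return address stack supplies the target
        # unconditional branch or predicted-taken conditional branch
        return taken["predicted_target"]


    stack = [0x2040]
    bundle = {"first_taken": {"kind": "return"}}
    print(hex(next_fetch_address(0x1000, bundle, None, stack)))   # -> 0x2040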
    • 10. Granted invention patent
    • Accuracy of multiple branch prediction schemes
    • Publication No.: US06948055B1
    • Publication Date: 2005-09-20
    • Application No.: US09685270
    • Filing Date: 2000-10-09
    • Inventors: Adam R. Talcott
    • IPC: G06F9/38
    • CPC: G06F9/3844
    • A method and apparatus of improving prediction accuracy of a branch instruction scheme includes reading an individual instruction in a current set of instructions, fetching the individual instruction when an instruction fetch unit determines that the individual instruction is valid, and allowing the instruction fetch unit to use an index address for the fetched individual instruction. A method and apparatus of improving branch prediction accuracy includes receiving a set of instructions having an assigned address, making a prediction for a branch instruction in the set of instructions using the assigned address, and retaining the assigned address for the branch instruction in the set of instructions.
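The abstract above centers on making a prediction with the address assigned to a fetched set of instructions and then retaining that address with the set. The Python sketch below illustrates one way to read that: the retained assigned address indexes a predictor table at both prediction and update time. The table size, counter scheme, and hashing are assumptions, not the patent's mechanism.

    TABLE_SIZE = 1024

    class BranchPredictor:
        def __init__(self):
            self.counters = [1] * TABLE_SIZE      # 2-bit saturating counters, weakly not-taken

        def index(self, assigned_address):
            return (assigned_address >> 2) % TABLE_SIZE

        def predict(self, assigned_address):
            return self.counters[self.index(assigned_address)] >= 2   # True = predict taken

        def update(self, assigned_address, taken):
            i = self.index(assigned_address)
            self.counters[i] = min(3, self.counters[i] + 1) if taken else max(0, self.counters[i] - 1)


    def fetch(assigned_address, instructions):
        # The assigned address is retained with the instruction set, so the same
        # index is used at prediction time and again at update time.
        return {"assigned_address": assigned_address, "instructions": instructions}


    bp = BranchPredictor()
    bundle = fetch(0x4000, ["add", "branch"])
    print(bp.predict(bundle["assigned_address"]))     # initial prediction: not taken
    bp.update(bundle["assigned_address"], taken=True)
    bp.update(bundle["assigned_address"], taken=True)
    print(bp.predict(bundle["assigned_address"]))     # now predicts taken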