    • Type: Invention Grant
    • Title: Method and apparatus for maximum throughput scheduling of dependent operations in a pipelined processor
    • Publication number: US6101597A
    • Publication date: 2000-08-08
    • Application number: US176370
    • Filing date: 1993-12-30
    • Inventors: Robert P. Colwell; Michael A. Fetterman; Glenn J. Hinton; Robert W. Martell; David B. Papworth
    • IPC: G06F 9/38; G06F 9/30
    • CPC: G06F 9/3824; G06F 9/383
    • Abstract: Maximum throughput, or "back-to-back," scheduling of dependent instructions in a pipelined processor is achieved by maximizing the efficiency with which the processor determines the availability of the source operands of a dependent instruction and provides those operands to the execution unit executing that instruction. These two operations are implemented through a number of mechanisms. One mechanism for determining the availability of source operands, and hence the readiness of a dependent instruction for dispatch to an available execution unit, relies on prospectively determining the availability of a source operand before the operand itself has actually been computed as the result of executing another instruction. Storage addresses of an instruction's source operands are stored in a content addressable memory (CAM). Before an instruction is executed and its result data written back, the storage-location address of the result is provided to the CAM and associatively compared with the source-operand addresses stored there. A CAM match and its accompanying match bit indicate that the result of the instruction about to execute will supply a source operand to the dependent instruction waiting in the reservation station. Using a bypass mechanism, if the operand is computed after dispatch of the dependent instruction, the source operand is provided directly from the execution unit computing it to a source-operand input of the execution unit executing the dependent instruction. (A simplified code sketch of these two mechanisms follows this record.)
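The abstract describes two cooperating mechanisms: a CAM-style associative lookup that marks a waiting dependent instruction ready when a producer's result tag matches one of its source-operand addresses, and a bypass path that forwards the just-computed value directly to the consuming execution unit instead of waiting for register-file writeback. The Python sketch below is an illustrative simplification under those assumptions, not the patented implementation; the Entry and ReservationStation names are invented for the example, and the tag and value are broadcast together here, whereas the patent determines readiness prospectively, before the result is actually computed.

from dataclasses import dataclass, field

@dataclass
class Entry:
    op: str                      # mnemonic, e.g. "add"
    dst: str                     # destination tag (renamed result register)
    srcs: list                   # source-operand tags
    ready: dict = field(default_factory=dict)   # tag -> captured value

    def is_ready(self):
        # The dependent instruction may dispatch once every source has a value.
        return all(tag in self.ready for tag in self.srcs)

class ReservationStation:
    def __init__(self):
        self.entries = []

    def insert(self, entry, regfile):
        # Operands already computed are captured from the register file now.
        for tag in entry.srcs:
            if tag in regfile:
                entry.ready[tag] = regfile[tag]
        self.entries.append(entry)

    def broadcast(self, tag, value):
        # "CAM" step: the producer's destination tag is compared associatively
        # against every waiting source tag; a match latches the bypassed value,
        # so the consumer does not wait for the register-file writeback.
        for entry in self.entries:
            if tag in entry.srcs:
                entry.ready[tag] = value

    def dispatch(self):
        # Issue (and remove) the first entry whose operands are all available.
        for entry in self.entries:
            if entry.is_ready():
                self.entries.remove(entry)
                return entry
        return None

# Back-to-back example: r4 depends on r1, which is produced one cycle earlier.
regfile = {"r2": 10, "r3": 20, "r5": 1}
rs = ReservationStation()
rs.insert(Entry("add", "r1", ["r2", "r3"]), regfile)    # producer
rs.insert(Entry("add", "r4", ["r1", "r5"]), regfile)    # dependent consumer

producer = rs.dispatch()                                 # cycle N: producer issues
result = sum(producer.ready[t] for t in producer.srcs)   # r1 = 10 + 20
rs.broadcast(producer.dst, result)                       # bypass r1 before writeback
consumer = rs.dispatch()                                 # cycle N+1: back-to-back
print(consumer.op, consumer.dst, consumer.ready)         # add r4 {'r5': 1, 'r1': 30}

In this toy run the producer of r1 dispatches first; broadcasting its result over the bypass satisfies the waiting consumer's CAM match, so the consumer dispatches in the immediately following cycle rather than after writeback, which is the "back-to-back" behavior the abstract refers to.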