    • 12. Granted invention patent
    • Title: Misalignment predictor
    • Publication number: US08117404B2 (published 2012-02-14)
    • Application number: US11200771 (filed 2005-08-10)
    • Inventors: Tse-Yu Yeh; Po-Yung Chang; Eric Hao
    • IPC: G06F12/00
    • CPC: G06F9/3824; G06F9/30043; G06F9/30145; G06F9/3832; G06F9/3861
    • In one embodiment, a processor comprises a circuit coupled to receive an indication of a memory operation to be executed in the processor. The circuit is configured to predict whether or not the memory operation is misaligned. A number of accesses performed by the processor to execute the memory operation is dependent on whether or not the circuit predicts the memory operation as misaligned. In another embodiment, a misalignment predictor is coupled to receive an indication of a memory operation, and comprises a memory and a control circuit coupled to the memory. The memory is configured to store a plurality of indications of memory operations previously detected as misaligned during execution in a processor. The control circuit is configured to predict whether or not a memory operation is misaligned responsive to a comparison of the received indication and the plurality of indications stored in the memory.
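The misalignment predictor described above is essentially a small memory of operations previously seen to be misaligned, consulted before execution. Below is a minimal C++ behavioral sketch of that idea; the table size, indexing scheme, and the `MisalignPredictor`/`accesses_for` names are illustrative assumptions, not details taken from the patent.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: a tagged table remembers PCs of memory ops that were
// previously detected as misaligned; a hit predicts "misaligned", so the
// core schedules two accesses up front instead of replaying after the fact.
class MisalignPredictor {
public:
    bool predict_misaligned(uint64_t pc) const {
        const Entry& e = table_[index(pc)];
        return e.valid && e.tag == tag(pc);
    }
    // Train when execution actually resolves whether the op was misaligned.
    void train(uint64_t pc, bool was_misaligned) {
        Entry& e = table_[index(pc)];
        if (was_misaligned) { e.valid = true; e.tag = tag(pc); }
        else if (e.valid && e.tag == tag(pc)) { e.valid = false; }
    }
private:
    struct Entry { bool valid = false; uint64_t tag = 0; };
    static constexpr std::size_t kEntries = 64;               // assumed size
    static std::size_t index(uint64_t pc) { return (pc >> 2) % kEntries; }
    static uint64_t tag(uint64_t pc) { return pc >> 8; }
    std::array<Entry, kEntries> table_{};
};

// The number of cache accesses issued for the op depends on the prediction.
inline int accesses_for(const MisalignPredictor& mp, uint64_t pc) {
    return mp.predict_misaligned(pc) ? 2 : 1;
}
```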
    • 13. Invention patent application
    • Title: Replay reduction for power saving
    • Publication number: US20080086622A1 (published 2008-04-10)
    • Application number: US11546223 (filed 2006-10-10)
    • Inventors: Po-Yung Chang; Wei-Han Lien; Jesse Pan; Ramesh Gunna; Tse-Yu Yeh; James B. Keller
    • IPC: G06F9/30
    • CPC: G06F9/3842
    • In one embodiment, a processor comprises a scheduler configured to issue a first instruction operation to be executed and an execution core coupled to the scheduler. Configured to execute the first instruction operation, the execution core comprises a plurality of replay sources configured to cause a replay of the first instruction operation responsive to detecting at least one of a plurality of replay cases. The scheduler is configured to inhibit issuance of the first instruction operation subsequent to the replay for a subset of the plurality of replay cases. The scheduler is coupled to receive an acknowledgement indication corresponding to each of the plurality of replay cases in the subset, and is configured to inhibit issuance of the first instruction operation until the acknowledge indication is asserted that corresponds to an identified replay case of the subset.
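The power saving comes from not re-issuing an operation that is likely to replay again. A rough C++ sketch of that gating follows; the replay causes listed, the choice of which of them gate on an acknowledgement, and the struct and function names are assumptions for illustration only.

```cpp
// Rough sketch: for certain replay causes the scheduler parks the op and
// re-issues it only after the replaying unit asserts an acknowledgement,
// instead of repeatedly issuing it and wasting power on failed attempts.
enum class ReplayCause { None, CacheMiss, TlbMiss, StoreForwardBlock };

struct SchedulerEntry {
    bool ready = false;          // eligible to issue
    bool waiting_ack = false;    // parked until an ack for 'cause' arrives
    ReplayCause cause = ReplayCause::None;
};

// Which causes belong to the ack-gated subset is an assumption here.
inline bool gates_on_ack(ReplayCause c) {
    return c == ReplayCause::CacheMiss || c == ReplayCause::TlbMiss;
}

inline void on_replay(SchedulerEntry& e, ReplayCause c) {
    e.cause = c;
    e.waiting_ack = gates_on_ack(c);
    e.ready = !e.waiting_ack;    // other causes may re-issue immediately
}

inline void on_ack(SchedulerEntry& e, ReplayCause acked) {
    if (e.waiting_ack && e.cause == acked) {
        e.waiting_ack = false;
        e.ready = true;          // re-issue is now likely to succeed
    }
}
```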
    • 14. Invention patent application
    • Title: Uncacheable load merging
    • Publication number: US20080086594A1 (published 2008-04-10)
    • Application number: US11545825 (filed 2006-10-10)
    • Inventors: Po-Yung Chang; Ramesh Gunna; Tse-Yu Yeh; James B. Keller
    • IPC: G06F12/00
    • CPC: G06F9/383; G06F9/30043; G06F9/3826; G06F12/0888; Y02D10/13
    • In one embodiment, a processor comprises a buffer and a control unit coupled to the buffer. The buffer is configured to store requests to be transmitted on an interconnect on which the processor is configured to communicate. The buffer is coupled to receive a first uncacheable load request having a first address. The control unit is configured to merge the first uncacheable load request with a second uncacheable load request that is stored in the buffer responsive to a second address of the second load request matching the first address within a granularity. A single transaction on the interconnect is used for both the first and second uncacheable load requests, if merged. Separate transactions on the interconnect are used for each of the first and second uncacheable load requests if not merged.
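The merge condition is simply an address match, within some granularity, against a request already sitting in the buffer. A small C++ sketch is below; the 64-byte granule, the buffer layout, and the `RequestBuffer` interface are assumptions rather than values from the application.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch: an uncacheable load whose address falls in the same
// aligned granule as a buffered uncacheable load is merged into that entry,
// so one interconnect transaction serves both; otherwise it gets its own.
struct BufferedRequest {
    uint64_t granule_addr;             // request address aligned to the granule
    std::vector<uint64_t> load_addrs;  // loads satisfied by this transaction
};

class RequestBuffer {
public:
    static constexpr uint64_t kGranule = 64;   // assumed merge granularity

    // Returns true if the load merged into an existing buffered request.
    bool add_uncacheable_load(uint64_t addr) {
        uint64_t granule = addr & ~(kGranule - 1);
        for (auto& req : reqs_) {
            if (req.granule_addr == granule) {   // match within the granularity
                req.load_addrs.push_back(addr);  // merged: no new transaction
                return true;
            }
        }
        reqs_.push_back({granule, {addr}});      // separate transaction
        return false;
    }
private:
    std::vector<BufferedRequest> reqs_;
};
```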
    • 16. Granted invention patent
    • Title: Mechanism for processing speculative LL and SC instructions in a pipelined processor
    • Publication number: US07162613B2 (published 2007-01-09)
    • Application number: US11046454 (filed 2005-01-28)
    • Inventors: Tse-Yu Yeh; Po-Yung Chang; Mark H. Pearce; Zongjian Chen
    • IPC: G06F9/312
    • CPC: G06F9/3004; G06F9/30072; G06F9/30087; G06F9/3834; G06F9/3842; G06F9/3861; G06F9/3867
    • A processor includes a first circuit and a second circuit. The first circuit is configured to provide a first indication of whether or not at least one reservation is valid in the processor. A reservation is established responsive to processing a load-linked instruction, which is a load instruction that is architecturally defined to establish the reservation. A valid reservation is indicative that one or more bytes indicated by the target address of the load-linked instruction have not been updated since the reservation was established. The second circuit is coupled to receive the first indication. Responsive to the first indication indicating no valid reservation, the second circuit is configured to select a speculative load-linked instruction for issue. The second circuit is configured not to select the speculative load-linked instruction for issue responsive to the first indication indicating the at least one valid reservation. A method is also contemplated.
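The issue gate reduces to a single predicate on the reservation state: a speculative load-linked may be selected for issue only while no reservation is valid, so it cannot destroy a reservation an older LL already established. A minimal C++ sketch follows; the `Reservation` structure and helper names are illustrative assumptions.

```cpp
#include <cstdint>

// Minimal sketch of LL/SC reservation tracking plus the speculative-LL gate.
struct Reservation {
    bool valid = false;
    uint64_t addr = 0;
};

// "First circuit": reports whether any reservation is currently valid.
inline bool any_reservation_valid(const Reservation& r) { return r.valid; }

// "Second circuit": select a speculative LL for issue only when the
// indication shows no valid reservation.
inline bool may_issue_speculative_ll(const Reservation& r) {
    return !any_reservation_valid(r);
}

// Non-speculative behavior: LL establishes the reservation, a store to the
// reserved bytes clears it, and SC succeeds only if it is still valid.
inline void execute_ll(Reservation& r, uint64_t addr) { r = {true, addr}; }
inline void snoop_store(Reservation& r, uint64_t addr) {
    if (r.valid && r.addr == addr) r.valid = false;
}
inline bool execute_sc(Reservation& r, uint64_t addr) {
    bool ok = r.valid && r.addr == addr;
    r.valid = false;                 // SC always consumes the reservation
    return ok;
}
```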
    • 18. Granted invention patent
    • Title: Processor executing plural instruction sets (ISA's) with ability to have plural ISA's in different pipeline stages at same time
    • Publication number: US06430674B1 (published 2002-08-06)
    • Application number: US09223441 (filed 1998-12-30)
    • Inventors: Jignesh Trivedi; Tse-Yu Yeh
    • IPC: G06F9/455
    • CPC: G06F9/3802; G06F9/30076; G06F9/30174; G06F9/30189; G06F9/3836; G06F9/3857
    • A method and apparatus for transitioning a processor from a first mode of operation for processing a first instruction set architecture (instruction set) to a second mode of operation for processing a second instruction set. The method provides that instructions of a first instruction set architecture (instruction set) are processed in a pipelined processor in a first mode of operation, and instructions of a second, different, instruction set are processed in the pipelined processor in a second, different, mode of operation. While operating in one mode and before a switch to the other mode occurs, the pipeline is loaded with a set of instructions that transition the processor from one mode to the other, wherein the set of instructions are substantially insensitive to the mode that the processor operates in. The processor begins processing the set of instructions while in one mode, and finishes processing the instructions after switching to the other mode, and the set of instructions are held in a permanent memory, and are ready to be executed and do not require decoding. The processor switches mode in response to a mode switch instruction in the pipeline, and the set of instructions follow the mode switch instruction in the pipeline by a spacing which is less than the number of stages in the pipeline. The transition instructions include mode sensitive instructions that follow the mode insensitive instructions, and the mode sensitive instructions enter the pipeline after the mode switch has occurred. Further, the pipeline has alternate front end stages, one operating to decode instructions in one mode of operation, and the other to decode instructions in the other mode of operation. In addition, one of the front end stages translates instructions from one instruction set to another.
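The key sequencing idea is that the fixed transition sequence trails the mode-switch instruction by less than the pipeline depth, with only its mode-insensitive leading part entering before the switch takes effect. The C++ sketch below models that ordering at a behavioral level; the ROM contents, instruction names, and the `Pipeline` interface are assumptions for illustration.

```cpp
#include <string>
#include <vector>

enum class Isa { A, B };

// The transition sequence is held in a small "permanent memory" (ROM): its
// leading instructions decode identically under either ISA, while the
// mode-sensitive tail may only be fetched after the switch is in effect.
struct TransitionRom {
    std::vector<std::string> mode_insensitive;  // safe to fetch pre-switch
    std::vector<std::string> mode_sensitive;    // fetched post-switch only
};

struct Pipeline {
    Isa mode = Isa::A;
    std::vector<std::string> fetched;           // instructions in program order

    void mode_switch(Isa target, const TransitionRom& rom) {
        fetched.push_back("MODE_SWITCH");
        // These may sit in the pipe behind MODE_SWITCH before it takes
        // effect, because either front end decodes them the same way.
        for (const auto& i : rom.mode_insensitive) fetched.push_back(i);
        mode = target;                          // the switch is now in effect
        for (const auto& i : rom.mode_sensitive) fetched.push_back(i);
    }
};
```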
    • 19. Granted invention patent
    • Title: Branch prediction table having pointers identifying other branches within common instruction cache lines
    • Publication number: US5815700A (published 1998-09-29)
    • Application number: US576954 (filed 1995-12-22)
    • Inventors: Mircea Poplingher; Tse-Yu Yeh; Wenliang Chen
    • IPC: G06F9/38; G06F9/32
    • CPC: G06F9/3844
    • A branch prediction system is described for use within a microprocessor having an instruction cache capable of storing two or more instructions per cache line. Each entry of a branch prediction table (BPT) includes a value identifying whether at least one other instruction within a common cache line contains a branch. The value is referred to herein as a multiple-B bit value. The multiple-B bit value is examined by branch prediction logic while one branch prediction is being performed to determine whether a second branch prediction can be initiated for another branch within the same cache line. In one implementation, the multiple-B bit of one BPT entry is examined following a hit. A branch prediction for the entry generating a hit is initiated. Simultaneously, the BPT is reaccessed to search for an entry corresponding to another instruction within the same cache line if the multiple-B bit for the first entry was set. If the second entry is found, a secondary branch prediction is initiated. Eventually, the first branch prediction is output. If the first branch prediction is Not Taken, then the second branch prediction is output during the next clock cycle. If the first branch prediction is Taken, then the second branch prediction may be aborted as it is not needed. Method and apparatus embodiments of the invention are described.
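The lookup flow (check the multiple-B bit on a hit, re-access the table for the other branch in the same cache line, and use the secondary prediction only if the first branch is predicted not taken) can be sketched in a few lines of C++. The field names, the hash-map storage, and the `predict` signature below are illustrative assumptions rather than the patented structure.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Each entry carries a "multiple-B" bit plus a pointer to another branch in
// the same instruction cache line.
struct BptEntry {
    bool predict_taken;
    bool multiple_b;           // another branch shares this cache line
    uint64_t other_branch_pc;  // pointer to that branch (valid if multiple_b)
    uint64_t target;
};

class BranchPredictionTable {
public:
    std::unordered_map<uint64_t, BptEntry> entries;   // keyed by branch PC

    // Returns the prediction to act on for this cache line, if any.
    std::optional<BptEntry> predict(uint64_t first_pc) const {
        auto first = entries.find(first_pc);
        if (first == entries.end()) return std::nullopt;
        if (first->second.predict_taken) return first->second;  // 2nd aborted
        if (first->second.multiple_b) {
            // Re-access the table via the stored pointer to the other branch;
            // in hardware its prediction is output on the following cycle.
            auto second = entries.find(first->second.other_branch_pc);
            if (second != entries.end()) return second->second;
        }
        return first->second;   // first not taken, no secondary prediction
    }
};
```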