    • 1. Invention grant
    • Accessing and manipulating microprocessor state
    • US07305586B2
    • 2007-12-04
    • US10424485
    • 2003-04-25
    • Richard William Doing; Michael Stephen Floyd; Ronald Nick Kalla; John Wesley Ward, III
    • G06F11/00
    • G06F11/2236
    • A microprocessor includes an externally accessible port and a serial communication bus connected to the port. An execution pipeline of the processor includes a pipeline satellite circuit coupling the pipeline to the bus. The satellite enables an external agent to provide an instruction directly to the pipeline via the serial bus. A dedicated register and register satellite circuit couple the register to the communication bus. The execution pipeline can access the dedicated register during execution of the instruction. In this manner, the satellite circuits enable the external agent to access architected state. The communication bus enables access to the satellites while a system clock to the processor remains active. In one embodiment, the pipeline satellite accesses the pipeline “downstream” of the decode stage such that the set of instructions that may be “rammed” into the pipeline is not limited to the set of instructions that the decode stage can generate.
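As a rough illustration of the access path this abstract describes, the following C sketch models an external agent shifting an instruction word over the serial bus into a pipeline satellite latch and reading a dedicated register back through its register satellite. The structure names, bit ordering, and register contents are illustrative assumptions, not details taken from the patent.

```c
/* Minimal model of the serial-bus "satellite" access described above.
 * All structure and field names here are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define INSTR_BITS 32

/* Pipeline satellite: a latch that can inject one instruction word
 * downstream of the decode stage. */
typedef struct {
    uint32_t instr_latch;
    int      valid;
} pipeline_satellite_t;

/* Register satellite: couples one dedicated register to the serial bus. */
typedef struct {
    uint64_t dedicated_reg;
} register_satellite_t;

/* External agent shifts an instruction into the pipeline satellite one
 * bit at a time over the serial bus (LSB first here, by assumption). */
static void serial_shift_instruction(pipeline_satellite_t *ps, uint32_t instr)
{
    ps->instr_latch = 0;
    for (int bit = 0; bit < INSTR_BITS; bit++) {
        uint32_t serial_in = (instr >> bit) & 1u;   /* one bus beat */
        ps->instr_latch |= serial_in << bit;
    }
    ps->valid = 1;   /* the pipeline would issue this word on its next cycle */
}

/* The "rammed" instruction may deposit its result in the dedicated
 * register, which the agent then reads back via the register satellite. */
static uint64_t read_dedicated_register(const register_satellite_t *rs)
{
    return rs->dedicated_reg;
}

int main(void)
{
    pipeline_satellite_t ps = {0};
    register_satellite_t rs = {0};

    serial_shift_instruction(&ps, 0x7C0802A6u);  /* arbitrary opcode */
    rs.dedicated_reg = 0xDEADBEEF;               /* stand-in for a result */

    printf("injected instr: 0x%08X, dedicated reg: 0x%08llX\n",
           ps.instr_latch,
           (unsigned long long)read_dedicated_register(&rs));
    return 0;
}
```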
    • 3. Invention grant
    • Generating partition corresponding real address in partitioned mode supporting system
    • US06438671B1
    • 2002-08-20
    • US09346206
    • 1999-07-01
    • Richard William Doing; Ronald Nick Kalla; Stephen Joseph Schwinn; Edward John Silha; Kenichi Tsuchiya
    • G06F12/06
    • G06F9/3804; G06F9/3842; G06F9/3851; G06F9/5077; G06F12/0284; G06F12/1036; G06F12/109; G06F12/1491
    • A processor supports logical partitioning of a computer system. Logical partitions isolate the real address spaces of processes executing on different processors and the hardware resources that include processors. However, this multithreaded processor system can dynamically reallocate hardware resources including the processors among logical partitions. An ultra-privileged supervisor process, called a hypervisor, regulates the logical partitions. Preferably, the processor supports hardware multithreading, each thread independently capable of being in either hypervisor, supervisor, or problem state. The processor assigns certain generated addresses to its logical partition, preferably by concatenating certain high order bits from a special register with lower order bits of the generated address. A separate range check mechanism concurrently verifies that these high order effective address bits are in fact 0, and generates an error signal if they are not. In the preferred embodiment, instruction addresses from either active or dormant threads can be pre-fetched in anticipation of execution. In the preferred embodiment, the processor supports different environments which use the hypervisor, supervisor and problem states differently.
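The address-generation scheme in this abstract (concatenating high-order bits from a special register with the low-order bits of the generated address, plus a separate check that the high-order effective-address bits are zero) can be illustrated with a short C sketch. The 52-bit split and the register name are assumptions chosen for the example, not values fixed by the patent.

```c
/* Sketch of partition real-address generation as described above.
 * The address split and register name are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LOW_BITS 52                          /* assumed per-partition offset width */
#define LOW_MASK ((1ULL << LOW_BITS) - 1)

/* Special register holding the partition's high-order real-address bits. */
static uint64_t partition_prefix_reg;

/* Concatenate the high-order bits from the special register with the
 * low-order bits of the generated (effective) address. */
static uint64_t partition_real_address(uint64_t generated_addr)
{
    return (partition_prefix_reg << LOW_BITS) | (generated_addr & LOW_MASK);
}

/* Separate range check: the high-order effective-address bits must be 0;
 * otherwise the access would escape the partition and an error is raised. */
static int range_check(uint64_t generated_addr)
{
    return (generated_addr >> LOW_BITS) == 0;   /* 1 = ok, 0 = error signal */
}

int main(void)
{
    partition_prefix_reg = 0x3;                 /* this partition's prefix */
    uint64_t ea = 0x0000000000123000ULL;

    if (!range_check(ea))
        printf("address error signal\n");
    else
        printf("real address: 0x%016llX\n",
               (unsigned long long)partition_real_address(ea));
    return 0;
}
```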
    • 5. Invention grant
    • Instruction cache for multithreaded processor
    • US6161166A
    • 2000-12-12
    • US266133
    • 1999-03-10
    • Richard William Doing; Ronald Nick Kalla; Stephen Joseph Schwinn
    • G06F12/08; G06F12/10
    • G06F12/0859; G06F12/0842; G06F12/0864; G06F12/0875; G06F12/1036; G06F12/1054
    • A multithreaded processor includes a level one instruction cache shared by all threads. The I-cache is accessed with an instruction unit generated effective address, the I-cache directory containing real page numbers of the corresponding cache lines. A separate line fill sequencer exists for each thread. Preferably, the I-cache is N-way set associative, where N is the number of threads, and includes an effective-to-real address table (ERAT), containing pairs of effective and real page numbers. ERAT entries are accessed by hashing the effective address. The ERAT entry is then compared with the effective address of the desired instruction to verify an ERAT hit. The corresponding real page number is compared with a real page number in the directory array to verify a cache hit. Preferably, the line fill sequencer operates in response to a cache miss, where there is an ERAT hit. In this case, the full real address of the desired instruction can be constructed from the effective address and the ERAT, making it unnecessary to access slower address translation mechanisms for main memory. Because there is a separate line fill sequencer for each thread, threads are independently able to satisfy cache fill requests without waiting for each other. Additionally, because the I-cache index contains real page numbers, cache coherency is simplified. Furthermore, the ERAT avoids the need in many cases to access slower memory translation mechanisms. Finally, the n-way associative nature of the cache reduces thread contention.
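A minimal C sketch of the lookup flow this abstract describes: hash the effective address to select an ERAT entry, compare its effective page number to confirm an ERAT hit, then compare its real page number against the I-cache directory to confirm a cache hit. Table sizes, the hash, the page size, and the direct-mapped directory are simplifying assumptions made for the example.

```c
/* Sketch of the ERAT / I-cache directory lookup described above.
 * Sizes, the hash, and the directory layout are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define ERAT_ENTRIES 128
#define ICACHE_SETS  64

typedef struct { uint64_t epn; uint64_t rpn; int valid; } erat_entry_t;

static erat_entry_t erat[ERAT_ENTRIES];
static uint64_t     icache_dir_rpn[ICACHE_SETS];  /* directory holds real page numbers */

/* ERAT entries are selected by hashing the effective address. */
static unsigned erat_hash(uint64_t ea)
{
    return (unsigned)((ea >> PAGE_SHIFT) % ERAT_ENTRIES);
}

/* Returns 1 on an I-cache hit; on an ERAT hit with a cache miss, the line
 * fill sequencer could use *rpn_out to build the full real address without
 * consulting the slower main-memory translation mechanisms. */
static int icache_lookup(uint64_t ea, uint64_t *rpn_out)
{
    erat_entry_t *e = &erat[erat_hash(ea)];

    /* ERAT hit: stored effective page number matches the request. */
    if (!e->valid || e->epn != (ea >> PAGE_SHIFT))
        return 0;                    /* ERAT miss: slower translation needed */
    *rpn_out = e->rpn;

    /* Cache hit: real page number in the directory matches the ERAT's. */
    unsigned set = (unsigned)((ea >> PAGE_SHIFT) % ICACHE_SETS);
    return icache_dir_rpn[set] == e->rpn;
}

int main(void)
{
    uint64_t ea = 0x0000000000457000ULL;
    erat[erat_hash(ea)] = (erat_entry_t){ .epn = ea >> PAGE_SHIFT,
                                          .rpn = 0x9A0, .valid = 1 };
    icache_dir_rpn[(ea >> PAGE_SHIFT) % ICACHE_SETS] = 0x9A0;

    uint64_t rpn;
    printf("I-cache hit: %d\n", icache_lookup(ea, &rpn));
    return 0;
}
```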
    • 6. Invention application
    • REDUCING THE FETCH TIME OF TARGET INSTRUCTIONS OF A PREDICTED TAKEN BRANCH INSTRUCTION
    • US20080276071A1
    • 2008-11-06
    • US12176386
    • 2008-07-20
    • Richard William Doing; Brett Olsson; Kenichi Tsuchiya
    • G06F9/312
    • G06F9/3804; G06F9/3844
    • A method and processor for reducing the fetch time of target instructions of a predicted taken branch instruction. Each entry in a buffer, referred to herein as a “branch target buffer”, may store an address of a branch instruction predicted taken and the instructions beginning at the target address of the branch instruction predicted taken. When an instruction is fetched from the instruction cache, a particular entry in the branch target buffer is indexed using particular bits of the fetched instruction. The address of the branch instruction in the indexed entry is compared with the address of the instruction fetched from the instruction cache. If there is a match, then the instructions beginning at the target address of that branch instruction are dispatched directly behind the branch instruction. In this manner, the fetch time of target instructions of a predicted taken branch instruction is reduced.
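A minimal C sketch of the branch-target-buffer lookup this abstract describes: index an entry with bits of the fetched instruction's address, compare the stored branch address, and on a match hand back the cached target-path instructions so they can be dispatched directly behind the branch. Entry count, index bits, and the number of stored target instructions are assumptions for illustration.

```c
/* Sketch of the branch target buffer (BTB) lookup described above.
 * Entry count, index bits, and target-line width are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define BTB_ENTRIES   256
#define TARGET_INSTRS 4        /* instructions cached from the target address */

typedef struct {
    uint64_t branch_addr;                  /* address of the predicted-taken branch */
    uint32_t target_instrs[TARGET_INSTRS]; /* instructions starting at its target */
    int      valid;
} btb_entry_t;

static btb_entry_t btb[BTB_ENTRIES];

/* Index the BTB with particular bits of the fetched instruction's address. */
static unsigned btb_index(uint64_t fetch_addr)
{
    return (unsigned)((fetch_addr >> 2) & (BTB_ENTRIES - 1));
}

/* On a fetch, compare the stored branch address with the fetch address; on a
 * match, the stored target instructions can be dispatched right behind the
 * branch instead of waiting for a fresh I-cache fetch of the target line. */
static const uint32_t *btb_lookup(uint64_t fetch_addr)
{
    btb_entry_t *e = &btb[btb_index(fetch_addr)];
    if (e->valid && e->branch_addr == fetch_addr)
        return e->target_instrs;
    return NULL;
}

int main(void)
{
    uint64_t branch_pc = 0x1000;
    btb_entry_t *e = &btb[btb_index(branch_pc)];
    e->branch_addr = branch_pc;
    e->target_instrs[0] = 0x60000000u;     /* placeholder target-path opcode */
    e->valid = 1;

    const uint32_t *t = btb_lookup(branch_pc);
    printf("BTB %s\n", t ? "hit: dispatch cached target instructions" : "miss");
    return 0;
}
```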
    • 9. Invention grant
    • Group formation with multiple taken branches per group
    • US08127115B2
    • 2012-02-28
    • US12417798
    • 2009-04-03
    • Richard William Doing; Kevin Neal Magil; Balaram Sinharoy; Jeffrey R. Summers; James Albert Van Norstrand, Jr.
    • G06F9/30
    • G06F9/30145; G06F9/3802; G06F9/3814; G06F9/382; G06F9/3853
    • Disclosed are a method and a system for grouping processor instructions for execution by a processor, where the group of processor instructions includes at least two branch processor instructions. In one or more embodiments, an instruction buffer can decouple an instruction fetch operation from an instruction decode operation by storing fetched processor instructions in the instruction buffer until the fetched processor instructions are ready to be decoded. Group formation can involve removing processor instructions from the instruction buffer and routing the processor instruction to latches that convey the processor instructions to decoders. Processor instructions that are removed from instruction buffer in a single clock cycle can be called a group of processor instructions. In one or more embodiments, the first instruction in the group must be the oldest instruction in the instruction buffer and instructions must be removed from the instruction buffer ordered from oldest to youngest.
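A minimal C sketch of the group-formation step this abstract describes: instructions are removed from the instruction buffer oldest first, up to the group width, in a single modeled cycle, with no rule ending the group at the first taken branch. Buffer size and group width are assumptions for illustration.

```c
/* Sketch of group formation from the instruction buffer as described above.
 * Buffer size and group width are illustrative assumptions; the point is
 * only the oldest-to-youngest removal order within one cycle. */
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE    16
#define GROUP_WIDTH 4          /* instructions removed per clock = one group */

typedef struct {
    uint32_t instrs[BUF_SIZE]; /* circular FIFO: decouples fetch from decode */
    int head, count;           /* head = oldest fetched instruction */
} instr_buffer_t;

/* Remove up to GROUP_WIDTH instructions in one modeled cycle, oldest first,
 * and route them toward the decode latches. The group may contain more than
 * one branch; nothing here ends the group at the first taken branch. */
static int form_group(instr_buffer_t *b, uint32_t group[GROUP_WIDTH])
{
    int n = 0;
    while (n < GROUP_WIDTH && b->count > 0) {
        group[n++] = b->instrs[b->head];        /* slot 0 is the oldest */
        b->head = (b->head + 1) % BUF_SIZE;
        b->count--;
    }
    return n;                                   /* instructions in this group */
}

int main(void)
{
    instr_buffer_t buf = { .head = 0, .count = 6,
                           .instrs = { 1, 2, 3, 4, 5, 6 } };
    uint32_t group[GROUP_WIDTH];

    int n = form_group(&buf, group);
    printf("group of %d:", n);
    for (int i = 0; i < n; i++) printf(" %u", group[i]);
    printf("\n");
    return 0;
}
```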