    • 2. Granted patent
    • Title: Apparatus and method for memory copy at a processor
    • Publication: US09524162B2 (2016-12-20)
    • Application: US13455800 (filed 2012-04-25)
    • Inventors: Thang M. Tran, James Yang
    • IPC: G06F12/00, G06F9/30, G06F12/08, G06F9/38
    • CPC: G06F9/30032, G06F9/30043, G06F9/3838, G06F9/384, G06F12/0897
    • Abstract: A processor uses a dedicated buffer to reduce the time needed to execute memory copy operations. For each load instruction associated with the memory copy operation, the processor copies the load data from memory to the dedicated buffer. For each store operation associated with the memory copy operation, the processor retrieves the store data from the dedicated buffer and transfers it to memory. The dedicated buffer is separate from the processor's register file and caches, so each load operation associated with a memory copy operation does not have to wait for data to be loaded from memory to the register file. Similarly, each store operation associated with a memory copy operation does not have to wait for data to be transferred from the register file to memory.
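The mechanism in this abstract can be sketched as a toy simulation. The class and field names below are invented for illustration; the point is only that copy data is staged in a FIFO separate from the register file, so loads and stores of a memory copy never round-trip through registers.

```python
from collections import deque

class MemcpyBuffer:
    """Toy model of a dedicated memory-copy buffer: load data bypasses
    the register file and is staged in a FIFO separate from the caches,
    and dependent stores drain it directly to memory."""

    def __init__(self):
        self.fifo = deque()          # the dedicated buffer

    def load(self, memory, addr):
        # Memory-copy load: data goes straight into the buffer, not a register.
        self.fifo.append(memory[addr])

    def store(self, memory, addr):
        # Memory-copy store: data drains from the buffer straight to memory.
        memory[addr] = self.fifo.popleft()

def memcpy(memory, src, dst, n):
    buf = MemcpyBuffer()
    for i in range(n):               # loads stage data in the buffer
        buf.load(memory, src + i)
    for i in range(n):               # stores drain it to the destination
        buf.store(memory, dst + i)

mem = list(range(16))
memcpy(mem, src=0, dst=8, n=4)
print(mem[8:12])                     # the first four words were copied
```

In hardware the loads and stores would overlap rather than run in two passes; the sketch only shows the dataflow through the separate buffer.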
    • 4. Published application
    • Title: Techniques for reducing processor power consumption through dynamic processor resource allocation
    • Publication: US20140025967A1 (2014-01-23)
    • Application: US13551220 (filed 2012-07-17)
    • Inventor: Thang M. Tran
    • IPC: G06F1/32
    • CPC: G06F1/3206
    • Abstract: A technique for performing power management for configurable processor resources of a processor determines whether to increase, decrease, or maintain resource units for each of the configurable processor resources based on the utilization of each resource. A total weighted power number for the processor is substantially maintained while resource units for each configurable processor resource whose utilization is above a first level are increased, and resource units for each configurable processor resource whose utilization is below a second level are decreased. The total weighted power number corresponds to a sum of weighted power numbers for the configurable processor resources.
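One reading of this rebalancing scheme can be sketched as follows. The thresholds, the one-unit step, and the example resource table are all invented; the invariant shown is only that the total weighted power number (sum of units × weight) does not grow.

```python
def rebalance(units, weights, util, hi=0.9, lo=0.3):
    """Shift resource units from under-used to over-used resources while
    keeping the total weighted power number from growing."""
    freed = 0.0
    for r in units:
        if util[r] < lo and units[r] > 1:
            units[r] -= 1            # shrink a cold resource
            freed += weights[r]      # its weighted power is now budget
    for r in units:
        if util[r] > hi and freed >= weights[r]:
            units[r] += 1            # grow a hot resource within budget
            freed -= weights[r]
    return units

# Hypothetical resource table: ALU is saturated, load/store unit is idle.
units   = {"alu": 4, "lsu": 4, "fpu": 4}
weights = {"alu": 1.0, "lsu": 2.0, "fpu": 2.0}
util    = {"alu": 0.95, "lsu": 0.1, "fpu": 0.5}

before = sum(units[r] * weights[r] for r in units)
rebalance(units, weights, util)
after  = sum(units[r] * weights[r] for r in units)
print(units, before, after)          # ALU grew, LSU shrank, power did not grow
```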
    • 7. Granted patent
    • Title: Microprocessor with independent SIMD loop buffer
    • Publication: US07330964B2 (2008-02-12)
    • Application: US11273493 (filed 2005-11-14)
    • Inventors: Thang M. Tran, Muralidharan S. Chinnakonda
    • IPC: G06F9/40
    • CPC: G06F9/381, G06F9/30036, G06F9/325, G06F9/3885, G06F9/3887
    • Abstract: An apparatus comprising detection logic configured to detect a loop among a set of instructions, the loop comprising one or more instructions of a first type of instruction and a second type of instruction, and a co-processor configured to execute the loop detected by the detection logic, the co-processor comprising an instruction queue. The apparatus further comprises fetch logic configured to fetch instructions; decode logic configured to determine instruction type; a processor configured to execute the loop detected by the detection logic, wherein the loop comprises one or more instructions of the first type of instruction; and an execution unit configured to execute the loop detected by the detection logic.
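The detection-then-dispatch flow above can be illustrated with a minimal sketch, assuming a simple trace encoding (tuples, with a backward branch marking the loop end); the encoding and the co-processor class are invented for illustration.

```python
def detect_loop(instrs):
    """Scan an instruction trace for a backward branch and return the
    loop body it encloses, as the detection logic would before handing
    the loop to the co-processor's instruction queue."""
    for i, ins in enumerate(instrs):
        if ins[0] == "branch" and ins[1] <= i:   # backward branch => loop
            return instrs[ins[1]:i + 1]          # body, incl. the branch
    return None

class CoProcessor:
    """Co-processor with its own instruction queue for the detected loop."""
    def __init__(self):
        self.queue = []
    def enqueue(self, body):
        self.queue.extend(body)

trace = [("add",), ("simd_mul",), ("simd_add",), ("branch", 1), ("sub",)]
loop = detect_loop(trace)
cp = CoProcessor()
cp.enqueue(loop)
print(cp.queue)   # the two SIMD ops plus the closing branch
```

Once queued, the co-processor could iterate the loop from its own buffer without re-fetching, which is the point of keeping the queue independent of the main fetch pipeline.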
    • 8. Granted patent
    • Title: Data address prediction structure and a method for operating the same
    • Publication: US06604190B1 (2003-08-05)
    • Application: US08473504 (filed 1995-06-07)
    • Inventor: Thang M. Tran
    • IPC: G06F9/30
    • CPC: G06F9/3832, G06F9/3824, G06F9/3826, G06F9/383, G06F9/3838
    • Abstract: A data address prediction structure for a superscalar microprocessor is provided. The data address prediction structure predicts a data address that a group of instructions is going to access while that group of instructions is being fetched from the instruction cache. The data bytes associated with the predicted address are placed in a relatively small, fast buffer. The decode stages of instruction processing pipelines in the microprocessor access the buffer with addresses generated from the instructions, and if the associated data bytes are found in the buffer they are conveyed to the reservation station associated with the requesting decode stage. Therefore, the implicit memory read associated with an instruction is performed before the instruction arrives in a functional unit. The functional unit is occupied by the instruction for fewer clock cycles, since it need not perform the implicit memory operation. Instead, the functional unit performs the explicit operation indicated by the instruction.
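The predict-and-prefetch idea can be sketched in a few lines. The table contents, addresses, and method names below are invented; the sketch only shows a prediction table keyed by fetch address filling a small buffer that decode-stage reads can hit.

```python
class DataAddressPredictor:
    """Sketch of a data address prediction structure: while a block is
    fetched from the instruction cache, a table keyed by the fetch
    address supplies a predicted data address whose bytes are staged in
    a small buffer; decode-stage reads that hit the buffer skip the
    implicit memory read."""

    def __init__(self, memory):
        self.memory = memory
        self.table = {}        # fetch address -> predicted data address
        self.buffer = {}       # data address -> prefetched bytes

    def fetch(self, fetch_addr):
        pred = self.table.get(fetch_addr)
        if pred is not None:
            self.buffer[pred] = self.memory[pred]   # prefetch in parallel

    def decode_read(self, data_addr):
        if data_addr in self.buffer:                # hit: no memory read
            return self.buffer[data_addr], True
        return self.memory[data_addr], False        # miss: normal read

mem = {0x40: 99}
p = DataAddressPredictor(mem)
p.table[0x100] = 0x40        # learned: the block at 0x100 loads from 0x40
p.fetch(0x100)
value, hit = p.decode_read(0x40)
print(value, hit)
```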
    • 9. Granted patent
    • Title: Instruction cache configured to provide instructions to a microprocessor having a clock cycle time less than a cache access time of said instruction cache
    • Publication: US6167510A (2000-12-26)
    • Application: US65346 (filed 1998-04-23)
    • Inventor: Thang M. Tran
    • IPC: G06F9/38, G06F12/08, G06F15/00
    • CPC: G06F9/3814, G06F9/3802, G06F9/3806, G06F12/0851, G06F12/0864, G06F12/0875
    • Abstract: An apparatus including a banked instruction cache and a branch prediction unit is provided. The banked instruction cache allows multiple instruction fetch addresses (comprising consecutive instruction blocks from the predicted instruction stream being executed by the microprocessor) to be fetched concurrently. The instruction cache provides an instruction block corresponding to one of the multiple fetch addresses to the instruction processing pipeline of the microprocessor during each consecutive clock cycle, while additional instruction fetch addresses from the predicted instruction stream are fetched. Preferably, the instruction cache includes at least a number of banks equal to the number of clock cycles consumed by an instruction cache access. In this manner, instructions may be provided during each consecutive clock cycle even though the instruction cache access time is greater than the clock cycle time of the microprocessor.
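The timing claim in the last two sentences can be checked with a toy cycle model. The parameters are invented; the sketch assumes consecutive blocks map to different banks so their multi-cycle accesses overlap.

```python
def delivery_cycles(n_blocks, access_latency, n_banks):
    """Cycle on which each consecutive fetch block arrives. Each bank
    needs `access_latency` cycles per access, one new access can be
    issued per cycle, and block i maps to bank i % n_banks."""
    done = []
    bank_free = [0] * n_banks
    for blk in range(n_blocks):
        bank = blk % n_banks
        start = max(blk, bank_free[bank])   # wait for the bank if busy
        bank_free[bank] = start + access_latency
        done.append(start + access_latency)
    return done

# 2-cycle cache access, 2 banks: after the initial latency, a block
# still arrives every single cycle.
print(delivery_cycles(n_blocks=4, access_latency=2, n_banks=2))
# With only 1 bank, delivery degrades to one block every 2 cycles.
print(delivery_cycles(n_blocks=4, access_latency=2, n_banks=1))
```

With the number of banks at least equal to the access latency, the arrival times form a run of consecutive cycles, which is exactly the abstract's claim.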