    • 61. Granted invention patent
    • Title: Techniques for multi-level indirect data prefetching
    • Publication number: US08161265B2
    • Publication date: 2012-04-17
    • Application number: US12024260
    • Filing date: 2008-02-01
    • Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
    • IPC: G06F13/00
    • CPC: G06F12/1027, G06F12/0862, G06F12/0897, G06F2212/6026, G06F2212/681
    • Abstract: A technique for performing data prefetching using multi-level indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line of a memory) at the first memory address is then fetched. A second memory address is then determined based on the content at the first memory address. Content that is included in a second data block (e.g., a second cache line) at the second memory address is then fetched (e.g., from the memory or another memory). A third memory address is then determined based on the content at the second memory address. Finally, a third data block (e.g., a third cache line) that includes another pointer or data at the third memory address is fetched (e.g., from the memory or the another memory). (An illustrative sketch follows this entry.)
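The pointer-chasing flow in this abstract can be pictured with a small software analogy (not the patented hardware: a real prefetch engine would capture the intermediate pointer values itself, whereas a software version has to demand-load them). The helper below is hypothetical and uses the GCC/Clang __builtin_prefetch intrinsic.

        /* Software analogy of multi-level indirect prefetching: walk the chain
         * p -> *p -> **p and prefetch the cache line holding each level. */
        static inline void prefetch_indirect_3level(void *const *p)
        {
            __builtin_prefetch(p, 0, 3);             /* first data block: the pointer itself      */
            void *const *q = (void *const *)*p;      /* second address comes from that content    */
            __builtin_prefetch(q, 0, 3);             /* second data block                         */
            void *const *r = (void *const *)*q;      /* third address from the second block       */
            __builtin_prefetch(r, 0, 3);             /* third data block: another pointer or data */
        }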
    • 62. Granted invention patent
    • Title: Techniques for indirect data prefetching
    • Publication number: US08161263B2
    • Publication date: 2012-04-17
    • Application number: US12024239
    • Filing date: 2008-02-01
    • Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
    • IPC: G06F13/00
    • CPC: G06F12/0862, G06F2212/6028
    • Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address. (An illustrative sketch follows this entry.)
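A rough, self-contained model of the pipeline this abstract describes: one translation step to find the pointer, a fetch of its block, a second translation step applied to that content, then a fetch of the target block. A flat 1:1 address translation and a tiny array standing in for memory are assumed purely for illustration; every name below is hypothetical rather than part of the patent.

        #include <stdint.h>
        #include <stdio.h>

        static uint64_t mem[16];                                      /* simulated memory           */
        static uint64_t translate(uint64_t ea)  { return ea; }        /* address translation engine */
        static uint64_t fetch_line(uint64_t ra) { return mem[ra]; }   /* prefetch engine line fetch */

        static uint64_t indirect_prefetch(uint64_t pointer_ea)
        {
            uint64_t first_ra  = translate(pointer_ea);   /* first translation engine        */
            uint64_t ptr       = fetch_line(first_ra);    /* content of the first data block */
            uint64_t second_ra = translate(ptr);          /* second translation engine       */
            return fetch_line(second_ra);                 /* second data block (the data)    */
        }

        int main(void)
        {
            mem[3] = 7;                                   /* a "pointer" stored at address 3 */
            mem[7] = 42;                                  /* the data it points to           */
            printf("%llu\n", (unsigned long long)indirect_prefetch(3));   /* prints 42 */
            return 0;
        }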
    • 63. Granted invention patent
    • Title: Fully asynchronous memory mover
    • Publication number: US08095758B2
    • Publication date: 2012-01-10
    • Application number: US12024613
    • Filing date: 2008-02-01
    • Inventors: Ravi K. Arimilli, Robert S. Blackmore, Chulho Kim, Balaram Sinharoy, Hanhong Xue
    • IPC: G06F12/02, G06F12/04
    • CPC: G06F9/30032, G06F12/0831, G06F12/0862, G06F12/10
    • Abstract: A data processing system has a processor and a memory coupled to the processor and an asynchronous memory mover coupled to the processor. The asynchronous memory mover has registers for receiving a set of parameters from the processor, which parameters are associated with an asynchronous memory move (AMM) operation initiated by the processor in virtual address space, utilizing a source effective address and a destination effective address. The asynchronous memory mover performs the AMM operation to move the data from a first physical memory location having a source real address corresponding to the source effective address to a second physical memory location having a destination real address corresponding to the destination effective address. The asynchronous memory mover has an associated off-chip translation mechanism. The AMM operation thus occurs independent of the processor, and the processor continues processing other operations independent of the AMM operation. (An illustrative sketch follows this entry.)
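A user-space analogy of the asynchronous memory move (AMM) flow: the caller hands source and destination addresses plus a length to a mover and keeps working, polling a completion flag later. A POSIX thread stands in for the off-chip mover hardware, and all of the names are assumptions.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <string.h>

        typedef struct {
            void       *src;      /* source effective address          */
            void       *dst;      /* destination effective address     */
            size_t      len;      /* number of bytes to move           */
            atomic_int  done;     /* completion flag polled by the CPU */
            pthread_t   worker;
        } amm_request;

        static void *amm_worker(void *arg)
        {
            amm_request *r = arg;
            memcpy(r->dst, r->src, r->len);      /* the data movement itself */
            atomic_store(&r->done, 1);           /* signal completion        */
            return NULL;
        }

        static void amm_start(amm_request *r)    /* "write the parameter registers" */
        {
            atomic_store(&r->done, 0);
            pthread_create(&r->worker, NULL, amm_worker, r);
        }

        static int amm_poll(amm_request *r)      /* processor checks for completion */
        {
            return atomic_load(&r->done);
        }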
    • 64. Invention patent application
    • Title: Hardware Assist Thread for Increasing Code Parallelism
    • Publication number: US20110283095A1
    • Publication date: 2011-11-17
    • Application number: US12778192
    • Filing date: 2010-05-12
    • Inventors: Ronald P. Hall, Hung Q. Le, Raul E. Silvera, Balaram Sinharoy
    • IPC: G06F9/30, G06F9/38
    • CPC: G06F9/3851, G06F9/3009, G06F9/30101, G06F9/30149, G06F9/30189
    • Abstract: Mechanisms are provided for offloading a workload from a main thread to an assist thread. The mechanisms receive, in a fetch unit of a processor of the data processing system, a branch-to-assist-thread instruction of a main thread. The branch-to-assist-thread instruction informs hardware of the processor to look for an already spawned idle thread to be used as an assist thread. Hardware implemented pervasive thread control logic determines if one or more already spawned idle threads are available for use as an assist thread. The hardware implemented pervasive thread control logic selects an idle thread from the one or more already spawned idle threads if it is determined that one or more already spawned idle threads are available for use as an assist thread, to thereby provide the assist thread. In addition, the hardware implemented pervasive thread control logic offloads a portion of a workload of the main thread to the assist thread. (An illustrative sketch follows this entry.)
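A software analogy of the flow in this abstract: a pool of already-spawned worker threads sits idle, selection logic (the "pervasive thread control logic", here reduced to a loop) claims an idle one, and the main thread offloads part of its work to it, doing the work itself when no helper is free. The pool layout and function names are assumptions, not the patented hardware interface; the workers' run loop is not shown.

        #include <stdatomic.h>
        #include <stddef.h>

        typedef void (*work_fn)(void *);

        typedef struct {
            atomic_int idle;      /* 1 = this already-spawned thread is free */
            work_fn    fn;        /* offloaded portion of the workload       */
            void      *arg;
        } assist_slot;

        /* Stand-in for the pervasive thread control logic: claim an idle thread. */
        static assist_slot *select_idle(assist_slot *pool, size_t n)
        {
            for (size_t i = 0; i < n; i++) {
                int expected = 1;
                if (atomic_compare_exchange_strong(&pool[i].idle, &expected, 0))
                    return &pool[i];
            }
            return NULL;          /* no assist thread available */
        }

        /* Stand-in for the branch-to-assist-thread instruction. */
        static void branch_to_assist(assist_slot *pool, size_t n, work_fn fn, void *arg)
        {
            assist_slot *s = select_idle(pool, n);
            if (s) { s->fn = fn; s->arg = arg; }   /* the idle worker's loop picks this up */
            else   { fn(arg); }                    /* fall back: main thread does the work */
        }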
    • 69. Invention patent application
    • Title: Completion Arbitration for More than Two Threads Based on Resource Limitations
    • Publication number: US20100262967A1
    • Publication date: 2010-10-14
    • Application number: US12423561
    • Filing date: 2009-04-14
    • Inventors: Susan E. Eisen, Dung Q. Nguyen, Balaram Sinharoy, Benjamin W. Stolt
    • IPC: G06F9/46
    • CPC: G06F9/485
    • Abstract: A mechanism is provided for thread completion arbitration. The mechanism comprises executing more than two threads of instructions simultaneously in the processor, selecting a first thread from a first subset of threads, in the more than two threads, for completion of execution within the processor, and selecting a second thread from a second subset of threads, in the more than two threads, for completion of execution within the processor. The mechanism further comprises completing execution of the first and second threads by committing results of the execution of the first and second threads to a storage device associated with the processor. At least one of the first subset of threads or the second subset of threads comprise two or more threads from the more than two threads. The first subset of threads and second subset of threads have different threads from one another. (An illustrative sketch follows this entry.)
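The selection scheme can be sketched as follows: with more than two threads in flight, completion logic picks at most one ready thread from each of two disjoint subsets per cycle and lets those commit their results. The round-robin policy and the stand-in functions below are assumptions made only to keep the sketch self-contained.

        /* Four threads split into two disjoint subsets; each cycle at most one
         * ready thread per subset is selected for completion. */
        static const int subset_a[2] = {0, 1};
        static const int subset_b[2] = {2, 3};
        static unsigned rr_a, rr_b;                                       /* round-robin pointers */

        static int  ready_to_complete(int tid) { (void)tid; return 1; }   /* stand-in */
        static void commit_results(int tid)    { (void)tid; }             /* stand-in */

        static void completion_cycle(void)
        {
            for (unsigned i = 0; i < 2; i++) {            /* pick one thread from subset A */
                int t = subset_a[(rr_a + i) % 2];
                if (ready_to_complete(t)) { commit_results(t); rr_a = (rr_a + i + 1) % 2; break; }
            }
            for (unsigned i = 0; i < 2; i++) {            /* and one thread from subset B  */
                int t = subset_b[(rr_b + i) % 2];
                if (ready_to_complete(t)) { commit_results(t); rr_b = (rr_b + i + 1) % 2; break; }
            }
        }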
    • 70. Invention patent application
    • Title: Specifying an Addressing Relationship In An Operand Data Structure
    • Publication number: US20100153683A1
    • Publication date: 2010-06-17
    • Application number: US12336342
    • Filing date: 2008-12-16
    • Inventors: Ravi K. Arimilli, Balaram Sinharoy
    • IPC: G06F9/34, G06F12/02
    • CPC: G06F9/345
    • Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship. (An illustrative sketch follows this entry.)
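The operand data structure can be pictured as a per-region record of how the addresses of consecutive accesses relate to one another; an address-generation helper then derives access addresses from that record alone. A constant stride per region is assumed here purely for illustration, as are all of the names.

        #include <stdint.h>

        typedef struct {
            uint64_t base;      /* address of the first access in the region         */
            int64_t  stride;    /* relationship between consecutive access addresses */
        } operand_region;

        /* Stand-in for the address generation accelerator: the address of the
         * n-th access in a region follows from the recorded relationship. */
        static uint64_t gen_address(const operand_region *r, uint64_t n)
        {
            return r->base + n * (uint64_t)r->stride;
        }

        /* Two regions with different relationships, as in the abstract. */
        static const operand_region region1 = { 0x1000,  8 };   /* dense 8-byte stride */
        static const operand_region region2 = { 0x8000, 64 };   /* one access per line */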