    • 23. Invention application
    • Techniques for Data Prefetching Using Indirect Addressing with Offset
    • US20090198904A1
    • 2009-08-06
    • US12024246
    • 2008-02-01
    • Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
    • G06F12/08
    • G06F12/0862G06F12/1045G06F2212/6028
    • A technique for performing data prefetching using indirect addressing includes determining a first memory address of a pointer associated with a data prefetch instruction. Content, that is included in a first data block (e.g., a first cache line) of a memory, at the first memory address is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block (e.g., a second cache line) that includes data at the second memory address is then fetched (e.g., from the memory or another memory). A data prefetch instruction may be indicated by a unique operational code (opcode), a unique extended opcode, or a field (including one or more bits) in an instruction.
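
The abstract above describes an indirect prefetch that adds a fixed offset to a fetched pointer before prefetching the target block. A minimal C sketch of that address computation, assuming the GCC/Clang __builtin_prefetch hint; the helper name and parameters are illustrative, since the patent describes a hardware prefetch engine triggered by an opcode, extended opcode, or instruction field rather than a library call:

```c
#include <stddef.h>
#include <stdint.h>

/* Software analogue of the indirect-with-offset prefetch in US20090198904A1.
 * All names here are illustrative, not taken from the patent. */
static inline void indirect_prefetch_with_offset(void **pointer_slot, ptrdiff_t offset)
{
    /* Step 1: the first memory address is the location holding the pointer;
       fetch the content of that location (it lives in the first cache line). */
    uintptr_t content = (uintptr_t)*pointer_slot;

    /* Step 2: add the offset to the fetched content to form the offset address,
       from which the second memory address is derived. */
    uintptr_t second_addr = content + (uintptr_t)offset;

    /* Step 3: prefetch the data block (second cache line) containing that address. */
    __builtin_prefetch((const void *)second_addr, /* rw = */ 0, /* locality = */ 3);
}
```

For example, calling it with &node->next and offsetof(struct node, payload) (hypothetical types) would warm the cache line holding the next node's payload field before the demand access reaches it.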
    • 24. Invention grant
    • Block driven computation using a caching policy specified in an operand data structure
    • US08458439B2
    • 2013-06-04
    • US12336350
    • 2008-12-16
    • Ravi K. Arimilli, Balaram Sinharoy
    • G06F12/00
    • G06F9/383G06F2212/6028
    • A processor has an associated memory hierarchy including a cache memory. The processor includes an instruction sequencing unit that fetches instructions for processing, an operand data structure including a plurality of entries corresponding to operands of operations to be performed by the processor, and a computation engine. A first entry among the plurality of entries in the operand data structure specifies a first caching policy for a first operand, and a second entry specifies a second caching policy for a second operand. The computation engine computes and stores operands in the memory hierarchy in accordance with the cache policies indicated within the operand data structure.
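
Since this abstract centers on per-operand caching policies held in an operand data structure, a small C sketch of such a structure may help. The policy names, the 64-byte line size, and the mapping of each policy onto a __builtin_prefetch locality hint are assumptions for illustration, and the loop below only stands in for the patent's hardware computation engine:

```c
#include <stddef.h>

/* Illustrative, software-only analogue of the operand data structure in
 * US08458439B2; not the patent's hardware mechanism. */
typedef enum {
    CACHE_POLICY_BYPASS,   /* non-temporal: do not keep the operand cached */
    CACHE_POLICY_L2_ONLY,  /* cache below L1 only                          */
    CACHE_POLICY_L1        /* keep the operand resident as high as L1      */
} cache_policy_t;

typedef struct {
    const void    *operand_addr;  /* where the operand block starts   */
    size_t         size_bytes;    /* extent of the operand block      */
    cache_policy_t policy;        /* per-operand caching policy       */
} operand_entry_t;

/* Stand-in for the computation engine: walk each operand and touch it with a
 * prefetch hint chosen from its entry's caching policy (64-byte lines assumed). */
void prefetch_operands(const operand_entry_t *entries, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        const char *p = (const char *)entries[i].operand_addr;
        for (size_t off = 0; off < entries[i].size_bytes; off += 64) {
            switch (entries[i].policy) {
            case CACHE_POLICY_L1:      __builtin_prefetch(p + off, 0, 3); break;
            case CACHE_POLICY_L2_ONLY: __builtin_prefetch(p + off, 0, 2); break;
            case CACHE_POLICY_BYPASS:  __builtin_prefetch(p + off, 0, 0); break;
            }
        }
    }
}
```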
    • 26. Invention grant
    • Specifying an addressing relationship in an operand data structure
    • US08281106B2
    • 2012-10-02
    • US12336342
    • 2008-12-16
    • Ravi K. Arimilli, Balaram Sinharoy
    • G06F12/00
    • G06F9/345
    • A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship.
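
The operand data structure here records, per address region, how the addresses of sequential accesses relate to one another. Below is a minimal C sketch assuming a fixed-stride relationship, which is only one plausible form; the type and function names are illustrative, and the function merely mimics the address generation accelerator in software:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of the addressing relationship in US08281106B2. */
typedef struct {
    uintptr_t region_base;   /* first address in the region               */
    ptrdiff_t stride;        /* relationship between consecutive accesses */
} address_region_t;

/* Software stand-in for the address generation accelerator: derive the n-th
 * access address in a region from that region's recorded relationship. */
static inline uintptr_t nth_access_address(const address_region_t *r, size_t n)
{
    return r->region_base + (uintptr_t)((ptrdiff_t)n * r->stride);
}

/* Example (hypothetical bases): two regions with different relationships,
 * as in the abstract:
 *   address_region_t a = { base_a, 8 };     dense 8-byte elements
 *   address_region_t b = { base_b, 4096 };  one element per 4 KiB page   */
```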
    • 29. Invention grant
    • Techniques for multi-level indirect data prefetching
    • US08161265B2
    • 2012-04-17
    • US12024260
    • 2008-02-01
    • Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
    • G06F13/00
    • G06F12/1027G06F12/0862G06F12/0897G06F2212/6026G06F2212/681
    • A technique for performing data prefetching using multi-level indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line of a memory) at the first memory address is then fetched. A second memory address is then determined based on the content at the first memory address. Content that is included in a second data block (e.g., a second cache line) at the second memory address is then fetched (e.g., from the memory or another memory). A third memory address is then determined based on the content at the second memory address. Finally, a third data block (e.g., a third cache line) that includes another pointer or data at the third memory address is fetched (e.g., from the memory or the another memory).
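
A hedged C sketch of the multi-level pointer chase this abstract describes: `levels` counts dereference steps, the block layout (next-level pointer at offset 0) is an assumption, and, unlike the patent's hardware prefetch engine, the intermediate fetches here are ordinary demand loads rather than prefetches:

```c
#include <stdint.h>

/* Software sketch of multi-level indirect prefetching (US08161265B2). */
static inline void multilevel_indirect_prefetch(void **pointer_slot, unsigned levels)
{
    uintptr_t addr = (uintptr_t)pointer_slot;            /* first memory address */

    for (unsigned i = 0; i < levels; ++i) {
        uintptr_t content = *(const uintptr_t *)addr;    /* content at current address */
        if (content == 0)
            return;                                      /* stop on a null link */
        __builtin_prefetch((const void *)content, 0, 3); /* fetch the next data block */
        addr = content;                                  /* next address comes from the content */
    }
}
```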
    • 30. Invention grant
    • Techniques for indirect data prefetching
    • US08161263B2
    • 2012-04-17
    • US12024239
    • 2008-02-01
    • Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
    • G06F13/00
    • G06F12/0862G06F2212/6028
    • A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
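
This abstract is structural: two address translation engines bracket a prefetch engine. The C sketch below only mirrors the data flow between those units (first address, fetched content, second address, prefetch); the function name is illustrative and the effective-to-real address translation each engine would perform is omitted:

```c
#include <stdint.h>

/* Sketch of the single-level indirect prefetch flow in US08161263B2; the
 * engines themselves are hardware units, not functions. */
static inline void indirect_prefetch(void **pointer_slot)
{
    /* First address translation engine: the first memory address is the
       location of the pointer named by the data prefetch instruction. */
    uintptr_t first_addr = (uintptr_t)pointer_slot;

    /* Prefetch engine: fetch the content held in the first data block
       (first cache line) at that address; a plain load stands in here. */
    uintptr_t content = *(const uintptr_t *)first_addr;

    /* Second address translation engine: the second memory address is
       determined from that content. */
    uintptr_t second_addr = content;

    /* Prefetch engine again: fetch the second data block (second cache line)
       that holds the data at the second memory address. */
    __builtin_prefetch((const void *)second_addr, 0, 3);
}
```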