    • 93. Granted invention patent
    • Title: Operand data structure for block computation
    • Patent number: US08407680B2
    • Grant date: 2013-03-26
    • Application number: US12336301
    • Filing date: 2008-12-16
    • Inventors: Ravi K. Arimilli; Balaram Sinharoy
    • IPC: G06F9/45
    • CPC: G06F8/4441; G06F8/447
    • Abstract: In response to receiving pre-processed code, a compiler identifies a code section that is not a candidate for acceleration and a code block that is a candidate for acceleration. The code block specifies an iterated operation having a first operand and a second operand, where each of multiple first operands and each of multiple second operands for the iterated operation has a defined addressing relationship. In response to the identifying, the compiler generates post-processed code containing lower level instruction(s) corresponding to the identified code section and creates and outputs an operand data structure separate from the post-processed code. The operand data structure specifies the defined addressing relationship for the multiple first operands and for the multiple second operands. The compiler places a block computation command in the post-processed code that invokes processing of the operand data structure to compute operand addresses.
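The abstract does not spell out the layout of the operand data structure, so the following C sketch is only an illustration of the general idea: each operand is described by a base address, a stride, and an iteration count (one possible "defined addressing relationship"), and a stand-in for the block computation command walks that structure to derive the operand addresses. All names (operand_desc, block_compute) and the affine base/stride form are assumptions, not details from the patent.

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of an "operand data structure": each operand of the
 * iterated operation is described by a base address plus an affine stride,
 * so operand addresses can be generated outside the emitted code itself.
 * Names and fields are illustrative, not taken from the patent. */
typedef struct {
    uintptr_t base;    /* address of the first element               */
    ptrdiff_t stride;  /* byte distance between successive elements  */
    size_t    count;   /* number of iterations of the operation      */
} operand_desc;

typedef struct {
    operand_desc src_a;   /* first operand of the iterated operation  */
    operand_desc src_b;   /* second operand of the iterated operation */
    operand_desc dst;     /* result operand                           */
} operand_data_structure;

/* Stand-in for the "block computation command": walk the data structure,
 * compute each operand address from the defined relationship, and apply
 * the iterated operation (here: element-wise add of doubles). */
static void block_compute(const operand_data_structure *ods)
{
    for (size_t i = 0; i < ods->dst.count; i++) {
        double *a = (double *)(ods->src_a.base + (ptrdiff_t)i * ods->src_a.stride);
        double *b = (double *)(ods->src_b.base + (ptrdiff_t)i * ods->src_b.stride);
        double *d = (double *)(ods->dst.base   + (ptrdiff_t)i * ods->dst.stride);
        *d = *a + *b;
    }
}

int main(void)
{
    double x[4] = {1, 2, 3, 4}, y[4] = {10, 20, 30, 40}, z[4];
    operand_data_structure ods = {
        .src_a = { (uintptr_t)x, sizeof(double), 4 },
        .src_b = { (uintptr_t)y, sizeof(double), 4 },
        .dst   = { (uintptr_t)z, sizeof(double), 4 },
    };
    block_compute(&ods);
    for (int i = 0; i < 4; i++)
        printf("%g ", z[i]);
    printf("\n");
    return 0;
}
```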
    • 94. Granted invention patent
    • Title: Computation table for block computation
    • Patent number: US08327345B2
    • Grant date: 2012-12-04
    • Application number: US12336332
    • Filing date: 2008-12-16
    • Inventors: Ravi K. Arimilli; Balaram Sinharoy
    • IPC: G06F9/45
    • CPC: G06F8/4441
    • Abstract: In response to receiving pre-processed code, a compiler identifies a code section that is not a candidate for acceleration and a code block specifying an iterated operation that is a candidate for acceleration. In response to identifying the code section, the compiler generates post-processed code containing one or more lower level instructions corresponding to the identified code section, and in response to identifying the code block, the compiler creates and outputs an operation data structure separate from the post-processed code that identifies the iterated operation. The compiler places a block computation command in the post-processed code that invokes processing of the operation data structure to perform the iterated operation and outputs the post-processed code.
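As with the previous entry, the abstract does not define the table's contents, so this C sketch only illustrates the idea of an operation data structure kept separate from the emitted code: a table entry records which iterated operation a block performs, and a stand-in for the block computation command dispatches on that entry. The enum, field names, and two-operation table are hypothetical.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical sketch of an "operation data structure" kept outside the
 * emitted code: it records which iterated operation a code block performs,
 * and a runtime routine dispatches on that record. Names are illustrative. */
typedef enum { OP_ADD, OP_MUL } iterated_op;

typedef struct {
    iterated_op op;      /* which iterated operation the block performs */
    size_t      count;   /* number of iterations                        */
} operation_entry;

/* Stand-in for the "block computation command": consult the table entry
 * and perform the recorded operation over the given arrays. */
static void block_compute(const operation_entry *e,
                          const double *a, const double *b, double *out)
{
    for (size_t i = 0; i < e->count; i++)
        out[i] = (e->op == OP_ADD) ? a[i] + b[i] : a[i] * b[i];
}

int main(void)
{
    double a[3] = {1, 2, 3}, b[3] = {4, 5, 6}, out[3];
    operation_entry e = { OP_MUL, 3 };   /* entry emitted by the compiler */
    block_compute(&e, a, b, out);
    for (int i = 0; i < 3; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}
```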
    • 96. Granted invention patent
    • Title: Specifying an access hint for prefetching limited use data in a cache hierarchy
    • Patent number: US08176254B2
    • Grant date: 2012-05-08
    • Application number: US12424681
    • Filing date: 2009-04-16
    • Inventors: Bradly G. Frey; Guy L. Guthrie; Cathy May; Balaram Sinharoy; Peter K. Szwed
    • IPC: G06F13/00
    • CPC: G06F12/0862; G06F12/0897; G06F2212/6028
    • Abstract: A system and method for specifying an access hint for prefetching limited use data. A processing unit receives a data cache block touch (DCBT) instruction having an access hint indicating to the processing unit that a program executing on the data processing system may soon access a cache block addressed within the DCBT instruction. The access hint is contained in a code point stored in a subfield of the DCBT instruction. In response to detecting that the code point is set to a specific value, the data addressed in the DCBT instruction is prefetched into an entry in the lower level cache. The entry may then be updated as a least recently used entry of a plurality of entries in the lower level cache. In response to a new cache block being fetched to the cache, the prefetched cache block is cast out of the cache.
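The mechanism in the abstract is a code point carried in a subfield of the PowerPC dcbt instruction; a portable, software-level analogue of the same kind of hint is GCC/Clang's __builtin_prefetch, whose third argument expresses expected temporal locality. The sketch below uses locality 0 to mark data as "use once, no need to retain," which captures the spirit of the limited-use hint, but it does not emit the DCBT code point described in the patent.

```c
#include <stdio.h>
#include <stddef.h>

/* Streaming sum over data that is touched exactly once. The prefetch hint
 * with locality 0 tells the compiler/hardware the block is worth fetching
 * ahead of time but need not be retained in cache afterwards, so it can be
 * evicted early -- analogous in intent to the limited-use access hint. */
static long sum_once(const long *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 8 < n)
            __builtin_prefetch(&data[i + 8], 0 /* read */, 0 /* use once */);
        total += data[i];   /* each element is read exactly once */
    }
    return total;
}

int main(void)
{
    long data[64];
    for (size_t i = 0; i < 64; i++)
        data[i] = (long)i;
    printf("%ld\n", sum_once(data, 64));
    return 0;
}
```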
    • 97. Granted invention patent
    • Title: Apparatus for randomizing instruction thread interleaving in a multi-thread processor
    • Patent number: US08145885B2
    • Grant date: 2012-03-27
    • Application number: US12112859
    • Filing date: 2008-04-30
    • Inventors: Ronald Nick Kalla; Minh Michelle Quy Pham; Balaram Sinharoy; John Wesley Ward, III
    • IPC: G06F9/44; G06F9/46
    • CPC: G06F9/3851
    • Abstract: A processor interleaves instructions according to a priority rule which determines the frequency with which instructions from each respective thread are selected and added to an interleaved stream of instructions to be processed in the data processor. The frequency with which each thread is selected according to the rule may be based on the priorities assigned to the instruction threads. A randomization is inserted into the interleaving process so that the selection of an instruction thread during any particular clock cycle is not based solely on the priority rule, but is also based in part on a random or pseudo random element. This randomization is inserted into the instruction thread selection process so as to vary the order in which instructions are selected from the various instruction threads while preserving the overall frequency of thread selection (i.e. how often threads are selected) set by the priority rule.
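A small software model can make the "randomized but frequency-preserving" selection concrete: within each window of cycles, every thread receives a number of issue slots proportional to its priority, and the slots are then shuffled so the per-cycle choice is not fixed by the priority rule alone. The window scheme, weights, and names below are illustrative assumptions, not the hardware design in the patent.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WINDOW 8

/* Fill one window of issue slots: each thread gets a slot count set by the
 * priority rule (its weight), then the slot order inside the window is
 * shuffled so the cycle-by-cycle choice carries a pseudo-random element
 * while the overall selection frequency is preserved. */
static void fill_window(int *slots, const int *weights, int nthreads)
{
    int k = 0;
    for (int t = 0; t < nthreads; t++)            /* frequency per priority rule */
        for (int w = 0; w < weights[t] && k < WINDOW; w++)
            slots[k++] = t;
    while (k < WINDOW)                            /* pad with thread 0 if needed */
        slots[k++] = 0;
    for (int i = WINDOW - 1; i > 0; i--) {        /* Fisher-Yates shuffle        */
        int j = rand() % (i + 1);
        int tmp = slots[i]; slots[i] = slots[j]; slots[j] = tmp;
    }
}

int main(void)
{
    srand((unsigned)time(NULL));
    int weights[2] = {6, 2};   /* thread 0 issues in 6 of 8 cycles, thread 1 in 2 */
    int slots[WINDOW];
    fill_window(slots, weights, 2);
    for (int i = 0; i < WINDOW; i++)
        printf("cycle %d -> thread %d\n", i, slots[i]);
    return 0;
}
```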
    • 99. Granted invention patent
    • Title: Group formation with multiple taken branches per group
    • Patent number: US08127115B2
    • Grant date: 2012-02-28
    • Application number: US12417798
    • Filing date: 2009-04-03
    • Inventors: Richard William Doing; Kevin Neal Magil; Balaram Sinharoy; Jeffrey R. Summers; James Albert Van Norstrand, Jr.
    • IPC: G06F9/30
    • CPC: G06F9/30145; G06F9/3802; G06F9/3814; G06F9/382; G06F9/3853
    • Abstract: Disclosed are a method and a system for grouping processor instructions for execution by a processor, where the group of processor instructions includes at least two branch processor instructions. In one or more embodiments, an instruction buffer can decouple an instruction fetch operation from an instruction decode operation by storing fetched processor instructions in the instruction buffer until the fetched processor instructions are ready to be decoded. Group formation can involve removing processor instructions from the instruction buffer and routing the processor instructions to latches that convey the processor instructions to decoders. Processor instructions that are removed from the instruction buffer in a single clock cycle can be called a group of processor instructions. In one or more embodiments, the first instruction in the group must be the oldest instruction in the instruction buffer and instructions must be removed from the instruction buffer ordered from oldest to youngest.
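The following C model sketches the group-formation rule described in the abstract under some assumed parameters: instructions wait in a FIFO buffer, each cycle a group is pulled from the head oldest-first, and a group may hold up to two branches before it is closed. The group size of four and the two-branch limit are illustrative choices, not figures from the patent.

```c
#include <stdio.h>
#include <stdbool.h>

#define GROUP_SIZE 4

typedef struct { int id; bool is_branch; } insn;

/* Form one group from buf[head..count): take instructions oldest-first, up
 * to GROUP_SIZE per cycle, allowing at most two branches per group (an
 * assumed limit); returns how many instructions were removed this cycle. */
static int form_group(const insn *buf, int head, int count, insn *group)
{
    int taken = 0, branches = 0;
    while (taken < GROUP_SIZE && head + taken < count) {
        insn cur = buf[head + taken];               /* oldest remaining insn */
        if (cur.is_branch && branches == 2)
            break;                                  /* a third branch opens a new group */
        group[taken++] = cur;
        if (cur.is_branch)
            branches++;
    }
    return taken;
}

int main(void)
{
    insn buffer[] = { {0,false}, {1,true}, {2,true}, {3,true},
                      {4,false}, {5,false}, {6,false} };
    int count = 7, head = 0, cycle = 0;
    insn group[GROUP_SIZE];
    while (head < count) {
        int n = form_group(buffer, head, count, group);
        printf("cycle %d group:", cycle++);
        for (int i = 0; i < n; i++)
            printf(" %d%s", group[i].id, group[i].is_branch ? "(br)" : "");
        printf("\n");
        head += n;
    }
    return 0;
}
```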