    • 42. Published application
    • UNIFIED CACHE STRUCTURE THAT FACILITATES ACCESSING TRANSLATION TABLE ENTRIES
    • Publication number: US20100205344A1
    • Publication date: 2010-08-12
    • Application number: US12367828
    • Filing date: 2009-02-09
    • Inventors: Paul Caprioli; Gregory M. Wright
    • IPC: G06F12/08; G06F12/00; G06F12/10
    • CPC: G06F12/1063; G06F12/1027
    • One embodiment provides a system that includes a processor with a unified cache structure that facilitates accessing translation table entries (TTEs). This unified cache structure can simultaneously store program instructions, program data, and TTEs. During a memory access, the system receives a virtual memory address. The system then uses this virtual memory address to identify one or more cache lines in the unified cache structure which are associated with the virtual memory address. Next, the system compares a tag portion of the virtual memory address with the tags for the identified cache line(s) to identify a cache line that matches the virtual memory address. The system then loads a translation table entry that corresponds to the virtual memory address from the identified cache line.
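
As an aside on the mechanism in US20100205344A1 above, the lookup can be modeled behaviorally in Python: a unified, set-associative structure whose lines may hold instructions, data, or translation table entries (TTEs), indexed and tag-matched by virtual address. The 4-way organization, bit-field widths, FIFO replacement, and TTE payload below are illustrative assumptions, not details taken from the patent.

    # Behavioral sketch only, not the patented design.
    LINE_BITS = 6     # 64-byte lines (assumed)
    SET_BITS = 7      # 128 sets (assumed)
    WAYS = 4          # 4-way associative (assumed)

    class Line:
        def __init__(self, tag, kind, payload):
            self.tag = tag          # tag portion of the virtual address
            self.kind = kind        # "insn", "data", or "tte"
            self.payload = payload  # e.g. a translation for a TTE line

    class UnifiedCache:
        def __init__(self):
            self.sets = [[] for _ in range(1 << SET_BITS)]

        @staticmethod
        def split(vaddr):
            index = (vaddr >> LINE_BITS) & ((1 << SET_BITS) - 1)
            tag = vaddr >> (LINE_BITS + SET_BITS)
            return index, tag

        def fill(self, vaddr, kind, payload):
            index, tag = self.split(vaddr)
            ways = self.sets[index]
            if len(ways) == WAYS:       # simple FIFO replacement (assumed)
                ways.pop(0)
            ways.append(Line(tag, kind, payload))

        def load_tte(self, vaddr):
            """Identify the lines for this virtual address, compare the tag
            portion, and return the TTE from the matching line (None on a miss)."""
            index, tag = self.split(vaddr)
            for line in self.sets[index]:
                if line.tag == tag and line.kind == "tte":
                    return line.payload
            return None

    cache = UnifiedCache()
    cache.fill(0x7F00001000, "tte", {"ppn": 0x1234, "perm": "rw"})
    print(cache.load_tte(0x7F00001000))   # {'ppn': 4660, 'perm': 'rw'}
    print(cache.load_tte(0x7F00002000))   # None (miss)
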
    • 44. Granted patent
    • Mechanism for hardware tracking of return address after tail call elimination of return-type instruction
    • Publication number: US07610474B2
    • Publication date: 2009-10-27
    • Application number: US11352147
    • Filing date: 2006-02-10
    • Inventors: Paul Caprioli; Sherman H. Yip; Shailender Chaudhry
    • IPC: G06F9/00
    • CPC: G06F9/3806; G06F9/3842; G06F9/3861
    • A technique maintains return address stack (RAS) content and alignment of a RAS top-of-stack (TOS) pointer upon detection of a tail-call elimination of a return-type instruction. In at least one embodiment of the invention, an apparatus includes a processor pipeline and at least a first return address stack for maintaining a stack of return addresses associated with instruction flow at a first stage of the processor pipeline. The processor pipeline is configured to maintain the first return address stack unchanged in response to detection of a tail-call elimination sequence of one or more instructions associated with a first call-type instruction encountered by the first stage. The processor pipeline is configured to push a return address associated with the first call-type instruction onto the first return address stack otherwise.
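
A behavioral sketch of the return-address-stack handling in US07610474B2 above, in Python: a call pushes its return address onto the RAS unless it is part of a detected tail-call-elimination sequence, in which case the RAS is deliberately left unchanged; a return pops the top of stack. The stack depth and the tail_call_eliminated flag (a stand-in for the pipeline's detection logic) are assumptions.

    class ReturnAddressStack:
        def __init__(self, depth=8):
            self.entries = []
            self.depth = depth

        def on_call(self, return_addr, tail_call_eliminated):
            if tail_call_eliminated:
                # Tail call: the callee returns directly to the original
                # caller, so the RAS already holds the right address.
                return
            if len(self.entries) == self.depth:
                self.entries.pop(0)          # oldest entry falls off
            self.entries.append(return_addr)

        def on_return(self):
            # The predicted return target is the top-of-stack entry.
            return self.entries.pop() if self.entries else None

    ras = ReturnAddressStack()
    ras.on_call(0x4000, tail_call_eliminated=False)   # f() calls g()
    ras.on_call(0x5000, tail_call_eliminated=True)    # g() tail-calls h()
    print(hex(ras.on_return()))                       # h() returns: 0x4000
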
    • 45. Published application
    • FACILITATING TRANSACTIONAL EXECUTION IN A PROCESSOR THAT SUPPORTS SIMULTANEOUS SPECULATIVE THREADING
    • Publication number: US20090254905A1
    • Publication date: 2009-10-08
    • Application number: US12061554
    • Filing date: 2008-04-02
    • Inventors: Sherman H. Yip; Paul Caprioli; Marc Tremblay
    • IPC: G06F9/46
    • CPC: G06F9/466; G06F9/3842; G06F9/3851; G06F12/0842
    • Embodiments of the present invention provide a system that executes a transaction on a simultaneous speculative threading (SST) processor. In these embodiments, the processor includes a primary strand and a subordinate strand. Upon encountering a transaction with the primary strand while executing instructions non-transactionally, the processor checkpoints the primary strand and executes the transaction with the primary strand while continuing to non-transactionally execute deferred instructions with the subordinate strand. When the subordinate strand non-transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate the first strand ID. When the primary strand transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate a second strand ID.
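
The per-cache-line bookkeeping in US20090254905A1 above can be sketched as follows. This only illustrates the strand-ID records, not the SST hardware; the strand IDs, the checkpoint representation, and the method names are assumed.

    SUBORDINATE_ID = 0   # first strand ID: non-transactional deferred work
    PRIMARY_ID = 1       # second strand ID: transactional accesses

    class SSTProcessor:
        def __init__(self):
            self.line_records = {}     # cache-line address -> strand ID
            self.checkpoint = None

        def begin_transaction(self, primary_state):
            # Checkpoint the primary strand before it executes the
            # transaction; the subordinate strand keeps running deferred
            # instructions non-transactionally.
            self.checkpoint = dict(primary_state)

        def primary_access(self, line_addr):
            self.line_records[line_addr] = PRIMARY_ID

        def subordinate_access(self, line_addr):
            self.line_records[line_addr] = SUBORDINATE_ID

    cpu = SSTProcessor()
    cpu.begin_transaction({"pc": 0x1000, "regs": [0] * 8})
    cpu.subordinate_access(0x80)   # deferred, non-transactional load
    cpu.primary_access(0xC0)       # transactional store
    print(cpu.line_records)        # {128: 0, 192: 1}
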
    • 46. Published application
    • METHOD AND APPARATUS FOR RECOVERING FROM BRANCH MISPREDICTION
    • Publication number: US20090210683A1
    • Publication date: 2009-08-20
    • Application number: US12033626
    • Filing date: 2008-02-19
    • Inventor: Paul Caprioli
    • IPC: G06F9/38
    • CPC: G06F9/3861; G06F9/383
    • Embodiments of the present invention provide a system that executes a branch instruction. When executing the branch instruction, the system obtains a stored prediction of a resolution of the branch instruction and fetches subsequent instructions for execution based on the predicted resolution of the branch instruction. If an actual resolution of the branch instruction is different from the predicted resolution (i.e., if the branch is mispredicted), the system updates the stored prediction of the resolution of the branch instruction to the actual resolution of the branch instruction. The system then re-executes the branch instruction. When re-executing the branch instruction, the system obtains the stored prediction of the resolution of the branch instruction and fetches subsequent instructions for execution based on the predicted resolution of the branch instruction.
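
The recovery loop in US20090210683A1 above reduces to: execute the branch with its stored prediction, and if the actual resolution differs, overwrite the stored prediction with the actual resolution and re-execute the branch. A minimal Python sketch, with the predictor table and the not-taken default assumed:

    class BranchUnit:
        def __init__(self):
            self.prediction = {}     # branch PC -> predicted taken / not taken

        def execute(self, pc, actual_taken):
            predicted = self.prediction.get(pc, False)   # default: not taken
            # Fetch down the predicted path (elided), then resolve the branch.
            if predicted != actual_taken:
                # Misprediction: store the actual resolution and re-execute.
                self.prediction[pc] = actual_taken
                return self.execute(pc, actual_taken)
            return "fetching from " + ("target" if predicted else "fall-through")

    bu = BranchUnit()
    print(bu.execute(0x2000, actual_taken=True))   # mispredict, then re-execute
    print(bu.execute(0x2000, actual_taken=True))   # predicted correctly now
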
    • 47. Granted patent
    • Entering scout-mode when stores encountered during execute-ahead mode exceed the capacity of the store buffer
    • Publication number: US07484080B2
    • Publication date: 2009-01-27
    • Application number: US11103912
    • Filing date: 2005-04-11
    • Inventors: Shailender Chaudhry; Marc Tremblay; Paul Caprioli
    • IPC: G06F9/00
    • CPC: G06F9/3863; G06F9/383; G06F9/3834; G06F9/3836; G06F9/3838; G06F9/384; G06F9/3842; G06F9/3857; G06F9/3865
    • One embodiment of the present invention provides a system that facilitates deferring execution of instructions with unresolved data dependencies as they are issued for execution in program order. During a normal execution mode, the system issues instructions for execution in program order. Upon encountering an unresolved data dependency during execution of an instruction, the system generates a checkpoint that can subsequently be used to return execution of the program to the point of the instruction. Next, the system executes the instruction and subsequent instructions in an execute-ahead mode, wherein instructions that cannot be executed because of an unresolved data dependency are deferred, and wherein other non-deferred instructions are executed in program order. Upon encountering a store during the execute-ahead mode, the system determines if the store buffer is full. If so, the system prefetches a cache line for the store, and defers execution of the store. If the number of stores that are encountered during execute-ahead mode exceeds the capacity of the store buffer, which means that the store buffer will never have additional space to accept additional stores during the execute-ahead mode because the store buffer is gated, the system directly enters the scout mode, without waiting for the deferred queue to eventually fill.
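
A simplified model of the mode-switch rule in US07484080B2 above: a store that finds the store buffer full during execute-ahead mode is deferred after its cache line is prefetched, and once the number of stores seen in execute-ahead mode exceeds the store buffer capacity, the model enters scout mode directly instead of waiting for the deferred queue to fill. The buffer size and class layout are invented for illustration.

    STORE_BUFFER_CAPACITY = 4          # assumed size

    class ExecuteAheadModel:
        def __init__(self):
            self.mode = "execute-ahead"
            self.store_buffer = []
            self.deferred = []
            self.stores_seen = 0

        def prefetch(self, addr):
            pass                       # stand-in for a cache-line prefetch

        def on_store(self, addr, value):
            self.stores_seen += 1
            if len(self.store_buffer) < STORE_BUFFER_CAPACITY:
                self.store_buffer.append((addr, value))
            else:
                self.prefetch(addr)    # warm the line, then defer the store
                self.deferred.append((addr, value))
            if self.stores_seen > STORE_BUFFER_CAPACITY:
                # The gated store buffer can never drain in this mode,
                # so go straight to scout mode.
                self.mode = "scout"

    m = ExecuteAheadModel()
    for i in range(5):
        m.on_store(0x1000 + 8 * i, i)
    print(m.mode)            # scout
    print(len(m.deferred))   # 1
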
    • 50. Granted patent
    • Circuitry and method for accessing an associative cache with parallel determination of data and data availability
    • Publication number: US07461208B1
    • Publication date: 2008-12-02
    • Application number: US11155147
    • Filing date: 2005-06-16
    • Inventors: Paul Caprioli; Sherman H. Yip; Shailender Chaudhry
    • IPC: G06F13/16
    • CPC: G06F12/0864; G06F2212/1016; G06F2212/6082
    • A circuit for accessing an associative cache is provided. The circuit includes data selection circuitry and an outcome parallel processing circuit both in communication with the associative cache. The outcome parallel processing circuit is configured to determine whether an accessing of data from the associative cache is one of a cache hit, a cache miss, or a cache mispredict. The circuit further includes a memory in communication with the data selection circuitry and the outcome parallel processing circuit. The memory is configured to store a bank select table, whereby the bank select table is configured to include entries that define a selection of one of a plurality of banks of the associative cache from which to output data. Methods for accessing the associative cache are also described.
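
A rough behavioral sketch of the access path in US07461208B1 above, in Python: a bank select table picks the bank (way) to read speculatively while, conceptually in parallel, the tags of all banks in the set are compared to classify the access as a hit, a miss, or a bank mispredict. The table indexing, sizes, and the training step on a mispredict are assumptions.

    BANKS = 4
    SETS = 16

    class BankPredictedCache:
        def __init__(self):
            # tags[set][bank] and data[set][bank]; None means empty
            self.tags = [[None] * BANKS for _ in range(SETS)]
            self.data = [[None] * BANKS for _ in range(SETS)]
            self.bank_select = [0] * SETS     # the bank select table

        def fill(self, index, bank, tag, value):
            self.tags[index][bank] = tag
            self.data[index][bank] = value

        def access(self, index, tag):
            predicted_bank = self.bank_select[index]
            speculative_data = self.data[index][predicted_bank]   # early read
            # Outcome determination, conceptually in parallel with the read:
            matches = [b for b in range(BANKS) if self.tags[index][b] == tag]
            if not matches:
                return "miss", None
            actual_bank = matches[0]
            if actual_bank == predicted_bank:
                return "hit", speculative_data
            self.bank_select[index] = actual_bank   # retrain the table
            return "mispredict", self.data[index][actual_bank]

    c = BankPredictedCache()
    c.fill(index=3, bank=2, tag=0xAB, value="payload")
    print(c.access(3, 0xAB))   # ('mispredict', 'payload'); table now points at bank 2
    print(c.access(3, 0xAB))   # ('hit', 'payload')
    print(c.access(3, 0xCD))   # ('miss', None)
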