    • 81. Granted invention patent
    • Pseudo-LRU cache line replacement for a high-speed cache
    • Publication number: US08364900B2
    • Publication date: 2013-01-29
    • Application number: US12029889
    • Filing date: 2008-02-12
    • Inventors: Paul Caprioli, Sherman H. Yip, Shailender Chaudhry
    • IPC: G06F12/00
    • CPC: G06F12/125; G06F12/0864; Y02D10/13
    • Embodiments of the present invention provide a system that replaces an entry in a least-recently-used way in a skewed-associative cache. The system starts by receiving a cache line address. The system then generates two or more indices using the cache line address. Next, the system generates two or more intermediate indices using the two or more indices. The system then uses at least one of the two or more indices or the two or more intermediate indices to perform a lookup in one or more lookup tables, wherein the lookup returns a value which identifies a least-recently-used way. Next, the system replaces the entry in the least-recently-used way.
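The lookup flow described in the abstract can be sketched in software. The hash functions, table layout, and two-way configuration below are illustrative assumptions, not the patented circuit; the point is the claimed sequence: generate indices from the line address, derive an intermediate index, and use them to look up the least-recently-used way.

```python
# Minimal software model of the claimed lookup flow. skew_index, the
# table shape, and the two-way setup are invented for illustration.

def skew_index(addr: int, way: int, sets: int = 256) -> int:
    """Per-way set index for a skewed-associative cache (toy hash)."""
    return ((addr >> 6) ^ (addr >> (9 + 3 * way))) % sets

def least_recently_used_way(addr: int, lru_table: dict) -> int:
    """Generate two indices from the cache line address, derive an
    intermediate index, and look up which way is least recently used."""
    idx = [skew_index(addr, w) for w in (0, 1)]   # two or more indices
    inter = (idx[0] + idx[1]) % 256               # intermediate index
    # The lookup-table value identifies the LRU way; default to way 0.
    return lru_table.get((idx[0], inter), 0)

def record_use(addr: int, used_way: int, lru_table: dict, ways: int = 2) -> None:
    """Update the table so the other way becomes the replacement victim."""
    idx = [skew_index(addr, w) for w in (0, 1)]
    inter = (idx[0] + idx[1]) % 256
    lru_table[(idx[0], inter)] = (used_way + 1) % ways

table: dict = {}
addr = 0x4020_1F40
victim = least_recently_used_way(addr, table)   # no history yet: way 0
record_use(addr, victim, table)                 # way 0 just used
print(least_recently_used_way(addr, table))     # victim is now way 1
```

A real pseudo-LRU implementation would track more than a single bit per set, but the table-lookup shape of the decision is the same.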
    • 83. Granted invention patent
    • Method and apparatus for synchronizing threads on a processor that supports transactional memory
    • Publication number: US07930695B2
    • Publication date: 2011-04-19
    • Application number: US11418652
    • Filing date: 2006-05-05
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F9/46
    • CPC: G06F9/52; G06F9/3004; G06F9/30087; G06F9/3834; G06F9/3857
    • One embodiment of the present invention provides a system that synchronizes threads on a multi-threaded processor. The system starts by executing instructions from a multi-threaded program using a first thread and a second thread. When the first thread reaches a predetermined location in the multi-threaded program, the first thread executes a Start-Transactional-Execution (STE) instruction to commence transactional execution, wherein the STE instruction specifies a location to branch to if transactional execution fails. During the subsequent transactional execution, the first thread accesses a mailbox location in memory (which is also accessible by the second thread) and then executes instructions that cause the first thread to wait. When the second thread reaches a second predetermined location in the multi-threaded program, the second thread signals the first thread by accessing the mailbox location, which causes the transactional execution of the first thread to fail, thereby causing the first thread to resume non-transactional execution from the location specified in the STE instruction. In this way, the second thread can signal to the first thread without the first thread having to poll a shared variable.
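The mailbox signalling pattern can be modelled in ordinary software. On the patented processor the abort happens in hardware when the second thread's store hits the first thread's transactional read set; the sketch below emulates that with a `threading.Event`, and all names (`Mailbox`, `fail_handler`) are assumptions for illustration.

```python
# Software analogue of mailbox-based thread signalling: thread 1 "reads"
# the mailbox and waits; thread 2's write to the mailbox aborts thread 1,
# which resumes at the fail handler named by the STE instruction.
import threading

class Mailbox:
    def __init__(self):
        self.signalled = threading.Event()

result = []

def fail_handler():
    # Branch target specified by the STE instruction: execution resumes
    # here, non-transactionally, after the transaction fails.
    result.append("resumed")

def first_thread(mailbox: Mailbox):
    # "Transactional" section: the mailbox is in our read set, then we
    # wait. A write by the second thread aborts us (models the conflict).
    aborted = mailbox.signalled.wait(timeout=5.0)
    if aborted:
        fail_handler()

def second_thread(mailbox: Mailbox):
    mailbox.signalled.set()   # store to the mailbox -> aborts thread 1

mb = Mailbox()
t1 = threading.Thread(target=first_thread, args=(mb,))
t2 = threading.Thread(target=second_thread, args=(mb,))
t1.start(); t2.start()
t1.join(); t2.join()
print(result)   # ['resumed']
```

The benefit claimed by the patent is visible even in the analogue: the first thread never polls a shared variable in a software loop; it simply waits until the conflicting access wakes it.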
    • 84. Granted invention patent
    • Method and apparatus for selectively prefetching based on resource availability
    • Publication number: US07707359B2
    • Publication date: 2010-04-27
    • Application number: US11390896
    • Filing date: 2006-03-27
    • Inventors: Wayne Mesard, Paul Caprioli
    • IPC: G06F12/00
    • CPC: G06F12/0862; G06F9/30047; G06F9/30072; G06F9/3802; G06F9/383; G06F2212/6028
    • One embodiment of the present invention provides a system which facilitates selective prefetching based on resource availability. During operation, the system executes instructions in a processor. While executing the instructions, the system monitors the availability of one or more system resources and dynamically adjusts an availability indicator for each system resource based on the current availability of the system resource. Upon encountering a prefetch instruction which involves the system resource, the system checks the availability indicator. If the availability indicator indicates that the system resource is not sufficiently available, the system terminates the execution of the prefetch instruction, whereby terminating execution prevents prefetch instructions from overwhelming the system resource.
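The availability-gated prefetch check reduces to a simple comparison. In the patent the availability indicator is maintained by hardware; the resource, threshold, and counters below are illustrative assumptions.

```python
# Sketch of selective prefetching: a prefetch executes only while the
# monitored resource (e.g. outstanding-miss buffers) is sufficiently
# available; otherwise it is terminated rather than queued.

class ResourceMonitor:
    """Tracks free slots in one system resource (capacity is assumed)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.in_use = 0

    def availability(self) -> float:
        return (self.capacity - self.in_use) / self.capacity

issued = []

def prefetch(addr: int, monitor: ResourceMonitor, min_avail: float = 0.25) -> bool:
    """Issue the prefetch only if the resource is sufficiently available,
    so prefetches cannot overwhelm the resource."""
    if monitor.availability() < min_avail:
        return False            # terminate the prefetch instruction
    monitor.in_use += 1
    issued.append(addr)
    return True

mon = ResourceMonitor(capacity=4)
results = [prefetch(a, mon) for a in range(6)]
print(results)   # four prefetches issue, then the resource is exhausted
```

Demand loads would still proceed when a prefetch is dropped; only the optional prefetch traffic is shed.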
    • 85. Granted invention patent
    • Method and apparatus for using multiple threads to speculatively execute instructions
    • Publication number: US07634641B2
    • Publication date: 2009-12-15
    • Application number: US11361257
    • Filing date: 2006-04-24
    • Inventors: Shailender Chaudhry, Marc Tremblay, Paul Caprioli
    • IPC: G06F9/00; G06F9/40
    • CPC: G06F9/3851; G06F9/383; G06F9/3842; G06F9/3863
    • One embodiment of the present invention provides a system which performs simultaneous speculative threading. The system starts by executing instructions in normal execution mode using a first thread. Upon encountering a data-dependent stall condition, the first thread generates an architectural checkpoint and commences execution of instructions in execute-ahead mode. During execute-ahead mode, the first thread executes instructions that can be executed and defers instructions that cannot be executed into a deferred queue. When the data-dependent stall condition has been resolved, the first thread generates a speculative checkpoint and continues execution in execute-ahead mode. At the same time, a second thread commences execution in a deferred mode. During execution in the deferred mode, the second thread executes instructions deferred by the first thread.
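The execute-ahead/deferred-mode split can be modelled as two passes over a queue. Instruction and operand names below are invented for illustration; real hardware tracks data dependences and checkpoints architectural state rather than consulting a Python set.

```python
# Toy model of simultaneous speculative threading: the first thread
# executes what it can and defers data-dependent work; the second thread
# drains the deferred queue once the stall is resolved.
from collections import deque

def execute_ahead(program, ready):
    """First thread: execute instructions whose operands are ready,
    defer the rest into the deferred queue."""
    deferred = deque()
    done = []
    for instr, operand in program:
        if operand in ready:
            done.append(instr)                  # executable now
        else:
            deferred.append((instr, operand))   # data-dependent: defer
    return done, deferred

def deferred_mode(deferred, ready):
    """Second thread: execute the instructions the first thread deferred."""
    return [instr for instr, operand in deferred if operand in ready]

# r2 is the target of an outstanding load; r1 is available.
program = [("add", "r1"), ("load_use", "r2"), ("sub", "r1"), ("mul", "r2")]
done, deferred = execute_ahead(program, ready={"r1"})
ready = {"r1", "r2"}                 # the load has returned
drained = deferred_mode(deferred, ready)
print(done, drained)   # ['add', 'sub'] ['load_use', 'mul']
```

The two functions here run sequentially; in the patent they run on two hardware threads at the same time, which is the "simultaneous" part of the technique.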
    • 88. Granted invention patent
    • Patchable and/or programmable pre-decode
    • Publication number: US07509481B2
    • Publication date: 2009-03-24
    • Application number: US11277735
    • Filing date: 2006-03-28
    • Inventors: Shailender Chaudhry, Paul Caprioli, Quinn A. Jacobson, Marc Tremblay
    • IPC: G06F9/00
    • CPC: G06F9/30145; G06F9/30174; G06F9/30196; G06F9/382; G06F9/3822; G06F9/3897
    • Mechanisms have been developed for providing great flexibility in processor instruction handling, sequencing, and execution. In particular, it has been discovered that a programmable pre-decode mechanism can be employed to alter the behavior of a processor. For example, pre-decode hints for sequencing, synchronization, or speculation control may be altered, or mappings of ISA instructions to native instructions or operation sequences may be altered. Such techniques may be employed to adapt a processor implementation (in the field) to varying memory models, implementations, or interfaces, or to varying memory latencies or timing characteristics. Similarly, such techniques may be employed to adapt a processor implementation to correspond to an extended/adapted instruction set architecture. In some realizations, instruction pre-decode functionality may be adapted at processor run-time to handle or mitigate a timing, concurrency, or speculation issue. In some realizations, operation of pre-decode may be reprogrammed post-manufacture, at (or about) initialization, or at run-time.
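A patchable pre-decode stage is, in essence, a writable table consulted before decode. The opcode values, hint names, and patch API below are illustrative assumptions, not the patented hardware interface.

```python
# Sketch of a reprogrammable pre-decode table: a mapping from opcodes to
# pre-decode hints and native-operation sequences that can be patched
# post-manufacture, at initialization, or at run-time.

predecode_table = {
    0x10: {"native_ops": ["load"],  "hint": "may_speculate"},
    0x20: {"native_ops": ["store"], "hint": "serialize"},
}

def predecode(opcode: int) -> dict:
    """Consult the table before decode; unknown opcodes trap."""
    return predecode_table.get(opcode, {"native_ops": ["trap"], "hint": "none"})

def patch_predecode(opcode: int, native_ops, hint: str) -> None:
    """Alter the ISA-to-native mapping or the hint for one opcode,
    e.g. to tighten ordering after a memory-model erratum is found."""
    predecode_table[opcode] = {"native_ops": list(native_ops), "hint": hint}

before = predecode(0x10)["hint"]                       # 'may_speculate'
patch_predecode(0x10, ["membar", "load"], "serialize") # field patch
after = predecode(0x10)["hint"]                        # 'serialize'
print(before, after)
```

Because the table is data rather than fixed logic, the same silicon can be adapted in the field to a stricter memory model or an extended instruction set, which is the flexibility the abstract describes.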
    • 89. Granted invention patent
    • Collapsible front-end translation for instruction fetch
    • Publication number: US07509472B2
    • Publication date: 2009-03-24
    • Application number: US11345165
    • Filing date: 2006-02-01
    • Inventors: Paul Caprioli, Shailender Chaudhry
    • IPC: G06F9/26; G06F9/30; G06F9/40
    • CPC: G06F9/3804; G06F9/30054; G06F12/0875; G06F12/10; G06F2212/655; Y02D10/13
    • Address translation for instruction fetching can be obviated for sequences of instruction instances that reside on a same page. Obviating address translation reduces power consumption and increases pipeline efficiency since accessing of an address translation buffer can be avoided. Certain events, such as branch mis-predictions and exceptions, can be designated as page boundary crossing events. In addition, carry over at a particular bit position when computing a branch target or a next instruction instance fetch target can also be designated as a page boundary crossing event. An address translation buffer is accessed to translate an address representation of a first instruction instance. However, until a page boundary crossing event occurs, the address representations of subsequent instruction instances are not translated. Instead, the translated portion of the address representation for the first instruction instance is recycled for the subsequent instruction instances.
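Recycling the translated page bits between page-boundary-crossing events can be shown with a small model. The 4 KiB page size, TLB dictionary, and lookup counter are illustrative assumptions; the point is that the translation buffer is consulted only when a page boundary is crossed.

```python
# Model of collapsible front-end translation: translate the first fetch,
# then reuse the cached physical page for sequential fetches until a
# page-boundary-crossing event forces a fresh translation.

PAGE_SHIFT = 12            # 4 KiB pages (assumption)
tlb_lookups = 0

def translate(vaddr: int, tlb: dict) -> int:
    """Full translation: one access to the address translation buffer."""
    global tlb_lookups
    tlb_lookups += 1
    return tlb[vaddr >> PAGE_SHIFT]

def fetch_addresses(vaddrs, tlb):
    """Recycle the translated page portion across sequential fetches."""
    phys = []
    cached_vpage = cached_ppage = None
    for va in vaddrs:
        vpage = va >> PAGE_SHIFT
        if vpage != cached_vpage:          # page-boundary-crossing event
            cached_ppage = translate(va, tlb)
            cached_vpage = vpage
        offset = va & ((1 << PAGE_SHIFT) - 1)
        phys.append((cached_ppage << PAGE_SHIFT) | offset)
    return phys

tlb = {0x400: 0x9A, 0x401: 0x9B}
# Four sequential 4-byte fetches, the last two on the next page.
addrs = [0x400FF8, 0x400FFC, 0x401000, 0x401004]
phys = fetch_addresses(addrs, tlb)
print(tlb_lookups)   # 2 lookups for 4 fetches
```

A hardware version would also treat branch mispredictions and exceptions as page-boundary-crossing events, since the next fetch may land on a different page.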