    • 5. Granted patent
    • Method and apparatus for reducing register file access times in pipelined processors
    • Publication number: US06934830B2
    • Publication date: 2005-08-23
    • Application number: US10259721
    • Filing date: 2002-09-26
    • Inventors: Sudarshan Kadambi, Adam R. Talcott, Wayne I. Yamamoto
    • IPC: G06F9/30, G06F9/38
    • CPC: G06F9/30138, G06F9/3824, G06F9/3857
    • One embodiment of the present invention provides a system that reduces the time required to access registers from a register file within a processor. During operation, the system receives an instruction to be executed, wherein the instruction identifies at least one operand to be accessed from the register file. Next, the system looks up the operands in a register pane, wherein the register pane is smaller and faster than the register file and contains copies of a subset of registers from the register file. If the lookup is successful, the system retrieves the operands from the register pane to execute the instruction. Otherwise, if the lookup is not successful, the system retrieves the operands from the register file, and stores the operands into the register pane. This triggers the system to reissue the instruction to be executed again, so that the re-issued instruction retrieves the operands from the register pane.
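A minimal C++ sketch of the register-pane scheme described in the abstract above. It is a software model under stated assumptions only: the name RegisterFileModel, the register-file and pane sizes, and the naive eviction policy are illustrative and are not taken from the patent.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <utility>

// Hypothetical model: a small "register pane" holds copies of a subset
// of the register file. A miss fills the pane and asks the caller to
// reissue the instruction so the retry reads from the pane.
struct RegisterFileModel {
    std::array<uint64_t, 128> regs{};               // full register file
    std::unordered_map<int, uint64_t> pane;         // smaller, faster subset
    static constexpr std::size_t kPaneCapacity = 8; // assumed pane size

    // Returns the operand value and whether the instruction must be
    // reissued because the pane had to be filled from the register file.
    std::pair<uint64_t, bool> readOperand(int regIndex) {
        if (auto it = pane.find(regIndex); it != pane.end()) {
            return {it->second, /*reissue=*/false};  // pane hit
        }
        if (pane.size() >= kPaneCapacity) {
            pane.erase(pane.begin());                // naive eviction, assumed
        }
        uint64_t value = regs[regIndex];             // slower register file read
        pane.emplace(regIndex, value);               // install a copy in the pane
        return {value, /*reissue=*/true};            // trigger the reissue
    }
};
```

A caller that sees the reissue flag set would retry the instruction, and the retried lookup then hits in the pane.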
    • 7. Patent application
    • Data Cache Block Zero Implementation
    • Publication number: US20100106916A1
    • Publication date: 2010-04-29
    • Application number: US12650075
    • Filing date: 2009-12-30
    • Inventors: Ramesh Gunna, Sudarshan Kadambi, Peter J. Bannon
    • IPC: G06F12/08, G06F12/00
    • CPC: G06F12/0808, G06F9/30047, G06F9/383, G06F9/3834, G06F9/3842, G06F9/3861, G06F12/0815, G06F2212/507
    • In one embodiment, a processor comprises a core configured to execute a data cache block write instruction and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.
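A hedged C++ sketch of the speculative versus non-speculative distinction in the abstract above, using the probe-versus-invalidate example the abstract itself gives. The names Transaction, CacheBlockWriteRequest, and InterfaceUnitModel are assumptions made for this illustration.

```cpp
#include <cstdint>
#include <iostream>

enum class Transaction { Probe, Invalidate };

struct CacheBlockWriteRequest {
    uint64_t blockAddress;
    bool speculative;  // true while the block-write instruction may still be squashed
};

struct InterfaceUnitModel {
    // Chooses the interconnect transaction for a data cache block write
    // request, per the abstract: one transaction type for speculative
    // requests and a different one for non-speculative requests.
    Transaction issue(const CacheBlockWriteRequest& req) const {
        if (req.speculative) {
            return Transaction::Probe;       // first transaction (abstract's example)
        }
        return Transaction::Invalidate;      // second transaction (abstract's example)
    }
};

int main() {
    InterfaceUnitModel iface;
    std::cout << (iface.issue({0x1000, true}) == Transaction::Probe) << '\n';        // prints 1
    std::cout << (iface.issue({0x1000, false}) == Transaction::Invalidate) << '\n';  // prints 1
}
```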
    • 9. Patent application
    • Partial load/store forward prediction
    • Publication number: US20070038846A1
    • Publication date: 2007-02-15
    • Application number: US11200744
    • Filing date: 2005-08-10
    • Inventors: Sudarshan Kadambi, Po-Yung Chang, Eric Hao
    • IPC: G06F9/44
    • CPC: G06F9/3834, G06F9/30043, G06F9/30145, G06F9/3017, G06F9/3826, G06F9/3838
    • In one embodiment, a processor comprises a prediction circuit and another circuit coupled to the prediction circuit. The prediction circuit is configured to predict whether or not a first load instruction will experience a partial store to load forward (PSTLF) event during execution. A PSTLF event occurs if a plurality of bytes, accessed responsive to the first load instruction during execution, include at least a first byte updated responsive to a previous uncommitted store operation and also include at least a second byte not updated responsive to the previous uncommitted store operation. Coupled to receive the first load instruction, the circuit is configured to generate one or more load operations responsive to the first load instruction. The load operations are to be executed in the processor to execute the first load instruction, and a number of the load operations is dependent on the prediction by the prediction circuit.
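A rough C++ model of the prediction described in the abstract above: a predictor guesses whether a load will hit a partial store-to-load forward (PSTLF) case, and the number of load operations generated depends on that prediction. The saturating-counter table, its PC-based indexing, and the byte-granular split are assumptions for illustration, not details taken from the patent.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical predictor: a table of 2-bit saturating counters indexed
// by a hash of the load's program counter.
struct PstlfPredictorModel {
    std::vector<uint8_t> counters = std::vector<uint8_t>(1024, 0);

    bool predict(uint64_t loadPC) const {
        return counters[loadPC % counters.size()] >= 2;
    }

    // Train with the outcome actually observed when the load executed.
    void update(uint64_t loadPC, bool pstlfOccurred) {
        uint8_t& c = counters[loadPC % counters.size()];
        if (pstlfOccurred && c < 3) ++c;
        if (!pstlfOccurred && c > 0) --c;
    }
};

struct LoadOp { uint64_t address; unsigned sizeBytes; };

// Emit one wide load operation normally, or several narrower ones when a
// PSTLF event is predicted, so each piece can be satisfied independently.
std::vector<LoadOp> generateLoadOps(const PstlfPredictorModel& pred,
                                    uint64_t loadPC, uint64_t addr,
                                    unsigned sizeBytes) {
    if (!pred.predict(loadPC)) {
        return {{addr, sizeBytes}};          // single load operation
    }
    std::vector<LoadOp> ops;
    for (unsigned i = 0; i < sizeBytes; ++i) {
        ops.push_back({addr + i, 1});        // assumed byte-granular split
    }
    return ops;
}
```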
    • 10. Granted patent
    • Method and apparatus for reducing the effects of hot spots in cache memories
    • Publication number: US06948032B2
    • Publication date: 2005-09-20
    • Application number: US10354327
    • Filing date: 2003-01-29
    • Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto
    • IPC: G06F12/00, G06F12/08
    • CPC: G06F12/0897
    • One embodiment of the present invention provides a system that uses a hot spot cache to alleviate the performance problems caused by hot spots in cache memories, wherein the hot spot cache stores lines that are evicted from hot spots in the cache. Upon receiving a memory operation at the cache, the system performs a lookup for the memory operation in both the cache and the hot spot cache in parallel. If the memory operation is a read operation that causes a miss in the cache and a hit in the hot spot cache, the system reads a data line for the read operation from the hot spot cache, writes the data line to the cache, performs the read operation on the data line in the cache, and then evicts the data line from the hot spot cache.
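A simplified C++ model of the read path in the abstract above, assuming both the cache and the hot spot cache can be represented as maps from line address to line data. Associativity, the path that evicts hot-spot lines into the hot spot cache, and timing are omitted; all names here are assumptions for this sketch.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

struct HotSpotCacheModel {
    std::unordered_map<uint64_t, uint64_t> cache;    // main cache lines
    std::unordered_map<uint64_t, uint64_t> hotSpot;  // lines evicted from hot spots

    // Read path: look up both structures (in hardware this would happen
    // in parallel); on a cache miss that hits the hot spot cache, move
    // the line into the cache, satisfy the read there, then evict it
    // from the hot spot cache.
    std::optional<uint64_t> read(uint64_t lineAddr) {
        auto inCache = cache.find(lineAddr);
        auto inHot = hotSpot.find(lineAddr);
        if (inCache != cache.end()) {
            return inCache->second;                  // hit in the main cache
        }
        if (inHot != hotSpot.end()) {
            uint64_t line = inHot->second;
            cache[lineAddr] = line;                  // write the line into the cache
            hotSpot.erase(inHot);                    // evict it from the hot spot cache
            return line;                             // read served from the cache copy
        }
        return std::nullopt;                         // miss in both structures
    }
};
```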