    • 1. Granted invention patent
    • Title: System interface protocol with optional module cache
    • Publication number: US5987544A
    • Publication date: 1999-11-16
    • Application number: US525114
    • Filing date: 1995-09-08
    • Inventors: Peter J. Bannon; Anil K. Jain; John H. Edmondson; Ruben William Sixtus Castelino
    • IPC: G06F12/08; G06F12/06
    • CPC: G06F12/0831
    • Abstract: A computer system includes a plurality of processor modules coupled to a system bus, each processor module including a processor interfaced to the system bus. The processor module has a backup cache memory and tag store. An index bus is coupled between the processor and the backup cache and backup cache tag store, and carries only the index portion of a memory address to the backup cache and the tag store. A duplicate tag store is coupled to an interface, the duplicate tag memory including means for storing duplicate tag addresses and duplicate tag valid, shared, and dirty bits. The duplicate tag store and the separate index bus provide higher performance from the processor by minimizing external interrupts to the processor to check cache status, and also allow other processors access to the processor's duplicate tag while the processor is processing other transactions.
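The key idea in this abstract is that a bus-side copy of the cache tags can answer snoops from other processors without interrupting the processor itself. A minimal sketch of that idea, with all class and method names invented for illustration:

```python
# Hypothetical model of a duplicate tag store: the bus interface keeps
# its own copy of the backup cache's tags (with valid/shared/dirty
# bits), so bus snoops are answered without involving the processor.

class DuplicateTagStore:
    """Bus-side copy of the backup cache's tag array."""

    def __init__(self, num_sets):
        # Each entry: (tag, valid, shared, dirty)
        self.entries = [(None, False, False, False)] * num_sets

    def update(self, index, tag, valid, shared, dirty):
        # Kept in lockstep with the processor's own tag store whenever
        # the backup cache changes.
        self.entries[index] = (tag, valid, shared, dirty)

    def snoop(self, index, tag):
        """Answer a bus snoop for (index, tag).

        Returns (hit, dirty): hit means this cache holds the line;
        dirty means it must supply the data to the requester.
        """
        e_tag, valid, _shared, dirty = self.entries[index]
        hit = valid and e_tag == tag
        return (hit, hit and dirty)


dup = DuplicateTagStore(num_sets=4)
dup.update(index=2, tag=0xABC, valid=True, shared=False, dirty=True)

print(dup.snoop(2, 0xABC))  # hit on a dirty line: processor not interrupted
print(dup.snoop(2, 0xDEF))  # tag mismatch: clean miss, answered at the bus
```

The performance claim in the abstract corresponds to the fact that `snoop` reads only the bus-side copy: only a true hit would require any action from the processor.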
    • 2. Granted invention patent
    • Title: Method for increasing system bandwidth through an on-chip address lock register
    • Publication number: US5615167A
    • Publication date: 1997-03-25
    • Application number: US525106
    • Filing date: 1995-09-08
    • Inventors: Anil K. Jain; John H. Edmondson; Peter J. Bannon
    • IPC: G06F9/46; G11C13/00
    • CPC: G06F9/526; G06F9/3004; G06F9/30072; G06F9/30087
    • Abstract: A computer system comprising one or more processor modules. Each processor module comprises: a central processing unit with a storage element, disposed in the central processing unit, dedicated to storing a semaphore address lock value and a semaphore lock flag value; a cache memory system for storing data and instruction values used by the central processing unit; a system bus interface for communicating with other processor modules over a system bus; a memory system, implemented as a common system resource available to the processor modules, for storing data and instructions; an I/O system, implemented as a common system resource available to the plurality of processor modules, for each to communicate with data input devices and data output devices; and a system bus connecting the processor module to the memory system and to the I/O system.
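The dedicated storage element holding a semaphore address and lock flag is the hardware that load-locked/store-conditional style synchronization relies on: a conditional store succeeds only if no other agent has touched the locked address in between. A toy model of that behavior, with all names invented for illustration (the patent's actual protocol details are not reproduced here):

```python
# Illustrative model of an on-chip address lock register: load_locked
# records the semaphore address and sets the lock flag;
# store_conditional succeeds only if the flag is still set for that
# address; a snooped write from another agent clears the flag.

class AddressLockRegister:
    def __init__(self):
        self.lock_addr = None   # semaphore address lock value
        self.lock_flag = False  # semaphore lock flag value

class Processor:
    def __init__(self, memory):
        self.memory = memory
        self.alr = AddressLockRegister()

    def load_locked(self, addr):
        self.alr.lock_addr = addr
        self.alr.lock_flag = True
        return self.memory[addr]

    def store_conditional(self, addr, value):
        if self.alr.lock_flag and self.alr.lock_addr == addr:
            self.memory[addr] = value
            self.alr.lock_flag = False
            return True   # store succeeded
        return False      # reservation lost; caller must retry

    def snoop_write(self, addr):
        # Another processor wrote this address: drop the reservation.
        if self.alr.lock_addr == addr:
            self.alr.lock_flag = False


mem = {0x100: 0}
cpu = Processor(mem)

old = cpu.load_locked(0x100)
print(cpu.store_conditional(0x100, old + 1))  # uncontended: succeeds

old = cpu.load_locked(0x100)
cpu.snoop_write(0x100)                        # another agent intervened
print(cpu.store_conditional(0x100, old + 1))  # fails: must retry
```

Keeping the lock state on-chip means the failure case is detected without a round trip to memory, which is where the bandwidth saving in the title comes from.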
    • 4. Granted invention patent
    • Title: Data cache block zero implementation
    • Publication number: US08301843B2
    • Publication date: 2012-10-30
    • Application number: US12650075
    • Filing date: 2009-12-30
    • Inventors: Ramesh Gunna; Sudarshan Kadambi; Peter J. Bannon
    • IPC: G06F12/00; G06F13/00
    • CPC: G06F12/0808; G06F9/30047; G06F9/383; G06F9/3834; G06F9/3842; G06F9/3861; G06F12/0815; G06F2212/507
    • Abstract: In one embodiment, a processor comprises a core configured to execute a data cache block write instruction, and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.
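The dispatch rule in this abstract reduces to one decision at the interface unit: a speculative block-write request gets a non-destructive probe, a non-speculative one gets a destructive invalidate. A minimal sketch of that rule, using the transaction names from the abstract (everything else is illustrative):

```python
# Sketch of the interface unit's choice for a data cache block write
# request (e.g. a block-zero). A probe queries other caching agents
# without destroying their copies, so it is safe if the speculative
# instruction is later cancelled; an invalidate claims the whole block
# and is only legal once the write is certain to commit.

def issue_transaction(speculative: bool) -> str:
    return "probe" if speculative else "invalidate"

print(issue_transaction(True))   # request still speculative
print(issue_transaction(False))  # request committed
```

The point of the split is that the speculative path never performs an action that would have to be undone on a misprediction, while the committed path avoids the extra coherence traffic of probing first.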
    • 6. Granted invention patent
    • Title: Data cache block zero implementation
    • Publication number: US07707361B2
    • Publication date: 2010-04-27
    • Application number: US11281840
    • Filing date: 2005-11-17
    • Inventors: Ramesh Gunna; Sudarshan Kadambi; Peter J. Bannon
    • IPC: G06F12/00; G06F13/00
    • CPC: G06F12/0808; G06F9/30047; G06F9/383; G06F9/3834; G06F9/3842; G06F9/3861; G06F12/0815; G06F2212/507
    • Abstract: In one embodiment, a processor comprises a core configured to execute a data cache block write instruction, and an interface unit coupled to the core and to an interconnect on which the processor is configured to communicate. The core is configured to transmit a request to the interface unit in response to the data cache block write instruction. If the request is speculative, the interface unit is configured to issue a first transaction on the interconnect. On the other hand, if the request is non-speculative, the interface unit is configured to issue a second transaction on the interconnect. The second transaction is different from the first transaction. For example, the second transaction may be an invalidate transaction and the first transaction may be a probe transaction. In some embodiments, the processor may be in a system including the interconnect and one or more caching agents.
    • 10. Granted invention patent
    • Title: Fused store exclusive/memory barrier operation
    • Publication number: US08285937B2
    • Publication date: 2012-10-09
    • Application number: US12711941
    • Filing date: 2010-02-24
    • Inventors: Peter J. Bannon; Po-Yung Chang
    • IPC: G06F12/00; G06F12/08
    • CPC: G06F9/3004; G06F9/30087; G06F9/3017; G06F9/3834; G06F9/3842; G06F9/3857; G06F9/3859; G06F9/522; G06F9/526; G06F2209/521
    • Abstract: In an embodiment, a processor may be configured to detect a store exclusive operation followed by a memory barrier operation in a speculative instruction stream being executed by the processor. The processor may fuse the store exclusive operation and the memory barrier operation, creating a fused operation. The fused operation may be transmitted and globally ordered, and the processor may complete both the store exclusive operation and the memory barrier operation in response to the fused operation. As the fused operation progresses through the processor and one or more other components (e.g. caches in the cache hierarchy) to the ordering point in the system, the fused operation may push previous memory operations to effect the memory barrier operation. In some embodiments, the latency for completing the store exclusive operation and the subsequent data memory barrier operation may be reduced if the store exclusive operation is successful at the ordering point.
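The detection step described in this abstract amounts to spotting a store-exclusive immediately followed by a memory barrier and replacing the pair with a single operation, so both complete with one trip to the ordering point. A toy illustration of that pattern match over an operation stream (operation names are invented; this is not any real decoder's logic):

```python
# Toy sketch of store-exclusive/barrier fusion: collapse each adjacent
# ("store_exclusive", "barrier") pair in an operation stream into one
# fused operation, which stands in for both at the ordering point.

def fuse_ops(ops):
    """Return the stream with adjacent stex+barrier pairs fused."""
    fused = []
    i = 0
    while i < len(ops):
        if (i + 1 < len(ops)
                and ops[i] == "store_exclusive"
                and ops[i + 1] == "barrier"):
            # One operation now carries both completions.
            fused.append("fused_stex_barrier")
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

stream = ["load", "store_exclusive", "barrier", "add"]
print(fuse_ops(stream))  # the stex/barrier pair becomes one operation
```

The latency benefit in the abstract follows from the fused operation reaching the ordering point once instead of twice: the barrier's ordering effect rides along with the store-exclusive.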