    • 21. Invention application
    • Title: SCALABLE DISTRIBUTED MEMORY AND I/O MULTIPROCESSOR SYSTEMS AND ASSOCIATED METHODS
    • Publication number: US20080114919A1
    • Publication date: 2008-05-15
    • Application number: US12013595
    • Filing date: 2008-01-14
    • Inventors: Linda Rankin, Paul Pierce, Gregory Dermer, Wen-Hann Wang, Kai Cheng, Richard Hofsheier, Nitin Borkar
    • IPC: G06F13/36
    • CPC: G06F13/4022, G06F13/4027
    • Abstract: A multiprocessor system comprises at least one processing module, at least one I/O module, and an interconnect network to connect the at least one processing module with the at least one input/output module. In an example embodiment, the interconnect network comprises at least two bridges to send and receive transactions between the input/output modules and the processing module, and at least two crossbar switches to route the transactions over a high-bandwidth switch connection. Embodiments of the interconnect network allow high-bandwidth communication between processing modules and I/O modules. Standard processing-module hardware can be used with the interconnect network without modifying the BIOS or the operating system. Furthermore, the interconnect network is non-invasive to the processor motherboard: the processor memory bus, clock, and reset logic all remain intact.
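The abstract above describes modules attached through bridges to crossbar switches that route transactions between them. A minimal toy model of that topology, assuming invented class and method names (`Bridge`, `Crossbar`, `route`) that are not from the patent:

```python
class Crossbar:
    """Routes a transaction from any attached bridge to any other."""
    def __init__(self):
        self.ports = {}  # module id -> attached bridge

    def attach(self, module_id, bridge):
        self.ports[module_id] = bridge

    def route(self, src_id, dest_id, payload):
        # Point-to-point switched delivery, no shared bus contention.
        self.ports[dest_id].receive(src_id, payload)

class Bridge:
    """Attaches a processing or I/O module to the interconnect."""
    def __init__(self, module_id, crossbar):
        self.module_id = module_id
        self.crossbar = crossbar
        self.inbox = []
        crossbar.attach(module_id, self)

    def send(self, dest_id, payload):
        # Hand the transaction to the crossbar for routing.
        self.crossbar.route(self.module_id, dest_id, payload)

    def receive(self, src_id, payload):
        self.inbox.append((src_id, payload))

xbar = Crossbar()
cpu = Bridge("cpu0", xbar)   # processing module
io = Bridge("io0", xbar)     # I/O module
cpu.send("io0", "read block 7")
print(io.inbox)  # [('cpu0', 'read block 7')]
```

The point of the topology is that each module's bridge talks only to the switch, so the processor's own memory bus, clock, and reset logic are untouched, as the abstract notes.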
    • 22. Granted invention patent
    • Title: Cache line pre-load and pre-own based on cache coherence speculation
    • Publication number: US06725341B1
    • Publication date: 2004-04-20
    • Application number: US09605239
    • Filing date: 2000-06-28
    • Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
    • IPC: G06F12/00
    • CPC: G06F12/0831
    • Abstract: The invention provides a cache management system comprising, in various embodiments, pre-load and pre-own functionality to enhance cache efficiency in shared-memory distributed-cache multiprocessor computer systems. Some embodiments comprise an invalidation history table that records the line addresses of cache lines invalidated through dirty or clean invalidation; invalidated lines recorded in the table are reloaded into the cache by monitoring the bus for their addresses. In some further embodiments, a write-back bit associated with each L2 cache entry is set when a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has its write-back bit set.
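The invalidation-history-table mechanism in this abstract can be sketched in a few lines: remember the addresses of lines lost to invalidation, and when one of those addresses reappears on the bus, speculatively reload it. This is an illustrative model only; class and method names (`SpeculativeCache`, `snoop_bus`) are assumptions, not the patent's terms:

```python
class SpeculativeCache:
    def __init__(self):
        self.lines = {}   # addr -> data currently cached
        self.iht = set()  # invalidation history table: lost line addresses

    def fill(self, addr, data):
        self.lines[addr] = data

    def invalidate(self, addr):
        # Another processor took ownership: drop the line, remember it.
        if addr in self.lines:
            del self.lines[addr]
            self.iht.add(addr)

    def snoop_bus(self, addr, memory):
        # Bus monitor: a remembered address reappeared, so pre-load it.
        if addr in self.iht and addr not in self.lines:
            self.lines[addr] = memory[addr]
            self.iht.discard(addr)

memory = {0x100: "A"}
c = SpeculativeCache()
c.fill(0x100, "A")
c.invalidate(0x100)          # dirty invalidation by another processor
memory[0x100] = "A'"         # the other processor wrote the line back
c.snoop_bus(0x100, memory)   # address seen on the bus: pre-load
print(c.lines[0x100])  # A'
```

The speculation pays off when invalidated lines tend to be re-referenced, which is the access pattern the patent targets.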
    • 23. Granted invention patent
    • Title: Virtual access cache protection bits handling method and apparatus
    • Publication number: US5619673A
    • Publication date: 1997-04-08
    • Application number: US610802
    • Filing date: 1996-03-07
    • Inventor: Wen-Hann Wang
    • IPC: G06F12/08, G06F12/14
    • CPC: G06F12/145, G06F12/0875
    • Abstract: A protection update buffer operates in conjunction with a cache memory that stores data, protection information, and data line tags. The protection update buffer also stores cache address tags. By storing cache tags, the protection update buffer can alert the cache memory to lines whose protection bits have changed. By further storing data protection information, the protection update buffer can provide correct protection information for cached data. If writing a tag and/or data protection information to the protection update buffer would cause it to overflow, the associated cache is flushed and the entries of the protection update buffer are cleared.
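The overflow behavior described at the end of this abstract is the interesting corner case: the buffer is small, and when it fills, correctness is preserved by the blunt fallback of flushing the cache. A hypothetical sketch, where the class names, field layout, and the capacity of 2 are all assumptions for illustration:

```python
PUB_CAPACITY = 2  # tiny capacity, chosen only to make overflow easy to show

class CacheWithPUB:
    def __init__(self):
        self.cache = {}  # tag -> (data, protection bits)
        self.pub = {}    # protection update buffer: tag -> new protection bits

    def load(self, tag, data, prot):
        self.cache[tag] = (data, prot)

    def update_protection(self, tag, new_prot):
        if tag in self.pub or len(self.pub) < PUB_CAPACITY:
            self.pub[tag] = new_prot
        else:
            # Overflow: flush the associated cache and clear the buffer.
            self.cache.clear()
            self.pub.clear()

    def protection_of(self, tag):
        # The buffer supplies the most recent protection bits for a line.
        if tag in self.pub:
            return self.pub[tag]
        return self.cache[tag][1]

m = CacheWithPUB()
m.load("t1", "data1", "rw")
m.update_protection("t1", "ro")  # change recorded without touching the cache
print(m.protection_of("t1"))     # ro
m.update_protection("t2", "ro")
m.update_protection("t3", "ro")  # third distinct tag: buffer overflows
print(m.cache)                   # {}
```

The design trade-off is that protection changes are cheap in the common case (a small buffer write instead of a cache walk), and the expensive flush only happens when many distinct lines change protection between cleanups.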
    • 24. Granted invention patent
    • Title: System and method for practicing essential inclusion in a multiprocessor and cache hierarchy
    • Publication number: US5530832A
    • Publication date: 1996-06-25
    • Application number: US136631
    • Filing date: 1993-10-14
    • Inventors: Kimming So, Wen-Hann Wang
    • IPC: G06F12/08
    • CPC: G06F12/0811
    • Abstract: A system and method for managing caches in a multiprocessor having multiple levels of caches. An inclusion architecture and procedure are defined through which the L2 caches shield the L1 caches from extraneous communication at the L2, such as main-memory and I/O read/write operations. Essential inclusion eliminates special communication from the L1 cache to the L2, yet maintains adequate knowledge at the L2 regarding the contents of the L1 to minimize L1 invalidations. Processor performance is improved by the reduced communication and the decreased number of invalidations. The processors and L1 caches practice a store-in policy. The L2 cache uses inclusion bits to designate, per cache line, a relationship between the line of data in the L2 cache and the corresponding lines as they exist in the associated L1 caches. Communication and invalidations are reduced through selective setting and resetting of the inclusion bits and a related L2 interrogation practice.
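The shielding mechanism in this abstract reduces to a per-line filter: the L2 keeps an inclusion bit saying whether the L1 above it may hold the line, and forwards an external invalidation upward only when that bit is set. A minimal sketch under that reading, with all names (`L1`, `L2`, `external_invalidate`) invented for illustration:

```python
class L1:
    def __init__(self):
        self.lines = set()
        self.invalidations = 0  # count of invalidations reaching the L1

    def invalidate(self, addr):
        self.lines.discard(addr)
        self.invalidations += 1

class L2:
    def __init__(self, l1):
        self.l1 = l1
        self.inclusion = {}  # addr -> True if the L1 may hold this line

    def l1_fill(self, addr):
        # The L1 fetches through the L2: set the inclusion bit.
        self.l1.lines.add(addr)
        self.inclusion[addr] = True

    def external_invalidate(self, addr):
        # Interrogate the inclusion bit; forward to the L1 only when set,
        # shielding the L1 from extraneous main-memory/I/O traffic.
        if self.inclusion.get(addr):
            self.l1.invalidate(addr)
            self.inclusion[addr] = False

l1 = L1()
l2 = L2(l1)
l2.l1_fill(0x40)
l2.external_invalidate(0x80)  # line not in the L1: the L1 never sees it
l2.external_invalidate(0x40)  # inclusion bit set: forwarded to the L1
print(l1.invalidations)  # 1
```

The abstract's "essential" inclusion refers to keeping this L2-side knowledge just accurate enough to filter traffic, without requiring the L1 to report every local action back down to the L2.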