    • Type: Invention application
    • Title: CACHED MULTIPROCESSOR SYSTEM WITH PIPELINE TIMING
    • Publication number: WO1981002210A1
    • Publication date: 1981-08-06
    • Application number: PCT/US1981000126
    • Filing date: 1981-01-28
    • Applicant: DIGITAL EQUIP CORP
    • Inventors: SULLIVAN D; LARY R; GIGGI R; ARULPRAGASAM J
    • Primary IPC: G06F09/38
    • Further IPC: G06F9/52; G06F12/084; G06F12/0846; G06F12/0857; G06F12/1458; G06F13/18; G06F15/177
    • Abstract: A multiprocessor data processing system including a main memory system, the processors (30) of which share a common control unit (CCU 10) that includes a write-through cache memory (20), for accessing copies of memory data therein without undue delay in retrieving data from the main memory system. A synchronous processor bus (76) having conductors (104) couples the processors (30) to the CCU. An asynchronous input/output bus (60) couples input/output devices (32) to an interface circuit (64) which, in turn, couples the information signals thereof to the synchronous processor bus (76) of the CCU so that both the processors (30) and the I/O devices (32) can gain quick access to memory data rather than in the cache memory (20). When a read command "misses" the cache memory (20), the CCU accesses the memory modules (28) for allocating its cache memory (20) and for returning read data to the processors (30) or input/output devices (32). To inhibit reads to locations in the cache for which there is a write-in-progress, the CCU includes a Processor Index random-access-memory (PIR 20) that temporarily stores memory addresses for which there is a write-in-progress. The PIR is used by the cache memory to force a "miss" for all references to the memory address contained therein until the CCU updates the cache memory. The CCU also includes a duplicate tag store (67) that maintains a copy of the cache memory address tag store (20A) thereby to enable the CCU to update its cache memory when data is written into a main memory location that is to be maintained in the cache memory.
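The abstract describes two bookkeeping structures around the write-through cache: a Processor Index RAM (PIR) that forces lookups to miss while a write to the same address is still in progress, and a duplicate tag store that lets the CCU refresh a cached copy when another agent writes to main memory. Below is a minimal behavioral sketch in Python of how those two mechanisms might interact; it is not taken from the patent, and the class and method names are illustrative only.

```python
class WriteThroughCache:
    """Illustrative sketch of a write-through cache with a PIR-like
    pending-write set and a duplicate tag store (names are hypothetical)."""

    def __init__(self, memory):
        self.memory = memory          # backing main memory: addr -> data
        self.lines = {}               # cached copies: addr -> data
        self.pending_writes = set()   # "PIR": addresses with a write in progress
        self.duplicate_tags = set()   # copy of the cache's address tags

    def read(self, addr):
        # Force a miss while a write to this address is in progress,
        # so a stale cached copy is never returned.
        if addr in self.lines and addr not in self.pending_writes:
            return self.lines[addr]          # cache hit
        data = self.memory[addr]             # miss: fetch from main memory
        self.lines[addr] = data              # allocate the cache line
        self.duplicate_tags.add(addr)
        return data

    def write(self, addr, data):
        # Write-through: mark the address busy, update main memory,
        # then refresh the cached copy and clear the pending mark.
        self.pending_writes.add(addr)
        self.memory[addr] = data
        self.lines[addr] = data
        self.duplicate_tags.add(addr)
        self.pending_writes.discard(addr)

    def external_write(self, addr, data):
        # A write arriving from another agent (e.g. an I/O device) updates
        # main memory; the duplicate tag store tells us whether the cached
        # copy must be refreshed as well.
        self.memory[addr] = data
        if addr in self.duplicate_tags:
            self.lines[addr] = data


# Example usage of the sketch above.
mem = {0x10: 1}
cache = WriteThroughCache(mem)
print(cache.read(0x10))         # 1, line allocated on the first miss
cache.external_write(0x10, 2)   # memory write noticed via the duplicate tags
print(cache.read(0x10))         # 2, cached copy was refreshed
```

Per the abstract, the PIR entry persists until the CCU has updated the cache, which is what forces intervening reads to miss; the inline set-and-clear in `write` above is a single-threaded simplification of that behavior.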