    • 1. Invention publication (发明公开)
    • Title: Intelligent and adaptive memory and methods and devices for managing distributed memory systems with hardware-enforced coherency
    • Publication number: EP1008940A2
    • Publication date: 2000-06-14
    • Application number: EP99309343.4
    • Filing date: 1999-11-23
    • Applicant: Network Virtual Systems Inc.
    • Inventors: Akkawi, Isam; Donley, Greggory D.; Quinn, Robert F.
    • IPC: G06F12/08
    • CPC: G06F12/0817; G06F12/0813; G06F2212/2542; G06F2212/507
    • Abstract: Methods and devices for reducing memory access latencies in scalable multi-node, multi-processor systems include supplementing the demand driven nature of filling cache memories wherein caches are filled based upon the past demands of the processor or processors coupled thereto with a push-based model wherein recently modified lines of memory are actively pushed into selected cache memories based upon usage information. The usage information is maintained for each line of memory and indicates at least which node in the system stores a copy of which line of memory. The timing of the pushing is adaptive and configurable and biases the system to push updated copies of recently modified lines of memory to selected nodes before the processors associated therewith request the line. Other methods and devices for reducing memory access latencies in a multi-node multi-processor system carry out the steps of generating two-phase acknowledgments to invalidate commands wherein, after a phase I acknowledgment, a requesting processor is allowed to modify a line of memory before other processors in the system receive the invalidate. A temporary and process-transparent incoherency then occurs for that line, which is resolved by delaying all access requests to the modified line until phase II acknowledgements to the invalidate are received from all nodes sharing that line. The push-based method of filling the cache memories may be employed together with the two-phase invalidate acknowledgment method, or may be employed separately therefrom.
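The two-phase invalidate acknowledgment described in the abstract can be sketched in simulation: after a phase-I acknowledgment the writer may modify the line immediately, and reads of the transiently incoherent line are delayed until every sharing node returns its phase-II acknowledgment. All class and method names below (`Line`, `Directory`, `begin_invalidate`, and so on) are illustrative assumptions, not terminology from the patent.

```python
# Hypothetical sketch of the two-phase invalidate acknowledgment; names are
# illustrative, not taken from the patent.

class Line:
    """One line of memory plus the per-line usage/sharing information."""
    def __init__(self, value=0):
        self.value = value
        self.sharers = set()       # node ids holding a copy of this line
        self.pending_acks = set()  # nodes whose phase-II ack is outstanding

class Directory:
    def __init__(self):
        self.lines = {}

    def line(self, addr):
        return self.lines.setdefault(addr, Line())

    def begin_invalidate(self, addr, writer):
        """Phase I: acknowledge at once so the writer may modify the line;
        the line is transiently incoherent until all sharers ack (phase II)."""
        ln = self.line(addr)
        ln.pending_acks = ln.sharers - {writer}
        return True  # phase-I ack: writer may proceed

    def phase2_ack(self, addr, node):
        """A sharing node reports it has processed the invalidate."""
        ln = self.line(addr)
        ln.pending_acks.discard(node)
        ln.sharers.discard(node)

    def read(self, addr, node):
        """Accesses to a line with outstanding phase-II acks are delayed."""
        ln = self.line(addr)
        if ln.pending_acks:
            return None  # caller must retry: line transiently incoherent
        ln.sharers.add(node)
        return ln.value
```

For example, with nodes 1 and 2 sharing a line, node 1 may write right after its phase-I ack, but a read by node 3 is deferred until node 2's phase-II ack arrives.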
    • 2. Invention publication (发明公开)
    • Title: Computing devices
    • Publication number: EP0959410A2
    • Publication date: 1999-11-24
    • Application number: EP99303744.9
    • Filing date: 1999-05-13
    • Applicant: Network Virtual Systems Inc.
    • Inventors: Quinn, Robert F.; Akkawi, Isam; Donley, Greggory D.
    • IPC: G06F12/08
    • CPC: H04L29/06; G06F12/0813; H04L67/42
    • Abstract: A network of computing devices according to the present invention includes at least one memory server computing device, each memory server including server memory and at least one client computing device, each client having at least a portion of its memory space mapped into the memory space of at least one of the memory server computing devices. A network memory controller is attached to each client, the network memory controller being configured to access the server memory of at least one of the memory server computing devices over the network and including a cache memory for storing copies of memory locations fetched from the server memory. A method of allocating and managing memory resources in a network of computing devices, each of the computing devices including a network memory controller having cache memory, according to the present invention, includes steps of mapping at least a portion of the memory space of at least one of the computing devices into a memory space of at least one other computing device within the network; responding to a request for access to a memory location from a computing device by first checking the cache memory of its associated network memory controller for the requested memory location; and accessing the memory of the at least one computing device when the requested memory location is not present in the requesting computing device's network memory controller cache memory.
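The claimed access path — check the client's network memory controller cache first, and only on a miss fetch the location from the memory server over the network, keeping a copy — can be sketched as follows. All names (`MemoryServer`, `NetworkMemoryController`, `fetch`) are illustrative assumptions, not terminology from the patent.

```python
# Hypothetical sketch of the network memory controller (NMC) access path;
# names are illustrative, not taken from the patent.

class MemoryServer:
    """Memory server whose server memory backs the clients' mapped space."""
    def __init__(self, size):
        self.memory = [0] * size

    def fetch(self, addr):
        # In the claimed system this fetch crosses the network.
        return self.memory[addr]

class NetworkMemoryController:
    """Per-client controller with a cache of server-memory copies."""
    def __init__(self, server):
        self.server = server
        self.cache = {}   # addr -> cached copy of a server memory location
        self.misses = 0

    def read(self, addr):
        if addr in self.cache:          # hit: served from the NMC cache
            return self.cache[addr]
        self.misses += 1                # miss: go to the memory server
        value = self.server.fetch(addr)
        self.cache[addr] = value        # store a copy for later accesses
        return value
```

A second read of the same location is then served from the controller's cache without another network access, which is the latency reduction the abstract claims.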
    • 3. Invention publication (发明公开)
    • Title: Computing devices
    • Publication number: EP0959410A3
    • Publication date: 2000-11-08
    • Application number: EP99303744.9
    • Filing date: 1999-05-13
    • Applicant: Network Virtual Systems Inc.
    • Inventors: Quinn, Robert F.; Akkawi, Isam; Donley, Greggory D.
    • IPC: G06F12/08
    • CPC: H04L29/06; G06F12/0813; H04L67/42
    • Abstract: identical to the EP0959410A2 publication of the same application, above.