    • 61. Granted invention patent
    • Title: Accessing memory and processor caches of nodes in multi-node configurations
    • Publication number: US08015366B2
    • Publication date: 2011-09-06
    • Application number: US12179386
    • Filing date: 2008-07-24
    • Inventors: James C. Wilson; Wolf-Dietrich Weber
    • IPC: G06F12/08
    • CPC: G06F12/0817; G06F12/0813; G06F12/0831; G06F15/17
    • Abstract: A method for communicating between nodes of a plurality of nodes is disclosed. Each node includes a plurality of processors and an interconnect chipset. The method issues a request for data from a processor in a first node and passes the request for data to other nodes through an expansion port (or scalability port). The method also starts an access of a memory in response to the request for data and snoops a processor cache of each processor in each node. The method accordingly identifies the location of the data in either the processor cache or memory in the node having the processor issuing the request or in a processor cache or memory of another node. (A conceptual sketch of this lookup flow follows this record.)
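The abstract describes a lookup flow: issue a read, start the memory access, snoop every processor cache in every node, and report where the data was found. Below is a minimal, self-contained sketch of that flow under simplifying assumptions; the identifiers (node_t, snoop_all_caches, read_request), the tiny direct-mapped cache model, and the loop standing in for the expansion-port traversal are all illustrative and not taken from the patent.

```c
/* Minimal sketch of the lookup flow described in the abstract above.
 * All names and data structures are illustrative; the patent itself
 * does not define this code. */
#include <stdbool.h>
#include <stdio.h>

#define NODES       2
#define PROCS       2   /* processors per node       */
#define CACHE_LINES 4   /* tiny direct-mapped cache  */

typedef struct {
    long tag[CACHE_LINES];
    bool valid[CACHE_LINES];
} cache_t;

typedef struct {
    cache_t cache[PROCS];   /* one cache per processor; memory access
                               is modelled as always succeeding       */
} node_t;

static bool cache_has(const cache_t *c, long addr)
{
    int idx = (int)(addr % CACHE_LINES);
    return c->valid[idx] && c->tag[idx] == addr;
}

/* Snoop every processor cache in every node.  In hardware the remote
 * nodes would be reached through the expansion (scalability) port;
 * here that is simply the outer loop over nodes. */
static bool snoop_all_caches(node_t sys[], long addr, int *node, int *proc)
{
    for (int n = 0; n < NODES; n++)
        for (int p = 0; p < PROCS; p++)
            if (cache_has(&sys[n].cache[p], addr)) {
                *node = n; *proc = p;
                return true;
            }
    return false;
}

/* Issue a read from req_node: conceptually the memory access and the
 * snoop start together; report where the data was finally found. */
static void read_request(node_t sys[], int req_node, long addr)
{
    int n, p;
    if (snoop_all_caches(sys, addr, &n, &p))
        printf("node %d: addr %ld found in node %d / processor %d cache\n",
               req_node, addr, n, p);
    else
        printf("node %d: addr %ld supplied from memory\n", req_node, addr);
}

int main(void)
{
    node_t sys[NODES] = {0};

    /* Pretend node 1 / processor 0 already caches address 42. */
    sys[1].cache[0].tag[42 % CACHE_LINES]   = 42;
    sys[1].cache[0].valid[42 % CACHE_LINES] = true;

    read_request(sys, 0, 42);   /* hit in a remote processor cache */
    read_request(sys, 0, 7);    /* miss everywhere -> memory       */
    return 0;
}
```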
    • 63. Granted invention patent
    • Title: Accessing memory and processor caches of nodes in multi-node configurations
    • Publication number: US07418556B2
    • Publication date: 2008-08-26
    • Application number: US10917815
    • Filing date: 2004-08-13
    • Inventors: James C Wilson; Wolf-Dietrich Weber
    • IPC: G06F12/08; G06F13/14
    • CPC: G06F12/0817; G06F12/0813; G06F12/0831; G06F15/17
    • Abstract: A method for communicating between nodes of a plurality of nodes is disclosed. Each node includes a plurality of processors and an interconnect chipset. The method issues a request for data from a processor in a first node and passes the request for data to other nodes through an expansion port (or scalability port). The method also starts an access of a memory in response to the request for data and snoops a processor cache of each processor in each node. The method accordingly identifies the location of the data in either the processor cache or memory in the node having the processor issuing the request or in a processor cache or memory of another node. A method for requesting data between two directly coupled nodes in a router system is also disclosed. A method for requesting data between three or more nodes in an interconnect system is also disclosed. A method for resolving crossing cases in an interconnect system is also disclosed. An interconnect system for coupling nodes directly or through a protocol engine is also disclosed. (A sketch of the direct-versus-protocol-engine distinction follows this record.)
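The abstract distinguishes a two-node system whose nodes are coupled directly from larger systems coupled through a protocol engine. The sketch below only illustrates that topology distinction as a routing decision; the rule and the names (interconnect_t, route) are assumptions for illustration, not the disclosed design.

```c
/* Illustrative only: with exactly two nodes the expansion ports can be
 * wired back to back, while larger systems route traffic through a
 * protocol engine.  This decision rule is an assumption, not the
 * patented interconnect. */
#include <stdio.h>

typedef struct { int node_count; } interconnect_t;

/* Decide how a request from src reaches dst. */
static const char *route(const interconnect_t *ic, int src, int dst)
{
    if (src == dst)
        return "local (no port traversal)";
    if (ic->node_count == 2)
        return "directly over the expansion port";
    return "through the protocol engine";
}

int main(void)
{
    interconnect_t two_node  = { .node_count = 2 };
    interconnect_t four_node = { .node_count = 4 };

    printf("2-node system, 0 -> 1: %s\n", route(&two_node, 0, 1));
    printf("4-node system, 0 -> 3: %s\n", route(&four_node, 0, 3));
    printf("4-node system, 2 -> 2: %s\n", route(&four_node, 2, 2));
    return 0;
}
```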
    • 64. Granted invention patent
    • Title: Resolving crossing requests in multi-node configurations
    • Publication number: US07406582B2
    • Publication date: 2008-07-29
    • Application number: US10917682
    • Filing date: 2004-08-13
    • Inventors: James C Wilson; Wolf-Dietrich Weber
    • IPC: G06F13/14
    • CPC: G06F12/0817; G06F12/0813; G06F12/0831; G06F15/17
    • Abstract: A method for communicating between nodes of a plurality of nodes is disclosed. Each node includes a plurality of processors and an interconnect chipset. The method issues a request for data from a processor in a first node and passes the request for data to other nodes through an expansion port (or scalability port). The method also starts an access of a memory in response to the request for data and snoops a processor cache of each processor in each node. The method accordingly identifies the location of the data in either the processor cache or memory in the node having the processor issuing the request or in a processor cache or memory of another node. A method for requesting data between two directly coupled nodes in a router system is also disclosed. A method for requesting data between three or more nodes in an interconnect system is also disclosed. A method for resolving crossing cases in an interconnect system is also disclosed. An interconnect system for coupling nodes directly or through a protocol engine is also disclosed. (A sketch of one way a crossing case can be detected and broken follows this record.)
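A crossing case arises when a request for a line arrives at a node that still has its own request for the same line in flight. The abstract states that such cases are resolved but does not spell out the rule, so the deterministic "lower node ID wins" tie-break in the sketch below is an assumption used purely to show how a crossing case can be detected and broken consistently at both ends.

```c
/* Illustrative sketch: detect a crossing case and break the tie with a
 * fixed rule (lower node ID proceeds, the other side retries).  The
 * tie-break is an assumption, not the patented mechanism. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  id;
    bool outstanding;   /* this node has its own request in flight */
    long pending_addr;  /* address of that in-flight request       */
} node_t;

typedef enum { PROCEED, RETRY } decision_t;

/* A request for addr from remote arrives while local still has its own
 * request for the same address outstanding: that is a crossing case. */
static decision_t handle_incoming(const node_t *local, const node_t *remote,
                                  long addr)
{
    bool crossing = local->outstanding && local->pending_addr == addr;
    if (!crossing)
        return PROCEED;                       /* no conflict             */
    return (remote->id < local->id) ? PROCEED /* lower ID wins the race  */
                                    : RETRY;  /* loser retries later     */
}

int main(void)
{
    node_t a = { .id = 0, .outstanding = true, .pending_addr = 100 };
    node_t b = { .id = 1, .outstanding = true, .pending_addr = 100 };

    /* Both nodes requested line 100 at the same time. */
    printf("A handles B's request: %s\n",
           handle_incoming(&a, &b, 100) == PROCEED ? "proceed" : "retry");
    printf("B handles A's request: %s\n",
           handle_incoming(&b, &a, 100) == PROCEED ? "proceed" : "retry");
    return 0;
}
```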
    • 70. Granted invention patent
    • Title: System and method for avoiding deadlock in multi-node network
    • Publication number: US06490630B1
    • Publication date: 2002-12-03
    • Application number: US09285316
    • Filing date: 1999-04-02
    • Inventors: Wing Leong Poon; Patrick J. Helland; Takeshi Shimizu; Yasushi Umezawa; Wolf-Dietrich Weber
    • IPC: G06F15/16
    • CPC: G06F15/17
    • Abstract: A computer architecture for avoiding a deadlock condition in an interconnection network comprises a messaging buffer having a size pre-calculated to temporarily store outgoing messages from a node. Messages are classified according to their service requirements and messaging protocols, and reserved quotas in the messaging buffer are allocated for different types of messages. The allocations of the reserved quotas are controlled by a mechanism that, to prevent overflow, limits the maximum number of messages that can be outstanding at any time. The messaging buffer is sized large enough to guarantee that a node is always able to service incoming messages, thereby avoiding deadlock and facilitating forward progress in communications. The buffer may be bypassed to improve system performance when the buffer is empty or when data in the buffer is corrupted. In addition, a multicast engine facilitates dense packing of the buffer and derives information from a message header to determine whether there is a multicast to perform and to permit passage of messages. Other considerations to reduce the buffer size are incorporated. (A sketch of the per-class quota idea follows this record.)
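The core of the abstract is that the pre-sized messaging buffer is carved into reserved per-class quotas, and a message is only posted while its class still has a free slot, which bounds the number of outstanding messages so the buffer can never overflow and incoming traffic can always be serviced. The sketch below shows that quota discipline in isolation; the class names, quota sizes, and function names are illustrative assumptions, not the patented architecture.

```c
/* Minimal sketch of per-class reserved quotas in an outgoing messaging
 * buffer.  A post succeeds only while the class quota has a free slot,
 * so the total buffer occupancy is bounded by the sum of the quotas. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { MSG_REQUEST, MSG_REPLY, MSG_CLASSES } msg_class_t;

static const int quota[MSG_CLASSES] = { 4, 4 };  /* reserved slots   */
static int       in_use[MSG_CLASSES];            /* outstanding msgs */

/* Post a message: succeeds only if the class quota is not exhausted. */
static bool post_message(msg_class_t c)
{
    if (in_use[c] >= quota[c])
        return false;            /* sender must stall, not overflow   */
    in_use[c]++;
    return true;
}

/* Completion (e.g. the reply was consumed) frees the reserved slot.  */
static void complete_message(msg_class_t c)
{
    if (in_use[c] > 0)
        in_use[c]--;
}

int main(void)
{
    int sent = 0;
    /* Try to post more requests than the quota allows. */
    for (int i = 0; i < 6; i++)
        if (post_message(MSG_REQUEST))
            sent++;
    printf("posted %d of 6 requests (quota %d)\n", sent, quota[MSG_REQUEST]);

    complete_message(MSG_REQUEST);            /* one request finishes */
    printf("retry after completion: %s\n",
           post_message(MSG_REQUEST) ? "accepted" : "rejected");
    return 0;
}
```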