    • 1. Granted invention patent
    • Scalable efficient I/O port protocol
    • Publication number: US08364851B2
    • Grant date: 2013-01-29
    • Application number: US10677583
    • Filing date: 2003-10-02
    • Inventors: Richard E. Kessler; Samuel H. Duncan; David W. Hartwell; David A. J. Webb, Jr.; Steve Lang
    • IPC: G06F3/00
    • CPC: G06F15/17381; G06F12/0817; G06F2212/621
    • A system that supports a high performance, scalable, and efficient I/O port protocol to connect to I/O devices is disclosed. A distributed multiprocessing computer system contains a number of processors each coupled to an I/O bridge ASIC implementing the I/O port protocol. One or more I/O devices are coupled to the I/O bridge ASIC, each I/O device capable of accessing machine resources in the computer system by transmitting and receiving message packets. Machine resources in the computer system include data blocks, registers and interrupt queues. Each processor in the computer system is coupled to a memory module capable of storing data blocks shared between the processors. Coherence of the shared data blocks in this shared memory system is maintained using a directory based coherence protocol. Coherence of data blocks transferred during I/O device read and write accesses is maintained using the same coherence protocol as for the memory system. Data blocks transferred during an I/O device read or write access may be buffered in a cache by the I/O bridge ASIC only if the I/O bridge ASIC has exclusive copies of the data blocks. The I/O bridge ASIC includes a DMA device that supports both in-order and out-of-order DMA read and write streams of data blocks. An in-order stream of reads of data blocks performed by the DMA device always results in the DMA device receiving coherent data blocks that do not have to be written back to the memory module.
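The exclusive-copy buffering rule in the abstract above can be sketched as a toy directory protocol: the I/O bridge may cache a block only after the directory grants it the exclusive copy, which is why an in-order DMA read stream always receives coherent data. All class and method names below are illustrative, not from the patent.

```python
# Toy sketch of the exclusive-copy rule for I/O bridge caching.
# Names (Directory, IOBridge) are illustrative, not from the patent.

class Directory:
    """Tracks, per block, which agents hold shared or exclusive copies."""
    def __init__(self):
        self.owners = {}          # block -> set of sharers, or ("excl", agent)

    def request_exclusive(self, block, agent):
        state = self.owners.get(block)
        if isinstance(state, set):
            state.clear()         # invalidate every sharer first
        self.owners[block] = ("excl", agent)

    def is_exclusive(self, block, agent):
        return self.owners.get(block) == ("excl", agent)

class IOBridge:
    """Buffers a block in its cache only when it holds the exclusive copy."""
    def __init__(self, directory):
        self.directory = directory
        self.cache = {}

    def dma_read(self, block, memory):
        self.directory.request_exclusive(block, self)
        if self.directory.is_exclusive(block, self):
            self.cache[block] = memory[block]   # safe to buffer
        return self.cache[block]

memory = {0x100: b"payload"}
bridge = IOBridge(Directory())
assert bridge.dma_read(0x100, memory) == b"payload"
assert 0x100 in bridge.cache
```

Because the bridge holds the block exclusively, no processor can hold a stale copy, so the buffered block never needs to be written back to the memory module.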
    • 2. Granted invention patent
    • Scalable efficient I/O port protocol
    • Publication number: US06738836B1
    • Grant date: 2004-05-18
    • Application number: US09652391
    • Filing date: 2000-08-31
    • Inventors: Richard E. Kessler; Samuel H. Duncan; David W. Hartwell; David A. J. Webb, Jr.; Steve Lang
    • IPC: G06F13/00
    • CPC: G06F15/17381; G06F12/0817; G06F2212/621
    • A system that supports a high performance, scalable, and efficient I/O port protocol to connect to I/O devices is disclosed. A distributed multiprocessing computer system contains a number of processors each coupled to an I/O bridge ASIC implementing the I/O port protocol. One or more I/O devices are coupled to the I/O bridge ASIC, each I/O device capable of accessing machine resources in the computer system by transmitting and receiving message packets. Machine resources in the computer system include data blocks, registers and interrupt queues. Each processor in the computer system is coupled to a memory module capable of storing data blocks shared between the processors. Coherence of the shared data blocks in this shared memory system is maintained using a directory based coherence protocol. Coherence of data blocks transferred during I/O device read and write accesses is maintained using the same coherence protocol as for the memory system. Data blocks transferred during an I/O device read or write access may be buffered in a cache by the I/O bridge ASIC only if the I/O bridge ASIC has exclusive copies of the data blocks. The I/O bridge ASIC includes a DMA device that supports both in-order and out-of-order DMA read and write streams of data blocks. An in-order stream of reads of data blocks performed by the DMA device always results in the DMA device receiving coherent data blocks that do not have to be written back to the memory module.
    • 3. Granted invention patent
    • Broadcast invalidate scheme
    • Publication number: US07076597B2
    • Grant date: 2006-07-11
    • Application number: US10685039
    • Filing date: 2003-10-14
    • Inventors: David A. J. Webb, Jr.; Richard E. Kessler; Steve Lang; Aaron T. Spink
    • IPC: G06F12/08
    • CPC: G06F12/0826
    • A directory-based multiprocessor cache control scheme for distributing invalidate messages to change the state of shared data in a computer system. The plurality of processors are grouped into a plurality of clusters. A directory controller tracks copies of shared data sent to processors in the clusters. Upon receiving an exclusive request from a processor requesting permission to modify a shared copy of the data, the directory controller generates invalidate messages requesting that other processors sharing the same data invalidate that data. These invalidate messages are sent via a point-to-point transmission only to master processors in clusters actually containing a shared copy of the data. Upon receiving the invalidate message, the master processors broadcast the invalidate message in an ordered fan-in/fan-out process to each processor in the cluster. All processors within the cluster invalidate a local copy of the shared data if it exists and once the master processor receives acknowledgements from all processors in the cluster, the master processor sends an invalidate acknowledgment message to the processor that originally requested the exclusive rights to the shared data. The cache coherency is scalable and may be implemented using the hybrid point-to-point/broadcast scheme or a conventional point-to-point only directory-based invalidate scheme.
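The hybrid invalidate flow described above can be sketched as follows: the directory sends one point-to-point invalidate to the master of each cluster that actually shares the block; the master fans the message out to its cluster, gathers local acks, and returns a single acknowledgment upstream. Names here are illustrative, not from the patent.

```python
# Sketch of the hybrid point-to-point/broadcast invalidate scheme.
# Class names are illustrative, not from the patent.

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.cache = set()

    def invalidate(self, block):
        self.cache.discard(block)    # drop local copy if present
        return True                  # local ack

class Cluster:
    def __init__(self, master, members):
        self.master = master
        self.members = members       # includes the master

    def broadcast_invalidate(self, block):
        # Ordered fan-out to every processor in the cluster, fan-in of acks.
        acks = [p.invalidate(block) for p in self.members]
        return all(acks)             # master's single ack upstream

def directory_invalidate(clusters, sharers, block):
    """Point-to-point invalidates only to clusters that share `block`."""
    acks = 0
    for cluster in clusters:
        if any(p in sharers for p in cluster.members):
            if cluster.broadcast_invalidate(block):
                acks += 1
    return acks

p = [Processor(i) for i in range(4)]
p[0].cache.add("B"); p[3].cache.add("B")
clusters = [Cluster(p[0], p[:2]), Cluster(p[2], p[2:])]
acks = directory_invalidate(clusters, {p[0], p[3]}, "B")
assert acks == 2 and all(not q.cache for q in p)
```

The point at issue is traffic reduction: the directory sends one message per sharing cluster rather than one per sharing processor, and the intra-cluster broadcast is handled locally.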
    • 4. Granted invention patent
    • Broadcast invalidate scheme
    • Publication number: US06751721B1
    • Grant date: 2004-06-15
    • Application number: US09652165
    • Filing date: 2000-08-31
    • Inventors: David A. J. Webb, Jr.; Richard E. Kessler; Steve Lang; Aaron T. Spink
    • IPC: G06F13/00
    • CPC: G06F12/0826
    • A directory-based multiprocessor cache control scheme for distributing invalidate messages to change the state of shared data in a computer system. The plurality of processors are grouped into a plurality of clusters. A directory controller tracks copies of shared data sent to processors in the clusters. Upon receiving an exclusive request from a processor requesting permission to modify a shared copy of the data, the directory controller generates invalidate messages requesting that other processors sharing the same data invalidate that data. These invalidate messages are sent via a point-to-point transmission only to master processors in clusters actually containing a shared copy of the data. Upon receiving the invalidate message, the master processors broadcast the invalidate message in an ordered fan-in/fan-out process to each processor in the cluster. All processors within the cluster invalidate a local copy of the shared data if it exists and once the master processor receives acknowledgements from all processors in the cluster, the master processor sends an invalidate acknowledgment message to the processor that originally requested the exclusive rights to the shared data. The cache coherency is scalable and may be implemented using the hybrid point-to-point/broadcast scheme or a conventional point-to-point only directory-based invalidate scheme.
    • 5. Granted invention patent
    • Priority rules for reducing network message routing latency
    • Publication number: US06961781B1
    • Grant date: 2005-11-01
    • Application number: US09652322
    • Filing date: 2000-08-31
    • Inventors: Shubhendu S. Mukherjee; Richard E. Kessler; Steve Lang; David A. J. Webb, Jr.
    • IPC: G06F15/16; G06F15/173; H04L12/56
    • CPC: H04L47/6265; H04L45/302; H04L45/60; H04L47/50; H04L47/522
    • A system and method is disclosed for reducing network message passing latency in a distributed multiprocessing computer system that contains a plurality of microprocessors in a computer network, each microprocessor including router logic to route message packets prioritized in importance by the type of message packet, age of the message packet, and the source of the message packet. The microprocessors each include a plurality of network input ports connected to corresponding local arbiters in the router. The local arbiters are each able to select a message packet from the message packets waiting at the associated network input port. Microprocessor input ports and microprocessor output ports in the microprocessor allow the exchange of message packets between hardware functional units in the microprocessor and between the microprocessors. The microprocessor input ports are similarly each coupled to corresponding local arbiters in the router. Each of the local arbiters is able to select a message packet among the message packets waiting at the microprocessor input port. Global arbiters in the router connected to the network output ports and microprocessor output ports select a message packet from message packets nominated by the local arbiters of the network input ports and microprocessor input ports. The local arbiters connected to each network input port or microprocessor input port will request service from an output port global arbiter for a message packet based on the message packet type if the message packet is ready to be dispatched.
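The two-level arbitration above can be sketched as a local arbiter per input port nominating its best waiting packet, and a global arbiter per output port choosing among the nominations by message type, breaking ties with packet age. The priority ordering and field names below are invented for illustration; the patent's actual priority rules are more detailed.

```python
# Sketch of two-level (local/global) packet arbitration.
# TYPE_PRIORITY values and dict fields are invented for illustration.

TYPE_PRIORITY = {"response": 0, "request": 1, "io": 2}  # lower = more urgent

def local_arbiter(port_queue):
    """Nominate the highest-priority, oldest ready packet on one input port."""
    ready = [p for p in port_queue if p["ready"]]
    if not ready:
        return None
    return min(ready, key=lambda p: (TYPE_PRIORITY[p["type"]], -p["age"]))

def global_arbiter(nominations):
    """Pick one packet for the output port from all local nominations."""
    candidates = [n for n in nominations if n is not None]
    if not candidates:
        return None
    return min(candidates, key=lambda p: (TYPE_PRIORITY[p["type"]], -p["age"]))

port_a = [{"type": "request", "age": 5, "ready": True}]
port_b = [{"type": "response", "age": 1, "ready": True}]
winner = global_arbiter([local_arbiter(port_a), local_arbiter(port_b)])
assert winner["type"] == "response"   # responses beat requests
```

Favoring responses over requests and old packets over new ones is a common way to bound latency and avoid protocol-level starvation in such routers.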
    • 6. Granted invention patent
    • Special encoding of known bad data
    • Publication number: US07100096B2
    • Grant date: 2006-08-29
    • Application number: US10675133
    • Filing date: 2003-09-30
    • Inventors: David Arthur James Webb, Jr.; Richard E. Kessler; Steve Lang
    • IPC: G06F11/00
    • CPC: G06F11/0763; G06F11/0724
    • A multi-processor system in which each processor receives a message from another processor in the system. The message may contain corrupted data that was corrupted during transmission from the preceding processor. Upon receiving the message, the processor detects that a portion of the message contains corrupted data. The processor then replaces the corrupted portion with a predetermined bit pattern known or otherwise programmed into all other processors in the system. The predetermined bit pattern indicates that the associated portion of data was corrupted. The processor that detects the error in the message preferably alerts the system that an error has been detected. The message now containing the predetermined bit pattern in place of the corrupted data is retransmitted to another processor. The predetermined bit pattern will indicate that an error in the message was detected by the previous processor. In response, the processor detecting the predetermined bit pattern preferably will not alert the system of the existence of an error. The same message with the predetermined bit pattern can be retransmitted to other processors which also will detect the presence of the predetermined bit pattern and in response not alert the system of the presence of an error. As such, because only the first processor to detect an error alerts the system of the error and because messages containing uncorrectable errors still are transmitted through the system, fault isolation is improved and the system is less likely to fall into a deadlock condition.
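The encoding scheme above can be sketched as a "poison" substitution: the first hop that detects corruption replaces the bad portion with a predetermined bit pattern and raises a single alert; every later hop recognizes the pattern and forwards the message silently, so only the first detector reports and the message still flows. The pattern value and function names are invented for illustration.

```python
# Sketch of known-bad-data poisoning: alert once, then forward silently.
# POISON's value and the hop names are invented for illustration.

POISON = b"\xde\xad\xbe\xef"   # predetermined pattern known to every hop

alerts = []

def forward(message, checksum_ok, hop):
    """Return the message to pass on, alerting only on first detection."""
    if message == POISON:
        return message            # already poisoned: forward, no new alert
    if not checksum_ok:
        alerts.append(hop)        # first detector alerts the system
        return POISON             # replace the corrupted payload
    return message

msg = forward(b"good data", checksum_ok=False, hop="cpu0")  # corrupted here
msg = forward(msg, checksum_ok=True, hop="cpu1")            # silent pass-on
msg = forward(msg, checksum_ok=True, hop="cpu2")
assert msg == POISON and alerts == ["cpu0"]
```

Keeping the poisoned message in flight, instead of dropping it, is what lets the protocol complete its handshakes and avoid deadlock while still isolating the fault to one reported location.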
    • 7. Granted invention patent
    • Special encoding of known bad data
    • Publication number: US06662319B1
    • Grant date: 2003-12-09
    • Application number: US09652314
    • Filing date: 2000-08-31
    • Inventors: David Arthur James Webb, Jr.; Richard E. Kessler; Steve Lang
    • IPC: G06F11/10
    • CPC: G06F11/0763; G06F11/0724
    • A multi-processor system in which each processor receives a message from another processor in the system. The message may contain corrupted data that was corrupted during transmission from the preceding processor. Upon receiving the message, the processor detects that a portion of the message contains corrupted data. The processor then replaces the corrupted portion with a predetermined bit pattern known or otherwise programmed into all other processors in the system. The predetermined bit pattern indicates that the associated portion of data was corrupted. The processor that detects the error in the message preferably alerts the system that an error has been detected. The message now containing the predetermined bit pattern in place of the corrupted data is retransmitted to another processor. The predetermined bit pattern will indicate that an error in the message was detected by the previous processor. In response, the processor detecting the predetermined bit pattern preferably will not alert the system of the existence of an error. The same message with the predetermined bit pattern can be retransmitted to other processors which also will detect the presence of the predetermined bit pattern and in response not alert the system of the presence of an error. As such, because only the first processor to detect an error alerts the system of the error and because messages containing uncorrectable errors still are transmitted through the system, fault isolation is improved and the system is less likely to fall into a deadlock condition.
    • 8. Granted invention patent
    • Mechanism to control the allocation of an N-source shared buffer
    • Publication number: US07213087B1
    • Grant date: 2007-05-01
    • Application number: US09651924
    • Filing date: 2000-08-31
    • Inventors: Michael S. Bertone; Richard E. Kessler; David H. Asher; Steve Lang
    • IPC: G06F5/00
    • CPC: H04L47/39; H04L49/90
    • A method and apparatus for ensuring fair and efficient use of a shared memory buffer. A preferred embodiment comprises a shared memory buffer in a multi-processor computer system. Memory requests from a local processor are delivered to a local memory controller by a cache control unit and memory requests from other processors are delivered to the memory controller by an interprocessor router. The memory controller allocates the memory requests in a shared buffer using a credit-based allocation scheme. The cache control unit and the interprocessor router are each assigned a number of credits. Each must pay a credit to the memory controller when a request is allocated to the shared buffer. If the number of filled spaces in the shared buffer is below a threshold, the buffer immediately returns the credits to the source from which the credit and memory request arrived. If the number of filled spaces in the shared buffer is above a threshold, the buffer holds the credits and returns the credits in a round-robin manner only when a space in the shared buffer becomes free. The number of credits assigned to each source is sufficient to enable each source to deliver an uninterrupted burst of memory requests to the buffer without having to wait for credits to return from the buffer. The threshold is the point when the number of free spaces available in the buffer is equal to the total number of credits assigned to the cache control unit and the interprocessor router.
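The credit scheme above can be sketched in a few lines: each source pays one credit per allocation; while free slots still cover every outstanding credit, the buffer returns the credit immediately, and past that threshold it holds credits and hands them back in order as slots free up. Slot counts and names below are invented for illustration.

```python
# Sketch of credit-based allocation for a two-source shared buffer.
# Sizes and source names are invented for illustration.
from collections import deque

class SharedBuffer:
    def __init__(self, slots, credits_per_source, sources):
        self.free = slots
        self.credits = {s: credits_per_source for s in sources}
        self.held = deque()               # credits waiting to be returned
        # Hold credits once free slots no longer cover all assigned credits.
        self.threshold = credits_per_source * len(sources)

    def allocate(self, source):
        assert self.credits[source] > 0, "source out of credits"
        self.credits[source] -= 1
        self.free -= 1
        if self.free >= self.threshold:
            self.credits[source] += 1     # immediate credit return
        else:
            self.held.append(source)      # held until a slot frees up

    def release(self):
        self.free += 1
        if self.held:
            self.credits[self.held.popleft()] += 1  # in-order return

buf = SharedBuffer(slots=4, credits_per_source=2, sources=["cpu", "router"])
buf.allocate("cpu")                  # free drops below threshold: credit held
assert buf.credits["cpu"] == 1
buf.release()                        # a slot frees; the held credit returns
assert buf.credits["cpu"] == 2
```

Giving each source enough credits for an uninterrupted burst, and only throttling near the threshold, is what makes the scheme both fair between sources and non-blocking in the common case.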