    • 4. Granted invention patent
    • Title: Network adapter utilizing a hashing function for distributing packets to multiple processors for parallel processing
    • Publication number: US06631422B1
    • Publication date: 2003-10-07
    • Application number: US09383741
    • Filing date: 1999-08-26
    • Inventors: Gregory Scott Althaus; Tai-Chien Daisy Chang; Herman Dietrich Dierks, Jr.; Satya Prakesh Sharma
    • IPC: G06F13/00
    • CPC: H04L67/00; H04L69/12; H04L69/22; H04L69/32
    • Abstract: Network input processing is distributed to multiple CPUs on multiprocessor systems to improve network throughput and take advantage of MP scalability. Packets are received by the network adapter and are distributed to N receive buffer pools set up by the device driver, based on N CPUs being available for input processing of packets. Each receive buffer pool has an associated CPU. Packets are direct memory accessed to one of the N receive buffer pools by using a hashing function, which is based on the source MAC address, source IP address, or the packet's source and destination TCP port numbers, or all or a combination of the foregoing. The hashing mechanism ensures that the sequence of packets within a given communication session will be preserved. Distribution is effected by the network adapter, which sends an interrupt to the CPU corresponding to the receive buffer pool, subsequent to the packet being DMAed into the buffer pool. This optimizes the efficiency of the MP system by eliminating any reliance on the scheduler and increasing the bandwidth between the device driver and the network adapter, while maintaining proper packet sequences. Parallelism is thereby increased on network I/O processing, eliminating CPU bottleneck for high speed network I/Os and, thus, improving network performance.
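The distribution step in this abstract hinges on hashing session identifiers to pick a per-CPU receive buffer pool. The snippet below is a minimal sketch of that idea, not the patented implementation: the PacketKey fields and the CRC32 hash are illustrative assumptions; the point is that the pool index depends only on the session's source/destination identifiers, so every packet of a session lands in the same pool and keeps its order.

```python
import zlib
from dataclasses import dataclass

# Hypothetical packet key; the field names are illustrative, not from the patent.
@dataclass(frozen=True)
class PacketKey:
    src_mac: str
    src_ip: str
    src_port: int   # source TCP port
    dst_port: int   # destination TCP port

def select_buffer_pool(key: PacketKey, num_cpus: int) -> int:
    """Pick one of N receive buffer pools (one per CPU) for a packet.

    Because the hash depends only on the session's source/destination
    identifiers, every packet of that session maps to the same pool,
    preserving the in-session packet order.
    """
    flow = f"{key.src_mac}|{key.src_ip}|{key.src_port}|{key.dst_port}"
    return zlib.crc32(flow.encode()) % num_cpus

# Example: all packets of this TCP session land in the same pool.
key = PacketKey("00:11:22:33:44:55", "10.0.0.7", 34567, 80)
print(select_buffer_pool(key, num_cpus=4))
```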
    • 5. Granted invention patent
    • Title: Detecting a dead gateway for subsequent non-TCP transmission by sending a first TCP packet and deleting an ARP entry associated with the gateway
    • Publication number: US06826623B1
    • Publication date: 2004-11-30
    • Application number: US09661275
    • Filing date: 2000-09-14
    • Inventors: Deanna Lynn Quigg Brown; Vinit Jain; Satya Prakesh Sharma
    • IPC: G06F15/173
    • CPC: H04L29/12018; H04L29/12009; H04L45/22; H04L45/28; H04L61/10; H04L69/40
    • Abstract: A method, computer program product and system for detecting a first-hop dead gateway. In one embodiment, a method comprises the step of sending a TCP packet of data from an application of a sender host to a receiver host through a first gateway, where the first gateway is a first-hop away from the sender host. The method further comprises the step of TCP failing to receive an acknowledgment of received data from the receiver host. The method further comprises the step of deleting an ARP entry associated with the first gateway in the sender host. The method further comprises the step of establishing a new communication using the first gateway by the application or new application of the sender host. The method further comprises the step of sending an ARP request to the first gateway by the sender host. If the sender host receives a response from the first gateway, then the method further comprises the step of sending a TCP or non-TCP packet of data using the first gateway if the new communication was a TCP or non-TCP communication, respectively. If the sender host does not receive a response from the first gateway, then the method further comprises the step of selecting an alternative path through an alternative first-hop gateway in the routing table of the sender host. The application or new application of the sender host then sends the TCP or non-TCP packet of data using the alternative gateway if the new communication was a TCP or non-TCP communication, respectively.
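The sequence the abstract describes (TCP acknowledgment timeout, ARP entry deletion, re-probing the gateway on the next communication, fallback to an alternative first-hop) can be sketched roughly as follows. Names such as arp_table, routing_table, and send_arp_request are placeholders invented for the illustration; they are not a real operating-system API, and the probe here simply simulates a gateway that never answers.

```python
arp_table = {"gw-1": "aa:bb:cc:dd:ee:01"}       # first-hop gateway -> MAC address
routing_table = ["gw-1", "gw-2"]                # candidate first-hop gateways, in order

def send_arp_request(gateway: str) -> bool:
    """Placeholder probe: True if the gateway answers the ARP request.
    Here it simulates a dead gateway that never responds."""
    return False

def on_tcp_ack_timeout(gateway: str) -> None:
    # TCP received no acknowledgment through this first-hop gateway:
    # delete its ARP entry so the next communication re-probes it.
    arp_table.pop(gateway, None)

def gateway_for_new_communication() -> str:
    # A new (TCP or non-TCP) communication re-validates the first gateway.
    primary, *alternatives = routing_table
    if primary in arp_table or send_arp_request(primary):
        return primary                          # gateway answered: keep using it
    return alternatives[0]                      # no answer: take the alternative first-hop

on_tcp_ack_timeout("gw-1")                      # ACK never arrived via gw-1
print(gateway_for_new_communication())          # -> "gw-2" while gw-1 stays silent
```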
    • 6. Granted invention patent
    • Title: System and method for sequencing packets for multiprocessor parallelization in a computer network system
    • Publication number: US06338078B1
    • Publication date: 2002-01-08
    • Application number: US09213920
    • Filing date: 1998-12-17
    • Inventors: Tai-chien Daisy Chang; Herman Dietrich Dierks, Jr.; Satya Prakesh Sharma; Helmut Cossmann; William James Hymas
    • IPC: G06F13/00
    • CPC: H04L29/06; H04L45/745; H04L67/1002; H04L69/32; H04L2029/06054
    • Abstract: Network input processing is distributed to multiple CPUs on multiprocessor systems to improve network throughput and take advantage of MP scalability. Packets received on the network are distributed to N high priority threads, wherein N is the number of CPUs on the system. N queues are provided to which the incoming packets are distributed. When one of the queues is started, one of the threads is scheduled to process packets on this queue at any one of the CPUs that is available at the time. When all of the packets on the queue are processed, the thread becomes dormant. Packets are distributed to one of the N queues by using a hashing function based on the source MAC address, source IP address, or the packet's source and destination TCP port number, or all or a combination of the foregoing. The hashing mechanism ensures that the sequence of packets within a given communication session will be preserved. Distribution is effected by the device drivers of the system. Parallelism is thereby increased on network I/O processing, eliminating CPU bottleneck for high speed network I/Os, thereby improving network performance.
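As a rough illustration of the queue-per-CPU model in this abstract, the sketch below hashes each packet's flow identifier onto one of N queues and lets one worker thread per queue drain it, blocking (going dormant) when the queue is empty. It is a simplified user-space analogy under stated assumptions: Python's queue and threading modules stand in for the kernel's per-CPU queues and high-priority threads, and the flow-identifier string is a made-up example.

```python
import queue
import threading
import zlib

NUM_CPUS = 4
queues = [queue.Queue() for _ in range(NUM_CPUS)]   # one packet queue per CPU

def distribute(flow_id: str, packet: bytes) -> None:
    """Device-driver side: hash the flow identifier so all packets of a
    session end up on the same queue and keep their order."""
    queues[zlib.crc32(flow_id.encode()) % NUM_CPUS].put(packet)

def input_processing(packet: bytes) -> None:
    pass    # stand-in for protocol input processing of one packet

def worker(q: "queue.Queue[bytes]") -> None:
    """One high-priority thread per queue: process packets, then block
    (go dormant) until more packets arrive."""
    while True:
        packet = q.get()            # dormant while the queue is empty
        input_processing(packet)
        q.task_done()

threads = [threading.Thread(target=worker, args=(q,), daemon=True) for q in queues]
for t in threads:
    t.start()

distribute("10.0.0.7:34567->10.0.0.9:80", b"payload")   # one packet of one session
for q in queues:
    q.join()                        # wait until all queued packets are processed
```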