    • 72. Invention application
    • METHOD AND SYSTEM FOR TCP/IP USING GENERIC BUFFERS FOR NON-POSTING TCP APPLICATIONS
    • WO2004019165A3
    • 2004-09-23
    • PCT/US0326122
    • 2003-08-21
    • BROADCOM CORP
    • FAN KAN FRANKIE; MCDANIEL SCOTT STERLING
    • G06F (2006.01); G06F3/00; H04L12/56; H04L29/06
    • H04L69/16; G06F9/544; H04L49/90; H04L49/9047; H04L49/9073; H04L69/163
    • Aspects of the invention for posting buffers for a non-posting TCP application (402) may comprise posting at least one generic buffer (426) located in a memory external to a host adapter (406) and transferring incoming data for a TCP connection to the posted generic buffer prior to the non-posting TCP application posting a TCP application buffer for the incoming data. At least one generic buffer may be allocated from a pool of available generic buffers (426a, 426b, 426n) upon receipt of the incoming TCP connection data. At least a portion of the incoming data may be stored in the allocated generic buffer if the TCP application buffer is unable to accommodate the incoming data. The method may further determine whether the incoming data for the TCP connection transferred to the posted generic buffer is in sequence and ordering the incoming data based on a sequence number if the incoming data is out of sequence.
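The buffering scheme in this abstract lends itself to a small software illustration. Below is a minimal sketch, assuming generic buffers are drawn from a host-side pool whenever the application has not yet posted its own buffer, and assuming out-of-sequence segments are parked keyed by TCP sequence number until they can be delivered in order; the names (GenericBufferPool, Connection, park, drain) are invented for the example and are not from the patent.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Pool of "generic" buffers held in host memory, outside the adapter.
struct GenericBufferPool {
    std::vector<std::vector<uint8_t>> free_buffers;

    GenericBufferPool(size_t count, size_t size) {
        free_buffers.assign(count, std::vector<uint8_t>(size));
    }
    // Allocate a generic buffer when connection data arrives and no
    // application buffer is available; returns nullopt if exhausted.
    std::optional<std::vector<uint8_t>> allocate() {
        if (free_buffers.empty()) return std::nullopt;
        auto buf = std::move(free_buffers.back());
        free_buffers.pop_back();
        return buf;
    }
};

// Per-connection state: segments parked in generic buffers, keyed by
// TCP sequence number so out-of-order arrivals can be reordered.
struct Connection {
    uint32_t next_expected_seq = 0;
    std::map<uint32_t, std::vector<uint8_t>> parked;  // seq -> data

    // Called for incoming data when the application has not yet posted
    // a buffer: copy the payload into a generic buffer and park it.
    bool park(GenericBufferPool& pool, uint32_t seq, const std::string& payload) {
        auto buf = pool.allocate();
        if (!buf) return false;                       // pool exhausted
        buf->assign(payload.begin(), payload.end());
        parked[seq] = std::move(*buf);
        return true;
    }

    // Later, when the application posts its buffer, drain parked data
    // in sequence order, stopping at any gap.
    void drain(std::vector<uint8_t>& app_buffer) {
        while (!parked.empty() && parked.begin()->first == next_expected_seq) {
            auto& data = parked.begin()->second;
            app_buffer.insert(app_buffer.end(), data.begin(), data.end());
            next_expected_seq += static_cast<uint32_t>(data.size());
            parked.erase(parked.begin());
        }
    }
};

int main() {
    GenericBufferPool pool(4, 1460);
    Connection conn;
    conn.park(pool, 6, "world");   // arrives out of order
    conn.park(pool, 0, "hello ");  // fills the gap
    std::vector<uint8_t> app_buffer;  // posted later by the application
    conn.drain(app_buffer);
    std::cout << std::string(app_buffer.begin(), app_buffer.end()) << "\n";  // "hello world"
}
```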
    • 73. Invention application
    • SYSTEM AND METHOD FOR TCP OFFLOAD
    • WO2004021627A3
    • 2004-09-16
    • PCT/US0327231
    • 2003-08-29
    • BROADCOM CORP
    • ELZUR URI; FAN FRANKIE; LINDSAY STEVE; MCDANIEL SCOTT S
    • H04L (2006.01); H04L12/56; H04L29/06
    • H04L29/06; H04L47/193; H04L47/2441; H04L47/34; H04L49/90; H04L49/9063; H04L49/9073; H04L49/9094; H04L69/10; H04L69/12; H04L69/16; H04L69/161; H04L69/162; H04L69/163; H04L69/166
    • Aspects of the invention may comprise receiving an incoming TCP packet at a TEEC (270) and processing at least a portion of the incoming packet once by the TEEC (270) without having to do any reassembly and/or retransmission by the TEEC (270). At least a portion of the incoming TCP packet may be buffered in at least one internal elastic buffer (280,290) of the TEEC (270). The internal elastic buffer (280,290) may comprise a receive internal elastic buffer (290) and/or a transmit internal elastic buffer (280). Accordingly, at least a portion of the incoming TCP packet may be buffered in the receive internal elastic buffer (290). At least a portion of the processed incoming packet may be placed in a portion of a host memory (230) for processing by a host processor or CPU (210). Furthermore, at least a portion of the processed incoming TCP packet may be DMA transferred to a portion of the host memory (230).
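A rough software model of the receive path this abstract describes, assuming the TEEC's internal elastic buffers can be approximated by bounded in-memory queues and the DMA transfer by a plain copy into a host-memory region. The class and function names (ElasticBuffer, Teec, process_into) are hypothetical, and the "header" handling is a stand-in for the single-pass TCP processing.

```cpp
#include <cstdint>
#include <cstring>
#include <deque>
#include <iostream>
#include <string>
#include <vector>

// Bounded queue standing in for an internal elastic buffer (receive or transmit).
class ElasticBuffer {
    std::deque<std::vector<uint8_t>> slots;
    size_t capacity;
public:
    explicit ElasticBuffer(size_t cap) : capacity(cap) {}
    bool push(std::vector<uint8_t> pkt) {
        if (slots.size() >= capacity) return false;   // back-pressure
        slots.push_back(std::move(pkt));
        return true;
    }
    bool pop(std::vector<uint8_t>& pkt) {
        if (slots.empty()) return false;
        pkt = std::move(slots.front());
        slots.pop_front();
        return true;
    }
};

// Minimal TEEC model: buffer the incoming frame once in the receive
// elastic buffer, strip the (fake) header in a single pass, and "DMA"
// the payload into host memory for the CPU to consume.
class Teec {
    ElasticBuffer rx{8};   // receive internal elastic buffer
    ElasticBuffer tx{8};   // transmit internal elastic buffer (unused here)
public:
    bool receive(std::vector<uint8_t> frame) { return rx.push(std::move(frame)); }

    // Process each buffered frame exactly once; no reassembly or
    // retransmission state is kept inside the TEEC in this sketch.
    void process_into(std::vector<uint8_t>& host_memory, size_t header_len) {
        std::vector<uint8_t> frame;
        while (rx.pop(frame)) {
            if (frame.size() <= header_len) continue;          // nothing to deliver
            // memcpy stands in for the DMA transfer to host memory.
            size_t old = host_memory.size();
            host_memory.resize(old + frame.size() - header_len);
            std::memcpy(host_memory.data() + old, frame.data() + header_len,
                        frame.size() - header_len);
        }
    }
};

int main() {
    Teec teec;
    std::string frame = "HDRpayload";           // 3-byte fake header + payload
    teec.receive(std::vector<uint8_t>(frame.begin(), frame.end()));
    std::vector<uint8_t> host_memory;
    teec.process_into(host_memory, 3);
    std::cout << std::string(host_memory.begin(), host_memory.end()) << "\n";  // "payload"
}
```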
    • 74. Invention application
    • INTEGRATED CIRCUIT AND METHOD FOR ESTABLISHING TRANSACTIONS
    • WO2004034676A1
    • 2004-04-22
    • PCT/IB2003/003036
    • 2003-07-04
    • KONINKLIJKE PHILIPS ELECTRONICS N.V.; GOOSSENS, Kees, G., W.
    • GOOSSENS, Kees, G., W.
    • H04L29/12
    • H04L47/10; H04L47/32; H04L49/9073
    • An integrated circuit comprising a plurality of modules (M, S) and a network (N) arranged for transferring messages between said modules (M, S) is provided, wherein a message issued by a first module (M) comprises first information indicative for a location of an addressed module within the network (N), and second information indicative for a location within the addressed module (S). Said integrated circuit further comprises at least one address translation means (AT) for arranging the first and the second information as a single address. Said address translation means (AT) is adapted to determine which module is addressed based on said single address, and the selected location of the addressed module (S) is determined based on said single address. Accordingly, the design of the first modules (M), i.e. master modules, can be implemented independently of the address mapping to the addressed modules (S), i.e. the slave modules. Furthermore, a more efficient network resource utilization is achieved and this scheme is backward compatible with busses.
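A toy illustration of the single-address idea in this abstract, assuming the combined address is simply a bit-field concatenation of a module identifier and an offset inside that module; the bit widths and function names are made up for the example.

```cpp
#include <cstdint>
#include <iostream>

// Assumed layout of the single address handled by the address translation
// unit (AT): high bits select the slave module, low bits give the location
// inside that module. Widths are illustrative only.
constexpr unsigned kOffsetBits = 24;
constexpr uint32_t kOffsetMask = (1u << kOffsetBits) - 1;

// Master side: combine "which module" and "where inside it" into one address.
uint32_t make_single_address(uint32_t module_id, uint32_t offset) {
    return (module_id << kOffsetBits) | (offset & kOffsetMask);
}

// Network/AT side: recover the addressed module and the internal location.
void decode_single_address(uint32_t addr, uint32_t& module_id, uint32_t& offset) {
    module_id = addr >> kOffsetBits;
    offset = addr & kOffsetMask;
}

int main() {
    uint32_t addr = make_single_address(/*module_id=*/5, /*offset=*/0x001234);
    uint32_t module_id = 0, offset = 0;
    decode_single_address(addr, module_id, offset);
    std::cout << "module " << module_id << ", offset 0x" << std::hex << offset << "\n";
}
```

Because the master only emits a single address, the mapping from address ranges to slave modules can be changed inside the translation step without touching the master's design, which is the decoupling the abstract points to.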
    • 76. Invention application
    • EXTENDED INSTRUCTION SET FOR PACKET PROCESSING APPLICATIONS
    • WO2003023556A2
    • 2003-03-20
    • PCT/US2002/026474
    • 2002-08-19
    • CLEARWATER NETWORKS, INC.
    • MUSOLL, Enrique; NEMIROVSKY, Mario; MELVIN, Stephen
    • G06F
    • G06F9/30003; G06F9/30101; G06F9/3851; G06F9/3885; G06F9/5016; G06F9/546; H04L47/2441; H04L47/32; H04L47/621; H04L47/6215; H04L49/201; H04L49/205; H04L49/90; H04L49/901; H04L49/9073
    • A software program extension for a dynamic multi-streaming processor is disclosed. The extension comprising an instruction set enabling coordinated interaction between a packet management component and a core processing component of the processor. The software program comprises, a portion thereof for managing packet uploads and downloads into and out of memory, a portion thereof for managing specific memory allocations and de-allocations associated with enqueueing and dequeuing data packets, a portion thereof for managing the use of multiple contexts dedicated to the processing of a single data packet; and a portion thereof for managing selection and utilization of arithmetic and other context memory functions associated with data packet processing. The extension complements standard data packet processing program architecture for specific use for processors having a packet management unit that functions independently from a streaming processor unit.
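Since this abstract describes an instruction-set extension rather than a specific algorithm, the sketch below only mimics the grouping of operations it lists: packet upload/download, enqueue/dequeue memory management, and dedicating/releasing contexts. The opcodes and the tiny dispatcher are entirely hypothetical and are not the patent's actual instruction encodings.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

// Hypothetical opcodes mirroring the groups of operations named in the
// abstract; real hardware would execute these as single instructions.
enum class Op : uint8_t {
    PKT_UPLOAD,    // move a packet from packet memory into working memory
    PKT_DOWNLOAD,  // move a processed packet back out
    Q_ALLOC,       // allocate queue memory when enqueueing a packet
    Q_FREE,        // release queue memory when dequeuing a packet
    CTX_ACQUIRE,   // dedicate an extra context to the current packet
    CTX_RELEASE,   // give the context back to the pool
};

struct Machine {
    std::deque<std::vector<uint8_t>> packet_memory;   // packets waiting for upload
    std::vector<std::vector<uint8_t>> working;        // uploaded packets
    int free_queue_slots = 16;
    int free_contexts = 8;

    // A toy dispatcher standing in for the streaming cores' execution units.
    void execute(Op op) {
        switch (op) {
            case Op::PKT_UPLOAD:
                if (!packet_memory.empty()) {
                    working.push_back(std::move(packet_memory.front()));
                    packet_memory.pop_front();
                }
                break;
            case Op::PKT_DOWNLOAD:
                if (!working.empty()) working.pop_back();
                break;
            case Op::Q_ALLOC:     if (free_queue_slots > 0) --free_queue_slots; break;
            case Op::Q_FREE:      ++free_queue_slots; break;
            case Op::CTX_ACQUIRE: if (free_contexts > 0) --free_contexts; break;
            case Op::CTX_RELEASE: ++free_contexts; break;
        }
    }
};

int main() {
    Machine m;
    m.packet_memory.push_back({0xDE, 0xAD, 0xBE, 0xEF});
    for (Op op : {Op::Q_ALLOC, Op::PKT_UPLOAD, Op::CTX_ACQUIRE,
                  Op::CTX_RELEASE, Op::PKT_DOWNLOAD, Op::Q_FREE}) {
        m.execute(op);
    }
    std::cout << "free contexts: " << m.free_contexts
              << ", free queue slots: " << m.free_queue_slots << "\n";
}
```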
    • 77. Invention application
    • VARIABLE SIZE FIRST IN FIRST OUT (FIFO) MEMORY WITH HEAD AND TAIL CACHING
    • WO2003017541A1
    • 2003-02-27
    • PCT/US2002/025425
    • 2002-08-09
    • INTERNET MACHINES CORPORATION
    • HAYWOOD, Chris
    • H04J1/16
    • G06F12/0875; G06F5/10; G06F2205/108; H04L49/90; H04L49/9042; H04L49/9073
    • A variable size FIFO memory (13) is provided by the use of head (17) and tail (16) FIFO memories operating at a very high data rate and then an off chip buffer memory (18), for example, of a dynamic RAM type, which temporarily stores data packets when both head (17) and tail (16) FIFO memories are filled. Data blocks of each of the memories are the same size for efficient transfer of data. After a sudden data burst which causes memory overflow ceases, the head (17) and tail (16) FIFO memories return to the initial functions with the head memory directly receiving high speed data and transmitting it to various switching elements and the tail (16) FIFO memory stores temporary overflows of data from the head (17) FIFO memory.
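A simplified software analogue of the head/tail caching described above, assuming the fast on-chip head and tail FIFOs can be modelled as small bounded queues and the off-chip DRAM buffer as an unbounded one. In this model, blocks go straight to the head while the overflow path is empty, overflow is absorbed by the tail cache and spilled wholesale to the DRAM stage when the tail fills, and reads drain the head, which refills from DRAM first and then from the tail. All names and capacities are illustrative.

```cpp
#include <deque>
#include <iostream>
#include <optional>

// One logical variable-size FIFO built from three physical stages.
class CachedFifo {
    std::deque<int> head;   // small, fast: feeds the switching elements
    std::deque<int> tail;   // small, fast: absorbs incoming bursts
    std::deque<int> dram;   // large, slower off-chip overflow buffer
    size_t head_cap, tail_cap;

    void refill_head() {
        // Keep FIFO order: the head is fed from DRAM first, then from the tail.
        while (head.size() < head_cap && !dram.empty()) {
            head.push_back(dram.front());
            dram.pop_front();
        }
        while (head.size() < head_cap && dram.empty() && !tail.empty()) {
            head.push_back(tail.front());
            tail.pop_front();
        }
    }

public:
    CachedFifo(size_t hc, size_t tc) : head_cap(hc), tail_cap(tc) {}

    void push(int block) {
        if (dram.empty() && tail.empty() && head.size() < head_cap) {
            head.push_back(block);           // normal case: straight to the head
            return;
        }
        tail.push_back(block);               // otherwise absorb in the tail cache
        if (tail.size() > tail_cap) {        // tail overflowed: spill it to DRAM
            while (!tail.empty()) {
                dram.push_back(tail.front());
                tail.pop_front();
            }
        }
    }

    std::optional<int> pop() {
        if (head.empty()) refill_head();
        if (head.empty()) return std::nullopt;
        int block = head.front();
        head.pop_front();
        refill_head();                       // keep the head cache primed
        return block;
    }
};

int main() {
    CachedFifo fifo(/*head_cap=*/2, /*tail_cap=*/2);
    for (int i = 0; i < 7; ++i) fifo.push(i);     // burst larger than both caches
    while (auto block = fifo.pop()) std::cout << *block << ' ';  // 0 1 2 3 4 5 6
    std::cout << '\n';
}
```

The ordering invariant in this model is that the tail cache spills to the DRAM stage before accepting newer data, so the head can always refill oldest-first and the logical FIFO order is preserved even through a burst.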
    • 78. Invention application
    • SELECTIVE ROUTING OF DATA FLOWS USING A TCAM
    • WO03012672A2
    • 2003-02-13
    • PCT/US0221229
    • 2002-07-03
    • NOKIA INC
    • MATE ASHUTOSH; MAHAMUNI ATUL; CHANDER VIJAY
    • H04L12/56; G06F15/173
    • H04L47/6215; H04L12/5601; H04L45/00; H04L45/50; H04L45/54; H04L45/7457; H04L47/10; H04L47/2441; H04L47/50; H04L47/627; H04L49/90; H04L49/901; H04L49/9036; H04L49/9047; H04L49/9073
    • The present invention relates to a method and system for supporting in a router a plurality of data flows using a ternary content addressable memory (TCAM) (30) in which the number of accesses to write to the TCAM (30) is optimized to improve efficiency of updating and subsequent look up. To accommodate the plurality of data flows, the TCAM is partitioned into at least two partitions in which a first portion includes indices (32) having a higher priority and a second portion includes indices having a lower priority. For example, multiple protocol label switching (MPLS) flows and IP-Virtual Private Network (VPN) (36) can be added to the first partition and policy based routing flows can be added to the second partition. During subsequent TCAM look-up of a prefix of an incoming packet the MPLS or IP-VPN flow will subsume any matching policy based routing flow, such as flows classified by an access control list or traffic manager flows.
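A small software stand-in for the partitioned TCAM lookup in this abstract, assuming TCAM behaviour can be modelled as a first-match scan over value/mask entries stored in index order, with MPLS and IP-VPN flows kept in the low-index (higher-priority) partition and policy-based-routing flows in the high-index partition. The data structures, prefixes, and flow labels are invented for the example.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// One TCAM entry: a packet prefix matches if (key & mask) == value.
struct TcamEntry {
    uint32_t value;
    uint32_t mask;
    std::string flow;   // e.g. the MPLS label, VPN route, or PBR action
};

// TCAM split into two partitions: indices in the first partition have
// higher priority, so MPLS/IP-VPN flows placed there subsume any
// matching policy-based-routing flow in the second partition.
class PartitionedTcam {
    std::vector<TcamEntry> high;   // partition 1: MPLS / IP-VPN flows
    std::vector<TcamEntry> low;    // partition 2: policy-based routing flows
public:
    void add_high_priority(TcamEntry e) { high.push_back(std::move(e)); }
    void add_low_priority(TcamEntry e)  { low.push_back(std::move(e)); }

    // A real TCAM compares all entries in parallel and reports the lowest
    // matching index; this sequential scan yields the same result.
    std::optional<std::string> lookup(uint32_t key) const {
        for (const auto& e : high)
            if ((key & e.mask) == e.value) return e.flow;
        for (const auto& e : low)
            if ((key & e.mask) == e.value) return e.flow;
        return std::nullopt;
    }
};

int main() {
    PartitionedTcam tcam;
    // 10.1.0.0/16 carried over an IP-VPN (high-priority partition).
    tcam.add_high_priority({0x0A010000, 0xFFFF0000, "ip-vpn:customer-A"});
    // 10.0.0.0/8 matched by an access-control-list PBR rule (low priority).
    tcam.add_low_priority({0x0A000000, 0xFF000000, "pbr:acl-redirect"});

    uint32_t dst = 0x0A010203;   // 10.1.2.3 matches both; the VPN flow wins
    if (auto flow = tcam.lookup(dst)) std::cout << *flow << "\n";  // ip-vpn:customer-A
}
```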
    • 80. Invention application
    • METHOD AND APPARATUS FOR OPTIMIZING SELECTION OF AVAILABLE CONTEXTS FOR PACKET PROCESSING IN MULTI-STREAM PACKET PROCESSING
    • WO2002102001A1
    • 2002-12-19
    • PCT/US2002/012469
    • 2002-04-18
    • CLEARWATER NETWORKS, INC.
    • MUSOLL, Enrique; NEMIROVSKY, Mario
    • H04L12/56
    • H04L29/06; G06F9/546; H04L47/2441; H04L47/32; H04L47/621; H04L47/6215; H04L49/201; H04L49/205; H04L49/90; H04L49/901; H04L49/9073; H04L69/12; H04L69/22
    • A context-selection mechanism (201) is provided for selecting a best context from a pool of contexts (229) for processing a data packet. The context selection mechanism (201) comprises, an interface (203) for communicating with a multi-streaming processor (105); circuitry for computing input data into a result value according to logic rule and for selecting a context based on the computed value and a loading mechanism (219) for preloading the packet information into the selected context for subsequent processing. The computation of the input data functions to enable identification and selection of a best context for processing a data packet according to the logic rule at the instant time such that a multitude of subsequent context selections over a period of time acts to balance load pressure on functional units housed within the multi-streaming processor (105) and required for packet processing. In preferred aspects, programmable singular or multiple predictive rules of logic are utilized in the selection process.
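A schematic version of the selection step described above, assuming the "logic rule" can be represented as a score computed from each context's current pressure on the functional units it would use, with the least-loaded free context chosen and the packet information preloaded into it. The scoring rule and field names are assumptions made for this sketch, not the patent's actual rule.

```cpp
#include <array>
#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

struct Packet {
    uint32_t flow_id;
    std::array<uint8_t, 4> header;   // packet information to preload
};

// One hardware context in the pool: whether it is free and how much
// pressure it currently puts on each class of functional unit.
struct Context {
    bool busy = false;
    std::array<int, 3> unit_load{};          // e.g. ALU, memory, lookup units
    std::array<uint8_t, 4> preloaded{};      // header bytes loaded for processing
};

// The "logic rule": here simply the total functional-unit pressure of the
// context; lower is better. Real rules could be programmable and predictive.
int score(const Context& c) {
    return c.unit_load[0] + c.unit_load[1] + c.unit_load[2];
}

// Pick the best free context, preload the packet into it, and bump the
// load of the unit this flow is expected to stress most.
int select_and_preload(std::vector<Context>& pool, const Packet& pkt) {
    int best = -1, best_score = std::numeric_limits<int>::max();
    for (size_t i = 0; i < pool.size(); ++i) {
        if (pool[i].busy) continue;
        int s = score(pool[i]);
        if (s < best_score) { best_score = s; best = static_cast<int>(i); }
    }
    if (best < 0) return -1;                       // no free context available
    Context& ctx = pool[best];
    ctx.busy = true;
    ctx.preloaded = pkt.header;                    // preload packet information
    ctx.unit_load[pkt.flow_id % 3] += 1;           // balance pressure over units
    return best;
}

int main() {
    std::vector<Context> pool(4);
    pool[0].unit_load = {3, 1, 0};                 // context 0 is already loaded
    Packet pkt{7, {0xDE, 0xAD, 0xBE, 0xEF}};
    int chosen = select_and_preload(pool, pkt);
    std::cout << "selected context " << chosen << "\n";   // a lightly loaded one
}
```

Repeating this selection for every arriving packet is what spreads load over the functional units; over time the contexts with the least pressure keep absorbing new packets, which is the balancing effect the abstract claims.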