    • 1. Invention grant
    • Title: Mixed topology data switching system
    • Publication No.: US06463065B1
    • Publication date: 2002-10-08
    • Application No.: US09340577
    • Filing date: 1999-06-28
    • Inventors: Brian A. Petersen; Harish R. Devanagondi; James R. Rivers
    • IPC: H04L12/28
    • CPC: H04L47/32; H04L12/44; H04L29/06; H04L47/13; H04L47/2433; H04L47/30; H04L49/00; H04L49/103; H04L49/108; H04L49/3072; H04L49/309; H04L2012/5651; H04L2012/5667
    • Abstract: Methods and apparatus for enabling communication between a source network device and one or more destination network devices are disclosed. A system enabling communication between a source network device and one or more destination network devices includes a switch and a ring interconnect. The switch is adapted for connecting to the source network device and the one or more destination network devices. More particularly, the switch is capable of storing data provided by the source network device and retrieving the data for the one or more destination network devices. The ring interconnect is adapted for connecting the source network device and the one or more destination network devices to one another. In addition, the ring interconnect is capable of passing one or more free slot symbols along the ring interconnect. Thus, the ring interconnect is capable of expanding when one or more of the free slot symbols are each replaced by a frame notify message indicating that the data has been stored by the switch for retrieval by the one or more destination network devices.
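The abstract describes a ring interconnect that circulates free slot symbols, each of which can be replaced by a frame notify message once the switch has stored data for retrieval. The following is a minimal Python sketch of that slot-replacement idea only; the class and field names (RingSlot behavior via FREE, FrameNotify, post_notify) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class FrameNotify:
    """Hypothetical notification that a frame is stored in the switch."""
    frame_id: int
    destinations: set

FREE = None  # a free slot symbol is modeled here as an empty slot

class RingInterconnect:
    """Toy model: slots circulate; a free slot may carry a frame notify."""
    def __init__(self, num_slots: int):
        self.slots = deque([FREE] * num_slots)

    def advance(self):
        """Pass every slot one position along the ring."""
        self.slots.rotate(1)

    def post_notify(self, notify: FrameNotify) -> bool:
        """Replace the first free slot symbol with a frame notify message."""
        for i, slot in enumerate(self.slots):
            if slot is FREE:
                self.slots[i] = notify
                return True
        return False  # no free slot currently available on the ring

ring = RingInterconnect(num_slots=4)
ring.post_notify(FrameNotify(frame_id=7, destinations={2, 5}))
ring.advance()
```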
    • 2. Invention grant
    • Title: Methods and apparatus for providing interfaces for mixed topology data switching system
    • Publication No.: US06526452B1
    • Publication date: 2003-02-25
    • Application No.: US09340855
    • Filing date: 1999-06-28
    • Inventors: Brian A. Petersen; Harish R. Devanagondi; James R. Rivers
    • IPC: G06F15/16
    • CPC: H04L12/44; H04L12/42; H04L49/00; H04L49/103; H04L49/108; H04L49/3072; H04L49/309; H04L2012/5651; H04L2012/5667
    • Abstract: Methods and apparatus for providing a source interface device and destination interface device are disclosed. A method of enabling communication between the source device and one or more destination devices includes sending data from the source device to the switch for storage. A frame notify message addressed to the one or more destination devices and indicating that the data has been stored by the switch for retrieval is then sent on the ring interconnect. One of the specified destination devices obtains the frame notify message from the source device via the ring interconnect. A frame retrieval message identifying the data is then sent from the destination device to the switch in response to the frame notify message. In addition, the destination device modifies the frame notify message to indicate whether the destination device was capable of accepting the frame notify message. The modified frame notify message is then sent on the ring interconnect for retrieval by another destination device or the source device.
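This second abstract adds the destination-side handshake: a destination takes a frame notify message off the ring, answers with a frame retrieval message, and marks the notify to show whether it could accept it before forwarding it. Below is a hedged sketch of that flow under assumed names (handle_notify, accepted_by, FrameRetrieval); the real interface devices are hardware, and none of these identifiers come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class FrameNotify:
    frame_id: int
    destinations: set
    accepted_by: set = field(default_factory=set)  # assumed acceptance marker

@dataclass
class FrameRetrieval:
    frame_id: int
    destination: int

def handle_notify(notify: FrameNotify, my_id: int, can_accept: bool):
    """Destination-side handling of a frame notify message (illustrative).

    Returns an optional FrameRetrieval to send to the switch, plus the
    (possibly modified) notify to forward along the ring interconnect.
    """
    retrieval = None
    if my_id in notify.destinations and can_accept:
        retrieval = FrameRetrieval(frame_id=notify.frame_id, destination=my_id)
        notify.accepted_by.add(my_id)  # record acceptance before forwarding
    return retrieval, notify

retrieval, forwarded = handle_notify(
    FrameNotify(frame_id=7, destinations={2, 5}), my_id=2, can_accept=True)
```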
    • 3. Invention grant
    • Title: Multi-slice network processor
    • Publication No.: US07486678B1
    • Publication date: 2009-02-03
    • Application No.: US10612889
    • Filing date: 2003-07-03
    • Inventors: Harish R. Devanagondi; Harish P. Belur; Brian A. Petersen; Richard J. Heaton; Majid Torabi
    • IPC: H04L12/28; H04L12/56; H04L12/54
    • CPC: H04L49/608; H04L49/201; H04L49/252; H04L49/30; H04L49/3009; H04L49/3072
    • Abstract: A multi-slice network processor processes a packet in packet slices for transfer over a multi-port network interface such as a switch fabric. The network processor segments a packet into cells having a target size. A group of cells of a common packet form a packet slice which is independently processed by one of a number of parallel processing and storage slices. Load balancing may be used in the selection of processing slices. Furthermore, the network processor may load balance slices across the multi-port network interface to one or more destination slices of another network processor. The multi-slice processor uses post header storage delivery on ingress processing to the multi-port interface thereby reducing temporary storage requirements. The multi-slice network processor may also utilize sequence numbers associated with each packet to ensure that prior to transmission onto a destination network, the packet is in the correct order for a communication flow.
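This abstract outlines segmenting a packet into cells of a target size, sending all cells of a packet to one of several parallel processing slices, and tagging packets with sequence numbers so ordering can be restored. A rough Python sketch of the segmentation and slice selection follows; the cell size, slice count, and least-loaded balancing rule are assumptions for illustration, not the patent's method.

```python
from itertools import count

CELL_SIZE = 64          # assumed target cell size in bytes
NUM_SLICES = 4          # assumed number of parallel slices
slice_load = [0] * NUM_SLICES
seq = count()           # a real device would keep sequence numbers per flow

def segment(packet: bytes, cell_size: int = CELL_SIZE):
    """Split a packet into cells of roughly the target size."""
    return [packet[i:i + cell_size] for i in range(0, len(packet), cell_size)]

def dispatch(packet: bytes):
    """Send all cells of one packet to a single slice, chosen by least load."""
    cells = segment(packet)
    target = min(range(NUM_SLICES), key=lambda s: slice_load[s])
    slice_load[target] += len(packet)
    return {"slice": target, "seq": next(seq), "cells": cells}

job = dispatch(b"\x00" * 200)   # a 200-byte packet becomes 4 cells on one slice
```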
    • 4. Invention grant
    • Title: Single-chip multi-port Ethernet switch
    • Publication No.: US07289537B1
    • Publication date: 2007-10-30
    • Application No.: US10700385
    • Filing date: 2003-11-03
    • Inventors: Harish R. Devanagondi; Harish P. Belur; Brian A. Petersen
    • IPC: H04J3/24
    • CPC: H04L49/9094; H04L49/109; H04L49/205; H04L49/3009; H04L49/3018; H04L49/3072; H04L49/351; H04L49/503; H04L49/90
    • Abstract: An architecture for a multi-port switching device is described having a very regular structure that lends itself to scaling for performance speed and a high level of integration. The distribution of packet data internal to the chip is described as using a cell-based TDM packet transport configuration such as a ring. Similarly, a method of memory allocation in a transmit buffer of each port allows for reassembly of the cells of a packet for storage in a contiguous manner in a queue. Each port includes multiple queues. The destination queue and port for a packet is identified in a multi-bit destination map that is prepended to the start cell of the packet and used by a port to identify packets destined for it. The architecture is useful for a single-chip multi-port Ethernet switch where each of the ports is capable of 10 Gbps data rates.
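The central mechanism in this entry is a multi-bit destination map prepended to a packet's start cell, which each port checks to decide whether the packet is destined for it. A small sketch of that bitmap test follows, assuming port indices map directly to bit positions; the function names and the 24-port default are illustrative only.

```python
def build_dest_map(dest_ports, num_ports: int = 24) -> int:
    """Build a multi-bit destination map with one bit per output port."""
    bitmap = 0
    for p in dest_ports:
        if not 0 <= p < num_ports:
            raise ValueError(f"port {p} out of range")
        bitmap |= 1 << p
    return bitmap

def port_accepts(dest_map: int, port: int) -> bool:
    """A port keeps a start cell only if its bit is set in the map."""
    return bool(dest_map & (1 << port))

dest_map = build_dest_map({1, 5, 9})
assert port_accepts(dest_map, 5) and not port_accepts(dest_map, 3)
```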
    • 5. Invention grant
    • Title: System and method for queue management using queue sets
    • Publication No.: US07782885B1
    • Publication date: 2010-08-24
    • Application No.: US10734081
    • Filing date: 2003-12-10
    • Inventors: Simon Sabato; Harish R. Devanagondi; You-Wen Yi; Harish P. Belur
    • IPC: H04L12/28; H04L12/56
    • CPC: H04L49/90; H04L47/22; H04L47/50; H04L47/527; H04L47/6205; H04L49/9015
    • Abstract: The disclosure describes queue management based on queue sets. A queue set comprises a group of packets or packet references that are processed as a single entity or unit. For example, when a queue set reaches the head of a queue in which it is stored, the entire queue set including its packets or packet references is passed for scheduling as a single unit. A queue set provides the benefit of a single operation associated with enqueuing and a single operation associated with dequeuing. Since only one operation on a queue is required for the typical case of several packets in a queue set rather than for every packet, the rate of queue operations may be significantly reduced. A queue set has a target data unit size, for example, a roughly equal number of packet bytes represented by each queue set, regardless of the number of packets referenced by a queue set. This means that a scheduler of a queue manager, which is tasked with metering the number of packet bytes transmitted from each queue per time unit, is provided with a list of packets which represents a predictable quantity of packet bytes, and this predictability streamlines the scheduling task and significantly reduces the number of operations.
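A queue set groups several packet references into one unit of roughly constant byte size, so the queue manager enqueues and dequeues once per set instead of once per packet. A minimal sketch under that reading follows; QUEUE_SET_TARGET_BYTES and the builder class are assumptions used only to make the bookkeeping concrete.

```python
from collections import deque

QUEUE_SET_TARGET_BYTES = 2048   # assumed target byte size per queue set

class QueueSetBuilder:
    """Accumulate packet references until the target byte size is reached."""
    def __init__(self, queue: deque):
        self.queue = queue
        self.current = []
        self.current_bytes = 0

    def enqueue_packet(self, pkt_ref, length: int):
        self.current.append(pkt_ref)
        self.current_bytes += length
        if self.current_bytes >= QUEUE_SET_TARGET_BYTES:
            # one queue operation covers the whole set of packets
            self.queue.append(self.current)
            self.current, self.current_bytes = [], 0

q = deque()
builder = QueueSetBuilder(q)
for i in range(10):
    builder.enqueue_packet(pkt_ref=i, length=512)   # four 512-byte packets per set

# dequeuing one queue set hands the scheduler ~2 KB of packets in one operation
queue_set = q.popleft()
```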
    • 8. Invention grant
    • Title: High bandwidth memory management using multi-bank DRAM devices
    • Publication No.: US07296112B1
    • Publication date: 2007-11-13
    • Application No.: US10734082
    • Filing date: 2003-12-10
    • Inventors: Ramesh Yarlagadda; Shwetal Desai; Harish R. Devanagondi
    • IPC: G06F12/00
    • CPC: G06F12/06
    • Abstract: The disclosure describes implementations for accessing in parallel a plurality of banks across a plurality of DRAM devices. These implementations are suited for operation within a parallel packet processor. A data word is partitioned into data segments which are stored in the plurality of banks in accordance with an access scheme that hides pre-charging of rows behind data transfers. A storage distribution control module is communicatively coupled to a memory comprising a plurality of storage request queues, and a retrieval control module is communicatively coupled to a memory comprising a plurality of retrieval request queues. In one example, each request queue may be implemented as a first-in-first-out (FIFO) memory buffer. The plurality of storage request queues are subdivided into sets, as are the plurality of retrieval queues. Each set is associated with a respective DRAM device. A scheduler for each respective DRAM device schedules data transfer between its respective storage queue set and the DRAM device and between its retrieval queue set and the DRAM device independently of the scheduling of the other devices, but based on a shared criteria for queue service.
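This last abstract describes striping a data word into segments across the banks of several DRAM devices so that a row pre-charge in one bank can be hidden behind a data transfer from another, with per-device schedulers serving FIFO request queues. The interleaving itself can be sketched as below; the segment size, device and bank counts, and round-robin placement are assumptions, not the patent's access scheme.

```python
NUM_DEVICES = 2          # assumed number of DRAM devices
BANKS_PER_DEVICE = 4     # assumed banks per device
SEGMENT_BYTES = 16       # assumed segment size

def stripe(word: bytes):
    """Partition a data word into segments and assign each to a
    (device, bank) pair in round-robin order, so successive segments
    hit different devices and banks and a row pre-charge in one bank
    can overlap a data transfer from another."""
    segments = [word[i:i + SEGMENT_BYTES]
                for i in range(0, len(word), SEGMENT_BYTES)]
    placement = []
    for n, seg in enumerate(segments):
        device = n % NUM_DEVICES
        bank = (n // NUM_DEVICES) % BANKS_PER_DEVICE
        placement.append((device, bank, seg))
    return placement

layout = stripe(b"\xab" * 128)   # 8 segments spread over 2 devices x 4 banks
```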