    • 11. Invention Grant
    • Title: System for reverse address resolution for remote network device independent of its physical address
    • Publication No.: US5526489A
    • Publication Date: 1996-06-11
    • Application No.: US33914
    • Filing Date: 1993-03-19
    • Inventors: Chandrasekharan Nilakantan; Ly Loi; Nagaraj Arunkumar; Michael J. Seaman
    • IPC: G06F13/00; H04L12/46; H04L12/66; H04L12/931; H04L29/06; H04L29/12; G06F15/16; G06F15/163; G06F15/177
    • CPC: H04L29/12018; H04L29/06; H04L29/12009; H04L49/351; H04L61/10
    • Abstract: A reverse address resolution protocol for use in a communication network which allows resolution logic to provide higher level protocol information (such as an IP address) to the source of a request for such information, independent of the physical network address of that source. The protocol is used in a processor having a plurality of ports, at least one of which is connected by a point-to-point channel to a remote network device. The reverse address resolution protocol is responsive to a resolution request from the remote network device across the point-to-point channel, supplying the higher level protocol information based upon the port through which the resolution request is received rather than the physical network address of the requesting device. Thus, a remote device may be coupled to a network, and connected to a central management site across a point-to-point communication link, in a "plug and play" mode. The person connecting the device to the remote network does not need to determine the physical network address of the device or configure the device with a higher level address protocol. (See the sketch below.)
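A minimal sketch of the port-keyed resolution described above, assuming a simple in-memory port-to-configuration table; the port numbers, addresses, and function names are illustrative and are not taken from the patent.

```python
# Sketch of port-based reverse address resolution: the resolving node answers
# with higher-level protocol information (an IP configuration) keyed by the
# ingress port, deliberately ignoring the requester's physical (MAC) address.
# Table contents and names below are illustrative assumptions.

PORT_TO_IP = {
    1: ("10.0.1.2", "255.255.255.0"),   # point-to-point channel on port 1
    2: ("10.0.2.2", "255.255.255.0"),   # point-to-point channel on port 2
}

def resolve_by_port(ingress_port, requester_mac):
    """Return IP information for the device attached to ingress_port.

    requester_mac is accepted but unused, which is what allows a remote
    device to be attached in "plug and play" fashion.
    """
    config = PORT_TO_IP.get(ingress_port)
    if config is None:
        return None                      # no configuration provisioned for this port
    ip, netmask = config
    return {"ip": ip, "netmask": netmask, "requested_by": requester_mac}

if __name__ == "__main__":
    # A reverse-resolution request arriving on port 1 from an unknown MAC.
    print(resolve_by_port(1, "02:00:5e:10:00:01"))
```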
    • 12. Invention Grant
    • Title: Scheme for interlocking line card to an address recognition engine to support plurality of routing and bridging protocols by using network information look-up database
    • Publication No.: US5524254A
    • Publication Date: 1996-06-04
    • Application No.: US269997
    • Filing Date: 1994-07-01
    • Inventors: Fearghal Morgan; Joseph O'Callaghan; Michael J. Seaman; John Rigby; Andrew Walton; Una M. Quinlan; Stewart F. Bryant
    • IPC: G06F13/00; H04L12/46; H04L29/06; H04L29/12
    • CPC: H04L29/12801; H04L12/46; H04L29/12009; H04L29/12839; H04L45/742; H04L61/6004; H04L61/6022; H04L69/18; H04L69/22
    • Abstract: The present invention provides an interlock scheme for use between a line card and an address recognition apparatus. The interlock scheme reduces the total number of read/write operations required to complete a request/response transfer over the backplane bus that couples the line card to the address recognition apparatus. Thus, the line card and address recognition apparatus are able to perform a large number of request/response transfers with a high level of system efficiency. Generally, the interlock scheme according to the present invention merges each ownership information storage location into the request/response memory location used to store the corresponding request/response pair, reducing data transfer traffic over the backplane bus. According to another feature of the interlock scheme, each of the line card and the address recognition engine includes a table for storing information relating to a plurality of database specifiers. Each database specifier contains control information for the traversal of a lookup database used by the address recognition apparatus. When the processor of a line card generates a request for the address recognition apparatus, it analyzes the protocol type information contained in the header of a data packet, uses that protocol type as a look-up index into its table of database specifiers to select one of them, and inserts an identification of the selected database specifier into the request along with the network address extracted from the data packet. (See the sketch below.)
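A hedged sketch of the request-building step: the protocol type from the packet header indexes a table of database specifiers, and the chosen specifier's identification is placed in the request together with the extracted network address. Field names, EtherType values, and the specifier layout are illustrative assumptions, not the patent's data format.

```python
# Sketch of database-specifier selection on the line card side.
from dataclasses import dataclass

@dataclass
class DatabaseSpecifier:
    spec_id: int          # identification inserted into the request
    root_offset: int      # where traversal of the lookup database starts
    max_depth: int        # traversal control information

# Protocol type (here, an EtherType) used as the look-up index.
SPECIFIER_TABLE = {
    0x0800: DatabaseSpecifier(spec_id=1, root_offset=0x0000, max_depth=8),   # IPv4
    0x86DD: DatabaseSpecifier(spec_id=2, root_offset=0x4000, max_depth=32),  # IPv6
}

def build_request(protocol_type, network_address):
    """Assemble a lookup request for the address recognition apparatus."""
    spec = SPECIFIER_TABLE[protocol_type]          # select one database specifier
    return {
        "specifier": spec.spec_id,                 # identification of the specifier
        "address": network_address,                # network address from the packet
        "owner": "engine",                         # ownership flag shares the memory slot
    }

if __name__ == "__main__":
    print(build_request(0x0800, bytes([10, 1, 2, 3])))
```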
    • 13. Invention Grant
    • Title: Address recognition engine with look-up database for storing network information
    • Publication No.: US5519858A
    • Publication Date: 1996-05-21
    • Application No.: US819490
    • Filing Date: 1992-01-10
    • Inventors: Andrew Walton; Una M. Quinlan; Stewart F. Bryant; Michael J. Seaman; John Rigby; Fearghal Morgan; Joseph O'Callaghan
    • IPC: G06F17/30; H04L12/56; H04L29/06
    • CPC: H04L29/06
    • Abstract: The present invention is directed to an address recognition apparatus including an address recognition engine coupled to a look-up database. The look-up database is arranged to store network information relating to network addresses. The look-up database includes a primary database and a secondary database. The address recognition engine accepts as an input a network address for which network information is required, and uses that network address as an index to the primary database. The primary database comprises a multiway tree node structure (TRIE) arranged for traversal of the nodes as a function of preselected segments of the network address, in a fixed sequence of the segments, to locate a pointer to an entry in the secondary database. The entry in the secondary database pointed to by the primary database pointer contains the network information corresponding to the network address. The address recognition engine includes a table for storing a plurality of database specifiers, each of which contains control information for the traversal of the primary and secondary databases. In addition, each of the nodes in the primary database and each of the entries in the secondary database is provided with control data structures that are programmable to control the traversal of the database. (See the sketch below.)
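A minimal sketch of the two-stage lookup, assuming one-byte address segments and small in-memory tables; the segment width, table contents, and names are illustrative assumptions.

```python
# Sketch of TRIE traversal over fixed address segments, ending in a pointer
# into a secondary database that holds the network information.

SECONDARY = {"entry_A": {"next_hop": "port 3"}, "entry_B": {"next_hop": "port 7"}}

# Each trie node maps one address segment (here a byte) either to a child
# node (dict) or to a pointer into the secondary database (str).
PRIMARY_TRIE = {10: {0: {1: "entry_A"}, 2: "entry_B"}}

def lookup(address):
    """Walk the primary TRIE segment by segment, then dereference the
    secondary-database pointer found at the end of the walk."""
    node = PRIMARY_TRIE
    for segment in address:
        node = node.get(segment)
        if node is None:
            return None                      # no match in the primary database
        if isinstance(node, str):            # reached a secondary-database pointer
            return SECONDARY[node]
    return None

if __name__ == "__main__":
    print(lookup(bytes([10, 0, 1])))         # -> {'next_hop': 'port 3'}
    print(lookup(bytes([10, 2])))            # -> {'next_hop': 'port 7'}
```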
    • 15. Invention Grant
    • Title: Apparatus and method for addressing a variable sized block of memory
    • Publication No.: US5404474A
    • Publication Date: 1995-04-04
    • Application No.: US819393
    • Filing Date: 1992-01-10
    • Inventors: Neal A. Crook; Stewart F. Bryant; Michael J. Seaman; John M. Lenthall
    • IPC: G06F12/02; G06F12/06
    • CPC: G06F12/0223
    • Abstract: A method and apparatus for aliasing an address for a location in a memory system. The aliasing permits an address generating unit to access a memory block of variable size based upon an address space of fixed size, so that the size of the memory block can be changed without changing the address generating software of the address generating unit. The invention provides an address aliasing device arranged to receive an address from the address generating unit. The address aliasing device includes a register that stores memory block size information. The memory block size information is read by the address aliasing device and decoded to provide bit information representative of the size of the memory block. The address aliasing device logically combines the bit information with the appropriate corresponding bits of the input address to provide an alias address that is consistent with the size of the memory block. (See the sketch below.)
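A minimal sketch of the aliasing step, assuming the size register encodes a power-of-two block size; the register encoding, the fixed address-space width, and the names are illustrative assumptions.

```python
# Sketch of address aliasing: the block-size register is decoded into a bit
# mask and logically combined with the incoming address, so software written
# against a fixed address space always lands inside the installed block.

SIZE_REGISTER = 0x0E                     # installed block = 2**14 bytes (16 KiB)
FIXED_SPACE_BITS = 16                    # address generator assumes a 64 KiB space

def alias(address):
    """Map an address from the fixed space onto the installed memory block."""
    assert 0 <= address < (1 << FIXED_SPACE_BITS), "outside the fixed address space"
    block_size = 1 << SIZE_REGISTER      # decode the register into a block size
    mask = block_size - 1                # bit information representing that size
    return address & mask                # logically combine with the input address

if __name__ == "__main__":
    # 0xC123 lies outside a 16 KiB block; the alias wraps it back inside.
    print(hex(alias(0xC123)))            # -> 0x123
```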
    • 16. Invention Grant
    • Title: Broadband tree-configured ring for metropolitan area networks
    • Publication No.: US06826158B2
    • Publication Date: 2004-11-30
    • Application No.: US09796825
    • Filing Date: 2001-03-01
    • Inventors: Michael J. Seaman; Vipin Jain
    • IPC: H04L12/28
    • CPC: G06Q30/02; G06Q30/0601
    • Abstract: A method for configuring a network, and a network configured according to such method, are provided, in which communication links laid out in a ring in a metropolitan area are partitioned into link segments and managed according to a spanning tree protocol. The switches are configured to establish unique mesh or tree type network configurations suitable for application to communication media arranged to support ring-based protocols. The method is used for connecting communication links arranged in a plurality of rings, which traverse a plurality of collocation sites in a metropolitan area. The method comprises configuring switches in the plurality of collocation sites to partition rings in the plurality of rings into a plurality of link segments providing point-to-point paths between switches at collocation sites in the plurality of collocation sites. The switches and link segments are managed according to a spanning tree protocol. In one embodiment of the invention, the configuring of switches includes allocating a first set of the link segments as a first ring and a second set of the link segments as a second ring, and breaking the first and second rings by blocking transmission on a link segment in the first ring between a first pair of collocation sites and by blocking transmission on a link segment in the second ring between a second pair of collocation sites. In addition, the method includes cross-connecting the first and second rings by a communication link. (See the sketch below.)
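A hedged sketch of the embodiment described above: two rings of link segments, one blocked segment per ring, and a cross-connect, after which the active segments form a loop-free tree. The site names, the particular blocked segments, and the union-find check standing in for the spanning tree protocol are all illustrative assumptions.

```python
# Sketch of ring partitioning: break each ring by blocking one segment,
# cross-connect the rings, and verify the active topology is loop-free.

ring1 = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
ring2 = [("E", "F"), ("F", "G"), ("G", "H"), ("H", "E")]
cross_connect = [("B", "F")]

blocked = {("D", "A"), ("H", "E")}            # one blocked segment breaks each ring
active = [seg for seg in ring1 + ring2 + cross_connect if seg not in blocked]

def is_loop_free(edges):
    """Union-find cycle check standing in for the spanning tree protocol."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                      # this edge would close a loop
        parent[ra] = rb
    return True

if __name__ == "__main__":
    print("active segments:", active)
    print("loop free:", is_loop_free(active))  # -> True
```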
    • 17. Invention Grant
    • Title: Spanning tree with protocol for bypassing port state transition timers
    • Publication No.: US06771610B1
    • Publication Date: 2004-08-03
    • Application No.: US09416827
    • Filing Date: 1999-10-12
    • Inventors: Michael J. Seaman
    • IPC: H04L12/28
    • CPC: H04L12/4625
    • Abstract: Mechanisms for use on designated ports in spanning tree protocol entities allow such ports to transition to a forwarding state on the basis of actual communication delays between neighboring bridges, rather than upon expiration of forwarding delay timers. The logic that manages transition of states in the spanning tree protocol entity identifies ports which are changing to a designated port role, and issues a message on such ports informing the downstream port that the issuing port is able to assume a forwarding state. The logic starts the standard delay timer for entry to the listening state and then the learning state, prior to assuming the forwarding state. However, when a reply from the downstream port is received, the issuing port reacts by changing immediately to the forwarding state, without continuing to await expiration of the delay timer and without traversing the transitional listening and learning states. A downstream port which receives a message from an upstream port indicating that it is able to assume a forwarding state reacts by ensuring that no loop will be formed by the change in state of the upstream port. In one embodiment, the downstream port changes the state of designated ports on the protocol entity which were recently root ports to a blocking state, and then issues messages downstream indicating that such designated ports are ready to resume the forwarding state. The designated ports on the downstream protocol entity await a reply from ports further downstream. In this way, loops are blocked step by step through the network as the topology of the tree settles. (See the sketch below.)
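A loose sketch of the timer-bypass handshake, assuming a single designated port and a cooperative downstream bridge; the class names, timing value, and simplified fallback are illustrative assumptions rather than the patent's state machine.

```python
# Sketch of a designated port that proposes to forward and transitions
# immediately on agreement, instead of waiting out the per-state timers.
import time

FORWARD_DELAY = 15.0   # per-state delay (seconds) that the handshake bypasses

class DownstreamBridge:
    def handle_proposal(self, upstream_port):
        # Before agreeing, a real bridge would put its own recently-root
        # designated ports into blocking so the upstream change cannot loop.
        return True

class DesignatedPort:
    def __init__(self, name):
        self.name = name
        self.state = "blocking"

    def become_designated(self, downstream):
        """Propose to forward; transition immediately on agreement,
        otherwise wait out the listening and learning timers."""
        self.state = "listening"
        if downstream.handle_proposal(self.name):
            self.state = "forwarding"            # timer bypass on agreement
            return
        time.sleep(FORWARD_DELAY)                # listening period
        self.state = "learning"
        time.sleep(FORWARD_DELAY)                # learning period
        self.state = "forwarding"

if __name__ == "__main__":
    port = DesignatedPort("port-7")
    port.become_designated(DownstreamBridge())
    print(port.name, port.state)                 # forwarding, with no 30 s wait
```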
    • 18. Invention Grant
    • Title: High throughput message passing process using latency and reliability classes
    • Publication No.: US5828835A
    • Publication Date: 1998-10-27
    • Application No.: US675663
    • Filing Date: 1996-07-03
    • Inventors: Mark S. Isfeld; Tracy D. Mallory; Bruce W. Mitchell; Michael J. Seaman; Nagaraj Arunkumar; Pyda Srisuresh
    • IPC: G06F13/38; H04L12/46; H04L12/56; H04L29/06; G06F13/00
    • CPC: H04L49/901; G06F13/387; H04L12/4604; H04L12/5601; H04L45/00; H04L47/6215; H04L49/107; H04L49/108; H04L49/256; H04L49/309; H04L49/90; H04L49/9031; H04L49/9047; H04L49/9084; H04L2012/5627; H04L2012/5651; H04L2012/5665; H04L2012/5681; H04L69/16
    • Abstract: A communication technique for high volume connectionless-protocol, backbone communication links in distributed processing systems provides for control of latency and reliability of messages transmitted. The system provides transmit list and receive list processes in the processors on the link. On the transmit side, a high priority command list and a normal priority command list are provided. In the message passing process, the command transmit function transmits commands across the backplane according to a queue priority rule that allows for control of transmit latency. Messages that require low latency are written into the high priority transmit list, while the majority of messages are written into the high throughput, normal priority transmit list. A receive filtering process in the receiving processor includes dispatch logic which dispatches messages either to a high priority receive list or a normal priority receive list. The filtering function also acts to drop received messages according to the amount of available buffer space in the receiving processor, as measured against watermarks based on reliability tags in message headers. The messages received are routed to either the high priority receive list or the normal priority receive list based on another control bit in the message headers. The receiving processor processes the messages in the receive queues according to a priority rule that allows for control of the latency between receipt of a message and actual processing of the message by the receiving processor. (See the sketch below.)
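A hedged sketch of the receive-side filtering and dispatch, assuming a latency bit and a reliability tag in each message header; the watermark values, tag names, and buffer accounting are illustrative assumptions.

```python
# Sketch of receive filtering: drop a message when free buffer space is below
# the watermark for its reliability class, otherwise dispatch it to the high
# or normal priority receive list based on its latency bit.
from collections import deque

# Minimum free buffers required to accept each reliability class.
WATERMARKS = {"guaranteed": 0, "normal": 64, "best_effort": 256}

class Receiver:
    def __init__(self, total_buffers=512):
        self.free_buffers = total_buffers
        self.high_priority = deque()
        self.normal_priority = deque()

    def receive(self, msg):
        """Return True if the message was accepted, False if dropped."""
        if self.free_buffers <= WATERMARKS[msg["reliability"]]:
            return False                             # dropped by the watermark check
        self.free_buffers -= 1
        queue = self.high_priority if msg["low_latency"] else self.normal_priority
        queue.append(msg)                            # dispatch by the latency bit
        return True

if __name__ == "__main__":
    rx = Receiver(total_buffers=100)
    print(rx.receive({"low_latency": True, "reliability": "guaranteed", "data": b"ctl"}))    # True
    print(rx.receive({"low_latency": False, "reliability": "best_effort", "data": b"bulk"}))  # False
```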
    • 20. Invention Grant
    • Title: Packet forwarding system for measuring the age of data packets flowing through a computer network
    • Publication No.: US5590366A
    • Publication Date: 1996-12-31
    • Application No.: US421141
    • Filing Date: 1995-04-13
    • Inventors: Stewart F. Bryant; Michael J. Seaman; Christopher R. Szmidt
    • IPC: H04L12/56; G06F13/00
    • CPC: H04L47/10; Y10S370/902
    • Abstract: A packet forwarding node for a computer network comprises at least one receiving module and at least one output module including a packet list (21) for maintaining a list of packets to be transmitted therefrom. The time for which a packet remains in the node is determined by grouping the packets into groups or "buckets" which are created at regular intervals, each bucket containing packets arriving within the same time interval, and keeping track of the age of each bucket. A bucket counter (33) counts the total number of buckets in existence, so indicating the age of the oldest packet. This counter is incremented by 1 at regular intervals and decremented by 1 each time the oldest bucket is emptied (or found to be empty). A bucket list shift register (30) has its contents shifted at each change of time interval, and its bottom stage accumulates the number of packets arriving in a time interval, while an overflow accumulator (31) accumulates counts shifted out of its top end. The bucket list shift register may comprise a plurality of sections, each of which is shifted at an exact submultiple of the rate of shifting of the previous section, the bottom stage of each section accumulating counts shifted out of the previous section. In an alternative embodiment, bucket boundary markers are inserted into the packet list at each change of time interval. (See the sketch below.)
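A minimal sketch of the bucket-based age measurement, assuming a software deque of per-interval packet counts standing in for the bucket-list shift register; the interval length and class name are illustrative assumptions.

```python
# Sketch of packet-age tracking with buckets: packets queued in the same time
# interval share a bucket, and the number of buckets in existence bounds the
# age of the oldest queued packet.
from collections import deque

INTERVAL_MS = 10

class AgeTracker:
    def __init__(self):
        self.buckets = deque([0])        # newest bucket is at the right

    def enqueue_packet(self):
        self.buckets[-1] += 1            # arriving packet joins the current bucket

    def dequeue_packet(self):
        # Transmission drains the oldest non-empty bucket first; emptied
        # buckets are discarded, which decrements the bucket count.
        while len(self.buckets) > 1 and self.buckets[0] == 0:
            self.buckets.popleft()
        if self.buckets[0] > 0:
            self.buckets[0] -= 1

    def tick(self):
        self.buckets.append(0)           # new interval: create a new bucket
        while len(self.buckets) > 1 and self.buckets[0] == 0:
            self.buckets.popleft()       # drop buckets found to be empty

    def oldest_age_ms(self):
        return (len(self.buckets) - 1) * INTERVAL_MS

if __name__ == "__main__":
    t = AgeTracker()
    t.enqueue_packet(); t.tick(); t.tick()   # the packet has aged two intervals
    print(t.oldest_age_ms())                 # -> 20
```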