    • 1. Invention grant
    • Apparatus and method for addressing a variable sized block of memory
    • Publication number: US5404474A
    • Publication date: 1995-04-04
    • Application number: US819393
    • Filing date: 1992-01-10
    • Inventors: Neal A. Crook, Stewart F. Bryant, Michael J. Seaman, John M. Lenthall
    • IPC: G06F12/02; G06F12/06
    • CPC: G06F12/0223
    • A method and apparatus for aliasing an address for a location in a memory system. The aliasing permits an address generating unit to access a memory block of variable size based upon an address space of fixed size so that the size of the memory block can be changed without changing the address generating software of the address generating unit. The invention provides an address aliasing device arranged to receive an address from the address generating unit. The address aliasing device includes a register that stores memory block size information. The memory block size information is read by the address aliasing device and decoded to provide bit information representative of the size of the memory block. The address aliasing device logically combines the bit information with appropriate corresponding bits of the input address to provide an alias address that is consistent with the size of the memory block.
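A minimal C sketch of the aliasing idea described in the abstract above: block-size information held in a register is decoded into a bit mask and logically combined with the incoming fixed-space address, so address-generating software can keep working in a fixed window while the hardware folds the address into whatever block size is configured. The register layout and the power-of-two size encoding are assumptions made for illustration, not details taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical size register: stores log2 of the memory block size.      */
/* The abstract only says a register holds "memory block size information";*/
/* encoding it as a power-of-two exponent is an assumption.                */
static uint32_t size_register = 16;          /* 2^16 = 64 KiB block        */

/* Decode the size register into a bit mask covering the block.           */
static uint32_t decode_size_mask(uint32_t size_reg)
{
    return (size_reg >= 32) ? 0xFFFFFFFFu : ((1u << size_reg) - 1u);
}

/* Combine the mask with the fixed-space address to form the alias:       */
/* address bits beyond the configured block size are simply folded away.  */
static uint32_t alias_address(uint32_t fixed_space_addr)
{
    uint32_t mask = decode_size_mask(size_register);
    return fixed_space_addr & mask;
}

int main(void)
{
    /* The address-generating software always works in a fixed space;     */
    /* the alias stays inside the 64 KiB block actually installed.        */
    printf("0x%08X -> 0x%08X\n", 0x0012ABCDu,
           (unsigned)alias_address(0x0012ABCDu));   /* -> 0x0000ABCD      */
    return 0;
}
```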
    • 2. Invention grant
    • Access request prioritization and summary device
    • Publication number: US5202999A
    • Publication date: 1993-04-13
    • Application number: US819186
    • Filing date: 1992-01-10
    • Inventors: John M. Lenthall, Neal A. Crook, Helen C. McGreal, Michael J. Seaman
    • IPC: G06F13/37
    • CPC: G06F13/37
    • An access request prioritization and summary device for determining the current highest priority among n entities. The device includes a bitmap having n bit storage locations. Each one of the n bit storage locations corresponds to one of the entities and is used to store a value which represents when the corresponding entity is available for prioritization. A plurality of combinational logic blocks are connected to the bitmap so that each one of the combinational logic blocks receives a preselected portion of the values stored in the n bit storage locations of the bitmap. Each one of the combinational logic blocks has a token signal input and a token signal output. The token signal inputs and outputs are coupled together to form a series of token signal links between the combinational logic blocks. When certain preselected highest priority determination conditions occur within one of the combinational logic blocks, the combinational logic block generates a token signal which serves as the token signal to the respective succeeding combinational logic block. Each combinational logic block is capable of receiving a token signal from the previous combinational logic block and is responsive to the input of a token signal to determine a current highest priority from the values which it received as input signals.
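The selection logic in the abstract above can be pictured in software roughly as follows: the request bitmap is split into fixed-width slices, each slice standing in for one combinational logic block, and a token ripples from slice to slice until a slice with at least one pending request uses the token to resolve priority among its own bits. The slice width, the bit ordering, and the "empty slice passes the token on" condition are illustrative assumptions, not the patent's circuit.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_ENTITIES 32
#define SLICE_BITS    8   /* each "combinational logic block" sees 8 entities */

/* Return the index of the winning entity, or -1 if no request is pending.   */
/* Bitmap bit i == 1 means entity i is available for prioritization.         */
static int pick_highest_priority(uint32_t bitmap)
{
    /* The "token" starts at the first slice and is handed on by any slice   */
    /* with nothing pending, mimicking the daisy-chained token signals.      */
    for (int slice = 0; slice < NUM_ENTITIES / SLICE_BITS; slice++) {
        uint32_t slice_bits = (bitmap >> (slice * SLICE_BITS)) & 0xFFu;

        if (slice_bits == 0)
            continue;                     /* empty slice passes the token on */

        /* The slice holding the token resolves priority among its own bits; */
        /* here a lower bit index means higher priority (an assumption).     */
        for (int bit = 0; bit < SLICE_BITS; bit++)
            if (slice_bits & (1u << bit))
                return slice * SLICE_BITS + bit;
    }
    return -1;                            /* no entity is requesting service */
}

int main(void)
{
    printf("%d\n", pick_highest_priority(0x00A00000u));   /* -> entity 21 */
    return 0;
}
```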
    • 7. Invention grant
    • Scheme for interlocking line card to an address recognition engine to support plurality of routing and bridging protocols by using network information look-up database
    • Publication number: US5524254A
    • Publication date: 1996-06-04
    • Application number: US269997
    • Filing date: 1994-07-01
    • Inventors: Fearghal Morgan, Joseph O'Callaghan, Michael J. Seaman, John Rigby, Andrew Walton, Una M. Quinlan, Stewart F. Bryant
    • IPC: G06F13/00; H04L12/46; H04L29/06; H04L29/12
    • CPC: H04L29/12801; H04L12/46; H04L29/12009; H04L29/12839; H04L45/742; H04L61/6004; H04L61/6022; H04L69/18; H04L69/22
    • The present invention provides an interlock scheme for use between a line card and an address recognition apparatus. The interlock scheme reduces the total number of read/write operations over a backplane bus coupling the line card to the address recognition apparatus required to complete a request/response transfer. Thus, the line card and address recognition apparatus are able to perform a large amount of request/response transfers with a high level of system efficiency. Generally, the interlocking scheme according to the present invention merges each ownership information storage location into the location of the request/response memory utilized to store the corresponding request/response pair to reduce data transfer traffic over the backplane bus. According to another feature of the interlock scheme of the present invention, each of the line card and the address recognition engine includes a table for storing information relating to a plurality of database specifiers. Each of the database specifiers contains control information for the traversal of a lookup database used by the address recognition apparatus. At the time the processor of a line card generates a request for the address recognition apparatus, it will analyze the protocol type information contained in the header of a data packet. The processor will utilize the protocol type information as a look-up index to its table of database specifiers for selection of one of the database specifiers. The processor will then insert an identification of the selected database specifier into the request with the network address extracted from the data packet.
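A rough C sketch of the request-side behaviour described above: the line card keeps a small table of database specifiers, indexes it with the protocol type taken from the packet header, and writes the selected specifier ID together with the extracted network address into a request slot whose ownership flag lives in the same structure as the request itself (the merge that cuts backplane read/write traffic). The field widths, the specifier table contents, and the ownership-flag layout are all assumptions made for illustration.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Request slot in the shared request/response memory. Folding the         */
/* ownership flag into the same record, rather than keeping a separate     */
/* ownership store, is the merge the abstract describes.                   */
struct are_request {
    uint8_t  owned_by_engine;   /* 1: engine owns the slot, 0: line card   */
    uint8_t  db_specifier_id;   /* which lookup-database traversal to use  */
    uint8_t  addr_len;
    uint8_t  network_addr[16];  /* address extracted from the data packet  */
};

/* Hypothetical mapping from packet protocol type to database specifier.   */
enum proto { PROTO_IP = 0, PROTO_IPX = 1, PROTO_BRIDGED = 2, PROTO_MAX };
static const uint8_t specifier_for_proto[PROTO_MAX] = { 3, 7, 1 };

/* Build a request: pick the specifier from the protocol type, copy in the */
/* network address, then flip ownership to the address recognition engine. */
static void post_request(struct are_request *slot, enum proto p,
                         const uint8_t *addr, uint8_t addr_len)
{
    if (addr_len > sizeof slot->network_addr)
        addr_len = sizeof slot->network_addr;
    slot->db_specifier_id = specifier_for_proto[p];
    slot->addr_len = addr_len;
    memcpy(slot->network_addr, addr, addr_len);
    slot->owned_by_engine = 1;   /* a single write hands the slot over     */
}

int main(void)
{
    struct are_request slot = {0};
    const uint8_t ip[4] = { 16, 24, 2, 7 };
    post_request(&slot, PROTO_IP, ip, sizeof ip);
    printf("specifier %u, owner %u\n",
           (unsigned)slot.db_specifier_id, (unsigned)slot.owned_by_engine);
    return 0;
}
```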
    • 8. Invention grant
    • Address recognition engine with look-up database for storing network information
    • Publication number: US5519858A
    • Publication date: 1996-05-21
    • Application number: US819490
    • Filing date: 1992-01-10
    • Inventors: Andrew Walton, Una M. Quinlan, Stewart F. Bryant, Michael J. Seaman, John Rigby, Fearghal Morgan, Joseph O'Callaghan
    • IPC: G06F17/30; H04L12/56; H04L29/06
    • CPC: H04L29/06
    • The present invention is directed to an address recognition apparatus including an address recognition engine coupled to a look-up database. The look-up database is arranged to store network information relating to network addresses. The look-up database includes a primary database and a secondary database. The address recognition engine accepts as an input a network address for which network information is required. The address recognition engine uses the network address as an index to the primary database. The primary database comprises a multiway tree node structure (TRIE) arranged for traversal of the nodes as a function of preselected segments of the network address and in a fixed sequence of the segments to locate a pointer to an entry in the secondary database. The entry in the secondary database pointed to by the primary database pointer contains the network information corresponding to the network address. The address recognition engine includes a table for storing a plurality of database specifiers. Each of the database specifiers contains control information for the traversal of the primary and secondary databases. In addition, each of the nodes in the primary database and each of the entries in the secondary database is provided with control data structures that are programmable to control the traversal of the database.
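The two-level lookup in the abstract above can be approximated in C as below: a primary multiway trie is walked over fixed-size segments of the network address (here 4-bit nibbles, most significant first), and the leaf it reaches holds an index into a secondary table that carries the actual network information. The node width, segment order, and the contents of the secondary entry are illustrative assumptions rather than the patent's exact data layout.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Secondary database entry: the network information itself.               */
struct secondary_entry {
    uint32_t next_hop;
    uint16_t out_port;
};

/* Primary database node: a 16-way trie indexed by 4-bit address segments. */
/* A node is either internal (children) or ends in a secondary index.      */
struct trie_node {
    struct trie_node *child[16];
    int secondary_index;            /* -1 while the node is internal        */
};

/* Walk the primary trie over the address nibbles in fixed order and       */
/* return the secondary entry the leaf points at, or NULL on a miss.       */
static const struct secondary_entry *
lookup(const struct trie_node *root,
       const struct secondary_entry *secondary, size_t secondary_len,
       const uint8_t *addr, size_t addr_len)
{
    const struct trie_node *node = root;

    for (size_t i = 0; i < addr_len * 2 && node; i++) {
        if (node->secondary_index >= 0)
            break;                               /* reached a leaf early    */
        uint8_t byte   = addr[i / 2];
        uint8_t nibble = (i % 2 == 0) ? (uint8_t)(byte >> 4) : (uint8_t)(byte & 0x0F);
        node = node->child[nibble];
    }

    if (node && node->secondary_index >= 0 &&
        (size_t)node->secondary_index < secondary_len)
        return &secondary[node->secondary_index];
    return NULL;
}

int main(void)
{
    static struct secondary_entry sec[1] = { { 0x0A000001u, 3 } };
    static struct trie_node leaf, root;
    leaf.secondary_index = 0;
    root.secondary_index = -1;
    root.child[0x0C] = &leaf;                    /* addresses starting 0xC  */

    const uint8_t addr[1] = { 0xC5 };
    const struct secondary_entry *e = lookup(&root, sec, 1, addr, sizeof addr);
    printf("out_port = %d\n", e ? e->out_port : -1);
    return 0;
}
```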
    • 9. Invention grant
    • Packet forwarding system for measuring the age of data packets flowing through a computer network
    • Publication number: US5590366A
    • Publication date: 1996-12-31
    • Application number: US421141
    • Filing date: 1995-04-13
    • Inventors: Stewart F. Bryant, Michael J. Seaman, Christopher R. Szmidt
    • IPC: H04L12/56; G06F13/00
    • CPC: H04L47/10; Y10S370/902
    • A packet forwarding node for a computer network comprises at least one receiving module and at least one output module including packet list (21) for maintaining a list of packets to be transmitted therefrom. The time for which a packet remains in the node is determined by grouping the packets into groups or "buckets" which are created at regular intervals, each bucket containing packets arriving within the same time interval, and keeping track of the age of each bucket. A bucket counter (33) counts the total number of buckets in existence, so indicating the age of the oldest packet. This counter is incremented by 1 at regular intervals and decremented by 1 each time the oldest bucket is emptied (or found to be empty). A bucket list shift register (30) has its contents shifted at each change of time interval, and its bottom stage accumulates the number of packets arriving in a time interval, and an overflow accumulator (31) accumulates counts shifted out of its top end. The bucket list shift register may comprise a plurality of sections each of which is shifted at an exact submultiple of the rate of shifting of the previous section, the bottom stage of each section accumulating counts shifted out of the previous section. In an alternative embodiment, bucket boundary markers are inserted into the packet list at each change of time interval.
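One way to picture the bucket mechanism above in C, as a sketch: at every timer tick a fresh bucket is opened and the count of buckets in existence is bumped, arriving packets are tallied into the youngest bucket, departing packets are charged against the oldest non-empty bucket, and the bucket count (in tick units) bounds the age of the oldest packet still queued. The fixed bucket-array depth and the simple shift on each tick stand in for the shift-register and overflow-accumulator arrangement of the abstract and are simplifications, not the patented hardware.

```c
#include <stdio.h>

#define MAX_BUCKETS 64         /* depth of the bucket-list "shift register" */

static unsigned bucket[MAX_BUCKETS];   /* bucket[0] is the youngest bucket  */
static unsigned bucket_count = 1;      /* number of buckets in existence    */

/* A packet arrives: count it into the bucket for the current tick.         */
static void packet_arrived(void)
{
    bucket[0]++;
}

/* A packet is transmitted: charge it to the oldest bucket; a bucket that   */
/* is emptied (or found to be empty) ceases to exist and the count drops.   */
static void packet_sent(void)
{
    while (bucket_count > 1 && bucket[bucket_count - 1] == 0)
        bucket_count--;                      /* oldest bucket found empty   */
    if (bucket[bucket_count - 1] > 0)
        bucket[bucket_count - 1]--;
    while (bucket_count > 1 && bucket[bucket_count - 1] == 0)
        bucket_count--;                      /* oldest bucket just emptied  */
}

/* Timer tick: open a fresh youngest bucket and increment the number of     */
/* buckets in existence. A real implementation would spill the oldest count */
/* into an overflow accumulator instead of capping at the array depth.      */
static void tick(void)
{
    if (bucket_count < MAX_BUCKETS)
        bucket_count++;
    for (unsigned i = bucket_count - 1; i > 0; i--)
        bucket[i] = bucket[i - 1];
    bucket[0] = 0;
}

int main(void)
{
    packet_arrived(); packet_arrived();      /* two packets in tick 0       */
    tick();
    packet_arrived();                        /* one packet in tick 1        */
    tick();
    packet_sent(); packet_sent();            /* oldest bucket drains        */
    printf("oldest queued packet is at most %u tick(s) old\n", bucket_count - 1);
    return 0;
}
```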
    • 10. Invention grant
    • Message processing system having separate message receiving and transmitting processors with message processing being distributed between the separate processors
    • Publication number: US5195181A
    • Publication date: 1993-03-16
    • Application number: US820299
    • Filing date: 1992-01-10
    • Inventors: Stewart F. Bryant, Michael J. Seaman
    • IPC: G06F13/00
    • CPC: G06F13/00
    • A scheme for efficient implementation of workload partitioning between separate receive and transmit processors is provided so that a message can be effectively moved through a multiprocessor router. Generally, each receiving processor collects, into a digest, information relating to network protocol processing of a particular message, obtained via sequential byte processing of the message at the time of reception of the message. The information placed into the digest is information that is necessary for the completion of the processing tasks to be performed by the processor of the transmitting line card. The digest is passed to the transmit processor through a buffer exchange between the receive and transmit processors. The transmit processor reads the digest before processing of the related message for transmission and uses the information in the network protocol processing of the message. Thus, the transmit processor does not have to "look ahead" to bytes of the message needed to complete certain processing functions already completed by the receive processor and does not require extra buffering and/or memory bandwidth to make the modifications to the message.
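A small C sketch of the digest handoff described above: while the receive processor makes its single sequential pass over the incoming bytes, it records the facts the transmit side will later need (here, hypothetically, the protocol type, the header length, and the offset of a field the transmit side must rewrite) into a digest that travels with the exchanged buffer; the transmit processor then edits the message using only the digest, without re-scanning the bytes. The digest fields and the toy message format are invented for illustration and are not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Digest built by the receive processor and consumed by the transmit      */
/* processor; in the router it would ride along in the exchanged buffer.   */
struct digest {
    uint8_t  proto_type;     /* protocol discovered while parsing           */
    uint8_t  header_len;     /* where the payload starts                    */
    uint16_t hop_count_off;  /* offset of the field transmit must update    */
};

/* Receive side: one sequential pass over the bytes fills in the digest.   */
static struct digest receive_pass(const uint8_t *msg, size_t len)
{
    struct digest d = {0};
    if (len >= 3) {
        d.proto_type    = msg[0];
        d.header_len    = msg[1];
        d.hop_count_off = 2;   /* hypothetical fixed position of the hop field */
    }
    return d;
}

/* Transmit side: uses the digest instead of looking ahead into the bytes. */
static void transmit_pass(uint8_t *msg, size_t len, const struct digest *d)
{
    if (d->hop_count_off < len)
        msg[d->hop_count_off]--;             /* e.g. decrement a hop count  */
}

int main(void)
{
    uint8_t msg[] = { 0x45, 4, 16, 0xAA, 0xBB };  /* proto, hdr len, hops, data */
    struct digest d = receive_pass(msg, sizeof msg);
    transmit_pass(msg, sizeof msg, &d);
    printf("proto 0x%02X, hops now %u\n",
           (unsigned)d.proto_type, (unsigned)msg[2]);
    return 0;
}
```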