    • 1. Granted invention patent
    • Link aggregation in ethernet frame switches
    • US06553029B1
    • 2003-04-22
    • US09351406
    • 1999-07-09
    • Thomas Alexander
    • H04L12/28
    • H04L49/351; H04L49/3009
    • Data packets containing source and destination addresses are received on one or more incoming ports for distribution on one or more outgoing ports. An address look-up table stores previously processed source and destination addresses, together with source and destination contexts associated with the respective source and destination addresses. The contexts represent either a specific physical port, or an aggregated grouping of ports. A distribution table stores, for each aggregated grouping of outgoing ports, a corresponding aggregated group of identifiers of specific outgoing ports. As each packet is received, its source and destination addresses are extracted and the address look-up table is searched for those source and destination addresses. If the address look-up table contains those source and destination addresses then the source and destination contexts associated with those source and destination addresses are retrieved from the address look-up table. If the address look-up table does not contain a source address corresponding to the extracted source address, then a source context corresponding to the extracted source address is derived and stored in the address look-up table with the extracted source address. If the retrieved destination address context represents a specific outgoing port, then the received packet is queued for outgoing transmission on that port. If the retrieved destination address context represents an aggregated grouping of outgoing ports, then the identifiers for the outgoing ports comprising that grouping are retrieved from the distribution table, and the received packet is queued for outgoing transmission on all of the outgoing ports comprising that grouping.
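The abstract's forwarding flow (learn the source context, look up the destination, and expand an aggregated context through the distribution table) can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and field names are invented, and flooding of unknown destinations is omitted:

```python
# Sketch of the claimed forwarding logic: a context is either a specific
# physical port or the name of an aggregated grouping resolved via the
# distribution table.
class FrameSwitch:
    def __init__(self, distribution_table):
        # aggregate name -> list of member outgoing ports
        self.distribution_table = distribution_table
        # address look-up table: MAC address -> context
        self.address_table = {}

    def receive(self, src, dst, ingress_context):
        # If the source address is unknown, derive and store its context.
        if src not in self.address_table:
            self.address_table[src] = ingress_context
        ctx = self.address_table.get(dst)
        if ctx is None:
            return []  # unknown destination (flooding not modeled here)
        if ctx in self.distribution_table:
            # Aggregated grouping: queue on every member port.
            return list(self.distribution_table[ctx])
        return [ctx]  # specific physical port
```

A quick walk-through: with `{"trunk0": [1, 2]}` as the distribution table, a destination whose learned context is `"trunk0"` resolves to ports 1 and 2, while a destination learned on port 5 resolves to `[5]`.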
    • 2. Granted invention patent
    • Monolithic terminal interface
    • US06220877B1
    • 2001-04-24
    • US09565089
    • 2000-05-05
    • Thomas Alexander; Stefano Spadoni
    • H01R12/00
    • H01R9/28
    • A monolithic terminal interface for supporting, and establishing electrical contact to, components such as relays and fuses includes a terminal board fabricated from a self-supporting sheet of resilient, electrically insulating material. The terminal board includes one or more terminal sockets, each defined by a terminal slot formed through the board. The terminal slot creates opposed contact beams whose contact portions are spaced by a distance less than the thickness of a terminal. The contact portions of the contact beams engage a terminal inserted between them and, because of the resilient nature of the board material, maintain contact pressure with the terminal. A pattern of metalization extends across at least one face of the board and into the terminal slot so as to cover the contact regions; in this manner, electrical contact can be established through the conductive metal pattern to a terminal inserted in the slot. The monolithic terminal interface can be used to fabricate a power distribution box for a motor vehicle.
    • 4. Granted invention patent
    • Divider/multiplier circuit having high precision mode
    • US5825681A
    • 1998-10-20
    • US590656
    • 1996-01-24
    • Andrew D. Daniel; Thomas Alexander
    • G06F7/52
    • G06F7/5324; G06F7/535; G06F2207/5354; G06F7/4873; G06F7/49936
    • A divider/multiplier circuit (10) is disclosed. In a divider mode, numerator terms are coupled to a normalizer (14) which generates normalized numerator values and corresponding numerator exponent values therefrom. Denominator terms are coupled to a look-up normalizer (20) which generates normalized denominator inverse values and corresponding denominator exponent values therefrom. The numerator and denominator exponent values are summed in an adder circuit (18) to generate a sum exponent value. The normalized numerator and inverse denominator values are multiplied in a multiplier circuit (16) to generate a normalized quotient value. The normalized quotient value is denormalized according to the sum exponent value. In a multiply mode of operation first and second multiplicands are coupled to the multiplier circuit (16). In a high precision divide mode, a sequence of numerator and inverse denominator values are coupled to the multiplier circuit (16) to generate a sequence of partial product terms. The partial product terms are accumulated in a high precision loop (24) to provide a high precision division value. Negative multiplicands and numerator values are handled by a leading absolute value generator (12) which generates the absolute value of the multiplicand or numerator value. A trailing signed value generator (22) additively inverts the product or quotient if the multiplicand or numerator value was negative.
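The divide path described above (normalize the numerator, look up a normalized reciprocal of the denominator, multiply, then denormalize by the summed exponents) can be sketched in software. This is an illustrative fixed-point analogue, not the circuit itself; here the reciprocal is computed directly where the patent's look-up normalizer (20) would consult a table:

```python
def normalize(x, width=16):
    """Left-shift x until its top bit (bit width-1) is set.
    Returns (normalized value, shift count)."""
    assert 0 < x < (1 << width)
    shifts = 0
    while x < (1 << (width - 1)):
        x <<= 1
        shifts += 1
    return x, shifts

def divide(num, den, width=16):
    # Normalize numerator and denominator (blocks 14 and 20 in the patent).
    n, num_exp = normalize(num, width)
    d, den_exp = normalize(den, width)
    # Rounded reciprocal of the normalized denominator, with 2*width-1
    # fractional bits; a hardware look-up table would store this value.
    inv = ((1 << (2 * width - 1)) + d // 2) // d
    # Multiply (block 16), then denormalize by the combined exponent
    # (the adder, block 18, sums the exponent terms).
    return (n * inv) >> (2 * width - 1 + num_exp - den_exp)
```

The result is a truncating integer quotient; the patent's high-precision mode, which accumulates a sequence of partial products, is not modeled in this sketch.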
    • 5. Invention application
    • System and method for a fast, programmable packet processing system
    • US20060242710A1
    • 2006-10-26
    • US11366267
    • 2006-03-02
    • Thomas Alexander
    • G06F12/14
    • G06F21/84; G09G5/363
    • The present invention provides a cost-effective method for improving the performance of communication appliances by retargeting the graphics processing unit as a coprocessor that accelerates networking operations. A system and method are disclosed for using a coprocessor on a standard personal computer to accelerate packet processing operations common to network appliances. The appliances include, but are not limited to, routers, switches, load balancers, and Unified Threat Management appliances. More specifically, the method uses common advanced graphics processor engines to accelerate packet processing tasks.
    • 6. Invention application
    • DMA engine for protocol processing
    • US20060206635A1
    • 2006-09-14
    • US11373858
    • 2006-03-10
    • Thomas Alexander; Marc Quattromani; Alexander Rekow
    • G06F13/28
    • G06F13/28
    • A DMA engine, includes, in part, a DMA controller, an associative memory buffer, a request FIFO accepting data transfer requests from a programmable engine, such as a CPU, and a response FIFO that returns the completion status of the transfer requests to the CPU. Each request includes, in part, a target external memory address from which data is to be loaded or to which data is to be stored; a block size, specifying the amount of data to be transferred; and context information. The associative buffer holds data fetched from the external memory; and provides the data to the CPUs for processing. Loading into and storing from the associative buffer is done under the control of the DMA controller. When a request to fetch data from the external memory is processed, the DMA controller allocates a block within the associative buffer and loads the data into the allocated block.
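The request/response flow around the associative buffer can be sketched as follows. The `(addr, size, context)` request fields follow the abstract; everything else (class name, the `"done"` status value) is illustrative:

```python
from collections import deque

# Minimal model of the DMA engine: a request FIFO fed by the CPU, a
# response FIFO carrying completion status back, and an associative
# buffer (addr -> data block) filled under DMA-controller control.
class DmaEngine:
    def __init__(self, external_memory):
        self.memory = external_memory      # stands in for external DRAM
        self.request_fifo = deque()
        self.response_fifo = deque()
        self.buffer = {}                   # associative buffer

    def submit_load(self, addr, size, context):
        # A programmable engine (e.g. a CPU) posts a transfer request.
        self.request_fifo.append((addr, size, context))

    def step(self):
        # The DMA controller services one request: allocate a block in
        # the associative buffer and load the fetched data into it.
        if not self.request_fifo:
            return
        addr, size, context = self.request_fifo.popleft()
        self.buffer[addr] = self.memory[addr:addr + size]
        self.response_fifo.append(("done", context))
```

After `step()`, the CPU can read the fetched block from `buffer[addr]` and match the completion against its request via the returned context.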
    • 7. Granted invention patent
    • Topology-independent priority arbitration for stackable frame switches
    • US06467006B1
    • 2002-10-15
    • US09350738
    • 1999-07-09
    • Thomas Alexander; Matt Smith
    • G06F13/14
    • H04L49/40; H04L49/45
    • Each one of a plurality of processors has a data storage register and a unique identifier. A message passing network interconnects the registers and processors. Each processor can store data in each register, but can read data only from its own register. “Master” priority is arbitratively allocated to one of the processors by repetitively, for each processor which has not previously been dismissed as a master candidate and until all but one processor is dismissed as a master candidate: storing a dismissal value in the processor's register; selecting the next portion of the processor's identifier; if the selected portion corresponds to a non-dismissal value, storing the non-dismissal value in all of the registers; if the selected portion corresponds to the dismissal value and if the non-dismissal value is stored in the processor's register, dismissing the processor as a master candidate.
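The net effect of the bit-serial dismissal scheme is that the processor with the largest (or, with inverted polarity, smallest) unique identifier survives as master. A centralized simulation of the rounds, assuming bit value 1 plays the role of the non-dismissal value and identifiers are unique, might look like this; it models the shared-register signalling with a single `asserted` flag rather than per-processor registers:

```python
# Each round, candidates whose current identifier bit is 1 assert the
# non-dismissal value on the register network; candidates holding a 0
# bit dismiss themselves if anyone asserted. The survivor is the index
# of the largest identifier.
def elect_master(identifiers, width):
    candidates = set(range(len(identifiers)))
    for bit in reversed(range(width)):       # most-significant bit first
        bits = {p: (identifiers[p] >> bit) & 1 for p in candidates}
        asserted = any(b == 1 for b in bits.values())
        if asserted:
            # A candidate with a 0 bit sees a rival's non-dismissal
            # value in its register and drops out of the election.
            candidates = {p for p in candidates if bits[p] == 1}
        if len(candidates) == 1:
            break
    return candidates.pop()
```

Because elimination proceeds from the most significant bit down, the arbitration needs only `width` rounds regardless of how the processors are physically stacked, which is the topology independence the title claims.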
    • 8. Granted invention patent
    • High speed/low speed interface with prediction cache
    • US06301629B1
    • 2001-10-09
    • US09034537
    • 1998-03-03
    • Bharat Sastri; Thomas Alexander; Chitranjan N. Reddy
    • G06F13/38
    • G06F13/4059
    • The present invention provides a monolithic or discrete high speed/low speed interface that is capable of interfacing with both the high speed subsystems and the low speed subsystems of a data processing system. In one embodiment, the high speed/low speed interface subsystem comprises a high speed interface for communicating with high speed subsystems via a high speed bus, a low speed interface for communicating with low speed subsystems via a low speed bus, control circuitry coupled to both the high speed and low speed interfaces, and an internal bus coupled to the control circuitry and to both interfaces. The control circuitry controls the transfer of information between the interfaces. In a second embodiment, the high speed/low speed interface subsystem comprises all the elements of the first embodiment plus a prediction unit. In a third embodiment, it comprises all the elements of the second embodiment plus a memory controller. The embodiments of the present invention can be implemented with discrete components or on a single semiconductor substrate.
    • 9. Granted invention patent
    • Predictive caching system and method based on memory access which previously followed a cache miss
    • US5778436A
    • 1998-07-07
    • US978320
    • 1997-11-25
    • Gershon Kedem; Thomas Alexander
    • G06F12/08; G06F12/12
    • G06F12/0862
    • Predictive cache memory systems and methods are responsive to cache misses to prefetch a data block from main memory based upon the data block which last followed the memory address which caused the cache miss. In response to an access request to main memory for a first main memory data block, which is caused by a primary cache miss, a second main memory data block is identified which was accessed following a previous access request to the main memory for the first main memory data block. Once identified, the second memory data block is stored in a predictive cache if the second main memory data block is not already stored in the predictive cache. Thus, if the next main memory request is for the second main memory block, as was earlier predicted, the second main memory block is already in the predictive cache and may be accessed rapidly. The identification of data blocks for prefetching may be provided by a prediction table which stores therein identifications of a plurality of succeeding main memory data blocks, each of which was accessed following an access request to a corresponding one of a plurality of preceding main memory data blocks. The prediction table is updated if predictions are incorrect. The predictive cache may be implemented using on-chip SRAM cache which is integrated with a DRAM array so that transfers between the DRAM array and the predictive cache may occur at high speed using an internal buffer.
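The miss-driven prediction table can be sketched as follows. This models only the predictive cache and its table, not the primary cache or the SRAM/DRAM integration; the class name and the simplification that the predictive cache is unbounded are illustrative:

```python
# On each primary-cache miss for block b: record b as the successor of
# the previous miss (updating the prediction table if the old entry was
# wrong), then prefetch the block that followed b last time.
class PredictivePrefetcher:
    def __init__(self):
        self.prediction = {}   # preceding block -> block that followed it
        self.cache = set()     # predictive cache contents (block ids)
        self.last_miss = None

    def access(self, block):
        hit = block in self.cache
        if not hit:
            # Learn/correct the successor of the block whose miss
            # preceded this one.
            if self.last_miss is not None:
                self.prediction[self.last_miss] = block
            self.last_miss = block
            guess = self.prediction.get(block)
            if guess is not None and guess not in self.cache:
                self.cache.add(guess)          # prefetch the predicted block
        return hit
```

On the access sequence A, B, A, B the first three accesses miss, but the third (A) prefetches B using the learned A→B entry, so the fourth access hits the predictive cache.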
    • 10. Granted invention patent
    • Method for controlling the operation of a computer implemented apparatus to selectively execute instructions of different bit lengths
    • US5511174A
    • 1996-04-23
    • US286662
    • 1994-08-05
    • Gary D. Hicok; Thomas Alexander; Yong J. Lim; Yongmin Kim
    • G06F9/30; G06F9/32; G06F12/00
    • G06F9/321; G06F9/30149
    • A method for selectively controlling the operation of a computer system so that the computer system is selectively caused to execute instructions of a first predetermined bit length or instructions of a second predetermined bit length. The method comprises the preliminary steps of storing instruction data in a set of EVEN instruction storage locations; storing instruction data in a set of ODD instruction locations; establishing an EVEN execution pointer; and establishing an ODD execution pointer. At a first given time, either the EVEN execution pointer is incremented by a predetermined COUNT or the ODD execution pointer is incremented by the predetermined COUNT; but both pointers are not simultaneously incremented by the COUNT. The method causes an instruction to be executed, which instruction was stored entirely in either an EVEN instruction location or entirely in an ODD instruction location. At a second given time, both the EVEN instruction pointer and the ODD instruction pointer are incremented by the predetermined COUNT, thereby causing an instruction to be executed, which instruction constitutes a combination of instruction data from an EVEN instruction storage location and instruction data from an ODD instruction storage location.
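The dual-pointer fetch scheme described above can be sketched as follows. A short instruction lives entirely in one bank and advances only that bank's pointer; a long instruction combines the words at both pointers and advances both by the same COUNT. The class name and bank representation are illustrative, not from the patent:

```python
# Minimal model of the EVEN/ODD instruction banks with independent
# execution pointers that are incremented by a fixed COUNT.
class DualBankSequencer:
    COUNT = 1  # pointer increment per fetch

    def __init__(self, even_bank, odd_bank):
        self.even_bank, self.odd_bank = even_bank, odd_bank
        self.even_ptr = self.odd_ptr = 0

    def fetch_short(self, bank):
        # First given time: an instruction stored entirely in one bank;
        # only that bank's pointer advances.
        if bank == "even":
            word = self.even_bank[self.even_ptr]
            self.even_ptr += self.COUNT
        else:
            word = self.odd_bank[self.odd_ptr]
            self.odd_ptr += self.COUNT
        return word

    def fetch_long(self):
        # Second given time: a double-length instruction combining the
        # EVEN and ODD words; both pointers advance together.
        word = (self.even_bank[self.even_ptr], self.odd_bank[self.odd_ptr])
        self.even_ptr += self.COUNT
        self.odd_ptr += self.COUNT
        return word
```

Because the two pointers advance independently for short fetches, the banks can fall out of step and re-align only when a long (combined) instruction is issued.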