    • 4. Granted invention patent
    • Selected register decode values for pipeline stage register addressing
    • Publication number: US07139899B2
    • Publication date: 2006-11-21
    • Application number: US09390079
    • Filing date: 1999-09-03
    • Inventors: Darren Kerr; John William Marshall
    • IPC: G06F9/34
    • CPC: G06F9/3826; G06F9/3885
    • An instruction decode mechanism enables an instruction to control data flow bypassing hardware within a pipelined processor of a programmable processing engine. The control mechanism is defined by an instruction set of the processor as a unique register decode value that specifies either source operand bypassing (via a source bypass operand) or result bypassing (via a result bypass operand) from a previous instruction executing in pipeline stages of the processor. The source bypass operand allows source operand data to be shared among the parallel execution units of the pipelined processor, whereas the result bypass operand explicitly controls data flow within a pipeline of the processor through the use of result bypassing hardware of the processor. The instruction decode control mechanism essentially allows an instruction to directly identify a pipeline stage register for use as its source operand.
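The abstract above describes reserving specific register-decode values so that an instruction can name a pipeline-stage result directly as its source operand instead of a general-purpose register. The C fragment below is a minimal sketch of that idea only, assuming an invented encoding: the decode values, stage names, and the fetch_source_operand helper are illustrative and are not taken from the patent.

```c
/*
 * Minimal sketch (not the patent's actual encoding): decode values 0..31
 * name ordinary registers, while two reserved values select the result
 * latched in a pipeline-stage register of a previous instruction.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_GPRS          32u  /* ordinary architectural registers           */
#define BYPASS_EX_RESULT  32u  /* "result of the instruction in the EX stage" */
#define BYPASS_MEM_RESULT 33u  /* "result of the instruction in the MEM stage"*/

typedef struct {
    uint32_t gpr[NUM_GPRS]; /* architectural register file                 */
    uint32_t ex_result;     /* value held in the EX pipeline-stage register */
    uint32_t mem_result;    /* value held in the MEM pipeline-stage register*/
} pipeline_state;

/* Resolve a source-operand decode value to the actual operand data. */
static uint32_t fetch_source_operand(const pipeline_state *p, uint32_t decode)
{
    switch (decode) {
    case BYPASS_EX_RESULT:
        return p->ex_result;   /* result bypass from the previous instruction */
    case BYPASS_MEM_RESULT:
        return p->mem_result;  /* result bypass from two instructions back    */
    default:
        return p->gpr[decode & (NUM_GPRS - 1)]; /* normal register-file read  */
    }
}

int main(void)
{
    pipeline_state p = { .ex_result = 0xDEADBEEF, .mem_result = 0x12345678 };
    p.gpr[5] = 42;

    printf("r5         = %u\n",   (unsigned)fetch_source_operand(&p, 5));
    printf("EX bypass  = 0x%X\n", (unsigned)fetch_source_operand(&p, BYPASS_EX_RESULT));
    printf("MEM bypass = 0x%X\n", (unsigned)fetch_source_operand(&p, BYPASS_MEM_RESULT));
    return 0;
}
```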
    • 5. Granted invention patent
    • Packet striping across a parallel header processor
    • Publication number: US06965615B1
    • Publication date: 2005-11-15
    • Application number: US09663777
    • Filing date: 2000-09-18
    • Inventors: Darren Kerr; Jeffery Scott; John William Marshall; Scott Nellenbach
    • IPC: H04J3/24; H04L12/56
    • CPC: H04L49/3072; H04L49/25; H04L49/3063
    • A technique is provided for striping packets across pipelines of a processing engine within a network switch. The processing engine comprises a plurality of processors arrayed as pipeline rows and columns embedded between input and output buffers of the engine. Each pipeline row or cluster includes a context memory having a plurality of window buffers of a defined size. Each packet is apportioned into fixed-sized contexts corresponding to the defined window size associated with each buffer of the context memory. The technique includes a mapping mechanism for correlating each context with a relative position within the packet, i.e., the beginning, middle and end contexts of a packet. The mapping mechanism facilitates reassembly of the packet at the output buffer, while obviating any out-of-order issues involving the particular contexts of a packet.
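The abstract above describes apportioning a packet into fixed-size contexts and mapping each context to its relative position (beginning, middle, end) so the output buffer can reassemble the packet. The C sketch below illustrates only that striping-and-tagging step under assumed sizes and field names; CONTEXT_SIZE, ctx_position, and stripe_packet are hypothetical and do not reflect the patent's actual format.

```c
/*
 * Minimal sketch: split a packet into fixed-size "contexts" and tag each
 * with its position in the packet so a downstream stage can reassemble it.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CONTEXT_SIZE 64u   /* assumed window-buffer size in bytes */

typedef enum { CTX_BEGIN, CTX_MIDDLE, CTX_END, CTX_ONLY } ctx_position;

typedef struct {
    uint32_t     packet_id;  /* which packet this context belongs to    */
    uint16_t     index;      /* position of this context in the packet  */
    ctx_position pos;        /* begin / middle / end mapping             */
    uint16_t     len;        /* valid bytes (last context may be short)  */
    uint8_t      data[CONTEXT_SIZE];
} context;

/* Stripe one packet into contexts; returns the number produced. */
static size_t stripe_packet(uint32_t packet_id, const uint8_t *pkt, size_t len,
                            context *out, size_t max_out)
{
    size_t n = (len + CONTEXT_SIZE - 1) / CONTEXT_SIZE;
    if (n > max_out)
        return 0;

    for (size_t i = 0; i < n; i++) {
        size_t off   = i * CONTEXT_SIZE;
        size_t chunk = (len - off < CONTEXT_SIZE) ? len - off : CONTEXT_SIZE;

        out[i].packet_id = packet_id;
        out[i].index     = (uint16_t)i;
        out[i].len       = (uint16_t)chunk;
        memcpy(out[i].data, pkt + off, chunk);

        /* Map the context to its relative position within the packet. */
        if (n == 1)          out[i].pos = CTX_ONLY;
        else if (i == 0)     out[i].pos = CTX_BEGIN;
        else if (i == n - 1) out[i].pos = CTX_END;
        else                 out[i].pos = CTX_MIDDLE;
    }
    return n;
}

int main(void)
{
    uint8_t pkt[150];
    context ctx[8];

    memset(pkt, 0xAB, sizeof pkt);
    size_t n = stripe_packet(7, pkt, sizeof pkt, ctx, 8);

    for (size_t i = 0; i < n; i++)
        printf("packet %u, context %u: pos=%d len=%u\n",
               (unsigned)ctx[i].packet_id, (unsigned)ctx[i].index,
               (int)ctx[i].pos, (unsigned)ctx[i].len);
    return 0;
}
```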
    • 7. Granted invention patent
    • Architecture for a process complex of an arrayed pipelined processing engine
    • Publication number: US06442669B2
    • Publication date: 2002-08-27
    • Application number: US09727068
    • Filing date: 2000-11-30
    • Inventors: Michael L. Wright; Darren Kerr; Kenneth Michael Key; William E. Jennings
    • IPC: G06F15/00
    • CPC: G06F15/8053
    • A processor complex architecture facilitates accurate passing of transient data among processor complex stages of a pipelined processing engine. The processor complex comprises a central processing unit (CPU) coupled to an instruction memory and a pair of context data memory structures via a memory manager circuit. The context memories store transient “context” data for processing by the CPU in accordance with instructions stored in the instruction memory. The architecture further comprises data mover circuitry that cooperates with the context memories and memory manager to provide a technique for efficiently passing data among the stages in a manner that maintains data coherency in the processing engine. An aspect of the architecture is the ability of the CPU to operate on the transient data substantially simultaneously with the passing of that data by the data mover.
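The abstract above describes a pair of context data memories and a data mover that passes transient data between stages while the CPU continues to operate on it. One common way to read that is as a double-buffered ("ping-pong") arrangement; the C sketch below models that reading sequentially, whereas a real engine would overlap the two phases in hardware. The structure, names, and phase protocol are assumptions for illustration, not the patent's circuit design.

```c
/*
 * Minimal double-buffering sketch: each stage owns one of two context
 * memories for CPU work; when the CPU finishes, the data mover passes the
 * finished bank downstream while the CPU switches to the other bank.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CONTEXT_WORDS 16u

typedef struct {
    uint32_t ctx_mem[2][CONTEXT_WORDS]; /* pair of context data memories */
    int      cpu_bank;                  /* bank the CPU currently owns    */
} stage;

/* CPU phase: process the transient data in the bank this stage owns. */
static void cpu_process(stage *s)
{
    for (unsigned i = 0; i < CONTEXT_WORDS; i++)
        s->ctx_mem[s->cpu_bank][i] += 1;   /* stand-in for real work */
}

/* Data-mover phase: pass the finished bank to the downstream stage, then
 * flip ownership so the CPU starts on the other bank. */
static void finish_and_pass(stage *src, stage *dst)
{
    int done_bank = src->cpu_bank;            /* bank the CPU just finished */
    memcpy(dst->ctx_mem[dst->cpu_bank], src->ctx_mem[done_bank],
           sizeof src->ctx_mem[done_bank]);   /* copy context downstream    */
    src->cpu_bank ^= 1;                       /* CPU moves to the other bank */
}

int main(void)
{
    stage s0 = {0}, s1 = {0};
    s0.ctx_mem[0][0] = 100;      /* transient data arriving at stage 0 */

    cpu_process(&s0);            /* stage 0 CPU works on its current bank  */
    finish_and_pass(&s0, &s1);   /* data mover hands the result to stage 1 */
    cpu_process(&s1);            /* stage 1 CPU works on the passed data   */

    printf("word 0 after two stages: %u\n",
           (unsigned)s1.ctx_mem[s1.cpu_bank][0]);
    return 0;
}
```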
    • 9. Granted invention patent
    • Architecture for a processor complex of an arrayed pipelined processing engine
    • Publication number: US07380101B2
    • Publication date: 2008-05-27
    • Application number: US11023283
    • Filing date: 2004-12-27
    • Inventors: Michael L. Wright; Darren Kerr; Kenneth Michael Key; William E. Jennings
    • IPC: G06F15/00
    • CPC: G06F15/8053
    • A processor complex architecture facilitates accurate passing of transient data among processor complex stages of a pipelined processing engine. The processor complex comprises a central processing unit (CPU) coupled to an instruction memory and a pair of context data memory structures via a memory manager circuit. The context memories store transient “context” data for processing by the CPU in accordance with instructions stored in the instruction memory. The architecture further comprises data mover circuitry that cooperates with the context memories and memory manager to provide a technique for efficiently passing data among the stages in a manner that maintains data coherency in the processing engine. An aspect of the architecture is the ability of the CPU to operate on the transient data substantially simultaneously with the passing of that data by the data mover.