    • 1. Invention Application
    • Title: PROCESSING CONCURRENCY IN A NETWORK DEVICE
    • Publication No.: WO2015056101A2
    • Publication Date: 2015-04-23
    • Application No.: PCT/IB2014/002825
    • Filing Date: 2014-10-16
    • Applicant: MARVELL ISRAEL; WOHLGEMUTH, Aron; ZEMACH, Rami; LEVY, Gil
    • Inventors: WOHLGEMUTH, Aron; ZEMACH, Rami; LEVY, Gil
    • IPC: H04L29/06
    • CPC: H04L49/3063; G06F9/524; G06F12/023; G06F2212/163; H04L45/74; H04L69/12; H04L69/22
    • Abstract: One or more processing operations with respect to a packet are performed at a packet processing node of a network device, the packet processing node configured to perform multiple different processing operations with respect to the packet. A first accelerator engine is triggered for performing a first additional processing operation with respect to the packet. The first additional processing operation constitutes an operation that is different from the multiple different processing operations that the packet processing node is configured to perform. The first additional processing operation is performed by the first accelerator engine. Concurrently with performing the first additional processing operation at the first accelerator engine, at least a portion of a second additional processing operation with respect to the packet is performed by the packet processing node, the second additional processing operation not dependent on a result of the first additional processing operation.
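The abstract above describes overlapping an offloaded accelerator-engine operation with an independent operation at the packet processing node. A minimal sketch of that pattern (hypothetical names and a simulated lookup; this is an illustration of the concurrency idea, not the patented hardware design):

```python
from concurrent.futures import ThreadPoolExecutor

def accelerator_lookup(packet):
    # First additional operation, offloaded to an "accelerator engine"
    # (simulated here as a simple forwarding-table lookup).
    return {"dst": "10.0.0.1"}.get(packet["field"], "drop")

def process_packet(packet):
    with ThreadPoolExecutor(max_workers=1) as executor:
        # Trigger the accelerator engine asynchronously ...
        future = executor.submit(accelerator_lookup, packet)
        # ... and concurrently perform a second operation that does NOT
        # depend on the accelerator's result (e.g. decrement the TTL).
        packet["ttl"] -= 1
        # Block on the accelerator result only when it is actually needed.
        packet["next_hop"] = future.result()
    return packet

pkt = process_packet({"field": "dst", "ttl": 64})
```

The key point mirrored from the abstract is that the second operation carries no data dependency on the first, so the node need not stall while the accelerator runs.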
    • 2. Invention Application
    • Title: MULTI-STAGE INTERCONNECT NETWORK IN A PARALLEL PROCESSING NETWORK DEVICE
    • Publication No.: WO2015036870A3
    • Publication Date: 2015-06-11
    • Application No.: PCT/IB2014/002656
    • Filing Date: 2014-09-10
    • Applicant: MARVELL WORLD TRADE LTD; KADOSH, Aviran; ZEMACH, Rami
    • Inventors: KADOSH, Aviran; ZEMACH, Rami
    • IPC: H04L12/933; H04L12/931
    • CPC: H04L45/04; H04L47/122; H04L49/109; H04L49/1515; H04L49/505
    • Abstract: A packet is received at a packet processing element, among a plurality of like packet processing elements, of a network device, and a request specifying a processing operation to be performed with respect to the packet by an accelerator engine functionally different from the plurality of like packet processing elements is generated by the packet processing element. The request is transmitted to an interconnect network that includes a plurality of interconnect units arranged in stages. A path through the interconnect network is selected among a plurality of candidate paths, wherein no path of the candidate paths includes multiple interconnect units within a same stage of the interconnect network. The request is then transmitted via the selected path to a particular accelerator engine among multiple candidate accelerator engines configured to perform the processing operation. The processing operation is then performed by the particular accelerator engine.
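The path-selection constraint in this abstract (every candidate path crosses each stage of the interconnect exactly once, never two units of the same stage) can be modeled by enumerating one-unit-per-stage paths. A small sketch with invented stage/unit names and a simple deterministic spreading policy, which is an assumption rather than the selection scheme claimed in the patent:

```python
import itertools

# Hypothetical 3-stage interconnect: each inner list is one stage's
# interconnect units. A candidate path picks exactly one unit per
# stage, so by construction no path contains two same-stage units.
stages = [["i0", "i1"], ["m0", "m1", "m2"], ["o0", "o1"]]

def candidate_paths(stages):
    # Cartesian product over stages: one interconnect unit per stage.
    return list(itertools.product(*stages))

def select_path(paths, flow_id):
    # Deterministic load-spreading choice keyed on a flow identifier
    # (an illustrative policy, not taken from the source).
    return paths[flow_id % len(paths)]

paths = candidate_paths(stages)
path = select_path(paths, flow_id=7)
```

Any real device would prune this set by topology and load, but the structural invariant (one unit per stage per path) is what the enumeration encodes.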
    • 3. Invention Application
    • Title: PROCESSING CONCURRENCY IN A NETWORK DEVICE
    • Publication No.: WO2015056101A3
    • Publication Date: 2015-08-13
    • Application No.: PCT/IB2014/002825
    • Filing Date: 2014-10-16
    • Applicant: MARVELL ISRAEL; WOHLGEMUTH, Aron; ZEMACH, Rami; LEVY, Gil
    • Inventors: WOHLGEMUTH, Aron; ZEMACH, Rami; LEVY, Gil
    • IPC: H04L29/06
    • CPC: H04L49/3063; G06F9/524; G06F12/023; G06F2212/163; H04L45/74; H04L69/12; H04L69/22
    • Abstract: One or more processing operations with respect to a packet are performed at a packet processing node of a network device, the packet processing node configured to perform multiple different processing operations with respect to the packet. A first accelerator engine is triggered for performing a first additional processing operation with respect to the packet. The first additional processing operation constitutes an operation that is different from the multiple different processing operations that the packet processing node is configured to perform. The first additional processing operation is performed by the first accelerator engine. Concurrently with performing the first additional processing operation at the first accelerator engine, at least a portion of a second additional processing operation with respect to the packet is performed by the packet processing node, the second additional processing operation not dependent on a result of the first additional processing operation.
    • 4. Invention Application
    • Title: MEMORY AGGREGATION DEVICE
    • Publication No.: WO2014202129A1
    • Publication Date: 2014-12-24
    • Application No.: PCT/EP2013/062727
    • Filing Date: 2013-06-19
    • Applicant: HUAWEI TECHNOLOGIES CO., LTD.; SHACHAR, Yaron; PELEG, Yoav; TAL, Alex; UMANSKY, Alex; ZEMACH, Rami; XIONG, Lixia; LU, Yuchun
    • Inventors: SHACHAR, Yaron; PELEG, Yoav; TAL, Alex; UMANSKY, Alex; ZEMACH, Rami; XIONG, Lixia; LU, Yuchun
    • IPC: G06F5/06
    • CPC: G06F13/37; G06F5/065; G06F13/1673; G06F13/4234
    • Abstract: The invention relates to a memory aggregation device (990) for storing a set of input data streams (902) and retrieving data to a set of output data streams (904), both the set of input data streams (902) and the set of output data streams (904) being operable to perform one of sending and stopping sending new data in each clock cycle, the memory aggregation device (990) comprising: a set of FIFO memories (901a, 901b, ..., 901c) each comprising an input and an output; an input interconnector (903) configured to interconnect each one of the set of input data streams (902) to each input of the set of FIFO memories (901a, 901b, ..., 901c) according to an input interconnection matrix; an output interconnector (905) configured to interconnect each output of the set of FIFO memories (901a, 901b, ..., 901c) to each one of the set of output data streams (904) according to an output interconnection matrix; an input selector (907) configured to select the input interconnection matrix according to an input data scheduling scheme; an output selector (909) configured to select the output interconnection matrix according to an output data scheduling scheme; and a memory controller (911) coupled to both the input selector (907) and the output selector (909), wherein the memory controller (911) is configured to control the input data scheduling scheme such that data from the set of input data streams (902) is spread among the set of FIFO memories (901a, 901b, ..., 901c) in a round-robin manner and to control the output data scheduling scheme such that data from the set of FIFO memories (901a, 901b, ..., 901c) is retrieved to the set of output data streams (904) in a round-robin manner.
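The round-robin scheduling described in this abstract (spread writes across a bank of FIFOs, then read them back in the same rotation so overall FIFO order is preserved) can be sketched as a small software model. This is a simplified behavioral illustration with invented names, not the claimed hardware with its interconnection matrices:

```python
from collections import deque

class MemoryAggregator:
    """Behavioral model: a bank of FIFOs that together act as one
    wide FIFO, with round-robin write and read scheduling."""

    def __init__(self, num_fifos):
        self.fifos = [deque() for _ in range(num_fifos)]
        self.w = 0  # write pointer (models the input scheduling scheme)
        self.r = 0  # read pointer (models the output scheduling scheme)

    def write(self, word):
        # Spread incoming words among the FIFOs in round-robin order.
        self.fifos[self.w].append(word)
        self.w = (self.w + 1) % len(self.fifos)

    def read(self):
        # Retrieve words in the same rotation, preserving overall order.
        word = self.fifos[self.r].popleft()
        self.r = (self.r + 1) % len(self.fifos)
        return word

agg = MemoryAggregator(num_fifos=3)
for word in range(7):
    agg.write(word)
out = [agg.read() for _ in range(7)]
```

Because the read rotation mirrors the write rotation, `out` comes back in exactly the order written, even though consecutive words live in different physical memories; that is what lets several narrow memories serve as one higher-bandwidth FIFO.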
    • 5. Invention Application
    • Title: MULTI-STAGE INTERCONNECT NETWORK IN A PARALLEL PROCESSING NETWORK DEVICE
    • Publication No.: WO2015036870A2
    • Publication Date: 2015-03-19
    • Application No.: PCT/IB2014/002656
    • Filing Date: 2014-09-10
    • Applicant: MARVELL WORLD TRADE LTD.; KADOSH, Aviran; ZEMACH, Rami
    • Inventors: KADOSH, Aviran; ZEMACH, Rami
    • CPC: H04L45/04; H04L47/122; H04L49/109; H04L49/1515; H04L49/505
    • Abstract: A packet is received at a packet processing element, among a plurality of like packet processing elements, of a network device, and a request specifying a processing operation to be performed with respect to the packet by an accelerator engine functionally different from the plurality of like packet processing elements is generated by the packet processing element. The request is transmitted to an interconnect network that includes a plurality of interconnect units arranged in stages. A path through the interconnect network is selected among a plurality of candidate paths, wherein no path of the candidate paths includes multiple interconnect units within a same stage of the interconnect network. The request is then transmitted via the selected path to a particular accelerator engine among multiple candidate accelerator engines configured to perform the processing operation. The processing operation is then performed by the particular accelerator engine.