    • 1. Invention Application
    • MULTI-CORE INTERCONNECT IN A NETWORK PROCESSOR
    • Publication No.: WO2013066798A1
    • Publication Date: 2013-05-10
    • Application No.: PCT/US2012/062378
    • Filing Date: 2012-10-29
    • Applicant: CAVIUM, INC.
    • Inventors: KESSLER, Richard E.; ASHER, David H.; PERVEILER, John M.; DOBBIE, Bradley D.
    • IPC: G06F12/08; G06F15/78
    • CPC: G06F12/0813; G06F12/08
    • A network processor includes multiple processor cores for processing packet data. In order to provide the processor cores with access to a memory subsystem, an interconnect circuit directs communications between the processor cores and the L2 Cache and other memory devices. The processor cores are divided into several groups, each group sharing an individual bus, and the L2 Cache is divided into a number of banks, each bank having access to a separate bus. The interconnect circuit processes requests to store and retrieve data from the processor cores across multiple buses, and processes responses to return data from the cache banks. As a result, the network processor provides high-bandwidth memory access for multiple processor cores.
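The abstract above describes cores grouped onto shared request buses and an L2 cache split into banks, each bank reachable over its own bus. The following C fragment is a minimal software sketch of that routing idea only; the core and bank counts, the bank-selection hash, and all identifiers are illustrative assumptions rather than details taken from the patent.

```c
/* Minimal software model of the banked-L2 routing idea described in the
 * abstract of WO2013066798A1.  All names, sizes and the hash below are
 * illustrative assumptions, not the patent's actual design. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES     32
#define CORES_PER_BUS 8                       /* cores grouped onto shared request buses */
#define NUM_L2_BANKS  4                       /* each bank reachable over its own bus    */

/* Assumed bank-selection hash: pick a bank from mid-order address bits so
 * that consecutive cache lines spread across banks. */
static unsigned l2_bank_of(uint64_t paddr) {
    return (unsigned)((paddr >> 7) % NUM_L2_BANKS);
}

/* Assumed core-to-request-bus grouping. */
static unsigned request_bus_of(unsigned core_id) {
    return core_id / CORES_PER_BUS;
}

int main(void) {
    uint64_t addr = 0x12345680ULL;
    for (unsigned core = 0; core < NUM_CORES; core += CORES_PER_BUS) {
        printf("core %2u -> request bus %u, addr 0x%llx -> L2 bank %u\n",
               core, request_bus_of(core),
               (unsigned long long)addr, l2_bank_of(addr));
        addr += 128;                          /* next cache line lands in another bank */
    }
    return 0;
}
```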
    • 2. Invention Application
    • METHOD AND APPARATUS FOR TRANSFER OF WIDE COMMAND AND DATA BETWEEN A PROCESSOR AND COPROCESSOR
    • Publication No.: WO2015138312A1
    • Publication Date: 2015-09-17
    • Application No.: PCT/US2015/019426
    • Filing Date: 2015-03-09
    • Applicant: CAVIUM, INC.
    • Inventors: SNYDER, Wilson P.; KESSLER, Richard E.; BERTONE, Michael S.
    • IPC: G06F15/76; G06F9/38
    • CPC: G06F9/30145; G06F9/3881; G06F15/17337; G06F15/76
    • According to at least one example embodiment, a method of processing a wide command includes storing wide command data in a first physical structure of a processor. Information associated with the wide command is determined based on the wide command data and/or a corresponding memory address range associated with the wide command. The determined information includes a size of the wide command and is stored in a second physical structure of the processor. The processor causes the wide command data and the information associated with the wide command to be provided directly to a coprocessor for executing the wide command. The processor and the coprocessor may reside on a single chip device. Alternatively, the processor and the coprocessor may reside on separate chip devices in a multi-chip system.
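As a rough illustration of the two-structure hand-off described above, the C sketch below stages the wide command data in one buffer and the derived metadata (including the command size, taken here from the address range) in a second, then passes both to a coprocessor stub. The structure layouts, the size derivation, and coproc_issue() are assumptions made for illustration, not the patented mechanism.

```c
/* Illustrative C sketch of the hand-off described in the abstract of
 * WO2015138312A1: wide command data in one structure, derived metadata in
 * another, both provided to the coprocessor together. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WIDE_CMD_MAX_BYTES 128                /* assumed width of the data structure */

struct wide_cmd_data {                        /* "first physical structure" */
    uint8_t bytes[WIDE_CMD_MAX_BYTES];
};

struct wide_cmd_info {                        /* "second physical structure" */
    size_t   size;                            /* size of the wide command        */
    uint64_t target_addr;                     /* start of the associated range   */
};

/* Stand-in for the coprocessor interface (hypothetical). */
static void coproc_issue(const struct wide_cmd_data *d, const struct wide_cmd_info *i)
{
    (void)d;
    printf("coprocessor receives %zu-byte wide command for 0x%llx\n",
           i->size, (unsigned long long)i->target_addr);
}

/* Derive the command info from the data's memory address range, then hand
 * data and info to the coprocessor together. */
static void issue_wide_command(const uint8_t *payload,
                               uint64_t range_start, uint64_t range_end)
{
    struct wide_cmd_data data;
    struct wide_cmd_info info;

    info.size = (size_t)(range_end - range_start);   /* size from the address range */
    if (info.size > WIDE_CMD_MAX_BYTES)
        info.size = WIDE_CMD_MAX_BYTES;
    info.target_addr = range_start;
    memcpy(data.bytes, payload, info.size);

    coproc_issue(&data, &info);
}

int main(void)
{
    uint8_t payload[64] = {0};
    issue_wide_command(payload, 0x1000, 0x1040);     /* 64-byte wide command */
    return 0;
}
```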
    • 4. Invention Application
    • SYSTEM AND METHOD TO REDUCE MEMORY ACCESS LATENCIES USING SELECTIVE REPLICATION ACROSS MULTIPLE MEMORY PORTS
    • Publication No.: WO2013062708A1
    • Publication Date: 2013-05-02
    • Application No.: PCT/US2012/057269
    • Filing Date: 2012-09-26
    • Applicant: CAVIUM, INC.
    • Inventors: PANGBORN, Jeffrey; BOUCHARD, Gregg A.; GOYAL, Rajan; KESSLER, Richard E.
    • IPC: G06F12/02; G06F12/06
    • CPC: G06F12/1018; G06F12/0292; G06F12/06; G06F2212/174; G06F2212/2532
    • In one embodiment, a system comprises a plurality of memory ports (608A-608D). The memory ports are distributed into a plurality of subsets, where each subset is identified by a subset index and each of the memory ports has an individual wait time based on a respective workload. The system further comprises a first address hashing unit (602B) configured to receive a read request including a virtual memory address. The virtual memory address is associated with a replication factor, and the virtual memory address refers to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware based memory address. The hardware based address refers to graph data in the memory ports within a subset indicated by the corresponding subset index. The system further comprises a memory replication controller (604) configured to direct read requests to the hardware based address to the memory port, within the subset indicated by the corresponding subset index, that has the lowest individual wait time.
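The selection logic described above can be modeled in a few lines of C: hash the virtual address into one of the subsets allowed by the replication factor, then steer the read to the port in that subset with the lowest current wait time. The hash function, the port/subset layout, and the wait-time bookkeeping below are illustrative assumptions, not details from the patent.

```c
/* Minimal C model of the port-selection idea in the abstract of
 * WO2013062708A1.  Layout and hash are assumed for illustration only. */
#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS        4                    /* e.g. the ports labelled 608A-608D */
#define NUM_SUBSETS      2
#define PORTS_PER_SUBSET (NUM_PORTS / NUM_SUBSETS)

static unsigned port_wait[NUM_PORTS] = {3, 1, 4, 2};  /* per-port outstanding work */

/* Assumed hash: fold the virtual address, then reduce it into one of the
 * subsets that the replication factor allows the data to live in
 * (replication_factor is assumed to be at least 1). */
static unsigned subset_index(uint64_t vaddr, unsigned replication_factor)
{
    unsigned folded = (unsigned)(vaddr ^ (vaddr >> 16));
    unsigned limit  = replication_factor < NUM_SUBSETS ? replication_factor : NUM_SUBSETS;
    return folded % limit;
}

/* "Memory replication controller": pick the least-loaded port in the subset. */
static unsigned pick_port(uint64_t vaddr, unsigned replication_factor)
{
    unsigned subset = subset_index(vaddr, replication_factor);
    unsigned best   = subset * PORTS_PER_SUBSET;
    for (unsigned p = best + 1; p < (subset + 1) * PORTS_PER_SUBSET; p++)
        if (port_wait[p] < port_wait[best])
            best = p;
    return best;
}

int main(void)
{
    uint64_t vaddr = 0xABCD1234ULL;
    unsigned port  = pick_port(vaddr, 2);     /* graph data replicated twice */
    printf("read of 0x%llx routed to port %u (wait %u)\n",
           (unsigned long long)vaddr, port, port_wait[port]);
    return 0;
}
```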
    • 6. Invention Application
    • METHOD AND SYSTEM FOR ORDERING I/O ACCESS IN A MULTI-NODE ENVIRONMENT
    • Publication No.: WO2015134103A1
    • Publication Date: 2015-09-11
    • Application No.: PCT/US2014/072816
    • Filing Date: 2014-12-30
    • Applicant: CAVIUM, INC.
    • Inventors: KESSLER, Richard E.
    • IPC: G06F13/20; G06F5/06
    • CPC: G06F13/423; G06F12/0822; G06F12/0833; G06F13/10; G06F13/20
    • According to at least one example embodiment, a multi-chip system includes multiple chip devices configured to communicate to each other and share resources, such as I/O devices. According to at least one example embodiment, a method of synchronizing access to an input/output (I/O) device in the multi-chip system comprises initiating, by a first agent of the multi-chip system, a first operation for accessing the I/O device; the first operation is queued, prior to execution by the I/O device, in a queue. Once the first operation is queued, an indication of such queuing is provided. When a second agent of the multi-chip system detects the indication that the first operation is queued and initiates a second operation to access the I/O device, the second operation is queued subsequent to the first operation in the queue.
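The ordering hand-shake described above is sketched below as a single-threaded C model: agent 1's operation is queued first, a queuing indication is raised, and only after observing that indication does agent 2 queue its own operation, so the device drains them in order. The queue layout and the indication flag are assumptions made for illustration, not the patented synchronization mechanism.

```c
/* Single-threaded C sketch of the ordering idea in the abstract of
 * WO2015134103A1; the "indication" is modeled as a simple flag. */
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_DEPTH 8

struct io_op { int agent; int tag; };

static struct io_op queue[QUEUE_DEPTH];
static int  queue_len = 0;
static bool first_op_queued = false;          /* the queuing "indication" */

static void enqueue(int agent, int tag)
{
    if (queue_len < QUEUE_DEPTH)
        queue[queue_len++] = (struct io_op){agent, tag};
    if (agent == 1)
        first_op_queued = true;               /* indication visible to other agents */
}

int main(void)
{
    enqueue(1, 100);                          /* agent 1 initiates the first operation  */

    if (first_op_queued)                      /* agent 2 waits for the indication ...   */
        enqueue(2, 200);                      /* ... then its operation is queued after */

    for (int i = 0; i < queue_len; i++)       /* the I/O device drains in queue order   */
        printf("device executes op %d from agent %d\n", queue[i].tag, queue[i].agent);
    return 0;
}
```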
    • 10. Invention Application
    • METHOD AND APPARATUS FOR MEMORY ALLOCATION IN A MULTI-NODE SYSTEM
    • Publication No.: WO2015134100A1
    • Publication Date: 2015-09-11
    • Application No.: PCT/US2014/072808
    • Filing Date: 2014-12-30
    • Applicant: CAVIUM, INC.
    • Inventors: KESSLER, Richard E.; SNYDER, Wilson P.
    • IPC: G06F13/16
    • CPC: G06F3/0608; G06F3/0631; G06F3/0659; G06F3/0673; G06F13/1657
    • According to at least one example embodiment, a multi-chip system includes multiple chip devices configured to communicate to each other and share resources. According to at least one example embodiment, a method of memory allocation in the multi-chip system comprises managing, by each of one or more free-pool allocator (FPA) coprocessors in the multi-chip system, a corresponding list of pools of free-buffer pointers. Based on the one or more lists of free-buffer pointers managed by the one or more FPA coprocessors, a memory allocator (MA) hardware component allocates a free buffer, associated with a chip device of the multiple chip devices, to data associated with a work item. According to at least one aspect, the data associated with the work item represents a data packet.
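As a rough model of the free-pool idea above, the C sketch below keeps a per-node pool of free-buffer pointers and pops one to back the data of an incoming work item such as a packet. Pool sizes, the node count, and the fpa_* names are illustrative assumptions, not the FPA coprocessor's actual interface.

```c
/* Illustrative C model of the free-pool allocation idea in the abstract of
 * WO2015134100A1.  Structure names and sizes are assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_NODES 2                           /* chip devices in the multi-chip system */
#define POOL_SIZE 4
#define BUF_BYTES 2048

struct fpa_pool {                             /* per-node free-pool allocator state */
    void *free_ptrs[POOL_SIZE];
    int   count;
};

static struct fpa_pool pools[NUM_NODES];

static void fpa_init(void)
{
    for (int n = 0; n < NUM_NODES; n++) {
        pools[n].count = POOL_SIZE;
        for (int i = 0; i < POOL_SIZE; i++)
            pools[n].free_ptrs[i] = malloc(BUF_BYTES);
    }
}

/* Pop a free-buffer pointer from the given node's pool for a work item. */
static void *fpa_alloc(int node)
{
    struct fpa_pool *p = &pools[node];
    return p->count > 0 ? p->free_ptrs[--p->count] : NULL;
}

int main(void)
{
    fpa_init();
    void *pkt_buf = fpa_alloc(0);             /* buffer backing a packet work item */
    printf("work item buffer on node 0: %p\n", pkt_buf);
    return 0;                                 /* buffers intentionally not freed in this short sketch */
}
```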