    • 1. Invention Application
    • PACKET FLOW CLASSIFICATION
    • US20150341473A1
    • 2015-11-26
    • US14286975
    • 2014-05-23
    • Cristian Florin F. Dumitrescu; Namakkal N. Venkatesan; Pierre Laurent; Bruce Richardson
    • H04L29/06; H04L12/743
    • H04L69/22; H04L12/6418; H04L45/7453
    • Technologies for packet flow classification on a computing device include a hash table including a plurality of hash table buckets in which each hash table bucket maps a plurality of keys to corresponding traffic flows. The computing device performs packet flow classification on received data packets, where the packet flow classification includes a plurality of sequential classification stages and fetch classification operations and non-fetch classification operations are performed in each classification stage. The fetch classification operations include to prefetch a key of a first received data packet based on a set of packet fields of the first received data packet for use during a subsequent classification stage, prefetch a hash table bucket from the hash table based on a key signature of the prefetched key for use during another subsequent classification stage, and prefetch a traffic flow to be applied to the first received data packet based on the prefetched hash table bucket and the prefetched key. The computing device handles processing of received data packets such that a fetch classification operation is performed by the flow classification module on the first received data packet while a non-fetch classification operation is performed by the flow classification module on a second received data packet.
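The abstract above describes a bucketed hash table that maps packet keys to traffic flows, with a pipelined lookup in which prefetch (fetch) operations for one packet overlap the bucket search (non-fetch) operations for another packet. The C sketch below only illustrates that idea under assumed data structures: the key layout, the toy hash, and the names key_signature, search_bucket, and classify_burst are inventions of this example (it also relies on the GCC/Clang __builtin_prefetch intrinsic), not the patent's or any library's actual API.

```c
/*
 * Minimal sketch, assuming structures not given in the abstract.
 * A bucketed hash table maps flow keys to flow IDs; while the bucket
 * for packet i is searched (non-fetch work), the signature and bucket
 * for packet i+1 are computed and prefetched (fetch work).
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define KEYS_PER_BUCKET 4
#define NUM_BUCKETS     1024            /* power of two, so the hash can be masked */

struct flow_key {                       /* assumed 5-tuple-style key */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint32_t proto;                     /* widened so the struct has no padding */
};

struct bucket {                         /* each bucket maps several keys to flows */
    uint32_t        sig[KEYS_PER_BUCKET];
    struct flow_key key[KEYS_PER_BUCKET];
    uint32_t        flow_id[KEYS_PER_BUCKET];
};

static struct bucket table[NUM_BUCKETS];

static uint32_t key_signature(const struct flow_key *k)
{
    /* toy hash for illustration; a real classifier would use a stronger one */
    return (k->src_ip * 2654435761u) ^ k->dst_ip ^
           (((uint32_t)k->src_port << 16) | k->dst_port) ^ k->proto;
}

static uint32_t search_bucket(const struct bucket *b, uint32_t sig,
                              const struct flow_key *k)
{
    for (int i = 0; i < KEYS_PER_BUCKET; i++)
        if (b->sig[i] == sig && memcmp(&b->key[i], k, sizeof(*k)) == 0)
            return b->flow_id[i];
    return UINT32_MAX;                  /* no matching traffic flow */
}

/* Classify a burst of packets, overlapping fetch work for packet i+1
 * with non-fetch (lookup) work for packet i. */
void classify_burst(const struct flow_key *keys, uint32_t *flow_ids, size_t n)
{
    if (n == 0)
        return;

    uint32_t sig = key_signature(&keys[0]);
    __builtin_prefetch(&table[sig & (NUM_BUCKETS - 1)]);

    for (size_t i = 0; i < n; i++) {
        uint32_t cur_sig = sig;

        if (i + 1 < n) {                /* fetch stage for the next packet */
            sig = key_signature(&keys[i + 1]);
            __builtin_prefetch(&table[sig & (NUM_BUCKETS - 1)]);
        }

        /* non-fetch stage for the current packet */
        flow_ids[i] = search_bucket(&table[cur_sig & (NUM_BUCKETS - 1)],
                                    cur_sig, &keys[i]);
    }
}
```

The single software-pipeline stage shown here is a simplification; the abstract describes separate prefetch steps for the key, the hash-table bucket, and the traffic flow, spread over several sequential classification stages.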
    • 7. Invention Application
    • METHOD AND APPARATUS FOR SHARED LINE UNIFIED CACHE
    • US20150178199A1
    • 2015-06-25
    • US14137359
    • 2013-12-20
    • Liang-Min Wang; John M. Morgan; Namakkal N. Venkatesan
    • G06F12/08
    • G06F12/084; G06F12/0811; G06F12/082; G06F12/0831; G06F12/126; G06F2212/1016; G06F2212/6042; Y02D10/13
    • An apparatus and method for implementing a shared unified cache. For example, one embodiment of a processor comprises: a plurality of processor cores grouped into modules, wherein each module has at least two processor cores grouped therein; a plurality of level 1 (L1) caches, each L1 cache directly accessible by one of the processor cores; a level 2 (L2) cache associated with each module, the L2 cache directly accessible by each of the processor cores associated with its respective module; a shared unified cache to store data and/or instructions for each of the processor cores in each of the modules; and a cache management module to manage the cache lines in the shared unified cache using a first cache line eviction policy favoring cache lines which are shared across two or more modules and which are accessed relatively more frequently from the modules.
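The abstract above describes an eviction policy for the shared unified cache that favors keeping cache lines that are shared across two or more modules and accessed relatively more frequently. The C sketch below is a hypothetical illustration of such a victim-selection rule; the per-line metadata fields and the function choose_victim are assumptions made for this example, not the patent's actual design.

```c
/*
 * Hypothetical sketch of a victim-selection rule matching the policy in
 * the abstract: lines shared by two or more modules and accessed more
 * frequently are kept; unshared or colder lines are evicted first.
 */
#include <stdint.h>
#include <stdbool.h>

#define WAYS_PER_SET 8

struct cache_line {
    bool     valid;
    uint64_t tag;
    uint32_t module_mask;      /* bit m set => module m currently shares the line */
    uint32_t access_count;     /* would be decayed periodically in a real design */
};

/* Number of modules currently sharing the line. */
static unsigned sharers(const struct cache_line *l)
{
    return (unsigned)__builtin_popcount(l->module_mask);
}

/* Choose the way to evict from one set of the shared unified cache. */
int choose_victim(const struct cache_line set[WAYS_PER_SET])
{
    int victim = 0;

    for (int w = 0; w < WAYS_PER_SET; w++) {
        if (!set[w].valid)
            return w;                      /* free way: no eviction needed */

        bool w_shared = sharers(&set[w]) >= 2;
        bool v_shared = sharers(&set[victim]) >= 2;

        if (w_shared != v_shared) {
            if (!w_shared)                 /* unshared lines are evicted before shared ones */
                victim = w;
        } else if (set[w].access_count < set[victim].access_count) {
            victim = w;                    /* within the same class, evict the colder line */
        }
    }
    return victim;
}
```

A real cache controller would also decay access_count over time and derive module_mask from coherence traffic between the per-module L2 caches and the shared cache; both are omitted in this sketch.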