    • 21. Granted patent
    • Slave cache having sub-line valid bits updated by a master cache
    • US5784590A
    • 1998-07-21
    • US618637
    • 1996-03-19
    • Earl T. Cohen, Jay C. Pattin
    • G06F12/08
    • G06F12/0848, G06F12/0857, G06F12/0897, G06F12/0831
    • A cache system has a large master cache and smaller slave caches. The slave caches are coupled to the processor's pipelines and are kept small and simple to increase their speed. The master cache is set-associative and performs many of the complex cache-management operations for the slave caches, freeing the slaves of these bandwidth-robbing duties. Only the slave caches store sub-line valid bits with all cache lines; the master cache validates only full cache lines. During a miss from a slave cache, the slave cache sends its sub-line valid bits to the master cache, where they are loaded into a request pipeline. As each request is fulfilled and finishes the pipeline, its address is compared to the addresses of all other pending requests in the master's pipeline. If another pending request matches the completing request's index and tag, its sub-line valid bits are updated by setting the bit for the completing request's sub-line. If another pending request matches the index but not the tag, all of that request's sub-line valid bits are cleared. Thus the sub-line valid bits of pending requests are updated as each request completes the master's pipeline and writes its sub-line to the slave cache.
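The valid-bit fix-up rule in the abstract above can be sketched in a few lines. This is an illustrative model only, not the patented hardware; the names (`PendingRequest`, `update_pending`) and the sub-line count are assumptions.

```python
# Hypothetical sketch of the pending-request update rule from US5784590A:
# when one request completes the master's pipeline, every other pending
# request with the same index either gains the completing sub-line's valid
# bit (tag match) or loses all its valid bits (tag mismatch).
from dataclasses import dataclass, field

SUB_LINES = 4  # assumed number of sub-lines per cache line


@dataclass
class PendingRequest:
    index: int      # cache-line index in the slave cache
    tag: int        # address tag
    sub_line: int   # which sub-line this request fetches
    valid: list = field(default_factory=lambda: [False] * SUB_LINES)


def update_pending(completing: PendingRequest, pending: list) -> None:
    """Fix up the sub-line valid bits of every other pending request
    when `completing` finishes the master's pipeline."""
    for req in pending:
        if req is completing:
            continue
        if req.index == completing.index:
            if req.tag == completing.tag:
                # Same line: the completing sub-line is now valid in the slave.
                req.valid[completing.sub_line] = True
            else:
                # Same slave line, different tag: the line will be replaced,
                # so none of the old sub-lines remain valid.
                req.valid = [False] * SUB_LINES
```

The tag-mismatch branch is what keeps a pending request from reporting stale sub-lines as valid after its slave line has been overwritten.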
    • 22. Patent application
    • FLASH MEMORY READ SCRUB AND CHANNEL TRACKING
    • US20140068365A1
    • 2014-03-06
    • US13597489
    • 2012-08-29
    • Zhengang Chen, Earl T. Cohen
    • G06F11/07
    • G06F11/106
    • An apparatus having a first circuit and a second circuit is disclosed. The first circuit may be configured to (i) read data from a region of a memory circuit during a read scrub of the region and (ii) generate a plurality of statistics based on (a) the data and (b) one or more bit flips performed during an error correction of the data. The memory circuit is generally configured to store the data in a nonvolatile condition. One or more reference voltages may be used to read the data. The second circuit may be configured to (i) update a plurality of parameters of the region based on the statistics and (ii) compute updated values of the reference voltages based on the parameters.
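The two-stage flow described above (collect statistics during a read scrub, then derive updated reference voltages) can be illustrated with a toy model. The function names and the proportional update rule are assumptions for illustration; the patent does not specify this particular tracking policy.

```python
# Illustrative sketch of the two circuits in US20140068365A1: the first
# gathers bit-flip statistics from error correction during a read scrub,
# the second updates the read reference voltage from those statistics.

def scrub_statistics(raw_words, corrected_words):
    """First circuit: count the bit flips error correction applied,
    by XORing raw reads against their corrected values."""
    flips = 0
    for raw, fixed in zip(raw_words, corrected_words):
        flips += bin(raw ^ fixed).count("1")
    return {"reads": len(raw_words), "bit_flips": flips}


def update_reference_voltage(v_ref, stats, gain=0.05, target_flips_per_read=0.5):
    """Second circuit: nudge the reference voltage toward the target
    error rate. This proportional rule is a hypothetical stand-in for
    the patent's parameter-based computation."""
    error = stats["bit_flips"] / stats["reads"] - target_flips_per_read
    return v_ref - gain * error
```

The key idea the sketch preserves is that the scrub pass yields statistics as a side effect of reads it performs anyway, so channel tracking costs no extra read traffic.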
    • 23. Patent application
    • LOOKUP ENGINE WITH PIPELINED ACCESS, SPECULATIVE ADD AND LOCK-IN-HIT FUNCTION
    • US20140068176A1
    • 2014-03-06
    • US13600464
    • 2012-08-31
    • Leonid Baryudin, Earl T. Cohen, Kent Wayne Wendorf
    • G06F12/12, G06F12/00
    • G11C15/00, G06F3/0619, G06F3/0659, G06F3/0671
    • Described embodiments provide a lookup engine that receives lookup requests including a requested key and a speculative add requestor. Iteratively, for each one of the lookup requests, the lookup engine searches each entry of a lookup table for an entry having a key matching the requested key of the lookup request. If the lookup table does not include an entry having a key matching the requested key, the lookup engine sends a miss indication corresponding to the lookup request to the control processor. If the speculative add requestor is set, the lookup engine speculatively adds the requested key to a free entry in the lookup table. Speculatively added keys are searchable in the lookup table for subsequent lookup requests to maintain coherency of the lookup table without creating duplicate key entries, comparing missed keys with each other or stalling the lookup engine to insert missed keys.
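The speculative-add behaviour above reduces to a simple coherency rule: a missed key is inserted into the table immediately, so later lookups see it without duplicates or engine stalls. A minimal software model, with class and field names that are illustrative rather than from the patent:

```python
# Minimal sketch of the speculative-add rule in US20140068176A1. A real
# lookup engine is pipelined hardware; this models only the table-side
# behaviour: misses are reported to the control processor, and a missed
# key with the speculative-add requestor set is added to a free entry
# so subsequent lookups hit it.

class LookupEngine:
    def __init__(self):
        self.table = {}    # key -> entry (speculative or confirmed)
        self.misses = []   # miss indications sent to the control processor

    def lookup(self, key, speculative_add=False):
        if key in self.table:
            return self.table[key]
        self.misses.append(key)  # miss indication for this request
        if speculative_add:
            # Speculatively claim a free entry; later lookups for the
            # same key now hit, so no duplicate entry is ever created.
            self.table[key] = {"key": key, "speculative": True}
            return self.table[key]
        return None
```

Because the speculative entry becomes searchable at once, the engine never needs to compare in-flight missed keys against each other or stall while the control processor confirms the insert.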
    • 28. Granted patent
    • Tokens in token buckets maintained among primary and secondary storages
    • US07599287B2
    • 2009-10-06
    • US11271247
    • 2005-11-11
    • James Fraser Testa, Eyal Oren, Earl T. Cohen
    • H04L12/56
    • H04L47/10, H04L47/215
    • Token buckets are used in a computer or communications system to control the rates at which corresponding items are processed. The number of tokens in a token bucket identifies the amount of processing available for the corresponding item. Instead of storing the value of a token bucket as a single value in a single memory location, as traditionally done, the value of a token bucket is stored across multiple storage locations, such as in on-chip storage and in off-chip storage (e.g., in a memory device). An indication (e.g., one or more bits) can also be stored on chip to identify whether or not the off-chip stored value is zero and/or of at least a certain magnitude, so that it can be readily determined whether there are sufficient tokens to process an item without accessing the off-chip storage.
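The split-storage scheme above can be modelled with a small on-chip count, an off-chip count, and an on-chip indicator bit for the off-chip part. The class and field names below are illustrative, and the refill-on-demand policy is one plausible reading of the abstract, not the claimed design.

```python
# Sketch of the split token bucket in US07599287B2: most consume
# decisions are made from the on-chip count plus a "non-zero" hint
# about the off-chip count, so the expensive off-chip access happens
# only when it can actually change the outcome.

class SplitTokenBucket:
    def __init__(self, on_chip=0, off_chip=0):
        self.on_chip = on_chip
        self.off_chip = off_chip                 # e.g. held in a memory device
        self.off_chip_nonzero = off_chip > 0     # on-chip indicator bit
        self.off_chip_reads = 0                  # counts expensive accesses

    def consume(self, tokens):
        """Try to take `tokens` from the bucket, avoiding the off-chip
        read whenever the on-chip state already decides the outcome."""
        if self.on_chip >= tokens:
            self.on_chip -= tokens
            return True
        if not self.off_chip_nonzero:
            return False                         # decided without off-chip access
        # Must read off-chip storage: move its tokens on chip (assumed policy).
        self.off_chip_reads += 1
        self.on_chip += self.off_chip
        self.off_chip = 0
        self.off_chip_nonzero = False
        if self.on_chip >= tokens:
            self.on_chip -= tokens
            return True
        return False
```

Both fast paths (enough on-chip tokens; off-chip known to be zero) skip the off-chip access entirely, which is the point of keeping the indicator on chip.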
    • 30. Granted patent
    • Multi-threaded packet processing architecture with global packet memory, packet recirculation, and coprocessor
    • US07551617B2
    • 2009-06-23
    • US11054076
    • 2005-02-08
    • Will Eatherton, Earl T. Cohen, John Andrew Fingerhut, Donald E. Steiss, John Williams
    • H04L12/56
    • H04L47/56, H04L45/60, H04L47/50
    • A network processor has numerous novel features including a multi-threaded processor array, a multi-pass processing model, and Global Packet Memory (GPM) with hardware-managed packet storage. These unique features allow the network processor to perform high-touch packet processing at high data rates. The packet processor can also be coded using a stack-based high-level programming language, such as C or C++. This allows quicker and higher-quality porting of software features into the network processor. Processor performance also does not severely drop off when additional processing features are added. For example, packets can be more intelligently processed by assigning processing elements to different bounded-duration arrival processing tasks and variable-duration main processing tasks. A recirculation path moves packets between the different arrival and main processing tasks. Other novel hardware features include a hardware architecture that efficiently intermixes co-processor operations with multi-threaded processing operations and improves cache affinity.
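The multi-pass model above (bounded-duration arrival tasks feeding variable-duration main tasks through a recirculation path) can be sketched as two stages joined by a queue. The queue name and the task bodies are illustrative placeholders, not the patented pipeline.

```python
# Rough model of the multi-pass flow in US07551617B2: packets first run
# a bounded-duration arrival task, then recirculate to a
# variable-duration main processing task.
from collections import deque


def arrival_task(packet):
    # Bounded-duration work, e.g. classification and stamping (assumed).
    packet["classified"] = True
    return packet


def main_task(packet):
    # Variable-duration work, e.g. full feature processing (assumed).
    packet["processed"] = True
    return packet


def run(packets):
    recirc = deque()   # recirculation path between the two passes
    done = []
    for p in packets:
        recirc.append(arrival_task(p))
    while recirc:
        done.append(main_task(recirc.popleft()))
    return done
```

Separating the passes lets arrival processing keep up with line rate even when main processing time varies per packet, which is the scheduling benefit the abstract describes.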