    • 91. Invention Grant
    • Multiple hash scheme for use in a pattern matching accelerator
    • Publication: US08635180B2 (2014-01-21)
    • Application: US13021757 (filed 2011-02-06)
    • Inventors: Giora Biran, Christoph Hagleitner, Timothy Hume Heil, Jan Van Lunteren
    • IPC: G06N5/04
    • CPC: G06N5/047, G06N5/025
    • Abstract: A pattern matching accelerator (PMA) for assisting software threads to find the presence and location of strings in an input data stream that match a given pattern. The patterns are defined using regular expressions that are compiled into a data structure comprised of rules subsequently processed by the PMA. The patterns to be searched in the input stream are defined by the user as a set of regular expressions and are grouped in pattern context sets. The sets of regular expressions which define the pattern context sets are compiled to generate a rules structure used by the PMA hardware. The rules are compiled before search run time and stored in main memory, in rule cache memory within the PMA, or a combination thereof. For each input character, the PMA executes the search and returns the search results.
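As a rough illustration of the compile-then-scan flow this abstract describes, the Python sketch below compiles a "pattern context" (a set of regular expressions) ahead of time into one combined rules object, then scans an input stream and reports which pattern matched and where. The function names and the use of Python's `re` module are my own stand-ins for the PMA's compiled rule structure and hardware scan, not the patent's actual interfaces.

```python
import re

def compile_context(patterns):
    """Compile a pattern context (set of regexes) into one rules object.

    Each alternative is tagged with a named group so the matching
    pattern's index can be recovered after a scan."""
    combined = "|".join(f"(?P<p{i}>{p})" for i, p in enumerate(patterns))
    return re.compile(combined)

def scan(rules, stream):
    """Scan the input stream, returning (pattern_index, start, end)
    for every match found."""
    results = []
    for m in rules.finditer(stream):
        idx = int(m.lastgroup[1:])   # recover pattern index from group name
        results.append((idx, m.start(), m.end()))
    return results

# Compile before "search run time", then scan the input stream.
rules = compile_context([r"cat", r"do+g"])
print(scan(rules, "a cat and a doog"))   # [(0, 2, 5), (1, 12, 16)]
```

The key correspondence is the separation of phases: compilation happens once, before any data arrives, and the scan consults only the precompiled structure.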
    • 93. Invention Grant
    • Performance monitoring mechanism for use in a pattern matching accelerator
    • Publication: US08402003B2 (2013-03-19)
    • Application: US13022904 (filed 2011-02-08)
    • Inventors: Giora Biran, Christoph Hagleitner, Timothy H. Heil, Jan Van Lunteren
    • IPC: G06F17/30
    • CPC: G06F17/30985
    • Abstract: A pattern matching accelerator (PMA) for assisting software threads to find the presence and location of strings in an input data stream that match a given pattern. The patterns are defined using regular expressions that are compiled into a data structure comprised of rules subsequently processed by the PMA. The patterns to be searched in the input stream are defined by the user as a set of regular expressions and are grouped in pattern context sets. The sets of regular expressions which define the pattern context sets are compiled to generate a rules structure used by the PMA hardware. The rules are compiled before search run time and stored in main memory, in rule cache memory within the PMA, or a combination thereof. For each input character, the PMA executes the search and returns the search results.
    • 94. Invention Application
    • ACCELERATOR ENGINE COMMANDS SUBMISSION OVER AN INTERCONNECT LINK
    • Publication: US20120254587A1 (2012-10-04)
    • Application: US13077804 (filed 2011-03-31)
    • Inventors: Giora Biran, Ilya Granovsky
    • IPC: G06F15/76, G06F9/30
    • CPC: G06F9/3877, G06F9/547, G06F13/38, G06F13/4022, G06F13/4282, G06F15/167, G06F2213/0026
    • Abstract: An apparatus and method of submitting hardware accelerator engine commands over an interconnect link such as a PCI Express (PCIe) link. In one embodiment, the mechanism is implemented inside a PCIe Host Bridge which is integrated into a host IC or chipset. The mechanism provides an interface compatible with other integrated accelerators, thereby eliminating the overhead of maintaining different programming models for local and remote accelerators. Co-processor requests issued by threads requesting a service (client threads) and targeting a remote accelerator are queued and sent to a PCIe adapter and remote accelerator engine over a PCIe link. The remote accelerator engine performs the requested processing task and delivers results back to host memory, and the PCIe Host Bridge performs the co-processor request completion sequence (status update, write to flag, interrupt) included in the co-processor command.
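The submission flow in this abstract can be sketched in software terms: client threads enqueue co-processor requests, a host-bridge object ships them over the link to the remote engine, and on each completion the bridge runs the completion actions encoded in the command itself. Every class and name below is an invented model for illustration, not the patent's hardware design or any real PCIe API.

```python
from dataclasses import dataclass

@dataclass
class CoprocRequest:
    engine: str                               # target remote accelerator engine
    payload: bytes                            # data for the engine to process
    completion: tuple = ("status", "flag")    # completion actions in the command

class HostBridge:
    """Queues co-processor requests and ships them over the link."""
    def __init__(self, link):
        self.link = link
        self.queue = []
        self.status = {}    # status words written on completion
        self.flags = {}     # flag locations written on completion

    def submit(self, req_id, req):
        self.queue.append((req_id, req))      # a client thread enqueues a request

    def drain(self):
        while self.queue:
            req_id, req = self.queue.pop(0)
            result = self.link.send(req)      # remote engine does the work
            # Completion sequence encoded in the command itself:
            if "status" in req.completion:
                self.status[req_id] = "done"
            if "flag" in req.completion:
                self.flags[req_id] = result

class FakeLink:
    """Stands in for the PCIe link plus the remote accelerator engine."""
    def send(self, req):
        return req.payload.upper()            # pretend the engine transforms data

bridge = HostBridge(FakeLink())
bridge.submit(1, CoprocRequest("transform", b"abc"))
bridge.drain()
print(bridge.status[1], bridge.flags[1])      # done b'ABC'
```

The point the abstract stresses is that the client thread's view (submit a command, wait for status/flag) is the same whether the engine behind the bridge is local or remote.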
    • 95. Invention Application
    • METHOD OF CONSTRUCTING AN APPROXIMATED DYNAMIC HUFFMAN TABLE FOR USE IN DATA COMPRESSION
    • Publication: US20100253556A1 (2010-10-07)
    • Application: US12418896 (filed 2009-04-06)
    • Inventors: Giora Biran, Hubertus Franke, Amit Golander, Hao Yu
    • IPC: H03M7/40
    • CPC: H03M7/40
    • Abstract: A novel and useful method of constructing a fast approximation of a dynamic Huffman table from a data sample comprising a subset of the data to be compressed. The frequency of incidence of each symbol in the sample is calculated, and the symbols are then allocated to predefined bins based on their frequency of incidence. The bins are then transformed into binary sub-trees, where the leaf nodes of each binary sub-tree comprise the symbols of the bin associated with that sub-tree. The binary sub-trees are then combined via nesting, thereby creating a coarse-grained binary tree in which all leaves are mapped to a specified number of depths. The coarse-grained binary tree is then traversed, yielding a canonical code for each symbol and thereby defining the entries of a dynamic Huffman table.
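The approximation idea above can be sketched end to end: bin symbols by the magnitude of their sample frequency, build a small Huffman tree over the bins only, give every symbol its bin's depth plus enough extra bits to tell the bin's members apart, and derive canonical codes from the lengths alone. The binning rule and depth arithmetic here are my own simplification for illustration; the patent's actual sub-tree nesting procedure may differ.

```python
import heapq
from collections import Counter
from math import ceil, log2

def bin_code_lengths(sample):
    """Approximate Huffman code lengths via frequency bins."""
    freq = Counter(sample)
    # 1. Allocate symbols to bins by order of magnitude of frequency.
    bins = {}
    for sym, f in freq.items():
        bins.setdefault(int(log2(f)), []).append(sym)
    # 2. Huffman over the bins themselves, tracking each bin's depth.
    heap = [(sum(freq[s] for s in syms), i, {b: 0})
            for i, (b, syms) in enumerate(bins.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {b: d + 1 for b, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tick, merged))
        tick += 1
    depths = heap[0][2]
    # 3. Symbol length = bin depth + bits to distinguish the bin's members.
    lengths = {}
    for b, syms in bins.items():
        extra = ceil(log2(len(syms))) if len(syms) > 1 else 0
        for s in syms:
            lengths[s] = depths[b] + extra
    return lengths

def canonical_codes(lengths):
    """Assign canonical prefix codes from code lengths alone."""
    codes, code, prev = {}, 0, 0
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev)
        codes[sym] = format(code, f"0{length}b")
        code, prev = code + 1, length
    return codes

codes = canonical_codes(bin_code_lengths("aaaaaaaabbbbccddeeff"))
print(codes)
# {'a': '00', 'b': '01', 'c': '100', 'd': '101', 'e': '110', 'f': '111'}
```

The resulting lengths always satisfy the Kraft inequality (a bin of n symbols at depth d contributes n·2^-(d+⌈log2 n⌉) ≤ 2^-d), so the canonical assignment yields a valid prefix code.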
    • 96. Invention Grant
    • Method and apparatus for data decompression in the presence of memory hierarchies
    • Publication: US07692561B2 (2010-04-06)
    • Application: US12175214 (filed 2008-07-17)
    • Inventors: Giora Biran, Hubertus Franke, Amit Golander, Hao Yu
    • IPC: H03M7/34
    • CPC: H03M7/3086
    • Abstract: A method for decompressing a stream of a compressed data packet includes determining whether first data of a data-dictionary for a first decompression copy operation is located in a history buffer on a remote memory or a local memory and, when it is determined that the first data is located in the remote memory, stalling the first decompression copy operation, performing a second decompression operation using second data that is located in the history buffer on the local memory, and fetching the first data from the remote memory to the history buffer on the local memory. The method further includes performing the first decompression operation using the first data in the history buffer on the local memory.
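The stall-and-switch scheduling this abstract describes can be modeled in a few lines: an LZ-style copy whose dictionary bytes are only in remote memory is stalled while those bytes are fetched into the local history buffer, a copy whose bytes are already local runs in the meantime, and the stalled copy then resumes from the now-populated local buffer. The data layout and names below are invented for illustration.

```python
from collections import deque

def decompress_copies(copies, local, remote):
    """copies: list of (name, start, length) dictionary references;
    local/remote map absolute history position -> byte value.
    Returns (name, data) pairs in completion order."""
    completed = []
    pending = deque(copies)
    stalled = None
    while pending or stalled:
        if pending:
            name, start, length = pending.popleft()
            span = range(start, start + length)
            if all(p in local for p in span):
                # Dictionary bytes already local: execute immediately.
                completed.append((name, bytes(local[p] for p in span)))
            elif stalled is None:
                # Stall this copy and start fetching its remote bytes
                # into the local history buffer.
                stalled = (name, start, length)
                for p in span:
                    local.setdefault(p, remote[p])
                continue   # run the next (local) copy while the fetch lands
            else:
                pending.appendleft((name, start, length))
        if stalled:
            # Fetch has landed: resume the stalled copy from local memory.
            name, start, length = stalled
            completed.append((name, bytes(local[p]
                              for p in range(start, start + length))))
            stalled = None
    return completed

local = {i: b for i, b in enumerate(b"AB")}   # positions 0-1 are local
remote = {10: ord("C"), 11: ord("D")}         # positions 10-11 only remote
print(decompress_copies([("c1", 10, 2), ("c2", 0, 2)], local, remote))
# [('c2', b'AB'), ('c1', b'CD')]  -- c1 stalls for its fetch, c2 runs first
```

The completion order shows the payoff: the fetch latency of the first copy is hidden behind useful work on the second.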
    • 97. Invention Application
    • METHOD AND APPARATUS FOR DATA DECOMPRESSION IN THE PRESENCE OF MEMORY HIERARCHIES
    • Publication: US20100013678A1 (2010-01-21)
    • Application: US12175214 (filed 2008-07-17)
    • Inventors: Giora Biran, Hubertus Franke, Amit Golander, Hao Yu
    • IPC: H03M5/00
    • CPC: H03M7/3086
    • Abstract: A method for decompressing a stream of a compressed data packet includes determining whether first data of a data-dictionary for a first decompression copy operation is located in a history buffer on a remote memory or a local memory and, when it is determined that the first data is located in the remote memory, stalling the first decompression copy operation, performing a second decompression operation using second data that is located in the history buffer on the local memory, and fetching the first data from the remote memory to the history buffer on the local memory. The method further includes performing the first decompression operation using the first data in the history buffer on the local memory.
    • 99. Invention Application
    • Descriptor Prefetch Mechanism for High Latency and Out of Order DMA Device
    • Publication: US20080168259A1 (2008-07-10)
    • Application: US11621789 (filed 2007-01-10)
    • Inventors: Giora Biran, Luis E. De la Torre, Bernard C. Drerup, Jyoti Gupta, Richard Nicholas
    • IPC: G06F9/30, G06F12/14, G06F11/00
    • CPC: G06F13/28
    • Abstract: A DMA device prefetches descriptors into a descriptor prefetch buffer. The descriptor prefetch buffer is sized to hold an appropriate number of descriptors for a given latency environment. To support a linked list of descriptors, the DMA engine prefetches descriptors based on the assumption that they are sequential in memory and discards any descriptors found to violate this assumption. The DMA engine seeks to keep the descriptor prefetch buffer full by requesting multiple descriptors per transaction whenever possible. The bus engine fetches these descriptors from system memory and writes them to the prefetch buffer. The DMA engine may also use an aggressive prefetch in which the bus engine requests the maximum number of descriptors the buffer will support whenever there is any space in the descriptor prefetch buffer. The DMA device discards any remaining descriptors that cannot be stored.
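The speculative discard rule above can be sketched compactly: descriptors are fetched in batches on the assumption that the linked list is laid out at consecutive addresses, and any speculatively fetched descriptor whose address does not match the previous descriptor's next-pointer is discarded, as is anything that no longer fits in the buffer. Addresses are small integers here and one "unit" stands in for the fixed descriptor size; all names are invented for illustration.

```python
class Descriptor:
    """A DMA descriptor: its own address plus the list's next-pointer."""
    def __init__(self, addr, next_addr):
        self.addr, self.next_addr = addr, next_addr

def prefetch(memory, head_addr, capacity, batch):
    """Fill a prefetch buffer from the descriptor list at head_addr.
    memory maps address -> Descriptor."""
    buffer = []
    addr = head_addr
    while addr in memory and len(buffer) < capacity:
        # One transaction: speculatively fetch up to `batch` descriptors
        # from consecutive addresses.
        fetched = [memory[a] for a in range(addr, addr + batch) if a in memory]
        expected = addr
        for d in fetched:
            # Discard descriptors that break the sequential assumption
            # or no longer fit in the buffer.
            if d.addr != expected or len(buffer) >= capacity:
                break
            buffer.append(d)
            expected = d.next_addr     # follow the linked list
        addr = expected                # resume from the true next descriptor
    return buffer

# Linked list 0 -> 1 -> 2 -> 7 -> 8; the descriptor at 3 is a neighbor
# that gets speculatively fetched but does not belong to the list.
memory = {a: Descriptor(a, n)
          for a, n in [(0, 1), (1, 2), (2, 7), (3, 4), (7, 8), (8, None)]}
print([d.addr for d in prefetch(memory, 0, 8, 4)])   # [0, 1, 2, 7, 8]
```

The descriptor at address 3 is fetched in the first batch (the sequential guess), then discarded because the descriptor at 2 actually points to 7, matching the abstract's discard rule.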