    • 1. Published invention application
    • Title: DATA COMPRESSION USING MULTIPLE LEVELS
    • Publication number: WO1992022141A1
    • Publication date: 1992-12-10
    • Application number: PCT/US1992003932
    • Filing date: 1992-05-11
    • Applicant: TRIADA, LTD.
    • Inventors: BUGAJSKI, Joseph, M.; RUSSO, James, T.
    • IPC: H03M07/30
    • CPC: G06F17/3061; H03M7/3084; H03M7/3088
    • Abstract: A series of data processors (20a-20n) operate on a body of data to convert it to compressed form for storage or transmission. The processors are connected in series such that the output of one processor is the input to the next processor. Each processor has an associated memory (24a-24n). Each processor analyzes the serially input data elements in pairs to detect previously non-occurring pairs and stores those pairs in its memory. The output signal from each processor identifies the storage position in its associated memory of each input pair, whether or not the pair previously occurred. Each processor provides a single storage location output signal for each input pair of elements and maintains a count of the occurrences of each pair.
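The abstract describes each stage as a pair detector with its own memory and an occurrence counter. The following is a minimal Python sketch of that idea for a single stage; the class name, data layout, and pairing over a flat list are illustrative assumptions, not the patented implementation.

```python
from collections import OrderedDict

class PairStage:
    """Illustrative single pipeline stage: each unique input pair is stored once,
    and every incoming pair is replaced by the slot index where it is stored."""

    def __init__(self):
        self.memory = OrderedDict()   # pair -> [slot index, occurrence count]

    def process(self, elements):
        out = []
        for i in range(0, len(elements) - 1, 2):
            pair = (elements[i], elements[i + 1])
            if pair not in self.memory:                     # previously non-occurring pair
                self.memory[pair] = [len(self.memory), 0]   # store it at the next free slot
            entry = self.memory[pair]
            entry[1] += 1                                   # maintain the occurrence count
            out.append(entry[0])                            # one location signal per input pair
        return out
```

Because every pair of elements collapses to a single index, each stage roughly halves the length of the stream handed to the next stage in the chain.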
    • 3. Granted invention patent
    • Title: Method of storing compressed data for accelerated interrogation
    • Publication number: US5592667A
    • Publication date: 1997-01-07
    • Application number: US336942
    • Filing date: 1994-11-10
    • Applicant: Joseph M. Bugajski
    • Inventor: Joseph M. Bugajski
    • IPC: G06F17/30; H03M7/30; G06F7/00; G06F12/00
    • CPC: G06F17/3061; H03M7/3084; H03M7/3088; Y10S707/99943
    • Abstract: A method of data compression includes means to accelerate a direct query thereof. Input data are transformed into a multilevel n-ary tree structure wherein each leaf node corresponds to the creation of a memory storing unique occurrences of a particular data body, and each non-leaf node corresponds to a memory storing unique occurrences associated with its child nodes, whether leaf or non-leaf types. To accelerate a determination as to the solution of a query of the data, one or more pointers are further stored at each memory level, the pointers at least including those used to identify the parent of each child node and the children of each parent. In the preferred embodiment additional pointers are further stored in conjunction with each non-leaf node, these being used to identify other locations corresponding to unique occurrences derived through the same child nodes.
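To make the parent/child pointer layout in the abstract concrete, here is a small Python sketch; the Node type, field names, and helpers are hypothetical illustrations of the idea, not the claimed storage format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """One stored unique occurrence: a leaf holds a raw value, a non-leaf is
    built from child entries and keeps pointers in both directions."""
    value: object
    parent: Optional["Node"] = None                         # pointer to the parent entry
    children: List["Node"] = field(default_factory=list)    # pointers to the child entries

def combine(value, children):
    """Create a non-leaf entry and wire the parent/child pointers both ways."""
    parent = Node(value)
    for child in children:
        child.parent = parent
        parent.children.append(child)
    return parent

def ancestors(node):
    """Walk upward from any entry without re-scanning the data (query acceleration)."""
    while node.parent is not None:
        node = node.parent
        yield node
```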
    • 7. Granted invention patent
    • Title: Method of optimizing an N-gram memory structure
    • Publication number: US5966709A
    • Publication date: 1999-10-12
    • Application number: US939023
    • Filing date: 1997-09-26
    • Applicants: Tao Zhang; Joseph M. Bugajski; K. R. Raghavan
    • Inventors: Tao Zhang; Joseph M. Bugajski; K. R. Raghavan
    • IPC: G06F17/30; H03M7/30
    • CPC: H03M7/3084; Y10S707/99933; Y10S707/99942
    • Abstract: By placing a low cardinality node or a leaf in a lower level, and a high cardinality node or a leaf at a higher level, an optimal memory structure is automatically generated which yields the best compression within N-gram technology. After making an initial list of parallel streams or fields, the streams are ordered in accordance with increasing cardinality. Adjacent streams (fields), or nodes, are paired, and the children of the resulting node are eliminated from the list, while a new parent node is added to the list. The resulting new list is re-arranged from right to left as a function of increasing cardinality, and the pairing steps are repeated until a single root node remains for the final memory structure.
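The optimization loop described above can be read as a greedy pairing procedure over the fields. The sketch below is one plausible interpretation in Python, with cardinality measured as the number of distinct values in each stream; the exact pairing order and cardinality estimate used by the patent may differ.

```python
def build_memory_structure(columns):
    """columns: dict mapping field name -> list of values (parallel streams).
    Repeatedly pair the two adjacent lowest-cardinality nodes, drop the children
    from the list, add the new parent, re-order, and stop at a single root."""
    nodes = sorted(columns.items(), key=lambda kv: len(set(kv[1])))
    while len(nodes) > 1:
        (name_a, col_a), (name_b, col_b) = nodes[0], nodes[1]    # adjacent low-cardinality pair
        parent = ((name_a, name_b), list(zip(col_a, col_b)))     # parent stream of value pairs
        nodes = nodes[2:] + [parent]                             # children out, parent in
        nodes.sort(key=lambda kv: len(set(kv[1])))               # re-arrange by cardinality
    return nodes[0][0]    # nested tuple describing the tree shape, e.g. (('a', 'b'), 'c')
```

For example, calling this on three parallel fields where two have few distinct values pairs those two first, so the low-cardinality nodes end up deepest in the tree, as the abstract requires.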
    • 8. Granted invention patent
    • Title: Data compression with pipeline processor having separate memories
    • Publication number: US5293164A
    • Publication date: 1994-03-08
    • Application number: US978360
    • Filing date: 1992-11-18
    • Applicants: Joseph M. Bugajski; James T. Russo
    • Inventors: Joseph M. Bugajski; James T. Russo
    • IPC: G06F5/00; G06F17/30; H03M7/30; H03M7/42
    • CPC: G06F17/3061; H03M7/3084; H03M7/3088
    • Abstract: The compression system includes a series of pipelined data processors. Each processor has an associated memory. The body of digital data is applied serially to the first processor in the chain. The first processor analyzes pairs of data elements in its incoming signal to detect the occurrence of previously non-occurring sequences and stores those sequences in its associated memory. The output signal from the processor identifies the storage position in its associated memory of each pair of data elements in its input, whether or not those sequences have previously occurred in the data stream. Subsequent processors work with storage location signals only. Each processor provides a single output location signal for each pair of signals in its input. Each processor also determines the number of times that each incoming sequence has occurred and stores that number in association with each stored pair. A hashing table created by each processor and stored in its associated memory is used to segregate the stored pairs into groups having common lower significant figures to simplify the task of determining whether a pair of elements in the input has previously been stored. Pointers stored with each unique pair link the pairs in each hashed sequence in the order of their frequency of occurrence in the incoming data stream so that the incoming elements may be compared with the previously stored elements in the order of probability of occurrence of each pair in the data stream.
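As a rough illustration of the hashing table and frequency-ordered chains mentioned in the abstract, here is a hedged Python sketch; the bucket width, list-based chains, and method names are assumptions made for readability, not the patented circuit.

```python
class HashedPairMemory:
    """Stores unique element pairs in buckets keyed by the low-order bits of a
    hash, and keeps each bucket ordered by occurrence count so that lookups
    test the most frequently seen pairs first."""

    def __init__(self, bits=8):
        self.mask = (1 << bits) - 1
        self.buckets = {}       # low-order bits -> list of [pair, slot, count]
        self.next_slot = 0

    def lookup_or_store(self, pair):
        chain = self.buckets.setdefault(hash(pair) & self.mask, [])
        for entry in chain:                          # scanned in frequency order
            if entry[0] == pair:
                entry[2] += 1
                chain.sort(key=lambda e: -e[2])      # keep the chain frequency-ordered
                return entry[1]                      # storage location of a known pair
        slot, self.next_slot = self.next_slot, self.next_slot + 1
        chain.append([pair, slot, 1])                # previously non-occurring pair
        return slot
```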
    • 9. Granted invention patent
    • Title: Data compression with pipeline processors having separate memories
    • Publication number: US5245337A
    • Publication date: 1993-09-14
    • Application number: US706949
    • Filing date: 1991-05-29
    • Applicants: Joseph M. Bugajski; James T. Russo
    • Inventors: Joseph M. Bugajski; James T. Russo
    • IPC: G06F5/00; G06F17/30; H03M7/30; H03M7/42
    • CPC: G06F17/3061; H03M7/3084; H03M7/3088
    • Abstract: The compression system includes a series of pipelined data processors. Each processor has an associated memory. The body of digital data is applied serially to the first processor in the chain. The first processor analyzes pairs of data elements in its incoming signal to detect the occurrence of previously non-occurring sequences and stores those sequences in its associated memory. The output signal from the processor identifies the storage position in its associated memory of each pair of data elements in its input, whether or not those sequences have previously occurred in the data stream. Subsequent processors work with storage location signals only. Each processor provides a single output location signal for each pair of signals in its input. Each processor also determines the number of times that each incoming sequence has occurred and stores that number in association with each stored pair. A hashing table created by each processor and stored in its associated memory is used to segregate the stored pairs into groups having common lower significant figures to simplify the task of determining whether a pair of elements in the input has previously been stored. Pointers stored with each unique pair link the pairs in each hashed sequence in the order of their frequency of occurrence in the incoming data stream so that the incoming elements may be compared with the previously stored elements in the order of probability of occurrence of each pair in the data stream.
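This abstract, like the previous one, stresses that the processors form a series in which each stage has its own memory and downstream stages see only storage-location signals. The following is a compact, purely illustrative Python sketch of that composition; the function and variable names are assumptions, not the claimed hardware.

```python
def stage(memory, elements):
    """One pass: replace each adjacent pair of elements with the index of that
    pair in this stage's own memory; later stages only ever see such indices."""
    out = []
    for i in range(0, len(elements) - 1, 2):
        pair = (elements[i], elements[i + 1])
        out.append(memory.setdefault(pair, len(memory)))
    return out

# Three stages in series, each with a separate memory.
memories = [dict() for _ in range(3)]
data = list(b"abababcdcdcdabab")
for memory in memories:
    data = stage(memory, data)   # the stream roughly halves at every stage
```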