    • 6. Invention patent application
    • DATA COMPRESSION USING A NESTED HIERARCHY OF FIXED PHRASE LENGTH STATIC AND DYNAMIC DICTIONARIES
    • Publication number: US20110043387A1
    • Publication date: 2011-02-24
    • Application number: US12544726
    • Filing date: 2009-08-20
    • Inventors: Bulent Abali; Mohammad Banikazemi; Peter Franaszek; Luis A. Lastras; Dan E. Poff
    • Applicants: Bulent Abali; Mohammad Banikazemi; Peter Franaszek; Luis A. Lastras; Dan E. Poff
    • IPC: H03M7/34
    • CPC: H03M7/3088
    • The present invention describes lossless data compression/decompression methods and systems. A random access memory (RAM) operates as a static dictionary and includes commonly used strings/symbols/phrases/words. An input buffer operates as a dynamic dictionary and includes input strings/phrases/symbols/words. A set-associative cache memory operates as a hash table, and includes pointers pointing to the commonly used strings/symbols/phrases/words in the static dictionary and/or pointing to one or more of the input strings/phrases/symbols/words in the dynamic dictionary. Alternatively, the set-associative cache memory combines the dynamic dictionary, the static dictionary and the hash table. When encountering a symbol/phrase/string/word in the static or dynamic dictionary in an input stream, a compressor logic or module places a pointer pointing to the symbol/phrase/string/word at a current location on the output stream. The hash table may include phrases/symbols/strings/words and/or pointers pointing to phrases/symbols/strings/words.
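The abstract above describes a scheme built from three parts: a static dictionary of commonly used phrases held in RAM, a dynamic dictionary built from the input, and a set-associative cache used as a hash table of pointers into both; when an input phrase is found in either dictionary, the compressor emits a pointer instead of the phrase. The sketch below is only an illustration of that general idea under assumptions, not the patented method: the fixed phrase length, the token format, and the names `compress`, `decompress`, and `PHRASE_LEN` are invented for this example.

```python
PHRASE_LEN = 4   # fixed phrase length, an assumed parameter

def compress(data: bytes, static_phrases: list):
    """Return a token stream of ('ptr', dict_id, index) and ('lit', byte) entries."""
    # Hash table: phrase -> (dict_id, index); dict_id 0 = static dictionary, 1 = dynamic.
    table = {p: (0, i) for i, p in enumerate(static_phrases) if len(p) == PHRASE_LEN}
    dynamic = []
    tokens, pos = [], 0
    while pos < len(data):
        phrase = data[pos:pos + PHRASE_LEN]
        hit = table.get(phrase)
        if hit is not None:
            tokens.append(('ptr',) + hit)        # pointer into static or dynamic dictionary
            pos += PHRASE_LEN
        else:
            tokens.append(('lit', data[pos]))    # emit a literal byte
            if len(phrase) == PHRASE_LEN and phrase not in table:
                table[phrase] = (1, len(dynamic))    # grow the dynamic dictionary
                dynamic.append(phrase)
            pos += 1
    return tokens, dynamic

def decompress(tokens, static_phrases, dynamic):
    dicts = (static_phrases, dynamic)
    out = bytearray()
    for token in tokens:
        if token[0] == 'ptr':
            out += dicts[token[1]][token[2]]     # copy the referenced phrase
        else:
            out.append(token[1])                 # literal byte
    return bytes(out)
```

In this toy form the dynamic dictionary is handed to the decompressor directly, and an ordinary Python dict stands in for the set-associative cache; the scheme described in the abstract would instead let the decompressor rebuild the dynamic dictionary from the decoded stream and would emit pointers as compact codes on the output stream.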
    • 7. Invention patent application
    • WEAR REDUCTION METHODS BY USING COMPRESSION/DECOMPRESSION TECHNIQUES WITH FAST RANDOM ACCESS
    • Publication number: US20100302077A1
    • Publication date: 2010-12-02
    • Application number: US12476297
    • Filing date: 2009-06-02
    • Inventors: Bulent Abali; Mohammad Banikazemi; Dan E. Poff
    • Applicants: Bulent Abali; Mohammad Banikazemi; Dan E. Poff
    • IPC: H03M7/34; G06F12/08; G06F12/00
    • CPC: H03M7/3084; G06F12/0804; G06F2212/401
    • The present invention reduces the number of writes to a main memory to increase useful life of the main memory. To reduce the number of writes to the main memory, data to be written is written to a cache line in a lowest-level cache memory and in a higher-level cache memory(s). If the cache line in the lowest-level cache memory is full, the number of used cache lines in the lowest-level cache reaches a threshold, or there is a need for an empty entry in the lowest-level cache, a processor or a hardware unit compresses content of the cache line and stores the compressed content in the main memory. The present invention also provides an LZB algorithm allowing decompression of data from an arbitrary location in a compressed data stream, with a bound on the number of characters which need to be processed before a character or string of interest is processed.
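The first half of the abstract above describes absorbing writes in a lowest-level cache and only compressing a line and writing it back to main memory once the cache fills past a threshold, so the wear-limited memory sees far fewer writes than the processor issues. The sketch below models that idea under assumptions and is not the patented design: zlib stands in for the compressor, the LZB random-access algorithm is not reproduced, and `LINE_SIZE`, `THRESHOLD`, and the class and method names are invented for illustration.

```python
import zlib

LINE_SIZE = 64   # bytes per cache line (assumed)
THRESHOLD = 8    # evict once this many lines are cached (assumed)

class WriteReducingCache:
    """Absorb writes in a cache; compress lines into main memory only on eviction."""

    def __init__(self):
        self.lines = {}          # line address -> bytearray (the "lowest-level cache")
        self.main_memory = {}    # line address -> compressed bytes (wear-limited store)
        self.writes_to_memory = 0

    def _load_line(self, line_addr: int) -> bytearray:
        # Fetch (and decompress) a line into the cache if it is not already there.
        if line_addr not in self.lines:
            stored = self.main_memory.get(line_addr)
            self.lines[line_addr] = bytearray(zlib.decompress(stored) if stored else LINE_SIZE)
        return self.lines[line_addr]

    def write(self, addr: int, data: bytes):
        # Writes update the cached line only; main memory is not touched here.
        line_addr, offset = addr - addr % LINE_SIZE, addr % LINE_SIZE
        assert offset + len(data) <= LINE_SIZE, "sketch: a write must fit in one line"
        self._load_line(line_addr)[offset:offset + len(data)] = data
        if len(self.lines) >= THRESHOLD:
            self._evict()

    def _evict(self):
        # Compress the oldest cached line and perform a single write to main memory.
        line_addr, line = next(iter(self.lines.items()))
        self.main_memory[line_addr] = zlib.compress(bytes(line))
        self.writes_to_memory += 1
        del self.lines[line_addr]

    def read(self, addr: int) -> int:
        line_addr, offset = addr - addr % LINE_SIZE, addr % LINE_SIZE
        return self._load_line(line_addr)[offset]
```

Because each eviction performs one write of a compressed line rather than one write per store, the `writes_to_memory` counter grows far more slowly than the number of `write()` calls, which is the effect the abstract claims for main-memory wear.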