    • 9. Granted Invention Patent
    • Title: Efficient lexical trending topic detection over streams of data using a modified sequitur algorithm
    • Publication No.: US08838599B2
    • Publication Date: 2014-09-16
    • Application No.: US12780850
    • Filing Date: 2010-05-14
    • Inventor(s): Zhichen Xu, Yun Fu, Neal Sample
    • Applicant(s): Zhichen Xu, Yun Fu, Neal Sample
    • IPC: G06F17/30
    • CPC: G06F17/30616
    • Abstract: Embodiments are directed towards a Modified Sequitur algorithm (MSA) using pipelining and indexed arrays to identify trending topics within a plurality of documents having user generated content (UGC). The documents are parallelized and distributed across a plurality of network devices, which place at least some of the received documents into a buffer; the MSA may then be applied to the documents within the buffer to identify n-grams or phrases within the documents' contents. The identified phrases are further analyzed to remove extraneous co-occurrences of phrases and/or words based on a part-of-speech analysis. A weighting of the remaining phrases is used to identify trending topic phrases. Links to content in the plurality of UGC documents that is associated with the trending topic phrases may then be displayed to a client device. (A sketch of this pipeline appears after this record.)
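The abstract above describes a batch-and-weight pipeline: buffer incoming UGC documents, extract n-grams/phrases, filter out extraneous co-occurrences of function words, weight the surviving phrases, and surface the top ones as trending topics. The following is a minimal Python sketch of that general idea only, not the patented Modified Sequitur algorithm; the names (`detect_trending_phrases`, `ngrams`), the stopword filter standing in for part-of-speech analysis, and the frequency-times-length weighting are illustrative assumptions.

```python
from collections import Counter
from itertools import islice

# A tiny stopword list standing in for the part-of-speech filtering
# mentioned in the abstract (an illustrative assumption, not the MSA).
STOPWORDS = {"a", "an", "and", "are", "at", "for", "in", "is", "of", "on", "the", "to"}


def ngrams(tokens, n):
    """Yield consecutive n-grams (as tuples) from a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))


def detect_trending_phrases(buffered_docs, max_n=3, top_k=5):
    """Count candidate phrases across one buffer of documents, drop phrases
    made entirely of stopwords, weight the rest by frequency times phrase
    length, and return the highest-weighted phrases."""
    counts = Counter()
    for doc in buffered_docs:
        tokens = [t.lower().strip(".,!?") for t in doc.split()]
        for n in range(1, max_n + 1):
            for gram in ngrams(tokens, n):
                if all(tok in STOPWORDS for tok in gram):
                    continue  # extraneous co-occurrence of function words
                counts[gram] += 1
    weighted = {g: c * len(g) for g, c in counts.items() if c > 1}
    ranked = sorted(weighted, key=weighted.get, reverse=True)[:top_k]
    return [" ".join(g) for g in ranked]


if __name__ == "__main__":
    buffer = [
        "New phone launch draws huge crowds downtown.",
        "Huge crowds lined up at the new phone launch.",
        "Reviews of the new phone launch are mixed.",
    ]
    print(detect_trending_phrases(buffer))
```

In a streaming setting, the same routine would be re-run over each successive buffer of documents, and per-buffer counts could be compared against a longer-running baseline to separate genuinely trending phrases from perennially common ones.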
    • 10. Granted Invention Patent
    • Title: Push pull caching for social network information
    • Publication No.: US08655842B2
    • Publication Date: 2014-02-18
    • Application No.: US12542144
    • Filing Date: 2009-08-17
    • Inventor(s): Zhichen Xu
    • Applicant(s): Zhichen Xu
    • IPC: G06F17/30
    • CPC: H04L67/32, G06F12/0868, G06F2015/765, G06Q50/01, H04L67/26, H04L67/2842, H04L67/306, H04L67/325
    • Abstract: Embodiments are directed towards modifying a distribution of writers as either push writers or pull writers based on a cost model that decides, for a given content reader, whether it is more effective for a writer to be a pull writer or a push writer. A cache is maintained for each content reader for caching content items that are pushed, when the content is generated, by the push writers in that reader's push list of writers. At query time, content items are pulled by the content reader based on the writers in the content reader's pull list. One embodiment of the cost model employs data about a previous number of requests for a given writer's content items over a number of previous blended display results. When a writer is determined to be popular, mechanisms are proposed for pushing content items to a plurality of content readers. (A sketch of this push/pull scheme appears after this record.)
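The abstract above contrasts two ways of assembling a reader's feed: push (fan content out to each follower's cache at write time) and pull (fetch from the writer's store at query time), with a cost model choosing the mode per writer. Below is a self-contained Python sketch of that hybrid under a deliberately simplified cost heuristic (push a writer whose content is requested at least as often as it is written); the names (`FeedStore`, `choose_push_writers`, `read_feed`) and the heuristic itself are illustrative assumptions, not the patent's cost model over blended display results.

```python
from collections import defaultdict


class FeedStore:
    """Hybrid push/pull feed delivery (illustrative sketch)."""

    def __init__(self, push_writers):
        # Writers an external cost model classified as push writers;
        # everyone else is treated as a pull writer.
        self.push_writers = set(push_writers)
        self.items = defaultdict(list)         # writer -> items, newest last
        self.reader_cache = defaultdict(list)  # reader -> items pushed to that reader
        self.followers = defaultdict(set)      # writer -> readers who follow them

    def follow(self, reader, writer):
        self.followers[writer].add(reader)

    def write(self, writer, item):
        self.items[writer].append(item)
        if writer in self.push_writers:
            # Push path: fan the item out to follower caches at write time.
            for reader in self.followers[writer]:
                self.reader_cache[reader].append((writer, item))

    def read_feed(self, reader, pull_list):
        """Blend cached (pushed) items with items pulled at query time
        from the pull writers on the reader's pull list."""
        feed = list(self.reader_cache[reader])
        for writer in pull_list:
            if writer not in self.push_writers:
                feed.extend((writer, item) for item in self.items[writer])
        return feed


def choose_push_writers(read_requests, write_counts):
    """Toy cost model (an assumption, not the patented one): a writer is
    worth pushing when their content was requested at least as often as
    it was written, so one push replaces several repeated pulls."""
    return {w for w, writes in write_counts.items()
            if read_requests.get(w, 0) >= writes}


if __name__ == "__main__":
    push = choose_push_writers({"bob": 5, "carol": 1}, {"bob": 2, "carol": 4})
    store = FeedStore(push_writers=push)        # bob is pushed, carol is pulled
    store.follow("alice", "bob")
    store.write("bob", "bob-post-1")
    store.write("carol", "carol-post-1")
    print(store.read_feed("alice", ["carol"]))
    # -> [('bob', 'bob-post-1'), ('carol', 'carol-post-1')]
```

The trade-off the cost model captures is that pushing pays off when the same item would otherwise be pulled many times, while pulling avoids wasted fan-out for writers whose content is rarely requested; a production system would also need to backfill reader caches when a writer is reclassified, which this sketch omits.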