    • 2. Invention Application
    • MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING
    • Publication No.: WO2017223009A1
    • Publication Date: 2017-12-28
    • Application No.: PCT/US2017/038209
    • Filing Date: 2017-06-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: HAKKANI-TUR, Dilek Z.; CELIKYILMAZ, Asli; CHEN, Yun-Nung; DENG, Li; GAO, Jianfeng; TUR, Gokhan; WANG, Ye-Yi
    • IPC: G10L15/22; G10L15/18; G10L15/16
    • CPC: G06N3/08; G06N3/0445; G10L15/16; G10L15/1822; G10L15/22
    • Abstract: A processing unit can train a model as a joint multi-domain recurrent neural network (JRNN), such as a bi-directional recurrent neural network (bRNN) and/or a recurrent neural network with long-short term memory (RNN-LSTM) for spoken language understanding (SLU). The processing unit can use the trained model to, e.g., jointly model slot filling, intent determination, and domain classification. The joint multi-domain model described herein can estimate a complete semantic frame per query, and the joint multi-domain model enables multi-task deep learning leveraging the data from multiple domains. The joint multi-domain recurrent neural network (JRNN) can leverage semantic intents (such as, finding or identifying, e.g., a domain specific goal) and slots (such as, dates, times, locations, subjects, etc.) across multiple domains.
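The abstract describes a single recurrent encoder shared by slot filling, intent determination, and domain classification. Below is a minimal PyTorch sketch of that joint multi-task shape; it is not the patented implementation, and the bi-directional LSTM, mean pooling, and all sizes and names are illustrative assumptions.

```python
# A minimal sketch of the joint multi-domain idea: one shared bi-LSTM
# encoder, a per-token slot head, and utterance-level intent/domain heads.
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 n_slots, n_intents, n_domains):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.slot_head = nn.Linear(2 * hidden_dim, n_slots)      # per token
        self.intent_head = nn.Linear(2 * hidden_dim, n_intents)  # per utterance
        self.domain_head = nn.Linear(2 * hidden_dim, n_domains)  # per utterance

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))  # (B, T, 2H)
        pooled = states.mean(dim=1)                      # utterance summary
        return (self.slot_head(states),    # slot logits per token
                self.intent_head(pooled),  # intent logits
                self.domain_head(pooled))  # domain logits

model = JointSLU(vocab_size=1000, embed_dim=32, hidden_dim=64,
                 n_slots=10, n_intents=5, n_domains=3)
slots, intent, domain = model(torch.randint(0, 1000, (2, 7)))
# Multi-task training sums the three cross-entropy losses, so data from
# every domain updates the shared encoder, per the abstract.
```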
    • 3. Invention Application
    • LEVERAGING GLOBAL DATA FOR ENTERPRISE DATA ANALYTICS
    • Publication No.: WO2017019318A1
    • Publication Date: 2017-02-02
    • Application No.: PCT/US2016/042374
    • Filing Date: 2016-07-15
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: DENG, Li; GAO, Jianfeng; HE, Xiaodong; SINGH, Prabhdeep
    • IPC: G06Q10/06
    • CPC: G06N3/08; G06N3/04; G06Q10/067
    • Abstract: A deep learning network is trained to automatically analyze enterprise data. Raw data from one or more global data sources is received, and a specific training dataset that includes data exemplary of the enterprise data is also received. The raw data from the global data sources is used to pre-train the deep learning network to predict the results of a specific enterprise outcome scenario. The specific training dataset is then used to further train the deep learning network to predict the results of a specific enterprise outcome scenario. Alternately, the raw data from the global data sources may be automatically mined to identify semantic relationships there-within, and the identified semantic relationships may be used to pre-train the deep learning network to predict the results of a specific enterprise outcome scenario.
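The two-stage recipe in this abstract (pre-train on plentiful global data, then continue training on a small enterprise dataset for a specific outcome scenario) can be sketched in a few lines. The toy MLP, random placeholder data, and learning rates below are assumptions for illustration, not details from the patent.

```python
# A minimal sketch of pre-training on global data, then fine-tuning on a
# smaller enterprise-specific training set.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.BCEWithLogitsLoss()

def fit(x, y, lr, steps):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(x).squeeze(-1), y)
        loss.backward()
        opt.step()

# Stage 1: pre-train on plentiful raw global data (random placeholders here).
x_global, y_global = torch.randn(512, 16), torch.randint(0, 2, (512,)).float()
fit(x_global, y_global, lr=1e-3, steps=200)

# Stage 2: fine-tune on the small enterprise dataset, typically with a lower
# learning rate so the pre-trained weights are not overwritten.
x_ent, y_ent = torch.randn(64, 16), torch.randint(0, 2, (64,)).float()
fit(x_ent, y_ent, lr=1e-4, steps=100)
```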
    • 4. Invention Application
    • END-TO-END MEMORY NETWORKS FOR CONTEXTUAL LANGUAGE UNDERSTANDING
    • Publication No.: WO2017223010A1
    • Publication Date: 2017-12-28
    • Application No.: PCT/US2017/038210
    • Filing Date: 2017-06-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: CHEN, Yun-Nung; HAKKANI-TUR, Dilek Z.; TUR, Gokhan; DENG, Li; GAO, Jianfeng
    • IPC: G10L15/18; G10L15/16; G10L15/22
    • CPC: G06N3/08; G06N3/0445; G10L15/16; G10L15/1822; G10L15/22
    • Abstract: A processing unit can extract salient semantics to model knowledge carryover, from one turn to the next, in multi-turn conversations. Architecture described herein can use the end-to-end memory networks to encode inputs, e.g., utterances, with intents and slots, which can be stored as embeddings in memory, and in decoding the architecture can exploit latent contextual information from memory, e.g., demographic context, visual context, semantic context, etc. e.g., via an attention model, to leverage previously stored semantics for semantic parsing, e.g., for joint intent prediction and slot tagging. In examples, architecture is configured to build an end-to-end memory network model for contextual, e.g., multi-turn, language understanding, to apply the end-to-end memory network model to multiple turns of conversational input; and to fill slots for output of contextual, e.g., multi-turn, language understanding of the conversational input. The neural network can be learned using backpropagation from output to input using gradient descent optimization.
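A small sketch can make the memory-attention mechanism concrete: prior turns are embedded as memory vectors, the current utterance attends over them, and the attended context feeds joint intent and slot heads. The encoders, mean-of-embeddings sentence representations, and all dimensions below are illustrative assumptions rather than the patent's design.

```python
# A minimal sketch of end-to-end memory attention for multi-turn SLU.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryNetworkSLU(nn.Module):
    def __init__(self, vocab_size, dim, n_intents, n_slots):
        super().__init__()
        self.embed_in = nn.Embedding(vocab_size, dim)   # memory encoder
        self.embed_q = nn.Embedding(vocab_size, dim)    # query encoder
        self.intent_head = nn.Linear(2 * dim, n_intents)
        self.slot_head = nn.Linear(2 * dim, n_slots)

    def forward(self, history, current):
        # history: (N, T) tokens of prior turns; current: (T,) this turn
        memory = self.embed_in(history).mean(dim=1)   # (N, dim) memories
        query = self.embed_q(current).mean(dim=0)     # (dim,) current turn
        attn = F.softmax(memory @ query, dim=0)       # knowledge carryover
        context = attn @ memory                       # attended context
        h = torch.cat([query, context])
        token_h = torch.cat([self.embed_q(current),
                             context.expand(current.size(0), -1)], dim=-1)
        return self.intent_head(h), self.slot_head(token_h)

model = MemoryNetworkSLU(vocab_size=1000, dim=32, n_intents=5, n_slots=10)
intent, slots = model(torch.randint(0, 1000, (3, 6)),
                      torch.randint(0, 1000, (6,)))
# Every step is differentiable, so the pipeline trains end to end with
# backpropagation and gradient descent, as the abstract notes.
```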
    • 5. Invention Application
    • MODELING INTERESTINGNESS WITH DEEP NEURAL NETWORKS
    • Publication No.: WO2015191652A1
    • Publication Date: 2015-12-17
    • Application No.: PCT/US2015/034994
    • Filing Date: 2015-06-10
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: GAO, Jianfeng; DENG, Li; GAMON, Michael; HE, Xiaodong; PANTEL, Patrick
    • IPC: G06F17/30
    • CPC: G06N3/04; G06F17/30967; G06N3/0427; G06N3/082
    • An "Interestingness Modeler" uses deep neural networks to learn deep semantic models (DSM) of "interestingness." The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a "context" and optional "focus" of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding "interesting" targets in that space. The resulting interestingness model has applicable uses, including, but not limited to, contextual entity searches, automatic text highlighting, prefetching documents of likely interest, automated content recommendation, automated advertisement placement, etc.
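The two-branch structure of the deep semantic model can be sketched as a pair of encoders trained so that clicked source-target pairs score higher than random pairs in the latent space. The bag-of-words inputs, layer sizes, and margin loss below are simplifying assumptions, not the patent's specification.

```python
# A minimal sketch of a two-branch deep semantic model of interestingness.
import torch
import torch.nn as nn
import torch.nn.functional as F

def branch(vocab_size, dim):
    return nn.Sequential(nn.Linear(vocab_size, 128), nn.Tanh(),
                         nn.Linear(128, dim))

src_enc, tgt_enc = branch(5000, 64), branch(5000, 64)

def interestingness(src_bow, tgt_bow):
    # Cosine similarity in the shared latent space scores how "interesting"
    # the target document is to a reader of the source document.
    return F.cosine_similarity(src_enc(src_bow), tgt_enc(tgt_bow), dim=-1)

# One training step on click-transition pairs mined from browser logs:
# clicked (positive) targets should outscore random (negative) ones.
opt = torch.optim.Adam(list(src_enc.parameters()) + list(tgt_enc.parameters()))
src, pos, neg = torch.rand(8, 5000), torch.rand(8, 5000), torch.rand(8, 5000)
loss = F.relu(0.5 - interestingness(src, pos)
              + interestingness(src, neg)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```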
    • 6. Invention Application
    • END-TO-END LEARNING OF DIALOGUE AGENTS FOR INFORMATION ACCESS
    • Publication No.: WO2018044633A1
    • Publication Date: 2018-03-08
    • Application No.: PCT/US2017/048098
    • Filing Date: 2017-08-23
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: LI, Lihong; DHINGRA, Bhuwan; GAO, Jianfeng; LI, Xiujun; CHEN, Yun-Nung; DENG, Li; AHMED, Faisal
    • IPC: G06N3/02; G06N3/04; G06F17/30
    • Abstract: Described herein are systems, methods, and techniques by which a processing unit can build an end-to-end dialogue agent model for end-to-end learning of dialogue agents for information access and apply the end-to-end dialogue agent model with soft attention over knowledge base entries to make the dialogue system differentiable. In various examples the processing unit can apply the end-to-end dialogue agent model to a source of input, fill slots for output from the knowledge base entries, induce a posterior distribution over the entities in a knowledge base or induce a posterior distribution of a target of the requesting user over entities from a knowledge base, develop an end-to-end differentiable model of a dialogue agent, use supervised and/or imitation learning to initialize network parameters, calculate a modified version of an episodic algorithm, e.g., the REINFORCE algorithm, for training an end-to-end differentiable model based on user feedback.
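The key differentiable piece, soft attention over knowledge base entries, can be sketched briefly: the agent keeps a normalized posterior over entities instead of doing a hard database lookup, so a REINFORCE-style policy gradient can flow through it. The toy KB, state encoder, and constant reward below are assumptions for illustration only.

```python
# A minimal sketch of soft attention over KB entries with a REINFORCE-style
# update from user feedback.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_entities, dim = 6, 16
kb_embeddings = nn.Parameter(torch.randn(n_entities, dim))  # one per KB entry
state_enc = nn.Linear(dim, dim)  # encodes the dialogue state / user input

def entity_posterior(dialogue_state):
    # Soft attention: score every KB entry against the dialogue state and
    # normalize, yielding a differentiable distribution over entities.
    scores = kb_embeddings @ state_enc(dialogue_state)
    return F.softmax(scores, dim=0)

state = torch.randn(dim)
posterior = entity_posterior(state)           # p(entity | dialogue so far)
action = torch.distributions.Categorical(posterior).sample()
log_prob = torch.log(posterior[action])

# REINFORCE-style update: scale the log-probability of the sampled action by
# the user-feedback reward. Per the abstract, supervised and/or imitation
# learning would initialize the parameters before this stage.
reward = 1.0  # placeholder user feedback
loss = -reward * log_prob
loss.backward()
```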
    • 9. Invention Application
    • CONTEXT-SENSITIVE SEARCH USING A DEEP LEARNING MODEL
    • Publication No.: WO2015160544A1
    • Publication Date: 2015-10-22
    • Application No.: PCT/US2015/024417
    • Filing Date: 2015-04-06
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: GUO, Chenlei; GAO, Jianfeng; WANG, Ye-Yi; DENG, Li; HE, Xiaodong
    • IPC: G06F17/30
    • CPC: G06F17/30554; G06F17/3053; G06F17/30867; G06N3/0454
    • Abstract: A search engine is described herein for providing search results based on a context in which a query has been submitted, as expressed by context information. The search engine operates by ranking a plurality of documents based on a consideration of the query, and based, in part, on a context concept vector and a plurality of document concept vectors, both generated using a deep learning model (such as a deep neural network). The context concept vector is formed by a projection of the context information into a semantic space using the deep learning model. Each document concept vector is formed by a projection of document information, associated with a particular document, into the same semantic space using the deep learning model. The ranking operates by favoring documents that are relevant to the context within the semantic space, and disfavoring documents that are not relevant to the context.
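The ranking idea reduces to projecting the context and each candidate document into one semantic space and ordering documents by similarity to the context concept vector. The feed-forward projector and feature sizes in this sketch are illustrative assumptions; the patent's deep learning model and the query-dependent part of the ranking are not reproduced here.

```python
# A minimal sketch of context-sensitive ranking in a shared semantic space.
import torch
import torch.nn as nn
import torch.nn.functional as F

projector = nn.Sequential(nn.Linear(300, 128), nn.Tanh(), nn.Linear(128, 64))

def rank_documents(context_feats, doc_feats):
    # context_feats: (300,) features of the query's submission context
    # doc_feats: (N, 300) features of N candidate documents
    context_vec = projector(context_feats)         # context concept vector
    doc_vecs = projector(doc_feats)                # document concept vectors
    scores = F.cosine_similarity(doc_vecs, context_vec.unsqueeze(0), dim=-1)
    return torch.argsort(scores, descending=True)  # favor context-relevant docs

order = rank_documents(torch.randn(300), torch.randn(5, 300))
# The abstract's full rank also considers the query itself; only the
# context-to-document similarity is shown here.
```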