    • 1. Invention application
    • MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING
    • Publication No.: WO2017223009A1
    • Publication date: 2017-12-28
    • Application No.: PCT/US2017/038209
    • Filing date: 2017-06-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: HAKKANI-TUR, Dilek Z.; CELIKYILMAZ, Asli; CHEN, Yun-Nung; DENG, Li; GAO, Jianfeng; TUR, Gokhan; WANG, Ye-Yi
    • IPC: G10L15/22; G10L15/18; G10L15/16
    • CPC: G06N3/08; G06N3/0445; G10L15/16; G10L15/1822; G10L15/22
    • A processing unit can train a model as a joint multi-domain recurrent neural network (JRNN), such as a bi-directional recurrent neural network (bRNN) and/or a recurrent neural network with long-short term memory (RNN-LSTM) for spoken language understanding (SLU). The processing unit can use the trained model to, e.g., jointly model slot filling, intent determination, and domain classification. The joint multi-domain model described herein can estimate a complete semantic frame per query, and the joint multi-domain model enables multi-task deep learning leveraging the data from multiple domains. The joint multi-domain recurrent neural network (JRNN) can leverage semantic intents (such as, finding or identifying, e.g., a domain specific goal) and slots (such as, dates, times, locations, subjects, etc.) across multiple domains.
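The joint architecture the abstract describes — shared recurrent states feeding a per-token slot tagger plus utterance-level intent and domain heads — can be sketched minimally in numpy. All sizes, random weights, and head names below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

V, E, H = 20, 8, 16                        # vocab, embedding, hidden sizes (illustrative)
N_SLOTS, N_INTENTS, N_DOMAINS = 5, 4, 3

# Randomly initialised parameters stand in for trained weights.
emb = rng.normal(size=(V, E))
W_f, U_f = rng.normal(size=(H, E)) * 0.1, rng.normal(size=(H, H)) * 0.1  # forward RNN
W_b, U_b = rng.normal(size=(H, E)) * 0.1, rng.normal(size=(H, H)) * 0.1  # backward RNN
W_slot = rng.normal(size=(N_SLOTS, 2 * H)) * 0.1     # per-token slot head
W_intent = rng.normal(size=(N_INTENTS, 2 * H)) * 0.1 # utterance-level intent head
W_domain = rng.normal(size=(N_DOMAINS, 2 * H)) * 0.1 # utterance-level domain head

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def joint_frame(tokens):
    """Estimate a full semantic frame: one slot per token, one intent, one domain."""
    x = emb[tokens]                                  # (T, E)
    T = len(tokens)
    hf, hb = np.zeros((T, H)), np.zeros((T, H))
    h = np.zeros(H)
    for t in range(T):                               # forward recurrence
        h = np.tanh(W_f @ x[t] + U_f @ h)
        hf[t] = h
    h = np.zeros(H)
    for t in reversed(range(T)):                     # backward recurrence
        h = np.tanh(W_b @ x[t] + U_b @ h)
        hb[t] = h
    hcat = np.concatenate([hf, hb], axis=1)          # bi-directional states, (T, 2H)
    slots = softmax(hcat @ W_slot.T).argmax(axis=1)  # slot tag per token
    summary = hcat.mean(axis=0)                      # shared sentence representation
    intent = int(softmax(W_intent @ summary).argmax())
    domain = int(softmax(W_domain @ summary).argmax())
    return slots, intent, domain

slots, intent, domain = joint_frame([3, 7, 1, 12])
```

Because all three heads read the same recurrent states, gradients from intent, domain, and slot losses would all update the shared encoder, which is what lets the joint model leverage data across domains.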
    • 2. Invention application
    • SESSION CONTEXT MODELING FOR CONVERSATIONAL UNDERSTANDING SYSTEMS
    • Publication No.: WO2015195729A1
    • Publication date: 2015-12-23
    • Application No.: PCT/US2015/036116
    • Filing date: 2015-06-17
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: AKBACAK, Murat; HAKKANI-TUR, Dilek Z.; TUR, Gokhan; HECK, Larry P.
    • IPC: G10L15/06; G06F17/30; G10L15/183
    • CPC: G06F17/2836; G06F17/2818; G06F17/30766; G10L15/06; G10L15/183; G10L2015/227
    • Systems and methods are provided for improving language models for speech recognition by adapting knowledge sources utilized by the language models to session contexts. A knowledge source, such as a knowledge graph, is used to capture and model dynamic session context based on user interaction information from usage history, such as session logs, that is mapped to the knowledge source. From sequences of user interactions, higher level intent sequences may be determined and used to form models that anticipate similar intents but with different arguments including arguments that do not necessarily appear in the usage history. In this way, the session context models may be used to determine likely next interactions or "turns" from a user, given a previous turn or turns. Language models corresponding to the likely next turns are then interpolated and provided to improve recognition accuracy of the next turn received from the user.
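The final step of the abstract — interpolating the language models that correspond to the likely next turns — can be illustrated with toy per-intent unigram models. The intents, word counts, and mixing weights here are invented for illustration; the patent builds its models from knowledge-graph-mapped session logs:

```python
from collections import Counter

# Hypothetical per-intent unigram language models mined from session logs.
intent_lms = {
    "play_music": Counter(play=5, song=3, artist=2),
    "get_weather": Counter(weather=6, tomorrow=2, rain=2),
}

def interpolate(lms, weights):
    """Mix per-intent LMs using predicted next-intent probabilities as weights."""
    mixed = {}
    for intent, lm in lms.items():
        total = sum(lm.values())              # normalise counts into probabilities
        for word, count in lm.items():
            mixed[word] = mixed.get(word, 0.0) + weights[intent] * count / total
    return mixed

# Session context suggests a music turn is more likely than a weather turn,
# so music vocabulary is boosted before recognising the next utterance.
lm = interpolate(intent_lms, {"play_music": 0.7, "get_weather": 0.3})
```

The mixed distribution still sums to one, and words from the more probable next intent ("play") end up weighted above words from the less probable one ("weather"), which is what improves recognition of the anticipated turn.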
    • 4. Invention application
    • ORPHANED UTTERANCE DETECTION SYSTEM AND METHOD
    • Publication No.: WO2016028946A1
    • Publication date: 2016-02-25
    • Application No.: PCT/US2015/045978
    • Filing date: 2015-08-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: TUR, Gokhan; DEORAS, Anoop; HAKKANI-TUR, Dilek
    • IPC: G10L15/22
    • CPC: G06F17/30864; G06F17/277; G06F17/2785; G06F17/28; G06F17/30684; G06F17/30705; G10L15/22; G10L2015/223; G10L2015/225
    • An orphan detector. The orphan detector processes out-of-domain utterances from a targeted language understanding dialog system to determine whether the utterance expresses a specific intent to have the system take a certain action, an intent that fallback processing, such as performing a generic web search, is unlikely to satisfy. Such utterances are referred to as orphans because they are not appropriately handled by any of the task domains or by fallback processing. The orphan detector distinguishes orphans from web search queries and other out-of-domain utterances by focusing primarily on the structure of the utterance rather than its content. Orphans detected by the orphan detector may be used both online and offline to improve user experiences with targeted language understanding dialog systems. The orphan detector may also be used to mine structurally similar queries or sentences from web search engine query logs.
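A toy structural scorer in the spirit of this abstract: orphans tend to read like commands aimed at the system, while web queries tend to read like short keyword strings. The word lists and weights below are invented for illustration; the patented detector is a learned model, not these hand rules:

```python
# Structural cues only -- no attempt to understand the topic of the utterance.
IMPERATIVE_STARTS = {"remind", "set", "call", "text", "send", "turn", "schedule"}
SEARCHY_STARTS = {"who", "what", "when", "where", "cheap", "best", "pictures"}

def is_orphan(utterance):
    """Classify an out-of-domain utterance by its structure, not its content."""
    toks = utterance.lower().split()
    score = 0
    if toks[0] in IMPERATIVE_STARTS:
        score += 1   # an imperative opening suggests a command to the system
    if {"me", "my", "please"} & set(toks):
        score += 1   # personal deixis suggests a request directed at the system
    if toks[0] in SEARCHY_STARTS:
        score -= 1   # wh-/keyword openings look like web search
    if len(toks) <= 3:
        score -= 1   # terse keyword strings look like web search
    return score > 0
```

Under these toy rules, "remind me to water the plants" scores positive (an orphan the system cannot serve with a web search) while "cheap flights paris" does not.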
    • 5. Invention application
    • END-TO-END MEMORY NETWORKS FOR CONTEXTUAL LANGUAGE UNDERSTANDING
    • Publication No.: WO2017223010A1
    • Publication date: 2017-12-28
    • Application No.: PCT/US2017/038210
    • Filing date: 2017-06-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: CHEN, Yun-Nung; HAKKANI-TUR, Dilek Z.; TUR, Gokhan; DENG, Li; GAO, Jianfeng
    • IPC: G10L15/18; G10L15/16; G10L15/22
    • CPC: G06N3/08; G06N3/0445; G10L15/16; G10L15/1822; G10L15/22
    • A processing unit can extract salient semantics to model knowledge carryover, from one turn to the next, in multi-turn conversations. The architecture described herein can use end-to-end memory networks to encode inputs, e.g., utterances with intents and slots, which can be stored as embeddings in memory; in decoding, the architecture can exploit latent contextual information from memory, e.g., demographic context, visual context, semantic context, etc., e.g., via an attention model, to leverage previously stored semantics for semantic parsing, e.g., for joint intent prediction and slot tagging. In examples, the architecture is configured to build an end-to-end memory network model for contextual, e.g., multi-turn, language understanding; to apply the end-to-end memory network model to multiple turns of conversational input; and to fill slots for output of contextual, e.g., multi-turn, language understanding of the conversational input. The neural network can be learned using backpropagation from output to input using gradient descent optimization.
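The core memory-network operation the abstract describes — matching the current turn against stored embeddings of earlier turns and attending over them — can be sketched in a few lines of numpy. The toy one-hot "embeddings" below are stand-ins; in the real model they would be learned encodings of utterances with intents and slots:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(memory, query):
    """Attention over embeddings of earlier turns (the knowledge carryover)."""
    p = softmax(memory @ query)   # match current turn against stored history
    return p, p @ memory          # attention weights and weighted history summary

# Toy setup: three earlier turns stored in memory as trivially distinct vectors;
# the current turn's embedding is closest to the second stored turn.
memory = np.eye(3, 8)
current = np.array([0.1, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

p, context = attend(memory, current)
combined = current + context      # would feed joint intent prediction / slot tagging
```

Because every step (embedding lookup, inner-product match, softmax, weighted sum) is differentiable, the whole pipeline can be trained end to end with backpropagation and gradient descent, as the abstract states.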