    • 1. Invention application
    • Title: MULTI-DOMAIN JOINT SEMANTIC FRAME PARSING
    • Publication number: WO2017223009A1
    • Publication date: 2017-12-28
    • Application number: PCT/US2017/038209
    • Filing date: 2017-06-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: HAKKANI-TUR, Dilek Z.; CELIKYILMAZ, Asli; CHEN, Yun-Nung; DENG, Li; GAO, Jianfeng; TUR, Gokhan; WANG, Ye-Yi
    • IPC: G10L15/22; G10L15/18; G10L15/16
    • CPC: G06N3/08; G06N3/0445; G10L15/16; G10L15/1822; G10L15/22
    • Abstract: A processing unit can train a model as a joint multi-domain recurrent neural network (JRNN), such as a bi-directional recurrent neural network (bRNN) and/or a recurrent neural network with long-short term memory (RNN-LSTM) for spoken language understanding (SLU). The processing unit can use the trained model to, e.g., jointly model slot filling, intent determination, and domain classification. The joint multi-domain model described herein can estimate a complete semantic frame per query, and the joint multi-domain model enables multi-task deep learning leveraging the data from multiple domains. The joint multi-domain recurrent neural network (JRNN) can leverage semantic intents (such as, finding or identifying, e.g., a domain specific goal) and slots (such as, dates, times, locations, subjects, etc.) across multiple domains.
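The architecture the abstract above describes reduces to a shared encoder with three output heads. Below is a minimal PyTorch sketch of that idea, not the patent's actual implementation: a bi-directional LSTM encoder whose per-token states feed a slot-filling head and whose pooled state feeds intent and domain heads, trained with a summed multi-task loss. All class names, dimensions, and label counts here are illustrative assumptions.

```python
# Sketch of a joint multi-domain RNN-LSTM for SLU: one shared bi-directional
# encoder, three heads (slots per token; intent and domain per query).
# Sizes and label sets are toy values, not taken from the patent.
import torch
import torch.nn as nn

class JointSLUModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 num_slots, num_intents, num_domains):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bi-directional RNN-LSTM encoder (the "bRNN"/"RNN-LSTM" of the abstract).
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)      # per-token slot tags
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)  # per-query intent
        self.domain_head = nn.Linear(2 * hidden_dim, num_domains)  # per-query domain

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))   # (batch, seq, 2*hidden)
        pooled = h.mean(dim=1)                       # simple query-level summary
        return self.slot_head(h), self.intent_head(pooled), self.domain_head(pooled)

# Joint multi-task loss over data pooled from multiple domains.
model = JointSLUModel(vocab_size=1000, embed_dim=64, hidden_dim=128,
                      num_slots=20, num_intents=10, num_domains=3)
tokens = torch.randint(0, 1000, (2, 12))             # two toy queries, 12 tokens each
slot_logits, intent_logits, domain_logits = model(tokens)
loss = (nn.functional.cross_entropy(slot_logits.reshape(-1, 20),
                                    torch.randint(0, 20, (24,)))
        + nn.functional.cross_entropy(intent_logits, torch.randint(0, 10, (2,)))
        + nn.functional.cross_entropy(domain_logits, torch.randint(0, 3, (2,))))
loss.backward()
```

Summing the three losses is what lets queries from different domains train the shared encoder together, which is the multi-task, multi-domain leverage the abstract refers to; estimating all three outputs from one forward pass yields a complete semantic frame per query.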
    • 2. Invention application
    • Title: END-TO-END MEMORY NETWORKS FOR CONTEXTUAL LANGUAGE UNDERSTANDING
    • Publication number: WO2017223010A1
    • Publication date: 2017-12-28
    • Application number: PCT/US2017/038210
    • Filing date: 2017-06-20
    • Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    • Inventors: CHEN, Yun-Nung; HAKKANI-TUR, Dilek Z.; TUR, Gokhan; DENG, Li; GAO, Jianfeng
    • IPC: G10L15/18; G10L15/16; G10L15/22
    • CPC: G06N3/08; G06N3/0445; G10L15/16; G10L15/1822; G10L15/22
    • Abstract: A processing unit can extract salient semantics to model knowledge carryover, from one turn to the next, in multi-turn conversations. Architecture described herein can use the end-to-end memory networks to encode inputs, e.g., utterances, with intents and slots, which can be stored as embeddings in memory, and in decoding the architecture can exploit latent contextual information from memory, e.g., demographic context, visual context, semantic context, etc., e.g., via an attention model, to leverage previously stored semantics for semantic parsing, e.g., for joint intent prediction and slot tagging. In examples, architecture is configured to build an end-to-end memory network model for contextual, e.g., multi-turn, language understanding; to apply the end-to-end memory network model to multiple turns of conversational input; and to fill slots for output of contextual, e.g., multi-turn, language understanding of the conversational input. The neural network can be learned using backpropagation from output to input using gradient descent optimization.
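As a companion to the abstract above, here is a minimal PyTorch sketch of the memory-network idea it describes: previous turns are encoded and stored as memory embeddings, the current utterance attends over that memory via a dot-product attention model, and the attended context joins the current turn's token states for joint intent prediction and slot tagging. The encoders, dimensions, single shared embedding, and label sets are assumptions for illustration; the patent's actual end-to-end memory network may differ in these choices.

```python
# Sketch of contextual SLU with attention over a memory of prior turns.
# Toy sizes throughout; a faithful end-to-end memory network would typically
# use separate embedding matrices for memory and query, which this sketch
# simplifies to one shared encoder.
import torch
import torch.nn as nn

class ContextualMemorySLU(nn.Module):
    def __init__(self, vocab_size, dim, num_slots, num_intents):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.utt_encoder = nn.GRU(dim, dim, batch_first=True)
        self.slot_head = nn.Linear(2 * dim, num_slots)
        self.intent_head = nn.Linear(2 * dim, num_intents)

    def encode(self, ids):
        # Utterance embedding: final GRU state (one possible encoder choice).
        _, h = self.utt_encoder(self.embed(ids))
        return h.squeeze(0)                                   # (batch, dim)

    def forward(self, current_ids, history_ids):
        # history_ids: (num_turns, seq_len) -- prior turns kept in memory.
        memory = self.encode(history_ids)                     # (num_turns, dim)
        query = self.encode(current_ids)                      # (1, dim)
        # Attention over memory: softmax of dot products with the current turn.
        scores = torch.softmax(memory @ query.t(), dim=0)     # (num_turns, 1)
        context = (scores * memory).sum(dim=0, keepdim=True)  # (1, dim)
        # Current-turn token states, concatenated with carried-over context.
        token_states, _ = self.utt_encoder(self.embed(current_ids))
        ctx = context.unsqueeze(1).expand_as(token_states)
        joint = torch.cat([token_states, ctx], dim=-1)        # (1, seq, 2*dim)
        slot_logits = self.slot_head(joint)                   # per-token slot tags
        intent_logits = self.intent_head(torch.cat([query, context], dim=-1))
        return slot_logits, intent_logits

model = ContextualMemorySLU(vocab_size=1000, dim=64, num_slots=15, num_intents=8)
history = torch.randint(0, 1000, (3, 10))   # three prior turns in memory
current = torch.randint(0, 1000, (1, 10))   # current utterance
slot_logits, intent_logits = model(current, history)
# Every step is differentiable, so the whole network trains end to end with
# backpropagation and gradient descent, as the abstract notes.
slot_logits.sum().backward()
```

The attention weights play the role of knowledge carryover: turns whose stored embeddings resemble the current utterance contribute most of the context vector, so previously stored semantics can inform both the intent prediction and the per-token slot tags.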