    • 2. Invention Application
    • Title: Session Context Modeling For Conversational Understanding Systems
    • Publication No.: US20150370787A1 (published 2015-12-24)
    • Application No.: US14308174 (filed 2014-06-18)
    • Assignee: Microsoft Corporation
    • Inventors: Murat Akbacak; Dilek Z. Hakkani-Tur; Gokhan Tur; Larry P. Heck
    • IPC: G06F17/28
    • CPC: G06F17/2836; G06F16/637; G06F17/2818; G10L15/06; G10L15/183; G10L2015/227
    • Abstract: Systems and methods are provided for improving language models for speech recognition by adapting knowledge sources utilized by the language models to session contexts. A knowledge source, such as a knowledge graph, is used to capture and model dynamic session context based on user interaction information from usage history, such as session logs, that is mapped to the knowledge source. From sequences of user interactions, higher-level intent sequences may be determined and used to form models that anticipate similar intents but with different arguments, including arguments that do not necessarily appear in the usage history. In this way, the session context models may be used to determine likely next interactions or “turns” from a user, given a previous turn or turns. Language models corresponding to the likely next turns are then interpolated and provided to improve recognition accuracy of the next turn received from the user.
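The abstract above describes predicting a user's likely next turn from session history and then interpolating the language models associated with those predicted turns. Below is a minimal, hypothetical sketch of that general idea, not the patented implementation: the intent names, session logs, and per-intent unigram models are invented for illustration, and real systems would use far richer context and n-gram or neural language models.

```python
# Hypothetical sketch: predict likely next-turn intents from logged sessions,
# then mix per-intent language models weighted by those intent probabilities.
from collections import Counter, defaultdict

def train_intent_bigrams(session_logs):
    """Count intent-to-intent transitions across logged sessions."""
    transitions = defaultdict(Counter)
    for session in session_logs:                      # each session: list of intents
        for prev, nxt in zip(session, session[1:]):
            transitions[prev][nxt] += 1
    return transitions

def next_intent_distribution(transitions, prev_intent):
    """Estimate P(next intent | previous intent) from the session logs."""
    counts = transitions.get(prev_intent, Counter())
    total = sum(counts.values()) or 1
    return {intent: c / total for intent, c in counts.items()}

def interpolate_language_models(intent_lms, intent_probs):
    """Mix per-intent unigram LMs, weighted by predicted intent probability."""
    mixed = defaultdict(float)
    for intent, weight in intent_probs.items():
        for word, p in intent_lms.get(intent, {}).items():
            mixed[word] += weight * p
    return dict(mixed)

# Invented usage: after a "find_movie" turn, bias recognition toward words
# that typically follow that intent in the (made-up) usage history.
logs = [["find_movie", "play_trailer"], ["find_movie", "buy_ticket"]]
lms = {
    "play_trailer": {"play": 0.5, "trailer": 0.5},
    "buy_ticket":   {"buy": 0.5, "ticket": 0.5},
}
trans = train_intent_bigrams(logs)
probs = next_intent_distribution(trans, "find_movie")
print(interpolate_language_models(lms, probs))
```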
    • 6. Invention Application
    • Title: Language Modeling For Conversational Understanding Domains Using Semantic Web Resources
    • Publication No.: US20150332670A1 (published 2015-11-19)
    • Application No.: US14278659 (filed 2014-05-15)
    • Assignee: Microsoft Corporation
    • Inventors: Murat Akbacak; Dilek Z. Hakkani-Tur; Gokhan Tur; Larry P. Heck; Benoit Dumoulin
    • IPC: G10L15/06; G10L15/18; G06F17/28
    • CPC: G10L15/063; G06F17/28; G06F17/30654; G06F17/30766; G10L15/18; G10L15/183
    • Abstract: Systems and methods are provided for training language models using in-domain-like data collected automatically from one or more data sources. The data sources (such as text data or user-interactional data) are mined for specific types of data, including data related to style, content, and probability of relevance, which are then used for language model training. In one embodiment, a language model is trained from features extracted from a knowledge graph modified into a probabilistic graph, where entity popularities are represented and the popularity information is obtained from data sources related to the knowledge. Embodiments of language models trained from this data are particularly suitable for domain-specific conversational understanding tasks where natural language is used, such as user interaction with a game console or a personal assistant application on a personal device.
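The abstract above centers on turning a knowledge graph into a "probabilistic graph" by attaching entity popularity information and using it to produce in-domain-like language model training data. The sketch below is an illustrative assumption of how that could look in miniature: the movie-domain entities, popularity counts, and carrier-phrase templates are made up, and the resulting model is only a unigram count model rather than anything described in the patent.

```python
# Hypothetical sketch: weight knowledge-graph entity names by popularity,
# expand them through in-domain carrier phrases, and normalize the counts
# into a simple unigram language model for the domain.
from collections import Counter

# Invented movie-domain entities with popularity counts (e.g. as if mapped
# from query logs or page views onto the knowledge graph).
entity_popularity = {
    "Inception": 900,
    "Interstellar": 600,
    "Memento": 150,
}

# Invented carrier phrases in the style of in-domain user utterances.
templates = ["play {title}", "show me the trailer for {title}"]

def build_weighted_counts(entities, phrases):
    """Accumulate word counts, weighting each entity by its popularity."""
    counts = Counter()
    for title, weight in entities.items():
        for phrase in phrases:
            for word in phrase.format(title=title).lower().split():
                counts[word] += weight
    return counts

def unigram_lm(counts):
    """Normalize counts into a unigram probability distribution."""
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

lm = unigram_lm(build_weighted_counts(entity_popularity, templates))
print(sorted(lm.items(), key=lambda kv: -kv[1])[:5])
```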
    • 9. Invention Application
    • Title: Unsupervised Relation Detection Model Training
    • Publication No.: US20150178273A1 (published 2015-06-25)
    • Application No.: US14136919 (filed 2013-12-20)
    • Assignee: MICROSOFT CORPORATION
    • Inventors: Dilek Z. Hakkani-Tur; Gokhan Tur; Larry Paul Heck
    • IPC: G06F17/28
    • CPC: G06F17/28
    • Abstract: A relation detection model training solution. The relation detection model training solution mines freely available resources from the World Wide Web to train a relation detection model for use during linguistic processing. The relation detection model training system searches the web for pairs of entities extracted from a knowledge graph that are connected by a specific relation. Performance is enhanced by clipping search snippets to extract patterns that connect the two entities in a dependency tree and by refining the annotations of the relations according to other related entities in the knowledge graph. The relation detection model training solution scales to other domains and languages, pushing the burden from natural language semantic parsing to knowledge base population. The relation detection model training solution exhibits performance comparable to supervised solutions, which require design, collection, and manual labeling of natural language data.
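The abstract above describes weak supervision: entity pairs that the knowledge graph already connects by a known relation are searched for on the web, and the snippets that mention both entities become labeled training patterns. The sketch below illustrates that general recipe under stated assumptions: the triples and snippets are invented, and it clips the surface text between the two entities instead of the dependency-tree pattern extraction described in the abstract.

```python
# Hypothetical sketch: mine (pattern, relation) training pairs from snippets
# that mention an entity pair already linked by a known knowledge-graph relation.
import re

# Invented knowledge-graph triples: (subject, relation, object).
kg_triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Avatar", "directed_by", "James Cameron"),
]

# Invented search snippets, as if retrieved for each entity pair.
snippets = [
    "Inception is a 2010 film directed by Christopher Nolan.",
    "Avatar, the epic written and directed by James Cameron, broke records.",
]

def mine_patterns(triples, texts):
    """Collect (pattern, relation) pairs from snippets containing both entities."""
    examples = []
    for subj, relation, obj in triples:
        for text in texts:
            if subj in text and obj in text:
                # Clip the span between the two entities as the training pattern
                # (a crude stand-in for a dependency-tree clipping step).
                match = re.search(re.escape(subj) + r"(.*?)" + re.escape(obj), text)
                if match:
                    examples.append((match.group(1).strip(), relation))
    return examples

# The mined pairs could seed a relation classifier without manual labeling.
for pattern, relation in mine_patterns(kg_triples, snippets):
    print(f"{relation!r:15} <- {pattern!r}")
```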