    • 31. Granted Invention Patent
    • Title: Method for building a natural language understanding model for a spoken dialog system
    • Publication No.: US07933766B2
    • Publication Date: 2011-04-26
    • Application No.: US12582062
    • Filing Date: 2009-10-20
    • Inventors: Narendra K. Gupta; Mazin G. Rahim; Gokhan Tur; Antony Van der Mude
    • Applicant: Narendra K. Gupta; Mazin G. Rahim; Gokhan Tur; Antony Van der Mude
    • IPC: G06F17/27
    • CPC: G10L15/193; G10L15/063; G10L15/183
    • Abstract: A method of generating a natural language model for use in a spoken dialog system is disclosed. The method comprises using sample utterances and creating a number of hand crafted rules for each call-type defined in a labeling guide. A first NLU model is generated and tested using the hand crafted rules and sample utterances. A second NLU model is built using the sample utterances as new training data and using the hand crafted rules. The second NLU model is tested for performance using a first batch of labeled data. A series of NLU models are built by adding a previous batch of labeled data to training data and using a new batch of labeling data as test data to generate the series of NLU models with training data that increases constantly. If not all the labeling data is received, the method comprises repeating the step of building a series of NLU models until all labeling data is received. After all the training data is received, at least once, the method comprises building a third NLU model using all the labeling data, wherein the third NLU model is used in generating the spoken dialog service. (An illustrative sketch of this incremental procedure follows this record.)
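The abstract above can be read as a simple bootstrapping loop: start from hand-crafted rules and sample utterances, fold each newly labeled batch into the training data while holding out the next batch as a test set, and finally retrain on all labeled data. The sketch below only illustrates that loop under assumed data structures; build_model, its naive keyword classifier, and the (utterance, call_type) pair format are stand-ins invented for the example, not taken from the patent.

    from collections import Counter

    def build_model(rules, training_data):
        # Stand-in "NLU model": hand-crafted (pattern, call_type) rules first,
        # then keyword counts gathered from labeled (utterance, call_type) pairs.
        keyword_counts = {}
        for utterance, call_type in training_data:
            for word in utterance.lower().split():
                keyword_counts.setdefault(call_type, Counter())[word] += 1

        def classify(utterance):
            for pattern, call_type in rules:          # rules take precedence
                if pattern in utterance.lower():
                    return call_type
            words = utterance.lower().split()
            scores = {ct: sum(c[w] for w in words) for ct, c in keyword_counts.items()}
            return max(scores, key=scores.get) if scores else None

        return classify

    def accuracy(model, test_data):
        return sum(model(u) == ct for u, ct in test_data) / len(test_data)

    def bootstrap_nlu(rules, sample_utterances, labeled_batches):
        # First/second models: rules plus the sample utterances as training data,
        # tested against the first batch of labeled data.
        training_data = list(sample_utterances)
        model = build_model(rules, training_data)
        print("first batch:", accuracy(model, labeled_batches[0]))

        # Series of models: each previous batch joins the training data and the
        # newest batch is held out as the test set, so training data grows steadily.
        for prev_batch, new_batch in zip(labeled_batches, labeled_batches[1:]):
            training_data += prev_batch
            model = build_model(rules, training_data)
            print("next batch:", accuracy(model, new_batch))

        # Final model: trained on all labeled data, for use in the dialog service.
        training_data += labeled_batches[-1]
        return build_model(rules, training_data)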
    • 32. Granted Invention Patent
    • Title: Method for building a natural language understanding model for a spoken dialog system
    • Publication No.: US07295981B1
    • Publication Date: 2007-11-13
    • Application No.: US10755014
    • Filing Date: 2004-01-09
    • Inventors: Narendra K. Gupta; Mazin G. Rahim; Gokhan Tur; Antony Van der Mude
    • Applicant: Narendra K. Gupta; Mazin G. Rahim; Gokhan Tur; Antony Van der Mude
    • IPC: G10L15/18
    • CPC: G10L15/193; G10L15/063; G10L15/183
    • Abstract: A method of generating a natural language model for use in a spoken dialog system is disclosed. The method comprises using sample utterances and creating a number of hand crafted rules for each call-type defined in a labeling guide. A first NLU model is generated and tested using the hand crafted rules and sample utterances. A second NLU model is built using the sample utterances as new training data and using the hand crafted rules. The second NLU model is tested for performance using a first batch of labeled data. A series of NLU models are built by adding a previous batch of labeled data to training data and using a new batch of labeling data as test data to generate the series of NLU models with training data that increases constantly. If not all the labeling data is received, the method comprises repeating the step of building a series of NLU models until all labeling data is received. After all the training data is received, at least once, the method comprises building a third NLU model using all the labeling data, wherein the third NLU model is used in generating the spoken dialog service.
    • 36. Granted Invention Patent
    • Title: Multitask learning for spoken language understanding
    • Publication No.: US07664644B1
    • Publication Date: 2010-02-16
    • Application No.: US11423212
    • Filing Date: 2006-06-09
    • Inventors: Gokhan Tur
    • Applicant: Gokhan Tur
    • IPC: G10L15/18
    • CPC: G10L15/063; G10L13/08; G10L15/1822; G10L15/183; G10L15/26; G10L2015/0635; H04M3/4936
    • Abstract: A system, method and computer-readable medium provide a multitask learning method for intent or call-type classification in a spoken language understanding system. Multitask learning aims at training tasks in parallel while using a shared representation. A computing device automatically re-uses the existing labeled data from various applications, which are similar but may have different call-types, intents or intent distributions to improve the performance. An automated intent mapping algorithm operates across applications. In one aspect, active learning is employed to selectively sample the data to be re-used. (An illustrative sketch of the data-reuse idea follows this record.)
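The abstract above combines three ideas: a shared representation trained across applications, an intent-mapping step that relabels another application's data in terms of the target application's call-types, and active learning that chooses which re-used examples are worth adding. The fragment below is a loose, self-contained illustration of the last two steps only; the keyword classifier, the margin-based uncertainty score, and the intent_map table are assumptions made for the example, not the patented algorithm.

    from collections import Counter

    def train_intent_model(examples):
        # Stand-in classifier: per-intent keyword counts from (utterance, intent) pairs.
        counts = {}
        for utterance, intent in examples:
            for word in utterance.lower().split():
                counts.setdefault(intent, Counter())[word] += 1
        return counts

    def scores(model, utterance):
        words = utterance.lower().split()
        return {intent: sum(c[w] for w in words) for intent, c in model.items()}

    def uncertainty(model, utterance):
        # Margin between the two highest-scoring intents; a small margin means
        # the current model is unsure about this utterance.
        ranked = sorted(scores(model, utterance).values(), reverse=True)
        return ranked[0] - ranked[1] if len(ranked) > 1 else float("inf")

    def reuse_labeled_data(target_data, source_data, intent_map, budget):
        # Map the source application's intents onto the target application's
        # call-types, then actively select the `budget` most uncertain mapped
        # examples to add to the target training set.
        base_model = train_intent_model(target_data)
        mapped = [(utt, intent_map[intent]) for utt, intent in source_data
                  if intent in intent_map]
        mapped.sort(key=lambda ex: uncertainty(base_model, ex[0]))  # most uncertain first
        return target_data + mapped[:budget]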
    • 37. Invention Patent Application
    • Title: Answer determination for natural language questioning
    • Publication No.: US20070136246A1
    • Publication Date: 2007-06-14
    • Application No.: US11319188
    • Filing Date: 2005-12-28
    • Inventors: Svetlana Stenchikova; Gokhan Tur; Dilek Tur
    • Applicant: Svetlana Stenchikova; Gokhan Tur; Dilek Tur
    • IPC: G06F17/30
    • CPC: G06F17/30401; G06F17/279; G06F17/30657; G06F17/30864
    • Abstract: Open-domain question answering is the task of finding a concise answer to a natural language question using a large domain, such as the Internet. The use of a semantic role labeling approach to the extraction of the answers to an open domain factoid (Who/When/What/Where) natural language question that contains a predicate is described. Semantic role labeling identifies predicates and semantic argument phrases in the natural language question and the candidate sentences. When searching for an answer to a natural language question, the missing argument in the question is matched using semantic parses of the candidate answers. Such a technique may improve the accuracy of a question answering system and may decrease the length of answers for enabling voice interface to a question answering system. (An illustrative sketch of the argument-matching idea follows this record.)
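The matching step in the abstract above can be illustrated with a toy semantic-role representation: the wh-word indicates which role is missing from the question's parse, and a candidate sentence whose predicate and remaining arguments line up supplies the phrase that fills that role. The Parse structure, the wh-word-to-role table, and the example sentence below are invented for illustration; a real system would obtain these parses from a semantic role labeler.

    from dataclasses import dataclass, field

    @dataclass
    class Parse:
        predicate: str                                 # e.g. "invent"
        arguments: dict = field(default_factory=dict)  # role -> phrase

    def missing_role(wh_word):
        # Simplified mapping from the wh-word to the semantic role being asked about.
        return {"who": "ARG0", "what": "ARG1", "when": "TMP", "where": "LOC"}.get(wh_word.lower())

    def find_answer(question, wh_word, candidates):
        # The answer is the phrase filling the question's missing role in a candidate
        # whose predicate and already-known arguments match the question's parse.
        target_role = missing_role(wh_word)
        for cand in candidates:
            if cand.predicate != question.predicate:
                continue
            known_args_match = all(cand.arguments.get(role, "").lower() == phrase.lower()
                                   for role, phrase in question.arguments.items())
            if known_args_match and target_role in cand.arguments:
                return cand.arguments[target_role]
        return None

    # "Who invented the telephone?" matched against one candidate sentence parse.
    question = Parse("invent", {"ARG1": "the telephone"})
    candidates = [Parse("invent", {"ARG0": "Alexander Graham Bell",
                                   "ARG1": "the telephone",
                                   "TMP": "in 1876"})]
    print(find_answer(question, "who", candidates))    # -> Alexander Graham Bell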