    • 1. Granted invention patent
    • Dialog repair based on discrepancies between user model predictions and speech recognition results
    • Publication No.: US08244545B2
    • Publication date: 2012-08-14
    • Application No.: US11393321
    • Filing date: 2006-03-30
    • Inventors: Timothy S. Paek, David M. Chickering
    • IPC: G10L21/00
    • CPC: G10L15/22, G10L2015/228
    • Abstract: An architecture is presented that leverages discrepancies between user model predictions and speech recognition results: discrepancies between the predictive data and the speech recognition data are identified, and the data are repaired based in part on the discrepancy. User model predictions estimate which goal or action a speech application user is likely to pursue, based in part on past user behavior. Speech recognition results indicate which goal the user is likely to have spoken, based in part on the words spoken under specific constraints. When a discrepancy between the predictive data and the speech recognition data is identified, a dialog repair is engaged to resolve it. By engaging in repairs when the predictive results and the speech recognition results disagree, and by utilizing feedback obtained via interaction with the user, the architecture can learn about the reliability of both user model predictions and speech recognition results for future processing.
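The abstract describes comparing a user model's prediction against the recognizer's output and triggering a repair dialog when they disagree. Below is a minimal sketch of that idea, assuming hypothetical inputs (a user-model distribution over goals and an ASR n-best list with scores) and a simple confirm-on-disagreement repair policy; the patent's actual architecture is not specified at this level of detail.

```python
# A minimal sketch of discrepancy-driven dialog repair. The repair policy
# (confirm when the top hypotheses disagree or confidence is low) is an
# illustrative stand-in, not the patented architecture.

def normalize(scores):
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def choose_action(user_model_probs, asr_scores, agree_threshold=0.5):
    """Return ('execute', goal) or ('confirm', goal) based on discrepancy."""
    asr_probs = normalize(asr_scores)
    # Combine the two evidence sources (here: a simple product of probabilities).
    combined = normalize({g: user_model_probs.get(g, 1e-6) * p
                          for g, p in asr_probs.items()})
    best_um = max(user_model_probs, key=user_model_probs.get)
    best_asr = max(asr_probs, key=asr_probs.get)
    best, conf = max(combined.items(), key=lambda kv: kv[1])
    if best_um != best_asr or conf < agree_threshold:
        return ("confirm", best)   # discrepancy: engage a repair sub-dialog
    return ("execute", best)

# Example: the user model expects "call_home", but the recognizer favors "call_hume".
user_model = {"call_home": 0.7, "call_office": 0.2, "call_hume": 0.1}
asr_nbest = {"call_hume": 0.55, "call_home": 0.40, "call_office": 0.05}
print(choose_action(user_model, asr_nbest))  # ('confirm', 'call_home')
```

In a fuller system, the outcome of each confirmation would feed back into estimates of how reliable each evidence source is, which is the learning step the abstract mentions.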
    • 5. Granted invention patent
    • Using generic predictive models for slot values in language modeling
    • Publication No.: US08032375B2
    • Publication date: 2011-10-04
    • Application No.: US11378202
    • Filing date: 2006-03-17
    • Inventors: David M. Chickering, Timothy S. Paek
    • IPC: G10L11/00, G10L15/00, G06N3/08
    • CPC: G06Q10/10
    • Abstract: A generic predictive argument model that can be applied to a set of slot values to predict a target slot value is provided. The generic predictive argument model can predict whether or not a particular value or item is the intended target of the user command, given various features. A prediction for each of the slot values can then be normalized to infer a distribution over all values or items. For any set of slot values (e.g., contacts), a number of binary variables are created that indicate whether or not each specific slot value was the intended target. For each slot value, a set of input features can be employed to predict the corresponding binary variable. These input features are generic properties of the contact that are "instantiated" based on properties of the contact (e.g., contact-specific features). These contact-specific features can be stored in a user data store.
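The abstract outlines a per-slot binary predictor whose outputs are normalized into a distribution over candidate values. The sketch below illustrates that pipeline with a hypothetical logistic scorer and made-up generic features (recently_called, in_favorites, name_acoustic_match); the actual features, model form, and training procedure are assumptions, not taken from the patent.

```python
# A minimal sketch of the generic argument model: per-contact binary
# predictions P(contact is the target | features) are normalized into a
# distribution over all contacts. Weights and features are hypothetical.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical generic features, "instantiated" per contact.
WEIGHTS = {"recently_called": 2.0, "in_favorites": 1.5, "name_acoustic_match": 3.0}
BIAS = -2.0

def target_probability(features):
    score = BIAS + sum(WEIGHTS[f] * v for f, v in features.items())
    return sigmoid(score)

def slot_distribution(contacts):
    """contacts: {name: feature dict}. Returns normalized P(target = name)."""
    raw = {name: target_probability(f) for name, f in contacts.items()}
    total = sum(raw.values())
    return {name: p / total for name, p in raw.items()}

contacts = {
    "Alice": {"recently_called": 1, "in_favorites": 1, "name_acoustic_match": 0.9},
    "Bob":   {"recently_called": 0, "in_favorites": 0, "name_acoustic_match": 0.4},
}
print(slot_distribution(contacts))  # Alice dominates after normalization
```

Because the scorer is defined over generic features rather than specific contacts, the same model applies to any user's contact list, which is the point of the "generic" design.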
    • 9. Granted invention patent
    • Easy generation and automatic training of spoken dialog systems using text-to-speech
    • Publication No.: US07885817B2
    • Publication date: 2011-02-08
    • Application No.: US11170584
    • Filing date: 2005-06-29
    • Inventors: Timothy S. Paek, David M. Chickering
    • IPC: G10L21/00
    • CPC: G10L15/22, G10L13/00, G10L15/063
    • Abstract: A dialog system training environment and method using text-to-speech (TTS) are provided. The only knowledge a designer requires is a simple specification of when the dialog system has failed or succeeded and, for any state of the dialog, a list of the possible actions the system can take. The training environment simulates a user using TTS varied at adjustable levels; a dialog action model of the dialog system responds to the produced utterance by trying out all possible actions until it has failed or succeeded. From the data accumulated in the training environment, the dialog action model can learn which states to go to when it observes the appropriate speech and dialog features, so as to increase the likelihood of success. The data can also be used to improve the speech model.
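The abstract's training loop (simulate an utterance with TTS at varying noise levels, try every available action, record success or failure) can be sketched as follows. The simulator, the success test, and the state encoding here are hypothetical stand-ins for the designer-supplied components the abstract describes.

```python
# A minimal sketch of the TTS-driven training loop. The loop logs which
# action succeeded in each observed state so a dialog action model could
# later be trained from the accumulated (state, action, success) data.
import random

ACTIONS = ["ask_repeat", "confirm", "execute"]      # per-state action list

def simulate_user_utterance(goal, noise):           # stand-in for TTS + ASR
    return goal if random.random() > noise else "<misrecognized>"

def succeeded(action, utterance, goal):             # designer's success spec
    return action == "execute" and utterance == goal

def collect_training_data(goals, noise_levels, episodes=1000):
    data = []                                        # (state, action, success)
    for _ in range(episodes):
        goal = random.choice(goals)
        noise = random.choice(noise_levels)          # vary TTS at adjustable levels
        utterance = simulate_user_utterance(goal, noise)
        for action in ACTIONS:                       # try out all possible actions
            data.append(((utterance, noise), action,
                         succeeded(action, utterance, goal)))
    return data

data = collect_training_data(["check_email", "play_music"], [0.1, 0.3, 0.5])
wins = sum(1 for _, a, ok in data if ok and a == "execute")
print(f"{wins} successful 'execute' outcomes out of {len(data)} samples")
```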
    • 10. Granted invention patent
    • Thompson strategy based online reinforcement learning system for action selection
    • Publication No.: US07707131B2
    • Publication date: 2010-04-27
    • Application No.: US11169503
    • Filing date: 2005-06-29
    • Inventors: David M. Chickering, Timothy S. Paek, Eric J. Horvitz
    • IPC: G06N5/04, G06N7/00, G06N7/02
    • CPC: G06N99/005
    • Abstract: A system and method for online reinforcement learning is provided. In particular, a method for performing the explore-vs.-exploit tradeoff is provided. Although the method is heuristic, it can be applied in a principled manner while simultaneously learning the parameters and/or structure of the model (e.g., a Bayesian network model). The system includes a model which receives an input (e.g., from a user) and provides a probability distribution, associated with uncertainty regarding the parameters of the model, to a decision engine. The decision engine can determine whether to exploit the information known to it or to explore to obtain additional information, based at least in part upon the explore-vs.-exploit tradeoff (e.g., the Thompson strategy). A reinforcement learning component can obtain additional information (e.g., feedback from a user) and update the parameter(s) and/or the structure of the model. The system can be employed in scenarios in which an influence diagram is used to make repeated decisions and maximization of long-term expected utility is desired.
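Thompson-strategy action selection samples model parameters from their posterior and acts greedily with respect to the sample, so uncertain actions get explored in proportion to their chance of being best. The sketch below shows this for a Beta-Bernoulli bandit, a deliberately simplified stand-in for the influence-diagram models the abstract describes.

```python
# A minimal sketch of Thompson-strategy action selection. Parameters of each
# action's model are sampled from the posterior; the action whose sample looks
# best is taken, and observed feedback updates the posterior (the
# reinforcement learning step).
import random

class ThompsonBandit:
    def __init__(self, n_actions):
        # Beta(1, 1) prior over each action's success probability.
        self.alpha = [1.0] * n_actions
        self.beta = [1.0] * n_actions

    def select_action(self):
        # Explore vs. exploit: sample parameters from the posterior and
        # act greedily with respect to the sample.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, action, reward):
        # Incorporate user feedback into the posterior.
        if reward:
            self.alpha[action] += 1
        else:
            self.beta[action] += 1

# Example run against hypothetical true success rates.
true_rates = [0.2, 0.5, 0.8]
bandit = ThompsonBandit(len(true_rates))
for _ in range(2000):
    a = bandit.select_action()
    bandit.update(a, random.random() < true_rates[a])
print("posterior means:",
      [round(a / (a + b), 2) for a, b in zip(bandit.alpha, bandit.beta)])
```

As the posteriors sharpen, sampling concentrates on the best action, so exploration tapers off automatically; this is the heuristic-but-principled behavior the abstract refers to.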