    • 2. Invention application
    • Title: Voice tagging, voice annotation, and speech recognition for portable devices with optional post processing
    • Publication number: US20050075881A1
    • Publication date: 2005-04-07
    • Application number: US10677174
    • Filing date: 2003-10-02
    • Inventors: Luca Rigazio, Robert Boman, Patrick Nguyen, Jean-Claude Junqua
    • IPC: G10L15/26, G10L21/00
    • CPC: G06F17/30796, G10L15/26
    • Abstract: A media capture device has an audio input receptive of user speech relating to a media capture activity in close temporal relation to the media capture activity. A plurality of focused speech recognition lexica respectively relating to media capture activities are stored on the device, and a speech recognizer recognizes the user speech based on a selected one of the focused speech recognition lexica. A media tagger tags captured media with generated speech recognition text, and a media annotator annotates the captured media with a sample of the user speech that is suitable for input to a speech recognizer. Tagging and annotating are based on close temporal relation between receipt of the user speech and capture of the captured media. Annotations may be converted to tags during post processing, employed to edit a lexicon using letter-to-sound rules and spelled word input, or matched directly to speech to retrieve captured media.
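The abstract above describes a processing flow rather than an API, so the following is only a minimal Python sketch of that flow under stated assumptions: an activity-specific lexicon is selected, the user's speech is recognized against it to produce tags, and the raw speech sample is kept as an annotation for optional post-processing. The names (CapturedMedia, tag_and_annotate, the recognize callables) are hypothetical, not taken from the patent.

    # Minimal sketch of the tagging/annotation flow; not the patented implementation.
    from dataclasses import dataclass, field
    from typing import Callable

    # One focused lexicon per capture activity (photo, video, voice memo, ...).
    FOCUSED_LEXICA = {
        "photo": ["birthday", "beach", "family", "sunset"],
        "video": ["concert", "interview", "lecture"],
    }

    @dataclass
    class CapturedMedia:
        payload: bytes
        tags: list[str] = field(default_factory=list)
        annotations: list[bytes] = field(default_factory=list)  # raw speech samples

    def tag_and_annotate(media: CapturedMedia,
                         speech_sample: bytes,
                         activity: str,
                         recognize: Callable[[bytes, list[str]], str]) -> None:
        """Tag with recognized text and keep the raw sample as an annotation."""
        lexicon = FOCUSED_LEXICA.get(activity, [])
        text = recognize(speech_sample, lexicon)   # constrained, on-device recognition
        if text:
            media.tags.extend(text.split())
        media.annotations.append(speech_sample)    # kept for optional post-processing

    def post_process(media: CapturedMedia,
                     better_recognize: Callable[[bytes], str]) -> None:
        # Post-processing step from the abstract: convert stored annotations to tags
        # later, e.g. with a larger recognizer or an edited lexicon.
        for sample in media.annotations:
            media.tags.extend(better_recognize(sample).split())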
    • 3. Granted patent
    • Title: Voice tagging, voice annotation, and speech recognition for portable devices with optional post processing
    • Publication number: US07324943B2
    • Publication date: 2008-01-29
    • Application number: US10677174
    • Filing date: 2003-10-02
    • Inventors: Luca Rigazio, Robert Boman, Patrick Nguyen, Jean-Claude Junqua
    • IPC: G10L21/00, H04N5/76
    • CPC: G06F17/30796, G10L15/26
    • Abstract: Same as the published application US20050075881A1 above.
    • 4. Invention application
    • Title: Speech data mining for call center management
    • Publication number: US20050010411A1
    • Publication date: 2005-01-13
    • Application number: US10616006
    • Filing date: 2003-07-09
    • Inventors: Luca Rigazio, Patrick Nguyen, Jean-Claude Junqua, Robert Boman
    • IPC: G10L15/26, G10L17/00, G10L15/00
    • CPC: G10L15/26, G10L17/00
    • Abstract: A speech data mining system for use in generating a rich transcription having utility in call center management includes a speech differentiation module differentiating between speech of interacting speakers, and a speech recognition module improving automatic recognition of speech of one speaker based on interaction with another speaker employed as a reference speaker. A transcript generation module generates a rich transcript based on recognized speech of the speakers. Focused, interactive language models improve recognition of a customer on a low quality channel using context extracted from speech of a call center operator on a high quality channel with a speech model adapted to the operator. Mined speech data includes number of interaction turns, customer frustration phrases, operator polity, interruptions, and/or contexts extracted from speech recognition results, such as topics, complaints, solutions, and resolutions. Mined speech data is useful in call center and/or product or service quality management.
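As a rough illustration of the scheme in this abstract, the sketch below recognizes the operator on the high-quality channel first, collects context terms from that result, uses them to bias recognition of the customer on the low-quality channel, and then mines a few of the statistics the abstract lists (interaction turns, frustration phrases). The recognizer callables and phrase list are hypothetical stand-ins, not the patented modules.

    # Minimal sketch under assumptions; recognize_* are caller-supplied stubs.
    from collections import Counter

    FRUSTRATION_PHRASES = {"this is ridiculous", "i already told you", "cancel my account"}

    def mine_call(operator_audio_turns, customer_audio_turns,
                  recognize_operator, recognize_customer_biased):
        transcript = []
        context_words = Counter()

        for op_audio, cust_audio in zip(operator_audio_turns, customer_audio_turns):
            # High-quality channel, model adapted to the known operator.
            op_text = recognize_operator(op_audio)
            context_words.update(op_text.lower().split())

            # Low-quality customer channel: bias recognition with operator context.
            cust_text = recognize_customer_biased(cust_audio, context=context_words)

            transcript.append(("operator", op_text))
            transcript.append(("customer", cust_text))

        # Simple mined statistics along the lines of the abstract.
        stats = {
            "interaction_turns": len(transcript) // 2,
            "frustration_hits": sum(
                phrase in text.lower()
                for speaker, text in transcript if speaker == "customer"
                for phrase in FRUSTRATION_PHRASES
            ),
            "top_context_terms": context_words.most_common(5),
        }
        return transcript, stats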
    • 10. Granted patent
    • Title: Pattern matching for large vocabulary speech recognition systems
    • Publication number: US06879954B2
    • Publication date: 2005-04-12
    • Application number: US10127184
    • Filing date: 2002-04-22
    • Inventors: Patrick Nguyen, Luca Rigazio
    • IPC: G10L15/08, G10L15/10, G10L15/28, G10L15/00, G06F15/76
    • CPC: G10L15/08, G10L15/10, G10L15/285, G10L15/30, G10L15/34
    • Abstract: A method is provided for improving pattern matching in a speech recognition system having a plurality of acoustic models. The improved method includes: receiving continuous speech input; generating a sequence of acoustic feature vectors that represent temporal and spectral behavior of the speech input; loading a first group of acoustic feature vectors from the sequence of acoustic feature vectors into a memory workspace accessible to a processor; loading an acoustic model from the plurality of acoustic models into the memory workspace; and determining a similarity measure for each acoustic feature vector of the first group of acoustic feature vectors in relation to the acoustic model. Prior to retrieving another group of acoustic feature vectors, similarity measures are computed for the first group of acoustic feature vectors in relation to each of the acoustic models employed by the speech recognition system. In this way, the improved method reduces the number of I/O operations associated with loading and unloading each acoustic model into memory.
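The I/O-saving loop order described in this abstract can be illustrated with a short numpy sketch: one block of feature vectors is held in memory and scored against every acoustic model before the next block is fetched, so each model is loaded once per block rather than once per frame. The single-Gaussian scoring and load_model interface are simplifying assumptions, not the patent's acoustic models.

    # Illustrative sketch of the block-then-all-models loop order.
    import numpy as np

    def score_block_against_models(feature_block, model_paths, load_model):
        """feature_block: (T, D) array of acoustic feature vectors for one block.
        Returns an (M, T) array of per-model, per-frame similarity (log-likelihood)."""
        scores = []
        for path in model_paths:                 # each model loaded once per block
            mean, var = load_model(path)         # e.g. one diagonal Gaussian per model
            diff = feature_block - mean
            loglik = -0.5 * np.sum(diff * diff / var + np.log(2 * np.pi * var), axis=1)
            scores.append(loglik)
        return np.stack(scores)

    def score_stream(blocks, model_paths, load_model):
        # All models are scored against the current block before the next block is
        # fetched, which is the I/O saving claimed in the abstract.
        return [score_block_against_models(b, model_paths, load_model) for b in blocks]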