    • 7. Invention application
    • Title: SPEAKER RECOGNITION FROM TELEPHONE CALLS
    • Publication number: WO2011057650A1
    • Publication date: 2011-05-19
    • Application number: PCT/EP2009/008063
    • Filing date: 2009-11-12
    • Applicants: AGNITIO, S.L.; LANGEHOVEEN BRUMMER, Johan, Nikolaas; BUERA RODRIGUEZ, Luis; GARCIA GOMAR, Marta
    • Inventors: LANGEHOVEEN BRUMMER, Johan, Nikolaas; BUERA RODRIGUEZ, Luis; GARCIA GOMAR, Marta
    • Main IPC: G10L17/00
    • IPC: G10L17/02
    • Abstract: The present invention relates to a method for speaker recognition, comprising the steps of obtaining and storing speaker information for at least one target speaker; obtaining a plurality of speech samples from a plurality of telephone calls from at least one unknown speaker; classifying the speech samples according to the at least one unknown speaker thereby providing speaker-dependent classes of speech samples; extracting speaker information for the speech samples of each of the speaker-dependent classes of speech samples; combining the extracted speaker information for each of the speaker-dependent classes of speech samples; comparing the combined extracted speaker information for each of the speaker-dependent classes of speech samples with the stored speaker information for the at least one target speaker to obtain at least one comparison result; and determining whether one of the at least one unknown speakers is identical with the at least one target speaker based on the at least one comparison result. (An illustrative sketch of this multi-call comparison flow follows this record.)
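The abstract above describes a verification flow in which speaker information is extracted per speech sample, pooled per unknown speaker, and only then compared with the stored target model. The sketch below illustrates that flow under stated assumptions; the random "embeddings", the averaging rule, the cosine score and the 0.7 threshold are placeholders for whichever representations and comparison the patented method actually uses.

```python
# Minimal sketch of the multi-call flow described in the abstract of
# WO2011057650A1: extract speaker information per speech sample, pool it per
# unknown speaker, and score the pooled information against the stored target
# model. The embedding extractor, the averaging, the cosine score and the 0.7
# threshold are illustrative assumptions, not details taken from the patent.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(0)
_voice_bases: dict[str, np.ndarray] = {}

def extract_embedding(voice_id: str) -> np.ndarray:
    """Stand-in for extracting speaker information from one speech sample.
    A real system would compute e.g. an i-vector/x-vector from audio; here a
    fixed random base per voice plus sample noise keeps the script runnable."""
    if voice_id not in _voice_bases:
        _voice_bases[voice_id] = rng.normal(size=64)
    return _voice_bases[voice_id] + 0.1 * rng.normal(size=64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1) Obtain and store speaker information for the target speaker.
target_model = extract_embedding("target_voice")

# 2) Speech samples from several telephone calls, already classified per
#    unknown speaker (class label, underlying voice used to simulate audio).
samples = [("unknown_A", "target_voice"), ("unknown_A", "target_voice"),
           ("unknown_B", "other_voice"),  ("unknown_B", "other_voice")]
classes: dict[str, list[np.ndarray]] = defaultdict(list)
for speaker_class, voice in samples:
    classes[speaker_class].append(extract_embedding(voice))

# 3) Combine the extracted information per class, 4) compare with the stored
#    target model, 5) decide based on the comparison result.
THRESHOLD = 0.7   # assumed operating point
for speaker_class, embeddings in classes.items():
    combined = np.mean(embeddings, axis=0)
    score = cosine(combined, target_model)
    verdict = "matches target" if score > THRESHOLD else "does not match target"
    print(f"{speaker_class}: score = {score:.2f} -> {verdict}")
```

Pooling the per-call information before scoring is the step that distinguishes this flow from scoring each call in isolation; in the sketch it simply averages out the per-sample noise, so the combined score separates the two unknown speakers cleanly.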
    • 8. Invention application
    • Title: CUT AND PASTE SPOOFING DETECTION USING DYNAMIC TIME WARPING
    • Publication number: WO2010066435A1
    • Publication date: 2010-06-17
    • Application number: PCT/EP2009/008851
    • Filing date: 2009-12-10
    • Applicants: AGNITIO S.L.; VILLALBA LÓPEZ, Jesús, Antonio; ORTEGA GIMÉNEZ, Alfonso; LLEIDA SOLANO, Eduardo; VARELA REDONDO, Sara; GARCÍA GOMAR, Marta
    • Inventors: VILLALBA LÓPEZ, Jesús, Antonio; ORTEGA GIMÉNEZ, Alfonso; LLEIDA SOLANO, Eduardo; VARELA REDONDO, Sara; GARCÍA GOMAR, Marta
    • Main IPC: G10L17/00
    • IPC: G10L17/00; B66B13/26; G10L17/02; G10L17/06; G10L17/24; G10L17/26
    • Abstract: The invention refers to a method for comparing voice utterances, the method comprising the steps: extracting a plurality of features (201) from a first voice utterance of a given text sample and extracting a plurality of features (201) from a second voice utterance of said given text sample, wherein each feature is extracted as a function of time, and wherein each feature of the second voice utterance corresponds to a feature of the first voice utterance; applying dynamic time warping (202) to one or more time dependent characteristics of the first and/or second voice utterance e.g. by minimizing one or more distance measures, wherein a distance measure is a measure for the difference of a time dependent characteristic of the first voice utterance and a corresponding time dependent characteristic of the second voice utterance, and wherein a time dependent characteristic of a voice utterance is a time dependent characteristic of either a single feature or a combination of two or more features; calculating a total distance measure (203), wherein the total distance measure is a measure for the difference between the first voice utterance of the given text sample and the second voice utterance of said given text sample, wherein the total distance measure is calculated based on one or more pairs of said time dependent characteristic, and wherein a pair of time dependent characteristic is composed of a time dependent characteristic of the first or second voice utterance and of a dynamically time warped (202) time dependent characteristic of the respectively second or first voice utterance, or wherein a pair of time dependent characteristic is composed of a dynamically time warped (202) time dependent characteristic of the first voice utterance and of a dynamically time warped (202) time dependent characteristic of the second voice utterance. (An illustrative DTW-based comparison sketch follows this record.)
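The abstract above reduces the comparison of two utterances of the same text to a total distance computed after dynamic time warping of their time-dependent characteristics; as the title suggests, an utterance assembled from copied recordings ends up implausibly close to stored speech, which is what the distance is meant to expose. The sketch below illustrates this under stated assumptions: synthetic signals stand in for recordings, a log-energy contour stands in for the time-dependent characteristics, and a path-normalised DTW cost stands in for the total distance measure; none of these specific choices are taken from the patent.

```python
# Minimal sketch of the comparison described in the abstract of WO2010066435A1:
# extract a time-dependent characteristic from two utterances of the same text,
# align them with dynamic time warping (DTW), and summarise the residual
# mismatch as a total distance measure. The synthetic signals, the log-energy
# contour and the path-normalised cost are illustrative stand-ins only.
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic DTW on 1-D sequences; returns the accumulated alignment cost
    normalised by the sequence lengths (used here as the total distance)."""
    n, m = len(x), len(y)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return float(acc[n, m] / (n + m))

def log_energy_contour(signal: np.ndarray, frame: int = 160) -> np.ndarray:
    """One time-dependent characteristic: log frame energy as a function of time."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    return np.log(np.mean(frames ** 2, axis=1) + 1e-10)

# Synthetic "utterances" of the same text (1 s at an implied 8 kHz).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
envelope = 0.5 + 0.5 * np.hanning(t.size)            # loudness contour of the original
stored   = envelope * np.sin(2 * np.pi * 200 * t)    # utterance kept by the system
spliced  = stored + 1e-3 * rng.normal(size=t.size)   # cut-and-paste copy (+ channel noise)
genuine  = 0.7 * stored                              # same text spoken again, more softly

for name, utt in [("spliced copy  ", spliced), ("genuine repeat", genuine)]:
    d = dtw_distance(log_energy_contour(stored), log_energy_contour(utt))
    print(f"{name}: total distance = {d:.4f}")
# A total distance close to zero means the presented utterance is (nearly)
# identical to stored speech, which is the cut-and-paste red flag.
```

Running it prints a near-zero distance for the spliced copy and a markedly larger one for the independently produced repetition; a deployed detector would place its decision threshold between those two regimes.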