    • 1. Granted patent
    • Title: Robot audiovisual system
    • Publication No.: US06967455B2
    • Publication date: 2005-11-22
    • Application No.: US10468396
    • Filing date: 2002-03-08
    • Inventors: Kazuhiro Nakadai, Ken-ichi Hidai, Hiroshi Okuno, Hiroaki Kitano
    • IPC: B25J13/00; B25J19/02; G06T1/00; G10L21/0208; B25J13/08
    • CPC: G06T1/0014; B25J13/00; B25J13/003; B25J19/023; G10L2021/02087
    • Abstract: A robot visuoauditory system is disclosed that makes it possible to process data in real time to track vision and audition for an object, that can integrate visual and auditory information on an object so that the object is kept tracked without fail, and that makes it possible to process the information in real time to keep tracking the object both visually and auditorily and to visualize the real-time processing. In the system, the audition module (20), in response to sound signals from microphones, extracts pitches therefrom, separates the sound sources from each other and localizes them so as to identify a sound source as at least one speaker, thereby extracting an auditory event (28) for each object speaker. The vision module (30), on the basis of an image taken by a camera, identifies each such speaker by face and localizes the speaker, thereby extracting a visual event (39) therefor. The motor control module (40) for turning the robot horizontally extracts a motor event (49) from a rotary position of the motor. The association module (60) for controlling these modules forms from the auditory, visual and motor control events an auditory stream (65) and a visual stream (66) and then associates these streams with each other to form an association stream (67). The attention control module (64) effects attention control designed to plan the course in which to control the drive motor, e.g., upon locating the sound source for the auditory event and locating the face for the visual event, thereby determining the direction in which each speaker lies. The system also includes a display (27, 37, 48, 68) for displaying at least a portion of the auditory, visual and motor information. The attention control module (64) servo-controls the robot on the basis of the association stream or streams.
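A minimal sketch of the event-to-stream association idea in this abstract: auditory and visual events each carry a direction estimate, and events that agree in time and direction are merged into an association stream. The class names, thresholds, and sample values below are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of auditory/visual event association; names, thresholds and
# sample data are assumptions for illustration, not taken from the patent.
from dataclasses import dataclass

@dataclass
class AuditoryEvent:
    t: float        # time stamp (s)
    azimuth: float  # estimated sound-source direction (deg)

@dataclass
class VisualEvent:
    t: float        # time stamp (s)
    azimuth: float  # estimated face direction (deg)

def associate(auditory, visual, max_dt=0.5, max_dazimuth=10.0):
    """Pair auditory and visual events that are close in time and direction,
    forming a crude 'association stream' of (auditory, visual) pairs."""
    stream = []
    for a in auditory:
        for v in visual:
            if abs(a.t - v.t) <= max_dt and abs(a.azimuth - v.azimuth) <= max_dazimuth:
                stream.append((a, v))
    return stream

if __name__ == "__main__":
    aud = [AuditoryEvent(0.1, 32.0), AuditoryEvent(0.6, -45.0)]
    vis = [VisualEvent(0.2, 30.0), VisualEvent(0.7, 80.0)]
    print(associate(aud, vis))  # only the first pair agrees in time and direction
```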
    • 2. Granted patent
    • Title: Robot acoustic device
    • Publication No.: US07016505B1
    • Publication date: 2006-03-21
    • Application No.: US10130295
    • Filing date: 2000-11-01
    • Inventors: Kazuhiro Nakadai, Hiroshi Okuno, Hiroaki Kitano
    • IPC: A61F11/06; G10K11/16; H03B29/00
    • CPC: B25J19/026; G10L15/20; G10L21/0216; G10L2021/02165
    • Abstract: The invention is directed to a robot auditory apparatus for a human- or animal-like robot, e.g., a human-like robot (10) having a noise generating source such as a driving system in its interior. The apparatus includes a sound insulating cover (14) with which at least a head part (13) of the robot is covered; a pair of outer microphones (16; 16a and 16b) installed outside of the cover and located, spaced apart, at a pair of positions where a pair of ears may be provided for the robot, for primarily collecting an external sound; at least one inner microphone (17; 17a and 17b) installed inside of the cover for primarily collecting noise from the noise generating source in the robot interior; and a processing module (18) which, on the basis of signals from the outer and inner microphones, removes from the sound signals of the outer microphones (16a and 16b) a noise signal from the internal noise generating source. Thus, the robot auditory apparatus of the invention is made capable of effecting active perception by permitting an external sound from a target to be collected unaffected by noise inside the robot, such as that from the driving system.
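The noise-removal step described above, subtracting the interior noise picked up by the inner microphone from the outer-microphone signal, can be pictured with a simple spectral-subtraction sketch. Spectral subtraction, the fixed gain, and the FFT parameters are assumptions made for illustration; the patent does not prescribe this particular algorithm.

```python
# Hedged sketch: spectral subtraction as one plausible realization of
# "remove the inner-mic noise from the outer-mic signal"; gain and FFT size
# are arbitrary illustrative choices.
import numpy as np

def cancel_interior_noise(outer, inner, gain=1.0, n_fft=512):
    """Subtract the inner-mic (noise reference) magnitude spectrum from the
    outer-mic spectrum frame by frame, then resynthesize by overlap-add.
    Both inputs are float arrays of equal length."""
    out = np.zeros(len(outer))
    window = np.hanning(n_fft)
    hop = n_fft // 2
    for start in range(0, len(outer) - n_fft, hop):
        o = np.fft.rfft(outer[start:start + n_fft] * window)
        n = np.fft.rfft(inner[start:start + n_fft] * window)
        mag = np.maximum(np.abs(o) - gain * np.abs(n), 0.0)   # subtract noise magnitude
        cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(o)), n_fft)
        out[start:start + n_fft] += cleaned * window           # overlap-add
    return out
```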
    • 3. Granted patent
    • Title: Robotics visual and auditory system
    • Publication No.: US07526361B2
    • Publication date: 2009-04-28
    • Application No.: US10506167
    • Filing date: 2002-08-30
    • Inventors: Kazuhiro Nakadai, Hiroshi Okuno, Hiroaki Kitano
    • IPC: G06F19/00
    • CPC: G06K9/0057; B25J13/003
    • Abstract: A robotics visual and auditory system is provided which is capable of accurately conducting sound source localization of a target by associating visual and auditory information with respect to the target. It is provided with an audition module (20), a face module (30), a stereo module (37), a motor control module (40), an association module (50) for generating streams by associating events from each of said modules (20, 30, 37, and 40), and an attention control module (57) for conducting attention control based on the streams generated by the association module (50). Said association module (50) generates an auditory stream (55) and a visual stream (56) from an auditory event (28) from the audition module (20), a face event (39) from the face module (30), a stereo event (39a) from the stereo module (37), and a motor event (48) from the motor control module (40), as well as an association stream (57) which associates said streams. Said audition module (20) collects sub-bands having an interaural phase difference (IPD) or interaural intensity difference (IID) within the preset range by an active direction-pass filter (23a) having a pass range which, according to auditory characteristics, becomes minimum in the frontal direction and larger as the angle becomes wider to the left and right, based on accurate sound source directional information from the association module (50), and conducts sound source separation by reconstructing the waveform of the sound source.
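The active direction-pass filter described in this abstract keeps only the frequency sub-bands whose interaural phase difference matches the value expected for the target direction, with a pass range that is narrow straight ahead and widens toward the sides. The sketch below assumes a free-field two-microphone model, uses IPD only, and picks arbitrary constants for the microphone spacing and pass range; it is not the patent's implementation.

```python
# Illustrative IPD-based sub-band selection. Microphone spacing, pass-range
# constants, and the linear widening with azimuth are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_DISTANCE   = 0.18    # m, assumed spacing of the two "ear" microphones

def expected_ipd(freqs, azimuth_deg):
    """Free-field interaural phase difference for a source at the given azimuth."""
    delay = MIC_DISTANCE * np.sin(np.radians(azimuth_deg)) / SPEED_OF_SOUND
    return 2.0 * np.pi * freqs * delay

def pass_range(azimuth_deg, base=0.2, slope=0.01):
    """Pass range (radians): minimal in front, wider as |azimuth| grows."""
    return base + slope * abs(azimuth_deg)

def direction_pass_filter(left_spec, right_spec, freqs, azimuth_deg):
    """For one STFT frame, keep the sub-bands whose measured IPD lies within
    the pass range around the IPD expected for the target direction."""
    ipd = np.angle(left_spec * np.conj(right_spec))
    diff = np.abs(np.angle(np.exp(1j * (ipd - expected_ipd(freqs, azimuth_deg)))))
    mask = diff <= pass_range(azimuth_deg)
    return left_spec * mask, right_spec * mask
```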
    • 4. Granted patent
    • Title: Robot acoustic device and robot acoustic system
    • Publication No.: US07215786B2
    • Publication date: 2007-05-08
    • Application No.: US10296244
    • Filing date: 2001-06-08
    • Inventors: Kazuhiro Nakadai, Hiroshi Okuno, Hiroaki Kitano
    • IPC: H04B15/00; H04R1/02; B25J5/00
    • CPC: G10L21/0208; G10L2021/02165
    • Abstract: A robot auditory apparatus and system are disclosed which are capable of attaining active perception when collecting a sound from an external target, with no influence received from noises generated in the interior of the robot such as those emitted from the robot driving elements. The apparatus and system are for a robot having a noise generating source in its interior, and include: a sound insulating cladding (14) with which at least a portion of the robot is covered; at least two outer microphones (16 and 16) disposed outside of the cladding (14) for primarily collecting an external sound; at least one inner microphone (17) disposed inside of the cladding (14) for primarily collecting noises from the noise generating source in the robot interior; a processing section (23, 24) responsive to signals from the outer and inner microphones (16 and 16; and 17) for canceling, from the respective sound signals from the outer microphones (16 and 16), a noise signal from the interior noise generating source and then issuing a left and a right sound signal; and a directional information extracting section (27) responsive to the left and right sound signals from the processing section (23, 24) for determining the direction from which the external sound is emitted. The processing section (23, 24) is adapted to detect burst noises due to the noise generating source from a signal from the at least one inner microphone (17), and to remove from the sound signals the signal portions in bands containing the burst noises.
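This patent additionally detects burst noises from the inner microphone and removes the affected bands from the outer-microphone signals. The sketch below illustrates that step with a simple level threshold on the inner-microphone spectrum; the threshold value and the per-bin treatment are assumptions for illustration.

```python
# Hedged sketch: blank the frequency bins where the inner (noise) microphone
# indicates a burst. The -30 dB threshold is an assumed value.
import numpy as np

def remove_burst_bands(left_spec, right_spec, inner_spec, threshold_db=-30.0):
    """For one STFT frame, treat any bin where the inner-mic level exceeds the
    threshold as burst noise and zero it in both outer-channel spectra."""
    inner_db = 20.0 * np.log10(np.abs(inner_spec) + 1e-12)
    keep = inner_db < threshold_db          # True where no burst is detected
    return left_spec * keep, right_spec * keep
```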
    • 5. Patent application
    • Title: Robotics visual and auditory system
    • Publication No.: US20060241808A1
    • Publication date: 2006-10-26
    • Application No.: US10506167
    • Filing date: 2002-08-30
    • Inventors: Kazuhiro Nakadai, Hiroshi Okuno, Hiroaki Kitano
    • IPC: G06F19/00
    • CPC: G06K9/0057; B25J13/003
    • Abstract: A robotics visual and auditory system is provided which is capable of accurately conducting sound source localization of a target by associating visual and auditory information with respect to the target. It is provided with an audition module (20), a face module (30), a stereo module (37), a motor control module (40), an association module (50) for generating streams by associating events from each of said modules (20, 30, 37, and 40), and an attention control module (57) for conducting attention control based on the streams generated by the association module (50). Said association module (50) generates an auditory stream (55) and a visual stream (56) from an auditory event (28) from the audition module (20), a face event (39) from the face module (30), a stereo event (39a) from the stereo module (37), and a motor event (48) from the motor control module (40), as well as an association stream (57) which associates said streams. Said audition module (20) collects sub-bands having an interaural phase difference (IPD) or interaural intensity difference (IID) within the preset range by an active direction-pass filter (23a) having a pass range which, according to auditory characteristics, becomes minimum in the frontal direction and larger as the angle becomes wider to the left and right, based on accurate sound source directional information from the association module (50), and conducts sound source separation by reconstructing the waveform of the sound source.
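This application publishes the same system as the grant above; its final step, sound source separation by restructuring the wave shape of the source, amounts to resynthesizing a time-domain waveform from the sub-bands that passed the direction filter. A minimal overlap-add sketch is given below, assuming Hann-windowed frames with 50% overlap; the window and hop size are illustrative choices, not values from the application.

```python
# Illustrative overlap-add resynthesis of a separated source from masked STFT
# frames (e.g. the output of the direction-pass filter sketched earlier).
import numpy as np

def overlap_add(frames, n_fft=512):
    """frames: sequence of complex rfft spectra, one per hop of n_fft // 2."""
    frames = list(frames)
    if not frames:
        return np.zeros(0)
    hop = n_fft // 2
    window = np.hanning(n_fft)
    out = np.zeros(hop * (len(frames) - 1) + n_fft)
    for i, spec in enumerate(frames):
        out[i * hop:i * hop + n_fft] += np.fft.irfft(spec, n_fft) * window
    return out
```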
    • 6. Patent application
    • Title: Robotics visual and auditory system
    • Publication No.: US20090030552A1
    • Publication date: 2009-01-29
    • Application No.: US10539047
    • Filing date: 2003-02-12
    • Inventors: Kazuhiro Nakadai, Hiroshi Okuno, Hiroaki Kitano
    • IPC: G10L21/02; G10L15/20; G06F19/00; G05B19/00
    • CPC: G06N3/008; G10L15/28; G10L21/028; G10L2015/228; G10L2021/02166
    • Abstract: Disclosed is a robotics visual and auditory system provided with an auditory module (20), a face module (30), a stereo module (37), a motor control module (40), and an association module (50) that controls these respective modules. The auditory module (20) collects sub-bands having an interaural phase difference (IPD) or interaural intensity difference (IID) within a predetermined range by an active direction-pass filter (23a) having a pass range which, according to auditory characteristics, becomes minimum in the frontal direction and larger as the angle becomes wider to the left and right, based on accurate sound source directional information from the association module (50), and conducts sound source separation by reconstructing the waveform of a sound source; it then conducts speech recognition of the separated sound signals from the respective sound sources using a plurality of acoustic models (27d), integrates the speech recognition results from each acoustic model by a selector, and judges the most reliable speech recognition result among them.
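The recognition stage described here decodes each separated source with several acoustic models and lets a selector keep the most reliable hypothesis. The sketch below illustrates only that selection step; the recognizer interface and the use of a confidence score as the reliability measure are assumptions made for illustration.

```python
# Hedged sketch of the multi-acoustic-model selector. The Recognizer callable
# and confidence-based selection are assumptions, not the patent's interface.
from typing import Callable, List, Tuple

# A "recognizer" maps an audio segment to (transcript, confidence); in a real
# system each callable would wrap an ASR engine using a different acoustic model.
Recognizer = Callable[[bytes], Tuple[str, float]]

def recognize_with_models(audio: bytes, recognizers: List[Recognizer]) -> Tuple[str, float]:
    """Run every acoustic model on the separated source and return the
    hypothesis judged most reliable (here: highest confidence)."""
    results = [rec(audio) for rec in recognizers]
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    # Dummy recognizers standing in for models trained on different conditions.
    clean_model = lambda a: ("hello robot", 0.82)
    noisy_model = lambda a: ("yellow robot", 0.41)
    print(recognize_with_models(b"...", [clean_model, noisy_model]))  # ('hello robot', 0.82)
```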