    • 1. Granted Patent
    • Title: Gesture recognition system
    • Publication No.: US06804396B2 (2004-10-12)
    • Application No.: US09820130 (filed 2001-03-28)
    • Inventors: Nobuo Higaki, Yuichi Yoshida, Kikuo Fujimura
    • IPC: G06K9/00
    • CPC: G06K9/00335; B25J11/0005; B25J13/003; G06F3/017; G06F3/167; G10L15/26
    • Abstract: The present invention provides a system for recognizing gestures made by a moving subject. The system comprises a sound detector for detecting sound, one or more image sensors for capturing an image of the moving subject, a human recognizer for recognizing a human being from the image captured by said one or more image sensors, and a gesture recognizer, activated when human voice is identified by said sound detector, for recognizing a gesture of the human being. In a preferred embodiment, the system includes a hand recognizer for recognizing a hand of the human being. The gesture recognizer recognizes a gesture of the human being based on movement of the hand identified by the hand recognizer. The system may further include a voice recognizer that recognizes human voice and determines words from human voice input to the sound detector. The gesture recognizer is activated when the voice recognizer recognizes one of a plurality of predetermined keywords such as “hello!”, “bye”, and “move”.
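The core idea in this abstract is a gate: the gesture recognizer stays idle until the voice recognizer hears one of a few predetermined keywords. A minimal Python sketch of that gating logic, assuming the recognizer callables (`find_human`, `gesture_from_hands`) as hypothetical stand-ins rather than components named in the patent:

```python
from typing import Callable, Iterable, Optional

KEYWORDS = {"hello", "bye", "move"}  # the predetermined activation keywords

def keyword_spoken(words: Iterable[str]) -> bool:
    """True if any recognized word matches an activation keyword."""
    return any(w.lower().strip("!?.,") in KEYWORDS for w in words)

def gated_gesture_recognizer(words, frames,
                             find_human: Callable,
                             gesture_from_hands: Callable) -> Optional[str]:
    """Run the (expensive) gesture pipeline only after a keyword is heard."""
    if not keyword_spoken(words):
        return None                          # stay idle: no voice activation
    if not any(find_human(f) for f in frames):
        return None                          # no human found in the images
    return gesture_from_hands(frames)        # gesture from hand movement

# Stub usage: always "sees" a person and reports a fixed gesture label.
print(gated_gesture_recognizer(["Hello!"], ["frame0", "frame1"],
                               find_human=lambda f: True,
                               gesture_from_hands=lambda fs: "wave"))
```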
    • 3. Granted Patent
    • Title: Target orientation estimation using depth sensing
    • Publication No.: US08031906B2 (2011-10-04)
    • Application No.: US12572619 (filed 2009-10-02)
    • Inventors: Kikuo Fujimura, Youding Zhu
    • IPC: G06K9/00
    • CPC: G06K9/00228; G06F3/012; G06K9/00201; G06K9/00241; G06K9/00845; G06K9/32; G06K9/6203; G06K9/6214; G06T7/74; G06T2200/04; G06T2207/10028; G06T2207/30201; G08B21/06
    • Abstract: A system for estimating orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.
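Two pieces of this abstract lend themselves to a compact sketch: the estimation-mode lookup against the stored training set, and the drift reset whenever the dominant orientation is seen. The nearest-neighbour classifier and the flattened RGBD features below are illustrative assumptions; the patent does not commit to a specific classifier.

```python
import numpy as np

def estimate_orientation(query, train_feats, train_angles):
    """Estimation mode: return the orientation stored with the closest
    training image (nearest neighbour in flattened RGBD feature space)."""
    dists = np.linalg.norm(train_feats - query.ravel(), axis=1)
    return train_angles[np.argmin(dists)]

def accumulate_yaw(deltas, is_dominant, reference=0.0):
    """Integrate per-frame rotation deltas (from optical flow + depth);
    reset accumulated drift whenever the dominant orientation is seen."""
    yaw, track = reference, []
    for delta, dominant in zip(deltas, is_dominant):
        yaw = reference if dominant else yaw + delta
        track.append(yaw)
    return track

rng = np.random.default_rng(0)
train = rng.random((50, 16 * 16 * 4))       # 50 training frames, flattened RGBD
angles = rng.uniform(-90.0, 90.0, size=50)  # orientation stored per frame
print(estimate_orientation(rng.random((16, 16, 4)), train, angles))
print(accumulate_yaw([2.0, 3.0, -1.0, 4.0], [False, False, True, False]))
```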
    • 4. Granted Patent
    • Title: Moving object detection using low illumination depth capable computer vision
    • Publication No.: US07366325B2 (2008-04-29)
    • Application No.: US10964299 (filed 2004-10-12)
    • Inventors: Kikuo Fujimura, Xia Liu
    • IPC: G06K9/00
    • CPC: G06K9/2018; G06K9/00369; G06K9/00805
    • Abstract: Moving object detection is based on low illumination image data that includes distance or depth information. The vision system operates on a platform with a dominant translational motion and with a small amount of rotational motion. Detection of moving objects whose motions are not consistent with the movement of the background is complementary to shape-based approaches. For low illumination computer-based vision assistance a two-stage technique is used for simultaneous and subsequent frame blob correspondence. Using average scene disparity, motion is detected without explicit ego-motion calculation. These techniques make use of characteristics of infrared sensitive video data, in which heat emitting objects appear as hotspots.
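The abstract's key trick is detecting independent motion without an explicit ego-motion computation: each blob's frame-to-frame displacement is compared with the average scene disparity, and blobs that disagree with the scene-wide average are flagged as moving. A toy numpy version, assuming blob correspondence between frames is already established:

```python
import numpy as np

def moving_blobs(prev_pts, curr_pts, thresh=3.0):
    """Flag blobs whose displacement disagrees with the average scene
    disparity (a stand-in for explicit ego-motion estimation)."""
    disp = curr_pts - prev_pts              # per-blob displacement (pixels)
    scene = disp.mean(axis=0)               # average scene disparity
    residual = np.linalg.norm(disp - scene, axis=1)
    return residual > thresh                # True = independently moving

# Five background hotspots shift uniformly with the vehicle's own motion;
# the sixth blob moves on its own and is the only one flagged.
prev_pts = np.array([[10., 10.], [50., 20.], [80., 15.],
                     [30., 40.], [60., 60.], [20., 70.]])
curr_pts = prev_pts + [2., 0.]
curr_pts[5] = prev_pts[5] + [10., 10.]
print(moving_blobs(prev_pts, curr_pts))     # [False ... False  True]
```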
    • 5. Patent Application
    • Title: 3-DIMENSIONAL (3-D) NAVIGATION
    • Publication No.: US20140268353A1 (2014-09-18)
    • Application No.: US14041614 (filed 2013-09-30)
    • Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing
    • IPC: G02B27/01
    • CPC: G02B27/01; G02B27/0101; G02B2027/0127; G02B2027/0134
    • Abstract: One or more embodiments of techniques or systems for 3-dimensional (3-D) navigation are provided herein. A heads-up display (HUD) component can project graphic elements on focal planes around an environment surrounding a vehicle. The HUD component can cause these graphic elements to appear volumetric or 3-D by moving or adjusting a distance between a focal plane and the vehicle. Additionally, a target position for graphic elements can be adjusted. This enables the HUD component to project graphic elements as moving avatars. In other words, adjusting the focal plane distance and the target position enables graphic elements to be projected in three dimensions along an x, y, and z axis. Further, a moving avatar can be ‘animated’ by sequentially projecting the avatar on different focal planes, thereby providing an occupant with the perception that the avatar is moving towards or away from the vehicle.
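The animation mechanism described here reduces to interpolating two controls over time: the graphic element's target position (x, y) and the focal-plane distance (z). A small sketch under that reading; `project_on_focal_plane` is a hypothetical HUD call, not an API from the application.

```python
def animate_avatar(start, end, steps):
    """Yield interpolated (x, y, z) targets; z is the focal-plane distance,
    so stepping z outward makes the avatar appear to recede from the car."""
    for i in range(steps + 1):
        t = i / steps
        yield tuple(s + t * (e - s) for s, e in zip(start, end))

# Walk the avatar from 2 m ahead of the vehicle out to 20 m over 5 keyframes.
for x, y, z in animate_avatar(start=(0.0, -1.0, 2.0),
                              end=(1.5, -1.0, 20.0), steps=4):
    # project_on_focal_plane(x, y, z)   # hypothetical HUD projection call
    print(f"target=({x:.2f}, {y:.2f}) m, focal plane at {z:.1f} m")
```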
    • 7. Patent Application
    • Title: TARGET ORIENTATION ESTIMATION USING DEPTH SENSING
    • Publication No.: US20100034427A1 (2010-02-11)
    • Application No.: US12572619 (filed 2009-10-02)
    • Inventors: Kikuo Fujimura, Youding Zhu
    • IPC: G06K9/00
    • CPC: G06K9/00228; G06F3/012; G06K9/00201; G06K9/00241; G06K9/00845; G06K9/32; G06K9/6203; G06K9/6214; G06T7/74; G06T2200/04; G06T2207/10028; G06T2207/30201; G08B21/06
    • Abstract: A system for estimating orientation of a target based on real-time video data uses depth data included in the video to determine the estimated orientation. The system includes a time-of-flight camera capable of depth sensing within a depth window. The camera outputs hybrid image data (color and depth). Segmentation is performed to determine the location of the target within the image. Tracking is used to follow the target location from frame to frame. During a training mode, a target-specific training image set is collected with a corresponding orientation associated with each frame. During an estimation mode, a classifier compares new images with the stored training set to determine an estimated orientation. A motion estimation approach uses an accumulated rotation/translation parameter calculation based on optical flow and depth constraints. The parameters are reset to a reference value each time the image corresponds to a dominant orientation.
    • 8. Granted Patent
    • Title: Pose estimation based on critical point analysis
    • Publication No.: US07317836B2 (2008-01-08)
    • Application No.: US11378573 (filed 2006-03-17)
    • Inventors: Kikuo Fujimura, Youding Zhu
    • IPC: G06K9/00; G06K9/36; G06K9/46; G06K9/62; G06K9/66
    • CPC: G06K9/00362; G06K9/00375; G06T7/75; G06T2207/30196
    • Abstract: Methods and systems for estimating a pose of a subject. The subject can be a human, an animal, a robot, or the like. The system includes a camera that receives depth information associated with a subject, a pose estimation module that determines a pose or action of the subject from images, and an interaction module that outputs a response to the perceived pose or action. The pose estimation module separates portions of the image containing the subject into classified and unclassified portions. The portions can be segmented using k-means clustering. The classified portions can be known objects, such as a head and a torso, that are tracked across the images. The unclassified portions are swept along the x and y axes to identify local minimums and local maximums. The critical points are derived from the local minimums and local maximums. Potential joint sections are identified by connecting various critical points, and the joint sections having sufficient probability of corresponding to an object on the subject are selected.
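The sweep step is the most concrete part of this abstract: scanning the unclassified silhouette along an axis and keeping local extrema as candidate critical points. Below is a minimal numpy sketch of one x-axis sweep over a binary mask; the k-means clustering and joint-section selection stages are omitted, and the plateau handling is my own assumption.

```python
import numpy as np

def critical_points_x_sweep(mask):
    """Sweep a binary silhouette along the x axis and return (x, y) local
    extrema of its top profile as candidate critical points."""
    top = np.array([rows.min() if rows.size else np.inf
                    for rows in (np.flatnonzero(mask[:, x])
                                 for x in range(mask.shape[1]))])
    points = []
    for x in range(1, len(top) - 1):
        if np.isinf(top[x]):
            continue  # empty column: no foreground here
        # Row indices grow downward, so a local minimum of `top` is a
        # locally highest silhouette point (head, raised hand, ...).
        # `<=` on the right lets the left edge of a plateau count once.
        if top[x] < top[x - 1] and top[x] <= top[x + 1]:
            points.append((x, int(top[x])))
    return points

mask = np.zeros((8, 9), dtype=bool)
mask[2:8, 1:3] = True   # raised left arm, topmost at row 2
mask[4:8, 4:6] = True   # head/torso block, topmost at row 4
mask[3:8, 7:9] = True   # right arm, topmost at row 3
print(critical_points_x_sweep(mask))   # [(1, 2), (4, 4), (7, 3)]
```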
    • 9. Patent Application
    • Title: Visual tracking using depth data
    • Publication No.: US20050031166A1 (2005-02-10)
    • Application No.: US10857581 (filed 2004-05-28)
    • Inventors: Kikuo Fujimura, Harsh Nanda
    • IPC: G06K9/00; G06T1/00; G06T7/20
    • CPC: G06K9/00375; G06K9/00369; G06K9/32; G06T7/20
    • Abstract: Real-time visual tracking using depth-sensing camera technology results in illumination-invariant tracking performance. Depth-sensing (time-of-flight) cameras provide real-time depth and color images of the same scene. Depth windows regulate the tracked area by controlling shutter speed. A potential field is derived from the depth image data to provide edge information of the tracked target. A mathematically representable contour can model the tracked target. Based on the depth data, determining a best fit between the contour and the edge of the tracked target provides position information for tracking. Applications using depth sensor based visual tracking include head tracking, hand tracking, body-pose estimation, robotic command determination, and other human-computer interaction systems.
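Two of this abstract's building blocks are easy to sketch: the depth window that restricts tracking to a depth range, and an edge potential field derived from the depth image. Gradient magnitude is used here as a plausible stand-in for the patent's field construction, and the contour fitting is left out.

```python
import numpy as np

def depth_window(depth, near, far):
    """Mask out everything outside the tracked depth range (the window)."""
    return np.where((depth >= near) & (depth <= far), depth, 0.0)

def edge_potential(depth):
    """Gradient magnitude of the depth map as an edge-strength field;
    strong values trace the tracked target's silhouette."""
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy)

rng = np.random.default_rng(1)
depth = rng.uniform(0.5, 4.0, size=(6, 6))   # toy time-of-flight depth map
windowed = depth_window(depth, near=1.0, far=2.0)
print(edge_potential(windowed).round(2))
```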