    • 3. Granted invention patent
    • Head pose tracking system
    • US07515173B2
    • 2009-04-07
    • US10154892
    • 2002-05-23
    • Zhengyou Zhang; Ruigang Yang
    • H04N7/14
    • H04N7/15; H04N7/144
    • Video images representative of a conferee's head are received and evaluated with respect to a reference model to monitor a head position of the conferee. A personalized face model of the conferee is captured to track head position of the conferee. In a stereo implementation, first and second video images representative of a first conferee taken from different views are concurrently captured. A head position of the first conferee is tracked from the first and second video images. The tracking of head-position through a personalized model-based approach can be used in a number of applications such as human-computer interaction and eye-gaze correction for video conferencing.
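The abstract above describes tracking head position by registering a personalized face model against observed video. As an illustrative sketch (not the patented implementation), the rigid-registration core can be written as a Kabsch alignment between 3-D landmarks of a personalized face model and the landmarks observed in the current frame; all names below are hypothetical:

```python
import numpy as np

def estimate_head_pose(model_pts, observed_pts):
    """Rigid alignment (Kabsch) of personalized face-model landmarks
    (N x 3) to observed landmarks (N x 3); returns rotation R and
    translation t minimizing ||R @ p + t - q|| over matched points."""
    mu_p = model_pts.mean(axis=0)
    mu_q = observed_pts.mean(axis=0)
    P = model_pts - mu_p
    Q = observed_pts - mu_q
    U, _, Vt = np.linalg.svd(P.T @ Q)
    # Reflection correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```

Given matched landmark sets, the recovered `R` and `t` give the head pose for that frame; a real tracker would obtain the observed landmarks from stereo matching on the two concurrently captured video images.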
    • 4. Granted invention patent
    • Video-teleconferencing system with eye-gaze correction
    • US06771303B2
    • 2004-08-03
    • US10128888
    • 2002-04-23
    • Zhengyou Zhang; Ruigang Yang
    • H04N7/14
    • H04N7/144
    • Correcting for eye-gaze in video communication devices is accomplished by blending information captured from a stereoscopic view of the conferee and generating a virtual image of the conferee. A personalized face model of the conferee is captured to track head position of the conferee. First and second video images representative of a first conferee taken from different views are concurrently captured. A head position of the first conferee is tracked from the first and second video images. Matching features and contours from the first and second video images are ascertained. The head position as well as the matching features and contours from the first and second video images are synthesized to generate a virtual image video stream of the first conferee that makes the first conferee appear to be making eye contact with a second conferee who is watching the virtual image video stream.
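The patent describes synthesizing a virtual-camera view so the conferee appears to make eye contact. A minimal sketch of one ingredient, linear view morphing of matched feature points between the two camera views, is shown below (an illustrative simplification of the full synthesis pipeline; names are hypothetical):

```python
import numpy as np

def synthesize_virtual_points(left_pts, right_pts, alpha=0.5):
    """Linear view morphing of matched feature points (N x 2 arrays):
    alpha=0 reproduces the left view, alpha=1 the right view, and
    alpha=0.5 approximates a virtual camera midway between the two,
    i.e. roughly behind the display where the remote conferee's
    eyes appear."""
    return (1.0 - alpha) * left_pts + alpha * right_pts
```

The full method additionally warps image texture along the matched features and contours to render the virtual image video stream; the point morphing above only illustrates where the synthesized features land.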
    • 5. Invention patent application
    • RECOVERING DIS-OCCLUDED AREAS USING TEMPORAL INFORMATION INTEGRATION
    • US20130294710A1
    • 2013-11-07
    • US13463934
    • 2012-05-04
    • Philip Andrew Chou; Cha Zhang; Zhengyou Zhang; Shujie Liu
    • G06K9/32
    • G06K9/32; G06T7/593
    • A temporal information integration dis-occlusion system and method for using historical data to reconstruct a virtual view containing an occluded area. Embodiments of the system and method use temporal information of the scene captured previously to obtain a total history. This total history is warped onto information captured by a camera at a current time in order to help reconstruct the dis-occluded areas. The historical data (or frames) from the total history match only a portion of the frames contained in the captured information. This warping yields warped history information. Warping is performed by using one of two embodiments to match points in an estimation of the current information to points in the captured information. Next, regions of current information are split using a classifier. The warped history information and the captured information then are merged to obtain an estimate for the current information and the reconstructed virtual view.
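The core merge step described above, filling only the dis-occluded pixels of the current frame from the temporally warped history, can be sketched as follows (an illustrative simplification; the warping and classifier-based region splitting are assumed to have already produced `warped_history` and `hole_mask`):

```python
import numpy as np

def fill_disocclusions(current, hole_mask, warped_history):
    """Merge step of temporal dis-occlusion recovery: pixels flagged
    dis-occluded (hole_mask True) are taken from the warped total
    history; all other pixels keep the values captured by the camera
    at the current time."""
    out = current.copy()
    out[hole_mask] = warped_history[hole_mask]
    return out
```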
    • 6. Granted invention patent
    • Multiple category learning for training classifiers
    • US08401979B2
    • 2013-03-19
    • US12618799
    • 2009-11-16
    • Cha Zhang; Zhengyou Zhang
    • G06F15/18
    • G06N99/005
    • Described is multiple category learning to jointly train a plurality of classifiers in an iterative manner. Each training iteration associates an adaptive label with each training example; during the iterations, the adaptive label of any example can be changed by subsequent reclassification. In this manner, any mislabeled training example is corrected by the classifiers during training. The training may use a probabilistic multiple category boosting algorithm that maintains probability data provided by the classifiers, or a winner-take-all multiple category boosting algorithm that selects the adaptive label based upon the highest-probability classification. The multiple category boosting training system may be coupled to a multiple instance learning mechanism to obtain the training examples. The trained classifiers may be used as weak classifiers that provide a label used to select a deep classifier for further classification, e.g., to provide a multi-view object detector.
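As a toy sketch of the winner-take-all idea (not the boosting algorithm in the patent), the loop below refits one simple per-category classifier each iteration, here a nearest-centroid scorer standing in for a boosted classifier, then reassigns each example the adaptive label of its highest-scoring category, so mislabeled examples can be corrected during training:

```python
import numpy as np

def winner_take_all_relabel(X, labels, n_iter=10):
    """Toy winner-take-all multiple-category learning on data X
    (N x D) with integer labels (N,). Each iteration fits one
    nearest-centroid 'classifier' per category and reassigns every
    example the label of the closest centroid. Assumes no category
    empties out during reassignment."""
    labels = labels.copy()
    for _ in range(n_iter):
        cats = np.unique(labels)
        centroids = np.stack([X[labels == c].mean(axis=0) for c in cats])
        # Distance to each centroid; the smallest distance wins.
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = cats[np.argmin(dist, axis=1)]
    return labels
```

On well-separated clusters with a few flipped labels, the reassignment step corrects the flips, mirroring the abstract's point that mislabeled examples are corrected during training.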
    • 8. Granted invention patent
    • Depth reconstruction using plural depth capture units
    • US09536312B2
    • 2017-01-03
    • US13107986
    • 2011-05-16
    • Cha Zhang; Wenwu Zhu; Zhengyou Zhang; Philip A. Chou
    • G06T7/00; G06K9/20
    • G06T7/521; G06K9/2036; G06T7/593; G06T2207/10048; G06T2207/10152; G06T2207/30196
    • A depth construction module is described that receives depth images provided by two or more depth capture units. Each depth capture unit generates its depth image using a structured light technique, that is, by projecting a pattern onto an object and receiving a captured image in response thereto. The depth construction module then identifies at least one deficient portion in at least one depth image that has been received, which may be attributed to overlapping projected patterns that impinge the object. The depth construction module then uses a multi-view reconstruction technique, such as a plane sweeping technique, to supply depth information for the deficient portion. In another mode, a multi-view reconstruction technique can be used to produce an entire depth scene based on captured images received from the depth capture units, that is, without first identifying deficient portions in the depth images.
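A minimal sketch of the plane-sweeping idea mentioned above, restricted to fronto-parallel planes on rectified image pairs (an illustrative simplification, not the patented multi-view pipeline): for each pixel, test a set of depth (disparity) hypotheses and keep the one with the lowest matching cost between the two views:

```python
import numpy as np

def plane_sweep_depth(left, right, max_disp):
    """Fronto-parallel plane sweep over 1-D disparity hypotheses on a
    rectified grayscale pair (h x w arrays). For each pixel, the
    disparity whose shifted right-image value best matches the left
    image (absolute-difference cost) is selected. Pixels with no valid
    comparison at a given disparity keep an infinite cost there."""
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        if d == 0:
            costs[0] = np.abs(left - right)
        else:
            costs[d, :, d:] = np.abs(left[:, d:] - right[:, :-d])
    return np.argmin(costs, axis=0)
```

In the patent's setting, such a sweep supplies depth only for the deficient portions where overlapping projected patterns corrupt the structured-light estimates; this sketch applies it to a whole frame for simplicity.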