    • 1. Invention Application
      Title: Real-time Bayesian 3D pose tracking
      Publication No.: US20070122001A1 (published 2007-05-31)
      Application No.: US11290135 (filed 2005-11-30)
      Inventors: Qiang Wang, Weiwei Zhang, Xiaoou Tang, Heung-Yeung Shum
      IPC: G06K9/00
      CPC: G06K9/00208, G06K9/00241
      Abstract: Systems and methods are described for real-time Bayesian 3D pose tracking. In one implementation, exemplary systems and methods formulate key-frame based differential pose tracking in a probabilistic graphical model. An exemplary system receives live captured video as input and tracks a video object's 3D pose in real-time based on the graphical model. An exemplary Bayesian inter-frame motion inference technique simultaneously performs online point matching and pose estimation. This provides robust pose tracking because the relative pose estimate for a current frame is simultaneously estimated from two independent sources, from a key-frame pool and from the video frame preceding the current frame. Then, an exemplary online Bayesian frame fusion technique infers the current pose from the two independent sources, providing stable and drift-free tracking, even during agile motion, occlusion, scale change, and drastic illumination change of the tracked object.
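The fusion step in the abstract above combines two independent pose estimates, one propagated from the key-frame pool and one from the preceding frame. Under Gaussian assumptions, fusing two independent estimates reduces to precision-weighted averaging. The sketch below is a minimal, hypothetical illustration of that idea for a single pose parameter, not the patented implementation:

```python
def fuse_pose_estimates(pose_a, var_a, pose_b, var_b):
    """Fuse two independent Gaussian estimates of the same pose parameter
    (e.g. one derived from a key-frame pool, one from the preceding frame)
    by precision weighting; this is the MAP estimate under a Gaussian model.

    Returns the fused estimate and its (reduced) variance.
    """
    prec_a = 1.0 / var_a  # precision = inverse variance
    prec_b = 1.0 / var_b
    fused_var = 1.0 / (prec_a + prec_b)
    fused_pose = fused_var * (prec_a * pose_a + prec_b * pose_b)
    return fused_pose, fused_var
```

For a full 6-DOF pose this would be applied with covariance matrices rather than scalar variances, but the precision-weighting structure is the same; note how the fused variance is always smaller than either input variance, which is why combining the two sources stabilizes tracking.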
    • 2. Invention Application
      Title: Digital Video Effects
      Publication No.: US20070216675A1 (published 2007-09-20)
      Application No.: US11467859 (filed 2006-08-28)
      Inventors: Jian Sun, Qiang Wang, Weiwei Zhang, Xiaoou Tang, Heung-Yeung Shum
      IPC: H04N13/04, G06T15/00
      CPC: G06T11/00
      Abstract: Digital video effects are described. In one aspect, a foreground object in a video stream is identified. The video stream comprises multiple image frames. The foreground object is modified by rendering a 3-dimensional (3-D) visual feature over the foreground object for presentation to a user in a modified video stream. Pose of the foreground object is tracked in 3-D space across respective ones of the image frames to identify when the foreground object changes position in respective ones of the image frames. Based on this pose tracking, aspect ratio of the 3-D visual feature is adaptively modified and rendered over the foreground object in corresponding image frames for presentation to the user in the modified video stream.
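The abstract above describes adapting the aspect ratio of a rendered 3-D overlay as the tracked foreground object changes pose. One simple way such an adaptation could work is perspective foreshortening: as the object yaws away from the camera, a planar overlay's rendered width shrinks roughly with the cosine of the yaw angle. The following is a hypothetical sketch of that geometric idea only, not the patent's rendering pipeline:

```python
import math

def feature_size_for_pose(base_w, base_h, yaw_rad):
    """Adapt the rendered width of a planar 3-D overlay to the tracked
    object's yaw: foreshorten horizontally by |cos(yaw)|, changing the
    overlay's aspect ratio while its height stays fixed.

    A small floor keeps the overlay from collapsing to zero width when
    the object is viewed edge-on (yaw near 90 degrees).
    """
    width = base_w * max(abs(math.cos(yaw_rad)), 0.05)
    return width, base_h
```

In a real effects pipeline the scale factor would come from the full 3-D pose estimate per frame, but the per-frame recomputation of the overlay's aspect ratio shown here is the core of the adaptive behavior described.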
    • 3. Granted Patent
      Title: Digital video effects
      Patent No.: US08026931B2 (granted 2011-09-27)
      Application No.: US11467859 (filed 2006-08-28)
      Inventors: Jian Sun, Qiang Wang, Weiwei Zhang, Xiaoou Tang, Heung-Yeung Shum
      IPC: G09G5/00, G06K9/34
      CPC: G06T11/00
      Abstract: Digital video effects are described. In one aspect, a foreground object in a video stream is identified. The video stream comprises multiple image frames. The foreground object is modified by rendering a 3-dimensional (3-D) visual feature over the foreground object for presentation to a user in a modified video stream. Pose of the foreground object is tracked in 3-D space across respective ones of the image frames to identify when the foreground object changes position in respective ones of the image frames. Based on this pose tracking, aspect ratio of the 3-D visual feature is adaptively modified and rendered over the foreground object in corresponding image frames for presentation to the user in the modified video stream.
    • 4. Granted Patent
      Title: Real-time Bayesian 3D pose tracking
      Patent No.: US07536030B2 (granted 2009-05-19)
      Application No.: US11290135 (filed 2005-11-30)
      Inventors: Qiang Wang, Weiwei Zhang, Xiaoou Tang, Heung-Yeung Shum
      IPC: G06K9/00
      CPC: G06K9/00208, G06K9/00241
      Abstract: Systems and methods are described for real-time Bayesian 3D pose tracking. In one implementation, exemplary systems and methods formulate key-frame based differential pose tracking in a probabilistic graphical model. An exemplary system receives live captured video as input and tracks a video object's 3D pose in real-time based on the graphical model. An exemplary Bayesian inter-frame motion inference technique simultaneously performs online point matching and pose estimation. This provides robust pose tracking because the relative pose estimate for a current frame is simultaneously estimated from two independent sources, from a key-frame pool and from the video frame preceding the current frame. Then, an exemplary online Bayesian frame fusion technique infers the current pose from the two independent sources, providing stable and drift-free tracking, even during agile motion, occlusion, scale change, and drastic illumination change of the tracked object.
    • 5. Invention Application
      Title: Automatic 3D Face-Modeling From Video
      Publication No.: US20070091085A1 (published 2007-04-26)
      Application No.: US11465369 (filed 2006-08-17)
      Inventors: Qiang Wang, Heung-Yeung Shum, Xiaoou Tang
      IPC: G06T17/00
      CPC: G06T17/20, G06T7/55, G06T2200/08
      Abstract: Systems and methods perform automatic 3D face modeling. In one implementation, a brief video clip of a user's head turning from front to side provides enough input for automatically achieving a model that includes 2D feature matches, 3D head pose, 3D face shape, and facial textures. The video clip of the user may be of poor quality. In a two layer iterative method, the video clip is divided into segments. Flow-based feature estimation and model-based feature refinement are applied recursively to each segment. Then the feature estimation and refinement are iteratively applied across all the segments. The entire modeling method is automatic and the two layer iterative method provides speed and efficiency, especially when sparse bundle adjustment is applied to boost efficiency.
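The two-layer iteration in the abstract above — an inner layer that alternates flow-based feature estimation with model-based refinement per segment, and an outer layer that repeats the pass over all segments — has a simple control-flow skeleton. The sketch below is a hypothetical outline of that structure; `estimate_flow` and `refine_with_model` stand in for the actual estimation and refinement steps, which the abstract does not specify in detail:

```python
def two_layer_refine(segments, estimate_flow, refine_with_model, outer_iters=3):
    """Two-layer iterative refinement over video segments.

    Inner layer (per segment): flow-based feature estimation followed by
    model-based refinement. Outer layer: repeat the pass over all segments
    so that refined features from one pass seed the estimation in the next.
    """
    features = [None] * len(segments)  # per-segment feature state
    for _ in range(outer_iters):       # outer layer: iterate across all segments
        for i, seg in enumerate(segments):
            features[i] = estimate_flow(seg, features[i])  # inner: flow-based estimation
            features[i] = refine_with_model(features[i])   # inner: model-based refinement
    return features
```

The benefit of the structure is that each outer pass starts from the previous pass's refined features, so per-segment estimates improve across iterations without reprocessing the whole clip as one monolithic optimization.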
    • 6. Granted Patent
      Title: Automatic 3D face-modeling from video
      Patent No.: US07755619B2 (granted 2010-07-13)
      Application No.: US11465369 (filed 2006-08-17)
      Inventors: Qiang Wang, Heung-Yeung Shum, Xiaoou Tang
      IPC: G06T15/00
      CPC: G06T17/20, G06T7/55, G06T2200/08
      Abstract: Systems and methods perform automatic 3D face modeling. In one implementation, a brief video clip of a user's head turning from front to side provides enough input for automatically achieving a model that includes 2D feature matches, 3D head pose, 3D face shape, and facial textures. The video clip of the user may be of poor quality. In a two layer iterative method, the video clip is divided into segments. Flow-based feature estimation and model-based feature refinement are applied recursively to each segment. Then the feature estimation and refinement are iteratively applied across all the segments. The entire modeling method is automatic and the two layer iterative method provides speed and efficiency, especially when sparse bundle adjustment is applied to boost efficiency.