    • 1. Invention application
    • Digital Video Effects
    • US20070216675A1
    • 2007-09-20
    • US11467859
    • 2006-08-28
    • Jian Sun; Qiang Wang; Weiwei Zhang; Xiaoou Tang; Heung-Yeung Shum
    • H04N13/04; G06T15/00
    • G06T11/00
    • Digital video effects are described. In one aspect, a foreground object in a video stream is identified. The video stream comprises multiple image frames. The foreground object is modified by rendering a 3-dimensional (3-D) visual feature over the foreground object for presentation to a user in a modified video stream. Pose of the foreground object is tracked in 3-D space across respective ones of the image frames to identify when the foreground object changes position in respective ones of the image frames. Based on this pose tracking, aspect ratio of the 3-D visual feature is adaptively modified and rendered over the foreground object in corresponding image frames for presentation to the user in the modified video stream.
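The abstract above describes rendering a 3-D visual feature over a tracked foreground object and adapting the feature's aspect ratio to the object's pose. The following is a minimal illustrative sketch, not the patent's implementation: it assumes a per-frame bounding box and yaw angle for the foreground object and simply foreshortens and alpha-blends a 2-D overlay; all names and values are hypothetical.

```python
import numpy as np

def composite_overlay(frame, overlay_rgba, bbox, yaw_deg):
    """Resize an RGBA overlay so its aspect ratio follows the tracked object's
    yaw (foreshortening its width), then alpha-blend it over the object's
    bounding box in the frame."""
    x, y, w, h = bbox
    # Foreshorten the overlay's width by cos(yaw): a crude stand-in for
    # re-rendering the 3-D feature under the tracked pose.
    w_eff = max(1, int(round(w * abs(np.cos(np.radians(yaw_deg))))))
    # Nearest-neighbour resize of the overlay to (h, w_eff).
    rows = np.arange(h) * overlay_rgba.shape[0] // h
    cols = np.arange(w_eff) * overlay_rgba.shape[1] // w_eff
    patch = overlay_rgba[rows][:, cols]
    rgb = patch[..., :3].astype(float)
    alpha = patch[..., 3:4].astype(float) / 255.0
    region = frame[y:y + h, x:x + w_eff].astype(float)
    frame[y:y + h, x:x + w_eff] = (alpha * rgb + (1.0 - alpha) * region).astype(frame.dtype)
    return frame

# Usage: a blank 480x640 frame, a 64x64 white overlay, object seen at 40 degrees of yaw.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
overlay = np.full((64, 64, 4), 255, dtype=np.uint8)
frame = composite_overlay(frame, overlay, bbox=(100, 80, 120, 160), yaw_deg=40.0)
```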
    • 2. Invention grant
    • Digital video effects
    • US08026931B2
    • 2011-09-27
    • US11467859
    • 2006-08-28
    • Jian Sun; Qiang Wang; Weiwei Zhang; Xiaoou Tang; Heung-Yeung Shum
    • G09G5/00; G06K9/34
    • G06T11/00
    • Digital video effects are described. In one aspect, a foreground object in a video stream is identified. The video stream comprises multiple image frames. The foreground object is modified by rendering a 3-dimensional (3-D) visual feature over the foreground object for presentation to a user in a modified video stream. Pose of the foreground object is tracked in 3-D space across respective ones of the image frames to identify when the foreground object changes position in respective ones of the image frames. Based on this pose tracking, aspect ratio of the 3-D visual feature is adaptively modified and rendered over the foreground object in corresponding image frames for presentation to the user in the modified video stream.
    • 3. Invention application
    • Real-time Bayesian 3D pose tracking
    • US20070122001A1
    • 2007-05-31
    • US11290135
    • 2005-11-30
    • Qiang Wang; Weiwei Zhang; Xiaoou Tang; Heung-Yeung Shum
    • G06K9/00
    • G06K9/00208; G06K9/00241
    • Systems and methods are described for real-time Bayesian 3D pose tracking. In one implementation, exemplary systems and methods formulate key-frame based differential pose tracking in a probabilistic graphical model. An exemplary system receives live captured video as input and tracks a video object's 3D pose in real-time based on the graphical model. An exemplary Bayesian inter-frame motion inference technique simultaneously performs online point matching and pose estimation. This provides robust pose tracking because the relative pose estimate for a current frame is simultaneously estimated from two independent sources, from a key-frame pool and from the video frame preceding the current frame. Then, an exemplary online Bayesian frame fusion technique infers the current pose from the two independent sources, providing stable and drift-free tracking, even during agile motion, occlusion, scale change, and drastic illumination change of the tracked object.
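The abstract above fuses a pose estimated against a key-frame with one propagated from the preceding frame. The sketch below illustrates one simple way to fuse two independent estimates, assuming each is a Gaussian over a 6-D pose vector; the patent's probabilistic graphical-model formulation is richer, and the variable names and covariances here are assumptions.

```python
import numpy as np

def fuse_pose(mu_key, cov_key, mu_prev, cov_prev):
    """Precision-weighted (product-of-Gaussians) fusion of two pose estimates:
    one relative to a key-frame, one propagated from the previous frame.
    Each estimate is (mean, covariance) over a 6-D pose vector
    (3 rotation + 3 translation parameters)."""
    p_key, p_prev = np.linalg.inv(cov_key), np.linalg.inv(cov_prev)
    cov_fused = np.linalg.inv(p_key + p_prev)              # combined uncertainty
    mu_fused = cov_fused @ (p_key @ mu_key + p_prev @ mu_prev)
    return mu_fused, cov_fused

# Usage: the key-frame estimate is more certain, so the fused pose leans toward it.
mu_k = np.array([0.02, -0.01, 0.00, 1.0, 2.0, 5.0])
mu_p = np.array([0.05,  0.01, 0.02, 1.1, 2.1, 5.2])
pose, cov = fuse_pose(mu_k, 0.01 * np.eye(6), mu_p, 0.04 * np.eye(6))
```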
    • 4. Invention application
    • Bi-Directional Tracking Using Trajectory Segment Analysis
    • US20070086622A1
    • 2007-04-19
    • US11380635
    • 2006-04-27
    • Jian Sun; Weiwei Zhang; Xiaoou Tang; Heung-Yeung Shum
    • G06K9/00; G06K9/34
    • G06K9/3241; G06K9/32; G06T7/277
    • The present video tracking technique outputs a Maximum A Posterior (MAP) solution for a target object based on two object templates obtained from a start and an end keyframe of a whole state sequence. The technique first minimizes the whole state space of the sequence by generating a sparse set of local two-dimensional modes in each frame of the sequence. The two-dimensional modes are converted into three-dimensional points within a three-dimensional volume. The three-dimensional points are clustered using a spectral clustering technique where each cluster corresponds to a possible trajectory segment of the target object. If there is occlusion in the sequence, occlusion segments are generated so that an optimal trajectory of the target object can be obtained.
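The abstract above lifts per-frame 2-D candidate modes into a 3-D (x, y, t) volume and groups them into trajectory segments with spectral clustering. A minimal sketch of that clustering step follows, using synthetic candidate points and scikit-learn's SpectralClustering; the data, cluster count, and kernel width are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
t = np.arange(30)
# Two synthetic candidate tracks: the target moving right, a distractor moving down.
target = np.stack([5 + 2 * t + rng.normal(0, 0.5, 30),
                   40 + rng.normal(0, 0.5, 30), t], axis=1)
distractor = np.stack([120 + rng.normal(0, 0.5, 30),
                       5 + 1.5 * t + rng.normal(0, 0.5, 30), t], axis=1)
points = np.vstack([target, distractor])    # (x, y, t) points in the 3-D volume

# Each resulting cluster corresponds to a candidate trajectory segment.
labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=0.05,
                            random_state=0).fit_predict(points)
```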
    • 6. Invention grant
    • Real-time Bayesian 3D pose tracking
    • US07536030B2
    • 2009-05-19
    • US11290135
    • 2005-11-30
    • Qiang Wang; Weiwei Zhang; Xiaoou Tang; Heung-Yeung Shum
    • G06K9/00
    • G06K9/00208; G06K9/00241
    • Systems and methods are described for real-time Bayesian 3D pose tracking. In one implementation, exemplary systems and methods formulate key-frame based differential pose tracking in a probabilistic graphical model. An exemplary system receives live captured video as input and tracks a video object's 3D pose in real-time based on the graphical model. An exemplary Bayesian inter-frame motion inference technique simultaneously performs online point matching and pose estimation. This provides robust pose tracking because the relative pose estimate for a current frame is simultaneously estimated from two independent sources, from a key-frame pool and from the video frame preceding the current frame. Then, an exemplary online Bayesian frame fusion technique infers the current pose from the two independent sources, providing stable and drift-free tracking, even during agile motion, occlusion, scale change, and drastic illumination change of the tracked object.
    • 7. Invention grant
    • Background removal in a live video
    • US07720283B2
    • 2010-05-18
    • US11469371
    • 2006-08-31
    • Jian Sun; Heung-Yeung Shum; Xiaoou Tang; Weiwei Zhang
    • G06K9/34
    • G06K9/38; G06T7/11; G06T7/90; G06T2207/10016
    • Exemplary systems and methods segment a foreground from a background image in a video sequence. In one implementation, a system refines a segmentation boundary between the foreground and the background image by attenuating background contrast while preserving contrast of the segmentation boundary itself, providing an accurate background cut of live video in real time. A substitute background may then be merged with the segmented foreground within the live video. The system can apply an adaptive background color mixture model to improve segmentation of foreground from background under various background changes, such as camera movement, illumination change, and movement of small objects in the background.
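The abstract above segments the foreground of live video and adapts a background color model to camera movement and illumination change. The sketch below is a simplified stand-in, assuming a single per-pixel Gaussian rather than the mixture model the abstract describes, to illustrate the segment-then-adapt loop; class and parameter names are hypothetical.

```python
import numpy as np

class BackgroundModel:
    """Per-pixel running Gaussian model of background color."""
    def __init__(self, first_frame, lr=0.05, threshold=3.0):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.lr, self.threshold = lr, threshold

    def segment(self, frame):
        """Return a boolean foreground mask and adapt the background model."""
        frame = frame.astype(float)
        # Distance of each pixel from its background Gaussian, summed over channels.
        dist = np.sqrt(((frame - self.mean) ** 2 / self.var).sum(axis=-1))
        fg = dist > self.threshold
        # Update only pixels judged to be background, so the model tracks
        # gradual illumination change and small background motion.
        bg = ~fg[..., None]
        diff_sq = (frame - self.mean) ** 2
        self.mean = np.where(bg, (1 - self.lr) * self.mean + self.lr * frame, self.mean)
        self.var = np.where(bg, (1 - self.lr) * self.var + self.lr * diff_sq, self.var)
        return fg

# Usage: seed with the first frame, then segment each following frame of the live video.
first = np.zeros((120, 160, 3), dtype=np.uint8)
model = BackgroundModel(first)
mask = model.segment(np.full((120, 160, 3), 40, dtype=np.uint8))
```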
    • 8. Invention grant
    • Bi-directional tracking using trajectory segment analysis
    • US07817822B2
    • 2010-10-19
    • US11380635
    • 2006-04-27
    • Jian Sun; Weiwei Zhang; Xiaoou Tang; Heung-Yeung Shum
    • G06K9/00
    • G06K9/3241; G06K9/32; G06T7/277
    • The present video tracking technique outputs a Maximum A Posterior (MAP) solution for a target object based on two object templates obtained from a start and an end keyframe of a whole state sequence. The technique first minimizes the whole state space of the sequence by generating a sparse set of local two-dimensional modes in each frame of the sequence. The two-dimensional modes are converted into three-dimensional points within a three-dimensional volume. The three-dimensional points are clustered using a spectral clustering technique where each cluster corresponds to a possible trajectory segment of the target object. If there is occlusion in the sequence, occlusion segments are generated so that an optimal trajectory of the target object can be obtained.