    • 1. Granted invention patent
    • Method for encoding and decoding free viewpoint videos
    • Publication number: US07324594B2
    • Publication date: 2008-01-29
    • Application number: US10723035
    • Filing date: 2003-11-26
    • Inventors: Edouard Lamboray; Michael Waschbüsch; Stephan Würmlin; Markus Gross; Hanspeter Pfister
    • IPC classification: H04N7/12
    • CPC classification: G06T9/001; G06T9/00; H04N19/20; H04N19/597
    • Abstract: A system encodes videos acquired of a moving object in a scene by multiple fixed cameras. Camera calibration data of each camera are first determined. The camera calibration data of each camera are associated with the corresponding video. A segmentation mask for each frame of each video is determined. The segmentation mask identifies only foreground pixels in the frame associated with the object. A shape encoder then encodes the segmentation masks, a position encoder encodes a position of each pixel, and a color encoder encodes a color of each pixel. The encoded data can be combined into a single bitstream and transferred to a decoder. At the decoder, the bitstream is decoded to an output video having an arbitrary user selected viewpoint. A dynamic 3D point model defines a geometry of the moving object. Splat sizes and surface normals used during the rendering can be explicitly determined by the encoder, or explicitly by the decoder.
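The per-frame encoding stage this abstract describes — a segmentation mask selecting foreground pixels, followed by separate shape, position, and color encoding combined into one stream — can be sketched as below. All names, and the run-length coding used for the shape stream, are illustrative assumptions for the sketch, not the patent's actual codecs.

```python
# Hypothetical sketch of the encoding stage described in the abstract of
# US07324594B2. The run-length shape coder and the dict "bitstream" are
# simplifying assumptions; the patent does not specify these choices here.

def encode_frame(frame, mask):
    """frame: 2D list of (r, g, b) tuples; mask: 2D list of bools
    marking foreground pixels that belong to the moving object."""
    # Shape encoder: run-length encode the flattened binary mask.
    flat = [m for row in mask for m in row]
    shape_code, run, current = [], 0, flat[0]
    for m in flat:
        if m == current:
            run += 1
        else:
            shape_code.append((current, run))
            current, run = m, 1
    shape_code.append((current, run))

    # Position and color encoders: emit only the foreground pixels.
    positions, colors = [], []
    for y, row in enumerate(mask):
        for x, fg in enumerate(row):
            if fg:
                positions.append((x, y))
                colors.append(frame[y][x])

    # Combine the three coded streams into a single "bitstream".
    return {"shape": shape_code, "pos": positions, "color": colors}
```

A decoder would invert the run-length code to recover the mask, then place each transmitted color at its transmitted position; everything outside the mask is background and never travels in the stream.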
    • 3. Granted invention patent
    • Method and system for generating a 3D representation of a dynamically changing 3D scene
    • Publication number: US09406131B2
    • Publication date: 2016-08-02
    • Application number: US12302928
    • Filing date: 2007-05-24
    • Inventors: Stephan Würmlin; Christoph Niederberger
    • IPC classification: G06T7/20; G06T7/00; G06T15/20; A63B24/00; G06T5/00
    • CPC classification: G06T7/002; A63B2024/0025; G06K9/00724; G06T5/005; G06T7/20; G06T7/292; G06T7/593; G06T7/85; G06T15/205; G06T2200/08; G06T2207/30221; G06T2207/30241
    • Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time. As a result, the quality, speed and robustness of the 2D tracking in the video streams is improved.
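The key idea in this abstract is the feedback loop: 2D tracks from at least two views are fused into a 3D position, and earlier 3D positions are projected back into each view to guide the next round of 2D tracking. A minimal sketch of that loop, assuming two orthographic cameras (a front view dropping z and a top view dropping y) and a constant-velocity motion model — both simplifying assumptions not taken from the patent:

```python
# Illustrative sketch of the 2D-to-3D-to-2D feedback loop described in
# US09406131B2. Orthographic cameras and constant-velocity prediction are
# assumptions made for brevity; real camera parameters include orientation
# and zoom, as the abstract notes.

def combine_to_3d(front_xy, top_xz):
    """Fuse two 2D observations into a 3D position. The front camera
    observes (x, y), the top camera (x, z); x is seen by both, so the
    sketch averages it."""
    x = (front_xy[0] + top_xz[0]) / 2.0
    return (x, front_xy[1], top_xz[1])

def predict_next(p_prev, p_curr):
    """Constant-velocity prediction from two earlier 3D positions."""
    return tuple(c + (c - p) for p, c in zip(p_prev, p_curr))

def project(p3d, view):
    """Project a predicted 3D point back into a camera view, giving the
    2D tracker a search seed for the next frame."""
    x, y, z = p3d
    return (x, y) if view == "front" else (x, z)
```

Seeding each 2D tracker with a projection of the predicted 3D position narrows its search window, which is exactly the claimed improvement in quality, speed, and robustness of the 2D tracking.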
    • 4. Invention patent application
    • METHOD AND SYSTEM FOR GENERATING A 3D REPRESENTATION OF A DYNAMICALLY CHANGING 3D SCENE
    • Publication number: US20090315978A1
    • Publication date: 2009-12-24
    • Application number: US12302928
    • Filing date: 2007-05-24
    • Inventors: Stephan Würmlin; Christoph Niederberger
    • IPC classification: H04N13/00; H04N5/225
    • CPC classification: G06T7/002; A63B2024/0025; G06K9/00724; G06T5/005; G06T7/20; G06T7/292; G06T7/593; G06T7/85; G06T15/205; G06T2200/08; G06T2207/30221; G06T2207/30241
    • Abstract: A method for generating a 3D representation of a dynamically changing 3D scene, which includes the steps of: acquiring at least two synchronised video streams (120) from at least two cameras located at different locations and observing the same 3D scene (102); determining camera parameters, which comprise the orientation and zoom setting, for the at least two cameras (103); tracking the movement of objects (310a,b, 312a,b; 330a,b, 331a,b, 332a,b; 410a,b, 411a,b; 430a,b, 431a,b; 420a,b, 421a,b) in the at least two video streams (104); determining the identity of the objects in the at least two video streams (105); determining the 3D position of the objects by combining the information from the at least two video streams (106); wherein the step of tracking (104) the movement of objects in the at least two video streams uses position information derived from the 3D position of the objects in one or more earlier instants in time. As a result, the quality, speed and robustness of the 2D tracking in the video streams is improved.