    • 31. Granted invention patent
    • System and process for generating high dynamic range video
    • US07010174B2
    • 2006-03-07
    • US10965935
    • 2004-10-15
    • Sing Bing Kang; Matthew T. Uyttendaele; Simon Winder; Richard Szeliski
    • G06K9/40
    • H04N5/2355; G06T5/50; H04N5/235; H04N5/2352; H04N5/77; H04N5/781; H04N5/85
    • A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
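A minimal sketch of the radiance-map step described in the abstract above: registered short- and long-exposure frames are converted to scene radiance and blended, trusting well-exposed pixels more. Frame registration, the exposure-control loop and tone mapping are omitted; the linear camera response and the hat-shaped trust weight are simplifying assumptions, not the patent's exact formulation.

```python
import numpy as np

def trust_weight(pixel):
    """Hat weight: near-black and near-saturated pixels are untrustworthy."""
    return 1.0 - np.abs(pixel / 255.0 - 0.5) * 2.0

def radiance_map(short_frame, long_frame, t_short, t_long):
    """Fuse two exposures (8-bit grayscale arrays) into per-pixel radiance."""
    short = short_frame.astype(np.float64)
    long_ = long_frame.astype(np.float64)
    # Assumed linear response: radiance is roughly pixel value / exposure time.
    rad_short = short / t_short
    rad_long = long_ / t_long
    w_short = trust_weight(short)
    w_long = trust_weight(long_)
    return (w_short * rad_short + w_long * rad_long) / (w_short + w_long + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    short = rng.integers(0, 256, (4, 4)).astype(np.uint8)                 # 1/120 s frame
    long_ = np.clip(short.astype(int) * 4, 0, 255).astype(np.uint8)       # 1/30 s frame
    print(radiance_map(short, long_, 1 / 120, 1 / 30))
```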
    • 34. Granted invention patent
    • Self-calibration for a catadioptric camera
    • US06870563B1
    • 2005-03-22
    • US09591781
    • 2000-06-12
    • Sing Bing Kang
    • G06T5/00; G06T7/00; H04N5/225; H04N5/228; H04N5/262
    • H04N5/2628; G06T5/006; G06T7/80
    • A method and a system for self-calibrating a wide field-of-view camera (such as a catadioptric camera) using a sequence of omni-directional images of a scene obtained from the camera. The present invention uses the consistency of pairwise features tracked across at least a portion of the image collection and uses these tracked features to determine unknown calibration parameters based on the characteristics of catadioptric imaging. More specifically, the self-calibration method of the present invention generates a sequence of omni-directional images representing a scene and tracks features across the image sequence. An objective function is defined in terms of the tracked features and an error metric (an image-based error metric in a preferred embodiment). The catadioptric imaging characteristics are defined by calibration parameters, and determination of optimal calibration parameters is accomplished by minimizing the objective function using an optimizing technique. Moreover, the present invention also includes a technique for reformulating a projection equation such that the projection equation is equivalent to that of a rectilinear perspective camera. This technique allows analyses (such as structure from motion) to be applied (subsequent to calibration of the catadioptric camera) in the same direct manner as for rectilinear image sequences.
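A toy sketch of the self-calibration idea in the abstract above: features tracked across the omnidirectional sequence feed an image-based objective, and the unknown calibration parameter is recovered by minimizing that objective. The single-parameter radial model (r_image = f·atan(r/f)), the pure-translation consistency test and the synthetic track are illustrative assumptions, not the patent's catadioptric projection equations.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def distort(points, f):
    """Forward model: perspective points -> wide field-of-view image points."""
    r = np.linalg.norm(points, axis=1, keepdims=True)
    return points * (f * np.arctan(r / f) / r)

def undistort(points, f):
    """Inverse model for a candidate parameter f."""
    r = np.linalg.norm(points, axis=1, keepdims=True)
    return points * (f * np.tan(r / f) / r)

def objective(f, obs_a, obs_b):
    """Image-based error: with the correct f, tracked features should move by one
    common 2-D translation; penalize deviation from that consistency."""
    ua, ub = undistort(obs_a, f), undistort(obs_b, f)
    d = ub - ua
    return np.sum((d - d.mean(axis=0)) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_f = 1.0
    pts = rng.uniform(0.1, 0.5, (60, 2)) * rng.choice([-1, 1], (60, 2))
    obs_a = distort(pts, true_f)                              # features in frame A
    obs_b = distort(pts + np.array([0.05, -0.02]), true_f)    # same features in frame B
    res = minimize_scalar(objective, bounds=(0.5, 3.0), method="bounded",
                          args=(obs_a, obs_b))
    print("recovered parameter:", round(res.x, 3))
```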
    • 35. Granted invention patent
    • Depth painting for 3-D rendering applications
    • US06417850B1
    • 2002-07-09
    • US09238250
    • 1999-01-27
    • Sing Bing Kang
    • G06T15/40
    • G06T15/205
    • A 3-D effect is added to a single image by adding depth to the single image. Depth can be added to the single image by selecting an arbitrary region or a number of pixels. A user interface simultaneously displays the single image and novel views of the single original image taken from virtual camera positions rotated relative to the original field of view. Depths given to the original image allow pixels to be reprojected onto the novel views to allow the user to observe the depth changes as they are being added. Functions are provided to edit gaps or voids generated in the process of adding depth to the single image. The gaps occur because of depth discontinuities between regions to which depth has been added and the voids are due to the uncovering of previously occluded surfaces in the original image.
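A minimal sketch of the reprojection step described in the abstract above: pixels of a single image, given painted per-pixel depths, are back-projected to 3-D and re-rendered from a virtual camera rotated about the vertical axis, so the user can watch depth edits take effect. The pinhole intrinsics and the simple forward-splat renderer are illustrative assumptions; the patent's gap and void editing tools are not shown.

```python
import numpy as np

def reproject(image, depth, fx, fy, cx, cy, yaw_deg):
    """Render a grayscale image with painted depths from a rotated virtual camera."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project each pixel using its painted depth.
    z = depth
    x = (xs - cx) / fx * z
    y = (ys - cy) / fy * z
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Rotate the virtual camera about the vertical (y) axis.
    t = np.deg2rad(yaw_deg)
    rot = np.array([[np.cos(t), 0, np.sin(t)],
                    [0, 1, 0],
                    [-np.sin(t), 0, np.cos(t)]])
    pts = pts @ rot.T
    out = np.zeros_like(image)
    zbuf = np.full(image.shape, np.inf)
    u = np.round(pts[:, 0] / pts[:, 2] * fx + cx).astype(int)
    v = np.round(pts[:, 1] / pts[:, 2] * fy + cy).astype(int)
    colors = image.reshape(-1)
    for ui, vi, zi, ci in zip(u, v, pts[:, 2], colors):
        if 0 <= ui < w and 0 <= vi < h and 0 < zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi          # nearest surface wins; uncovered areas stay empty
            out[vi, ui] = ci
    return out

if __name__ == "__main__":
    img = np.tile(np.linspace(0, 255, 32), (32, 1))
    depth = np.full((32, 32), 5.0)
    depth[8:24, 8:24] = 3.0            # a "painted" closer region
    novel = reproject(img, depth, fx=40.0, fy=40.0, cx=16.0, cy=16.0, yaw_deg=10)
    print(novel.shape)
```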
    • 37. Granted invention patent
    • Multi-layer image-based rendering for video synthesis
    • US06266068B1
    • 2001-07-24
    • US09039022
    • 1998-03-13
    • Sing Bing Kang; James M. Rehg
    • G06T11/60
    • G06T11/60
    • A computerized method and related computer system synthesize video from a plurality of sources of image data. The sources include a variety of image data types such a collection of image stills, a sequence of video frames, and 3-D models of objects. Each source provides image data associated with an object. One source provides image data associated with a first object, and a second source provides image data associated with a second object. The image data of the first and second objects are combined to generate composite images of the first and second objects. From the composite images, an output image of the first and second objects as viewed from an arbitrary viewpoint is generated. Gaps of pixels with unspecified pixel values may appear in the output image. Accordingly, a pixel value for each of these “missing pixels” is obtained by using an epipolar search process to determine which one of the sources of image data should provide the pixel value for that missing pixel.
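A small sketch of the layer-compositing idea in the abstract above: image data from two sources, each carrying its own per-pixel depth, is merged into one output view with a depth test so the nearer object wins at each pixel. The epipolar search the patent uses to fill pixels left unspecified by every source is only hinted at by the returned `missing` mask; the toy layers in the example are illustrative.

```python
import numpy as np

def composite(layers):
    """layers: list of (color, depth) arrays; depth = np.inf where a layer
    has no data for that pixel. Returns the composite and a mask of pixels
    that no source specified."""
    h, w = layers[0][0].shape
    out = np.zeros((h, w))
    zbuf = np.full((h, w), np.inf)
    for color, depth in layers:
        closer = depth < zbuf          # depth test: nearer layer overwrites farther one
        out[closer] = color[closer]
        zbuf[closer] = depth[closer]
    missing = ~np.isfinite(zbuf)       # pixels no source specified; the patent
    return out, missing                # resolves these with an epipolar search

if __name__ == "__main__":
    h = w = 8
    bg_color = np.full((h, w), 50.0)
    bg_depth = np.full((h, w), 10.0)
    bg_depth[:, 6:] = np.inf           # background layer has a gap on the right
    fg_color = np.full((h, w), 200.0)
    fg_depth = np.full((h, w), np.inf)
    fg_depth[2:6, 2:6] = 4.0           # a nearer object covering part of the view
    out, missing = composite([(bg_color, bg_depth), (fg_color, fg_depth)])
    print(out)
    print("unspecified pixels:", int(missing.sum()))
```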
    • 38. Granted invention patent
    • Method for reconstructing a three-dimensional object from a closed-loop sequence of images taken by an uncalibrated camera
    • US6061468A
    • 2000-05-09
    • US901391
    • 1997-07-28
    • Sing Bing Kang
    • G06T1/00; G06T7/20; G06T17/40; G06K9/00
    • G06T7/0071; G06T2207/10016
    • In a computerized method, the three-dimensional structure of an object is recovered from a closed-loop sequence of two-dimensional images taken by a camera undergoing some arbitrary motion. In one type of motion, the camera is held fixed while the object completes a full 360-degree rotation about an arbitrary axis. Alternatively, the camera can make a complete rotation about the object. In the sequence of images, feature tracking points are selected using pair-wise image registration. Ellipses are fitted to the feature tracking points to estimate the tilt of the axis of rotation. A set of variables is held at fixed values while an image-based objective function is minimized to extract a first set of structure and motion parameters. The set of variables is then freed while minimization of the objective function continues, extracting a second set of structure and motion parameters that is substantially the same as the first set.
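A rough sketch of one step described in the abstract above: a feature on an object rotating a full turn about a tilted axis traces (under near-orthographic viewing) an ellipse in the image, and fitting that ellipse yields an estimate of the axis tilt from the ratio of its axes. The orthographic assumption and the SVD conic fit are simplifications for illustration, not the patent's full objective-function minimization.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of ax^2 + bxy + cy^2 + dx + ey + f = 0 (SVD null vector)."""
    d = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(d)
    return vt[-1]

def tilt_from_track(x, y):
    """Estimate axis tilt (degrees) from one tracked feature's image trajectory."""
    a, b, c, *_ = fit_conic(x, y)
    # Axis lengths of the ellipse come from the eigenvalues of [[a, b/2], [b/2, c]].
    evals = np.linalg.eigvalsh(np.array([[a, b / 2.0], [b / 2.0, c]]))
    ratio = np.sqrt(min(abs(evals)) / max(abs(evals)))   # minor/major axis ratio
    return np.degrees(np.arccos(ratio))                  # foreshortening gives the tilt

if __name__ == "__main__":
    tilt = np.radians(30.0)
    t = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    # Circular 3-D track about a tilted axis, imaged orthographically.
    x = np.cos(t)
    y = np.sin(t) * np.cos(tilt) + 0.2                   # foreshortened and offset
    print("estimated tilt (deg):", tilt_from_track(x, y))
```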
    • 40. Published invention application
    • Automatic 2D-to-stereoscopic video conversion
    • US20130147911A1
    • 2013-06-13
    • US13315488
    • 2011-12-09
    • Kevin Robert Karsch; Ce Liu; Sing Bing Kang
    • H04N13/00
    • H04N13/261
    • In general, a “Stereoscopic Video Converter” (SVC) provides various techniques for automatically converting arbitrary 2D video sequences into perceptually plausible stereoscopic or “3D” versions while optionally generating dense depth maps for every frame of the video sequence. In particular, the automated 2D-to-3D conversion process first automatically estimates scene depth for each frame of an input video sequence via a label transfer process that matches features extracted from those frames with features from a database of images and videos having known ground truth depths. The estimated depth distributions for all image frames of the input video sequence are then used by the SVC for automatically generating a “right view” of a corresponding stereoscopic image for each frame (assuming that each original input frame represents the “left view” of the stereoscopic image).
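A compact sketch of the view-synthesis step in the abstract above: treating each input frame as the left eye, a right-eye view is produced by shifting pixels horizontally by a disparity derived from the estimated depth map. The depth-from-label-transfer stage is assumed to have already produced `depth`; the disparity scale and the simple far-to-near forward warp are illustrative choices, not the SVC's actual pipeline.

```python
import numpy as np

def synthesize_right_view(left, depth, max_disparity=8):
    """Forward-warp a grayscale left image; nearer pixels get larger disparity."""
    h, w = left.shape
    near = depth.max() - depth                              # invert: near -> large
    disp = (near / (near.max() + 1e-8) * max_disparity).astype(int)
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    # Warp far-to-near so nearer pixels overwrite farther ones.
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    for y, x in zip(ys, xs):
        xr = x - disp[y, x]                                 # right eye sees objects shifted left
        if 0 <= xr < w:
            right[y, xr] = left[y, x]
            filled[y, xr] = True
    # Disocclusions (never written) are left for a later hole-filling pass.
    return right, ~filled

if __name__ == "__main__":
    left = np.tile(np.linspace(0, 255, 16), (16, 1))
    depth = np.full((16, 16), 10.0)
    depth[4:12, 4:12] = 2.0                                 # a near object in the middle
    right, holes = synthesize_right_view(left, depth)
    print("hole pixels:", int(holes.sum()))
```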