    • 3. Invention Application
    • SYSTEM AND METHOD FOR THREE-DIMENSIONAL ALIGNMENT OF OBJECTS USING MACHINE VISION
    • Publication No.: US20100166294A1
    • Publication Date: 2010-07-01
    • Application No.: US12345130
    • Filing Date: 2008-12-29
    • Inventors: Cyril C. Marrion; Nigel J. Foster; Lifeng Liu; David Y. Li; Guruprasad Shivaram; Aaron S. Wallack; Xiangyun Ye
    • IPC: G06K9/00
    • CPC: G06K9/00214; G06K9/6211
    • Abstract: This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further, more refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose. (See the sketch after this record.)
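The pipeline above (stereo matching on rectified images, per-pair 3D points, then a transform into world space from the world calibration) can be pictured with a minimal triangulation sketch. This is an assumed illustration using the standard rectified-stereo disparity model, not the patented implementation; the camera parameters, matched points, and identity world-calibration matrix are placeholder values.

```python
# Minimal sketch (assumed, not the patented method): matched features from one
# rectified stereo head are triangulated into camera-frame 3D points, then moved
# into world space with a 4x4 transform obtained from the world calibration.
import numpy as np

def triangulate_rectified(pts_left, pts_right, f, cx, cy, baseline):
    """Standard rectified-stereo model: depth from disparity, then back-projection."""
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = pts_left[:, 0] - pts_right[:, 0]        # x_left - x_right (pixels)
    z = f * baseline / disparity                        # depth along the optical axis
    x = (pts_left[:, 0] - cx) * z / f
    y = (pts_left[:, 1] - cy) * z / f
    return np.column_stack([x, y, z])

def to_world(points_cam, cam_to_world):
    """Apply a 4x4 rigid transform (from the world calibration) to camera-frame points."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (cam_to_world @ homog.T).T[:, :3]

# Example: two matched edge features seen by one stereo head (placeholder values).
left_pts = [(420.0, 310.0), (455.0, 298.0)]
right_pts = [(380.0, 310.0), (418.0, 298.0)]
cloud_cam = triangulate_rectified(left_pts, right_pts,
                                  f=800.0, cx=320.0, cy=240.0, baseline=0.12)
print(to_world(cloud_cam, np.eye(4)))                   # identity world calibration
```

The abstract's later stages (extracting HLGS such as line segments, corresponding them to the model, and coarse then refined scoring) operate on a point cloud produced by a step like this.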
    • 4. Invention Grant
    • System and method for finding correspondence between cameras in a three-dimensional vision system
    • Publication No.: US08600192B2
    • Publication Date: 2013-12-03
    • Application No.: US12962918
    • Filing Date: 2010-12-08
    • Inventors: Lifeng Liu; Aaron S. Wallack; Cyril C. Marrion
    • IPC: G06K9/36; G06K9/00
    • CPC: G06K9/209; G06T7/593; G06T7/73; G06T7/80; G06T2207/30164; G06T2207/30208
    • Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens. (See the sketch after this record.)
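A minimal sketch of how a non-perspective (telecentric) camera assembly can be paired with a perspective one to lift matched 2D features into 3D: a telecentric lens projects approximately orthographically, so its back-projected ray runs parallel to the optical axis, while a pinhole camera's ray passes through its projection center. This is an assumed illustration, not the patent's method; the helper names and all calibration values are placeholders.

```python
# Minimal sketch (assumed, not the patent's method): a telecentric lens projects
# approximately orthographically, so its back-projected ray is parallel to the
# optical axis; a perspective camera's ray passes through its projection center.
# A matched 2D feature is lifted to 3D by intersecting the two rays.
import numpy as np

def telecentric_ray(u, v, pixel_size, cam_to_world):
    """Back-project a pixel from an orthographic (telecentric) camera assembly."""
    origin_cam = np.array([u * pixel_size, v * pixel_size, 0.0])
    direction_cam = np.array([0.0, 0.0, 1.0])
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return R @ origin_cam + t, R @ direction_cam

def perspective_ray(u, v, f, cx, cy, cam_to_world):
    """Back-project a pixel from a pinhole (perspective) camera assembly."""
    d = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return t.copy(), R @ (d / np.linalg.norm(d))

def triangulate(o1, d1, o2, d2):
    """Closest point between two rays (least-squares midpoint method)."""
    A = np.column_stack([d1, -d2])
    sol, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    s, u = sol
    return 0.5 * ((o1 + s * d1) + (o2 + u * d2))

# Placeholder calibration: telecentric head at the origin, pinhole head offset in X.
persp_pose = np.eye(4)
persp_pose[:3, 3] = [0.2, 0.0, 0.0]
o1, d1 = telecentric_ray(150.0, 80.0, pixel_size=1e-3, cam_to_world=np.eye(4))
o2, d2 = perspective_ray(400.0, 260.0, f=900.0, cx=320.0, cy=240.0, cam_to_world=persp_pose)
print(triangulate(o1, d1, o2, d2))
```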
    • 5. Invention Application
    • SYSTEM AND METHOD FOR FINDING CORRESPONDENCE BETWEEN CAMERAS IN A THREE-DIMENSIONAL VISION SYSTEM
    • Publication No.: US20120148145A1
    • Publication Date: 2012-06-14
    • Application No.: US12962918
    • Filing Date: 2010-12-08
    • Inventors: Lifeng Liu; Aaron S. Wallack; Cyril C. Marrion
    • IPC: G06K9/00
    • CPC: G06K9/209; G06T7/593; G06T7/73; G06T7/80; G06T2207/30164; G06T2207/30208
    • Abstract: This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens.
    • 7. Invention Application
    • SYSTEM AND METHOD FOR TRAINING A MODEL IN A PLURALITY OF NON-PERSPECTIVE CAMERAS AND DETERMINING 3D POSE OF AN OBJECT AT RUNTIME WITH THE SAME
    • Publication No.: US20120147149A1
    • Publication Date: 2012-06-14
    • Application No.: US12963007
    • Filing Date: 2010-12-08
    • Inventors: Lifeng Liu; Aaron S. Wallack; Cyril C. Marrion, Jr.
    • IPC: H04N13/02
    • CPC: G06T7/75
    • Abstract: This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training time and at runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The searched object features in one of the acquired non-perspective images can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed based upon at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected location in those images. This approach can be used in training, to generate the training model, and at runtime, operating on acquired images of runtime objects. The non-perspective cameras can employ telecentric lenses. (See the sketch after this record.)
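A minimal sketch of the expected-location idea: a 2D affine transform predicts where a feature found in the first non-perspective image should appear in the second, so the search there can be confined to a small window. The abstract computes this transform from a subset of each camera's intrinsics and extrinsics; the sketch instead fits it from a few assumed correspondences, which is a simplification, and all values shown are placeholders.

```python
# Minimal sketch (assumed, simplified): a 2x3 affine map predicts where features
# found in the first non-perspective image should appear in the second. The
# abstract derives this transform from camera intrinsics/extrinsics; here it is
# fit from a few placeholder correspondences instead.
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])         # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)          # solves A @ M ~= dst
    return M.T                                           # 2x3 matrix

def expected_locations(points, affine):
    """Predicted feature locations in the second image; the refined search is
    then confined to a window around each prediction."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return pts @ affine.T

# Placeholder correspondences between camera 1 and camera 2.
cam1 = [(100.0, 50.0), (200.0, 60.0), (150.0, 180.0), (80.0, 160.0)]
cam2 = [(112.0, 48.0), (215.0, 61.0), (160.0, 178.0), (90.0, 155.0)]
affine = fit_affine_2d(cam1, cam2)
print(expected_locations([(120.0, 90.0)], affine))       # where to search in camera 2
```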
    • 10. Invention Grant
    • Fast high-accuracy multi-dimensional pattern inspection
    • Publication No.: US06658145B1
    • Publication Date: 2003-12-02
    • Application No.: US09746147
    • Filing Date: 2000-12-22
    • Inventors: William Silver; Aaron S. Wallack; Adam Wagman
    • IPC: G06K9/00
    • CPC: G06K9/6206; G06T7/75
    • Abstract: A method and apparatus are provided for identifying differences between a stored pattern and a matching image subset, where variations in pattern position, orientation, and size do not give rise to false differences. The invention is also a system for analyzing an object image with respect to a model pattern so as to detect flaws in the object image. The system includes extracting pattern features from the model pattern; generating a vector-valued function using the pattern features to provide a pattern field; extracting image features from the object image; evaluating each image feature, using the pattern field and an n-dimensional transformation that associates image features with pattern features, so as to determine at least one associated feature characteristic; and using at least one feature characteristic to identify at least one flaw in the object image. The invention can find at least two distinct kinds of flaws: missing features and extra features. The invention provides pattern inspection that is faster and more accurate than any known prior art method by using a stored pattern that represents an ideal example of the object to be found and inspected, and that can be translated, rotated, and scaled to arbitrary precision much faster than digital image re-sampling, and without pixel grid quantization errors. Furthermore, since the invention does not use digital image re-sampling, there are no pixel quantization errors to cause false differences between the pattern and image that could limit inspection performance. (See the sketch after this record.)
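A heavily simplified sketch of the two flaw classes the abstract names (missing features and extra features): image features are mapped back into model coordinates with the found pose, here a similarity transform, and a nearest-neighbor check flags unmatched features on either side. The patent's pattern field and n-dimensional transformation are not reproduced; this is only an assumed illustration with placeholder values.

```python
# Minimal sketch (assumed, heavily simplified): map image features back into model
# coordinates with the found pose, then flag model features with no nearby image
# feature as "missing" and image features with no nearby model feature as "extra".
import numpy as np

def image_to_model(image_pts, angle, scale, tx, ty):
    """Invert a model-to-image similarity pose (image = scale * R @ model + t)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return ((np.asarray(image_pts, float) - [tx, ty]) / scale) @ R

def classify_flaws(model_pts, image_pts_model, tol):
    """Nearest-neighbor check: returns indices of missing and extra features."""
    m = np.asarray(model_pts, float)
    g = np.asarray(image_pts_model, float)
    d = np.linalg.norm(m[:, None, :] - g[None, :, :], axis=2)   # pairwise distances
    missing = np.where(d.min(axis=1) > tol)[0]                  # unmatched model features
    extra = np.where(d.min(axis=0) > tol)[0]                    # unmatched image features
    return missing, extra

# Example: one model feature is absent from the image, one image feature is spurious.
model = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
image = [(5.0, 5.0), (15.0, 5.0), (15.0, 15.0), (12.0, 3.0)]     # pose: shift by (5, 5)
found = image_to_model(image, angle=0.0, scale=1.0, tx=5.0, ty=5.0)
print(classify_flaws(model, found, tol=0.5))                     # -> missing [3], extra [3]
```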