    • 4. Invention application
    • SYSTEM AND METHOD FOR THREE-DIMENSIONAL ALIGNMENT OF OBJECTS USING MACHINE VISION
    • Publication number: WO2010077524A1
    • Publication date: 2010-07-08
    • Application number: PCT/US2009/066247
    • Filing date: 2009-12-01
    • Applicant(s): COGNEX CORPORATION; MARRION, Cyril, C.; FOSTER, Nigel, J.; LIU, Lifeng; LI, David, Y.; SHIVARAM, Guruprasad; WALLACK, Aaron, S.; YE, Xiangyun
    • Inventor(s): MARRION, Cyril, C.; FOSTER, Nigel, J.; LIU, Lifeng; LI, David, Y.; SHIVARAM, Guruprasad; WALLACK, Aaron, S.; YE, Xiangyun
    • IPC: G06K9/64; G06K9/00
    • CPC: G06K9/00214; G06K9/6211
    • This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.
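The abstract above outlines a concrete pipeline: matched features in rectified stereo images are triangulated into 3D points per camera pair, and each pair's points are transformed through its world calibration into a common point cloud. The sketch below illustrates only those two steps, using the standard rectified-stereo depth relation Z = fx·B/d; the edge enhancement, HLGS extraction, and coarse/refined pose scoring are omitted, and all function and parameter names are illustrative assumptions rather than code from the patent.

```python
import numpy as np

def triangulate_rectified(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project one matched feature from a rectified stereo pair into the
    left camera's frame, using the standard rectified-stereo relations:
        Z = fx * baseline / disparity, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    """
    z = fx * baseline / disparity
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def to_world(points_cam, R_world_cam, t_world_cam):
    """Map an (N, 3) array of camera-frame points into world coordinates using
    this stereo head's world calibration: p_world = R @ p_cam + t."""
    return points_cam @ R_world_cam.T + t_world_cam

def merge_point_cloud(per_head_points, per_head_calibration):
    """Combine the points of all stereo heads into one world-space point cloud."""
    clouds = [to_world(pts, R, t)
              for pts, (R, t) in zip(per_head_points, per_head_calibration)]
    return np.vstack(clouds)
```

The HLGS (line-segment) extraction and the coarse-then-refined pose scoring described in the abstract would then operate on the cloud returned by merge_point_cloud.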
    • 5. Invention application
    • SYSTEM AND METHOD FOR LOCATING A THREE-DIMENSIONAL OBJECT USING MACHINE VISION
    • Publication number: WO2008153721A1
    • Publication date: 2008-12-18
    • Application number: PCT/US2008/006535
    • Filing date: 2008-05-22
    • Applicant(s): COGNEX CORPORATION; WALLACK, Aaron, S.; MICHAEL, David, J.
    • Inventor(s): WALLACK, Aaron, S.; MICHAEL, David, J.
    • IPC: G06T7/00
    • CPC: G06K9/32; G06T7/73; G06T2207/30164
    • This invention provides a system and method for determining the position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, it is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation, and employing an iterative weighted least-squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall, refined/optimized pose estimate incorporates data from each of the cameras' acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of trained features relative to the runtime features is derived from the iterative error computation.
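The refinement step above (rotation as a quaternion, translation as three variables, iterative weighted least squares over edgelet errors) can be sketched with a generic solver. The sketch substitutes simple point-to-point residuals for the patent's edgelet distance and uses SciPy's least_squares as the iterative minimiser; every name here is an illustrative assumption, not the patent's own implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def residuals(pose, model_pts, runtime_pts, weights):
    """Weighted residuals between transformed model points and runtime points,
    pooled over all camera views (a point-to-point stand-in for edgelet error)."""
    R, t = quat_to_rot(pose[:4]), pose[4:]
    return (weights[:, None] * (model_pts @ R.T + t - runtime_pts)).ravel()

def refine_pose(rough_pose, model_pts, runtime_pts, weights):
    """Iteratively refine the rough pose (quaternion + translation, 7 values)."""
    sol = least_squares(residuals, rough_pose,
                        args=(model_pts, runtime_pts, weights))
    q = sol.x[:4] / np.linalg.norm(sol.x[:4])   # re-normalise the quaternion
    return np.concatenate([q, sol.x[4:]])
```

Pooling the model/runtime correspondences of every camera into model_pts and runtime_pts is what lets the estimate minimize the total error across all views, as the abstract describes.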
    • 6. Invention application
    • MODEL-BASED POSE ESTIMATION USING A NON-PERSPECTIVE CAMERA
    • Publication number: WO2012076979A1
    • Publication date: 2012-06-14
    • Application number: PCT/IB2011/003044
    • Filing date: 2011-12-14
    • Applicant(s): COGNEX CORPORATION; LIU, Lifeng; WALLACK, Aaron, S.; MARRION, Cyril, C., Jr.
    • Inventor(s): LIU, Lifeng; WALLACK, Aaron, S.; MARRION, Cyril, C., Jr.
    • IPC: G06T7/00
    • CPC: G06T7/75; G06T2207/10012; G06T2207/30164
    • This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose. Also provided is a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime.
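To make the idea of combining perspective and non-perspective views concrete, the sketch below treats the non-perspective assembly as a telecentric (orthographic) camera whose back-projected rays are parallel, and recovers a 3D feature as the least-squares intersection of the rays from the assemblies in which the 2D feature was found. The telecentric model and all parameter names are assumptions chosen for illustration, not details taken from the patent.

```python
import numpy as np

def perspective_ray(u, v, K, R, t):
    """World-space ray through pixel (u, v) of a pinhole camera.
    R, t map world -> camera; K is the 3x3 intrinsic matrix."""
    origin = -R.T @ t                              # camera centre in world coords
    direction = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return origin, direction / np.linalg.norm(direction)

def telecentric_ray(u, v, scale, R, t):
    """World-space ray through pixel (u, v) of a telecentric (orthographic)
    camera: every ray is parallel to the optical axis."""
    origin = R.T @ (np.array([u * scale, v * scale, 0.0]) - t)
    direction = R.T @ np.array([0.0, 0.0, 1.0])
    return origin, direction / np.linalg.norm(direction)

def intersect_rays(rays):
    """Least-squares 3D point closest to a set of (origin, direction) rays."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in rays:
        P = np.eye(3) - np.outer(d, d)             # projector orthogonal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

Each 2D object feature found in two or more assemblies contributes one such intersected 3D point, and the resulting set of 3D features supports the pose determination described in the abstract.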
    • 8. Invention application
    • 3D ASSEMBLY VERIFICATION FROM 2D IMAGES
    • Publication number: WO2008147355A1
    • Publication date: 2008-12-04
    • Application number: PCT/US2007/012637
    • Filing date: 2007-05-29
    • Applicant(s): COGNEX TECHNOLOGY AND INVESTMENT CORPORATION; MICHAEL, David, J.; WALLACK, Aaron, S.
    • Inventor(s): MICHAEL, David, J.; WALLACK, Aaron, S.
    • IPC: G06T7/00
    • CPC: G06T7/0006; G06T7/593; G06T2207/10012; G06T2207/30164
    • A method and apparatus for assembly verification is disclosed. A measurement of the 3D position of each subcomponent is performed using triangulation from three cameras acquiring images simultaneously. An operator trains one 2D model per camera, each corresponding to the same subcomponent of the assembly. At run-time, the models are registered in each camera view so as to provide the measured 3D positions of the subcomponents. The measured 3D positions are then compared with the expected nominal 3D positions, and the differences in 3D position are checked against tolerances. The invention simplifies the task of assembly verification, requiring only multiple cameras fixed above an assembly line. After minor operator activity, the invention can then perform assembly verification automatically. Since the invention can perform fixtureless assembly verification, a part can be presented to the machine vision system with arbitrary 3D position and orientation. Stroboscopic illumination can be used to illuminate parts on a rapidly moving assembly line.
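As a rough sketch of the verification flow described above, assume each camera is calibrated as a 3x4 projection matrix and that 2D model registration has already produced one pixel detection per camera for a given subcomponent. The code below then triangulates the measured 3D position and checks it against the nominal position and a tolerance; the DLT triangulation is a standard technique and the names are illustrative, not necessarily what the patent uses.

```python
import numpy as np

def triangulate_dlt(pixel_points, projection_mats):
    """Linear (DLT) triangulation of one 3D point from its 2D detections in
    several calibrated cameras, each given as a 3x4 projection matrix."""
    rows = []
    for (u, v), P in zip(pixel_points, projection_mats):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                                   # homogeneous solution
    return X[:3] / X[3]

def verify_subcomponent(pixel_points, projection_mats, nominal_xyz, tolerance):
    """Compare the measured 3D position of one subcomponent with its nominal
    position and report pass/fail against the tolerance (same units as the
    calibration, e.g. millimetres)."""
    measured = triangulate_dlt(pixel_points, projection_mats)
    error = np.linalg.norm(measured - np.asarray(nominal_xyz))
    return error <= tolerance, measured, error
```

At run time this check would be repeated once per subcomponent and per presented assembly, with a nominal position and tolerance for each subcomponent.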