    • 3. Granted Patent
    • Title: Lane tracking system
    • Publication No.: US09139203B2
    • Publication Date: 2015-09-22
    • Application No.: US13289517
    • Filing Date: 2011-11-04
    • Inventors: Nikolai K. Moshchuk; Shuqing Zeng; Xingping Chen; Bakhtiar Brian Litkouhi
    • IPC: G06F19/00; B60W30/12
    • CPC: B60W30/12; B60W2420/42
    • Abstract: A lane tracking system for tracking the position of a vehicle within a lane includes a camera configured to provide a video feed representative of a field of view and a video processor configured to receive the video feed from the camera and to generate latent video-based position data indicative of the position of the vehicle within the lane. The system further includes a vehicle motion sensor configured to generate vehicle motion data indicative of the motion of the vehicle, and a lane tracking processor. The lane tracking processor is configured to receive the video-based position data, updated at a first frequency; receive the sensed vehicle motion data, updated at a second frequency; estimate the position of the vehicle within the lane from the sensed vehicle motion data; and fuse the video-based position data with the estimate of the vehicle position within the lane using a Kalman filter.
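The abstract above describes predicting lane position from high-rate motion-sensor data and correcting it with lower-rate camera measurements via a Kalman filter. A minimal sketch of that idea, not the patented implementation — the 1-D constant-offset model, noise values, and update rates here are all assumptions for illustration:

```python
import numpy as np

def kalman_lane_fusion(lateral_rates, camera_meas, dt=0.01, cam_every=10):
    """Fuse high-rate lateral-velocity readings (motion sensor) with
    low-rate camera measurements of lateral lane offset.

    lateral_rates -- lateral velocity samples at the first (fast) frequency
    camera_meas   -- lane-offset measurements at the second (slow) frequency
    """
    x = 0.0            # estimated lateral offset within the lane (m)
    P = 1.0            # estimate variance
    Q, R = 1e-3, 0.05  # process / camera noise variances (assumed values)
    history = []
    for k, v_lat in enumerate(lateral_rates):
        # Predict at the motion sensor's fast update rate.
        x += v_lat * dt
        P += Q
        # Correct whenever a (slower) camera measurement arrives.
        if k % cam_every == 0 and k // cam_every < len(camera_meas):
            z = camera_meas[k // cam_every]
            K = P / (P + R)        # Kalman gain
            x += K * (z - x)
            P *= (1.0 - K)
        history.append(x)
    return history
```

With zero lateral velocity and a camera that repeatedly reports a 1 m offset, the estimate converges toward 1 m between camera updates while remaining available at the sensor's faster rate.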
    • 4. Patent Application
    • Title: LANE TRACKING SYSTEM
    • Publication No.: US20130116854A1
    • Publication Date: 2013-05-09
    • Application No.: US13289517
    • Filing Date: 2011-11-04
    • Inventors: Nikolai K. Moshchuk; Shuqing Zeng; Xingping Chen; Bakhtiar Brian Litkouhi
    • IPC: G06F7/00
    • CPC: B60W30/12; B60W2420/42
    • Abstract: A lane tracking system for tracking the position of a vehicle within a lane includes a camera configured to provide a video feed representative of a field of view and a video processor configured to receive the video feed from the camera and to generate latent video-based position data indicative of the position of the vehicle within the lane. The system further includes a vehicle motion sensor configured to generate vehicle motion data indicative of the motion of the vehicle, and a lane tracking processor. The lane tracking processor is configured to receive the video-based position data, updated at a first frequency; receive the sensed vehicle motion data, updated at a second frequency; estimate the position of the vehicle within the lane from the sensed vehicle motion data; and fuse the video-based position data with the estimate of the vehicle position within the lane using a Kalman filter.
    • 5. Granted Patent
    • Title: Enhanced data association of fusion using weighted Bayesian filtering
    • Publication No.: US08705797B2
    • Publication Date: 2014-04-22
    • Application No.: US13413861
    • Filing Date: 2012-03-07
    • Inventors: Shuqing Zeng; Lufeng Shi; Daniel Gandhi; James N. Nickolaou
    • IPC: G06K9/00; H04N5/225
    • CPC: G01S13/726; G01S13/867; G01S13/931; G01S2013/9375; G06K9/00805; G06T1/0007
    • Abstract: A method of associating targets from at least two object detection systems. An initial prior correspondence matrix is generated based on prior target data from a first object detection system and a second object detection system. Targets are identified in a first field-of-view of the first object detection system based on a current time step. Targets are identified in a second field-of-view of the second object detection system based on the current time step. The prior correspondence matrix is adjusted based on respective targets entering and leaving the respective fields-of-view. A posterior correspondence matrix is generated as a function of the adjusted prior correspondence matrix. A correspondence is identified in the posterior correspondence matrix between a respective target of the first object detection system and a respective target of the second object detection system.
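The abstract above turns a prior correspondence matrix into a posterior one and reads off target matches. A hedged sketch of that pipeline — the uniform padding for targets entering/leaving a field of view, the Gaussian distance likelihood, and all parameter values are illustrative assumptions, not the patent's weighting scheme:

```python
import numpy as np

def associate(prior, targets_a, targets_b, sigma=1.0):
    """Bayesian association of targets from two object-detection systems.

    prior     -- correspondence matrix carried over from the prior time step
    targets_a -- (n_a, 2) positions seen by the first system this time step
    targets_b -- (n_b, 2) positions seen by the second system
    """
    na, nb = len(targets_a), len(targets_b)
    # Adjust the prior for targets entering/leaving each field of view:
    # new rows/columns get a uniform prior, vanished ones are dropped.
    adjusted = np.full((na, nb), 1.0 / max(na * nb, 1))
    ra, rb = min(na, prior.shape[0]), min(nb, prior.shape[1])
    adjusted[:ra, :rb] = prior[:ra, :rb]
    # Measurement likelihood: Gaussian in inter-target distance.
    d = np.linalg.norm(targets_a[:, None, :] - targets_b[None, :, :], axis=2)
    like = np.exp(-0.5 * (d / sigma) ** 2)
    # Posterior correspondence matrix = normalized prior-weighted likelihood.
    post = adjusted * like
    post /= post.sum(axis=1, keepdims=True)
    # A correspondence is the most probable column for each row.
    return post, post.argmax(axis=1)
```

For two targets per system, the posterior row for each first-system target peaks at the nearest second-system target, even when the detection lists arrive in different orders.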
    • 6. Patent Application
    • Title: ENHANCED DATA ASSOCIATION OF FUSION USING WEIGHTED BAYESIAN FILTERING
    • Publication No.: US20130236047A1
    • Publication Date: 2013-09-12
    • Application No.: US13413861
    • Filing Date: 2012-03-07
    • Inventors: Shuqing Zeng; Lufeng Shi; Daniel Gandhi; James N. Nickolaou
    • IPC: G06K9/00
    • CPC: G01S13/726; G01S13/867; G01S13/931; G01S2013/9375; G06K9/00805; G06T1/0007
    • Abstract: A method of associating targets from at least two object detection systems. An initial prior correspondence matrix is generated based on prior target data from a first object detection system and a second object detection system. Targets are identified in a first field-of-view of the first object detection system based on a current time step. Targets are identified in a second field-of-view of the second object detection system based on the current time step. The prior correspondence matrix is adjusted based on respective targets entering and leaving the respective fields-of-view. A posterior correspondence matrix is generated as a function of the adjusted prior correspondence matrix. A correspondence is identified in the posterior correspondence matrix between a respective target of the first object detection system and a respective target of the second object detection system.
    • 7. Patent Application
    • Title: FUSION OF OBSTACLE DETECTION USING RADAR AND CAMERA
    • Publication No.: US20140035775A1
    • Publication Date: 2014-02-06
    • Application No.: US13563993
    • Filing Date: 2012-08-01
    • Inventors: Shuqing Zeng; Wende Zhang; Bakhtiar Brian Litkouhi
    • IPC: G01S13/86; G01S13/93
    • CPC: G01S13/931; G01S13/867; G06K9/00805; G06K9/629
    • Abstract: A vehicle obstacle detection system includes an imaging system for capturing objects in a field of view and a radar device for sensing objects in a substantially same field of view. The substantially same field of view is partitioned into an occupancy grid having a plurality of observation cells. A fusion module receives radar data from the radar device and imaging data from the imaging system. The fusion module projects the occupancy grid and associated radar data onto the captured image. The fusion module extracts features from each corresponding cell using sensor data from the radar device and imaging data from the imaging system. A primary classifier determines whether an extracted feature extracted from a respective observation cell is an obstacle.
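The abstract above partitions the shared field of view into observation cells, projects radar data onto the image, extracts per-cell features, and classifies each cell. A rough sketch of that flow under stated assumptions — the grid size, the feature vector, and the fixed linear score standing in for the patent's "primary classifier" are all hypothetical:

```python
import numpy as np

def detect_obstacles(image, radar_points, grid_shape=(4, 4), thresh=0.5):
    """Fuse camera and radar data over an occupancy grid of observation cells.

    image        -- 2-D intensity array covering the shared field of view
    radar_points -- (row, col, range) detections projected into image coords
    """
    h, w = image.shape
    gh, gw = grid_shape
    cell_h, cell_w = h // gh, w // gw
    # Project radar detections onto the grid: count hits per observation cell.
    hits = np.zeros(grid_shape)
    for r, c, _rng in radar_points:
        hits[min(int(r) // cell_h, gh - 1), min(int(c) // cell_w, gw - 1)] += 1
    obstacles = []
    for i in range(gh):
        for j in range(gw):
            cell = image[i * cell_h:(i + 1) * cell_h,
                         j * cell_w:(j + 1) * cell_w]
            # Per-cell feature: radar evidence plus image statistics.
            radar_evidence = min(hits[i, j], 1.0)
            # Stand-in classifier: fixed linear score over the features.
            score = 0.6 * radar_evidence + 0.4 * cell.mean()
            if score > thresh:
                obstacles.append((i, j))
    return obstacles
```

A cell is flagged only when radar and image evidence agree, which is the practical benefit of fusing the two sensors rather than thresholding either alone.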
    • 8. Granted Patent
    • Title: Fusion of obstacle detection using radar and camera
    • Publication No.: US09429650B2
    • Publication Date: 2016-08-30
    • Application No.: US13563993
    • Filing Date: 2012-08-01
    • Inventors: Shuqing Zeng; Wende Zhang; Bakhtiar Brian Litkouhi
    • IPC: G01S13/86; G01S13/93; G06K9/00; G06K9/62
    • CPC: G01S13/931; G01S13/867; G06K9/00805; G06K9/629
    • Abstract: A vehicle obstacle detection system includes an imaging system for capturing objects in a field of view and a radar device for sensing objects in a substantially same field of view. The substantially same field of view is partitioned into an occupancy grid having a plurality of observation cells. A fusion module receives radar data from the radar device and imaging data from the imaging system. The fusion module projects the occupancy grid and associated radar data onto the captured image. The fusion module extracts features from each corresponding cell using sensor data from the radar device and imaging data from the imaging system. A primary classifier determines whether an extracted feature extracted from a respective observation cell is an obstacle.