Citation: YAN Hongjie, ZHU Zhifeng, CAI Bohua, YAO Yong. Calibration for the Integration of 2D LiDAR and Camera Data[J]. Journal of Anhui University of Technology (Natural Science), 2024, 41(5): 516-524. DOI: 10.12415/j.issn.1671-7872.24039


Calibration for the Integration of 2D LiDAR and Camera Data

  • Abstract: To address the slow speed and poor real-time performance of extrinsic calibration between a single-line LiDAR and a camera, a data fusion calibration method based on feature points is proposed. The pixel feature points required for camera extrinsic calibration are extracted from a specially designed target, and the required point cloud feature points are obtained through LiDAR angle calibration. The pixel and point cloud feature points are then jointly calibrated by the feature-point method, yielding coordinate sequences that pair the LiDAR data with the corresponding camera image pixels; the two coordinate sequences are fused to compute a fusion matrix, and calibration experiments verify the feasibility and effectiveness of the proposed method. The results show that, compared with a traditional target, the specially designed target increases calibration speed by 31.4%. Combined with the data fusion processing, the calibration parameters of the single-line LiDAR and camera can be obtained: the average reprojection error of the data fusion is 1.9 pixels with a standard deviation of 0.7 pixels, and the LiDAR point cloud projects well onto the target, successfully matching the point cloud with the target image. With the fusion matrix, only a few corresponding LiDAR points and pixels are required, avoiding the computation of each individual calibration parameter and making the calibration process simpler and faster.
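The abstract describes estimating a fusion matrix from corresponding 2D LiDAR points and image pixels, then evaluating it by reprojection error. The paper's exact formulation is not given here; as a minimal sketch, assuming the fusion matrix is a 3×3 planar homography (a single-line LiDAR scans one plane, so a plane-to-image mapping applies) estimated by the standard direct linear transform (DLT), the computation might look like:

```python
import numpy as np

def estimate_fusion_matrix(laser_pts, pixel_pts):
    """Estimate a 3x3 mapping from 2D LiDAR-plane coordinates to image
    pixels via DLT. laser_pts, pixel_pts: (N, 2) arrays of corresponding
    points, N >= 4 with no three collinear."""
    A = []
    for (x, y), (u, v) in zip(laser_pts, pixel_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The matrix is the right singular vector of A with the smallest
    # singular value, reshaped to 3x3 and normalized.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, laser_pts, pixel_pts):
    """Mean Euclidean pixel error of LiDAR points projected through H."""
    pts_h = np.hstack([laser_pts, np.ones((len(laser_pts), 1))])
    proj = (H @ pts_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]   # dehomogenize
    return float(np.mean(np.linalg.norm(proj - pixel_pts, axis=1)))
```

This matches the abstract's point that the fused matrix needs only a handful of LiDAR-pixel correspondences, with no need to solve for each intrinsic or extrinsic parameter separately; the reprojection-error function corresponds to the reported 1.9-pixel mean / 0.7-pixel standard deviation metric.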

     

