Journal of Mechanical Engineering, 2024, Vol. 60, Issue (10): 3-21. doi: 10.3901/JME.2024.10.003
Survey on Key Techniques for Visual and Inertial Based Odometry
ZHANG Yu1,2, TAN Zubing1,2, CAO Dongpu3, CHEN Long4
About the author: ZHANG Yu, male, born in 1996. His main research interest is simultaneous localization and mapping (SLAM) for robots.
Supported by:
Received: 2023-06-20
Revised: 2024-01-15
Online: 2024-05-20
Published: 2024-07-24
Abstract: Environment perception and state estimation are key technologies for intelligent connected vehicles. Simultaneous localization and mapping (SLAM), which estimates the vehicle's own state while building a model of its environment, is widely applied in this field. As research has deepened, it has become clear that fusing multiple sensors lets them compensate for one another's weaknesses and improves the real-time performance and stability of state estimation. Visual-inertial odometry (VIO), which fuses a camera with an inertial measurement unit (IMU), has attracted many researchers thanks to its favorable cost-performance ratio. By adding IMU measurements on top of visual odometry (VO), VIO greatly reduces scale drift and also substantially mitigates visual tracking failures caused by short-term image overexposure, feature-poor scenes, and similar conditions. While improving accuracy through redundant sensing, VIO preserves real-time operation through schemes such as sliding windows and state marginalization, making it a model of balancing accuracy against computational efficiency. This survey presents the standard formulation and basic models of VIO systems; gives a thorough technical review of the key modules, including initialization, visual information extraction and association, solving and optimization, and calibration; analyzes the strengths and limitations of representative state-of-the-art work; summarizes commonly used visual-inertial datasets; and concludes with open problems and future research directions for VIO.
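Two of the mechanisms the abstract names can be made concrete with standard formulas; the following is a generic sketch in the spirit of the on-manifold preintegration line of work (Lupton and Sukkarieh; Forster et al.), with noise terms omitted and biases held constant over the interval, not the exact formulation of any single surveyed system. Between keyframes $i$ and $j$, the raw gyroscope and accelerometer samples $\tilde{\boldsymbol{\omega}}_k$, $\tilde{\mathbf{a}}_k$ are compounded into relative-motion terms

$$\Delta R_{ij}=\prod_{k=i}^{j-1}\operatorname{Exp}\!\big((\tilde{\boldsymbol{\omega}}_k-\mathbf{b}^{g})\,\Delta t\big),\qquad \Delta\mathbf{v}_{ij}=\sum_{k=i}^{j-1}\Delta R_{ik}\,(\tilde{\mathbf{a}}_k-\mathbf{b}^{a})\,\Delta t,$$

$$\Delta\mathbf{p}_{ij}=\sum_{k=i}^{j-1}\Big[\Delta\mathbf{v}_{ik}\,\Delta t+\tfrac{1}{2}\,\Delta R_{ik}\,(\tilde{\mathbf{a}}_k-\mathbf{b}^{a})\,\Delta t^{2}\Big],$$

which constrain consecutive keyframe states through

$$R_j=R_i\,\Delta R_{ij},\qquad \mathbf{v}_j=\mathbf{v}_i+\mathbf{g}\,\Delta t_{ij}+R_i\,\Delta\mathbf{v}_{ij},\qquad \mathbf{p}_j=\mathbf{p}_i+\mathbf{v}_i\,\Delta t_{ij}+\tfrac{1}{2}\,\mathbf{g}\,\Delta t_{ij}^{2}+R_i\,\Delta\mathbf{p}_{ij}.$$

Because the preintegrated terms depend only on the IMU readings and the biases, hundreds of inertial samples collapse into a single constraint that need not be re-propagated every time the state estimate changes (to first order in the bias).

The sliding-window and state-marginalization scheme mentioned in the abstract keeps the solver real-time in a similarly standard way: when a block of old states $\mathbf{x}_m$ is dropped from the window, its information is folded into the retained states $\mathbf{x}_k$ by a Schur complement on the normal equations $H\,\delta\mathbf{x}=\mathbf{b}$,

$$\begin{bmatrix}H_{mm}&H_{mk}\\H_{km}&H_{kk}\end{bmatrix}\begin{bmatrix}\delta\mathbf{x}_m\\\delta\mathbf{x}_k\end{bmatrix}=\begin{bmatrix}\mathbf{b}_m\\\mathbf{b}_k\end{bmatrix}\;\Longrightarrow\; H^{\ast}=H_{kk}-H_{km}H_{mm}^{-1}H_{mk},\qquad \mathbf{b}^{\ast}=\mathbf{b}_k-H_{km}H_{mm}^{-1}\mathbf{b}_m,$$

so the window size, and hence the per-iteration cost, stays bounded while the discarded states still act on the surviving ones as a prior.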
CLC Number:
ZHANG Yu, TAN Zubing, CAO Dongpu, CHEN Long. Survey on Key Techniques for Visual and Inertial Based Odometry[J]. Journal of Mechanical Engineering, 2024, 60(10): 3-21.