Journal of Mechanical Engineering ›› 2024, Vol. 60 ›› Issue (22): 224-240. doi: 10.3901/JME.2024.22.224
CHU Wenbo1,2,3, GAN Lu1,2, LI Guofa1, TANG Xiaolin1, LI Keqiang4
Received: 2024-02-25
Revised: 2024-06-25
Online: 2024-11-20
Published: 2025-01-02
CHU Wenbo, GAN Lu, LI Guofa, TANG Xiaolin, LI Keqiang. Large Models Efficient Compression Technology for Autonomous Driving: A Review[J]. Journal of Mechanical Engineering, 2024, 60(22): 224-240.
[1] KHAN M A, SAYED H E, MALIK S, et al. Level-5 autonomous driving—Are we there yet? A review of research literature[J]. ACM Computing Surveys, 2023, 55(2): 1-38.
[2] GRIGORESCU S, TRASNEA B, COCIAS T, et al. A survey of deep learning techniques for autonomous driving[J]. Journal of Field Robotics, 2020, 37(3): 362-386.
[3] TAY Y, DEHGHANI M, BAHRI D, et al. Efficient transformers: A survey[J]. ACM Computing Surveys, 2022, 55(6): 1-28.
[4] LIAO Junwei. Research on natural language generation techniques in the large language model era of deep learning[D]. Chengdu: University of Electronic Science and Technology of China, 2024.
[5] WANG Wenjing, YANG Wenhan, FANG Yuming, et al. Visual perception and understanding in degraded scenarios[J]. Journal of Image and Graphics, 2024, 29(6): 1667-1684.
[6] ZHANG Qintong, WANG Yichao, WANG Hexi, et al. A comprehensive review of large language model fine-tuning[J]. Computer Engineering and Applications, 2024, 60(17): 17-33.
[7] WEI Zishu, HAN Yue, LIU Sihao, et al. Lookahead analysis and discussion of research hotspots in artificial intelligence from 2021 to 2023[J]. Journal of Computer Research and Development, 2024, 61(5): 1261-1275.
[8] CHEN Haolong, CHEN Hanzhi, HAN Kaifeng, et al. Domain-specific foundation-model customization: Theoretical foundation and key technology[J]. Journal of Data Acquisition and Processing, 2024, 39(3): 524-546.
[9] WANG Yaozu, LI Qing, DAI Zhangjie, et al. Current status and trends in large language modeling research[J]. Chinese Journal of Engineering, 2024, 46(8): 1411-1425.
[10] WANG Xiang, TAN Guozhen. Research on decision-making of autonomous driving in highway environment based on knowledge and large language model[J/OL]. Journal of System Simulation: 1-9 [2024-09-27]. https://doi.org/10.16182/j.issn1004731x.joss.24-0065.
[11] WU S, FEI H, QU L, et al. NExT-GPT: Any-to-any multimodal LLM[J]. arXiv preprint arXiv:2309.05519, 2023.
[12] JIN Y, SHEN X, PENG H, et al. SurrealDriver: Designing generative driver agent simulation framework in urban contexts based on large language model[J]. arXiv preprint arXiv:2309.13193, 2023.
[13] JIA F, MAO W, LIU Y, et al. ADriver-I: A general world model for autonomous driving[J]. arXiv preprint arXiv:2311.13549, 2023.
[14] DERUYTTERE T, GRUJICIC D, BLASCHKO M B, et al. Talk2Car: Predicting physical trajectories for natural language commands[J]. IEEE Access, 2022, 10: 123809-123834.
[15] GAO Yang, CAO Yangjie, DUAN Pengsong. Lightweighting methods for neural network models: A review[J]. Computer Science, 2024, 51(Suppl. 1): 23-33.
[16] WANG Yongwei, SHEN Tao, ZHANG Shengyu, et al. Advances in edge-cloud collaboration and evolution for large-small models[J]. Journal of Image and Graphics, 2024, 29(6): 1510-1534.
[17] LI Ronghan, PU Rongcheng, SHEN Jianan, et al. Knowledge distillation of large language models based on chain of thought[J]. Journal of Data Acquisition and Processing, 2024, 39(3): 547-558.
[18] ZHONG Z, REMPE D, CHEN Y, et al. Language-guided traffic simulation via scene-level diffusion[C]//Conference on Robot Learning. PMLR, 2023: 144-177.
[19] ZHENG W, CHEN W, HUANG Y, et al. OccWorld: Learning a 3D occupancy world model for autonomous driving[J]. arXiv preprint arXiv:2311.16038, 2023.
[20] CHENG W, YIN J, LI W, et al. Language-guided 3D object detection in point cloud for autonomous driving[J]. arXiv preprint arXiv:2305.15765, 2023.
[21] FU D, LI X, WEN L, et al. Drive like a human: Rethinking autonomous driving with large language models[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024: 910-919.
[22] XU Z, ZHANG Y, XIE E, et al. DriveGPT4: Interpretable end-to-end autonomous driving via large language model[J]. IEEE Robotics and Automation Letters, 2023, 9(10): 8186-8193.
[23] LINGO-1: Exploring natural language for autonomous driving—Wayve[EB/OL]. [2023-09-14]. https://wayve.ai/thinking/lingo-natural-language-autonomous-driving/.
[24] KEYSAN A, LOOK A, KOSMAN E, et al. Can you text what is happening? Integrating pre-trained language encoders into trajectory prediction models for autonomous driving[J]. arXiv preprint arXiv:2309.05282, 2023.
[25] CUI C, MA Y, CAO X, et al. Drive as you speak: Enabling human-like interaction with large language models in autonomous vehicles[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024: 902-909.
[26] WEN L, FU D, LI X, et al. DiLu: A knowledge-driven approach to autonomous driving with large language models[J]. arXiv preprint arXiv:2309.16292, 2023.
[27] SHA H, MU Y, JIANG Y, et al. LanguageMPC: Large language models as decision makers for autonomous driving[J]. arXiv preprint arXiv:2310.03026, 2023.
[28] MAO J, QIAN Y, YE J, et al. GPT-Driver: Learning to drive with GPT[J]. arXiv preprint arXiv:2310.01415, 2023.
[29] DEWANGAN V, CHOUDHARY T, CHANDHOK S, et al. Talk2BEV: Language-enhanced bird's-eye view maps for autonomous driving[J]. arXiv preprint arXiv:2310.02251, 2023.
[30] SIMA C, RENZ K, CHITTA K, et al. DriveLM: Driving with graph visual question answering[J]. arXiv preprint arXiv:2312.14150, 2023.
[31] DING X, HAN J, XU H, et al. HiLM-D: Towards high-resolution understanding in multimodal large language models for autonomous driving[J]. arXiv preprint arXiv:2309.05186, 2023.
[32] WU D, HAN W, WANG T, et al. Language prompt for autonomous driving[J]. arXiv preprint arXiv:2309.04379, 2023.
[33] ZHOU Y, CAI L, CHENG X, et al. OpenAnnotate2: Multi-modal auto-annotating for autonomous driving[J]. IEEE Transactions on Intelligent Vehicles, 2024: 1-13.
[34] CHEN L, SINAVSKI O, HÜNERMANN J, et al. Driving with LLMs: Fusing object-level vector modality for explainable autonomous driving[C]//2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024: 14093-14100.
[35] HUANG W, ABBEEL P, PATHAK D, et al. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents[C]//International Conference on Machine Learning. PMLR, 2022: 9118-9147.
[36] CUI C, MA Y, CAO X, et al. Receive, reason, and react: Drive as you say with large language models in autonomous vehicles[J]. IEEE Intelligent Transportation Systems Magazine, 2024, 16(4): 81-94.
[37] GAO H, LI Y, LONG K, et al. A survey for foundation models in autonomous driving[J]. arXiv preprint arXiv:2402.01105, 2024.
[38] WANG Cong, HU Wen, LI Wenbo, et al. Social cognitive autonomous driving[J]. Journal of Mechanical Engineering, 2023, 59(20): 304-324.
[39] HUA N, LU W. Basis operator network: A neural network-based model for learning nonlinear operators via neural basis[J]. Neural Networks, 2023, 164: 21-37.
[40] BUCHANAN M. Generalizing Moore[J]. Nature Physics, 2016, 12(3): 200.
[41] CUI B, LI Y, ZHANG Z. Joint structured pruning and dense knowledge distillation for efficient transformer model compression[J]. Neurocomputing, 2021, 458: 56-69.
[42] KANG T, DING W, CHEN P. CRESPR: Modular sparsification of DNNs to improve pruning performance and model interpretability[J]. Neural Networks, 2024, 172: 106067.
[43] TAN Y, HAN K, ZHAO K, et al. Accelerating sparse convolution with column vector-wise sparsity[J]. Advances in Neural Information Processing Systems, 2022, 35: 30307-30317.
[44] ZHAO Z, LING N, GUAN N, et al. Miriam: Exploiting elastic kernels for real-time multi-DNN inference on edge GPU[C]//Proceedings of the 21st ACM Conference on Embedded Networked Sensor Systems. 2023: 97-110.
[45] ZHAI Y, JIANG C, WANG L, et al. ByteTransformer: A high-performance transformer boosted for variable-length inputs[C]//2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 2023: 344-355.
[46] YU S, CHEN T, SHEN J, et al. Unified visual transformer compression[J]. arXiv preprint arXiv:2203.08243, 2022.
[47] GUO S, LAI B, YANG S, et al. Sensitivity pruner: Filter-level compression algorithm for deep neural networks[J]. Pattern Recognition, 2023, 140: 109508.
[48] ZHAO X, YAO Y, WU H, et al. Structural watermarking to deep neural networks via network channel pruning[C]//2021 IEEE International Workshop on Information Forensics and Security (WIFS). 2021: 1-6.
[49] CHOI J I, TIAN Q. Visual saliency-guided channel pruning for deep visual detectors in autonomous driving[C]//2023 IEEE Intelligent Vehicles Symposium (IV). 2023: 1-6.
[50] LIU W, PENG Z, LEE T. CoMFLP: Correlation measure based fast search on ASR layer pruning[J]. arXiv preprint arXiv:2309.11768, 2023.
[51] SUN H, ZHANG S, TIAN X, et al. Pruning DETR: Efficient end-to-end object detection with sparse structured pruning[J]. Signal, Image and Video Processing, 2024, 18(1): 129-135.
[52] LI B, KONG Z, ZHANG T, et al. Efficient transformer-based large scale language representations using hardware-friendly block structured pruning[J]. arXiv preprint arXiv:2009.08065, 2020.
[53] TAO C, HOU L, BAI H, et al. Structured pruning for efficient generative pre-trained language models[C]//Findings of the Association for Computational Linguistics: ACL 2023. Toronto, Canada: Association for Computational Linguistics, 2023: 10880-10895.
[54] MA X, FANG G, WANG X. LLM-Pruner: On the structural pruning of large language models[J]. Advances in Neural Information Processing Systems, 2023, 36: 21702-21720.
[55] ZHAO X, QI M, LIU Z, et al. End-to-end autonomous driving decision model joined by attention mechanism and spatiotemporal features[J]. IET Intelligent Transport Systems, 2021, 15(9): 1119-1130.
[56] RAO Y, ZHAO W, LIU B, et al. DynamicViT: Efficient vision transformers with dynamic token sparsification[C]//Advances in Neural Information Processing Systems: Vol. 34. Curran Associates, Inc., 2021: 13937-13949.
[57] YANG C, ZHAO P, LI Y, et al. Pruning parameterization with bi-level optimization for efficient semantic segmentation on the edge[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 15402-15412.
[58] LIU J, WU C, YANG C, et al. Efficient pruning of large language model with adaptive estimation fusion[J]. arXiv preprint arXiv:2403.10799, 2024.
[59] FRANTAR E, ALISTARH D. SparseGPT: Massive language models can be accurately pruned in one-shot[C]//Proceedings of the 40th International Conference on Machine Learning. PMLR, 2023: 10323-10337.
[60] LIAO Z, QUÉTU V, NGUYEN V T, et al. Can unstructured pruning reduce the depth in deep neural networks?[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 1402-1406.
[61] ZHANG M, CHEN H, SHEN C, et al. LoRAPrune: Pruning meets low-rank parameter-efficient fine-tuning[J]. arXiv preprint arXiv:2305.18403, 2023.
[62] SYED A, GUO P H, SUNDARAPANDIYAN V. Prune and tune: Improving efficient pruning techniques for massive language models[EB/OL]. [2023-03-02]. https://openreview.net/forum?id=cKlgcx7nSZ.
[63] SUN M, LIU Z, BAIR A, et al. A simple and effective pruning approach for large language models[J]. arXiv preprint arXiv:2306.11695, 2023.
[64] MELLOR J, TURNER J, STORKEY A, et al. Neural architecture search without training[C]//Proceedings of the 38th International Conference on Machine Learning. PMLR, 2021: 7588-7598.
[65] BELLO I, ZOPH B, VASUDEVAN V, et al. Neural optimizer search with reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2017: 459-468.
[66] TAN M, CHEN B, PANG R, et al. MnasNet: Platform-aware neural architecture search for mobile[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 2820-2828.
[67] LAUBE K A, ZELL A. ShuffleNASNets: Efficient CNN models through modified efficient neural architecture search[C]//2019 International Joint Conference on Neural Networks (IJCNN). 2019: 1-6.
[68] YAN L, ZHANG Z, LIANG J, et al. ASMEvoNAS: Adaptive segmented multi-objective evolutionary network architecture search[J]. Applied Soft Computing, 2023, 146: 110639.
[69] QIAN G, ZHANG X, LI G, et al. When NAS meets trees: An efficient algorithm for neural architecture search[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 2782-2787.
[70] MECHARBAT L A, BENMEZIANE H, OUARNOUGHI H, et al. HyT-NAS: Hybrid transformers neural architecture search for edge devices[C]//Proceedings of the 2023 Workshop on Compilers, Deployment, and Tooling for Edge AI. 2023: 41-45.
[71] DONG X, YANG Y. NAS-Bench-201: Extending the scope of reproducible neural architecture search[J]. arXiv preprint arXiv:2001.00326, 2020.
[72] LIU S, ZHANG H, JIN Y. A survey on computationally efficient neural architecture search[J]. Journal of Automation and Intelligence, 2022, 1(1): 100002.
[73] GAO J, XU H, SHI H, et al. AutoBERT-Zero: Evolving BERT backbone from scratch[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(10): 10663-10671.
[74] ZHENG X, JI R, CHEN Y, et al. MIGO-NAS: Towards fast and generalizable neural architecture search[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(9): 2936-2952.
[75] WAN X, RU B, ESPERANÇA P M, et al. On redundancy and diversity in cell-based neural architecture search[J]. arXiv preprint arXiv:2203.08887, 2022.
[76] YANG Z, WANG Y, CHEN X, et al. CARS: Continuous evolution for efficient neural architecture search[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 1829-1838.
[77] LIU C, MIAO Q, HUANG M. Improving efficient neural architecture search using out-net[C]//Proceedings of the 2021 7th International Conference on Computing and Artificial Intelligence. New York, NY, USA: Association for Computing Machinery, 2021: 266-271.
[78] LI J, HONG H, SHI M, et al. Knowledge transfer for object detection with evolution architecture search[C]//2022 4th International Conference on Data-driven Optimization of Complex Systems (DOCS). 2022: 1-6.
[79] LIU Z, OGUZ B, ZHAO C, et al. LLM-QAT: Data-free quantization aware training for large language models[J]. arXiv preprint arXiv:2305.17888, 2023.
[80] KIM J, LEE J H, KIM S, et al. Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization[J]. Advances in Neural Information Processing Systems, 2023, 36: 36187-36207.
[81] DETTMERS T, PAGNONI A, HOLTZMAN A, et al. QLoRA: Efficient finetuning of quantized LLMs[J]. Advances in Neural Information Processing Systems, 2023, 36: 10088-10115.
[82] GOPALKRISHNAN A, GREER R, TRIVEDI M. Multi-frame, lightweight & efficient vision-language models for question answering in autonomous driving[J]. arXiv preprint arXiv:2403.19838, 2024.
[83] PARK G, PARK B, KIM M, et al. LUT-GEMM: Quantized matrix multiplication based on LUTs for efficient inference in large-scale generative language models[J]. arXiv preprint arXiv:2206.09557, 2022.
[84] DETTMERS T, LEWIS M, BELKADA Y, et al. GPT3.int8(): 8-bit matrix multiplication for transformers at scale[J]. Advances in Neural Information Processing Systems, 2022, 35: 30318-30332.
[85] FRANTAR E, ASHKBOOS S, HOEFLER T, et al. GPTQ: Accurate post-training quantization for generative pre-trained transformers[J]. arXiv preprint arXiv:2210.17323, 2023.
[86] DETTMERS T, ZETTLEMOYER L. The case for 4-bit precision: k-bit inference scaling laws[C]//International Conference on Machine Learning. PMLR, 2023: 7750-7774.
[87] LIN J, TANG J, TANG H, et al. AWQ: Activation-aware weight quantization for LLM compression and acceleration[J]. arXiv preprint arXiv:2306.00978, 2023.
[88] LI Y, YU Y, ZHANG Q, et al. LoSparse: Structured compression of large language models based on low-rank and sparse approximation[C]//Proceedings of the 40th International Conference on Machine Learning. PMLR, 2023: 20336-20350.
[89] HAYOU S, GHOSH N, YU B. LoRA+: Efficient low rank adaptation of large models[J]. arXiv preprint arXiv:2402.12354, 2024.
[90] XU M, XU Y L, MANDIC D P. TensorGPT: Efficient compression of the embedding layer in LLMs based on the tensor-train decomposition[J]. arXiv preprint arXiv:2307.00526, 2023.
[91] GU Y, DONG L, WEI F, et al. MiniLLM: Knowledge distillation of large language models[EB/OL]. [2024-01-16]. https://openreview.net/forum?id=5h0qf7IBZZ.
[92] AGARWAL R, VIEILLARD N, ZHOU Y, et al. On-policy distillation of language models: Learning from self-generated mistakes[EB/OL]. [2024-01-16]. https://openreview.net/forum?id=3zKtaqxLhW.
[93] JHA A H, SHERBORNE T, WALSH E P, et al. How to train your (compressed) large language model[J]. arXiv preprint arXiv:2305.14864, 2023.
[94] HUANG Y, CHEN Y, YU Z, et al. In-context learning distillation: Transferring few-shot learning ability of pre-trained language models[J]. arXiv preprint arXiv:2212.10670, 2022.
[95] LI S, CHEN J, SHEN Y, et al. Explanations from large language models make small reasoners better[J]. arXiv preprint arXiv:2210.06726, 2022.
[96] ZHANG Z, ZHANG A, LI M, et al. Automatic chain of thought prompting in large language models[J]. arXiv preprint arXiv:2210.03493, 2022.
[97] HO N, SCHMID L, YUN S Y. Large language models are reasoning teachers[J]. arXiv preprint arXiv:2212.10071, 2022.
[98] FU Y, PENG H, OU L, et al. Specializing smaller language models towards multi-step reasoning[C]//Proceedings of the 40th International Conference on Machine Learning. PMLR, 2023: 10421-10430.
[99] SHRIDHAR K, STOLFO A, SACHAN M. Distilling reasoning capabilities into smaller language models[J]. arXiv preprint arXiv:2212.00193, 2022.
[100] ZHU X, QI B, ZHANG K, et al. PaD: Program-aided distillation can teach small models reasoning better than chain-of-thought fine-tuning[J]. arXiv preprint arXiv:2305.13888, 2023.
[101] SAHA S, HASE P, BANSAL M. Can language models teach weaker agents? Teacher explanations improve students via personalization[J]. arXiv preprint arXiv:2306.09299, 2023.
[102] JIANG Y, CHAN C, CHEN M, et al. Lion: Adversarial distillation of proprietary large language models[J]. arXiv preprint arXiv:2305.12870, 2023.
[103] WU M, WAHEED A, ZHANG C, et al. LaMini-LM: A diverse herd of distilled models from large-scale instructions[J]. arXiv preprint arXiv:2304.14402, 2023.
Related Articles
[1] CHU Duanfeng, LIU Hongxiang, GAO Bolin, WANG Jinxiang, YIN Guodong. Survey of Predictive Cruise Control for Vehicle Platooning[J]. Journal of Mechanical Engineering, 2024, 60(18): 218-246.
[2] ZHOU Honglong, PEI Xiaofei, LIU Yiping, ZHAO Kefan. Study on Spatio-temporal Coupled Hierarchical Trajectory Planning of Autonomous Vehicles for Dynamic Uncertain Scenarios[J]. Journal of Mechanical Engineering, 2024, 60(10): 222-234.
[3] NIE Shida, LIU Hui, LIAO Zhihao, XIE Yujia, XIANG Changle, HAN Lijin, LIN Sihao. Study on Path Planning for Off-road Autonomous Vehicles in Complex Terrains[J]. Journal of Mechanical Engineering, 2024, 60(10): 261-272.
[4] LIN Chen, WEI Hongqian, JING Wei, ZHANG Youtong. Automotive Functional Safety: Risk Mitigation Control of Path Planning for Autonomous Vehicles towards Sensitive Command Attack Scenarios[J]. Journal of Mechanical Engineering, 2024, 60(10): 302-316.
[5] YANG Shuo, LI Shizhen, ZHAO Zhongyuan, HUANG Xiaopeng, HUANG Yanjun. Integrated Autonomous Driving Lane Change Policy Based on Temporal Difference Learning Model Predictive Control[J]. Journal of Mechanical Engineering, 2024, 60(10): 329-338.
[6] ZHOU Shaodong, NIE Chang, ZHANG Hui, WANG Zhengyu, SUN Zhifeng. Intelligent Automobile Headlights: Recent Developments and Future Trends[J]. Journal of Mechanical Engineering, 2023, 59(22): 380-400.
[7] WANG Cong, HU Wen, LI Wenbo, XING Yang, CHEN Hongchang, CAO Dongpu. Social Cognitive Autonomous Driving[J]. Journal of Mechanical Engineering, 2023, 59(20): 304-324.
[8] ZHAO Chuan, SUN Feng, PEI Wenzhe, JIN Junjie, XU Fangchao, OKA Koichi, YU Suyuan. Realization Mechanism and Development of Permanent Magnetic Levitation: A Review[J]. Journal of Mechanical Engineering, 2023, 59(17): 189-207.
[9] XU Can, ZHAO Wanzhong, LI Lin, ZHANG Ruijun, WANG Chunyan, CHEN Feng. Interactive Decision-making and Planning for Autonomous Driving Vehicles in Unsignalized Intersection[J]. Journal of Mechanical Engineering, 2023, 59(14): 202-212.
[10] LIANG Zhongchao, HUANG Zhuo, HU Xing, CHEN Jie. Research on Distance Measurement Using Vehicle Four-point Calibration Based on YOLO Neural Network[J]. Journal of Mechanical Engineering, 2023, 59(10): 226-235.
[11] GAO Kai, LI Xunhao, HU Lin, CHEN Bin, DU Ronghua. Lane Change Intention Prediction of CNN-LSTM Based on Multi-head Attention[J]. Journal of Mechanical Engineering, 2022, 58(22): 369-378.
[12] ZHANG Donghao, LIU Zhenyu, JIA Weiqiang, LIU Hui, TAN Jianrong. A Review on Knowledge Graph and Its Application Prospects to Intelligent Manufacturing[J]. Journal of Mechanical Engineering, 2021, 57(5): 90-113.
[13] LI Ming, JIANG Yanda, CUI Qifeng, ZHOU Xin, WANG Zhiyi, SHI Feizhou. State-of-the-art Review on Origami-inspired Spaceborne Deployable Structures[J]. Journal of Mechanical Engineering, 2021, 57(23): 53-65.
[14] KONG Fansen. Framework and Dynamic Mechanism of Reliability Analysis of Manufacturing System[J]. Journal of Mechanical Engineering, 2020, 56(20): 223-236.
[15] LIU Zhaolin, CHEN Jiqing, LAN Fengchong, XIA Hongyang. Methodology on Comprehensive Mapping of Multi-information of Autonomous Driving Based on Trajectory Tensor[J]. Journal of Mechanical Engineering, 2020, 56(16): 214-226.