[1] 尹佳, 唐宇阳, 张俊, 等. 基于复合加工特征的航空结构件频响快速预测[J]. 机械工程学报, 2023, 59(3): 200-207. YIN Jia, TANG Yuyang, ZHANG Jun, et al. Rapid frequency response function prediction of aeronautical structural parts based on composite machining features[J]. Journal of Mechanical Engineering, 2023, 59(3): 200-207. [2] 常智勇, 陶礼尊, 李佳佳, 等. 基于加工意图的机加工艺知识重用方法研究[J]. 机械工程学报, 2018, 54(3): 160-168. CHANG Zhiyong, TAO Lizun, LI Jiajia, et al. The measure and search method of process knowledge element based on machining intent[J]. Journal of Mechanical Engineering, 2018, 54(3): 160-168. [3] ZHOU D, DAI X. A method for discovering typical process sequence using granular computing and similarity algorithm based on part features[J]. The International Journal of Advanced Manufacturing Technology, 2015, 78(9): 1781-1793. [4] 罗滨鸿, 周虎, 张祺薇, 等. 基于Apriori算法的网线编织工艺缺陷数据挖掘方法[J]. 制造业自动化, 2022, 44(5): 75-77+102. LUO Binhong, ZHOU Hu, ZHANG Qiwei, et al. A data mining method for defect data of net rope weaving process based on apriori algorithm[J]. Manufacturing Automation, 2022, 44(5): 75-77+102. [5] 阳树梅, 王华昌, 李建军. 基于经验的数控工艺知识挖掘算法研究[J]. 模具工业, 2023, 49(8): 1-10. YANG Shumei, WANG Huachang, LI Jianjun. Research on knowledge mining algorithm of NC process based on experience[J]. Die & Mould Industry, 2023, 49(8): 1-10. [6] ZHOU B, BAO J S, LI J, et al. A novel knowledge graph-based optimization approach for resource allocation in discrete manufacturing workshops[J]. Robotics and Computer-Integrated Manufacturing, 2021, 71(3): 102160-102173. [7] ZHENG P, XIA L Q, LI C X, et al. Towards Self-X cognitive manufacturing network: an industrial knowledge graph-based multi-agent reinforcement learning approach[J]. Journal of Manufacturing Systems, 2021, 61(1): 16-26. [8] ZHOU B, LI X, LIU T, et al. CausalKGPT: industrial structure causal knowledge-enhanced large language model for cause analysis of quality problems in aerospace product manufacturing[J]. Advanced Engineering Informatics, 2024, 59: 102333. [9] 夏润泽, 李丕绩. ChatGPT大模型技术发展与应用[J].数据采集与处理, 2023, 38(5): 1017-1034. XIA Runze, LI Piji. Large language model ChatGPT: evolution and application[J]. Journal of Data Acquisition and Processing, 2023, 38(5): 1017-1034. [10] DING N, QIN Y J, YANG G, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models[J]. Nature Machine Intelligence, 2023, 5(3): 220-235. [11] KOJIMA T, GU S S, REID M, et al. Large language models are zero-shot reasoners[J]. Advances in Neural Information Processing Systems, 2022, 35: 22199-22213. [12] LIU Xiao, JI Kaixuan, FU Yicheng, et al. P-Tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[M/OL]. arXiv, 2021[2023-12-6]. https: //arxiv.org/abs/2110.07602. [13] HU E J, SHEN Y, WALLIS P, et al. Lora: Low-rank adaptation of large language models[M/OL]. arXiv, 2021[2023-12-1]. https: //arxiv.org/abs/2106.09685. [14] DETTMERS T, PAGNONI A, HOLTZMAN A, et al. Qlora: efficient finetuning of quantized llms[M/OL]. arXiv, 2023[2023-12-8]. https: //arxiv.org/abs/2305.14314. [15] 张鹤译, 王鑫, 韩立帆, 等. 大语言模型融合知识图谱的问答系统研究[J]. 计算机科学与探索, 2023, 17(10): 2377-2388. ZHANG Heyi, WANG Xin, HAN Lifan, et al. Research on question answering system on joint of knowledge graph and large language models[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2377-2388. [16] 赵鑫, 窦志成, 文继荣. 大语言模型时代下的信息检索研究发展趋势[J]. 中国科学基金, 2023, 37(5): 786-792. ZHAO Xin, DOU Zhicheng, WEN Jirong. The development of information retrieval in the era of large language model[J]. Bulletin of National Natural Science Foundation of China, 2023, 37(5): 786-792. [17] SINGHAL K, AZIZI S, TU T, et al. Large language models encode clinical knowledge[J]. Nature, 2023, 620(7972): 172-180. [18] HUAGN A H, WANG H, YANG Y. FinBERT: A large language model for extracting information from financial text[J]. Contemporary Accounting Research, 2023, 40(2): 806-841. [19] 张俊, 徐箭, 许沛东, 等. 人工智能大模型在电力系统运行控制中的应用综述及展望[J]. 武汉大学学报(工学版), 2023, 56(11): 1368-1379. ZHANG Jun, XU Jian, XU Peidong, et al. Overview and prospect of application of artificial intelligence large model in power system operation control[J]. Engineering Journal of Wuhan University, 2023, 56(11): 1368-1379. [20] TOUVRON H, LAVRIL T, IZACARD G, et al. Llama: open and efficient foundation language models[M/OL]. arXiv, 2023[2023-12-20]. https: //arxiv.org/abs/2302.13971. [21] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9. [22] TAORI R, GULRAJANI I, ZHANG Tianyi, et al. Stanford alpaca: An instruction-following llama model[EB/OL]. GitHub Repository, 2023[2023-12-12]. https: //github.com/tatsu-lab/stanford_alpaca. [23] DU Zhengxiao, QIAN Yujie, LIU Xiao, et al. General language model pretraining with autoregressive blank infilling[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin: ACL, 2022: 320-335. [24] GUU K, LEE K, TUNG Z, et al. Retrieval augmented language model pre-training[C]//Proceedings of the 37th International Conference on Machine Learning. PMLR, 2020: 3929-3938. [25] PAPINENI K, ROUKO S, WARD T, et al. Bleu: a method for automatic evaluation of machine translation[C]//Proceedings of the 40 th annual meeting of the Association for Computational Linguistics. Philadelphia, Pennsylvania, USA: ACL, 2002: 311-318. [26] LIN C Y. Rouge: A package for automatic evaluation of summaries[C]//Text summarization branches out. Barcelona, Spain: ACL, 2004: 74-81. [27] KORBAK T, ELSAHAR H, KRUSZEWSKI G, et al. Controlling conditional language models without catastrophic forgetting[C]//Proceedings of the 39th International Conference on Machine Learning. Baltimore, Maryland, USA: PMLR, 2022, 162: 11499-11528. [28] ZHANG Tianyi, KISHORE V, WU F, et al. Bertscore: evaluating text generation with bert[M/OL]. arXiv, 2019[2023-12-26]. https: //arxiv.org/abs/1904.09675. |