• CN:11-2187/TH
  • ISSN:0577-6686

Journal of Mechanical Engineering ›› 2024, Vol. 60 ›› Issue (12): 194-206. doi: 10.3901/JME.2024.12.194

• Invited Column: Explainable and Trustworthy AI-driven Intelligent Monitoring and Diagnosis •


Trustworthy Mechanical Fault Diagnosis Using Uncertainty-aware Network

SHAO Haidong1, XIAO Yiming1, DENG Qianwang1, REN Yingying2, HAN Te3

  1. College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082;
    2. State Key Laboratory of Shield Machine and Boring Technology, Zhengzhou 450001;
    3. Department of Industrial Engineering, Tsinghua University, Beijing 100084
  • Received: 2023-07-13 Revised: 2023-12-25 Online: 2024-06-20 Published: 2024-08-23
  • About the authors: SHAO Haidong (corresponding author), male, born in 1990, PhD, associate professor and doctoral supervisor; his main research interests include fault diagnosis and remaining life prediction, and data mining and information fusion. E-mail: hdshao@hnu.edu.cn. XIAO Yiming, male, born in 1999, PhD candidate; his main research interests include intelligent mechanical fault diagnosis and industrial big data analysis. E-mail: xiaoym@hnu.edu.cn
  • Funding:
    Supported by National Key Research and Development Program of China (2020YFB1712100), National Natural Science Foundation of China (52275104), Science and Technology Innovation Program of Hunan Province (2023RC3097), and Hunan Provincial Science Fund for Excellent Young Scholars (2021JJ20017).




Abstract: The black-box nature of deep learning-based fault diagnosis methods makes it difficult for them to provide trustworthy and interpretable diagnostic results. Most existing research on interpretable fault diagnosis focuses either on developing interpretable modules embedded in deep models to give the results some physical meaning, or on working backwards from the results to uncover the deeper logic behind the model's decisions; research on how to quantify the uncertainty in diagnostic results and explain its sources and composition remains very limited. Uncertainty quantification and decomposition can not only provide the confidence of diagnostic results, but also identify the sources of unknown factors in the data, ultimately guiding improvements in the interpretability of diagnostic models. Therefore, embedding Bayesian variational learning into the Transformer is proposed to develop an uncertainty-aware network for trustworthy mechanical fault diagnosis. A variational attention mechanism is designed and the corresponding optimization objective function is defined, which models the prior and variational posterior distributions of the attention weights, thereby endowing the network with the ability to perceive uncertainty. An uncertainty quantification and decomposition scheme is formulated to characterize the confidence of diagnostic results and to separate epistemic from aleatoric uncertainty. Taking intelligent fault diagnosis of planetary gearboxes as an example, the feasibility of the proposed method for trustworthy fault diagnosis is fully validated in out-of-distribution generalization scenarios where the test data contain samples with unknown fault modes, unknown noise levels, and unknown operating conditions.
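The two ideas summarized in the abstract can be illustrated with a minimal NumPy sketch. The Gaussian perturbation of attention logits below is only a hypothetical stand-in for the variational posterior (the paper's actual parameterization is not given in the abstract), while the decomposition follows the standard entropy-based scheme in Bayesian deep learning: total predictive entropy splits into expected entropy (aleatoric) plus mutual information between prediction and weights (epistemic). Function names such as `decompose_uncertainty` are illustrative, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1):
    """Shannon entropy (nats) along the class axis."""
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def variational_attention(q, k, v, sigma=0.5):
    """One stochastic attention pass: perturb the scaled dot-product
    logits with Gaussian noise (hypothetical reparameterized posterior)
    before the softmax, so repeated passes yield different weights."""
    logits = q @ k.T / np.sqrt(q.shape[-1])
    noisy = logits + sigma * rng.normal(size=logits.shape)
    return softmax(noisy) @ v

def decompose_uncertainty(mc_probs):
    """mc_probs: (T, C) array of T Monte Carlo predictive distributions.
    Returns (total, aleatoric, epistemic), where
      total     = H[ E_w p(y|x,w) ]   (predictive entropy)
      aleatoric = E_w H[ p(y|x,w) ]   (expected data uncertainty)
      epistemic = total - aleatoric   (mutual information I(y; w | x))."""
    mean_p = mc_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(mc_probs, axis=-1).mean()
    return total, aleatoric, total - aleatoric

# Agreeing, confident MC samples: near-zero epistemic uncertainty.
agree = np.tile([0.98, 0.01, 0.01], (20, 1))
# Disagreeing but individually confident samples: epistemic dominates,
# mimicking an out-of-distribution input with an unknown fault mode.
disagree = np.array([[0.98, 0.01, 0.01], [0.01, 0.98, 0.01]] * 10)
```

Under this scheme, a high epistemic share flags inputs the model has not learned about (e.g. unknown fault modes), while a high aleatoric share points to noise inherent in the data, which is exactly the source attribution the abstract describes.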

Key words: trustworthy fault diagnosis, uncertainty-aware network, variational attention, uncertainty quantification and decomposition, Bayesian deep learning

CLC number: