[1] 郭东明. 高性能制造[J]. 机械工程学报, 2022, 58(21):225-242. GUO Dongming. High performance manufacturing[J]. Journal of Mechanical Engineering, 2022, 58(21):225-242.
[2] 高晖, 李光耀, 韩旭. 材料参数直接反求方法研究[J]. 中国机械工程, 2008, 19(17):2081-2084. GAO Hui, LI Guangyao, HAN Xu. Research on material parameter direct inverse method[J]. China Mechanical Engineering, 2008, 19(17):2081-2084.
[3] HAN Xu, LIU Jie. Numerical simulation-based design[M]. Singapore:Springer Verlag, 2017.
[4] LIU Guirong. FEA-AI and AI-AI:Two-way deepnets for real-time computations for both forward and inverse mechanics problems[J]. International Journal of Computational Methods, 2019, 16(8):1950045.
[5] LIU Guirong, HAN Xu, XU Y, et al. Material characterization of functionally graded material by means of elastic waves and a progressive-learning neural network[J]. Composites Science and Technology, 2001, 61(10):1401-1411.
[6] LIU Guirong, LAM K, HAN Xu. Determination of elastic constants of anisotropic laminated plates using elastic waves and a progressive neural network[J]. Journal of Sound and Vibration, 2002, 252(2):239-259.
[7] DUAN Shuyong, WANG Li, WANG Fang, et al. A technique for inversely identifying joint stiffnesses of robot arms via two-way TubeNets[J]. Inverse Problems in Science and Engineering, 2021, 29(13):3041-3061.
[8] DUAN Shuyong, HAN Xu, LIU Guirong. Two-way trumpetnets and tubenets for identification of material parameters[M]//Artificial Intelligence for Materials Science. Springer, 2021:59-91.
[9] KUMAR V, MINZ S. Feature selection:a literature review[J]. Smart CR, 2014, 4(3):211-229.
[10] VELLIANGIRI S, ALAGUMUTHUKRISHNAN S. A review of dimensionality reduction techniques for efficient computation[J]. Procedia Computer Science, 2019, 165:104-111.
[11] JANECEK A, GANSTERER W, DEMEL M, et al. On the relationship between feature selection and classification accuracy[C]. New Challenges for Feature Selection in Data Mining and Knowledge Discovery, 2008:90-105.
[12] WANG Sa, WANG Keyong, ZHENG Lian.
Feature selection via analysis of relevance and redundancy[J]. Journal of Beijing Institute of Technology, 2008, 17(3):300-304.
[13] MARTINEZ A, KAK A. PCA versus LDA[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(2):228-233.
[14] YANG Jian, ZHANG D, FRANGI A, et al. Two-dimensional PCA:a new approach to appearance-based face representation and recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1):131-137.
[15] LI Tao, LI Mengyuan, GAO Quanxue, et al. F-norm distance metric based robust 2DPCA and face recognition[J]. Neural Networks, 2017, 94:204-211.
[16] HINTON G, SALAKHUTDINOV R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786):504-507.
[17] WANG Yasi, YAO Hongxun, ZHAO Sicheng, et al. Dimensionality reduction strategy based on auto-encoder[C]. Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, 2015:1-4.
[18] DUAN Shuyong, HOU Zhiping, HAN Xu, et al. A novel inverse procedure via creating TubeNet with constraint autoencoder for feature-space dimension-reduction[J]. International Journal of Applied Mechanics, 2021, 13(8):2150091.
[19] ZABALZA J, REN J, ZHENG J, et al. Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging[J]. Neurocomputing, 2016, 185:1-10.
[20] WANG Wei, HUANG Yan, WANG Yizhou, et al. Generalized autoencoder:a neural network framework for dimensionality reduction[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014:490-497.
[21] PHAM C, LADJAL S, NEWSON A. PCA-AE:Principal component analysis autoencoder for organising the latent space of generative networks[J]. Journal of Mathematical Imaging and Vision, 2022, 64(5):569-585.
[22] SPERDUTI A. Linear autoencoder networks for structured data[C]. International Workshop on Neural-Symbolic Learning and Reasoning, 2013.
[23] KUMAR A, SATTIGERI P, BALAKRISHNAN A. Variational inference of disentangled latent concepts from unlabeled observations[C]. International Conference on Learning Representations, 2018.
[24] PEARSON K. On lines and planes of closest fit to systems of points in space[J]. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 1901, 2(11):559-572.
[25] OU Jun, LI Yujian, SHEN Chenkai. Unlabeled PCA-shuffling initialization for convolutional neural networks[J]. Applied Intelligence, 2018, 48:4565-4576.
[26] LI Xuan, ZHANG Tao, ZHAO Xin, et al. Guided autoencoder for dimensionality reduction of pedestrian features[J]. Applied Intelligence, 2020, 50:4557-4567.
[27] RUMELHART D, HINTON G, WILLIAMS R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088):533-536.
[28] HUANG Lei, LIU Xianglong, LANG Bo, et al. Orthogonal weight normalization:Solution to optimization over multiple dependent Stiefel manifolds in deep neural networks[C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2018, 32(1).
[29] CHARTE D, CHARTE F, GARCÍA S, et al. A practical tutorial on autoencoders for nonlinear feature fusion:Taxonomy, models, software and guidelines[J]. Information Fusion, 2018, 44:78-96.
[30] RANJAN C. Understanding deep learning:Application in rare event prediction[M]. Atlanta, GA, USA:Connaissance Publishing, 2020.
[31] LIU Guirong, HAN Xu. Computational inverse techniques in nondestructive evaluation[M]. CRC Press, 2003.
[32] DUCHI J, HAZAN E, SINGER Y. Adaptive subgradient methods for online learning and stochastic optimization[J]. Journal of Machine Learning Research, 2011, 12(7):2121-2159.