2026 Vol. 46, No. 5
2026, 46(5): 051411.
doi: 10.11883/bzycj-2025-0238
Abstract:
Rapid and accurate assessment of blast loads in complex urban blocks is critical for efficient blast-resistant structural design and post-disaster damage evaluation. However, traditional methods, including empirical formulas, physical models, and numerical simulations, struggle to simultaneously achieve high computational efficiency and prediction accuracy. Furthermore, existing deep learning-based blast load prediction models are difficult to apply to complex urban block scenarios. To achieve rapid and accurate assessment of blast loads in complex urban street blocks, a method driven by the fusion of physical information and data is proposed. The core idea of the method is a “spatial partitioning and progressive inference” strategy, which involves constructing distinct rapid prediction models for “the detonation street” and “non-detonation streets”. These models then collaborate via their shared boundary pressures to predict the spatiotemporal evolution of the pressure field across the entire urban block. The two network models incorporate the results from the method of images, signed distance fields, and energy density factors to integrate key physical features of the flow field. For the architectures, the two models adopt a 3D-UNet and a cascaded network composed of a 2D-UNet and a 3D-UNet, respectively. The target outputs for both networks were generated using a validated numerical simulation method and were then used to train the models. Evaluation of the models' predictive performance demonstrates that the proposed method accurately predicts the spatiotemporal evolution of the pressure field. The relative error between the predicted flow field and the numerical simulation results is within 20% in both detonation and non-detonation streets. Moreover, the method effectively captures the pressure-time histories at specified locations.
The inference time of the proposed dual-network collaborative method is approximately 2% of the computation time of the corresponding numerical simulation, and the flow field storage cost for a single time step is less than 0.2% of a D3PLOT file, thereby significantly reducing computational and storage costs. The research provides a novel method for the rapid assessment of blast loads in large-scale, complex urban blocks, offering efficient decision-making support for the blast-resistant design and evaluation of urban buildings.
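The signed distance field named above is one of the physical feature channels supplied to the networks. As an illustration only (the paper does not give its geometry or grid; the rectangular footprint, domain size, and resolution below are assumptions), such a channel for an axis-aligned building footprint might be computed as:

```python
import numpy as np

def signed_distance_to_box(xs, ys, box_min, box_max):
    """Signed distance from grid points to an axis-aligned rectangular
    footprint: negative inside the building, positive in the street."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    # Per-axis distance from each point to the box faces
    dx = np.maximum(box_min[0] - X, X - box_max[0])
    dy = np.maximum(box_min[1] - Y, Y - box_max[1])
    outside = np.hypot(np.maximum(dx, 0.0), np.maximum(dy, 0.0))
    inside = np.minimum(np.maximum(dx, dy), 0.0)
    return outside + inside

# Toy 10 m x 10 m domain with a 4 m x 4 m building at its centre
xs = np.linspace(0.0, 10.0, 101)
ys = np.linspace(0.0, 10.0, 101)
sdf = signed_distance_to_box(xs, ys, (3.0, 3.0), (7.0, 7.0))
print(sdf[50, 50])   # centre of the building: negative
print(sdf[0, 0])     # far corner of the domain: positive
```

Stacking one such channel per obstacle (or taking the minimum over buildings) gives the network explicit knowledge of street geometry without re-meshing.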
2026, 46(5): 051421.
doi: 10.11883/bzycj-2025-0103
Abstract:
To accurately characterize the stress-strain constitutive relationship of metal materials under high strain-rate conditions, a novel, high-precision constitutive-relationship-prediction model based on Graph Neural Networks (GNNs) and Kolmogorov-Arnold Networks (KANs) was developed. Traditional Johnson-Cook (JC) models often fail to account for the coupling effects among temperature, strain rate, and strain, all of which are crucial for describing the dynamic behavior of materials under extreme conditions. This limitation was addressed by constructing graph-structured data in the GNN model to capture the nonlinear correlations of multidimensional parameters and by leveraging the Kolmogorov-Arnold theorem in the KAN model to achieve precise mapping of high-dimensional input spaces. The research methodology involved several key steps. Experimental data from ODS copper subjected to high-strain-rate compression were collected using a split Hopkinson pressure bar (SHPB) system and subsequently preprocessed. The dataset included temperature, strain rate, strain, and stress. In the GNN model, when temperature and strain rate were held constant, nodes were connected sequentially based on strain values to form edges. When temperature was held constant, a reasonable threshold was established between nodes with adjacent strain rates, and nodes within this threshold were connected to form edges. The GNN employed a Message Passing Neural Network (MPNN) architecture to learn and predict material properties. Model parameters were optimized using the Adam optimizer, with the Root Mean Squared Error (RMSE) serving as the loss function. The KAN model was constructed based on the Kolmogorov-Arnold representation theorem and consisted of multiple KAN-Linear layers. Each KAN-Linear unit included base weights and spline weights. Base weights handled linear relationships through traditional linear transformations, while spline weights managed nonlinear mappings via B-spline interpolation. 
Both models were trained on the preprocessed dataset, and their performance was evaluated using the mean relative error (MRE), root mean squared error (RMSE), and coefficient of determination (R²). The GNN model achieved an average MRE of 9.2% with an R² value exceeding 0.95, while the KAN model recorded an MRE of 9.1% with a similar R² value. Both models significantly outperformed the JC model, which had an MRE of 38% and an R² value of 0.75. Furthermore, the predictive capabilities of the GNN and KAN models were validated through finite element simulations. The simulation results demonstrated that the stress-strain distributions predicted by the GNN and KAN models were more consistent with theoretical expectations than those predicted by the JC model, particularly in capturing the material's softening phase. The findings highlight the potential of integrating advanced machine learning techniques, such as GNNs and KANs, into materials science to enhance the accuracy and efficiency of constitutive modeling. These models offer a promising alternative to traditional empirical models and hold significant implications for engineering applications in aerospace, automotive, and other industries where materials are subjected to high strain rates.
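The three evaluation metrics reported above have standard definitions; a minimal sketch (the stress values below are illustrative, not the ODS-copper data):

```python
import numpy as np

def mre(y_true, y_pred):
    """Mean relative error, in percent."""
    return 100.0 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

stress_true = np.array([300.0, 320.0, 335.0, 345.0])   # MPa, illustrative
stress_pred = np.array([310.0, 318.0, 330.0, 350.0])
print(mre(stress_true, stress_pred),
      rmse(stress_true, stress_pred),
      r2(stress_true, stress_pred))
```

Note that MRE penalizes low-stress points more strongly than RMSE, which is why both are reported alongside R².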
2026, 46(5): 051422.
doi: 10.11883/bzycj-2025-0259
Abstract:
Metastable high-entropy alloys (HEAs) have attracted considerable attention due to their exceptional mechanical properties at high strain rates. However, their engineering applications under high strain rates are limited by an inadequate understanding of the relationship between microstructure and impact response. An end-to-end deep learning framework was implemented, combining the crystal plasticity finite element (CPFE) method with a convolutional neural network (CNN) to elucidate the mapping between microstructure and shock response. A crystal plasticity constitutive model, which couples dislocation slip and martensitic transformation mechanisms, was developed and validated against experimental results, confirming its effectiveness. Based on this constitutive model, a dataset for training the deep learning model was generated, including the complete stress-strain response and martensite volume fraction evolution of metastable HEAs with corresponding textures and loading conditions at high strain rates. A two-branch CNN model was used to extract microstructural features: its input is microstructural information in image format together with the loading conditions, and its output consists of two branches corresponding to the stress-strain curve and the evolution of the martensite volume fraction. The collected dataset was used to train the CNN model. The results show that the model can accurately predict the shock response of metastable HEAs under high-strain-rate conditions. This study demonstrates that the deep learning framework, while maintaining predictive accuracy, offers a significant computational efficiency advantage over CPFE simulations. It provides a novel approach for efficiently assessing the mechanical behavior of metastable high-entropy alloys under high strain rates.
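The two-branch idea, one shared encoder feeding a stress-curve head and a phase-fraction head, can be sketched as a toy forward pass. Everything below (layer sizes, random weights, the 32×32 image, the loading vector) is an illustrative assumption, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: a 32x32 "microstructure image" (e.g. an orientation map) plus
# a loading-condition vector (strain rate, temperature); both illustrative.
image = rng.random((32, 32))
loading = np.array([1e3, 293.0])   # hypothetical strain rate (1/s), temperature (K)

def forward(image, loading, n_steps=50):
    """Minimal two-branch sketch: a shared encoder feeds two output heads,
    one for the stress-strain curve and one for the martensite volume
    fraction. Weights are random here; in the framework described above
    they would be learned from CPFE-generated data."""
    x = np.concatenate([image.ravel(), np.log10(loading)])
    W_shared = rng.standard_normal((64, x.size)) * 0.01
    h = np.tanh(W_shared @ x)                           # shared features
    W_stress = rng.standard_normal((n_steps, 64)) * 0.01
    W_phase = rng.standard_normal((n_steps, 64)) * 0.01
    stress_curve = W_stress @ h                         # branch 1: stress per strain level
    martensite = 1.0 / (1.0 + np.exp(-(W_phase @ h)))   # branch 2: fraction in (0, 1)
    return stress_curve, martensite

stress_curve, martensite = forward(image, loading)
print(stress_curve.shape, martensite.shape)
```

The sigmoid on the second head is one natural way to keep the predicted volume fraction physically bounded between 0 and 1.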
2026, 46(5): 051423.
doi: 10.11883/bzycj-2025-0324
Abstract:
A novel deep neural network based on U-Net and Transformer was proposed to predict the growth of micro voids in single-crystal metals. The dataset was constructed from molecular dynamics (MD) simulation results of a single-crystal copper atom model with initial double ellipsoidal voids. A data preprocessing scheme based on a background mesh was proposed to perform local statistics on the simulation results. The information obtained from the simulation results, such as void morphology, dislocation distribution, and von Mises effective stress, was converted into local statistics on the background mesh. These statistics were then converted into pixel-matrix format as the input of the deep neural network. Multiple data samples can be generated from the results of a single MD simulation, which significantly reduces the computational resources required for dataset generation. The samples encompass typical stages of void growth, which enables the network to capture key features and facilitates data augmentation. The deep neural network model consists of four parts: a U-Net composed of down-sampling and up-sampling networks, a generation model, a query network model, and a regression prediction network. The model input includes both physical information and positional information. The output is the predicted physical information for the next time step. The loss function is a superposition of the loss functions for each predicted variable. Numerical examples demonstrate that this deep-learning method can accurately predict the global porosity ratio, dislocation density, and von Mises stress during the growth of micro voids in single-crystal metals. The network prediction time can be two orders of magnitude shorter than that of the MD simulation.
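The background-mesh preprocessing amounts to binning per-atom quantities onto a regular grid. A minimal 2D sketch (the synthetic point cloud, mesh resolution, and void threshold are assumptions for illustration, not the paper's MD data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative MD snapshot: atom positions in a 20x20 (nm) cross-section,
# with a circular void of radius 3 at the centre left empty.
pts = rng.uniform(0.0, 20.0, size=(20000, 2))
pts = pts[np.hypot(pts[:, 0] - 10.0, pts[:, 1] - 10.0) > 3.0]

# Background mesh: count atoms per cell. Empty or sparse cells mark the
# void, giving a pixel-matrix channel akin to the void-morphology input.
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                              bins=16, range=[[0, 20], [0, 20]])
mean_density = counts.mean()
void_mask = counts < 0.5 * mean_density   # assumed threshold for "void" cells
porosity = void_mask.mean()               # fraction of void cells
print(porosity)
```

The same binning applied to dislocation length or per-atom stress instead of counts would yield the other input channels, one pixel matrix per physical field.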
2026, 46(5): 051424.
doi: 10.11883/bzycj-2025-0254
Abstract:
The Riedel-Hiermaier-Thoma (RHT) constitutive model has been widely applied in tunnel blasting, impact-resistant structural design, and underground protective engineering due to its strong capability to describe the mechanical behavior of brittle materials such as rock and concrete under high-strain-rate and high-pressure conditions. However, the model involves a large number of nonlinear parameters, some of which are difficult to determine experimentally because of the high cost of testing. These key parameters are often adjusted through trial-and-error methods, which limit both modeling efficiency and simulation accuracy. To overcome these limitations, a comprehensive parameter inversion framework was developed for 16 difficult-to-calibrate parameters of the RHT model. The framework integrated the PAWN (Pianosi-Wagener) global sensitivity analysis method with intelligent optimization algorithms and coupled MATLAB with the ANSYS/LS-DYNA simulation platform. The area difference of the stress-strain curve was introduced as the core evaluation metric, and a batch result-extraction and automated three-wave alignment technique was developed. Based on these developments, an efficient and reliable RHT parameter inversion system was established, achieving, for the first time, a global sensitivity analysis (GSA) and automated inversion of key parameters in the RHT model. The results show that, among the 16 parameters analyzed, only eight exert a significant influence on the model response. The intelligent optimization-based inversion achieved relative errors ranging from 0.23% to 9.28%, and the reliability of the calibrated parameters was verified through semicircular bend split Hopkinson pressure bar (SCB-SHPB) tests and scaled blasting experiments. The proposed method significantly enhances both the efficiency and accuracy of RHT parameter calibration without the need to construct large sample datasets, and it is applicable to a wide range of loading conditions.
Compared with traditional calibration approaches, the required inversion accuracy was achieved in fewer than 15 iterations, meeting the dual demands of computational efficiency and precision. Overall, the proposed framework provides a new and effective approach for sensitivity analysis and parameter calibration of dynamic constitutive models, demonstrating strong engineering applicability and practical value.
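The core evaluation metric, the area difference between stress-strain curves, is easy to state concretely. A sketch of the kind of objective the automated inversion loop would minimise (the curves below are illustrative functions, not the paper's SHPB data):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal-rule integral of y over x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def curve_area_difference(strain, stress_exp, stress_sim):
    """Area enclosed between the experimental and simulated stress-strain
    curves; a smaller value indicates a better candidate parameter set."""
    return trapz(np.abs(stress_sim - stress_exp), strain)

strain = np.linspace(0.0, 0.02, 201)
stress_exp = 60e3 * strain * np.exp(-strain / 0.01)   # illustrative curve, MPa
stress_sim = 1.05 * stress_exp                        # a candidate RHT parameter set
obj = curve_area_difference(strain, stress_exp, stress_sim)
print(obj)
```

Because the metric integrates over the whole curve rather than matching a single peak stress, it rewards parameter sets that reproduce both the loading and softening branches.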
2026, 46(5): 051431.
doi: 10.11883/bzycj-2025-0154
Abstract:
Gas leakage and explosion accidents pose a serious threat to public safety. A critical prerequisite for accurately predicting the explosive effects of combustible gas leakage lies in determining the concentration distribution following the leakage. To develop a real-time, full-field spatiotemporal prediction model for combustible gas leakage and diffusion, and to achieve efficient prediction of the equivalent gas cloud volume, a novel graph neural network model based on a dual-neural-network architecture and a multi-stage training strategy, named multi-stage dual graph neural network (MSDGNN), was proposed. The MSDGNN model consists of two synergistic sub-networks: (1) a concentration network (Ncon), which establishes the mapping relationship between the concentration fields of two consecutive timesteps, and (2) a volume network (Nvol), which generates the equivalent gas cloud volume at each timestep to provide a quantitative metric for explosion risk assessment. To further enhance model performance, a multi-stage progressive training strategy was developed to jointly optimize the dual networks. Experimental results demonstrate that compared with the mesh-based graph network (MGN), the dual-network architecture effectively decouples the tasks of concentration field prediction and equivalent gas cloud volume prediction. This approach significantly mitigates the interference of weight factors in single-objective loss functions during the training process. The multi-stage training strategy, through stepwise parameter optimization, addresses the issue of insufficient data fitting encountered in traditional methods, significantly reducing the mean absolute percentage error (MAPE) for concentration fields and equivalent gas cloud volumes from 49.47% and 108.93% to 7.55% and 9.07%, respectively.
Furthermore, the generalization error of MSDGNN for concentration fields and equivalent gas cloud volumes is reduced from 41.18% and 38.81% to 8.01% and 14.92%, respectively. In addition, MSDGNN exhibits robust prediction performance even when key parameters such as leakage rate, leakage height, and leakage duration exceed the range of training data. Compared with numerical simulation methods, the proposed model achieves a three-order-of-magnitude improvement in computational efficiency while maintaining prediction accuracy, providing an effective real-time analytical tool for combustible gas safety monitoring.
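The concentration network advances the field one timestep at a time by aggregating information over the mesh graph. One message-passing step can be sketched as follows (the chain graph, fixed weights, and feature sizes are illustrative assumptions, not the MSDGNN architecture):

```python
import numpy as np

def message_passing_step(A, H, W):
    """One sketch step of neighbourhood aggregation on a mesh graph:
    each node averages its neighbours' features and applies a mapping.
    A is the adjacency matrix, H the node features (e.g. concentration),
    W a weight matrix that would be trained in the real model."""
    deg = A.sum(axis=1, keepdims=True)
    H_agg = (A @ H) / np.maximum(deg, 1.0)          # mean over neighbours
    return np.tanh(np.concatenate([H, H_agg], axis=1) @ W)

# Tiny 4-node chain mesh: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[1.0], [0.0], [0.0], [0.0]])          # initial "concentration" at node 0
W = np.full((2, 1), 0.5)                            # illustrative fixed weights
H1 = message_passing_step(A, H, W)
print(H1.ravel())
```

After one step the signal has spread only to node 0's immediate neighbour, which is why such networks stack several message-passing rounds per timestep.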
2026, 46(5): 051432.
doi: 10.11883/bzycj-2025-0320
Abstract:
To overcome the high computational cost of traditional ballistic prediction methods and their inability to satisfy rapid assessment requirements, this study proposes an efficient predictive model for the penetration ballistics of multi-layer thin concrete targets based on a convolutional neural network (CNN). First, a numerical simulation approach, validated by experiments, was employed to analyze and confirm the significant influence of projectile angular velocity on trajectory deflection, and this parameter was consequently identified as a key projectile-target engagement condition. By systematically varying the initial conditions, a dataset comprising 127 cases of single-layer thin concrete target penetration was constructed. On this basis, a high-accuracy ballistic prediction model for single-layer targets was developed, taking projectile parameters, target parameters, and engagement conditions as inputs, and post-impact projectile motion parameters as outputs. Furthermore, by incorporating rigid-body kinematic equations describing the projectile flight between successive targets, a complete iterative penetration-flight prediction framework was established, enabling rapid prediction of ballistic characteristics for multi-layer spaced thin concrete targets. The results indicate that an increase in counterclockwise angular velocity leads to a positive increase in the radial residual velocity behind the target and an upward deflection of the trajectory, whereas clockwise angular velocity produces the opposite effect. These findings demonstrate that projectile angular velocity is a critical and non-negligible factor in thin-target penetration. For single-layer target cases, the model exhibited strong predictive capability, with the mean MSE values of the training and test sets stabilizing at approximately 0.0012 and 0.0019, respectively.
For multi-layer target predictions, while maintaining high accuracy (with a maximum relative error of 10.65% in residual velocity and a maximum absolute error of 3.47° in attitude angle), the computational time of the proposed method was only about 0.05% of that required by conventional numerical simulation. This study not only elucidates the influence of the key factor of projectile angular velocity on penetration ballistics, but also proposes a novel “data-driven and physics-equation fusion” modeling paradigm, providing an important methodological reference for weapon damage effectiveness assessment and design optimization.
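The rigid-body flight between successive targets is ordinary projectile kinematics. A minimal sketch of the inter-layer step in such an iterative framework (the state variables, 2D simplification, and all numbers are illustrative assumptions, not the paper's formulation):

```python
import math

def fly_between_layers(x, y, v, theta, spacing, g=9.81):
    """Rigid-body flight of the projectile across the gap between two
    target layers: constant horizontal speed with a ballistic drop.
    theta is the velocity angle from horizontal, in radians."""
    vx = v * math.cos(theta)
    vy = v * math.sin(theta)
    t = spacing / vx                     # time to cross the layer spacing
    y_new = y + vy * t - 0.5 * g * t**2  # vertical position after the gap
    vy_new = vy - g * t
    return (x + spacing, y_new,
            math.hypot(vx, vy_new),      # new speed
            math.atan2(vy_new, vx))      # new flight angle

# Residual state behind layer 1 (illustrative numbers), 2 m to layer 2
state = fly_between_layers(0.0, 0.0, 800.0, math.radians(2.0), 2.0)
print(state)
```

In the iterative loop, the CNN would supply the post-impact residual state at each layer, and this kinematic update would carry it to the next impact point.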
2026, 46(5): 051433.
doi: 10.11883/bzycj-2025-0326
Abstract:
To investigate the spatial dispersion characteristics of behind-armor debris (BAD) generated by the penetration of tantalum alloy explosively-formed projectiles (EFPs) into steel targets, a comprehensive study combining experimental testing, numerical simulation, and machine learning prediction was performed. First, X-ray imaging and fragment-distribution experiments were conducted on 45 steel targets penetrated by tantalum alloy EFPs to obtain initial experimental data. Subsequently, the finite element-smoothed particle hydrodynamics (FE-SPH) fixed-coupling method, which had been validated by the experimental data, was employed to simulate the perforation process. These numerical simulations were carried out under a wide range of working conditions, specifically varying the projectile velocity and target thickness. Through this process, a comprehensive dataset describing the spatial dispersion of BAD was generated. Finally, to achieve rapid prediction capabilities, a support vector regression (SVR) model was established. The Bayesian optimization algorithm was utilized to train the model using the dense-fragment dispersion angle data extracted from the simulation dataset, thereby creating a robust predictive model for the spatial dispersion of BAD. The experimental results indicate that the morphology of the BAD cloud exhibits a typical truncated-ellipsoidal shape. Owing to the density difference between tantalum and steel, fragments composed of different materials display distinct radial expansion behaviors, i.e., steel fragments are distributed along the outer surface of the ellipsoid whereas tantalum fragments are concentrated on the inner surface. Spatially, the debris is primarily concentrated within a circular region surrounding the central perforation area of the witness plate. The FE-SPH fixed-coupling method successfully reproduced the BAD formation process, yielding debris-cloud morphologies that closely match the experimental results. 
The relative error between the simulated and measured mean maximum fragment dispersion angles is less than 10%, thereby confirming the accuracy of the numerical simulations. Furthermore, the analysis reveals that the Bayesian-optimized SVR model enables accurate prediction of dense-fragment dispersion angles under varying target thicknesses and EFP impact velocities, with maximum relative errors below 10%. Based on these predictions, the damage area on witness plates within a certain distance behind the target can be rapidly estimated.
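As a rough illustration of the kind of surrogate described above, the sketch below fits a scikit-learn SVR on synthetic (impact velocity, target thickness) → dispersion-angle data. A plain grid search stands in for the paper's Bayesian optimizer, and the data-generating function is an assumption, not the simulation dataset:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulation dataset: dense-fragment dispersion
# angle (deg) as a smooth function of EFP impact velocity (m/s) and target
# thickness (mm). Purely illustrative, not the paper's data.
X = rng.uniform([1500.0, 10.0], [2500.0, 40.0], size=(80, 2))
y = 20.0 + 0.005 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0.0, 0.3, 80)

# Grid search over SVR hyperparameters, standing in for Bayesian optimization.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(
    model,
    {"svr__C": [1.0, 10.0, 100.0], "svr__epsilon": [0.1, 0.5]},
    cv=5,
)
search.fit(X, y)
angle = search.predict([[2000.0, 25.0]])[0]  # predicted angle, mid-range condition
```

Once fitted, such a model returns a dispersion angle in milliseconds, which is what enables the rapid damage-area estimation the abstract describes.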
2026, 46(5): 051434.
doi: 10.11883/bzycj-2025-0152
Abstract:
Shock wave pressure sensor acquisition systems exhibit both high- and low-frequency dynamic characteristics; however, traditional transfer-function-based modeling and compensation methods cannot achieve accurate full-band representation, thereby limiting further improvements in compensation accuracy and reconstructed signal fidelity under complex dynamic conditions. To overcome this limitation, a fusion modeling method integrating the sparrow search algorithm (SSA), variational mode decomposition (VMD), and a long short-term memory (LSTM) network was developed to enhance the dynamic characteristic modeling accuracy of shock wave pressure acquisition systems. In this method, SSA was employed to globally optimize the mode number and penalty factor of VMD using a comprehensive fitness function that combined sample entropy and the Pearson correlation coefficient, thereby improving the adaptability of the decomposition to nonstationary response signals contaminated by oscillations and noise. With the optimized parameters, VMD decomposed the sensor response signal into multiple intrinsic modal components; the frequency-domain characteristics of each component were then analyzed, and correlation coefficients together with jump durations were calculated and compared according to the spectral distribution characteristics of shock waves to identify the signal types contained in each mode. Based on this identification, high-frequency oscillatory modes and noise modes were discarded, enabling reconstruction of the effective shock wave signal. A sinusoidal signal generator was used to obtain pressure acquisition waveforms in the range of 0.1–10 Hz; the amplitudes were converted into decibels to form the low-frequency magnitude-frequency characteristic curve, and a frequency-domain rational function fitting procedure was applied to establish the low-frequency transfer function. 
Using this transfer function, low-frequency dynamic compensation was performed on the reconstructed signal, and the compensated low-frequency signal was combined with the original sensor response to construct an input-output dataset that simultaneously preserved the compensated dynamic information and the original response characteristics. On the basis of this dataset, SSA was further used to optimize key LSTM hyperparameters, including the number of hidden units, the maximum number of training epochs, and the initial learning rate, and an LSTM network was trained to model the nonlinear, time-dependent, and memory-dependent behavior of the acquisition system, thereby achieving fusion modeling of high- and low-frequency dynamic characteristics within a unified framework. Simulation analyses and live explosion tests demonstrated that, compared with the traditional inverse-filtering compensation method, the proposed approach reduced the mean absolute percentage error (MAPE) between the compensated signal and the reference pressure curve by approximately 75% and decreased oscillation residuals by about 38%, satisfying the accuracy requirements for input pressure signals; compared with a single LSTM-based modeling approach, the VMD-LSTM fusion model reduced the overall modeling error to 13%, indicating improved accuracy and robustness. These results indicate that the SSA-optimized VMD decomposition, transfer-function-based low-frequency compensation, and SSA-tuned LSTM fusion modeling together provide an effective full-band modeling strategy, and that the proposed framework offers a robust solution for accurate dynamic characteristic modeling and compensation in shock wave pressure sensor acquisition systems.
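The mode-screening step described above (keeping components well correlated with the raw response and discarding oscillatory and noise modes) can be sketched in NumPy. The correlation threshold and the toy signal are illustrative assumptions, not values from the paper:

```python
import numpy as np

def select_and_reconstruct(signal, modes, r_min=0.3):
    """Keep decomposed components whose Pearson correlation with the raw
    response exceeds r_min, then sum them to reconstruct the effective
    signal. r_min is an illustrative threshold."""
    kept = []
    for m in modes:
        r = np.corrcoef(signal, m)[0, 1]
        if abs(r) >= r_min:
            kept.append(m)
    return np.sum(kept, axis=0)

# Toy example: an idealized shock decay contaminated by a 300 Hz oscillation.
t = np.linspace(0.0, 1.0, 1000)
trend = np.exp(-5.0 * t)                        # slow pressure decay
noise = 0.05 * np.sin(2 * np.pi * 300.0 * t)    # oscillatory contamination
raw = trend + noise
rec = select_and_reconstruct(raw, [trend, noise])  # retains only the decay
```

In the paper's pipeline the modes come from SSA-optimized VMD rather than being known exactly, but the screening logic is the same.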
2026, 46(5): 051435.
doi: 10.11883/bzycj-2025-0179
Abstract:
The efficient and accurate prediction of structural responses in reinforced concrete components under blast loading plays a critical role in emergency repair decisions, structural strengthening, and protective design. Existing rapid methods for calculating structural response, such as analytical models and lightweight data-driven approaches, are computationally efficient; however, they are limited in accurately resolving three-dimensional structural response fields. A graph neural network (GNN)-based model for the rapid prediction of damage in reinforced concrete (RC) columns is proposed in this paper. By leveraging the neighborhood node aggregation mechanism of GNNs, the model efficiently transmits mechanical correlation information within the structure. This allows the model to establish an end-to-end mapping between blast load inputs and the 3D structural response of the component, enabling rapid prediction of the column’s damage state. Furthermore, a multi-scenario feature coupling training strategy is introduced to significantly enhance the model’s generalization capability. This strategy enables the GNN model to effectively adapt to variations in key design and loading parameters, including reinforcement ratios, explosive charge weights, and blast locations. The results demonstrate that the proposed model achieves a prediction time of merely 55 milliseconds per instance, representing a computational speed improvement of four orders of magnitude over conventional methods; meanwhile, the prediction error remains below 3.33%. Furthermore, it delivers high-precision damage predictions across various blast scenarios. This study highlights the significant potential of GNN-based approaches in predicting blast-induced damage and offers an innovative, data-driven solution for rapid structural assessment and protective design in the field of blast engineering.
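A minimal NumPy sketch of the neighborhood node aggregation mechanism the abstract relies on; the toy graph, features, and weights are placeholders rather than the trained model:

```python
import numpy as np

def aggregate(adj, feats, weight):
    """One message-passing step: each node averages its neighbors' (and its
    own) features, then applies a learned linear map and ReLU. This is how
    mechanical correlation information propagates through the mesh graph."""
    a_hat = adj + np.eye(adj.shape[0])     # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)
    h = (a_hat @ feats) / deg              # mean neighborhood aggregation
    return np.maximum(h @ weight, 0.0)     # linear map + ReLU

# Toy chain of 4 mesh nodes, 3 input features per node, 2 output channels.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.arange(12, dtype=float).reshape(4, 3)
w = np.full((3, 2), 0.1)
out = aggregate(adj, feats, w)
```

Stacking several such steps lets information travel across the column mesh, which is what makes the end-to-end mapping from blast load to 3D response field possible.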
2026, 46(5): 051436.
doi: 10.11883/bzycj-2025-0343
Abstract:
With the increasing application of engineering structures under extreme conditions, accurately predicting their fracture behavior has become a critical challenge in materials science and fracture mechanics. Column-shell structures, as typical load-bearing components, are highly sensitive to crack initiation and propagation, which directly affect their safety and reliability. Although traditional finite element methods can provide accurate fracture evolution simulations, their high computational cost limits applicability in rapid prediction scenarios. To address this issue, a hybrid framework that integrates the phase-field method with the Fourier neural operator (FNO) is proposed for predicting the fracture evolution of column-shell structures. In the proposed framework, the phase-field method is first employed to describe crack initiation, propagation, and possible coalescence in a continuous manner, avoiding explicit crack tracking and enabling physically consistent simulations. Based on this formulation, a finite element model of the column-shell structure is established to generate high-fidelity fracture evolution data under various conditions, including different critical energy release rates, geometric configurations, and loading scenarios. Subsequently, a data-driven learning framework is developed using the FNO to approximate the nonlinear mapping between input parameters and fracture responses. The input of the model includes the spatial distribution of the critical energy release rate, geometric features, and applied loading conditions, while the output corresponds to the evolving phase-field variable that characterizes crack growth over time. A series of FNO models are constructed and trained in a sequential manner to separately capture crack initiation and propagation stages, forming a coupled prediction framework. 
The training process is carried out using the generated dataset, with appropriate normalization and validation strategies to ensure model robustness and generalization capability. The results demonstrate that the proposed method achieves high prediction accuracy under random critical energy release rates, varying geometries, and complex loading conditions, while significantly reducing computational cost compared to conventional finite element simulations. Once trained, the model enables near real-time prediction of fracture evolution.
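The core of a Fourier-layer pass (transform to frequency space, keep the lowest modes, multiply by learned weights, transform back) can be sketched in 1D; the weights here are random placeholders for trained ones:

```python
import numpy as np

def spectral_conv_1d(x, weights, n_modes):
    """One FNO spectral convolution in 1D: FFT the field, truncate to the
    lowest n_modes, apply complex weights mode-by-mode, inverse FFT."""
    x_ft = np.fft.rfft(x)
    out_ft = np.zeros_like(x_ft)
    out_ft[:n_modes] = x_ft[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_ft, n=x.size)

rng = np.random.default_rng(1)
field = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 64, endpoint=False))
w = rng.normal(size=33) + 1j * rng.normal(size=33)  # placeholder weights
y = spectral_conv_1d(field, w, n_modes=8)
```

The full operator stacks such layers (in 2D/3D, with pointwise nonlinearities between them) to map the critical-energy-release-rate field and loading to the evolving phase-field variable.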
2026, 46(5): 051441.
doi: 10.11883/bzycj-2025-0282
Abstract:
Inspired by the hybrid topology design that integrates Miura origami and star-shaped honeycomb, this study proposes a novel origami metamaterial sandwich structure and employs machine learning to predict its low-velocity impact response and perform multi-objective optimization. Through drop-weight impact experiments and finite element simulations, the dynamic mechanical response and deformation failure modes of the sandwich under low-velocity impact are systematically investigated. The results demonstrate that the origami-inspired topologies effectively transform the instantaneous complete fracture of traditional honeycombs into progressive crushing failure, thereby significantly enhancing impact resistance. Subsequently, a residual connection-enhanced deep learning model is developed, enabling rapid and precise end-to-end prediction of the complete low-velocity impact response, with computational efficiency substantially surpassing that of finite element simulations. Based on the developed deep learning model, parametric analysis of the key angles reveals their effects on the impact response and effective density, particularly the angle-induced load redistribution between panel tension-compression deformation and crease bending deformation, which enables functional switching between load-bearing and cushioning modes and provides a mechanistic basis for the active tunability of the impact response and failure modes. Furthermore, by integrating reinforcement learning and Pareto front analysis, the trained deep learning model serves as a surrogate model to achieve lightweight multi-objective optimization tailored to load-bearing and impact-mitigation protection requirements. At similar effective densities, the metamaterial enables broad-range tuning of peak force, offering significant advantages for developing customized protective structures for diverse scenarios. 
This research not only establishes a solid foundation for creating customizable high-performance impact protection structures but also advances the field toward a new paradigm of intelligent, on-demand design.
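Reading a Pareto front off surrogate-model predictions reduces to extracting the non-dominated set. A minimal sketch for two minimized objectives follows; the sample points are arbitrary stand-ins for quantities like effective density and peak force:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset for a minimization problem: a point
    is dominated if another point is no worse in every objective and
    strictly better in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(
            np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Illustrative candidate designs as (objective1, objective2) pairs.
front = pareto_front([[1, 5], [2, 4], [3, 3], [2, 6], [4, 4]])
```

In the paper's workflow, the candidate points would be surrogate predictions over many geometric parameter combinations rather than hand-picked values.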
2026, 46(5): 051442.
doi: 10.11883/bzycj-2025-0288
Abstract:
Strut-based lattice metamaterials are a category of ultra-lightweight, load-bearing, and energy-absorbing materials with broad application prospects in fields such as impact protection, aerospace engineering, and lightweight structural design. Benefiting from their unique periodic architectures and adjustable meso-structural parameters, these materials exhibit exceptional mechanical tunability and multifunctional potential. However, due to the extensive parameter space of mesoscopic configurations and the highly nonlinear correlation between the structural geometry and the mechanical response, the optimization of mechanical performance for lattice metamaterials remains a formidable challenge. Based on the meso-structural characteristics of strut-based lattice metamaterials, an efficient rapid digital modeling method was proposed. A Python script coupled with Abaqus software was utilized for the rapid modeling of truss lattice metamaterials and fast calculation of their mechanical properties. Based on the calculation results, a machine learning dataset was constructed. Three types of truss lattice structures were randomly selected and additively manufactured. Quasi-static compression tests on these three lattice structures were conducted using a universal testing machine to verify the reliability of the dataset. Subsequently, an artificial neural network (ANN) was trained to rapidly predict the mechanical properties of the truss lattice metamaterials. Focusing on the load-bearing capacity, energy absorption capability, and the concurrent optimization of both, a non-dominated sorting genetic algorithm II (NSGA-II) was employed. The well-trained ANN served as a surrogate model embedded within NSGA-II. Lattice configurations that exhibited high load-bearing capacity and superior energy absorption characteristics were generated by the optimization process. 
These configurations also achieved a balance between load-bearing and energy-absorption performance, facilitating the optimization design of truss lattice metamaterials. Additionally, simulation validations confirmed the reliability of the optimization outcomes, demonstrating the effectiveness of integrating ANN with evolutionary algorithms for the advanced design of metamaterials. By integrating machine learning with numerical simulations, the computational cost of optimization design was effectively reduced, offering support for the rapid performance optimization and customized design of complex lattice metamaterials.
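The scripted geometry generation described above can be illustrated with a pure-NumPy body-centered-cubic (BCC) unit cell; this is a generic sketch of parametric lattice modeling, not the paper's Abaqus script:

```python
import numpy as np
from itertools import product

def bcc_unit_cell(cell=1.0):
    """Generate node coordinates and strut connectivity for one BCC unit
    cell: 8 corner nodes plus a body-center node, with one strut from each
    corner to the center. Such a routine can be tiled and fed to an FE
    preprocessor to build a full lattice model."""
    corners = np.array(list(product([0.0, cell], repeat=3)))  # 8 corners
    center = np.array([[cell / 2.0] * 3])                     # body center
    nodes = np.vstack([corners, center])
    struts = [(i, 8) for i in range(8)]   # corner index -> center index
    return nodes, struts

nodes, struts = bcc_unit_cell()
```

Varying the cell size, strut topology, and cross-section parameters in such a script is what spans the design space that the ANN surrogate and NSGA-II then search.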
2026, 46(5): 051443.
doi: 10.11883/bzycj-2025-0250
Abstract:
Prefabricated building structures have been widely applied in civil engineering due to their advantages of energy conservation, environmental protection, controllable quality, and efficient construction. As the core load-bearing components of prefabricated building structures, precast reinforced concrete (PC) slabs are vulnerable to threats from gas explosions, industrial explosions, and terrorist attacks. To accurately assess the damage state of PC slabs under explosion, enhance structural blast resistance, and reduce casualties, an explosion response dataset of PC slabs was constructed. Six geometric and material parameters (slab thickness, length, width, steel reinforcement ratio, compressive strength of concrete, etc.) and two explosion load parameters (explosive weight and standoff distance) were selected as input features. Three machine learning algorithms (GPR, RF, and XGBoost) were used to predict the maximum displacement of PC slabs, and their prediction accuracies were compared by root mean square error, coefficient of determination, mean absolute error, scatter index, and a comprehensive performance objective function. Furthermore, a damage classification evaluation model based on the support rotation angle damage criterion was proposed. The performance differences of the model under three criteria were analyzed by confusion matrix and five classification indices (accuracy, precision, recall, F1-score, and Kappa coefficient), and compared with simplified models and empirical prediction methods. The results indicate that, for maximum displacement prediction of PC slabs under explosion loads, the XGBoost model performs best among the three machine learning models, with a fitting accuracy superior to that of the GPR and RF models. 
Meanwhile, the XGBoost model shows the best comprehensive performance in damage classification, with a damage recognition accuracy of 92.5%, demonstrating its high efficiency in identifying different damage types. The XGBoost-based damage classification evaluation model for PC slabs under explosion loads exhibits strong performance, providing important references for structural blast-resistance design and rapid post-blast damage assessment.
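The regression metrics named in the abstract (RMSE, coefficient of determination, MAE, scatter index) can be written out directly. The sketch below implements them in plain Python; the displacement values are illustrative placeholders, not results from the paper.

```python
import math

def rmse(y, p):
    # Root mean square error
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def mae(y, p):
    # Mean absolute error
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def r2(y, p):
    # Coefficient of determination
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def scatter_index(y, p):
    # RMSE normalized by the mean of the observations
    return rmse(y, p) / (sum(y) / len(y))

# Hypothetical maximum displacements (mm): observed vs. model predictions
y_true = [12.0, 25.0, 40.0, 55.0, 70.0]
y_pred = [13.1, 24.2, 41.5, 53.8, 71.0]

print(round(rmse(y_true, y_pred), 3))   # 1.144
print(round(mae(y_true, y_pred), 2))    # 1.12
print(round(r2(y_true, y_pred), 4))     # 0.9969
```

A comprehensive performance objective function such as the one the abstract mentions would typically combine these four into a single weighted score; the weighting itself is not specified in the abstract, so it is omitted here.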
2026, 46(5): 051444.
doi: 10.11883/bzycj-2025-0329
Abstract:
Constructing a knowledge graph of accidental explosion damage from explosion accident investigation reports, which are characterized by multi-source, heterogeneous, and overlapping information, plays a significant role in data-driven explosion assessment and traceability analysis. To address the overlapping and nested events in accidental explosion investigation data, a knowledge graph construction method centered on joint event extraction was employed, using explosion investigation reports to build an accidental explosion damage knowledge graph. By retrieving similar explosion events within the knowledge graph using cosine similarity and applying a Bayesian classification method, the type of explosive material involved in the Beirut port explosion was identified with relatively high accuracy. The construction results demonstrate that, on the accidental explosion damage corpus, the proposed dynamic masking-based joint event extraction method improved the F1 scores for event classification and event element classification by at least 2% and 5.4%, respectively, compared with existing extraction models. Traceability analysis indicates that knowledge graph-based traceability offers significant improvements in both speed and accuracy over traditional manual traceability methods.
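The retrieval step described above, ranking stored explosion events by cosine similarity to a query event, can be sketched as follows. The event names and feature vectors are hypothetical placeholders, not data from the knowledge graph.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical event embeddings stored in the graph
graph_events = {
    "event_A": [0.90, 0.10, 0.30],
    "event_B": [0.20, 0.80, 0.50],
    "event_C": [0.85, 0.15, 0.40],
}
query = [0.88, 0.12, 0.35]  # embedding of the event under investigation

# Rank stored events by similarity to the query, most similar first
ranked = sorted(graph_events, key=lambda k: cosine(query, graph_events[k]),
                reverse=True)
print(ranked[0])  # event_A
```

The retrieved nearest neighbors would then feed the Bayesian classification step, with the similarity scores weighting the evidence each retrieved event contributes; that step depends on class priors not given in the abstract, so it is left out of the sketch.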
Prediction of gas explosion consequences in residential buildings based on artificial neural network
2026, 46(5): 051445.
doi: 10.11883/bzycj-2025-0382
Abstract:
To address the highly nonlinear evolution of residential gas explosion accidents and the difficulty of accurately predicting their consequences, a data-driven investigation into gas explosion consequence prediction was conducted, and an artificial neural network-based prediction method for explosion accident consequences was proposed. Through large-scale numerical simulations, a gas explosion consequence dataset covering various residential unit layouts was generated. Through sensitivity analysis and accuracy validation, an intelligent prediction model for gas explosion consequences was established. The model achieves prediction errors below 15% and 5% for indoor peak overpressure and temperature, respectively, while the maximum error in predicting spatial location coordinates remains within 25%. This enables batch prediction of the most severe indoor explosion consequences and their spatial locations for arbitrary ignition positions across different residential unit layouts. The results indicate that as unit area increases and the spatial layout becomes progressively more complex, the peak overpressure and temperature correspondingly increase. The living room consistently exhibits the lowest overpressure levels, whereas areas near windowless bedroom walls tend to form extreme overpressure and temperature zones. Ignition in the kitchen and in the bedroom leads to the most severe indoor overpressure and temperature consequences, respectively, reflecting the differential impact of ignition location on explosion outcomes. The findings provide important references for expanding the predictive application of artificial intelligence in the field of gas explosions and for the efficient prevention and control of explosion accidents.
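The error thresholds quoted above amount to a relative-error acceptance check on the surrogate's predictions against the simulation ground truth. A minimal sketch, with overpressure and temperature values that are illustrative placeholders rather than results from the paper:

```python
def rel_error(pred, true):
    # Relative error of a prediction against the reference value
    return abs(pred - true) / abs(true)

# Hypothetical peak values: simulation reference vs. network prediction
p_true, p_pred = 120.0, 131.0    # peak overpressure, kPa
t_true, t_pred = 1500.0, 1460.0  # peak temperature, K

# Acceptance thresholds stated in the abstract: 15% and 5%
print(rel_error(p_pred, p_true) < 0.15)  # True
print(rel_error(t_pred, t_true) < 0.05)  # True
```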

