Research on a Probability Integration Method for Mine Settlement Prediction Based on GA-XGBoost

  • Abstract: Surface subsidence caused by coal mining poses a serious threat to the ecological environment and to human safety, so accurate prediction of surface subsidence is essential for reducing risk. The probability integration method currently in use suffers from limited accuracy and from parameters that are difficult to determine. Machine learning algorithms have been introduced to improve it, but they still tend to become trapped in local optima and converge slowly, which degrades prediction accuracy. To address this, a probability integration method for subsidence prediction based on GA-XGBoost is proposed: a genetic algorithm (GA) searches for optimal values of the XGBoost model's learning rate, maximum tree depth, and minimum leaf-node weight, while XGBoost's strong nonlinear fitting ability is exploited to improve predictive performance. In a comparative analysis against a GA-BP neural network model and a plain XGBoost model, the GA-XGBoost model achieved a coefficient of determination R² of 0.95 and a root-mean-square error of 0.008, both better than the two baselines, giving the smallest prediction error. The GA-XGBoost model was then applied to the 15210 working face of the Youzhong coal mine. The results show that its error is smaller than that of the GA-BP neural network and XGBoost models, and that it performs well in engineering practice.
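The hyperparameter search described above (a GA selecting XGBoost's learning rate, maximum tree depth, and minimum leaf-node weight) can be sketched as follows. This is a minimal illustration, not the authors' code: the fitness function here is a smooth stand-in with a known optimum so the sketch is self-contained, whereas in the actual method it would be the cross-validated RMSE of an XGBoost subsidence model; the search ranges, population size, and GA settings are all assumptions.

```python
import random

# Assumed search ranges for the three tuned XGBoost hyperparameters.
BOUNDS = {
    "learning_rate": (0.01, 0.3),
    "max_depth": (2, 10),
    "min_child_weight": (1, 10),
}

def random_individual(rng):
    """Sample one candidate hyperparameter set uniformly from BOUNDS."""
    return {
        "learning_rate": rng.uniform(*BOUNDS["learning_rate"]),
        "max_depth": rng.randint(*BOUNDS["max_depth"]),
        "min_child_weight": rng.randint(*BOUNDS["min_child_weight"]),
    }

def fitness(ind):
    """Stand-in objective (higher is better). In the paper this would be
    the negative cross-validated RMSE of an XGBoost model trained on the
    subsidence data; here it is a smooth surrogate whose optimum is at
    learning_rate=0.1, max_depth=6, min_child_weight=3."""
    return -((ind["learning_rate"] - 0.1) ** 2
             + (ind["max_depth"] - 6) ** 2 / 100
             + (ind["min_child_weight"] - 3) ** 2 / 100)

def crossover(a, b, rng):
    """Uniform crossover: each gene is taken from either parent."""
    return {k: (a[k] if rng.random() < 0.5 else b[k]) for k in a}

def mutate(ind, rng, rate=0.2):
    """With probability `rate`, resample each gene from its range."""
    child = dict(ind)
    if rng.random() < rate:
        child["learning_rate"] = rng.uniform(*BOUNDS["learning_rate"])
    if rng.random() < rate:
        child["max_depth"] = rng.randint(*BOUNDS["max_depth"])
    if rng.random() < rate:
        child["min_child_weight"] = rng.randint(*BOUNDS["min_child_weight"])
    return child

def ga_search(pop_size=20, generations=30, seed=0):
    """Truncation-selection GA: keep the best half, refill with
    mutated crossovers of elite parents."""
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = [
            mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=fitness)

best = ga_search()
```

Because the elite half is carried over unchanged each generation, the best fitness found never degrades; swapping the surrogate `fitness` for an XGBoost cross-validation score is the only change needed to turn the sketch into an actual tuner.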

     
