Application of RR-XGBoost combined model in data calibration of micro air quality detector

Grid monitoring is the current development direction of atmospheric monitoring. The micro air quality detector is of great help to grid monitoring of the atmosphere, so higher requirements are put forward for its measurement accuracy. This paper presents a model that calibrates the measurement data of the micro air quality detector against the monitoring data of an air quality monitoring station. The concentrations of six types of air pollutants are taken as the research objects for establishing the calibration model. The first step uses correlation analysis to find the main factors affecting the concentrations of the six types of pollutants. The second step uses Ridge Regression (RR) to select variables, identify the factors that have significant effects on pollutant concentration, and give the quantitative relationship between these factors and the pollutants. Finally, the predicted values of the ridge regression model and the measurement data of the micro air quality detector are used as input variables, and the Extreme Gradient Boosting (XGBoost) algorithm is used to give the final pollutant concentration prediction model. We name this combination of ridge regression and the XGBoost algorithm the RR-XGBoost model. Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), goodness of fit (R2), and Root Mean Square Error (RMSE) were used to evaluate the prediction accuracy of the RR-XGBoost model. The results show that, on all of these indicators, the model is superior to some commonly used pollutant prediction methods such as random forest, support vector machine, and multilayer perceptron neural network. The model predicts well not only on the training set but also on the test set, indicating that it has good generalization ability.
Using the RR-XGBoost model to calibrate the data of the micro air quality detector can make up for the shortcomings of the data monitoring accuracy of the micro air quality detector. The model plays an active role in the deployment of micro air quality detectors and grid monitoring of the atmosphere.

Introduction to pollutant concentration prediction model. Air pollutants mainly include O3, PM2.5, PM10, CO, NO2, and SO2 ("two dusts and four gases"). Many air quality assessment indicators take the concentration of the "two dusts and four gases" as an important basis. At present, a variety of models have been used by scholars worldwide to predict the concentration of pollutants in the atmosphere, with relatively good results. These models mainly include time series models, chemical transport models, machine learning models, etc.
Among the time series models used to predict air quality, a comparison found the Singh fuzzy time series model to be the most accurate and effective forecasting model 9 .
The chemical transport model is based on scientific theories and assumptions. It uses numerical methods combined with meteorological principles to simulate and describe processes such as the transport, diffusion, and chemical reactions of pollutants in the atmosphere. The chemical transport model obtains the pollutant concentration distribution from inputs including the source emissions, topography, meteorological data, and operating mode of the study area [10][11][12] . Because pollutant formation and transport are very complicated, the computational complexity of the chemical transport model is high, and its accuracy is nevertheless limited.
Since a linear regression model conveniently explains the quantitative relationship between pollutants and the other model variables, the multivariate linear regression model is still a commonly used pollutant concentration prediction model [13][14][15] . An artificial neural network combined with an effective training algorithm can detect complex, potentially non-linear relationships between the predictor variables and the response variable, and such models have become the current mainstream 13,[16][17][18] . In addition, prediction methods such as Markov chains [19][20][21] , support vector machines [22][23][24] , and random forests [25][26][27] are also commonly used to predict air pollutant concentrations. Because Extreme Gradient Boosting (XGBoost) has excellent computational efficiency and prediction accuracy, it has also been widely applied to air pollutant concentration prediction in recent years. Zhai et al. integrated LASSO, AdaBoost, XGBoost, and other algorithms with support vector regression and successfully predicted the daily average PM2.5 concentration in Beijing, China 28 . Joharestani et al. used random forest, XGBoost, and deep learning to predict PM2.5 concentration, and the results showed that the XGBoost model performed best 29 .

Material and methods
Data source and preprocessing. The insufficient measurement accuracy of the micro air quality detector is an important factor limiting its adoption. To establish a correction model for its measurement data, this study collected two sets of data. The first set comes from an air quality monitoring station in Nanjing and is treated as accurate in this study. It contains 4200 samples, recording the hourly concentrations of the six pollutants from November 14, 2018 to June 11, 2019. The second set is provided by a micro air quality detector co-located with the air quality monitoring station. The monitoring equipment of the micro air quality detector uses electrochemical sensors. This second set contains 234,717 samples, with no more than 5 min between consecutive samples. The micro air quality detector provides not only the concentrations of the six pollutants but also five meteorological parameters: wind speed, pressure, precipitation, temperature, and humidity. Because the measurement data of the micro air quality detector are insufficiently accurate, a pollutant concentration correction model is needed to correct them.
Before constructing the data correction model of the micro air quality detector, the original data should be preprocessed. First, remove the outliers from the measurement data of the self-built point. In this paper, a data point whose measured value is greater than 3 times, or less than 1/3 of, the average of its left and right neighboring values is regarded as an outlier. Then calculate the hourly averages of the self-built point measurements so that they correspond to the national control point measurements. Self-built point data that cannot be matched to a national control point record are deleted. After preprocessing, a total of 4135 samples were obtained (Table 1) 13,24 .

Data exploratory analysis. Because the research methods for the six types of pollutant concentration are similar, this paper takes the O3 concentration as the main research object. Ozone in the atmosphere is divided into tropospheric near-ground ozone and stratospheric ozone. It is near-surface ozone in the troposphere, also known as bad ozone, that harms the environment and human health; long-term exposure to bad ozone damages the respiratory and immune systems. Before establishing the data correction model of the micro air quality detector, descriptive statistics of the data are needed in order to grasp the overall trend of the pollutant concentration in the air and the measurement error of the micro air quality detector 15,30 . Because too many sample points make it hard to visually analyze the trend of air pollutant concentration and the measurement error of the micro air quality detector, we calculated the daily average of the O3 concentration, obtaining a total of 206 sets of data 31 . It can be seen from Fig. 1 that the O3 concentrations of the self-built point and the national control point agree well in the later period but deviate somewhat in the earlier period. The low temperature and large changes in humidity in autumn and winter interfere with the electrochemical sensor, which leads to deviations in the measurement data of the micro air quality detector. The obvious difference in O3 concentration across time periods can also be seen from Fig. 1. Figure 2 shows that the O3 concentration is highest in June and lowest in December (no data from July to October). O3 pollution has obvious seasonal characteristics 32 . Near-ground ozone is mostly generated by the secondary conversion of nitrogen oxides and volatile organic compounds under high temperature and strong light. The strong solar radiation and high temperature in summer easily cause photochemical smog and secondary ozone production. Continuous high temperature and strong sunshine favor the atmospheric photochemical reaction of nitrogen oxides and volatile organic compounds, thereby generating strong oxidants such as near-ground ozone. Therefore, the O3 concentration in summer increases as the temperature rises.
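The preprocessing rules above (the 3x / one-third neighbor screening and the hourly averaging) can be sketched as follows; the function names and timestamp format are illustrative assumptions, not the authors' code:

```python
import numpy as np
from collections import defaultdict

def remove_neighbor_outliers(values):
    """Drop any reading greater than 3 times, or less than one third of,
    the mean of its left and right neighbors (the paper's outlier rule)."""
    v = np.asarray(values, dtype=float)
    keep = np.ones(len(v), dtype=bool)
    for i in range(1, len(v) - 1):
        neighbor_mean = (v[i - 1] + v[i + 1]) / 2.0
        if neighbor_mean > 0 and (v[i] > 3 * neighbor_mean or v[i] < neighbor_mean / 3):
            keep[i] = False
    return v[keep]

def hourly_average(timestamps, values):
    """Average sub-hourly self-built-point readings into hourly means so
    they can be aligned with the hourly national control point data."""
    buckets = defaultdict(list)
    for t, v in zip(timestamps, values):
        buckets[t[:13]].append(v)  # group on the 'YYYY-MM-DD HH' prefix
    return {hour: sum(vs) / len(vs) for hour, vs in sorted(buckets.items())}

# The spike of 40.0 exceeds 3x the mean of its neighbors and is removed
cleaned = remove_neighbor_outliers([10.0, 11.0, 40.0, 12.0, 10.5])
```

Hourly records with no matching national control point entry would then simply be dropped from the merged data set.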
Correlation analysis. Correlation describes a potential relationship between two attributes and measures the degree to which one attribute contains information about the other. For numerical attributes, the commonly used measure of correlation is the correlation coefficient. Correlation coefficients include the Pearson correlation coefficient, the Spearman correlation coefficient, and others, each suited to different data types. The Pearson correlation coefficient measures the degree of linear correlation between two continuous numerical attributes, while the Spearman correlation coefficient mainly describes the degree of correlation between hierarchical or ordinal attributes. In this paper, the Pearson correlation coefficient (Eq. 1) is selected as the index for measuring the correlation between the various pollutants and the meteorological parameters. The absolute value of the correlation coefficient lies in [0, 1]: an absolute value of 0 indicates that the two attributes are completely unrelated, an absolute value of 1 indicates that they are perfectly correlated, and the larger the absolute value, the stronger the correlation.
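The Pearson coefficient of Eq. (1) is the covariance of two attributes divided by the product of their standard deviations; a minimal numpy sketch (the helper name is ours) is:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient (the paper's Eq. 1):
    covariance of x and y over the product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear, so r = 1.0
```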
It can be seen from Table 2 that among the 11 variables, only the NO2 concentration and temperature are not significantly correlated; all the other pairs of variables are significantly correlated. Figure 3 is a scatter-plot matrix of the variables. The diagonal frequency histograms show that the concentrations of all six types of pollutants are right-skewed, indicating that episodes of high pollutant concentration occur fairly often in this area. Most of the scatter plots between different variables cluster near a straight line, indicating a certain linear correlation between them.

Establishment of sensor calibration model
Introduction to basic principles. Classical least squares estimation has been widely used because of its many excellent properties. With the development of electronic computing, however, accumulated experience with large-scale regression problems shows that the results of least squares estimation are sometimes very unsatisfactory. When the design matrix X is ill-conditioned, there is strong linear correlation between the column vectors of X ; that is, there is serious multicollinearity between the independent variables. In this case, estimating the model parameters by ordinary least squares yields estimates with very large variance, and ordinary least squares performs very poorly.
Aiming at the deterioration of ordinary least squares under multicollinearity, the American scholar Hoerl proposed an improved estimation method called ridge estimation in 1962; Hoerl and Kennard gave a systematic treatment in 1970 33 . When there is multicollinearity between the independent variables, |X′X| ≈ 0 , i.e., X′X is close to singular. If we add a matrix kI (k > 0) to X′X , then X′X + kI is much farther from singularity than X′X is. Taking into account the dimensions of the variables, this article first standardizes the data; for convenience of writing, the standardized design matrix is still denoted by X . Equation (2) defines the ridge regression estimate of β , where k is called the ridge parameter. Since X is standardized, X′X is the sample correlation matrix of the independent variables. As an estimate of β , β̂(k) is more stable than the least squares estimate; when k = 0 , the ridge estimate β̂(0) reduces to the ordinary least squares estimate. Because the ridge parameter k is not unique, the ridge regression estimate β̂(k) is actually a family of estimates of the regression parameter β . For the selection of the ridge parameter k , commonly used methods include the ridge trace method and the variance inflation factor method.
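Equation (2) is the standard ridge estimator β̂(k) = (X′X + kI)⁻¹X′y. A numpy sketch (the helper name and the toy data are ours) shows how increasing k shrinks and stabilizes the coefficients:

```python
import numpy as np

def ridge_estimate(X, y, k):
    """Ridge estimate beta(k) = (X'X + kI)^(-1) X'y on (standardized) data;
    k = 0 recovers the ordinary least squares estimate."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

# Increasing k pulls the coefficients toward zero, which stabilizes them
# when the columns of X are nearly collinear
beta_ols = ridge_estimate([[1, 0], [0, 1], [1, 1]], [1, 2, 3], 0.0)    # k = 0: OLS
beta_ridge = ridge_estimate([[1, 0], [0, 1], [1, 1]], [1, 2, 3], 10.0)
```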
The XGBoost algorithm is an ensemble learning method. Ensemble learning combines multiple learning models so that the combined model has stronger generalization ability and better modeling performance. XGBoost is an improvement of the gradient boosting decision tree and is composed of multiple decision trees built iteratively. XGBoost first builds CART (Classification and Regression Tree) models to predict the data set, and then adds trees to the ensemble one by one; the new tree generated in each iteration fits the residual of the previous ensemble. As the number of trees increases, the complexity of the ensemble model gradually increases until it approaches the complexity of the data itself, at which point training achieves the best result. Equation (3) is the XGBoost model: the prediction for a sample is the accumulated score over T trees, where each f_t belongs to the space of CART models, q represents the structure of a tree (mapping a sample to a leaf), ω_q(x) is the leaf score assigned to sample x , and each f_t corresponds to an independent tree structure q with its leaf weights.
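The referenced equations follow the standard XGBoost formulation; in that notation, the additive model of Eq. (3) and the regularized objective discussed in the following paragraphs can be written as (this is a reconstruction consistent with the standard formulation, not a verbatim copy of the paper's equations):

```latex
\hat{y}_i = \sum_{t=1}^{T} f_t(x_i), \qquad
f_t \in \mathcal{F} = \{\, f(x) = \omega_{q(x)} \,\},
\\[4pt]
\mathcal{L} = \sum_{i} l\bigl(y_i, \hat{y}_i\bigr) + \sum_{t} \Omega(f_t),
\qquad
\Omega(f) = \gamma\, T_{\mathrm{leaf}} + \tfrac{1}{2}\,\lambda \lVert \omega \rVert^2 ,
```

where q maps a sample to a leaf, ω is the vector of leaf scores, and T_leaf is the number of leaves of a single tree.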
The decision trees inside XGBoost are regression trees. For the squared loss function, the split nodes of the regression tree fit the residuals directly; for a general loss function, they fit an approximation of the residuals based on the gradient. This is one reason the accuracy of XGBoost is high. Equations (4)-(7) give the iterative residual-fitting process; in Eq. (7), ŷ_i^(t−1) is the predicted value of the i-th sample after t−1 iterations. From this iterative process, the objective function of the XGBoost algorithm, that is, the loss function (Eq. 8), is obtained. For a general loss function, XGBoost performs a second-order Taylor expansion in order to extract more gradient information, and removes the constant term so that training by gradient descent works better. Equations (9) and (10) give the loss function at step t, where g_i and h_i are the first and second derivatives of the loss with respect to the prediction.
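The residual-fitting iteration of Eqs. (4)-(7) can be illustrated with a toy boosting loop under squared loss; depth-1 stumps stand in for full CART trees, a shrinkage factor plays the role of the learning rate, and all names are illustrative:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump on residuals r under squared loss."""
    best = None
    for s in np.unique(x):
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= s, left.mean(), right.mean())
        sse = float(((r - pred) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, float(s), float(left.mean()), float(right.mean()))
    return best[1], best[2], best[3]

def boost(x, y, rounds=50, lr=0.3):
    """Each round fits a stump to the current residuals and adds it to the
    ensemble: y_hat^(t) = y_hat^(t-1) + lr * f_t(x), echoing Eq. (7)."""
    pred = np.zeros_like(y, dtype=float)
    for _ in range(rounds):
        s, left_val, right_val = fit_stump(x, y - pred)  # fit the residuals
        pred += lr * np.where(x <= s, left_val, right_val)
    return pred
```

On a step-shaped target the ensemble's residual shrinks geometrically with each round, which is the mechanism the paragraph describes.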
Different from other algorithms, the XGBoost algorithm adds a regularization term Ω(f) (Eq. (11)) to prevent over-fitting and further improve the accuracy of the model. Ω(f) measures the complexity of the tree: the smaller its value, the stronger the generalization ability of the tree. ω_j is the weight on the j-th leaf node of tree f, T is the total number of leaf nodes in the tree, γ is the penalty on the number of leaves, and λ is the L2 penalty on the leaf weights; both are user-specified parameters of the algorithm. The objective function (Eqs. (12)-(14)) is then obtained, where I_j = {i | q(x_i) = j} is the set of samples on the j-th leaf node 28,34 .

Ridge regression model construction. Classical least squares estimation is often used to build pollutant concentration prediction models, and it can also give the quantitative relationship between the various influencing factors and the pollutant concentration 15 . However, the factors affecting pollutant concentration are complicated, and the correlation analysis above shows significant correlations among them. Directly fitting a multiple linear regression model would therefore suffer from multicollinearity, making the model's regression coefficients very unstable and degrading its applicability. Ridge regression is often used to solve the problem of multicollinearity. We take the national control point O3 concentration as the dependent variable and the pollutant concentrations and meteorological parameters measured at the self-built point as the independent variables, and establish a ridge regression model with the help of SPSS (Version 20.0, https://www.ibm.com/cn-zh/analytics/spss-statistics-software). In this paper, the ridge trace method is used to select both the independent variables introduced into the model and the ridge parameter k . In Fig. 4, the abscissa represents the value of the ridge parameter k , and each curve is the standardized ridge regression coefficient of one variable. It can be seen that x4, x6, and x10 have relatively stable ridge regression coefficients with small absolute values, indicating that these variables have little impact on the O3 concentration; they can be deleted in the actual modeling. In addition, although the standardized ridge regression coefficient of x2 is not small, it is very unstable and rapidly tends to zero as k increases. Variables whose ridge regression coefficients are unstable and oscillate rapidly toward zero can also be eliminated from the ridge regression model.
After selecting the independent variables of the ridge regression model, the next step is the selection of the ridge parameter k . We reduce the step size of the ridge parameter k to 0.02 and draw the ridge trace of the remaining variables in Fig. 5. It can be seen that when the ridge parameter k = 0.2 , the ridge trace of each variable is relatively stable and the coefficient of determination R2 has not decreased much, so k = 0.2 is selected. Finally, the ridge regression model is fitted in SPSS with the selected variables and ridge parameter. Table 3 shows the unstandardized ridge regression coefficients.

RR-XGBoost model construction. The predicted values of the ridge regression model and the measurement data of the micro air quality detector are then used as input variables of the XGBoost algorithm to establish a prediction model for the concentration of each pollutant. We call this model the RR-XGBoost model. Figure 6 is the flow diagram of the RR-XGBoost model. Before constructing the RR-XGBoost model, all samples are randomly divided into a training set and a test set at a ratio of 8:2 (the data sets of the other five pollutants are divided in the same way), and all data are normalized to the range [0, 1] based on experience 29,34 . The modeling in this paper is implemented in Python; the simulation platform is PyCharm, and the Grid Search Method (GSM) is used to find the optimal parameter combination.
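A hedged sketch of the 8:2 split and [0, 1] min-max scaling follows; random data stands in for the real 4135-sample matrix, and fitting the scaler on the training rows only (to avoid test-set leakage) is our choice, not stated in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((4135, 8))  # placeholder feature matrix (4135 samples)
y = rng.random(4135)

# Random 8:2 split of the samples
idx = rng.permutation(len(X))
cut = int(0.8 * len(X))
train_idx, test_idx = idx[:cut], idx[cut:]

# Min-max normalization to [0, 1], fitted on the training rows only
lo, hi = X[train_idx].min(axis=0), X[train_idx].max(axis=0)
X_train = (X[train_idx] - lo) / (hi - lo)
X_test = (X[test_idx] - lo) / (hi - lo)
```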
The XGBoost model has many parameters; optimizing all of them would strain the computer's memory and greatly increase optimization time. In this paper, the following four main parameters are optimized: (i) the number of gradient boosted trees, n_estimators: larger values fit better, but memory use and training time grow accordingly; the search range here is 100-300; (ii) the maximum tree depth of the base learners, max_depth, used to avoid overfitting; the range is 3-10; (iii) the learning rate, learning_rate; the range is 0.01-0.3; and (iv) the minimum sum of instance weight (hessian) needed in a child, min_child_weight, which, like max_depth, guards against over-fitting; the range is 1-9. The four parameters of the XGBoost model are initialized to 100, 6, 0.1, and 1. In addition, GSM requires a search step for each parameter (this article takes 10, 1, 0.01, and 1, respectively). Table 4 shows the parameters of the XGBoost model determined by the grid search.

To show the fitting effect of the RR-XGBoost model more intuitively, this paper plots the fit for O3 concentration in Fig. 7. It can be seen that the correlation coefficient between the true O3 concentration and the model's predicted concentration exceeds 0.95 on both the training set and the test set. From Tables 5, 6, 7, and 8, it can be seen that the self-built point has the lowest measurement accuracy on every evaluation indicator, which shows that the measurement accuracy of the micro air quality detector needs to be improved. Although ridge regression can give the quantitative relationship between each variable and the pollutant concentration, its fitting effect is not particularly good.
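The grid search itself is simple to sketch. Here `toy_rmse` is a stand-in for "train XGBoost with these parameters and return the validation RMSE" (the real scorer would wrap model training); the parameter grids match the ranges and step sizes above:

```python
from itertools import product

# Grids matching the stated ranges and steps: 100-300 by 10, 3-10 by 1,
# 0.01-0.30 by 0.01, and 1-9 by 1
n_estimators = range(100, 301, 10)
max_depth = range(3, 11)
learning_rate = [round(0.01 + 0.01 * i, 2) for i in range(30)]
min_child_weight = range(1, 10)

def grid_search(evaluate):
    """Exhaustive GSM: evaluate every combination, keep the best score."""
    best_score, best_params = float("inf"), None
    for params in product(n_estimators, max_depth, learning_rate, min_child_weight):
        score = evaluate(*params)
        if score < best_score:
            best_score, best_params = score, params
    return best_params, best_score

def toy_rmse(n, d, lr, w):
    """Illustrative stand-in for validation RMSE; its minimum sits at an
    arbitrary combination (200, 6, 0.1, 3) chosen for the demo."""
    return abs(n - 200) / 100 + abs(d - 6) + abs(lr - 0.1) * 10 + abs(w - 3)

best_params, best_score = grid_search(toy_rmse)
```

In practice the full product has tens of thousands of combinations, which is why the paper restricts the search to four parameters and coarse steps.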
Random forest regression and XGBoost predict pollutant concentrations more accurately. In particular, the XGBoost method greatly improves the accuracy of pollutant concentration prediction. The model combining ridge regression and the XGBoost algorithm presented in this study is not only slightly more accurate than the single XGBoost method but also retains the interpretability advantage of the ridge regression model.
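The four evaluation indicators used in these comparisons can be computed as follows (the helper name is ours; MAPE assumes no zero true values):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """MAE, RMSE, MAPE (%), and goodness of fit R^2 -- the four
    indicators used to compare the calibration models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = float(np.abs(err).mean())
    rmse = float(np.sqrt((err ** 2).mean()))
    mape = float(100.0 * np.abs(err / y_true).mean())  # assumes y_true != 0
    r2 = float(1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum())
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}
```

Lower MAE, RMSE, and MAPE, and R2 closer to 1, indicate a better calibration.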
Human activities are one of the important factors affecting pollutant concentration, and they follow an obvious periodic pattern. We therefore choose one week as the cycle over which to evaluate how well the RR-XGBoost model corrects the measurement data of the micro air quality detector 35 . In Fig. 9, the blue curve is the measured value of the national control point, the red curve is the measured value of the self-built point, and the black curve is the predicted value of the RR-XGBoost model. There is a certain error between the red curve and the blue curve, but the black curve and the blue curve essentially overlap, indicating that the RR-XGBoost model corrects the measurement data of the micro air quality detector well.

Conclusions
Today, the air pollution situation is still not optimistic 3 , and atmospheric monitoring is gradually developing in the direction of refined monitoring. At present, the most feasible solution for refined atmospheric monitoring is grid-based monitoring: multiple air quality monitoring devices are set up at a certain spacing or within a certain range of a monitoring area to measure the dust particle concentrations and pollutant gas concentrations. A city will set up dozens to hundreds of monitoring points. Accurate and fine grid air monitoring can quickly perceive and locate pollution events so that control measures can be taken in time, greatly multiplying the effect of control and governance 5,7 . At present, many places use such micro-stations to detect sudden pollution events and support enforcement, and even to rank, reward, and penalize air quality within a jurisdiction. Therefore, higher requirements are placed on the stability and accuracy of the micro air quality monitoring station.
With the development of computer technology, machine learning has advanced rapidly and has been more widely applied to air quality prediction. The XGBoost algorithm is widely used in data modeling because of its excellent computational efficiency and prediction accuracy. Unlike random forest, which assigns the same voting weight to every decision tree, XGBoost generates each new decision tree based on the training and prediction of the previous ones, giving higher learning weight to the samples predicted less accurately in the previous round. Therefore, its accuracy is generally higher than that of the random forest algorithm. Compared with other ensemble learning algorithms, XGBoost improves model robustness by introducing regularization terms and column sampling; it also parallelizes the search for split points within each tree, which greatly improves its speed.
The combined model of ridge regression and the XGBoost algorithm given in this paper can not only explain the quantitative relationship between the input variables and the output variable, but also has certain accuracy advantages over other commonly used air quality monitoring models. A total of 4135 samples spanning four seasons (206 days) were fed into the RR-XGBoost model, and the model performed well throughout, demonstrating its stability. Using the RR-XGBoost model to calibrate the data of the micro air quality detector can make up for its shortcomings in data monitoring accuracy. The model plays an active role in the deployment of micro air quality detectors and grid monitoring of the atmosphere. In future research, more data could be introduced to explore the evolution of pollutant concentrations on a larger time scale. In addition, the grid search algorithm used in this study to find optimal parameters is not efficient when there are many parameters; a more efficient parameter optimization method could allow more parameters to be tuned and further improve the accuracy of the model.