Article

Predicting Rock Brittleness Using a Robust Evolutionary Programming Paradigm and Regression-Based Feature Selection Model

by Mehdi Jamei 1, Ahmed Salih Mohammed 2, Iman Ahmadianfar 3, Mohanad Muayad Sabri Sabri 4, Masoud Karbasi 5 and Mahdi Hasanipanah 6,*
1 Faculty of Engineering, Shohadaye Hoveizeh Campus of Technology, Shahid Chamran University, Dasht-e Azadegan, Susangerd 6155634899, Iran
2 Civil Engineering Department, College of Engineering, University of Sulaimani, Sulaymaniyah 46001, Iraq
3 Department of Civil Engineering, Behbahan Khatam Alanbia University of Technology, Behbahan 6361663973, Iran
4 Peter the Great St. Petersburg Polytechnic University, St. Petersburg 195251, Russia
5 Water Engineering Department, Faculty of Agriculture, University of Zanjan, Zanjan 4537138791, Iran
6 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(14), 7101; https://doi.org/10.3390/app12147101
Submission received: 19 April 2022 / Revised: 18 June 2022 / Accepted: 19 June 2022 / Published: 14 July 2022
(This article belongs to the Special Issue Novel Hybrid Intelligence Techniques in Engineering)

Abstract: Brittleness plays an important role in assessing the stability of the surrounding rock mass in deep underground projects. To this end, the present study deals with developing a robust evolutionary programming paradigm known as linear genetic programming (LGP) for estimating the brittleness index (BI). In addition, the bootstrap aggregated (bagged) regression tree (BRT) and two efficient lazy machine learning approaches, namely local weighted linear regression (LWLR) and the KStar approach, were examined to validate the LGP model. To the best of our knowledge, this is the first attempt to estimate the BI through the LGP model. A tunneling project in Pahang state, Malaysia, was investigated, and the required datasets were measured to construct the proposed models. According to the results from the testing phase, the LGP model yielded the best statistical indicators (R = 0.9529, RMSE = 0.4838, and IA = 0.9744) for modeling BI, followed by LWLR (R = 0.9490, RMSE = 0.6607, and IA = 0.9400), BRT (R = 0.9433, RMSE = 0.6875, and IA = 0.9324), and KStar (R = 0.9310, RMSE = 0.7933, and IA = 0.9095), respectively. In addition, the sensitivity analysis demonstrated that dry density was the most influential parameter in predicting BI.

1. Introduction

The brittleness of rock should be measured as a primary rock mass property in any ground excavation project. Properly accounting for rock brittleness is important when designing geotechnical structures, particularly structures constructed on rock masses. For example, engineers can use information on rock brittleness to assess wellbore quality and the stability of a hydraulic fracturing job [1,2,3]. Furthermore, such information helps characterize the mechanical behavior of shale rocks, whose strength and Young's modulus can be related to parameters such as the volumetric fraction of strong minerals [4,5,6].
One of the reasons for different disasters due to rock mechanics, such as rock bursts, is brittleness [7,8,9]. The literature shows that brittleness can be an effective and significant factor that can predict tunnel boring machines (TBMs) and road header performance [10,11].
Moreover, this property can effectively define the excavation efficiency of drilling, as a parameter that strongly affects coal mining processes [3,12]. Therefore, measuring rock brittleness is necessary for any ground excavation project [7]. Despite these findings, Altindag [13] argued that there is no consensus on a standard definition and measurement of brittleness. Yagiz [12], on the other hand, argues that rock brittleness is affected by several properties of the rock. Some researchers have described brittleness as the inverse of ductility, or the lack of ductility [14]. Ramsay [15] defined brittleness as the lack of cohesion between rock particles. Brittleness was defined by Obert and Duvall [16] as the inclination of a material, such as rock or cast iron, to split. Highly brittle rock normally exhibits six characteristics: a large compressive-to-tensile strength ratio, a large internal friction angle, the production of small particles, failure under an insignificant force, high firmness, and the formation of fully developed cracks in hardness tests [16].
The relationship between a rock's uniaxial compressive and tensile strengths is a central subject in rock brittleness index (BI) studies [17,18,19]. Nevertheless, relationships between BI and other rock properties, such as Poisson's ratio, internal friction angle, hardness, and elasticity modulus, are limited in the literature [20,21]. Such models have limited capability to estimate BI because they rely on only one or two dependent parameters [12,22].
Rock brittleness can be approximated using empirical formulas proposed by several studies [20,23,24]. Alternatively, multi-input and single-input predictive methods, such as multiple and simple linear regression, can be used to predict the BI value of rock [22,24]. However, despite offering higher accuracy than simple regression [17,25], these methods sometimes cannot accurately describe the behavior of complex systems, since they are not always robust enough [26]; consequently, their accuracy is often insufficient for reliable BI prediction [22]. Recently, many researchers have applied machine learning (ML) methods and metaheuristic algorithms to solve engineering and science problems [27,28,29,30,31,32,33].
Although ML techniques have been shown to solve problems across engineering fields, relatively few studies have applied different ML techniques to the prediction of rock BI. Kaunda and Asbury [34] applied a neural network (NN) method using Poisson's ratio, velocity, and elastic modulus as inputs. Yagiz and Gokceoglu [17] built a fuzzy system and conducted a multiple regression analysis to estimate rock BI from input parameters such as Brazilian tensile strength (BTS); their findings demonstrated the effective application of the fuzzy system for estimating BI. Koopialipoor et al. [25] suggested models for predicting the rock BI value, developing their equations by hybridizing the firefly algorithm and an ANN into a single model. Another study, by Khandelwal et al. [22], tested the feasibility of a genetic programming model for predicting the brittleness of intact rocks, employing multiple input variables such as unit weight, BTS, and UCS to estimate the rock mass BI. Jahed Armaghani et al. [3] offered several support vector machine (SVM) methods for BI prediction, implementing SVMs with different kernels and demonstrating their effectiveness in the BI prediction field. In another study, Yagiz et al. [28] predicted BI values through a differential evolution (DE) algorithm using 48 datasets, employing DE to develop linear and nonlinear models and demonstrating an acceptable application of the DE algorithm in predicting BI. Recently, a comprehensive study was conducted by Sun et al. [8] to predict BI using several efficient machine learning methods, such as SVM and the Chi-square automatic interaction detector; according to their results, the proposed models could predict BI with good performance.
This study assesses the applicability of a novel evolutionary programming paradigm, LGP, for estimating BI, with the aim of improving the accuracy of BI simulation compared to the previous study [3]. Three advanced machine learning methods—the bootstrap aggregated (bagged) regression tree (BRT), local weighted linear regression (LWLR), and KStar models—were implemented to evaluate the predictive performance of the LGP approach. To the best of our knowledge, none of the implemented models has previously been used in soft computing research on rock mechanics. As a further novelty, best subset analysis was employed to identify the best input combination, and the resulting models were validated using several metrics, graphical tools, and error analyses. In addition, an efficient sensitivity analysis was conducted to determine the most influential features in BI modeling.

2. Materials and Methods

2.1. Materials

2.1.1. Field Investigation

The data used in this study were extracted from a tunneling project in Pahang state, Malaysia. Additional information regarding the field study can be found in Jahed Armaghani et al. [3]. Three tunnel boring machines (TBMs) were used to excavate 35 km of the tunnel, and drilling and blasting techniques were used to excavate the rest [3]. Although most of the excavated rock was granite, the geological units also contained metamorphic and some sedimentary rocks. The research team collected a total of 120 granite block samples from the tunnel face at different tunnel distances and several locations, and the tests were performed after transferring these block samples to the rock mechanics laboratory. The procedure suggested by the ISRM [35] was applied to prepare the block rock samples for each planned test. Laboratory tests—including UCS, point load, density, Schmidt hammer, BTS, and p-wave tests—were planned and conducted on the samples in the experimental program, and the obtained results were used for the modeling in this study. As commonly suggested in the literature, the BI values were calculated as BI = UCS/BTS and set as the output. The model inputs were the p-wave velocity (Vp), point load strength index (Is50), dry density (D), and Schmidt hammer rebound number (Rn). Figure 1 and Figure 2 show sample failures under the BTS and UCS tests, respectively.
In this study, 85 data points were available to model the BI; 75% of the data (64 points) was allocated to the training dataset, with the rest forming the testing dataset. The descriptive statistics of all features and target variables are tabulated in Table 1. The skewness ([0.1161, 0.7339]) and kurtosis ([−0.76, 0.3369]) ranges of the variables fall within the acceptable range of [−2, 2] [36,37]. Thus, it can be inferred that all variables have a fairly normal distribution, which is a good indication for modeling rock brittleness with data-driven methods.
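For concreteness, the split and distribution checks described above can be reproduced in a few lines of Python. This is a minimal sketch, assuming the 85 samples sit in a pandas DataFrame loaded from a hypothetical file brittleness.csv with numeric columns Rn, Vp, D, Is50, and BI:

```python
import pandas as pd
from scipy.stats import skew, kurtosis
from sklearn.model_selection import train_test_split

df = pd.read_csv("brittleness.csv")  # hypothetical file with the 85 samples

# Skewness and (excess) kurtosis per column; values within [-2, 2]
# indicate an approximately normal distribution, as discussed above.
shape_stats = pd.DataFrame({"skewness": df.apply(skew),
                            "kurtosis": df.apply(kurtosis)})
print(shape_stats)

# 75%/25% split: 64 training and 21 testing points out of 85.
train_df, test_df = train_test_split(df, train_size=64, random_state=42)
```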
To reduce the computational cost and complexity of prediction, the predictive and target parameters were normalized to the range [0, 1] using the following formula:
$x_{nor} = \frac{x - x_{min}}{x_{max} - x_{min}}$
where $x_{nor}$ is the normalized value, and $x_{max}$, $x_{min}$, and $x$ are the maximum, minimum, and original values of the modeling dataset, respectively.
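This min–max scaling translates directly into code; a one-line sketch assuming NumPy arrays (applied column-wise for a 2-D feature matrix):

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """x_nor = (x - x_min) / (x_max - x_min), mapping each column to [0, 1]."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
```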

2.1.2. Feature Selection Process

Feature selection is one of the most crucial stages in building a predictive model on a data-driven basis; it plays a key role in the accuracy and reliability of the developed models. Best subset regression analysis [38] is one of the most popular schemes for identifying the best input features based on linear regression modeling. In this approach, six metrics—mean square error (MSE), coefficient of determination (R²), adjusted R², Mallows' coefficient (Cp) [39], Akaike's information criterion (AIC) [40], and Amemiya's prediction criterion (PC) [41]—were computed for choosing the best input combination [38] (see Table 2). Of the three candidate combinations, the last case, which includes all input parameters, has the highest R² (0.817) together with the lowest MSE (0.463), Cp (5.000), AIC (−60.552), and PC (0.201); as such, it is identified as the best combination for modeling BI. Thus, the functional relationship between the chosen features and the target can be expressed as follows:
$BI = f\left(R_n, V_p, D, I_{s50}\right)$
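The paper does not give code for the best subset search, but the procedure is straightforward to reproduce: fit an ordinary least-squares model on every subset of the four candidate inputs and rank the subsets by the selection criteria. A minimal sketch, assuming scikit-learn and hypothetical arrays X (shape n × 4) and y, using MSE and a Gaussian AIC (Cp and PC can be added analogously):

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression

def best_subset(X, y, names=("Rn", "Vp", "D", "Is50")):
    n = len(y)
    results = []
    for k in range(1, X.shape[1] + 1):
        for cols in combinations(range(X.shape[1]), k):
            Xs = X[:, cols]
            resid = y - LinearRegression().fit(Xs, y).predict(Xs)
            mse = np.mean(resid ** 2)
            aic = n * np.log(mse) + 2 * (k + 1)  # Gaussian AIC, up to a constant
            results.append({"vars": [names[c] for c in cols],
                            "MSE": mse, "AIC": aic})
    return sorted(results, key=lambda r: r["AIC"])  # best (lowest AIC) first
```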

2.2. Methods

2.2.1. Linear Genetic Programming (LGP)

LGP is a variant of the genetic programming (GP) model proposed by Koza [42]: whereas GP evolves tree structures, LGP evolves linear sequences of instructions. A comparison between the structures of the LGP and GP models is displayed in Figure 3. In LGP, each program is described by a variable-length sequence of C-language instructions. The instruction set of the LGP model includes arithmetic operations (+, −, ÷, ×), conditional branches (e.g., if x[i] ≤ y[l]), and function calls (e.g., exp(x), √x, sin, cos, tan) [43]. Each instruction assigns its result to a variable x[i], which simplifies the use of multiple outputs in the LGP model. Table 3 reports the function set and operating parameters employed in the LGP. The main steps of the LGP can be described as follows (a minimal sketch of this loop is given after the list):
  • Initialization: create the initial population of programs randomly, and then calculate the fitness of each program.
  • Main operators:
    (1) Tournament selection: this operator randomly selects several individuals from the population; the two individuals with the best fitness are chosen as parents, and the two with the worst fitness are marked for replacement [43].
    (2) Crossover: this operator combines elements of the two best solutions with each other to create two new solutions (individuals).
    (3) Mutation: this operator transforms each of the two new individuals to produce the final offspring.
  • Elitist mechanism: the worst solutions are replaced with the transformed individuals.
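The following minimal Python sketch mirrors this steady-state loop. The program representation and the fitness, crossover, and mutate callables are placeholders assumed for illustration; they are not the instruction-level operators of the actual LGP software:

```python
import random

def evolve(population, fitness, crossover, mutate,
           generations=1000, tournament_size=4):
    """Steady-state evolution: tournament selection, crossover, mutation,
    and elitist replacement of the tournament losers."""
    for _ in range(generations):
        idx = random.sample(range(len(population)), tournament_size)
        idx.sort(key=lambda i: fitness(population[i]))  # lower fitness = better
        parent_a, parent_b = population[idx[0]], population[idx[1]]
        child_a, child_b = crossover(parent_a, parent_b)  # combine best two
        # Replace the two worst contenders with the mutated offspring.
        population[idx[-1]] = mutate(child_a)
        population[idx[-2]] = mutate(child_b)
    return min(population, key=fitness)
```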

2.2.2. Local Weighted Linear Regression (LWLR)

The LWLR method is an advanced version of the multiple linear regression (MLR) model developed by Atkeson et al. [44]. LWLR is able to improve MLR performance significantly. To illustrate the LWLR model, consider the following model:
$z_k = \alpha_{k0} + \sum_{m=1}^{M} \alpha_{km} x_{km} + \varepsilon_k$
In the above model, $z_k$ is the dependent variable, calculated from at least two independent variables $x_{km}$; $\alpha$ denotes the regression coefficients, estimated by the least-squares (LS) method; $M$ is the number of data; and $\varepsilon_k$ is the random error.
In the LWLR method, a weight function describes the relationship between input and output data. The fitness function of the LWLR model can be expressed by the following equation [44,45,46]:
$F = \frac{1}{2M} \sum_{m=1}^{M} w_m \left( z_{o,m} - z_m \right)^2$
where $w$ is the regression weight, $z_o$ is the observed data, and $z$ is the value obtained from the model. The above equation can also be expressed in the following matrix form:
$F = \left( X\alpha - Z \right)^T W \left( X\alpha - Z \right)$
Minimizing the above function with respect to $\alpha$ yields
$\alpha = \left( X^T W X \right)^{-1} X^T W Z$
where $X$ is the matrix of training inputs, $W$ denotes the weight matrix, and $Z$ is the vector of observed target values. A kernel function can be used to construct the weight matrix in the LWLR model [47,48]. In the present study, the RBF function was used as the kernel in the LWLR model. The RBF kernel equation is defined as follows:
$w_{ik} = \exp\left( -\mu \left( x_i - x_k \right)^2 \right)$
where $\mu$ is a positive kernel parameter and $(x_i - x_k)$ is the difference between points $i$ and $k$ [49]. It should be mentioned that this main setting parameter of the LWLR model can be optimized by a trial-and-error procedure.
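Putting the pieces together, a compact sketch of LWLR prediction at a single query point is shown below, assuming NumPy, a hypothetical training matrix X (n × p) with targets z, and the trial-and-error kernel value of 0.4 reported later in Section 4:

```python
import numpy as np

def lwlr_predict(x_query, X, z, mu=0.4):
    """Predict at x_query by solving the weighted least-squares system
    alpha = (X^T W X)^{-1} X^T W Z with RBF weights centered on x_query."""
    Xb = np.hstack([np.ones((len(X), 1)), X])              # intercept column
    w = np.exp(-mu * np.sum((X - x_query) ** 2, axis=1))   # RBF kernel weights
    W = np.diag(w)
    alpha = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ z)
    return np.concatenate(([1.0], x_query)) @ alpha
```

Solving the normal equations with np.linalg.solve avoids forming the explicit matrix inverse, which is numerically preferable for small weighted systems like this one.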

2.2.3. KStar Model

The KStar (K*) algorithm is a lazy, instance-based (IB) learning method with a fast learning capability [50]. Generally, an IB learner requires only one instance per group to produce successful estimations. In this method, the distance between instances is defined as the complexity of transforming one instance into another [51]. K* employs an entropy-based distance function for regression.
Let $I$ denote the set of instances and $V$ the set of transformations, where each transformation $v: I \to I$ maps an instance onto another instance ($v \in V$). For mapping instances onto themselves, a distinguished member $\mu$ is used, where $\mu(\alpha) = \alpha$. $V^*$ denotes the set of finite transformation sequences (prefix codes) over $V$, each describing a one-to-one mapping on $I$. Provided that $P$ is a probability function on $V^*$, the probability of all paths from instance $n$ to instance $m$ is defined as
$P^*(m|n) = \sum_{v \in V^*:\ v(n) = m} P(v)$
where the sum runs over all transformation sequences $v \in V^*$ that map $n$ to $m$. The $K^*$ function can then be expressed as
$K^*(m|n) = -\log_2 P^*(m|n)$
If the instances are real numbers, it can be shown that $P^*(m|n)$ depends solely on the absolute difference between $m$ and $n$; therefore, it can be defined as
$K^*(m|n) = K^*(i) = \frac{1}{2}\log_2\left(2e - e^2\right) - \log_2(e) + i\left[\log_2(1 - e) - \log_2\left(1 - 2e - e^2\right)\right]$
where $i = |m - n|$ and $e$ denotes the model parameter, whose possible values range from 0 to 1. As a result, the distance between two points is equivalent to their absolute difference. Furthermore, for real numbers, the assumption is that the real space is approximated by a discrete space with extremely small spacing between discrete instances. The first step is to evaluate these expressions in the limit as $e$ approaches 0, which gives
$P^*(i) = \sqrt{\frac{e}{2}}\ e^{-i\sqrt{2e}}$
The likelihood of generating an integer with a value between $i$ and $i + \Delta i$ can be expressed as a probability density function (PDF) as follows:
$P^*(i) = \sqrt{\frac{e}{2}}\ e^{-i\sqrt{2e}}\, \Delta i$
To obtain the PDF over the real numbers, the substitution $x/x_0 = i\sqrt{2e}$ is made in terms of a real value $x$:
$P^*(x)\,dx = \frac{1}{2x_0}\ e^{-x/x_0}\, dx$
where $x_0$, the mean value of $x$ over the distribution $P^*$, must be chosen suitably for practical purposes. In the KStar method, a number between $n_0$ (using only the training instance closest to $m$) and $N$ (using all training instances) is picked, and $x_0$ is set accordingly. It should be noted that the KStar model was developed in this study by utilizing the open-source WEKA software. The main parameter of the KStar model is the global blend (GB), which is determined by using the trial-and-error method.
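As a rough illustration only (not WEKA's KStar implementation), the limiting density above can be turned into a toy regressor: each training instance contributes with probability proportional to the exponential density of its distance from the query, and the prediction is the probability-weighted mean of the targets. Here x0 is a free scale parameter standing in for the effect of the global blend:

```python
import numpy as np

def kstar_like_predict(x_query, X, y, x0=1.0):
    """Toy K*-style regression: weight each training instance by the
    density P*(x) = exp(-x / x0) / (2 * x0) of its distance x from the query."""
    dist = np.abs(X - x_query).sum(axis=1)   # absolute (Manhattan) distance
    p = np.exp(-dist / x0) / (2.0 * x0)
    return float(np.sum(p * y) / np.sum(p))
```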

2.2.4. Bootstrap Aggregate (Bagged) Regression Tree (BRT)

Bagging (bootstrap aggregating) is one of the ensemble learning methods [52]. In this method, the training data series is divided into N new training series by bootstrap sampling, and a weak learner is trained on each of the N datasets. In bootstrap sampling, random sampling is performed with replacement, which means that some of the training data may be repeated and some may be omitted. In the bagged regression tree (BRT) method, each of the N training series is learned by a regression tree model, and the final result is obtained by averaging the outputs of the N tree models (Figure 4). Individually, the results of each regression tree have high variance and low bias; averaging the results of N trees reduces the variance of the model, increases accuracy, and prevents overfitting. The performance of the BRT method depends on the correct choice of the number of trees (N). To determine the optimal value of N, out-of-bag (OOB) error estimation curves can be used. Usually, about two-thirds of the data series enter model training through bootstrapping; the remaining one-third that does not enter the training phase of each tree constitutes the out-of-bag (OOB) observations. OOB observations are used to estimate the prediction error, and the resulting OOB error is a good criterion for model validation. In the present study, the fitrensemble function in MATLAB was used to build the bagged regression tree model.
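For readers without MATLAB, a roughly equivalent model can be sketched with scikit-learn (version 1.2 or newer is assumed for the estimator keyword); X_train and y_train are hypothetical training arrays, and the settings mirror Table 3:

```python
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

brt = BaggingRegressor(
    estimator=DecisionTreeRegressor(min_samples_leaf=1),  # weak learner
    n_estimators=50,    # N trees, cf. "Learning cycles = 50" in Table 3
    oob_score=True,     # validate on the out-of-bag (OOB) observations
    random_state=0,
)
brt.fit(X_train, y_train)
print("OOB R^2:", brt.oob_score_)  # OOB-based validation criterion
```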

3. Statistical Criteria for Evaluation of Models

To check the precision of the proposed models, different statistical criteria including R, root mean square error (RMSE), mean absolute percentage error (MAPE), Scatter Index (SI), and Willmott’s agreement Index (IA) were employed [53,54,55,56,57,58,59,60,61,62,63,64,65,66].
  1. Correlation coefficient (R):
$R = \frac{\sum_{i=1}^{N}\left(BI_{p,i} - \overline{BI}_p\right)\left(BI_{o,i} - \overline{BI}_o\right)}{\sqrt{\sum_{i=1}^{N}\left(BI_{p,i} - \overline{BI}_p\right)^2}\ \sqrt{\sum_{i=1}^{N}\left(BI_{o,i} - \overline{BI}_o\right)^2}}, \quad 0 < R < 1$
  2. Root mean square error (RMSE):
$RMSE = \left(\frac{1}{N}\sum_{i=1}^{N}\left(BI_{o,i} - BI_{p,i}\right)^2\right)^{0.5}$
  3. Mean absolute percentage error (MAPE):
$MAPE = \frac{100}{N}\sum_{i=1}^{N}\left|\frac{BI_{o,i} - BI_{p,i}}{BI_{o,i}}\right|$
  4. Scatter index (SI):
$SI = RMSE / \overline{BI}_o$
  5. Willmott's agreement index (IA) [49]:
$IA = 1 - \frac{\sum_{i=1}^{N}\left(BI_{o,i} - BI_{p,i}\right)^2}{\sum_{i=1}^{N}\left(\left|BI_{p,i} - \overline{BI}_o\right| + \left|BI_{o,i} - \overline{BI}_o\right|\right)^2}, \quad 0 < IA < 1$
where $BI_o$ is the observed value, $BI_p$ is the predicted value, $\overline{BI}_o$ and $\overline{BI}_p$ are the average values of the observed and predicted data, respectively, and $N$ is the number of data points.
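These five criteria translate directly into NumPy; a short sketch assuming 1-D arrays of observed (bi_o) and predicted (bi_p) BI values:

```python
import numpy as np

def evaluate(bi_o: np.ndarray, bi_p: np.ndarray) -> dict:
    """Compute the five statistical criteria defined above."""
    r = np.corrcoef(bi_o, bi_p)[0, 1]
    rmse = np.sqrt(np.mean((bi_o - bi_p) ** 2))
    mape = 100.0 * np.mean(np.abs((bi_o - bi_p) / bi_o))
    si = rmse / bi_o.mean()
    ia = 1.0 - np.sum((bi_o - bi_p) ** 2) / np.sum(
        (np.abs(bi_p - bi_o.mean()) + np.abs(bi_o - bi_o.mean())) ** 2)
    return {"R": r, "RMSE": rmse, "MAPE%": mape, "SI": si, "IA": ia}
```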

4. Results and Discussion

The LGP model was built with the free software package Discipulus; its setting parameters are listed in Table 3. The BRT model was built with the "Bag" method of the fitrensemble function in the Statistics and Machine Learning Toolbox of MATLAB 2019; its setting parameters, also listed in Table 3, were optimized by a trial-and-error procedure to avoid overfitting [67,68]. The kernel variable in the LWLR model was adopted through a trial-and-error process, leading to a value of 0.4. For the KStar model, the global blend—the crucial parameter of the model—was optimized using a grid search scheme, leading to a value of 30. Figure 5 shows the road map of the BI prediction procedure using the developed AI models.
This paper examines the LGP approach for predicting the brittleness index (BI) from four input variables: Rn, Vp, D, and Is50. Two lazy machine learning models (LWLR and KStar) and a decision-tree-based model (BRT) were also employed to benchmark the LGP approach. Figure 6 depicts a regression tree constructed by the BRT model, in which the terminal nodes, or leaves, give the predicted response. Table 4 presents the modeling results obtained by all models in the training and testing phases. The quantitative results in the training phase indicate that the KStar model (R = 0.9984, RMSE = 0.0865, MAPE = 0.2564, and IA = 0.9992) is superior to the BRT (R = 0.9459, RMSE = 0.5297, and MAPE = 3.1569), LWLR (R = 0.9252, RMSE = 0.5960, and MAPE = 3.4088), and LGP (R = 0.9248, RMSE = 0.5867, and MAPE = 3.6279) models. The testing results show that the LGP approach exhibits the best efficiency for BI prediction, with the highest correlation coefficient (R = 0.9529) and lowest error metrics (RMSE = 0.4838 and MAPE = 3.2155), followed by LWLR (R = 0.9490, RMSE = 0.6607, and MAPE = 4.1549), BRT (R = 0.9433, RMSE = 0.6875, and MAPE = 4.3884), and KStar (R = 0.9310, RMSE = 0.7933, and MAPE = 5.0573). A scatter plot for each model, a powerful graphical tool, is depicted in Figure 7 for comparison between predicted and observed values of BI. Careful examination of the scatter plots indicates that the LGP approach, having the closest distribution of predicted points to the 1:1 line, performs better than the other AI methods across the whole dataset. The LWLR and BRT models, with acceptable accuracy and similar predictive performance, rank as the second- and third-best models, respectively. KStar, despite its remarkable performance in the training phase (R = 0.9984), is identified as the weakest method due to the highest dispersion of its predicted points in testing.
In the next graphical validation stage, half violin plots of all datasets are presented to compare the distribution of predicted values against the observed ones. The underlying distribution of each model was estimated using a smooth kernel density function, with benchmark points—the median and quartiles—depicted in Figure 8. KStar and BRT have Q25% values (11.366 and 11.562, respectively) closer to the observed value (11.395) than the LWLR and LGP approaches, whereas the LGP and LWLR Q75% values (13.251 and 13.146, respectively) exhibit better agreement with the observed value (13.21). Given the arrangement of the datasets, the first quartile falls largely within the training data. Regarding KStar, the remarkable performance in training and the disappointing results in the testing phase imply that overfitting occurred in this paradigm.
The trend variation of BI in both the training and testing modes is shown in Figure 9. The results indicate that the LGP model properly captures the nonlinear behavior of BI in both the training and testing datasets and demonstrates promising predictive performance compared with the other models. A complete error analysis was performed to evaluate the performance of the proposed predictive methods in BI estimation. According to Figure 10, the KStar (RDB = 5.52%) and LWLR (RDB = 21.51%) models show the best and worst predictive performance in the training stage, as indicated by the lowest and highest relative deviation bands, respectively. Furthermore, LGP, with the lowest RDB (14.40%), and KStar, with the highest RDB (23.41%), yielded the most promising and weakest results in forecasting BI in the testing mode, respectively.
As a final error assessment, the cumulative distribution function (CDF) of the absolute percentage of relative error (APRE) for the testing dataset was considered. Figure 11 indicates that, for more than 80% of the testing dataset, the APRE values of LGP, BRT, LWLR, and KStar in predicting BI are less than 5%, 7.01%, 7.65%, and 7.80%, respectively. It can therefore be inferred that the LGP model, as the main novelty of this research, is superior to all the other proposed AI models for accurately predicting BI. The KStar approach, despite its excellent performance in the training phase, yielded the weakest results among all models in the test phase, which means that this method may not generalize well to unseen data; because of this overfitting, KStar cannot be regarded as an efficient predictive method for BI. Thus, LGP and LWLR were identified as the best and second-best predictive models, and the BRT model, ranking third with predictive performance close to that of LWLR, yielded acceptable results for the prediction of BI values. It is worth noting that, although KStar showed unfavorable performance in the testing mode, the accuracy of its results is still far better than the results of previous research [3]. In the literature, several studies have predicted BI using different machine learning methods. Yagiz et al. [69] used the genetic algorithm (GA) and particle swarm optimization (PSO) to predict BI, obtaining R² values between 0.851 and 0.932. In another study, Koopialipoor et al. [25] predicted BI through a combination of an ANN and the firefly algorithm, yielding predictions with an R² of 0.896. In the present study, BI was predicted with better performance (R = 0.953) by the LGP model, which indicates the effectiveness of the model proposed in this study compared with the aforementioned models in the literature. In line with the objectives of this study, the uncertainty of the data has not been investigated; given its importance, the uncertainty of the data and of machine-learning results could be the subject of future research. In addition, the models presented in the current study generally suffer from a limited amount of laboratory data; therefore, future work should examine the accuracy of the presented methods on larger datasets.

5. Sensitivity Analysis

For more effective use of AI methods, recognizing the most influential input parameters is essential. One of the most widely used techniques for sensitivity analysis (SA) is the consecutive elimination of the input variables, executing the AI model for each resulting situation (a sketch of this procedure is given at the end of this section). This research used the LGP model, as the best model, to implement the SA. Table 5 lists the SA results for the five input combinations. The results demonstrate that dry density—whose elimination gives the lowest R (0.9081) and the highest RMSE (0.8027) and MAPE (5.4642)—is the most influential input variable for estimating the brittleness index (BI). Vp (R = 0.9163 and RMSE = 0.7944) ranks second, followed by Is50 (R = 0.9169 and RMSE = 0.7861) and Rn (R = 0.9273 and RMSE = 0.6959). A spider plot based on the six statistical criteria for all input combinations is displayed in Figure 12. According to this figure, the combination that eliminates the dry density variable (i.e., All − Dry Density), showing the lowest R and IA and the highest RMSE and MAPE, has the greatest impact on the accuracy of predicting BI. It should be mentioned that feature selection methods such as the Boruta random forest, which have a great ability to capture non-linear interactions between the predictors and the target, can also be utilized to identify the influential parameters; this can be considered an alternative to classical sensitivity analysis.
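The elimination procedure can be expressed in a few lines; the sketch below assumes a hypothetical helper train_and_rmse(...) that retrains the chosen model and returns its test RMSE:

```python
def sensitivity(X_train, y_train, X_test, y_test, names):
    """Drop one input at a time; the larger the RMSE increase after removal,
    the more influential the eliminated variable (cf. Table 5)."""
    base = train_and_rmse(X_train, y_train, X_test, y_test)
    impact = {}
    for j, name in enumerate(names):
        keep = [c for c in range(X_train.shape[1]) if c != j]
        rmse = train_and_rmse(X_train[:, keep], y_train,
                              X_test[:, keep], y_test)
        impact[name] = rmse - base
    # Most influential variable first.
    return dict(sorted(impact.items(), key=lambda kv: -kv[1]))
```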

6. Conclusions

Precise estimation of BI is necessary for any ground excavation project, and this requires appropriate prediction models. With this in view, several advanced ML methods, including the LGP, BRT, LWLR, and KStar models, were proposed to estimate BI. A database collected from a tunneling project in Pahang state, Malaysia, was used, with four input parameters (Vp, Is50, D, and Rn) and BI as the output parameter. In the modeling process, 64 and 21 data points were used for the training and testing phases, respectively. Finally, the models' accuracy was compared using several statistical criteria such as R and RMSE. The findings of this study can be summarized as follows:
  • Based on the results, the performance of all developed models was suitable and acceptable. Accordingly, all proposed models can be used with confidence in future research on the prediction of other problems in the field of rock mechanics.
  • Among the proposed models, KStar (R = 0.9984 and RMSE = 0.0865) predicted BI with the best performance in the training phase, while the best performance in the testing phase was achieved by the LGP model (R = 0.9529 and RMSE = 0.4838). In addition, LWLR (R = 0.9490 and RMSE = 0.6607) and BRT (R = 0.9433 and RMSE = 0.6875), ranking second and third, respectively, led to satisfactory results for modeling BI values.
  • As a possible future study for increasing the accuracy of BI modeling, the authors recommend examining stacked ensemble models that integrate the advantages of standalone data-driven models.
  • Sensitivity analysis demonstrated that dry density (D) was the most influential parameter with respect to BI.

Author Contributions

Conceptualization, M.H. and A.S.M.; methodology, M.J., I.A., M.M.S.S. and M.K.; validation, M.J. and I.A.; investigation, A.S.M. and M.K.; writing—original draft, M.H., A.S.M., M.J., I.A., M.M.S.S. and M.K.; writing—review and editing, M.H., A.S.M., M.J., I.A., M.M.S.S. and M.K.; supervision, M.H.; funding acquisition, M.M.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Ministry of Science and Higher Education of the Russian Federation, under the strategic academic leadership program ‘Priority 2030’ (Agreement 075-15-2021-1333, dated 30 September 2021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rickman, R.; Mullen, M.J.; Petre, J.E.; Grieser, W.V.; Kundert, D. A practical use of shale petrophysics for stimulation design optimization: All shale plays are not clones of the Barnett Shale. In Proceedings of the SPE Annual Technical Conference and Exhibition, Denver, CO, USA, 21–24 September 2008. [Google Scholar]
  2. Miskimins, J.L. The impact of mechanical stratigraphy on hydraulic fracture growth and design considerations for horizontal wells. Bulletin 2012, 91, 475–499. [Google Scholar]
  3. Jahed Armaghani, D.; Asteris, P.G.; Askarian, B.; Hasanipanah, M.; Tarinejad, R.; Huynh, V.V. Examining hybrid and single SVM models with different kernels to predict rock brittleness. Sustainability 2020, 12, 2229. [Google Scholar] [CrossRef] [Green Version]
  4. Hajiabdolmajid, V.; Kaiser, P. Brittleness of rock and stability assessment in hard rock tunnelling. Tunn. Undergr. Space Technol. 2003, 18, 35–48. [Google Scholar] [CrossRef]
  5. Rybacki, E.; Reinicke, A.; Meier, T.; Makasi, M.; Dresen, G. What controls the mechanical properties of shale rocks?–Part I: Strength and Young’s modulus. J. Pet. Sci. Eng. 2015, 135, 702–722. [Google Scholar] [CrossRef]
  6. Rybacki, E.; Meier, T.; Dresen, G. What controls the mechanical properties of shale rocks?—Part II: Brittleness. J. Pet. Sci. Eng. 2016, 144, 39–58. [Google Scholar] [CrossRef] [Green Version]
  7. Singh, S.P. Brittleness and the mechanical winning of coal. Min. Sci. Technol. 1986, 3, 173–180. [Google Scholar] [CrossRef]
  8. Sun, D.; Lonbani, M.; Askarian, B.; Jahed Armaghani, D.; Tarinejad, R.; Pham, B.T.; Huynh, V.V. Investigating the Applications of Machine Learning Techniques to Predict the Rock Brittleness Index. Appl. Sci. 2020, 10, 1691. [Google Scholar] [CrossRef] [Green Version]
  9. Zhou, J.; Guo, H.; Koopialipoor, M.; Armaghani, D.J.; Tahir, M.M. Investigating the effective parameters on the risk levels of rockburst phenomena by developing a hybrid heuristic algorithm. Eng. Comput. 2020, 37, 1679–1694. [Google Scholar] [CrossRef]
  10. Yagiz, S. Utilizing rock mass properties for predicting TBM performance in hard rock condition. Tunn. Undergr. Space Technol. 2008, 23, 326–339. [Google Scholar] [CrossRef]
  11. Ebrahimabadi, A.; Goshtasbi, K.; Shahriar, K.; Cheraghi Seifabad, M. A model to predict the performance of roadheaders based on the Rock Mass Brittleness Index. J. S. Afr. Inst. Min. Metall. 2011, 111, 355–364. [Google Scholar]
  12. Yagiz, S. Assessment of brittleness using rock strength and density with punch penetration test. Tunn. Undergr. Space Technol. 2009, 24, 66–74. [Google Scholar] [CrossRef]
  13. Altindag, R. Assessment of some brittleness indexes in rock-drilling efficiency. Rock Mech. Rock Eng. 2010, 43, 361–370. [Google Scholar] [CrossRef]
  14. Morley, A. Strength of Materials, 11th ed.; Longmans, Green: London, UK, 1954. [Google Scholar]
  15. Ramsay, J.G. Folding and Fracturing of Rocks; Mc Graw Hill B. Co.: New York, NY, USA, 1967; Volume 568. [Google Scholar]
  16. Obert, L.; Duvall, W.I. Rock Mechanics and the Design of Structures in Rock; Wiley: Hoboken, NJ, USA, 1967. [Google Scholar]
  17. Yagiz, S.; Gokceoglu, C. Application of fuzzy inference system and nonlinear regression models for predicting rock brittleness. Expert Syst. Appl. 2010, 37, 2265–2272. [Google Scholar] [CrossRef]
  18. Wang, Y.; Watson, R.; Rostami, J.; Wang, J.Y.; Limbruner, M.; He, Z. Study of borehole stability of Marcellus shale wells in longwall mining areas. J. Pet. Explor. Prod. Technol. 2014, 4, 59–71. [Google Scholar] [CrossRef] [Green Version]
  19. Meng, F.; Zhou, H.; Zhang, C.; Xu, R.; Lu, J. Evaluation methodology of brittleness of rock based on post-peak stress–strain curves. Rock Mech. Rock Eng. 2015, 48, 1787–1805. [Google Scholar] [CrossRef]
  20. Hucka, V.; Das, B. Brittleness determination of rocks by different methods. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 1974, 11, 389–392. [Google Scholar] [CrossRef]
  21. Lawn, B.R.; Marshall, D.B. Hardness, toughness, and brittleness: An indentation analysis. J. Am. Ceram. Soc. 1979, 62, 347–350. [Google Scholar] [CrossRef]
  22. Khandelwal, M.; Faradonbeh, R.S.; Monjezi, M.; Armaghani, D.J.; Bin Abd Majid, M.Z.; Yagiz, S. Function development for appraising brittleness of intact rocks using genetic programming and non-linear multiple regression models. Eng. Comput. 2017, 33, 13–21. [Google Scholar] [CrossRef]
  23. Altindag, R. The role of rock brittleness on analysis of percussive drilling performance. In Proceedings of the 5th National Rock Mechanics, Isparta, Turkey, 30–31 October 2000; pp. 105–112. [Google Scholar]
  24. Nejati, H.R.; Moosavi, S.A. A new brittleness index for estimation of rock fracture toughness. J. Min. Environ. 2017, 8, 83–91. [Google Scholar]
  25. Koopialipoor, M.; Noorbakhsh, A.; Noroozi Ghaleini, E.; Jahed Armaghani, D.; Yagiz, S. A new approach for estimation of rock brittleness based on non-destructive tests. Nondestruct. Test. Eval. 2019, 34, 354–375. [Google Scholar] [CrossRef]
  26. Armaghani, D.J.; Koopialipoor, M.; Marto, A.; Yagiz, S. Application of several optimization techniques for estimating TBM advance rate in granitic rocks. J. Rock Mech. Geotech. Eng. 2019, 11, 779–789. [Google Scholar] [CrossRef]
  27. Dehghan, S.; Sattari, G.H.; Chelgani, S.C.; Aliabadi, M.A. Prediction of uniaxial compressive strength and modulus of elasticity for Travertine samples using regression and artificial neural networks. Min. Sci. Technol. 2010, 20, 41–46. [Google Scholar] [CrossRef]
  28. Yagiz, S.; Yazitova, A.; Karahan, H. Application of differential evolution algorithm and comparing its performance with literature to predict rock brittleness for excavatability. Int. J. Min. Reclam. Environ. 2020, 34, 672–685. [Google Scholar] [CrossRef]
  29. Ahmadianfar, I.; Jamei, M.; Chu, X. A novel hybrid wavelet-locally weighted linear regression (W-LWLR) model for electrical conductivity (EC) prediction in water surface. J. Contam. Hydrol. 2020, 232, 103641. [Google Scholar] [CrossRef]
  30. Jiang, H.; Mohammed, A.S.; Kazeroon, R.A.; Sarir, P. Use of the Gene-Expression Programming Equation and FEM for the High-Strength CFST Columns. Appl. Sci. 2021, 11, 10468. [Google Scholar] [CrossRef]
  31. Asteris, P.G.; Rizal, F.I.M.; Koopialipoor, M.; Roussis, P.C.; Ferentinou, M.; Armaghani, D.J.; Gordan, B. Slope Stability Classification under Seismic Conditions Using Several Tree-Based Intelligent Techniques. Appl. Sci. 2022, 12, 1753. [Google Scholar] [CrossRef]
  32. Massalov, T.; Yagiz, S.; Adoko, A.C. Application of Soft Computing Techniques to Estimate Cutter Life Index Using Mechanical Properties of Rocks. Appl. Sci. 2022, 12, 1446. [Google Scholar] [CrossRef]
  33. Qian, Y.; Aghaabbasi, M.; Ali, M.; Alqurashi, M.; Salah, B.; Zainol, R.; Moeinaddini, M.; Hussein, E.E. Classification of Imbalanced Travel Mode Choice to Work Data Using Adjustable SVM Model. Appl. Sci. 2021, 11, 11916. [Google Scholar] [CrossRef]
  34. Kaunda, R.B.; Asbury, B. Prediction of rock brittleness using nondestructive methods for hard rock tunnelling. J. Rock Mech. Geotech. Eng. 2016, 8, 533–540. [Google Scholar] [CrossRef] [Green Version]
  35. Hatheway, A.W. The complete ISRM suggested methods for rock characterization, testing and monitoring; 1974–2006. Environ. Eng. Geosci. 2009, 15, 47–48. [Google Scholar] [CrossRef]
  36. George, D.; Mallery, P. SPSS for Windows Step by Step: A Simple Guide and Reference; Allyn and Bacon: Boston, MA, USA, 2003. [Google Scholar]
  37. Nie, N.H.; Bent, D.H.; Hull, C.H. SPSS: Statistical Package for the Social Sciences; McGraw-Hill: New York, NY, USA, 1975. [Google Scholar]
  38. Kobayashi, M.; Sakata, S. Mallows’ Cp criterion and unbiasedness of model selection. J. Econom. 1990, 45, 385–395. [Google Scholar] [CrossRef]
  39. Gilmour, S.G. The interpretation of Mallows’s Cp-statistic. J. R. Stat. Soc. Ser. D Stat. 1996, 45, 49–56. [Google Scholar]
  40. Akaike, H. A new look at the statistical model identification. IEEE Trans. Automat. Contr. 1974, 19, 716–723. [Google Scholar] [CrossRef]
  41. Claeskens, G.; Hjort, N.L. Model Selection and Model Averaging; Cambridge Books: Cambridge, UK, 2008. [Google Scholar]
  42. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  43. Gandomi, A.H.; Mohammadzadeh, D.; Pérez-Ordóñez, J.L.; Alavi, A.H. Linear genetic programming for shear strength prediction of reinforced concrete beams without stirrups. Appl. Soft Comput. 2014, 19, 112–120. [Google Scholar] [CrossRef]
  44. Atkeson, C.G.; Moore, A.W.; Schaal, S. Locally Weighted Learning for Control. Artif. Intell. Rev. 1997, 11, 75–113. [Google Scholar] [CrossRef]
  45. Jamei, M.; Ahmadianfar, I. Prediction of scour depth at piers with debris accumulation effects using linear genetic programming. Mar. Georesources Geotechnol. 2020, 38, 468–479. [Google Scholar] [CrossRef]
  46. Ahmadianfar, I.; Jamei, M.; Chu, X. Prediction of local scour around circular piles under waves using a novel artificial intelligence approach. Mar. Georesources Geotechnol. 2019, 39, 44–55. [Google Scholar] [CrossRef]
  47. Wang, J.; Yu, L.C.; Lai, K.R.; Zhang, X. Locally weighted linear regression for cross-lingual valence-arousal prediction of affective words. Neurocomputing 2016, 194, 271–278. [Google Scholar] [CrossRef]
  48. Pourrajab, R.; Ahmadianfar, I.; Jamei, M.; Behbahani, M. A meticulous intelligent approach to predict thermal conductivity ratio of hybrid nanofluids for heat transfer applications. J. Therm. Anal. Calorim. 2020, 146, 611–628. [Google Scholar] [CrossRef]
  49. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. (NY) 2020, 540, 131–159. [Google Scholar] [CrossRef]
  50. Witten, I.H.; Frank, E.; Hall, M.A. Data Mining: Practical Machine Learning Tools and Techniques, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2011. [Google Scholar]
  51. Williams, T.P.; Gong, J. Predicting construction cost overruns using text mining, numerical data and ensemble classifiers. Autom. Constr. 2014, 43, 23–29. [Google Scholar] [CrossRef]
  52. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  53. Hasanipanah, M.; Monjezi, M.; Shahnazar, A.; Jahed Armaghanid, D.; Farazmand, A. Feasibility of indirect determination of blast induced ground vibration based on support vector machine. Measurement 2015, 75, 289–297. [Google Scholar] [CrossRef]
  54. Parsajoo, M.; Armaghani, D.J.; Mohammed, A.S.; Khari, M.; Jahandari, S. Tensile strength prediction of rock material using non-destructive tests: A comparative intelligent study. Transp. Geotech. 2021, 31, 100652. [Google Scholar] [CrossRef]
  56. Asteris, P.G.; Mamou, A.; Hajihassani, M.; Hasanipanah, M.; Koopialipoor, M.; Le, T.T.; Kardani, N.; Armaghani, D.J. Soft computing based closed form equations correlating L and N-type Schmidt hammer rebound numbers of rocks. Transp. Geotech. 2021, 29, 100588. [Google Scholar] [CrossRef]
  57. Pham, B.T.; Nguyen, M.D.; Nguyen-Thoi, T.; Ho, L.S.; Koopialipoor, M.; Quoc, N.K.; Armaghani, D.J.; Van Le, H. A novel approach for classification of soils based on laboratory tests using Adaboost, Tree and ANN modeling. Transp. Geotech. 2020, 27, 100508. [Google Scholar] [CrossRef]
  58. Zhou, J.; Qiu, Y.; Zhu, S.; Jahed Armaghani, D.; Khandelwal, M.; Mohamad, E.T. Estimating TBM advance rate in hard rock condition using XGBoost and Bayesian optimization. Undergr. Space 2020, 6, 506–515. [Google Scholar] [CrossRef]
  59. Harandizadeh, H.; Jahed Armaghani, D.; Hasanipanah, M.; Jahandari, S. A novel TS Fuzzy-GMDH model optimized by PSO to determine the deformation values of rock material. Neural Comput. Appl. 2022, in press. [Google Scholar] [CrossRef]
  60. Hasanipanah, M.; Amnieh, H.B. A fuzzy rule-based approach to address uncertainty in risk assessment and prediction of blast-induced flyrock in a quarry. Nat. Resour. Res. 2020, 29, 669–689. [Google Scholar] [CrossRef]
  61. Hasanipanah, M.; Amnieh, H.B. Developing a new uncertain rule-based fuzzy approach for evaluating the blast-induced backbreak. Eng. Comput. 2020, 37, 1879–1893. [Google Scholar] [CrossRef]
  62. Hasanipanah, M.; Keshtegar, B.; Thai, D.K.; Troung, N.-T. An ANN adaptive dynamical harmony search algorithm to approximate the flyrock resulting from blasting. Eng. Comput. 2020, 38, 1257–1269. [Google Scholar] [CrossRef]
  63. Hasanipanah, M.; Meng, D.; Keshtegar, B.; Trung, N.T.; Thai, D.K. Nonlinear models based on enhanced Kriging interpolation for prediction of rock joint shear strength. Neural Comput. Appl. 2020, 33, 4205–4215. [Google Scholar] [CrossRef]
  64. Hasanipanah, M.; Zhang, W.; Jahed Armaghani, D.; Rad, H.N. The potential application of a new intelligent based approach in predicting the tensile strength of rock. IEEE Access 2020, 8, 57148–57157. [Google Scholar] [CrossRef]
  65. Zhu, W.; Nikafshan Rad, H.; Hasanipanah, M. A chaos recurrent ANFIS optimized by PSO to predict ground vibration generated in rock blasting. Appl. Soft. Comput. 2021, 108, 107434. [Google Scholar] [CrossRef]
  66. Hasanipanah, M.; Jamei, M.; Mohammed, A.S.; Nait Amar, M.; Hocine, O.; Khedher, K.M. Intelligent prediction of rock mass deformation modulus through three optimized cascaded forward neural network models. Earth Sci. Inform. 2022, in press. [Google Scholar] [CrossRef]
  67. Babyak, M.A. What you see may not be what you get: A brief, nontechnical introduction to overfitting in regression-type models. Psychosom. Med. 2004, 66, 411–421. [Google Scholar]
  68. Hill, T.; Lewicki, P. Statistics: Methods and Applications: A Comprehensive Reference for Science, Industry, and Data Mining; StatSoft, Inc.: Tulsa, OK, USA, 2006. [Google Scholar]
  69. Yagiz, S.; Ghasemi, E.; Adoko, A.C. Prediction of Rock Brittleness Using Genetic Algorithm and Particle Swarm Optimization Techniques. Geotech. Geol. Eng. 2018, 36, 3767–3777. [Google Scholar] [CrossRef]
Figure 1. Failure of a sample under a BTS test [3].
Figure 2. UCS test (a) before and (b) after failure [3].
Figure 3. Comparison between (a) GP and (b) LGP structure.
Figure 4. Training procedure of bagged regression tree (BRT).
Figure 5. The road map of predicting BI using machine learning approaches.
Figure 6. Decision trees of BRT model for prediction of BI.
Figure 7. Comparison of four soft computing approaches and observed BI using scatter plot.
Figure 8. Performance assessment of predicted and observed BI values using half violin plots.
Figure 9. The trend physical plot for comparison between the observed and predicted BI values.
Figure 10. Box plots for the relative deviation (%) distribution of all predictive models in testing and training.
Figure 11. The cumulative frequency percentage versus the relative absolute error (%) for LWLR, BRT, KStar, and LGP models for the testing dataset.
Figure 12. The influence input variables ranking for estimation of BI value.
Table 1. Descriptive statistics of all variables used in the modeling.

| Parameter | Rn | Vp (m/s) | Dry Density (g/cm³) | Is50 (MPa) | BI |
|---|---|---|---|---|---|
| Minimum | 20 | 2910 | 2.38 | 0.8722 | 10.12 |
| Maximum | 59 | 7943 | 2.75 | 6.59 | 16.75 |
| Mean | 37.16 | 4975 | 2.536 | 3.441 | 12.61 |
| Std. deviation | 10.12 | 1199 | 0.079 | 1.118 | 1.554 |
| Range | 39 | 5033 | 0.37 | 5.718 | 6.626 |
| Skewness | 0.3951 | 0.2449 | 0.1161 | 0.1294 | 0.7339 |
| Kurtosis | −0.76 | −0.605 | −0.3473 | 0.3369 | 0.2216 |
Table 2. Best subset analysis for selecting the optimum input combination.

| Number of Variables | Variables | MSE | R² | Adjusted R² | Mallows' Cp | Akaike's AIC | Amemiya's PC |
|---|---|---|---|---|---|---|---|
| 2 | Vp, D | 0.652 | 0.736 | 0.730 | 36.387 | −33.419 | 0.276 |
| 3 | Vp, D, Is50 | 0.530 | 0.788 | 0.781 | 15.611 | −50.109 | 0.227 |
| 4 | Rn, Vp, D, Is50 | 0.463 | 0.817 | 0.808 | 5.000 | −60.552 | 0.201 |
Table 3. The characteristics and setting parameters of proposed AI-based approaches.

| Model | Parameter | Setting |
|---|---|---|
| LGP | Function set | +, −, ×, ÷, √, power, sin, cos |
| LGP | Population size | 300 |
| LGP | Mutation frequency (%) | 85 |
| LGP | Crossover frequency (%) | 50 |
| LGP | Number of replications | 10 |
| LGP | Block mutation rate (%) | 20 |
| LGP | Instruction mutation rate (%) | 20 |
| LGP | Instruction data mutation rate (%) | 60 |
| LGP | Homologous crossover (%) | 90 |
| LGP | Program size | 64–256 |
| LWLR | Kernel parameter µ | 4 |
| KStar | Global blend | 30 |
| BRT | Function: "Bag" | Learning cycles = 50, MinLeafSize = 1 |
Table 4. Quantitative evaluation of AI-based approaches for predicting BI.

| Phase | Metric | LGP | K-Star | BRT | LWLR |
|---|---|---|---|---|---|
| Training | R | 0.9248 | 0.9984 | 0.9459 | 0.9252 |
| Training | RMSE | 0.5867 | 0.0865 | 0.5297 | 0.5960 |
| Training | MAPE% | 3.6279 | 0.2564 | 3.1569 | 3.4088 |
| Training | SI | 0.0463 | 0.0068 | 0.0418 | 0.0470 |
| Training | IA | 0.9560 | 0.9992 | 0.9628 | 0.9531 |
| Training | St.D | 1.3339 | 1.5195 | 1.2640 | 1.2828 |
| Testing | R | 0.9529 | 0.9310 | 0.9433 | 0.9490 |
| Testing | RMSE | 0.4838 | 0.7933 | 0.6875 | 0.6607 |
| Testing | MAPE% | 3.2155 | 5.0573 | 4.3884 | 4.1549 |
| Testing | SI | 0.0389 | 0.0638 | 0.0553 | 0.0532 |
| Testing | IA | 0.9744 | 0.9095 | 0.9324 | 0.9400 |
| Testing | St.D | 1.5059 | 1.0861 | 1.1116 | 1.1686 |
Table 5. The sensitivity analysis results for all possible situations.

| Metric | All − Rn | All − Vp | All − Dry Density | All − Is50 | All |
|---|---|---|---|---|---|
| R | 0.9273 | 0.9163 | 0.9081 | 0.9169 | 0.9433 |
| RMSE | 0.6959 | 0.7944 | 0.8027 | 0.7861 | 0.6875 |
| MAPE | 4.4592 | 5.1695 | 5.4642 | 5.0433 | 4.3884 |
| SI | 0.0560 | 0.0639 | 0.0646 | 0.0633 | 0.0553 |
| IA | 0.9318 | 0.9018 | 0.9004 | 0.9049 | 0.9324 |
| St.D | 1.6277 | 1.6277 | 1.6277 | 1.6277 | 1.6277 |
| Rank | 4 | 3 | 1 | 2 | – |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

