
Introduction to Tree Methods in MLlib


Now we will train the random forest model (here, a classifier). First, the random forest object needs to be created by passing in the relevant parameters.
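If you are picking up from this point, the training data is assumed to come from the setup steps earlier in the article. A minimal sketch of that assumed setup (reading Spark’s bundled sample_libsvm_data.txt, the same file the GBT section below uses) could look like this:

from pyspark.sql import SparkSession
from pyspark.ml.classification import RandomForestClassifier

# Start (or reuse) a Spark session
spark = SparkSession.builder.appName("tree_methods").getOrCreate()

# Load the preprocessed sample dataset that ships with Spark (libsvm format)
data = spark.read.format("libsvm").load("sample_libsvm_data.txt")

# 70/30 train-test split, mirroring the GBT section below
(trainingSet, testSet) = data.randomSplit([0.7, 0.3])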

random_forest = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=20)

Inference: While creating the random forest classifier, we pass the label column (our target) and the features column (all the features collected into one vector), and since this is a random forest we also specify the total number of trees, here 20.

model_rf = random_forest.fit(trainingSet)

Inference: To train the model, we call the fit() method on the random forest object, passing the training set as its parameter.

predictions = model_rf.transform(testSet)

Inference: Here comes the stage where we make predictions. For that, we use the transform() method, and make sure to run it on the testing data, as only then is the purpose of the train-test split fulfilled.

predictions.printSchema()

Output:

root
 |-- label: double (nullable = true)
 |-- features: vector (nullable = true)
 |-- rawPrediction: vector (nullable = true)
 |-- probability: vector (nullable = true)
 |-- prediction: double (nullable = false)

Inference: Let’s look at what the predictions DataFrame holds. From the above output, we can see that there are 5 columns:

  1. label: the target column.
  2. features: all the dependent columns assembled into a single vector.
  3. rawPrediction: the raw, un-normalized prediction scores; this column comes in especially handy with the GBT classifier.
  4. probability: the predicted probability of each class, i.e. how confident the model is in its prediction.
  5. prediction: the value predicted by the model during evaluation.

predictions.select("prediction", "label", "features").show(5)

Output:

Inference: In the above output we filtered the main DataFrame to extract only the important columns, i.e. prediction, label, and features.
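If you are curious what the rawPrediction and probability columns described above actually hold, an optional peek could look like this; truncate=False simply prints the full vectors:

predictions.select("rawPrediction", "probability", "prediction").show(5, truncate=False)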

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")

Inference: Here we are at the model evaluation stage, and for that we use the MulticlassClassificationEvaluator. There is one key difference between the binary and multi-class evaluators: the binary evaluator can only return area-under-curve metrics, while the multi-class one can also return accuracy, precision, and recall.
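As an illustration of that difference, here is a hedged sketch of how other metrics could be requested; the metric names below come from pyspark.ml.evaluation and are not part of the original walkthrough:

from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator

# Multi-class evaluator: swap metricName to get other metrics
for metric in ["f1", "weightedPrecision", "weightedRecall"]:
    score = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName=metric).evaluate(predictions)
    print(metric, "=", score)

# Binary evaluator: limited to area-under-curve metrics, computed from rawPrediction
auc = BinaryClassificationEvaluator(labelCol="label", rawPredictionCol="rawPrediction", metricName="areaUnderROC").evaluate(predictions)
print("areaUnderROC =", auc)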

accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))

Output:

Test Error = 0

Inference: A test error of 0 might seem unacceptable, especially in a real-world setup. Keep in mind, though, that this dataset is highly separable and already cleaned, so such a perfect result is not surprising here.

Gradient Boosted Trees

Gradient-Boosted Trees (GBTs) are another tree method that can be used for both classification and regression problems. GBTs are built as an ensemble of several decision trees, trained one after another so that each new tree corrects the errors of the previous ones. You don’t have to worry about the mathematics behind the algorithm, though, as Spark handles it under the hood.

from pyspark.ml.classification import GBTClassifier

data_gbt = spark.read.format("libsvm").load("sample_libsvm_data.txt")

(trainingSet, testSet) = data_gbt.randomSplit([0.7, 0.3])

gbt_mdl = GBTClassifier(labelCol="label", featuresCol="features", maxIter=10)

model = gbt_mdl.fit(trainingSet)

predictions = model.transform(testSet)

predictions.select("prediction", "label", "features").show(5)

Output:

Code breakdown: This is just a quick walkthrough; if you have worked through the random forest part, this one will be easy to pick up in terms of implementation.

  1. First, import the GBT model and read the dataset with the same method as before.
  2. Split the dataset into training and testing sets using the randomSplit() method.
  3. Train the GBT model by creating the object and calling its fit() method.
  4. Make predictions by calling transform() on the testing data.

evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))

Output:

Test Error = 0.0625

Inference: Following the same approach as for the random forest, for GBT we got a test error of about 0.06, which seems a bit more realistic (the exact number will vary with the random split).

Note: The same pipeline can be applied to the decision tree as well, as the sketch below shows.
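To make that note concrete, here is a minimal sketch of the same pipeline with a decision tree, assuming the trainingSet/testSet split and the evaluator defined above:

from pyspark.ml.classification import DecisionTreeClassifier

# Same pattern as before: create the estimator, fit on training data, predict on test data
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")
model_dt = dt.fit(trainingSet)
predictions_dt = model_dt.transform(testSet)
print("Test Error = %g" % (1.0 - evaluator.evaluate(predictions_dt)))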

Conclusion to MLlib

In this article, we discussed the important tree methods that can be implemented with PySpark’s MLlib and went through hands-on practice using the officially documented dataset provided by Spark. Now, let’s recap everything we did in a nutshell.

  1. First, we set up the environment, read the dataset, and looked at the data, which was already preprocessed and ready for model development.
  2. Then came the stage of splitting the dataset, followed by random forest model development, prediction, and finally evaluation.
  3. We then performed the same process for the GBT model and concluded that every tree method follows the same workflow, as the sketch after this list shows.
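To make that shared process concrete, here is a minimal end-to-end sketch, assuming the data_gbt DataFrame loaded earlier; any of the three tree estimators plugs into the same four steps:

from pyspark.ml.classification import DecisionTreeClassifier, GBTClassifier, RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Pick any tree estimator; the rest of the pipeline stays identical
estimator = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=20)
# estimator = GBTClassifier(labelCol="label", featuresCol="features", maxIter=10)
# estimator = DecisionTreeClassifier(labelCol="label", featuresCol="features")

(trainingSet, testSet) = data_gbt.randomSplit([0.7, 0.3])   # 1. split
model = estimator.fit(trainingSet)                          # 2. train
predictions = model.transform(testSet)                      # 3. predict
evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
print("Test Error = %g" % (1.0 - evaluator.evaluate(predictions)))  # 4. evaluate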

Here’s the repo link to this article. I hope you liked my article on introduction to tree methods in MLlib. If you have any opinions or questions, comment below.

Connect with me on LinkedIn for further discussion on MLlib or anything else.


