Introduction to Stacking Classifier

Stacking

I have already introduced you to the Adaptive Boosting Classifier and the Gradient Boosting Classifier, so now it is time to move on to another ensemble classifier that I used in the research for my master’s thesis – Stacking. This post gives a theoretical introduction to the algorithm and an example of its implementation in Python using the MLxtend and Scikit-learn libraries.

Theoretical introduction

The main idea behind Stacking is to explore the space of different models for the same problem. You try to solve the problem with several types of models, each of which can capture some part of it. The goal is to build multiple first-level learners whose outputs form intermediate predictions. A new model is then added that learns the same target based on these intermediate predictions. The input of the algorithm is the training set S = \lbrace (x_i, y_i) \rbrace^{n}_{i=1} and the output is the ensemble classifier H [1].

The algorithm can be presented in the following steps:

1. Training the first-level classifiers h_1, \dots, h_T.

2. Creating a new set of predictions: S_h = \lbrace (x'_i, y_i) \rbrace^{n}_{i=1}, where x'_i = \lbrace h_1(x_i), \dots, h_T(x_i) \rbrace.

3. Training the second-level classifier H on S_h and making the final prediction.

The diagram of the stacking operation is presented in the following picture:

Stacking schema
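
To make the three steps more concrete, here is a minimal sketch of the procedure written directly with Scikit-learn and NumPy. It is only an illustration of the idea: the variables x_train, y_train and x_test are placeholders for your own data, and a full implementation would typically use cross-validated predictions in step 2.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Step 1: train the first-level classifiers h_1, ..., h_T
first_level = [DecisionTreeClassifier(), KNeighborsClassifier(n_neighbors=3)]
for h in first_level:
    h.fit(x_train, y_train)

# Step 2: build the new set S_h, where each x'_i is the vector
# of first-level predictions h_1(x_i), ..., h_T(x_i)
x_train_meta = np.column_stack([h.predict(x_train) for h in first_level])

# Step 3: train the second-level classifier H on S_h
# and use it for the final prediction
H = RandomForestClassifier(random_state=1)
H.fit(x_train_meta, y_train)

x_test_meta = np.column_stack([h.predict(x_test) for h in first_level])
final_predictions = H.predict(x_test_meta)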

Choosing the classifiers

As I mentioned earlier, a Stacking classifier is composed of several classifiers, and you need to choose them. The most popular choices are the following:

  • the first-level classifiers: Decision Tree Classifier, Random Forest Classifier, Nearest Neighbors Classifier
  • the second-level classifiers: Neural Network, Random Forest Classifier, Support Vector Classifier (one such combination is sketched below)
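
Purely as an illustration of such a combination (not a recommendation for any particular dataset), it could be wired up with MLxtend as follows, assuming the training data is already prepared in x_train and y_train:

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from mlxtend.classifier import StackingClassifier

# first-level classifiers: Decision Tree, Random Forest, Nearest Neighbors
level_one = [DecisionTreeClassifier(),
             RandomForestClassifier(random_state=1),
             KNeighborsClassifier()]

# second-level classifier: Support Vector Classifier
stack = StackingClassifier(classifiers=level_one, meta_classifier=SVC())
stack.fit(x_train, y_train)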

Example in Python

If you want to use this algorithm in Python, you can do that with Scikit-learn and MLxtend. In this example, we use the k Nearest Neighbors algorithm for k equal to 1 and 3, as well as the Decision Tree algorithm, as the first-level classifiers. The second-level model is the Random Forest. You need to import the following classes:

from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier 
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingClassifier

I will omit the part of the implementation responsible for preparing the training and test sets; you can find it in my other article. Let me proceed to using the classifier:

c1 = KNeighborsClassifier(n_neighbors=1)
c2 = KNeighborsClassifier(n_neighbors=3)
c3 = DecisionTreeClassifier()
c4 = RandomForestClassifier(random_state=1)

# first-level classifiers go into `classifiers`,
# the second-level model into `meta_classifier`
stacking = StackingClassifier(classifiers=[c1, c2, c3],
                              meta_classifier=c4)
stacking.fit(x_train, y_train.ravel())
predictions = stacking.predict(x_test)
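
To get a feel for how the stacked model performs, you can, for example, compare its predictions with the true labels of the test set. This is a minimal sketch: y_test denotes the test labels prepared in the same way as x_test, and accuracy is just one possible metric.

from sklearn.metrics import accuracy_score

# compare the stacked model's predictions with the true test labels
print("Stacking accuracy:", accuracy_score(y_test, predictions))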

The libraries allow you to set various parameters of the classifiers, for example, the number of nearest neighbors for kNN or the randomness of the bootstrapping of samples used when building trees in the Random Forest. All the details can be found in the documentation of these libraries.

Summary

In this post, I wanted to introduce you to the Stacking Classifier, another ensemble classifier I have written about here. If you want to read more about this method, I encourage you to check the book Combining Pattern Classifiers: Methods and Algorithms by Ludmila I. Kuncheva or the paper listed in the sources of this blog post. I also invite you to read my other articles on Machine Learning.


Sources

[1] Rising Odegua, An Empirical Study of Ensemble Techniques (Bagging, Boosting and Stacking), Deep Learning IndabaX conference, 2019
