When working with a dataset containing N-dimensional data points, not all features are equally important: some are more discriminative than others. If we know which ones, we can use that information to reduce the dimensionality of the data, which lowers the complexity and increases the speed of the algorithm. Sometimes a few features are completely redundant and can simply be removed from the dataset.
In this post we'll use the AdaBoost regressor to compute feature importance. AdaBoost, short for Adaptive Boosting, is an algorithm that's frequently used in conjunction with other machine learning algorithms to improve their performance. In AdaBoost, the training data points are drawn from a distribution to train the current weak learner. This distribution is updated iteratively so that subsequent learners focus on the more difficult data points, i.e. the ones that were previously misclassified: their weights are increased at each step, making them more likely to appear in the sample used for the next round of training. The trained learners are then combined, and the final decision is taken through weighted majority voting (or a weighted median in the regression case).
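To make the reweighting idea concrete, here is a minimal sketch of a discrete, two-class AdaBoost loop. This is an illustration only, not the AdaBoost.R2 algorithm that scikit-learn's AdaBoostRegressor actually implements:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def simple_adaboost(X, y, n_rounds=10):
    # Start with a uniform distribution over the training points
    n = len(y)
    weights = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Train a weak learner (a decision stump) on the current distribution
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=weights)
        pred = stump.predict(X)
        # Weighted error of this round's learner
        err = np.sum(weights[pred != y])
        if err == 0 or err >= 0.5:
            break
        # Voting weight of the learner: lower error means a bigger say
        alpha = 0.5 * np.log((1.0 - err) / err)
        # Increase the weights of misclassified points and decrease the rest,
        # so the next learner focuses on the difficult examples
        weights *= np.exp(alpha * (pred != y) - alpha * (pred == y))
        weights /= weights.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas
The final prediction would then be a weighted vote over the stored learners. AdaBoostRegressor applies the same principle to regression, reweighting points according to their prediction error.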
With that in mind, here is the full program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn import datasets
from sklearn.metrics import mean_squared_error, explained_variance_score
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
# Load housing data
housing_data = datasets.load_boston()
# Shuffle the data
X, y = shuffle(housing_data.data, housing_data.target, random_state=7)
# Split data into training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=7)
# AdaBoost Regressor model
regressor = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
        n_estimators=400, random_state=7)
regressor.fit(X_train, y_train)
# Evaluate performance of AdaBoost regressor
y_pred = regressor.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
evs = explained_variance_score(y_test, y_pred)
print("\nADABOOST REGRESSOR")
print("Mean squared error =", round(mse, 2))
print("Explained variance score =", round(evs, 2))
# Extract feature importances
feature_importances = regressor.feature_importances_
feature_names = housing_data.feature_names
# Normalize the importance values
feature_importances = 100.0 * (feature_importances / max(feature_importances))
# Sort the values and flip them
index_sorted = np.flipud(np.argsort(feature_importances))
# Arrange the X ticks
pos = np.arange(index_sorted.shape[0]) + 0.5
# Plot the bar graph
plt.figure()
plt.bar(pos, feature_importances[index_sorted], align='center')
plt.xticks(pos, feature_names[index_sorted])
plt.ylabel('Relative Importance')
plt.title('Feature importance using AdaBoost regressor')
plt.show()
After importing the required packages, we load the Boston housing dataset that ships with scikit-learn:
housing_data = datasets.load_boston()
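Note that load_boston was deprecated in scikit-learn 1.0 and removed in version 1.2. If you're running a recent release, one option is to substitute the California housing dataset, which exposes the same data, target and feature_names attributes, so the rest of the script runs unchanged:
from sklearn.datasets import fetch_california_housing

# Possible substitute for scikit-learn >= 1.2, where load_boston has been removed
housing_data = fetch_california_housing()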
Then we'll shuffle the data so that we don't bias our analysis:
X, y = shuffle(housing_data.data, housing_data.target, random_state=7)
Next we'll split the dataset into training and testing:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
Now we define and train an AdaBoost regressor, using a decision tree regressor as the base estimator:
regressor = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
        n_estimators=400, random_state=7)
regressor.fit(X_train, y_train)
Then we evaluate the performance of the regressor on the test set:
y_pred = regressor.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
evs = explained_variance_score(y_test, y_pred)
print("\nADABOOST REGRESSOR")
print("Mean squared error =", round(mse, 2))
print("Explained variance score =", round(evs, 2))
We use the regressor's built-in feature_importances_ attribute to extract the relative feature importances:
feature_importances = regressor.feature_importances_
feature_names = housing_data.feature_names
Next we normalize the importance values, sort them in descending order, and arrange the ticks on the x axis for the bar graph:
feature_importances = 100.0 * (feature_importances / max(feature_importances))
index_sorted = np.flipud(np.argsort(feature_importances))
pos = np.arange(index_sorted.shape[0]) + 0.5
Finally plot the bar graph:
plt.figure()
plt.bar(pos, feature_importances[index_sorted], align='center')
plt.xticks(pos, feature_names[index_sorted])
plt.ylabel('Relative Importance')
plt.title('Feature importance using AdaBoost regressor')
plt.show()
When we run the program, the command window shows the following output:
ADABOOST REGRESSOR
Mean squared error = 22.94
Explained variance score = 0.79
The graph is shown below:
According to our analysis, LSTAT is the most important feature in this dataset. As mentioned at the start, these importance scores can also be used to reduce dimensionality by dropping the least informative features before retraining, as sketched below. In the next post we'll use the Extremely Random Forest regressor to predict traffic conditions.
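Continuing from the script above, here is a minimal sketch of that pruning step; the 50% cutoff is an arbitrary value chosen purely for illustration:
# Keep only the features whose normalized importance clears the cutoff
# (the 50% threshold is arbitrary and only for illustration)
threshold = 50.0
selected = feature_importances > threshold
print("Selected features:", feature_names[selected])

# Retrain the same model on the reduced feature set and compare the error
X_train_sel, X_test_sel = X_train[:, selected], X_test[:, selected]
regressor_sel = AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
        n_estimators=400, random_state=7)
regressor_sel.fit(X_train_sel, y_train)
y_pred_sel = regressor_sel.predict(X_test_sel)
print("Mean squared error (reduced features) =",
        round(mean_squared_error(y_test, y_pred_sel), 2))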