A Recommended Preprocessing

The type of preprocessing needed depends on the type of model being fit. For example, models that use distance functions or dot products should have all of their predictors on the same scale so that distance is measured appropriately.

To learn more about each of these models, and others that might be available, see https://www.tidymodels.org/find/parsnip/.

This Appendix provides recommendations for baseline levels of preprocessing that are needed for various model functions. In Table A.1, the preprocessing methods are categorized as:

  • dummy: Do qualitative predictors require a numeric encoding (e.g., via dummy variables or other methods)?

  • zv: Should columns with a single unique value be removed?

  • impute: If some predictors are missing, should they be estimated via imputation?

  • decorrelate: If there are correlated predictors, should this correlation be mitigated? This might mean filtering out predictors, using principal component analysis, or a model-based technique (e.g., regularization).

  • normalize: Should predictors be centered and scaled?

  • transform: Is it helpful to transform predictors to be more symmetric?

The information in Table A.1 is not exhaustive and somewhat depends on the implementation. For example, as noted below the table, some models may not require a particular preprocessing operation but the implementation may require it. In the table, ✔ indicates that the method is required for the model and × indicates that it is not. The ◌ symbol means that the model may be helped by the technique but it is not required.

Table A.1: Preprocessing methods for different models.
model dummy zv impute decorrelate normalize transform
C5_rules() × × × × × ×
bag_mars() × ×
bag_tree() × × × ◌¹ × ×
bart() × × × ◌¹ × ×
boost_tree() ײ ✔² ◌¹ × ×
cubist_rules() × × × × × ×
decision_tree() × × × ◌¹ × ×
discrim_flexible() × ×
discrim_linear() ×
discrim_regularized() ×
gen_additive_mod() ×
linear_reg() ×
logistic_reg() ×
mars() × ×
mlp()
multinom_reg() ײ
naive_Bayes() × ◌¹ × ×
nearest_neighbor()
pls() ×
poisson_reg() ×
rand_forest() × ✔² ◌¹ × ×
rule_fit() × ◌¹ ×
svm_*()

Footnotes:

  1. Decorrelating predictors may not help improve performance. However, fewer correlated predictors can improve the estimation of variance importance scores (see Fig. 11.4 of M. Kuhn and Johnson (2020)). Essentially, the selection of highly correlated predictors is almost random.
  2. The needed preprocessing for these models depends on the implementation. Specifically:
  • Theoretically, any tree-based model does not require imputation. However, many tree ensemble implementations require imputation.
  • While tree-based boosting methods generally do not require the creation of dummy variables, models using the xgboost engine do.

REFERENCES

———. 2020. Feature Engineering and Selection: A Practical Approach for Predictive Models. CRC Press.