Welcome to Tidy Modeling with R! This book is a guide to using tidymodels, a collection of R packages for model building, and it has two main goals:

  • First and foremost, this book provides a practical introduction to how to use these specific R packages to create models. We focus on a dialect of R called the tidyverse, designed with a consistent, human-centered philosophy, and demonstrate how the tidyverse and tidymodels packages can be used to produce high-quality statistical and machine learning models.

  • Second, this book will show you how to develop good methodology and statistical practices. Whenever possible, our software, documentation, and other materials attempt to prevent common pitfalls.

In Chapter 1, we outline a taxonomy for models and highlight what good software for modeling is like. The ideas and syntax of the tidyverse, which we introduce (or review) in Chapter 2, are the basis for the tidymodels approach to these challenges of methodology and practice. Chapter 3 provides a quick tour of conventional base R modeling functions and summarizes the unmet needs in that area.

After that, this book is separated into parts, starting with the basics of modeling with tidy data principles. Chapters 4 through 9 introduce an example data set on house prices and demonstrate how to use the fundamental tidymodels packages: recipes, parsnip, workflows, yardstick, and others.

The next part of the book moves forward with more details on the process of creating an effective model. Chapters 10 through 15 focus on creating good estimates of performance as well as tuning model hyperparameters.

Finally, the last section of this book, Chapters 16 through 21, covers other important topics for model building. We discuss more advanced feature engineering approaches like dimensionality reduction and encoding high cardinality predictors, as well as how to answer questions about why a model makes certain predictions and when to trust your model predictions.

We do not assume that readers have extensive experience in model building and statistics. Some statistical knowledge is required, such as random sampling, variance, correlation, basic linear regression, and other topics usually covered in an introductory undergraduate statistics or data analysis course. We do assume that the reader is at least slightly familiar with dplyr, ggplot2, and the %>% “pipe” operator in R, and is interested in applying these tools to modeling. For readers who don’t yet have this background, we recommend books such as R for Data Science by Wickham and Grolemund (2016). Investigating and analyzing data are an important part of any modeling process.
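As a rough gauge of that assumed background, here is a minimal sketch of the kind of dplyr pipeline we take for granted throughout the book (the built-in mtcars data set is used here purely for illustration):

```r
# A dplyr pipeline chained with the %>% pipe: filter rows,
# group, and compute a per-group summary.
library(dplyr)

mtcars %>%
  filter(cyl == 4) %>%          # keep only four-cylinder cars
  group_by(gear) %>%            # one group per number of gears
  summarize(mean_mpg = mean(mpg))  # average fuel efficiency per group
```

If code like this reads naturally to you, you have the prerequisites this book assumes.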

This book is not intended to be a comprehensive reference on modeling techniques; we suggest other resources to learn more about the statistical methods themselves. For general background on the most common type of model, the linear model, we suggest Fox (2008). For predictive models, M. Kuhn and Johnson (2013) and M. Kuhn and Johnson (2020) are good resources. For machine learning methods, Goodfellow, Bengio, and Courville (2016) is an excellent (but formal) source of information. In some cases, we do describe the models we use in some detail, but in a way that is less mathematical, and hopefully more intuitive.

Acknowledgments
We are so thankful for the contributions, help, and perspectives of people who have supported us in this project. There are several we would like to thank in particular.

We would like to thank our RStudio colleagues on the tidymodels team (Davis Vaughan, Hannah Frick, Emil Hvitfeldt, and Simon Couch) as well as the rest of our coworkers on the RStudio open-source team. Thank you to Desirée De Leon for the site design of the online work. We would also like to thank our technical reviewers, Chelsea Parlett-Pelleriti and Dan Simpson, for their detailed, insightful feedback that substantively improved this book, as well as our editors, Nicole Tache and Rita Fernando, for their perspective and guidance during the process of writing and publishing.

This book was written in the open, and multiple people contributed via pull requests or issues. Special thanks goes to the thirty-six people who contributed via GitHub pull requests (in alphabetical order by username): @arisp99, Brad Hill (@bradisbrad), Bryce Roney (@bryceroney), Cedric Batailler (@cedricbatailler), Ildikó Czeller (@czeildi), David Kane (@davidkane9), @DavZim, @DCharIAA, Emil Hvitfeldt (@EmilHvitfeldt), Emilio (@emilopezcano), Fgazzelloni (@Fgazzelloni), Hannah Frick (@hfrick), Hlynur (@hlynurhallgrims), Howard Baek (@howardbaek), Jae Yeon Kim (@jaeyk), Jonathan D. Trattner (@jdtrat), Jeffrey Girard (@jmgirard), John W Pickering (@JohnPickering), Jon Harmon (@jonthegeek), Joseph B. Rickert (@joseph-rickert), Maximilian Rohde (@maxdrohde), @MikeJohnPage, Mine Cetinkaya-Rundel (@mine-cetinkaya-rundel), Mohammed Hamdy (@mmhamdy), @nattalides, Y. Yu (@PursuitOfDataScience), Riaz Hedayati (@riazhedayati), Scott (@scottyd22), Simon Schölzel (@simonschoe), Simon Sayz (@tagasimon), @thrkng, Tanner Stauss (@tmstauss), Tony ElHabr (@tonyelhabr), Dmitry Zotikov (@x1o), Xiaochi (@xiaochi-liu), Zach Bogart (@zachbogart).

Using Code Examples

This book was written with RStudio using bookdown. The website is hosted via Netlify, and automatically built after every push by GitHub Actions. The complete source is available on GitHub. We generated all plots in this book using ggplot2 and its black and white theme (theme_bw()).

This version of the book was built with R version 4.2.0 (2022-04-22), pandoc version 2.14.2, and the following packages: applicable (, RSPM), av (0.7.0, RSPM), baguette (0.2.0, RSPM), beans (0.1.0, RSPM), bestNormalize (1.8.2, RSPM), bookdown (0.26, RSPM), broom (0.8.0, RSPM), censored (, Github), corrplot (0.92, RSPM), corrr (0.4.3, RSPM), Cubist (0.4.0, RSPM), DALEXtra (2.2.0, RSPM), dials (0.1.1, RSPM), dimRed (0.2.5, RSPM), discrim (0.2.0, RSPM), doMC (1.3.8, RSPM), dplyr (1.0.8, RSPM), earth (5.3.1, RSPM), embed (0.2.0, RSPM), fastICA (1.2-3, RSPM), finetune (0.2.0, RSPM), forcats (0.5.1, RSPM), ggforce (0.3.3, RSPM), ggplot2 (3.3.5, RSPM), glmnet (4.1-4, RSPM), gridExtra (2.3, RSPM), infer (1.0.0, RSPM), kableExtra (1.3.4, RSPM), kernlab (0.9-30, RSPM), kknn (1.3.1, RSPM), klaR (1.7-0, RSPM), knitr (1.39, RSPM), learntidymodels (, Github), lime (0.5.2, RSPM), lme4 (1.1-29, RSPM), lubridate (1.8.0, RSPM), mda (0.5-2, RSPM), mixOmics (6.20.0, Bioconductor), modeldata (0.1.1, RSPM), multilevelmod (0.1.0, RSPM), nlme (3.1-157, CRAN), nnet (7.3-17, CRAN), parsnip (, Github), patchwork (1.1.1, RSPM), pillar (1.7.0, RSPM), poissonreg (0.2.0, RSPM), prettyunits (1.1.1, RSPM), probably (0.0.6, RSPM), pscl (1.5.5, RSPM), purrr (0.3.4, RSPM), ranger (0.13.1, RSPM), recipes (0.2.0, RSPM), rlang (1.0.2, RSPM), rmarkdown (2.14, RSPM), rpart (4.1.16, CRAN), rsample (0.1.1, RSPM), rstanarm (2.21.3, RSPM), rules (0.2.0, RSPM), sessioninfo (1.2.2, RSPM), stacks (0.2.2, RSPM), stringr (1.4.0, RSPM), svglite (2.1.0, RSPM), text2vec (0.6.1, RSPM), textrecipes (0.5.1, RSPM), themis (0.2.1, RSPM), tibble (3.1.6, RSPM), tidymodels (0.2.0, RSPM), tidyposterior (0.1.0, RSPM), tidyverse (1.3.1, RSPM), tune (0.2.0, RSPM), uwot (0.1.11, RSPM), workflows (0.2.6, RSPM), workflowsets (0.2.1, RSPM), xgboost (, RSPM), and yardstick (0.0.9, RSPM).


Fox, J. 2008. Applied Regression Analysis and Generalized Linear Models. 2nd ed. Thousand Oaks, CA: Sage.
Goodfellow, I, Y Bengio, and A Courville. 2016. Deep Learning. MIT Press.
Kuhn, M, and K Johnson. 2013. Applied Predictive Modeling. Springer.
———. 2020. Feature Engineering and Selection: A Practical Approach for Predictive Models. CRC Press.
Wickham, H, and G Grolemund. 2016. R for Data Science. Sebastopol, CA: O’Reilly Media.