# Regularization

**Chapter Status:** Currently this chapter is very sparse. It essentially only expands upon an example discussed in ISL, thus only illustrates usage of the methods. Mathematical and conceptual details of the methods will be added later. Also, more comments on using `glmnet` with `caret` will be discussed.

```{r reg_opts, include = FALSE}
knitr::opts_chunk$set(cache = TRUE, autodep = TRUE, fig.align = "center")
```

We will use the `Hitters` dataset from the `ISLR` package to explore two shrinkage methods: **ridge regression** and **lasso**. These are otherwise known as **penalized regression** methods.

```{r}
data(Hitters, package = "ISLR")
```

This dataset has some missing data in the response `Salary`. We use the `na.omit()` function to clean the dataset.

```{r}
sum(is.na(Hitters))
sum(is.na(Hitters$Salary))
Hitters = na.omit(Hitters)
sum(is.na(Hitters))
```

The predictor variables are offensive and defensive statistics for a number of baseball players.

```{r}
names(Hitters)
```

We use the `glmnet()` and `cv.glmnet()` functions from the `glmnet` package to fit penalized regressions.

```{r, message = FALSE, warning = FALSE}
library(glmnet)
```

Unfortunately, the `glmnet` function does not allow the use of model formulas, so we set up the data for ease of use with `glmnet`. Eventually we will use `train()` from `caret`, which does allow for fitting penalized regression with the formula syntax, but to explore some of the details, we first work with the functions from `glmnet` directly.

```{r}
X = model.matrix(Salary ~ ., Hitters)[, -1]
y = Hitters$Salary
```

First, we fit an ordinary linear regression, and note the sum of the absolute values of the coefficients as well as the sum of the squared coefficients. (These are the two penalty terms we will use.)

```{r}
fit = lm(Salary ~ ., Hitters)
coef(fit)
sum(abs(coef(fit)[-1]))
sum(coef(fit)[-1] ^ 2)
```

## Ridge Regression

We first illustrate **ridge regression**, which can be fit using `glmnet()` with `alpha = 0` and seeks to minimize

$$
\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right) ^ 2 + \lambda \sum_{j=1}^{p} \beta_j^2 .
$$

Notice that the intercept is **not** penalized. Also, note that ridge regression is **not** scale invariant like the usual unpenalized regression. Thankfully, `glmnet()` takes care of this internally. It automatically standardizes the predictors for fitting, then reports the fitted coefficients on the original scale.

The two plots illustrate how much the coefficients are penalized for different values of $\lambda$. Notice that none of the coefficients are forced to be exactly zero.

```{r ridge, fig.height = 4, fig.width = 8}
par(mfrow = c(1, 2))
fit_ridge = glmnet(X, y, alpha = 0)
plot(fit_ridge)
plot(fit_ridge, xvar = "lambda", label = TRUE)
```

We use cross-validation to select a good $\lambda$ value. The `cv.glmnet()` function uses 10 folds by default. The plot illustrates the MSE for the $\lambda$ values considered. Two vertical lines are drawn. The first is at the $\lambda$ that gives the smallest MSE. The second is at the $\lambda$ that gives an MSE within one standard error of the smallest.

```{r}
fit_ridge_cv = cv.glmnet(X, y, alpha = 0)
plot(fit_ridge_cv)
```

The `cv.glmnet()` function returns several details of the fit for both $\lambda$ values in the plot. Notice the penalty terms are smaller than in the full linear regression. (As we would expect.)
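The two $\lambda$ values themselves are stored in the fitted object. As a quick sketch (using the `lambda.min` and `lambda.1se` components of the `cv.glmnet()` result, which we also rely on below), we can print them directly before examining the coefficients.

```{r}
# lambda minimizing CV-MSE, and the largest lambda within one SE of that minimum
fit_ridge_cv$lambda.min
fit_ridge_cv$lambda.1se
```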
```{r}
# fitted coefficients, using 1-SE rule lambda, default behavior
coef(fit_ridge_cv)
```

```{r}
# fitted coefficients, using minimum lambda
coef(fit_ridge_cv, s = "lambda.min")
```

```{r}
# penalty term using minimum lambda
sum(coef(fit_ridge_cv, s = "lambda.min")[-1] ^ 2)
```

```{r}
# fitted coefficients, using 1-SE rule lambda
coef(fit_ridge_cv, s = "lambda.1se")
```

```{r}
# penalty term using 1-SE rule lambda
sum(coef(fit_ridge_cv, s = "lambda.1se")[-1] ^ 2)
```

```{r, eval = FALSE}
# predict using minimum lambda
predict(fit_ridge_cv, X, s = "lambda.min")
```

```{r, eval = FALSE}
# predict using 1-SE rule lambda, default behavior
predict(fit_ridge_cv, X)
```

```{r}
# calculate "train error"
mean((y - predict(fit_ridge_cv, X)) ^ 2)
```

```{r}
# CV-RMSEs
sqrt(fit_ridge_cv$cvm)
```

```{r}
# CV-RMSE using minimum lambda
sqrt(fit_ridge_cv$cvm[fit_ridge_cv$lambda == fit_ridge_cv$lambda.min])
```

```{r}
# CV-RMSE using 1-SE rule lambda
sqrt(fit_ridge_cv$cvm[fit_ridge_cv$lambda == fit_ridge_cv$lambda.1se])
```

## Lasso

We now illustrate the **lasso**, which can be fit using `glmnet()` with `alpha = 1` and seeks to minimize

$$
\sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right) ^ 2 + \lambda \sum_{j=1}^{p} |\beta_j| .
$$

Like ridge, the lasso is not scale invariant.

The two plots illustrate how much the coefficients are penalized for different values of $\lambda$. Notice that some of the coefficients are forced to be exactly zero.

```{r lasso, fig.height = 4, fig.width = 8}
par(mfrow = c(1, 2))
fit_lasso = glmnet(X, y, alpha = 1)
plot(fit_lasso)
plot(fit_lasso, xvar = "lambda", label = TRUE)
```

Again, to actually pick a $\lambda$, we will use cross-validation. The plot is similar to the ridge plot. Notice that along the top is the number of features in the model, which now changes with $\lambda$, unlike in the ridge plot.

```{r}
fit_lasso_cv = cv.glmnet(X, y, alpha = 1)
plot(fit_lasso_cv)
```

`cv.glmnet()` returns several details of the fit for both $\lambda$ values in the plot. Notice the penalty terms are again smaller than in the full linear regression. (As we would expect.) Some coefficients are exactly 0.

```{r}
# fitted coefficients, using 1-SE rule lambda, default behavior
coef(fit_lasso_cv)
```

```{r}
# fitted coefficients, using minimum lambda
coef(fit_lasso_cv, s = "lambda.min")
```

```{r}
# penalty term using minimum lambda
sum(abs(coef(fit_lasso_cv, s = "lambda.min")[-1]))
```

```{r}
# fitted coefficients, using 1-SE rule lambda
coef(fit_lasso_cv, s = "lambda.1se")
```

```{r}
# penalty term using 1-SE rule lambda
sum(abs(coef(fit_lasso_cv, s = "lambda.1se")[-1]))
```

```{r, eval = FALSE}
# predict using minimum lambda
predict(fit_lasso_cv, X, s = "lambda.min")
```

```{r, eval = FALSE}
# predict using 1-SE rule lambda, default behavior
predict(fit_lasso_cv, X)
```

```{r}
# calculate "train error"
mean((y - predict(fit_lasso_cv, X)) ^ 2)
```

```{r}
# CV-RMSEs
sqrt(fit_lasso_cv$cvm)
```

```{r}
# CV-RMSE using minimum lambda
sqrt(fit_lasso_cv$cvm[fit_lasso_cv$lambda == fit_lasso_cv$lambda.min])
```

```{r}
# CV-RMSE using 1-SE rule lambda
sqrt(fit_lasso_cv$cvm[fit_lasso_cv$lambda == fit_lasso_cv$lambda.1se])
```
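Because the lasso performs variable selection, it is also natural to ask how many predictors remain in the model at each of these two $\lambda$ values. A minimal sketch, reusing the fitted `cv.glmnet()` object from above:

```{r}
# count the nonzero (non-intercept) coefficients at each lambda of interest
sum(coef(fit_lasso_cv, s = "lambda.min")[-1] != 0)
sum(coef(fit_lasso_cv, s = "lambda.1se")[-1] != 0)
```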
## `broom`

Sometimes, the output from `glmnet()` can be overwhelming. The `broom` package can help with that.

```{r, message = FALSE, warning = FALSE}
library(broom)
# the output from the commented line would be immense
# fit_lasso_cv
tidy(fit_lasso_cv)
# the two lambda values of interest
glance(fit_lasso_cv)
```

## Simulated Data, $p > n$

Aside from simply shrinking coefficients (ridge) and setting some coefficients to 0 (lasso), penalized regression also has the advantage of being able to handle the $p > n$ case. We first simulate a classification example where $p > n$.

```{r}
set.seed(1234)
n = 1000
p = 5500
X = replicate(p, rnorm(n = n))
beta = c(1, 1, 1, rep(0, 5497))
z = X %*% beta
prob = exp(z) / (1 + exp(z))
y = as.factor(rbinom(length(z), size = 1, prob = prob))
```

Ordinary logistic regression fit with `glm()` will not even converge in this setting.

```{r}
# glm(y ~ X, family = "binomial") # will not converge
```

We then use a lasso penalty to fit penalized logistic regression. This minimizes

$$
\sum_{i=1}^{n} L\left(y_i, \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij}\right) + \lambda \sum_{j=1}^{p} |\beta_j|
$$

where $L$ is the appropriate *negative* **log**-likelihood.

```{r}
library(glmnet)
fit_cv = cv.glmnet(X, y, family = "binomial", alpha = 1)
plot(fit_cv)
```

```{r}
head(coef(fit_cv), n = 10)
```

```{r}
fit_cv$nzero
```

Notice that only the first three predictors generated are truly significant, and that is exactly what the suggested model finds.

```{r}
fit_1se = glmnet(X, y, family = "binomial", lambda = fit_cv$lambda.1se)
which(as.vector(as.matrix(fit_1se$beta)) != 0)
```

We can also see in the following plots that the three relevant features enter the model well ahead of the irrelevant features.

```{r, fig.height = 4, fig.width = 8}
par(mfrow = c(1, 2))
plot(glmnet(X, y, family = "binomial"))
plot(glmnet(X, y, family = "binomial"), xvar = "lambda")
```

We can extract the two relevant $\lambda$ values.

```{r}
fit_cv$lambda.min
fit_cv$lambda.1se
```

Since `cv.glmnet()` does not calculate prediction accuracy for classification by default, we take the $\lambda$ values and create a grid for `caret` to search in order to obtain prediction accuracy with `train()`. We set $\alpha = 1$ in this grid, as `glmnet` can actually tune over the $\alpha$ parameter. (More on that later.)

Note that we created `y` as a factor, so that `train()` recognizes that we want a binomial response. The `train()` function in `caret` uses the type of the response variable to determine whether to use `family = "binomial"` or `family = "gaussian"`.

```{r, message = FALSE, warning = FALSE}
library(caret)
cv_5 = trainControl(method = "cv", number = 5)
lasso_grid = expand.grid(alpha = 1,
                         lambda = c(fit_cv$lambda.min, fit_cv$lambda.1se))
lasso_grid
```

```{r}
sim_data = data.frame(y, X)
fit_lasso = train(
  y ~ ., data = sim_data,
  method = "glmnet",
  trControl = cv_5,
  tuneGrid = lasso_grid
)
fit_lasso$results
```

The interaction between the `glmnet` and `caret` packages is sometimes frustrating, but for obtaining results for particular values of $\lambda$, we see it can be used easily. More on this in the next chapter.

## External Links

- [`glmnet` Web Vignette](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html) - Details from the package developers.

## `rmarkdown`

The `rmarkdown` file for this chapter can be found [**here**](24-regularization.Rmd). The file was created using `R` version `r paste0(version$major, "." ,version$minor)`. The following packages (and their dependencies) were loaded when knitting this file:

```{r, echo = FALSE}
names(sessionInfo()$otherPkgs)
```