This is a simple adaptation of the getOptcv.glmnet function from glmnet. The function takes a grid of lambda penalties along with a vector of associated mean squared errors (mse), a vector of standard errors (se), and the mean squared error from each cross-validation run (fullMSE). From these, three candidate values of the cross-validated penalty parameter are computed: 1) the lambda with the minimum average mean squared error across all cross-validation runs (lambda$lambda.min); 2) the largest lambda whose average cross-validated mean squared error is within one standard error of the minimum average cross-validated mean squared error (lambda$lambda.1se); and 3) the lambda that is the median of the lambda grid (lambda$lambda.median).

getOptcv.scul(lambdapath, mse, se, fullMSE)

Arguments

lambdapath

A grid of lambdas that is used in each cross-validation run as potential options for the optimal penalty parameter.

mse

A vector of the average mean squared error (average across cross-validation runs) for each given lambda in the lambdapath grid.

se

A vector of the standard error associated with each average mean squared error (average across cross-validation runs) for each given lambda in the lambdapath grid.

fullMSE

A matrix of the mean squared error for each cross-validation run and each given lambda in the lambdapath grid.
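As a rough illustration of the three selection rules described above, the sketch below shows how the candidate penalties could be computed from lambdapath, mse, and se. This is a minimal sketch, not the package's actual implementation; the exact ordering and tie-breaking conventions in getOptcv.scul (which follows getOptcv.glmnet) may differ, and the function name getOptcv.sketch is used here only for illustration.

# Minimal sketch, assuming lambdapath, mse, and se are equal-length numeric vectors.
getOptcv.sketch <- function(lambdapath, mse, se) {
  # 1) lambda.min: the lambda with the smallest average cross-validated MSE
  idmin <- which.min(mse)
  lambda.min <- lambdapath[idmin]

  # 2) lambda.1se: the largest lambda whose average MSE is within one
  #    standard error of the minimum average MSE
  threshold <- mse[idmin] + se[idmin]
  lambda.1se <- max(lambdapath[mse <= threshold])

  # 3) lambda.median: the median of the lambda grid
  lambda.median <- median(lambdapath)

  list(lambda.min = lambda.min,
       lambda.1se = lambda.1se,
       lambda.median = lambda.median)
}

The one-standard-error rule behind lambda.1se favors a more heavily penalized fit whose cross-validated error is statistically indistinguishable from the minimum.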