Grid search in the tidyverse


@drsimonj here to share a tidyverse method of grid search for optimizing a model’s hyperparameters.

Grid Search

For anyone unfamiliar with the term, grid search involves running a model many times, once for each combination of candidate hyperparameter values. The point is to identify which hyperparameters are likely to work best. In more technical terms, Wikipedia defines grid search as:

an exhaustive searching through a manually specified subset of the hyperparameter space of a learning algorithm
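In base R terms, the idea is just a loop over every combination of candidate values. Here's a minimal sketch, assuming a hypothetical score_model() function that fits and scores one model:

# Try every combination of two hyperparameters (score_model is hypothetical)
grid <- expand.grid(minsplit = c(2, 5, 10), maxdepth = c(1, 3, 8))
grid$score <- NA
for (i in seq_len(nrow(grid))) {
  grid$score[i] <- score_model(minsplit = grid$minsplit[i],
                               maxdepth = grid$maxdepth[i])
}

The rest of this post builds the same loop in tidyverse style, which keeps the settings, fitted models, and scores together in one data frame.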

What this post isn’t about

To keep the focus on grid search, this post does NOT cover…

  • k-fold cross-validation. Although a practically essential addition to grid search, I’ll save the combination of these techniques for a future post. If you can’t wait, check out my last post for some inspiration.
  • Complex learning models. We’ll stick to a simple decision tree.
  • Getting a great model fit. I’ve deliberately chosen input variables and hyperparameters that highlight the approach.

Decision tree example

Say we want to run a simple decision tree to predict cars' transmission type (am) from their miles per gallon (mpg) and horsepower (hp), using the mtcars data set. Let's prep the data:

library(tidyverse)

d <- mtcars %>%
  # Convert `am` to factor and select relevant variables
  mutate(am = factor(am, labels = c("Automatic", "Manual"))) %>%
  select(am, mpg, hp)

ggplot(d, aes(mpg, hp, color = am)) +
  geom_point()

[Plot: hp against mpg, with points colored by transmission type (am)]

This looks like a good problem for a decision tree: the boundary resembles a step-wise function of hp until mpg > 25, at which point all cars are Manual. Let's grow a full decision tree on this data:

library(rpart)
library(rpart.plot)

# Set minsplit = 2 to fit to every data point
full_fit <- rpart(am ~ mpg + hp, data = d, minsplit = 2)
prp(full_fit)

[Plot: the fully grown decision tree]

We don't want a model like this: it almost certainly overfits the data. So the question becomes, which hyperparameter settings would help our model generalize best?

Training-Test Split

To help validate our hyperparameter combinations, we’ll split our data into training and test sets (in an 80/20 split):

set.seed(245)
n <- nrow(d)
train_rows <- sample(seq(n), size = .8 * n)
train <- d[ train_rows, ]
test <- d[-train_rows, ]
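As a quick sanity check: mtcars has 32 rows, so this split should leave us with 25 training and 7 test cars (sample() truncates the non-integer size of .8 * n):

nrow(train)
#> [1] 25
nrow(test)
#> [1] 7

Keep the small test set in mind; with only 7 cars, test accuracy can only take values that are multiples of 1/7.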

Create the Grid

Step one for grid search is to define our hyperparameter combinations. Say we want to test a few values for minsplit and maxdepth. I like to set up the grid of their combinations in a tidy data frame with a list and cross_d as follows:

# Define a named list of parameter values
gs <- list(minsplit = c(2, 5, 10),
           maxdepth = c(1, 3, 8)) %>%
  cross_d()  # Convert to data frame grid

gs
#> # A tibble: 9 × 2
#>   minsplit maxdepth
#>      <dbl>    <dbl>
#> 1        2        1
#> 2        5        1
#> 3       10        1
#> 4        2        3
#> 5        5        3
#> 6       10        3
#> 7        2        8
#> 8        5        8
#> 9       10        8

Note that the list names are the names of the hyperparameters that we want to adjust in our model function.
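One version note: in later purrr releases, cross_d() was renamed cross_df(), and the cross functions have since been deprecated in favour of tidyr::expand_grid(). If cross_d() isn't available in your installed version, this sketch should produce the same nine combinations (though possibly in a different row order):

gs <- tidyr::expand_grid(minsplit = c(2, 5, 10),
                         maxdepth = c(1, 3, 8))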

Create a model function

We'll be iterating down the gs data frame to use the hyperparameter values in an rpart model. The easiest way to handle this is to define a function that accepts a row of our data frame values and passes them correctly to our model. Here's what I'll use:

mod <- function(...) {
  rpart(am ~ hp + mpg, data = train, control = rpart.control(...))
}

Notice that the ... argument is passed to rpart.control(), which is where rpart accepts these hyperparameters.
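To see what this function does, here's an illustrative single call (the name one_fit is just for demonstration):

# Fit one decision tree with a specific hyperparameter combination
one_fit <- mod(minsplit = 5, maxdepth = 3)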

Fit the models

Now, to fit our models, we use pmap() to iterate down the grid. The following iterates through each row of our gs data frame, plugging that row's hyperparameter values into our model function. This works because gs contains only hyperparameter columns at this point: pmap() passes each column as a named argument to mod().

gs <- gs %>% mutate(fit = pmap(gs, mod))

gs
#> # A tibble: 9 × 3
#>   minsplit maxdepth         fit
#>      <dbl>    <dbl>      <list>
#> 1        2        1 <S3: rpart>
#> 2        5        1 <S3: rpart>
#> 3       10        1 <S3: rpart>
#> 4        2        3 <S3: rpart>
#> 5        5        3 <S3: rpart>
#> 6       10        3 <S3: rpart>
#> 7        2        8 <S3: rpart>
#> 8        5        8 <S3: rpart>
#> 9       10        8 <S3: rpart>

Obtain accuracy

Next, let’s assess the performance of each fit on our test data. To handle this efficiently, let’s write another small function:

compute_accuracy <- function(fit, test_features, test_labels) {
  predicted <- predict(fit, test_features, type = "class")
  mean(predicted == test_labels)
}
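As an illustrative check, we could score a single model first, say the first fit in our grid:

# Test accuracy of one fitted model (illustrative)
compute_accuracy(gs$fit[[1]], test %>% select(-am), test$am)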

Now apply this to each fit:

test_features <- test %>% select(-am)
test_labels   <- test$am

gs <- gs %>%
  mutate(test_accuracy = map_dbl(fit, compute_accuracy,
                                 test_features, test_labels))

gs
#> # A tibble: 9 × 4
#>   minsplit maxdepth         fit test_accuracy
#>      <dbl>    <dbl>      <list>         <dbl>
#> 1        2        1 <S3: rpart>     0.7142857
#> 2        5        1 <S3: rpart>     0.7142857
#> 3       10        1 <S3: rpart>     0.7142857
#> 4        2        3 <S3: rpart>     0.8571429
#> 5        5        3 <S3: rpart>     0.8571429
#> 6       10        3 <S3: rpart>     0.7142857
#> 7        2        8 <S3: rpart>     0.8571429
#> 8        5        8 <S3: rpart>     0.8571429
#> 9       10        8 <S3: rpart>     0.7142857

Arrange results

To find the best model, we arrange the data by desc(test_accuracy), so the best-fitting model ends up in the first row. You might notice above that many models achieve the same accuracy. That's largely an artifact of this small example: with only 7 test cars, accuracy can take few distinct values (here 5/7 and 6/7). To handle the ties, I'll break them with desc(minsplit) and maxdepth, favoring the model that is most accurate and also simplest.

gs <- gs %>% arrange(desc(test_accuracy), desc(minsplit), maxdepth)

gs
#> # A tibble: 9 × 4
#>   minsplit maxdepth         fit test_accuracy
#>      <dbl>    <dbl>      <list>         <dbl>
#> 1        5        3 <S3: rpart>     0.8571429
#> 2        5        8 <S3: rpart>     0.8571429
#> 3        2        3 <S3: rpart>     0.8571429
#> 4        2        8 <S3: rpart>     0.8571429
#> 5       10        1 <S3: rpart>     0.7142857
#> 6       10        3 <S3: rpart>     0.7142857
#> 7       10        8 <S3: rpart>     0.7142857
#> 8        5        1 <S3: rpart>     0.7142857
#> 9        2        1 <S3: rpart>     0.7142857

It looks like a minsplit of 5 and maxdepth of 3 is the way to go!

To compare with our fully grown tree, here's a plot of this top-performing model. Remember, it's in the first row, so we can reference it with [[1]].

prp(gs$fit[[1]])

[Plot: the top-performing decision tree]

Food for thought

Having the results in a tidy data frame lets us do a lot more than just pick the optimal hyperparameters. It lets us quickly wrangle and visualize the results of the various combinations. Here are some ideas:

  • Search among the top performers for the simplest model.
  • Plot performance across the hyperparameter combinations.
  • Save time by restricting the grid before model fitting. For example, in a large data set, it's practically pointless to try a small minsplit with a small maxdepth. In this case, before fitting the models, we can filter the gs data frame to exclude certain combinations (see the sketches after this list).
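For instance, here are minimal sketches of the last two ideas, reusing the gs data frame from above (the filter would run before the pmap() fitting step):

# Plot test accuracy across the hyperparameter combinations
ggplot(gs, aes(factor(minsplit), test_accuracy, color = factor(maxdepth))) +
  geom_point()

# Exclude combinations we don't want to fit (an illustrative filter)
gs <- gs %>% filter(!(minsplit == 2 & maxdepth == 8))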

Sign off

Thanks for reading and I hope this was useful for you.

For updates of recent blog posts, follow @drsimonj on Twitter, or email me at drsimonjackson@gmail.com to get in touch.

If you’d like the code that produced this blog, check out the blogR GitHub repository.

Reposted from: https://drsimonj.svbtle.com/grid-search-in-the-tidyverse
