Low-level R interface to train a LightGBM model. Unlike lightgbm(), this function is focused on performance (e.g. speed, memory efficiency). It is also less likely to have breaking API changes in new releases than lightgbm().
a list of parameters. See the "Parameters" section of the documentation for a list of parameters and valid values.
an lgb.Dataset object, used for training. Some functions, such as lgb.cv, may allow you to pass other types of data like matrix and then separately supply label as a keyword argument.
number of training rounds
a list of lgb.Dataset objects, used for validation
objective function, can be a character string or a custom objective function. Examples include regression, regression_l1, huber, binary, lambdarank, multiclass
evaluation function(s). This can be a character vector, a function, or a list with a mixture of strings and functions.

a. character vector: If you provide a character vector to this argument, it should contain strings with valid evaluation metrics. See the "metric" section of the documentation for a list of valid metrics.

b. function: You can provide a custom evaluation function. This should accept the keyword arguments preds and dtrain and should return a named list with three elements:

name: A string with the name of the metric, used for printing and storing results.

value: A single number indicating the value of the metric for the given predictions and true values.

higher_better: A boolean indicating whether higher values indicate a better fit. For example, this would be FALSE for metrics like MAE or RMSE.

c. list: If a list is given, it should only contain character vectors and functions. These should follow the requirements from the descriptions above.
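A minimal sketch of such a custom evaluation function (mean absolute error) follows; it assumes the true labels can be retrieved from the lgb.Dataset with get_field(), which may vary across lightgbm versions:

# Sketch of a custom evaluation metric (MAE); the get_field() call is an assumption.
mae_eval <- function(preds, dtrain) {
  labels <- get_field(dtrain, "label")
  list(
    name = "mae"
    , value = mean(abs(preds - labels))
    , higher_better = FALSE
  )
}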
verbosity for output; if <= 0 and valids has been provided, printing of evaluation results during training will also be disabled
Boolean; if TRUE, evaluation results at each iteration will be recorded in booster$record_evals
evaluation output frequency, only effective when verbose > 0 and valids has been provided
path of model file or lgb.Booster object, will continue training from this model
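As an illustration of init_model, the sketch below (assuming params and dtrain are defined as in the "Examples" section) continues boosting from a previously trained booster:

# Sketch: continue training from an existing lgb.Booster via init_model.
model_a <- lgb.train(params = params, data = dtrain, nrounds = 5L)
model_b <- lgb.train(
  params = params
  , data = dtrain
  , nrounds = 5L
  , init_model = model_a  # could also be a path to a model file saved with lgb.save()
)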
int. Activates early stopping. When this parameter is non-null, training will stop if the evaluation of any metric on any validation set fails to improve for early_stopping_rounds consecutive boosting rounds. If training stops early, the returned model will have attribute best_iter set to the iteration number of the best iteration.
List of callback functions that are applied at each iteration.
Boolean; setting it to TRUE (not the default value) will transform the booster model into a predictor model, which frees up memory and releases the original datasets
whether to make the resulting objects serializable through functions such as save or saveRDS (see section "Model serialization").
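With serializable left at its default (TRUE), a trained booster can be written and restored with base R serialization; a brief sketch, assuming model is a trained lgb.Booster:

# Sketch: round-trip a trained booster with saveRDS()/readRDS().
model_file <- tempfile(fileext = ".rds")
saveRDS(model, model_file)
model_restored <- readRDS(model_file)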
a trained booster model (lgb.Booster).
"early stopping" refers to stopping the training process if the model's performance on a given validation set does not improve for several consecutive iterations.
If multiple arguments are given to eval, their order will be preserved. If you enable early stopping by setting early_stopping_rounds in params, by default all metrics will be considered for early stopping. If you want to only consider the first metric for early stopping, pass first_metric_only = TRUE in params. Note that if you also specify metric in params, that metric will be considered the "first" one. If you omit metric, a default metric will be used based on your choice for the parameter obj (keyword argument) or objective (passed into params).

NOTE: if using boosting_type = "dart", any early stopping configuration will be ignored and early stopping will not be performed.
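For example, a params list along the following lines (the metric names here are only placeholders) would restrict early stopping to the first listed metric:

# Sketch: restrict early stopping to the first metric ("l2" here).
params <- list(
  objective = "regression"
  , metric = c("l2", "l1")
  , first_metric_only = TRUE
  , early_stopping_rounds = 3L
)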
# \donttest{
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
params <- list(
objective = "regression"
, metric = "l2"
, min_data = 1L
, learning_rate = 1.0
, num_threads = 2L
)
valids <- list(test = dtest)
model <- lgb.train(
params = params
, data = dtrain
, nrounds = 5L
, valids = valids
, early_stopping_rounds = 3L
)
#> [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000436 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
#> [LightGBM] [Info] Start training from score 0.482113
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1]: test's l2:6.44165e-17
#> Will train until there is no improvement in 3 rounds.
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [2]: test's l2:1.97215e-31
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [3]: test's l2:0
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [4]: test's l2:0
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [5]: test's l2:0
#> Did not meet early stopping, best iteration is: [3]: test's l2:0
# }
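Continuing from this example, the result of early stopping can be inspected on the returned booster and predictions made on new feature matrices (a brief sketch; output not shown):

# Inspect early stopping results and predict on the validation features.
model$best_iter   # iteration number of the best iteration
model$best_score  # evaluation score at that iteration
preds <- predict(model, test$data)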