replace nround with nrounds to match actual parameter (dmlc#3592)
jakob-r authored and hcho3 committed Aug 15, 2018
1 parent 73bd590 · commit 725f4c3
Showing 17 changed files with 51 additions and 51 deletions.
R-package/R/callbacks.R (2 changes: 1 addition & 1 deletion)
@@ -168,7 +168,7 @@ cb.evaluation.log <- function() {
#' at the beginning of each iteration.
#'
#' Note that when training is resumed from some previous model, and a function is used to
-#' reset a parameter value, the \code{nround} argument in this function would be the
+#' reset a parameter value, the \code{nrounds} argument in this function would be the
#' number of boosting rounds in the current training.
#'
#' Callback function expects the following values to be set in its calling frame:
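For context on the renamed argument: inside `cb.reset.parameters`, a scheduling function receives the current iteration and the current training's `nrounds`. A minimal sketch (the data set and the decay schedule below are illustrative, not part of this commit):

```r
library(xgboost)
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

# eta is recomputed at each iteration from (iteration, nrounds)
my_etas <- function(iteration, nrounds) 0.5 / (1 + iteration / nrounds)

bst <- xgb.train(params = list(max_depth = 2, objective = "binary:logistic"),
                 data = dtrain, nrounds = 4, nthread = 2,
                 callbacks = list(cb.reset.parameters(list(eta = my_etas))))
```

When training resumes from a saved model, `nrounds` in that function counts only the rounds of the current call, which is the behaviour the corrected documentation describes.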
R-package/R/xgb.create.features.R (6 changes: 3 additions & 3 deletions)
@@ -52,9 +52,9 @@
#' dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
#'
#' param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
-#' nround = 4
+#' nrounds = 4
#'
-#' bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
+#' bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy without new features
#' accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) /
@@ -68,7 +68,7 @@
#' new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
#' new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
#' watchlist <- list(train = new.dtrain)
-#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
+#' bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)
#'
#' # Model accuracy with new features
#' accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) /
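The mechanism behind this example can be seen directly with `predleaf`; a hedged sketch, reusing `bst` and the agaricus data from the example above:

```r
# Each sample's leaf index in each of the nrounds trees becomes a raw feature;
# xgb.create.features then one-hot encodes these indices.
leaf_idx <- predict(bst, agaricus.train$data, predleaf = TRUE)
dim(leaf_idx)  # n_samples x nrounds
```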
R-package/R/xgb.train.R (2 changes: 1 addition & 1 deletion)
@@ -22,7 +22,7 @@
#' \item \code{gamma} minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
#' \item \code{max_depth} maximum depth of a tree. Default: 6
#' \item \code{min_child_weight} minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. Default: 1
-#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nround}. Default: 1
+#' \item \code{subsample} subsample ratio of the training instance. Setting it to 0.5 means that xgboost randomly collected half of the data instances to grow trees and this will prevent overfitting. It makes computation shorter (because less data to analyse). It is advised to use this parameter with \code{eta} and increase \code{nrounds}. Default: 1
#' \item \code{colsample_bytree} subsample ratio of columns when constructing each tree. Default: 1
#' \item \code{num_parallel_tree} Experimental parameter. Number of trees to grow per round. Useful to test Random Forest through XGBoost (set \code{colsample_bytree < 1}, \code{subsample < 1} and \code{nrounds = 1}) accordingly. Default: 1
#' \item \code{monotone_constraints} A numerical vector consisting of \code{1}, \code{0} and \code{-1}, with length equal to the number of features in the training data. \code{1} is increasing, \code{-1} is decreasing and \code{0} is no constraint.
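To illustrate the advice in the corrected line, pairing a lower `subsample` with a smaller `eta` and a larger `nrounds`, a sketch with illustrative values only (it assumes a `dtrain` xgb.DMatrix as in the demos below):

```r
params <- list(max_depth = 6, min_child_weight = 1, gamma = 0,
               subsample = 0.5,         # grow each tree on half the rows
               colsample_bytree = 0.8,  # and on 80% of the columns
               eta = 0.1,               # smaller step size per round ...
               objective = "binary:logistic")
bst <- xgb.train(params = params, data = dtrain, nrounds = 200)  # ... more rounds
```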
R-package/demo/cross_validation.R (10 changes: 5 additions & 5 deletions)
@@ -5,20 +5,20 @@ data(agaricus.test, package='xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

-nround <- 2
+nrounds <- 2
param <- list(max_depth=2, eta=1, silent=1, nthread=2, objective='binary:logistic')

cat('running cross validation\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
-xgb.cv(param, dtrain, nround, nfold=5, metrics={'error'})
+xgb.cv(param, dtrain, nrounds, nfold=5, metrics={'error'})

cat('running cross validation, disable standard deviation display\n')
# do cross validation, this will print result out as
# [iteration] metric_name:mean_value+std_value
# std_value is standard deviation of the metric
-xgb.cv(param, dtrain, nround, nfold=5,
+xgb.cv(param, dtrain, nrounds, nfold=5,
metrics='error', showsd = FALSE)

###
@@ -43,9 +43,9 @@ evalerror <- function(preds, dtrain) {
param <- list(max_depth=2, eta=1, silent=1,
objective = logregobj, eval_metric = evalerror)
# train with customized objective
-xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5)
+xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5)

# do cross validation with prediction values for each fold
-res <- xgb.cv(params = param, data = dtrain, nrounds = nround, nfold = 5, prediction = TRUE)
+res <- xgb.cv(params = param, data = dtrain, nrounds = nrounds, nfold = 5, prediction = TRUE)
res$evaluation_log
length(res$pred)
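The custom objective and metric used above are defined in a collapsed part of this demo; reproduced here with short comments for readability:

```r
# log-loss gradient and hessian on raw margin scores
logregobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  preds <- 1 / (1 + exp(-preds))  # sigmoid of the margin
  grad <- preds - labels
  hess <- preds * (1 - preds)
  list(grad = grad, hess = hess)
}

# classification error, treating a positive margin as class 1
evalerror <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
  list(metric = "error", value = err)
}
```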
R-package/demo/predict_first_ntree.R (4 changes: 2 additions & 2 deletions)
@@ -7,10 +7,10 @@ dtest <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)

param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
watchlist <- list(eval = dtest, train = dtrain)
-nround = 2
+nrounds = 2

# training the model for two rounds
-bst = xgb.train(param, dtrain, nround, nthread = 2, watchlist)
+bst = xgb.train(param, dtrain, nrounds, nthread = 2, watchlist)
cat('start testing prediction from first n trees\n')
labels <- getinfo(dtest,'label')

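Below this hunk, the demo compares predictions from the first trees against the full model. A hedged sketch of that pattern (`ntreelimit` is the prediction cap in this release; `bst`, `dtest`, and `labels` come from the code above):

```r
pred_1tree <- predict(bst, dtest, ntreelimit = 1)  # first tree only
pred_full  <- predict(bst, dtest)                  # all nrounds trees
mean(as.numeric(pred_1tree > 0.5) != labels)       # error of the single tree
```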
R-package/demo/predict_leaf_indices.R (6 changes: 3 additions & 3 deletions)
@@ -11,10 +11,10 @@ dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)

param <- list(max_depth=2, eta=1, silent=1, objective='binary:logistic')
-nround = 4
+nrounds = 4

# training the model for four rounds
-bst = xgb.train(params = param, data = dtrain, nrounds = nround, nthread = 2)
+bst = xgb.train(params = param, data = dtrain, nrounds = nrounds, nthread = 2)

# Model accuracy without new features
accuracy.before <- sum((predict(bst, agaricus.test$data) >= 0.5) == agaricus.test$label) / length(agaricus.test$label)
@@ -43,7 +43,7 @@ new.features.test <- create.new.tree.features(bst, agaricus.test$data)
new.dtrain <- xgb.DMatrix(data = new.features.train, label = agaricus.train$label)
new.dtest <- xgb.DMatrix(data = new.features.test, label = agaricus.test$label)
watchlist <- list(train = new.dtrain)
-bst <- xgb.train(params = param, data = new.dtrain, nrounds = nround, nthread = 2)
+bst <- xgb.train(params = param, data = new.dtrain, nrounds = nrounds, nthread = 2)

# Model accuracy with new features
accuracy.after <- sum((predict(bst, new.dtest) >= 0.5) == agaricus.test$label) / length(agaricus.test$label)
R-package/man/cb.reset.parameters.Rd (2 changes: 1 addition & 1 deletion)

(Generated file; diff not rendered.)

R-package/man/xgb.create.features.Rd (6 changes: 3 additions & 3 deletions)

(Generated file; diff not rendered.)

R-package/man/xgb.train.Rd (2 changes: 1 addition & 1 deletion)

(Generated file; diff not rendered.)

R-package/tests/testthat/test_gc_safety.R (2 changes: 1 addition & 1 deletion)
@@ -9,7 +9,7 @@ test_that("train and prediction when gctorture is on", {
test <- agaricus.test
gctorture(TRUE)
bst <- xgboost(data = train$data, label = train$label, max.depth = 2,
-               eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
+               eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")
pred <- predict(bst, test$data)
gctorture(FALSE)
})
demo/kaggle-higgs/higgs-train.R (4 changes: 2 additions & 2 deletions)
@@ -24,9 +24,9 @@ param <- list("objective" = "binary:logitraw",
"silent" = 1,
"nthread" = 16)
watchlist <- list("train" = xgmat)
-nround = 120
+nrounds = 120
print ("loading data end, start to boost trees")
-bst = xgb.train(param, xgmat, nround, watchlist );
+bst = xgb.train(param, xgmat, nrounds, watchlist );
# save out model
xgb.save(bst, "higgs.model")
print ('finish training')
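One nuance of this demo's setup: with `objective = "binary:logitraw"`, `predict` returns raw margin scores rather than probabilities. A sketch of converting them, assuming the `bst` and `xgmat` from above:

```r
pred_raw  <- predict(bst, xgmat)
pred_prob <- 1 / (1 + exp(-pred_raw))  # logistic transform to probabilities
```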
demo/kaggle-higgs/speedtest.R (4 changes: 2 additions & 2 deletions)
@@ -39,9 +39,9 @@ for (i in 1:length(threads)){
"silent" = 1,
"nthread" = thread)
watchlist <- list("train" = xgmat)
-nround = 120
+nrounds = 120
print ("loading data end, start to boost trees")
-bst = xgb.train(param, xgmat, nround, watchlist );
+bst = xgb.train(param, xgmat, nrounds, watchlist );
# save out model
xgb.save(bst, "higgs.model")
print ('finish training')
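The loop this hunk sits in trains the same model at several thread counts; the timing pattern is essentially the following sketch (the thread counts and the timing call are illustrative, and the real script also saves the model on each pass):

```r
for (thread in c(1, 2, 4, 8, 16)) {
  param$nthread <- thread
  elapsed <- system.time(xgb.train(param, xgmat, nrounds, watchlist))["elapsed"]
  cat(thread, "threads:", elapsed, "seconds\n")
}
```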
demo/kaggle-otto/otto_train_pred.R (8 changes: 4 additions & 4 deletions)
@@ -23,13 +23,13 @@ param <- list("objective" = "multi:softprob",
"nthread" = 8)

# Run Cross Validation
-cv.nround = 50
+cv.nrounds = 50
bst.cv = xgb.cv(param=param, data = x[trind,], label = y,
-                nfold = 3, nrounds=cv.nround)
+                nfold = 3, nrounds=cv.nrounds)

# Train the model
-nround = 50
-bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nround)
+nrounds = 50
+bst = xgboost(param=param, data = x[trind,], label = y, nrounds=nrounds)

# Make prediction
pred = predict(bst,x[teind,])
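For readers following along: with `multi:softprob`, `pred` is a flat vector holding one probability per class for each test row. A hedged sketch of the reshaping the demo performs next (Otto has 9 classes, set as `num_class` in the collapsed parameter list):

```r
num_class <- 9
pred <- matrix(pred, nrow = num_class)  # one column per test row
pred_label <- max.col(t(pred))          # most probable class per row (1-based)
```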
demo/kaggle-otto/understandingXGBoostModel.Rmd (10 changes: 5 additions & 5 deletions)
@@ -121,19 +121,19 @@ param <- list("objective" = "multi:softprob",
"eval_metric" = "mlogloss",
"num_class" = numberOfClasses)
-cv.nround <- 5
+cv.nrounds <- 5
cv.nfold <- 3
bst.cv = xgb.cv(param=param, data = trainMatrix, label = y,
-                nfold = cv.nfold, nrounds = cv.nround)
+                nfold = cv.nfold, nrounds = cv.nrounds)
```
> As we can see, the error rate is low on the test dataset (for a model trained in only five minutes).
Finally, we are ready to train the real model!!!

```{r modelTraining}
-nround = 50
-bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nround)
+nrounds = 50
+bst = xgboost(param=param, data = trainMatrix, label = y, nrounds=nrounds)
```

Model understanding
@@ -142,7 +142,7 @@ Model understanding
Feature importance
------------------

-So far, we have built a model made of **`r nround`** trees.
+So far, we have built a model made of **`r nrounds`** trees.

To build a tree, the dataset is divided recursively several times. At the end of the process, you get groups of observations (here, these observations are properties regarding **Otto** products).

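The feature-importance section this hunk introduces reduces to two calls; a sketch assuming the `bst` and `trainMatrix` from the chunks above:

```r
importance <- xgb.importance(feature_names = colnames(trainMatrix), model = bst)
head(importance)                 # Gain, Cover, and Frequency per feature
xgb.plot.importance(importance)
```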
doc/R-package/discoverYourData.md (8 changes: 4 additions & 4 deletions)
@@ -222,7 +222,7 @@ The code below is very usual. For more information, you can look at the document

```r
bst <- xgboost(data = sparse_matrix, label = output_vector, max.depth = 4,
-               eta = 1, nthread = 2, nround = 10,objective = "binary:logistic")
+               eta = 1, nthread = 2, nrounds = 10,objective = "binary:logistic")
```

```
@@ -244,7 +244,7 @@ A model which fits too well may [overfit](http://en.wikipedia.org/wiki/Overfitti

> Here you can see the numbers decrease until line 7 and then increase.
>
-> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nround = 4`. I will leave it as is because I don't really care for the purpose of this example :-)
+> It probably means we are overfitting. To fix that I should reduce the number of rounds to `nrounds = 4`. I will leave it as is because I don't really care for the purpose of this example :-)
Feature importance
------------------
@@ -448,7 +448,7 @@ train <- agaricus.train
test <- agaricus.test

#Random Forest™ - 1000 trees
-bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nround = 1, objective = "binary:logistic")
+bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nrounds = 1, objective = "binary:logistic")
```

```
Expand All @@ -457,7 +457,7 @@ bst <- xgboost(data = train$data, label = train$label, max.depth = 4, num_parall

```r
#Boosting - 3 rounds
-bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nround = 3, objective = "binary:logistic")
+bst <- xgboost(data = train$data, label = train$label, max.depth = 4, nrounds = 3, objective = "binary:logistic")
```

```