[DOC] cleanup distributed training
tqchen committed Jan 16, 2016
1 parent df7c793 commit e7d8ed7
Showing 11 changed files with 155 additions and 237 deletions.
115 changes: 59 additions & 56 deletions CHANGES.md → NEWS.md
@@ -1,43 +1,30 @@
Change Log
==========
XGBoost Change Log
==================

xgboost-0.1
-----------
* Initial release
This file records the changes in the xgboost library in reverse chronological order.

xgboost-0.2x
------------
* Python module
* Weighted sample instances
* Initial version of pairwise rank

xgboost-0.3
-----------
* Faster tree construction module
- Allows subsampling columns during tree construction via ```bst:col_samplebytree=ratio```
* Support for boosting from initial predictions
* Experimental version of LambdaRank
* Linear booster is now parallelized, using parallel coordinate descent.
* Add [Code Guide](src/README.md) for customizing objective function and evaluation
* Add R module

xgboost-0.4
-----------
* Distributed version of xgboost that runs on YARN, scales to billions of examples
* Direct save/load data and model from/to S3 and HDFS
* Feature importance visualization in R module, by Michael Benesty
* Predict leaf index
* Poisson regression for counts data
* Early stopping option in training
* Native save/load support in R and python
- xgboost models can now be saved using save/load in R
- the xgboost python model is now picklable
* sklearn wrapper is supported in python module
* Experimental External memory version
## brick: next release candidate
* Major refactor of core library.
- Goal: more flexible and modular code as a portable library.
- Switch to use of c++11 standard code.
- Random number generator defaults to ```std::mt19937```.
- Share the data loading pipeline and logging module from dmlc-core.
- Enable registry pattern to allow optional plugins of objective, metric, tree constructor and data loader.
- Future plugin modules can be put into xgboost/plugin and registered back to the library.
- Replace most raw pointers with smart pointers, for RAII safety.
* Change library name to libxgboost.so
* Backward compatibility
- The binary buffer file is not backward compatible with previous versions.
- The model file is backward compatible on 64 bit platforms.
* The model file is compatible between 64/32 bit platforms (not yet tested).
* External memory version and other advanced features will be exposed to the R library as well on Linux.
- Previously some of the features were blocked due to C++11 and threading limits.
- The Windows version is still blocked because Rtools does not support ```std::thread```.
* rabit and dmlc-core are maintained through git submodules
- Anyone can open a PR to update these dependencies now.

## v0.47 (2016.01.14)

xgboost-0.47
------------
* Changes in R library
- fixed a possible problem with Poisson regression.
- switched from 0 to NA for missing values.
@@ -58,23 +45,39 @@ xgboost-0.47
* Java API is ready for use
* Added more test cases and continuous integration to make each build more robust.

xgboost brick: next release candidate
-------------------------------------
* Major refactor of core library.
- Goal: more flexible and modular code as a portable library.
- Switch to use of c++11 standard code.
- Random number generator defaults to ```std::mt19937```.
- Share the data loading pipeline and logging module from dmlc-core.
- Enable registry pattern to allow optional plugins of objective, metric, tree constructor and data loader.
- Future plugin modules can be put into xgboost/plugin and registered back to the library.
- Replace most raw pointers with smart pointers, for RAII safety.
* Change library name to libxgboost.so
* Backward compatibility
- The binary buffer file is not backward compatible with previous versions.
- The model file is backward compatible on 64 bit platforms.
* The model file is compatible between 64/32 bit platforms (not yet tested).
* External memory version and other advanced features will be exposed to the R library as well on Linux.
- Previously some of the features were blocked due to C++11 and threading limits.
- The Windows version is still blocked because Rtools does not support ```std::thread```.
* rabit and dmlc-core are maintained through git submodules
- Anyone can open a PR to update these dependencies now.
## v0.4 (2015.05.11)

* Distributed version of xgboost that runs on YARN, scales to billions of examples
* Direct save/load data and model from/to S3 and HDFS
* Feature importance visualization in R module, by Michael Benesty
* Predict leaf index
* Poisson regression for counts data
* Early stopping option in training
* Native save/load support in R and python
- xgboost models can now be saved using save/load in R
- the xgboost python model is now picklable
* sklearn wrapper is supported in python module
* Experimental External memory version


## v0.3 (2014.09.07)

* Faster tree construction module
- Allows subsampling columns during tree construction via ```bst:col_samplebytree=ratio```
* Support for boosting from initial predictions
* Experimental version of LambdaRank
* Linear booster is now parallelized, using parallel coordinate descent.
* Add [Code Guide](src/README.md) for customizing objective function and evaluation
* Add R module


## v0.2x (2014.05.20)

* Python module
* Weighted sample instances
* Initial version of pairwise rank


## v0.1 (2014.03.26)

* Initial release
18 changes: 4 additions & 14 deletions README.md
@@ -15,23 +15,14 @@ XGBoost is part of [DMLC](http://dmlc.github.io/) projects.

Contents
--------
* [Documentation](https://xgboost.readthedocs.org)
* [Usecases](doc/index.md#highlight-links)
* [Documentation and Tutorials](https://xgboost.readthedocs.org)
* [Code Examples](demo)
* [Build Instruction](doc/build.md)
* [Committers and Contributors](CONTRIBUTORS.md)

What's New
----------
* XGBoost [brick](CHANGES.md)
* XGBoost helps Vlad Mironov, Alexander Guschin to win the [CERN LHCb experiment Flavour of Physics competition](https://www.kaggle.com/c/flavours-of-physics). Check out the [interview from Kaggle](http://blog.kaggle.com/2015/11/30/flavour-of-physics-technical-write-up-1st-place-go-polar-bears/).
* XGBoost helps Mario Filho, Josef Feigl, Lucas, Gilberto to win the [Caterpillar Tube Pricing competition](https://www.kaggle.com/c/caterpillar-tube-pricing). Check out the [interview from Kaggle](http://blog.kaggle.com/2015/09/22/caterpillar-winners-interview-1st-place-gilberto-josef-leustagos-mario/).
* XGBoost helps Halla Yang to win the [Recruit Coupon Purchase Prediction Challenge](https://www.kaggle.com/c/coupon-purchase-prediction). Check out the [interview from Kaggle](http://blog.kaggle.com/2015/10/21/recruit-coupon-purchase-winners-interview-2nd-place-halla-yang/).

Version
-------
* Current version xgboost-0.6 (brick)
- See [Change log](CHANGES.md) for details
* [XGBoost brick](NEWS.md) Release

Features
--------
@@ -45,17 +36,16 @@ Features

Bug Reporting
-------------

* For reporting bugs please use the [xgboost/issues](https://github.com/dmlc/xgboost/issues) page.
* For generic questions or to share your experience using xgboost please use the [XGBoost User Group](https://groups.google.com/forum/#!forum/xgboost-user/)


Contributing to XGBoost
-----------------------
XGBoost has been developed and used by a group of active community members. Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.
* Check out [Feature Wish List](https://github.com/dmlc/xgboost/labels/Wish-List) to see what can be improved, or open an issue if you want something.
* Contribute to the [documents and examples](https://github.com/dmlc/xgboost/blob/master/doc/) to share your experience with other users.
* Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md) after your patch has been merged.
* Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md) after your patch has been merged.
- Please also update [NEWS.md](NEWS.md) on changes and improvements in API and docs.

License
-------
7 changes: 7 additions & 0 deletions demo/README.md
@@ -44,8 +44,15 @@ However, the parameter settings can be applied to all versions
* [Multiclass classification](multiclass_classification)
* [Regression](regression)
* [Learning to Rank](rank)
* [Distributed Training](distributed-training)

Benchmarks
----------
* [Starter script for Kaggle Higgs Boson](kaggle-higgs)
* [Kaggle Tradeshift winning solution by daxiongshu](https://github.com/daxiongshu/kaggle-tradeshift-winning-solution)

Machine Learning Challenge Winning Solutions
--------------------------------------------
* XGBoost helps Vlad Mironov, Alexander Guschin to win the [CERN LHCb experiment Flavour of Physics competition](https://www.kaggle.com/c/flavours-of-physics). Check out the [interview from Kaggle](http://blog.kaggle.com/2015/11/30/flavour-of-physics-technical-write-up-1st-place-go-polar-bears/).
* XGBoost helps Mario Filho, Josef Feigl, Lucas, Gilberto to win the [Caterpillar Tube Pricing competition](https://www.kaggle.com/c/caterpillar-tube-pricing). Check out the [interview from Kaggle](http://blog.kaggle.com/2015/09/22/caterpillar-winners-interview-1st-place-gilberto-josef-leustagos-mario/).
* XGBoost helps Halla Yang to win the [Recruit Coupon Purchase Prediction Challenge](https://www.kaggle.com/c/coupon-purchase-prediction). Check out the [interview from Kaggle](http://blog.kaggle.com/2015/10/21/recruit-coupon-purchase-winners-interview-2nd-place-halla-yang/).
52 changes: 52 additions & 0 deletions demo/distributed-training/README.md
@@ -0,0 +1,52 @@
Distributed XGBoost Training
============================
This is a tutorial on distributed XGBoost training.
Currently xgboost supports distributed training via the CLI program with a configuration file.
There are also plans to bring distributed training to the python and other language bindings;
please open an issue if you are interested in contributing.

Build XGBoost with Distributed Filesystem Support
-------------------------------------------------
To use distributed xgboost, you only need to turn on the options to build
with distributed filesystems (HDFS or S3) in ```xgboost/make/config.mk```.
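
As a rough illustration, the relevant part of ```xgboost/make/config.mk``` might look like the sketch below; the flag names and comments are written from memory and may differ in your checkout, so consult the comments in the file itself before building:
```
# whether to build with HDFS support
USE_HDFS = 1

# whether to build with AWS S3 support
USE_S3 = 1
```
After turning the flags on, rebuild xgboost so the distributed filesystem support is compiled in.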

How to Use
----------
* Input data format: LIBSVM format. The example here uses generated data in the ../data folder (see the snippet after this list for what the format looks like).
* Put the data into some distributed filesystem (S3 or HDFS)
* Use the tracker script in dmlc-core/tracker to submit the jobs
* Like all other DMLC tools, xgboost supports taking a path to a folder as an input argument
- All the files in the folder will be used as input
* Quick start in Hadoop YARN: run ```bash run_yarn.sh <n_hadoop_workers> <n_thread_per_worker> <path_in_HDFS>```
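
For reference, LIBSVM-format input has one instance per line: a label followed by ```feature_index:value``` pairs. The two lines below are made up for illustration only and are not taken from the actual agaricus data:
```
1 3:1 10:1 11:1 21:1 30:1 34:1
0 4:1 9:1 19:1 21:1 24:1 36:1
```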

Example
-------
* [run_yarn.sh](run_yarn.sh) shows how to submit a job to Hadoop via YARN; a concrete invocation is sketched below.
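
A hypothetical invocation might look like the following; the worker count, thread count and HDFS working directory are made-up values for illustration:
```
bash run_yarn.sh 4 4 /user/alice/xgboost-demo
```
Following the steps in the script, this uploads the demo data to HDFS under the given path, trains with 4 YARN workers using 4 threads each, and fetches the final model back from HDFS.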

Single machine vs Distributed Version
-------------------------------------
If you have used xgboost (single machine version) before, this section will show you how to run xgboost on hadoop with a slight modification of the conf file.
* IO: instead of reading and writing files locally, we now use HDFS; add the ```hdfs://``` prefix to the address of the file you would like to access (a sketch of such a conf file is shown after this list)
* File cache: ```dmlc_yarn.py``` also provides several ways to cache the necessary files, including the binary file (xgboost) and the conf file
- ```dmlc_yarn.py``` will automatically cache files given on the command line. For example, ```dmlc_yarn.py -n 3 $localPath/xgboost.dmlc mushroom.hadoop.conf``` will cache "xgboost.dmlc" and "mushroom.hadoop.conf".
- You could also use "-f" to manually cache one or more files, like ```-f file1 -f file2```
- The local path of cached files in the command is "./".
* For more details on job submission, see the usage of ```dmlc_yarn.py```.
* The model saved by the hadoop version is compatible with the single machine version.
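
To make the ```hdfs://``` convention concrete, below is a minimal sketch of a CLI configuration file in the spirit of ```mushroom.hadoop.conf```; the parameter values and paths are illustrative, not necessarily the exact contents of the file shipped with this demo. Note that [run_yarn.sh](run_yarn.sh) passes the data and model paths on the command line instead, which takes precedence over the corresponding entries in the conf file:
```
booster = gbtree
objective = binary:logistic
eta = 1.0
max_depth = 3
num_round = 2
data = "hdfs:///user/alice/xgboost-demo/data/agaricus.txt.train"
eval[test] = "hdfs:///user/alice/xgboost-demo/data/agaricus.txt.test"
model_out = "hdfs:///user/alice/xgboost-demo/mushroom.final.model"
```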

Notes
-----
* The code is optimized with multi-threading, so you will want to run xgboost with more vcores for best performance.
- You will want to set <n_thread_per_worker> to be the number of cores you have on each machine.


External Memory Version
-----------------------
XGBoost supports external memory, which makes each process cache data onto the local disk during computation instead of holding all of the data in memory.
See [external memory](https://github.com/dmlc/xgboost/tree/master/doc/external_memory.md) for the syntax of using external memory.

You only need to add a cache prefix to the input file to enable external memory mode. For example, set the training data as
```
data=hdfs:///path-to-my-data/#dtrain.cache
```
This will make xgboost more memory efficient and allow you to run xgboost on larger-scale datasets.
33 changes: 33 additions & 0 deletions demo/distributed-training/run_yarn.sh
@@ -0,0 +1,33 @@
#!/bin/bash
if [ "$#" -lt 3 ];
then
echo "Usage: <nworkers> <nthreads> <path_in_HDFS>"
exit -1
fi

# put the local training and test files into HDFS
hadoop fs -mkdir $3/data
hadoop fs -put ../data/agaricus.txt.train $3/data
hadoop fs -put ../data/agaricus.txt.test $3/data

# run the training job through rabit on YARN, passing HDFS addresses for the data and model
../../dmlc-core/tracker/dmlc_yarn.py -n $1 --vcores $2 ../../xgboost mushroom.hadoop.conf nthread=$2 \
    data=hdfs://$3/data/agaricus.txt.train \
    eval[test]=hdfs://$3/data/agaricus.txt.test \
    model_out=hdfs://$3/mushroom.final.model

# get the final model file
hadoop fs -get $3/mushroom.final.model final.model

# use dmlc-core/yarn/run_hdfs_prog.py to set up the appropriate environment

# output prediction task=pred
#../../xgboost.dmlc mushroom.hadoop.conf task=pred model_in=final.model test:data=../data/agaricus.txt.test
../../dmlc-core/yarn/run_hdfs_prog.py ../../xgboost mushroom.hadoop.conf task=pred model_in=final.model test:data=../data/agaricus.txt.test
# print the boosters of final.model in dump.raw.txt
#../../xgboost.dmlc mushroom.hadoop.conf task=dump model_in=final.model name_dump=dump.raw.txt
../../dmlc-core/yarn/run_hdfs_prog.py ../../xgboost mushroom.hadoop.conf task=dump model_in=final.model name_dump=dump.raw.txt
# use the feature map in printing for better visualization
#../../xgboost.dmlc mushroom.hadoop.conf task=dump model_in=final.model fmap=../data/featmap.txt name_dump=dump.nice.txt
../../dmlc-core/yarn/run_hdfs_prog.py ../../xgboost mushroom.hadoop.conf task=dump model_in=final.model fmap=../data/featmap.txt name_dump=dump.nice.txt
cat dump.nice.txt
28 changes: 0 additions & 28 deletions multi-node/README.md

This file was deleted.

19 changes: 0 additions & 19 deletions multi-node/col-split/README.md

This file was deleted.

25 changes: 0 additions & 25 deletions multi-node/col-split/mushroom-col-rabit-mock.sh

This file was deleted.

28 changes: 0 additions & 28 deletions multi-node/col-split/mushroom-col-rabit.sh

This file was deleted.
