@dmlc/xgboost-committer please add your items here by editing this post. Let's ensure that:
- each item is associated with a ticket
- major design/refactoring work is associated with an RFC before the code is committed
- blocking issues are marked as blocking
- breaking changes are marked as breaking

Other contributors who don't have permission to edit this post: please comment here with what you think should be in 1.0.0.

I have created three new labels: 1.0.0, Blocking, and Breaking.
- Improve installation experience on Mac OSX (Better XGBoost installation on Mac OSX? #4477)
- Remove old GPU objectives
- Remove the gpu_exact updater (deprecated) (Deprecate gpu_exact, bump required cuda version in docs #4527)
- Remove multi-threaded multi-GPU support (deprecated) ([RFC] Remove support for single process multi-GPU #4531)
- External memory for GPU and associated DMatrix refactoring ([RFC] External memory support for GPU #4357, [RFC] Possible DMatrix refactor #4354)
- Spark Checkpoint Performance Improvement ([jvm-packages] Checkpointing performance issue in XGBoost4J-Spark #3946)
- [BLOCKING] The sync mechanism in the hist method on the master branch is broken due to inconsistent tree shapes across workers ([HOTFIX] distributed training with hist method #4716, [BLOCKING] Per-node sync slows down distributed training with 'hist' #4679)
- Per-node sync slows down distributed training with 'hist' ([BLOCKING] Per-node sync slows down distributed training with 'hist' #4679)
- Regression tests including binary IO compatibility, output stability, performance regressions.