
[SPARK-3162] [MLlib] Add local tree training for decision tree regressors #19433


Closed
wants to merge 29 commits

Conversation

smurching
Contributor

@smurching smurching commented Oct 4, 2017

What changes were proposed in this pull request?

Overview

This PR adds local tree training for decision tree regressors as a first step for addressing SPARK-3162: train decision trees locally when possible.

See this design doc (in particular the local tree training section) for detailed discussion of the proposed changes.

Distributed training logic has been refactored but only minimally modified; the local tree training implementation leverages existing distributed training logic for computing impurities and splits. This shared logic has been refactored into ...Utils objects (e.g. SplitUtils.scala, ImpurityUtils.scala).

How to Review

Each commit in this PR adds non-overlapping functionality, so the PR can be reviewed commit-by-commit.

Changes introduced by each commit:

  1. Adds new data structures for local tree training (FeatureVector, TrainingInfo); an illustrative sketch follows this list.
  2. Adds shared utility methods for computing splits/impurities (SplitUtils, ImpurityUtils, AggUpdateUtils), largely copied from existing distributed training code in RandomForest.scala.
  3. Unit tests for split/impurity utility methods (TreeSplitUtilsSuite)
  4. Updates distributed training code in RandomForest.scala to depend on the utility methods introduced in 2.
  5. Adds local tree training logic (LocalDecisionTree)
  6. Local tree unit/integration tests (LocalTreeUnitSuite, LocalTreeIntegrationSuite)
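For readers unfamiliar with the new data structures, here is a rough Scala illustration of what FeatureVector and TrainingInfo represent. The field names below are assumptions for illustration only (the real definitions live in this PR's commits, and LearningNode is Spark's existing internal tree node class):

```scala
// Rough illustration only -- not the PR's exact definitions.
// FeatureVector: binned values for one feature, stored column-wise, plus the row indices
// that map each position back to its training instance after rows are re-sorted per node.
private[impl] case class FeatureVector(
    featureIndex: Int,
    featureArity: Int,   // 0 for continuous features
    values: Array[Int],  // binned feature values, one per instance
    indices: Array[Int]) // instance index for each position in `values`

// TrainingInfo: everything needed to describe the state of local training at a point in time.
private[impl] case class TrainingInfo(
    columns: Array[FeatureVector],                 // one FeatureVector per feature
    nodeOffsets: Array[(Int, Int)],                // [from, to) row range owned by each node
    currentLevelActiveNodes: Array[LearningNode])  // nodes that may still be split
```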

How was this patch tested?

No existing tests were modified. The following new tests were added (also described above):

  • Unit tests for new data structures specific to local tree training (LocalTreeDataSuite, LocalTreeUtilsSuite)
  • Unit tests for impurity/split utility methods (TreeSplitUtilsSuite)
  • Unit tests for local tree training logic (LocalTreeUnitSuite)
  • Integration tests verifying that local & distributed tree training produce the same trees (LocalTreeIntegrationSuite)

…calTreeDataSuite):

    * TrainingInfo: primary local tree training data structure; contains all information required to describe the state of the algorithm at any point during learning
    * FeatureVector: stores data for an individual feature as an Array[Int]
…oth local & distributed training:

 * AggUpdateUtils: Helper methods for updating sufficient stats for a given node
 * ImpurityUtils: Helper methods for impurity-related calculations during node split decisions
 * SplitUtils: Helper methods for choosing splits given sufficient stats

NOTE: Both ImpurityUtils and SplitUtils primarily contain code taken from RandomForest.scala, with slight modifications.
Tests for SplitUtils are contained in the next commit.
 * TreeSplitUtilsSuite: Test suite for SplitUtils
 * TreeTests: Add utility method (getMetadata) for TreeSplitUtilsSuite

 Also add methods used by these tests in LocalDecisionTree.scala, RandomForest.scala
@smurching
Contributor Author

@WeichenXu123 would you be able to take an initial look at this?

val numFeatures = rowStore(0).length
require(numFeatures > 0, "Local decision tree training requires numFeatures > 0.")
// Return the transpose of the rowStore matrix
0.until(numFeatures).map { colIdx =>
Contributor Author

@smurching smurching Oct 9, 2017

TODO: replace this with an in-place matrix transpose for memory efficiency.
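For context, a minimal sketch of the row-to-column transpose the hunk above belongs to (variable names follow the snippet; this is not the PR's exact method, and the in-place version mentioned in the TODO would replace it):

```scala
// Minimal sketch: turn an Array of rows, each an Array[Int] of binned feature values,
// into one Array[Int] per feature column.
def rowToColumnStore(rowStore: Array[Array[Int]]): Array[Array[Int]] = {
  val numFeatures = rowStore(0).length
  require(numFeatures > 0, "Local decision tree training requires numFeatures > 0.")
  // TODO (as noted above): replace with an in-place transpose for memory efficiency.
  0.until(numFeatures).map { colIdx =>
    rowStore.map(row => row(colIdx))
  }.toArray
}
```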

@WeichenXu123
Contributor

@smurching Is this still WIP? If it's done, remove "[WIP]" and I will begin the review, thanks!

@smurching
Contributor Author

Thanks! I'll remove the WIP. To clear things up for the future, I'd thought [WIP] was the appropriate tag for a PR that's ready for review but not ready to be merged (based on https://spark.apache.org/contributing.html) -- have we stopped using the WIP tag?

@smurching smurching changed the title [SPARK-3162] [MLlib][WIP] Add local tree training for decision tree regressors [SPARK-3162] [MLlib] Add local tree training for decision tree regressors Oct 9, 2017
@jkbradley
Member

add to whitelist

@SparkQA

SparkQA commented Oct 9, 2017

Test build #82557 has finished for PR 19433 at commit 9a7174e.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds the following public classes (experimental):
  • class LocalTreeIntegrationSuite extends SparkFunSuite with MLlibTestSparkContext
  • class LocalTreeUtilsSuite extends SparkFunSuite

@smurching
Contributor Author

smurching commented Oct 9, 2017

The failing tests (in DecisionTreeSuite) fail because we've historically handled

a) splits that have 0 gain

differently from

b) splits that fail to achieve user-specified minimum gain (metadata.minInfoGain) or don't meet minimum instance-counts per node (metadata.minInstancesPerNode).

Previously we'd create a leaf node with valid impurity stats in case a) and invalid impurity stats in case b). This PR creates a leaf node with invalid impurity stats in both cases.

As a fix I'd suggest creating a LeafNode with correct impurity stats in case a), but with the stats.valid member set to false to indicate that the node should not be split.

This will keep the process of determining split validity simple (just check stats.valid) and avoid changes to existing distributed tree-training logic.
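A sketch of what that could look like, assuming ImpurityStats keeps its existing constructor with a valid flag (parentImpurityCalculator here stands in for the node's impurity calculator; this is not the final change):

```scala
// Case a): a split with zero gain that still satisfies minInfoGain / minInstancesPerNode.
// Keep the node's true impurity stats but mark them invalid so it won't be split further.
val zeroGainLeafStats = new ImpurityStats(
  gain = 0.0,
  impurity = parentImpurityCalculator.calculate(),
  impurityCalculator = parentImpurityCalculator,
  leftImpurityCalculator = null,
  rightImpurityCalculator = null,
  valid = false)  // split-validity checks only need to look at stats.valid
```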

…ranspose in LocalDecisionTreeUtils.

Changes made to fix tests:
 * Return correct impurity stats for splits that achieved a gain of 0 but didn't violate user-specified constraints on min info gain or min instances per node
 * Previously, ImpurityStats.impurity was set incorrectly in ImpurityStats.getInvalidImpurityStats(), requiring a correction in LearningNode.toNode.
   This commit fixes the issue by directly setting impurity = -1 in getInvalidSplits()
@SparkQA

SparkQA commented Oct 9, 2017

Test build #82570 has finished for PR 19433 at commit abc86b2.

  • This patch fails SparkR unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@smurching
Contributor Author

smurching commented Oct 12, 2017

The failing SparkR test (which compares RandomForest predictions to hardcoded values) fails not due to a correctness issue but (AFAICT) because of an implementation change in best-split selection.

In this PR we recompute parent node impurity stats when considering each split for a feature, instead of computing parent impurity stats once per feature (see this by comparing RandomForest.calculateImpurityStats in Spark master and ImpurityUtils.calculateImpurityStats in this PR).

The process of repeatedly computing parent impurity stats results in slightly different impurity values at each iteration due to Double precision limitations. This in turn can cause different splits to be selected (e.g. if two splits have mathematically equal gains, Double precision limitations can cause one split to have a larger/smaller gain than the other, influencing tiebreaking).
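A standalone illustration of the effect (not PR code): floating-point addition is not associative, so recomputing the same quantity in a different order can change the last bits, which is enough to flip a tie.

```scala
val a = (0.1 + 0.2) + 0.3   // 0.6000000000000001
val b = 0.1 + (0.2 + 0.3)   // 0.6
println(a == b)             // false: enough to change which of two "equal" gains wins
```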

@SparkQA

SparkQA commented Oct 12, 2017

Test build #82652 has finished for PR 19433 at commit 5c29d3d.

  • This patch fails SparkR unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@WeichenXu123
Contributor

I made a rough pass. I have only a few issues for now; I haven't gone into the code details:

  • colStoreInit currently ignores the subsampleWeights; it should be used, shouldn't it? I read your doc: at the higher level, local training will be used to train sub-trees as parts of the global distributed training, so subsampleWeights should be important info. But here it trains only a single tree, so subsampleWeights contains only one element; do we still need the BaggedPoint structure?

  • I think the training logic for regression and for classification will be the same; only the impurity differs, and that does not affect the code logic.

  • The key idea is to use the columnar storage format for features; is the purpose to improve memory cost & cache locality when finding best splits? I see the code does some reordering on feature values and uses indices, but I haven't gone into the details. It's a complex part and I need more time to review it.

  • Maybe we can support multithreading in local training; what do you think?

@smurching
Contributor Author

Thanks for the comments!

  • Yep, feature subsampling is necessary for using local tree training in distributed training. I was thinking of adding subsampling in a follow-up PR. You're right that we don't need to pass an array of BaggedPoints to local tree training; we should just pass an array of subsampleWeights (weights for the current tree) and an array of TreePoints. I'll push an update for this.

  • Agreed that the logic for classification will be the same but with a different impurity metric. I can add support for classification & associated tests in a follow-up PR.

  • IMO the primary advantage of the columnar storage format is that it'll eventually enable improvements to best split calculations; specifically, for continuous features we could sort the unbinned feature values and consider every possible threshold. There are also the locality & memory advantages described in the design doc. In brief, DTStatsAggregator stores a flat array partitioned by (feature x bin). If we can iterate through all values for a single feature at once, most updates to DTStatsAggregator will occur within the same subarray (see the sketch after this list).

  • Multithreading could be a nice way to increase parallelism since we don't use Spark during local tree training. I think we could add it in a follow-up PR.
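An illustrative, simplified stand-in for that layout (not the real DTStatsAggregator): stats are one flat array with a contiguous block per feature, so a pass over a single feature column only touches that feature's block.

```scala
val numFeatures = 4
val numBins = 8
val statsSize = 3  // count, sum(label), sum(label^2): variance impurity for regression
val allStats = new Array[Double](numFeatures * numBins * statsSize)

def update(featureIndex: Int, bin: Int, label: Double, weight: Double): Unit = {
  val offset = (featureIndex * numBins + bin) * statsSize
  allStats(offset) += weight
  allStats(offset + 1) += weight * label
  allStats(offset + 2) += weight * label * label
}

// Column-wise iteration: every update for feature 2 lands in the same contiguous block.
val binnedColumn = Array(0, 3, 3, 7, 1)
val labels = Array(1.0, 0.5, 0.5, 2.0, 1.5)
binnedColumn.zip(labels).foreach { case (bin, label) => update(2, bin, label, 1.0) }
```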

@smurching
Contributor Author

Sorry, realized I conflated feature subsampling and subsampleWeights (instance weights for training examples). IMO feature subsampling can be added in a follow-up PR, but subsampleWeights should go in this PR.

@SparkQA

SparkQA commented Oct 13, 2017

Test build #82717 has finished for PR 19433 at commit c9a8e01.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Oct 13, 2017

Test build #82721 has finished for PR 19433 at commit 93e17fc.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Contributor

@WeichenXu123 WeichenXu123 left a comment

I made a deeper review pass. Later I will put more thought into the columnar feature storage design. Thanks!

// gives us the split bit value for each instance based on the instance's index.
// We copy our feature values into @tempVals and @tempIndices either:
// 1) in the [from, numLeftRows) range if the bit is false, or
// 2) in the [numBitsNotSet, to) range if the bit is true.
Contributor

Although numLeftRows == numBitsNotSet, it is better to use the same name for both in the doc.

Contributor Author

Will change this, thanks for the catch!
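A self-contained sketch of the partitioning step the quoted comment describes (simplified; the PR's version also updates the indices array and reuses scratch buffers):

```scala
// Stable partition of one column's values over [from, to): instances whose split bit is
// false move to the front of the range, instances whose bit is true move to the back,
// preserving relative order within each group.
def partitionColumn(values: Array[Int], from: Int, to: Int, isRight: Int => Boolean): Unit = {
  val tmp = new Array[Int](to - from)
  val numBitsNotSet = (from until to).count(i => !isRight(i))
  var left = 0
  var right = numBitsNotSet
  (from until to).foreach { i =>
    if (!isRight(i)) { tmp(left) = values(i); left += 1 }
    else { tmp(right) = values(i); right += 1 }
  }
  Array.copy(tmp, 0, values, from, tmp.length)
}
```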

// Filter out leaf nodes from the previous iteration
val activeNonLeafs = activeNodes.zipWithIndex.filterNot(_._1.isLeaf)
// Iterate over the active nodes in the current level.
activeNonLeafs.flatMap { case (node: LearningNode, nodeIndex: Int) =>
Contributor

I think the variable names activeNodes and activeNonLeafs are not accurate.
Here, activeNodes is actually the "next level nodes", including both "probably splittable nodes (active nodes)" and "leaf nodes".

val activeNodes: Array[LearningNode] =
computeBestSplits(trainingInfo, labels, metadata, splits)
// Filter active node periphery by impurity.
val estimatedRemainingActive = activeNodes.count(_.stats.impurity > 0.0)
Contributor

Use activeNodes.count(_.isLeaf) instead; it makes the code simpler.
And as mentioned above, activeNodes would be better renamed to nextLevelNodes.

Contributor Author

Agreed on using isLeaf instead of checking for positive impurity, thanks for the suggestion.

AFAICT at this point in the code activeNodes actually does refer to the nodes in the current level; the children of nodes in activeNodes are the nodes in the next level, and are returned by computeBestSplits. I forgot to include the return type of computeBestSplit in its method signature, which probably made this more confusing - my mistake.

Contributor

Yes. Sorry for the confusion. The change I meant was:

val nextLevelNodes: Array[LearningNode] =
        computeBestSplits(trainingInfo, labels, metadata, splits)

Does that look more reasonable?
And change the member name in TrainingInfo:
TrainingInfo.activeNodes ==> TrainingInfo.currentLevelNodes

Contributor Author

Gotcha, agreed on the naming change; how about currentLevelActiveNodes, since only the non-leaf nodes from the current level are included?

Contributor

Wait... I checked the code here: trainingInfo = trainingInfo.update(splits, activeNodes). It seems you do not filter out the leaf nodes from activeNodes (which is actually the nextLevelNodes I mentioned above).
So I think TrainingInfo.activeNodes can still contain leaf nodes.

Contributor Author

@smurching smurching Oct 27, 2017

Oh true -- I'll reword the doc for currentLevelActiveNodes to say:

 * @param currentLevelActiveNodes  Nodes which are active (could still be split).
 *                                 Inactive nodes are known to be leaves in the final tree.

*/
private[impl] case class TrainingInfo(
columns: Array[FeatureVector],
instanceWeights: Array[Double],
Contributor

instanceWeights is never updated across iterations, so why put it in the TrainingInfo structure?

Contributor Author

Good call, I'll move instanceWeights outside TrainingInfo

*/
private[impl] def updateParentImpurity(
statsAggregator: DTStatsAggregator,
col: FeatureVector,
Contributor

Actually, updateParentImpurity has no relation to any particular feature column; you pass in a feature column only to use its indices array, so passing any feature column would work. But this looks weird; maybe it can be better designed.

label: Double,
featureIndex: Int,
featureIndexIdx: Int,
splits: Array[Array[Split]],
Contributor

You only need to pass in featureSplit: Array[Split] for the current feature; don't pass all splits for all features.

Contributor Author

Good call, I'll make this change.

from: Int,
to: Int,
split: Split,
allSplits: Array[Array[Split]]): BitSet = {
Contributor

Ditto: you only need to pass in featureSplit: Array[Split] for the current feature; don't pass all splits for all features.

@WeichenXu123
Contributor

@smurching I found some issues and have some thoughts on the columnar feature format:

  • In your doc, you said "Specifically, we only need to store sufficient stats for each bin of a single feature, as opposed to each bin of every feature". BUT in the current implementation you still allocate space for all features when computing -- see the DTStatsAggregator implementation: you pass featureSubset = None, so DTStatsAggregator allocates space for every feature. To match your stated purpose, you should pass featureSubset = Some(Array(currentFeatureIndex)).

  • The current implementation still uses binnedFeatures. You said that in the future it will be improved to sort feature values for continuous features (for more precise tree training). If you want to consider every possible threshold, you need to hold rawFeatures instead of binnedFeatures in the columnar feature array, and within each split range offset you need to sort every continuous feature. Is this what you want to do in the future? It will increase the amount of calculation.

  • For the current implementation (using binnedFeatures), there is no need to sort continuous features inside each split offset, so the indices for each feature are exactly the same. To save memory, I think these indices should be shared; there is no need to create a separate indices array for each feature. Even if you add the continuous-feature improvements mentioned above, you could create separate indices arrays only for continuous features, and the categorical features could still share the same indices array.

  • About the locality advantage of the columnar format, I have some doubts. In the current implementation you do not reorder the label and weight arrays, so accessing label and weight values requires the indices, which breaks locality when calculating DTStats. (But I'm not sure how much this impacts performance.)

  • About the overhead of the columnar format: when reordering (when we get a new split, we need to move left sub-tree samples to the front), you need to reorder each column and update the indices array at the same time. But if we used a row format, like Array[(features, label, weight)], reordering would be much easier and would not need indices.
    So I am considering whether we could use the row format, and only when we need the DTStatsAggregator computation, copy the data we need from the row format into a columnar array (copying only the rows between the sub-node offsets, and only the sampled features if using feature subsampling). A sketch of this idea follows.
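A sketch of the row-to-column copy suggested in the last point (hypothetical types and names, not code from this PR):

```scala
// Hypothetical row format: (binnedFeatures, label, weight) per instance. Copy just the
// rows in a node's [from, to) range for one feature into a column right before the
// sufficient-stats computation, instead of maintaining full columnar storage.
def columnForNode(
    rows: Array[(Array[Int], Double, Double)],
    featureIndex: Int,
    from: Int,
    to: Int): Array[Int] = {
  val col = new Array[Int](to - from)
  var i = from
  while (i < to) {
    col(i - from) = rows(i)._1(featureIndex)
    i += 1
  }
  col
}
```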

* Move instanceWeights outside TrainingInfo
* Only pass a single array of splits (instead of an array of arrays of splits) when possible
@SparkQA

SparkQA commented Nov 5, 2017

Test build #83464 has finished for PR 19433 at commit 3f72cc0.

  • This patch fails to generate documentation.
  • This patch merges cleanly.
  • This patch adds no public classes.

@smurching
Contributor Author

jenkins retest this please

@SparkQA

SparkQA commented Nov 6, 2017

Test build #83503 has finished for PR 19433 at commit 3f72cc0.

  • This patch fails to generate documentation.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 6, 2017

Test build #83507 has finished for PR 19433 at commit b7e6e40.

  • This patch fails to generate documentation.
  • This patch merges cleanly.
  • This patch adds no public classes.

@jkbradley
Member

CC @dbtsai in case you're interested b/c of Sequoia forests

@SparkQA

SparkQA commented Nov 8, 2017

Test build #3983 has finished for PR 19433 at commit b7e6e40.

  • This patch fails to generate documentation.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

Member

@jkbradley jkbradley left a comment

Done with a pass over the parts that refactor elements of RandomForest.scala into utility classes. Will review more after updates!

agg: DTStatsAggregator,
featureValue: Int,
label: Double,
featureIndex: Int,
Member

featureIndex is not used

private[impl] def getNonConstantFeatures(
metadata: DecisionTreeMetadata,
featuresForNode: Option[Array[Int]]): Seq[(Int, Int)] = {
Range(0, metadata.numFeaturesPerNode).map { featureIndexIdx =>
Member

Was there a reason to remove the use of view and withFilter here? With the output of this method going through further Seq operations, I would expect the previous implementation to be more efficient.

Contributor Author

At some point when refactoring I was hitting errors caused by a stateful operation within a map over the output of this method (IIRC the result of the map was accessed repeatedly, causing the stateful operation to inadvertently be run multiple times).

However using withFilter and view now seems to work, I'll change it back :)

// Cumulative sum (scanLeft) of bin statistics.
// Afterwards, binAggregates for a bin is the sum of aggregates for
// that bin + all preceding bins.
assert(!binAggregates.metadata.isUnordered(featureIndex))
Member

Remove this (If there's any chance of this, then we should find ways to test it.)

val featureValue = categoriesSortedByCentroid(splitIndex)
val leftChildStats =
binAggregates.getImpurityCalculator(nodeFeatureOffset, featureValue)
val rightChildStats =
Member

This line can be moved outside of the map. Actually, this is the parentCalc, right? So if it's not available, parentCalc can be computed beforehand outside of the map.

Contributor Author

Exactly, it's the parentCalc minus the left child stats. Since ImpurityCalculator.subtract() updates the impurity calculator in place, we call binAggregates.getParentImpurityCalculator() to get a copy of the parent impurity calculator, then subtract the left child stats.
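In code form, the pattern described above looks roughly like this (identifiers come from the surrounding hunk; getParentImpurityCalculator() and subtract() are assumed to behave as described above):

```scala
// getParentImpurityCalculator() returns a fresh copy of the parent's stats, so subtracting
// the left child's stats in place yields the right child's stats without corrupting the
// aggregator's buffers.
val leftChildStats = binAggregates.getImpurityCalculator(nodeFeatureOffset, featureValue)
val rightChildStats = binAggregates.getParentImpurityCalculator().subtract(leftChildStats)
```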

// Unordered categorical feature
val nodeFeatureOffset = binAggregates.getFeatureOffset(featureIndexIdx)
val numSplits = binAggregates.metadata.numSplits(featureIndex)
var parentCalc = parentCalculator
Member

It'd be nice to calculate the parentCalc right away here, if needed. That seems possible just by taking the first candidate split. Then we could simplify calculateImpurityStats by not passing in parentCalc as an option.

val centroid = ImpurityUtils.getCentroid(binAggregates.metadata, categoryStats)
(featureValue, centroid)
}
// TODO(smurching): How to handle logging statements like these?
Member

What's the issue? You should be able to call logDebug if this object inherits from org.apache.spark.internal.Logging
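A minimal sketch of that suggestion (the object name and message are placeholders):

```scala
import org.apache.spark.internal.Logging

// Mixing in Logging makes logDebug / logInfo available; the message is only built
// when that log level is enabled.
private[impl] object SplitUtils extends Logging {
  def example(): Unit = {
    logDebug("Considering split candidates for the current feature.")
  }
}
```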

node: LearningNode): (Split, ImpurityStats) = {
val validFeatureSplits = getNonConstantFeatures(binAggregates.metadata, featuresForNode)
// For each (feature, split), calculate the gain, and select the best (feature, split).
val parentImpurityCalc = if (node.stats == null) None else Some(node.stats.impurityCalculator)
Member

Note to check: Will node.stats == null for the top level for sure?

Contributor Author

I believe so, the nodes at the top level are created (RandomForest.scala:178) with LearningNode.emptyNode, which sets node.stats = null.

I could change this to check node depth (via node index), but if we're planning on deprecating node indices in the future it might be best not to.

@@ -112,7 +113,7 @@ private[spark] object ImpurityStats {
* minimum number of instances per node.
*/
def getInvalidImpurityStats(impurityCalculator: ImpurityCalculator): ImpurityStats = {
new ImpurityStats(Double.MinValue, impurityCalculator.calculate(),
new ImpurityStats(Double.MinValue, impurity = -1,
Member

Q: Why -1 here?

Contributor Author

I changed this to -1 here since node impurity would eventually get set to -1 anyway when LearningNodes with invalid ImpurityStats were converted into decision tree leaf nodes (see LearningNode.toNode).

…ately reflect what the method actually does). Switch back to view, withFilter in getNonConstantFeatures
…e the map call in chooseUnorderedCategoricalSplit, orderedSplitHelper
@SparkQA

SparkQA commented Nov 15, 2017

Test build #83873 has finished for PR 19433 at commit 0b27c56.

  • This patch fails to generate documentation.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Nov 15, 2017

Test build #83874 has finished for PR 19433 at commit d86dd18.

  • This patch fails to generate documentation.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Oct 24, 2018

Test build #97977 has finished for PR 19433 at commit d86dd18.

  • This patch fails to generate documentation.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jan 22, 2019

Test build #101549 has finished for PR 19433 at commit d86dd18.

  • This patch fails to generate documentation.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jan 23, 2019

Test build #101588 has finished for PR 19433 at commit d86dd18.

  • This patch fails to generate documentation.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

@holdenk
Contributor

holdenk commented Feb 11, 2019

Is this still a thing you are actively working on?

@rstarosta

Thank you for your contribution! We've used this code extensively as a basis for our @cisco/oraf library, which incorporates local training into the existing decision tree and random forest APIs, and managed to significantly speed up the training process.

@holdenk
Contributor

holdenk commented May 10, 2019

That's cool, @rstarosta. Does having it in a library meet folks' needs so that we can close this PR?

@SparkQA

SparkQA commented Sep 13, 2019

Test build #110569 has finished for PR 19433 at commit d86dd18.

  • This patch fails to generate documentation.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

@github-actions

We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!

@github-actions github-actions bot added the Stale label Jan 15, 2020
@github-actions github-actions bot closed this Jan 16, 2020