Metrics for categorical forecasting #2767
base: master
Conversation
Codecov Report

@@           Coverage Diff            @@
##           master    #2767    +/-   ##
==========================================
- Coverage   95.23%   95.03%   -0.21%
==========================================
  Files         145      146       +1
  Lines       15092    15182      +90
==========================================
+ Hits        14373    14428      +55
- Misses        719      754      +35
==========================================
Thanks @jonasblanc, this is a nice start 🚀
Some thoughts:
- remove per-time-step metric support for all metrics, except maybe the accuracy itself
- add support for probabilistic series:
  - if preds are sampled, take the label with the highest count
  - if preds are likelihood parameters, take the label with the highest probability
- for a first collection, we should have accuracy, precision, recall, f1
- if we don't add timeseries support for confusion matrix, etc., then we should make these functions private
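The sampled case mentioned above (reduce samples to the label with the highest count) can be sketched in a few lines of NumPy. This is only an illustration; `most_frequent_label` is a hypothetical helper name, not a Darts API:

```python
import numpy as np

def most_frequent_label(samples: np.ndarray) -> np.ndarray:
    """Reduce sampled categorical predictions to the majority label per step.

    samples: array of shape (n_time_steps, n_samples) holding integer labels.
    Returns an array of shape (n_time_steps,) with the most frequent label.
    """
    out = np.empty(samples.shape[0], dtype=samples.dtype)
    for t, row in enumerate(samples):
        labels, counts = np.unique(row, return_counts=True)
        out[t] = labels[np.argmax(counts)]  # label with the highest count
    return out

# three time steps, three samples each
preds = np.array([[0, 0, 1], [2, 2, 2], [1, 0, 1]])
most_frequent_label(preds)  # -> array([0, 2, 1])
```

The likelihood-parameter case would instead take an argmax over the class probabilities at each step.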
darts/metrics/categorical_metrics.py
Outdated
We could call it classification.py, then it's a bit nicer to import:
from darts.metrics.classification import acc
Maybe forecasting_classification, in order not to be confused with the upcoming TS classification?
darts/metrics/categorical_metrics.py
Outdated
    intersect,
    remove_nan_union=False,
)
return np.mean(
Here you already return the value aggregated over time, but your metric is defined as a per-time-step metric since it accepts the time_reduction.
Either return the time-dependent accuracy here (e.g. boolean 1/0 for whether it's a hit at each time step) and add a mean accuracy metric that aggregates it (see metrics.ae and metrics.mae as an example), or call it macc directly and remove the time_reduction.
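The ae/mae split suggested here could look roughly like the following sketch. The names mirror the comment's proposal; the decorators and actual Darts signatures are omitted for brevity:

```python
import numpy as np

def acc(actual: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """Per-time-step accuracy: 1.0 where the predicted label is a hit, else 0.0."""
    return (actual == pred).astype(float)

def macc(actual: np.ndarray, pred: np.ndarray) -> float:
    """Mean accuracy: aggregates the per-time-step metric over time."""
    return float(np.mean(acc(actual, pred)))

y_true = np.array([0, 1, 2, 1])
y_pred = np.array([0, 2, 2, 1])
acc(y_true, y_pred)   # -> array([1., 0., 1., 1.])
macc(y_true, y_pred)  # -> 0.75
```

This mirrors how metrics.ae (per-step absolute error) and metrics.mae (its time aggregate) relate in the existing metrics module.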
darts/metrics/categorical_metrics.py
Outdated
@multi_ts_support
@multivariate_support
def bacc(
These metrics below should not allow time_reduction, but should always return the aggregated metric.
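An aggregated-only metric of this kind would return a single scalar with no per-step output. A minimal sketch for precision of one class label (illustrative only; the real implementation would go through Darts' metric decorators):

```python
import numpy as np

def precision(actual: np.ndarray, pred: np.ndarray, label: int) -> float:
    """Aggregated precision for one class label; no per-time-step output."""
    predicted_pos = pred == label
    if not predicted_pos.any():
        # convention: no positive predictions -> precision 0.0
        return 0.0
    # fraction of positive predictions that are correct
    return float(np.mean(actual[predicted_pos] == label))

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([1, 1, 1, 0, 0])
precision(y_true, y_pred, label=1)  # two of three positive predictions are hits
```

Recall and f1 would follow the same aggregated-only shape.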
Checklist before merging this PR:
Fixes #.
Summary
Implement metrics for categorical forecasting.
TODO:
Other Information