In your paper, you reported the following results on SST-5 for RoBERTa without Self-Explaining as a baseline:
| Model | Accuracy |
|---|---|
| RoBERTa-base | 56.4% |
| RoBERTa-large | 57.9% |
The original paper by Liu et al. (2019b) does not list any results for SST-5, so I'm assuming you obtained these results yourself.
Could you share how you did that?
Did you fine-tune these baselines on the SST-5 dataset, or are these the performances right out of the box?
Many thanks in advance.
I have faced the same problem. I used the Hugging Face official script to fine-tune RoBERTa-base, but I can hardly achieve such results. Have you reproduced the baseline, and would you mind sharing the configuration?
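For reference, here is roughly how one might fine-tune RoBERTa-base on SST-5 with the Hugging Face `run_glue.py` example script. Since SST-5 is not a GLUE task, the data is passed in as custom CSV files; the file paths and all hyperparameters below are assumptions for illustration, not the authors' actual configuration:

```shell
# Sketch of a fine-tuning run using the official transformers example script.
# Assumes sst5_train.csv / sst5_dev.csv exist with "sentence" and "label" columns
# (hypothetical filenames); hyperparameters are illustrative guesses, not the
# paper's settings.
python run_glue.py \
  --model_name_or_path roberta-base \
  --train_file sst5_train.csv \
  --validation_file sst5_dev.csv \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --output_dir ./sst5-roberta-base
```

Results in the ~56% range typically require some sweep over learning rate and seed; a single run with default settings often lands a point or two lower.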