test: mark more tests as slow #9296
base: main
Conversation
Pull Request Test Coverage Report for Build 14621588111 (Details)
💛 - Coveralls
```diff
@@ -421,20 +423,21 @@ def test_run_small_batch_size(self, query, docs_before_texts, expected_first_text
 
     # Returns an empty list if no documents are provided
     @pytest.mark.integration
+    @pytest.mark.slow
```
@anakin87 This could be out of scope for this PR, but this test could be made into a unit test with a patch on the `warm_up` method.
Maybe worth noting down somewhere which of these tests could be converted into unit tests?
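Roughly, such a unit test might look like the sketch below. The patch target, the `model` attribute name checked inside `run()`, and the exact return shape are assumptions for illustration, not taken from this PR.

```python
from unittest.mock import MagicMock, patch

from haystack.components.rankers import TransformersSimilarityRanker


def test_returns_empty_list_if_no_documents_unit():
    ranker = TransformersSimilarityRanker()
    # Replace warm_up with a no-op so no model is downloaded (keeps this a fast unit test).
    with patch.object(TransformersSimilarityRanker, "warm_up", lambda self: None):
        ranker.warm_up()
        # Hypothetical: fill whatever attribute the "warmed up" guard in run() checks;
        # the real attribute name may differ.
        ranker.model = MagicMock()
        assert ranker.run(query="query", documents=[]) == {"documents": []}
```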
```python
    def test_raises_component_error_if_model_not_warmed_up(self):
        sampler = TransformersSimilarityRanker()
        with pytest.raises(RuntimeError):
            sampler.run(query="query", documents=[Document(content="document")])

    @pytest.mark.integration
    @pytest.mark.slow
    @pytest.mark.parametrize(
        "query,docs_before_texts,expected_first_text",
        [
```
To slightly help with speed: I don't think there is any reason we should be parametrizing the tests in this file. Running just one case is enough to test the behavior, and dropping the other two should also help save on CPU inference time.
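For illustration, a single non-parametrized test might look like the sketch below; the query, document texts, and expected top result are made-up examples, not the actual test data.

```python
import pytest

from haystack import Document
from haystack.components.rankers import TransformersSimilarityRanker


@pytest.mark.integration
@pytest.mark.slow
def test_run_single_case():
    # One representative case instead of three parametrized ones (illustrative data).
    ranker = TransformersSimilarityRanker()
    ranker.warm_up()  # loads the real model, hence integration + slow
    docs = [
        Document(content="Berlin is a city in Germany."),
        Document(content="Paris is the capital of France."),
    ]
    result = ranker.run(query="What is the capital of France?", documents=docs)
    assert result["documents"][0].content == "Paris is the capital of France."
```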
```diff
@@ -378,6 +379,7 @@ def test_run(self, query, docs_before_texts, expected_first_text, scores):
         assert docs_after[2].score == pytest.approx(sorted_scores[2], abs=1e-6)
 
     @pytest.mark.integration
+    @pytest.mark.slow
```
Same as mentioned below: no need to have multiple parametrized tests here, one would be enough.
```diff
@@ -336,6 +336,7 @@ def __init__(self):
         assert ranker.device == ComponentDevice.from_multiple(DeviceMap.from_hf({"layer_1": 1, "classifier": "cpu"}))
 
     @pytest.mark.integration
+    @pytest.mark.slow
```
Same comment here on reducing the number of parametrized tests to one.
A few minor comments, but otherwise looks good!
I also wanted to ask what the general guidance is going forward when we add new tests: which ones should be marked as slow, and if they are marked as slow, do we need to update the slow.yml GitHub workflow?
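As a sketch of one possible convention, assuming the `slow` marker is registered in the project's pytest configuration and the scheduled slow.yml job selects tests by marker rather than by file (both assumptions):

```python
import pytest


@pytest.mark.integration  # exercises the real model
@pytest.mark.slow  # long-running; would be picked up by a marker-based slow-test job (assumption)
def test_new_expensive_behavior():
    ...
```

Under those assumptions, the slow job would select tests with something like `pytest -m "slow"` while regular CI excludes them with `-m "not slow"`, so a newly marked test would not require editing slow.yml unless the workflow lists test paths explicitly.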
Related Issues
Proposed Changes:
How did you test it?
CI
Notes for the reviewer
Checklist
- The PR title uses one of the conventional commit prefixes (`fix:`, `feat:`, `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`) and adds `!` in case the PR includes breaking changes.