
test: mark more tests as slow #9296

Open · wants to merge 4 commits into main

Conversation

anakin87 (Member)

@anakin87 anakin87 commented Apr 23, 2025

Related Issues

Proposed Changes:

  • mark as "slow" all Hugging Face-related tests that:
    • perform local inference (slow due to CPU inference; sometimes they fail when downloading the model from HF)
    • call HF Inference API (they frequently fail due to API instability)
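The marking itself uses pytest's custom-marker mechanism. A minimal hypothetical sketch (the test name and body are invented; the marker names are the ones used in this PR):

```python
import pytest

# Hypothetical example of the pattern applied in this PR: a Hugging
# Face-related test carries both markers so the default CI run can
# deselect it.
@pytest.mark.integration
@pytest.mark.slow
def test_local_inference_example():
    pass  # would perform local CPU inference in the real test

# The markers are attached to the function and visible to the collector:
marks = [m.name for m in test_local_inference_example.pytestmark]
```

CI can then skip these tests with `pytest -m "not slow"`; the `slow` marker would typically also be registered in the pytest configuration to avoid unknown-marker warnings.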

How did you test it?

CI

Notes for the reviewer

Checklist

  • I have read the contributors guidelines and the code of conduct
  • I have updated the related issue with new insights and changes
  • I added unit tests and updated the docstrings
  • I've used one of the conventional commit types for my PR title: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test: and added ! in case the PR includes breaking changes.
  • I documented my code
  • I ran pre-commit hooks and fixed any issue

@coveralls
Collaborator

coveralls commented Apr 23, 2025

Pull Request Test Coverage Report for Build 14621588111

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.009%) to 90.525%

Totals Coverage Status
  • Change from base Build 14613694888: 0.009%
  • Covered Lines: 10834
  • Relevant Lines: 11968

💛 - Coveralls

@anakin87 anakin87 changed the title test: mark tests as slow test: mark more tests as slow Apr 23, 2025
@anakin87 anakin87 marked this pull request as ready for review April 23, 2025 16:01
@anakin87 anakin87 requested a review from a team as a code owner April 23, 2025 16:01
@anakin87 anakin87 requested review from vblagoje and removed request for a team April 23, 2025 16:01
@@ -421,20 +423,21 @@ def test_run_small_batch_size(self, query, docs_before_texts, expected_first_tex

# Returns an empty list if no documents are provided
@pytest.mark.integration
@pytest.mark.slow
Contributor:

@anakin87 could be out of scope for this PR, but this test could be made into a unit test with a patch on the warm_up method.

Maybe worth noting down somewhere which of these tests could be made into unit tests?

def test_raises_component_error_if_model_not_warmed_up(self):
    sampler = TransformersSimilarityRanker()
    with pytest.raises(RuntimeError):
        sampler.run(query="query", documents=[Document(content="document")])
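The patch-based unit test suggested above could look roughly like this. `Ranker` here is a hypothetical stand-in, not Haystack's actual `TransformersSimilarityRanker`, whose `warm_up` downloads a model from the Hub:

```python
from unittest.mock import patch

# Hypothetical stand-in class: in the real component, warm_up() is the
# expensive step (model download + load) that makes the test an
# integration test.
class Ranker:
    def warm_up(self):
        raise RuntimeError("would download a model from the Hub")

    def run(self, query, documents):
        return {"documents": documents}

# Patching warm_up removes the download, turning this into a fast unit test:
with patch.object(Ranker, "warm_up", return_value=None):
    ranker = Ranker()
    ranker.warm_up()  # patched: no network access, no CPU inference
    result = ranker.run(query="q", documents=["doc"])
```

The same `patch.object` approach would apply to any component whose `warm_up` is the only slow step.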

@pytest.mark.integration
@pytest.mark.slow
@pytest.mark.parametrize(
    "query,docs_before_texts,expected_first_text",
    [
Contributor:


To help with speed, I don't think there is any reason to parametrize the tests in this file. A single case is enough to test the behavior, and not running three should also save on CPU inference time.
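As a hypothetical illustration of the point about parametrization (the ranking logic and data below are invented, not Haystack's), a single representative case exercises the same code path as a parametrized matrix:

```python
# Stand-in for model inference: score documents by word overlap with the
# query instead of running a real cross-encoder on CPU.
def rank(query, docs):
    query_words = set(query.split())
    return sorted(docs, key=lambda d: len(query_words & set(d.split())), reverse=True)

# One representative case instead of three parametrized ones:
def test_run_single_case():
    docs = ["berlin is in germany", "paris is in france"]
    assert rank("where is paris", docs)[0] == "paris is in france"

test_run_single_case()
```

Each extra parametrized case re-runs the (slow) inference without covering new branches, so trimming to one case cuts runtime roughly proportionally.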

@@ -378,6 +379,7 @@ def test_run(self, query, docs_before_texts, expected_first_text, scores):
assert docs_after[2].score == pytest.approx(sorted_scores[2], abs=1e-6)

@pytest.mark.integration
@pytest.mark.slow
Contributor:


Same as mentioned below: no need for multiple parametrized tests here; one would do.

@@ -336,6 +336,7 @@ def __init__(self):
assert ranker.device == ComponentDevice.from_multiple(DeviceMap.from_hf({"layer_1": 1, "classifier": "cpu"}))

@pytest.mark.integration
@pytest.mark.slow
Contributor:


Same comment here on reducing number of parametrized tests to one.

@sjrl sjrl (Contributor) left a comment:

A few minor comments, but otherwise looks good!

I also wanted to ask: what is the general guidance going forward when we add new tests? I.e., which ones should be marked as slow, and if they are marked as slow, do we need to update the slow.yml GitHub workflow?

3 participants