Vectorize `remove_copy` and `unique_copy` #5355
Conversation
Less error prone, especially if implementing `_copy` someday
Conflicts:
- benchmarks/src/unique.cpp
- stl/inc/algorithm
- stl/src/vector_algorithms.cpp
Thanks! 😻 I pushed fixes for significant bugs in the "can compare equal to value type" codepaths, and expanded the test coverage (auditing all existing codepaths, and verifying that the new tests caught the bugs). Please double-check. 5950X results:
All looks good.
/azp run STL-ASan-CI
Azure Pipelines successfully started running 1 pipeline(s).
I'm mirroring this to the MSVC-internal repo - please notify me if any further changes are pushed.
I resolved a trivial adjacent-add conflict with #5352 in
Thanks for these unique speedups, removing all that execution time! 😹 🚀 🤪
⚙️ The optimization

`remove_copy` and `unique_copy` differ from their non-`_copy` counterparts in that they have no room they are allowed to overwrite. This means we can't directly store results from vector registers.

The previous attempt #5062 tried to use masked stores to bypass that limitation. Unfortunately, masked stores don't perform well on some CPUs. Also, the minimum granularity of an AVX2 masked store is 32 bits, so it would not work for smaller elements.

This time, temporary storage comes to the rescue. The algorithms already use some additional memory (the tables), so why not use a bit more. I arbitrarily picked 512 bytes, which should not be too much. Each time the temporary buffer is full, it is copied to the destination with `memcpy`, which should be fast enough for this buffer size.

🚫 No `find` before `remove_copy`

In #4987, it was explained that doing `find` before `remove` is good for both correctness and performance. Originally this was in the vectorization code, but during the review @StephanTLavavej observed that it is already done in the headers (#4987 (comment)).

For `remove_copy` it is not necessary for correctness, and it may be harmful for performance: `find` would be needed in addition to `copy`, making a double pass over the input, which can worsen performance for large inputs in memory-bound situations.

We could add special handling of the range before the first match in the vectorization code; that is another story, and it would not be harmful, but I'm not doing it in this PR. Maybe later.
So, as we have not called `find`, and thus have not checked whether the value can even compare equal to the iterator's value type, we need the `_Could_compare_equal_to_value_type` check here.

✅ Test coverage

Shared with the non-`_copy` counterparts to save total test run time and some lines of code, at the expense of otherwise unnecessary coupling.

We check both the modified and unmodified parts of the destination, to make sure the unmodified part was indeed left untouched.
⏱️ Benchmark results
🥇 Results interpretation
Good improvement!

Not as good as for the non-`_copy` counterparts though, as `memcpy` takes some noticeable time.

The usual codegen gremlins that cause result variation are observed for non-vectorized tight loops. I've marked the most notorious one with a clown. I can't explain that anomaly.