Seeking Guidance: Addressing Performance-Related Warning Messages to Optimize Execution Speed #329
Comments
I am getting the same warnings. My environment:
Same for me on:
Hi @eanzero @dario-spagnolo @renhaa, you can turn off this warning by changing line 23 of sam2/sam2/modeling/sam/transformer.py (at commit 52198ea) to `OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True`. This would directly try out all the available kernels (instead of trying Flash Attention first and then falling back to other kernels upon errors).

@eanzero The error message above shows that the Flash Attention kernel failed, but PyTorch didn't print a further line explaining why it failed. Meanwhile, the GPU you're using (RTX 3070) has a CUDA compute capability of 8.6 according to https://developer.nvidia.com/cuda-gpus, so it should support Flash Attention in principle. A possible cause is a mismatch between your CUDA driver, CUDA runtime, and PyTorch versions that makes the Flash Attention kernels fail, especially given that you're using Windows. People have previously reported issues with Flash Attention on Windows (e.g. in pytorch/pytorch#108175 and Dao-AILab/flash-attention#553), and it could be the same issue in your case. To avoid these issues, it's recommended to use Windows Subsystem for Linux (WSL) if you're running on Windows.
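For context, here is a minimal, self-contained sketch (not the actual SAM 2 source; the tensor shapes are invented for illustration) of how flags like these typically gate PyTorch's kernel selection through the `torch.backends.cuda.sdp_kernel` context manager. With all three set to True, PyTorch is allowed to try every backend instead of erroring out when Flash Attention is unavailable:

```python
# Sketch only: shows how OLD_GPU / USE_FLASH_ATTN / MATH_KERNEL_ON style flags
# can feed into PyTorch's SDPA backend selection. Assumes a CUDA GPU.
import torch
import torch.nn.functional as F

OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True  # the suggested override

# Dummy attention inputs: (batch, heads, sequence length, head dim), half precision.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.bfloat16)

with torch.backends.cuda.sdp_kernel(
    enable_flash=USE_FLASH_ATTN,    # Flash Attention kernel
    enable_math=MATH_KERNEL_ON,     # plain math kernel, always available as a fallback
    enable_mem_efficient=OLD_GPU,   # memory-efficient kernel for GPUs without Flash Attention
):
    out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0)
```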
I met the same problem. My env:
In my own test, Flash Attention itself works, but it doesn't work inside SAM 2. The whole message is as follows:
Luckily, there is a PR that solves this problem. It works for me.
I had this warning: The above-mentioned PR (#322) fixed that issue for me.
The mentioned PR (#322) "works" as in: it silences the logs because it falls back to the next kernel. It will not make you use FlashAttention. I suspect the reason you have that issue is that you don't use autocast as described here. The way the code deals with dtypes when not using autocast is a bit weird.
Hey dude, it's not just "silencing the logs". If FlashAttention is set up correctly, it will work.
Glad it helps.
The true root cause of the error in the logs above is this:
This means that somewhere the code runs in 32-bit, while FlashAttention requires 16-bit (fp16 or bf16). You can fix this by using autocast or by patching the code base to add an explicit 16-bit cast.
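To make that concrete, here is a minimal sketch (assuming a CUDA GPU; shapes are invented for illustration) of the two fixes mentioned above: running the attention call under autocast, or casting q/k/v to a 16-bit dtype yourself before calling `scaled_dot_product_attention`:

```python
# Sketch only: Flash Attention requires fp16/bf16 inputs, so fp32 tensors must
# be converted one way or the other before F.scaled_dot_product_attention.
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 128, 64, device="cuda")  # fp32 by default

# Option 1: autocast downcasts eligible ops (SDPA included) to bf16 automatically.
with torch.autocast("cuda", dtype=torch.bfloat16):
    out = F.scaled_dot_product_attention(q, k, v)

# Option 2: cast the inputs explicitly (what a code-base patch would do).
out = F.scaled_dot_product_attention(
    q.to(torch.bfloat16), k.to(torch.bfloat16), v.to(torch.bfloat16)
)
```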
Well, before using (#322), I had installed flash-attention using:
Here is a way to know: instead of enabling fallbacks like #322 does, only enable the Flash Attention kernel. If it still works, it is fine; if it breaks, you were not using Flash Attention.
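A minimal standalone sketch of that check (assuming a CUDA GPU and the `torch.backends.cuda.sdp_kernel` context manager; shapes are invented): disable every backend except Flash Attention, so there is nothing to fall back to:

```python
# Sketch only: allow *only* the Flash Attention backend. If Flash Attention is
# actually usable in your setup, this runs; otherwise it raises
# "No available kernel. Aborting execution." like in the logs above.
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.bfloat16)

with torch.backends.cuda.sdp_kernel(
    enable_flash=True, enable_math=False, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)

print("Flash Attention kernel ran successfully:", out.shape)
```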
I agree with you: after applying the changes in the mentioned PR, the training speed is still the same.
Thank you for taking the time to review my question.
Before I proceed, I would like to mention that I am a beginner, and I would appreciate your patience with that.
I am seeking help with resolving the following warnings in order to improve execution speed. I am able to obtain results, but the run emits the warning messages listed below. From my research, I understand that these warnings can affect execution speed, but I have not been able to find a solution, hence my question.
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\backbones\hieradet.py:68: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
x = F.scaled_dot_product_attention(
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Memory efficient kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:723.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Memory Efficient attention has been runtime disabled. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen/native/transformers/sdp_utils_cpp.h:495.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Flash attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:725.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: CuDNN attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:727.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: The CuDNN backend needs to be enabled by setting the enviornment variable
TORCH_CUDNN_SDPA_ENABLED=1
(Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:497.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\anaconda3\envs\ddd\Lib\site-packages\torch\nn\modules\module.py:1562: UserWarning: Flash Attention kernel failed due to: No available kernel. Aborting execution.
Falling back to all available kernels for scaled_dot_product_attention (which may have a slower speed).
return forward_call(*args, **kwargs)
My execution environment is as follows:
The CUDA environment on the host machine is:
Cuda compilation tools, release 12.5, V12.5.82 Build cuda_12.5.r12.5/compiler.34385749_0
I would greatly appreciate any guidance on how to address these warnings. Thank you in advance for your help.