
Why is torch.cuda.is_available() False in main_task()? #21


Open
felix-ky opened this issue Apr 21, 2025 · 1 comment

Comments

@felix-ky

Thank you for open-sourcing the code.

I have a problem importing a neural network model in main_task(): torch.cuda.is_available() returns False.

I tried to allocate a GPU to the task by using @ray.remote(num_gpus=1) instead of num_cpus=1.

However, with that change the code blocks forever at ray.get([pg.ready() for pg in pgs]).
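Roughly what I changed, as a simplified sketch (the decorated function and the placement-group setup in the repo have more detail than shown here):

```python
import ray
from ray.util.placement_group import placement_group

# Original decorator uses num_cpus=1; I changed it to request a GPU
# so that torch can see CUDA inside the task.
@ray.remote(num_gpus=1)
def main_task():
    import torch
    return torch.cuda.is_available()

ray.init()

# Stand-in for the placement groups created elsewhere in the repo.
pgs = [placement_group([{"CPU": 1, "GPU": 1}])]
ray.get([pg.ready() for pg in pgs])  # <- this call never returns after my change
```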

@LiuRicky
Collaborator

Thanks for your interest.

For your first question, torch.cuda.is_available() returning False: I suspect something went wrong when you installed PyTorch. Try reinstalling the GPU (CUDA-enabled) build of torch. It may also be caused by your CUDA installation.
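For example, a quick generic check (not specific to this repo) of whether the installed torch build actually has CUDA support:

```python
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only build was installed
print(torch.version.cuda)         # None also means torch was built without CUDA
print(torch.cuda.is_available())  # stays False if the build lacks CUDA support or the
                                  # NVIDIA driver does not match the CUDA runtime
```

If torch.version.cuda is None, reinstall a CUDA-enabled build from the official PyTorch install selector; if it is set but is_available() is still False, check the driver with nvidia-smi.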

For the second question, the blocking code: we do not recommend changing any settings in main_task(). Try solving the CUDA or torch issue first.
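For reference, one generic way to see why the placement groups never become ready is to compare what they request with what Ray actually detects (a diagnostic sketch, not code from this repo):

```python
import ray

ray.init()  # or ray.init(address="auto") when attaching to a running cluster

# If "GPU" is missing here, or smaller than what the placement groups plus any
# @ray.remote(num_gpus=...) task request, pg.ready() can never be satisfied and
# ray.get([pg.ready() for pg in pgs]) blocks forever.
print(ray.cluster_resources())
print(ray.available_resources())
```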

Besides, you can also paste your error information into Deepseek or ChatGPT for suggestions.
