Description
Is there an existing issue for this problem?
- I have searched the existing issues
Operating system
Linux
GPU vendor
AMD (ROCm)
GPU model
AMD RYZEN AI MAX+ 395 w/ Radeon 8060S
GPU VRAM
96GB
Version number
5.12.0rc1
Browser
Brave
Python dependencies
{
  "version": "5.12.0rc1",
  "dependencies": {
    "accelerate": "1.7.0",
    "compel": "2.0.2",
    "cuda": null,
    "diffusers": "0.33.0",
    "numpy": "1.26.3",
    "opencv": "4.9.0.80",
    "onnx": "1.16.1",
    "pillow": "11.0.0",
    "python": "3.12.9",
    "torch": "2.7.0+rocm6.3",
    "torchvision": "0.22.0+rocm6.3",
    "transformers": "4.51.3",
    "xformers": null
  },
  "config": {
    "schema_version": "4.0.2",
    "legacy_models_yaml_path": null,
    "host": "127.0.0.1",
    "port": 9090,
    "allow_origins": [],
    "allow_credentials": true,
    "allow_methods": ["*"],
    "allow_headers": ["*"],
    "ssl_certfile": null,
    "ssl_keyfile": null,
    "log_tokenization": false,
    "patchmatch": true,
    "models_dir": "models",
    "convert_cache_dir": "models/.convert_cache",
    "download_cache_dir": "models/.download_cache",
    "legacy_conf_dir": "configs",
    "db_dir": "databases",
    "outputs_dir": "outputs",
    "custom_nodes_dir": "nodes",
    "style_presets_dir": "style_presets",
    "workflow_thumbnails_dir": "workflow_thumbnails",
    "log_handlers": ["console"],
    "log_format": "color",
    "log_level": "info",
    "log_sql": false,
    "log_level_network": "warning",
    "use_memory_db": false,
    "dev_reload": false,
    "profile_graphs": false,
    "profile_prefix": null,
    "profiles_dir": "profiles",
    "max_cache_ram_gb": null,
    "max_cache_vram_gb": null,
    "log_memory_usage": false,
    "device_working_mem_gb": 3,
    "enable_partial_loading": false,
    "keep_ram_copy_of_weights": true,
    "ram": null,
    "vram": null,
    "lazy_offload": true,
    "pytorch_cuda_alloc_conf": null,
    "device": "auto",
    "precision": "auto",
    "sequential_guidance": false,
    "attention_type": "auto",
    "attention_slice_size": "auto",
    "force_tiled_decode": false,
    "pil_compress_level": 1,
    "max_queue_size": 10000,
    "clear_queue_on_startup": false,
    "allow_nodes": null,
    "deny_nodes": null,
    "node_cache_size": 512,
    "hashing_algorithm": "blake3_single",
    "remote_api_tokens": null,
    "scan_models_on_startup": false
  },
  "set_config_fields": ["legacy_models_yaml_path"]
}
What happened
Two errors: a bitsandbytes load failure on launch, and a HIP "invalid device function" error on image generation.
Starting up...
Preparing first run of this install - may take a minute or two...
Started Invoke process with PID: 60646
[2025-05-20 12:03:35,390]::[InvokeAI]::INFO --> Using torch device: AMD Radeon Graphics
Could not load bitsandbytes native library: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/bitsandbytes/cextension.py", line 85, in <module>
lib = get_native_library()
^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/bitsandbytes/cextension.py", line 64, in get_native_library
cuda_specs = get_cuda_specs()
^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/bitsandbytes/cuda_specs.py", line 39, in get_cuda_specs
cuda_version_string=(get_cuda_version_string()),
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/bitsandbytes/cuda_specs.py", line 29, in get_cuda_version_string
major, minor = get_cuda_version_tuple()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/bitsandbytes/cuda_specs.py", line 24, in get_cuda_version_tuple
major, minor = map(int, torch.version.cuda.split("."))
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
CUDA Setup failed despite CUDA being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/bitsandbytes-foundation/bitsandbytes/issues
[2025-05-20 12:03:37,389]::[InvokeAI]::INFO --> cuDNN version: 3003000
[2025-05-20 12:03:39,613]::[InvokeAI]::INFO --> Patchmatch initialized
[2025-05-20 12:03:40,132]::[InvokeAI]::INFO --> InvokeAI version 5.12.0rc1
[2025-05-20 12:03:40,132]::[InvokeAI]::INFO --> Root directory = /home/mho/invokeai
[2025-05-20 12:03:40,133]::[InvokeAI]::INFO --> Initializing database at /home/mho/invokeai/databases/invokeai.db
[2025-05-20 12:03:40,136]::[InvokeAI]::INFO --> Database update needed
[2025-05-20 12:03:40,136]::[InvokeAI]::INFO --> Backing up database to /home/mho/invokeai/databases/invokeai_backup_20250520-120340.db
[2025-05-20 12:03:40,145]::[InvokeAI]::INFO --> Database updated successfully
[2025-05-20 12:03:40,147]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 60897.80 MB. Heuristics applied: [1, 2].
[2025-05-20 12:03:40,153]::[InvokeAI]::INFO --> Pruned 1 finished queue items
[2025-05-20 12:03:40,179]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090/ (Press CTRL+C to quit)
[2025-05-20 12:03:51,287]::[InvokeAI]::INFO --> Executing queue item 7, session 6b5299d0-0d7a-4cc5-ac18-51a91ca055cb
[2025-05-20 12:03:51,828]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '26d526b6-fd71-45d3-874d-3aa7b3ca57c2:text_encoder' (CLIPTextModel) onto cuda device in 0.37s. Total model size: 234.72MB, VRAM: 234.72MB (100.0%)
[2025-05-20 12:03:51,878]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '26d526b6-fd71-45d3-874d-3aa7b3ca57c2:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-05-20 12:03:51,959]::[InvokeAI]::ERROR --> Error while invoking session 6b5299d0-0d7a-4cc5-ac18-51a91ca055cb, invocation d44433b7-d383-4bc8-b635-ee04e9ae73e0 (sdxl_compel_prompt): HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
[2025-05-20 12:03:51,959]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/invokeai/app/invocations/baseinvocation.py", line 241, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/invokeai/app/invocations/compel.py", line 268, in invoke
c1, c1_pooled = self.run_clip_compel(context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/invokeai/app/invocations/compel.py", line 217, in run_clip_compel
c, _options = compel.build_conditioning_tensor_for_conjunction(conjunction)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/compel/compel.py", line 186, in build_conditioning_tensor_for_conjunction
this_conditioning, this_options = self.build_conditioning_tensor_for_prompt_object(p)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/compel/compel.py", line 218, in build_conditioning_tensor_for_prompt_object
return self._get_conditioning_for_flattened_prompt(prompt), {}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/compel/compel.py", line 282, in _get_conditioning_for_flattened_prompt
return self.conditioning_provider.get_embeddings_for_weighted_prompt_fragments(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/compel/embeddings_provider.py", line 120, in get_embeddings_for_weighted_prompt_fragments
base_embedding = self.build_weighted_embedding_tensor(tokens, per_token_weights, mask, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/compel/embeddings_provider.py", line 357, in build_weighted_embedding_tensor
empty_z = self._encode_token_ids_to_embeddings(empty_token_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/compel/embeddings_provider.py", line 390, in _encode_token_ids_to_embeddings
text_encoder_output = self.text_encoder(token_ids,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/transformers/utils/generic.py", line 965, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 1049, in forward
return self.text_model(
^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/transformers/utils/generic.py", line 965, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 945, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 292, in forward
inputs_embeds = self.token_embedding(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/invokeai/backend/model_manager/load/model_cache/torch_module_autocast/custom_modules/custom_embedding.py", line 29, in forward
return super().forward(input)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/modules/sparse.py", line 190, in forward
return F.embedding(
^^^^^^^^^^^^
File "/home/mho/invokeai/.venv/lib/python3.12/site-packages/torch/nn/functional.py", line 2551, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
[2025-05-20 12:03:51,981]::[InvokeAI]::INFO --> Graph stats: 6b5299d0-0d7a-4cc5-ac18-51a91ca055cb
Node                 Calls  Seconds  VRAM Used
sdxl_model_loader        1   0.005s     0.000G
sdxl_compel_prompt       1   0.653s     0.236G
TOTAL GRAPH EXECUTION TIME: 0.658s
TOTAL GRAPH WALL TIME: 0.659s
RAM used by InvokeAI process: 1.81G (+0.395G)
RAM used to load models: 0.23G
VRAM in use: 0.236G
RAM cache statistics:
Model cache hits: 2
Model cache misses: 2
Models cached: 2
Models cleared from cache: 0
Cache high water mark: 0.23/0.00G
Shutting down...
[2025-05-20 12:04:10,156]::[ModelInstallService]::INFO --> Installer thread 126389474817728 exiting
Process exited with signal SIGTERM
We'll activate the virtual environment for the install at /home/mho/invokeai.
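
For reference, the bitsandbytes failure at launch looks independent of the generation failure: ROCm wheels of torch report no CUDA version (torch.version.cuda is None; the version string lives in torch.version.hip), and bitsandbytes parses the CUDA version unguarded. A minimal sketch of the failing check, assuming the same 2.7.0+rocm6.3 wheel:

import torch

# ROCm wheels carry a HIP version instead of a CUDA version.
print(torch.version.cuda)  # None on torch 2.7.0+rocm6.3
print(torch.version.hip)   # set on the same wheel, e.g. "6.3..."

# bitsandbytes' get_cuda_version_tuple() runs this without a None guard,
# which is the AttributeError in the traceback above:
major, minor = map(int, torch.version.cuda.split("."))

The bitsandbytes error is non-fatal (startup continues past it); the generation failure comes from the separate HIP "invalid device function" error.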
What you expected to happen
No errors; an image is generated.
How to reproduce the problem
Run Invoke on a computer with an AMD RYZEN AI MAX+ 395 w/ Radeon 8060S (gfx1151) and attempt to generate an image.
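
A HIP "invalid device function" error usually means the wheel does not ship kernels for the device's gfx target. A torch-only sketch (untested on this machine; same venv assumed) that compares the wheel's compiled targets with the device and should hit the same error on the first kernel launch:

import torch

# ROCm builds of torch expose the device's gfx target and the list of
# targets the wheel was compiled for.
props = torch.cuda.get_device_properties(0)
print(props.gcnArchName)           # expected: gfx1151 on this APU
print(torch.cuda.get_arch_list())  # gfx targets baked into the wheel

# The first kernel launch mirrors the failing F.embedding call above.
emb = torch.nn.Embedding(10, 4).to("cuda")
ids = torch.tensor([[1, 2, 3]], device="cuda")
print(emb(ids))  # RuntimeError: HIP error: invalid device function

If gfx1151 is missing from the arch list, the failure comes from the wheel's kernel coverage rather than from InvokeAI itself.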
Additional context
No response
Discord username
No response