[Bug]: ValueError: Model architectures ['OPTForCausalLM'] failed to be inspected. #17031

Closed
1 task done
sydarb opened this issue Apr 23, 2025 · 2 comments · Fixed by #17043
Labels
bug Something isn't working

Comments

@sydarb
Contributor

sydarb commented Apr 23, 2025

Your current environment

The output of `python collect_env.py`
INFO 04-23 05:35:37 [__init__.py:239] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Amazon Linux 2023.7.20250414 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34

Python version: 3.9.21 (main, Mar 19 2025, 00:00:00)  [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime)
Python platform: Linux-6.1.132-147.221.amzn2023.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        48 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               4
On-line CPU(s) list:                  0-3
Vendor ID:                            AuthenticAMD
Model name:                           AMD EPYC 7R32
CPU family:                           23
Model:                                49
Thread(s) per core:                   2
Core(s) per socket:                   2
Socket(s):                            1
Stepping:                             0
BogoMIPS:                             5599.99
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext pti ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor:                    KVM
Virtualization type:                  full
L1d cache:                            64 KiB (2 instances)
L1i cache:                            64 KiB (2 instances)
L2 cache:                             1 MiB (2 instances)
L3 cache:                             8 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-3
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow:   Mitigation; safe RET
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.4.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.51.3
[pip3] triton==3.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.4
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
	GPU0	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	0-3	0		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

Installed the latest vLLM release with pip in a fresh virtual environment:

python3 -m venv vllm_env
source vllm_env/bin/activate
pip install vllm==0.8.4

Ran the following Python script:

from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Got the error output:

INFO 04-23 05:39:12 [__init__.py:239] Automatically detected platform cuda.
config.json: 100%|████████████████████████████████████████████████████████████████████████████████| 651/651 [00:00<00:00, 96.4kB/s]
ERROR 04-23 05:39:29 [registry.py:346] Error in inspecting model architecture 'OPTForCausalLM'
ERROR 04-23 05:39:29 [registry.py:346] Traceback (most recent call last):
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 578, in _run_in_subprocess
ERROR 04-23 05:39:29 [registry.py:346]     returned.check_returncode()
ERROR 04-23 05:39:29 [registry.py:346]   File "/usr/lib64/python3.9/subprocess.py", line 460, in check_returncode
ERROR 04-23 05:39:29 [registry.py:346]     raise CalledProcessError(self.returncode, self.args, self.stdout,
ERROR 04-23 05:39:29 [registry.py:346] subprocess.CalledProcessError: Command '['/home/ec2-user/vllm_env/bin/python', '-m', 'vllm.model_executor.models.registry']' returned non-zero exit status 1.
ERROR 04-23 05:39:29 [registry.py:346] 
ERROR 04-23 05:39:29 [registry.py:346] The above exception was the direct cause of the following exception:
ERROR 04-23 05:39:29 [registry.py:346] 
ERROR 04-23 05:39:29 [registry.py:346] Traceback (most recent call last):
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 344, in _try_inspect_model_cls
ERROR 04-23 05:39:29 [registry.py:346]     return model.inspect_model_cls()
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 315, in inspect_model_cls
ERROR 04-23 05:39:29 [registry.py:346]     return _run_in_subprocess(
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 581, in _run_in_subprocess
ERROR 04-23 05:39:29 [registry.py:346]     raise RuntimeError(f"Error raised in subprocess:\n"
ERROR 04-23 05:39:29 [registry.py:346] RuntimeError: Error raised in subprocess:
ERROR 04-23 05:39:29 [registry.py:346] /usr/lib64/python3.9/runpy.py:127: RuntimeWarning: 'vllm.model_executor.models.registry' found in sys.modules after import of package 'vllm.model_executor.models', but prior to execution of 'vllm.model_executor.models.registry'; this may result in unpredictable behaviour
ERROR 04-23 05:39:29 [registry.py:346]   warn(RuntimeWarning(msg))
ERROR 04-23 05:39:29 [registry.py:346] Traceback (most recent call last):
ERROR 04-23 05:39:29 [registry.py:346]   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
ERROR 04-23 05:39:29 [registry.py:346]     return _run_code(code, main_globals, None,
ERROR 04-23 05:39:29 [registry.py:346]   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
ERROR 04-23 05:39:29 [registry.py:346]     exec(code, run_globals)
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 602, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     _run()
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 595, in _run
ERROR 04-23 05:39:29 [registry.py:346]     result = fn()
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 316, in <lambda>
ERROR 04-23 05:39:29 [registry.py:346]     lambda: _ModelInfo.from_model_cls(self.load_model_cls()))
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 319, in load_model_cls
ERROR 04-23 05:39:29 [registry.py:346]     mod = importlib.import_module(self.module_name)
ERROR 04-23 05:39:29 [registry.py:346]   File "/usr/lib64/python3.9/importlib/__init__.py", line 127, in import_module
ERROR 04-23 05:39:29 [registry.py:346]     return _bootstrap._gcd_import(name[level:], package, level)
ERROR 04-23 05:39:29 [registry.py:346]   File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
ERROR 04-23 05:39:29 [registry.py:346]   File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
ERROR 04-23 05:39:29 [registry.py:346]   File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
ERROR 04-23 05:39:29 [registry.py:346]   File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
ERROR 04-23 05:39:29 [registry.py:346]   File "<frozen importlib._bootstrap_external>", line 850, in exec_module
ERROR 04-23 05:39:29 [registry.py:346]   File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/opt.py", line 36, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from vllm.model_executor.layers.logits_processor import LogitsProcessor
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/layers/logits_processor.py", line 13, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from vllm.model_executor.layers.vocab_parallel_embedding import (
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/layers/vocab_parallel_embedding.py", line 139, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     def get_masked_input_and_mask(
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/__init__.py", line 2536, in fn
ERROR 04-23 05:39:29 [registry.py:346]     return compile(
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/__init__.py", line 2565, in compile
ERROR 04-23 05:39:29 [registry.py:346]     return torch._dynamo.optimize(
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 842, in optimize
ERROR 04-23 05:39:29 [registry.py:346]     return _optimize(rebuild_ctx, *args, **kwargs)
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 917, in _optimize
ERROR 04-23 05:39:29 [registry.py:346]     backend.get_compiler_config()
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/__init__.py", line 2343, in get_compiler_config
ERROR 04-23 05:39:29 [registry.py:346]     from torch._inductor.compile_fx import get_patched_config_dict
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/compile_fx.py", line 97, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from .fx_passes.joint_graph import joint_graph_passes
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/fx_passes/joint_graph.py", line 22, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from ..pattern_matcher import (
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/pattern_matcher.py", line 95, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from .lowering import fallback_node_due_to_unsupported_type
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/lowering.py", line 6515, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from . import kernel
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/kernel/__init__.py", line 1, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from . import mm, mm_common, mm_plus_mm, unpack_mixed_mm
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/kernel/mm.py", line 16, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from torch._inductor.codegen.cpp_gemm_template import CppGemmTemplate
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/codegen/cpp_gemm_template.py", line 24, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from .cpp_micro_gemm import (
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/codegen/cpp_micro_gemm.py", line 16, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from .cpp_template_kernel import CppTemplateKernel
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/codegen/cpp_template_kernel.py", line 20, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from .cpp_wrapper_cpu import CppWrapperCpu
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/codegen/cpp_wrapper_cpu.py", line 22, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from .aoti_hipify_utils import maybe_hipify_code_wrapper
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/_inductor/codegen/aoti_hipify_utils.py", line 5, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     from torch.utils.hipify.hipify_python import PYTORCH_MAP, PYTORCH_TRIE
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/utils/hipify/hipify_python.py", line 770, in <module>
ERROR 04-23 05:39:29 [registry.py:346]     CAFFE2_TRIE = Trie()
ERROR 04-23 05:39:29 [registry.py:346]   File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/torch/utils/hipify/hipify_python.py", line 683, in __init__
ERROR 04-23 05:39:29 [registry.py:346]     self._hash = hashlib.md5()
ERROR 04-23 05:39:29 [registry.py:346] ValueError: [digital envelope routines] unsupported
ERROR 04-23 05:39:29 [registry.py:346] 
Traceback (most recent call last):
  File "/home/ec2-user/test_vllm/test_simple.py", line 11, in <module>
    llm = LLM(model="facebook/opt-125m")
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/utils.py", line 1099, in inner
    return fn(*args, **kwargs)
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/entrypoints/llm.py", line 248, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/engine/llm_engine.py", line 515, in from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context)
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/engine/arg_utils.py", line 1154, in create_engine_config
    model_config = self.create_model_config()
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/engine/arg_utils.py", line 1042, in create_model_config
    return ModelConfig(
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/config.py", line 489, in __init__
    self.multimodal_config = self._init_multimodal_config(
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/config.py", line 558, in _init_multimodal_config
    if self.registry.is_multimodal_model(self.architectures):
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 496, in is_multimodal_model
    model_cls, _ = self.inspect_model_cls(architectures)
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 456, in inspect_model_cls
    return self._raise_for_unsupported(architectures)
  File "/home/ec2-user/vllm_env/lib64/python3.9/site-packages/vllm/model_executor/models/registry.py", line 406, in _raise_for_unsupported
    raise ValueError(
ValueError: Model architectures ['OPTForCausalLM'] failed to be inspected. Please check the logs for more details.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
sydarb added the bug label on Apr 23, 2025
@DarkLight1337
Member

cc @russellb

@sydarb
Contributor Author

sydarb commented Apr 23, 2025

Update:
I am observing that this occurs only when FIPS compliance is enabled.
Specifically, the error can be reproduced by running torch.compile, and it goes away after passing usedforsecurity=False when instantiating hashlib.md5() everywhere within torch and vllm, e.g. at:

hash_str = hashlib.md5(str(factors).encode()).hexdigest()
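For context on the fix: on hosts where OpenSSL enforces FIPS mode, a bare hashlib.md5() call raises ValueError: [digital envelope routines] unsupported, which is exactly the innermost error in the traceback above. Since Python 3.9, the hashlib constructors accept a usedforsecurity=False keyword that declares the digest non-cryptographic, which FIPS policy permits. A minimal sketch of the pattern (the fingerprint helper name is mine for illustration, not a function in torch or vllm):

```python
import hashlib

def fingerprint(factors) -> str:
    """Non-cryptographic MD5 fingerprint of a config object.

    Mirrors the hashlib.md5(str(factors).encode()).hexdigest() pattern
    quoted above, but marks the digest as not security-relevant so
    FIPS-enforcing OpenSSL builds do not reject it (Python 3.9+).
    """
    digest = hashlib.md5(str(factors).encode(), usedforsecurity=False)
    return digest.hexdigest()

print(fingerprint({"model": "facebook/opt-125m"}))
```

On non-FIPS systems the output is byte-for-byte identical to the plain hashlib.md5() call; only the policy check at construction time changes.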
