[BUG] For model larger than 2GB: model with IR version >= 3 must specify opset_import for ONNX
Describe the bug
The model is llama2-7b, with MHA replaced by a simple local function such as lambda q, k, v: q. When the number of layers is set to 2, onnxsim works fine. When the number of layers is set to 32 and the exported model exceeds 2 GB, simplification fails:
Simplifying...
Traceback (most recent call last):
File "/usr/local/bin/onnxsim", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/onnxsim/onnx_simplifier.py", line 489, in main
model_opt, check_ok = simplify(
^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/onnxsim/onnx_simplifier.py", line 199, in simplify
model_opt_bytes = C.simplify(
^^^^^^^^^^^
RuntimeError: model with IR version >= 3 must specify opset_import for ONNX
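For context, a minimal repro sketch of the setup described above. The file names, opset version, and the way the attention stub is applied (patching scaled_dot_product_attention) are assumptions on my side, not the exact script behind this report:

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Stub out attention so each MHA block reduces to an identity on q,
# in the spirit of the "lambda q, k, v: q" replacement described above.
torch.nn.functional.scaled_dot_product_attention = (
    lambda q, k, v, *args, **kwargs: q
)

cfg = LlamaConfig(num_hidden_layers=32)   # 2 layers simplifies fine, 32 does not
model = LlamaForCausalLM(cfg).eval()

input_ids = torch.ones(1, 8, dtype=torch.long)
torch.onnx.export(model, (input_ids,), "llama2_7b.onnx", opset_version=17)
# onnxsim llama2_7b.onnx llama2_7b.sim.onnx   -> RuntimeError shown above
```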
Versions:
onnxsim.__version__: 0.4.36
torch.__version__: '2.7.0a0+7c8ec84dab.nv25.03'
onnxruntime.__version__: '1.18.0'
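A possible first diagnostic step (not a confirmed fix): check whether the exported >2 GB model still carries opset_import, and re-attach it before running onnxsim again. The file names and the opset number below are assumptions:

```python
import onnx
from onnx import helper

# Load only the graph proto; the >2GB weights stay in their external data files.
model = onnx.load("llama2_7b.onnx", load_external_data=False)
print("ir_version:", model.ir_version)
print("opset_import:", list(model.opset_import))

# If opset_import is empty, re-attach a default-domain opset and re-save the
# small proto; the external weight files are left in place and still referenced.
if not model.opset_import:
    model.opset_import.append(helper.make_opsetid("", 17))
    onnx.save(model, "llama2_7b.fixed.onnx")
```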