Issues: intel/ipex-llm
#13130 (user issue): Llama.cpp portable fails to initialise with context sizes above 22528 (24 x 1024). Opened May 3, 2025 by HumerousGorgon.
#13126 (user issue): Instructions for Ollama installation with IPEX on multi-GPU fail to work with Arc GPU. Opened Apr 30, 2025 by Kaszanas.
#13124 (user issue): ImportError: cannot import name 'LoadLoraAdapterRequest'. Opened Apr 30, 2025 by kirel.
#13121: Hope that the Qwen3 series models can be supported soon. Opened Apr 29, 2025 by brownplayer.
#13112 (user issue): IPv6 needs to be disabled before PPA install. Opened Apr 26, 2025 by dennis-george0.
#13108 (user issue): [XPU] Library mismatch and version issue while performing fine-tuning on B580. Opened Apr 24, 2025 by raj-ritu17.
#13107 (user issue): Ollama failed to run deepseek-coder-v2; Error: unable to load model. Opened Apr 24, 2025 by weryswang.
#13106 (user issue): Can't set Ollama context size - seems to be fixed at 8k. Opened Apr 24, 2025 by kirel.
#13090 (user issue): --verbose-prompt does not print any additional information. Opened Apr 17, 2025 by HanShengGoodWay.
#13086: Feature request: add support for the Mojo language and MAX platform. Opened Apr 17, 2025 by NewtonChutney.
#13080 (user issue): IPEX-LLM slow token generation on Gemma 3 12B on Arc A770M. Opened Apr 15, 2025 by Sketchfellow.
Tip: find all open issues with in-progress development work using the linked:pr search qualifier (for example, is:issue is:open linked:pr).