
[BUG] Run 8-bit and 16-bit gemma3-4b #13099

Open

bibekyess opened this issue Apr 21, 2025 · 5 comments

@bibekyess

Hello!

I am experimenting with gemma-3-4b and noticed that the 4-bit version works smoothly:

ollama run modelscope.cn/lmstudio-community/gemma-3-4b-it-GGUF

The same model repository also has an 8-bit version, but when I execute the following command, it gives an error.

ollama run modelscope.cn/lmstudio-community/gemma-3-4b-it-GGUF:Q8_0

I also want to run the 16-bit version. Is that possible in the current state?
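For reference, assuming the repository also publishes a 16-bit build under its tag scheme, the command would presumably look like the following (the exact tag name is an assumption; check the tags the repository actually lists):

ollama run modelscope.cn/lmstudio-community/gemma-3-4b-it-GGUF:F16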

IpexLLM-Ollama Version
ollama version is 0.5.4-ipexllm-20250318. I also tried the latest shared bare-metal executable (tag v2.2.0); it gives the same issue.

Device Details
Device Name: LG Gram Pro Laptop (MFD 2024/05)
Operating System: Windows 11
Processor: Intel(R) Core(TM) Ultra 7 155H @ 3.80 GHz
RAM: 32.0 GB (31.5 GB usable)
Graphics and NPU VRAM: 16.0 GB usable (Intel(R) Arc(TM) Graphics and Intel(R) AI Boost)
System Type: 64-bit operating system, x64-based processor
GPU driver version: 32.0.101.6734

The full error message with 8-bit gemma3-4b is attached below:


18:18:00.894 > stderr: time=2025-04-21T18:18:00.893+09:00 level=INFO source=runner.go:967 msg="starting go runner"
time=2025-04-21T18:18:00.893+09:00 level=INFO source=runner.go:968 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | cgo(clang)" threads=6

18:18:00.896 > stderr: time=2025-04-21T18:18:00.894+09:00 level=INFO source=runner.go:1026 msg="Server listening on 127.0.0.1:57297"
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_load_model_from_file: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16847 MiB free

18:18:00.937 > stderr: llama_model_loader: loaded meta data with 40 key-value pairs and 444 tensors from C:\Users\bibek\neoali\models\blobs\sha256-283baeca5e0ffc2a7f6cd56b9b7b5ce1d4dda08ca11f11afa869127caf745e94 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Gemma 3 4b It
llama_model_loader: - kv   3:                           general.finetune str              = it
llama_model_loader: - kv   4:                           general.basename str              = gemma-3
llama_model_loader: - kv   5:                         general.size_label str              = 4B
llama_model_loader: - kv   6:                            general.license str              = gemma
llama_model_loader: - kv   7:                   general.base_model.count u32              = 1
llama_model_loader: - kv   8:                  general.base_model.0.name str              = Gemma 3 4b Pt
llama_model_loader: - kv   9:          general.base_model.0.organization str              = Google
llama_model_loader: - kv  10:              general.base_model.0.repo_url str              = https://huggingface.co/google/gemma-3...
llama_model_loader: - kv  11:                               general.tags arr[str,1]       = ["image-text-to-text"]
llama_model_loader: - kv  12:                      gemma3.context_length u32              = 131072
llama_model_loader: - kv  13:                    gemma3.embedding_length u32              = 2560
llama_model_loader: - kv  14:                         gemma3.block_count u32              = 34
llama_model_loader: - kv  15:                 gemma3.feed_forward_length u32              = 10240
llama_model_loader: - kv  16:                gemma3.attention.head_count u32              = 8
llama_model_loader: - kv  17:    gemma3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  18:                gemma3.attention.key_length u32              = 256
llama_model_loader: - kv  19:              gemma3.attention.value_length u32              = 256
llama_model_loader: - kv  20:                      gemma3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:            gemma3.attention.sliding_window u32              = 1024
llama_model_loader: - kv  22:             gemma3.attention.head_count_kv u32              = 4
llama_model_loader: - kv  23:                   gemma3.rope.scaling.type str              = linear
llama_model_loader: - kv  24:                 gemma3.rope.scaling.factor f32              = 8.000000
llama_model_loader: - kv  25:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  26:                         tokenizer.ggml.pre str              = default

18:18:01.001 > stderr: llama_model_loader: - kv  27:                      tokenizer.ggml.tokens arr[str,262144]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...

18:18:01.124 > stderr: llama_model_loader: - kv  28:                      tokenizer.ggml.scores arr[f32,262144]  = [-1000.000000, -1000.000000, -1000.00...

18:18:01.127 > stderr: time=2025-04-21T18:18:01.126+09:00 level=INFO source=server.go:605 msg="waiting for server to become available" status="llm server loading model"  

18:18:01.145 > stderr: llama_model_loader: - kv  29:                  tokenizer.ggml.token_type arr[i32,262144]  = [3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  30:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  31:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  32:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  35:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  36:                    tokenizer.chat_template str              = {{ bos_token }}\n{%- if messages[0]['r...
llama_model_loader: - kv  37:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  38:               general.quantization_version u32              = 2
llama_model_loader: - kv  39:                          general.file_type u32              = 7
llama_model_loader: - type  f32:  205 tensors
llama_model_loader: - type q8_0:  239 tensors

18:18:01.294 > stderr: llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect

18:18:01.296 > stderr: llm_load_vocab: special tokens cache size = 6414

18:18:01.325 > stderr: llm_load_vocab: token to piece cache size = 1.9446 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = gemma3
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 262144
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 131072
llm_load_print_meta: n_embd           = 2560
llm_load_print_meta: n_layer          = 34
llm_load_print_meta: n_head           = 8
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_rot            = 256
llm_load_print_meta: n_swa            = 1024
llm_load_print_meta: n_embd_head_k    = 256
llm_load_print_meta: n_embd_head_v    = 256
llm_load_print_meta: n_gqa            = 2
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: f_attn_scale     = 6.2e-02
llm_load_print_meta: n_ff             = 10240
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 0.125
llm_load_print_meta: n_ctx_orig_yarn  = 131072
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 4B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 3.88 B
llm_load_print_meta: model size       = 3.84 GiB (8.50 BPW)
llm_load_print_meta: general.name     = Gemma 3 4b It
llm_load_print_meta: BOS token        = 2 '<bos>'
llm_load_print_meta: EOS token        = 1 '<eos>'
llm_load_print_meta: EOT token        = 106 '<end_of_turn>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: PAD token        = 0 '<pad>'
llm_load_print_meta: LF token         = 248 '<0x0A>'
llm_load_print_meta: EOG token        = 1 '<eos>'
llm_load_print_meta: EOG token        = 106 '<end_of_turn>'
llm_load_print_meta: max token length = 48
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory

18:18:01.331 > stderr: get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory     

18:18:02.175 > stderr: get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory     

18:18:02.339 > stderr: llm_load_tensors: offloading 34 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 35/35 layers to GPU
llm_load_tensors:        SYCL0 model buffer size =  3932.65 MiB
llm_load_tensors:    SYCL_Host model buffer size =   680.00 MiB

18:18:12.178 > stderr: llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 2048
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 0.125
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
[SYCL] call ggml_check_sycl
ggml_check_sycl: GGML_SYCL_DEBUG: 0
ggml_check_sycl: GGML_SYCL_F16: no
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                     Intel Arc Graphics|  12.71|    128|    1024|   32| 17665M|            1.6.32960|

18:18:12.278 > stderr: llama_kv_cache_init:      SYCL0 KV buffer size =   272.00 MiB
llama_new_context_with_model: KV self size  =  272.00 MiB, K (f16):  136.00 MiB, V (f16):  136.00 MiB

18:18:12.280 > stderr: llama_new_context_with_model:  SYCL_Host  output buffer size =     1.01 MiB

18:18:12.661 > stderr: llama_new_context_with_model:      SYCL0 compute buffer size =   517.00 MiB
llama_new_context_with_model:  SYCL_Host compute buffer size =    13.01 MiB
llama_new_context_with_model: graph nodes  = 1401
llama_new_context_with_model: graph splits = 2

18:18:12.664 > stderr: key general.file_type not found in file

18:18:12.668 > stderr: Exception 0xe06d7363 0x19930520 0x51d2ff770 0x7ffe0c5c933a    
PC=0x7ffe0c5c933a
signal arrived during external code execution

runtime.cgocall(0x7ff7c7a1d8b0, 0xc00047dc78)
        runtime/cgocall.go:167 +0x3e fp=0xc00047dc50 sp=0xc00047dbe8 pc=0x7ff7c6e69c1e
ollama/llama/llamafile._Cfunc_clip_model_load(0x1d9000f15f0, 0x1)
        _cgo_gotypes.go:307 +0x56 fp=0xc00047dc78 sp=0xc00047dc50 pc=0x7ff7c723f8d6  
ollama/llama/llamafile.NewClipContext(0xc000384bd0, {0xc000040230, 0x6a})
        ollama/llama/llamafile/llama.go:488 +0x90 fp=0xc00047dd38 sp=0xc00047dc78 pc=0x7ff7c7246cd0

18:18:12.671 > stderr: ollama/llama/runner.NewImageContext(0xc000384bd0, {0xc000040230, 0x6a})
        ollama/llama/runner/image.go:37 +0xf8 fp=0xc00047ddb8 sp=0xc00047dd38 pc=0x7ff7c724be58
ollama/llama/runner.(*Server).loadModel(0xc0000fd560, {0x3e7, 0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc000208820, 0x0}, ...)
        ollama/llama/runner/runner.go:881 +0x24f fp=0xc00047df10 sp=0xc00047ddb8 pc=0x7ff7c72519cf
ollama/llama/runner.Execute.gowrap1()
        ollama/llama/runner/runner.go:1001 +0xda fp=0xc00047dfe0 sp=0xc00047df10 pc=0x7ff7c72533da
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00047dfe8 sp=0xc00047dfe0 pc=0x7ff7c6e78901
created by ollama/llama/runner.Execute in goroutine 1
        ollama/llama/runner/runner.go:1001 +0xd0d

goroutine 1 gp=0xc00008e000 m=nil [IO wait]:
runtime.gopark(0x7ff7c6e7a0c0?, 0x7ff7c8613ac0?, 0x20?, 0x4f?, 0xc0001f4fcc?)        
        runtime/proc.go:424 +0xce fp=0xc000587418 sp=0xc0005873f8 pc=0x7ff7c6e703ce  

18:18:12.672 > stderr: runtime.netpollblock(0x3c0?, 0xc6e08366?, 0xf7?)
        runtime/netpoll.go:575 +0xf7 fp=0xc000587450 sp=0xc000587418 pc=0x7ff7c6e34f97
internal/poll.runtime_pollWait(0x1d97ee7ec90, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc000587470 sp=0xc000587450 pc=0x7ff7c6e6f645
internal/poll.(*pollDesc).wait(0x7ff7c6f02bd5?, 0x7ff7c6e6ae7d?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000587498 sp=0xc000587470 pc=0x7ff7c6f04207
internal/poll.execIO(0xc0001f4f20, 0xc000587540)
        internal/poll/fd_windows.go:177 +0x105 fp=0xc000587510 sp=0xc000587498 pc=0x7ff7c6f05645
internal/poll.(*FD).acceptOne(0xc0001f4f08, 0x3d4, {0xc0003860f0?, 0xc0005875a0?, 0x7ff7c6f0d3c5?}, 0xc0005875d4?)
        internal/poll/fd_windows.go:946 +0x65 fp=0xc000587570 sp=0xc000587510 pc=0x7ff7c6f09c85
internal/poll.(*FD).Accept(0xc0001f4f08, 0xc000587720)
        internal/poll/fd_windows.go:980 +0x1b6 fp=0xc000587628 sp=0xc000587570 pc=0x7ff7c6f09fb6
net.(*netFD).accept(0xc0001f4f08)
        net/fd_windows.go:182 +0x4b fp=0xc000587740 sp=0xc000587628 pc=0x7ff7c6f7082b
net.(*TCPListener).accept(0xc0002ba7c0)
        net/tcpsock_posix.go:159 +0x1e fp=0xc000587790 sp=0xc000587740 pc=0x7ff7c6f8699e
net.(*TCPListener).Accept(0xc0002ba7c0)
        net/tcpsock.go:372 +0x30 fp=0xc0005877c0 sp=0xc000587790 pc=0x7ff7c6f85750   
net/http.(*onceCloseListener).Accept(0xc0000fd5f0?)
        <autogenerated>:1 +0x24 fp=0xc0005877d8 sp=0xc0005877c0 pc=0x7ff7c7200044    
net/http.(*Server).Serve(0xc00047ef00, {0x7ff7c7e4c6f0, 0xc0002ba7c0})
        net/http/server.go:3330 +0x30c fp=0xc000587908 sp=0xc0005877d8 pc=0x7ff7c71d7fcc
ollama/llama/runner.Execute({0xc0000ce010?, 0x0?, 0x0?})
        ollama/llama/runner/runner.go:1027 +0x11a9 fp=0xc000587ca8 sp=0xc000587908 pc=0x7ff7c7252fa9
ollama/cmd.NewCLI.func2(0xc000498f00?, {0x7ff7c7c8e8ce?, 0x4?, 0x7ff7c7c8e8d2?})     
        ollama/cmd/cmd.go:1430 +0x45 fp=0xc000587cd0 sp=0xc000587ca8 pc=0x7ff7c7a1d0c5
github.com/spf13/cobra.(*Command).execute(0xc0002be908, {0xc0002b85a0, 0x11, 0x11})  
        github.com/spf13/[email protected]/command.go:985 +0xaaa fp=0xc000587e58 sp=0xc000587cd0 pc=0x7ff7c700a4ea
github.com/spf13/cobra.(*Command).ExecuteC(0xc00027e308)
        github.com/spf13/[email protected]/command.go:1117 +0x3ff fp=0xc000587f30 sp=0xc000587e58 pc=0x7ff7c700adbf
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:1041
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/[email protected]/command.go:1034
main.main()
        ollama/main.go:12 +0x4d fp=0xc000587f50 sp=0xc000587f30 pc=0x7ff7c7a1d72d    
runtime.main()
        runtime/proc.go:272 +0x27d fp=0xc000587fe0 sp=0xc000587f50 pc=0x7ff7c6e3df9d 
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000587fe8 sp=0xc000587fe0 pc=0x7ff7c6e78901

18:18:12.675 > stderr:
goroutine 2 gp=0xc00008e700 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000091fa8 sp=0xc000091f88 pc=0x7ff7c6e703ce  
runtime.goparkunlock(...)
        runtime/proc.go:430
runtime.forcegchelper()
        runtime/proc.go:337 +0xb8 fp=0xc000091fe0 sp=0xc000091fa8 pc=0x7ff7c6e3e2b8  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000091fe8 sp=0xc000091fe0 pc=0x7ff7c6e78901
created by runtime.init.7 in goroutine 1
        runtime/proc.go:325 +0x1a

goroutine 3 gp=0xc00008ea80 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000093f80 sp=0xc000093f60 pc=0x7ff7c6e703ce  
runtime.goparkunlock(...)
        runtime/proc.go:430
runtime.bgsweep(0xc000034100)
        runtime/mgcsweep.go:317 +0xdf fp=0xc000093fc8 sp=0xc000093f80 pc=0x7ff7c6e26f9f
runtime.gcenable.gowrap1()
        runtime/mgc.go:204 +0x25 fp=0xc000093fe0 sp=0xc000093fc8 pc=0x7ff7c6e1b5c5   
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000093fe8 sp=0xc000093fe0 pc=0x7ff7c6e78901
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:204 +0x66

goroutine 4 gp=0xc00008ec40 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x7ff7c7e3bb18?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc0000a3f78 sp=0xc0000a3f58 pc=0x7ff7c6e703ce  
runtime.goparkunlock(...)
        runtime/proc.go:430
runtime.(*scavengerState).park(0x7ff7c86379c0)
        runtime/mgcscavenge.go:425 +0x49 fp=0xc0000a3fa8 sp=0xc0000a3f78 pc=0x7ff7c6e24969
runtime.bgscavenge(0xc000034100)
        runtime/mgcscavenge.go:658 +0x59 fp=0xc0000a3fc8 sp=0xc0000a3fa8 pc=0x7ff7c6e24ef9
runtime.gcenable.gowrap2()
        runtime/mgc.go:205 +0x25 fp=0xc0000a3fe0 sp=0xc0000a3fc8 pc=0x7ff7c6e1b565   
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a3fe8 sp=0xc0000a3fe0 pc=0x7ff7c6e78901
created by runtime.gcenable in goroutine 1
        runtime/mgc.go:205 +0xa5

goroutine 5 gp=0xc00008f180 m=nil [finalizer wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc0000a5e20 sp=0xc0000a5e00 pc=0x7ff7c6e703ce  
runtime.runfinq()
        runtime/mfinal.go:193 +0x107 fp=0xc0000a5fe0 sp=0xc0000a5e20 pc=0x7ff7c6e1a687
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a5fe8 sp=0xc0000a5fe0 pc=0x7ff7c6e78901
created by runtime.createfing in goroutine 1
        runtime/mfinal.go:163 +0x3d

goroutine 6 gp=0xc0001ea380 m=nil [chan receive]:
runtime.gopark(0xc000095f60?, 0x7ff7c6f5a2e5?, 0x10?, 0xa8?, 0x7ff7c7e628a0?)        
        runtime/proc.go:424 +0xce fp=0xc000095f18 sp=0xc000095ef8 pc=0x7ff7c6e703ce  

18:18:12.677 > stderr: runtime.chanrecv(0xc0000404d0, 0x0, 0x1)
        runtime/chan.go:639 +0x41e fp=0xc000095f90 sp=0xc000095f18 pc=0x7ff7c6e0ac9e 
runtime.chanrecv1(0x7ff7c6e3e100?, 0xc000095f76?)
        runtime/chan.go:489 +0x12 fp=0xc000095fb8 sp=0xc000095f90 pc=0x7ff7c6e0a852  
runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
        runtime/mgc.go:1781
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
        runtime/mgc.go:1784 +0x2f fp=0xc000095fe0 sp=0xc000095fb8 pc=0x7ff7c6e1e6af  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000095fe8 sp=0xc000095fe0 pc=0x7ff7c6e78901
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
        runtime/mgc.go:1779 +0x96

goroutine 7 gp=0xc0001eaa80 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00009ff38 sp=0xc00009ff18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00009ffc8 sp=0xc00009ff38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00009ffe0 sp=0xc00009ffc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00009ffe8 sp=0xc00009ffe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 18 gp=0xc000484000 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00048bf38 sp=0xc00048bf18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00048bfc8 sp=0xc00048bf38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00048bfe0 sp=0xc00048bfc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00048bfe8 sp=0xc00048bfe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 34 gp=0xc0001061c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000487f38 sp=0xc000487f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000487fc8 sp=0xc000487f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000487fe0 sp=0xc000487fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000487fe8 sp=0xc000487fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 19 gp=0xc0004841c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00048df38 sp=0xc00048df18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00048dfc8 sp=0xc00048df38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00048dfe0 sp=0xc00048dfc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00048dfe8 sp=0xc00048dfe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 8 gp=0xc0001eac40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc0000a1f38 sp=0xc0000a1f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc0000a1fc8 sp=0xc0000a1f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc0000a1fe0 sp=0xc0000a1fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0000a1fe8 sp=0xc0000a1fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 35 gp=0xc000106380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000489f38 sp=0xc000489f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000489fc8 sp=0xc000489f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000489fe0 sp=0xc000489fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000489fe8 sp=0xc000489fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 20 gp=0xc000484380 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000495f38 sp=0xc000495f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000495fc8 sp=0xc000495f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000495fe0 sp=0xc000495fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000495fe8 sp=0xc000495fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 9 gp=0xc0001eae00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000491f38
18:18:12.683 > stderr:  sp=0xc000491f18 pc=0x7ff7c6e703ce
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000491fc8 sp=0xc000491f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000491fe0 sp=0xc000491fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000491fe8 sp=0xc000491fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 36 gp=0xc000106540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000113f38 sp=0xc000113f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000113fc8 sp=0xc000113f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000113fe0 sp=0xc000113fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000113fe8 sp=0xc000113fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 21 gp=0xc000484540 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000497f38 sp=0xc000497f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000497fc8 sp=0xc000497f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000497fe0 sp=0xc000497fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000497fe8 sp=0xc000497fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 10 gp=0xc0001eafc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000493f38 sp=0xc000493f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000493fc8 sp=0xc000493f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000493fe0 sp=0xc000493fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000493fe8 sp=0xc000493fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 37 gp=0xc000106700 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000115f38 sp=0xc000115f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000115fc8 sp=0xc000115f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000115fe0 sp=0xc000115fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000115fe8 sp=0xc000115fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 22 gp=0xc000484700 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00010ff38 sp=0xc00010ff18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00010ffc8 sp=0xc00010ff38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00010ffe0 sp=0xc00010ffc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00010ffe8 sp=0xc00010ffe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 23 gp=0xc0004848c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000111f38 sp=0xc000111f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000111fc8 sp=0xc000111f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000111fe0 sp=0xc000111fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000111fe8 sp=0xc000111fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 11 gp=0xc0001eb180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000473f38 sp=0xc000473f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000473fc8 sp=0xc000473f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000473fe0 sp=0xc000473fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000473fe8 sp=0xc000473fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 38 gp=0xc0001068c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00046ff38 sp=0xc00046ff18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00046ffc8 sp=0xc00046ff38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00046ffe0 sp=0xc00046ffc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00046ffe8 sp=0xc00046ffe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 24 gp=0xc000484a80 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00049ff38 sp=0xc00049ff18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00049ffc8 sp=0xc00049ff38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00049ffe0 sp=0xc00049ffc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00049ffe8 sp=0xc00049ffe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 39 gp=0xc000106a80 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000471f38 sp=0xc000471f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000471fc8 sp=0xc000471f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000471fe0 sp=0xc000471fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000471fe8 sp=0xc000471fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 12 gp=0xc0001eb340 m=nil [GC worker (idle)]:
runtime.gopark(0x7ff7c86864a0?, 0x1?, 0x28?, 0x3e?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc000475f38 sp=0xc000475f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc000475fc8 sp=0xc000475f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc000475fe0 sp=0xc000475fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc000475fe8 sp=0xc000475fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 40 gp=0xc000106c40 m=nil [GC worker (idle)]:
runtime.gopark(0x4a86ff95cb0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00049bf38 sp=0xc00049bf18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00049bfc8 sp=0xc00049bf38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00049bfe0 sp=0xc00049bfc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00049bfe8 sp=0xc00049bfe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 25 gp=0xc000484c40 m=nil [GC worker (idle)]:
runtime.gopark(0x7ff7c86864a0?, 0x1?, 0x28?, 0x3e?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc0004a1f38 sp=0xc0004a1f18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc0004a1fc8 sp=0xc0004a1f38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc0004a1fe0 sp=0xc0004a1fc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc0004a1fe8 sp=0xc0004a1fe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 13 gp=0xc0001eb500 m=nil [GC worker (idle)]:
runtime.gopark(0x4a86ff95cb0?, 0x1?, 0x8c?, 0x59?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00047bf38 sp=0xc00047bf18 pc=0x7ff7c6e703ce  
runtime.gcBgMarkWorker(0xc0000418f0)
        runtime/mgc.go:1412 +0xe9 fp=0xc00047bfc8 sp=0xc00047bf38 pc=0x7ff7c6e1d9a9  
runtime.gcBgMarkStartWorkers.gowrap1()
        runtime/mgc.go:1328 +0x25 fp=0xc00047bfe0 sp=0xc00047bfc8 pc=0x7ff7c6e1d885  
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00047bfe8 sp=0xc00047bfe0 pc=0x7ff7c6e78901
created by runtime.gcBgMarkStartWorkers in goroutine 1
        runtime/mgc.go:1328 +0x105

goroutine 15 gp=0xc000107880 m=nil [semacquire]:
runtime.gopark(0x0?, 0x0?, 0x60?, 0x3e?, 0x0?)
        runtime/proc.go:424 +0xce fp=0xc00049de18 sp=0xc00049ddf8 pc=0x7ff7c6e703ce  
runtime.goparkunlock(...)
        runtime/proc.go:430
runtime.semacquire1(0xc0000fd568, 0x0, 0x1, 0x0, 0x12)
        runtime/sema.go:178 +0x232 fp=0xc00049de80 sp=0xc00049de18 pc=0x7ff7c6e50092 
sync.runtime_Semacquire(0x0?)
        runtime/sema.go:71 +0x25 fp=0xc00049deb8 sp=0xc00049de80 pc=0x7ff7c6e718a5   
sync.(*WaitGroup).Wait(0x0?)
        sync/waitgroup.go:118 +0x48 fp=0xc00049dee0 sp=0xc00049deb8 pc=0x7ff7c6e896c8
ollama/llama/runner.(*Server).run(0xc0000fd560, {0x7ff7c7e4e9b0, 0xc0000f50e0})      
        ollama/llama/runner/runner.go:315 +0x47 fp=0xc00049dfb8 sp=0xc00049dee0 pc=0x7ff7c724dec7
ollama/llama/runner.Execute.gowrap2()
        ollama/llama/runner/runner.go:1006 +0x28 fp=0xc00049dfe0 sp=0xc00049dfb8 pc=0x7ff7c72532c8
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00049dfe8 sp=0xc00049dfe0 pc=0x7ff7c6e78901
created by ollama/llama/runner.Execute in goroutine 1
        ollama/llama/runner/runner.go:1006 +0xde5

goroutine 16 gp=0xc000107c00 m=nil [IO wait]:
runtime.gopark(0x0?, 0xc0001f51a0?, 0x48?, 0x52?, 0xc0001f524c?)
        runtime/proc.go:424 +0xce fp=0xc00004d890 sp=0xc00004d870 pc=0x7ff7c6e703ce  
runtime.netpollblock(0x3cc?, 0xc6e08366?, 0xf7?)
        runtime/netpoll.go:575 +0xf7 fp=0xc00004d8c8 sp=0xc00004d890 pc=0x7ff7c6e34f97
internal/poll.runtime_pollWait(0x1d97ee7eb78, 0x72)
        runtime/netpoll.go:351 +0x85 fp=0xc00004d8e8 sp=0xc00004d8c8 pc=0x7ff7c6e6f645
internal/poll.(*pollDesc).wait(0x1d938b7d768?, 0xc0000a8600?, 0x0)
        internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004d910 sp=0xc00004d8e8 pc=0x7ff7c6f04207
internal/poll.execIO(0xc0001f51a0, 0x7ff7c7d105e8)
        internal/poll/fd_windows.go:177 +0x105 fp=0xc00004d988 sp=0xc00004d910 pc=0x7ff7c6f05645
internal/poll.(*FD).Read(0xc0001f5188, {0xc0001cf000, 0x1000, 0x1000})
        internal/poll/fd_windows.go:438 +0x2a7 fp=0xc00004da30 sp=0xc00004d988 pc=0x7ff7c6f06347
net.(*netFD).Read(0xc0001f5188, {0xc0001cf000?, 0xc00004daa0?, 0x7ff7c6f046c5?})     
        net/fd_posix.go:55 +0x25 fp=0xc00004da78 sp=0xc00004da30 pc=0x7ff7c6f6e945   
net.(*conn).Read(0xc0000989f8, {0xc0001cf000?, 0x0?, 0xc00020a0f8?})
        net/net.go:189 +0x45 fp=0xc00004dac0 sp=0xc00004da78 pc=0x7ff7c6f7df25       
net.(*TCPConn).Read(0xc00020a0f0?, {0xc0001cf000?, 0xc0001f5188?, 0xc00004daf8?})    
        <autogenerated>:1 +0x25 fp=0xc00004daf0 sp=0xc00004dac0 pc=0x7ff7c6f8f945    
net/http.(*connReader).Read(0xc00020a0f0, {0xc0001cf000, 0x1000, 0x1000})
        net/http/server.go:798 +0x14b fp=0xc00004db40 sp=0xc00004daf0 pc=0x7ff7c71cdd8b
bufio.(*Reader).fill(0xc0000a8120)
        bufio/bufio.go:110 +0x103 fp=0xc00004db78 sp=0xc00004db40 pc=0x7ff7c6f94583  
bufio.(*Reader).Peek(0xc0000a8120, 0x4)
        bufio/bufio.go:148 +0x53 fp=0xc00004db98 sp=0xc00004db78 pc=0x7ff7c6f946b3   
net/http.(*conn).serve(0xc0000fd5f0, {0x7ff7c7e4e978, 0xc0001c1380})
        net/http/server.go:2127 +0x738 fp=0xc00004dfb8 sp=0xc00004db98 pc=0x7ff7c71d30d8
net/http.(*Server).Serve.gowrap3()
        net/http/server.go:3360 +0x28 fp=0xc00004dfe0 sp=0xc00004dfb8 pc=0x7ff7c71d83c8
runtime.goexit({})
        runtime/asm_amd64.s:1700 +0x1 fp=0xc00004dfe8 sp=0xc00004dfe0 pc=0x7ff7c6e78901
created by net/http.(*Server).Serve in goroutine 1
        net/http/server.go:3360 +0x485
rax     0x0
rbx     0x51d2ff718
rcx     0x26
rdx     0x51d2fee60
rdi     0xe06d7363
rsi     0x1
rbp     0x4
rsp     0x51d2ff5f0
r8      0xffff0000
r9      0x51d2ff0ec
r10     0x4
r11     0x7ffe0c000000
r12     0xc00047dcf8
rflags  0x202
cs      0x33
fs      0x53
gs      0x2b
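Aside: the repeated get_memory_info warning in the log appears unrelated to the crash; per the warning text itself, it can be silenced by setting ZES_ENABLE_SYSMAN=1 before launching ollama, for example on Windows:

set ZES_ENABLE_SYSMAN=1

or in PowerShell:

$env:ZES_ENABLE_SYSMAN = "1"

This only enables the Level Zero Sysman free-memory query (ext_intel_free_memory); without it the runner falls back to reporting total memory as free.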
@bibekyess bibekyess changed the title [BUG] Run 8-bit and16-bit gemma4-3b [BUG] Run 8-bit and16-bit gemma3-4b Apr 21, 2025
@sgwhat
Contributor

sgwhat commented Apr 22, 2025

Hi @bibekyess, you may install our latest v0.6.2 ipex-llm Ollama from https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly, which supports gemma3-fp16.

@chnxq

chnxq commented Apr 22, 2025

Hi @bibekyess, you may install our latest v0.6.2 ipex-llm Ollama from https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly, which supports gemma3-fp16.

Hi @sgwhat

I modified Ollama's source code to support oneAPI. However, my build's inference speed is slower than the portable version released here: on the same device and model, mine reaches 30 tokens/s, while v2.3.0-nightly or v2.2.0 achieves 48 tokens/s.

What did I do wrong?

My code is at https://github.com/chnxq/ollama/tree/chnxq/add-oneapi

@bibekyess
Author

Hi @sgwhat!
Thank you for your response. Unfortunately, the nightly versions (updated both last week and yesterday) are not able to serve gemma3:4b-it-fp16.
For ollama-ipex-llm-2.3.0b20250415-win, the error log is as follows:

time=2025-04-30T17:25:34.370+09:00 level=INFO source=server.go:106 msg="system memory" total="31.5 GiB" free="15.0 GiB" free_swap="36.0 GiB"
time=2025-04-30T17:25:34.374+09:00 level=INFO source=server.go:139 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[15.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="0 B" memory.required.kv="1.1 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="6.0 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="1.3 GiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-30T17:25:34.662+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:25:34.671+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-30T17:25:34.679+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:25:34.693+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-30T17:25:34.694+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-30T17:25:34.694+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-30T17:25:34.694+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-30T17:25:34.695+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-30T17:25:34.712+09:00 level=INFO source=server.go:414 msg="starting llama server" cmd="C:\\Users\\bibek\\Downloads\\ollama-ipex-llm-2.3.0b20250415-win\\ollama-lib.exe runner --ollama-engine --model C:\\Users\\bibek\\neoali\\models\\blobs\\sha256-2e1715faf889527461e76d271e827bbe03f3d22b4b86acf6146671d72eb6d11d --ctx-size 8192 --batch-size 512 --n-gpu-layers 999 --threads 6 --no-mmap --parallel 4 --port 62914"
time=2025-04-30T17:25:34.725+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-04-30T17:25:34.726+09:00 level=INFO source=server.go:589 msg="waiting for llama runner to start responding"
time=2025-04-30T17:25:34.728+09:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server error"
time=2025-04-30T17:25:34.824+09:00 level=INFO source=runner.go:757 msg="starting ollama engine"
time=2025-04-30T17:25:34.828+09:00 level=INFO source=runner.go:817 msg="Server listening on 127.0.0.1:62914"
time=2025-04-30T17:25:34.943+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-04-30T17:25:34.943+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-04-30T17:25:34.944+09:00 level=INFO source=ggml.go:68 msg="" architecture=gemma3 file_type=F16 name="" description="" num_tensors=883 num_key_values=36
time=2025-04-30T17:25:34.950+09:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
panic: runtime error: index out of range [-1]

goroutine 25 [running]:
github.com/ollama/ollama/ml/backend/ggml.New.func6(...)
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:169
github.com/ollama/ollama/ml/backend/ggml.New(0xc0004a2018, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:175 +0x3145
github.com/ollama/ollama/ml.NewBackend(0xc0004a2018, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend.go:91 +0x9c
github.com/ollama/ollama/model.New({0xc0000a40e0?, 0x0?}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/model/model.go:104 +0xfb
github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc00062e360, {0xc0000a40e0, 0x6a}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0}, ...)
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:683 +0x95
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:787 +0x9c5
time=2025-04-30T17:25:34.972+09:00 level=ERROR source=server.go:458 msg="llama runner terminated" error="exit status 2"
time=2025-04-30T17:25:34.980+09:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error:index out of range [-1]\n\ngoroutine 25 [running]:\ngithub.jpy.wang/ollama/ollama/ml/backend/ggml.New.func6(..."

While for ollama-ipex-llm-2.3.0b20250428-win, it looks like there is a bug in your packaging when passing the ngl parameter: the server launches the runner with -ngl, but the runner only defines -n-gpu-layers (see the usage output below, and the sketch after the log), so it gives the following error:

time=2025-04-30T17:02:49.738+09:00 level=INFO source=server.go:106 msg="system memory" total="31.5 GiB" free="18.0 GiB" free_swap="37.5 GiB"
time=2025-04-30T17:02:49.738+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-30T17:02:49.739+09:00 level=INFO source=server.go:151 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[18.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="4.0 GiB" memory.required.partial="0 B" memory.required.kv="1.1 GiB" memory.required.allocations="[4.0 GiB]" memory.weights.total="1.8 GiB" memory.weights.repeating="1.8 GiB" memory.weights.nonrepeating="525.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB"
time=2025-04-30T17:02:49.889+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:02:49.897+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-04-30T17:02:49.904+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:02:49.926+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-30T17:02:49.927+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-30T17:02:49.927+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-30T17:02:49.927+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-30T17:02:49.952+09:00 level=INFO source=server.go:426 msg="starting llama server" cmd="C:\\Users\\bibek\\Downloads\\ollama-ipex-llm-2.3.0b20250428-win\\ollama-lib.exe runner --ollama-engine --model C:\\Users\\bibek\\neoali\\models\\blobs\\sha256-be49949e48422e4547b00af14179a193d3777eea7fbbd7d6e1b0861304628a01 --ctx-size 8192 --batch-size 512 -ngl 999 --threads 6 --no-mmap --parallel 4 --port 55585"
time=2025-04-30T17:02:49.963+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-04-30T17:02:49.963+09:00 level=INFO source=server.go:601 msg="waiting for llama runner to start responding"
time=2025-04-30T17:02:49.965+09:00 level=INFO source=server.go:635 msg="waiting for server to become available" status="llm server error"
flag provided but not defined: -ngl
Runner usage
  -batch-size int
        Batch size (default 512)
  -ctx-size int
        Context (or KV cache) size (default 2048)
  -flash-attn
        Enable flash attention
  -kv-cache-type string
        quantization type for KV cache (default: f16)
  -lora value
        Path to lora layer file (can be specified multiple times)
  -main-gpu int
        Main GPU
  -mlock
        force system to keep model in RAM rather than swapping or compressing
  -model string
        Path to model binary file
  -multiuser-cache
        optimize input cache algorithm for multiple users
  -n-gpu-layers int
        Number of layers to offload to GPU
  -no-mmap
        do not memory-map model (slower load but may reduce pageouts if not using mlock)
  -parallel int
        Number of sequences to handle simultaneously (default 1)
  -port int
        Port to expose the server on (default 8080)
  -tensor-split string
        fraction of the model to offload to each GPU, comma-separated list of proportions
  -threads int
        Number of threads to use during generation (default 22)
  -verbose
        verbose output (default: disabled)
time=2025-04-30T17:02:50.216+09:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
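A minimal sketch of why this fails, assuming the runner uses Go's standard flag package (which the "flag provided but not defined" message and the usage listing above suggest): Go's flag package treats -ngl and --ngl identically, so the problem is the flag name itself, not the number of dashes; the name the runner defines is -n-gpu-layers. The flag-set name and example values below are illustrative.

package main

import (
	"flag"
	"fmt"
	"io"
)

func main() {
	// Hypothetical flag set mirroring the runner usage printed above.
	fs := flag.NewFlagSet("runner", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // keep the automatic usage dump quiet for this demo
	nGpuLayers := fs.Int("n-gpu-layers", 0, "Number of layers to offload to GPU")

	// Reproduces the failure in the log: the flag name is unknown.
	err := fs.Parse([]string{"-ngl", "999"})
	fmt.Println(err) // flag provided but not defined: -ngl

	// A single or double dash both parse once the name matches.
	if err := fs.Parse([]string{"--n-gpu-layers", "999"}); err == nil {
		fmt.Println(*nGpuLayers) // 999
	}
}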

Thank you!

@sgwhat
Contributor

sgwhat commented Apr 30, 2025

Hi @bibekyess, I have fixed the ngl issue; you may try it tomorrow via pip install --pre --upgrade ipex-llm[cpp] or download a zip from the link. Good luck :)

@bibekyess
Author

Hi @sgwhat
Thank you for your prompt response. I am using the zip version, and it fixes the ngl issue, but I still cannot run either the fp16 or the Q8_0 model.
Logs are attached:

time=2025-04-30T17:47:57.573+09:00 level=INFO source=server.go:107 msg="system memory" total="31.5 GiB" free="15.6 GiB" free_swap="36.6 GiB"
time=2025-04-30T17:47:57.577+09:00 level=INFO source=server.go:154 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[15.6 GiB]" memory.gpu_overhead="0 B" memory.required.full="10.8 GiB" memory.required.partial="0 B" memory.required.kv="1.1 GiB" memory.required.allocations="[10.8 GiB]" memory.weights.total="6.0 GiB" memory.weights.repeating="6.0 GiB" memory.weights.nonrepeating="1.3 GiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="795.9 MiB" projector.graph="1.0 GiB"
time=2025-04-30T17:47:57.830+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:47:57.837+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-30T17:47:57.844+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:47:57.857+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.attention.layer_norm_rms_epsilon default=9.999999974752427e-07
time=2025-04-30T17:47:57.857+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-30T17:47:57.857+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-30T17:47:57.857+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-30T17:47:57.857+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-30T17:47:57.877+09:00 level=INFO source=server.go:430 msg="starting llama server" cmd="C:\\Users\\bibek\\Downloads\\ollama-intel-2.3.0b20250429-win\\ollama-lib.exe runner --ollama-engine --model C:\\Users\\bibek\\neoali\\models\\blobs\\sha256-2e1715faf889527461e76d271e827bbe03f3d22b4b86acf6146671d72eb6d11d --ctx-size 8192 --batch-size 512 --n-gpu-layers 999 --threads 6 --no-mmap --parallel 4 --port 53395"
time=2025-04-30T17:47:57.887+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-04-30T17:47:57.887+09:00 level=INFO source=server.go:605 msg="waiting for llama runner to start responding"
time=2025-04-30T17:47:57.889+09:00 level=INFO source=server.go:639 msg="waiting for server to become available" status="llm server error"
time=2025-04-30T17:47:57.958+09:00 level=INFO source=runner.go:757 msg="starting ollama engine"
time=2025-04-30T17:47:57.959+09:00 level=INFO source=runner.go:817 msg="Server listening on 127.0.0.1:53395"
time=2025-04-30T17:47:58.062+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.name default=""
time=2025-04-30T17:47:58.062+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-04-30T17:47:58.062+09:00 level=INFO source=ggml.go:68 msg="" architecture=gemma3 file_type=F16 name="" description="" num_tensors=883 num_key_values=36
time=2025-04-30T17:47:58.066+09:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
panic: runtime error: index out of range [-1]

goroutine 50 [running]:
github.com/ollama/ollama/ml/backend/ggml.New.func6(...)
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:169
github.com/ollama/ollama/ml/backend/ggml.New(0xc0004a63b0, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:175 +0x3145
github.com/ollama/ollama/ml.NewBackend(0xc0004a63b0, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend.go:91 +0x9c
github.com/ollama/ollama/model.New({0xc0000a40e0?, 0x0?}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/model/model.go:104 +0xfb
github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc000118000, {0xc0000a40e0, 0x6a}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0}, ...)
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:683 +0x95
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:787 +0x9c5
time=2025-04-30T17:47:58.082+09:00 level=ERROR source=server.go:474 msg="llama runner terminated" error="exit status 2"
time=2025-04-30T17:47:58.140+09:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error:index out of range [-1]\n\ngoroutine 50 [running]:\ngithub.jpy.wang/ollama/ollama/ml/backend/ggml.New.func6(...)\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:169\ngithub.jpy.wang/ollama/ollama/ml/backend/ggml.New(0xc0004a63b0, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:175 +0x3145\ngithub.jpy.wang/ollama/ollama/ml.NewBackend(0xc0004a63b0, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend.go:91 +0x9c\ngithub.jpy.wang/ollama/ollama/model.New({0xc0000a40e0?, 0x0?}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/model/model.go:104 +0xfb\ngithub.jpy.wang/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc000118000, {0xc0000a40e0, 0x6a}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0}, ...)\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:683 +0x95\ncreated by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:787 +0x9c5"
[GIN] 2025/04/30 - 17:47:58 | 500 |    929.5454ms |       127.0.0.1 | POST     "/api/generate"
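The run above is the F16 file (note file_type=F16 in the metadata line); the same panic then reproduces when the Q8_0 file is loaded next (file_type=Q8_0 below):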
[GIN] 2025/04/30 - 17:48:17 | 200 |            0s |       127.0.0.1 | HEAD     "/"
[GIN] 2025/04/30 - 17:48:17 | 200 |    114.4031ms |       127.0.0.1 | POST     "/api/show"
time=2025-04-30T17:48:17.534+09:00 level=INFO source=server.go:107 msg="system memory" total="31.5 GiB" free="15.8 GiB" free_swap="36.5 GiB"
time=2025-04-30T17:48:17.538+09:00 level=INFO source=server.go:154 msg=offload library=cpu layers.requested=-1 layers.model=35 layers.offload=0 layers.split="" memory.available="[15.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.3 GiB" memory.required.partial="0 B" memory.required.kv="1.1 GiB" memory.required.allocations="[6.3 GiB]" memory.weights.total="3.2 GiB" memory.weights.repeating="3.2 GiB" memory.weights.nonrepeating="680.0 MiB" memory.graph.full="517.0 MiB" memory.graph.partial="1.0 GiB" projector.weights="811.8 MiB" projector.graph="0 B"
time=2025-04-30T17:48:17.680+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:48:17.685+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.add_eot_token default=false
time=2025-04-30T17:48:17.694+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-30T17:48:17.694+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-30T17:48:17.695+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.num_channels default=0
time=2025-04-30T17:48:17.695+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.block_count default=0
time=2025-04-30T17:48:17.695+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.embedding_length default=0
time=2025-04-30T17:48:17.696+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.head_count default=0
time=2025-04-30T17:48:17.696+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.image_size default=0
time=2025-04-30T17:48:17.696+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.patch_size default=0
time=2025-04-30T17:48:17.696+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.vision.attention.layer_norm_epsilon default=0
time=2025-04-30T17:48:17.696+09:00 level=WARN source=ggml.go:149 msg="key not found" key=tokenizer.ggml.pretokenizer default="(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}{1,3}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
time=2025-04-30T17:48:17.708+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.local.freq_base default=10000
time=2025-04-30T17:48:17.708+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.global.freq_base default=1e+06
time=2025-04-30T17:48:17.709+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.rope.freq_scale default=1
time=2025-04-30T17:48:17.709+09:00 level=WARN source=ggml.go:149 msg="key not found" key=gemma3.mm_tokens_per_image default=256
time=2025-04-30T17:48:17.722+09:00 level=INFO source=server.go:430 msg="starting llama server" cmd="C:\\Users\\bibek\\Downloads\\ollama-intel-2.3.0b20250429-win\\ollama-lib.exe runner --ollama-engine --model C:\\Users\\bibek\\neoali\\models\\blobs\\sha256-283baeca5e0ffc2a7f6cd56b9b7b5ce1d4dda08ca11f11afa869127caf745e94 --ctx-size 8192 --batch-size 512 --n-gpu-layers 999 --threads 6 --no-mmap --parallel 4 --port 53479"
time=2025-04-30T17:48:17.732+09:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-04-30T17:48:17.733+09:00 level=INFO source=server.go:605 msg="waiting for llama runner to start responding"
time=2025-04-30T17:48:17.735+09:00 level=INFO source=server.go:639 msg="waiting for server to become available" status="llm server error"
time=2025-04-30T17:48:17.788+09:00 level=INFO source=runner.go:757 msg="starting ollama engine"
time=2025-04-30T17:48:17.789+09:00 level=INFO source=runner.go:817 msg="Server listening on 127.0.0.1:53479"
time=2025-04-30T17:48:17.838+09:00 level=WARN source=ggml.go:149 msg="key not found" key=general.description default=""
time=2025-04-30T17:48:17.838+09:00 level=INFO source=ggml.go:68 msg="" architecture=gemma3 file_type=Q8_0 name="Gemma 3 4b It" description="" num_tensors=444 num_key_values=41
time=2025-04-30T17:48:17.841+09:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
panic: runtime error: index out of range [-1]

goroutine 53 [running]:
github.com/ollama/ollama/ml/backend/ggml.New.func6(...)
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:169
github.com/ollama/ollama/ml/backend/ggml.New(0xc0004ba1c8, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:175 +0x3145
github.com/ollama/ollama/ml.NewBackend(0xc0004ba1c8, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend.go:91 +0x9c
github.com/ollama/ollama/model.New({0xc0000a40e0?, 0x0?}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/model/model.go:104 +0xfb
github.com/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0004c3b00, {0xc0000a40e0, 0x6a}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0}, ...)
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:683 +0x95
created by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1
        D:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:787 +0x9c5
time=2025-04-30T17:48:17.854+09:00 level=ERROR source=server.go:474 msg="llama runner terminated" error="exit status 2"
time=2025-04-30T17:48:17.985+09:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: error:index out of range [-1]\n\ngoroutine 53 [running]:\ngithub.jpy.wang/ollama/ollama/ml/backend/ggml.New.func6(...)\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:169\ngithub.jpy.wang/ollama/ollama/ml/backend/ggml.New(0xc0004ba1c8, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend/ggml/ggml.go:175 +0x3145\ngithub.jpy.wang/ollama/ollama/ml.NewBackend(0xc0004ba1c8, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/ml/backend.go:91 +0x9c\ngithub.jpy.wang/ollama/ollama/model.New({0xc0000a40e0?, 0x0?}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0})\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/model/model.go:104 +0xfb\ngithub.jpy.wang/ollama/ollama/runner/ollamarunner.(*Server).loadModel(0xc0004c3b00, {0xc0000a40e0, 0x6a}, {0x6, 0x0, 0x3e7, {0x0, 0x0, 0x0}, 0x0}, ...)\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:683 +0x95\ncreated by github.com/ollama/ollama/runner/ollamarunner.Execute in goroutine 1\n\tD:/actions-runner/release-cpp-oneapi_2024_2/_work/llm.cpp/llm.cpp/ollama-internal/runner/ollamarunner/runner.go:787 +0x9c5"
[GIN] 2025/04/30 - 17:48:17 | 500 |    684.8215ms |       127.0.0.1 | POST     "/api/generate"
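For anyone triaging this: in Go, `panic: runtime error: index out of range [-1]` means a slice was indexed with -1, which almost always happens when a "not found" sentinel returned by a lookup is used unchecked as an index. Below is a minimal, hypothetical sketch of that failure class; the names (`indexOf`, `layers`, `"mm.0"`) are illustrative only and are not the actual code at ggml.go:169.

```go
package main

import "fmt"

// indexOf returns the position of target in names, or -1 if absent.
// Returning -1 as a sentinel is a common Go idiom, but the caller
// must check for it before indexing.
func indexOf(names []string, target string) int {
	for i, n := range names {
		if n == target {
			return i
		}
	}
	return -1 // "not found" sentinel
}

func main() {
	layers := []string{"blk.0", "blk.1"}

	// If the F16/Q8_0 GGUF carries a tensor or key the loader does not
	// expect (or lacks one it does expect), the lookup misses:
	i := indexOf(layers, "mm.0") // returns -1

	// Using the sentinel directly as an index panics exactly like the
	// traces above: "index out of range [-1]".
	fmt.Println(layers[i])
}
```

Given that both stack traces die in `ggml.New.func6` at ggml.go:169 while scanning the model, this pattern would be consistent with the new ollama engine's GGUF tensor/metadata scan not finding something it expects in the F16 and Q8_0 files that the Q4 file happens to provide — but that is an inference from the stack trace, not a confirmed root cause.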

Thank you
