Why does MiniCPM-o-2.6 show info saying it is qwen2?

#4
by orca-zhang - opened

[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 0: general.architecture str = qwen2
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 1: general.type str = model
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 2: general.name str = Model
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 3: general.size_label str = 7.6B
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 4: qwen2.block_count u32 = 28
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 5: qwen2.context_length u32 = 32768
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 6: qwen2.embedding_length u32 = 3584
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 7: qwen2.feed_forward_length u32 = 18944
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 8: qwen2.attention.head_count u32 = 28
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 9: qwen2.attention.head_count_kv u32 = 4
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 10: qwen2.rope.freq_base f32 = 1000000.000000
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 11: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 12: tokenizer.ggml.model str = gpt2
[2025-02-12 11:48:10.368] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 13: tokenizer.ggml.pre str = qwen2
[2025-02-12 11:48:10.387] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,151700] = ["!", """, "#", "$", "%", "&", "'", ...
[2025-02-12 11:48:10.396] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,151700] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 151644
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 151645
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 128244
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 151643
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = false
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 22: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 23: general.quantization_version u32 = 2
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - kv 24: general.file_type u32 = 2
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - type f32: 141 tensors
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - type q4_0: 197 tensors
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: llama_model_loader: - type q6_K: 1 tensors
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: print_info: file format = GGUF V3 (latest)
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: print_info: file type = Q4_0
[2025-02-12 11:48:10.414] [info] [WASI-NN] llama.cpp: print_info: file size = 4.12 GiB (4.65 BPW)
[2025-02-12 11:48:10.525] [info] [WASI-NN] llama.cpp: load: special tokens cache size = 58
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: load: token to piece cache size = 0.9313 MB
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: arch = qwen2
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: vocab_only = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_ctx_train = 32768
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_embd = 3584
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_layer = 28
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_head = 28
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_head_kv = 4
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_rot = 128
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_swa = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_embd_head_k = 128
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_embd_head_v = 128
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_gqa = 7
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_embd_k_gqa = 512
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_embd_v_gqa = 512
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: f_norm_eps = 0.0e+00
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: f_norm_rms_eps = 1.0e-06
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: f_clamp_kqv = 0.0e+00
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: f_max_alibi_bias = 0.0e+00
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: f_logit_scale = 0.0e+00
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_ff = 18944
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_expert = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_expert_used = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: causal attn = 1
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: pooling type = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: rope type = 2
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: rope scaling = linear
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: freq_base_train = 1000000.0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: freq_scale_train = 1
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_ctx_orig_yarn = 32768
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: rope_finetuned = unknown
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: ssm_d_conv = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: ssm_d_inner = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: ssm_d_state = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: ssm_dt_rank = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: ssm_dt_b_c_rms = 0
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: model type = 7B
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: model params = 7.61 B
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: general.name = Model
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: vocab type = BPE
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_vocab = 151700
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: n_merges = 151387
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: BOS token = 151644 '<|im_start|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOS token = 151645 '<|im_end|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOT token = 151645 '<|im_end|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: UNK token = 128244 ''
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: PAD token = 151643 '<|endoftext|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: LF token = 198 'Ċ'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: FIM PRE token = 151659 '<|fim_prefix|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: FIM SUF token = 151661 '<|fim_suffix|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: FIM MID token = 151660 '<|fim_middle|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: FIM PAD token = 151662 '<|fim_pad|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: FIM REP token = 151663 '<|repo_name|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: FIM SEP token = 151664 '<|file_sep|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOG token = 151643 '<|endoftext|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOG token = 151645 '<|im_end|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOG token = 151662 '<|fim_pad|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOG token = 151663 '<|repo_name|>'
[2025-02-12 11:48:10.571] [info] [WASI-NN] llama.cpp: print_info: EOG token = 151664 '<|file_sep|>'

The tokenizer.chat_template also matches Qwen2's, not MiniCPM-V's.

OpenBMB org

The LLM part of the model is Qwen2.5.
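
That is why the loader reports `qwen2`: MiniCPM-o-2.6 uses Qwen2.5-7B as its language backbone, so the GGUF for the text part carries the backbone's architecture key and chat template, while the vision/audio components ship separately as an mmproj file. If you want to check the metadata yourself, here is a minimal sketch using the `gguf` Python package that ships with llama.cpp (`pip install gguf`); the filename is a placeholder for your local download.

```python
# Minimal sketch: read GGUF metadata keys with the `gguf` package
# from llama.cpp (pip install gguf). The filename is hypothetical.
from gguf import GGUFReader

reader = GGUFReader("Model-7.6B-Q4_0.gguf")  # your local GGUF path

for key in ("general.architecture", "tokenizer.ggml.pre", "tokenizer.chat_template"):
    field = reader.fields[key]
    # String-typed fields keep their UTF-8 bytes in the last part.
    value = str(bytes(field.parts[-1]), encoding="utf-8")
    print(f"{key} = {value[:80]!r}")
```

Per the log above, this should print `qwen2` for both the architecture and the pre-tokenizer, plus a Qwen-style ChatML template.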

Can we use this latest model on Ollama? I have been trying for the past few days but with no success at all.

I can use it normally with the llama.cpp family of tools. If the download fails, you can retry a few times. If it is a runtime error, consider switching to a different Ollama version.
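
For reference, once an Ollama build that supports the model is serving it, querying it from Python looks like the sketch below, using the official `ollama` client (`pip install ollama`). The model tag and image path are assumptions, not something this thread confirms: substitute whatever `ollama list` shows on your machine.

```python
# Minimal sketch with the official `ollama` Python client
# (pip install ollama). The model tag and image path are assumptions;
# MiniCPM-o-2.6 may need a recent Ollama version, per the advice above.
import ollama

response = ollama.chat(
    model="minicpm-v",  # hypothetical tag; check `ollama list`
    messages=[{
        "role": "user",
        "content": "Describe this image.",
        "images": ["./test.jpg"],  # hypothetical local image
    }],
)
print(response["message"]["content"])
```

If no prebuilt tag works, importing the GGUF through a Modelfile (`FROM ./model.gguf`) is the usual fallback, though older Ollama versions may not load the multimodal projector.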

orca-zhang changed discussion status to closed
