koboldcpp/models
Sam ef0144c087
model: support GLM 4.5 family of models (#14939)
* model: Add GLM 4.5 (#14921)

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Merge in PR suggestions

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model: Add GLM 4.5 family of models (#14921)

1. Updated tensor_mapping.py with NextN tensor mappings

- Added tensor mappings for all NextN/MTP tensors in gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm (see the sketch below)
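
As a rough illustration, the new block-level entries might look like the sketch below. The MODEL_TENSOR member names and the HuggingFace-side tensor paths are assumptions for illustration, not the exact upstream diff; the authoritative definitions live in gguf-py/gguf/constants.py and gguf-py/gguf/tensor_mapping.py.

```python
# Sketch of NextN/MTP block mappings in gguf-py/gguf/tensor_mapping.py,
# keyed by block id {bid}; entry names are illustrative.
from gguf.constants import MODEL_TENSOR  # assumes the NextN members added by this change

block_mappings_cfg = {
    MODEL_TENSOR.NEXTN_EH_PROJ:          ("model.layers.{bid}.eh_proj",),          # glm4moe
    MODEL_TENSOR.NEXTN_EMBED_TOKENS:     ("model.layers.{bid}.embed_tokens",),     # glm4moe
    MODEL_TENSOR.NEXTN_ENORM:            ("model.layers.{bid}.enorm",),            # glm4moe
    MODEL_TENSOR.NEXTN_HNORM:            ("model.layers.{bid}.hnorm",),            # glm4moe
    MODEL_TENSOR.NEXTN_SHARED_HEAD_HEAD: ("model.layers.{bid}.shared_head.head",), # glm4moe
    MODEL_TENSOR.NEXTN_SHARED_HEAD_NORM: ("model.layers.{bid}.shared_head.norm",), # glm4moe
}
```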

2. Added num_nextn_predict_layers configuration

- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with an add_num_nextn_predict_layers() method (see the sketch after this list)
- Updated the conversion script to extract this parameter from the HuggingFace config and write it to the GGUF file
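
A minimal sketch of the writer-side helper follows. The method name comes from the change above; the KV key string "{arch}.nextn_predict_layers" is an assumption, and the real constant is defined in gguf-py/gguf/constants.py.

```python
from gguf import GGUFWriter

# Assumed key string; upstream defines a constant for this.
NEXTN_PREDICT_LAYERS_KEY = "{arch}.nextn_predict_layers"

class WriterSketch(GGUFWriter):
    """Illustrative only: upstream adds this method to GGUFWriter itself."""
    def add_num_nextn_predict_layers(self, count: int) -> None:
        # one uint32 per model: how many NextN/MTP prediction layers
        # follow the main transformer stack
        self.add_uint32(NEXTN_PREDICT_LAYERS_KEY.format(arch=self.arch), count)

# Conversion-script side (field name as found in GLM 4.5 HF configs):
# self.gguf_writer.add_num_nextn_predict_layers(self.hparams["num_nextn_predict_layers"])
```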

3. Added FIM tokens for GLM4_MOE

- Added GLM-4.5's FIM (fill-in-the-middle) tokens to llama-vocab.cpp (a prompt-assembly sketch follows this list):
  - <|code_prefix|> for FIM_PRE
  - <|code_suffix|> for FIM_SUF
  - <|code_middle|> for FIM_MID
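
For context, a FIM prompt built from these tokens might be assembled as below. The prefix/suffix/middle (PSM) ordering is an assumption based on common FIM conventions, not something this change prescribes; consult the model card for the authoritative format.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # assumed PSM ordering: the model generates the span that belongs
    # between prefix and suffix after the <|code_middle|> sentinel
    return f"<|code_prefix|>{prefix}<|code_suffix|>{suffix}<|code_middle|>"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```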

4. Removed manual NextN tensor handling

- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the standard tensor mapping system (see the sketch below)
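
A sketch of how a NextN tensor now resolves through the generic mapping system; the architecture member, block count, and expected output are assumptions for illustration.

```python
from gguf.constants import MODEL_ARCH
from gguf.tensor_mapping import get_tensor_name_map

# hypothetical GLM 4.5 Air-style layout: 46 transformer blocks + 1 NextN block
name_map = get_tensor_name_map(MODEL_ARCH.GLM4_MOE, n_blocks=47)

# an HF NextN tensor name maps to its GGUF name like any other tensor
gguf_name = name_map.get_name("model.layers.46.eh_proj.weight",
                              try_suffixes=(".weight", ".bias"))
print(gguf_name)  # e.g. "blk.46.nextn.eh_proj.weight" (assumed)
```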

* glm 4.5 update tensor names

* model: glm 4.5 apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model: glm 4.5 apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* model: glm 4.5 apply suggestions from code review

* Apply suggestions from code review

* patch broken chat template

* typings fix

* add TENSOR_SKIP flag

Co-authored-by: Diego Devesa <slarengh@gmail.com>

* Update src/llama-model-loader.h

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-08-04 20:29:25 +02:00
templates model: support GLM 4.5 family of models (#14939) 2025-08-04 20:29:25 +02:00
.editorconfig gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
ggml-vocab-aquila.gguf Work on the BPE tokenizer (#3252) 2023-10-03 09:16:26 +02:00
ggml-vocab-baichuan.gguf Add more tokenizer tests (#3742) 2023-10-24 09:17:17 +02:00
ggml-vocab-bert-bge.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-bert-bge.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-bert-bge.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-command-r.gguf command-r : add BPE pre-tokenization (#7063) 2024-05-05 08:19:30 +03:00
ggml-vocab-command-r.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-command-r.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-deepseek-coder.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-deepseek-coder.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-deepseek-coder.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-deepseek-llm.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-deepseek-llm.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-deepseek-llm.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-falcon.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-falcon.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-falcon.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-gpt-2.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-gpt-2.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-gpt-2.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-gpt-neox.gguf Add more tokenizer tests (#3742) 2023-10-24 09:17:17 +02:00
ggml-vocab-llama-bpe.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-llama-bpe.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-llama-bpe.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-llama-spm.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-llama-spm.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-llama-spm.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-mpt.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-mpt.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-mpt.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-nomic-bert-moe.gguf tests : improve UGM tokenizer test coverage (#13773) 2025-05-25 16:22:29 +02:00
ggml-vocab-phi-3.gguf Per token attributes (#7685) 2024-06-04 09:17:17 +02:00
ggml-vocab-phi-3.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-phi-3.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-qwen2.gguf llama : add BPE pre-tokenization for Qwen2 (#7114) 2024-05-08 15:06:43 +03:00
ggml-vocab-qwen2.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-qwen2.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-refact.gguf tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) 2024-05-04 08:32:32 +03:00
ggml-vocab-refact.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-refact.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-starcoder.gguf llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
ggml-vocab-starcoder.gguf.inp convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00
ggml-vocab-starcoder.gguf.out convert : allow partial update to the chkhsh pre-tokenizer list (#13847) 2025-05-30 12:24:37 +02:00