koboldcpp/src/models
3ali 2bf318fd2f
model : add JAIS-2 architecture support (#19488)
* model: add JAIS-2 architecture support

Add support for the JAIS-2 family of Arabic-English bilingual models
from Inception AI (https://huggingface.co/inceptionai/Jais-2-8B-Chat).

Architecture characteristics:
- LayerNorm (not RMSNorm) with biases
- ReLU² (ReLU squared) activation function
- Separate Q/K/V projections with biases
- Simple MLP without gate projection (up -> act -> down)
- RoPE positional embeddings
- GPT-2 BPE tokenizer
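The ungated MLP above (up -> act -> down, no gate projection) with ReLU² can be sketched as follows. This is an illustrative reference computation for a single token, not the llama.cpp graph code; all names and shapes here are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// ReLU squared: max(0, x)^2 — the JAIS-2 activation.
static float relu2(float x) {
    const float r = x > 0.0f ? x : 0.0f;
    return r * r;
}

// y = W_down * relu2(W_up * x + b_up) + b_down for one token.
// Note the biases on both projections and the absence of a gate branch.
std::vector<float> mlp_forward(const std::vector<std::vector<float>> &w_up,
                               const std::vector<float> &b_up,
                               const std::vector<std::vector<float>> &w_down,
                               const std::vector<float> &b_down,
                               const std::vector<float> &x) {
    std::vector<float> h(w_up.size());
    for (size_t i = 0; i < w_up.size(); ++i) {
        float acc = b_up[i];
        for (size_t j = 0; j < x.size(); ++j) acc += w_up[i][j] * x[j];
        h[i] = relu2(acc); // up -> act
    }
    std::vector<float> y(w_down.size());
    for (size_t i = 0; i < w_down.size(); ++i) {
        float acc = b_down[i];
        for (size_t j = 0; j < h.size(); ++j) acc += w_down[i][j] * h[j];
        y[i] = acc; // -> down
    }
    return y;
}
```

Because ReLU² squares the hidden activations, intermediate values can grow quickly, which is consistent with the note below about needing F32 accumulators for numerical stability.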

Supported model sizes:
- Jais-2-8B (32 layers, 26 heads, 3328 hidden)
- Jais-2-70B (68 layers, 56 heads, 7168 hidden)

Tested with quantizations: BF16, Q8_0, Q6_K, Q5_K_M, Q5_0, Q4_K_M, Q4_0, Q3_K_M, Q2_K

Note: JAIS-2 requires F32 precision accumulators for numerical stability
and uses standard attention (not flash attention) on CUDA backends.

* fix: run convert_hf_to_gguf_update.py for jais-2 tokenizer hash

* fix: use NEOX RoPE type for JAIS2

* fix: remove Q/K permutation (NEOX RoPE doesn't need it)
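The two fixes above are related: NEOX-style RoPE rotates the pair (x[i], x[i + d/2]) instead of the interleaved pair (x[2i], x[2i+1]) used by the original ("NORM") layout, so no Q/K weight permutation is needed at conversion time. A minimal sketch of the NEOX rotation for one head dimension (hypothetical helper, not the ggml kernel):

```cpp
#include <cmath>
#include <vector>

// NEOX-style RoPE applied in place to one head vector x of even size d.
// The rotated pair is (x[i], x[i + d/2]); the NORM layout would instead
// rotate (x[2i], x[2i+1]), which is why converters targeting it must
// permute the Q/K projection weights.
void rope_neox(std::vector<float> &x, int pos, float freq_base = 10000.0f) {
    const int d    = (int) x.size();
    const int half = d / 2;
    for (int i = 0; i < half; ++i) {
        const float theta = pos * std::pow(freq_base, -2.0f * i / d);
        const float c = std::cos(theta);
        const float s = std::sin(theta);
        const float x0 = x[i];
        const float x1 = x[i + half];
        x[i]        = x0 * c - x1 * s;
        x[i + half] = x0 * s + x1 * c;
    }
}
```

At position 0 every rotation angle is zero and the vector is unchanged; at any position the rotation preserves the norm of each pair, which is the property RoPE relies on.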

* fix: enable flash attention for JAIS2 (fixed by #19115)

* fix: add dedicated JAIS2 pre-tokenizer type and control vector support

- Add LLAMA_VOCAB_PRE_TYPE_JAIS2 with cascading whitespace regex
- Include original regex from tokenizer.json as comment
- Add build_cvec call for control vector support

* no longer necessary to override set_vocab

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-02-19 13:30:17 +01:00
afmoe.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
apertus.cpp
arcee.cpp
arctic.cpp
arwkv7.cpp
baichuan.cpp
bailingmoe.cpp
bailingmoe2.cpp
bert.cpp model : add support for JinaBertModel with non-gated ffn (#18475) 2026-01-01 18:38:51 +01:00
bitnet.cpp
bloom.cpp
chameleon.cpp
chatglm.cpp
codeshell.cpp
cogvlm.cpp graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00
cohere2-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
command-r.cpp
dbrx.cpp
deci.cpp
deepseek.cpp
deepseek2.cpp model: support GLM MoE DSA arch (NOTE: indexer is not yet supported) (#19460) 2026-02-13 14:56:53 +01:00
delta-net-base.cpp models : dedup Kimi Linear delta net implementation (#19668) 2026-02-19 08:15:17 +02:00
dots1.cpp
dream.cpp
ernie4-5-moe.cpp
ernie4-5.cpp models : move build_inp_out_ids outside loop (#17151) 2025-11-10 22:55:30 +01:00
exaone-moe.cpp model : add EXAONE MoE (#18543) 2026-01-13 23:28:38 +01:00
exaone.cpp
exaone4.cpp
falcon-h1.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
falcon.cpp
gemma-embedding.cpp graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00
gemma.cpp
gemma2-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
gemma3.cpp graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00
gemma3n-iswa.cpp graph : utilize ggml_build_forward_select() to avoid reallocations (#18898) 2026-01-23 18:22:34 +02:00
glm4-moe.cpp model: support GLM4V vision encoder (#18042) 2025-12-16 11:25:26 +01:00
glm4.cpp model: support GLM-OCR (#19677) 2026-02-18 17:51:40 +01:00
gpt2.cpp
gptneox.cpp
granite-hybrid.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
granite.cpp
grok.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
grovemoe.cpp
hunyuan-dense.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
hunyuan-moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
internlm2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
jais.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
jais2.cpp model : add JAIS-2 architecture support (#19488) 2026-02-19 13:30:17 +01:00
jamba.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
kimi-linear.cpp models : dedup Kimi Linear delta net implementation (#19668) 2026-02-19 08:15:17 +02:00
lfm2.cpp model : add tokenizer from LFM2.5-Audio-1.5B (#19687) 2026-02-19 09:54:48 +01:00
llada-moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
llada.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
llama-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
llama.cpp model : support for LlamaBidirectionalModel architecture (#18220) 2025-12-24 14:02:36 +01:00
maincoder.cpp model : Maincoder-1B support (#18534) 2026-01-02 20:11:59 +01:00
mamba-base.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
mamba.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
mimo2-iswa.cpp model: support MiMo-V2-Flash (#18328) 2025-12-24 23:07:08 +01:00
minicpm3.cpp mla : make the V tensor a view of K (#18986) 2026-01-22 22:09:01 +02:00
minimax-m2.cpp
mistral3.cpp model: support Ministral3 (#17644) 2025-12-01 12:26:52 +01:00
models.h model : add JAIS-2 architecture support (#19488) 2026-02-19 13:30:17 +01:00
modern-bert.cpp model : full modern bert support (#18330) 2026-02-19 08:52:21 +01:00
mpt.cpp
nemotron-h.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
nemotron.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
neo-bert.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmo.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmo2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmoe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
openai-moe-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
openelm.cpp models : remove unnecessary cont in openelm (#19289) 2026-02-03 14:20:57 +01:00
orion.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
pangu-embedded.cpp model : add openPangu-Embedded (#16941) 2025-11-05 10:28:58 +01:00
phi2.cpp
phi3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
plamo.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
plamo2.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
plamo3.cpp model : Plamo3 support (#17304) 2025-12-28 17:28:31 +01:00
plm.cpp mla : make the V tensor a view of K (#18986) 2026-01-22 22:09:01 +02:00
qwen.cpp
qwen2.cpp model : add KORMo model (#18032) 2025-12-15 18:51:43 +01:00
qwen2moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen2vl.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3next.cpp models : dedup Kimi Linear delta net implementation (#19668) 2026-02-19 08:15:17 +02:00
qwen3vl-moe.cpp graph : utilize ggml_build_forward_select() to avoid reallocations (#18898) 2026-01-23 18:22:34 +02:00
qwen3vl.cpp graph : utilize ggml_build_forward_select() to avoid reallocations (#18898) 2026-01-23 18:22:34 +02:00
qwen35.cpp models : dedup qwen35 graphs (#19660) 2026-02-19 08:17:49 +02:00
qwen35moe.cpp models : dedup qwen35 graphs (#19660) 2026-02-19 08:17:49 +02:00
refact.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
rnd1.cpp models : Added support for RND1 Diffusion Language Model (#17433) 2025-11-24 14:16:56 +08:00
rwkv6-base.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
rwkv6.cpp
rwkv6qwen2.cpp
rwkv7-base.cpp models : deduplicate delta-net graphs for Qwen family (#19597) 2026-02-16 14:35:04 +02:00
rwkv7.cpp
seed-oss.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
smallthinker.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
smollm3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
stablelm.cpp
starcoder.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
starcoder2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
step35-iswa.cpp model : support Step3.5-Flash (#19283) 2026-02-06 21:06:14 +01:00
t5-dec.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
t5-enc.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
wavtokenizer-dec.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
xverse.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00