koboldcpp/src
| File | Last commit | Date |
|------|-------------|------|
| CMakeLists.txt | memory : Hybrid recurrent cache (#13979) | 2025-06-19 08:08:14 +03:00 |
| llama-adapter.cpp | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| llama-adapter.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-arch.cpp | llama : minor coding style fix for smollm3 (#14605) | 2025-07-10 10:00:20 +03:00 |
| llama-arch.h | llama : support Jamba hybrid Transformer-Mamba models (#7531) | 2025-07-09 14:59:57 -04:00 |
| llama-batch.cpp | batch : add optional for sequential equal split (#14511) | 2025-07-04 09:08:59 +03:00 |
| llama-batch.h | batch : add optional for sequential equal split (#14511) | 2025-07-04 09:08:59 +03:00 |
| llama-chat.cpp | model : fix hunyuan moe chat template (#14584) | 2025-07-08 18:29:29 +02:00 |
| llama-chat.h | model : add hunyuan moe (#14425) | 2025-07-08 11:24:06 +03:00 |
| llama-context.cpp | kv-cells : fix tracking of seq_pos (#14339) | 2025-06-23 12:27:35 +03:00 |
| llama-context.h | memory : rename interface to llama_memory_context_i (#14296) | 2025-06-21 08:03:46 +03:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-grammar.cpp | server: streaming of tool calls and thoughts when --jinja is on (#12379) | 2025-05-25 01:48:08 +01:00 |
| llama-grammar.h | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama-graph.cpp | llama : remove llm_graph_input_one (#14603) | 2025-07-09 23:09:28 +02:00 |
| llama-graph.h | llama : remove llm_graph_input_one (#14603) | 2025-07-09 23:09:28 +02:00 |
| llama-hparams.cpp | llama : initial Mamba-2 support (#9126) | 2025-07-02 13:10:24 -04:00 |
| llama-hparams.h | llama : initial Mamba-2 support (#9126) | 2025-07-02 13:10:24 -04:00 |
| llama-impl.cpp | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| llama-impl.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-unified-iswa.cpp | batch : add optional for sequential equal split (#14511) | 2025-07-04 09:08:59 +03:00 |
| llama-kv-cache-unified-iswa.h | kv-cache : use ggml_set_rows (#14285) | 2025-07-03 10:53:35 +03:00 |
| llama-kv-cache-unified.cpp | batch : add n_used count (#14512) | 2025-07-04 09:04:59 +03:00 |
| llama-kv-cache-unified.h | kv-cache : use ggml_set_rows (#14285) | 2025-07-03 10:53:35 +03:00 |
| llama-kv-cells.h | kv-cache : use ggml_set_rows (#14285) | 2025-07-03 10:53:35 +03:00 |
| llama-memory-hybrid.cpp | batch : add optional for sequential equal split (#14511) | 2025-07-04 09:08:59 +03:00 |
| llama-memory-hybrid.h | kv-cache : use ggml_set_rows (#14285) | 2025-07-03 10:53:35 +03:00 |
| llama-memory-recurrent.cpp | llama : support Jamba hybrid Transformer-Mamba models (#7531) | 2025-07-09 14:59:57 -04:00 |
| llama-memory-recurrent.h | memory : rename interface to llama_memory_context_i (#14296) | 2025-06-21 08:03:46 +03:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-mmap.cpp | llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013) | 2025-06-05 11:57:42 +02:00 |
| llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| llama-model-loader.cpp | llama : support multiple classifier outputs and labels (#13940) | 2025-06-06 09:03:25 +02:00 |
| llama-model-loader.h | llama : add option to override model tensor buffers (#11397) | 2025-04-02 14:52:01 +02:00 |
| llama-model-saver.cpp | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | llama : remove llm_graph_input_one (#14603) | 2025-07-09 23:09:28 +02:00 |
| llama-model.h | llama : support Jamba hybrid Transformer-Mamba models (#7531) | 2025-07-09 14:59:57 -04:00 |
| llama-quant.cpp | model : gemma3n text-only (#14400) | 2025-06-26 20:34:02 +03:00 |
| llama-quant.h | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | sampling : make sure samplers return at least 1 token (#13822) | 2025-05-27 12:07:52 +03:00 |
| llama-sampling.h | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-vocab.cpp | model : add skt/A.X-4.0 model vocabulary (#14589) | 2025-07-09 11:22:31 +03:00 |
| llama-vocab.h | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama.cpp | llama : add thread safety test (#14035) | 2025-06-16 08:11:43 -07:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | build : suppress gcc15 compile warnings (#14261) | 2025-06-19 14:49:48 +02:00 |
| unicode.h | unicode : improve naming style (#10838) | 2024-12-16 12:31:45 +02:00 |
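For orientation, here is a minimal sketch of how these translation units fit together at the level of the public C API in llama.h (vocab and model handling in llama-vocab.cpp and llama-model*.cpp, context and KV-cache state in llama-context.cpp and llama-kv-cache-*.cpp, batching and graph building in llama-batch.cpp and llama-graph.cpp, samplers in llama-sampling.cpp). The API calls are the ones exposed by llama.h as of these commits; the model path `model.gguf` and the prompt are placeholders, and error handling is reduced to early returns.

```cpp
// Minimal end-to-end sketch: load a model, tokenize a prompt, run one decode,
// and greedily sample the next token.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main() {
    llama_backend_init(); // backend/device setup (llama.cpp, llama-impl.cpp)

    // llama-model-loader.cpp + llama-model.cpp: read the GGUF (via llama-mmap.cpp)
    // and build the model; "model.gguf" is a placeholder path
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (!model) { return 1; }

    // llama-context.cpp + llama-kv-cache-*.cpp: per-inference state and memory
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_init_from_model(model, cparams);
    if (!ctx) { llama_model_free(model); return 1; }

    // llama-vocab.cpp: tokenize the prompt
    const llama_vocab * vocab = llama_model_get_vocab(model);
    const std::string prompt = "Hello";
    std::vector<llama_token> tokens(prompt.size() + 8);
    const int32_t n = llama_tokenize(vocab, prompt.c_str(), (int32_t) prompt.size(),
                                     tokens.data(), (int32_t) tokens.size(),
                                     /*add_special=*/true, /*parse_special=*/false);
    if (n < 0) { return 1; }
    tokens.resize(n);

    // llama-batch.cpp + llama-graph.cpp: build and run the compute graph
    if (llama_decode(ctx, llama_batch_get_one(tokens.data(), n)) != 0) { return 1; }

    // llama-sampling.cpp: pick the next token (greedy chain, for brevity)
    llama_sampler * smpl = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(smpl, llama_sampler_init_greedy());
    const llama_token next = llama_sampler_sample(smpl, ctx, -1);
    printf("next token id: %d\n", next);

    llama_sampler_free(smpl);
    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```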