koboldcpp/examples
Jonathan Graehl 5cdb27e091
finetune: SGD optimizer, more CLI args (#13873)
* examples/finetune: add -opt SGD (stochastic gradient descent) as a memory-saving optimizer

add unit-tested GGML_OPT_OPTIMIZER_SGD to ggml; it avoids allocating the
m and v moment tensors that adamw requires.
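
for intuition, a minimal sketch of the update this enables (illustrative
C++, not the actual ggml kernel):

```cpp
#include <cstdint>

// sketch of an SGD step with decoupled weight decay: no m/v moment
// tensors; only the parameters x and their gradients g are touched
static void sgd_step(float * x, const float * g, int64_t n, float alpha, float wd) {
    for (int64_t i = 0; i < n; ++i) {
        // equivalent to x[i] -= alpha * (g[i] + wd * x[i])
        x[i] = x[i] * (1.0f - alpha * wd) - alpha * g[i];
    }
}
```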

support the finetune.cpp arg -opt SGD (or sgd); the default remains adamw, as before.

llama 3.2-1b F32 result (finetuning on 100 lines of wikipedia): observed
11 GB of GPU RAM (41 sec/epoch) with SGD vs. 19 GB (55 sec/epoch) with
adamw. the ~8 GB saved is roughly the cost of the two extra F32 moment
tensors adamw keeps per trained parameter.

(with the same GPU memory, adamw can only fit a batch/context of 512
before OOM, reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD converges more slowly but comes out ahead, fitting a batch/context of
1728 before OOM (note especially the better validation performance):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or with a large enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')

the -lr-half (half-life) option is useful with SGD to avoid oscillation
or very slow underdamped learning (it makes setting -lr more forgiving).
the terminal -lr is for now set via -lr-halvings, i.e. if you want at
most 1/8 the initial -lr, set -lr-halvings 3.
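
a sketch of that schedule as described (hypothetical helper, assuming the
half-life is measured in epochs; the real logic lives in the finetune code):

```cpp
#include <algorithm>
#include <cmath>

// -lr-half sets the half-life; -lr-halvings floors the decay at
// lr0 / 2^halvings (hypothetical helper, not the actual code)
static float lr_at_epoch(float lr0, float epoch, float lr_half, int lr_halvings) {
    const float decayed = lr0 * std::pow(0.5f, epoch / lr_half);
    const float lr_min  = lr0 * std::pow(0.5f, (float) lr_halvings);
    return std::max(decayed, lr_min);
}
```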

note: objective loss is not necessarily directly comparable between adamw
and sgd; check perplexity or accuracy, or compare relative improvements,
when judging convergence.
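
(perplexity is just the exponential of the mean cross-entropy loss in
nats, so it is cheap to derive from what is already reported:)

```cpp
#include <cmath>

// perplexity from mean cross-entropy loss (nats)
static double perplexity(double mean_loss) {
    return std::exp(mean_loss);
}
```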

new finetune args: -wd 1e-9 enables weight decay in sgd or adamw, and
-epochs N caps the number of epochs (default 2, as before).

caching (1 - wd*alpha) in the 'adamw' opt struct gave no noticeable perf
benefit, so it is disabled (it is still done for the new SGD, though).
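
for contrast, a sketch of the adamw step showing where that loop-invariant
factor appears (illustrative, not the ggml kernel):

```cpp
#include <cmath>
#include <cstdint>

// adamw with decoupled weight decay; keep = (1 - alpha*wd) is the
// loop-invariant factor that caching would hoist into the opt struct
static void adamw_step(float * x, const float * g, float * m, float * v,
                       int64_t n, float alpha, float beta1, float beta2,
                       float eps, float wd, int64_t t) {
    const float keep = 1.0f - alpha * wd;
    const float b1c  = 1.0f - std::pow(beta1, (float) t); // bias correction
    const float b2c  = 1.0f - std::pow(beta2, (float) t);
    for (int64_t i = 0; i < n; ++i) {
        m[i] = beta1 * m[i] + (1.0f - beta1) * g[i];
        v[i] = beta2 * v[i] + (1.0f - beta2) * g[i] * g[i];
        x[i] = keep * x[i] - alpha * (m[i] / b1c) / (std::sqrt(v[i] / b2c) + eps);
    }
}
```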

since optimizer memory is pre-allocated, the ggml_opt_get_optimizer_params
callback could probably switch between SGD and AdamW each epoch, though it
would need to use adamw for the first epoch (unconfirmed; there is no
cmdline arg to set such a policy yet).
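
a hedged sketch of what such a per-epoch callback could look like (struct
and field names per ggml-opt.h after this change; the policy itself is
hypothetical):

```cpp
#include <cstdint>

#include "ggml-opt.h"

// userdata is assumed to point at an epoch counter owned by the training
// loop; only the per-optimizer params are set here -- actually switching
// optimizer type per epoch would need a policy knob that does not exist yet
static ggml_opt_optimizer_params params_for_epoch(void * userdata) {
    const int64_t epoch = *(const int64_t *) userdata;
    ggml_opt_optimizer_params p = ggml_opt_get_default_optimizer_params(nullptr);
    p.adamw.alpha /= (float) (1 + epoch); // e.g. decay the step size per epoch
    p.sgd.alpha   /= (float) (1 + epoch);
    return p;
}
```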

test-opt checks adamw as before and now also sgd (except for a few tests
disabled for sgd only; these probably just need their values logged and
alternate reference values added). tolerance on the 'regression' test is
broader for sgd, so we don't need many more epochs.

* Vulkan: Implement GGML_OP_OPT_STEP_SGD

* tests: Fix OPT_STEP_SGD test-backend-ops

* SGD op params store the weight decay itself, not 1 - alpha*wd

* minor + cosmetic changes

* fix vulkan sgd

* try CI fix

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-08-14 12:03:57 +02:00
| Name | Last commit | Date |
| --- | --- | --- |
| batched | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched.swift | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| diffusion | Add LLaDA 8b Diffusion model (#14771) | 2025-07-31 19:49:09 +08:00 |
| embedding | tests : update for LLAMA_SET_ROWS=1 (#14961) | 2025-07-30 15:12:02 +03:00 |
| eval-callback | eval-callback : check for empty input (#14539) | 2025-07-05 07:18:09 +03:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gritlm | llama : rework embeddings logic (#14208) | 2025-06-16 14:14:00 +03:00 |
| jeopardy | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| llama.android | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| llama.swiftui | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| lookahead | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| lookup | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| parallel | parallel : add option for different RNG seeds (#14757) | 2025-07-18 17:33:41 +03:00 |
| passkey | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| retrieval | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00 |
| save-load-state | tests : update for LLAMA_SET_ROWS=1 (#14961) | 2025-07-30 15:12:02 +03:00 |
| simple | fix: check model pointer validity before use (#13631) | 2025-05-19 13:25:41 +03:00 |
| simple-chat | simple-chat : fix context-exceeded condition (#14494) | 2025-07-02 14:12:07 +03:00 |
| simple-cmake-pkg | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| speculative | common : add --override-tensor-draft, --cpu-moe-draft and --n-cpu-moe-draft parameters (#15191) | 2025-08-13 12:44:40 +02:00 |
| speculative-simple | common : add --override-tensor-draft, --cpu-moe-draft and --n-cpu-moe-draft parameters (#15191) | 2025-08-13 12:44:40 +02:00 |
| sycl | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| training | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| chat-persistent.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| chat-vicuna.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| chat.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| CMakeLists.txt | Support diffusion models: Add Dream 7B (#14644) | 2025-07-16 20:03:51 +08:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| json_schema_to_grammar.py | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00 |
| llama.vim | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| pydantic_models_to_grammar.py | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00 |
| pydantic_models_to_grammar_examples.py | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| reason-act.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| server-llama2-13B.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00 |
| ts-type-to-grammar.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |