Mirror of https://github.com/LostRuins/koboldcpp.git, synced 2026-05-07 00:41:50 +00:00
* feat: Add granite-docling conversion using the trillion pretokenizer
* feat: Add granite-docling vocab pre enum
* fix: Use granite-docling pre
* feat: Add clip_is_idefics3
* feat: Allow multi-token boundary sequences for image templating
* feat: Add tiling support for idefics3 in clip.cpp
  This should likely be moved into llava_uhd::get_slice_instructions, but for now this avoids disrupting the logic there.
* feat: Partial support for full templating for idefics3 in mtmd
  There are still errors encoding some of the image chunks, but the token sequence now matches transformers _almost_ perfectly, except for the double newline before the global image, which shows up as two consecutive newline tokens instead of a single double-newline token. I think this is happening because the blocks are tokenized separately and then concatenated.
* feat: Fully working image preprocessing for idefics3 with resize and slicing
* feat: Parse the preprocessor config's longest side and add it to the mmproj hparams
* fix: Use the longest side instead of size * scale_factor
  For Granite Docling these come out to the same value, but that was just a coincidence.
* fix: Allow batch encoding and remove clip_is_idefics3
* refactor: Remove unnecessary conditionals for empty token vectors
* refactor: Use image_manipulation util
* add test model

Branch: gabe-l-hart/GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
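The "two consecutive newline tokens instead of a single double-newline token" issue described above can be illustrated with a toy greedy tokenizer. This is a minimal sketch with a hypothetical vocabulary, not llama.cpp's actual tokenizer: when text blocks are tokenized separately and the results concatenated, a longest-match tokenizer never gets the chance to merge a newline at the end of one block with a newline at the start of the next.

```python
# Toy illustration (hypothetical vocab, NOT llama.cpp's tokenizer) of why
# tokenizing blocks separately and concatenating the results can differ
# from tokenizing the joined text in one pass.

VOCAB = {"<img>": 0, "\n": 1, "\n\n": 2, "row": 3}

def tokenize(text):
    """Greedy longest-match tokenization over the toy vocab."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):  # try longest match first
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"untokenizable input at offset {i}")
    return tokens

# Blocks tokenized separately, then concatenated: two "\n" tokens.
separate = tokenize("row\n") + tokenize("\n<img>")
# The joined text tokenized in one pass: a single "\n\n" token.
joined = tokenize("row\n\n<img>")

print(separate)  # ['row', '\n', '\n', '<img>']
print(joined)    # ['row', '\n\n', '<img>']
```

The two token sequences decode to identical text, which is why the mismatch against transformers is subtle: it only shows up when comparing token IDs, not strings.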
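The tiling work described above (resize plus slicing, bounded by the preprocessor config's longest side) can be sketched roughly as follows. This is a simplified illustration, not the clip.cpp implementation: the parameter names and default values (longest_edge, tile) are assumptions for the example, and the real preprocessing involves additional constraints.

```python
import math

def slice_grid(width, height, longest_edge=1540, tile=512):
    """Illustrative idefics3-style slicing: downscale so the longest side
    fits within longest_edge, then count tile-sized slices per axis.
    A downscaled global image accompanies the slices in the real pipeline."""
    # Downscale (never upscale) so the longest side fits within longest_edge.
    scale = min(1.0, longest_edge / max(width, height))
    w, h = round(width * scale), round(height * scale)
    # Number of tiles per axis, via ceiling division.
    cols = math.ceil(w / tile)
    rows = math.ceil(h / tile)
    return rows, cols

print(slice_grid(2000, 1000))  # wide image: multiple tiles per row
print(slice_grid(400, 300))    # small image: a single tile
```

Reading longest_edge directly from the preprocessor config, rather than deriving it as size * scale_factor, matters because the two only agree for some models; for Granite Docling the equality was coincidental, as the commit notes.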
| File |
|---|
| CMakeLists.txt |
| llama-adapter.cpp |
| llama-adapter.h |
| llama-arch.cpp |
| llama-arch.h |
| llama-batch.cpp |
| llama-batch.h |
| llama-chat.cpp |
| llama-chat.h |
| llama-context.cpp |
| llama-context.h |
| llama-cparams.cpp |
| llama-cparams.h |
| llama-grammar.cpp |
| llama-grammar.h |
| llama-graph.cpp |
| llama-graph.h |
| llama-hparams.cpp |
| llama-hparams.h |
| llama-impl.cpp |
| llama-impl.h |
| llama-io.cpp |
| llama-io.h |
| llama-kv-cache-iswa.cpp |
| llama-kv-cache-iswa.h |
| llama-kv-cache.cpp |
| llama-kv-cache.h |
| llama-kv-cells.h |
| llama-memory-hybrid.cpp |
| llama-memory-hybrid.h |
| llama-memory-recurrent.cpp |
| llama-memory-recurrent.h |
| llama-memory.cpp |
| llama-memory.h |
| llama-mmap.cpp |
| llama-mmap.h |
| llama-model-loader.cpp |
| llama-model-loader.h |
| llama-model-saver.cpp |
| llama-model-saver.h |
| llama-model.cpp |
| llama-model.h |
| llama-quant.cpp |
| llama-quant.h |
| llama-sampling.cpp |
| llama-sampling.h |
| llama-vocab.cpp |
| llama-vocab.h |
| llama.cpp |
| unicode-data.cpp |
| unicode-data.h |
| unicode.cpp |
| unicode.h |