koboldcpp/examples/llava
Last updated: 2025-03-15 17:49:49 +08:00
| File | Last commit message | Date |
|------|---------------------|------|
| clip-quantize-cli.cpp | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 2025-02-05 10:45:40 +03:00 |
| clip.cpp | fixed a clip processing bug | 2025-03-15 17:49:49 +08:00 |
| clip.h | gemma3 vision works, but is using more tokens than expected - may need resizing | 2025-03-13 00:31:16 +08:00 |
| convert_image_encoder_to_gguf.py | llava: add big-endian conversion for image encoder (#12218) | 2025-03-06 09:33:21 +01:00 |
| gemma3-cli.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| gemma3_convert_encoder_to_gguf.py | llama : Add Gemma 3 support (+ experimental vision capability) (#12343) | 2025-03-12 09:30:24 +01:00 |
| glmedge-convert-image-encoder-to-gguf.py | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| glmedge-surgery.py | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| llava-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava.cpp | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| llava.h | llava : support MiniCPM-V-2.5 (#7599) | 2024-08-09 13:33:53 +03:00 |
| llava_surgery.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| llava_surgery_v2.py | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| minicpmv-cli.cpp | clip : bring back GPU support (#12322) | 2025-03-11 09:20:16 +01:00 |
| minicpmv-convert-image-encoder-to-gguf.py | llava : fix bug in minicpm-v code (#11513) | 2025-03-10 10:33:24 +02:00 |
| minicpmv-surgery.py | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| quantclip.cpp | better quant clip | 2024-08-18 22:15:59 +08:00 |
| qwen2_vl_surgery.py | llava : Allow locally downloaded models for QwenVL (#10833) | 2024-12-15 21:43:25 +01:00 |
| qwen2vl-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| README-gemma3.md | llama : Add Gemma 3 support (+ experimental vision capability) (#12343) | 2025-03-12 09:30:24 +01:00 |
| README-glmedge.md | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| README-granitevision.md | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| README-minicpmo2.6.md | llava : fix bug in minicpm-v code (#11513) | 2025-03-10 10:33:24 +02:00 |
| README-minicpmv2.5.md | llava : fix bug in minicpm-v code (#11513) | 2025-03-10 10:33:24 +02:00 |
| README-minicpmv2.6.md | llava : fix bug in minicpm-v code (#11513) | 2025-03-10 10:33:24 +02:00 |
| README-quantize.md | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 2025-02-05 10:45:40 +03:00 |
| requirements.txt | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |