| File | Last commit message | Last commit date |
| --- | --- | --- |
| android | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| clip.cpp | You'll never take us alive | 2025-01-09 11:27:06 +08:00 |
| clip.h | temporarily make qwenv2l use clip on cpu for vulkan and macos | 2024-12-21 09:15:31 +08:00 |
| convert_image_encoder_to_gguf.py | ci : reduce severity of unused Pyright ignore comments (#9697) | 2024-09-30 14:13:16 -04:00 |
| llava-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava.h | llava : support MiniCPM-V-2.5 (#7599) | 2024-08-09 13:33:53 +03:00 |
| llava_surgery.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| llava_surgery_v2.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| minicpmv-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| minicpmv-convert-image-encoder-to-gguf.py | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| minicpmv-surgery.py | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| quantclip.cpp | better quant clip | 2024-08-18 22:15:59 +08:00 |
| qwen2_vl_surgery.py | llava : Allow locally downloaded models for QwenVL (#10833) | 2024-12-15 21:43:25 +01:00 |
| qwen2vl-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| README-minicpmv2.5.md | Fix minicpm example directory (#9111) | 2024-08-27 14:33:08 +02:00 |
| README-minicpmv2.6.md | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| requirements.txt | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |