# llama.cpp/example/embedding
This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.
## Quick Start
To get started right away, run the following command, making sure to use the correct path for the model you have:
Unix-based systems (Linux, macOS, etc.):
```bash
./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null
```
Windows:
```powershell
embedding.exe -m ./path/to/model --log-disable -p "Hello World!" 2>$null
```
The above command will output space-separated float values.
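Because the embedding is printed to stdout as plain space-separated floats, it is easy to consume from another program. The snippet below is a minimal sketch of parsing that output into a list of floats; it uses a hard-coded stand-in string instead of actually invoking the `embedding` binary, and the values shown are illustrative, not real model output.

```python
# Sketch: parse the space-separated float output of the embedding example.
# `sample_output` stands in for the text the binary would print to stdout;
# in practice you would capture it with subprocess.run or a shell pipe.
sample_output = "0.1034 -0.2571 0.0088 0.4412"

embedding = [float(x) for x in sample_output.split()]

print(len(embedding))   # dimensionality of this toy vector
print(embedding[0])     # first component as a Python float
```

The same parsing works unchanged on real output, since both the Unix and Windows commands above redirect diagnostics away from stdout, leaving only the vector itself.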