# llama.cpp/example/main-cmake-pkg
This program builds [llama-cli](../main) using a relocatable CMake package. It serves as an example of using the `find_package()` CMake command to conveniently include [llama.cpp](https://github.com/ggerganov/llama.cpp) in projects which live outside of the source tree.
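
For reference, the consuming project's `CMakeLists.txt` boils down to a `find_package()` call plus a link against the imported target. Below is a minimal sketch; the project name, source file name, and the `llama` target name are assumptions here, and the `CMakeLists.txt` in this directory is the authoritative version.

```cmake
cmake_minimum_required(VERSION 3.12)
project(my_llama_app CXX)

# Locate the installed Llama package; CMAKE_PREFIX_PATH must point at its lib/cmake/Llama directory.
find_package(Llama REQUIRED)

add_executable(my_llama_app main.cpp)

# Link against the imported target exported by the Llama package (assumed here to be named `llama`).
target_link_libraries(my_llama_app PRIVATE llama)
```
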
## Building
Because this example lives "outside of the source tree", llama.cpp must first be built and installed using CMake. An example is provided below, but please see the [llama.cpp build instructions](../..) for more detail.
### Considerations
When hardware acceleration libraries are used (e.g. CUDA, Metal, CLBlast, etc.), CMake must be able to locate the associated CMake packages. In the example below, notice that when building _main-cmake-pkg_ the `CMAKE_PREFIX_PATH` includes the Llama CMake package location _in addition to_ the CLBlast package that was used when compiling _llama.cpp_.
### Build llama.cpp and install to C:\LlamaCPP directory
In this case, CLBlast was already installed so the CMake package is referenced in `CMAKE_PREFIX_PATH`.
```cmd
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=OFF -DLLAMA_CLBLAST=ON -DCMAKE_PREFIX_PATH=C:/CLBlast/lib/cmake/CLBlast -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release
cmake --install build --prefix C:/LlamaCPP
```
### Build llama-cli-cmake-pkg
```cmd
cd ..\examples\main-cmake-pkg
cmake -B build -DBUILD_SHARED_LIBS=OFF -DCMAKE_PREFIX_PATH="C:/CLBlast/lib/cmake/CLBlast;C:/LlamaCPP/lib/cmake/Llama" -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release
cmake --install build --prefix C:/MyLlamaApp
```
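
After installation, the program can be launched like any other llama.cpp build. A hypothetical invocation, assuming the executable is installed under `C:\MyLlamaApp\bin` (the exact binary name depends on the target defined in this example's `CMakeLists.txt`) and a GGUF model is available locally:

```cmd
rem Adjust the executable name and model path to match your install.
C:\MyLlamaApp\bin\llama-cli.exe -m C:\Models\model.gguf -p "Hello, world" -n 64
```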