llama: use FA + max. GPU layers by default (#15434)

* llama: use max. GPU layers by default, auto -fa

* ggml-backend: abort instead of segfault
Authored by Johannes Gäßler on 2025-08-30 16:32:10 +02:00; committed by GitHub
parent 38ad381f9f
commit e81b8e4b7f
19 changed files with 235 additions and 72 deletions

@@ -14,6 +14,7 @@ def create_server():
     server.model_draft = download_file(MODEL_DRAFT_FILE_URL)
     server.draft_min = 4
     server.draft_max = 8
+    server.fa = "off"
 @pytest.fixture(autouse=True)
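The test fixture above pins `server.fa = "off"` because this commit flips two defaults: all model layers are offloaded to the GPU unless the user says otherwise, and FlashAttention becomes automatic. A hedged sketch of the resulting CLI behavior (the `-ngl` and `-fa` flags exist in llama.cpp; the exact accepted values and defaults shown here are assumptions inferred from the commit message):

```shell
# Before this commit, GPU offload and FlashAttention had to be requested explicitly:
llama-cli -m model.gguf -ngl 99 -fa

# After it, both default on, so a plain invocation behaves like the line above:
llama-cli -m model.gguf

# -fa is assumed to take on/off/auto, so tests (and users) can opt out deterministically:
llama-cli -m model.gguf -fa off
```

Pinning `fa` to `"off"` in the server test fixture keeps the draft-model tests reproducible regardless of whether the backend under test supports FlashAttention.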