prima.cpp/examples
Latest commit: 03bf161eb6 by Douglas Hanley, 2024-02-13 14:06:58 +02:00

llama : support batched embeddings (#5466)

* batched embedding: pool outputs by sequence id. updated embedding example
* bring back non-causal attention
* embd : minor improvements
* llama : minor

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
baby-llama
batched
batched-bench
batched.swift
beam-search
benchmark
convert-llama2c-to-ggml
embedding | llama : support batched embeddings (#5466) | 2024-02-13 14:06:58 +02:00
export-lora
finetune
gguf
imatrix | Adding some imatrix tools (#5302) | 2024-02-04 10:39:58 +02:00
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main | main : ctrl+C print timing in non-interactive mode (#3873) | 2024-02-11 15:35:50 +02:00
main-cmake-pkg
parallel
passkey
perplexity
quantize
quantize-stats
save-load-state
server
simple
speculative
sycl
tokenize
train-text-from-scratch
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
gpt4all.sh
json-schema-to-grammar.py
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
Miku.sh
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
server-llama2-13B.sh