prima.cpp/ggml-cuda
Last updated: 2024-05-18 12:36:25 +02:00
| File | Last commit | Date |
|---|---|---|
| acc.cu | | |
| acc.cuh | | |
| arange.cu | | |
| arange.cuh | | |
| argsort.cu | ggml : mul_mat_id use the same tensor for all the experts (#6387) | 2024-04-03 16:07:05 +03:00 |
| argsort.cuh | | |
| binbcast.cu | ggml : group all experts in a single ggml_mul_mat_id (#6505) | 2024-04-18 15:18:48 +02:00 |
| binbcast.cuh | | |
| clamp.cu | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| clamp.cuh | | |
| common.cuh | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| concat.cu | | |
| concat.cuh | | |
| convert.cu | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| convert.cuh | llama : add Command R Plus support (#6491) | 2024-04-09 11:16:13 +03:00 |
| cpy.cu | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| cpy.cuh | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| dequantize.cuh | llama : add Command R Plus support (#6491) | 2024-04-09 11:16:13 +03:00 |
| diagmask.cu | | |
| diagmask.cuh | | |
| dmmv.cu | llama : add Command R Plus support (#6491) | 2024-04-09 11:16:13 +03:00 |
| dmmv.cuh | sync : ggml (#6351) | 2024-03-29 17:45:46 +02:00 |
| fattn-common.cuh | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| fattn-tile-f16.cu | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| fattn-tile-f16.cuh | CUDA: faster large batch FA without tensor cores (#7314) | 2024-05-17 18:54:52 +02:00 |
| fattn-tile-f32.cu | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| fattn-tile-f32.cuh | CUDA: faster large batch FA without tensor cores (#7314) | 2024-05-17 18:54:52 +02:00 |
| fattn-vec-f16.cu | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| fattn-vec-f16.cuh | CUDA: add FP32 FlashAttention vector kernel (#7188) | 2024-05-12 19:40:45 +02:00 |
| fattn-vec-f32.cu | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| fattn-vec-f32.cuh | CUDA: add FP32 FlashAttention vector kernel (#7188) | 2024-05-12 19:40:45 +02:00 |
| fattn.cu | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| fattn.cuh | ggml : add Flash Attention (#5021) | 2024-04-30 12:16:08 +03:00 |
| getrows.cu | | |
| getrows.cuh | | |
| im2col.cu | | |
| im2col.cuh | | |
| mmq.cu | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| mmq.cuh | | |
| mmvq.cu | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| mmvq.cuh | | |
| norm.cu | | |
| norm.cuh | | |
| pad.cu | | |
| pad.cuh | | |
| pool2d.cu | | |
| pool2d.cuh | | |
| quantize.cu | llama : add Command R Plus support (#6491) | 2024-04-09 11:16:13 +03:00 |
| quantize.cuh | llama : add Command R Plus support (#6491) | 2024-04-09 11:16:13 +03:00 |
| rope.cu | | |
| rope.cuh | | |
| scale.cu | Introduction of CUDA Graphs to LLama.cpp (#6766) | 2024-05-08 22:55:49 +02:00 |
| scale.cuh | | |
| softmax.cu | CUDA: deduplicate FlashAttention code (#7352) | 2024-05-18 12:36:25 +02:00 |
| softmax.cuh | | |
| sumrows.cu | | |
| sumrows.cuh | | |
| tsembd.cu | | |
| tsembd.cuh | | |
| unary.cu | feat: implemented sigmoid function (ggml/806) | 2024-05-11 15:38:34 +03:00 |
| unary.cuh | feat: implemented sigmoid function (ggml/806) | 2024-05-11 15:38:34 +03:00 |
| upscale.cu | ggml : add ggml_upscale_ext (ggml/814) | 2024-05-15 13:23:33 +03:00 |
| upscale.cuh | | |
| vecdotq.cuh | IQ1_M: 1.75 bpw quantization (#6302) | 2024-03-26 15:21:27 +01:00 |