prima.cpp/ggml-cuda
Justina Cho f5ef34e428 feat: implemented sigmoid function (ggml/806) 2024-05-11 15:38:34 +03:00

* added sigmoid function
* implemented metal kernel for sigmoid
* implemented cuda kernel for sigmoid
* added sigmoid unary op and incremented count
acc.cu
acc.cuh
arange.cu
arange.cuh
argsort.cu
argsort.cuh
binbcast.cu ggml : group all experts in a single ggml_mul_mat_id (#6505) 2024-04-18 15:18:48 +02:00
binbcast.cuh
clamp.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
clamp.cuh
common.cuh CUDA: generalize FP16 fattn vec kernel (#7061) 2024-05-09 14:32:02 +02:00
concat.cu
concat.cuh
convert.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
convert.cuh
cpy.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
cpy.cuh Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
dequantize.cuh
diagmask.cu
diagmask.cuh
dmmv.cu
dmmv.cuh
fattn.cu ggml : full ALiBi support (#7192) 2024-05-11 10:32:41 +03:00
fattn.cuh ggml : add Flash Attention (#5021) 2024-04-30 12:16:08 +03:00
getrows.cu
getrows.cuh
im2col.cu
im2col.cuh
mmq.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
mmq.cuh
mmvq.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
mmvq.cuh
norm.cu
norm.cuh
pad.cu
pad.cuh
pool2d.cu
pool2d.cuh
quantize.cu
quantize.cuh
rope.cu
rope.cuh
scale.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
scale.cuh
softmax.cu ggml : full ALiBi support (#7192) 2024-05-11 10:32:41 +03:00
softmax.cuh
sumrows.cu
sumrows.cuh
tsembd.cu
tsembd.cuh
unary.cu feat: implemented sigmoid function (ggml/806) 2024-05-11 15:38:34 +03:00
unary.cuh feat: implemented sigmoid function (ggml/806) 2024-05-11 15:38:34 +03:00
upscale.cu
upscale.cuh
vecdotq.cuh