Concedo | 0e2b031159 | colab cpus are too slow to run kokoro. swap back to outetts | 2025-08-24 16:20:01 +08:00
Concedo | 774a399068 | updated colab | 2025-08-24 16:07:58 +08:00
Concedo | e7eb6d3200 | increase default ctx size to 8k, rename usecublas to usecuda | 2025-07-13 18:27:42 +08:00
Concedo | dcf88d6e78 | Revert "make tts use gpu by default. use --ttscpu to disable" | 2025-06-08 17:08:04 +08:00
    This reverts commit 669f80265b.
Concedo | 669f80265b | make tts use gpu by default. use --ttscpu to disable | 2025-06-08 17:06:19 +08:00
Concedo | a80dfa5c10 | various minor fixes | 2025-06-08 01:11:42 +08:00
Concedo | 2142d6ba68 | Updated Colab to use internal downloader | 2025-06-01 11:41:45 +08:00
    fixed model command (+1 squashed commits)
    Squashed commits:
    [a4d8fd9f1] tryout new colab (+1 squashed commits)
    Squashed commits:
    [c97333d44] tryout new colab
Concedo | 59c02aa1a6 | embeddings model colab | 2025-04-05 10:30:47 +08:00
Concedo | 75e7902789 | add localtunnel fallback (+1 squashed commits) | 2025-03-26 17:35:59 +08:00
    Squashed commits:
    [ff0a63f6] add localtunnel fallback
Concedo | dbd8c680ba | allow remote saving to google drive | 2025-03-09 15:04:43 +08:00
Concedo | 6b7d2349a7 | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00
    added dpe colab (+8 squashed commit)
    Squashed commit:
    [b8362da4] updated lite
    [ed6c037d] move nsigma into the regular sampler stack
    [ac5f61c6] relative filepath fixed
    [05fe96ab] export template
    [ed0a5a3e] nix_example.md: refactor (#1401)
      * nix_example.md: add override example
      * nix_example.md: drop graphics example, already basic nixos knowledge
      * nix_example.md: format
      * nix_example.md: Vulkan is disabled on macOS
        Disabled in: 1ccd253acc
      * nix_examples.md: nixpkgs.config.cuda{Arches -> Capabilities}
        Fixes: https://github.com/LostRuins/koboldcpp/issues/1367
    [675c62f7] AutoGuess: Phi 4 (mini) (#1402)
    [4bf56982] phrasing
    [b8c0df04] Add Rep Pen to Top N Sigma sampler chain (#1397)
      - place after nsigma and before xtc (+3 squashed commit)
    Squashed commit:
    [87c52b97] disable VMM from HIP
    [ee8906f3] edit description
    [e85c0e69] Remove Unnecessary Rep Counting (#1394)
      * stop counting reps
      * fix range-based initializer
      * strike that - reverse it
Concedo | 5ee7cbe08c | add cydonia to colab | 2025-02-22 23:02:44 +08:00
Concedo | 03def285db | updated colab | 2025-01-23 00:13:55 +08:00
Concedo | 4d92b4e98e | updated readme and colab | 2025-01-14 00:31:52 +08:00
Concedo | dcfa1eca4e | Merge commit '017cc5f446863316d05522a87f25ec48713a9492' into concedo_experimental | 2025-01-08 23:15:21 +08:00
    # Conflicts:
    #   .github/ISSUE_TEMPLATE/010-bug-compilation.yml
    #   .github/ISSUE_TEMPLATE/019-bug-misc.yml
    #   CODEOWNERS
    #   examples/batched-bench/batched-bench.cpp
    #   examples/batched/batched.cpp
    #   examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
    #   examples/gritlm/gritlm.cpp
    #   examples/llama-bench/llama-bench.cpp
    #   examples/passkey/passkey.cpp
    #   examples/quantize-stats/quantize-stats.cpp
    #   examples/run/run.cpp
    #   examples/simple-chat/simple-chat.cpp
    #   examples/simple/simple.cpp
    #   examples/tokenize/tokenize.cpp
    #   ggml/CMakeLists.txt
    #   ggml/src/ggml-metal/CMakeLists.txt
    #   ggml/src/ggml-vulkan/CMakeLists.txt
    #   scripts/sync-ggml.last
    #   src/llama.cpp
    #   tests/test-autorelease.cpp
    #   tests/test-model-load-cancel.cpp
    #   tests/test-tokenizer-0.cpp
    #   tests/test-tokenizer-1-bpe.cpp
    #   tests/test-tokenizer-1-spm.cpp
Concedo | 1012281320 | updated colab | 2025-01-03 18:02:02 +08:00
Concedo | df7c2b9923 | renamed some labels | 2024-11-11 19:40:47 +08:00
Concedo | 90f5cd0f67 | wip logprobs data | 2024-10-30 00:59:34 +08:00
Concedo | efc6939294 | flashattn default true on colab | 2024-10-14 18:50:02 +08:00
Concedo | 1803382415 | updated colab | 2024-10-06 21:30:58 +08:00
Concedo | 1df850c95c | add magnum to colab models | 2024-07-30 21:13:29 +08:00
Concedo | a441c27cb5 | fixed broken link | 2024-07-16 01:00:16 +08:00
Concedo | 066e7ac540 | minor fixes: colab gpu backend, lite bugs, package python file with embd | 2024-07-15 17:36:03 +08:00
Concedo | 5b605d03ea | Merge branch 'upstream' into concedo_experimental | 2024-07-06 00:25:10 +08:00
    # Conflicts:
    #   .github/ISSUE_TEMPLATE/config.yml
    #   .gitignore
    #   CMakeLists.txt
    #   CONTRIBUTING.md
    #   Makefile
    #   README.md
    #   ci/run.sh
    #   common/common.h
    #   examples/main-cmake-pkg/CMakeLists.txt
    #   ggml/src/CMakeLists.txt
    #   models/ggml-vocab-bert-bge.gguf.inp
    #   models/ggml-vocab-bert-bge.gguf.out
    #   models/ggml-vocab-deepseek-coder.gguf.inp
    #   models/ggml-vocab-deepseek-coder.gguf.out
    #   models/ggml-vocab-deepseek-llm.gguf.inp
    #   models/ggml-vocab-deepseek-llm.gguf.out
    #   models/ggml-vocab-falcon.gguf.inp
    #   models/ggml-vocab-falcon.gguf.out
    #   models/ggml-vocab-gpt-2.gguf.inp
    #   models/ggml-vocab-gpt-2.gguf.out
    #   models/ggml-vocab-llama-bpe.gguf.inp
    #   models/ggml-vocab-llama-bpe.gguf.out
    #   models/ggml-vocab-llama-spm.gguf.inp
    #   models/ggml-vocab-llama-spm.gguf.out
    #   models/ggml-vocab-mpt.gguf.inp
    #   models/ggml-vocab-mpt.gguf.out
    #   models/ggml-vocab-phi-3.gguf.inp
    #   models/ggml-vocab-phi-3.gguf.out
    #   models/ggml-vocab-starcoder.gguf.inp
    #   models/ggml-vocab-starcoder.gguf.out
    #   requirements.txt
    #   requirements/requirements-convert_legacy_llama.txt
    #   scripts/check-requirements.sh
    #   scripts/pod-llama.sh
    #   src/CMakeLists.txt
    #   src/llama.cpp
    #   tests/test-rope.cpp
Concedo | 6b0756506b | improvements to model downloader and chat completions adapter loader | 2024-07-04 15:34:08 +08:00
Concedo | 4f369b0a0a | update colab | 2024-06-27 15:41:06 +08:00
Concedo | 967b6572a2 | try to use GPU for whisper | 2024-06-03 23:07:26 +08:00
Concedo | 5ebc532ca9 | update colab | 2024-06-03 14:55:12 +08:00
Concedo | 868446bd1a | replace sdconfig and hordeconfig | 2024-05-09 22:43:50 +08:00
Concedo | 640f195140 | add kobble tiny to readme | 2024-05-03 18:13:39 +08:00
Concedo | 69dcffa4ec | updated lite and colab | 2024-04-21 16:48:48 +08:00
Concedo | d54af7fa31 | updated swagger json link fix | 2024-04-09 14:55:27 +08:00
Concedo | 47c42fd45c | fix for mamba processing | 2024-03-13 13:27:46 +08:00
Concedo | 60d234550b | fix colab | 2024-03-12 20:09:49 +08:00
Concedo | a69bc44e7a | edit colab (+1 squashed commits) | 2024-03-12 15:24:53 +08:00
    Squashed commits:
    [c7ccb99d] update colab with llava
Concedo | 308f33fc00 | updated colab (+1 squashed commits) | 2024-03-08 19:19:14 +08:00
    Squashed commits:
    [d42c3848] update colab (+2 squashed commit)
    Squashed commit:
    [213b1d00] Revert "temporarily disable image gen on colab"
      This reverts commit f44df0e251.
    [af4e9803] Revert "remove for now"
      This reverts commit 5174f9de7b.
Concedo | 5174f9de7b | remove for now | 2024-03-08 00:15:33 +08:00
Concedo | 8ae4266bed | switch colab to q4_k_s | 2024-03-08 00:08:29 +08:00
Concedo | 410516f5b0 | apt update | 2024-03-07 23:56:59 +08:00
Concedo | fd9c7341b8 | added model to colab | 2024-03-07 21:14:03 +08:00
Concedo | f44df0e251 | temporarily disable image gen on colab | 2024-03-06 18:52:20 +08:00
Concedo | 5760bd010b | added clamped as a SD launch option | 2024-03-06 12:09:22 +08:00
Concedo | 59c5448ac8 | fixed colab (+1 squashed commits) | 2024-03-02 10:09:07 +08:00
    Squashed commits:
    [1d1c686f] updated colab and docs
Concedo | f3dbe0a192 | colab gguf | 2024-01-24 16:40:55 +08:00
Concedo | 14de08586e | added more compile flags to set apart the conda paths, and also for colab. updated readme for multitool | 2024-01-21 17:38:33 +08:00
Concedo | 1804238e3f | update colab | 2024-01-15 20:32:50 +08:00
Concedo | 5b2d93a1f8 | updated lite and colab, added logit bias support to lite | 2023-12-27 21:32:18 +08:00
Concedo | 4d6d967c10 | silence autoplay for colab | 2023-12-27 19:13:34 +08:00
Concedo | b75152e3e9 | added a proper quiet mode | 2023-11-28 21:20:51 +08:00
Concedo | 93e99179be | colab updated | 2023-11-09 13:49:06 +08:00