Commit graph

177 commits

Author | SHA1 | Message | Date
Daniel Han
29ed805a13 Update README.md 2025-09-15 01:42:59 -07:00
Daniel Han
db4f3cde14 Update README.md 2025-09-15 01:40:28 -07:00
Daniel Han
2f8baabd7a Update README.md 2025-09-15 01:40:06 -07:00
Daniel Han
92f972bb7c Update README.md 2025-09-15 01:39:39 -07:00
Daniel Han
46e7370878 Blackwell support 2025-09-15 01:39:03 -07:00
Michael Han
bf92d129b4 Update README.md 2025-09-13 21:45:22 -07:00
Michael Han
8a1ff4a3f0 Update README.md
Adding new install instructions
2025-09-13 21:30:52 -07:00
Michael Han
413ec45d8b Update README.md 2025-08-09 15:53:29 -07:00
Michael Han
2a8ca1ef5a Update README.md 2025-08-08 12:14:38 -07:00
Quentin Gallouédec
580b5bca11 Update README.md (#2991)
* Update README.md

* Update README.md
2025-07-18 15:43:59 -07:00
Daniel Han
681b10dc0c Fixes 2025-07-11 00:01:37 -07:00
Michael Han
78e17304a0 Update README.md
Updating icon sizes
2025-07-04 15:50:31 -07:00
Michael Han
9ecc97a67c Update README.md
Editing icon sizes
2025-07-04 15:37:44 -07:00
Michael Han
f4a922fc6f Update README.md 2025-07-01 09:28:59 -07:00
Daniel Han
f1e1b890ac Move AMD to AMD branch 2025-07-01 01:02:51 -07:00
billishyahao
25d73efe8a [Feature] Enable Unsloth on AMD GPUs (#2520)
* [Feature] Enable Unsloth on AMD GPUs

* fix the comment

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>
2025-06-30 16:52:05 -07:00
Michael Han
b017f2395a Update README.md
Updating links
2025-06-25 01:32:24 -07:00
Michael Han
ce9e54755f Update README.md
Better Qwen3 notebook
2025-05-26 23:44:41 -07:00
Michael Han
1f4e74cb96 Update README.md 2025-05-25 03:35:43 -07:00
Quentin Gallouédec
ce5c2d2145 Remove dataset_text_field from SFTConfig (#2609) 2025-05-25 03:20:16 -07:00
Michael Han
e771760e53 Update README.md
Updating model support
2025-05-20 09:51:55 -07:00
Michael Han
61b68725e9 Update README.md 2025-05-19 21:26:19 -07:00
Michael Han
b22e654ef0 Update README.md 2025-05-16 01:56:40 -07:00
Michael Han
41e3701251 Update README.md
TTS support
2025-05-15 15:15:53 -07:00
Michael Han
b5ba71a3d3 Update README.md 2025-05-15 06:54:05 -07:00
omahs
28304e4101 Fix typos (#2540) 2025-05-15 04:23:27 -07:00
Michael Han
f4cbf303fe Update README.md 2025-05-13 01:39:59 -07:00
Yuanzhe Dong
75f3f8a7e5 Fix readme example 2025-05-06 19:26:35 -07:00
Michael Han
8821057420 Update README.md
Adding extra synthetic data notebook, cleaning repo
2025-05-05 20:56:01 -07:00
Michael Han
bb802c8a4a Update README.md 2025-05-02 23:14:34 -07:00
Michael Han
8bfe5fd4ab Update README.md 2025-05-02 09:06:57 -07:00
Michael Han
97a63f809f Update README.md
Qwen3 notebook
2025-05-01 22:52:42 -07:00
Michael Han
53e6fba362 Update README.md 2025-04-28 19:08:12 -07:00
Michael Han
29b25e36eb Update README.md 2025-04-05 14:56:01 -07:00
zhaozh
c107f46b5e Update README.md
Gemma3 HF uploaded GGUFs, 4-bit models link.
2025-04-02 16:10:21 +08:00
Michael Han
0b8e01ddb9 Update README.md 2025-03-27 00:26:18 -07:00
Michael Han
d8fc81f47b Update README.md 2025-03-19 04:23:52 -07:00
Michael Han
2f0de2be1f Update README.md 2025-03-19 04:21:39 -07:00
Michael Han
d82a707a4a Update README.md 2025-03-15 17:47:25 -07:00
Daniel Han
e1c24a01f8 Update README.md (#2028) 2025-03-14 22:06:53 -07:00
Daniel Han
05fdaff970 Gemma 3 readme (#2019)
* Update README.md

* Update README.md

* Update README.md

---------

Co-authored-by: Michael Han <107991372+shimmyshimmer@users.noreply.github.com>
2025-03-14 11:12:02 -07:00
Daniel Han
3410744e88 Gemma 3, bug fixes (#2014)
* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* fix an import error (#1767)

* fix an import error

* Delete .gitignore

* Update loader.py

* Update save.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* SamplingParams

* Convert mask to float (#1762)

* [Windows Support] Add latest `xformers` wheels to pyproject.toml (#1753)

* Add latest xformers

* Add a couple of lines to docs

* vLLMSamplingParams

* Update __init__.py

* default num_chunks == -1

* Versioning

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update pyproject.toml

* Update pyproject.toml

* Export Model to ollama.com  (#1648)

* Ollama Export Model to ollama.com

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Check for model_name

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* subprocess use instead of requests | added check for ollama server

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model | fix

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Push to Ollama

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Update cross_entropy_loss.py

* torch_cuda_device

* Update utils.py

* Update utils.py

* Update utils.py

* device

* device

* Update loader.py

* Update llama.py

* Update README.md

* Update llama.py

* Update llama.py

* Update _utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* __version__

* Update rl.py

* Bug fixes

* Bug fixes

* Update llama.py

* Update _utils.py

* _wrap_fast_inference

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* SFT dataset prepare

* Update pyproject.toml

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update utils.py

* bug fix

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update __init__.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update _utils.py

* Update __init__.py

* Update _utils.py

* Version

* versioning

* Update _utils.py

* Update llama.py

* Update llama.py

* Bug fixes

* FastModel

* __doc__

* Update vision.py

* Update loader.py

* Update loader.py

* Update loader.py

* version

* move use_modelscope to _utils (#1938)

* move use_modelscope to _utils

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Don't use revision when loading model_config and is_peft=True (#1949)

* More syntax warnings (#1944)

* move use_modelscope to _utils

* fix

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Update loader.py

* Full finetuning and other fixes

* UNSLOTH_ENABLE_FULL_FINETUNING

* Update loader.py

* Update loader.py

* Update loader.py

* Update vision.py

* Update vision.py

* full finetuning

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* max_seq_length

* Update rl.py

* Update rl.py

* Update rl.py

* Update pyproject.toml

* AutoModelForImageTextToText

* Update mapper.py

* Update pyproject.toml

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Batch samples

* Update loader.py

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* Update loader.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update mapper.py

* Update vision.py

* Temporary patches

* Update loader.py

* model names

* Gemma 3 chat template

* Bug fixes

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update llama.py

* Update llama.py

* Update rl.py

* Update chat_templates.py

* Update chat_templates.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Revert

* Update _utils.py

* forced precision

* Autocast

* Update vision.py

* Update vision.py

* Update rl.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl.py

* vLLM fixes

* constexpr

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update save.py

* New models

* Triton windows update (#1976)

* Update pyproject.toml

* Update README.md

* Update RMS LayerNorm implementation, and list comprehension change in chat templates (#1974)

* Update RMS LayerNorm implementation with optimizations and testing suite

* perf: optimize list comprehension in get_ollama_eos_tokens

* Update Zoo

* Update llama.py

* Update llama.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl_replacements.py

* Update vision.py

* grpo fix

* Update rl_replacements.py

* Update vision.py

* Update rl_replacements.py

* Update vision.py

* Update mapper.py

* Update vision.py

* Update vision.py

* Update loader.py

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
Co-authored-by: Kareem <81531392+KareemMusleh@users.noreply.github.com>
Co-authored-by: Wilson Wu <140025193+wiwu2390@users.noreply.github.com>
Co-authored-by: Akshay Behl <126911424+Captain-T2004@users.noreply.github.com>
2025-03-14 06:42:44 -07:00
Daniel Han
3e5f061133 Bug fixes (#1891)
* Update rl.py

* Patching

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* NEFTune

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Extra replacements

* Update rl_replacements.py

* Update rl.py

* extra RL replacements

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update _utils.py

* Update loader_utils.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* autocast

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update pyproject.toml

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update _utils.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* GRPO optimized

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Selective Log softmax

* Fix GRPO bsz

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Fix TRL

* Metrics GRPO

* Update rl_replacements.py

* Update rl_replacements.py

* No compile

* Update rl.py

* Remove docs

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* llama-quantize error fix on Windows WSL - edit save.py (GGUF saving breaks) (#1649)

* Edit save.py to fix GGUF saving breakage.

* Add a check for the .exe file extension (or its absence) on Linux and Windows.

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* unsloth_num_chunks

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py (#1754)

Fix typo in comment: know -> now.

This was printed when running the Llama3.1_(8B)-GRPO.ipynb example notebook, so I'd expect others to run into it as well.

* Optional logits

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* fix an import error (#1767)

* fix an import error

* Delete .gitignore

* Update loader.py

* Update save.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* SamplingParams

* Convert mask to float (#1762)

* [Windows Support] Add latest `xformers` wheels to pyproject.toml (#1753)

* Add latest xformers

* Add a couple of lines to docs

* vLLMSamplingParams

* Update __init__.py

* default num_chunks == -1

* Versioning

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update pyproject.toml

* Update pyproject.toml

* Export Model to ollama.com  (#1648)

* Ollama Export Model to ollama.com

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Check for model_name

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* subprocess use instead of requests | added check for ollama server

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model | fix

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Push to Ollama

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Update cross_entropy_loss.py

* torch_cuda_device

* Update utils.py

* Update utils.py

* Update utils.py

* device

* device

* Update loader.py

* Update llama.py

* Update README.md

* Update llama.py

* Update llama.py

* Update _utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* __version__

* Update rl.py

* Bug fixes

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
2025-03-04 03:55:49 -08:00
Michael Han
c018ea28db Update README.md 2025-03-03 21:27:20 -08:00
Michael Han
e02561d883 Update README.md 2025-03-02 20:44:26 -08:00
Michael Han
8b5883275d Update README.md 2025-03-02 20:35:27 -08:00
Michael Han
788563f8fe Update README.md 2025-03-02 20:34:36 -08:00
J. M Areeb Uzair
c6d2433547 Added Python version warning to Windows Install Section (#1872)
I spent half a day on the wrong Python version, so I am adding this big, red sign.
2025-03-02 03:48:21 -08:00
Aditya Ghai
08bc291300 Direct windows support for unsloth (#1841)
* Direct Windows Support (main)

* Update pyproject.toml

* Update README.md

Added the suggested changes to README
2025-02-27 20:25:46 -08:00
Michael Han
569b4422c4 Update README.md 2025-02-26 17:03:47 -08:00