mirror of https://github.com/unslothai/unsloth.git (synced 2026-05-17 03:56:07 +00:00)
Revert: drop Windows cache steps -- measured neutral / negative
The cache plan added in d65f8b19 was meant to shave ~5min off Windows
install time, but a controlled rerun on the same SHA shows the opposite:
a cache hit is a net 65s slower than a miss.
Side-by-side timing of the install step (cache miss vs cache hit on the
same Windows Update CI job, same workflow, same source):
Step               | cache miss (385s) | cache hit (450s, +65s slower)
-------------------|-------------------|------------------------------
Cache restore      | 1s                | 83s (76s Studio venv + 4s + 3s)
Frontend build     | 159s              | 204s ("Frontend source changed since last build -- rebuilding...")
PyTorch + 9 deps   | 81s               | 95s
llama.cpp install  | 39s               | 13s ("prebuilt up to date and validated")
Cache save (post)  | 17s               | 0s (no upload, hash matched)
Root causes:
1. The Studio venv cache is a no-op. install.ps1 lines 1097-1120 see the
   cached venv, call Start-StudioVenvRollback to MOVE it aside as a
   rollback backup, then unconditionally create a fresh venv at line
   1167. The cache restore costs 76s for a 398MB venv that is then
   thrown away.
2. The frontend dist cache is a no-op. setup.ps1 lines 1281-1296 check
   `LastWriteTime > $DistTime` for every source file. git checkout sets
   all source mtimes to "now", while the restored dist mtimes date from
   cache-creation time, so the staleness check always fires and
   rebuilds.
3. Only the llama.cpp prebuilt cache works (saves ~26s). Not enough to
offset the other two.
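The frontend staleness check in (2) can be sketched in Python. The
function name `dist_is_stale` and the `main.ts` file are hypothetical,
but the predicate mirrors setup.ps1's `LastWriteTime > $DistTime`
comparison: after a fresh `git checkout` every source mtime is "now",
so a dist restored from cache always looks stale.

```python
import os
import tempfile
import time


def dist_is_stale(source_files, dist_time):
    """Rebuild if any source file is newer than the built dist artefact
    (the same predicate setup.ps1 evaluates per source file)."""
    return any(os.path.getmtime(f) > dist_time for f in source_files)


# Simulate the CI situation: the dist was built (and cached) an hour
# ago, but `git checkout` has just rewritten every source file, so
# each source mtime is "now".
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "main.ts")
with open(src, "w") as f:
    f.write("export {}\n")  # checkout sets this mtime to now

dist_time = time.time() - 3600  # restored dist carries a 1h-old mtime
print(dist_is_stale([src], dist_time))  # → True: cached dist always rebuilt
```

A cache fix would have to compare content hashes instead of mtimes, or
restamp the restored dist newer than the checkout; the revert sidesteps
both.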
Reverting the cache plan is safer than partially fixing it and waiting
for a follow-up to land. install.ps1 + setup.ps1 would both need
modification to make the cache useful, and that change touches all
platforms. The non-Windows mirrors of these workflows (-mac-, regular
linux) never had cache steps, so this revert restores parity.
The other commits in this branch (Recents click bound, jq health
check, sys.stdio explicit handles, setup.ps1 stdout mirror, single-
process Chromium darwin-only, github_blob_to_raw netloc check) all
remain.
This commit is contained in:
parent 6a37af24a6
commit e1345d5f1b

4 changed files with 0 additions and 145 deletions
24  .github/workflows/studio-windows-api-smoke.yml  vendored
@@ -78,30 +78,6 @@ jobs:
           HF_HUB_ENABLE_HF_TRANSFER=1 \
             hf download "$GGUF_REPO" "$GGUF_FILE"
-
-      # Caches around install.ps1 -- restore the heavy artefacts the
-      # installer would otherwise rebuild from scratch every run.
-      # Keys hash every input file that would meaningfully change
-      # the produced artefact, so a stale cache cannot mask a real
-      # dependency change.
-      - name: Cache Studio venv (~/.unsloth/studio/unsloth_studio)
-        id: cache-studio-venv
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/studio/unsloth_studio
-          key: ${{ runner.os }}-studio-venv-v1-${{ hashFiles('pyproject.toml', 'studio/backend/requirements/**', 'install.ps1', 'studio/setup.ps1', 'studio/install_python_stack.py') }}
-      - name: Cache prebuilt llama.cpp (~/.unsloth/llama.cpp)
-        id: cache-llama-prebuilt
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/llama.cpp
-          key: ${{ runner.os }}-llama-cpp-prebuilt-v1-${{ hashFiles('studio/install_llama_prebuilt.py') }}
-      - name: Cache frontend dist (studio/frontend/dist)
-        id: cache-frontend-dist
-        uses: actions/cache@v4
-        with:
-          path: studio/frontend/dist
-          key: ${{ runner.os }}-studio-frontend-dist-v1-${{ hashFiles('studio/frontend/package-lock.json', 'studio/frontend/src/**', 'studio/frontend/index.html', 'studio/frontend/vite.config.*', 'studio/frontend/tsconfig*.json', 'studio/frontend/components.json') }}

       - name: Install Studio (--local, --no-torch)
         shell: pwsh
         env:
@@ -88,30 +88,6 @@ jobs:
           HF_HUB_ENABLE_HF_TRANSFER=1 \
             hf download "$GGUF_REPO" "$GGUF_FILE"
-
-      # Caches around install.ps1 -- restore the heavy artefacts the
-      # installer would otherwise rebuild from scratch every run.
-      # Keys hash every input file that would meaningfully change
-      # the produced artefact, so a stale cache cannot mask a real
-      # dependency change.
-      - name: Cache Studio venv (~/.unsloth/studio/unsloth_studio)
-        id: cache-studio-venv
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/studio/unsloth_studio
-          key: ${{ runner.os }}-studio-venv-v1-${{ hashFiles('pyproject.toml', 'studio/backend/requirements/**', 'install.ps1', 'studio/setup.ps1', 'studio/install_python_stack.py') }}
-      - name: Cache prebuilt llama.cpp (~/.unsloth/llama.cpp)
-        id: cache-llama-prebuilt
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/llama.cpp
-          key: ${{ runner.os }}-llama-cpp-prebuilt-v1-${{ hashFiles('studio/install_llama_prebuilt.py') }}
-      - name: Cache frontend dist (studio/frontend/dist)
-        id: cache-frontend-dist
-        uses: actions/cache@v4
-        with:
-          path: studio/frontend/dist
-          key: ${{ runner.os }}-studio-frontend-dist-v1-${{ hashFiles('studio/frontend/package-lock.json', 'studio/frontend/src/**', 'studio/frontend/index.html', 'studio/frontend/vite.config.*', 'studio/frontend/tsconfig*.json', 'studio/frontend/components.json') }}

       - name: Install Studio (--local, --no-torch)
         shell: pwsh
         env:
@@ -377,30 +353,6 @@ jobs:
           HF_HUB_ENABLE_HF_TRANSFER=1 \
             hf download "$GGUF_REPO" "$GGUF_FILE"
-
-      # Caches around install.ps1 -- restore the heavy artefacts the
-      # installer would otherwise rebuild from scratch every run.
-      # Keys hash every input file that would meaningfully change
-      # the produced artefact, so a stale cache cannot mask a real
-      # dependency change.
-      - name: Cache Studio venv (~/.unsloth/studio/unsloth_studio)
-        id: cache-studio-venv
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/studio/unsloth_studio
-          key: ${{ runner.os }}-studio-venv-v1-${{ hashFiles('pyproject.toml', 'studio/backend/requirements/**', 'install.ps1', 'studio/setup.ps1', 'studio/install_python_stack.py') }}
-      - name: Cache prebuilt llama.cpp (~/.unsloth/llama.cpp)
-        id: cache-llama-prebuilt
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/llama.cpp
-          key: ${{ runner.os }}-llama-cpp-prebuilt-v1-${{ hashFiles('studio/install_llama_prebuilt.py') }}
-      - name: Cache frontend dist (studio/frontend/dist)
-        id: cache-frontend-dist
-        uses: actions/cache@v4
-        with:
-          path: studio/frontend/dist
-          key: ${{ runner.os }}-studio-frontend-dist-v1-${{ hashFiles('studio/frontend/package-lock.json', 'studio/frontend/src/**', 'studio/frontend/index.html', 'studio/frontend/vite.config.*', 'studio/frontend/tsconfig*.json', 'studio/frontend/components.json') }}

       - name: Install Studio (--local, --no-torch)
         shell: pwsh
         env:
@@ -757,30 +709,6 @@ jobs:
           HF_HUB_ENABLE_HF_TRANSFER=1 \
             hf download "$GGUF_REPO" "$MMPROJ_FILE"
-
-      # Caches around install.ps1 -- restore the heavy artefacts the
-      # installer would otherwise rebuild from scratch every run.
-      # Keys hash every input file that would meaningfully change
-      # the produced artefact, so a stale cache cannot mask a real
-      # dependency change.
-      - name: Cache Studio venv (~/.unsloth/studio/unsloth_studio)
-        id: cache-studio-venv
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/studio/unsloth_studio
-          key: ${{ runner.os }}-studio-venv-v1-${{ hashFiles('pyproject.toml', 'studio/backend/requirements/**', 'install.ps1', 'studio/setup.ps1', 'studio/install_python_stack.py') }}
-      - name: Cache prebuilt llama.cpp (~/.unsloth/llama.cpp)
-        id: cache-llama-prebuilt
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/llama.cpp
-          key: ${{ runner.os }}-llama-cpp-prebuilt-v1-${{ hashFiles('studio/install_llama_prebuilt.py') }}
-      - name: Cache frontend dist (studio/frontend/dist)
-        id: cache-frontend-dist
-        uses: actions/cache@v4
-        with:
-          path: studio/frontend/dist
-          key: ${{ runner.os }}-studio-frontend-dist-v1-${{ hashFiles('studio/frontend/package-lock.json', 'studio/frontend/src/**', 'studio/frontend/index.html', 'studio/frontend/vite.config.*', 'studio/frontend/tsconfig*.json', 'studio/frontend/components.json') }}

       - name: Install Studio (--local, --no-torch)
         shell: pwsh
         env:
25  .github/workflows/studio-windows-ui-smoke.yml  vendored
@@ -87,31 +87,6 @@ jobs:
           HF_HUB_ENABLE_HF_TRANSFER=1 \
             hf download "$GGUF_REPO" "$GGUF_FILE"
-
-      # Caches around install.ps1 -- restore the heavy artefacts the
-      # installer would otherwise rebuild from scratch every run.
-      # Keys hash on every input file that would meaningfully change
-      # the produced artefact, so a stale cache cannot mask a real
-      # dependency change. Cache scope is branch-partitioned by
-      # GitHub Actions, so a malicious PR can't poison main's cache.
-      - name: Cache Studio venv (~/.unsloth/studio/unsloth_studio)
-        id: cache-studio-venv
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/studio/unsloth_studio
-          key: ${{ runner.os }}-studio-venv-v1-${{ hashFiles('pyproject.toml', 'studio/backend/requirements/**', 'install.ps1', 'studio/setup.ps1', 'studio/install_python_stack.py') }}
-      - name: Cache prebuilt llama.cpp (~/.unsloth/llama.cpp)
-        id: cache-llama-prebuilt
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/llama.cpp
-          key: ${{ runner.os }}-llama-cpp-prebuilt-v1-${{ hashFiles('studio/install_llama_prebuilt.py') }}
-      - name: Cache frontend dist (studio/frontend/dist)
-        id: cache-frontend-dist
-        uses: actions/cache@v4
-        with:
-          path: studio/frontend/dist
-          key: ${{ runner.os }}-studio-frontend-dist-v1-${{ hashFiles('studio/frontend/package-lock.json', 'studio/frontend/src/**', 'studio/frontend/index.html', 'studio/frontend/vite.config.*', 'studio/frontend/tsconfig*.json', 'studio/frontend/components.json') }}

       - name: Install Studio (--local, --no-torch)
         # install.ps1 is the supported Windows installer. install.sh
         # has no Windows branch (apt-get / brew calls). The PS1
@@ -73,30 +73,6 @@ jobs:
           # then fatal-errors with "Cache folder path is retrieved
           # for pip but doesn't exist on disk".
-
-      # Caches around install.ps1 -- restore the heavy artefacts the
-      # installer would otherwise rebuild from scratch every run.
-      # Keys hash every input file that would meaningfully change
-      # the produced artefact, so a stale cache cannot mask a real
-      # dependency change.
-      - name: Cache Studio venv (~/.unsloth/studio/unsloth_studio)
-        id: cache-studio-venv
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/studio/unsloth_studio
-          key: ${{ runner.os }}-studio-venv-v1-${{ hashFiles('pyproject.toml', 'studio/backend/requirements/**', 'install.ps1', 'studio/setup.ps1', 'studio/install_python_stack.py') }}
-      - name: Cache prebuilt llama.cpp (~/.unsloth/llama.cpp)
-        id: cache-llama-prebuilt
-        uses: actions/cache@v4
-        with:
-          path: ~/.unsloth/llama.cpp
-          key: ${{ runner.os }}-llama-cpp-prebuilt-v1-${{ hashFiles('studio/install_llama_prebuilt.py') }}
-      - name: Cache frontend dist (studio/frontend/dist)
-        id: cache-frontend-dist
-        uses: actions/cache@v4
-        with:
-          path: studio/frontend/dist
-          key: ${{ runner.os }}-studio-frontend-dist-v1-${{ hashFiles('studio/frontend/package-lock.json', 'studio/frontend/src/**', 'studio/frontend/index.html', 'studio/frontend/vite.config.*', 'studio/frontend/tsconfig*.json', 'studio/frontend/components.json') }}

       - name: Install Studio (--local, --no-torch)
         shell: pwsh
         env: