studio: setup log styling (#4494)

* refactor(studio): unify setup terminal output style and add verbose setup mode

* studio(windows): align setup.ps1 banner/steps with setup.sh (ANSI, verbose)

* studio(setup): revert nvcc path reordering to match main

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* studio(setup): restore fail-fast llama.cpp setup flow

* studio(banner): use IPv6 loopback URL when binding :: or ::1

* Fix IPv6 URL bracketing, try_quiet stderr, _step label clamp

- Bracket IPv6 display_host in external_url to produce clickable URLs
- Redirect try_quiet failure log to stderr instead of stdout
- Clamp _step label to column width to prevent negative padding
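The bracketing rule follows the pattern used in the new startup banner; a minimal Python sketch (the helper name here is illustrative):

```python
def external_url(display_host: str, port: int) -> str:
    # Per RFC 3986, an IPv6 literal in a URL authority must be wrapped in
    # brackets; otherwise browsers parse its colons as a port separator.
    if ":" in display_host:
        return f"http://[{display_host}]:{port}"
    return f"http://{display_host}:{port}"

print(external_url("::1", 7860))          # http://[::1]:7860
print(external_url("192.168.1.5", 7860))  # http://192.168.1.5:7860
```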

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add sandbox integration tests for PR #4494 UX fixes

Simulation harness (tests/simulate_pr4494.py) creates an isolated uv
venv, copies the real source files into it, and runs subprocess tests
for all three fixes with visual before/after demos and edge cases.

Standalone bash test (tests/test_try_quiet.sh) validates try_quiet
stderr redirect across 8 scenarios including broken-version contrast.

39 integration tests total (14 IPv6 + 15 try_quiet + 10 _step), all
existing 75 unit tests still pass.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Truncate step() labels in setup.sh to match PS1 and Python

The %-15s printf format pads short labels but does not truncate long
ones.  Change to %-15.15s so labels wider than 15 chars are clipped,
matching the PowerShell .Substring(0,15) and Python label[:15] logic.
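Python's %-formatting shares printf's width/precision semantics, so the pad-vs-clip difference is easy to demonstrate:

```python
# %-15s left-pads to 15 columns but never clips; %-15.15s also clips at 15.
print("[%-15s]"    % "deps")              # [deps           ]
print("[%-15s]"    % "llama-quantize-x")  # 16 chars: the column overflows
print("[%-15.15s]" % "llama-quantize-x")  # [llama-quantize-]  clipped to 15
```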

* Remove sandbox integration tests from PR

These test files are not part of the styling fix and should not
ship with this PR.

* Show error output on failure instead of suppressing it

- install_python_stack.py: restore _red for patch_package_file
  warnings (was downgraded to _dim)
- setup.ps1: capture winget output and show on failure for CUDA,
  Node, Python, and OpenSSL installs (was piped to Out-Null)
- setup.ps1: always show git pull failure warning, not just in
  verbose mode

* Show winget error output for Git and CMake installs on failure

Same capture-and-print-on-failure pattern already used for
Node, Python, CUDA, and OpenSSL winget installs.
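The pattern is the same one the installer's run() helper uses; a generic Python sketch (not the PowerShell code itself):

```python
import subprocess
import sys

def run_quiet(label: str, cmd: list[str]) -> int:
    """Suppress command output on success; dump the captured log on failure."""
    result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    if result.returncode != 0:
        print(f"{label} failed (exit code {result.returncode}):")
        print(result.stdout.decode(errors="replace"))
    return result.returncode

run_quiet("noop", [sys.executable, "-c", "pass"])                  # prints nothing
run_quiet("boom", [sys.executable, "-c", "raise SystemExit(3)"])   # prints the failure
```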

* fix: preserve stderr for _run_quiet error messages in setup.sh

The step() helper writes to stdout, but _run_quiet's error header
was originally sent to stderr (>&2). Without the redirect, callers
that separate stdout/stderr would miss the failure headline while
still seeing the log body on stderr. Add >&2 to both step calls
inside _run_quiet to match main's behavior.
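A small pure-Python reproduction of why the redirect matters (the helper below is illustrative, not the script's code): when a caller separates the streams, a headline printed to stdout is divorced from the log body on stderr.

```python
import contextlib
import io
import sys

def emit_failure(headline_to_stderr: bool) -> tuple[str, str]:
    out, err = io.StringIO(), io.StringIO()
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        target = sys.stderr if headline_to_stderr else sys.stdout
        print("error: nvm install failed (exit code 1)", file=target)  # headline
        print("full build log ...", file=sys.stderr)                   # log body
    return out.getvalue(), err.getvalue()

# Without the redirect: headline on stdout, body on stderr -- split across streams.
out, err = emit_failure(headline_to_stderr=False)
assert "failed" in out and "build log" in err
# With >&2: the whole failure report lands on stderr (e.g. visible via 2>errors.log).
out, err = emit_failure(headline_to_stderr=True)
assert out == "" and "failed" in err and "build log" in err
```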

* feat: add --verbose flag to setup and update commands

Wire UNSLOTH_VERBOSE=1 through _run_setup_script() so that
'unsloth studio update --verbose' (and the deprecated 'setup')
passes the flag to setup.sh / setup.ps1 / install_python_stack.py.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Daniel Han <danielhanchen@gmail.com>
Lee Jackson 2026-03-27 10:12:48 +00:00 committed by GitHub
parent 3a5e3bbd6d
commit 0233fe7f9c
6 changed files with 489 additions and 283 deletions


@@ -24,6 +24,7 @@ if str(backend_dir) not in sys.path:
import _platform_compat # noqa: F401
from loggers import get_logger
from startup_banner import print_studio_access_banner
logger = get_logger(__name__)
@@ -338,19 +339,11 @@ def run_server(
if not silent:
display_host = _resolve_external_ip() if host == "0.0.0.0" else host
print("")
print("=" * 50)
print(f"🦥 Open your web browser, and enter http://localhost:{port}")
print("=" * 50)
print("")
print("=" * 50)
print(f"🦥 Unsloth Studio is running on port {port}")
print(f" Local Access: http://localhost:{port}")
print(f" Worldwide Web Address: http://{display_host}:{port}")
print(f" API: http://{display_host}:{port}/api")
print(f" Health: http://{display_host}:{port}/api/health")
print("=" * 50)
print_studio_access_banner(
port = port,
bind_host = host,
display_host = display_host,
)
return app


@@ -0,0 +1,115 @@
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright 2026-present the Unsloth AI Inc. team. All rights reserved. See /studio/LICENSE.AGPL-3.0
"""Terminal banner for Studio startup.
Stdlib only; safe to import without the rest of the backend (no structlog/uvicorn).
"""
from __future__ import annotations
import os
import sys
def stdout_supports_color() -> bool:
"""True if we should emit ANSI colors."""
if os.environ.get("NO_COLOR", "").strip():
return False
if os.environ.get("FORCE_COLOR", "").strip():
return True
try:
return sys.stdout.isatty()
except Exception:
return False
def print_port_in_use_notice(original_port: int, new_port: int) -> None:
"""Message when the requested port is taken and another is chosen."""
msg = f"Port {original_port} is in use, using port {new_port} instead."
if stdout_supports_color():
print(f"\033[38;5;245m{msg}\033[0m")
else:
print(msg)
def print_studio_access_banner(
*,
port: int,
bind_host: str,
display_host: str,
) -> None:
"""Pretty-print URLs after the server is listening (beginner-friendly)."""
use_color = stdout_supports_color()
dim = "\033[38;5;245m"
title = "\033[38;5;150m"
local_url_style = "\033[38;5;108;1m"
secondary = "\033[38;5;109m"
reset = "\033[0m"
def style(text: str, code: str) -> str:
return f"{code}{text}{reset}" if use_color else text
ipv6_bind = bind_host in ("::", "::1")
if ipv6_bind:
local_url = f"http://[::1]:{port}"
alt_local = f"http://localhost:{port}"
else:
local_url = f"http://127.0.0.1:{port}"
alt_local = f"http://localhost:{port}"
if ":" in display_host:
external_url = f"http://[{display_host}]:{port}"
else:
external_url = f"http://{display_host}:{port}"
listen_all = bind_host in ("0.0.0.0", "::")
loopback_bind = bind_host in ("127.0.0.1", "localhost", "::1")
api_base = local_url if listen_all or loopback_bind else external_url
lines: list[str] = [
"",
style("🦥 Unsloth Studio is running", title),
style("" * 52, dim),
style(" On this machine — open this in your browser:", dim),
style(f" {local_url}", local_url_style),
style(f" (same as {alt_local})", dim),
]
if listen_all and display_host not in (
"127.0.0.1",
"localhost",
"::1",
"0.0.0.0",
"::",
):
lines.extend(
[
"",
style(" From another device on your network / to share:", dim),
style(f" {external_url}", secondary),
]
)
elif not listen_all and bind_host not in ("127.0.0.1", "localhost", "::1"):
lines.extend(
[
"",
style(" Bound address:", dim),
style(f" {external_url}", secondary),
]
)
lines.extend(
[
"",
style(" API & health:", dim),
style(f" {api_base}/api", secondary),
style(f" {api_base}/api/health", secondary),
style("" * 52, dim),
style(
" Tip: if you are on the same computer, use the Local link above.",
dim,
),
"",
]
)
print("\n".join(lines))


@@ -45,6 +45,7 @@ NO_TORCH = _infer_no_torch()
# -- Verbosity control ----------------------------------------------------------
# By default the installer shows a minimal progress bar (one line, in-place).
# Set UNSLOTH_VERBOSE=1 in the environment to restore full per-step output:
# CLI: unsloth studio setup --verbose
# Linux/Mac: UNSLOTH_VERBOSE=1 ./studio/setup.sh
# Windows: $env:UNSLOTH_VERBOSE="1" ; .\studio\setup.ps1
VERBOSE: bool = os.environ.get("UNSLOTH_VERBOSE", "0") == "1"
@@ -96,15 +97,18 @@ def _safe_print(*args: object, **kwargs: object) -> None:
)
# -- Color support ------------------------------------------------------
# ── Color support ──────────────────────────────────────────────────────
# Same logic as startup_banner: NO_COLOR disables, FORCE_COLOR or TTY enables.
def _enable_colors() -> bool:
"""Try to enable ANSI color support. Returns True if available."""
if not hasattr(sys.stdout, "fileno"):
def _stdout_supports_color() -> bool:
"""True if we should emit ANSI colors (matches startup_banner)."""
if os.environ.get("NO_COLOR", "").strip():
return False
if os.environ.get("FORCE_COLOR", "").strip():
return True
try:
if not os.isatty(sys.stdout.fileno()):
if not sys.stdout.isatty():
return False
except Exception:
return False
@@ -113,24 +117,26 @@ def _enable_colors() -> bool:
import ctypes
kernel32 = ctypes.windll.kernel32
# Enable ENABLE_VIRTUAL_TERMINAL_PROCESSING (0x0004) on stdout
handle = kernel32.GetStdHandle(-11) # STD_OUTPUT_HANDLE
handle = kernel32.GetStdHandle(-11)
mode = ctypes.c_ulong()
kernel32.GetConsoleMode(handle, ctypes.byref(mode))
kernel32.SetConsoleMode(handle, mode.value | 0x0004)
return True
except Exception:
return False
return True # Unix terminals support ANSI by default
return True
# Colors disabled -- Colab and most CI runners render ANSI fine, but plain output
# is cleaner in the notebook cell. Re-enable by setting _HAS_COLOR = _enable_colors()
_HAS_COLOR = False
_HAS_COLOR = _stdout_supports_color()
# Column layout — matches setup.sh step() helper:
# 2-space indent, 15-char label (dim), then value.
_LABEL = "deps"
_COL = 15
def _green(msg: str) -> str:
return f"\033[92m{msg}\033[0m" if _HAS_COLOR else msg
return f"\033[38;5;108m{msg}\033[0m" if _HAS_COLOR else msg
def _cyan(msg: str) -> str:
@@ -141,21 +147,39 @@ def _red(msg: str) -> str:
return f"\033[91m{msg}\033[0m" if _HAS_COLOR else msg
def _progress(label: str) -> None:
"""Print an in-place progress bar for the current install step.
def _dim(msg: str) -> str:
return f"\033[38;5;245m{msg}\033[0m" if _HAS_COLOR else msg
Uses only stdlib (sys.stdout) -- no extra packages required.
In VERBOSE mode this is a no-op; per-step labels are printed by run() instead.
"""
def _title(msg: str) -> str:
return f"\033[38;5;150m{msg}\033[0m" if _HAS_COLOR else msg
_RULE = "\u2500" * 52
def _step(label: str, value: str, color_fn = None) -> None:
"""Print a single step line in the column format."""
if color_fn is None:
color_fn = _green
padded = label[:_COL]
print(f" {_dim(padded)}{' ' * (_COL - len(padded))}{color_fn(value)}")
def _progress(label: str) -> None:
"""Print an in-place progress bar aligned to the step column layout."""
global _STEP
_STEP += 1
if VERBOSE:
return # verbose mode: run() already printed the label
return
width = 20
filled = int(width * _STEP / _TOTAL)
bar = "=" * filled + "-" * (width - filled)
end = "\n" if _STEP >= _TOTAL else "" # newline only on the final step
sys.stdout.write(f"\r[{bar}] {_STEP:2}/{_TOTAL} {label:<40}{end}")
pad = " " * (_COL - len(_LABEL))
end = "\n" if _STEP >= _TOTAL else ""
sys.stdout.write(
f"\r {_dim(_LABEL)}{pad}[{bar}] {_STEP:2}/{_TOTAL} {label:<20}{end}"
)
sys.stdout.flush()
@@ -164,14 +188,14 @@ def run(
) -> subprocess.CompletedProcess[bytes]:
"""Run a command; on failure print output and exit."""
if VERBOSE:
print(f" {label}...")
_step(_LABEL, f"{label}...", _dim)
result = subprocess.run(
cmd,
stdout = subprocess.PIPE if quiet else None,
stderr = subprocess.STDOUT if quiet else None,
)
if result.returncode != 0:
_safe_print(_red(f"{label} failed (exit code {result.returncode}):"))
_step("error", f"{label} failed (exit code {result.returncode})", _red)
if result.stdout:
print(result.stdout.decode(errors = "replace"))
sys.exit(result.returncode)
@@ -353,9 +377,7 @@ def patch_package_file(package_name: str, relative_path: str, url: str) -> None:
text = True,
)
if result.returncode != 0:
_safe_print(
_red(f" ⚠️ Could not find package {package_name}, skipping patch")
)
_step(_LABEL, f"package {package_name} not found, skipping patch", _red)
return
location = None
@@ -365,11 +387,11 @@ def patch_package_file(package_name: str, relative_path: str, url: str) -> None:
break
if not location:
_safe_print(_red(f" ⚠️ Could not determine location of {package_name}"))
_step(_LABEL, f"could not locate {package_name}", _red)
return
dest = Path(location) / relative_path
print(_cyan(f" Patching {dest.name} in {package_name}..."))
_step(_LABEL, f"patching {dest.name} in {package_name}...", _dim)
download_file(url, dest)
@@ -633,7 +655,7 @@ def install_python_stack() -> int:
stderr = subprocess.DEVNULL,
)
_safe_print(_green("✅ Python dependencies installed"))
_step(_LABEL, "installed")
return 0


@@ -10,13 +10,28 @@
full setup including frontend build.
Supports NVIDIA GPU (full training + inference) and CPU-only (GGUF chat mode).
.NOTES
Usage: powershell -ExecutionPolicy Bypass -File setup.ps1
Default output is minimal (step/substep), aligned with studio/setup.sh.
FULL / LEGACY LOGGING (defensible audit trail, multi-line [OK]/[WARN]/paths):
unsloth studio setup --verbose
(sets UNSLOTH_VERBOSE=1; same as install_python_stack.py)
Or: $env:UNSLOTH_VERBOSE='1'; powershell -File .\studio\setup.ps1
Or: .\setup.ps1 --verbose
#>
$ErrorActionPreference = "Stop"
$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$PackageDir = Split-Path -Parent $ScriptDir
# Same as: unsloth studio setup --verbose (see unsloth_cli/commands/studio.py)
foreach ($a in $args) {
if ($a -eq '--verbose' -or $a -eq '-v') {
$env:UNSLOTH_VERBOSE = '1'
break
}
}
$script:UnslothVerbose = ($env:UNSLOTH_VERBOSE -eq '1')
# Detect if running from pip install (no frontend/ dir in studio)
$FrontendDir = Join-Path $ScriptDir "frontend"
$OxcValidatorDir = Join-Path $ScriptDir "backend\core\data_recipe\oxc-validator"
@@ -247,17 +262,138 @@ function Find-VsBuildTools {
return $null
}
# ─────────────────────────────────────────────
# Output style (aligned with studio/setup.sh: step / substep)
# ─────────────────────────────────────────────
$Rule = [string]::new([char]0x2500, 52)
function Enable-StudioVirtualTerminal {
if ($env:NO_COLOR) { return $false }
try {
Add-Type -Namespace StudioVT -Name Native -MemberDefinition @'
[DllImport("kernel32.dll")] public static extern IntPtr GetStdHandle(int nStdHandle);
[DllImport("kernel32.dll")] public static extern bool GetConsoleMode(IntPtr h, out uint m);
[DllImport("kernel32.dll")] public static extern bool SetConsoleMode(IntPtr h, uint m);
'@ -ErrorAction Stop
$h = [StudioVT.Native]::GetStdHandle(-11)
[uint32]$mode = 0
if (-not [StudioVT.Native]::GetConsoleMode($h, [ref]$mode)) { return $false }
$mode = $mode -bor 0x0004
return [StudioVT.Native]::SetConsoleMode($h, $mode)
} catch {
return $false
}
}
$script:StudioVtOk = Enable-StudioVirtualTerminal
function Get-StudioAnsi {
param(
[Parameter(Mandatory = $true)]
[ValidateSet('Title', 'Dim', 'Ok', 'Warn', 'Err', 'Reset')]
[string]$Kind
)
$e = [char]27
switch ($Kind) {
'Title' { return "${e}[38;5;150m" }
'Dim' { return "${e}[38;5;245m" }
'Ok' { return "${e}[38;5;108m" }
'Warn' { return "${e}[38;5;136m" }
'Err' { return "${e}[91m" }
'Reset' { return "${e}[0m" }
}
}
function Write-SetupVerboseDetail {
param(
[Parameter(Mandatory = $true)][string]$Message,
[string]$Color = "Gray"
)
if (-not $script:UnslothVerbose) { return }
if ($script:StudioVtOk -and -not $env:NO_COLOR) {
$ansi = switch ($Color) {
'Green' { (Get-StudioAnsi Ok) }
'Gray' { (Get-StudioAnsi Dim) }
'DarkGray' { (Get-StudioAnsi Dim) }
'Yellow' { (Get-StudioAnsi Warn) }
'Cyan' { (Get-StudioAnsi Title) }
'Red' { (Get-StudioAnsi Err) }
default { (Get-StudioAnsi Dim) }
}
Write-Host ($ansi + $Message + (Get-StudioAnsi Reset))
} else {
$fc = switch ($Color) {
'Green' { 'DarkGreen' }
'Gray' { 'DarkGray' }
'Cyan' { 'Green' }
default { $Color }
}
Write-Host $Message -ForegroundColor $fc
}
}
function step {
param(
[Parameter(Mandatory = $true)][string]$Label,
[Parameter(Mandatory = $true)][string]$Value,
[string]$Color = "Green"
)
if ($script:StudioVtOk -and -not $env:NO_COLOR) {
$dim = Get-StudioAnsi Dim
$rst = Get-StudioAnsi Reset
$val = switch ($Color) {
'Green' { Get-StudioAnsi Ok }
'Yellow' { Get-StudioAnsi Warn }
'Red' { Get-StudioAnsi Err }
'DarkGray' { Get-StudioAnsi Dim }
default { Get-StudioAnsi Ok }
}
$padded = if ($Label.Length -ge 15) { $Label.Substring(0, 15) } else { $Label.PadRight(15) }
Write-Host (" {0}{1}{2}{3}{4}{2}" -f $dim, $padded, $rst, $val, $Value)
} else {
$padded = if ($Label.Length -ge 15) { $Label.Substring(0, 15) } else { $Label.PadRight(15) }
Write-Host (" {0}" -f $padded) -NoNewline -ForegroundColor DarkGray
$fc = switch ($Color) {
'Green' { 'DarkGreen' }
'Yellow' { 'Yellow' }
'Red' { 'Red' }
'DarkGray' { 'DarkGray' }
default { 'DarkGreen' }
}
Write-Host $Value -ForegroundColor $fc
}
}
function substep {
param(
[Parameter(Mandatory = $true)][string]$Message,
[string]$Color = "DarkGray"
)
if ($script:StudioVtOk -and -not $env:NO_COLOR) {
$msgCol = switch ($Color) {
'Yellow' { (Get-StudioAnsi Warn) }
default { (Get-StudioAnsi Dim) }
}
$pad = "".PadRight(15)
Write-Host (" {0}{1}{2}{3}" -f $msgCol, $pad, $Message, (Get-StudioAnsi Reset))
} else {
$fc = switch ($Color) {
'Yellow' { 'Yellow' }
default { 'DarkGray' }
}
Write-Host (" {0,-15}{1}" -f "", $Message) -ForegroundColor $fc
}
}
# ─────────────────────────────────────────────
# Banner
# ─────────────────────────────────────────────
if ($env:SKIP_STUDIO_BASE -eq "1") {
Write-Host "+==============================================+" -ForegroundColor Green
Write-Host "| Unsloth Studio Setup (Windows) |" -ForegroundColor Green
Write-Host "+==============================================+" -ForegroundColor Green
Write-Host ""
if ($script:StudioVtOk -and -not $env:NO_COLOR) {
Write-Host (" " + (Get-StudioAnsi Title) + [char]::ConvertFromUtf32(0x1F9A5) + " Unsloth Studio Setup" + (Get-StudioAnsi Reset))
Write-Host (" {0}{1}{2}" -f (Get-StudioAnsi Dim), $Rule, (Get-StudioAnsi Reset))
} else {
Write-Host "+==============================================+" -ForegroundColor Green
Write-Host "| Unsloth Studio Update (Windows) |" -ForegroundColor Green
Write-Host "+==============================================+" -ForegroundColor Green
Write-Host (" " + [char]::ConvertFromUtf32(0x1F9A5) + " Unsloth Studio Setup") -ForegroundColor Green
Write-Host " $Rule" -ForegroundColor DarkGray
}
# ==========================================================================
@@ -303,12 +439,12 @@ if (-not $HasNvidiaSmi) {
}
if (-not $HasNvidiaSmi) {
Write-Host ""
Write-Host "[WARN] No NVIDIA GPU detected. Studio will run in chat-only (GGUF) mode." -ForegroundColor Yellow
step "gpu" "none (chat-only / GGUF)" "Yellow"
Write-Host " Training and GPU inference require an NVIDIA GPU with drivers installed." -ForegroundColor Yellow
Write-Host " https://www.nvidia.com/Download/index.aspx" -ForegroundColor Yellow
Write-Host ""
} else {
Write-Host "[OK] NVIDIA GPU detected" -ForegroundColor Green
step "gpu" "NVIDIA GPU detected"
}
# ============================================
@@ -364,9 +500,9 @@ if (-not $HasGit) {
Write-Host " Install Git from https://git-scm.com/download/win and re-run." -ForegroundColor Red
exit 1
}
Write-Host "[OK] Git installed: $(git --version)" -ForegroundColor Green
step "git" "$(git --version)"
} else {
Write-Host "[OK] Git found: $(git --version)" -ForegroundColor Green
step "git" "$(git --version)"
}
# ============================================
@@ -408,14 +544,14 @@ if (-not $HasCmake) {
}
}
if ($HasCmake) {
Write-Host "[OK] CMake installed" -ForegroundColor Green
step "cmake" "installed"
} else {
Write-Host "[ERROR] CMake is required but could not be installed." -ForegroundColor Red
Write-Host " Install CMake from https://cmake.org/download/ and re-run." -ForegroundColor Red
exit 1
}
} else {
Write-Host "[OK] CMake found: $(cmake --version | Select-Object -First 1)" -ForegroundColor Green
step "cmake" "$(cmake --version | Select-Object -First 1)"
}
# ============================================
@@ -442,7 +578,7 @@ if (-not $vsResult) {
if ($vsResult) {
$CmakeGenerator = $vsResult.Generator
$VsInstallPath = $vsResult.InstallPath
Write-Host "[OK] $CmakeGenerator detected via $($vsResult.Source)" -ForegroundColor Green
step "vs" "$CmakeGenerator ($($vsResult.Source))"
if ($vsResult.ClExe) { Write-Host " cl.exe: $($vsResult.ClExe)" -ForegroundColor Gray }
} else {
Write-Host "[ERROR] Visual Studio Build Tools could not be found or installed." -ForegroundColor Red
@@ -713,7 +849,7 @@ if ($VsInstallPath -and $CudaToolkitRoot) {
}
}
Write-Host "[OK] CUDA Toolkit: $NvccPath" -ForegroundColor Green
step "cuda" $NvccPath
Write-Host " CUDA_PATH = $CudaToolkitRoot" -ForegroundColor Gray
Write-Host " CudaToolkitDir = $CudaToolkitRoot\" -ForegroundColor Gray
@@ -730,7 +866,7 @@ if (-not $CudaArch) {
# 1f. Node.js / npm (skip if pip-installed -- only needed for frontend build)
# ============================================
if ($IsPipInstall) {
Write-Host "[OK] Running from pip install - frontend already bundled, skipping Node/npm check" -ForegroundColor Green
step "frontend" "bundled (pip install)"
} else {
# setup.sh installs Node LTS (v22) via nvm. We enforce the same range here:
# Vite 8 requires Node ^20.19.0 || >=22.12.0, npm >= 11.
@@ -771,7 +907,7 @@ if ($IsPipInstall) {
}
}
Write-Host "[OK] Node $(node -v) | npm $(npm -v)" -ForegroundColor Green
step "node" "$(node -v) | npm $(npm -v)"
# ── bun (optional, faster package installs) ──
# Installed via npm — Node is already guaranteed above. Works on all platforms.
@@ -825,7 +961,7 @@ if ($HasPython) {
Write-Host " Install Python 3.12 from https://python.org/downloads/" -ForegroundColor Yellow
exit 1
}
Write-Host "[OK] Python $(python --version)" -ForegroundColor Green
step "python" "$(python --version 2>&1)"
$PythonOk = $true
}
@@ -860,7 +996,7 @@ $DistDir = Join-Path $FrontendDir "dist"
$NeedFrontendBuild = $true
if ($IsPipInstall) {
$NeedFrontendBuild = $false
Write-Host "[OK] Running from pip install - frontend already bundled, skipping build" -ForegroundColor Green
step "frontend" "bundled (pip install)"
} elseif (Test-Path $DistDir) {
$DistTime = (Get-Item $DistDir).LastWriteTime
$NewerFile = $null
@@ -881,7 +1017,7 @@ if ($IsPipInstall) {
}
if (-not $NewerFile) {
$NeedFrontendBuild = $false
Write-Host "[OK] Frontend already built and up to date -- skipping build" -ForegroundColor Green
step "frontend" "up to date"
} else {
Write-Host "[INFO] Frontend source changed since last build -- rebuilding..." -ForegroundColor Yellow
}
@@ -992,10 +1128,9 @@ if ($NeedFrontendBuild -and -not $IsPipInstall) {
$CssFiles = Get-ChildItem (Join-Path $DistDir "assets") -Filter "*.css" -ErrorAction SilentlyContinue
$MaxCssSize = ($CssFiles | Measure-Object -Property Length -Maximum).Maximum
if ($MaxCssSize -lt 100000) {
Write-Host "[WARN] Largest CSS file is only $([math]::Round($MaxCssSize / 1024))KB -- Tailwind may not have scanned all source files." -ForegroundColor Yellow
Write-Host " Expected >100KB. Check for .gitignore files blocking the Tailwind oxide scanner." -ForegroundColor Yellow
step "frontend" "built (warning: CSS may be truncated)" "Yellow"
} else {
Write-Host "[OK] Frontend built to frontend/dist (CSS: $([math]::Round($MaxCssSize / 1024))KB)" -ForegroundColor Green
step "frontend" "built"
}
}
@@ -1319,7 +1454,7 @@ if ($LASTEXITCODE -ne 0) {
Write-Host "[WARN] Could not install tiktoken into .venv_t5/ -- Qwen tokenizers may fail" -ForegroundColor Yellow
}
$ErrorActionPreference = $prevEAP_t5
Write-Host "[OK] Transformers 5.x pre-installed to .venv_t5/" -ForegroundColor Green
step "transformers" "5.x pre-installed"
# ==========================================================================
# PHASE 3.4: Prefer prebuilt llama.cpp bundles before source build
@@ -1400,7 +1535,7 @@ if ($env:UNSLOTH_LLAMA_FORCE_COMPILE -eq "1") {
$ErrorActionPreference = $prevEAPPrebuilt
if ($prebuiltExit -eq 0) {
Write-Host "[OK] Prebuilt llama.cpp installed and validated" -ForegroundColor Green
step "llama.cpp" "prebuilt installed and validated"
} else {
if (Test-Path $LlamaCppDir) {
Write-Host "[WARN] Prebuilt update failed; existing install was restored or cleaned before source build fallback" -ForegroundColor Yellow
@@ -1495,10 +1630,10 @@ if (Test-Path $LlamaServerBin) {
if (-not $NeedLlamaSourceBuild) {
Write-Host ""
Write-Host "[OK] Using validated prebuilt llama.cpp install at $LlamaCppDir" -ForegroundColor Green
step "llama.cpp" "prebuilt (validated)"
} elseif ((Test-Path $LlamaServerBin) -and -not $NeedRebuild) {
Write-Host ""
Write-Host "[OK] llama-server already exists at $LlamaServerBin" -ForegroundColor Green
step "llama.cpp" "already built"
} elseif (-not $HasCmakeForBuild) {
Write-Host ""
if (-not $HasNvidiaSmi) {
@@ -1719,38 +1854,37 @@ if (-not $NeedLlamaSourceBuild) {
$totalSec = [math]::Round($totalSw.Elapsed.TotalSeconds % 60, 1)
# -- Summary --
Write-Host ""
if ($BuildOk -and (Test-Path $LlamaServerBin)) {
Write-Host "[OK] llama-server built at $LlamaServerBin" -ForegroundColor Green
step "llama.cpp" "built"
$QuantizeBin = Join-Path $BuildDir "bin\Release\llama-quantize.exe"
if (Test-Path $QuantizeBin) {
Write-Host "[OK] llama-quantize available for GGUF export" -ForegroundColor Green
step "llama-quantize" "built"
}
Write-Host " Build time: ${totalMin}m ${totalSec}s" -ForegroundColor Cyan
step "build time" "${totalMin}m ${totalSec}s" "DarkGray"
} else {
# Check alternate paths (some cmake generators don't use Release subdir)
$altBin = Join-Path $BuildDir "bin\llama-server.exe"
if ($BuildOk -and (Test-Path $altBin)) {
Write-Host "[OK] llama-server built at $altBin" -ForegroundColor Green
Write-Host " Build time: ${totalMin}m ${totalSec}s" -ForegroundColor Cyan
step "llama.cpp" "built"
step "build time" "${totalMin}m ${totalSec}s" "DarkGray"
} else {
Write-Host "[FAILED] llama.cpp build failed at step: $FailedStep (${totalMin}m ${totalSec}s)" -ForegroundColor Red
Write-Host " To retry: delete $LlamaCppDir and re-run setup." -ForegroundColor Yellow
step "llama.cpp" "build failed at: $FailedStep (${totalMin}m ${totalSec}s)" "Red"
substep "To retry: delete $LlamaCppDir and re-run setup." "Yellow"
exit 1
}
}
}
# ============================================
# Done
# ============================================
# ─────────────────────────────────────────────
# Footer
# ─────────────────────────────────────────────
if ($script:StudioVtOk -and -not $env:NO_COLOR) {
Write-Host (" {0}{1}{2}" -f (Get-StudioAnsi Dim), $Rule, (Get-StudioAnsi Reset))
Write-Host (" " + (Get-StudioAnsi Title) + "Unsloth Studio Installed" + (Get-StudioAnsi Reset))
Write-Host (" {0}{1}{2}" -f (Get-StudioAnsi Dim), $Rule, (Get-StudioAnsi Reset))
} else {
Write-Host " $Rule" -ForegroundColor DarkGray
Write-Host " Unsloth Studio Installed" -ForegroundColor Green
Write-Host " $Rule" -ForegroundColor DarkGray
}
step "launch" "unsloth studio -H 0.0.0.0 -p 8888"
Write-Host ""
$doneLine = if ($env:SKIP_STUDIO_BASE -eq "1") { "Setup Complete!" } else { "Update Complete!" }
$doneContent = " $doneLine"
Write-Host "+===============================================+" -ForegroundColor Green
Write-Host ("|" + $doneContent.PadRight(47) + "|") -ForegroundColor Green
Write-Host "| |" -ForegroundColor Green
Write-Host "| Launch with: |" -ForegroundColor Green
Write-Host "| unsloth studio -H 0.0.0.0 -p 8888 |" -ForegroundColor Green
Write-Host "| |" -ForegroundColor Green
Write-Host "+===============================================+" -ForegroundColor Green


@@ -6,6 +6,27 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
RULE=$(printf '\342\224\200%.0s' {1..52})
# ── Colors (same palette as startup_banner / install_python_stack) ──
if [ -n "${NO_COLOR:-}" ]; then
C_TITLE= C_DIM= C_OK= C_WARN= C_ERR= C_RST=
elif [ -t 1 ] || [ -n "${FORCE_COLOR:-}" ]; then
C_TITLE=$'\033[38;5;150m'
C_DIM=$'\033[38;5;245m'
C_OK=$'\033[38;5;108m'
C_WARN=$'\033[38;5;136m'
C_ERR=$'\033[91m'
C_RST=$'\033[0m'
else
C_TITLE= C_DIM= C_OK= C_WARN= C_ERR= C_RST=
fi
# ── Output helpers ──
# Consistent column layout: 2-space indent, 15-char label (fits llama-quantize), then value.
# Usage: step <label> <message> [color] (color defaults to C_OK)
step() { printf " ${C_DIM}%-15.15s${C_RST}${3:-$C_OK}%s${C_RST}\n" "$1" "$2"; }
substep() { printf " ${C_DIM}%-15s%s${C_RST}\n" "" "$1"; }
# ── Helper: run command quietly, show output only on failure ──
_run_quiet() {
@@ -15,7 +36,7 @@ _run_quiet() {
local tmplog
tmplog=$(mktemp) || {
printf '%s\n' "Failed to create temporary file" >&2
step "error" "Failed to create temporary file" "$C_ERR" >&2
[ "$on_fail" = "exit" ] && exit 1 || return 1
}
@@ -24,7 +45,7 @@ _run_quiet() {
return 0
else
local exit_code=$?
printf 'Failed: %s (exit code %s):\n' "$label" "$exit_code" >&2
step "error" "$label failed (exit code $exit_code)" "$C_ERR" >&2
cat "$tmplog" >&2
rm -f "$tmplog"
@@ -44,51 +65,41 @@ run_quiet_no_exit() {
_run_quiet return "$@"
}
if [ "${SKIP_STUDIO_BASE:-0}" = "1" ]; then
echo "╔══════════════════════════════════════╗"
echo "║ Unsloth Studio Setup Script ║"
echo "╚══════════════════════════════════════╝"
else
echo "╔══════════════════════════════════════╗"
echo "║ Unsloth Studio Update Script ║"
echo "╚══════════════════════════════════════╝"
fi
# ── Banner ──
echo ""
printf " ${C_TITLE}%s${C_RST}\n" "🦥 Unsloth Studio Setup"
printf " ${C_DIM}%s${C_RST}\n" "$RULE"
# ── Clean up stale Unsloth compiled caches ──
# ── Clean up stale caches ──
rm -rf "$REPO_ROOT/unsloth_compiled_cache"
rm -rf "$SCRIPT_DIR/backend/unsloth_compiled_cache"
rm -rf "$SCRIPT_DIR/tmp/unsloth_compiled_cache"
# ── Detect Colab (like unsloth does) ──
# ── Detect Colab ──
IS_COLAB=false
keynames=$'\n'$(printenv | cut -d= -f1)
if [[ "$keynames" == *$'\nCOLAB_'* ]]; then
IS_COLAB=true
fi
# ── Detect whether frontend needs building ──
# Skip if dist/ exists AND no tracked input is newer than dist/.
# Checks top-level config/entry files and src/, public/ recursively.
# This handles: PyPI installs (dist/ bundled), repeat runs (no changes),
# and upgrades/pulls (source newer than dist/ triggers rebuild).
# ── Frontend ──
_NEED_FRONTEND_BUILD=true
if [ -d "$SCRIPT_DIR/frontend/dist" ]; then
# Check all top-level files (package.json, bun.lock, vite.config.ts, index.html, etc.)
_changed=$(find "$SCRIPT_DIR/frontend" -maxdepth 1 -type f \
! -name 'bun.lock' \
-newer "$SCRIPT_DIR/frontend/dist" -print -quit 2>/dev/null)
# Check src/ and public/ recursively (|| true guards against set -e when dirs are missing)
if [ -z "$_changed" ]; then
_changed=$(find "$SCRIPT_DIR/frontend/src" "$SCRIPT_DIR/frontend/public" \
-type f -newer "$SCRIPT_DIR/frontend/dist" -print -quit 2>/dev/null) || true
fi
if [ -z "$_changed" ]; then
_NEED_FRONTEND_BUILD=false
fi
[ -z "$_changed" ] && _NEED_FRONTEND_BUILD=false
fi
if [ "$_NEED_FRONTEND_BUILD" = false ]; then
echo "✅ Frontend already built and up to date -- skipping Node/npm check."
step "frontend" "up to date"
else
# ── Node ──
NEED_NODE=true
if command -v node &>/dev/null && command -v npm &>/dev/null; then
NODE_MAJOR=$(node -v | sed 's/v//' | cut -d. -f1)
@@ -100,90 +111,71 @@ if command -v node &>/dev/null && command -v npm &>/dev/null; then
if [ "$NODE_MAJOR" -eq 22 ] && [ "$NODE_MINOR" -ge 12 ]; then NODE_OK=true; fi
if [ "$NODE_MAJOR" -ge 23 ]; then NODE_OK=true; fi
if [ "$NODE_OK" = true ] && [ "$NPM_MAJOR" -ge 11 ]; then
echo "✅ Node $(node -v) and npm $(npm -v) already meet requirements. Skipping nvm install."
NEED_NODE=false
else
if [ "$IS_COLAB" = true ] && [ "$NODE_OK" = true ]; then
echo "✅ Node $(node -v) and npm $(npm -v) detected in Colab."
# In Colab, just upgrade npm directly - nvm doesn't work well
if [ "$NPM_MAJOR" -lt 11 ]; then
echo " Upgrading npm to latest..."
substep "upgrading npm..."
npm install -g npm@latest > /dev/null 2>&1
fi
NEED_NODE=false
else
echo "⚠️ Node $(node -v) / npm $(npm -v) too old. Installing via nvm..."
fi
fi
else
echo "⚠️ Node/npm not found. Installing via nvm..."
fi
if [ "$NEED_NODE" = true ]; then
# ── 2. Install nvm ──
export NODE_OPTIONS=--dns-result-order=ipv4first # or else fails on colab.
echo "Installing nvm..."
substep "installing nvm..."
export NODE_OPTIONS=--dns-result-order=ipv4first
curl -so- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash > /dev/null 2>&1
# Load nvm (source ~/.bashrc won't work inside a script)
export NVM_DIR="$HOME/.nvm"
set +u
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# ── Fix npmrc conflict with nvm ──
# System npm (apt, conda, etc.) may have written `prefix` or `globalconfig`
# to ~/.npmrc, which is incompatible with nvm and causes "nvm use" to fail
# with: "has a `globalconfig` and/or a `prefix` setting, which are
# incompatible with nvm."
if [ -f "$HOME/.npmrc" ]; then
if grep -qE '^\s*(prefix|globalconfig)\s*=' "$HOME/.npmrc"; then
echo " Removing incompatible prefix/globalconfig from ~/.npmrc for nvm..."
sed -i.bak '/^\s*\(prefix\|globalconfig\)\s*=/d' "$HOME/.npmrc"
fi
fi
# ── 3. Install Node LTS ──
echo "Installing Node LTS..."
substep "installing Node LTS..."
run_quiet "nvm install" nvm install --lts
nvm use --lts > /dev/null 2>&1
set -u
# ── 4. Verify versions ──
NODE_MAJOR=$(node -v | sed 's/v//' | cut -d. -f1)
NPM_MAJOR=$(npm -v | cut -d. -f1)
if [ "$NODE_MAJOR" -lt 20 ]; then
echo "❌ ERROR: Node version must be >= 20 (got $(node -v))"
step "node" "FAILED -- version must be >= 20 (got $(node -v))" "$C_ERR"
exit 1
fi
if [ "$NPM_MAJOR" -lt 11 ]; then
echo "⚠️ npm version is $(npm -v), expected >= 11. Updating..."
substep "upgrading npm..."
run_quiet "npm update" npm install -g npm@latest
fi
fi
echo "✅ Node $(node -v) | npm $(npm -v)"
step "node" "$(node -v) | npm $(npm -v)"
# ── Install bun (optional, faster package installs) ──
# Uses npm to install bun globally -- Node is already guaranteed above,
# avoids platform-specific installers, PATH issues, and admin requirements.
if ! command -v bun &>/dev/null; then
echo " Installing bun (faster frontend package installs)..."
substep "installing bun..."
if npm install -g bun > /dev/null 2>&1 && command -v bun &>/dev/null; then
echo " bun installed ($(bun --version))"
substep "bun installed ($(bun --version))"
else
echo " bun install skipped (npm will be used instead)"
substep "bun install skipped (npm will be used instead)"
fi
else
echo " bun already installed ($(bun --version))"
substep "bun already installed ($(bun --version))"
fi
# ── 5. Build frontend ──
# ── Build frontend ──
substep "building frontend..."
cd "$SCRIPT_DIR/frontend"
# Tailwind v4's oxide scanner respects .gitignore in parent directories.
# Python venvs create a .gitignore with "*" (ignore everything), which
# prevents Tailwind from scanning .tsx source files for class names.
# Temporarily hide any such .gitignore during the build, then restore it.
_HIDDEN_GITIGNORES=()
_dir="$(pwd)"
while [ "$_dir" != "/" ]; do
@@ -257,36 +249,31 @@ run_quiet "npm run build" npm run build
_restore_gitignores
trap - EXIT
# Validate CSS output -- catch truncated Tailwind builds
_MAX_CSS=$(find "$SCRIPT_DIR/frontend/dist/assets" -name '*.css' -exec wc -c {} + 2>/dev/null | sort -n | tail -1 | awk '{print $1}')
if [ -z "$_MAX_CSS" ]; then
echo "⚠️ WARNING: No CSS files were emitted. The frontend build may have failed."
step "frontend" "built (warning: no CSS emitted)" "$C_WARN"
elif [ "$_MAX_CSS" -lt 100000 ]; then
echo "⚠️ WARNING: Largest CSS file is only $((_MAX_CSS / 1024))KB (expected >100KB)."
echo " Tailwind may not have scanned all source files. Check for .gitignore interference."
step "frontend" "built (warning: CSS may be truncated)" "$C_WARN"
else
step "frontend" "built"
fi
cd "$SCRIPT_DIR"
echo "✅ Frontend built to frontend/dist"
fi # end frontend build check
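The `find … -exec wc -c … | sort -n | tail -1 | awk` pipeline above reduces to "byte size of the largest emitted CSS file". A hedged Python equivalent (helper name hypothetical; the 100KB threshold is the one the script uses):

```python
from pathlib import Path

def largest_css_bytes(assets_dir: str) -> int:
    """Size in bytes of the largest *.css under assets_dir, or 0 if none exist."""
    sizes = [p.stat().st_size for p in Path(assets_dir).rglob("*.css")]
    return max(sizes, default=0)

# The script warns when this is 0 (no CSS emitted at all) or under
# 100_000 bytes (a likely-truncated Tailwind build).
```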
# ── oxc-validator runtime (needs npm -- skip if not available) ──
# ── oxc-validator runtime ──
if [ -d "$SCRIPT_DIR/backend/core/data_recipe/oxc-validator" ] && command -v npm &>/dev/null; then
cd "$SCRIPT_DIR/backend/core/data_recipe/oxc-validator"
run_quiet "npm install (oxc validator runtime)" npm install
cd "$SCRIPT_DIR"
fi
# ── 6. Python venv + deps ──
# The venv must already exist (created by install.sh).
# This script (setup.sh / "unsloth studio update") only updates packages.
# ── Python venv + deps ──
STUDIO_HOME="$HOME/.unsloth/studio"
VENV_DIR="$STUDIO_HOME/unsloth_studio"
VENV_T5_DIR="$STUDIO_HOME/.venv_t5"
# Clean up legacy in-repo venvs if they exist
[ -d "$REPO_ROOT/.venv" ] && rm -rf "$REPO_ROOT/.venv"
[ -d "$REPO_ROOT/.venv_overlay" ] && rm -rf "$REPO_ROOT/.venv_overlay"
[ -d "$REPO_ROOT/.venv_t5" ] && rm -rf "$REPO_ROOT/.venv_t5"
@@ -299,15 +286,15 @@ if [ ! -x "$VENV_DIR/bin/python" ]; then
# Strip all version constraints so pip keeps Colab's pre-installed
# packages (huggingface-hub, datasets, transformers) and only pulls
# in genuinely missing ones (structlog, fastapi, etc.).
echo " Colab detected, installing Studio backend dependencies..."
substep "Colab detected, installing Studio backend dependencies..."
sed 's/[><=!~;].*//' "$SCRIPT_DIR/backend/requirements/studio.txt" \
| grep -v '^#' | grep -v '^$' \
| pip install -q -r /dev/stdin 2>/dev/null || true
_COLAB_NO_VENV=true
else
echo "❌ ERROR: Virtual environment not found at $VENV_DIR"
echo " Run install.sh first to create the environment:"
echo " curl -fsSL https://unsloth.ai/install.sh | sh"
step "python" "venv not found at $VENV_DIR" "$C_ERR"
substep "Run install.sh first to create the environment:"
substep "curl -fsSL https://unsloth.ai/install.sh | sh"
exit 1
fi
else
@@ -318,7 +305,6 @@ install_python_stack() {
python "$SCRIPT_DIR/install_python_stack.py"
}
# ── Ensure uv is available (much faster than pip) ──
USE_UV=false
if command -v uv &>/dev/null; then
USE_UV=true
@@ -327,7 +313,6 @@ elif curl -LsSf https://astral.sh/uv/install.sh | sh > /dev/null 2>&1; then
command -v uv &>/dev/null && USE_UV=true
fi
# Helper: install a package, preferring uv with pip fallback
fast_install() {
if [ "$USE_UV" = true ]; then
uv pip install --python "$(command -v python)" "$@" && return 0
@@ -366,12 +351,12 @@ print(version('$_PKG_NAME'))
|| echo "")
if [ -n "$INSTALLED_VER" ] && [ -n "$LATEST_VER" ] && [ "$INSTALLED_VER" = "$LATEST_VER" ]; then
echo "$_PKG_NAME $INSTALLED_VER is up to date (matches PyPI latest)"
step "python" "$_PKG_NAME $INSTALLED_VER is up to date"
_SKIP_PYTHON_DEPS=true
elif [ -n "$INSTALLED_VER" ] && [ -n "$LATEST_VER" ]; then
echo "⬆️ $_PKG_NAME $INSTALLED_VER$LATEST_VER available, updating dependencies..."
substep "$_PKG_NAME $INSTALLED_VER -> $LATEST_VER available, updating..."
elif [ -z "$LATEST_VER" ]; then
echo "⚠️ Could not reach PyPI, updating dependencies to be safe..."
substep "could not reach PyPI, updating to be safe..."
fi
fi
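The up-to-date check above compares the locally installed version against the latest release on PyPI and skips the dependency step only when they match; an unreachable PyPI is treated as "update to be safe". A hedged Python sketch of that comparison (the PyPI JSON endpoint is real; error handling is simplified relative to the script):

```python
import json
import urllib.request
from importlib.metadata import PackageNotFoundError, version

def is_up_to_date(pkg: str) -> bool:
    """True only when the installed version matches PyPI's latest release."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False  # not installed at all
    try:
        url = f"https://pypi.org/pypi/{pkg}/json"
        with urllib.request.urlopen(url, timeout=5) as resp:
            latest = json.load(resp)["info"]["version"]
    except OSError:
        return False  # PyPI unreachable -> update to be safe, like the script
    return installed == latest
```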
@@ -382,18 +367,14 @@ if [ "$_SKIP_PYTHON_DEPS" = false ]; then
# Models like GLM-4.7-Flash need transformers>=5.3.0. Instead of pip-installing
# at runtime (slow, ~10-15s), we pre-install into a separate directory.
# The training subprocess just prepends .venv_t5/ to sys.path -- instant switch.
echo ""
echo " Pre-installing transformers 5.x for newer model support..."
mkdir -p "$VENV_T5_DIR"
run_quiet "install transformers 5.x" fast_install --target "$VENV_T5_DIR" --no-deps "transformers==5.3.0"
run_quiet "install huggingface_hub for t5" fast_install --target "$VENV_T5_DIR" --no-deps "huggingface_hub==1.7.1"
run_quiet "install hf_xet for t5" fast_install --target "$VENV_T5_DIR" --no-deps "hf_xet==1.4.2"
# tiktoken is needed by Qwen-family tokenizers. Install with deps since
# regex/requests may be missing on Windows.
run_quiet "install tiktoken for t5" fast_install --target "$VENV_T5_DIR" "tiktoken"
echo "✅ Transformers 5.x pre-installed to $VENV_T5_DIR/"
step "transformers" "5.x pre-installed"
else
echo "✅ Python dependencies up to date — skipping"
step "python" "dependencies up to date"
fi
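The comment above describes the `.venv_t5/` overlay trick: transformers 5.x is pre-installed with `pip --target` into a side directory, and the training subprocess prepends that directory to `sys.path` so the import resolves to 5.x with no runtime pip install. A minimal sketch of the switch (the actual subprocess code is not shown in this diff; the path matches `VENV_T5_DIR` above):

```python
import os
import sys

VENV_T5_DIR = os.path.expanduser("~/.unsloth/studio/.venv_t5")

# Prepending wins: Python scans sys.path in order, so packages in
# .venv_t5/ shadow the same packages in the base site-packages.
if os.path.isdir(VENV_T5_DIR) and VENV_T5_DIR not in sys.path:
    sys.path.insert(0, VENV_T5_DIR)

# import transformers  # now resolves from .venv_t5/ when it exists
```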
# ── 7. Prefer prebuilt llama.cpp bundles before any source build path ──
@@ -418,8 +399,7 @@ else
_RESOLVED_LLAMA_TAG=""
fi
if [ -z "$_RESOLVED_LLAMA_TAG" ]; then
echo ""
echo "⚠️ Failed to resolve an installable prebuilt llama.cpp tag via $_HELPER_RELEASE_REPO"
step "llama.cpp" "failed to resolve prebuilt tag via $_HELPER_RELEASE_REPO" "$C_WARN"
cat "$_RESOLVE_LLAMA_LOG" >&2 || true
set +e
# Resolve the llama.cpp tag for source-build fallback. Pass --published-repo
@@ -445,21 +425,18 @@ if [ -z "$_RESOLVED_LLAMA_TAG" ]; then
fi
rm -f "$_RESOLVE_LLAMA_LOG"
echo ""
echo "Resolved llama.cpp release tag: $_RESOLVED_LLAMA_TAG"
substep "resolved llama.cpp tag: $_RESOLVED_LLAMA_TAG"
if [ "$_LLAMA_FORCE_COMPILE" = "1" ]; then
echo ""
echo "⚠️ UNSLOTH_LLAMA_FORCE_COMPILE=1 -- skipping prebuilt llama.cpp install"
step "llama.cpp" "UNSLOTH_LLAMA_FORCE_COMPILE=1 -- skipping prebuilt" "$C_WARN"
_NEED_LLAMA_SOURCE_BUILD=true
else
echo ""
echo "Installing prebuilt llama.cpp bundle (preferred path)..."
substep "installing prebuilt llama.cpp..."
if [ -d "$LLAMA_CPP_DIR" ]; then
echo "Existing llama.cpp install detected -- validating staged prebuilt update before replacement"
substep "existing install detected -- validating update"
fi
if [ "${_SKIP_PREBUILT_INSTALL:-false}" = true ]; then
echo "⚠️ Skipping prebuilt install because prebuilt tag resolution failed -- falling back to source build"
substep "prebuilt tag resolution failed -- falling back to source build"
else
_PREBUILT_CMD=(
python "$SCRIPT_DIR/install_llama_prebuilt.py"
@@ -476,12 +453,12 @@ else
set -e
if [ "$_PREBUILT_STATUS" -eq 0 ]; then
echo "✅ Prebuilt llama.cpp installed and validated"
step "llama.cpp" "prebuilt installed and validated"
else
if [ -d "$LLAMA_CPP_DIR" ]; then
echo "⚠️ Prebuilt update failed; existing install was restored or cleaned before source build fallback"
substep "prebuilt update failed; existing install restored"
fi
echo "⚠️ Prebuilt llama.cpp path unavailable or failed validation -- falling back to source build"
substep "falling back to source build"
_NEED_LLAMA_SOURCE_BUILD=true
fi
fi
@@ -491,15 +468,10 @@ fi
# On WSL, sudo requires a password and can't be entered during GGUF export
# (runs in a non-interactive subprocess). Install build deps here instead.
if [ "$_NEED_LLAMA_SOURCE_BUILD" = true ] && grep -qi microsoft /proc/version 2>/dev/null; then
echo ""
echo "⚠️ WSL detected -- installing build dependencies for GGUF export..."
_GGUF_DEPS="pciutils build-essential cmake curl git libcurl4-openssl-dev"
# Try without sudo first (works when already root)
apt-get update -y >/dev/null 2>&1 || true
apt-get install -y $_GGUF_DEPS >/dev/null 2>&1 || true
# Check which packages are still missing
_STILL_MISSING=""
for _pkg in $_GGUF_DEPS; do
case "$_pkg" in
@@ -512,16 +484,11 @@ if [ "$_NEED_LLAMA_SOURCE_BUILD" = true ] && grep -qi microsoft /proc/version 2>
_STILL_MISSING=$(echo "$_STILL_MISSING" | sed 's/^ *//')
if [ -z "$_STILL_MISSING" ]; then
echo "✅ GGUF build dependencies installed"
step "gguf deps" "installed"
elif command -v sudo >/dev/null 2>&1; then
echo ""
echo " !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo " WARNING: We require sudo elevated permissions to install:"
echo " $_STILL_MISSING"
echo " If you accept, we'll run sudo now, and it'll prompt your password."
echo " !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo ""
printf " Accept? [Y/n] "
step "gguf deps" "sudo required for: $_STILL_MISSING" "$C_WARN"
printf " %-15s" ""
printf "accept? [Y/n] "
if [ -r /dev/tty ]; then
read -r REPLY </dev/tty || REPLY="y"
else
@@ -529,21 +496,19 @@ if [ "$_NEED_LLAMA_SOURCE_BUILD" = true ] && grep -qi microsoft /proc/version 2>
fi
case "$REPLY" in
[nN]*)
echo ""
echo " Please install these packages first, then re-run Unsloth Studio setup:"
echo " sudo apt-get update -y && sudo apt-get install -y $_STILL_MISSING"
substep "skipped -- run manually:"
substep "sudo apt-get install -y $_STILL_MISSING"
_SKIP_GGUF_BUILD=true
;;
*)
sudo apt-get update -y
sudo apt-get install -y $_STILL_MISSING
echo "✅ GGUF build dependencies installed"
step "gguf deps" "installed"
;;
esac
else
echo " sudo is not available on this system."
echo " Please install as root, then re-run setup:"
echo " apt-get install -y $_STILL_MISSING"
step "gguf deps" "missing (no sudo) -- install manually:" "$C_WARN"
substep "apt-get install -y $_STILL_MISSING"
_SKIP_GGUF_BUILD=true
fi
fi
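Several of the new lines print into a 15-column label field (`printf " %-15s"`), and the PR's `_step` fix clamps labels to that width so an over-long name cannot produce negative padding. Python's `%` operator honors the same pad-and-truncate precision as printf's `%-15.15s`, which makes the behavior easy to see (a sketch, not the script's code):

```python
def clamp_label(label: str, width: int = 15) -> str:
    """Left-justify to `width` and truncate past it, like printf %-15.15s."""
    return "%-*.*s" % (width, width, label)

# Short labels are padded out to the column width; long labels are
# clipped instead of overflowing it (the bug plain %-15s had).
```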
@@ -557,23 +522,14 @@ fi
if [ "$_NEED_LLAMA_SOURCE_BUILD" = false ]; then
:
elif [ "${_SKIP_GGUF_BUILD:-}" = true ]; then
echo ""
echo "Skipping llama-server build (missing dependencies)"
echo " Install the missing packages and re-run setup to enable GGUF inference."
step "llama.cpp" "skipped (missing build deps)" "$C_WARN"
else
{
# Check prerequisites
if ! command -v cmake &>/dev/null; then
echo ""
echo "⚠️ cmake not found — skipping llama-server build (GGUF inference won't be available)"
echo " Install cmake and re-run setup.sh to enable GGUF inference."
step "llama.cpp" "skipped (cmake not found)" "$C_WARN"
elif ! command -v git &>/dev/null; then
echo ""
echo "⚠️ git not found — skipping llama-server build (GGUF inference won't be available)"
step "llama.cpp" "skipped (git not found)" "$C_WARN"
else
echo ""
echo "Building llama-server for GGUF inference..."
BUILD_OK=true
_CLONE_BRANCH_ARGS=()
if [ "$_RESOLVED_LLAMA_TAG" != "latest" ] && [ -n "$_RESOLVED_LLAMA_TAG" ]; then
@@ -584,19 +540,13 @@ else
run_quiet_no_exit "clone llama.cpp" git clone --depth 1 "${_CLONE_BRANCH_ARGS[@]}" https://github.com/ggml-org/llama.cpp.git "$_BUILD_TMP" || BUILD_OK=false
if [ "$BUILD_OK" = true ]; then
# Skip tests/examples we don't need (faster build)
CMAKE_ARGS="-DLLAMA_BUILD_TESTS=OFF -DLLAMA_BUILD_EXAMPLES=OFF -DLLAMA_BUILD_SERVER=ON -DGGML_NATIVE=ON"
# Use ccache if available (dramatically faster rebuilds)
if command -v ccache &>/dev/null; then
CMAKE_ARGS="$CMAKE_ARGS -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache"
echo " Using ccache for faster compilation"
fi
# Detect GPU backend: CUDA (NVIDIA) or ROCm (AMD)
GPU_BACKEND=""
# Check for CUDA: check nvcc on PATH, then common install locations
NVCC_PATH=""
if command -v nvcc &>/dev/null; then
NVCC_PATH="$(command -v nvcc)"
@@ -629,15 +579,12 @@ else
fi
fi
_BUILD_DESC="building"
if [ -n "$NVCC_PATH" ]; then
echo " Building with CUDA support (nvcc: $NVCC_PATH)..."
CMAKE_ARGS="$CMAKE_ARGS -DGGML_CUDA=ON"
# Detect GPU compute capability and limit CUDA architectures
# Without this, cmake builds for ALL default archs (very slow)
CUDA_ARCHS=""
if command -v nvidia-smi &>/dev/null; then
# Read all GPUs, deduplicate (handles mixed-GPU hosts)
_raw_caps=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader 2>/dev/null || true)
while IFS= read -r _cap; do
_cap=$(echo "$_cap" | tr -d '[:space:]')
@@ -653,13 +600,12 @@ else
fi
if [ -n "$CUDA_ARCHS" ]; then
echo " GPU compute capabilities: ${CUDA_ARCHS//;/, } -- limiting build to detected archs"
CMAKE_ARGS="$CMAKE_ARGS -DCMAKE_CUDA_ARCHITECTURES=${CUDA_ARCHS}"
_BUILD_DESC="building (CUDA, sm_${CUDA_ARCHS//;/+sm_})"
else
echo " Could not detect GPU arch -- building for all default CUDA architectures (slower)"
_BUILD_DESC="building (CUDA)"
fi
# Multi-threaded nvcc compilation (uses all CPU cores per .cu file)
CMAKE_ARGS="$CMAKE_ARGS -DCMAKE_CUDA_FLAGS=--threads=0"
elif [ "$GPU_BACKEND" = "rocm" ]; then
# Resolve hipcc symlinks to find the real ROCm root
@@ -672,7 +618,7 @@ else
ROCM_ROOT="$(cd "$(dirname "$_HIPCC_REAL")/.." 2>/dev/null && pwd)"
fi
echo " Building with ROCm support (AMD GPU, hipcc: $_HIPCC_REAL)..."
_BUILD_DESC="building (ROCm)"
CMAKE_ARGS="$CMAKE_ARGS -DGGML_HIP=ON"
export ROCM_PATH="$ROCM_ROOT"
export HIP_PATH="$ROCM_ROOT"
@@ -697,24 +643,20 @@ fi
fi
if [ -n "$GPU_TARGETS" ]; then
echo " AMD GPU architectures: ${GPU_TARGETS//;/, } -- limiting build to detected targets"
CMAKE_ARGS="$CMAKE_ARGS -DGPU_TARGETS=${GPU_TARGETS}"
else
echo " Could not detect AMD GPU arch -- building for default targets (cmake will auto-detect)"
_BUILD_DESC="building (ROCm, ${GPU_TARGETS//;/+})"
fi
elif [ -d /usr/local/cuda ] || nvidia-smi &>/dev/null; then
echo " CUDA driver detected but nvcc not found — building CPU-only"
echo " To enable GPU: install cuda-toolkit or add nvcc to PATH"
_BUILD_DESC="building (CPU, CUDA driver found but nvcc missing)"
elif [ -d /opt/rocm ] || command -v rocm-smi &>/dev/null; then
echo " ROCm driver detected but hipcc not found — building CPU-only"
echo " To enable GPU: install rocm-dev or add hipcc to PATH"
_BUILD_DESC="building (CPU, ROCm driver found but hipcc missing)"
else
echo " Building CPU-only (no CUDA detected)..."
_BUILD_DESC="building (CPU)"
fi
NCPU=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 4)
substep "$_BUILD_DESC..."
# Use Ninja if available (faster parallel builds than Make)
NCPU=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 4)
CMAKE_GENERATOR_ARGS=""
if command -v ninja &>/dev/null; then
CMAKE_GENERATOR_ARGS="-G Ninja"
@@ -727,7 +669,6 @@ else
run_quiet_no_exit "build llama-server" cmake --build "$_BUILD_TMP/build" --config Release --target llama-server -j"$NCPU" || BUILD_OK=false
fi
# Also build llama-quantize (needed by unsloth-zoo's GGUF export pipeline)
if [ "$BUILD_OK" = true ]; then
run_quiet_no_exit "build llama-quantize" cmake --build "$_BUILD_TMP/build" --config Release --target llama-quantize -j"$NCPU" || true
fi
@@ -745,45 +686,30 @@ else
rm -rf "$_BUILD_TMP"
fi
if [ "$BUILD_OK" = true ]; then
if [ -f "$LLAMA_SERVER_BIN" ]; then
echo "✅ llama-server built at $LLAMA_SERVER_BIN"
else
echo "⚠️ llama-server binary not found after build — GGUF inference won't be available"
fi
if [ -f "$LLAMA_CPP_DIR/llama-quantize" ]; then
echo "✅ llama-quantize available for GGUF export"
fi
if [ "$BUILD_OK" = true ] && [ -f "$LLAMA_SERVER_BIN" ]; then
step "llama.cpp" "built"
[ -f "$LLAMA_CPP_DIR/llama-quantize" ] && step "llama-quantize" "built"
elif [ "$BUILD_OK" = true ]; then
step "llama.cpp" "binary not found after build" "$C_WARN"
else
echo "⚠️ llama-server build failed — GGUF inference won't be available, but everything else works"
step "llama.cpp" "build failed" "$C_ERR"
fi
fi
}
fi # end _SKIP_GGUF_BUILD check
echo ""
if [ "${SKIP_STUDIO_BASE:-0}" = "1" ]; then
_DONE_LINE="║ Setup Complete! ║"
else
_DONE_LINE="║ Update Complete! ║"
fi
# ── Footer ──
if [ "$IS_COLAB" = true ]; then
echo "╔══════════════════════════════════════╗"
echo "$_DONE_LINE"
echo "╠══════════════════════════════════════╣"
echo "║ Unsloth Studio is ready to start ║"
echo "║ in your Colab notebook! ║"
echo "║ ║"
echo "║ from colab import start ║"
echo "║ start() ║"
echo "╚══════════════════════════════════════╝"
echo ""
printf " ${C_DIM}%s${C_RST}\n" "$RULE"
printf " ${C_TITLE}%s${C_RST}\n" "Unsloth Studio Setup Complete"
printf " ${C_DIM}%s${C_RST}\n" "$RULE"
substep "from colab import start"
substep "start()"
else
echo "╔══════════════════════════════════════╗"
echo "$_DONE_LINE"
echo "╠══════════════════════════════════════╣"
echo "║ Launch with: ║"
echo "║ ║"
echo "║ unsloth studio -H 0.0.0.0 -p 8888 ║"
echo "╚══════════════════════════════════════╝"
printf " ${C_DIM}%s${C_RST}\n" "$RULE"
printf " ${C_TITLE}%s${C_RST}\n" "Unsloth Studio Installed"
printf " ${C_DIM}%s${C_RST}\n" "$RULE"
printf " ${C_DIM}%-15s${C_OK}%s${C_RST}\n" "launch" "unsloth studio -H 0.0.0.0 -p 8888"
fi
echo ""

@@ -237,31 +237,41 @@ def stop():
# ── unsloth studio setup / update ─────────────────────────────────────
def _run_setup_script() -> None:
def _run_setup_script(*, verbose: bool = False) -> None:
"""Find and run the studio setup/update script."""
script = _find_setup_script()
if not script:
typer.echo("Error: Could not find setup script (setup.sh / setup.ps1).")
raise typer.Exit(1)
env = {**os.environ, "UNSLOTH_VERBOSE": "1"} if verbose else None
if platform.system() == "Windows":
result = subprocess.run(
["powershell", "-ExecutionPolicy", "Bypass", "-File", str(script)],
env = env,
)
else:
result = subprocess.run(["bash", str(script)])
result = subprocess.run(["bash", str(script)], env = env)
if result.returncode != 0:
raise typer.Exit(result.returncode)
@studio_app.command(hidden = True)
def setup():
def setup(
verbose: bool = typer.Option(
False,
"--verbose",
"-v",
help = "Full pip/build output during setup for troubleshooting.",
),
):
"""Deprecated: use 'unsloth studio update' or re-run install.sh."""
typer.echo(
"Note: 'unsloth studio setup' is deprecated. Use 'unsloth studio update' or re-run install.sh."
)
_run_setup_script()
_run_setup_script(verbose = verbose)
@studio_app.command()
@@ -272,6 +282,12 @@ def update(
package: str = typer.Option(
"unsloth", "--package", help = "Package name to install/update (for testing)"
),
verbose: bool = typer.Option(
False,
"--verbose",
"-v",
help = "Full pip/build output during update for troubleshooting.",
),
):
"""Update Unsloth Studio dependencies and rebuild."""
os.environ["STUDIO_LOCAL_INSTALL"] = "1" if local else "0"
@@ -281,7 +297,7 @@ def update(
# have to guess from SCRIPT_DIR (which may be inside site-packages).
repo_root = Path(__file__).resolve().parents[2]
os.environ["STUDIO_LOCAL_REPO"] = str(repo_root)
_run_setup_script()
_run_setup_script(verbose = verbose)
# ── unsloth studio reset-password ────────────────────────────────────