mirror of https://github.com/block/goose.git (synced 2026-04-28 03:29:36 +00:00)

merge main + resolve conflicts for sources-projects

Signed-off-by: Douwe Osinga <douwe@squareup.com>

Commit c4ef532a9c

165 changed files with 12472 additions and 5801 deletions
@@ -92,6 +92,16 @@ You are a senior engineer conducting a thorough code review. Review **only the l

- **AnimatePresence**: Is it used properly with unique keys for dialog/modal transitions?
- **Reduced Motion**: Is `useReducedMotion()` respected for accessibility?

### Async State, Defaults & Persistence

- **Async Source of Truth**: During async provider/model/session mutations, does UI/session/localStorage state update only after the backend accepts the change? If the UI updates optimistically, is there an explicit rollback path?
- **UI/Backend Drift**: Could the UI show provider/model/project/persona X while the backend is still on Y after a failed mutation, delayed prepare, or pending-to-real session handoff?
- **Requested vs Fallback Authority**: Do explicit user or caller selections stay authoritative over sticky defaults, saved preferences, aliases, or fallback resolution?
- **Dependent State Invalidation**: When a parent selection changes (provider/project/persona/workspace/etc.), are dependent values like `modelId`, `modelName`, defaults, or cached labels cleared or recomputed so stale state does not linger?
- **Persisted Preference Validation**: Are stored selections validated against current inventory/capabilities before reuse, and do stale values fail soft instead of breaking creation flows?
- **Compatibility of Fallbacks**: Are default or sticky selections guaranteed to remain compatible with the active concrete provider/backend, instead of leaking across providers?
- **Best-Effort Lookups**: Do inventory/config/default-resolution lookups degrade gracefully on transient failure, or can they incorrectly block a primary flow that should still work with a safe fallback?
- **Draft/Home/Handoff Paths**: If the product has draft, Home, pending, or pre-created sessions, did you review those handoff paths separately from the already-active session path?

### General Code Quality

- **Error Handling**: Are errors handled gracefully with user-friendly messages?
- **Loading States**: Are loading states shown during async operations?
@@ -104,13 +114,18 @@ You are a senior engineer conducting a thorough code review. Review **only the l

### Step 0: Run Quality Checks

Before reading any code, run the project's CI gate to establish a baseline:
Before reading any code, run the project's CI gate to establish a baseline. Use **check-only** commands so the baseline never mutates the working tree — otherwise auto-formatters can introduce unstaged diffs and you'll end up reviewing formatter output instead of the author's actual changes.

Avoid `just check-everything` as the baseline in this repo: that recipe runs `cargo fmt --all` in write mode and will modify the working tree. Run the non-mutating equivalents instead:

```bash
just ci
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
(cd ui/desktop && pnpm run lint:check)
./scripts/check-openapi-schema.sh
```

This runs: `pnpm check` (Biome lint/format + file sizes), `pnpm typecheck`, `just clippy` (Rust linting), `pnpm test`, `pnpm build`, and `just tauri-check` (Rust type checking).
If the project has a stronger pre-push or CI gate than this helper set, run that fuller gate when the review is meant to be PR-ready, but only after confirming it is also non-mutating (or run it from a clean stash). In this repo, targeted tests for the changed area plus the pre-push checks are often the practical follow-up.

Report the results as pass/fail. Any failures are automatically **P0** issues and should appear at the top of the findings list. Do not skip this step even if the user only wants a quick review.
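The "clean stash" option mentioned above can be sketched as a small wrapper. This is a hypothetical helper, not a recipe from this repo; it assumes a git work tree and that the gate command is on PATH:

```shell
# run_gate_clean: stash any in-progress work, run a possibly-mutating
# gate command, discard whatever the gate rewrote, then restore the stash.
run_gate_clean() {
  local stashed=0
  # Only stash if there is something to stash (tracked or staged changes).
  if ! git diff --quiet || ! git diff --cached --quiet; then
    git stash push --include-untracked -m "review-baseline" && stashed=1
  fi
  local rc=0
  "$@" || rc=$?
  # Drop tree changes the gate made (e.g. a write-mode cargo fmt).
  git checkout -- .
  if [ "$stashed" -eq 1 ]; then
    git stash pop -q
  fi
  return "$rc"
}
```

For example, `run_gate_clean just check-everything` would leave the working tree exactly as it was before the gate ran, regardless of what the formatter rewrote.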
@@ -120,7 +135,8 @@ For each file in the list:

1. Run `git diff main...HEAD -- <file>` to get the exact lines that changed
2. Review **only those changed lines** against the Review Checklist — do not flag issues in unchanged code
3. Note the file path and line numbers from the diff output for each issue found
3. For stateful UI or async flow changes, trace the full path end to end: user selection -> local/session state update -> persistence -> backend prepare/set/update call -> failure/rollback path
4. Note the file path and line numbers from the diff output for each issue found
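The per-file loop implied by these steps can be sketched as a small helper (hypothetical name; it only assumes `git` and the three-dot `main...HEAD` form, which diffs against the merge-base with `main`):

```shell
# review_changed_files: list each file changed since the merge-base with
# main, followed by only that file's changed hunks.
review_changed_files() {
  git diff --name-only main...HEAD | while IFS= read -r file; do
    echo "== ${file} =="
    git diff main...HEAD -- "${file}"
  done
}
```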
### Step 2: Categorize Issues
@@ -152,16 +168,17 @@ After reviewing all files, provide:

### Step 3b: Self-Check

Before presenting findings to the user, silently review the issue list two more times:
Before presenting findings to the user, silently review the issue list three times:

1. **Pass 1**: For each issue, ask — is this genuinely a problem, or could it be intentional/acceptable? Remove false positives.
2. **Pass 2**: For each remaining issue, ask — does the recommended fix actually improve the code, or is it a matter of preference?
3. **Pass 3**: For async state/default-resolution issues, ask — can the UI, persisted state, and backend ever disagree after a failure, fallback, or session handoff?

After both passes, tag each surviving issue as one of:
After these passes, tag each surviving issue as one of:
- **[Must Fix]** — clear violation, will likely get flagged in PR review
- **[Your Call]** — valid concern but may be intentional or a reasonable tradeoff (e.g. stepping outside the design system for a specific reason). Present it but let the user decide.

Only present issues that survived both passes.
Only present issues that survived these passes.

### Step 4: Fix Issues
@@ -189,7 +206,7 @@ Once all issues are fixed, display:

**✅ Code review complete! All issues have been addressed.**

Your code is ready to commit and push. Lefthook will run the full CI gate (`just ci`) automatically when you push.
Your code is ready to commit and push. Lefthook and CI will run the repo's configured gates when you push.

Next steps: generate a PR summary that explains the intent of this change, what files were modified and why, and how to verify the changes work.
.github/workflows/build-cli.yml (78 changes, vendored)
@@ -31,36 +31,44 @@ jobs:

fail-fast: false
matrix:
include:
# Linux builds
- os: ubuntu-latest
architecture: x86_64
target-suffix: unknown-linux-gnu
build-on: ubuntu-latest
use-cross: true
cc: gcc-10
variant: standard
- os: ubuntu-latest
architecture: aarch64
target-suffix: unknown-linux-gnu
build-on: ubuntu-latest
use-cross: true
cc: gcc-10
# macOS builds
variant: standard
- os: macos-latest
architecture: x86_64
target-suffix: apple-darwin
build-on: macos-latest
use-cross: true
variant: standard
- os: macos-latest
architecture: aarch64
target-suffix: apple-darwin
build-on: macos-latest
use-cross: true
# Windows builds (only x86_64 supported)
variant: standard
- os: windows
architecture: x86_64
target-suffix: pc-windows-msvc
build-on: windows-latest
use-cross: false
variant: standard
- os: windows
architecture: x86_64
target-suffix: pc-windows-msvc
build-on: windows-latest
use-cross: false
variant: cuda

steps:
- name: Checkout code
@@ -89,7 +97,7 @@ jobs:

if: matrix.os == 'windows'
uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
with:
key: windows-msvc-cli
key: windows-msvc-cli-${{ matrix.variant }}

- name: Build CLI (Linux/macOS)
if: matrix.use-cross
@@ -118,21 +126,54 @@ jobs:

rustup show
rustup target add x86_64-pc-windows-msvc

- name: Install CUDA toolkit (Windows CUDA)
if: ${{ matrix.os == 'windows' && matrix.variant == 'cuda' }}
uses: Jimver/cuda-toolkit@v0.2.35
with:
cuda: '12.9.1'
method: 'local'
log-file-suffix: 'build-cli-windows-cuda.txt'

- name: Set up MSVC developer environment (Windows CUDA)
if: ${{ matrix.os == 'windows' && matrix.variant == 'cuda' }}
uses: ilammy/msvc-dev-cmd@0b201ec74fa43914dc39ae48a89fd1d8cb592756 # v1.13.0
with:
arch: amd64

- name: Verify CUDA toolchain (Windows CUDA)
if: ${{ matrix.os == 'windows' && matrix.variant == 'cuda' }}
shell: pwsh
env:
CUDA_COMPUTE_CAP: "80"
run: |
Write-Output "CUDA_PATH=$env:CUDA_PATH"
Write-Output "CUDA_COMPUTE_CAP=$env:CUDA_COMPUTE_CAP"
where.exe cl
where.exe nvcc
nvcc -V

- name: Build CLI (Windows)
if: matrix.os == 'windows'
shell: bash
shell: pwsh
env:
CUDA_COMPUTE_CAP: ${{ matrix.variant == 'cuda' && '80' || '' }}
run: |
echo "🚀 Building Windows CLI executable..."
cargo build --release --target x86_64-pc-windows-msvc -p goose-cli
Write-Output "Building Windows CLI executable..."
if ("${{ matrix.variant }}" -eq "cuda") {
$cudaRustflagsConfig = 'target.x86_64-pc-windows-msvc.rustflags=["-C","target-feature=+crt-static"]'
cargo build --config $cudaRustflagsConfig --release --target x86_64-pc-windows-msvc -p goose-cli --features cuda
} else {
cargo build --release --target x86_64-pc-windows-msvc -p goose-cli
}

if [ ! -f "./target/x86_64-pc-windows-msvc/release/goose.exe" ]; then
echo "❌ Windows CLI binary not found."
ls -la ./target/x86_64-pc-windows-msvc/release/ || echo "Release directory doesn't exist"
if (-not (Test-Path "./target/x86_64-pc-windows-msvc/release/goose.exe")) {
Write-Error "Windows CLI binary not found."
Get-ChildItem ./target/x86_64-pc-windows-msvc/release/ -ErrorAction SilentlyContinue
exit 1
fi
}

echo "✅ Windows CLI binary found!"
ls -la ./target/x86_64-pc-windows-msvc/release/goose.exe
Write-Output "Windows CLI binary found."
Get-Item ./target/x86_64-pc-windows-msvc/release/goose.exe

- name: Package CLI (Linux/macOS)
if: matrix.use-cross
@@ -157,21 +198,24 @@ jobs:

shell: bash
run: |
export TARGET="${{ matrix.architecture }}-${{ matrix.target-suffix }}"
export VARIANT_SUFFIX=""
if [ "${{ matrix.variant }}" = "cuda" ]; then
VARIANT_SUFFIX="-cuda"
fi

mkdir -p "target/${TARGET}/release/goose-package"

cp "target/${TARGET}/release/goose.exe" "target/${TARGET}/release/goose-package/"

cd "target/${TARGET}/release"
7z a -tzip "goose-${TARGET}.zip" goose-package/
echo "ARTIFACT_ZIP=target/${TARGET}/release/goose-${TARGET}.zip" >> $GITHUB_ENV
7z a -tzip "goose-${TARGET}${VARIANT_SUFFIX}.zip" goose-package/
echo "ARTIFACT_ZIP=target/${TARGET}/release/goose-${TARGET}${VARIANT_SUFFIX}.zip" >> $GITHUB_ENV

- name: Upload CLI artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: goose-${{ matrix.architecture }}-${{ matrix.target-suffix }}
name: goose-${{ matrix.architecture }}-${{ matrix.target-suffix }}${{ matrix.variant == 'cuda' && '-cuda' || '' }}
path: |
${{ env.ARTIFACT_BZ2 }}
${{ env.ARTIFACT_GZ }}
${{ env.ARTIFACT_ZIP }}
.github/workflows/bundle-desktop-windows.yml (95 changes, vendored)
@@ -8,6 +8,11 @@ on:

required: false
type: boolean
default: false
windows_variant:
description: 'Windows artifact variant to build'
required: false
type: string
default: 'standard'
workflow_call:
inputs:
version:
@@ -24,6 +29,11 @@ on:

required: false
type: string
default: ''
windows_variant:
description: 'Windows artifact variant to build'
required: false
type: string
default: 'standard'

# Permissions required for OIDC authentication with Azure Trusted Signing
permissions:
@@ -64,7 +74,7 @@ jobs:

- name: Cache Rust dependencies
uses: Swatinem/rust-cache@v2
with:
key: windows-msvc-desktop
key: windows-msvc-desktop-${{ inputs.windows_variant }}

- name: Setup Rust
shell: bash
@@ -72,21 +82,54 @@ jobs:

rustup show
rustup target add x86_64-pc-windows-msvc

- name: Build Windows executable
shell: bash
- name: Install CUDA toolkit (Windows CUDA)
if: ${{ inputs.windows_variant == 'cuda' }}
uses: Jimver/cuda-toolkit@v0.2.35
with:
cuda: '12.9.1'
method: 'local'
log-file-suffix: 'bundle-desktop-windows-cuda.txt'

- name: Set up MSVC developer environment (Windows CUDA)
if: ${{ inputs.windows_variant == 'cuda' }}
uses: ilammy/msvc-dev-cmd@0b201ec74fa43914dc39ae48a89fd1d8cb592756 # v1.13.0
with:
arch: amd64

- name: Verify CUDA toolchain (Windows CUDA)
if: ${{ inputs.windows_variant == 'cuda' }}
shell: pwsh
env:
CUDA_COMPUTE_CAP: "80"
run: |
echo "🚀 Building Windows executable..."
cargo build --release --target x86_64-pc-windows-msvc
Write-Output "CUDA_PATH=$env:CUDA_PATH"
Write-Output "CUDA_COMPUTE_CAP=$env:CUDA_COMPUTE_CAP"
where.exe cl
where.exe nvcc
nvcc -V

- name: Build Windows executable
shell: pwsh
env:
CUDA_COMPUTE_CAP: ${{ inputs.windows_variant == 'cuda' && '80' || '' }}
run: |
Write-Output "Building Windows executable..."
if ("${{ inputs.windows_variant }}" -eq "cuda") {
$cudaRustflagsConfig = 'target.x86_64-pc-windows-msvc.rustflags=["-C","target-feature=+crt-static"]'
cargo build --config $cudaRustflagsConfig --release --target x86_64-pc-windows-msvc -p goose-server --features cuda
} else {
cargo build --release --target x86_64-pc-windows-msvc -p goose-server
}

# Verify build succeeded
if [ ! -f "./target/x86_64-pc-windows-msvc/release/goosed.exe" ]; then
echo "❌ Windows binary not found."
ls -la ./target/x86_64-pc-windows-msvc/release/ || echo "Release directory doesn't exist"
if (-not (Test-Path "./target/x86_64-pc-windows-msvc/release/goosed.exe")) {
Write-Error "Windows binary not found."
Get-ChildItem ./target/x86_64-pc-windows-msvc/release/ -ErrorAction SilentlyContinue
exit 1
fi
}

echo "✅ Windows binary found!"
ls -la ./target/x86_64-pc-windows-msvc/release/goosed.exe
Write-Output "Windows binary found."
Get-Item ./target/x86_64-pc-windows-msvc/release/goosed.exe

- name: Prepare Windows binary
shell: bash
@@ -155,7 +198,7 @@ jobs:

- name: Upload unsigned distribution
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: windows-unsigned
name: windows-unsigned${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}
path: ui/desktop/dist-windows/

sign-desktop-windows:
@@ -167,9 +210,9 @@ jobs:

steps:
- name: Download unsigned distribution
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: windows-unsigned
name: windows-unsigned${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}
path: dist-windows

- name: Azure login
@@ -206,13 +249,17 @@ jobs:

- name: Create Windows zip package
shell: bash
run: |
7z a -tzip "Goose-win32-x64.zip" dist-windows/
ZIP_NAME="Goose-win32-x64"
if [ "${{ inputs.windows_variant }}" = "cuda" ]; then
ZIP_NAME="${ZIP_NAME}-cuda"
fi
7z a -tzip "${ZIP_NAME}.zip" dist-windows/

- name: Upload signed Windows build
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Goose-win32-x64
path: Goose-win32-x64.zip
name: Goose-win32-x64${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}
path: Goose-win32-x64${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}.zip

# When signing is disabled, package the unsigned build directly
package-desktop-windows:
@@ -223,18 +270,22 @@ jobs:

steps:
- name: Download unsigned distribution
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: windows-unsigned
name: windows-unsigned${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}
path: dist-windows

- name: Create Windows zip package
shell: bash
run: |
7z a -tzip "Goose-win32-x64.zip" dist-windows/
ZIP_NAME="Goose-win32-x64"
if [ "${{ inputs.windows_variant }}" = "cuda" ]; then
ZIP_NAME="${ZIP_NAME}-cuda"
fi
7z a -tzip "${ZIP_NAME}.zip" dist-windows/

- name: Upload Windows build
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Goose-win32-x64
path: Goose-win32-x64.zip
name: Goose-win32-x64${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}
path: Goose-win32-x64${{ inputs.windows_variant == 'cuda' && '-cuda' || '' }}.zip
.github/workflows/bundle-goose2.yml (98 changes, vendored)
@@ -38,6 +38,11 @@ on:

required: false
default: ""
type: string
windows-signing:
description: "Whether to perform Windows signing via Azure Trusted Signing"
required: false
default: false
type: boolean
cli-run-id:
description: >
Run ID of a prior build-cli.yml workflow run to download the goose
@@ -105,7 +110,7 @@ jobs:

# ── Goose CLI: download from prior run OR build from source ──
- name: Download goose CLI from build-cli run
if: inputs.cli-run-id != ''
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: goose-aarch64-apple-darwin
run-id: ${{ inputs.cli-run-id }}
@@ -125,7 +130,7 @@ jobs:

- name: Cache Rust dependencies
if: inputs.cli-run-id == ''
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2
with:
key: goose2-macos-arm64
@@ -175,13 +180,11 @@ jobs:

certificate-password: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}

# ── Tauri bundle ──
- name: Check disk space before bundle
run: df -h

- name: Bundle Goose 2 (pnpm tauri build)
env:
APPLE_SIGNING_IDENTITY: ${{ inputs.signing && 'Developer ID Application' || '' }}
APPLE_ID: ${{ inputs.signing && secrets.APPLE_ID || '' }}
APPLE_ID_PASSWORD: ${{ inputs.signing && secrets.APPLE_ID_PASSWORD || '' }}
APPLE_PASSWORD: ${{ inputs.signing && secrets.APPLE_ID_PASSWORD || '' }}
APPLE_TEAM_ID: ${{ inputs.signing && secrets.APPLE_TEAM_ID || '' }}
working-directory: ui/goose2
run: |
@@ -272,7 +275,7 @@ jobs:

# ── Goose CLI: download from prior run OR build from source ──
- name: Download goose CLI from build-cli run
if: inputs.cli-run-id != ''
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: goose-x86_64-apple-darwin
run-id: ${{ inputs.cli-run-id }}
@@ -291,7 +294,7 @@ jobs:

- name: Cache Rust dependencies
if: inputs.cli-run-id == ''
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2
with:
key: goose2-macos-x86_64
@@ -360,8 +363,9 @@ jobs:

# ── Tauri bundle (cross-compile for Intel) ──
- name: Bundle Goose 2 for Intel
env:
APPLE_SIGNING_IDENTITY: ${{ inputs.signing && 'Developer ID Application' || '' }}
APPLE_ID: ${{ inputs.signing && secrets.APPLE_ID || '' }}
APPLE_ID_PASSWORD: ${{ inputs.signing && secrets.APPLE_ID_PASSWORD || '' }}
APPLE_PASSWORD: ${{ inputs.signing && secrets.APPLE_ID_PASSWORD || '' }}
APPLE_TEAM_ID: ${{ inputs.signing && secrets.APPLE_TEAM_ID || '' }}
working-directory: ui/goose2
run: |
@@ -458,7 +462,7 @@ jobs:

# ── Goose CLI: download from prior run OR build from source ──
- name: Download goose CLI from build-cli run
if: inputs.cli-run-id != ''
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: goose-x86_64-unknown-linux-gnu
run-id: ${{ inputs.cli-run-id }}
@@ -477,7 +481,7 @@ jobs:

- name: Cache Rust dependencies
if: inputs.cli-run-id == ''
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2
with:
key: goose2-linux-x86_64
@@ -564,6 +568,7 @@ jobs:

runs-on: windows-latest
timeout-minutes: 60
permissions:
id-token: write
contents: read
actions: read
steps:
@@ -597,7 +602,7 @@ jobs:

# ── Goose CLI: download from prior run OR build from source ──
- name: Download goose CLI from build-cli run
if: inputs.cli-run-id != ''
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: goose-x86_64-pc-windows-msvc
run-id: ${{ inputs.cli-run-id }}
@@ -621,7 +626,7 @@ jobs:

- name: Cache Rust dependencies
if: inputs.cli-run-id == ''
uses: Swatinem/rust-cache@v2
uses: Swatinem/rust-cache@e18b497796c12c097a38f9edb9d0641fb99eee32 # v2
with:
key: goose2-windows-x86_64
@@ -697,3 +702,70 @@ jobs:

name: Goose2-windows-x64-msi
path: ui/goose2/src-tauri/target/x86_64-pc-windows-msvc/release/bundle/msi/*.msi
if-no-files-found: warn

sign-windows:
name: "Sign Windows installers"
needs: bundle-windows
if: inputs.windows-signing
runs-on: windows-latest
environment: signing
permissions:
id-token: write
contents: read
actions: read
steps:
- name: Download NSIS installer
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: Goose2-windows-x64-nsis
path: unsigned/nsis

- name: Download MSI installer
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: Goose2-windows-x64-msi
path: unsigned/msi

- name: Azure login
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

- name: Sign Windows installers with Azure Trusted Signing
uses: azure/trusted-signing-action@db7a3a6bd3912025c705162fb7475389f5b69ec6 # v1
with:
endpoint: ${{ secrets.AZURE_SIGNING_ENDPOINT }}
trusted-signing-account-name: ${{ secrets.AZURE_SIGNING_ACCOUNT_NAME }}
certificate-profile-name: ${{ secrets.AZURE_CERTIFICATE_PROFILE_NAME }}
files-folder: ${{ github.workspace }}/unsigned
files-folder-filter: exe,msi
files-folder-recurse: true

- name: Verify signed installers
shell: pwsh
run: |
$files = Get-ChildItem -Path unsigned -Recurse -Include *.exe,*.msi
foreach ($file in $files) {
Write-Output "Verifying signature: $($file.FullName)"
$sig = Get-AuthenticodeSignature $file.FullName
if ($sig.Status -ne "Valid") {
throw "Signature invalid for $($file.Name): $($sig.Status)"
}
Write-Output "✅ Signature valid: $($file.Name)"
}

- name: Upload signed NSIS installer
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: Goose2-windows-x64-nsis-signed
path: unsigned/nsis/*.exe
if-no-files-found: error

- name: Upload signed MSI installer
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: Goose2-windows-x64-msi-signed
path: unsigned/msi/*.msi
if-no-files-found: error
.github/workflows/canary.yml (14 changes, vendored)
@@ -111,13 +111,21 @@ jobs:

version: ${{ needs.prepare-version.outputs.version }}
signing: false

bundle-desktop-windows-cuda:
needs: [prepare-version]
uses: ./.github/workflows/bundle-desktop-windows.yml
with:
version: ${{ needs.prepare-version.outputs.version }}
signing: false
windows_variant: cuda

# ------------------------------------
# 7) Create/Update GitHub Release
# ------------------------------------
release:
name: Release
runs-on: ubuntu-latest
needs: [build-cli, install-script, bundle-desktop, bundle-desktop-intel, bundle-desktop-linux, bundle-desktop-windows]
needs: [build-cli, install-script, bundle-desktop, bundle-desktop-intel, bundle-desktop-linux, bundle-desktop-windows, bundle-desktop-windows-cuda]
permissions:
contents: write
id-token: write # Required for Sigstore OIDC signing
@@ -125,7 +133,7 @@ jobs:

steps:
- name: Download all artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
merge-multiple: true
@@ -135,6 +143,7 @@ jobs:

subject-path: |
goose-*.tar.bz2
goose-*.tar.gz
goose-*.zip
Goose*.zip
*.deb
*.rpm
@@ -151,6 +160,7 @@ jobs:

artifacts: |
goose-*.tar.bz2
goose-*.tar.gz
goose-*.zip
Goose*.zip
*.deb
*.rpm
.github/workflows/ci.yml (4 changes, vendored)
@@ -23,7 +23,7 @@ jobs:

uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

- name: Check for file changes
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # pin@v3
uses: dorny/paths-filter@fbd0ab8f3e69293af611ebaee6363fc25e6d187d # pin@v3
id: filter
with:
filters: |
@@ -119,7 +119,7 @@ jobs:

echo "msrv=$msrv" >> "$GITHUB_OUTPUT"
echo "MSRV: $msrv"

- uses: actions-rust-lang/setup-rust-toolchain@150fca883cd4034361b621bd4e6a9d34e5143606 # v1
- uses: actions-rust-lang/setup-rust-toolchain@2b1f5e9b395427c92ee4e3331786ca3c37afe2d7 # v1
with:
toolchain: ${{ steps.msrv.outputs.msrv }}
.github/workflows/dependabot-auto-merge.yml (37 changes, vendored, new file)
@@ -0,0 +1,37 @@

name: Dependabot Auto Merge

on:
  pull_request_target:

permissions:
  contents: write
  pull-requests: write

jobs:
  dependabot:
    runs-on: ubuntu-latest
    if: github.event.pull_request.user.login == 'dependabot[bot]' && github.repository == 'aaif-goose/goose'
    steps:
      - name: Fetch Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@d7267f607e9d3fb96fc2fbe83e0af444713e90b7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}

      - name: Approve patch and minor PRs
        if: |
          steps.metadata.outputs.update-type == 'version-update:semver-patch' ||
          steps.metadata.outputs.update-type == 'version-update:semver-minor'
        run: gh pr review --approve "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Enable auto-merge for patch and minor PRs
        if: |
          steps.metadata.outputs.update-type == 'version-update:semver-patch' ||
          steps.metadata.outputs.update-type == 'version-update:semver-minor'
        run: gh pr merge --auto --merge "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/pr-comment-build-cli.yml (5 changes, vendored)
@@ -122,7 +122,7 @@ jobs:

steps:
- name: Download CLI artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
pattern: goose-*
path: cli-dist

@@ -140,7 +140,8 @@ jobs:

- [📦 Linux (aarch64)](https://nightly.link/${{ github.repository }}/actions/runs/${{ github.run_id }}/goose-aarch64-unknown-linux-gnu.zip)
- [📦 macOS (x86_64)](https://nightly.link/${{ github.repository }}/actions/runs/${{ github.run_id }}/goose-x86_64-apple-darwin.zip)
- [📦 macOS (aarch64)](https://nightly.link/${{ github.repository }}/actions/runs/${{ github.run_id }}/goose-aarch64-apple-darwin.zip)
- [📦 Windows (x86_64)](https://nightly.link/${{ github.repository }}/actions/runs/${{ github.run_id }}/goose-x86_64-pc-windows-gnu.zip)
- [📦 Windows (x86_64)](https://nightly.link/${{ github.repository }}/actions/runs/${{ github.run_id }}/goose-x86_64-pc-windows-msvc.zip)
- [📦 Windows CUDA (x86_64)](https://nightly.link/${{ github.repository }}/actions/runs/${{ github.run_id }}/goose-x86_64-pc-windows-msvc-cuda.zip)

These links are provided by nightly.link and will work even if you're not logged into GitHub.
@ -79,7 +79,7 @@ jobs:
|
|||
|
||||
steps:
|
||||
- name: Download Intel artifact
|
||||
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
|
||||
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
|
||||
with:
|
||||
name: Goose-darwin-x64
|
||||
path: intel-dist
|
||||
|
|
@@ -80,7 +80,7 @@ jobs:
     steps:
       - name: Download Windows artifact
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: Goose-win32-x64
           path: windows-dist
2  .github/workflows/pr-comment-bundle.yml (vendored)

@@ -172,7 +172,7 @@ jobs:
     steps:
       - name: Download ARM64 artifact
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: Goose-darwin-arm64
           path: arm64-dist
34  .github/workflows/pr-smoke-test.yml (vendored)

@@ -36,7 +36,7 @@ jobs:
           fetch-depth: 0

       - name: Check for code changes
-        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # pin@v3
+        uses: dorny/paths-filter@fbd0ab8f3e69293af611ebaee6363fc25e6d187d # pin@v3
         id: filter
         with:
           base: ${{ github.event.before || github.event.pull_request.base.sha }}
@@ -94,7 +94,7 @@ jobs:
           ref: ${{ github.event.inputs.branch || github.ref }}

       - name: Download Binary
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: goose-binary
           path: target/debug
@@ -108,9 +108,13 @@ jobs:
           node-version: '22'

       - name: Install agentic providers
-        run: npm install -g @anthropic-ai/claude-code @openai/codex @google/gemini-cli @zed-industries/claude-agent-acp @zed-industries/codex-acp
+        run: npm install -g @anthropic-ai/claude-code @zed-industries/claude-agent-acp @zed-industries/codex-acp

-      - name: Run Smoke Tests with Provider Script
+      - name: Install Node.js Dependencies
+        run: source ../../bin/activate-hermit && pnpm install --frozen-lockfile
+        working-directory: ui/desktop
+
+      - name: Run Smoke Tests (Normal Mode)
         env:
           ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
@@ -127,12 +131,10 @@ jobs:
           SKIP_BUILD: 1
           SKIP_PROVIDERS: ${{ vars.SKIP_PROVIDERS || '' }}
         run: |
-          # Ensure the HOME directory structure exists
-          mkdir -p $HOME/.local/share/goose/sessions
-          mkdir -p $HOME/.config/goose
-
-          # Run the provider test script (binary already built and downloaded)
-          bash scripts/test_providers.sh
+          source ../../bin/activate-hermit && pnpm run test:integration:providers
+        working-directory: ui/desktop

       - name: Set up Python
         uses: actions/setup-python@v5
@@ -180,7 +182,7 @@ jobs:
           ref: ${{ github.event.inputs.branch || github.ref }}

       - name: Download Binary
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: goose-binary
           path: target/debug
@@ -188,6 +190,10 @@ jobs:
       - name: Make Binary Executable
         run: chmod +x target/debug/goose

+      - name: Install Node.js Dependencies
+        run: source ../../bin/activate-hermit && pnpm install --frozen-lockfile
+        working-directory: ui/desktop
+
       - name: Run Provider Tests (Code Execution Mode)
         env:
           ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -205,7 +211,8 @@ jobs:
         run: |
           mkdir -p $HOME/.local/share/goose/sessions
           mkdir -p $HOME/.config/goose
-          bash scripts/test_providers_code_exec.sh
+          source ../../bin/activate-hermit && pnpm run test:integration:providers-code-exec
+        working-directory: ui/desktop

   compaction-tests:
     name: Compaction Tests
@@ -218,7 +225,7 @@ jobs:
           ref: ${{ github.event.inputs.branch || github.ref }}

       - name: Download Binary
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: goose-binary
           path: target/debug
@@ -258,7 +265,7 @@ jobs:
           ref: ${{ github.event.inputs.branch || github.ref }}

       - name: Download Binary
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: goosed-binary
           path: target/debug
@@ -277,7 +284,8 @@ jobs:
           GOOSE_PROVIDER: anthropic
           GOOSE_MODEL: claude-sonnet-4-5-20250929
           SHELL: /bin/bash
+          SKIP_BUILD: 1
         run: |
           echo 'export PATH=/some/fake/path:$PATH' >> $HOME/.bash_profile
-          source ../../bin/activate-hermit && pnpm run test:integration:debug
+          source ../../bin/activate-hermit && pnpm run test:integration:goosed
         working-directory: ui/desktop
2  .github/workflows/pr-website-preview.yml (vendored)

@@ -51,7 +51,7 @@ jobs:
   cleanup:
     runs-on: ubuntu-latest
     needs: deploy
-    if: github.event.action == 'closed'
+    if: github.event.action == 'closed' && github.event.pull_request.head.repo.full_name == 'aaif-goose/goose'
     permissions:
       contents: write
     steps:
2  .github/workflows/publish-ask-ai-bot.yml (vendored)

@@ -33,7 +33,7 @@ jobs:
       - name: Extract metadata
         id: meta
-        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
+        uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
         with:
           images: ghcr.io/${{ github.repository_owner }}/ask-ai-bot
           tags: |
2  .github/workflows/publish-docker.yml (vendored)

@@ -37,7 +37,7 @@ jobs:
       - name: Extract metadata
         id: meta
-        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
+        uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
         with:
           images: ghcr.io/${{ github.repository_owner }}/goose
           tags: |
2  .github/workflows/publish-npm.yml (vendored)

@@ -139,7 +139,7 @@ jobs:
     environment: npm-production-publishing
     steps:
       - name: Download built packages
-        uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           name: npm-packages
           path: ui/
@@ -21,7 +21,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Harden Runner
-        uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
        with:
          egress-policy: audit
192  .github/workflows/release-goose2.yml (vendored, new file)

@@ -0,0 +1,192 @@
on:
  push:
    tags:
      - "v2.*"
  workflow_dispatch:
    inputs:
      version:
        description: "Version string (e.g. 2.0.0-rc.1). Used when testing from a branch."
        required: true
        type: string
      cli-run-id:
        description: "Run ID of a build-cli workflow to pull goose binaries from (skips CLI build step)"
        required: false
        type: string
        default: ""

name: "Release Goose 2"

permissions:
  id-token: write      # Sigstore OIDC signing + Azure OIDC (Windows signing)
  contents: write      # Creating releases + actions/checkout
  actions: read        # Downloading artifacts across workflow runs
  attestations: write  # SLSA build provenance attestations

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  prepare-version:
    name: Prepare Version
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.set-version.outputs.version }}
    steps:
      - name: Extract version
        id: set-version
        run: |
          if [ -n "${{ inputs.version }}" ]; then
            VERSION="${{ inputs.version }}"
          else
            # Strip the leading "v" from the tag (e.g. v2.0.0 → 2.0.0)
            VERSION="${GITHUB_REF_NAME#v}"
          fi
          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
          echo "Release version: $VERSION"

  build-cli:
    if: inputs.cli-run-id == ''
    needs: [prepare-version]
    uses: ./.github/workflows/build-cli.yml
    with:
      version: ${{ needs.prepare-version.outputs.version }}

  bundle-goose2:
    needs: [prepare-version, build-cli]
    if: ${{ !cancelled() && needs.prepare-version.result == 'success' && (needs.build-cli.result == 'success' || needs.build-cli.result == 'skipped') }}
    uses: ./.github/workflows/bundle-goose2.yml
    permissions:
      id-token: write
      contents: read
      actions: read
    with:
      version: ${{ needs.prepare-version.outputs.version }}
      signing: true
      windows-signing: true
      environment: signing
      cli-run-id: ${{ inputs.cli-run-id != '' && inputs.cli-run-id || github.run_id }}
    secrets: inherit

  install-script:
    name: Upload Install Script
    runs-on: ubuntu-latest
    if: inputs.cli-run-id == ''
    needs: [build-cli]
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
      - uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: download_cli.sh
          path: download_cli.sh

  release:
    name: Release
    runs-on: ubuntu-latest
    needs: [prepare-version, build-cli, install-script, bundle-goose2]
    if: ${{ !cancelled() && needs.bundle-goose2.result == 'success' }}
    permissions:
      contents: write
      id-token: write
      actions: read
      attestations: write
    steps:
      - name: Download CLI artifacts
        if: needs.build-cli.result == 'success'
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          pattern: goose-*
          merge-multiple: true
          path: release

      - name: Download install script
        if: needs.install-script.result == 'success'
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: download_cli.sh
          path: release

      - name: Download macOS ARM64
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-darwin-arm64
          path: release

      - name: Download macOS Intel
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-darwin-x64
          path: release

      - name: Download Linux .deb
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-linux-x64-deb
          path: release
        continue-on-error: true

      - name: Download Linux AppImage
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-linux-x64-appimage
          path: release
        continue-on-error: true

      - name: Download Linux RPM
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-linux-x64-rpm
          path: release
        continue-on-error: true

      - name: Download signed Windows NSIS installer
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-windows-x64-nsis-signed
          path: release

      - name: Download signed Windows MSI installer
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          name: Goose2-windows-x64-msi-signed
          path: release

      - name: List downloaded artifacts
        run: |
          echo "=== All release artifacts ==="
          find release -type f | sort

      - name: Attest build provenance
        uses: actions/attest-build-provenance@977bb373ede98d70efdf65b84cb5f73e068dcc2a # v3
        with:
          subject-path: |
            release/goose-*.tar.bz2
            release/goose-*.tar.gz
            release/goose-*.zip
            release/*.dmg
            release/*.exe
            release/*.msi
            release/*.deb
            release/*.rpm
            release/*.AppImage
            release/download_cli.sh

      # Create/update the versioned pre-release (e.g. v2.0.0)
      - name: Release versioned
        uses: ncipollo/release-action@339a81892b84b4eeb0f6e744e4574d79d0d9b8dd # v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          prerelease: true
          artifacts: |
            release/goose-*.tar.bz2
            release/goose-*.tar.gz
            release/goose-*.zip
            release/*.dmg
            release/*.exe
            release/*.msi
            release/*.deb
            release/*.rpm
            release/*.AppImage
            release/download_cli.sh
          allowUpdates: true
          omitBody: true
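The "Extract version" step above strips the tag's leading `v` with POSIX parameter expansion. A minimal standalone sketch of that expansion, using a hypothetical tag value:

```shell
# Hypothetical tag name, as GITHUB_REF_NAME would be set on a "v2.*" tag push.
GITHUB_REF_NAME="v2.0.0-rc.1"
# "${var#v}" removes the shortest leading match of "v", the same trick the workflow uses.
VERSION="${GITHUB_REF_NAME#v}"
echo "$VERSION"
# → 2.0.0-rc.1
```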
15  .github/workflows/release.yml (vendored)

@@ -85,20 +85,31 @@ jobs:
       signing: true
     secrets: inherit

+  bundle-desktop-windows-cuda:
+    uses: ./.github/workflows/bundle-desktop-windows.yml
+    permissions:
+      id-token: write
+      contents: read
+      actions: read
+    with:
+      signing: true
+      windows_variant: cuda
+    secrets: inherit
+
   # ------------------------------------
   # 7) Create/Update GitHub Release
   # ------------------------------------
   release:
     name: Release
     runs-on: ubuntu-latest
-    needs: [build-cli, install-script, bundle-desktop, bundle-desktop-intel, bundle-desktop-linux, bundle-desktop-windows]
+    needs: [build-cli, install-script, bundle-desktop, bundle-desktop-intel, bundle-desktop-linux, bundle-desktop-windows, bundle-desktop-windows-cuda]
     permissions:
       contents: write
       id-token: write      # Required for Sigstore OIDC signing
       attestations: write  # Required for SLSA build provenance attestations
     steps:
      - name: Download all artifacts
-        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
           merge-multiple: true
3  .github/workflows/stale.yml (vendored)

@@ -75,9 +75,6 @@ jobs:
          # Skip PRs with these labels (comma-separated)
          exempt-pr-labels: 'keep-open,wip,work-in-progress,security,pinned,dependencies'

          # Skip draft PRs (they're typically work in progress)
          exempt-draft-pr: true

          # === ISSUE CONFIGURATION (DISABLED) ===
          # We only want to manage PRs, not issues
          days-before-issue-stale: -1
@@ -16,8 +16,13 @@ if git diff --cached --no-renames --name-only | grep -q '^ui/goose2/'; then
   REPO_ROOT="$(pwd)"
   echo "Running goose2 pre-commit checks..."

-  # Auto-format only staged files and re-stage them
-  STAGED_FILES=$(git diff --cached --no-renames --diff-filter=ACMR --name-only | grep '^ui/goose2/' | sed 's|^ui/goose2/||' || true)
+  # Auto-format only staged files that biome can process, then re-stage them.
+  # Exclude justfile and .swift files — biome doesn't understand these formats
+  # and would fail with "no files were processed" when only such files are staged.
+  STAGED_FILES=$(git diff --cached --no-renames --diff-filter=ACMR --name-only \
+    | grep '^ui/goose2/' \
+    | grep -v -E '(^ui/goose2/justfile$|\.swift$)' \
+    | sed 's|^ui/goose2/||' || true)
   if [ -n "$STAGED_FILES" ]; then
     cd ui/goose2
     echo "$STAGED_FILES" | xargs npx biome format --write
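The filter pipeline the hook builds can be exercised on its own. A sketch with hypothetical staged paths (the file names here are invented for illustration):

```shell
# Hypothetical staged paths; only in-scope files biome can format should survive.
printf '%s\n' \
  'ui/goose2/src/app.ts' \
  'ui/goose2/justfile' \
  'ui/goose2/native/App.swift' \
  'docs/readme.md' |
  grep '^ui/goose2/' |
  grep -v -E '(^ui/goose2/justfile$|\.swift$)' |
  sed 's|^ui/goose2/||'
# → src/app.ts
```

The justfile and Swift file are dropped by the exclusion pattern, and the out-of-tree path never matches the first grep.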
@@ -118,7 +118,8 @@ Ink-TrailingMargin: Don't apply `marginBottom` to the last item in a list — it
## Never

Never: Edit ui/desktop/openapi.json manually
-Never: Edit Cargo.toml use cargo add
+Cargo.toml: For human-authored dependency changes, use `cargo add` instead of manually editing dependency entries unless there is a specific reason not to.
+Cargo.toml: Automated dependency bump PRs are exempt; when manual edits are necessary, keep `Cargo.lock` consistent.
Never: Skip cargo fmt
Never: Merge without running clippy
Never: Comment self-evident operations (`// Initialize`, `// Return result`), getters/setters, constructors, or standard Rust idioms
@@ -9,7 +9,7 @@ This guide covers building the goose Desktop application from source on various
 **Debian/Ubuntu:**
 ```bash
 sudo apt update
-sudo apt install -y dpkg fakeroot build-essential libxcb1-dev libxcb-util-dev protobuf-compiler
+sudo apt install -y dpkg fakeroot build-essential clang libxcb1-dev libxcb-util-dev protobuf-compiler
 ```

 **Arch/Manjaro:**
@@ -44,6 +44,7 @@ pkg install cmake protobuf clang build-essential
 - **Rust**: Install via [rustup](https://rustup.rs/)
 - **Node.js**: Version 22.9.0 or later (use [nvm](https://github.com/nvm-sh/nvm) for version management)
+- **pnpm**: Version 10 or later (managed via Hermit, or install globally)
 - **just**: Install via `cargo install just` after Rust is installed. More [info](https://github.com/casey/just#packages)

 ## Build Process
@@ -53,11 +54,27 @@
 git clone https://github.com/aaif-goose/goose.git
 cd goose
 ```

-### 2. Build the Rust Backend
+### 2. Build

+Build Goose CLI:
+
+```bash
+cargo build --release -p goose-cli
+```
+
+Build Goose Server:
+
+```bash
+cargo build --release -p goose-server
+```
+
+This command should give you a list of possible packages in the workspace:
+
+```bash
+cargo test -p
+```

 ### 3. Prepare the Desktop Application
 ```bash
 cd ui/desktop
506  Cargo.lock (generated; diff suppressed because it is too large)

10  Cargo.toml
@@ -9,7 +9,7 @@ resolver = "2"
 [workspace.package]
 edition = "2021"
-version = "1.31.0"
+version = "1.32.0"
 rust-version = "1.91.1"
 authors = ["AAIF <ai-oss-tools@block.xyz>"]
 license = "Apache-2.0"
@@ -65,12 +65,12 @@ tracing-appender = "0.2"
 tracing-subscriber = "0.3"
 urlencoding = "2.1"
 utoipa = "4.1"
-uuid = { version = "1.11", features = ["v4"] }
-webbrowser = "1.0"
+uuid = { version = "1.23", features = ["v4"] }
+webbrowser = "1.2"
 which = "8.0.0"
 winapi = { version = "0.3", features = ["wincred"] }
 wiremock = "0.6"
-zip = { version = "^8.0", default-features = false, features = ["deflate"] }
+zip = { version = "^8.6", default-features = false, features = ["deflate"] }
 serial_test = "3.2.0"
 sha2 = "0.10"
 shell-words = "1.1.1"
@@ -84,7 +84,7 @@ opentelemetry-stdout = { version = "0.31", features = ["trace", "metrics", "logs
 tracing-futures = { version = "0.2", features = ["futures-03"] }
 tracing-opentelemetry = "0.32"

-rayon = "1.10"
+rayon = "1.12"
 tree-sitter = "0.26"
 tree-sitter-go = "0.25"
 tree-sitter-java = "0.23"
@@ -17,7 +17,7 @@ Make a copy of this document for each version and check off as steps are verified

 ### Provider Testing

-- [ ] Run `./scripts/test_providers.sh` locally from the release branch and verify all providers/models work
+- [ ] Run `cd ui/desktop && pnpm run test:integration:providers` locally from the release branch and verify all providers/models work
 - [ ] Launch goose, click reset providers, choose databricks and a model

 ### Starting Conversations
@@ -55,14 +55,14 @@ bzip2 = "0.5"
 webbrowser = { workspace = true }
 indicatif = "0.18.1"
 tokio-util = { workspace = true, features = ["compat", "rt"] }
-anstream = "0.6.18"
-open = "5.3.2"
+anstream = "1.0.0"
+open = "5.3.4"
 url = { workspace = true }
 urlencoding = { workspace = true }
-clap_complete = "4.5.62"
+clap_complete = "4.6.2"
 comfy-table = "7.2.2"
 sha2 = { workspace = true }
-sigstore-verify = { version = "0.6", default-features = false }
+sigstore-verify = { version = "=0.6.5", default-features = false }
 axum.workspace = true

 [target.'cfg(target_os = "windows")'.dependencies]
@@ -709,6 +709,8 @@ enum Command {
         /// Show verbose information including current configuration
         #[arg(short, long, help = "Show verbose information including config.yaml")]
         verbose: bool,
+        #[arg(long, help = "Test provider connection and show status")]
+        check: bool,
     },

     #[command(about = "Check that your Goose setup is working")]
@@ -1765,7 +1767,7 @@ pub async fn cli() -> anyhow::Result<()> {
         }
         Some(Command::Configure {}) => handle_configure().await,
         Some(Command::Doctor {}) => crate::commands::doctor::handle_doctor().await,
-        Some(Command::Info { verbose }) => handle_info(verbose),
+        Some(Command::Info { verbose, check }) => handle_info(verbose, check).await,
         Some(Command::Mcp { server }) => handle_mcp_command(server).await,
         Some(Command::Acp { builtins }) => goose::acp::server::run(builtins).await,
         Some(Command::Serve {
@@ -1,9 +1,12 @@
-use anyhow::Result;
+use anyhow::{anyhow, Result};
 use console::style;
 use goose::config::paths::Paths;
 use goose::config::Config;
 use goose::conversation::message::Message;
 use goose::providers::errors::ProviderError;
 use goose::session::session_manager::{DB_NAME, SESSIONS_FOLDER};
 use serde_yaml;
 use std::time::Duration;

 fn print_aligned(label: &str, value: &str, width: usize) {
     println!("  {:<width$} {}", label, value, width = width);
@@ -32,7 +35,74 @@ fn check_path_status(path: &Path) -> String {
     }
 }

-pub fn handle_info(verbose: bool) -> Result<()> {
+struct ProviderCheckSuccess {
+    provider: String,
+    model: String,
+    elapsed: Duration,
+}
+
+enum ProviderCheckError {
+    NotConfigured {
+        label: &'static str,
+        error: String,
+    },
+    InvalidModel(String),
+    ProviderCreate {
+        error: String,
+        show_api_key_hint: bool,
+    },
+    ProviderRequest(ProviderError),
+}
+
+async fn check_provider(
+    config: &Config,
+) -> std::result::Result<ProviderCheckSuccess, ProviderCheckError> {
+    let (provider, model) = match (config.get_goose_provider(), config.get_goose_model()) {
+        (Ok(provider), Ok(model)) => (provider, model),
+        (Err(e), _) => {
+            return Err(ProviderCheckError::NotConfigured {
+                label: "Provider:",
+                error: e.to_string(),
+            });
+        }
+        (_, Err(e)) => {
+            return Err(ProviderCheckError::NotConfigured {
+                label: "Model:",
+                error: e.to_string(),
+            });
+        }
+    };
+
+    let model_config = goose::model::ModelConfig::new(&model)
+        .map_err(|e| ProviderCheckError::InvalidModel(e.to_string()))?
+        .with_canonical_limits(&provider);
+
+    let provider_client = goose::providers::create(&provider, model_config, Vec::new())
+        .await
+        .map_err(|e| {
+            let error = e.to_string();
+            ProviderCheckError::ProviderCreate {
+                show_api_key_hint: error.contains("not found") || error.contains("API_KEY"),
+                error,
+            }
+        })?;
+
+    let test_msg = Message::user().with_text("Say 'ok'");
+    let model_config = provider_client.get_model_config();
+    let start = std::time::Instant::now();
+    provider_client
+        .complete(&model_config, "check", "", &[test_msg], &[])
+        .await
+        .map_err(ProviderCheckError::ProviderRequest)?;
+
+    Ok(ProviderCheckSuccess {
+        provider,
+        model,
+        elapsed: start.elapsed(),
+    })
+}
+
+pub async fn handle_info(verbose: bool, check: bool) -> Result<()> {
     let logs_dir = Paths::in_state_dir("logs");
     let sessions_dir = Paths::in_data_dir(SESSIONS_FOLDER);
     let sessions_db = sessions_dir.join(DB_NAME);
@@ -90,5 +160,115 @@ pub fn handle_info(verbose: bool) -> Result<()> {
         }
     }

+    if check {
+        println!("\n{}", style("Provider Check:").cyan().bold());
+
+        let result = check_provider(config).await;
+        match &result {
+            Ok(success) => {
+                print_aligned("Provider:", &success.provider, label_padding);
+                print_aligned("Model:", &success.model, label_padding);
+                print_aligned("Auth:", &style("ok").green().to_string(), label_padding);
+                print_aligned(
+                    "Connection:",
+                    &format!(
+                        "{} (verified in {:.1}s)",
+                        style("ok").green(),
+                        success.elapsed.as_secs_f64()
+                    ),
+                    label_padding,
+                );
+            }
+            Err(ProviderCheckError::NotConfigured { label, error }) => {
+                print_aligned(
+                    label,
+                    &format!("{} {}", style("not configured:").red(), error),
+                    label_padding,
+                );
+                print_aligned(
+                    "Hint:",
+                    &format!("Run '{}'", style("goose configure").cyan()),
+                    label_padding,
+                );
+            }
+            Err(ProviderCheckError::InvalidModel(error)) => {
+                print_aligned(
+                    "Model:",
+                    &format!("{} {}", style("invalid:").red(), error),
+                    label_padding,
+                );
+            }
+            Err(ProviderCheckError::ProviderCreate {
+                error,
+                show_api_key_hint,
+            }) => {
+                // Split auth failures (missing/invalid credential) from provider
+                // construction failures (unknown provider, malformed provider
+                // config). Labeling the latter as "Auth: FAILED" misdirects
+                // troubleshooting toward rotating API keys.
+                if *show_api_key_hint {
+                    print_aligned(
+                        "Auth:",
+                        &format!("{} {}", style("FAILED").red().bold(), error),
+                        label_padding,
+                    );
+                    print_aligned(
+                        "Hint:",
+                        &format!(
+                            "Set the API key in your environment or run '{}'",
+                            style("goose configure").cyan()
+                        ),
+                        label_padding,
+                    );
+                } else {
+                    print_aligned(
+                        "Provider:",
+                        &format!("{} {}", style("FAILED").red().bold(), error),
+                        label_padding,
+                    );
+                    print_aligned(
+                        "Hint:",
+                        &format!(
+                            "Check the provider name and config, or run '{}'",
+                            style("goose configure").cyan()
+                        ),
+                        label_padding,
+                    );
+                }
+            }
+            Err(ProviderCheckError::ProviderRequest(error)) => match error {
+                ProviderError::Authentication(_) => {
+                    print_aligned(
+                        "Auth:",
+                        &format!("{} {}", style("FAILED").red().bold(), error),
+                        label_padding,
+                    );
+                    print_aligned(
+                        "Hint:",
+                        &format!(
+                            "Check your API key or run '{}'",
+                            style("goose configure").cyan()
+                        ),
+                        label_padding,
+                    );
+                }
+                _ => {
+                    print_aligned(
+                        "Check:",
+                        &format!("{} {}", style("FAILED").red().bold(), error),
+                        label_padding,
+                    );
+                }
+            },
+        }
+
+        // Propagate non-zero exit status so automation (CI scripts, install
+        // checks, health probes) can rely on `goose info --check` as a
+        // pre-flight verifier.
+        if result.is_err() {
+            return Err(anyhow!("provider check failed"));
+        }
+    }
+
     Ok(())
 }
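Because the command exits non-zero on any check failure, scripts can gate on it directly. A sketch of that gating pattern, using a hypothetical stand-in function in place of the real `goose info --check` binary so the example is self-contained:

```shell
# `provider_check` is a hypothetical stand-in for `goose info --check`,
# hard-wired to fail here to show the failure branch.
provider_check() { return 1; }

if provider_check; then
  echo "provider ready"
else
  echo "provider check failed"
fi
# → provider check failed
```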
@@ -1,6 +1,6 @@
 use anyhow::{bail, Context, Result};
 use sha2::{Digest, Sha256};
-use sigstore_verify::trust_root::TrustedRoot;
+use sigstore_verify::trust_root::{TrustedRoot, SIGSTORE_PRODUCTION_TRUSTED_ROOT};
 use sigstore_verify::types::{Bundle, Sha256Hash};
 use sigstore_verify::VerificationPolicy;
 use std::env;
@@ -26,7 +26,11 @@ fn asset_name() -> &'static str {
     {
         "goose-aarch64-unknown-linux-gnu.tar.bz2"
     }
-    #[cfg(all(target_os = "windows", target_arch = "x86_64"))]
+    #[cfg(all(target_os = "windows", target_arch = "x86_64", feature = "cuda"))]
     {
         "goose-x86_64-pc-windows-msvc-cuda.zip"
     }
+    #[cfg(all(target_os = "windows", target_arch = "x86_64", not(feature = "cuda")))]
+    {
+        "goose-x86_64-pc-windows-msvc.zip"
+    }
@@ -165,7 +169,8 @@ async fn verify_provenance(archive_data: &[u8], tag: &str) -> Result<bool> {
         }
     };

-    let trusted_root = TrustedRoot::production().context("Failed to load Sigstore trusted root")?;
+    let trusted_root = TrustedRoot::from_json(SIGSTORE_PRODUCTION_TRUSTED_ROOT)
+        .context("Failed to load Sigstore trusted root")?;
     let policy = VerificationPolicy::with_issuer(GITHUB_ACTIONS_ISSUER);
     let artifact_digest =
         Sha256Hash::from_hex(&digest).context("Failed to parse artifact digest")?;
@@ -123,7 +123,7 @@ impl GooseCompleter {

     /// Complete skill names for the /skills command
     fn complete_skill_names(&self, line: &str) -> Result<(usize, Vec<Pair>)> {
-        use goose::agents::platform_extensions::skills::list_installed_skills;
+        use goose::skills::list_installed_skills;

         let cwd = std::env::current_dir().unwrap_or_default();
         let skills = list_installed_skills(Some(&cwd));
@@ -907,7 +907,7 @@ impl CliSession {

     async fn handle_list_skills(&mut self) -> Result<()> {
         use comfy_table::{presets, Cell, ContentArrangement, Table};
-        use goose::agents::platform_extensions::skills::list_installed_skills;
+        use goose::skills::list_installed_skills;
         let cwd = std::env::current_dir().unwrap_or_default();
         let skills = list_installed_skills(Some(&cwd));
@@ -34,8 +34,9 @@ etcetera = { workspace = true }
 tempfile = { workspace = true }
 include_dir = { workspace = true }
 once_cell = { workspace = true }
-lopdf = "0.36.0"
-docx-rs = "0.4.7"
+lopdf = "0.40.0"
+docx-rs = "0.4.20"
 image = { version = "0.24.9", features = ["jpeg"] }
 umya-spreadsheet = "2.2.3"
 shell-words = { workspace = true }
+process-wrap = { version = "9.1.0", features = ["std"] }
@@ -40,6 +40,7 @@ impl SubprocessExt for std::process::Command {
 /// same fix available to all MCP extensions in goose-mcp.
 #[cfg(not(windows))]
 fn resolve_login_shell_path() -> Option<String> {
+    use process_wrap::std::{CommandWrap, ProcessSession};
     use std::path::PathBuf;
     use std::process::Stdio;
@@ -56,26 +57,31 @@ fn resolve_login_shell_path() -> Option<String> {
         }
     });

-    std::process::Command::new(&shell)
+    let mut cmd = CommandWrap::from(std::process::Command::new(&shell));
+    cmd.command_mut()
         .args(["-l", "-i", "-c", "echo $PATH"])
         .stdin(Stdio::null())
-        .stderr(Stdio::null())
-        .output()
-        .ok()
-        .and_then(|output| {
-            if output.status.success() {
-                // Take the last non-empty line — interactive shells may emit
-                // extra output from profile scripts before our echo.
-                String::from_utf8_lossy(&output.stdout)
-                    .lines()
-                    .rev()
-                    .find(|line| !line.trim().is_empty())
-                    .map(|line| line.trim().to_string())
-                    .filter(|path| !path.is_empty())
-            } else {
-                None
-            }
-        })
+        .stdout(Stdio::piped())
+        .stderr(Stdio::null());
+
+    // Spawn in a new session so that interactive shell job-control setup
+    // cannot steal the terminal foreground from the parent goose process.
+    cmd.wrap(ProcessSession);
+
+    let child = cmd.spawn().ok()?;
+    let output = child.wait_with_output().ok()?;
+    if !output.status.success() {
+        return None;
+    }
+
+    // Take the last non-empty line — interactive shells may emit
+    // extra output from profile scripts before our echo.
+    String::from_utf8_lossy(&output.stdout)
+        .lines()
+        .rev()
+        .find(|line| !line.trim().is_empty())
+        .map(|line| line.trim().to_string())
+        .filter(|path| !path.is_empty())
 }

 /// Returns the user's full login shell PATH, resolved once and cached.
@@ -99,11 +99,43 @@ pub struct GetExtensionsRequest {}
 /// List configured extensions and any warnings.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcResponse)]
 pub struct GetExtensionsResponse {
-    /// Array of ExtensionEntry objects with `enabled` flag and config details.
+    /// Array of ExtensionEntry objects with `enabled` flag, `configKey`, and flattened config details.
     pub extensions: Vec<serde_json::Value>,
     pub warnings: Vec<String>,
 }

+/// Persist a new extension to the user's global goose config.
+#[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
+#[request(method = "_goose/config/extensions/add", response = EmptyResponse)]
+#[serde(rename_all = "camelCase")]
+pub struct AddConfigExtensionRequest {
+    pub name: String,
+    /// Extension configuration. Must be a JSON object matching one of the
+    /// `ExtensionConfig` variants (e.g. `stdio`, `streamable_http`, `builtin`).
+    /// `name` and `enabled` are injected server-side.
+    #[serde(default)]
+    pub extension_config: serde_json::Value,
+    #[serde(default)]
+    pub enabled: bool,
+}
+
+/// Remove a persisted extension from the user's global goose config.
+#[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
+#[request(method = "_goose/config/extensions/remove", response = EmptyResponse)]
+#[serde(rename_all = "camelCase")]
+pub struct RemoveConfigExtensionRequest {
+    pub config_key: String,
+}
+
+/// Toggle the `enabled` flag for a persisted extension in the user's global goose config.
+#[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
+#[request(method = "_goose/config/extensions/toggle", response = EmptyResponse)]
+#[serde(rename_all = "camelCase")]
+pub struct ToggleConfigExtensionRequest {
+    pub config_key: String,
+    pub enabled: bool,
+}
+
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/session/extensions", response = GetSessionExtensionsResponse)]
 #[serde(rename_all = "camelCase")]
@@ -190,6 +222,15 @@ pub struct UpdateSessionProjectRequest {
     pub project_id: Option<String>,
 }

+/// Rename a session.
+#[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
+#[request(method = "_goose/session/rename", response = EmptyResponse)]
+#[serde(rename_all = "camelCase")]
+pub struct RenameSessionRequest {
+    pub session_id: String,
+    pub title: String,
+}
+
 /// Archive a session (soft delete).
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/session/archive", response = EmptyResponse)]
@@ -264,16 +305,35 @@ pub struct ProviderConfigKey {
 }

 /// The type of source entity.
-#[derive(Debug, Default, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
+#[derive(
+    Debug, Default, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, JsonSchema,
+)]
 #[serde(rename_all = "camelCase")]
 pub enum SourceType {
     #[default]
     Skill,
+    BuiltinSkill,
     Recipe,
     Subrecipe,
     Agent,
+    Project,
 }

-/// A source — a user-editable entity backed by an on-disk directory. Sources
-/// may be either `global` (shared across all projects) or project-specific.
+impl std::fmt::Display for SourceType {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            SourceType::Skill => write!(f, "skill"),
+            SourceType::BuiltinSkill => write!(f, "builtin skill"),
+            SourceType::Recipe => write!(f, "recipe"),
+            SourceType::Subrecipe => write!(f, "subrecipe"),
+            SourceType::Agent => write!(f, "agent"),
+            SourceType::Project => write!(f, "project"),
+        }
+    }
+}
+
+/// A source discovered by Goose and backed by an on-disk path. Sources may be
+/// either `global` (shared across all projects) or project-specific.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema)]
 #[serde(rename_all = "camelCase")]
 pub struct SourceEntry {
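The `Display` impl added above turns each variant into a human-readable label that later feeds into `format!`-built text; a reduced two-variant sketch of the same pattern:

```rust
use std::fmt;

// Reduced sketch of the SourceType → Display pattern above (two variants only).
#[derive(Debug, Clone, Copy)]
enum SourceType {
    Skill,
    BuiltinSkill,
}

impl fmt::Display for SourceType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SourceType::Skill => write!(f, "skill"),
            SourceType::BuiltinSkill => write!(f, "builtin skill"),
        }
    }
}

// Once Display exists, the type can be interpolated directly.
fn label(t: SourceType) -> String {
    format!("{t}")
}
```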
@@ -282,18 +342,35 @@ pub struct SourceEntry {
     pub name: String,
     pub description: String,
     pub content: String,
-    /// Absolute path to the source's directory on disk.
+    /// Absolute path to the source on disk. A directory for skills, a file for
+    /// recipes and agents.
     pub directory: String,
     /// True when the source lives in the user's global sources directory; false
     /// when it lives inside a specific project.
     pub global: bool,
     /// Paths (absolute) of additional files that live alongside the source.
     /// Only skills currently populate this; empty for other source types.
     #[serde(default, skip_serializing_if = "Vec::is_empty")]
     pub supporting_files: Vec<String>,
+    /// Arbitrary key/value pairs for type-specific metadata (e.g. icon, color,
+    /// preferredProvider for projects). Stored in the frontmatter.
+    #[serde(default, skip_serializing_if = "std::collections::HashMap::is_empty")]
+    pub properties: std::collections::HashMap<String, serde_json::Value>,
 }

-/// Create a new source (global or project-scoped).
+impl SourceEntry {
+    /// Render this source as a markdown block suitable for injecting into an
+    /// LLM context. Used by the skills and summon runtimes when loading a
+    /// source into the current conversation.
+    pub fn to_load_text(&self) -> String {
+        format!(
+            "## {} ({})\n\n{}\n\n### Content\n\n{}",
+            self.name, self.source_type, self.description, self.content
+        )
+    }
+}
+
+/// Create a new source in an explicit target scope (global or project-scoped).
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/sources/create", response = CreateSourceResponse)]
 #[serde(rename_all = "camelCase")]

@@ -308,8 +385,7 @@ pub struct CreateSourceRequest {
     #[serde(default, skip_serializing_if = "Option::is_none")]
     pub project_dir: Option<String>,
     /// Project source ID. When set with `global: false`, the backend resolves
-    /// the project's first working directory automatically. Takes precedence
-    /// over `project_dir`.
+    /// the project's first working directory automatically.
     #[serde(default, skip_serializing_if = "Option::is_none")]
     pub project_id: Option<String>,
     /// Arbitrary key/value metadata.

@@ -323,8 +399,11 @@ pub struct CreateSourceResponse {
     pub source: SourceEntry,
 }

-/// List sources. If `type` is omitted, sources of all known types are returned.
-/// Both global and project-scoped sources are included when `project_dir` is set.
+/// List discovered sources.
+///
+/// Today this endpoint only returns skills. If `type` is omitted, it defaults
+/// to listing skill sources. Both global and project-scoped skills are included
+/// when `project_dir` is set.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/sources/list", response = ListSourcesResponse)]
 #[serde(rename_all = "camelCase")]

@@ -334,7 +413,7 @@ pub struct ListSourcesRequest {
     #[serde(default, skip_serializing_if = "Option::is_none")]
     pub project_dir: Option<String>,
     /// When true, also scan the working directories of all known projects for
-    /// project-scoped sources (e.g. skills stored under `{workingDir}/.agents/skills/`).
+    /// project-scoped sources.
     #[serde(default)]
     pub include_project_sources: bool,
 }

@@ -345,22 +424,17 @@ pub struct ListSourcesResponse {
     pub sources: Vec<SourceEntry>,
 }

-/// Update an existing source's description and content.
+/// Update an existing source's name, description, and content by absolute path.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/sources/update", response = UpdateSourceResponse)]
 #[serde(rename_all = "camelCase")]
 pub struct UpdateSourceRequest {
     #[serde(rename = "type")]
     pub source_type: SourceType,
+    pub path: String,
     pub name: String,
     pub description: String,
     pub content: String,
-    pub global: bool,
-    #[serde(default, skip_serializing_if = "Option::is_none")]
-    pub project_dir: Option<String>,
+    /// Arbitrary key/value metadata. Replaces all existing properties.
+    #[serde(default, skip_serializing_if = "std::collections::HashMap::is_empty")]
+    pub properties: std::collections::HashMap<String, serde_json::Value>,
 }

 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcResponse)]

@@ -369,30 +443,24 @@ pub struct UpdateSourceResponse {
     pub source: SourceEntry,
 }

-/// Delete a source and its on-disk directory.
+/// Delete a source and its on-disk directory by absolute path.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/sources/delete", response = EmptyResponse)]
 #[serde(rename_all = "camelCase")]
 pub struct DeleteSourceRequest {
     #[serde(rename = "type")]
     pub source_type: SourceType,
-    pub name: String,
-    pub global: bool,
-    #[serde(default, skip_serializing_if = "Option::is_none")]
-    pub project_dir: Option<String>,
+    pub path: String,
 }

-/// Export a source as a portable JSON payload.
+/// Export a source at an absolute path as a portable JSON payload.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/sources/export", response = ExportSourceResponse)]
 #[serde(rename_all = "camelCase")]
 pub struct ExportSourceRequest {
     #[serde(rename = "type")]
     pub source_type: SourceType,
-    pub name: String,
-    pub global: bool,
-    #[serde(default, skip_serializing_if = "Option::is_none")]
-    pub project_dir: Option<String>,
+    pub path: String,
 }

 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcResponse)]

@@ -403,8 +471,8 @@ pub struct ExportSourceResponse {
 }

 /// Import a source from a JSON export payload produced by `_goose/sources/export`.
-/// The imported source is written under the given scope; on name collisions a
-/// `-imported` suffix is appended.
+/// The imported source is written into the explicit target scope; on name
+/// collisions a `-imported` suffix is appended.
 #[derive(Debug, Default, Clone, Serialize, Deserialize, JsonSchema, JsonRpcRequest)]
 #[request(method = "_goose/sources/import", response = ImportSourcesResponse)]
 #[serde(rename_all = "camelCase")]

@@ -73,14 +73,14 @@ socket2 = "0.6.1"
 fs2 = { workspace = true }
 rustls = { version = "0.23", features = ["aws_lc_rs"], optional = true }
 uuid = { workspace = true }
-rcgen = "0.13"
+rcgen = "0.14"
 axum-server = { version = "0.8.0" }
-aws-lc-rs = { version = "1.16.0", optional = true }
+aws-lc-rs = { version = "1.16.3", optional = true }
 openssl = { version = "0.10", optional = true }
 pem = "3.0.6"

 [target.'cfg(windows)'.dependencies]
-winreg = { version = "0.55.0" }
+winreg = { version = "0.56.0" }

 [[bin]]
 name = "goosed"

@@ -9,6 +9,10 @@ use goose_server::tls::setup_tls;
 use tower_http::cors::{Any, CorsLayer};
 use tracing::info;

+fn boot_marker(message: &str) {
+    eprintln!("GOOSED_BOOT: {message}");
+}
+
 #[cfg(unix)]
 async fn shutdown_signal() {
     use tokio::signal::unix::{signal, SignalKind};

@@ -35,6 +39,7 @@ pub async fn run() -> Result<()> {
     #[cfg(feature = "rustls-tls")]
     let _ = rustls::crypto::ring::default_provider().install_default();

+    boot_marker("main entered");
     crate::logging::setup_logging(Some("goosed"))?;

     let settings = configuration::Settings::new()?;

@@ -42,6 +47,7 @@ pub async fn run() -> Result<()> {
     let secret_key = std::env::var("GOOSE_SERVER__SECRET_KEY")
         .unwrap_or_else(|_| hex::encode(rand::random::<[u8; 32]>()));

+    boot_marker("appstate init start");
     let app_state = state::AppState::new(settings.tls).await?;

     // Share the server secret with the tunnel manager so it uses the same

@@ -78,6 +84,7 @@ pub async fn run() -> Result<()> {
     if settings.tls {
         #[cfg(any(feature = "rustls-tls", feature = "native-tls"))]
         {
+            boot_marker("tls setup start");
             let tls_setup = setup_tls(
                 settings.tls_cert_path.as_deref(),
                 settings.tls_key_path.as_deref(),

@@ -92,6 +99,7 @@ pub async fn run() -> Result<()> {
             });

             info!("listening on https://{}", addr);
+            boot_marker("listening");

             #[cfg(feature = "rustls-tls")]
             axum_server::bind_rustls(addr, tls_setup.config)

@@ -114,9 +122,11 @@ pub async fn run() -> Result<()> {
             );
         }
     } else {
+        boot_marker("tcp bind start");
         let listener = tokio::net::TcpListener::bind(addr).await?;

         info!("listening on http://{}", addr);
+        boot_marker("listening");

         axum::serve(listener, app)
             .with_graceful_shutdown(async { shutdown_signal().await })

@@ -9,6 +9,7 @@ mod state;
 mod tunnel;

 use std::path::PathBuf;
+use std::{backtrace::Backtrace, panic::PanicHookInfo};

 use clap::{Parser, Subcommand};
 use goose::agents::validate_extensions;

@@ -42,9 +43,42 @@ enum Commands {
     },
 }

+fn boot_marker(message: &str) {
+    eprintln!("GOOSED_BOOT: {message}");
+}
+
+fn install_panic_hook() {
+    let default_hook = std::panic::take_hook();
+    std::panic::set_hook(Box::new(move |panic_info: &PanicHookInfo<'_>| {
+        let location = panic_info
+            .location()
+            .map(|location| format!("{}:{}", location.file(), location.line()))
+            .unwrap_or_else(|| "unknown".to_string());
+
+        let payload = panic_info
+            .payload()
+            .downcast_ref::<&str>()
+            .map(|msg| (*msg).to_string())
+            .or_else(|| panic_info.payload().downcast_ref::<String>().cloned())
+            .unwrap_or_else(|| "unknown panic payload".to_string());
+
+        eprintln!("GOOSED_BOOT: panic at {location}: {payload}");
+        eprintln!("GOOSED_BOOT: backtrace:\n{}", Backtrace::force_capture());
+
+        default_hook(panic_info);
+    }));
+}
+
 #[tokio::main]
 async fn main() -> anyhow::Result<()> {
+    install_panic_hook();
+    boot_marker("main entered");
+
     let cli = Cli::parse();
+    boot_marker(&format!(
+        "command parsed: {:?}",
+        std::mem::discriminant(&cli.command)
+    ));

     match cli.command {
         Commands::Agent => {

@@ -11,6 +11,7 @@ use goose::config::declarative_providers::LoadedProvider;
 use goose::config::paths::Paths;
 use goose::config::ExtensionEntry;
 use goose::config::{Config, ConfigError};
+use goose::custom_requests::SourceType;
 use goose::model::ModelConfig;
 use goose::providers::base::{ProviderMetadata, ProviderType};
 use goose::providers::canonical::maybe_get_canonical_model;

@@ -136,6 +137,7 @@ pub enum CommandType {
     Builtin,
     Recipe,
     Skill,
+    Agent,
 }

 #[derive(Debug, Clone, Serialize, Deserialize, ToSchema)]

@@ -426,9 +428,7 @@ pub async fn get_slash_commands(
     }

     let working_dir = query.working_dir.map(std::path::PathBuf::from);
-    for source in
-        goose::agents::platform_extensions::skills::list_installed_skills(working_dir.as_deref())
-    {
+    for source in goose::skills::list_installed_skills(working_dir.as_deref()) {
         commands.push(SlashCommand {
             command: source.name,
             help: source.description,

@@ -436,6 +436,25 @@ pub async fn get_slash_commands(
         });
     }

+    let discover_dir = working_dir
+        .as_deref()
+        .unwrap_or_else(|| std::path::Path::new("."));
+    for source in
+        goose::agents::platform_extensions::summon::discover_filesystem_sources(discover_dir)
+    {
+        if matches!(
+            source.source_type,
+            SourceType::Agent | SourceType::Recipe | SourceType::Subrecipe
+        ) && !source.content.is_empty()
+        {
+            commands.push(SlashCommand {
+                command: source.name,
+                help: source.description,
+                command_type: CommandType::Agent,
+            });
+        }
+    }
+
     Ok(Json(SlashCommandsResponse { commands }))
 }

@@ -230,7 +230,8 @@ pub async fn sync_featured_models() -> Result<StatusCode, ErrorResponse> {
 pub async fn list_local_models(
     axum::extract::State(state): axum::extract::State<Arc<AppState>>,
 ) -> Result<Json<Vec<LocalModelResponse>>, ErrorResponse> {
-    let recommended_id = recommend_local_model(&state.inference_runtime);
+    let runtime = state.get_inference_runtime()?;
+    let recommended_id = recommend_local_model(&runtime);

     let registry = get_registry()
         .lock()

@@ -360,7 +361,8 @@ pub async fn get_repo_files(
         .await
         .map_err(|e| ErrorResponse::internal(format!("Failed to fetch repo files: {}", e)))?;

-    let available_memory = available_inference_memory_bytes(&state.inference_runtime);
+    let runtime = state.get_inference_runtime()?;
+    let available_memory = available_inference_memory_bytes(&runtime);
     let recommended_index = hf_models::recommend_variant(&variants, available_memory);

     let downloaded_quants = {

@@ -5,7 +5,7 @@ use goose::scheduler_trait::SchedulerTrait;
 use goose::session::SessionManager;
 use std::collections::{HashMap, HashSet};
 use std::path::PathBuf;
-use std::sync::Arc;
+use std::sync::{Arc, OnceLock};
 use tokio::sync::Mutex;
 use tokio::task::JoinHandle;

@@ -28,7 +28,7 @@ pub struct AppState {
     pub gateway_manager: Arc<GatewayManager>,
     pub extension_loading_tasks: ExtensionLoadingTasks,
     #[cfg(feature = "local-inference")]
-    pub inference_runtime: Arc<InferenceRuntime>,
+    inference_runtime: Arc<OnceLock<Arc<InferenceRuntime>>>,
     session_buses: Arc<Mutex<HashMap<String, Arc<SessionEventBus>>>>,
 }

@@ -48,11 +48,31 @@ impl AppState {
             gateway_manager,
             extension_loading_tasks: Arc::new(Mutex::new(HashMap::new())),
             #[cfg(feature = "local-inference")]
-            inference_runtime: InferenceRuntime::get_or_init(),
+            inference_runtime: Arc::new(OnceLock::new()),
             session_buses: Arc::new(Mutex::new(HashMap::new())),
         }))
     }

+    #[cfg(feature = "local-inference")]
+    pub fn get_inference_runtime(&self) -> anyhow::Result<Arc<InferenceRuntime>> {
+        if let Some(runtime) = self.inference_runtime.get() {
+            return Ok(runtime.clone());
+        }
+
+        let runtime = InferenceRuntime::get_or_init()?;
+
+        // Another thread may win the race to cache the runtime in AppState.
+        // In that case, return the already-initialized cached runtime.
+        match self.inference_runtime.set(runtime.clone()) {
+            Ok(()) => Ok(runtime),
+            Err(_) => Ok(self
+                .inference_runtime
+                .get()
+                .expect("inference runtime initialized by another thread")
+                .clone()),
+        }
+    }
+
     pub async fn set_extension_loading_task(
         &self,
         session_id: String,

@@ -12,6 +12,7 @@
 //! the server listener always uses OpenSSL when this feature is active.

 use anyhow::{bail, Result};
+use goose::config::paths::Paths;
 use rcgen::{CertificateParams, DnType, KeyPair, SanType};
 use std::path::Path;

@@ -105,6 +106,65 @@ pub async fn setup_tls(cert_path: Option<&str>, key_path: Option<&str>) -> Resul
     }
 }

+fn tls_cache_dir() -> std::path::PathBuf {
+    Paths::config_dir().join("tls")
+}
+
+fn write_private_key(path: &std::path::Path, contents: &[u8]) {
+    #[cfg(unix)]
+    {
+        use std::io::Write;
+        use std::os::unix::fs::OpenOptionsExt;
+
+        let result = std::fs::OpenOptions::new()
+            .write(true)
+            .create(true)
+            .truncate(true)
+            .mode(0o600)
+            .open(path);
+        if let Ok(mut file) = result {
+            let _ = file.write_all(contents);
+        }
+    }
+
+    #[cfg(not(unix))]
+    {
+        let _ = std::fs::write(path, contents);
+    }
+}
+
+async fn load_cached_tls() -> Option<TlsSetup> {
+    let dir = tls_cache_dir();
+    let cert_pem = std::fs::read(dir.join("server.pem")).ok()?;
+    let key_pem = std::fs::read(dir.join("server.key")).ok()?;
+
+    let der = pem::parse(&cert_pem).ok()?.into_contents();
+    let fingerprint = sha256_fingerprint(&der);
+
+    #[cfg(feature = "rustls-tls")]
+    let config = axum_server::tls_rustls::RustlsConfig::from_pem(cert_pem, key_pem)
+        .await
+        .ok()?;
+    #[cfg(feature = "native-tls")]
+    let config = axum_server::tls_openssl::OpenSSLConfig::from_pem(&cert_pem, &key_pem).ok()?;
+
+    Some(TlsSetup {
+        config,
+        fingerprint,
+    })
+}
+
+/// All errors are silently ignored — this is a best-effort optimisation and
+/// must never prevent the server from starting.
+fn save_tls_to_cache(cert_pem: &str, key_pem: &str) {
+    let dir = tls_cache_dir();
+    if std::fs::create_dir_all(&dir).is_err() {
+        return;
+    }
+    let _ = std::fs::write(dir.join("server.pem"), cert_pem);
+    write_private_key(&dir.join("server.key"), key_pem.as_bytes());
+}
+
 /// Generate a self-signed TLS certificate for localhost (127.0.0.1) and
 /// return a [`TlsSetup`] containing the server config and the SHA-256
 /// fingerprint of the generated certificate (colon-separated hex).

@@ -115,6 +175,12 @@ pub async fn self_signed_config() -> Result<TlsSetup> {
     #[cfg(feature = "rustls-tls")]
     let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();

+    // Fast path: reuse a previously cached certificate if one exists.
+    if let Some(cached) = load_cached_tls().await {
+        println!("GOOSED_CERT_FINGERPRINT={}", cached.fingerprint);
+        return Ok(cached);
+    }
+
     let (cert, key_pair) = generate_self_signed_cert()?;

     let fingerprint = sha256_fingerprint(cert.der());

@@ -123,6 +189,9 @@ pub async fn self_signed_config() -> Result<TlsSetup> {
     let cert_pem = cert.pem();
     let key_pem = key_pair.serialize_pem();

+    // Persist for future restarts before moving the strings into the config.
+    save_tls_to_cache(&cert_pem, &key_pem);
+
     #[cfg(feature = "rustls-tls")]
     let config = axum_server::tls_rustls::RustlsConfig::from_pem(
         cert_pem.into_bytes(),

@@ -86,9 +86,9 @@ uuid = { workspace = true, features = ["v7"] }
 regex = { workspace = true }
 async-trait = { workspace = true }
 async-stream = { workspace = true }
-minijinja = { version = "2.12.0", features = ["loader"] }
+minijinja = { version = "2.19.0", features = ["loader"] }
 include_dir = { workspace = true }
-tiktoken-rs = "0.6.0"
+tiktoken-rs = "0.11.0"
 chrono = { workspace = true }
 clap = { workspace = true }
 indoc = { workspace = true }

@@ -130,7 +130,7 @@ sqlx = { version = "0.8", default-features = false, features = [

 # For Bedrock provider (optional, behind "aws-providers" feature)
 aws-config = { version = "=1.8.12", features = ["behavior-version-latest"], optional = true }
-aws-smithy-types = { version = "=1.3.5", optional = true }
+aws-smithy-types = { version = "=1.4.7", optional = true }
 aws-sdk-bedrockruntime = { version = "=1.120.0", default-features = false, features = ["default-https-client", "rt-tokio"], optional = true }

 # For SageMaker TGI provider (optional, behind "aws-providers" feature)

@@ -139,7 +139,7 @@ aws-sdk-sagemakerruntime = { version = "1.62.0", default-features = false, featu
 # For GCP Vertex AI provider auth
 jsonwebtoken = { version = "10.3.0", default-features = false, features = ["use_pem"] }

-blake3 = "1.5"
+blake3 = "1.8"
 fs2 = { workspace = true }
 tokio-stream = { workspace = true, features = ["io-util"] }
 tempfile = { workspace = true }

@@ -164,9 +164,9 @@ sys-info = "0.9"
 schemars = { workspace = true, features = [
     "derive",
 ] }
-insta = "1.43.2"
+insta = "1.47.2"
 shellexpand = { workspace = true }
-indexmap = "2.12.0"
+indexmap = "2.14.0"
 ignore = { workspace = true }
 rayon = { workspace = true }
 tree-sitter = { workspace = true }

@@ -182,17 +182,18 @@ tree-sitter-typescript = { workspace = true }
 which = { workspace = true }
 pctx_code_mode = { version = "^0.3.0", optional = true }
 pulldown-cmark = "0.13.0"
-llama-cpp-2 = { version = "0.1.143", features = ["sampler", "mtmd"], optional = true }
+llama-cpp-2 = { version = "0.1.145", features = ["sampler", "mtmd"], optional = true }
 encoding_rs = "0.8.35"
-pastey = "0.2.1"
+pastey = "0.2.2"
 shell-words = { workspace = true }
 pem = { version = "3", optional = true }
 pkcs1 = { version = "0.7", default-features = false, features = ["pkcs8"], optional = true }
 pkcs8 = { version = "0.10", default-features = false, features = ["alloc"], optional = true }
 sec1 = { version = "0.7", default-features = false, features = ["der", "pkcs8"], optional = true }
-goose-acp-macros = { version = "1.31.0", path = "../goose-acp-macros" }
+goose-acp-macros = { path = "../goose-acp-macros" }
 tower-http = { workspace = true, features = ["cors"] }
 http-body-util = "0.1.3"
+process-wrap = { version = "9.1.0", features = ["std"] }

 [target.'cfg(target_os = "windows")'.dependencies]

@@ -208,7 +209,7 @@ keyring = { version = "3.6.2", features = ["apple-native"] }

 [target.'cfg(target_os = "linux")'.dependencies]
 keyring = { version = "3.6.2", features = ["sync-secret-service"] }
-libc = "0.2.184"
+libc = "0.2.186"

 [dev-dependencies]
 serial_test = { workspace = true }

@@ -35,6 +35,21 @@
       "requestType": "GetExtensionsRequest",
       "responseType": "GetExtensionsResponse"
     },
+    {
+      "method": "_goose/config/extensions/add",
+      "requestType": "AddConfigExtensionRequest",
+      "responseType": "EmptyResponse"
+    },
+    {
+      "method": "_goose/config/extensions/remove",
+      "requestType": "RemoveConfigExtensionRequest",
+      "responseType": "EmptyResponse"
+    },
+    {
+      "method": "_goose/config/extensions/toggle",
+      "requestType": "ToggleConfigExtensionRequest",
+      "responseType": "EmptyResponse"
+    },
     {
       "method": "_goose/session/extensions",
       "requestType": "GetSessionExtensionsRequest",

@@ -95,6 +110,11 @@
       "requestType": "UpdateSessionProjectRequest",
       "responseType": "EmptyResponse"
     },
+    {
+      "method": "_goose/session/rename",
+      "requestType": "RenameSessionRequest",
+      "responseType": "EmptyResponse"
+    },
     {
       "method": "_goose/session/archive",
       "requestType": "ArchiveSessionRequest",

@@ -151,7 +151,7 @@
         "extensions": {
           "type": "array",
           "items": {},
-          "description": "Array of ExtensionEntry objects with `enabled` flag and config details."
+          "description": "Array of ExtensionEntry objects with `enabled` flag, `configKey`, and flattened config details."
         },
         "warnings": {
           "type": "array",

@@ -168,6 +168,60 @@
       "x-side": "agent",
       "x-method": "_goose/config/extensions"
     },
+    "AddConfigExtensionRequest": {
+      "type": "object",
+      "properties": {
+        "name": {
+          "type": "string"
+        },
+        "extensionConfig": {
+          "description": "Extension configuration. Must be a JSON object matching one of the\n`ExtensionConfig` variants (e.g. `stdio`, `streamable_http`, `builtin`).\n`name` and `enabled` are injected server-side.",
+          "default": null
+        },
+        "enabled": {
+          "type": "boolean",
+          "default": false
+        }
+      },
+      "required": [
+        "name"
+      ],
+      "description": "Persist a new extension to the user's global goose config.",
+      "x-side": "agent",
+      "x-method": "_goose/config/extensions/add"
+    },
+    "RemoveConfigExtensionRequest": {
+      "type": "object",
+      "properties": {
+        "configKey": {
+          "type": "string"
+        }
+      },
+      "required": [
+        "configKey"
+      ],
+      "description": "Remove a persisted extension from the user's global goose config.",
+      "x-side": "agent",
+      "x-method": "_goose/config/extensions/remove"
+    },
+    "ToggleConfigExtensionRequest": {
+      "type": "object",
+      "properties": {
+        "configKey": {
+          "type": "string"
+        },
+        "enabled": {
+          "type": "boolean"
+        }
+      },
+      "required": [
+        "configKey",
+        "enabled"
+      ],
+      "description": "Toggle the `enabled` flag for a persisted extension in the user's global goose config.",
+      "x-side": "agent",
+      "x-method": "_goose/config/extensions/toggle"
+    },
     "GetSessionExtensionsRequest": {
       "type": "object",
       "properties": {

@@ -689,6 +743,24 @@
       "x-side": "agent",
       "x-method": "_goose/session/update_project"
     },
+    "RenameSessionRequest": {
+      "type": "object",
+      "properties": {
+        "sessionId": {
+          "type": "string"
+        },
+        "title": {
+          "type": "string"
+        }
+      },
+      "required": [
+        "sessionId",
+        "title"
+      ],
+      "description": "Rename a session.",
+      "x-side": "agent",
+      "x-method": "_goose/session/rename"
+    },
     "ArchiveSessionRequest": {
       "type": "object",
       "properties": {

@@ -768,7 +840,7 @@
         "string",
         "null"
       ],
-      "description": "Project source ID. When set with `global: false`, the backend resolves\nthe project's first working directory automatically. Takes precedence\nover `project_dir`."
+      "description": "Project source ID. When set with `global: false`, the backend resolves\nthe project's first working directory automatically."
|
||||
},
|
||||
"properties": {
|
||||
"type": "object",
|
||||
|
|
@ -783,7 +855,7 @@
|
|||
"content",
|
||||
"global"
|
||||
],
|
||||
"description": "Create a new source (global or project-scoped).",
|
||||
"description": "Create a new source in an explicit target scope (global or project-scoped).",
|
||||
"x-side": "agent",
|
||||
"x-method": "_goose/sources/create"
|
||||
},
|
||||
|
|
@ -791,6 +863,10 @@
|
|||
"type": "string",
|
||||
"enum": [
|
||||
"skill",
|
||||
"builtinSkill",
|
||||
"recipe",
|
||||
"subrecipe",
|
||||
"agent",
|
||||
"project"
|
||||
],
|
||||
"description": "The type of source entity."
|
||||
|
|
@ -825,12 +901,19 @@
|
|||
},
|
||||
"directory": {
|
||||
"type": "string",
|
||||
"description": "Absolute path to the source's directory on disk."
|
||||
"description": "Absolute path to the source on disk. A directory for skills, a file for\nrecipes and agents."
|
||||
},
|
||||
"global": {
|
||||
"type": "boolean",
|
||||
"description": "True when the source lives in the user's global sources directory; false\nwhen it lives inside a specific project."
|
||||
},
|
||||
"supportingFiles": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "string"
|
||||
},
|
||||
"description": "Paths (absolute) of additional files that live alongside the source.\nOnly skills currently populate this; empty for other source types."
|
||||
},
|
||||
"properties": {
|
||||
"type": "object",
|
||||
"additionalProperties": {},
|
||||
|
|
@ -845,7 +928,7 @@
|
|||
"directory",
|
||||
"global"
|
||||
],
|
||||
"description": "A source — a user-editable entity backed by an on-disk directory. Sources\nmay be either `global` (shared across all projects) or project-specific."
|
||||
"description": "A source discovered by Goose and backed by an on-disk path. Sources may be\neither `global` (shared across all projects) or project-specific."
|
||||
},
|
||||
"ListSourcesRequest": {
|
||||
"type": "object",
|
||||
|
|
@ -868,11 +951,11 @@
|
|||
},
|
||||
"includeProjectSources": {
|
||||
"type": "boolean",
|
||||
"description": "When true, also scan the working directories of all known projects for\nproject-scoped sources (e.g. skills stored under `{workingDir}/.agents/skills/`).",
|
||||
"description": "When true, also scan the working directories of all known projects for\nproject-scoped sources.",
|
||||
"default": false
|
||||
}
|
||||
},
|
||||
"description": "List sources. If `type` is omitted, sources of all known types are returned.\nBoth global and project-scoped sources are included when `project_dir` is set.",
|
||||
"description": "List discovered sources.\n\nToday this endpoint only returns skills. If `type` is omitted, it defaults\nto listing skill sources. Both global and project-scoped skills are included\nwhen `project_dir` is set.",
|
||||
"x-side": "agent",
|
||||
"x-method": "_goose/sources/list"
|
||||
},
|
||||
|
|
@ -898,6 +981,9 @@
|
|||
"type": {
|
||||
"$ref": "#/$defs/SourceType"
|
||||
},
|
||||
"path": {
|
||||
"type": "string"
|
||||
},
|
||||
"name": {
|
||||
"type": "string"
|
||||
},
|
||||
|
|
@ -906,30 +992,16 @@
|
|||
},
|
||||
"content": {
|
||||
"type": "string"
|
||||
},
|
||||
"global": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"projectDir": {
|
||||
"type": [
|
||||
"string",
|
||||
"null"
|
||||
]
|
||||
},
|
||||
"properties": {
|
||||
"type": "object",
|
||||
"additionalProperties": {},
|
||||
"description": "Arbitrary key/value metadata. Replaces all existing properties."
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"type",
|
||||
"path",
|
||||
"name",
|
||||
"description",
|
||||
"content",
|
||||
"global"
|
||||
"content"
|
||||
],
|
||||
"description": "Update an existing source's description and content.",
|
||||
"description": "Update an existing source's name, description, and content by absolute path.",
|
||||
"x-side": "agent",
|
||||
"x-method": "_goose/sources/update"
|
||||
},
|
||||
|
|
@ -952,25 +1024,15 @@
|
|||
"type": {
|
||||
"$ref": "#/$defs/SourceType"
|
||||
},
|
||||
"name": {
|
||||
"path": {
|
||||
"type": "string"
|
||||
},
|
||||
"global": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"projectDir": {
|
||||
"type": [
|
||||
"string",
|
||||
"null"
|
||||
]
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"type",
|
||||
"name",
|
||||
"global"
|
||||
"path"
|
||||
],
|
||||
"description": "Delete a source and its on-disk directory.",
|
||||
"description": "Delete a source and its on-disk directory by absolute path.",
|
||||
"x-side": "agent",
|
||||
"x-method": "_goose/sources/delete"
|
||||
},
|
||||
|
|
@ -980,25 +1042,15 @@
|
|||
"type": {
|
||||
"$ref": "#/$defs/SourceType"
|
||||
},
|
||||
"name": {
|
||||
"path": {
|
||||
"type": "string"
|
||||
},
|
||||
"global": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"projectDir": {
|
||||
"type": [
|
||||
"string",
|
||||
"null"
|
||||
]
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"type",
|
||||
"name",
|
||||
"global"
|
||||
"path"
|
||||
],
|
||||
"description": "Export a source as a portable JSON payload.",
|
||||
"description": "Export a source at an absolute path as a portable JSON payload.",
|
||||
"x-side": "agent",
|
||||
"x-method": "_goose/sources/export"
|
||||
},
|
||||
|
|
@ -1039,7 +1091,7 @@
|
|||
"data",
|
||||
"global"
|
||||
],
|
||||
"description": "Import a source from a JSON export payload produced by `_goose/sources/export`.\nThe imported source is written under the given scope; on name collisions a\n`-imported` suffix is appended.",
|
||||
"description": "Import a source from a JSON export payload produced by `_goose/sources/export`.\nThe imported source is written into the explicit target scope; on name\ncollisions a `-imported` suffix is appended.",
|
||||
"x-side": "agent",
|
||||
"x-method": "_goose/sources/import"
|
||||
},
|
||||
|
|
@ -1457,6 +1509,33 @@
|
|||
"description": "Params for _goose/config/extensions",
|
||||
"title": "GetExtensionsRequest"
|
||||
},
|
||||
{
|
||||
"allOf": [
|
||||
{
|
||||
"$ref": "#/$defs/AddConfigExtensionRequest"
|
||||
}
|
||||
],
|
||||
"description": "Params for _goose/config/extensions/add",
|
||||
"title": "AddConfigExtensionRequest"
|
||||
},
|
||||
{
|
||||
"allOf": [
|
||||
{
|
||||
"$ref": "#/$defs/RemoveConfigExtensionRequest"
|
||||
}
|
||||
],
|
||||
"description": "Params for _goose/config/extensions/remove",
|
||||
"title": "RemoveConfigExtensionRequest"
|
||||
},
|
||||
{
|
||||
"allOf": [
|
||||
{
|
||||
"$ref": "#/$defs/ToggleConfigExtensionRequest"
|
||||
}
|
||||
],
|
||||
"description": "Params for _goose/config/extensions/toggle",
|
||||
"title": "ToggleConfigExtensionRequest"
|
||||
},
|
||||
{
|
||||
"allOf": [
|
||||
{
|
||||
|
|
@ -1565,6 +1644,15 @@
|
|||
"description": "Params for _goose/session/update_project",
|
||||
"title": "UpdateSessionProjectRequest"
|
||||
},
|
||||
{
|
||||
"allOf": [
|
||||
{
|
||||
"$ref": "#/$defs/RenameSessionRequest"
|
||||
}
|
||||
],
|
||||
"description": "Params for _goose/session/rename",
|
||||
"title": "RenameSessionRequest"
|
||||
},
|
||||
{
|
||||
"allOf": [
|
||||
{
|
||||
|
|
|
File diff suppressed because it is too large
265 crates/goose/src/acp/transport/connection.rs Normal file
@ -0,0 +1,265 @@
//! Connection-level state shared between HTTP and WebSocket transports.
//!
//! Each connection hosts one ACP agent task. All server→client messages for
//! the connection are multicast through a single broadcast channel; HTTP GET
//! SSE streams and WebSocket sinks subscribe to that channel. POSTs (and WS
//! text frames) forward client→server messages into the agent over an mpsc.

use std::{
    collections::{HashMap, VecDeque},
    sync::Arc,
};

use anyhow::Result;
use tokio::sync::{broadcast, mpsc, Mutex, RwLock};
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
use tracing::{error, info, warn};

use crate::acp::adapters::{ReceiverToAsyncRead, SenderToAsyncWrite};
use crate::acp::server_factory::AcpServer;

/// Broadcast capacity for agent→client messages. Large enough to buffer a
/// typical prompt's streaming notifications even if the subscriber is briefly
/// slow (e.g. during reconnect).
const OUTBOUND_BROADCAST_CAPACITY: usize = 1024;

/// Maximum number of server→client messages to retain while no subscriber is
/// attached. In the HTTP flow the client opens `GET /acp` only after receiving
/// the initialize response, so any notifications or server-initiated requests
/// emitted by the agent in that window would otherwise be broadcast to zero
/// subscribers and permanently lost. We buffer them here and replay on the
/// first subscribe. On overflow the oldest message is dropped with a warning.
const PRE_SUBSCRIBE_BUFFER_CAPACITY: usize = 1024;

pub(crate) struct Connection {
    /// Send client→server messages into the agent.
    pub to_agent_tx: mpsc::Sender<String>,
    /// Subscribe here to receive all server→client messages for this connection.
    pub outbound_tx: broadcast::Sender<String>,
    /// Pulled exactly once during `initialize` to read the synchronous response
    /// that must be returned as the HTTP 200 body before any broadcast
    /// subscribers exist. `None` once consumed.
    pub init_receiver: Mutex<Option<mpsc::UnboundedReceiver<String>>>,
    /// Set once the initialize handler has captured the initialize response and
    /// handed ownership of the agent output pump over to the broadcast fan-out.
    pub init_complete: Mutex<bool>,
    /// Handle to the agent task; aborted on connection termination.
    pub agent_handle: tokio::task::JoinHandle<()>,
    /// Handle to the fan-out pump task; aborted on connection termination.
    pub pump_handle: Mutex<Option<tokio::task::JoinHandle<()>>>,
    pre_subscribe_buffer: Arc<Mutex<Option<VecDeque<String>>>>,
}

pub(crate) struct ConnectionRegistry {
    pub server: Arc<AcpServer>,
    connections: RwLock<HashMap<String, Arc<Connection>>>,
}

impl ConnectionRegistry {
    pub fn new(server: Arc<AcpServer>) -> Self {
        Self {
            server,
            connections: RwLock::new(HashMap::new()),
        }
    }

    /// Create a new connection, spawn the ACP agent task, and return
    /// (connection_id, connection). The initialize request body should be sent
    /// via `connection.to_agent_tx` and the synchronous initialize response
    /// read via `consume_initialize_response`.
    pub async fn create_connection(&self) -> Result<(String, Arc<Connection>)> {
        let (to_agent_tx, to_agent_rx) = mpsc::channel::<String>(256);
        let (from_agent_tx, from_agent_rx) = mpsc::unbounded_channel::<String>();
        let (outbound_tx, _) = broadcast::channel::<String>(OUTBOUND_BROADCAST_CAPACITY);

        let agent = self.server.create_agent().await?;
        let connection_id = uuid::Uuid::new_v4().to_string();

        let read_stream = ReceiverToAsyncRead::new(to_agent_rx);
        let write_stream = SenderToAsyncWrite::new(from_agent_tx);
        let fut =
            crate::acp::server::serve(agent, read_stream.compat(), write_stream.compat_write());

        let conn_id_for_task = connection_id.clone();
        let agent_handle = tokio::spawn(async move {
            if let Err(e) = fut.await {
                error!(connection_id = %conn_id_for_task, "ACP agent task error: {}", e);
            }
        });

        let connection = Arc::new(Connection {
            to_agent_tx,
            outbound_tx,
            init_receiver: Mutex::new(Some(from_agent_rx)),
            init_complete: Mutex::new(false),
            agent_handle,
            pump_handle: Mutex::new(None),
            pre_subscribe_buffer: Arc::new(Mutex::new(Some(VecDeque::new()))),
        });

        self.connections
            .write()
            .await
            .insert(connection_id.clone(), connection.clone());

        info!(connection_id = %connection_id, "Connection created");
        Ok((connection_id, connection))
    }

    pub async fn get(&self, connection_id: &str) -> Option<Arc<Connection>> {
        self.connections.read().await.get(connection_id).cloned()
    }

    pub async fn remove(&self, connection_id: &str) -> Option<Arc<Connection>> {
        self.connections.write().await.remove(connection_id)
    }
}

impl Connection {
    /// After the synchronous initialize response has been consumed, spawn a
    /// task that forwards all remaining agent output to the broadcast channel.
    /// Idempotent.
    pub async fn start_fanout(self: &Arc<Self>) {
        let mut complete = self.init_complete.lock().await;
        if *complete {
            return;
        }
        let Some(mut rx) = self.init_receiver.lock().await.take() else {
            return;
        };
        let outbound_tx = self.outbound_tx.clone();
        let buffer = self.pre_subscribe_buffer.clone();
        let handle = tokio::spawn(async move {
            while let Some(msg) = rx.recv().await {
                let mut buf_guard = buffer.lock().await;
                match buf_guard.as_mut() {
                    Some(buf) => {
                        if buf.len() >= PRE_SUBSCRIBE_BUFFER_CAPACITY {
                            warn!(
                                "Pre-subscribe buffer full ({} messages); dropping oldest",
                                PRE_SUBSCRIBE_BUFFER_CAPACITY
                            );
                            buf.pop_front();
                        }
                        buf.push_back(msg);
                    }
                    None => {
                        drop(buf_guard);
                        let _ = outbound_tx.send(msg);
                    }
                }
            }
        });
        *self.pump_handle.lock().await = Some(handle);
        *complete = true;
    }

    pub async fn subscribe_with_replay(&self) -> (Vec<String>, broadcast::Receiver<String>) {
        let mut guard = self.pre_subscribe_buffer.lock().await;
        let receiver = self.outbound_tx.subscribe();
        let replay = guard.take().map(Vec::from).unwrap_or_default();
        (replay, receiver)
    }

    /// Terminate the connection: abort the agent task and the fan-out pump.
    pub async fn shutdown(&self) {
        self.agent_handle.abort();
        if let Some(h) = self.pump_handle.lock().await.take() {
            h.abort();
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::time::Duration;
    use tokio::time::timeout;

    fn fake_connection() -> (Arc<Connection>, mpsc::UnboundedSender<String>) {
        let (to_agent_tx, _to_agent_rx) = mpsc::channel::<String>(256);
        let (from_agent_tx, from_agent_rx) = mpsc::unbounded_channel::<String>();
        let (outbound_tx, _) = broadcast::channel::<String>(OUTBOUND_BROADCAST_CAPACITY);

        let agent_handle = tokio::spawn(async {
            std::future::pending::<()>().await;
        });

        let connection = Arc::new(Connection {
            to_agent_tx,
            outbound_tx,
            init_receiver: Mutex::new(Some(from_agent_rx)),
            init_complete: Mutex::new(false),
            agent_handle,
            pump_handle: Mutex::new(None),
            pre_subscribe_buffer: Arc::new(Mutex::new(Some(VecDeque::new()))),
        });

        (connection, from_agent_tx)
    }

    #[tokio::test]
    async fn buffers_messages_emitted_before_first_subscribe() {
        let (conn, agent_tx) = fake_connection();
        conn.start_fanout().await;

        agent_tx.send("one".to_string()).unwrap();
        agent_tx.send("two".to_string()).unwrap();
        agent_tx.send("three".to_string()).unwrap();

        tokio::task::yield_now().await;
        tokio::time::sleep(Duration::from_millis(20)).await;

        let (replay, _rx) = conn.subscribe_with_replay().await;
        assert_eq!(replay, vec!["one", "two", "three"]);

        conn.shutdown().await;
    }

    #[tokio::test]
    async fn switches_to_live_broadcast_after_subscribe() {
        let (conn, agent_tx) = fake_connection();
        conn.start_fanout().await;

        let (replay, mut rx) = conn.subscribe_with_replay().await;
        assert!(replay.is_empty());

        agent_tx.send("live-one".to_string()).unwrap();
        agent_tx.send("live-two".to_string()).unwrap();

        let got1 = timeout(Duration::from_secs(1), rx.recv())
            .await
            .unwrap()
            .unwrap();
        let got2 = timeout(Duration::from_secs(1), rx.recv())
            .await
            .unwrap()
            .unwrap();
        assert_eq!(got1, "live-one");
        assert_eq!(got2, "live-two");

        conn.shutdown().await;
    }

    #[tokio::test]
    async fn pre_subscribe_buffer_is_bounded() {
        let (conn, agent_tx) = fake_connection();
        conn.start_fanout().await;

        for i in 0..(PRE_SUBSCRIBE_BUFFER_CAPACITY + 50) {
            agent_tx.send(format!("m{}", i)).unwrap();
        }

        tokio::time::sleep(Duration::from_millis(50)).await;

        let (replay, _rx) = conn.subscribe_with_replay().await;
        assert_eq!(replay.len(), PRE_SUBSCRIBE_BUFFER_CAPACITY);
        assert_eq!(
            replay.last().unwrap(),
            &format!("m{}", PRE_SUBSCRIBE_BUFFER_CAPACITY + 49)
        );
        assert_eq!(replay.first().unwrap(), &format!("m{}", 50));

        conn.shutdown().await;
    }
}
@ -1,200 +1,30 @@
|
|||
use anyhow::Result;
|
||||
use std::{convert::Infallible, sync::Arc, time::Duration};
|
||||
|
||||
use axum::{
|
||||
body::Body,
|
||||
extract::State,
|
||||
http::{Request, StatusCode},
|
||||
http::{HeaderValue, Request, StatusCode},
|
||||
response::{IntoResponse, Response, Sse},
|
||||
};
|
||||
use http_body_util::BodyExt;
|
||||
use serde_json::Value;
|
||||
use std::{collections::HashMap, convert::Infallible, sync::Arc, time::Duration};
|
||||
use tokio::sync::{mpsc, Mutex, RwLock};
|
||||
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
|
||||
use tracing::{error, info};
|
||||
use tokio::sync::broadcast;
|
||||
use tracing::{debug, error, info, trace};
|
||||
|
||||
use super::connection::{Connection, ConnectionRegistry};
|
||||
use super::*;
|
||||
use crate::acp::adapters::{ReceiverToAsyncRead, SenderToAsyncWrite};
|
||||
use crate::acp::server_factory::AcpServer;
|
||||
|
||||
pub(crate) struct HttpState {
|
||||
server: Arc<AcpServer>,
|
||||
// Keyed by acp_session_id: a connection-scoped UUID serving many Goose sessions.
|
||||
sessions: RwLock<HashMap<String, TransportSession>>,
|
||||
}
|
||||
|
||||
impl HttpState {
|
||||
pub fn new(server: Arc<AcpServer>) -> Self {
|
||||
Self {
|
||||
server,
|
||||
sessions: RwLock::new(HashMap::new()),
|
||||
}
|
||||
}
|
||||
|
||||
async fn create_session(&self) -> Result<String, StatusCode> {
|
||||
let (to_agent_tx, to_agent_rx) = mpsc::channel::<String>(256);
|
||||
let (from_agent_tx, from_agent_rx) = mpsc::unbounded_channel::<String>();
|
||||
|
||||
let agent = self.server.create_agent().await.map_err(|e| {
|
||||
error!("Failed to create agent: {}", e);
|
||||
StatusCode::INTERNAL_SERVER_ERROR
|
||||
})?;
|
||||
|
||||
let acp_session_id = uuid::Uuid::new_v4().to_string();
|
||||
|
||||
let read_stream = ReceiverToAsyncRead::new(to_agent_rx);
|
||||
let write_stream = SenderToAsyncWrite::new(from_agent_tx);
|
||||
let fut =
|
||||
crate::acp::server::serve(agent, read_stream.compat(), write_stream.compat_write());
|
||||
let handle = tokio::spawn(async move {
|
||||
if let Err(e) = fut.await {
|
||||
error!("ACP session error: {}", e);
|
||||
}
|
||||
});
|
||||
|
||||
self.sessions.write().await.insert(
|
||||
acp_session_id.clone(),
|
||||
TransportSession {
|
||||
to_agent_tx,
|
||||
from_agent_rx: Arc::new(Mutex::new(from_agent_rx)),
|
||||
handle,
|
||||
},
|
||||
);
|
||||
|
||||
info!(acp_session_id = %acp_session_id, "Session created");
|
||||
Ok(acp_session_id)
|
||||
}
|
||||
|
||||
async fn has_session(&self, acp_session_id: &str) -> bool {
|
||||
self.sessions.read().await.contains_key(acp_session_id)
|
||||
}
|
||||
|
||||
async fn remove_session(&self, acp_session_id: &str) {
|
||||
if let Some(session) = self.sessions.write().await.remove(acp_session_id) {
|
||||
session.handle.abort();
|
||||
info!(acp_session_id = %acp_session_id, "Session removed");
|
||||
}
|
||||
}
|
||||
|
||||
async fn send_message(&self, acp_session_id: &str, message: String) -> Result<(), StatusCode> {
|
||||
let sessions = self.sessions.read().await;
|
||||
let session = sessions.get(acp_session_id).ok_or(StatusCode::NOT_FOUND)?;
|
||||
session
|
||||
.to_agent_tx
|
||||
.send(message)
|
||||
.await
|
||||
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
|
||||
}
|
||||
|
||||
async fn get_receiver(
|
||||
&self,
|
||||
acp_session_id: &str,
|
||||
) -> Result<Arc<Mutex<mpsc::UnboundedReceiver<String>>>, StatusCode> {
|
||||
let sessions = self.sessions.read().await;
|
||||
let session = sessions.get(acp_session_id).ok_or(StatusCode::NOT_FOUND)?;
|
||||
Ok(session.from_agent_rx.clone())
|
||||
}
|
||||
}
|
||||
|
||||
fn create_sse_stream(
|
||||
receiver: Arc<Mutex<mpsc::UnboundedReceiver<String>>>,
|
||||
cleanup: Option<(Arc<HttpState>, String)>,
|
||||
) -> Sse<impl futures::Stream<Item = Result<axum::response::sse::Event, Infallible>>> {
|
||||
let stream = async_stream::stream! {
|
||||
let mut rx = receiver.lock().await;
|
||||
while let Some(msg) = rx.recv().await {
|
||||
yield Ok::<_, Infallible>(axum::response::sse::Event::default().data(msg));
|
||||
}
|
||||
if let Some((state, acp_session_id)) = cleanup {
|
||||
state.remove_session(&acp_session_id).await;
|
||||
}
|
||||
};
|
||||
|
||||
Sse::new(stream).keep_alive(
|
||||
axum::response::sse::KeepAlive::new()
|
||||
.interval(Duration::from_secs(15))
|
||||
.text(""),
|
||||
)
|
||||
}
|
||||
|
||||
async fn handle_initialize(state: Arc<HttpState>, json_message: &Value) -> Response {
|
||||
let acp_session_id = match state.create_session().await {
|
||||
Ok(id) => id,
|
||||
Err(status) => return status.into_response(),
|
||||
};
|
||||
|
||||
let message_str = serde_json::to_string(json_message).unwrap();
|
||||
if let Err(status) = state.send_message(&acp_session_id, message_str).await {
|
||||
state.remove_session(&acp_session_id).await;
|
||||
return status.into_response();
|
||||
}
|
||||
|
||||
let receiver = match state.get_receiver(&acp_session_id).await {
|
||||
Ok(r) => r,
|
||||
Err(status) => {
|
||||
state.remove_session(&acp_session_id).await;
|
||||
return status.into_response();
|
||||
}
|
||||
};
|
||||
|
||||
let sse = create_sse_stream(receiver, Some((state.clone(), acp_session_id.clone())));
|
||||
let mut response = sse.into_response();
|
||||
response
|
||||
.headers_mut()
|
||||
.insert(HEADER_SESSION_ID, acp_session_id.parse().unwrap());
|
||||
response
|
||||
}
|
||||
|
||||
async fn handle_request(
|
||||
state: Arc<HttpState>,
|
||||
acp_session_id: String,
|
||||
json_message: &Value,
|
||||
) -> Response {
|
||||
if !state.has_session(&acp_session_id).await {
|
||||
return (StatusCode::NOT_FOUND, "Session not found").into_response();
|
||||
}
|
||||
|
||||
let message_str = serde_json::to_string(json_message).unwrap();
|
||||
if let Err(status) = state.send_message(&acp_session_id, message_str).await {
|
||||
return status.into_response();
|
||||
}
|
||||
|
||||
let receiver = match state.get_receiver(&acp_session_id).await {
|
||||
Ok(r) => r,
|
||||
Err(status) => return status.into_response(),
|
||||
};
|
||||
|
||||
create_sse_stream(receiver, None).into_response()
|
||||
}
|
||||
|
||||
async fn handle_notification_or_response(
|
||||
state: Arc<HttpState>,
|
||||
acp_session_id: String,
|
||||
json_message: &Value,
|
||||
) -> Response {
|
||||
if !state.has_session(&acp_session_id).await {
|
||||
return (StatusCode::NOT_FOUND, "Session not found").into_response();
|
||||
}
|
||||
|
||||
let message_str = serde_json::to_string(json_message).unwrap();
|
||||
if let Err(status) = state.send_message(&acp_session_id, message_str).await {
|
||||
return status.into_response();
|
||||
}
|
||||
|
||||
StatusCode::ACCEPTED.into_response()
|
||||
}
|
||||
|
||||
/// POST /acp
|
||||
///
|
||||
/// - `initialize`: creates a new connection, forwards the request, waits for
|
||||
/// the synchronous initialize response from the agent, and returns it as a
|
||||
/// 200 OK JSON body with the `Acp-Connection-Id` header set.
|
||||
/// - All other messages: require `Acp-Connection-Id` (and `Acp-Session-Id`
|
||||
/// for session-scoped methods), forward to the agent, return 202 Accepted.
|
||||
pub(crate) async fn handle_post(
|
||||
State(state): State<Arc<HttpState>>,
|
||||
State(registry): State<Arc<ConnectionRegistry>>,
|
||||
request: Request<Body>,
|
||||
) -> Response {
|
||||
if !accepts_json_and_sse(&request) {
|
||||
return (
|
||||
StatusCode::NOT_ACCEPTABLE,
|
||||
"Not Acceptable: Client must accept both application/json and text/event-stream",
|
||||
)
|
||||
.into_response();
|
||||
}
|
||||
|
||||
if !content_type_is_json(&request) {
|
||||
return (
|
||||
StatusCode::UNSUPPORTED_MEDIA_TYPE,
|
||||
|
|
@ -203,7 +33,8 @@ pub(crate) async fn handle_post(
|
|||
.into_response();
|
||||
}
|
||||
|
||||
let acp_session_id = get_session_id(&request);
|
||||
let connection_id = header_value(&request, HEADER_CONNECTION_ID);
|
||||
let session_id = header_value(&request, HEADER_SESSION_ID);
|
||||
|
||||
let body_bytes = match request.into_body().collect().await {
|
||||
Ok(collected) => collected.to_bytes(),
|
||||
|
|
@ -216,7 +47,6 @@ pub(crate) async fn handle_post(
|
|||
let json_message: Value = match serde_json::from_slice(&body_bytes) {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
error!("Failed to parse JSON: {}", e);
|
||||
return (StatusCode::BAD_REQUEST, format!("Invalid JSON: {}", e)).into_response();
|
||||
}
|
||||
};
|
||||
|
|
@ -230,31 +60,128 @@ pub(crate) async fn handle_post(
|
|||
}
|
||||
|
||||
if is_initialize_request(&json_message) {
|
||||
handle_initialize(state.clone(), &json_message).await
|
||||
} else if is_jsonrpc_request(&json_message) {
|
||||
let Some(id) = acp_session_id else {
|
||||
return (
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Bad Request: Acp-Session-Id header required",
|
||||
)
|
||||
.into_response();
|
||||
};
|
||||
handle_request(state.clone(), id, &json_message).await
|
||||
} else if is_jsonrpc_notification(&json_message) || is_jsonrpc_response(&json_message) {
|
||||
let Some(id) = acp_session_id else {
|
||||
return (
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Bad Request: Acp-Session-Id header required",
|
||||
)
|
||||
.into_response();
|
||||
};
|
||||
handle_notification_or_response(state.clone(), id, &json_message).await
|
||||
} else {
|
||||
(StatusCode::BAD_REQUEST, "Invalid JSON-RPC message").into_response()
|
||||
return handle_initialize(registry, json_message).await;
|
||||
}
|
||||
|
||||
let Some(connection_id) = connection_id else {
|
||||
return (
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Bad Request: Acp-Connection-Id header required",
|
||||
)
|
||||
.into_response();
|
||||
};
|
||||
|
||||
let Some(connection) = registry.get(&connection_id).await else {
|
||||
return (StatusCode::NOT_FOUND, "Unknown Acp-Connection-Id").into_response();
|
||||
};
|
||||
|
||||
if let Some(method) = json_message.get("method").and_then(|m| m.as_str()) {
|
||||
if method_requires_session_header(method) && session_id.is_none() {
|
||||
return (
|
||||
StatusCode::BAD_REQUEST,
|
||||
"Bad Request: Acp-Session-Id header required for session-scoped methods",
|
||||
)
|
||||
.into_response();
|
||||
}
|
||||
}
|
||||
|
||||
if !is_jsonrpc_request_with_id(&json_message)
|
||||
&& !is_jsonrpc_notification(&json_message)
|
||||
&& !is_jsonrpc_response(&json_message)
|
||||
{
|
||||
return (StatusCode::BAD_REQUEST, "Invalid JSON-RPC message").into_response();
|
        }

        let message_str = serde_json::to_string(&json_message).unwrap();
        trace!(connection_id = %connection_id, payload = %message_str, "POST → agent");
        if connection.to_agent_tx.send(message_str).await.is_err() {
            return (
                StatusCode::INTERNAL_SERVER_ERROR,
                "Failed to forward message to agent",
            )
                .into_response();
        }

        StatusCode::ACCEPTED.into_response()
    }

pub(crate) async fn handle_get(state: Arc<HttpState>, request: Request<Body>) -> Response {
async fn handle_initialize(registry: Arc<ConnectionRegistry>, json_message: Value) -> Response {
    let (connection_id, connection) = match registry.create_connection().await {
        Ok(pair) => pair,
        Err(e) => {
            error!("Failed to create connection: {}", e);
            return (
                StatusCode::INTERNAL_SERVER_ERROR,
                "Failed to create connection",
            )
                .into_response();
        }
    };

    let message_str = serde_json::to_string(&json_message).unwrap();
    trace!(connection_id = %connection_id, payload = %message_str, "initialize → agent");
    if connection.to_agent_tx.send(message_str).await.is_err() {
        registry.remove(&connection_id).await;
        connection.shutdown().await;
        return (
            StatusCode::INTERNAL_SERVER_ERROR,
            "Failed to forward initialize to agent",
        )
            .into_response();
    }

    // Read exactly one message from the agent: the initialize response.
    let init_response = {
        let mut guard = connection.init_receiver.lock().await;
        let Some(rx) = guard.as_mut() else {
            registry.remove(&connection_id).await;
            connection.shutdown().await;
            return (
                StatusCode::INTERNAL_SERVER_ERROR,
                "Initialize receiver already consumed",
            )
                .into_response();
        };
        rx.recv().await
    };

    let init_response = match init_response {
        Some(msg) => msg,
        None => {
            registry.remove(&connection_id).await;
            connection.shutdown().await;
            return (
                StatusCode::INTERNAL_SERVER_ERROR,
                "Agent closed before initialize response",
            )
                .into_response();
        }
    };

    connection.start_fanout().await;

    let mut response = (
        StatusCode::OK,
        [(axum::http::header::CONTENT_TYPE, JSON_MIME_TYPE)],
        init_response,
    )
        .into_response();
    if let Ok(v) = HeaderValue::from_str(&connection_id) {
        response.headers_mut().insert(HEADER_CONNECTION_ID, v);
    }
    info!(connection_id = %connection_id, "Initialize complete");
    response
}

/// GET /acp (no Upgrade)
///
/// Opens the single long-lived SSE stream for a connection. All server→client
/// messages (responses + notifications + server-initiated requests) are
/// delivered here, correlated by their JSON-RPC body fields.
pub(crate) async fn handle_get(
    registry: Arc<ConnectionRegistry>,
    request: Request<Body>,
) -> Response {
    if !accepts_mime_type(&request, EVENT_STREAM_MIME_TYPE) {
        return (
            StatusCode::NOT_ACCEPTABLE,
@@ -263,61 +190,77 @@ pub(crate) async fn handle_get(state: Arc<HttpState>, request: Request<Body>) ->
        .into_response();
    }

    let acp_session_id = match get_session_id(&request) {
        Some(id) => id,
        None => {
            return (
                StatusCode::BAD_REQUEST,
                "Bad Request: Acp-Session-Id header required",
            )
                .into_response();
        }
    let Some(connection_id) = header_value(&request, HEADER_CONNECTION_ID) else {
        return (
            StatusCode::BAD_REQUEST,
            "Bad Request: Acp-Connection-Id header required",
        )
            .into_response();
    };

    if !state.has_session(&acp_session_id).await {
        return (StatusCode::NOT_FOUND, "Session not found").into_response();
    let Some(connection) = registry.get(&connection_id).await else {
        return (StatusCode::NOT_FOUND, "Unknown Acp-Connection-Id").into_response();
    };

    let (replay, receiver) = connection.subscribe_with_replay().await;
    let sse = build_sse_stream(connection.clone(), replay, receiver);

    let mut response = sse.into_response();
    if let Ok(v) = HeaderValue::from_str(&connection_id) {
        response.headers_mut().insert(HEADER_CONNECTION_ID, v);
    }
    response
}

    let receiver = match state.get_receiver(&acp_session_id).await {
        Ok(r) => r,
        Err(status) => return status.into_response(),
    };

fn build_sse_stream(
    _connection: Arc<Connection>,
    replay: Vec<String>,
    mut receiver: broadcast::Receiver<String>,
) -> Sse<impl futures::Stream<Item = Result<axum::response::sse::Event, Infallible>>> {
    let stream = async_stream::stream! {
        let mut rx = receiver.lock().await;
        while let Some(msg) = rx.recv().await {
        for msg in replay {
            trace!(payload = %msg, "SSE → client (replay)");
            yield Ok::<_, Infallible>(axum::response::sse::Event::default().data(msg));
        }
    };

    Sse::new(stream)
        .keep_alive(
            axum::response::sse::KeepAlive::new()
                .interval(Duration::from_secs(15))
                .text(""),
        )
        .into_response()
}

pub(crate) async fn handle_delete(
    State(state): State<Arc<HttpState>>,
    request: Request<Body>,
) -> Response {
    let acp_session_id = match get_session_id(&request) {
        Some(id) => id,
        None => {
            return (
                StatusCode::BAD_REQUEST,
                "Bad Request: Acp-Session-Id header required",
            )
                .into_response();
        loop {
            match receiver.recv().await {
                Ok(msg) => {
                    trace!(payload = %msg, "SSE → client");
                    yield Ok::<_, Infallible>(axum::response::sse::Event::default().data(msg));
                }
                Err(broadcast::error::RecvError::Lagged(n)) => {
                    debug!("SSE subscriber lagged {} messages", n);
                    continue;
                }
                Err(broadcast::error::RecvError::Closed) => break,
            }
        }
    };

    if !state.has_session(&acp_session_id).await {
        return (StatusCode::NOT_FOUND, "Session not found").into_response();
    }
    Sse::new(stream).keep_alive(
        axum::response::sse::KeepAlive::new()
            .interval(Duration::from_secs(15))
            .text(""),
    )
}

    state.remove_session(&acp_session_id).await;
/// DELETE /acp
pub(crate) async fn handle_delete(
    State(registry): State<Arc<ConnectionRegistry>>,
    request: Request<Body>,
) -> Response {
    let Some(connection_id) = header_value(&request, HEADER_CONNECTION_ID) else {
        return (
            StatusCode::BAD_REQUEST,
            "Bad Request: Acp-Connection-Id header required",
        )
            .into_response();
    };

    let Some(connection) = registry.remove(&connection_id).await else {
        return (StatusCode::NOT_FOUND, "Unknown Acp-Connection-Id").into_response();
    };
    connection.shutdown().await;
    info!(connection_id = %connection_id, "Connection terminated via DELETE");
    StatusCode::ACCEPTED.into_response()
}
@@ -1,3 +1,4 @@
pub mod connection;
pub mod http;
pub mod websocket;
@@ -9,27 +10,21 @@ use axum::{
        ws::{rejection::WebSocketUpgradeRejection, WebSocketUpgrade},
        State,
    },
    http::{header, Method, Request},
    http::{header, HeaderName, Method, Request},
    response::Response,
    routing::{delete, get, post},
    Router,
};
use serde_json::Value;
use tokio::sync::{mpsc, Mutex};
use tower_http::cors::{Any, CorsLayer};

use crate::acp::server_factory::AcpServer;

pub(crate) const HEADER_CONNECTION_ID: &str = "Acp-Connection-Id";
pub(crate) const HEADER_SESSION_ID: &str = "Acp-Session-Id";
pub(crate) const EVENT_STREAM_MIME_TYPE: &str = "text/event-stream";
pub(crate) const JSON_MIME_TYPE: &str = "application/json";

pub(crate) struct TransportSession {
    pub to_agent_tx: mpsc::Sender<String>,
    pub from_agent_rx: Arc<Mutex<mpsc::UnboundedReceiver<String>>>,
    pub handle: tokio::task::JoinHandle<()>,
}

pub(crate) fn accepts_mime_type(request: &Request<Body>, mime_type: &str) -> bool {
    request
        .headers()
@@ -38,16 +33,6 @@ pub(crate) fn accepts_mime_type(request: &Request<Body>, mime_type: &str) -> boo
        .is_some_and(|accept| accept.contains(mime_type))
}

pub(crate) fn accepts_json_and_sse(request: &Request<Body>) -> bool {
    request
        .headers()
        .get(axum::http::header::ACCEPT)
        .and_then(|v| v.to_str().ok())
        .is_some_and(|accept| {
            accept.contains(JSON_MIME_TYPE) && accept.contains(EVENT_STREAM_MIME_TYPE)
        })
}

pub(crate) fn content_type_is_json(request: &Request<Body>) -> bool {
    request
        .headers()
@@ -56,15 +41,15 @@ pub(crate) fn content_type_is_json(request: &Request<Body>) -> bool {
        .is_some_and(|ct| ct.starts_with(JSON_MIME_TYPE))
}

pub(crate) fn get_session_id(request: &Request<Body>) -> Option<String> {
pub(crate) fn header_value(request: &Request<Body>, name: &str) -> Option<String> {
    request
        .headers()
        .get(HEADER_SESSION_ID)
        .get(name)
        .and_then(|v| v.to_str().ok())
        .map(|s| s.to_string())
}

pub(crate) fn is_jsonrpc_request(value: &Value) -> bool {
pub(crate) fn is_jsonrpc_request_with_id(value: &Value) -> bool {
    value.get("method").is_some() && value.get("id").is_some()
}
@@ -73,21 +58,35 @@ pub(crate) fn is_jsonrpc_notification(value: &Value) -> bool {
}

pub(crate) fn is_jsonrpc_response(value: &Value) -> bool {
    value.get("id").is_some() && (value.get("result").is_some() || value.get("error").is_some())
    value.get("id").is_some()
        && value.get("method").is_none()
        && (value.get("result").is_some() || value.get("error").is_some())
}

pub(crate) fn is_initialize_request(value: &Value) -> bool {
    value.get("method").is_some_and(|m| m == "initialize") && value.get("id").is_some()
}

/// Methods that are scoped to a session and require an Acp-Session-Id header.
pub(crate) fn method_requires_session_header(method: &str) -> bool {
    matches!(
        method,
        "session/prompt"
            | "session/cancel"
            | "session/load"
            | "session/set_mode"
            | "session/set_model"
    )
}

async fn handle_get(
    ws_upgrade: Result<WebSocketUpgrade, WebSocketUpgradeRejection>,
    State(state): State<(Arc<http::HttpState>, Arc<websocket::WsState>)>,
    State(state): State<Arc<connection::ConnectionRegistry>>,
    request: Request<Body>,
) -> Response {
    match ws_upgrade {
        Ok(ws) => websocket::handle_get(state.1, ws).await,
        Err(_) => http::handle_get(state.0, request).await,
        Ok(ws) => websocket::handle_ws_upgrade(state, ws).await,
        Err(_) => http::handle_get(state, request).await,
    }
}
@@ -96,8 +95,7 @@ async fn health() -> &'static str {
}

pub fn create_router(server: Arc<AcpServer>) -> Router {
    let http_state = Arc::new(http::HttpState::new(server.clone()));
    let ws_state = Arc::new(websocket::WsState::new(server));
    let registry = Arc::new(connection::ConnectionRegistry::new(server));

    let cors = CorsLayer::new()
        .allow_origin(Any)
@@ -105,24 +103,23 @@ pub fn create_router(server: Arc<AcpServer>) -> Router {
        .allow_headers([
            header::CONTENT_TYPE,
            header::ACCEPT,
            HEADER_SESSION_ID.parse().unwrap(),
            HeaderName::from_static("acp-connection-id"),
            HeaderName::from_static("acp-session-id"),
            header::SEC_WEBSOCKET_VERSION,
            header::SEC_WEBSOCKET_KEY,
            header::CONNECTION,
            header::UPGRADE,
        ])
        .expose_headers([
            HeaderName::from_static("acp-connection-id"),
            HeaderName::from_static("acp-session-id"),
        ]);

    Router::new()
        .route("/health", get(health))
        .route("/status", get(health))
        .route(
            "/acp",
            post(http::handle_post).with_state(http_state.clone()),
        )
        .route(
            "/acp",
            get(handle_get).with_state((http_state.clone(), ws_state)),
        )
        .route("/acp", delete(http::handle_delete).with_state(http_state))
        .route("/acp", post(http::handle_post).with_state(registry.clone()))
        .route("/acp", get(handle_get).with_state(registry.clone()))
        .route("/acp", delete(http::handle_delete).with_state(registry))
        .layer(cors)
}
@@ -1,75 +1,29 @@
use anyhow::Result;
use std::sync::Arc;

use axum::{
    extract::ws::{Message, WebSocket, WebSocketUpgrade},
    http::StatusCode,
    http::{HeaderValue, StatusCode},
    response::{IntoResponse, Response},
};
use futures::{SinkExt, StreamExt};
use std::{collections::HashMap, sync::Arc};
use tokio::sync::{mpsc, Mutex, RwLock};
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
use tracing::{debug, error, info, warn};
use tracing::{debug, error, info, trace, warn};

use super::{TransportSession, HEADER_SESSION_ID};
use crate::acp::adapters::{ReceiverToAsyncRead, SenderToAsyncWrite};
use crate::acp::server_factory::AcpServer;
use super::connection::ConnectionRegistry;
use super::HEADER_CONNECTION_ID;

pub(crate) struct WsState {
    server: Arc<AcpServer>,
    // Keyed by acp_session_id: a connection-scoped UUID serving many Goose sessions.
    sessions: RwLock<HashMap<String, TransportSession>>,
}

impl WsState {
    pub fn new(server: Arc<AcpServer>) -> Self {
        Self {
            server,
            sessions: RwLock::new(HashMap::new()),
        }
    }

    async fn create_connection(&self) -> Result<String> {
        let (to_agent_tx, to_agent_rx) = mpsc::channel::<String>(256);
        let (from_agent_tx, from_agent_rx) = mpsc::unbounded_channel::<String>();

        let agent = self.server.create_agent().await?;

        let acp_session_id = uuid::Uuid::new_v4().to_string();

        let read_stream = ReceiverToAsyncRead::new(to_agent_rx);
        let write_stream = SenderToAsyncWrite::new(from_agent_tx);
        let fut =
            crate::acp::server::serve(agent, read_stream.compat(), write_stream.compat_write());
        let handle = tokio::spawn(async move {
            if let Err(e) = fut.await {
                error!("ACP WebSocket session error: {}", e);
            }
        });

        self.sessions.write().await.insert(
            acp_session_id.clone(),
            TransportSession {
                to_agent_tx,
                from_agent_rx: Arc::new(Mutex::new(from_agent_rx)),
                handle,
            },
        );

        info!(acp_session_id = %acp_session_id, "WebSocket connection created");
        Ok(acp_session_id)
    }

    async fn remove_connection(&self, acp_session_id: &str) {
        if let Some(session) = self.sessions.write().await.remove(acp_session_id) {
            session.handle.abort();
            info!(acp_session_id = %acp_session_id, "WebSocket connection removed");
        }
    }
}

pub(crate) async fn handle_get(state: Arc<WsState>, ws: WebSocketUpgrade) -> Response {
    let acp_session_id = match state.create_connection().await {
        Ok(id) => id,
/// GET /acp with `Upgrade: websocket`
///
/// Creates a new connection (same lifecycle as Streamable HTTP), upgrades to a
/// WebSocket, and runs a bidirectional message loop. The client still sends
/// `initialize` as the first WS text frame — unlike the HTTP path, the
/// initialize response is streamed back over the same WebSocket rather than
/// returned synchronously.
pub(crate) async fn handle_ws_upgrade(
    registry: Arc<ConnectionRegistry>,
    ws: WebSocketUpgrade,
) -> Response {
    let (connection_id, connection) = match registry.create_connection().await {
        Ok(pair) => pair,
        Err(e) => {
            error!("Failed to create WebSocket connection: {}", e);
            return (
@@ -80,80 +34,102 @@ pub(crate) async fn handle_get(state: Arc<WsState>, ws: WebSocketUpgrade) -> Res
        }
    };

    let mut response = ws.on_upgrade({
        let acp_session_id = acp_session_id.clone();
        move |socket| handle_ws(socket, state, acp_session_id)
    // WebSocket does not need the synchronous initialize split — start the
    // broadcast fan-out immediately so the WS sink reads from the same stream
    // of server→client messages as any HTTP SSE subscribers would.
    connection.start_fanout().await;

    let conn_id_for_handler = connection_id.clone();
    let registry_for_handler = registry.clone();
    let mut response = ws.on_upgrade(move |socket| async move {
        run_ws(
            socket,
            registry_for_handler,
            conn_id_for_handler,
            connection,
        )
        .await
    });
    response
        .headers_mut()
        .insert(HEADER_SESSION_ID, acp_session_id.parse().unwrap());

    if let Ok(v) = HeaderValue::from_str(&connection_id) {
        response.headers_mut().insert(HEADER_CONNECTION_ID, v);
    }
    info!(connection_id = %connection_id, "WebSocket connection created");
    response
}

pub(crate) async fn handle_ws(socket: WebSocket, state: Arc<WsState>, acp_session_id: String) {
async fn run_ws(
    socket: WebSocket,
    registry: Arc<ConnectionRegistry>,
    connection_id: String,
    connection: Arc<super::connection::Connection>,
) {
    let (mut ws_tx, mut ws_rx) = socket.split();
    let (replay, mut outbound_rx) = connection.subscribe_with_replay().await;

    let (to_agent, from_agent) = {
        let sessions = state.sessions.read().await;
        match sessions.get(&acp_session_id) {
            Some(session) => (session.to_agent_tx.clone(), session.from_agent_rx.clone()),
            None => {
                error!(acp_session_id = %acp_session_id, "Session not found after creation");
                return;
    debug!(connection_id = %connection_id, "Starting WebSocket message loop");

    for text in replay {
        trace!(connection_id = %connection_id, payload = %text, "Agent → Client (replay): {} bytes", text.len());
        if ws_tx.send(Message::Text(text.into())).await.is_err() {
            error!(connection_id = %connection_id, "WebSocket send failed during replay");
            if let Some(conn) = registry.remove(&connection_id).await {
                conn.shutdown().await;
            }
            return;
        }
    };

    debug!(acp_session_id = %acp_session_id, "Starting bidirectional message loop");

    let mut from_agent_rx = from_agent.lock().await;
    }

    loop {
        tokio::select! {
            Some(msg_result) = ws_rx.next() => {
            msg_result = ws_rx.next() => {
                match msg_result {
                    Ok(Message::Text(text)) => {
                    Some(Ok(Message::Text(text))) => {
                        let text_str = text.to_string();
                        debug!(acp_session_id = %acp_session_id, "Client → Agent: {} bytes", text_str.len());
                        if let Err(e) = to_agent.send(text_str).await {
                            error!(acp_session_id = %acp_session_id, "Failed to send to agent: {}", e);
                        trace!(connection_id = %connection_id, payload = %text_str, "Client → Agent: {} bytes", text_str.len());
                        if connection.to_agent_tx.send(text_str).await.is_err() {
                            error!(connection_id = %connection_id, "Agent channel closed");
                            break;
                        }
                    }
                    Ok(Message::Close(frame)) => {
                        debug!(acp_session_id = %acp_session_id, "Client closed connection: {:?}", frame);
                    Some(Ok(Message::Close(frame))) => {
                        debug!(connection_id = %connection_id, "Client closed connection: {:?}", frame);
                        break;
                    }
                    Ok(Message::Ping(_)) | Ok(Message::Pong(_)) => {
                        // Axum handles ping/pong automatically
                    Some(Ok(Message::Ping(_))) | Some(Ok(Message::Pong(_))) => continue,
                    Some(Ok(Message::Binary(_))) => {
                        warn!(connection_id = %connection_id, "Ignoring binary message (ACP uses text)");
                        continue;
                    }
                    Ok(Message::Binary(_)) => {
                        warn!(acp_session_id = %acp_session_id, "Ignoring binary message (ACP uses text)");
                        continue;
                    }
                    Err(e) => {
                        error!(acp_session_id = %acp_session_id, "WebSocket error: {}", e);
                    Some(Err(e)) => {
                        error!(connection_id = %connection_id, "WebSocket error: {}", e);
                        break;
                    }
                    None => break,
                }
            }

            Some(text) = from_agent_rx.recv() => {
                debug!(acp_session_id = %acp_session_id, "Agent → Client: {} bytes", text.len());
                if let Err(e) = ws_tx.send(Message::Text(text.into())).await {
                    error!(acp_session_id = %acp_session_id, "Failed to send to client: {}", e);
                    break;
            recv = outbound_rx.recv() => {
                match recv {
                    Ok(text) => {
                        trace!(connection_id = %connection_id, payload = %text, "Agent → Client: {} bytes", text.len());
                        if ws_tx.send(Message::Text(text.into())).await.is_err() {
                            error!(connection_id = %connection_id, "WebSocket send failed");
                            break;
                        }
                    }
                    Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => {
                        warn!(connection_id = %connection_id, "WebSocket lagged {} messages", n);
                        continue;
                    }
                    Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
                }
            }

            else => {
                debug!(acp_session_id = %acp_session_id, "Both channels closed");
                break;
            }
        }
    }

    debug!(acp_session_id = %acp_session_id, "Cleaning up connection");
    state.remove_connection(&acp_session_id).await;
    debug!(connection_id = %connection_id, "Cleaning up WebSocket connection");
    if let Some(conn) = registry.remove(&connection_id).await {
        conn.shutdown().await;
    }
}
@@ -21,6 +21,7 @@ use crate::agents::extension_manager::{
    get_parameter_names, ExtensionManager, ExtensionManagerCapabilities,
};
use crate::agents::final_output_tool::{FINAL_OUTPUT_CONTINUATION_MESSAGE, FINAL_OUTPUT_TOOL_NAME};
use crate::agents::platform_extensions::summon::discover_filesystem_sources;
use crate::agents::platform_extensions::MANAGE_EXTENSIONS_TOOL_NAME_COMPLETE;
use crate::agents::platform_tools::PLATFORM_MANAGE_SCHEDULE_TOOL_NAME;
use crate::agents::prompt_manager::PromptManager;
@@ -372,10 +373,17 @@ impl Agent {
        }
        let initial_messages = conversation.messages().clone();

        let (tools, toolshim_tools, system_prompt) = self
        let (tools, toolshim_tools, mut system_prompt) = self
            .prepare_tools_and_prompt(session_id, working_dir)
            .await?;

        if let Some(instructions) = self.resolve_at_mention(&conversation, working_dir) {
            system_prompt = format!(
                "{}\n\n# Instructions from active agent:\n\n{}",
                system_prompt, instructions
            );
        }

        let goose_mode = *self.current_goose_mode.lock().await;

        if goose_mode == GooseMode::SmartApprove {
@@ -409,6 +417,30 @@ impl Agent {
        })
    }

    fn resolve_at_mention(
        &self,
        conversation: &Conversation,
        working_dir: &std::path::Path,
    ) -> Option<String> {
        let last_message = conversation.messages().last()?;
        if last_message.role == rmcp::model::Role::User {
            let after_at = last_message
                .as_concat_text()
                .trim()
                .strip_prefix('@')?
                .to_lowercase();

            for source in discover_filesystem_sources(working_dir) {
                let name = source.name.to_lowercase();
                let is_match = after_at == name || after_at.starts_with(&format!("{} ", name));
                if is_match && !source.content.is_empty() {
                    return Some(source.content.clone());
                }
            }
        }
        None
    }

    async fn categorize_tools(
        &self,
        response: &Message,
@@ -140,8 +140,8 @@ impl Agent {
    }

    async fn handle_skills_command(&self, session_id: &str) -> Result<Option<Message>> {
        use super::platform_extensions::skills::list_installed_skills;
        use super::platform_extensions::SourceKind;
        use crate::skills::list_installed_skills;
        use goose_sdk::custom_requests::SourceType;

        let working_dir = self
            .config
@@ -153,7 +153,7 @@ impl Agent {
        let sources = list_installed_skills(working_dir.as_deref());
        let skills: Vec<_> = sources
            .iter()
            .filter(|s| matches!(s.kind, SourceKind::Skill | SourceKind::BuiltinSkill))
            .filter(|s| matches!(s.source_type, SourceType::Skill | SourceType::BuiltinSkill))
            .collect();

        let mut output = String::new();
@@ -165,7 +165,7 @@ impl Agent {
        } else {
            output.push_str(&format!("**Installed skills ({}):**\n\n", skills.len()));
            for skill in &skills {
                let kind_label = if skill.kind == SourceKind::BuiltinSkill {
                let kind_label = if skill.source_type == SourceType::BuiltinSkill {
                    " *(builtin)*"
                } else {
                    ""
@@ -1,5 +1,4 @@
mod agent;
pub(crate) mod builtin_skills;
pub mod container;
pub mod execute_commands;
pub mod extension;
@@ -166,27 +166,34 @@ pub struct ShellOutput {
/// source the user's profile and recover the full PATH.
#[cfg(not(windows))]
fn resolve_login_shell_path() -> Option<String> {
    use process_wrap::std::{CommandWrap, ProcessSession};

    let shell = unix_shell();

    let mut child = if is_flatpak() {
        flatpak_spawn_process()
            .args([&shell, "-l", "-i", "-c", "echo $PATH"])
            .stdin(Stdio::null())
            .stdout(Stdio::piped())
            .stderr(Stdio::null())
            .spawn()
            .ok()?
    // Build the command, varying only the flatpak vs direct invocation.
    let mut cmd = if is_flatpak() {
        let mut c = flatpak_spawn_process();
        c.args([&shell, "-l", "-i", "-c", "echo $PATH"]);
        CommandWrap::from(c)
    } else {
        std::process::Command::new(&shell)
            .args(["-l", "-i", "-c", "echo $PATH"])
            .stdin(Stdio::null())
            .stdout(Stdio::piped())
            .stderr(Stdio::null())
            .spawn()
            .ok()?
        let mut c = std::process::Command::new(&shell);
        c.args(["-l", "-i", "-c", "echo $PATH"]);
        CommandWrap::from(c)
    };

    let mut stdout = child.stdout.take()?;
    cmd.command_mut()
        .stdin(Stdio::null())
        .stdout(Stdio::piped())
        .stderr(Stdio::null());

    // Spawn in a new session so that bash's interactive job-control setup
    // (TIOCSPGRP) cannot steal the terminal foreground from goose, which
    // would cause goose to receive SIGTTIN and be suspended on startup.
    cmd.wrap(ProcessSession);

    let mut child = cmd.spawn().ok()?;

    let mut stdout = child.stdout().take()?;
    let (tx, rx) = std::sync::mpsc::channel();
    std::thread::spawn(move || {
        let mut buf = Vec::new();
@@ -197,7 +204,11 @@ fn resolve_login_shell_path() -> Option<String> {
    });

    match rx.recv_timeout(Duration::from_secs(5)) {
        Ok(buf) if child.wait().is_ok_and(|s| s.success()) => {
        Ok(buf)
            if child
                .wait()
                .is_ok_and(|s: std::process::ExitStatus| s.success()) =>
        {
            // Take the last non-empty line — interactive shells may emit
            // extra output from profile scripts before our echo.
            String::from_utf8_lossy(&buf)
@@ -209,7 +220,6 @@ fn resolve_login_shell_path() -> Option<String> {
        }
        _ => {
            let _ = child.kill();
            let _ = child.wait();
            None
        }
    }
@@ -6,74 +6,16 @@ pub mod code_execution;
pub mod developer;
pub mod ext_manager;
pub mod orchestrator;
pub mod skills;
pub mod summarize;
pub mod summon;
pub mod todo;
pub mod tom;

use std::collections::HashMap;
use std::path::PathBuf;

use crate::agents::mcp_client::McpClientTrait;
use crate::session::Session;
use once_cell::sync::Lazy;
use serde::Deserialize;

#[derive(Debug, Clone)]
pub struct Source {
    pub name: String,
    pub kind: SourceKind,
    pub description: String,
    pub path: PathBuf,
    pub content: String,
    pub supporting_files: Vec<PathBuf>,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum SourceKind {
    Subrecipe,
    Recipe,
    Skill,
    Agent,
    BuiltinSkill,
}

impl std::fmt::Display for SourceKind {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            SourceKind::Subrecipe => write!(f, "subrecipe"),
            SourceKind::Recipe => write!(f, "recipe"),
            SourceKind::Skill => write!(f, "skill"),
            SourceKind::Agent => write!(f, "agent"),
            SourceKind::BuiltinSkill => write!(f, "builtin skill"),
        }
    }
}

impl Source {
    pub fn to_load_text(&self) -> String {
        format!(
            "## {} ({})\n\n{}\n\n### Content\n\n{}",
            self.name, self.kind, self.description, self.content
        )
    }
}

pub fn parse_frontmatter<T: for<'de> Deserialize<'de>>(
    content: &str,
) -> Result<Option<(T, String)>, serde_yaml::Error> {
    let parts: Vec<&str> = content.split("---").collect();
    if parts.len() < 3 {
        return Ok(None);
    }

    let yaml_content = parts[1].trim();
    let metadata: T = serde_yaml::from_str(yaml_content)?;

    let body = parts[2..].join("---").trim().to_string();
    Ok(Some((metadata, body)))
}

pub use ext_manager::MANAGE_EXTENSIONS_TOOL_NAME_COMPLETE;
@@ -248,15 +190,15 @@ pub static PLATFORM_EXTENSIONS: Lazy<HashMap<&'static str, PlatformExtensionDef>
    );

    map.insert(
        skills::EXTENSION_NAME,
        crate::skills::EXTENSION_NAME,
        PlatformExtensionDef {
            name: skills::EXTENSION_NAME,
            name: crate::skills::EXTENSION_NAME,
            display_name: "Skills",
            description: "Discover and provide skill instructions from filesystem and builtins",
            default_enabled: true,
            unprefixed_tools: true,
            hidden: false,
            client_factory: |ctx| Box::new(skills::SkillsClient::new(ctx).unwrap()),
            client_factory: |ctx| Box::new(crate::skills::SkillsClient::new(ctx).unwrap()),
        },
    );
@@ -1,4 +1,3 @@
use super::{parse_frontmatter, Source, SourceKind};
use crate::agents::extension::PlatformExtensionContext;
use crate::agents::mcp_client::{Error, McpClientTrait};
use crate::agents::subagent_handler::{run_subagent_task, OnMessageCallback, SubagentRunParams};
@@ -13,8 +12,10 @@ use crate::recipe::local_recipes::load_local_recipe_file;
use crate::recipe::{Recipe, Settings, RECIPE_FILE_EXTENSIONS};
use crate::session::extension_data::EnabledExtensionsState;
use crate::session::SessionType;
use crate::sources::parse_frontmatter;
use anyhow::Result;
use async_trait::async_trait;
use goose_sdk::custom_requests::{SourceEntry, SourceType};
use rmcp::model::{
    CallToolResult, Content, Implementation, InitializeResult, JsonObject, ListToolsResult, Meta,
    ServerCapabilities, ServerNotification, Tool,
@@ -33,11 +34,11 @@ use tracing::{info, warn};

pub static EXTENSION_NAME: &str = "summon";

fn kind_plural(kind: SourceKind) -> &'static str {
fn kind_plural(kind: SourceType) -> &'static str {
    match kind {
        SourceKind::Subrecipe => "Subrecipes",
        SourceKind::Recipe => "Recipes",
        SourceKind::Agent => "Agents",
        SourceType::Subrecipe => "Subrecipes",
        SourceType::Recipe => "Recipes",
        SourceType::Agent => "Agents",
        _ => "Other",
    }
}
@@ -95,7 +96,7 @@ struct AgentMetadata {
    model: Option<String>,
}

fn parse_agent_content(content: &str, path: &Path) -> Option<Source> {
fn parse_agent_content(content: &str, path: &Path) -> Option<SourceEntry> {
    let (metadata, body): (AgentMetadata, String) = match parse_frontmatter(content) {
        Ok(Some(parsed)) => parsed,
        Ok(None) => return None,
@@ -119,20 +120,22 @@ fn parse_agent_content(content: &str, path: &Path) -> Option<Source> {
        format!("Agent{}", model_info)
    });

    Some(Source {
    Some(SourceEntry {
        source_type: SourceType::Agent,
        name: metadata.name,
        kind: SourceKind::Agent,
        description,
        path: path.to_path_buf(),
        content: body,
        directory: path.to_string_lossy().into_owned(),
        global: false,
        supporting_files: Vec::new(),
        properties: std::collections::HashMap::new(),
    })
}

fn scan_recipes_from_dir(
    dir: &Path,
    kind: SourceKind,
    sources: &mut Vec<Source>,
    kind: SourceType,
    sources: &mut Vec<SourceEntry>,
    seen: &mut std::collections::HashSet<String>,
) {
    let entries = match std::fs::read_dir(dir) {
@ -164,13 +167,15 @@ fn scan_recipes_from_dir(
|
|||
match Recipe::from_file_path(&path) {
|
||||
Ok(recipe) => {
|
||||
seen.insert(name.clone());
|
||||
sources.push(Source {
|
||||
sources.push(SourceEntry {
|
||||
source_type: kind,
|
||||
name,
|
||||
kind,
|
||||
description: recipe.description.clone(),
|
||||
path: path.clone(),
|
||||
content: recipe.instructions.clone().unwrap_or_default(),
|
||||
directory: path.to_string_lossy().into_owned(),
|
||||
global: false,
|
||||
supporting_files: Vec::new(),
|
||||
properties: std::collections::HashMap::new(),
|
||||
});
|
||||
}
|
||||
Err(e) => {
|
||||
|
|
@ -182,7 +187,7 @@ fn scan_recipes_from_dir(
|
|||
|
||||
fn scan_agents_from_dir(
|
||||
dir: &Path,
|
||||
sources: &mut Vec<Source>,
|
||||
sources: &mut Vec<SourceEntry>,
|
||||
seen: &mut std::collections::HashSet<String>,
|
||||
) {
|
||||
let entries = match std::fs::read_dir(dir) {
|
||||
|
|
@ -218,8 +223,8 @@ fn scan_agents_from_dir(
|
|||
}
|
||||
}
|
||||
|
||||
fn discover_filesystem_sources(working_dir: &Path) -> Vec<Source> {
|
||||
let mut sources: Vec<Source> = Vec::new();
|
||||
pub fn discover_filesystem_sources(working_dir: &Path) -> Vec<SourceEntry> {
|
||||
let mut sources: Vec<SourceEntry> = Vec::new();
|
||||
let mut seen: std::collections::HashSet<String> = std::collections::HashSet::new();
|
||||
|
||||
let home = dirs::home_dir();
|
||||
|
|
@ -240,6 +245,7 @@ fn discover_filesystem_sources(working_dir: &Path) -> Vec<Source> {
|
|||
})
|
||||
.chain(
|
||||
[
|
||||
home.as_ref().map(|h| h.join(".goose/recipes")),
|
||||
Some(config.join("recipes")),
|
||||
home.as_ref().map(|h| h.join(".agents/recipes")),
|
||||
]
|
||||
|
|
@ -255,6 +261,7 @@ fn discover_filesystem_sources(working_dir: &Path) -> Vec<Source> {
|
|||
];
|
||||
|
||||
let global_agent_dirs: Vec<PathBuf> = [
|
||||
home.as_ref().map(|h| h.join(".goose/agents")),
|
||||
home.as_ref().map(|h| h.join(".agents/agents")),
|
||||
Some(config.join("agents")),
|
||||
home.as_ref().map(|h| h.join(".claude/agents")),
|
||||
|
|
@ -264,7 +271,7 @@ fn discover_filesystem_sources(working_dir: &Path) -> Vec<Source> {
|
|||
.collect();
|
||||
|
||||
for dir in local_recipe_dirs {
|
||||
scan_recipes_from_dir(&dir, SourceKind::Recipe, &mut sources, &mut seen);
|
||||
scan_recipes_from_dir(&dir, SourceType::Recipe, &mut sources, &mut seen);
|
||||
}
|
||||
|
||||
for dir in local_agent_dirs {
|
||||
|
|
@ -272,7 +279,7 @@ fn discover_filesystem_sources(working_dir: &Path) -> Vec<Source> {
|
|||
}
|
||||
|
||||
for dir in global_recipe_dirs {
|
||||
scan_recipes_from_dir(&dir, SourceKind::Recipe, &mut sources, &mut seen);
|
||||
scan_recipes_from_dir(&dir, SourceType::Recipe, &mut sources, &mut seen);
|
||||
}
|
||||
|
||||
for dir in global_agent_dirs {
|
||||
|
|
@ -313,7 +320,7 @@ fn is_session_id(s: &str) -> bool {
|
|||
pub struct SummonClient {
|
||||
info: InitializeResult,
|
||||
context: PlatformExtensionContext,
|
||||
source_cache: Mutex<Option<(Instant, PathBuf, Vec<Source>)>>,
|
||||
source_cache: Mutex<Option<(Instant, PathBuf, Vec<SourceEntry>)>>,
|
||||
background_tasks: Mutex<HashMap<String, BackgroundTask>>,
|
||||
completed_tasks: Mutex<HashMap<String, CompletedTask>>,
|
||||
notification_subscribers: Arc<Mutex<Vec<mpsc::Sender<ServerNotification>>>>,
|
||||
|
|
@ -475,11 +482,11 @@ impl SummonClient {
|
|||
.unwrap_or_else(|| std::env::current_dir().unwrap_or_default())
|
||||
}
|
||||
|
||||
async fn get_sources(&self, session_id: &str, working_dir: &Path) -> Vec<Source> {
|
||||
async fn get_sources(&self, session_id: &str, working_dir: &Path) -> Vec<SourceEntry> {
|
||||
let fs_sources = self.get_filesystem_sources(working_dir).await;
|
||||
|
||||
let mut seen: std::collections::HashSet<String> = std::collections::HashSet::new();
|
||||
let mut sources: Vec<Source> = Vec::new();
|
||||
let mut sources: Vec<SourceEntry> = Vec::new();
|
||||
|
||||
self.add_subrecipes(session_id, &mut sources, &mut seen)
|
||||
.await;
|
||||
|
|
@ -491,11 +498,11 @@ impl SummonClient {
|
|||
}
|
||||
}
|
||||
|
||||
sources.sort_by(|a, b| (&a.kind, &a.name).cmp(&(&b.kind, &b.name)));
|
||||
sources.sort_by(|a, b| (&a.source_type, &a.name).cmp(&(&b.source_type, &b.name)));
|
||||
sources
|
||||
}
|
||||
|
||||
async fn get_filesystem_sources(&self, working_dir: &Path) -> Vec<Source> {
|
||||
async fn get_filesystem_sources(&self, working_dir: &Path) -> Vec<SourceEntry> {
|
||||
let mut cache = self.source_cache.lock().await;
|
||||
if let Some((cached_at, cached_dir, sources)) = cache.as_ref() {
|
||||
if cached_dir == working_dir && cached_at.elapsed() < Duration::from_secs(60) {
|
||||
|
|
@ -512,11 +519,11 @@ impl SummonClient {
|
|||
session_id: &str,
|
||||
name: &str,
|
||||
working_dir: &Path,
|
||||
) -> Result<Option<Source>, String> {
|
||||
) -> Result<Option<SourceEntry>, String> {
|
||||
let sources = self.get_sources(session_id, working_dir).await;
|
||||
|
||||
if let Some(mut source) = sources.iter().find(|s| s.name == name).cloned() {
|
||||
if source.kind == SourceKind::Subrecipe && source.content.is_empty() {
|
||||
if source.source_type == SourceType::Subrecipe && source.content.is_empty() {
|
||||
source.content = self.load_subrecipe_content(session_id, &source.name).await;
|
||||
}
|
||||
return Ok(Some(source));
|
||||
|
|
@ -555,14 +562,14 @@ impl SummonClient {
|
|||
}
|
||||
}
|
||||
|
||||
fn discover_filesystem_sources(&self, working_dir: &Path) -> Vec<Source> {
|
||||
fn discover_filesystem_sources(&self, working_dir: &Path) -> Vec<SourceEntry> {
|
||||
discover_filesystem_sources(working_dir)
|
||||
}
|
||||
|
||||
async fn add_subrecipes(
|
||||
&self,
|
||||
session_id: &str,
|
||||
sources: &mut Vec<Source>,
|
||||
sources: &mut Vec<SourceEntry>,
|
||||
seen: &mut std::collections::HashSet<String>,
|
||||
) {
|
||||
let session = match self
|
||||
|
|
@ -588,13 +595,15 @@ impl SummonClient {
|
|||
|
||||
let description = self.build_subrecipe_description(sr).await;
|
||||
|
||||
sources.push(Source {
|
||||
sources.push(SourceEntry {
|
||||
source_type: SourceType::Subrecipe,
|
||||
name: sr.name.clone(),
|
||||
kind: SourceKind::Subrecipe,
|
||||
description,
|
||||
path: PathBuf::from(&sr.path),
|
||||
content: String::new(),
|
||||
directory: sr.path.clone(),
|
||||
global: false,
|
||||
supporting_files: Vec::new(),
|
||||
properties: std::collections::HashMap::new(),
|
||||
});
|
||||
}
|
||||
}
|
||||
|
|
@ -839,8 +848,8 @@ impl SummonClient {
|
|||
}
|
||||
}
|
||||
|
||||
for kind in [SourceKind::Subrecipe, SourceKind::Recipe, SourceKind::Agent] {
|
||||
let kind_sources: Vec<_> = sources.iter().filter(|s| s.kind == kind).collect();
|
||||
for kind in [SourceType::Subrecipe, SourceType::Recipe, SourceType::Agent] {
|
||||
let kind_sources: Vec<_> = sources.iter().filter(|s| s.source_type == kind).collect();
|
||||
if !kind_sources.is_empty() {
|
||||
output.push_str(&format!("\n{}:\n", kind_plural(kind)));
|
||||
for source in kind_sources {
|
||||
|
|
@ -873,7 +882,7 @@ impl SummonClient {
|
|||
|
||||
let output = format!(
|
||||
"# Loaded: {} ({})\n\n{}\n\n---\nThis knowledge is now available in your context.",
|
||||
source.name, source.kind, content
|
||||
source.name, source.source_type, content
|
||||
);
|
||||
|
||||
Ok(vec![Content::text(output)])
|
||||
|
|
@ -1078,16 +1087,16 @@ impl SummonClient {
|
|||
.await?
|
||||
.ok_or_else(|| format!("Source '{}' not found", source_name))?;
|
||||
|
||||
let mut recipe = match source.kind {
|
||||
SourceKind::Recipe | SourceKind::Subrecipe => {
|
||||
let mut recipe = match source.source_type {
|
||||
SourceType::Recipe | SourceType::Subrecipe => {
|
||||
self.build_recipe_from_source(&source, params, session_id)
|
||||
.await?
|
||||
}
|
||||
SourceKind::Agent => self.build_recipe_from_agent(&source, params)?,
|
||||
SourceType::Agent => self.build_recipe_from_agent(&source, params)?,
|
||||
_ => {
|
||||
return Err(format!(
|
||||
"Source '{}' has kind '{}' which cannot be delegated from summon",
|
||||
source_name, source.kind
|
||||
source_name, source.source_type
|
||||
))
|
||||
}
|
||||
};
|
||||
|
|
@ -1106,7 +1115,7 @@ impl SummonClient {
|
|||
|
||||
async fn build_recipe_from_source(
|
||||
&self,
|
||||
source: &Source,
|
||||
source: &SourceEntry,
|
||||
params: &DelegateParams,
|
||||
session_id: &str,
|
||||
) -> Result<Recipe, String> {
|
||||
|
|
@ -1117,7 +1126,7 @@ impl SummonClient {
|
|||
.await
|
||||
.map_err(|e| format!("Failed to get session: {}", e))?;
|
||||
|
||||
if source.kind == SourceKind::Subrecipe {
|
||||
if source.source_type == SourceType::Subrecipe {
|
||||
let sub_recipes = session.recipe.as_ref().and_then(|r| r.sub_recipes.as_ref());
|
||||
|
||||
if let Some(sub_recipes) = sub_recipes {
|
||||
|
|
@ -1154,7 +1163,7 @@ impl SummonClient {
|
|||
}
|
||||
}
|
||||
|
||||
let recipe_file = load_local_recipe_file(source.path.to_str().unwrap_or(""))
|
||||
let recipe_file = load_local_recipe_file(&source.directory)
|
||||
.map_err(|e| format!("Failed to load recipe '{}': {}", source.name, e))?;
|
||||
|
||||
let param_values: Vec<(String, String)> = params
|
||||
|
|
@ -1184,13 +1193,13 @@ impl SummonClient {
|
|||
|
||||
fn build_recipe_from_agent(
|
||||
&self,
|
||||
source: &Source,
|
||||
source: &SourceEntry,
|
||||
params: &DelegateParams,
|
||||
) -> Result<Recipe, String> {
|
||||
let agent_content = if source.path.as_os_str().is_empty() {
|
||||
let agent_content = if source.directory.is_empty() {
|
||||
return Err("Agent source has no path".to_string());
|
||||
} else {
|
||||
std::fs::read_to_string(&source.path)
|
||||
std::fs::read_to_string(&source.directory)
|
||||
.map_err(|e| format!("Failed to read agent file: {}", e))?
|
||||
};
|
||||
|
||||
|
|
@ -1745,14 +1754,14 @@ You review code."#;
|
|||
|
||||
let recipe = sources
|
||||
.iter()
|
||||
.find(|s| s.name == "deploy" && s.kind == SourceKind::Recipe)
|
||||
.find(|s| s.name == "deploy" && s.source_type == SourceType::Recipe)
|
||||
.unwrap();
|
||||
assert_eq!(recipe.description, "Deploy to production");
|
||||
assert_eq!(recipe.content, "Run deploy steps");
|
||||
|
||||
let agent = sources
|
||||
.iter()
|
||||
.find(|s| s.name == "reviewer" && s.kind == SourceKind::Agent)
|
||||
.find(|s| s.name == "reviewer" && s.source_type == SourceType::Agent)
|
||||
.unwrap();
|
||||
assert_eq!(agent.description, "Code reviewer");
|
||||
assert!(agent.content.contains("You review code"));
|
||||
|
|
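The `SourceKind` → `SourceType` rename above preserves two small behaviors worth noting: the plural-heading mapping with a catch-all arm, and the first-wins dedup via a `seen` set in the `scan_*` helpers. A standalone sketch of both (the enum here is a hypothetical stand-in for the SDK type, not the real `goose_sdk` definition):

```rust
use std::collections::HashSet;

// Hypothetical mirror of the SourceType variants referenced in the diff.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum SourceType {
    Subrecipe,
    Recipe,
    Agent,
    Other,
}

// Same shape as the renamed kind_plural helper: map a source type to a
// plural section heading, with a catch-all for anything unlisted.
fn kind_plural(kind: SourceType) -> &'static str {
    match kind {
        SourceType::Subrecipe => "Subrecipes",
        SourceType::Recipe => "Recipes",
        SourceType::Agent => "Agents",
        _ => "Other",
    }
}

// First-wins dedup by name, mirroring the `seen` HashSet threaded through
// scan_recipes_from_dir / scan_agents_from_dir.
fn dedup_first_wins(names: &[&str]) -> Vec<String> {
    let mut seen: HashSet<String> = HashSet::new();
    let mut out = Vec::new();
    for name in names {
        // insert returns false when the name was already present.
        if seen.insert((*name).to_string()) {
            out.push((*name).to_string());
        }
    }
    out
}

fn main() {
    assert_eq!(kind_plural(SourceType::Recipe), "Recipes");
    assert_eq!(dedup_first_wins(&["deploy", "reviewer", "deploy"]), vec!["deploy", "reviewer"]);
}
```

Because local directories are scanned before global ones, this first-wins rule means a working-directory recipe shadows a same-named global one.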
@@ -77,6 +77,10 @@ pub struct DeclarativeProviderConfig {
     #[serde(default)]
     pub skip_canonical_filtering: bool,
+    #[serde(default, deserialize_with = "deserialize_non_empty_string")]
+    pub model_doc_link: Option<String>,
+    #[serde(default)]
+    pub setup_steps: Vec<String>,
     #[serde(default, deserialize_with = "deserialize_non_empty_string")]
     pub fast_model: Option<String>,
 }

@@ -233,6 +237,8 @@ pub fn create_custom_provider(
     env_vars: None,
     dynamic_models: None,
     skip_canonical_filtering: false,
+    model_doc_link: None,
+    setup_steps: vec![],
     fast_model: None,
 };

@@ -300,6 +306,8 @@ pub fn update_custom_provider(params: UpdateCustomProviderParams) -> Result<()>
     env_vars: existing_config.env_vars,
     dynamic_models: existing_config.dynamic_models,
     skip_canonical_filtering: existing_config.skip_canonical_filtering,
+    model_doc_link: existing_config.model_doc_link,
+    setup_steps: existing_config.setup_steps,
     fast_model: existing_config.fast_model.clone(),
 };

@@ -587,6 +595,33 @@ mod tests {
     serde_json::from_str(json).expect("groq.json should parse without env_vars");
     assert!(config.env_vars.is_none());
     assert!(config.dynamic_models.is_none());
+    assert!(config.model_doc_link.is_none());
+    assert!(config.setup_steps.is_empty());
 }

+#[test]
+fn test_nvidia_json_deserializes() {
+    let json = include_str!("../providers/declarative/nvidia.json");
+    let config: DeclarativeProviderConfig =
+        serde_json::from_str(json).expect("nvidia.json should parse");
+    assert_eq!(config.name, "nvidia");
+    assert_eq!(config.display_name, "NVIDIA");
+    assert!(matches!(config.engine, ProviderEngine::OpenAI));
+    assert_eq!(config.api_key_env, "NVIDIA_API_KEY");
+    assert_eq!(config.base_url, "https://integrate.api.nvidia.com/v1");
+    assert_eq!(config.catalog_provider_id, Some("nvidia".to_string()));
+    assert_eq!(config.dynamic_models, Some(true));
+    assert_eq!(config.supports_streaming, Some(true));
+    assert!(!config.skip_canonical_filtering);
+    assert_eq!(
+        config.model_doc_link,
+        Some("https://build.nvidia.com/models".to_string())
+    );
+    assert_eq!(config.setup_steps.len(), 4);
+
+    assert_eq!(config.models.len(), 1);
+    assert_eq!(config.models[0].name, "z-ai/glm-4.7");
+    assert_eq!(config.models[0].context_limit, 131072);
+}

 #[test]
@@ -2,6 +2,7 @@ use crate::config::Config;
 #[cfg(feature = "local-inference")]
 use crate::dictation::whisper::LOCAL_WHISPER_MODEL_CONFIG_KEY;
 use crate::providers::api_client::{ApiClient, AuthMethod};
+use crate::providers::openai::parse_openai_base_url;
 use anyhow::Result;
 use serde::{Deserialize, Serialize};
 #[cfg(feature = "local-inference")]

@@ -10,6 +11,8 @@ use std::time::Duration;
 use utoipa::ToSchema;

 const REQUEST_TIMEOUT: Duration = Duration::from_secs(30);
+const OPENAI_VERSIONLESS_TRANSCRIPTIONS_PATH: &str = "audio/transcriptions";
+type OpenAiDictationTarget = (String, Vec<(String, String)>, String);

 #[cfg(feature = "local-inference")]
 static LOCAL_TRANSCRIBER: once_cell::sync::Lazy<

@@ -179,7 +182,25 @@ pub async fn transcribe_local(audio_bytes: Vec<u8>) -> Result<String> {
     })?
 }

-fn build_api_client(provider: DictationProvider) -> Result<ApiClient> {
+fn openai_dictation_target(raw_url: &str) -> Result<OpenAiDictationTarget> {
+    let (host, query_params, has_v1) = parse_openai_base_url(raw_url)?;
+    let endpoint_path = if has_v1 {
+        "v1/audio/transcriptions".to_string()
+    } else {
+        OPENAI_VERSIONLESS_TRANSCRIPTIONS_PATH.to_string()
+    };
+    Ok((host, query_params, endpoint_path))
+}
+
+fn resolve_openai_base_url_target(raw_url: Option<&str>) -> Result<Option<OpenAiDictationTarget>> {
+    raw_url
+        .map(str::trim)
+        .filter(|raw_url| !raw_url.is_empty())
+        .map(openai_dictation_target)
+        .transpose()
+}
+
+fn build_api_client(provider: DictationProvider) -> Result<(ApiClient, String)> {
     let config = Config::global();
     let def = get_provider_def(provider);

@@ -188,14 +209,35 @@ fn build_api_client(provider: DictationProvider) -> Result<ApiClient> {
         anyhow::anyhow!("{} not configured", def.config_key)
     })?;

-    let base_url = if let Some(host_key) = def.host_key {
-        config
+    let (base_url, query_params, endpoint_path) = if provider == DictationProvider::OpenAI {
+        let openai_base_url = config.get_param::<String>("OPENAI_BASE_URL").ok();
+
+        if let Ok(host) = std::env::var("OPENAI_HOST") {
+            (host, vec![], def.endpoint_path.to_string())
+        } else if let Some(target) = resolve_openai_base_url_target(openai_base_url.as_deref())? {
+            target
+        } else if let Ok(host) = config.get_param::<String>("OPENAI_HOST") {
+            (host, vec![], def.endpoint_path.to_string())
+        } else {
+            (
+                def.default_base_url.to_string(),
+                vec![],
+                def.endpoint_path.to_string(),
+            )
+        }
+    } else if let Some(host_key) = def.host_key {
+        let base_url = config
             .get(host_key, false)
             .ok()
             .and_then(|v| v.as_str().map(|s| s.to_string()))
-            .unwrap_or_else(|| def.default_base_url.to_string())
+            .unwrap_or_else(|| def.default_base_url.to_string());
+        (base_url, vec![], def.endpoint_path.to_string())
     } else {
-        def.default_base_url.to_string()
+        (
+            def.default_base_url.to_string(),
+            vec![],
+            def.endpoint_path.to_string(),
+        )
     };

     let auth = match provider {

@@ -209,10 +251,14 @@ fn build_api_client(provider: DictationProvider) -> Result<ApiClient> {
         DictationProvider::Local => anyhow::bail!("Local provider should not use API client"),
     };

-    ApiClient::with_timeout(base_url, auth, REQUEST_TIMEOUT).map_err(|e| {
+    let mut client = ApiClient::with_timeout(base_url, auth, REQUEST_TIMEOUT).map_err(|e| {
         tracing::error!("Failed to create API client: {}", e);
         e
-    })
+    })?;
+    if !query_params.is_empty() {
+        client = client.with_query(query_params);
+    }
+    Ok((client, endpoint_path))
 }

 pub async fn transcribe_with_provider(

@@ -223,8 +269,7 @@ pub async fn transcribe_with_provider(
     extension: &str,
     mime_type: &str,
 ) -> Result<String> {
-    let client = build_api_client(provider)?;
-    let def = get_provider_def(provider);
+    let (client, endpoint_path) = build_api_client(provider)?;

     let part = reqwest::multipart::Part::bytes(audio_bytes)
         .file_name(format!("audio.{}", extension))

@@ -239,7 +284,7 @@ pub async fn transcribe_with_provider(
     .text(model_param, model_value);

     let response = client
-        .request(None, def.endpoint_path)
+        .request(None, &endpoint_path)
         .multipart_post(form)
         .await
         .map_err(|e| {

@@ -274,3 +319,50 @@ pub async fn transcribe_with_provider(

     Ok(text)
 }
+
+#[cfg(test)]
+mod tests {
+    use super::{
+        openai_dictation_target, resolve_openai_base_url_target,
+        OPENAI_VERSIONLESS_TRANSCRIPTIONS_PATH,
+    };
+
+    #[test]
+    fn openai_dictation_target_preserves_prefix_and_query_params() {
+        let (host, query_params, endpoint_path) = openai_dictation_target(
+            "https://user:pass@gateway.example.com/openai/v1?api-version=2024-02-01",
+        )
+        .unwrap();
+        assert_eq!(host, "https://user:pass@gateway.example.com/openai");
+        assert_eq!(
+            query_params,
+            vec![("api-version".to_string(), "2024-02-01".to_string())]
+        );
+        assert_eq!(endpoint_path, "v1/audio/transcriptions");
+    }
+
+    #[test]
+    fn openai_dictation_target_uses_versionless_endpoint_without_v1() {
+        let (host, query_params, endpoint_path) =
+            openai_dictation_target("https://gateway.example.com/custom/api").unwrap();
+        assert_eq!(host, "https://gateway.example.com/custom/api");
+        assert!(query_params.is_empty());
+        assert_eq!(endpoint_path, OPENAI_VERSIONLESS_TRANSCRIPTIONS_PATH);
+    }
+
+    #[test]
+    fn openai_dictation_target_keeps_v1_endpoint_for_bare_host() {
+        let (host, query_params, endpoint_path) =
+            openai_dictation_target("https://api.openai.com").unwrap();
+        assert_eq!(host, "https://api.openai.com");
+        assert!(query_params.is_empty());
+        assert_eq!(endpoint_path, "v1/audio/transcriptions");
+    }
+
+    #[test]
+    fn resolve_openai_base_url_target_ignores_blank_values() {
+        assert!(resolve_openai_base_url_target(Some(" "))
+            .unwrap()
+            .is_none());
+    }
+}
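The dictation change above keys the transcription endpoint off whether the configured `OPENAI_BASE_URL` already carries a `/v1` suffix (or is a bare host). The real splitting lives in `parse_openai_base_url`, which is not shown in this diff; the sketch below only approximates the path-selection rule implied by the new tests, with the query-string handling reduced to a simple split:

```rust
// Versionless fallback path, matching the constant introduced in the diff.
const OPENAI_VERSIONLESS_TRANSCRIPTIONS_PATH: &str = "audio/transcriptions";

// Approximate the endpoint choice: URLs ending in /v1 (after dropping the
// query string) and bare hosts keep the versioned path; any other custom
// path gets the versionless one.
fn endpoint_path_for(base_url: &str) -> &'static str {
    let without_query = base_url.split('?').next().unwrap_or(base_url);
    let trimmed = without_query.trim_end_matches('/');
    // A bare host has no path component after "scheme://host".
    let after_scheme = trimmed.split("://").nth(1).unwrap_or(trimmed);
    let has_path = after_scheme.contains('/');
    if trimmed.ends_with("/v1") || !has_path {
        "v1/audio/transcriptions"
    } else {
        OPENAI_VERSIONLESS_TRANSCRIPTIONS_PATH
    }
}

fn main() {
    assert_eq!(endpoint_path_for("https://api.openai.com"), "v1/audio/transcriptions");
    assert_eq!(
        endpoint_path_for("https://gateway.example.com/custom/api"),
        "audio/transcriptions"
    );
    assert_eq!(
        endpoint_path_for("https://gateway.example.com/openai/v1?api-version=2024-02-01"),
        "v1/audio/transcriptions"
    );
}
```

The point of the design is that a gateway like Azure-style `.../openai/v1?api-version=...` keeps its prefix and query parameters while the client appends only the relative endpoint path.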
@@ -38,6 +38,7 @@ pub mod scheduler_trait;
 pub mod security;
 pub mod session;
 pub mod session_context;
 pub mod skills;
 pub mod slash_commands;
+pub mod sources;
 pub mod subprocess;
@@ -158,7 +158,11 @@ impl ModelConfig {
         self.context_limit = Some(canonical.limit.context);
     }
     if self.max_tokens.is_none() {
-        self.max_tokens = canonical.limit.output.map(|o| o as i32);
+        self.max_tokens = canonical
+            .limit
+            .output
+            .filter(|&output| output < canonical.limit.context)
+            .map(|output| output as i32);
     }
     if self.reasoning.is_none() {
         self.reasoning = canonical.reasoning;

@@ -491,6 +495,20 @@ mod tests {
     assert_eq!(config.max_tokens, Some(1_000));
 }

+#[test]
+fn skips_canonical_output_limit_when_it_equals_context_limit() {
+    let _guard = env_lock::lock_env([
+        ("GOOSE_MAX_TOKENS", None::<&str>),
+        ("GOOSE_CONTEXT_LIMIT", None::<&str>),
+    ]);
+    let config =
+        ModelConfig::new_or_fail("moonshotai/kimi-k2.5").with_canonical_limits("nvidia");
+
+    assert_eq!(config.context_limit, Some(262_144));
+    assert_eq!(config.max_tokens, None);
+    assert_eq!(config.max_output_tokens(), 4_096);
+}

 #[test]
 fn unknown_model_leaves_fields_none() {
     let _guard = env_lock::lock_env([
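The `ModelConfig` change above adopts a catalog's output limit as `max_tokens` only when it is strictly below the context limit; a catalog entry whose "output" equals its context window is treated as bogus and skipped, so the default output cap applies instead. The rule in isolation:

```rust
// Sketch of the canonical-limit guard added in the diff: adopt the catalog
// output limit as max_tokens only when it is strictly below the context
// limit; otherwise leave max_tokens unset so a safe default kicks in.
fn resolve_max_tokens(output: Option<u64>, context: u64) -> Option<i32> {
    output.filter(|&o| o < context).map(|o| o as i32)
}

fn main() {
    // Sane catalog entry: output well below context, adopted as-is.
    assert_eq!(resolve_max_tokens(Some(1_000), 131_072), Some(1_000));
    // Output equal to the context window is rejected.
    assert_eq!(resolve_max_tokens(Some(262_144), 262_144), None);
    // No catalog output limit at all.
    assert_eq!(resolve_max_tokens(None, 262_144), None);
}
```

The accompanying test confirms the fallback: with `max_tokens` left `None`, `max_output_tokens()` reports the 4,096-token default.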
@@ -731,16 +731,44 @@ pub trait Provider: Send + Sync {
     ))
 }

-    /// Returns the first 3 user messages as strings for session naming
+    /// Returns the first 3 user messages as strings for session naming,
+    /// filtering out assistant-only content (e.g. preprompt blocks).
     fn get_initial_user_messages(&self, messages: &Conversation) -> Vec<String> {
         messages
             .iter()
             .filter(|m| m.role == rmcp::model::Role::User)
             .take(MSG_COUNT_FOR_SESSION_NAME_GENERATION)
-            .map(|m| m.as_concat_text())
+            .map(|m| {
+                m.content
+                    .iter()
+                    .filter_map(|c| c.filter_for_audience(rmcp::model::Role::User))
+                    .filter_map(|c| c.as_text().map(|s| s.to_string()))
+                    .collect::<Vec<_>>()
+                    .join("\n")
+            })
             .collect()
     }

+    /// Extracts preprompt context (assistant-audience blocks) from the first user message.
+    /// These are content blocks visible to the assistant but not the user.
+    fn get_preprompt_context(&self, messages: &Conversation) -> String {
+        messages
+            .iter()
+            .filter(|m| m.role == rmcp::model::Role::User)
+            .take(1)
+            .flat_map(|m| m.content.iter())
+            .filter_map(|c| {
+                // If this block is NOT visible to the user, it's preprompt/assistant-only content
+                if c.filter_for_audience(rmcp::model::Role::User).is_none() {
+                    c.as_text().map(|s| s.to_string())
+                } else {
+                    None
+                }
+            })
+            .collect::<Vec<_>>()
+            .join("\n")
+    }

     /// Generate a session name/description based on the conversation history
     /// Creates a prompt asking for a concise description in 4 words or less.
     async fn generate_session_name(

@@ -749,6 +777,7 @@ pub trait Provider: Send + Sync {
         messages: &Conversation,
     ) -> Result<String, ProviderError> {
         let context = self.get_initial_user_messages(messages);
+        let preprompt_context = self.get_preprompt_context(messages);
         let system = crate::prompt_template::render_template(
             "session_name.md",
             &std::collections::HashMap::<String, String>::new(),

@@ -758,8 +787,19 @@ pub trait Provider: Send + Sync {
     use super::cli_common::{
         SESSION_NAME_BEGIN_MARKER, SESSION_NAME_END_MARKER, SESSION_NAME_SUFFIX,
     };

+    let preprompt_section = if preprompt_context.is_empty() {
+        String::new()
+    } else {
+        format!(
+            "---BEGIN BACKGROUND CONTEXT (for understanding only, do NOT base the title on this)---\n{}\n---END BACKGROUND CONTEXT---\n\n",
+            preprompt_context
+        )
+    };

     let user_text = format!(
-        "{}\n{}\n{}\n\n{}",
+        "{}{}\n{}\n{}\n\n{}",
+        preprompt_section,
         SESSION_NAME_BEGIN_MARKER,
         context.join("\n"),
         SESSION_NAME_END_MARKER,
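The session-naming change above hinges on splitting a user message's content blocks by audience: blocks the user can see become the naming context, while assistant-only blocks become background that the title must not be based on. A toy model of that split (the `Block` struct and `audience` field are hypothetical stand-ins for the rmcp content types, not the real API):

```rust
// Hypothetical content block: audience None means visible to everyone,
// otherwise only to the listed roles (mirroring filter_for_audience).
#[derive(Clone)]
struct Block {
    text: String,
    audience: Option<Vec<&'static str>>,
}

fn visible_to_user(b: &Block) -> bool {
    b.audience.as_ref().map_or(true, |a| a.contains(&"user"))
}

// Split one message's blocks the way get_initial_user_messages /
// get_preprompt_context do: user-visible text vs. assistant-only preprompt.
fn split_first_message(blocks: &[Block]) -> (String, String) {
    let user_text: Vec<&str> = blocks
        .iter()
        .filter(|b| visible_to_user(b))
        .map(|b| b.text.as_str())
        .collect();
    let preprompt: Vec<&str> = blocks
        .iter()
        .filter(|b| !visible_to_user(b))
        .map(|b| b.text.as_str())
        .collect();
    (user_text.join("\n"), preprompt.join("\n"))
}

fn main() {
    let blocks = vec![
        Block { text: "project background".into(), audience: Some(vec!["assistant"]) },
        Block { text: "fix the login bug".into(), audience: None },
    ];
    let (user, pre) = split_first_message(&blocks);
    assert_eq!(user, "fix the login bug");
    assert_eq!(pre, "project background");
}
```

This keeps injected preprompt context from dominating generated session titles while still letting the model use it to disambiguate the request.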
File diff suppressed because it is too large
@@ -8,7 +8,7 @@
       "env": [
         "302AI_API_KEY"
       ],
-      "model_count": 64
+      "model_count": 95
     },
     {
       "id": "alibaba",
@@ -41,7 +41,7 @@
       "env": [
         "NANO_GPT_API_KEY"
       ],
-      "model_count": 518
+      "model_count": 525
     },
     {
       "id": "abacus",
@@ -74,7 +74,7 @@
       "env": [
         "SILICONFLOW_CN_API_KEY"
       ],
-      "model_count": 79
+      "model_count": 81
     },
     {
       "id": "submodel",
@@ -107,7 +107,7 @@
       "env": [
         "DEEPSEEK_API_KEY"
       ],
-      "model_count": 2
+      "model_count": 4
     },
     {
       "id": "meta-llama",
@@ -129,7 +129,7 @@
       "env": [
         "FIREWORKS_API_KEY"
       ],
-      "model_count": 17
+      "model_count": 18
     },
     {
       "id": "kimi-for-coding",
@@ -140,7 +140,7 @@
       "env": [
         "KIMI_API_KEY"
       ],
-      "model_count": 2
+      "model_count": 3
     },
     {
       "id": "moark",
@@ -162,7 +162,7 @@
       "env": [
         "OPENCODE_API_KEY"
       ],
-      "model_count": 9
+      "model_count": 14
     },
     {
       "id": "io-net",
@@ -184,7 +184,7 @@
       "env": [
         "DASHSCOPE_API_KEY"
       ],
-      "model_count": 76
+      "model_count": 77
     },
     {
       "id": "minimax-cn-coding-plan",
@@ -239,7 +239,7 @@
       "env": [
         "HF_TOKEN"
       ],
-      "model_count": 22
+      "model_count": 23
     },
     {
       "id": "zenmux",
@@ -250,7 +250,7 @@
       "env": [
         "ZENMUX_API_KEY"
       ],
-      "model_count": 88
+      "model_count": 89
     },
     {
       "id": "upstage",
@@ -272,7 +272,7 @@
       "env": [
         "NOVITA_API_KEY"
       ],
-      "model_count": 90
+      "model_count": 96
     },
     {
       "id": "xiaomi-token-plan-cn",
@@ -305,7 +305,7 @@
       "env": [
         "CHUTES_API_KEY"
       ],
-      "model_count": 69
+      "model_count": 70
     },
     {
       "id": "dinference",
@@ -349,7 +349,7 @@
       "env": [
         "KILO_API_KEY"
       ],
-      "model_count": 334
+      "model_count": 335
     },
     {
       "id": "morph",
@@ -371,7 +371,7 @@
       "env": [
         "GITHUB_TOKEN"
       ],
-      "model_count": 25
+      "model_count": 26
     },
     {
       "id": "mixlayer",
@@ -415,7 +415,7 @@
       "env": [
         "OPENCODE_API_KEY"
       ],
-      "model_count": 52
+      "model_count": 58
     },
     {
       "id": "stepfun",
@@ -448,7 +448,7 @@
       "env": [
         "POE_API_KEY"
       ],
-      "model_count": 128
+      "model_count": 130
     },
     {
       "id": "helicone",
@@ -459,7 +459,7 @@
       "env": [
         "HELICONE_API_KEY"
       ],
-      "model_count": 91
+      "model_count": 90
     },
     {
       "id": "ollama-cloud",
@@ -470,7 +470,7 @@
       "env": [
         "OLLAMA_API_KEY"
       ],
-      "model_count": 36
+      "model_count": 37
     },
     {
       "id": "zai-coding-plan",
@@ -503,7 +503,7 @@
       "env": [
         "BASETEN_API_KEY"
       ],
-      "model_count": 12
+      "model_count": 13
     },
     {
       "id": "zhipuai-coding-plan",
@@ -536,7 +536,7 @@
       "env": [
         "FIRMWARE_API_KEY"
       ],
-      "model_count": 24
+      "model_count": 25
     },
     {
       "id": "lmstudio",
@@ -569,7 +569,18 @@
       "env": [
         "MOONSHOT_API_KEY"
       ],
-      "model_count": 6
+      "model_count": 7
+    },
+    {
+      "id": "wafer.ai",
+      "display_name": "Wafer",
+      "npm": "@ai-sdk/openai-compatible",
+      "api": "https://pass.wafer.ai/v1",
+      "doc": "https://docs.wafer.ai/wafer-pass",
+      "env": [
+        "WAFER_API_KEY"
+      ],
+      "model_count": 2
     },
     {
       "id": "cloudferro-sherlock",
@@ -635,7 +646,7 @@
       "env": [
         "NVIDIA_API_KEY"
       ],
-      "model_count": 76
+      "model_count": 77
     },
     {
       "id": "inference",
@@ -670,6 +681,17 @@
       ],
       "model_count": 38
     },
+    {
+      "id": "digitalocean",
+      "display_name": "DigitalOcean",
+      "npm": "@ai-sdk/openai-compatible",
+      "api": "https://inference.do-ai.run/v1",
+      "doc": "https://docs.digitalocean.com/products/gradient-ai-platform/details/models/",
+      "env": [
+        "DIGITALOCEAN_ACCESS_TOKEN"
+      ],
+      "model_count": 46
+    },
     {
       "id": "vultr",
       "display_name": "Vultr",
@@ -701,7 +723,7 @@
       "env": [
         "OVHCLOUD_API_KEY"
       ],
-      "model_count": 13
+      "model_count": 10
     },
     {
       "id": "friendli",
@@ -723,7 +745,7 @@
       "env": [
         "CORTECS_API_KEY"
       ],
-      "model_count": 32
+      "model_count": 34
     },
     {
       "id": "siliconflow",
@@ -734,7 +756,7 @@
       "env": [
         "SILICONFLOW_API_KEY"
       ],
-      "model_count": 73
+      "model_count": 74
     },
     {
       "id": "minimax",
@@ -756,7 +778,7 @@
       "env": [
         "LLMGATEWAY_API_KEY"
       ],
-      "model_count": 203
+      "model_count": 182
     },
     {
       "id": "cloudflare-workers-ai",
@@ -768,7 +790,7 @@
       "env": [
         "CLOUDFLARE_ACCOUNT_ID",
         "CLOUDFLARE_API_KEY"
       ],
-      "model_count": 7
+      "model_count": 8
     },
     {
       "id": "fastrouter",
@@ -835,7 +857,7 @@
       "env": [
         "MOONSHOT_API_KEY"
       ],
-      "model_count": 6
+      "model_count": 7
     },
     {
       "id": "berget",
@@ -846,7 +868,7 @@
       "env": [
         "BERGET_API_KEY"
       ],
-      "model_count": 8
+      "model_count": 5
     },
     {
       "id": "github-models",
@@ -870,6 +892,17 @@
       ],
       "model_count": 9
     },
+    {
+      "id": "tencent-tokenhub",
+      "display_name": "Tencent TokenHub",
+      "npm": "@ai-sdk/openai-compatible",
+      "api": "https://tokenhub.tencentmaas.com/v1",
+      "doc": "https://cloud.tencent.com/document/product/1823/130050",
+      "env": [
+        "TENCENT_TOKENHUB_API_KEY"
+      ],
+      "model_count": 1
+    },
     {
       "id": "modelscope",
       "display_name": "ModelScope",
@@ -925,6 +958,17 @@
       ],
       "model_count": 6
     },
+    {
+      "id": "regolo-ai",
+      "display_name": "Regolo AI",
+      "npm": "@ai-sdk/openai-compatible",
+      "api": "https://api.regolo.ai/v1",
+      "doc": "https://docs.regolo.ai/",
+      "env": [
+        "REGOLO_API_KEY"
+      ],
+      "model_count": 13
+    },
     {
       "id": "xiaomi-token-plan-ams",
       "display_name": "Xiaomi Token Plan (Europe)",
@@ -62,22 +62,6 @@ pub const CHATGPT_CODEX_KNOWN_MODELS: &[ChatGptCodexModelAttrs] = &[
         name: "gpt-5.3-codex",
         reasoning_levels: &["low", "medium", "high", "xhigh"],
     },
-    ChatGptCodexModelAttrs {
-        name: "gpt-5.2-codex",
-        reasoning_levels: &["low", "medium", "high", "xhigh"],
-    },
-    ChatGptCodexModelAttrs {
-        name: "gpt-5.1-codex",
-        reasoning_levels: &["low", "medium", "high", "xhigh"],
-    },
-    ChatGptCodexModelAttrs {
-        name: "gpt-5.1-codex-mini",
-        reasoning_levels: &["medium", "high"],
-    },
-    ChatGptCodexModelAttrs {
-        name: "gpt-5.1-codex-max",
-        reasoning_levels: &["low", "medium", "high", "xhigh"],
-    },
 ];

 const CHATGPT_CODEX_DOC_URL: &str = "https://openai.com/chatgpt";
|
|||
|
|
@@ -56,7 +56,11 @@ pub(crate) fn generate_simple_session_description(
         })
         .map(|text| {
             // Strip the wrapper added by generate_session_name so we get
-            // the actual user content.
+            // the actual user content. First strip the optional background context section.
+            let text = text
+                .rfind(SESSION_NAME_BEGIN_MARKER)
+                .and_then(|idx| text.get(idx..))
+                .unwrap_or(text);
             let stripped = text
                 .strip_prefix(SESSION_NAME_BEGIN_MARKER)
                 .unwrap_or(text)
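The marker-stripping step in the hunk above can be sketched with plain string operations. This is a minimal std-only sketch; the marker constant and function name here are illustrative, not the real `SESSION_NAME_BEGIN_MARKER` value:

```rust
// Hypothetical marker; the real constant lives in the goose crate.
const BEGIN: &str = "<session-name>";

fn strip_wrapper(text: &str) -> &str {
    // Jump to the LAST occurrence of the marker so any preamble (e.g. a
    // background-context section) before it is dropped, then remove the
    // marker itself. Both steps fall back to the input unchanged.
    let text = text.rfind(BEGIN).and_then(|i| text.get(i..)).unwrap_or(text);
    text.strip_prefix(BEGIN).unwrap_or(text)
}
```

Using `rfind` rather than `find` is what makes an injected context section before the wrapper harmless.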
crates/goose/src/providers/declarative/nvidia.json (new file, 24 lines)
@@ -0,0 +1,24 @@
+{
+  "name": "nvidia",
+  "engine": "openai",
+  "display_name": "NVIDIA",
+  "description": "Hosted NVIDIA NIM models through the OpenAI-compatible API.",
+  "api_key_env": "NVIDIA_API_KEY",
+  "base_url": "https://integrate.api.nvidia.com/v1",
+  "catalog_provider_id": "nvidia",
+  "dynamic_models": true,
+  "models": [
+    {
+      "name": "z-ai/glm-4.7",
+      "context_limit": 131072
+    }
+  ],
+  "supports_streaming": true,
+  "model_doc_link": "https://build.nvidia.com/models",
+  "setup_steps": [
+    "Sign in to https://build.nvidia.com",
+    "Choose a Free Endpoint model from the model catalog",
+    "Create an API key",
+    "Copy the key and paste it above"
+  ]
+}
@@ -238,6 +238,41 @@ mod tests {
         assert!(!endpoint.secret, "Endpoint should not be secret");
     }

+    #[tokio::test]
+    async fn test_nvidia_declarative_provider_registry_wiring() {
+        let nvidia = get_from_registry("nvidia")
+            .await
+            .expect("nvidia provider should be registered");
+        let meta = nvidia.metadata();
+
+        assert_eq!(nvidia.provider_type(), ProviderType::Declarative);
+        assert!(nvidia.supports_inventory_refresh());
+        assert_eq!(meta.display_name, "NVIDIA");
+        assert_eq!(meta.default_model, "z-ai/glm-4.7");
+        assert_eq!(meta.model_doc_link, "https://build.nvidia.com/models");
+        assert!(!meta.setup_steps.is_empty());
+
+        let api_key = meta
+            .config_keys
+            .iter()
+            .find(|k| k.name == "NVIDIA_API_KEY")
+            .expect("NVIDIA_API_KEY config key should exist");
+        assert!(api_key.required, "NVIDIA_API_KEY should be required");
+        assert!(api_key.secret, "NVIDIA_API_KEY should be secret");
+        assert!(api_key.primary, "NVIDIA_API_KEY should be primary");
+        assert!(
+            !meta.config_keys.iter().any(|k| k.name == "OPENAI_HOST"),
+            "NVIDIA should not expose OpenAI host configuration"
+        );
+        assert!(
+            !meta
+                .config_keys
+                .iter()
+                .any(|k| k.name == "OPENAI_BASE_PATH"),
+            "NVIDIA should not expose OpenAI base path configuration"
+        );
+    }
+
     #[tokio::test]
     async fn test_openai_compatible_providers_config_keys() {
         let providers_list = providers().await;
@@ -62,10 +62,10 @@ pub struct InferenceRuntime {
 static RUNTIME: StdMutex<Weak<InferenceRuntime>> = StdMutex::new(Weak::new());

 impl InferenceRuntime {
-    pub fn get_or_init() -> Arc<Self> {
+    pub fn get_or_init() -> Result<Arc<Self>> {
         let mut guard = RUNTIME.lock().expect("runtime lock poisoned");
         if let Some(runtime) = guard.upgrade() {
-            return runtime;
+            return Ok(runtime);
         }
         // Safety invariant: the Weak::upgrade() check and LlamaBackend::init()
         // both execute inside this same mutex guard, so there is no window where
@@ -80,7 +80,10 @@ impl InferenceRuntime {
                     the mutex guard prevents concurrent re-init"
                 )
             }
-            Err(e) => panic!("Failed to init llama backend: {}", e),
+            Err(e) => {
+                tracing::error!(error = %e, "failed to initialize local inference runtime");
+                return Err(anyhow::anyhow!("Failed to init llama backend: {}", e));
+            }
         };
         llama_cpp_2::send_logs_to_tracing(LogOptions::default());
         let runtime = Arc::new(Self {
@@ -88,7 +91,7 @@ impl InferenceRuntime {
             backend,
         });
         *guard = Arc::downgrade(&runtime);
-        runtime
+        Ok(runtime)
     }

     pub fn backend(&self) -> &LlamaBackend {
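The pattern being changed above (a process-wide runtime kept behind a `Weak` so it is dropped when the last user goes away, re-created on demand, with fallible init surfaced as a `Result` instead of a panic) can be sketched with only the standard library. Names here are illustrative placeholders, not the goose types:

```rust
use std::sync::{Arc, Mutex, Weak};

// Stand-in for the real inference runtime.
struct Runtime;

// Weak registry: holding only a Weak means the runtime is freed once all
// Arc handles are dropped, and the next caller re-initializes it.
static REGISTRY: Mutex<Option<Weak<Runtime>>> = Mutex::new(None);

fn get_or_init() -> Result<Arc<Runtime>, String> {
    let mut guard = REGISTRY.lock().expect("lock poisoned");
    if let Some(rt) = guard.as_ref().and_then(Weak::upgrade) {
        return Ok(rt);
    }
    // Fallible backend init would happen here, under the same lock, so two
    // callers can never both observe "not initialized" and double-init.
    let rt = Arc::new(Runtime);
    *guard = Some(Arc::downgrade(&rt));
    Ok(rt)
}
```

Returning `Result` instead of panicking is exactly what lets the caller (`from_env` below) propagate the failure with `?`.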
@@ -357,7 +360,7 @@ pub struct LocalInferenceProvider {

 impl LocalInferenceProvider {
     pub async fn from_env(model: ModelConfig, _extensions: Vec<ExtensionConfig>) -> Result<Self> {
-        let runtime = InferenceRuntime::get_or_init();
+        let runtime = InferenceRuntime::get_or_init()?;
         let model_slot = runtime.get_or_create_model_slot(&model.model_name);
         Ok(Self {
             runtime,
@@ -34,8 +34,10 @@ use rmcp::model::Tool;

 const OPEN_AI_PROVIDER_NAME: &str = "openai";
 const OPEN_AI_DEFAULT_BASE_PATH: &str = "v1/chat/completions";
+const OPEN_AI_VERSIONLESS_BASE_PATH: &str = "chat/completions";
 const OPEN_AI_DEFAULT_RESPONSES_PATH: &str = "v1/responses";
 const OPEN_AI_DEFAULT_MODELS_PATH: &str = "v1/models";
+const OPEN_AI_DEFAULT_EMBEDDINGS_PATH: &str = "v1/embeddings";
 pub const OPEN_AI_DEFAULT_MODEL: &str = "gpt-4o";
 pub const OPEN_AI_DEFAULT_FAST_MODEL: &str = "gpt-4o-mini";
 pub const OPEN_AI_KNOWN_MODELS: &[(&str, usize)] = &[
@@ -67,6 +69,48 @@

 pub const OPEN_AI_DOC_URL: &str = "https://platform.openai.com/docs/models";

+type OpenAiBaseUrlParts = (String, Vec<(String, String)>, bool);
+
+/// Components extracted from an `OPENAI_BASE_URL` value.
+struct ParsedBaseUrl {
+    /// The host (scheme + authority + any path prefix before `/v1`).
+    host: String,
+    /// Query parameters to forward on every request.
+    query_params: Vec<(String, String)>,
+    /// Whether the URL path ended with `/v1`.
+    has_v1: bool,
+    /// `true` when the host was derived from `OPENAI_BASE_URL`.
+    /// Controls whether `OPENAI_BASE_PATH` is read from env only
+    /// (to avoid persisted desktop defaults shadowing URL-derived paths)
+    /// or from config too (to honour Docker Model Runner setups).
+    from_base_url: bool,
+}
+
+pub(crate) fn parse_openai_base_url(raw_url: &str) -> Result<OpenAiBaseUrlParts> {
+    let parsed = url::Url::parse(raw_url)
+        .map_err(|e| anyhow::anyhow!("Invalid OPENAI_BASE_URL '{}': {}", raw_url, e))?;
+
+    let authority = parsed[..url::Position::BeforePath].to_string();
+    let query_params: Vec<(String, String)> = parsed
+        .query_pairs()
+        .map(|(k, v)| (k.into_owned(), v.into_owned()))
+        .collect();
+
+    let path = parsed.path().trim_end_matches('/');
+    if path.is_empty() || path == "/" {
+        return Ok((authority, query_params, true));
+    }
+
+    if path == "/v1" {
+        return Ok((authority, query_params, true));
+    }
+    if let Some(prefix) = path.strip_suffix("/v1") {
+        return Ok((format!("{}{}", authority, prefix), query_params, true));
+    }
+
+    Ok((format!("{}{}", authority, path), query_params, false))
+}
+
 #[derive(Debug, serde::Serialize)]
 pub struct OpenAiProvider {
     #[serde(skip)]
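The path handling inside `parse_openai_base_url` above reduces to a small amount of string logic once the URL has been split into authority and path. A minimal std-only sketch of just that step (without the `url` crate, so the authority handling is omitted; the function name is illustrative):

```rust
/// Given a URL path, return the prefix to append to the host and whether the
/// path ended in "/v1". Mirrors the rules in the hunk above:
///   ""  or "/"        -> ("", true)
///   "/v1", "/v1/"     -> ("", true)
///   "/openai/v1"      -> ("/openai", true)
///   anything else     -> (path, false)
fn split_v1(path: &str) -> (&str, bool) {
    let path = path.trim_end_matches('/');
    if path.is_empty() || path == "/v1" {
        return ("", true);
    }
    if let Some(prefix) = path.strip_suffix("/v1") {
        return (prefix, true);
    }
    (path, false)
}
```

The `has_v1` flag is what later selects between the `v1/chat/completions` and versionless `chat/completions` default base paths.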
@@ -85,9 +129,78 @@ pub struct OpenAiProvider {
 impl OpenAiProvider {
     pub async fn from_env(model: ModelConfig) -> Result<Self> {
         let config = crate::config::Config::global();
-        let host: String = config
-            .get_param("OPENAI_HOST")
-            .unwrap_or_else(|_| "https://api.openai.com".to_string());
+
+        // Resolve host and base_path.
+        //
+        // Priority (highest first):
+        //   1. OPENAI_HOST env var — session override (deprecated but still
+        //      honoured so that `OPENAI_HOST=… goose` keeps working)
+        //   2. OPENAI_BASE_URL (env or config) — ecosystem-standard
+        //   3. OPENAI_HOST from config file — persisted by `goose configure`
+        //   4. Default "https://api.openai.com"
+        //
+        // OPENAI_BASE_URL is parsed into host + query params + a flag
+        // indicating whether the URL included a /v1 path segment. When /v1
+        // is present the default base_path is "v1/chat/completions";
+        // otherwise "chat/completions" to match the OpenAI SDK convention.
+        //
+        // OPENAI_BASE_PATH always wins when set explicitly.
+        let parsed = if let Ok(h) = std::env::var("OPENAI_HOST") {
+            // OPENAI_HOST env var takes priority as a session override so
+            // that existing scripts like `OPENAI_HOST=… goose` still work
+            // even after OPENAI_BASE_URL is persisted in config.
+            ParsedBaseUrl {
+                host: h,
+                query_params: vec![],
+                has_v1: true,
+                from_base_url: false,
+            }
+        } else if let Some(raw_url) = config
+            .get_param::<String>("OPENAI_BASE_URL")
+            .ok()
+            .map(|s| s.trim().to_string())
+            .filter(|s| !s.is_empty())
+        {
+            Self::parse_base_url(&raw_url)?
+        } else {
+            let h: String = config
+                .get_param("OPENAI_HOST")
+                .unwrap_or_else(|_| "https://api.openai.com".to_string());
+            ParsedBaseUrl {
+                host: h,
+                query_params: vec![],
+                has_v1: true,
+                from_base_url: false,
+            }
+        };
+
+        // When the host was derived from OPENAI_BASE_URL, read
+        // OPENAI_BASE_PATH from env only so that the desktop UI's persisted
+        // default ("v1/chat/completions") doesn't shadow the versionless
+        // path. When the host came from OPENAI_HOST (env or config), read
+        // from config too — Docker Model Runner and similar setups persist a
+        // custom base_path that must be honoured.
+        let default_bp = || {
+            if parsed.has_v1 {
+                OPEN_AI_DEFAULT_BASE_PATH.to_string()
+            } else {
+                OPEN_AI_VERSIONLESS_BASE_PATH.to_string()
+            }
+        };
+        let base_path: String = if parsed.from_base_url {
+            std::env::var("OPENAI_BASE_PATH").unwrap_or_else(|_| default_bp())
+        } else {
+            config
+                .get_param("OPENAI_BASE_PATH")
+                .unwrap_or_else(|_| default_bp())
+        };
+
+        // Parse the URL and compare the hostname exactly to avoid false positives
+        // (e.g. https://api.openai.com.local:8000 or proxy paths containing api.openai.com).
+        let host = parsed.host.clone();

         // Only apply the default fast model when talking to OpenAI directly.
         // Custom/compatible endpoints likely don't serve gpt-4o-mini, so
         // leave fast_model unset (complete_fast will fall back to the main model).
@@ -114,9 +227,6 @@ impl OpenAiProvider {
             .cloned()
             .map(parse_custom_headers);

-        let base_path: String = config
-            .get_param("OPENAI_BASE_PATH")
-            .unwrap_or_else(|_| OPEN_AI_DEFAULT_BASE_PATH.to_string());
         let organization: Option<String> = config.get_param("OPENAI_ORGANIZATION").ok();
         let project: Option<String> = config.get_param("OPENAI_PROJECT").ok();
         let timeout_secs: u64 = config.get_param("OPENAI_TIMEOUT").unwrap_or(600);
@@ -125,8 +235,15 @@ impl OpenAiProvider {
             Some(key) if !key.is_empty() => AuthMethod::BearerToken(key),
             _ => AuthMethod::NoAuth,
         };
-        let mut api_client =
-            ApiClient::with_timeout(host, auth, std::time::Duration::from_secs(timeout_secs))?;
+        let mut api_client = ApiClient::with_timeout(
+            parsed.host,
+            auth,
+            std::time::Duration::from_secs(timeout_secs),
+        )?;
+
+        if !parsed.query_params.is_empty() {
+            api_client = api_client.with_query(parsed.query_params);
+        }

         if let Some(org) = &organization {
             api_client = api_client.with_header("OpenAI-Organization", org)?;
@@ -263,7 +380,16 @@ impl OpenAiProvider {
         })
     }

+    // Derive a base path from the raw URL path
+    fn parse_base_url(raw_url: &str) -> Result<ParsedBaseUrl> {
+        let (host, query_params, has_v1) = parse_openai_base_url(raw_url)?;
+        Ok(ParsedBaseUrl {
+            host,
+            query_params,
+            has_v1,
+            from_base_url: true,
+        })
+    }
+
     fn derive_base_path(url_path: &str) -> String {
         let stripped = url_path.trim_start_matches('/');
         let normalized = stripped.trim_end_matches('/');
@@ -300,6 +426,11 @@ impl OpenAiProvider {

     fn should_use_responses_api(model_name: &str, base_path: &str) -> bool {
         let normalized_base_path = Self::normalize_base_path(base_path);
+        // Only the standard "v1/chat/completions" is treated as a default
+        // path that defers to model-based routing. The versionless
+        // "chat/completions" (derived from an OPENAI_BASE_URL without /v1)
+        // is treated as custom because versionless gateways typically do not
+        // support the Responses API.
         let has_custom_base_path = normalized_base_path != OPEN_AI_DEFAULT_BASE_PATH;

         if has_custom_base_path {
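The routing rule documented in the new comment above can be sketched in isolation. This is a simplified, hypothetical version (the real `should_use_responses_api` also consults `normalize_base_path` and a richer model table); the model check here is a stand-in:

```rust
/// Simplified sketch: any base path other than the canonical
/// "v1/chat/completions" (including the versionless "chat/completions")
/// disables the Responses API, even for models that would prefer it.
fn should_use_responses_api(model_name: &str, base_path: &str) -> bool {
    const DEFAULT: &str = "v1/chat/completions";
    if base_path.trim_matches('/') != DEFAULT {
        // Custom or versionless gateway: stay on chat completions.
        return false;
    }
    // Stand-in for the real model-based routing table.
    model_name.contains("codex")
}
```

This matches the new test at the bottom of this diff: a codex model on `chat/completions` must not be routed to the Responses API.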
@@ -415,6 +546,7 @@ impl ProviderDef for OpenAiProvider {
             OPEN_AI_DOC_URL,
             vec![
                 ConfigKey::new("OPENAI_API_KEY", false, true, None, true),
+                ConfigKey::new("OPENAI_BASE_URL", false, false, None, false),
                 ConfigKey::new(
                     "OPENAI_HOST",
                     true,
@@ -715,8 +847,13 @@ impl EmbeddingCapable for OpenAiProvider {
             };
             let request_value = serde_json::to_value(request_clone)
                 .map_err(|e| ProviderError::ExecutionError(e.to_string()))?;
+            let embeddings_path = Self::map_base_path(
+                &self.base_path,
+                "embeddings",
+                OPEN_AI_DEFAULT_EMBEDDINGS_PATH,
+            );
             self.api_client
-                .api_post(Some(session_id), "v1/embeddings", &request_value)
+                .api_post(Some(session_id), &embeddings_path, &request_value)
                 .await
                 .map_err(|e| ProviderError::ExecutionError(e.to_string()))
         })
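The idea behind `map_base_path` as used above is to derive a sibling endpoint from the configured chat base path instead of hard-coding `v1/embeddings`. A minimal sketch under assumed semantics (the real implementation, per the test elsewhere in this diff, also preserves leading slashes differently; this version is illustrative only):

```rust
/// Swap a trailing "chat/completions" segment for another endpoint,
/// keeping any prefix (e.g. "openai/v1/"); fall back to a default when
/// the base path does not look like a chat-completions path.
fn map_base_path(base_path: &str, endpoint: &str, default: &str) -> String {
    match base_path.strip_suffix("chat/completions") {
        Some(prefix) => format!("{}{}", prefix, endpoint),
        None => default.to_string(),
    }
}
```

This keeps gateway prefixes working for embeddings requests without a second configuration knob.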
@@ -894,54 +1031,91 @@ mod tests {
         let models_path = OpenAiProvider::map_base_path("/custom/path", "models", "v1/models");
         assert_eq!(models_path, "/v1/models");
     }

     #[test]
-    fn derive_base_path_empty_path_gives_default_endpoint() {
-        assert_eq!(OpenAiProvider::derive_base_path("/"), "v1/chat/completions");
+    fn parse_base_url_strips_v1_from_standard_openai_url() {
+        let r = OpenAiProvider::parse_base_url("https://api.openai.com/v1").unwrap();
+        assert_eq!(r.host, "https://api.openai.com");
+        assert!(r.query_params.is_empty());
+        assert!(r.has_v1);
     }

     #[test]
-    fn derive_base_path_bare_v1_gives_chat_completions() {
+    fn parse_base_url_preserves_prefix_before_v1() {
+        let r = OpenAiProvider::parse_base_url("https://gateway.example.com/openai/v1").unwrap();
+        assert_eq!(r.host, "https://gateway.example.com/openai");
+        assert!(r.has_v1);
+    }
+
+    #[test]
+    fn parse_base_url_handles_no_path() {
+        let r = OpenAiProvider::parse_base_url("https://api.openai.com").unwrap();
+        assert_eq!(r.host, "https://api.openai.com");
+        assert!(r.has_v1);
+    }
+
+    #[test]
+    fn parse_base_url_handles_trailing_slash() {
+        let r = OpenAiProvider::parse_base_url("https://api.openai.com/v1/").unwrap();
+        assert_eq!(r.host, "https://api.openai.com");
+        assert!(r.has_v1);
+    }
+
+    #[test]
+    fn parse_base_url_preserves_port() {
+        let r = OpenAiProvider::parse_base_url("https://localhost:8080/v1").unwrap();
+        assert_eq!(r.host, "https://localhost:8080");
+        assert!(r.has_v1);
+    }
+
+    #[test]
+    fn parse_base_url_preserves_non_v1_path() {
+        let r = OpenAiProvider::parse_base_url("https://example.com/custom/api").unwrap();
+        assert_eq!(r.host, "https://example.com/custom/api");
+        assert!(!r.has_v1);
+    }
+
+    #[test]
+    fn parse_base_url_preserves_query_params() {
+        let r = OpenAiProvider::parse_base_url("https://gw.example.com/v1?api-version=2024-02-01")
+            .unwrap();
+        assert_eq!(r.host, "https://gw.example.com");
         assert_eq!(
-            OpenAiProvider::derive_base_path("/v1"),
-            "v1/chat/completions"
+            r.query_params,
+            vec![("api-version".to_string(), "2024-02-01".to_string())]
         );
+        assert!(r.has_v1);
     }

     #[test]
-    fn derive_base_path_v1_with_trailing_slash() {
-        assert_eq!(
-            OpenAiProvider::derive_base_path("/v1/"),
-            "v1/chat/completions"
-        );
+    fn parse_base_url_preserves_multiple_query_params() {
+        let r = OpenAiProvider::parse_base_url("https://example.com/v1?key=val&foo=bar").unwrap();
+        assert_eq!(r.query_params.len(), 2);
+        assert_eq!(r.query_params[0], ("key".to_string(), "val".to_string()));
+        assert_eq!(r.query_params[1], ("foo".to_string(), "bar".to_string()));
     }

     #[test]
-    fn derive_base_path_prefixed_v1_appends_chat_completions() {
-        assert_eq!(
-            OpenAiProvider::derive_base_path("/zen/go/v1"),
-            "zen/go/v1/chat/completions"
-        );
+    fn parse_base_url_preserves_credentials() {
+        let r = OpenAiProvider::parse_base_url("https://user:pass@gateway.example.com/v1").unwrap();
+        assert_eq!(r.host, "https://user:pass@gateway.example.com");
+        assert!(r.has_v1);
     }

     #[test]
-    fn derive_base_path_prefixed_v1_with_trailing_slash() {
-        assert_eq!(
-            OpenAiProvider::derive_base_path("/zen/go/v1/"),
-            "zen/go/v1/chat/completions"
-        );
+    fn parse_base_url_rejects_empty_string() {
+        assert!(OpenAiProvider::parse_base_url("").is_err());
     }

     #[test]
-    fn derive_base_path_full_chat_completions_url_unchanged() {
-        assert_eq!(
-            OpenAiProvider::derive_base_path("/openai/v1/chat/completions"),
-            "openai/v1/chat/completions"
-        );
+    fn parse_base_url_rejects_whitespace_only() {
+        assert!(OpenAiProvider::parse_base_url(" ").is_err());
     }

     #[test]
-    fn derive_base_path_non_v1_prefix_unchanged() {
-        assert_eq!(OpenAiProvider::derive_base_path("/anthropic"), "anthropic");
+    fn versionless_base_path_opts_out_of_responses_for_codex_models() {
+        assert!(!OpenAiProvider::should_use_responses_api(
+            "gpt-5-codex",
+            "chat/completions"
+        ));
     }
 }
@@ -1,4 +1,4 @@
-use super::base::{ModelInfo, Provider, ProviderDef, ProviderMetadata, ProviderType};
+use super::base::{ConfigKey, ModelInfo, Provider, ProviderDef, ProviderMetadata, ProviderType};
 use super::inventory::InventoryIdentityInput;
 use crate::config::{DeclarativeProviderConfig, ExtensionConfig};
 use crate::model::ModelConfig;
@@ -165,28 +165,32 @@ impl ProviderRegistry {
             })
             .collect();

-        let mut config_keys = base_metadata.config_keys.clone();
-
-        if let Some(api_key_index) = config_keys.iter().position(|key| key.secret) {
-            if !config.requires_auth {
-                config_keys.remove(api_key_index);
-            } else if !config.api_key_env.is_empty() {
-                let api_key_required = provider_type == ProviderType::Declarative;
-                config_keys[api_key_index] = super::base::ConfigKey::new(
-                    &config.api_key_env,
-                    api_key_required,
-                    true,
-                    None,
-                    true,
-                );
-            }
-        }
+        let mut config_keys = if provider_type == ProviderType::Declarative {
+            if config.requires_auth && !config.api_key_env.is_empty() {
+                vec![ConfigKey::new(&config.api_key_env, true, true, None, true)]
+            } else {
+                Vec::new()
+            }
+        } else {
+            let mut config_keys = base_metadata.config_keys.clone();
+
+            if let Some(api_key_index) = config_keys.iter().position(|key| key.secret) {
+                if !config.requires_auth {
+                    config_keys.remove(api_key_index);
+                } else if !config.api_key_env.is_empty() {
+                    config_keys[api_key_index] =
+                        ConfigKey::new(&config.api_key_env, false, true, None, true);
+                }
+            }
+
+            config_keys
+        };

         if let Some(ref env_vars) = config.env_vars {
             for ev in env_vars {
                 // Default primary to `required` so required fields show prominently in the UI
                 let primary = ev.primary.unwrap_or(ev.required);
-                config_keys.push(super::base::ConfigKey::new(
+                config_keys.push(ConfigKey::new(
                     &ev.name,
                     ev.required,
                     ev.secret,

@@ -202,9 +206,12 @@ impl ProviderRegistry {
             description,
             default_model,
             known_models,
-            model_doc_link: base_metadata.model_doc_link,
+            model_doc_link: config
+                .model_doc_link
+                .clone()
+                .unwrap_or(base_metadata.model_doc_link),
             config_keys,
-            setup_steps: vec![],
+            setup_steps: config.setup_steps.clone(),
             model_selection_hint: None,
         };
         let inventory_config_keys = custom_metadata.config_keys.clone();
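The declarative branch of the config-key resolution above can be isolated into a small pure function. A hedged std-only sketch (the `ConfigKey` struct here is a simplified stand-in for `super::base::ConfigKey`, and `NVIDIA_API_KEY` is just the example key from the nvidia.json file in this same diff):

```rust
#[derive(Debug)]
struct ConfigKey {
    name: String,
    required: bool,
    secret: bool,
}

/// Declarative providers get exactly one required secret key (or none when
/// auth is disabled), instead of patching the base provider's key list.
fn declarative_config_keys(requires_auth: bool, api_key_env: &str) -> Vec<ConfigKey> {
    if requires_auth && !api_key_env.is_empty() {
        vec![ConfigKey {
            name: api_key_env.to_string(),
            required: true,
            secret: true,
        }]
    } else {
        Vec::new()
    }
}
```

Keeping the declarative case separate is what lets the registry guarantee the key is `required` (which the new NVIDIA test asserts) without changing behaviour for non-declarative providers.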
@@ -30,8 +30,8 @@ pub struct ThreadMetadata {
     pub project_id: Option<String>,
     #[serde(default)]
     pub provider_id: Option<String>,
-    #[serde(default)]
-    pub model_name: Option<String>,
+    #[serde(default, alias = "model_name")]
+    pub model_id: Option<String>,
     #[serde(default)]
     pub mode: Option<String>,
     #[serde(flatten)]
@@ -1,7 +1,6 @@
 use include_dir::{include_dir, Dir};

-static BUILTIN_SKILLS_DIR: Dir =
-    include_dir!("$CARGO_MANIFEST_DIR/src/agents/builtin_skills/skills");
+static BUILTIN_SKILLS_DIR: Dir = include_dir!("$CARGO_MANIFEST_DIR/src/skills/builtins");

 pub fn get_all() -> Vec<&'static str> {
     BUILTIN_SKILLS_DIR
@ -1,201 +1,19 @@
|
|||
use super::{parse_frontmatter, Source, SourceKind};
|
||||
use crate::agents::builtin_skills;
|
||||
use super::discover_skills;
|
||||
use crate::agents::extension::PlatformExtensionContext;
|
||||
use crate::agents::mcp_client::{Error, McpClientTrait};
|
||||
use crate::agents::tool_execution::ToolCallContext;
|
||||
use crate::config::paths::Paths;
|
||||
use crate::agents::ToolCallContext;
|
||||
use async_trait::async_trait;
|
||||
use goose_sdk::custom_requests::{SourceEntry, SourceType};
|
||||
use rmcp::model::{
|
||||
CallToolResult, Content, Implementation, InitializeResult, JsonObject, ListToolsResult,
|
||||
ServerCapabilities, ServerNotification, Tool,
|
||||
};
|
||||
use serde::Deserialize;
|
||||
use std::collections::HashSet;
|
||||
use std::path::{Path, PathBuf};
|
||||
use tokio::sync::mpsc;
|
||||
use tokio_util::sync::CancellationToken;
|
||||
use tracing::warn;
|
||||
|
||||
pub static EXTENSION_NAME: &str = "skills";
|
||||
|
||||
#[derive(Debug, Deserialize)]
|
||||
struct SkillMetadata {
|
||||
name: String,
|
||||
description: String,
|
||||
}
|
||||
|
||||
fn parse_skill_content(content: &str, path: PathBuf) -> Option<Source> {
|
||||
let (metadata, body): (SkillMetadata, String) = match parse_frontmatter(content) {
|
||||
Ok(Some(parsed)) => parsed,
|
||||
Ok(None) => return None,
|
||||
Err(e) => {
|
||||
warn!("Failed to parse skill frontmatter: {}", e);
|
||||
return None;
|
||||
}
|
||||
};
|
||||
|
||||
if metadata.name.contains('/') {
|
||||
warn!("Skill name '{}' contains '/', skipping", metadata.name);
|
||||
return None;
|
||||
}
|
||||
|
||||
Some(Source {
|
||||
name: metadata.name,
|
||||
kind: SourceKind::Skill,
|
||||
description: metadata.description,
|
||||
path,
|
||||
content: body,
|
||||
supporting_files: Vec::new(),
|
||||
})
|
||||
}
|
||||
|
||||
fn should_skip_dir(path: &Path) -> bool {
|
||||
matches!(
|
||||
path.file_name().and_then(|name| name.to_str()),
|
||||
Some(".git") | Some(".hg") | Some(".svn")
|
||||
)
|
||||
}
|
||||
|
||||
fn walk_files_recursively<F, G>(
|
||||
dir: &Path,
|
||||
visited_dirs: &mut HashSet<PathBuf>,
|
||||
should_descend: &mut G,
|
||||
visit_file: &mut F,
|
||||
) where
|
||||
F: FnMut(&Path),
|
||||
G: FnMut(&Path) -> bool,
|
||||
{
|
||||
let canonical_dir = match std::fs::canonicalize(dir) {
|
||||
Ok(path) => path,
|
||||
Err(_) => return,
|
||||
};
|
||||
|
||||
if !visited_dirs.insert(canonical_dir) {
|
||||
return;
|
||||
}
|
||||
|
||||
let entries = match std::fs::read_dir(dir) {
|
||||
Ok(e) => e,
|
||||
Err(_) => return,
|
||||
};
|
||||
|
||||
for entry in entries.flatten() {
|
||||
let path = entry.path();
|
||||
if path.is_dir() {
|
||||
if should_descend(&path) {
|
||||
walk_files_recursively(&path, visited_dirs, should_descend, visit_file);
|
||||
}
|
||||
} else if path.is_file() {
|
||||
visit_file(&path);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn scan_skills_from_dir(dir: &Path, seen: &mut HashSet<String>) -> Vec<Source> {
|
||||
let mut skill_files = Vec::new();
|
||||
let mut visited_dirs = HashSet::new();
|
||||
|
||||
walk_files_recursively(
|
||||
dir,
|
||||
&mut visited_dirs,
|
||||
&mut |path| !should_skip_dir(path),
|
||||
&mut |path| {
|
||||
if path.file_name().and_then(|name| name.to_str()) == Some("SKILL.md") {
|
||||
skill_files.push(path.to_path_buf());
|
||||
}
|
||||
},
|
||||
);
|
||||
|
||||
let mut sources = Vec::new();
|
||||
for skill_file in skill_files {
|
||||
let Some(skill_dir) = skill_file.parent() else {
|
||||
continue;
|
||||
};
|
||||
let content = match std::fs::read_to_string(&skill_file) {
|
||||
Ok(c) => c,
|
||||
Err(e) => {
|
||||
warn!("Failed to read skill file {}: {}", skill_file.display(), e);
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
if let Some(mut source) = parse_skill_content(&content, skill_dir.to_path_buf()) {
|
||||
if !seen.contains(&source.name) {
|
||||
// Find supporting files in the skill directory
|
||||
let mut files = Vec::new();
|
||||
let mut visited_support_dirs = HashSet::new();
|
||||
walk_files_recursively(
|
||||
skill_dir,
|
||||
&mut visited_support_dirs,
|
||||
&mut |path| !should_skip_dir(path) && !path.join("SKILL.md").is_file(),
|
||||
&mut |path| {
|
||||
if path.file_name().and_then(|n| n.to_str()) != Some("SKILL.md") {
|
||||
files.push(path.to_path_buf());
|
||||
}
|
||||
},
|
||||
);
|
||||
source.supporting_files = files;
|
||||
|
||||
seen.insert(source.name.clone());
|
||||
sources.push(source);
|
||||
}
|
||||
}
|
||||
}
|
||||
sources
|
||||
}
|
||||
|
||||
fn discover_skills(working_dir: &Path) -> Vec<Source> {
|
||||
let mut sources = Vec::new();
|
||||
let mut seen = HashSet::new();
|
||||
|
||||
let home = dirs::home_dir();
|
||||
let config = Paths::config_dir();
|
||||
|
||||
let local_dirs = vec![
|
||||
working_dir.join(".goose/skills"),
|
||||
working_dir.join(".claude/skills"),
|
||||
working_dir.join(".agents/skills"),
|
||||
];
|
||||
|
||||
let global_dirs: Vec<PathBuf> = [
|
||||
home.as_ref().map(|h| h.join(".agents/skills")),
|
||||
Some(config.join("skills")),
|
||||
home.as_ref().map(|h| h.join(".claude/skills")),
|
||||
home.as_ref().map(|h| h.join(".config/agents/skills")),
|
||||
]
|
||||
.into_iter()
|
||||
.flatten()
|
||||
.collect();
|
||||
|
||||
for dir in local_dirs {
|
||||
sources.extend(scan_skills_from_dir(&dir, &mut seen));
|
||||
}
|
||||
for dir in global_dirs {
|
||||
sources.extend(scan_skills_from_dir(&dir, &mut seen));
|
||||
}
|
||||
|
||||
for content in builtin_skills::get_all() {
|
||||
if let Some(source) = parse_skill_content(content, PathBuf::new()) {
|
||||
if !seen.contains(&source.name) {
|
||||
seen.insert(source.name.clone());
|
||||
sources.push(Source {
|
||||
kind: SourceKind::BuiltinSkill,
|
||||
..source
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sources
|
||||
}
|
||||
|
||||
pub fn list_installed_skills(working_dir: Option<&Path>) -> Vec<Source> {
|
||||
let dir = working_dir
|
||||
.map(|p| p.to_path_buf())
|
||||
.unwrap_or_else(|| std::env::current_dir().unwrap_or_default());
|
||||
discover_skills(&dir)
|
||||
}
|
||||
|
||||
pub struct SkillsClient {
|
||||
info: InitializeResult,
|
||||
working_dir: PathBuf,
|
||||
|
|
@ -211,12 +29,14 @@ impl SkillsClient {
|
|||
|
||||
let mut instructions = String::new();
|
||||
if context.session.is_some() {
|
||||
let sources = discover_skills(&working_dir);
|
||||
let mut skills: Vec<&Source> = sources
|
||||
let sources = discover_skills(Some(&working_dir));
|
||||
let mut skills: Vec<&SourceEntry> = sources
|
||||
.iter()
|
||||
.filter(|s| s.kind == SourceKind::Skill || s.kind == SourceKind::BuiltinSkill)
|
||||
.filter(|s| {
|
||||
s.source_type == SourceType::Skill || s.source_type == SourceType::BuiltinSkill
|
||||
})
|
||||
.collect();
|
||||
skills.sort_by(|a, b| (&a.name, &a.path).cmp(&(&b.name, &b.path)));
|
||||
skills.sort_by(|a, b| (&a.name, &a.directory).cmp(&(&b.name, &b.directory)));
|
||||
|
||||
if !skills.is_empty() {
|
||||
instructions.push_str(
|
||||
|
|
@ -300,24 +120,24 @@ impl McpClientTrait for SkillsClient {
|
|||
)]));
|
||||
}
|
||||
|
||||
let skills = discover_skills(&self.working_dir);
|
||||
let skills = discover_skills(Some(&self.working_dir));
|
||||
|
||||
// Direct skill match
|
||||
if let Some(skill) = skills.iter().find(|s| s.name == skill_name) {
|
||||
let mut output = format!(
|
||||
"# Loaded Skill: {} ({})\n\n{}\n",
|
||||
skill.name,
|
||||
skill.kind,
|
||||
skill.source_type,
|
||||
skill.to_load_text()
|
||||
);
|
||||
|
||||
if !skill.supporting_files.is_empty() {
+    let skill_dir = Path::new(&skill.directory);
     output.push_str(&format!(
         "\n## Supporting Files\n\nSkill directory: {}\n\n",
-        skill.path.display()
+        skill.directory
     ));
     for file in &skill.supporting_files {
-        if let Ok(relative) = file.strip_prefix(&skill.path) {
+        if let Ok(relative) = Path::new(file).strip_prefix(skill_dir) {
             let rel_str = relative.to_string_lossy().replace('\\', "/");
             output.push_str(&format!(
                 "- {} → load_skill(name: \"{}/{}\")\n",

@@ -331,27 +151,27 @@ impl McpClientTrait for SkillsClient {
         return Ok(CallToolResult::success(vec![Content::text(output)]));
     }

     // Supporting file match (skill_name contains '/')
     if let Some((parent_skill_name, raw_relative_path)) = skill_name.split_once('/') {
         let relative_path = raw_relative_path.replace('\\', "/");
         if let Some(skill) = skills.iter().find(|s| {
             s.name == parent_skill_name
-                && matches!(s.kind, SourceKind::Skill | SourceKind::BuiltinSkill)
+                && matches!(s.source_type, SourceType::Skill | SourceType::BuiltinSkill)
         }) {
-            let canonical_skill_dir = skill
-                .path
+            let skill_dir = PathBuf::from(&skill.directory);
+            let canonical_skill_dir = skill_dir
                 .canonicalize()
-                .unwrap_or_else(|_| skill.path.clone());
+                .unwrap_or_else(|_| skill_dir.clone());

             for file_path in &skill.supporting_files {
-                let Ok(rel) = file_path.strip_prefix(&skill.path) else {
+                let file_path_buf = Path::new(file_path);
+                let Ok(rel) = file_path_buf.strip_prefix(&skill_dir) else {
                     continue;
                 };
                 if rel.to_string_lossy().replace('\\', "/") != relative_path {
                     continue;
                 }

-                return Ok(match file_path.canonicalize() {
+                return Ok(match file_path_buf.canonicalize() {
                     Ok(canonical) if canonical.starts_with(&canonical_skill_dir) => {
                         match std::fs::read_to_string(&canonical) {
                             Ok(content) => {

@@ -381,7 +201,8 @@ impl McpClientTrait for SkillsClient {
                 .supporting_files
                 .iter()
                 .filter_map(|f| {
-                    f.strip_prefix(&skill.path)
+                    Path::new(f)
+                        .strip_prefix(&skill_dir)
                         .ok()
                         .map(|r| r.to_string_lossy().replace('\\', "/"))
                 })

@@ -403,7 +224,6 @@ impl McpClientTrait for SkillsClient {
             }
         }

         // No match — suggest similar skills
         let suggestions: Vec<&str> = skills
             .iter()
             .filter(|s| {
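The relative-name computation in the hunks above (strip the skill-directory prefix, then normalize path separators) can be sketched in isolation; `supporting_file_name` here is a hypothetical helper for illustration, not part of the codebase:

```rust
use std::path::Path;

// Hypothetical helper mirroring the strip_prefix + separator-normalization
// pattern used when mapping a supporting file to a load_skill name.
fn supporting_file_name(skill_dir: &str, file: &str) -> Option<String> {
    Path::new(file)
        .strip_prefix(skill_dir)
        .ok()
        .map(|rel| rel.to_string_lossy().replace('\\', "/"))
}

fn main() {
    let rel = supporting_file_name("/skills/deploy", "/skills/deploy/scripts/run.sh");
    assert_eq!(rel.as_deref(), Some("scripts/run.sh"));
    // Files outside the skill directory yield None rather than a bogus name.
    assert_eq!(supporting_file_name("/skills/deploy", "/etc/passwd"), None);
}
```

Failing soft with `Option` matches the diff's `continue`-on-error handling: a file that does not live under the skill directory is simply skipped.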
387 crates/goose/src/skills/mod.rs Normal file
@@ -0,0 +1,387 @@
//! Everything specific to skills: filesystem discovery (`SKILL.md` walking +
//! built-ins) and the runtime MCP client (`client` submodule). User-facing
//! CRUD lives in `crate::sources`, which generalizes across source types.

mod builtin;
pub mod client;

pub use client::{SkillsClient, EXTENSION_NAME};

use crate::config::paths::Paths;
use crate::sources::parse_frontmatter;
use goose_sdk::custom_requests::{SourceEntry, SourceType};
use sacp::Error;
use serde::Deserialize;
use std::collections::HashSet;
use std::path::{Path, PathBuf};
use tracing::warn;

#[derive(Debug, Deserialize)]
pub struct SkillFrontmatter {
    #[serde(default)]
    pub name: Option<String>,
    #[serde(default)]
    pub description: String,
}

/// Canonical writable location for global user skills: `~/.agents/skills`.
pub fn global_skills_dir() -> Option<PathBuf> {
    dirs::home_dir().map(|h| h.join(".agents").join("skills"))
}

/// Canonical writable location for project-scoped skills:
/// `<project>/.goose/skills`.
pub fn project_skills_dir(project_dir: &Path) -> PathBuf {
    project_dir.join(".goose").join("skills")
}

pub(crate) fn skills_dir_global_or_err() -> Result<PathBuf, Error> {
    global_skills_dir()
        .ok_or_else(|| Error::internal_error().data("Could not determine home directory"))
}

pub(crate) fn skills_dir_project_or_err(project_dir: &str) -> Result<PathBuf, Error> {
    if project_dir.trim().is_empty() {
        return Err(
            Error::invalid_params().data("projectDir must not be empty when global is false")
        );
    }
    Ok(project_skills_dir(Path::new(project_dir)))
}

pub(crate) fn skill_base_dir(global: bool, project_dir: Option<&str>) -> Result<PathBuf, Error> {
    if global {
        skills_dir_global_or_err()
    } else {
        let pd = project_dir.ok_or_else(|| {
            Error::invalid_params().data("projectDir is required when global is false")
        })?;
        skills_dir_project_or_err(pd)
    }
}

pub(crate) fn validate_skill_name(name: &str) -> Result<(), Error> {
    if name.is_empty() {
        return Err(Error::invalid_params().data("Skill name must not be empty"));
    }
    if name.len() > 64 {
        return Err(Error::invalid_params().data(format!(
            "Invalid skill name \"{}\". Names must be at most 64 characters.",
            name
        )));
    }
    if !name
        .chars()
        .all(|ch| ch.is_ascii_lowercase() || ch.is_ascii_digit() || ch == '-')
    {
        return Err(Error::invalid_params().data(format!(
            "Invalid skill name \"{}\". Names may only contain lowercase letters, digits, and hyphens.",
            name
        )));
    }
    if name.starts_with('-') || name.ends_with('-') {
        return Err(Error::invalid_params().data(format!(
            "Invalid skill name \"{}\". Names must not start or end with a hyphen.",
            name
        )));
    }
    Ok(())
}
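The naming rules enforced by `validate_skill_name` can be restated as a single predicate; this is a sketch for illustration, not the crate's API:

```rust
// Hedged restatement of the rules above: 1..=64 characters, only ASCII
// lowercase letters, digits, and hyphens, with no leading or trailing hyphen.
fn is_valid_skill_name(name: &str) -> bool {
    !name.is_empty()
        && name.len() <= 64
        && name
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '-')
        && !name.starts_with('-')
        && !name.ends_with('-')
}

fn main() {
    assert!(is_valid_skill_name("deploy-v2"));
    assert!(!is_valid_skill_name("Deploy"));  // uppercase rejected
    assert!(!is_valid_skill_name("-deploy")); // leading hyphen rejected
    assert!(!is_valid_skill_name(""));        // empty rejected
}
```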

fn canonicalize_or_original(path: &Path) -> PathBuf {
    path.canonicalize().unwrap_or_else(|_| path.to_path_buf())
}

fn inferred_discoverable_skill_root(path: &Path) -> Option<PathBuf> {
    let canonical_path = canonicalize_or_original(path);

    let mut global_roots = Vec::new();
    if let Some(global_root) = global_skills_dir() {
        global_roots.push(global_root);
    }
    global_roots.push(Paths::config_dir().join("skills"));
    if let Some(home) = dirs::home_dir() {
        global_roots.push(home.join(".claude").join("skills"));
        global_roots.push(home.join(".config").join("agents").join("skills"));
    }

    for root in global_roots {
        let canonical_root = canonicalize_or_original(&root);
        if canonical_path.starts_with(&canonical_root) {
            return Some(canonical_root);
        }
    }

    canonical_path.ancestors().find_map(|ancestor| {
        let parent = ancestor.parent()?;
        let is_project_skills_root = ancestor.file_name().and_then(|name| name.to_str())
            == Some("skills")
            && matches!(
                parent.file_name().and_then(|name| name.to_str()),
                Some(".goose") | Some(".claude") | Some(".agents")
            );
        is_project_skills_root.then(|| ancestor.to_path_buf())
    })
}

pub(crate) fn resolve_discoverable_skill_dir(path: &str) -> Result<PathBuf, Error> {
    if path.is_empty() {
        return Err(Error::invalid_params().data("Source path must not be empty"));
    }

    let canonical_dir = Path::new(path)
        .canonicalize()
        .map_err(|_| Error::invalid_params().data(format!("Source \"{}\" not found", path)))?;

    if inferred_discoverable_skill_root(&canonical_dir).is_none()
        || !canonical_dir.is_dir()
        || !canonical_dir.join("SKILL.md").is_file()
    {
        return Err(Error::invalid_params().data(format!("Source \"{}\" not found", path)));
    }

    Ok(canonical_dir)
}

pub(crate) fn resolve_skill_dir(path: &str) -> Result<PathBuf, Error> {
    resolve_discoverable_skill_dir(path)
}

pub(crate) fn is_global_skill_dir(path: &Path) -> bool {
    global_skills_dir().as_deref().is_some_and(|root| {
        canonicalize_or_original(path).starts_with(canonicalize_or_original(root))
    })
}

pub(crate) fn infer_skill_name(dir: &Path) -> String {
    let md = dir.join("SKILL.md");
    if let Ok(raw) = std::fs::read_to_string(&md) {
        if let Ok(Some((meta, _))) = parse_frontmatter::<SkillFrontmatter>(&raw) {
            if let Some(n) = meta.name.filter(|n| !n.is_empty()) {
                return n;
            }
        }
    }
    dir.file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("unnamed")
        .to_string()
}

pub(crate) fn build_skill_md(name: &str, description: &str, content: &str) -> String {
    let safe_desc = description.replace('\'', "''");
    let mut md = format!("---\nname: {}\ndescription: '{}'\n---\n", name, safe_desc);
    if !content.is_empty() {
        md.push('\n');
        md.push_str(content);
        md.push('\n');
    }
    md
}
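The single-quote doubling in `build_skill_md` follows YAML's escaping rule for single-quoted scalars; a minimal sketch of the generated frontmatter, with an illustrative function name:

```rust
// Sketch of the frontmatter generation above: inside a single-quoted YAML
// scalar, a literal ' is written as ''.
fn frontmatter(name: &str, description: &str) -> String {
    let safe_desc = description.replace('\'', "''");
    format!("---\nname: {}\ndescription: '{}'\n---\n", name, safe_desc)
}

fn main() {
    let md = frontmatter("notes", "Alice's notes");
    assert_eq!(md, "---\nname: notes\ndescription: 'Alice''s notes'\n---\n");
}
```

Without the doubling, a description containing an apostrophe would terminate the quoted scalar early and break the frontmatter parse.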

pub(crate) fn parse_skill_frontmatter(raw: &str) -> (String, String) {
    if !raw.trim_start().starts_with("---") {
        return (String::new(), raw.to_string());
    }
    match parse_frontmatter::<SkillFrontmatter>(raw) {
        Ok(Some((meta, body))) => (meta.description, body),
        _ => (String::new(), raw.to_string()),
    }
}

/// Every directory the agent reads skills from, paired with whether each is a
/// global (home-rooted) location. Order matches discovery precedence: project
/// dirs first, then global dirs.
pub fn all_skill_dirs(working_dir: Option<&Path>) -> Vec<(PathBuf, bool)> {
    let mut dirs: Vec<(PathBuf, bool)> = Vec::new();

    if let Some(wd) = working_dir {
        dirs.push((wd.join(".goose").join("skills"), false));
        dirs.push((wd.join(".claude").join("skills"), false));
        dirs.push((wd.join(".agents").join("skills"), false));
    }

    let home = dirs::home_dir();
    if let Some(h) = home.as_ref() {
        dirs.push((h.join(".agents").join("skills"), true));
    }
    dirs.push((Paths::config_dir().join("skills"), true));
    if let Some(h) = home.as_ref() {
        dirs.push((h.join(".claude").join("skills"), true));
        dirs.push((h.join(".config").join("agents").join("skills"), true));
    }

    dirs
}

fn parse_skill_content(content: &str, path: &Path, global: bool) -> Option<SourceEntry> {
    let (metadata, body): (SkillFrontmatter, String) = match parse_frontmatter(content) {
        Ok(Some(parsed)) => parsed,
        Ok(None) => return None,
        Err(e) => {
            warn!("Failed to parse skill frontmatter: {}", e);
            return None;
        }
    };

    let name = match metadata.name.filter(|n| !n.is_empty()) {
        Some(n) => n,
        None => {
            warn!(
                "Skill at '{}' is missing a required 'name' in frontmatter, skipping",
                path.display()
            );
            return None;
        }
    };

    if name.contains('/') {
        warn!("Skill name '{}' contains '/', skipping", name);
        return None;
    }

    Some(SourceEntry {
        source_type: SourceType::Skill,
        name,
        description: metadata.description,
        content: body,
        directory: path.to_string_lossy().into_owned(),
        global,
        supporting_files: Vec::new(),
        properties: std::collections::HashMap::new(),
    })
}

fn should_skip_dir(path: &Path) -> bool {
    matches!(
        path.file_name().and_then(|name| name.to_str()),
        Some(".git") | Some(".hg") | Some(".svn")
    )
}

fn walk_files_recursively<F, G>(
    dir: &Path,
    visited_dirs: &mut HashSet<PathBuf>,
    should_descend: &mut G,
    visit_file: &mut F,
) where
    F: FnMut(&Path),
    G: FnMut(&Path) -> bool,
{
    let canonical_dir = match std::fs::canonicalize(dir) {
        Ok(path) => path,
        Err(_) => return,
    };

    if !visited_dirs.insert(canonical_dir) {
        return;
    }

    let entries = match std::fs::read_dir(dir) {
        Ok(e) => e,
        Err(_) => return,
    };

    for entry in entries.flatten() {
        let path = entry.path();
        if path.is_dir() {
            if should_descend(&path) {
                walk_files_recursively(&path, visited_dirs, should_descend, visit_file);
            }
        } else if path.is_file() {
            visit_file(&path);
        }
    }
}

fn scan_skills_from_dir(dir: &Path, global: bool, seen: &mut HashSet<String>) -> Vec<SourceEntry> {
    let mut skill_files = Vec::new();
    let mut visited_dirs = HashSet::new();

    walk_files_recursively(
        dir,
        &mut visited_dirs,
        &mut |path| !should_skip_dir(path),
        &mut |path| {
            if path.file_name().and_then(|name| name.to_str()) == Some("SKILL.md") {
                skill_files.push(path.to_path_buf());
            }
        },
    );

    let mut sources = Vec::new();
    for skill_file in skill_files {
        let Some(skill_dir) = skill_file.parent() else {
            continue;
        };
        let content = match std::fs::read_to_string(&skill_file) {
            Ok(c) => c,
            Err(e) => {
                warn!("Failed to read skill file {}: {}", skill_file.display(), e);
                continue;
            }
        };

        if let Some(mut source) = parse_skill_content(&content, skill_dir, global) {
            if !seen.contains(&source.name) {
                let mut files = Vec::new();
                let mut visited_support_dirs = HashSet::new();
                walk_files_recursively(
                    skill_dir,
                    &mut visited_support_dirs,
                    &mut |path| !should_skip_dir(path) && !path.join("SKILL.md").is_file(),
                    &mut |path| {
                        if path.file_name().and_then(|n| n.to_str()) != Some("SKILL.md") {
                            files.push(path.to_string_lossy().into_owned());
                        }
                    },
                );
                source.supporting_files = files;

                seen.insert(source.name.clone());
                sources.push(source);
            }
        }
    }
    sources
}

/// Discover skills from all configured filesystem locations and built-ins.
/// Each returned entry has `global` set according to the directory it was
/// found in (or `true` for built-ins).
pub fn discover_skills(working_dir: Option<&Path>) -> Vec<SourceEntry> {
    let mut sources: Vec<SourceEntry> = Vec::new();
    let mut seen = HashSet::new();

    for (dir, is_global) in all_skill_dirs(working_dir) {
        for source in scan_skills_from_dir(&dir, is_global, &mut seen) {
            sources.push(source);
        }
    }

    for content in builtin::get_all() {
        if let Some(source) = parse_skill_content(content, &PathBuf::new(), true) {
            if !seen.contains(&source.name) {
                seen.insert(source.name.clone());
                sources.push(SourceEntry {
                    source_type: SourceType::BuiltinSkill,
                    ..source
                });
            }
        }
    }

    sources
}

pub fn list_installed_skills(working_dir: Option<&Path>) -> Vec<SourceEntry> {
    let fallback;
    let wd = match working_dir {
        Some(p) => Some(p),
        None => {
            fallback = std::env::current_dir().ok();
            fallback.as_deref()
        }
    };
    discover_skills(wd)
}
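The precedence behavior of `discover_skills` (project directories first, then global directories, then built-ins, with the first occurrence of a name winning) can be sketched independently of the filesystem; the `(name, origin)` pairs below are illustrative, not real discovery output:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for discovery results: (skill name, origin), already
// ordered by precedence. The first occurrence of a name wins; later
// duplicates are dropped, exactly like the `seen` set in discover_skills.
fn dedup_by_precedence(
    found: Vec<(&'static str, &'static str)>,
) -> Vec<(&'static str, &'static str)> {
    let mut seen = HashSet::new();
    found.into_iter().filter(|(name, _)| seen.insert(*name)).collect()
}

fn main() {
    let found = vec![
        ("deploy", "<project>/.goose/skills"),
        ("deploy", "~/.agents/skills"), // shadowed by the project copy
        ("review", "~/.agents/skills"),
        ("review", "builtin"), // shadowed by the global copy
    ];
    assert_eq!(
        dedup_by_precedence(found),
        vec![
            ("deploy", "<project>/.goose/skills"),
            ("review", "~/.agents/skills"),
        ]
    );
}
```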

File diff suppressed because it is too large
@@ -57,12 +57,18 @@ pub async fn run_list_sessions<C: Connection>() {
 let mut response = conn.list_sessions().await.unwrap();
 for s in &mut response.sessions {
     s.updated_at = None;
+    // createdAt is a dynamic timestamp — verify it exists then remove for comparison.
+    if let Some(ref mut meta) = s.meta {
+        assert!(meta.get("createdAt").and_then(|v| v.as_str()).is_some());
+        meta.remove("createdAt");
+    }
 }
 let mut expected_meta = serde_json::Map::new();
 expected_meta.insert(
     "messageCount".to_string(),
     serde_json::Value::Number(2.into()),
 );
 expected_meta.insert("userSetName".to_string(), serde_json::Value::Bool(false));
 assert_eq!(
     response,
     ListSessionsResponse::new(vec.
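The pattern in this test (assert a dynamic field exists, then strip it so the rest compares exactly) can be sketched with plain maps; the field names here mirror the test but the helper is illustrative:

```rust
use std::collections::BTreeMap;

// Assert that the dynamic field is present, then drop it so the remaining
// metadata can be compared deterministically.
fn normalize(meta: &mut BTreeMap<String, String>) {
    assert!(meta.contains_key("createdAt"));
    meta.remove("createdAt");
}

fn main() {
    let mut meta = BTreeMap::from([
        ("createdAt".to_string(), "2026-04-21T01:24:03Z".to_string()),
        ("messageCount".to_string(), "2".to_string()),
    ]);
    normalize(&mut meta);
    assert_eq!(
        meta,
        BTreeMap::from([("messageCount".to_string(), "2".to_string())])
    );
}
```

Asserting presence before removal matters: silently removing a missing field would let a regression (the timestamp never being set) pass unnoticed.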
@@ -152,19 +156,33 @@ import { PanelLeft } from 'lucide-react';
 - **MSYS2**: Available from [msys2.org](https://www.msys2.org/)
 - **PowerShell**: Available on Windows 10/11 by default

-Run the installation command in your chosen environment:
+Use the standard build for the general-purpose Windows install. Use the CUDA variant if you have an NVIDIA GPU with a compatible CUDA driver/runtime.
+
+**Git Bash / MSYS2: Standard**

 ```bash
 curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | bash
 ```

+**Git Bash / MSYS2: Windows CUDA**
+
+```bash
+curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | GOOSE_WINDOWS_VARIANT=cuda bash
+```
+
 To install without interactive configuration, disable `CONFIGURE`:

 ```bash
 curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | CONFIGURE=false bash
 ```

-**PowerShell Installation:**
+To install the CUDA variant without interactive configuration:
+
+```bash
+curl -fsSL https://github.com/aaif-goose/goose/releases/download/stable/download_cli.sh | GOOSE_WINDOWS_VARIANT=cuda CONFIGURE=false bash
+```
+
+**PowerShell Installation: Standard**
 Download the PowerShell installation script to your current directory.

 ```powershell
@@ -175,6 +193,14 @@ import { PanelLeft } from 'lucide-react';
 .\download_cli.ps1
 ```

+**PowerShell Installation: Windows CUDA**
+
+```powershell
+Invoke-WebRequest -Uri "https://raw.githubusercontent.com/aaif-goose/goose/main/download_cli.ps1" -OutFile "download_cli.ps1";
+$env:GOOSE_WINDOWS_VARIANT="cuda"
+.\download_cli.ps1
+```
+
 :::info Windows PATH Setup
 If you see a warning that goose is not in your PATH, you need to add goose to your PATH:
@@ -628,6 +628,7 @@ Once you're in an interactive session (via `goose session` or `goose run --inter
 - **`/recipe [filepath]`** - Generate a recipe from the current conversation and save it to the specified filepath (must end with .yaml). If no filepath is provided, it will be saved to ./recipe.yaml
 - **`/compact`** - Compact and summarize the current conversation to reduce context length while preserving key information
 - **`/r`** - Toggle full tool output display (show complete tool parameters without truncation)
+- **`/skills`** - List available skills
 - **`/t`** - Toggle between `light`, `dark`, and `ansi` themes. [More info](#themes).
 - **`/t <name>`** - Set theme directly (light, dark, ansi)
117 documentation/docs/mcp/exa-mcp.md Normal file
@@ -0,0 +1,117 @@
---
title: Exa Search Extension
description: Add Exa MCP Server as a goose Extension
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import GooseDesktopInstaller from '@site/src/components/GooseDesktopInstaller';
import CLIExtensionInstructions from '@site/src/components/CLIExtensionInstructions';

This tutorial covers how to add the [Exa MCP Server](https://github.com/exa-labs/exa-mcp-server) as a goose extension to enable AI-powered web search functionality.

:::tip Quick Install
<Tabs groupId="interface">
  <TabItem value="ui" label="goose Desktop" default>
  [Launch the installer](goose://extension?cmd=npx&arg=-y&arg=exa-mcp-server&id=exa&name=Exa%20Search&description=AI-powered%20web%20search&env=EXA_API_KEY%3DExa%20API%20Key)
  </TabItem>
  <TabItem value="cli" label="goose CLI">
  **Command**
  ```sh
  npx -y exa-mcp-server
  ```
  </TabItem>
</Tabs>
**Environment Variable**
```
EXA_API_KEY: <YOUR_API_KEY>
```
:::

## Configuration

:::info
Note that you'll need [Node.js](https://nodejs.org/) installed on your system to run this command, as it uses `npx`.
:::

<Tabs groupId="interface">
  <TabItem value="ui" label="goose Desktop" default>
  <GooseDesktopInstaller
    extensionId="exa"
    extensionName="Exa Search"
    description="AI-powered web search"
    command="npx"
    args={["-y", "exa-mcp-server"]}
    envVars={[
      { name: "EXA_API_KEY", label: "Exa API Key" }
    ]}
    apiKeyLink="https://dashboard.exa.ai/api-keys"
    apiKeyLinkText="Exa API Key"
  />
  </TabItem>
  <TabItem value="cli" label="goose CLI">
  <CLIExtensionInstructions
    name="Exa Search"
    description="AI-powered web search"
    command="npx -y exa-mcp-server"
    envVars={[
      { key: "EXA_API_KEY", value: "▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪" }
    ]}
    infoNote={
      <>
        Obtain your <a href="https://dashboard.exa.ai/api-keys" target="_blank" rel="noopener noreferrer">Exa API Key</a> and paste it in. You can get your API key by signing up at <a href="https://exa.ai" target="_blank" rel="noopener noreferrer">exa.ai</a> and navigating to the API keys page in the dashboard.
      </>
    }
  />
  </TabItem>
</Tabs>

## Example Usage

The Exa MCP server enables AI-powered web search in your goose interactions. Exa offers advanced search capabilities including:

1. Neural, keyword, and auto search modes
2. Content retrieval with full text, highlights, or summaries
3. Category filtering (news, research papers, companies, and more)
4. Domain and date range filtering

### goose Prompt

```
Find recent research papers about large language model reasoning capabilities and summarize the key findings.
```

### goose Output

```
I'll use Exa to search for recent research papers on LLM reasoning.

─── web_search_exa | exa ──────────────────────────
query: large language model reasoning capabilities research paper
category: research paper


Here are the key findings from recent research on LLM reasoning:

1. Chain-of-Thought Reasoning:
   - Recent papers show that explicit chain-of-thought prompting significantly improves performance on multi-step reasoning tasks
   - Models trained with reasoning traces demonstrate better generalization to novel problem types

2. Mathematical and Logical Reasoning:
   - New benchmarks reveal that while LLMs have improved at arithmetic and formal logic, they still struggle with compositional reasoning
   - Hybrid approaches combining neural methods with symbolic solvers show promising results

3. Commonsense Reasoning:
   - Studies demonstrate that scaling model size continues to improve commonsense reasoning benchmarks
   - However, models still fail on tasks requiring physical intuition or temporal reasoning about everyday events

4. Limitations and Future Directions:
   - Several papers identify "reasoning shortcuts" where models exploit dataset patterns rather than performing genuine inference
   - Research into retrieval-augmented reasoning shows potential for grounding model outputs in verified facts

Would you like me to search for more details on any of these topics?
```

:::tip
Exa supports multiple search types including neural search for semantic queries and auto mode that intelligently combines search strategies. You can also filter results by category, domain, and date range for more targeted research.
:::
@@ -0,0 +1,73 @@
---
title: Debug Desktop Startup Failures
sidebar_label: Debug Desktop Startup Failures
description: Find the desktop startup diagnostics log, understand the key fields, and share the right artifacts when goose fails to start.
---

When goose Desktop fails before the backend becomes ready, the normal server log may be empty or incomplete. In that case, the most useful artifact is the startup diagnostics JSON written by the desktop app.

## Find the Startup Diagnostics Log

goose Desktop writes one startup diagnostics file per launch attempt.

Typical locations:

- macOS: `~/Library/Application Support/Goose/logs/startup/`
- Windows: `%APPDATA%\Goose\logs\startup\`
- Linux: `~/.config/Goose/logs/startup/`

The files are named like:

```text
goosed-startup-2026-04-21T01-24-03.149Z-23416.json
```

If several files exist, use the newest one.

## What To Share

When reporting a desktop startup failure, share:

- the newest `goosed-startup-*.json`
- your goose version
- your operating system and version

For Windows native crashes, also attach the Windows crash report for `goosed.exe` if available.

Common places to find the Windows crash report:

- Event Viewer: `Windows Logs` → `Application`
- Reliability Monitor: `View technical details`
- WER files on disk:
  - `%LOCALAPPDATA%\Microsoft\Windows\WER\ReportArchive\`
  - `%LOCALAPPDATA%\Microsoft\Windows\WER\ReportQueue\`

Look for a `Report.wer` related to `goosed.exe`.

If you are filing a GitHub issue or asking for support, this is usually enough:

- the newest `goosed-startup-*.json`
- your goose version
- your operating system and version
- on Windows, `Report.wer` for `goosed.exe` if Windows created one

## What The Startup Log Contains

In most cases, sharing the newest startup log is enough.

If you want a quick high-level read, focus on these fields:

- `childExitCode` or `childExitSignal`: shows whether the backend process exited during startup.
- `certFingerprintSeen`: shows whether the backend reached the TLS startup stage.
- `healthCheckSucceeded`: shows whether the desktop app ever observed the backend as ready.
- `stderrTail`: shows the most recent startup output captured from the backend, including major startup stage markers when available.
- `events`: shows the order of major startup steps like process spawn, health check, and child exit.

## Related Diagnostics

For session or in-app issues after goose has started, use the normal diagnostics bundle described in [Diagnostics and Reporting](/docs/troubleshooting/diagnostics-and-reporting).
@@ -20,6 +20,11 @@ import styles from '@site/src/components/Card/styles.module.css';
     description="Use built-in diagnostics, report bugs, and request new features. Includes step-by-step guides for generating troubleshooting data."
     link="/docs/troubleshooting/diagnostics-and-reporting"
   />
+  <Card
+    title="Debug Desktop Startup Failures"
+    description="Find the desktop startup diagnostics log, understand the key fields, and attach the right artifacts when goose fails before the backend becomes ready."
+    link="/docs/troubleshooting/desktop-startup-debugging"
+  />
   <Card
     title="Known Issues"
     description="Comprehensive troubleshooting guide covering common problems, error messages, and platform-specific issues with step-by-step solutions."
14 documentation/package-lock.json generated
@@ -18,7 +18,7 @@
     "dotenv": "^16.4.7",
     "framer-motion": "^11.0.0",
     "lucide-react": "^0.475.0",
-    "postcss": "^8.4.35",
+    "postcss": "^8.5.10",
     "postcss-import": "^16.1.0",
     "prism-react-renderer": "^2.3.0",
     "react": "^19.0.0",
@@ -9162,9 +9162,9 @@
     }
   },
   "node_modules/follow-redirects": {
-    "version": "1.15.11",
-    "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz",
-    "integrity": "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==",
+    "version": "1.16.0",
+    "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.16.0.tgz",
+    "integrity": "sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw==",
     "funding": [
       {
         "type": "individual",
@@ -14233,9 +14233,9 @@
     }
   },
   "node_modules/postcss": {
-    "version": "8.5.6",
-    "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz",
-    "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==",
+    "version": "8.5.10",
+    "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.10.tgz",
+    "integrity": "sha512-pMMHxBOZKFU6HgAZ4eyGnwXF/EvPGGqUr0MnZ5+99485wwW41kW91A4LOGxSHhgugZmSChL5AlElNdwlNgcnLQ==",
     "funding": [
       {
         "type": "opencollective",
@@ -27,7 +27,7 @@
     "dotenv": "^16.4.7",
     "framer-motion": "^11.0.0",
     "lucide-react": "^0.475.0",
-    "postcss": "^8.4.35",
+    "postcss": "^8.5.10",
     "postcss-import": "^16.1.0",
     "prism-react-renderer": "^2.3.0",
     "react": "^19.0.0",
@@ -5,13 +5,19 @@ const WindowsDesktopInstallButtons = () => {
   return (
     <div>
       <p>Click one of the buttons below to download goose Desktop for Windows:</p>
-      <div className="pill-button">
+      <div className="pill-button" style={{ display: "flex", gap: "0.75rem", flexWrap: "wrap" }}>
         <Link
           className="button button--primary button--lg"
           to="https://github.com/aaif-goose/goose/releases/download/stable/Goose-win32-x64.zip"
         >
           <IconDownload /> Windows
         </Link>
+        <Link
+          className="button button--primary button--lg"
+          to="https://github.com/aaif-goose/goose/releases/download/stable/Goose-win32-x64-cuda.zip"
+        >
+          <IconDownload /> Windows CUDA
+        </Link>
       </div>
     </div>
   );
@@ -280,6 +280,23 @@
       }
     ]
   },
+  {
+    "id": "exa",
+    "name": "Exa Search",
+    "description": "AI-powered web search with neural and keyword capabilities",
+    "command": "npx -y exa-mcp-server",
+    "link": "https://github.com/exa-labs/exa-mcp-server",
+    "installation_notes": "Install using npx package manager. Requires Exa API key.",
+    "is_builtin": false,
+    "endorsed": false,
+    "environmentVariables": [
+      {
+        "name": "EXA_API_KEY",
+        "description": "API key for Exa web search service",
+        "required": true
+      }
+    ]
+  },
   {
     "id": "excalidraw-mcp-app",
     "name": "Excalidraw",
@@ -16,6 +16,7 @@
 # $env:GOOSE_VERSION - Optional: specific version to install (e.g., "v1.0.25"). Can be in the format vX.Y.Z, vX.Y.Z-suffix, or X.Y.Z
 # $env:GOOSE_PROVIDER - Optional: provider for goose
 # $env:GOOSE_MODEL - Optional: model for goose
+# $env:GOOSE_WINDOWS_VARIANT - Optional: Windows package variant to install ("standard" or "cuda")
 # $env:CANARY - Optional: if set to "true", downloads from canary release instead of stable
 # $env:CONFIGURE - Optional: if set to "false", disables running goose configure interactively
 ##############################################################################
@@ -35,6 +36,7 @@ if (-not $env:GOOSE_BIN_DIR) {
 # Determine release type
 $RELEASE = if ($env:CANARY -eq "true") { "true" } else { "false" }
 $CONFIGURE = if ($env:CONFIGURE -eq "false") { "false" } else { "true" }
+$WINDOWS_VARIANT = if ($env:GOOSE_WINDOWS_VARIANT) { $env:GOOSE_WINDOWS_VARIANT.ToLowerInvariant() } else { "standard" }

 # Determine release tag
 if ($env:GOOSE_VERSION) {
@@ -62,8 +64,13 @@ if ($ARCH -eq "AMD64") {
     exit 1
 }

+if ($WINDOWS_VARIANT -ne "standard" -and $WINDOWS_VARIANT -ne "cuda") {
+    Write-Error "Unsupported GOOSE_WINDOWS_VARIANT '$WINDOWS_VARIANT'. Expected 'standard' or 'cuda'."
+    exit 1
+}
+
 # --- 3) Build download URL ---
-$FILE = "goose-$ARCH-pc-windows-msvc.zip"
+$FILE = if ($WINDOWS_VARIANT -eq "cuda") { "goose-$ARCH-pc-windows-msvc-cuda.zip" } else { "goose-$ARCH-pc-windows-msvc.zip" }
 $DOWNLOAD_URL = "https://github.com/$REPO/releases/download/$RELEASE_TAG/$FILE"

 Write-Host "Downloading $RELEASE_TAG release: $FILE..." -ForegroundColor Green
|
|
@@ -18,6 +18,7 @@ set -eu
# GOOSE_VERSION - Optional: specific version to install (e.g., "v1.0.25"). Overrides CANARY. Can be in the format vX.Y.Z, vX.Y.Z-suffix, or X.Y.Z
# GOOSE_PROVIDER - Optional: provider for goose
# GOOSE_MODEL - Optional: model for goose
# GOOSE_WINDOWS_VARIANT - Optional: Windows package variant to install (`standard` or `cuda`)
# CANARY - Optional: if set to "true", downloads from canary release instead of stable
# CONFIGURE - Optional: if set to "false", disables running goose configure interactively
# ** other provider specific environment variables (eg. DATABRICKS_HOST)

@@ -73,6 +74,7 @@ fi
GOOSE_BIN_DIR="${GOOSE_BIN_DIR:-$DEFAULT_BIN_DIR}"
RELEASE="${CANARY:-false}"
CONFIGURE="${CONFIGURE:-true}"
GOOSE_WINDOWS_VARIANT="${GOOSE_WINDOWS_VARIANT:-standard}"
if [ -n "${GOOSE_VERSION:-}" ]; then
    # Validate the version format
    if [[ ! "$GOOSE_VERSION" =~ ^v?[0-9]+\.[0-9]+\.[0-9]+(-.*)?$ ]]; then

@@ -179,12 +181,22 @@ if [ "$OS" = "darwin" ]; then
    FILE="goose-$ARCH-apple-darwin.tar.bz2"
    EXTRACT_CMD="tar"
elif [ "$OS" = "windows" ]; then
    case "$GOOSE_WINDOWS_VARIANT" in
        standard|cuda) ;;
        *)
            echo "Error: Unsupported GOOSE_WINDOWS_VARIANT '$GOOSE_WINDOWS_VARIANT'. Expected 'standard' or 'cuda'."
            exit 1
            ;;
    esac
    # Windows only supports x86_64 currently
    if [ "$ARCH" != "x86_64" ]; then
        echo "Error: Windows currently only supports x86_64 architecture."
        exit 1
    fi
    FILE="goose-$ARCH-pc-windows-msvc.zip"
    if [ "$GOOSE_WINDOWS_VARIANT" = "cuda" ]; then
        FILE="goose-$ARCH-pc-windows-msvc-cuda.zip"
    fi
    EXTRACT_CMD="unzip"
    OUT_FILE="goose.exe"
else
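The `GOOSE_VERSION` regex above accepts `vX.Y.Z`, bare `X.Y.Z`, and suffixed pre-release forms like `vX.Y.Z-beta.1`. A quick standalone sketch of the same check; `validate_version` is an illustrative wrapper, not a function from the script:

```shell
# Mirrors the installer's GOOSE_VERSION validation regex in isolation.
validate_version() {
  [[ "$1" =~ ^v?[0-9]+\.[0-9]+\.[0-9]+(-.*)?$ ]]
}

for v in v1.0.25 1.0.25 v1.0.25-beta.1; do
  validate_version "$v" && echo "$v: accepted"
done
validate_version latest || echo "latest: rejected"
```

Note the optional leading `v` and the optional `-suffix` group, while incomplete versions such as `v1.0` are rejected because all three numeric components are required.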
@@ -1,71 +0,0 @@
#!/bin/bash

LIB_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$LIB_DIR/test_providers_lib.sh"

echo "Mode: normal (direct tool calls)"
echo ""

GOOSE_BIN=$(build_goose)
BUILTINS="developer"

mkdir -p target
TEST_CONTENT="test-content-abc123"
TEST_FILE="./target/test-content.txt"
echo "$TEST_CONTENT" > "$TEST_FILE"

run_test() {
    local provider="$1" model="$2" result_file="$3" output_file="$4"
    local testdir=$(mktemp -d)

    local prompt
    if is_agentic_provider "$provider"; then
        cp "$TEST_FILE" "$testdir/test-content.txt"
        prompt="read ./test-content.txt and output its contents exactly"
    else
        # Write two files with unique random tokens. Validation checks that the shell
        # tool was used and that both tokens appear in the output, proving the model
        # actually read the files (random tokens can't be guessed or hallucinated).
        local token_a="smoke-alpha-$RANDOM"
        local token_b="smoke-bravo-$RANDOM"
        echo "$token_a" > "$testdir/part-a.txt"
        echo "$token_b" > "$testdir/part-b.txt"
        # Store tokens so validation can check them
        echo "$token_a" > "$testdir/.token_a"
        echo "$token_b" > "$testdir/.token_b"
        prompt="Use the shell tool to cat ./part-a.txt and ./part-b.txt, then reply with ONLY the contents of both files, one per line, nothing else."
    fi

    (
        export GOOSE_PROVIDER="$provider"
        export GOOSE_MODEL="$model"
        cd "$testdir" && "$GOOSE_BIN" run --text "$prompt" --with-builtin "$BUILTINS" 2>&1
    ) > "$output_file" 2>&1

    if is_agentic_provider "$provider"; then
        if grep -qi "$TEST_CONTENT" "$output_file"; then
            echo "success|test content found by model" > "$result_file"
        else
            echo "failure|test content not found by model" > "$result_file"
        fi
    else
        local token_a token_b
        token_a=$(cat "$testdir/.token_a")
        token_b=$(cat "$testdir/.token_b")
        if ! grep -qE "(shell \| developer)|(▸.*shell)" "$output_file"; then
            echo "failure|model did not use shell tool" > "$result_file"
        elif ! grep -q "$token_a" "$output_file"; then
            echo "failure|model did not return contents of part-a.txt ($token_a)" > "$result_file"
        elif ! grep -q "$token_b" "$output_file"; then
            echo "failure|model did not return contents of part-b.txt ($token_b)" > "$result_file"
        else
            echo "success|model read and returned both file contents" > "$result_file"
        fi
    fi

    rm -rf "$testdir"
}

build_test_cases
run_test_cases run_test
report_results
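Each test writes a `status|message` pair into its result file, which the reporting side splits back apart with parameter expansion. A small sketch of that round trip (the temp-file path here is arbitrary, not one the scripts use):

```shell
# Round-trip of the "status|message" result-file convention shared by
# run_test and report_results. The file path is just an example.
result_file=$(mktemp)
echo "failure|model did not use shell tool" > "$result_file"

result_line=$(cat "$result_file")
status="${result_line%%|*}"   # text before the first '|'
msg="${result_line#*|}"       # text after the first '|'
echo "status=$status"         # status=failure
echo "msg=$msg"               # msg=model did not use shell tool
rm -f "$result_file"
```

Using `%%|*` (longest suffix strip) for the status and `#*|` (shortest prefix strip) for the message means a `|` inside the message itself stays intact.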
@@ -1,45 +0,0 @@
#!/bin/bash
# Provider smoke tests - code execution mode (JS batching)

LIB_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$LIB_DIR/test_providers_lib.sh"

echo "Mode: code_execution (JS batching)"
echo ""

# --- Setup ---

GOOSE_BIN=$(build_goose)
BUILTINS="memory,code_execution"

# --- Test case ---

run_test() {
    local provider="$1" model="$2" result_file="$3" output_file="$4"
    local testdir=$(mktemp -d)

    local prompt="Store a memory with category 'test' and data 'hello world', then retrieve all memories from category 'test'."

    # Run goose
    (
        export GOOSE_PROVIDER="$provider"
        export GOOSE_MODEL="$model"
        cd "$testdir" && "$GOOSE_BIN" run --text "$prompt" --with-builtin "$BUILTINS" 2>&1
    ) > "$output_file" 2>&1

    # Matches: "execute_typescript | code_execution", "get_function_details | code_execution",
    # "tool call | execute", "tool calls | execute" (old format)
    # "▸ execute N tool call" (new format with tool_graph)
    # "▸ execute_typescript" (plain tool name in output)
    if grep -qE "(execute_typescript \| code_execution)|(get_function_details \| code_execution)|(tool calls? \| execute)|(▸.*execute.*tool call)|(▸ execute_typescript)" "$output_file"; then
        echo "success|code_execution tool called" > "$result_file"
    else
        echo "failure|no code_execution tool calls found" > "$result_file"
    fi

    rm -rf "$testdir"
}

build_test_cases --skip-agentic
run_test_cases run_test
report_results
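The detection regex above is a union of five alternatives covering old and new transcript formats. A sketch feeding it sample lines shows which ones count as a code_execution tool call (the sample transcript lines are illustrative):

```shell
# Exercise the code_execution detection regex against sample transcript lines.
pattern='(execute_typescript \| code_execution)|(get_function_details \| code_execution)|(tool calls? \| execute)|(▸.*execute.*tool call)|(▸ execute_typescript)'

echo 'execute_typescript | code_execution' | grep -qE "$pattern" && echo "match: old format"
echo '▸ execute 3 tool calls'              | grep -qE "$pattern" && echo "match: tool_graph format"
echo 'shell | developer'                   | grep -qE "$pattern" || echo "no match: unrelated tool"
```

The `\|` escapes match a literal pipe in the transcript, while the unescaped `|` separates alternatives; `calls?` makes the singular/plural forms both match.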
@@ -1,244 +0,0 @@
#!/bin/bash

PROVIDER_CONFIG="
openrouter -> google/gemini-2.5-pro|anthropic/claude-sonnet-4.5|qwen/qwen3-coder:exacto|z-ai/glm-4.6:exacto|nvidia/nemotron-3-nano-30b-a3b
xai -> grok-3
openai -> gpt-4o|gpt-4o-mini|gpt-3.5-turbo|gpt-5
anthropic -> claude-sonnet-4-5-20250929|claude-opus-4-5-20251101
google -> gemini-2.5-pro|gemini-2.5-flash|gemini-3-pro-preview|gemini-3-flash-preview
tetrate -> claude-sonnet-4-20250514
databricks -> databricks-claude-sonnet-4|gemini-2-5-flash|gpt-4o
azure_openai -> ${AZURE_OPENAI_DEPLOYMENT_NAME}
aws_bedrock -> us.anthropic.claude-sonnet-4-5-20250929-v1:0
gcp_vertex_ai -> gemini-2.5-pro
snowflake -> claude-sonnet-4-5
venice -> llama-3.3-70b
litellm -> gpt-4o-mini
sagemaker_tgi -> sagemaker-tgi-endpoint
github_copilot -> gpt-4.1
chatgpt_codex -> gpt-5.1-codex
claude-code -> default
codex -> gpt-5.2-codex
gemini-cli -> gemini-2.5-pro
cursor-agent -> auto
ollama -> qwen3
"

# Flaky models allowed to fail without blocking PRs.
ALLOWED_FAILURES=(
    "google:gemini-2.5-flash"
    "google:gemini-3-pro-preview"
    "openrouter:nvidia/nemotron-3-nano-30b-a3b"
    "openrouter:qwen/qwen3-coder:exacto"
    "openai:gpt-3.5-turbo"
)

AGENTIC_PROVIDERS=("claude-code" "codex" "gemini-cli" "cursor-agent")

if [ -f .env ]; then
    export $(grep -v '^#' .env | xargs)
fi

build_goose() {
    if [ -z "$SKIP_BUILD" ]; then
        echo "Building goose..." >&2
        cargo build --bin goose >&2
        echo "" >&2
    else
        echo "Skipping build (SKIP_BUILD is set)..." >&2
        echo "" >&2
    fi

    echo "$(pwd)/target/debug/goose"
}

has_env() { [ -n "${!1}" ]; }
has_cmd() { command -v "$1" &>/dev/null; }
has_file() { [ -f "$1" ]; }

is_provider_available() {
    case "$1" in
        openrouter) has_env OPENROUTER_API_KEY ;;
        xai) has_env XAI_API_KEY ;;
        openai) has_env OPENAI_API_KEY ;;
        anthropic) has_env ANTHROPIC_API_KEY ;;
        google) has_env GOOGLE_API_KEY ;;
        tetrate) has_env TETRATE_API_KEY ;;
        databricks) has_env DATABRICKS_HOST && has_env DATABRICKS_TOKEN ;;
        azure_openai) has_env AZURE_OPENAI_ENDPOINT && has_env AZURE_OPENAI_DEPLOYMENT_NAME ;;
        aws_bedrock) has_env AWS_REGION && { has_env AWS_PROFILE || has_env AWS_ACCESS_KEY_ID; } ;;
        gcp_vertex_ai) has_env GCP_PROJECT_ID ;;
        snowflake) has_env SNOWFLAKE_HOST && has_env SNOWFLAKE_TOKEN ;;
        venice) has_env VENICE_API_KEY ;;
        litellm) has_env LITELLM_API_KEY ;;
        sagemaker_tgi) has_env SAGEMAKER_ENDPOINT_NAME && has_env AWS_REGION ;;
        github_copilot) has_env GITHUB_COPILOT_TOKEN || has_file "$HOME/.config/goose/github_copilot_token.json" ;;
        chatgpt_codex) has_env CHATGPT_CODEX_TOKEN || has_file "$HOME/.config/goose/chatgpt_codex_token.json" ;;
        ollama) has_env OLLAMA_HOST || has_cmd ollama ;;
        claude-code) has_cmd claude ;;
        codex) has_cmd codex ;;
        gemini-cli) has_cmd gemini ;;
        cursor-agent) has_cmd cursor-agent ;;
        *) return 0 ;;
    esac
}

is_allowed_failure() {
    local key="${1}:${2}"
    for allowed in "${ALLOWED_FAILURES[@]}"; do
        [ "$allowed" = "$key" ] && return 0
    done
    return 1
}

should_skip_provider() {
    [ -z "$SKIP_PROVIDERS" ] && return 1
    IFS=',' read -ra SKIP_LIST <<< "$SKIP_PROVIDERS"
    for skip in "${SKIP_LIST[@]}"; do
        skip=$(echo "$skip" | xargs)
        [ "$skip" = "$1" ] && return 0
    done
    return 1
}

is_agentic_provider() {
    for agentic in "${AGENTIC_PROVIDERS[@]}"; do
        [ "$agentic" = "$1" ] && return 0
    done
    return 1
}

# build_test_cases [--skip-agentic]
build_test_cases() {
    local skip_agentic=false
    [ "$1" = "--skip-agentic" ] && skip_agentic=true

    local providers=()
    while IFS= read -r line; do
        [[ "$line" =~ ^#.*$ || -z "$line" ]] && continue
        local provider="${line%% -> *}"
        if is_provider_available "$provider"; then
            providers+=("$line")
            echo "✓ Including $provider"
        else
            echo "⚠️ Skipping $provider (prerequisites not met)"
        fi
    done <<< "$PROVIDER_CONFIG"
    echo ""

    TEST_CASES=()
    local job_index=0
    for provider_config in "${providers[@]}"; do
        local provider="${provider_config%% -> *}"
        local models_str="${provider_config#* -> }"

        if should_skip_provider "$provider"; then
            echo "⊘ Skipping provider: ${provider} (SKIP_PROVIDERS)"
            continue
        fi

        if [ "$skip_agentic" = true ] && is_agentic_provider "$provider"; then
            echo "⊘ Skipping agentic provider: ${provider}"
            continue
        fi

        IFS='|' read -ra models <<< "$models_str"
        for model in "${models[@]}"; do
            TEST_CASES+=("$provider|$model|$job_index")
            ((job_index++))
        done
    done
}

# run_test_cases <test_fn>
run_test_cases() {
    local test_fn="$1"

    RESULTS_DIR=$(mktemp -d)
    trap 'if [ -n "${RESULTS_DIR:-}" ]; then rm -rf -- "$RESULTS_DIR"; fi; if [ -n "${CLEANUP_DIR:-}" ]; then rm -rf -- "$CLEANUP_DIR"; fi' EXIT
    MAX_PARALLEL=${MAX_PARALLEL:-$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 8)}
    echo "Running ${#TEST_CASES[@]} tests (max $MAX_PARALLEL parallel)"
    echo ""

    local running=0
    for ((i=0; i<${#TEST_CASES[@]}; i++)); do
        IFS='|' read -r provider model idx <<< "${TEST_CASES[$i]}"

        if [ $i -eq 0 ]; then
            # First test runs sequentially to catch early failures
            "$test_fn" "$provider" "$model" "$RESULTS_DIR/result_$idx" "$RESULTS_DIR/output_$idx"
        else
            "$test_fn" "$provider" "$model" "$RESULTS_DIR/result_$idx" "$RESULTS_DIR/output_$idx" &
            ((running++))
            if [ $running -ge $MAX_PARALLEL ]; then
                wait -n 2>/dev/null || wait
                ((running--))
            fi
        fi
    done
    wait
}

report_results() {
    echo ""
    echo "=== Test Results ==="
    echo ""

    RESULTS=()
    HARD_FAILURES=()

    for job in "${TEST_CASES[@]}"; do
        IFS='|' read -r provider model idx <<< "$job"

        echo "Provider: $provider"
        echo "Model: $model"
        echo ""
        cat "$RESULTS_DIR/output_$idx"
        echo ""

        local result_line=""
        [ -f "$RESULTS_DIR/result_$idx" ] && result_line=$(cat "$RESULTS_DIR/result_$idx")
        local status="${result_line%%|*}"
        local msg="${result_line#*|}"

        if [ "$status" = "success" ]; then
            echo "✓ SUCCESS: $msg"
            RESULTS+=("✓ ${provider}: ${model}")
        else
            if is_allowed_failure "$provider" "$model"; then
                echo "⚠ FLAKY: $msg"
                RESULTS+=("⚠ ${provider}: ${model} (flaky)")
            else
                echo "✗ FAILED: $msg"
                RESULTS+=("✗ ${provider}: ${model}")
                HARD_FAILURES+=("${provider}: ${model}")
            fi
        fi
        echo "---"
    done

    echo ""
    echo "=== Test Summary ==="
    for result in "${RESULTS[@]}"; do
        echo "$result"
    done

    if [ ${#HARD_FAILURES[@]} -gt 0 ]; then
        echo ""
        echo "Hard failures (${#HARD_FAILURES[@]}):"
        for failure in "${HARD_FAILURES[@]}"; do
            echo " - $failure"
        done
        echo ""
        echo "Some tests failed!"
        exit 1
    else
        if echo "${RESULTS[@]}" | grep -q "⚠"; then
            echo ""
            echo "All required tests passed! (some flaky tests failed but are allowed)"
        else
            echo ""
            echo "All tests passed!"
        fi
    fi
}
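Each `PROVIDER_CONFIG` entry is a `provider -> model|model|...` line that `build_test_cases` splits with prefix/suffix parameter expansion and an `IFS` read. In isolation, using the real `openai` line from the config above:

```shell
# Standalone sketch of how build_test_cases splits one PROVIDER_CONFIG line.
line="openai -> gpt-4o|gpt-4o-mini|gpt-3.5-turbo|gpt-5"
provider="${line%% -> *}"    # strip the longest ' -> *' suffix -> provider name
models_str="${line#* -> }"   # strip the shortest '* -> ' prefix -> model list
IFS='|' read -ra models <<< "$models_str"

echo "provider=$provider"          # provider=openai
echo "model count=${#models[@]}"   # model count=4
echo "first=${models[0]}"          # first=gpt-4o
```

Setting `IFS` only for the `read` keeps the pipe-splitting local to that one command, so the rest of the script's word splitting is unaffected.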
@@ -10,7 +10,7 @@
  "license": {
    "name": "Apache-2.0"
  },
  "version": "1.31.0"
  "version": "1.32.0"
},
"paths": {
  "/action-required/tool-confirmation": {

@@ -4165,7 +4165,8 @@
"enum": [
  "Builtin",
  "Recipe",
  "Skill"
  "Skill",
  "Agent"
]
},
"ConfigKey": {

@@ -4642,6 +4643,10 @@
  },
  "nullable": true
},
"model_doc_link": {
  "type": "string",
  "nullable": true
},
"models": {
  "type": "array",
  "items": {

@@ -4654,6 +4659,12 @@
"requires_auth": {
  "type": "boolean"
},
"setup_steps": {
  "type": "array",
  "items": {
    "type": "string"
  }
},
"skip_canonical_filtering": {
  "type": "boolean"
},
@@ -1,7 +1,7 @@
{
  "name": "goose-app",
  "productName": "Goose",
  "version": "1.31.0",
  "version": "1.32.0",
  "description": "Goose App",
  "engines": {
    "node": "^24.10.0",

@@ -35,9 +35,12 @@
"test:ui": "vitest --ui",
"test:coverage": "vitest run --coverage",
"test:integration": "vitest run --config vitest.integration.config.ts",
"test:integration:goosed": "vitest run --config vitest.integration.config.ts tests/integration/goosed.test.ts",
"test:integration:providers": "vitest run --config vitest.integration.config.ts tests/integration/test_providers.test.ts",
"test:integration:providers-code-exec": "vitest run --config vitest.integration.config.ts tests/integration/test_providers_code_exec.test.ts",
"test:integration:watch": "vitest --config vitest.integration.config.ts",
"test:integration:debug": "DEBUG=1 vitest run --config vitest.integration.config.ts",
"i18n:extract": "formatjs extract 'src/**/*.{ts,tsx}' --out-file src/i18n/messages/en.json --flatten && pnpm run i18n:compile",
"i18n:extract": "formatjs extract 'src/**/*.{ts,tsx}' --ignore '**/*.d.ts' --out-file src/i18n/messages/en.json --flatten && pnpm run i18n:compile",
"i18n:check": "node scripts/i18n-check.js",
"i18n:compile": "node scripts/i18n-compile.js"
},
@@ -16,7 +16,16 @@ const tmpFile = path.join(os.tmpdir(), 'en.i18n-check.json');

execFileSync(
  process.execPath,
  [formatjs, 'extract', 'src/**/*.{ts,tsx}', '--out-file', tmpFile, '--flatten'],
  [
    formatjs,
    'extract',
    'src/**/*.{ts,tsx}',
    '--ignore',
    '**/*.d.ts',
    '--out-file',
    tmpFile,
    '--flatten',
  ],
  { stdio: 'inherit', cwd: projectDir }
);
@@ -80,7 +80,7 @@ export type CheckProviderRequest = {
  provider: string;
};

export type CommandType = 'Builtin' | 'Recipe' | 'Skill';
export type CommandType = 'Builtin' | 'Recipe' | 'Skill' | 'Agent';

/**
 * Configuration key metadata for provider setup

@@ -220,9 +220,11 @@ export type DeclarativeProviderConfig = {
  headers?: {
    [key: string]: string;
  } | null;
  model_doc_link?: string | null;
  models: Array<ModelInfo>;
  name: string;
  requires_auth?: boolean;
  setup_steps?: Array<string>;
  skip_canonical_filtering?: boolean;
  supports_streaming?: boolean | null;
  timeout_seconds?: number | null;
@@ -58,6 +58,14 @@ const MAX_IMAGES_PER_MESSAGE = 10;
const TOKEN_LIMIT_DEFAULT = 128000; // fallback for custom models that the backend doesn't know about
const TOOLS_MAX_SUGGESTED = 60; // max number of tools before we show a warning

const getContextAlertType = (totalTokens: number, tokenLimit: number): AlertType => {
  const percentage = tokenLimit ? (totalTokens / tokenLimit) * 100 : 0;

  if (percentage > 90) return AlertType.Error;
  if (percentage > 75) return AlertType.Warning;
  return AlertType.Info;
};

// Manual compact trigger message - must match backend constant
const MANUAL_COMPACT_TRIGGER = '/compact';

@@ -84,7 +92,8 @@ const i18n = defineMessages({
},
tooManyTools: {
  id: 'chatInput.tooManyTools',
  defaultMessage: 'Too many tools can degrade performance.\nTool count: {toolCount} (recommend: {recommended})',
  defaultMessage:
    'Too many tools can degrade performance.\nTool count: {toolCount} (recommend: {recommended})',
},
viewExtensions: {
  id: 'chatInput.viewExtensions',

@@ -544,7 +553,7 @@ export default function ChatInput({
// Show alert when either there is registered token usage, or we know the limit
if ((totalTokens && totalTokens > 0) || (isTokenLimitLoaded && tokenLimit)) {
  addAlert({
    type: AlertType.Info,
    type: getContextAlertType(totalTokens || 0, tokenLimit),
    message: intl.formatMessage(i18n.contextWindow),
    progress: {
      current: totalTokens || 0,

@@ -564,7 +573,10 @@ export default function ChatInput({
if (toolCount !== null && toolCount > TOOLS_MAX_SUGGESTED) {
  addAlert({
    type: AlertType.Warning,
    message: intl.formatMessage(i18n.tooManyTools, { toolCount, recommended: TOOLS_MAX_SUGGESTED }),
    message: intl.formatMessage(i18n.tooManyTools, {
      toolCount,
      recommended: TOOLS_MAX_SUGGESTED,
    }),
    action: {
      text: intl.formatMessage(i18n.viewExtensions),
      onClick: () => setView('extensions'),

@@ -1559,7 +1571,9 @@ export default function ChatInput({
<p className="text-sm text-text-primary truncate" title={file.name}>
  {file.name}
</p>
<p className="text-xs text-text-secondary">{file.type || intl.formatMessage(i18n.unknownType)}</p>
<p className="text-xs text-text-secondary">
  {file.type || intl.formatMessage(i18n.unknownType)}
</p>
</div>
</div>
)}

@@ -1675,7 +1689,9 @@ export default function ChatInput({
</Button>
</TooltipTrigger>
<TooltipContent>
  {recipe ? intl.formatMessage(i18n.viewEditRecipe) : intl.formatMessage(i18n.createRecipeFromSession)}
  {recipe
    ? intl.formatMessage(i18n.viewEditRecipe)
    : intl.formatMessage(i18n.createRecipeFromSession)}
</TooltipContent>
</Tooltip>
</div>
@@ -9,7 +9,7 @@ import {
  removeExtension as apiRemoveExtension,
  providers,
} from '../api';
import { syncBundledExtensions } from './settings/extensions';
import { pruneDeprecatedBundledExtensions, syncBundledExtensions } from './settings/extensions';
import type {
  ConfigResponse,
  UpsertConfigQuery,

@@ -88,16 +88,19 @@ export const ConfigProvider: React.FC<ConfigProviderProps> = ({ children }) => {
  [reloadConfig]
);

const read = useCallback(async (key: string, is_secret: boolean = false, options?: { throwOnError?: boolean }) => {
  const query: ConfigKeyQuery = { key: key, is_secret: is_secret };
  const response = await readConfig({
    body: query,
  });
  if (options?.throwOnError && response.error) {
    throw response.error;
  }
  return response.data;
}, []);
const read = useCallback(
  async (key: string, is_secret: boolean = false, options?: { throwOnError?: boolean }) => {
    const query: ConfigKeyQuery = { key: key, is_secret: is_secret };
    const response = await readConfig({
      body: query,
    });
    if (options?.throwOnError && response.error) {
      throw response.error;
    }
    return response.data;
  },
  []
);

const remove = useCallback(
  async (key: string, is_secret: boolean) => {

@@ -178,6 +181,7 @@ export const ConfigProvider: React.FC<ConfigProviderProps> = ({ children }) => {
try {
  const response = await providers();
  const providersData = response.data || [];
  providersListRef.current = providersData;
  setProvidersList(providersData);
  return providersData;
} catch (error) {

@@ -199,6 +203,7 @@ export const ConfigProvider: React.FC<ConfigProviderProps> = ({ children }) => {
try {
  const providersResponse = await providers();
  const providersData = providersResponse.data || [];
  providersListRef.current = providersData;
  setProvidersList(providersData);
} catch (error) {
  console.error('Failed to load providers:', error);

@@ -224,6 +229,10 @@ export const ConfigProvider: React.FC<ConfigProviderProps> = ({ children }) => {
  const query: ExtensionQuery = { name, config, enabled };
  await apiAddExtension({ body: query });
};
const removeExtensionForSync = async (name: string) => {
  await apiRemoveExtension({ path: { name } });
};
extensions = await pruneDeprecatedBundledExtensions(extensions, removeExtensionForSync);
await syncBundledExtensions(extensions, addExtensionForSync);
// Reload extensions after sync
const refreshedResponse = await apiGetExtensions();
@@ -36,6 +36,8 @@ export const getItemIcon = (item: DisplayItem): IconInfo => {
    return { Icon: BookOpen, color: '#10b981' }; // Green
  case 'Skill':
    return { Icon: Sparkles, color: '#8b5cf6' }; // Purple
  case 'Agent':
    return { Icon: Terminal, color: '#f59e0b' }; // Amber
  case 'Directory':
    return { Icon: Folder, color: '#f59e0b' }; // Amber
  default: {
@@ -38,11 +38,12 @@ const i18n = defineMessages({
type DisplayItemType = CommandType | 'Directory' | 'File';

const typeOrder: Record<DisplayItemType, number> = {
  Directory: 0,
  File: 1,
  Builtin: 2,
  Skill: 3,
  Recipe: 4,
  Agent: 0,
  Directory: 1,
  File: 2,
  Builtin: 3,
  Skill: 4,
  Recipe: 5,
};

export interface DisplayItem {

@@ -426,7 +427,9 @@ const MentionPopover = forwardRef<
);

let finalScore = bestMatch.score;
if (finalScore > 0 && currentWorkingDir) {
if (finalScore > 0 && file.itemType === 'Agent') {
  finalScore += 100;
} else if (finalScore > 0 && currentWorkingDir) {
  const depth = file.extra.replace(currentWorkingDir, '').split('/').length - 1;
  finalScore += depth <= 1 ? 50 : depth <= 2 ? 30 : depth <= 3 ? 15 : 0;
}

@@ -449,6 +452,9 @@ const MentionPopover = forwardRef<
}, [items, query, currentWorkingDir]);

const getSelectionText = (item: DisplayItem): string => {
  if (item.itemType === 'Agent') {
    return '@' + item.name + ' ';
  }
  if (item.itemType === 'Skill') {
    return `Use the ${item.name} skill to `;
  }

@@ -486,17 +492,34 @@ const MentionPopover = forwardRef<
  throwOnError: true,
});
if (cancelled) return;
const commandItems: DisplayItem[] = (response.data?.commands || []).map((cmd) => ({
  name: cmd.command,
  extra: cmd.help,
  itemType: cmd.command_type,
  relativePath: cmd.command,
}));
const commandItems: DisplayItem[] = (response.data?.commands || [])
  .filter((cmd) => cmd.command_type !== 'Agent')
  .map((cmd) => ({
    name: cmd.command,
    extra: cmd.help,
    itemType: cmd.command_type,
    relativePath: cmd.command,
  }));
setItems(commandItems);
} else {
const scannedFiles = await scanDirectoryFromRoot(currentWorkingDir || getDefaultStartPath());
// Fetch agents from server and scan files in parallel
const [agentResponse, scannedFiles] = await Promise.all([
  getSlashCommands({
    query: { working_dir: currentWorkingDir },
    throwOnError: true,
  }).catch(() => null),
  scanDirectoryFromRoot(currentWorkingDir || getDefaultStartPath()),
]);
if (cancelled) return;
setItems(scannedFiles);
const agentItems: DisplayItem[] = (agentResponse?.data?.commands || [])
  .filter((cmd) => cmd.command_type === 'Agent')
  .map((cmd) => ({
    name: cmd.command,
    extra: cmd.help,
    itemType: cmd.command_type,
    relativePath: cmd.command,
  }));
setItems([...agentItems, ...scannedFiles]);
}
} catch (error) {
  if (!cancelled) {

@@ -572,7 +595,9 @@ const MentionPopover = forwardRef<
{isLoading ? (
  <div className="flex items-center justify-center py-4">
    <div className="animate-spin rounded-full h-4 w-4 border-t-2 border-b-2"></div>
    <span className="ml-2 text-sm text-text-secondary">{intl.formatMessage(isSlashCommand ? i18n.loadingCommands : i18n.scanningFiles)}</span>
    <span className="ml-2 text-sm text-text-secondary">
      {intl.formatMessage(isSlashCommand ? i18n.loadingCommands : i18n.scanningFiles)}
    </span>
  </div>
) : (
  <>

@@ -607,7 +632,9 @@ const MentionPopover = forwardRef<

{!isLoading && displayItems.length === 0 && query && (
  <div className="p-4 text-center text-text-secondary text-sm">
    {intl.formatMessage(isSlashCommand ? i18n.noCommandsFound : i18n.noItemsFound, { query })}
    {intl.formatMessage(isSlashCommand ? i18n.noCommandsFound : i18n.noItemsFound, {
      query,
    })}
  </div>
)}
</div>
@@ -7,6 +7,7 @@ import { Switch } from '../ui/switch';
import { FixedExtensionEntry, useConfig } from '../ConfigContext';
import { toastService } from '../../toasts';
import { formatExtensionName } from '../settings/extensions/subcomponents/ExtensionList';
import { nameToKey } from '../settings/extensions/utils';
import { ExtensionConfig, getSessionExtensions } from '../../api';
import { addToAgent, removeFromAgent } from '../settings/extensions/agent-api';
import {

@@ -230,15 +231,29 @@ export const BottomMenuExtensionSelection = ({ sessionId }: BottomMenuExtensionS
  );
}

const sessionExtensionNames = new Set(sessionExtensions.map((ext) => ext.name));
const sessionExtensionKeys = new Set(sessionExtensions.map((ext) => nameToKey(ext.name)));
const globalExtensionKeys = new Set(allExtensions.map((ext) => nameToKey(ext.name)));

return allExtensions.map(
const mergedExtensions = allExtensions.map(
  (ext) =>
    ({
      ...ext,
      enabled: sessionExtensionNames.has(ext.name),
      enabled: sessionExtensionKeys.has(nameToKey(ext.name)),
    }) as FixedExtensionEntry
);

for (const sessionExtension of sessionExtensions) {
  if (globalExtensionKeys.has(nameToKey(sessionExtension.name))) {
    continue;
  }

  mergedExtensions.push({
    ...sessionExtension,
    enabled: true,
  });
}

return mergedExtensions;
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [allExtensions, sessionExtensions, isHubView, hubUpdateTrigger]);

@@ -266,6 +281,9 @@ export const BottomMenuExtensionSelection = ({ sessionId }: BottomMenuExtensionS
  return extensionsList.filter((ext) => ext.enabled).length;
}, [extensionsList]);

const shouldHideTrigger =
  extensionsList.length === 0 || (!isHubView && !isSessionExtensionsLoaded);

return (
  <DropdownMenu
    open={isOpen}

@@ -284,7 +302,7 @@ export const BottomMenuExtensionSelection = ({ sessionId }: BottomMenuExtensionS
>
<DropdownMenuTrigger asChild>
  <button
    className={`flex items-center [&_svg]:size-4 text-text-primary/70 hover:text-text-primary hover:scale-100 hover:bg-transparent text-xs cursor-pointer ${allExtensions.length === 0 || (!isHubView && !isSessionExtensionsLoaded) ? 'invisible' : ''}`}
    className={`flex items-center [&_svg]:size-4 text-text-primary/70 hover:text-text-primary hover:scale-100 hover:bg-transparent text-xs cursor-pointer ${shouldHideTrigger ? 'invisible' : ''}`}
    title={intl.formatMessage(i18n.manageExtensions)}
  >
    <Puzzle className="mr-1 h-4 w-4" />

@@ -309,7 +327,9 @@ export const BottomMenuExtensionSelection = ({ sessionId }: BottomMenuExtensionS
  autoFocus
/>
<p className="text-xs text-text-primary/60 mt-1.5">
  {intl.formatMessage(isHubView ? i18n.extensionsForNewChats : i18n.extensionsForThisSession)}
  {intl.formatMessage(
    isHubView ? i18n.extensionsForNewChats : i18n.extensionsForThisSession
  )}
</p>
</div>
<div

@@ -319,7 +339,9 @@ export const BottomMenuExtensionSelection = ({ sessionId }: BottomMenuExtensionS
>
{sortedExtensions.length === 0 ? (
  <div className="px-2 py-4 text-center text-sm text-text-primary/70">
    {intl.formatMessage(searchQuery ? i18n.noExtensionsFound : i18n.noExtensionsAvailable)}
    {intl.formatMessage(
      searchQuery ? i18n.noExtensionsFound : i18n.noExtensionsAvailable
    )}
  </div>
) : (
  sortedExtensions.map((ext) => {
Some files were not shown because too many files have changed in this diff.