Goose Benchmark Scripts

This directory contains scripts for running and analyzing Goose benchmarks.

run-benchmarks.sh

This script runs Goose benchmarks across multiple provider:model pairs and analyzes the results.

Prerequisites

  • Goose CLI must be built or installed
  • jq command-line tool for JSON processing (optional, but recommended for result analysis)
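A quick preflight check along these lines can confirm both prerequisites before starting a long run (a minimal sketch; the release binary path is an assumption based on a standard cargo build):

# Preflight: confirm the Goose CLI and jq are available before benchmarking
# (the binary path is an assumption for a standard `cargo build --release`)
if ! command -v goose >/dev/null 2>&1 && [ ! -x ./target/release/goose ]; then
  echo "goose CLI not found; build or install it first" >&2
  exit 1
fi
command -v jq >/dev/null 2>&1 || echo "jq not found; result analysis will be limited" >&2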

Usage

./scripts/run-benchmarks.sh [options]

Options

  • -p, --provider-models: Comma-separated list of provider:model pairs (e.g., 'openai:gpt-4o,anthropic:claude-sonnet-4')
  • -s, --suites: Comma-separated list of benchmark suites to run (e.g., 'core,small_models')
  • -o, --output-dir: Directory to store benchmark results (default: './benchmark-results')
  • -d, --debug: Use debug build instead of release build
  • -h, --help: Show help message

Examples

# Run with release build (default)
./scripts/run-benchmarks.sh --provider-models 'openai:gpt-4o,anthropic:claude-sonnet-4' --suites 'core,small_models'

# Run with debug build
./scripts/run-benchmarks.sh --provider-models 'openai:gpt-4o' --suites 'core' --debug

How It Works

The script:

  1. Parses the provider:model pairs and benchmark suites
  2. Determines whether to use the debug or release binary
  3. For each provider:model pair (see the sketch after this list):
    • Sets the GOOSE_PROVIDER and GOOSE_MODEL environment variables
    • Runs the benchmark with the specified suites
    • Analyzes the results for failures
  4. Generates a summary of all benchmark runs
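The per-pair loop in step 3 boils down to something like the following (a simplified sketch, not the script's exact code; the GOOSE_BIN, SUITES, and OUTPUT_DIR variables and the exact benchmark invocation are assumptions):

# Simplified sketch of the per-pair loop in step 3; GOOSE_BIN, SUITES,
# OUTPUT_DIR, and the benchmark invocation itself are assumptions
for pair in "openai:gpt-4o" "anthropic:claude-sonnet-4"; do
  export GOOSE_PROVIDER="${pair%%:*}"  # part before the first ':'
  export GOOSE_MODEL="${pair#*:}"      # part after the first ':'
  name="${GOOSE_PROVIDER}-${GOOSE_MODEL}"
  "$GOOSE_BIN" bench --suites "$SUITES" > "$OUTPUT_DIR/$name.json"
  ./scripts/parse-benchmark-results.sh "$OUTPUT_DIR/$name.json" > "$OUTPUT_DIR/$name-analysis.txt"
done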

Output

The script creates the following files in the output directory:

  • summary.md: A summary of all benchmark results
  • {provider}-{model}.json: Raw JSON output from each benchmark run
  • {provider}-{model}-analysis.txt: Analysis of each benchmark run
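With jq installed, the raw JSON from any single run can be inspected directly; the file name follows the pattern above:

# Pretty-print the raw results for one provider:model run
jq . benchmark-results/openai-gpt-4o.json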

Exit Codes

  • 0: All benchmarks completed successfully
  • 1: One or more benchmarks failed
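Because failures surface through the exit code, the script can gate a CI step with no extra parsing, for example:

# Fail the surrounding CI step if any benchmark run reported failures
./scripts/run-benchmarks.sh --provider-models 'openai:gpt-4o' --suites 'core' || exit 1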

parse-benchmark-results.sh

This script analyzes a single benchmark JSON result file and identifies any failures.

Usage

./scripts/parse-benchmark-results.sh path/to/benchmark-results.json
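For example, to re-analyze a file produced by run-benchmarks.sh (using the default output directory and the {provider}-{model}.json naming described above):

./scripts/parse-benchmark-results.sh ./benchmark-results/openai-gpt-4o.json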

Output

The script outputs an analysis of the benchmark results to stdout, including:

  • Basic information about the benchmark run
  • Results for each evaluation in each suite
  • Summary of passed and failed metrics

Exit Codes

  • 0: All metrics passed successfully
  • 1: One or more metrics failed
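The two exit codes make it easy to sweep an entire results directory and flag only the runs with failing metrics, for example:

# Report which result files contain failing metrics
for f in ./benchmark-results/*.json; do
  ./scripts/parse-benchmark-results.sh "$f" >/dev/null || echo "failures in $f"
done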