mirror of https://github.com/agent0ai/agent-zero.git (synced 2026-05-12 05:50:23 +00:00)
# Agent Zero - Configuration Reference
## LLM Roles

Agent Zero uses four distinct LLM roles, each configurable independently:

| Role | Purpose |
|------|---------|
| `chat_llm` | Primary model for all agent reasoning and tool use |
| `utility_llm` | Secondary model for internal framework tasks: memory summarization, query generation, history compression, memory recall filtering |
| `browser_llm` | Model used by the browser agent; vision capability recommended |
| `embedding_llm` | Produces vector embeddings for memory and knowledge indexing |

The utility model handles high-volume, lower-stakes operations and can be a cheaper/faster model than the chat model. Changing the embedding model invalidates the existing vector index; the entire knowledge base is re-indexed automatically.
## Model Providers

Providers are defined in `conf/model_providers.yaml`. All chat and embedding providers go through LiteLLM, which normalizes the API interface. Supported chat providers (as of v0.9.8):

- Agent Zero API (a0_venice) - hosted service with no API key required for basic use
- Anthropic, OpenAI, OpenRouter, Google (Gemini), Groq, Mistral AI
- DeepSeek, xAI, Moonshot AI, Sambanova, CometAPI, Z.AI, Inception AI
- Venice.ai, AWS Bedrock, Azure OpenAI
- GitHub Copilot, HuggingFace
- Ollama, LM Studio (local models)
- Other OpenAI-compatible endpoints (custom `api_base`)

Embedding providers: OpenAI, Azure, Ollama, LM Studio, HuggingFace, Google, Mistral, OpenRouter (via OpenAI-compat), AWS Bedrock.
### Model Naming Convention

| Provider | Format |
|----------|--------|
| OpenAI | model name only (`gpt-4.1`, `o4-mini`) |
| Anthropic | model name only (`claude-sonnet-4-5`) |
| OpenRouter | `provider/model` (`anthropic/claude-sonnet-4-5`) |
| Ollama | model name only (`llama3.2`, `qwen2.5`) |
| Google | model name only (`gemini-2.0-flash`) |
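The convention above can be sketched with the `A0_SET_` override variables covered later in this reference; the specific model names here are illustrative examples from the table, not recommendations.

```shell
# Illustrative provider/model pairings following the naming convention.
# Values are examples only; A0_SET_ variables are the documented override mechanism.
export A0_SET_chat_model_provider=openrouter
export A0_SET_chat_model_name="anthropic/claude-sonnet-4-5"  # OpenRouter: provider/model
export A0_SET_utility_model_provider=openai
export A0_SET_utility_model_name="gpt-4.1"                   # OpenAI: bare model name
```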
## Agent Profiles

Profiles are in `agents/<profile>/`. Each profile can override any prompt fragment from the base `prompts/` directory. Built-in profiles:

| Profile | Description |
|---------|-------------|
| `default` | Base template for creating new profiles |
| `agent0` | Top-level general assistant; human as superior; delegates to specialized subordinates |
| `developer` | "Master Developer" - software architecture and full-stack implementation focus |
| `researcher` | "Deep Research" - research, analysis, and synthesis across academic and corporate domains |
| `hacker` | Red/blue team; penetration testing; Kali tools focus |
| `_example` | Minimal example for building custom profiles |

Custom profiles go in `usr/agents/<profile>/` to survive framework updates.
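A minimal sketch of scaffolding a custom profile in that location; the prompt-fragment filename shown is an assumption for illustration, not taken from this reference.

```shell
# Hypothetical scaffold for a custom profile under usr/agents/.
# The prompt-fragment filename is an assumption for illustration.
mkdir -p usr/agents/my_profile/prompts
printf 'You are a focused code-review assistant.\n' \
  > usr/agents/my_profile/prompts/agent.system.main.role.md
```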
## Plugin System

Plugins are discovered from `plugins/` (framework plugins) and `usr/plugins/` (user plugins). Each plugin requires a `plugin.yaml` with at minimum: `name`, `description`, `version`.
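A minimal sketch of such a manifest, assuming no keys beyond the three required ones:

```shell
# Create a user plugin skeleton with the minimum plugin.yaml keys.
mkdir -p usr/plugins/hello_world
cat > usr/plugins/hello_world/plugin.yaml <<'EOF'
name: hello_world
description: Minimal example user plugin
version: 0.1.0
EOF
```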
### Activation

- **Global activation**: enabled/disabled for all contexts via the Plugins settings panel
- **Scoped activation**: enabled/disabled per project or per agent profile via the plugin Switch modal
- Activation state stored as `.toggle-1` (ON) and `.toggle-0` (OFF) files in the plugin's config dir
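The toggle convention can be sketched as follows; treating the plugin's own directory as its config dir is an assumption made here for illustration.

```shell
# Mark a plugin enabled by dropping an empty .toggle-1 marker file.
# (Assumption: the plugin's own directory serves as its config dir.)
mkdir -p usr/plugins/hello_world
touch usr/plugins/hello_world/.toggle-1
# Disabling would swap the marker:
# mv usr/plugins/hello_world/.toggle-1 usr/plugins/hello_world/.toggle-0
```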
### Built-in Framework Plugins

| Plugin | Purpose |
|--------|---------|
| `_memory` | Memory and knowledge pipeline, recall, consolidation |
| `_code_execution` | Terminal and code execution tool |
| `_text_editor` | Structured file read/write/patch tool |
## Environment Variable Configuration

Any setting can be set via environment variable using the `A0_SET_` prefix. This is the primary mechanism for automated deployment and container configuration.

Format: `A0_SET_<setting_name>=<value>`

Examples:

```
A0_SET_chat_model_provider=anthropic
A0_SET_chat_model_name=claude-sonnet-4-5
A0_SET_utility_model_provider=openai
A0_SET_utility_model_name=gpt-4o-mini
A0_SET_embedding_model_provider=openai
A0_SET_embedding_model_name=text-embedding-3-small
```

These can be set in the `.env` file at the project root or passed as Docker `-e` flags during container creation.
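The `.env` route can be sketched as follows; setting names are from this reference, while the values are illustrative assumptions.

```shell
# Append illustrative A0_SET_ overrides to the project-root .env file.
# (Values chosen for the example only.)
cat >> .env <<'EOF'
A0_SET_memory_recall_interval=3
A0_SET_auth_login=admin
EOF
```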
## Key Behavioral Settings

| Setting | Effect |
|---------|--------|
| `agent_knowledge_subdir` | Which knowledge subdir to load (default: `custom`, resolved to `usr/knowledge/`) |
| `memory_recall_interval` | How many loop iterations between automatic memory recalls |
| `memory_results` | Number of memory chunks returned per recall query |
| `memory_threshold` | Similarity threshold for memory recall (0-1); lower = more results, potentially less relevant |
| `auth_login` / `auth_password` | Web UI authentication credentials |
| `agent_temperature` | LLM temperature for the chat model |

Settings are stored in `usr/settings.json` and managed through the Settings page in the web UI. The settings page also provides: API key management (multiple keys per provider with round-robin), backup/restore, external services (tunnels, MCP, A2A), and memory management.
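As a toy illustration of the storage location: the real file's schema is broader than this, so the flat JSON shape below is an assumption, using only setting names from the table above.

```shell
# Write a toy usr/settings.json holding a few behavioral settings,
# then read one back. The flat JSON shape is an assumption.
mkdir -p usr
cat > usr/settings.json <<'EOF'
{
  "memory_recall_interval": 3,
  "memory_results": 5,
  "memory_threshold": 0.6
}
EOF
python3 -c "import json; print(json.load(open('usr/settings.json'))['memory_results'])"
```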