prompts: restore tool examples for better model guidance

Commit 54362bf8ee went too far, stripping too many of the JSON examples that guide the LLMs.
Alessandro 2026-04-02 18:50:13 +02:00
parent 756654b2ba
commit ef92a5e378
7 changed files with 128 additions and 6 deletions

@@ -6,11 +6,17 @@ args:
- `session`: terminal session id; default `0`
- `reset`: kill a session before running; `true` or `false`
rules:
- place the command or script in `code`
- use `runtime=output` to poll running work
- use `input` for interactive terminal prompts
- if a session is stuck, call again with the same `session` and `reset=true`
- check dependencies before running code
- replace placeholder or demo data with real values before execution
- use `print()` or `console.log()` when you need explicit output
- do not interleave other tools while waiting
- ignore framework `[SYSTEM: ...]` info in output
example:
examples:
1 terminal command
~~~json
{
"thoughts": ["I should run a terminal command in the default session."],
@@ -24,3 +30,31 @@ example:
}
}
~~~
2 python snippet
~~~json
{
"thoughts": ["A short Python check is faster than using the shell."],
"headline": "Running Python snippet",
"tool_name": "code_execution_tool",
"tool_args": {
"runtime": "python",
"session": 0,
"reset": false,
"code": "import os\nprint(os.getcwd())"
}
}
~~~
3 wait for running output
~~~json
{
"thoughts": ["The previous command is still running, so I should poll for output."],
"headline": "Waiting for command output",
"tool_name": "code_execution_tool",
"tool_args": {
"runtime": "output",
"session": 0
}
}
~~~
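To see why the restored examples matter, it helps to look at what a receiver of these calls has to check. The sketch below is a minimal, hypothetical validator for `code_execution_tool` calls shaped like the examples above; the function name, the accepted runtime set, and the error messages are illustrative assumptions, not the project's actual API.

```python
import json

# Runtimes seen in the prompt above; "terminal" and "python" run code,
# "output" polls a running session. This set is an assumption, not the
# project's definitive list.
VALID_RUNTIMES = {"terminal", "python", "output"}

def validate_call(raw: str) -> dict:
    """Parse a tool call like the JSON examples and normalize its args.

    Hypothetical helper: checks the tool name, the runtime, and the
    presence of `code`, then fills in the documented defaults
    (`session=0`, `reset=false`).
    """
    call = json.loads(raw)
    if call.get("tool_name") != "code_execution_tool":
        raise ValueError("not a code_execution_tool call")

    args = call.get("tool_args", {})
    runtime = args.get("runtime")
    if runtime not in VALID_RUNTIMES:
        raise ValueError(f"unknown runtime: {runtime!r}")

    # "output" only polls an existing session, so it carries no code.
    if runtime != "output" and "code" not in args:
        raise ValueError(f"missing 'code' for runtime {runtime!r}")

    args.setdefault("session", 0)    # default session per the args list
    args.setdefault("reset", False)  # reset is opt-in
    return args

example = (
    '{"tool_name": "code_execution_tool",'
    ' "tool_args": {"runtime": "python", "code": "print(1)"}}'
)
print(validate_call(example))
```

Filling in the defaults on the receiving side keeps the examples in the prompt short while still matching the documented behavior of `session` and `reset`.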