🔄 synced local 'docs/' with remote 'docs/'
parent 9fcfffd85f, commit 8a30ef8a42
71 changed files with 934 additions and 340 deletions

docs/developers/debugging/faq.mdx (new file, 407 lines)
@@ -0,0 +1,407 @@
---
|
||||
title: Frequently Asked Questions
|
||||
subtitle: Common questions and answers from the Skyvern community
|
||||
description: Answers to common Skyvern questions covering troubleshooting failed tasks, workflow parameters, credentials, scheduling, proxy configuration, API polling, data extraction, and LLM model selection.
|
||||
slug: developers/debugging/faq
|
||||
keywords:
|
||||
- FAQ
|
||||
- common questions
|
||||
- troubleshooting
|
||||
- rate limit
|
||||
- model selection
|
||||
- polling
|
||||
- concurrent runs
|
||||
- community
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
<Accordion title="Why did my task fail or terminate?">
|
||||
Check these in order:
|
||||
|
||||
1. **Run status**: Use `get_run(run_id)` to see the status and `failure_reason`
|
||||
2. **Timeline**: Use `get_run_timeline(run_id)` to find which block failed
|
||||
3. **Artifacts**: Check `screenshot_final` to see where it stopped, and `recording` to watch what happened
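For example, the first two checks take only a few lines with the Python SDK (a sketch; `client` is a `Skyvern` instance, as in the code samples later on this page):

```python
run = await client.get_run(run_id)
print(run.status, run.failure_reason)  # overall outcome plus the reason Skyvern reported, if any

timeline = await client.get_run_timeline(run_id)  # walk this to find the block that failed
```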
|
||||
|
||||
Common causes:
|
||||
- **`terminated`**: The AI couldn't complete the goal (CAPTCHA, login blocked, element not found)
|
||||
- **`timed_out`**: Exceeded `max_steps`. Increase it or simplify the task
|
||||
- **`failed`**: System error (browser crash, network failure)
|
||||
|
||||
See the [Troubleshooting Guide](/developers/debugging/troubleshooting-guide) for detailed steps.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why is the AI clicking the wrong element?">
|
||||
Multiple similar elements on the page can confuse the AI. Fix this by:
|
||||
|
||||
1. **Add visual context**: "Click the **blue** Submit button at the **bottom** of the form"
|
||||
2. **Reference surrounding text**: "Click Download in the row that says 'January 2024'"
|
||||
3. **Be specific about position**: "Click the first result" or "Click the link in the header"
|
||||
|
||||
Check the `llm_response_parsed` artifact to see what action the AI decided to take.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why is my task timing out?">
|
||||
Tasks time out when they exceed `max_steps` (default: 50). This happens when:
|
||||
|
||||
- The task is too complex for the step limit
|
||||
- The AI gets stuck in a navigation loop
|
||||
- Missing completion criteria causes infinite attempts
|
||||
|
||||
**Fixes:**
|
||||
- Increase `max_steps` in your task parameters
|
||||
- Add explicit completion criteria: "COMPLETE when you see 'Order confirmed'"
|
||||
- Break complex tasks into multiple workflow blocks
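For example, a minimal sketch of raising the step budget on a task run (this assumes `max_steps` is passed to `run_task` like the other parameters shown on this page):

```python
result = await client.run_task(
    prompt="Place the order. COMPLETE when you see 'Order confirmed'.",
    url="https://example.com",
    max_steps=75,  # assumed parameter: raise the default step budget for longer tasks
)
```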
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why can't Skyvern find an element on the page?">
|
||||
Elements may not be detected if they:
|
||||
|
||||
- Load dynamically after the page appears
|
||||
- Are inside an iframe
|
||||
- Require scrolling to become visible
|
||||
- Use unusual rendering (canvas, shadow DOM)
|
||||
|
||||
**Fixes:**
|
||||
- Add `max_screenshot_scrolls` to capture lazy-loaded content
|
||||
- Describe elements visually rather than by technical properties
|
||||
- Add explicit wait or scroll instructions in your prompt
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why is the extracted data wrong or incomplete?">
|
||||
Check the `screenshot_final` artifact to verify the AI ended on the right page.
|
||||
|
||||
Common causes:
|
||||
- Prompt was ambiguous about what to extract
|
||||
- `data_extraction_schema` descriptions weren't specific enough
|
||||
- Multiple similar elements and the AI picked the wrong one
|
||||
|
||||
**Fixes:**
|
||||
- Add specific descriptions: `"description": "The price in USD, without currency symbol"`
|
||||
- Add visual hints: "The price is displayed in bold below the product title"
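For example, a schema fragment with precise field descriptions (a sketch of a JSON-Schema-style `data_extraction_schema`):

```python
data_extraction_schema = {
    "type": "object",
    "properties": {
        "price": {
            "type": "string",
            "description": "The price in USD, without currency symbol",
        },
        "product_title": {
            "type": "string",
            "description": "The product title displayed directly above the price",
        },
    },
}
```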
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Workflows
|
||||
|
||||
<Accordion title="How do I pass parameters to a workflow?">
|
||||
Define parameters in the workflow editor, then pass them when starting a run:
|
||||
|
||||
```python
|
||||
result = await client.run_workflow(
|
||||
workflow_id="wpid_123456",
|
||||
parameters={
|
||||
"url": "https://example.com",
|
||||
"username": "user@example.com"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
Parameters are available in prompts using `{{ parameter_name }}` syntax. See [Workflow Parameters](/cloud/building-workflows/add-parameters) for details.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I use credentials in a workflow?">
|
||||
1. **Store credentials**: Go to Settings > Credentials and add your login
|
||||
2. **Reference in workflow**: Select the credential in the Login block's credential dropdown
|
||||
3. **Never hardcode**: Don't put passwords directly in prompts
|
||||
|
||||
The AI will automatically fill the credential fields when it encounters a login form. See [Credentials](/sdk-reference/credentials) for setup instructions.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Can I run workflows on a schedule?">
|
||||
Yes. Skyvern has built-in cron-based scheduling for workflows. You define a cron expression and timezone, and Skyvern triggers the workflow automatically at each interval.
|
||||
|
||||
- **Cloud UI**: Create and manage schedules from the [Schedules page](/cloud/building-workflows/scheduling)
|
||||
- **API**: Use the [scheduling API](/cloud/building-workflows/run-from-code#create-a-schedule) to create, update, enable/disable, and delete schedules programmatically
|
||||
|
||||
Scheduled runs appear in [Run History](/cloud/viewing-results/run-history) with a calendar icon badge. Set up a [webhook](/developers/going-to-production/webhooks) to get notified when scheduled runs complete.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why isn't my workflow using the parameter values I passed?">
|
||||
Make sure you're using the parameter placeholder syntax correctly:
|
||||
|
||||
- **Correct:** `{{ parameter_name }}`
|
||||
- **Wrong:** `{parameter_name}` or `$parameter_name`
|
||||
|
||||
Also verify:
|
||||
- The parameter name matches exactly (case-sensitive)
|
||||
- The parameter is defined in the workflow's parameter list
|
||||
- You're passing the parameter when starting the run, not after
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I run workflows sequentially with the same credentials?">
|
||||
Use the "Run Sequentially" option with a credential key:
|
||||
|
||||
1. Enable "Run Sequentially" in workflow settings
|
||||
2. Set the sequential key to your credential identifier
|
||||
3. Skyvern will queue runs that share the same credential
|
||||
|
||||
This prevents concurrent logins that could trigger security blocks.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
<Accordion title="Where do I find my API key?">
|
||||
Go to [Settings](https://app.skyvern.com/settings) in the Skyvern Cloud UI. Your API key is displayed there.
|
||||
|
||||
For local installations, run `skyvern init` to generate a local API key, which will be saved in your `.env` file.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I change the LLM model?">
|
||||
**Cloud:** The model is managed by Skyvern. Contact support for custom model requirements.
|
||||
|
||||
**Self-hosted:** Run `skyvern init llm` to reconfigure, or set these environment variables:
|
||||
- `LLM_KEY`: Primary model (e.g., `OPENAI_GPT4O`, `ANTHROPIC_CLAUDE_SONNET`)
|
||||
- `SECONDARY_LLM_KEY`: Optional lightweight model for faster operations
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I configure proxy settings?">
|
||||
Set `proxy_location` when running a task:
|
||||
|
||||
```python
|
||||
result = await client.run_task(
|
||||
prompt="...",
|
||||
url="https://example.com",
|
||||
proxy_location="RESIDENTIAL_US" # or RESIDENTIAL_ISP for static IPs
|
||||
)
|
||||
```
|
||||
|
||||
Available locations include `RESIDENTIAL_US`, `RESIDENTIAL_UK`, `RESIDENTIAL_ISP`, and more. Use `RESIDENTIAL_ISP` for sites that require consistent IP addresses.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Can I increase the max steps limit beyond 75?">
|
||||
Yes. Contact support to increase the limit for your account. Higher limits may incur additional costs.
|
||||
|
||||
Alternatively, break complex tasks into multiple workflow blocks, each with its own step budget.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## API & Integration
|
||||
|
||||
<Accordion title="How do I poll for task completion?">
|
||||
Check the run status in a loop:
|
||||
|
||||
```python
|
||||
import asyncio

while True:
|
||||
run = await client.get_run(run_id)
|
||||
if run.status in ["completed", "failed", "terminated", "timed_out", "canceled"]:
|
||||
break
|
||||
await asyncio.sleep(5)
|
||||
```
|
||||
|
||||
Or use [webhooks](/developers/going-to-production/webhooks) to get notified when runs complete without polling.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Where can I find extracted data after a run completes?">
|
||||
The extracted data is in `run.output`:
|
||||
|
||||
```python
|
||||
run = await client.get_run(run_id)
|
||||
print(run.output) # {"field1": "value1", "field2": "value2"}
|
||||
```
|
||||
|
||||
For downloaded files, check `run.downloaded_files` which contains signed URLs.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I get the recording URL from the API?">
|
||||
The recording URL is included in the run response:
|
||||
|
||||
```python
|
||||
run = await client.get_run(run_id)
|
||||
print(run.recording_url) # https://skyvern-artifacts.s3.amazonaws.com/.../recording.webm
|
||||
```
|
||||
|
||||
Note: Signed URLs expire after 24 hours. Request fresh URLs if expired.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I trigger a workflow via API?">
|
||||
Use the `run_workflow` method:
|
||||
|
||||
```python
|
||||
result = await client.run_workflow(
|
||||
workflow_id="wpid_123456",
|
||||
parameters={"key": "value"}
|
||||
)
|
||||
print(result.run_id) # wr_789012
|
||||
```
|
||||
|
||||
The workflow starts asynchronously. Poll the run status or configure a webhook to track completion.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Authentication & Credentials
|
||||
|
||||
<Accordion title="Why is my login failing?">
|
||||
Check these common causes:
|
||||
|
||||
1. **Credentials expired**: Verify username/password in your credential store
|
||||
2. **CAPTCHA appeared**: Use a browser profile with an existing session
|
||||
3. **MFA required**: Configure TOTP if the site requires 2FA
|
||||
4. **Site detected automation**: Use `RESIDENTIAL_ISP` proxy for trusted IPs
|
||||
|
||||
Watch the `recording` artifact to see exactly what happened during login.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I handle CAPTCHA?">
|
||||
Options for handling CAPTCHAs:
|
||||
|
||||
1. **Browser profile**: Create a profile with an authenticated session that skips login entirely
|
||||
2. **Human interaction block**: Pause the workflow for manual CAPTCHA solving
|
||||
3. **Residential proxy**: Use `RESIDENTIAL_ISP` for static IPs that sites trust more
|
||||
|
||||
Skyvern has built-in CAPTCHA solving for common types, but some sites use advanced anti-bot measures.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I set up 2FA/TOTP?">
|
||||
1. Go to Settings > Credentials
|
||||
2. Create a new TOTP credential with your authenticator secret
|
||||
3. Reference the TOTP credential in your workflow's login block
|
||||
|
||||
The AI will automatically fetch and enter the TOTP code when it encounters a 2FA prompt. See [TOTP Setup](/developers/credentials/totp) for detailed instructions.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I handle email OTP verification?">
|
||||
Email OTP requires an external integration:
|
||||
|
||||
1. Set up an email listener (Zapier, custom webhook, etc.)
|
||||
2. When the OTP arrives, send it back to Skyvern via the API
|
||||
3. Use a Human Interaction block to wait for the code
|
||||
|
||||
Alternatively, check if the site supports TOTP authenticators, which Skyvern handles natively.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Browser & Proxy
|
||||
|
||||
<Accordion title="Why is the site blocking Skyvern?">
|
||||
Sites may detect automation through:
|
||||
|
||||
- IP reputation (datacenter IPs are often blocked)
|
||||
- Browser fingerprinting
|
||||
- Rate limiting
|
||||
|
||||
**Fixes:**
|
||||
- Use `RESIDENTIAL_ISP` proxy for consistent, trusted IPs
|
||||
- Create a browser profile with an existing authenticated session
|
||||
- Slow down requests by adding explicit waits in your prompt
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I persist cookies across runs?">
|
||||
Use browser profiles:
|
||||
|
||||
1. Create a browser profile in Settings > Browser Profiles
|
||||
2. Log in manually or via a workflow
|
||||
3. Reference the `browser_profile_id` in subsequent runs
|
||||
|
||||
The profile preserves cookies, localStorage, and session state across runs.
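A minimal sketch of reusing the profile on a later run (the `browser_profile_id` parameter name and ID format here are assumptions; check the run options in your SDK version):

```python
result = await client.run_task(
    prompt="Download the latest invoice from the billing page",
    url="https://example.com/account",
    browser_profile_id="bp_123456",  # assumed parameter: reuse this profile's cookies and session state
)
```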
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Can I connect Skyvern to my local browser?">
|
||||
Yes, for self-hosted installations:
|
||||
|
||||
```bash
|
||||
skyvern init # Choose "connect to existing Chrome" option
|
||||
skyvern run server
|
||||
```
|
||||
|
||||
This connects Skyvern to the Chrome instance running on your machine, which is useful for debugging or for accessing internal tools.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why am I getting a 504 timeout when creating browser sessions?">
|
||||
This usually indicates high load or network issues:
|
||||
|
||||
- Wait a few seconds and retry
|
||||
- Check your rate limits
|
||||
- For self-hosted: verify your infrastructure has sufficient resources
|
||||
|
||||
If persistent, contact support with your run IDs.
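A simple retry-with-backoff sketch you can wrap around the failing call (the `create_browser_session` method name is illustrative; substitute whichever call is returning 504s):

```python
import asyncio

async def create_session_with_retry(client, attempts: int = 3):
    for attempt in range(attempts):
        try:
            return await client.create_browser_session()  # illustrative method name
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            await asyncio.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```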
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Self-Hosting & Deployment
|
||||
|
||||
<Accordion title="What are the requirements for self-hosting Skyvern?">
|
||||
- Python 3.11, 3.12, or 3.13
|
||||
- PostgreSQL database
|
||||
- An LLM API key (OpenAI, Anthropic, Azure, Gemini, or Ollama)
|
||||
- Docker (optional, for containerized deployment)
|
||||
|
||||
Run `skyvern init` for an interactive setup wizard. See the [Quickstart](/developers/getting-started/quickstart) for full instructions.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I update my self-hosted Skyvern instance?">
|
||||
```bash
|
||||
pip install --upgrade skyvern
|
||||
alembic upgrade head # Run database migrations
|
||||
skyvern run server # Restart the server
|
||||
```
|
||||
|
||||
For Docker deployments, pull the latest image and restart your containers.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I configure a custom LLM provider?">
|
||||
Set these environment variables:
|
||||
|
||||
```bash
|
||||
ENABLE_OPENAI=true # or ENABLE_ANTHROPIC, ENABLE_AZURE, etc.
|
||||
OPENAI_API_KEY=sk-...
|
||||
LLM_KEY=OPENAI_GPT4O
|
||||
```
|
||||
|
||||
Skyvern supports OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Gemini, and Ollama. Run `skyvern init llm` for guided configuration.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Billing & Pricing
|
||||
|
||||
<Accordion title="How is pricing calculated?">
|
||||
Pricing depends on your plan:
|
||||
|
||||
- **Pay-as-you-go**: Charged per step or per task completion
|
||||
- **Enterprise**: Custom pricing based on volume
|
||||
|
||||
View your usage and billing at [Settings > Billing](https://app.skyvern.com/billing).
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Why does it say insufficient balance when I have credit?">
|
||||
Credits may be reserved for in-progress runs. Check:
|
||||
|
||||
1. Active runs that haven't completed yet
|
||||
2. Failed runs that consumed partial credit
|
||||
3. Pending charges that haven't settled
|
||||
|
||||
Wait for active runs to complete, then check your balance again.
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="How do I get invoices for my payments?">
|
||||
Go to [Settings > Billing](https://app.skyvern.com/billing) to download invoices. For custom invoice requirements (company name, VAT, etc.), contact support.
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## Still have questions?
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Troubleshooting Guide"
|
||||
icon="wrench"
|
||||
href="/developers/debugging/troubleshooting-guide"
|
||||
>
|
||||
Step-by-step debugging for failed runs
|
||||
</Card>
|
||||
<Card
|
||||
title="Contact Support"
|
||||
icon="headset"
|
||||
href="https://app.skyvern.com"
|
||||
>
|
||||
Chat with us via the support widget
|
||||
</Card>
|
||||
</CardGroup>
docs/developers/debugging/observability-with-laminar.mdx (new file, 229 lines)
@@ -0,0 +1,229 @@
---
|
||||
title: Observability with Laminar
|
||||
subtitle: Add observability to your Skyvern automations with Laminar
|
||||
description: Integrate Laminar observability into Skyvern to capture traces of automation runs, track LLM call latency and token usage, monitor failure rates, and debug performance across tasks and workflows.
|
||||
slug: developers/debugging/observability-with-laminar
|
||||
keywords:
|
||||
- Laminar
|
||||
- traces
|
||||
- LLM latency
|
||||
- token usage
|
||||
- failure rate
|
||||
- monitoring
|
||||
- performance
|
||||
- observability
|
||||
---
|
||||
|
||||
[Laminar](https://www.lmnr.ai/) is an observability platform for AI applications. When integrated with Skyvern, it captures traces of your automation runs in Laminar's dashboard. What you see depends on how you're running Skyvern:
|
||||
|
||||
- **Skyvern Cloud (via SDK)**: Laminar wraps the `run_task` / `run_workflow` call, so you get a trace span around the API request and response: latency, status, errors, and the returned output.
|
||||
- **Self-hosted**: Skyvern's server can export full traces to Laminar, including every LLM call (prompts, responses, token usage), browser actions, and workflow step execution. See [self-hosted tracing setup](#self-hosted-tracing-setup) below.
|
||||
|
||||
<Tip>
|
||||
Laminar traces complement [artifacts](/developers/debugging/using-artifacts). Use artifacts for per-run debugging (screenshots, recordings, logs) and Laminar for tracking patterns across runs (failure rates, response times, and which tasks are slowest).
|
||||
</Tip>
|
||||
|
||||
<Note>
|
||||
This guide is also available in the [Laminar documentation](https://docs.lmnr.ai/tracing/integrations/skyvern).
|
||||
</Note>
|
||||
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Laminar integration requires a Skyvern SDK and the Laminar SDK:
|
||||
|
||||
<CodeGroup>
|
||||
```bash Python
|
||||
pip install skyvern 'lmnr[all]'
|
||||
```
|
||||
|
||||
```bash TypeScript
|
||||
npm install @skyvern/client @lmnr-ai/lmnr
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
You will also need:
|
||||
- A **Skyvern API key**: get one at [app.skyvern.com/settings](https://app.skyvern.com/settings/)
|
||||
- A **Laminar API key**: sign up at [lmnr.ai](https://www.lmnr.ai/) and create a project
|
||||
|
||||
---
|
||||
|
||||
## Set up environment variables
|
||||
|
||||
Add both keys to your `.env` file:
|
||||
|
||||
```bash .env
|
||||
SKYVERN_API_KEY=your-skyvern-api-key
|
||||
LMNR_PROJECT_API_KEY=your-laminar-api-key
|
||||
```
|
||||
|
||||
<Frame>
|
||||
<video
|
||||
autoPlay
|
||||
controls
|
||||
muted
|
||||
playsInline
|
||||
className="w-full aspect-video rounded-xl"
|
||||
src="/images/laminar-keys.mp4"
|
||||
></video>
|
||||
</Frame>
|
||||
|
||||
---
|
||||
|
||||
## Run a traced task
|
||||
|
||||
This example scrapes the top 3 posts from Hacker News with Laminar tracing enabled. Call `Laminar.initialize()` before any Skyvern calls; it reads `LMNR_PROJECT_API_KEY` from your environment automatically.
|
||||
|
||||
<CodeGroup>
|
||||
```python Python
|
||||
import os
|
||||
import asyncio
|
||||
from dotenv import load_dotenv
|
||||
load_dotenv()
|
||||
|
||||
from lmnr import Laminar
|
||||
Laminar.initialize()
|
||||
|
||||
from skyvern import Skyvern
|
||||
|
||||
client = Skyvern(api_key=os.getenv("SKYVERN_API_KEY"))
|
||||
|
||||
async def main():
|
||||
result = await client.run_task(
|
||||
prompt="Get the title and URL of the top 3 posts on Hacker News.",
|
||||
url="https://news.ycombinator.com",
|
||||
wait_for_completion=True,
|
||||
)
|
||||
print(f"Status: {result.status}")
|
||||
print(f"Output: {result.output}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
```typescript TypeScript
|
||||
import { Laminar } from "@lmnr-ai/lmnr";
|
||||
Laminar.initialize();
|
||||
|
||||
import { Skyvern } from "@skyvern/client";
|
||||
|
||||
const client = new Skyvern({
|
||||
apiKey: process.env.SKYVERN_API_KEY,
|
||||
});
|
||||
|
||||
async function main() {
|
||||
const result = await client.runTask({
|
||||
body: {
|
||||
prompt: "Get the title and URL of the top 3 posts on Hacker News.",
|
||||
url: "https://news.ycombinator.com",
|
||||
},
|
||||
waitForCompletion: true,
|
||||
});
|
||||
console.log(`Status: ${result.status}`);
|
||||
console.log(`Output: ${JSON.stringify(result.output, null, 2)}`);
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
Expected output:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "completed",
|
||||
"output": {
|
||||
"posts": [
|
||||
{"title": "Zig – Type Resolution Redesign and Language Changes", "url": "https://ziglang.org/devlog/2026/..."},
|
||||
{"title": "Create value for others and don't worry about the returns", "url": "https://geohot.github.io/..."},
|
||||
{"title": "U+237C ⍼ Is Azimuth", "url": "https://ionathan.ch/2026/02/16/angzarr.html"}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The task runs, the output is returned, and the full trace (every HTTP call, timing, and payload) appears in your Laminar dashboard.
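If you want several Skyvern calls grouped under one parent trace, Laminar's `observe` decorator can wrap your own function (a sketch based on the `lmnr` Python SDK; treat the exact decorator usage as an assumption and adjust for your SDK version):

```python
from lmnr import Laminar, observe

Laminar.initialize()

@observe()  # creates a parent span; the Skyvern calls inside become child spans
async def scrape_front_pages(client):
    hn = await client.run_task(
        prompt="Get the title and URL of the top 3 posts on Hacker News.",
        url="https://news.ycombinator.com",
        wait_for_completion=True,
    )
    lobsters = await client.run_task(
        prompt="Get the title and URL of the top 3 posts on Lobsters.",
        url="https://lobste.rs",
        wait_for_completion=True,
    )
    return hn.output, lobsters.output
```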
|
||||
|
||||
<Note>
|
||||
**Python only:** You will see a `ForgeApp is not initialized` error in stderr on startup. This is harmless; `lmnr[all]` tries to instrument Skyvern's server-side internals, which aren't present when using the client SDK. Your traces still work correctly.
|
||||
</Note>
|
||||
|
||||
<Warning>
|
||||
**Python only:** `LaminarLiteLLMCallback` is deprecated and unnecessary. Laminar instruments LiteLLM directly, so no callback setup is needed.
|
||||
</Warning>
|
||||
|
||||
---
|
||||
|
||||
## What traces capture
|
||||
|
||||
What shows up in Laminar depends on your setup.
|
||||
|
||||
### Skyvern Cloud (via SDK)
|
||||
|
||||
When calling `run_task` or `run_workflow` through the SDK, Laminar traces the client-side call:
|
||||
|
||||
| Trace data | What it shows |
|
||||
|------------|---------------|
|
||||
| API request/response | The full round-trip to Skyvern's API (status, latency, payload size) |
|
||||
| Task output | The extracted data or completion result |
|
||||
| Errors | HTTP errors, timeouts, and task failures |
|
||||
|
||||
This is useful for monitoring how your application interacts with Skyvern: tracking which tasks fail, how long they take, and what outputs you're getting back.
|
||||
|
||||
### Self-hosted
|
||||
|
||||
When running Skyvern on your own infrastructure, you get deep server-side traces by configuring Laminar in Skyvern's environment. This gives you visibility into everything happening inside the agent:
|
||||
|
||||
| Trace data | What it shows |
|
||||
|------------|---------------|
|
||||
| LLM interactions | Every prompt sent to the model and its response, including token counts |
|
||||
| Browser actions | Each click, type, and navigation the agent performed |
|
||||
| Workflow steps | Sequential block execution and data passed between blocks |
|
||||
| Image tracing | Screenshots sent to the LLM for analysis |
|
||||
| Performance metrics | Latency and cost per LLM call |
|
||||
| Errors | Exceptions at any layer: LLM, browser, workflow engine |
|
||||
|
||||
---
|
||||
|
||||
## Self-hosted tracing setup
|
||||
|
||||
If you're running Skyvern on your own infrastructure, add the following to your server's environment:
|
||||
|
||||
```bash .env
|
||||
LMNR_PROJECT_API_KEY=your-laminar-api-key
|
||||
```
|
||||
|
||||
Skyvern's server initializes Laminar at startup, which auto-instruments LiteLLM to capture every LLM call, token count, and cost. No manual callback setup is needed.
|
||||
|
||||
If you're running a self-hosted Laminar instance, also set the base URL and ports:
|
||||
|
||||
```bash .env
|
||||
LMNR_PROJECT_API_KEY=your-laminar-api-key
|
||||
LMNR_BASE_URL=http://localhost
|
||||
LMNR_GRPC_PORT=8011
|
||||
LMNR_HTTP_PORT=8010
|
||||
```
|
||||
|
||||
No code changes needed. Once the env vars are set, traces appear in your Laminar project automatically.
|
||||
|
||||
---
|
||||
|
||||
## Next steps
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Using Artifacts"
|
||||
icon="file-lines"
|
||||
href="/developers/debugging/using-artifacts"
|
||||
>
|
||||
Per-run recordings, screenshots, logs, and network data
|
||||
</Card>
|
||||
<Card
|
||||
title="Troubleshooting Guide"
|
||||
icon="wrench"
|
||||
href="/developers/debugging/troubleshooting-guide"
|
||||
>
|
||||
Common issues and how to fix them
|
||||
</Card>
|
||||
</CardGroup>
docs/developers/debugging/troubleshooting-guide.mdx (new file, 386 lines)
@@ -0,0 +1,386 @@
---
|
||||
title: Troubleshooting Guide
|
||||
subtitle: Common issues, step timeline, and when to adjust what
|
||||
description: Diagnose and fix common Skyvern run failures using the step timeline, artifacts, and run status. Covers issues like wrong output, timeouts, CAPTCHA blocks, login failures, and element detection problems.
|
||||
slug: developers/debugging/troubleshooting-guide
|
||||
keywords:
|
||||
- step timeline
|
||||
- wrong output
|
||||
- timeout
|
||||
- CAPTCHA block
|
||||
- login failure
|
||||
- element detection
|
||||
- action failure
|
||||
- diagnosis
|
||||
---
|
||||
|
||||
When a run fails or produces unexpected results, this guide helps you identify the cause and fix it.
|
||||
|
||||
## Quick debugging checklist
|
||||
|
||||
<Frame>
|
||||
<img src="/images/debugging-checklist.png" alt="Skyvern Debugging Checklist" />
|
||||
</Frame>
|
||||
|
||||
<Steps>
|
||||
<Step title="Check run status">
|
||||
Call `get_run(run_id)` and check `status` and `failure_reason`
|
||||
</Step>
|
||||
<Step title="Find the failed step">
|
||||
Call `get_run_timeline(run_id)` to see which block failed
|
||||
</Step>
|
||||
<Step title="Inspect artifacts">
|
||||
Check `screenshot_final` to see where it stopped, `recording` to watch what happened
|
||||
</Step>
|
||||
<Step title="Determine the fix">
|
||||
Adjust prompt, parameters, or file a bug based on what you find
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
**Key artifacts to check:**
|
||||
- `screenshot_final`: Final page state
|
||||
- `recording`: Video of what happened
|
||||
- `llm_response_parsed`: What the AI decided to do
|
||||
- `visible_elements_tree`: What elements were detected
|
||||
- `har`: Network requests if something failed to load
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Check the run
|
||||
|
||||
<CodeGroup>
|
||||
```python Python
|
||||
import os
|
||||
from skyvern import Skyvern
|
||||
|
||||
client = Skyvern(api_key=os.getenv("SKYVERN_API_KEY"))
|
||||
|
||||
run = await client.get_run(run_id)
|
||||
|
||||
print(f"Status: {run.status}")
|
||||
print(f"Failure reason: {run.failure_reason}")
|
||||
print(f"Step count: {run.step_count}")
|
||||
print(f"Output: {run.output}")
|
||||
```
|
||||
|
||||
```typescript TypeScript
|
||||
import { SkyvernClient } from "@skyvern/client";
|
||||
|
||||
const client = new SkyvernClient({
|
||||
apiKey: process.env.SKYVERN_API_KEY,
|
||||
});
|
||||
|
||||
const run = await client.getRun(runId);
|
||||
|
||||
console.log(`Status: ${run.status}`);
|
||||
console.log(`Failure reason: ${run.failure_reason}`);
|
||||
console.log(`Step count: ${run.step_count}`);
|
||||
console.log(`Output: ${JSON.stringify(run.output)}`);
|
||||
```
|
||||
|
||||
```bash cURL
|
||||
curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID" \
|
||||
-H "x-api-key: $SKYVERN_API_KEY" \
|
||||
| jq '{status, failure_reason, step_count, output}'
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
| Status | What it means | Likely cause |
|
||||
|--------|---------------|--------------|
|
||||
| `completed` | Run finished, but output may be wrong | Prompt interpretation issue |
|
||||
| `failed` | System error | Browser crash, network failure |
|
||||
| `terminated` | AI gave up | Login blocked, CAPTCHA, page unavailable |
|
||||
| `timed_out` | Exceeded `max_steps` | Task too complex, or AI got stuck |
|
||||
| `canceled` | Manually stopped | N/A |
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Read the step timeline
|
||||
|
||||
The timeline shows each block and action in execution order.
|
||||
|
||||
<CodeGroup>
|
||||
```python Python
|
||||
timeline = await client.get_run_timeline(run_id)
|
||||
|
||||
for item in timeline:
|
||||
if item.type == "block":
|
||||
block = item.block
|
||||
print(f"Block: {block.label}")
|
||||
print(f" Status: {block.status}")
|
||||
print(f" Duration: {block.duration}s")
|
||||
if block.failure_reason:
|
||||
print(f" Failure: {block.failure_reason}")
|
||||
for action in block.actions:
|
||||
print(f" Action: {action.action_type} -> {action.status}")
|
||||
```
|
||||
|
||||
```typescript TypeScript
|
||||
const timeline = await client.getRunTimeline(runId);
|
||||
|
||||
for (const item of timeline) {
|
||||
if (item.type === "block") {
|
||||
const block = item.block;
|
||||
console.log(`Block: ${block.label}`);
|
||||
console.log(` Status: ${block.status}`);
|
||||
console.log(` Duration: ${block.duration}s`);
|
||||
if (block.failure_reason) {
|
||||
console.log(` Failure: ${block.failure_reason}`);
|
||||
}
|
||||
for (const action of block.actions || []) {
|
||||
console.log(` Action: ${action.action_type} -> ${action.status}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```bash cURL
|
||||
curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/timeline" \
|
||||
-H "x-api-key: $SKYVERN_API_KEY"
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Inspect artifacts
|
||||
|
||||
<CodeGroup>
|
||||
```python Python
|
||||
artifacts = await client.get_run_artifacts(
|
||||
run_id,
|
||||
artifact_type=["screenshot_final", "llm_response_parsed", "skyvern_log"]
|
||||
)
|
||||
|
||||
for a in artifacts:
|
||||
print(f"{a.artifact_type}: {a.signed_url}")
|
||||
```
|
||||
|
||||
```typescript TypeScript
|
||||
const artifacts = await client.getRunArtifacts(runId, {
|
||||
artifact_type: ["screenshot_final", "llm_response_parsed", "skyvern_log"]
|
||||
});
|
||||
|
||||
for (const a of artifacts) {
|
||||
console.log(`${a.artifact_type}: ${a.signed_url}`);
|
||||
}
|
||||
```
|
||||
|
||||
```bash cURL
|
||||
curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/artifacts?artifact_type=screenshot_final&artifact_type=llm_response_parsed&artifact_type=skyvern_log" \
|
||||
-H "x-api-key: $SKYVERN_API_KEY" \
|
||||
| jq '.[] | "\(.artifact_type): \(.signed_url)"'
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
See [Using Artifacts](/developers/debugging/using-artifacts) for the full list of artifact types.
|
||||
|
||||
---
|
||||
|
||||
## Common issues and fixes
|
||||
|
||||
### Run Status Issues
|
||||
|
||||
<Accordion title="Run completed but output is wrong">
|
||||
**Symptom:** Status is `completed`, but the extracted data is incorrect or incomplete.
|
||||
|
||||
**Check:** `screenshot_final` (right page?) and `llm_response_parsed` (right elements?)
|
||||
|
||||
**Fix:**
|
||||
- Add specific descriptions in your schema: `"description": "The price in USD, without currency symbol"`
|
||||
- Add visual hints: "The price is displayed in bold below the product title"
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Run timed out">
|
||||
**Symptom:** Status is `timed_out`.
|
||||
|
||||
**Check:** `recording` (stuck in a loop?) and `step_count` (how many steps?)
|
||||
|
||||
**Fix:**
|
||||
- Increase `max_steps` if the task genuinely needs more steps
|
||||
- Add explicit completion criteria: "COMPLETE when you see 'Order confirmed'"
|
||||
- Break complex tasks into multiple workflow blocks
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Run stuck in queued state">
|
||||
**Symptom:** Run stays in `queued` status and never starts.
|
||||
|
||||
**Likely causes:** Concurrency limit hit, sequential run lock, or high platform load.
|
||||
|
||||
**Fix:**
|
||||
- Wait for other runs to complete
|
||||
- Check your plan's concurrency limits in Settings
|
||||
- If using "Run Sequentially" with credentials, ensure prior runs finish
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="504 Gateway Timeout on browser session">
|
||||
**Symptom:** Getting `504 Gateway Timeout` errors when creating browser sessions.
|
||||
|
||||
**Fix:**
|
||||
- Retry after a few seconds; usually transient
|
||||
- For self-hosted: check infrastructure resources
|
||||
- If persistent, contact support with your run IDs
|
||||
</Accordion>
|
||||
|
||||
### Element & Interaction Issues
|
||||
|
||||
<Accordion title="AI clicked the wrong element">
|
||||
**Symptom:** Recording shows the AI clicking something other than intended.
|
||||
|
||||
**Check:** `visible_elements_tree` (element detected?) and `llm_prompt` (target described clearly?)
|
||||
|
||||
**Fix:**
|
||||
- Add distinguishing details: "Click the **blue** Submit button at the **bottom** of the form"
|
||||
- Reference surrounding text: "Click Download in the row that says 'January 2024'"
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Element not found">
|
||||
**Symptom:** `failure_reason` mentions an element couldn't be found.
|
||||
|
||||
**Check:** `screenshot_final` (element visible?) and `visible_elements_tree` (element detected?)
|
||||
|
||||
**Likely causes:** Dynamic loading, iframe, below the fold, unusual rendering (canvas, shadow DOM)
|
||||
|
||||
**Fix:**
|
||||
- Add `max_screenshot_scrolls` to capture lazy-loaded content
|
||||
- Describe elements visually rather than by technical properties
|
||||
- Add wait or scroll instructions in your prompt
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Date fields entering wrong values">
|
||||
**Symptom:** Dates entered in wrong format (MM/DD vs DD/MM) or wrong date entirely.
|
||||
|
||||
**Check:** `recording` (how did AI interact with the date picker?)
|
||||
|
||||
**Fix:**
|
||||
- Specify exact format: "Enter the date as **15/01/2024** (DD/MM/YYYY format)"
|
||||
- For date pickers: "Click the date field, then select January 15, 2024 from the calendar"
|
||||
- If site auto-formats: "Type 01152024 and let the field format it"
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Address autocomplete cycling or wrong selection">
|
||||
**Symptom:** AI keeps clicking through address suggestions in a loop, or selects wrong address.
|
||||
|
||||
**Check:** `recording` (is autocomplete dropdown appearing?)
|
||||
|
||||
**Fix:**
|
||||
- Specify selection: "Type '123 Main St' and select the first suggestion showing 'New York, NY'"
|
||||
- Disable autocomplete: "Type the full address without using autocomplete suggestions"
|
||||
- Break into steps: "Enter street address, wait for dropdown, select the matching suggestion"
|
||||
</Accordion>
|
||||
|
||||
### Authentication Issues
|
||||
|
||||
<Accordion title="Login failed">
|
||||
**Symptom:** Status is `terminated` with login-related failure reason.
|
||||
|
||||
**Check:** `screenshot_final` (error message?) and `recording` (fields filled correctly?)
|
||||
|
||||
**Likely causes:** Expired credentials, CAPTCHA, MFA required, automation detected
|
||||
|
||||
**Fix:**
|
||||
- Verify credentials in your credential store
|
||||
- Use `RESIDENTIAL_ISP` proxy for more stable IPs
|
||||
- Configure TOTP if MFA is required
|
||||
- Use a browser profile with an existing session
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="CAPTCHA blocked the run">
|
||||
**Symptom:** `failure_reason` mentions CAPTCHA.
|
||||
|
||||
**Fix options:**
|
||||
1. **Browser profile**: Use an authenticated session that skips login
|
||||
2. **Human interaction block**: Pause for manual CAPTCHA solving
|
||||
3. **Different proxy**: Use `RESIDENTIAL_ISP` for static IPs sites trust more
|
||||
</Accordion>
|
||||
|
||||
### Data & Output Issues
|
||||
|
||||
<Accordion title="AI hallucinating or inventing data">
|
||||
**Symptom:** Extracted data contains information not on the page.
|
||||
|
||||
**Check:** `screenshot_final` (data visible?) and `llm_response_parsed` (what did AI claim to see?)
|
||||
|
||||
**Fix:**
|
||||
- Add instruction: "Only extract data **visibly displayed** on the page. Return null if not found"
|
||||
- Make schema fields optional where appropriate
|
||||
- Add validation: "If price is not displayed, set price to null instead of guessing"
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="File downloads not working">
|
||||
**Symptom:** Files download repeatedly, wrong files, or downloads don't complete.
|
||||
|
||||
**Check:** `recording` (download button clicked correctly?) and `downloaded_files` in response
|
||||
|
||||
**Fix:**
|
||||
- Be specific: "Click the **PDF** download button, not the Excel one"
|
||||
- Add wait time: "Wait for the download to complete"
|
||||
- For multi-step downloads: "Click Download, then select PDF format, then click Confirm"
|
||||
</Accordion>
|
||||
|
||||
### Environment Issues
|
||||
|
||||
<Accordion title="Site showing content in wrong language">
|
||||
**Symptom:** Site displays in unexpected language based on proxy location.
|
||||
|
||||
**Check:** What `proxy_location` is being used? Does the site geo-target content?
|
||||
|
||||
**Fix:**
|
||||
- Use appropriate proxy: `RESIDENTIAL_US` for English US sites
|
||||
- Add instruction: "If the site asks for language preference, select English"
|
||||
- Set language in URL: `https://example.com/en/`
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Workflow gets stuck in a loop">
|
||||
**Symptom:** Recording shows AI repeating the same actions without progressing.
|
||||
|
||||
**Check:** `recording` (what pattern is repeating?) and conditional block exit conditions
|
||||
|
||||
**Fix:**
|
||||
- Add termination: "STOP if you've attempted this action 3 times without success"
|
||||
- Add completion markers: "COMPLETE when you see 'Success' message"
|
||||
- Review loop conditions to ensure exit criteria are achievable
|
||||
</Accordion>
|
||||
|
||||
---
|
||||
|
||||
## When to adjust prompts vs parameters
|
||||
|
||||
<CardGroup cols={3}>
|
||||
<Card title="Adjust prompt" icon="pen">
|
||||
- AI misunderstood what to do
|
||||
- Clicked wrong element
|
||||
- Ambiguous completion criteria
|
||||
- Extracted wrong data
|
||||
</Card>
|
||||
<Card title="Adjust parameters" icon="sliders">
|
||||
- Timed out → increase `max_steps`
|
||||
- Site blocked → change `proxy_location`
|
||||
- Content didn't load → increase `max_screenshot_scrolls`
|
||||
</Card>
|
||||
<Card title="File a bug" icon="bug">
|
||||
- Visible element not detected
|
||||
- Standard UI patterns don't work
|
||||
- Consistent wrong element despite clear prompts
|
||||
</Card>
|
||||
</CardGroup>
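For reference, here is what the parameter adjustments from the middle card look like on a task run (a sketch; the parameter names follow the examples used elsewhere in these docs, and `max_steps` is an assumption):

```python
result = await client.run_task(
    prompt="...",
    url="https://example.com",
    max_steps=100,                     # timed out: give the run a larger step budget
    proxy_location="RESIDENTIAL_ISP",  # site blocked: switch to a static residential IP
    max_screenshot_scrolls=3,          # content didn't load: capture more of the page
)
```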
|
||||
|
||||
---
|
||||
|
||||
## Next steps
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Using Artifacts"
|
||||
icon="file-lines"
|
||||
href="/developers/debugging/using-artifacts"
|
||||
>
|
||||
Detailed reference for all artifact types
|
||||
</Card>
|
||||
<Card
|
||||
title="Reliability Tips"
|
||||
icon="shield-check"
|
||||
href="/developers/going-to-production/reliability-tips"
|
||||
>
|
||||
Write prompts that fail less often
|
||||
</Card>
|
||||
</CardGroup>
docs/developers/debugging/using-artifacts.mdx (new file, 222 lines)
@@ -0,0 +1,222 @@
---
|
||||
title: Using Artifacts
|
||||
subtitle: Access recordings, screenshots, logs, and network data for every run
|
||||
description: Retrieve and inspect artifacts from Skyvern runs including browser recordings, screenshots, AI reasoning logs, DOM element trees, network HAR files, and Playwright traces to debug failures and unexpected results.
|
||||
slug: developers/debugging/using-artifacts
|
||||
keywords:
|
||||
- artifact
|
||||
- recording
|
||||
- screenshot
|
||||
- HAR file
|
||||
- Playwright trace
|
||||
- DOM tree
|
||||
- AI reasoning log
|
||||
- network data
|
||||
- generated code
|
||||
---
|
||||
|
||||
When a Skyvern run fails or produces unexpected results, artifacts are your debugging toolkit. Every run automatically captures what happened: recordings of the browser session, screenshots at each step, the AI's reasoning, and network traffic.
|
||||
|
||||
This page covers how to retrieve artifacts and what each type tells you.
|
||||
|
||||
<Tip>
|
||||
Looking for a specific artifact type? Jump to the [artifact types reference](#artifact-types-reference) to see all available types, then come back here to learn how to retrieve them.
|
||||
</Tip>
|
||||
|
||||
---
|
||||
|
||||
## Before you start
|
||||
|
||||
You need a `run_id` to retrieve artifacts. You get this when you [run a task](/running-automations/run-a-task) or workflow.
|
||||
|
||||
---
|
||||
|
||||
## Retrieve artifacts
|
||||
|
||||
Use `get_run_artifacts` with the `run_id` from your task or workflow run:
|
||||
|
||||
<CodeGroup>
|
||||
```python Python
|
||||
import os
|
||||
from skyvern import Skyvern
|
||||
|
||||
client = Skyvern(api_key=os.getenv("SKYVERN_API_KEY"))
|
||||
|
||||
artifacts = await client.get_run_artifacts(run_id)
|
||||
|
||||
for artifact in artifacts:
|
||||
print(f"{artifact.artifact_type}: {artifact.signed_url}")
|
||||
```
|
||||
|
||||
```typescript TypeScript
|
||||
import { SkyvernClient } from "@skyvern/client";
|
||||
|
||||
const client = new SkyvernClient({
|
||||
apiKey: process.env.SKYVERN_API_KEY,
|
||||
});
|
||||
|
||||
const artifacts = await client.getRunArtifacts(runId);
|
||||
|
||||
for (const artifact of artifacts) {
|
||||
console.log(`${artifact.artifact_type}: ${artifact.signed_url}`);
|
||||
}
|
||||
```
|
||||
|
||||
```bash cURL
|
||||
curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/artifacts" \
|
||||
-H "x-api-key: $SKYVERN_API_KEY"
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
<Note>
|
||||
Signed URLs expire after 24 hours. If a URL has expired, call `get_run_artifacts` again to get fresh URLs. When running Skyvern locally, `signed_url` will be `null`. Access artifacts directly from your local file system instead.
|
||||
</Note>
|
||||
|
||||
---
|
||||
|
||||
## Filter by artifact type
|
||||
|
||||
To get only specific artifact types, pass `artifact_type`:
|
||||
|
||||
<CodeGroup>
|
||||
```python Python
|
||||
# Get only recordings and final screenshots
|
||||
artifacts = await client.get_run_artifacts(
|
||||
run_id,
|
||||
artifact_type=["recording", "screenshot_final"]
|
||||
)
|
||||
```
|
||||
|
||||
```typescript TypeScript
|
||||
const artifacts = await client.getRunArtifacts(runId, {
|
||||
artifact_type: ["recording", "screenshot_final"],
|
||||
});
|
||||
```
|
||||
|
||||
```bash cURL
|
||||
curl -X GET "https://api.skyvern.com/v1/runs/$RUN_ID/artifacts?artifact_type=recording&artifact_type=screenshot_final" \
|
||||
-H "x-api-key: $SKYVERN_API_KEY"
|
||||
```
|
||||
</CodeGroup>
|
||||
|
||||
---
|
||||
|
||||
## Artifact types reference
|
||||
|
||||
### Visual artifacts
|
||||
|
||||
These show you what the browser looked like at various points.
|
||||
|
||||
| Type | What it contains | Use this to |
|
||||
|------|------------------|-------------|
|
||||
| `recording` | Full video of the browser session (WebM) | Watch the entire run, see exactly what happened |
|
||||
| `screenshot` | Base screenshot capture | General page state capture |
|
||||
| `screenshot_final` | Screenshot when the run ended | Check the final state (did it end on the right page?) |
|
||||
| `screenshot_action` | Screenshot taken after each action | See the page state after each click, type, or navigation |
|
||||
| `screenshot_llm` | Screenshot sent to the LLM for decision-making | Understand what the AI "saw" when it made each decision |
|
||||
|
||||
**Debugging tip:** If the run failed, start with `screenshot_final` to see where it stopped, then watch the `recording` to see how it got there.
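For instance, a short sketch that fetches just those two artifacts and saves them locally (`httpx` and the file extensions are assumptions; any HTTP client works, and the screenshot format may differ):

```python
import httpx

artifacts = await client.get_run_artifacts(
    run_id, artifact_type=["screenshot_final", "recording"]
)

async with httpx.AsyncClient() as http:
    for artifact in artifacts:
        if artifact.signed_url:  # null when running Skyvern locally
            response = await http.get(artifact.signed_url, follow_redirects=True)
            suffix = "webm" if artifact.artifact_type == "recording" else "png"  # recording is WebM per the table above
            with open(f"{artifact.artifact_type}.{suffix}", "wb") as f:
                f.write(response.content)
```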
|
||||
|
||||
### AI reasoning artifacts
|
||||
|
||||
These show you why the AI made each decision.
|
||||
|
||||
| Type | What it contains | Use this to |
|
||||
|------|------------------|-------------|
|
||||
| `llm_prompt` | The prompt sent to the LLM | See exactly what instructions the AI received |
|
||||
| `llm_request` | Full request including context | Check if the right context was included |
|
||||
| `llm_response` | Raw LLM response | See the AI's unprocessed output |
|
||||
| `llm_response_parsed` | Parsed response (extracted actions) | Understand what actions the AI decided to take |
|
||||
| `llm_response_rendered` | Parsed response with real URLs | Debug URL targeting issues (see below) |
|
||||
|
||||
**Debugging tip:** If the AI did something unexpected, check `llm_prompt` to see if your prompt was interpreted correctly, then `llm_response_parsed` to see what action it extracted.
|
||||
|
||||
**About `llm_response_rendered`:** Skyvern hashes URLs before sending them to the LLM to reduce token usage and avoid confusing the model with long query strings. The `llm_response_parsed` artifact contains these hashed placeholders (like `{{_a1b2c3}}`), while `llm_response_rendered` shows the same response with actual URLs substituted back in. Use this when you need to verify which exact URL the AI targeted.
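If you ever need to expand those placeholders yourself, a rough sketch is below (it assumes `hashed_href_map` is a JSON object keyed by the placeholder hash; verify that against your own artifact before relying on it):

```python
import re

def render_urls(parsed_response: str, hashed_href_map: dict) -> str:
    # Replace {{_a1b2c3}}-style placeholders with the original URLs from hashed_href_map.
    return re.sub(
        r"\{\{(_\w+)\}\}",
        lambda match: hashed_href_map.get(match.group(1), match.group(0)),
        parsed_response,
    )
```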
|
||||
|
||||
### Page structure artifacts
|
||||
|
||||
These show how Skyvern interpreted the page.
|
||||
|
||||
| Type | What it contains | Use this to |
|
||||
|------|------------------|-------------|
|
||||
| `visible_elements_tree` | DOM tree of visible elements (JSON) | See which elements Skyvern detected |
|
||||
| `visible_elements_tree_trimmed` | Trimmed tree (sent to LLM) | See what the AI could "see" |
|
||||
| `visible_elements_tree_in_prompt` | Element tree as plain text | Read exactly what was embedded in the prompt |
|
||||
| `visible_elements_id_css_map` | CSS selectors for each element | Debug element targeting issues |
|
||||
| `visible_elements_id_frame_map` | Element-to-iframe mapping | Debug elements inside iframes |
|
||||
| `visible_elements_id_xpath_map` | XPath selectors for each element | Alternative element targeting debug |
|
||||
| `hashed_href_map` | Mapping of hashed URLs to original URLs | Decode URL placeholders in LLM responses |
|
||||
| `html` | Generic HTML capture | Inspect page HTML |
|
||||
| `html_scrape` | Full HTML captured during scraping | Inspect the raw page structure |
|
||||
| `html_action` | HTML captured after an action | See how the page changed after each action |
|
||||
|
||||
**Debugging tip:** If the AI couldn't find an element, check `visible_elements_tree` to see if it was detected. If not, the element might be in an iframe, dynamically loaded, or outside the viewport.
|
||||
|
||||
**About the element tree variants:** The `visible_elements_tree` is a JSON structure, while `visible_elements_tree_in_prompt` is the plain-text representation that gets embedded directly into the LLM prompt. The latter is often easier to read when debugging prompt issues. If you're debugging iframe-related problems, `visible_elements_id_frame_map` shows which frame contains each element ID.
|
||||
|
||||
### Log artifacts
|
||||
|
||||
These contain structured execution data.
|
||||
|
||||
| Type | What it contains | Use this to |
|
||||
|------|------------------|-------------|
|
||||
| `skyvern_log` | Formatted execution logs | Read through the run step-by-step |
|
||||
| `skyvern_log_raw` | Raw JSON logs | Parse programmatically for automation |
|
||||
| `browser_console_log` | Browser console output | Debug JavaScript errors on the page |
|
||||
|
||||
**Debugging tip:** `skyvern_log` is human-readable and shows the flow of execution. Start here for an overview before diving into specific artifacts.
|
||||
|
||||
### Network artifacts
|
||||
|
||||
| Type | What it contains | Use this to |
|
||||
|------|------------------|-------------|
|
||||
| `har` | HTTP Archive (network requests/responses) | Debug API calls, check for failed requests, see response data |
|
||||
| `trace` | Playwright trace file | Replay the session in Playwright Trace Viewer |
|
||||
|
||||
**Debugging tip:** The HAR file captures every network request. If a form submission failed, check the HAR to see if the request was sent and what the server responded.
|
||||
|
||||
### File artifacts
|
||||
|
||||
| Type | What it contains | Use this to |
|
||||
|------|------------------|-------------|
|
||||
| `script_file` | Generated Playwright scripts | See the code Skyvern would use to repeat this task |
|
||||
|
||||
---
|
||||
|
||||
## Artifact response fields
|
||||
|
||||
| Field | Type | Description |
|
||||
|-------|------|-------------|
|
||||
| `artifact_id` | string | Unique identifier |
|
||||
| `artifact_type` | string | Type (see tables above) |
|
||||
| `uri` | string | Internal storage path |
|
||||
| `signed_url` | string \| null | Pre-signed URL for downloading |
|
||||
| `task_id` | string \| null | Associated task ID |
|
||||
| `step_id` | string \| null | Associated step ID |
|
||||
| `run_id` | string \| null | Run ID (e.g., `tsk_v2_...`, `wr_...`) |
|
||||
| `workflow_run_id` | string \| null | Workflow run ID |
|
||||
| `workflow_run_block_id` | string \| null | Workflow block ID |
|
||||
| `organization_id` | string | Your organization |
|
||||
| `created_at` | datetime | When created |
|
||||
| `modified_at` | datetime | When modified |
|
||||
|
||||
---
|
||||
|
||||
## Next steps
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Troubleshooting Guide"
|
||||
icon="wrench"
|
||||
href="/developers/debugging/troubleshooting-guide"
|
||||
>
|
||||
Common issues and how to fix them
|
||||
</Card>
|
||||
<Card
|
||||
title="Error Handling"
|
||||
icon="triangle-exclamation"
|
||||
href="/developers/going-to-production/error-handling"
|
||||
>
|
||||
Map errors to custom codes for programmatic handling
|
||||
</Card>
|
||||
</CardGroup>