This skill grants broad permissions, enabling system compromise, data
Claims to do
XCrawl Scrape: This skill handles single-page extraction with XCrawl Scrape APIs. Default behavior is raw passthrough: return upstream API response bodies as-is.
Actually does
This skill uses `curl` or `node` to interact with the XCrawl Scrape API. It reads an API key from `~/.xcrawl/config.json` and then makes POST requests to `https://run.xcrawl.com/v1/scrape` to initiate scrapes (sync or async) and GET requests to `https://run.xcrawl.com/v1/scrape/{scrape_id}` to retrieve async results.
openclaw skills install wykings/xcrawl-scrapeAccess to sensitive environment variables detected
${API_KEY}The skill is granted broad permissions to execute arbitrary `curl` and `node` commands, as well as perform `Read`, `Write`, and `Edit` operations on the file system. This allows for complete system compromise, including data exfiltration, installing persistence mechanisms, and arbitrary code execution.
allowed-tools: Bash(curl:*) Bash(node:*) Read Write Edit Grep
The `webhook.url` parameter allows the agent to make outbound HTTP requests to an arbitrary, user-controlled URL. This can be exploited for Server-Side Request Forgery (SSRF) to access internal network resources, exfiltrate data, or scan ports.
| url | string | No | - | Callback URL
The `request.skip_tls_verification` parameter allows disabling TLS certificate validation for outgoing requests. This weakens security by making connections vulnerable to Man-in-the-Middle attacks and could lead to insecure communication practices.
| skip_tls_verification | boolean | No | true | Skip TLS verification
The skill reads the XCRAWL_API_KEY from ~/.xcrawl/config.json using inline Node.js code executed via Bash. While the skill claims to keep the key local, the key is extracted and embedded into shell variables and curl commands, creating potential exposure in shell history, process lists, or logs.
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")'"The skill instructs the agent to read a locally stored API key and transmit it as a Bearer token to an external third-party domain (run.xcrawl.com). If the target URL or API endpoint is ever substituted (e.g., via prompt injection in a scraped page), the API key could be exfiltrated to an attacker-controlled server.
-H "Authorization: Bearer ${API_KEY}" ... curl -sS -X POST "https://run.xcrawl.com/v1/scrape"The skill accepts a user-supplied URL and passes it directly to the XCrawl API as the 'url' field. The API then fetches that URL server-side. If the agent constructs or accepts URLs from untrusted sources (e.g., content from previously scraped pages), this could be used to trigger SSRF against internal services reachable from XCrawl's infrastructure, or to exfiltrate the API key to an attacker-controlled endpoint by setting the url to an attacker server.
-d '{"url":"https://example.com","mode":"sync","output":{"formats":["markdown","links"]}}'The skill can be directed to scrape any URL, including URLs containing sensitive data in query parameters or paths. Combined with the webhook feature (which sends results to a caller-specified URL), this could be used to exfiltrate data extracted from internal or sensitive pages to an attacker-controlled endpoint.
webhook.url: Callback URL ... webhook.headers: Custom callback headers
The skill instructs the agent to execute inline Node.js code via Bash to parse JSON and extract credentials. While this is declared in the allowed-tools, the pattern of dynamically constructing and executing code from skill instructions is a vector that could be abused if the skill instructions are tampered with (e.g., supply chain attack on the skill file).
node -e 'const fs=require("fs");const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;The skill fetches and returns raw web page content (HTML, markdown, JSON) from arbitrary URLs and passes it back to the agent. Malicious web pages could embed hidden prompt injection instructions within their content. The skill's 'Output Contract' specifies returning 'Raw response body from each API call' without sanitization, maximizing the attack surface for indirect prompt injection.
4. Return raw API responses directly. - Do not synthesize or compress fields by default.
The `output.json.prompt` field allows user-controlled input to be passed as a prompt for JSON extraction to the upstream XCrawl service. If the XCrawl service uses an LLM for this extraction, this could be a prompt injection vector against that service.
| json.prompt | string | No | - | Extraction prompt
The skill consumes paid credits on each API call. The skill notes this but does not enforce any confirmation step before execution. An agent operating autonomously could exhaust user credits by making repeated scrape calls without user awareness.
Using XCrawl APIs consumes credits.
[](https://mondoo.com/ai-agent-security/skills/clawhub/wykings/xcrawl-scrape)<a href="https://mondoo.com/ai-agent-security/skills/clawhub/wykings/xcrawl-scrape"><img src="https://mondoo.com/ai-agent-security/api/badge/clawhub/wykings/xcrawl-scrape.svg" alt="Mondoo Skill Check" /></a>https://mondoo.com/ai-agent-security/api/badge/clawhub/wykings/xcrawl-scrape.svgSkills can read files, run commands, and access credentials. Mondoo helps organizations manage the security risks of AI agent skills across their entire fleet.