Unbrowse Skill
API-native agent browser powered by Kuri (Zig-native CDP, 464KB, ~3ms cold start)
Why use this skill
This skill is most useful when you want an agent workflow that is more structured than an ad-hoc prompt. Instead of restating the same expectations every time, a dedicated SKILL.md file gives the assistant a repeatable brief. Here the core value is clarity: the repo already frames the workflow around concrete, repeatable tasks, and the skill source gives you a portable starting point you can evaluate, adapt, and reuse. The inferred platform is OpenClaw Skills, which helps you judge whether the skill is likely to feel native in your current agent ecosystem or whether it is better treated as a general reference.
That matters because AI assistants are better when the operating context is explicit. A good skill turns hidden team expectations into visible instructions. It can name preferred tools, describe failure modes, define what “done” looks like, and reduce the amount of corrective prompting you need after the first draft. For developers exploring the wider SKILL.md ecosystem, this page helps answer the practical question: is this skill specific and maintained enough to be worth trying?
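To make that concrete, here is a generic SKILL.md skeleton illustrating those elements. Every name and field value below is an invented placeholder, not taken from this skill; the real frontmatter for this skill appears in the preview further down.

```markdown
---
name: example-skill            # placeholder name
description: >-
  One-paragraph brief: when to trigger this skill and what "done" means.
user-invocable: true
---
# Example Skill

## When to use
- Trigger on tasks matching a stated condition (placeholder).

## Preferred tools
- tool-a for fetching, tool-b for validation (placeholders).

## Known pitfalls
- Failure modes the assistant should watch for and avoid.
```

The point is that each section turns an implicit team expectation into an instruction the assistant can follow without corrective prompting.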
How to evaluate and use it
Start with the source repo and the preview below. The preview tells you whether the instructions are actionable or merely aspirational. Strong skills describe triggers, recommended tools, concrete steps, and known pitfalls; weak skills stay generic. This one lives in unbrowse-ai/unbrowse, which gives you concrete repo context, an update history, and a direct ownership trail.
Once you confirm the scope looks right, test it on a small task before making it part of a larger workflow. If it improves consistency, keep it. If it is too broad, outdated, or conflicts with your own process, treat it as a reference rather than a drop-in rule. That is the healthiest way to use directory-discovered skills: not as magic plugins, but as reusable operational knowledge that still deserves judgment.
SKILL.md preview
Previewing the source is one of the fastest ways to judge whether a skill is truly useful. This snippet comes from the public file in the linked repository.
---
name: unbrowse
description: >-
  API-native agent browser powered by Kuri (Zig-native CDP, 464KB, ~3ms cold
  start). Unbrowse is the intelligence layer — learns internal APIs (shadow
  APIs) from real browsing traffic and progressively replaces browser calls with
  cached API routes (<200ms). Three paths: skill cache, shared route graph, or
  Kuri browser fallback. 3.6x mean speedup over Playwright across 94 domains.
  Full Kuri API surface exposed (snapshots, ref-based actions, HAR, cookies,
  DOM, screenshots). Free to capture and index; agents earn from mining routes
  for other agents.
user-invocable: true
metadata: {"openclaw": {"requires": {"bins": ["unbrowse"]}, "install": [{"id": "npm", "kind": "node", "package": "unbrowse", "bins": ["unbrowse"]}], "emoji": "🔍", "homepage": "https://github.com/unbrowse-ai/unbrowse"}}
---
# Unbrowse — Kuri-Powered Agent Browser
Kuri is the browser runtime. Unbrowse is the orchestration and publish layer on top.
Use this mental model:
- **Traversal**: browser-native. `go`, `snap`, `click`, `fill`, `select`, `eval`, `submit`, `close`. No hidden API replay while clicking around.
- **Publish/index**: passive evidence gets compiled later into a workflow DAG, typed params, restrictions, enums, token/header hints, and replay contracts.
- **Replay/execute**: explicit only. Use indexed/published contracts when you want a non-browser call.
The clean category line is: Unbrowse is the agent-facing browser tool; Kuri is the primitive engine underneath.
It is still the replacement layer for OpenClaw / `agent-browser` browser flows — ju
...
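Based only on the install entry in the frontmatter metadata above (an npm package named `unbrowse` providing an `unbrowse` binary), installation would presumably look like this sketch; check the repo's own README before relying on it.

```shell
# Sketch: install the CLI declared in the skill's metadata
# ("kind": "node", "package": "unbrowse", "bins": ["unbrowse"]).
# Assumes Node.js and npm are already available on this machine.
npm install -g unbrowse
```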