Frontend Skills · Claude Code Skills

mine-best-practices Skill

description: Extract best practices from PR review comments to build a curated library for code review automation


Stars: 4 · Forks: 0 · Updated: March 18, 2026 · Quality score: 36

Why use this skill

This skill is most useful when you want an agent workflow that is more structured than an ad-hoc prompt. Instead of restating the same expectations every time, a dedicated SKILL.md file gives the assistant a repeatable brief. In this case, the core value is clarity: the repo already frames the workflow around mining best practices from PR reviews, and the skill source gives you a portable starting point you can evaluate, adapt, and reuse. The inferred platform for this skill is Claude Code Skills, which helps you judge whether it is likely to feel native in your current agent ecosystem or whether it is better treated as a general reference.

That matters because AI assistants are better when the operating context is explicit. A good skill turns hidden team expectations into visible instructions. It can name preferred tools, describe failure modes, define what “done” looks like, and reduce the amount of corrective prompting you need after the first draft. For developers exploring the wider SKILL.md ecosystem, this page helps answer the practical question: is this skill specific and maintained enough to be worth trying?

How to evaluate and use it

Start with the source repo and the preview below. The preview tells you whether the instructions are actionable or just aspirational. Strong skills usually describe triggers, recommended tools, steps, and known pitfalls. Weak skills tend to stay generic. This one lives in valon-technologies/mine-best-practices, which gives you a concrete repo context, update history, and direct ownership trail.

Once you confirm the scope looks right, test it on a small task before making it part of a larger workflow. If it improves consistency, keep it. If it is too broad, outdated, or conflicts with your own process, treat it as a reference rather than a drop-in rule. That is the healthiest way to use directory-discovered skills: not as magic plugins, but as reusable operational knowledge that still deserves judgment.

SKILL.md preview

Previewing the source is one of the fastest ways to judge whether a skill is truly useful. This snippet comes from the public file in the linked repository.

---
name: mine-best-practices
description: Extract best practices from PR review comments to build a curated library for code review automation
license: MIT
argument-hint: "--since YYYY-MM-DD [--until YYYY-MM-DD] [--scope NAME]"
metadata:
  author: Valon Technologies
  version: "1.0"
---

# Mine Best Practices

Extract insights from PR review threads, validate against codebase, and consolidate into the best practices library.

## Your Role as Orchestrator

**You are the orchestrator** for this multi-stage pipeline. Your responsibilities:

1. **Execute scripts** - Run the Python scripts that prepare batches and aggregate results
2. **Launch subagents** - Create Task() calls to dispatch specialized subagents for extraction, validation, and synthesis. **Max 10 concurrent** — if more batches exist, wait for a wave to complete before launching the next.
3. **Validate outputs** - After each phase, review subagent outputs for quality, format correctness, and issues
4. **Stop on anomalies** - If you detect problems (malformed output, unexpected results, low yield), stop and alert the user. Do not attempt to fix issues on-the-fly.

**Key principle:** Validate each stage's output before proceeding. Only interrupt the user when something needs human judgment.
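The wave rule above (at most 10 concurrent subagents, validate each wave before launching the next) can be sketched in Python. This is an illustration only, not part of the skill: `run_subagent` and the `None`-as-malformed anomaly check are hypothetical stand-ins for the real Task() dispatch and output validation.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 10  # wave size from the orchestrator rule


def dispatch_in_waves(batches, run_subagent):
    """Run batches in waves of at most MAX_CONCURRENT, validating each wave.

    run_subagent is a hypothetical callable standing in for a Task() dispatch.
    """
    results = []
    for start in range(0, len(batches), MAX_CONCURRENT):
        wave = batches[start:start + MAX_CONCURRENT]
        # Launch one wave and wait for all of it to finish before the next
        with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
            wave_results = list(pool.map(run_subagent, wave))
        # Stop on anomalies: surface the problem instead of fixing on-the-fly
        if any(r is None for r in wave_results):
            raise RuntimeError("Malformed subagent output; stopping for human review")
        results.extend(wave_results)
    return results
```

The key design choice mirrored here is that waves are sequential even though work within a wave is concurrent, so each stage's output can be checked before more work is launched.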

## When to Use This Skill

**Use when:**
- Building/updating the best practices library from recent PRs
- Mining a date range of PR reviews for patterns
- Seeding the library from historical review threads

**Don't use for:**
- Reviewing code against the library of current practices
- General PR reviews

## Usage

```
/mine-best

...
```