llm-council Skill
Multi-model LLM Council with live dashboard. Query multiple AI models simultaneously, see responses side-by-side in a swarm-style dashboard, synthesize consensus, and run anonymous model-to-model voting.
Why use this skill
llm-council is most useful when you want an agent workflow that is more structured than an ad-hoc prompt. Instead of restating the same expectations every time, a dedicated SKILL.md file gives the assistant a repeatable brief. In this case, the core value is clarity: the repo frames the workflow around a concrete task (multi-model querying with a live dashboard), and the skill source gives you a portable starting point you can evaluate, adapt, and reuse. The inferred platform is Claude Code Skills, which helps you judge whether it is likely to feel native in your current agent ecosystem or better treated as a general reference.
That matters because AI assistants are better when the operating context is explicit. A good skill turns hidden team expectations into visible instructions. It can name preferred tools, describe failure modes, define what “done” looks like, and reduce the amount of corrective prompting you need after the first draft. For developers exploring the wider SKILL.md ecosystem, this page helps answer the practical question: is this skill specific and maintained enough to be worth trying?
How to evaluate and use it
Start with the source repo and the preview below. The preview tells you whether the instructions are actionable or just aspirational. Strong skills usually describe triggers, recommended tools, steps, and known pitfalls. Weak skills tend to stay generic. This one lives in happycapy-ai/Happycapy-skills, which gives you concrete repo context, an update history, and a direct ownership trail.
Once you confirm the scope looks right, test it on a small task before making it part of a larger workflow. If it improves consistency, keep it. If it is too broad, outdated, or conflicts with your own process, treat it as a reference rather than a drop-in rule. That is the healthiest way to use directory-discovered skills: not as magic plugins, but as reusable operational knowledge that still deserves judgment.
SKILL.md preview
Previewing the source is one of the fastest ways to judge whether a skill is truly useful. This snippet comes from the public file in the linked repository.
---
name: llm-council
description: >-
  Multi-model LLM Council with live dashboard. Query multiple AI models
  simultaneously, see responses side-by-side in a swarm-style dashboard,
  synthesize consensus, and run anonymous model-to-model voting. Use when the
  user asks to start the LLM council, compare models, query multiple models,
  convene council, ask all models, do model comparison, run multi-model
  queries, or launch the council dashboard. Supports Claude Sonnet 4.5,
  Claude Opus 4.5, GPT-4o, GPT-5.1, Gemini 2.5 Flash, and Gemini 2.5 Pro via
  AI Gateway.
---

# LLM Council

Query multiple AI models in parallel with a live web dashboard.

## Launch

```bash
cd ~/.claude/skills/llm-council/scripts

# Kill any existing server on the port
fuser -k 8787/tcp 2>/dev/null

# Start server
nohup python3 server.py > /tmp/council-server.log 2>&1 &

# Wait for startup, verify health
sleep 2 && curl -s --max-time 5 http://localhost:8787/health

# Export port for browser access
/app/export-port.sh 8787
```

Environment: `COUNCIL_PORT` (default 8787), `AI_GATEWAY_API_KEY` (required, auto-detected from environment).

## Files

- `scripts/server.py` - ThreadingHTTPServer, serves static files + API routes, SSE streaming
- `scripts/ai_gateway.py` - AI Gateway client: query, parallel query, streaming, synthesis, anonymous voting
- `scripts/static/index.html` - Dashboard UI (HappyCapy design system, light/dark theme)
- `scripts/static/app.js` - Client-side logic (model selector, SSE parsing, markdown rendering, voting UI)

## API

| Method | Path | Description |
|--------|------|----------
...
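To make the "parallel query" step concrete, here is a minimal sketch of what the fan-out might look like. This is not the skill's actual `ai_gateway.py`: the gateway URL, model IDs, and response shape below are assumptions modeled on an OpenAI-compatible chat API, and only the `AI_GATEWAY_API_KEY` environment variable comes from the skill's own documentation.

```python
# Hypothetical sketch: fan one prompt out to several models in parallel.
# GATEWAY_URL, MODELS, and the response shape are assumptions; the real
# client lives in scripts/ai_gateway.py and may differ.
import concurrent.futures
import json
import os
import urllib.request

GATEWAY_URL = "https://ai-gateway.example.com/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["AI_GATEWAY_API_KEY"]  # required, per the skill's docs
MODELS = ["claude-sonnet-4.5", "gpt-4o", "gemini-2.5-pro"]  # illustrative IDs

def query_model(model: str, prompt: str) -> tuple[str, str]:
    """Send one prompt to one model through the gateway; return (model, reply)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.load(resp)
    return model, data["choices"][0]["message"]["content"]

def council(prompt: str) -> dict[str, str]:
    """Query every model concurrently; collect answers keyed by model."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = [pool.submit(query_model, m, prompt) for m in MODELS]
        return dict(f.result() for f in concurrent.futures.as_completed(futures))

if __name__ == "__main__":
    for model, answer in council("Is eventual consistency acceptable here?").items():
        print(f"--- {model} ---\n{answer}\n")
```

A thread pool is enough here because each request spends nearly all of its time waiting on the network; the skill's dashboard layers SSE streaming, consensus synthesis, and anonymous voting on top of this same fan-out idea.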