AI skills bank
AI Skills Bank is a unified, multi-tool platform designed to aggregate, manage, and route AI skills across various workflows and AI assistants (such as Antigravity, Claude Code, Cursor, and Copilot).
Installation
Compatibility
Description
skills-bank
High-performance skill aggregation, classification & routing platform for AI agents.
Overview
skills-bank aggregates skills (workflows, tasks, specialized agents) from 100+ distributed repositories and provides a unified routing system for AI agents to discover, load, and invoke them efficiently.
Core Design Principles
- Source-of-Truth Loading: Agents load canonical `SKILL.md` files directly from source repositories, not from catalogs. This eliminates hallucination risks and optimizes token usage.
- Hybrid Classification: A dual-stage pipeline combines fast keyword rules (Step A) with LLM-powered semantic classification (Step B) to route skills into 12 domain hubs and 40+ sub-hubs.
- Smart Deduplication: Skills are deduplicated by name OR description, catching both exact collisions and cross-repo clones with different names but identical content.
- Multi-Tool Support: Skills sync to major AI tools including GitHub Copilot, Claude, free-code (claude-code), Hermes, Cursor, Gemini, Antigravity, OpenCode, Codex, and Windsurf.
- Token Efficiency: Load minimal metadata first, then source files on demand, not batch-loading entire catalogs.
- Interactive TUI: A rich terminal UI (powered by Ratatui) provides a real-time dashboard, skill explorer, and pipeline monitoring.
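The two-key deduplication principle can be sketched in Rust as follows (a minimal illustration; the actual struct fields and key normalization in the aggregator may differ):

```rust
use std::collections::HashSet;

/// Minimal stand-in for an aggregated skill entry (illustrative).
struct Skill {
    name: String,
    description: String,
}

/// Keep a skill only if BOTH its name and its description are unseen.
/// Rejecting on either key catches exact collisions as well as
/// cross-repo clones that were renamed but kept identical content.
fn dedup(skills: Vec<Skill>) -> Vec<Skill> {
    let mut seen_names: HashSet<String> = HashSet::new();
    let mut seen_descs: HashSet<String> = HashSet::new();
    skills
        .into_iter()
        .filter(|s| {
            let name_key = s.name.trim().to_lowercase();
            let desc_key = s.description.trim().to_lowercase();
            // Reject if either key collides with a previously kept skill.
            if seen_names.contains(&name_key) || seen_descs.contains(&desc_key) {
                return false;
            }
            seen_names.insert(name_key);
            seen_descs.insert(desc_key);
            true
        })
        .collect()
}
```

Matching on name OR description (rather than the pair) is what makes a renamed clone with identical content collapse onto the original.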
Quick Start
1. Build the CLI
cd skills-bank/
cargo build --release
2. Run the Full Pipeline
# Interactive setup (first run)
cargo run --release
# Or run all steps in sequence
cargo run --release -- run
# Launch the interactive TUI
cargo run --release -- tui
3. Individual Commands
# Aggregate skills from configured repositories
cargo run --release -- aggregate
# Sync aggregated skills to AI tool directories
cargo run --release -- sync
# Validate installation
cargo run --release -- doctor
cargo run --release -- release-gate
# Cleanup legacy duplicate repos (legacy locations: src/, repos/)
cargo run --release -- cleanup-legacy-duplicates
Core Logic & CLI
- `src/` – Rust source code containing the TUI, fetcher, aggregator, and sync components.
- `Cargo.toml` – Rust manifest defining project metadata and dependencies.
- `.skills-bank-cli-config.json` – User-specific configuration for sync targets and repository lists.
Outputs & Aggregation
- `skills-aggregated/` – The generated "Single Source of Truth" containing routed skill hubs and `routing.csv`.
- `lib/` – Canonical cache directory for cloned external skill repositories.
Documentation
- `readme.md` – Main platform documentation and quick-start guide.
- `AGENTS.md` – Instruction manual for AI agents on how to discover and load skills.
Tooling & Maintenance
- `tests/` – Integration testing suite for the pipeline and TUI components.
- `archive/` – Legacy PowerShell scripts from the original PoC phase.
- `package.json` – Node.js manifest for `npx` distribution support.
- `.agent/` – Local agent instructions and project-specific skills.
CLI Reference
Interactive Setup
cargo run --release -- setup
This launches an interactive wizard to configure:
- Where skills should be synced (global, workspace, or both)
- Which AI tools to sync to
- Repository URLs to clone and aggregate
- Excluded categories
Commands
Aggregate Skills
cargo run --release -- aggregate
Collects, deduplicates, classifies and routes skills from configured repositories to skills-aggregated/.
Sync to Tools
cargo run --release -- sync
Distributes aggregated skills to configured AI tool directories.
- Skips existing junctions/symlinks to avoid recursive errors
- Falls back to direct writes if atomic writes fail
- Updates routing CSVs with absolute paths for global targets
Add Repository
cargo run --release -- add-repo <URL>
Run Full Pipeline
cargo run --release -- run
Interactive TUI
cargo run --release -- tui
Launches a real-time terminal dashboard with skill explorer, hub statistics, and LLM classification progress.
Validate
cargo run --release -- doctor
cargo run --release -- release-gate
Cleanup legacy duplicate repos
cargo run --release -- cleanup-legacy-duplicates
Repository Cache & Fetching
- Repositories are cloned into the canonical cache directory `lib/` at the repository root (not `src/`). The fetcher uses shallow clones (`git clone --depth 1 --single-branch --no-tags`) for speed and disk savings.
- Existing repositories inside `lib/` are updated with `git pull` rather than being re-cloned; the fetch pipeline deduplicates manifest entries by normalized remote URL and repository name before operating.
- If you need to remove legacy repository folders left in older locations (`src/`, `repos/`), use the CLI command `cleanup-legacy-duplicates`. This command is destructive: it only deletes a legacy folder when a matching `lib/` repository exists and the Git remote origin identity matches. We recommend running `cargo run --release -- doctor` to inspect repository state before running cleanup.
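Normalizing remote URLs before deduplication might look roughly like this (an illustrative sketch; `normalize_remote_url` and its exact folding rules are assumptions, not the pipeline's actual code):

```rust
/// Normalize a Git remote URL so that trivially different spellings of the
/// same repository compare equal. Illustrative only; the real fetch
/// pipeline's normalization rules may differ.
fn normalize_remote_url(url: &str) -> String {
    let mut u = url.trim().to_lowercase();
    // Fold the SSH form (git@host:owner/repo) onto host/owner/repo.
    if u.starts_with("git@") {
        u = u.trim_start_matches("git@").replacen(':', "/", 1);
    }
    // Drop the scheme, any trailing slash, and the optional .git suffix.
    u.trim_start_matches("https://")
        .trim_start_matches("http://")
        .trim_end_matches('/')
        .trim_end_matches(".git")
        .trim_end_matches('/')
        .to_string()
}
```

With a key like this, `https://github.com/Org/Repo.git` and `git@github.com:org/repo` land on the same manifest entry and only one clone is kept.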
Configuration Files
Generated during aggregation:
- `skills-aggregated/routing.csv` – Skill routing rules (hub, sub-hub, src_path)
- `skills-aggregated/subhub-index.json` – Hub and sub-hub registry
- `skills-aggregated/.skill-lock.json` – Aggregation metadata and lock (timestamps, repo state)
- Per-subhub `skills-manifest.json` – Skill metadata and triggers
- `skills-aggregated/hub-manifests.csv` – Master index of all skills across all hubs
Environment Variables
skills-bank/.env-example
| Variable | Default | Description |
|---|---|---|
| LLM_ENABLED | true | Enable/disable LLM classification (set false for keyword-only) |
| LLM_PROVIDER | – | LLM provider: gemini, openai, or mock |
| LLM_API_KEY | – | API key for the configured LLM provider |
| LLM_API_URL | Provider default | Custom API endpoint URL |
| LLM_MODEL | gpt-4o-mini | Model name (OpenAI provider) |
| LLM_CACHE_PATH | ~/.skills-bank/llm-classifications.json | Persistent cache for classifications |
| LLM_CA_CERT_PATH | – | Custom CA certificate for HTTPS pinning |
| SKILL_MANAGE_EXCLUSIONS | – | Semicolon-separated category exclusion overrides |
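For reference, a minimal `.env` along these lines would enable LLM classification (every value below is an illustrative placeholder, not a real credential or category name):

```
LLM_ENABLED=true
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini
LLM_CACHE_PATH=~/.skills-bank/llm-classifications.json
SKILL_MANAGE_EXCLUSIONS=category-a;category-b
```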
Tool Integration Targets
Sync skills to any of these destinations:
| Tool | Project | Global |
|---|---|---|
| Claude | .claude/skills/ | ~/.claude/skills/ |
| free-code (claude-code) | .free-code-config/skills/ | ~/.free-code-config/skills/ |
| Hermes | .hermes/skills/ | ~/.hermes/skills/ |
| Code (Codex) | .agents/skills/ | ~/.agents/skills/ |
| GitHub Copilot | .github/skills/ | ~/.copilot/skills/ |
| Cursor | .cursor/skills/ | ~/.cursor/skills/ |
| Gemini | .gemini/skills/ | ~/.gemini/skills/ |
| Antigravity | .agent/skills/ | ~/.gemini/antigravity/skills/ |
| OpenCode | .opencode/skills/ | ~/.config/opencode/skills/ |
| Windsurf | .windsurf/skills/ | ~/.codeium/windsurf/skills/ |
Classification Architecture
The aggregation pipeline processes 8000+ SKILL.md files through a multi-stage classification system:
SKILL.md files (8000+)
        │
        ▼
┌──────────────┐
│  YAML Parse  │  Extract name, description, triggers
└──────┬───────┘
       │
       ▼
┌──────────────┐
│   Keyword    │  Fast token-based routing to hub/sub-hub
│    Rules     │  (fallback if LLM unavailable)
└──────┬───────┘
       │
       ▼
┌──────────────┐
│    Dedup     │  Name OR Description HashSet
│  (two-key)   │  Catches cross-repo clones
└──────┬───────┘
       │
       ▼
┌──────────────────────────────────┐
│ Hybrid Exclusion + LLM Classify  │
│  Step A: Keyword pre-filter      │
│  Step B: LLM semantic classify   │
│  (can return "excluded")         │
└──────┬───────────────────────────┘
       │
       ▼
┌──────────────┐
│   Output     │  routing.csv, per-hub manifests,
│  Artifacts   │  skills-index.json
└──────────────┘
Classification Improvements (v2.0+)
The keyword-based classification system includes three critical enhancements to eliminate false negatives and resolve sub-hub conflicts:
1. Repository Name Extraction (Substring Matching)
Problem: Repository names like `mukul975-anthropic-cybersecurity-skills` were not being matched because the system used exact token matching (e.g., the token "cybersecurity" never matched the keyword "security").
Solution: Introduced an `infer_hub_from_repo_name()` function that:
- Extracts the repository directory name from the path (the segment right after `lib/` or `src/`)
- Uses substring matching to catch domain signals (e.g., `"cybersecurity-skills"` matches `"security"`)
- Runs before other inference logic (highest priority)
- Supports domain keywords:
  - Security: `security`, `cybersecurity`, `pentest`, `vulnerability`, `vibesec`, `bluebook`
  - AI: `prompt`, `agent-skill`, `llm`, `ai-skills`
  - Mobile (iOS): `swiftui`, `ios-`, `-ios`, `swift-patterns`, `apple-hig`, `app-store`
  - Mobile (Android): `android`, `kotlin`
  - Frontend/UI: `ui-ux`, `ui-skills`
  - Testing/QA: `playwright`, `testdino`

Confidence score: 98% (near-deterministic, reflecting author intent)
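A condensed sketch of this substring matching follows. Only the `("code-quality", "security")` mapping is confirmed by the text; the other hub and sub-hub names in the table below are hypothetical placeholders, and the real keyword list is much longer:

```rust
/// Map a repository directory name to a (hub, sub_hub) pair via substring
/// matching. Illustrative subset; hub names other than
/// ("code-quality", "security") are hypothetical.
fn infer_hub_from_repo_name(repo_name: &str) -> Option<(&'static str, &'static str)> {
    let name = repo_name.to_lowercase();
    // Ordered (keyword, hub, sub_hub) rules; the first substring hit wins.
    const RULES: &[(&str, &str, &str)] = &[
        ("cybersecurity", "code-quality", "security"),
        ("security", "code-quality", "security"),
        ("pentest", "code-quality", "security"),
        ("swiftui", "mobile", "ios"),       // hypothetical hub/sub-hub names
        ("android", "mobile", "android"),   // hypothetical hub/sub-hub names
        ("playwright", "testing-qa", "e2e"), // hypothetical sub-hub name
    ];
    RULES
        .iter()
        .find(|(kw, _, _)| name.contains(*kw))
        .map(|(_, hub, sub)| (*hub, *sub))
}
```

Because `contains` checks substrings rather than whole tokens, a repo named `…-cybersecurity-skills` is caught even though "security" never appears as a standalone token.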
2. Sub-Hub Conflict Resolution
Problem: When a skill matched multiple sub-hubs (e.g., `python` AND `security` simultaneously), language hubs often won due to their anchor keywords, defeating domain-specialist classification.
Solution: Introduced a conflict resolution table (`CONFLICT_RESOLUTION`) that:
- Defines precedence rules when multiple sub-hubs match: `(losing_hub, losing_sub_hub, winning_hub, winning_sub_hub)`
- Ensures domain specialists always win over languages:
  - `security` > `python` | `javascript` | `typescript` | `rust` | `golang` | `java`
  - `testing-qa` > `python` | `javascript` | `typescript` | `rust`
  - `code-review` > `python` | `javascript`
- Applied in the `resolve_conflict()` function when multiple candidates score within 5 points of the top score
- Fallback: hub priority ordering if no explicit rule applies
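The precedence lookup could be sketched like this (the rule pairs mirror the list above, reduced to sub-hub names; the hub-level fields and the 5-point scoring window are omitted for brevity):

```rust
/// Decide between two candidate sub-hubs when both match a skill.
/// Domain specialists beat language sub-hubs, per the precedence rules
/// above. Simplified sketch: only sub-hub names, no score window.
fn resolve_conflict<'a>(a: &'a str, b: &'a str) -> &'a str {
    // (losing sub-hub, winning sub-hub) pairs; subset of the real table.
    const RULES: &[(&str, &str)] = &[
        ("python", "security"),
        ("javascript", "security"),
        ("python", "testing-qa"),
        ("python", "code-review"),
    ];
    for &(loser, winner) in RULES {
        if (a == loser && b == winner) || (b == loser && a == winner) {
            return if a == winner { a } else { b };
        }
    }
    a // fallback: keep the first (higher hub-priority) candidate
}
```

The symmetric check means the rule fires regardless of which candidate the keyword stage emitted first.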
3. Confidence Boost for Path-Based Inference
Problem: Repository name signals (inferred from path) were scored at 95%, allowing lower-confidence LLM results (80%) to potentially override them.
Solution: Raised the confidence score for path-based inference from 95% to 98%:
- Score 98 is now treated as near-deterministic (the same tier as explicit `canonicalize_assignment` logic at 100)
- Only scores ≥ 100 can override it
- Prevents low-confidence LLM results from contradicting repository metadata
Example Classification Flow
For a skill in `lib/mukul975-anthropic-cybersecurity-skills/`:
1. apply_rules() called
   ↓
2. canonicalize_assignment() → no match (0% confidence)
   ↓
3. infer_from_path() called
   ├─ infer_hub_from_repo_name() extracts "mukul975-anthropic-cybersecurity-skills"
   ├─ Finds substring match: "cybersecurity"
   └─ Returns ("code-quality", "security") with 98% confidence
   ↓
4. Final assignment: code-quality / security
   LLM classification skipped (98% > 80% threshold)
License
MIT – see LICENSE or cli/package.json
Similar skills
last30days skill
AI agent skill that researches any topic across Reddit, X, YouTube, HN, Polymarket, and the web - then synthesizes a grounded summary
context mode
Context window optimization for AI coding agents. Sandboxes tool output, 98% reduction. 12 platforms
claude seo
Universal SEO skill for Claude Code. 19 sub-skills, 12 subagents, 3 extensions (DataForSEO, Firecrawl, Banana). Technical SEO, E-E-A-T, schema, GEO/AEO, backlinks, local SEO, maps intelligence, Google APIs, and PDF/Excel reporting.
pinme
Deploy Your Frontend in a Single Command. Claude Code Skills supported.
godogen
Claude Code & Codex skills that build complete Godot projects from a game description
claude ads
Comprehensive paid advertising audit & optimization skill for Claude Code. 250+ checks across Google, Meta, YouTube, LinkedIn, TikTok, Microsoft & Apple Ads with weighted scoring, parallel agents, industry templates, and AI creative generation.