Agentic SEO Skill
An LLM-first SEO analysis skill for Antigravity, Claude Code, and Codex, with 16 specialized sub-skills, 10 specialist agents, and 33 optional utility scripts used as evidence collectors.
IDE Compatibility
- Antigravity IDE: `.agent/skills/seo`
- Claude Code: `~/.claude/skills/seo`
- Codex: `~/.codex/skills/seo`
Current Inventory
- Specialized sub-skills: 16
- Specialist agents: 10
- Scripts in `scripts/`: 33 (32 Python + 1 shell validation helper)
GitHub SEO Metadata
Recommended GitHub repository description (About field):
LLM-first SEO skill for Antigravity, Claude, and Codex with 16 sub-skills, 10 specialist agents, and GitHub SEO workflows that output GITHUB-SEO-REPORT.md and GITHUB-ACTION-PLAN.md.
Suggested GitHub topics:
seo, llm, github-seo, ai-search, geo, aeo, technical-seo, schema, core-web-vitals, codex, claude-code, antigravity
Features
| Sub-Skill | Description |
|-----------|-------------|
| seo audit | Full website audit with evidence-backed scoring |
| seo article | Article data extraction & LLM-driven content optimization |
| seo page | Deep single-page analysis |
| seo technical | Crawlability, indexability, security, Core Web Vitals, AI crawlers |
| seo content | Content quality & E-E-A-T assessment (Sept 2025 QRG) |
| seo schema | Schema.org detection, validation & JSON-LD generation |
| seo sitemap | XML sitemap analysis & generation |
| seo images | Image optimization audit (alt text, formats, lazy loading, CLS) |
| seo geo | Generative Engine Optimization: AI Overviews, ChatGPT, Perplexity |
| seo aeo | Answer Engine Optimization: Featured Snippets, PAA, Knowledge Panel |
| seo links | Link profile analysis: internal links, backlinks, anchor text, orphan pages |
| seo programmatic | Programmatic SEO safeguards & quality gates |
| seo competitors | Comparison & alternatives page generation |
| seo hreflang | International SEO / hreflang validation |
| seo plan | Strategic SEO planning with topical clusters & industry templates |
| seo github | GitHub repository SEO: metadata/topics, README quality, community profile, query benchmarking, traffic archiving |
LLM-First Workflow
This skill is designed for reasoning-first SEO analysis:
- Collect page evidence (`read_url_content` first; scripts optional).
- Analyze with the LLM, citing explicit proof for each finding.
- Apply confidence labels (`Confirmed`, `Likely`, `Hypothesis`).
- Prioritize by impact and effort.
- Produce a structured action plan.
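The prioritization step above can be sketched in a few lines. This is an illustrative implementation, not the skill's actual data model: the field names (`impact`, `effort`) and example findings are assumptions.

```python
# Hypothetical sketch of the prioritization step: each finding carries an
# impact and effort estimate, and high-impact/low-effort fixes come first.
# Field names and example findings are illustrative only.

def prioritize(findings):
    """Sort findings so high-impact, low-effort fixes lead the action plan."""
    impact_rank = {"high": 0, "medium": 1, "low": 2}
    effort_rank = {"low": 0, "medium": 1, "high": 2}
    return sorted(
        findings,
        key=lambda f: (impact_rank[f["impact"]], effort_rank[f["effort"]]),
    )

findings = [
    {"finding": "Images lack alt text", "impact": "medium", "effort": "medium"},
    {"finding": "Missing canonical tag", "impact": "high", "effort": "low"},
    {"finding": "No llms.txt file", "impact": "low", "effort": "low"},
]
plan = prioritize(findings)
```

Sorting by an (impact, effort) tuple keeps the ordering deterministic and easy to audit against the evidence.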
Required Rubric
All audits should apply:
resources/references/llm-audit-rubric.md
The rubric standardizes:
- evidence format (`Finding`, `Evidence`, `Impact`, `Fix`)
- severity (`Critical`, `Warning`, `Pass`, `Info`)
- confidence labeling
- output contract for audit reports
Specialist Agents
- Technical SEO: crawlability, indexability, security, mobile, JS rendering
- Content Quality: E-E-A-T scoring, AI content detection
- Performance: Core Web Vitals (LCP, INP, CLS) analysis
- Schema Markup: JSON-LD detection, validation, generation
- Sitemap: XML sitemap validation, quality gates
- Visual Analysis: screenshots, above-the-fold, responsiveness (Playwright)
- GitHub Analyst: metadata, topics, README, trust, title strategy
- GitHub Benchmark: query ranking and competitor intelligence
- GitHub Data: API/auth fallback and traffic archival continuity
- Verifier (Global): dedupe/contradiction suppression before final reporting
Reference Data (Updated Feb 2026)
- Core Web Vitals thresholds (INP replaced FID)
- E-E-A-T framework (Sept 2025 QRG + Dec 2025 core update)
- Schema.org types: active, restricted, deprecated
- Content quality gates & word count minimums
- Google SEO quick reference
- LLM audit rubric for consistent outputs
Industry Templates
Pre-built strategy templates for: SaaS, E-commerce, Local Business, Publisher/Media, Agency, and Generic businesses.
Installation (All IDEs)
Quick Install Script (Antigravity / Claude / Codex)
# 1) Clone
git clone https://github.com/Bhanunamikaze/Agentic-SEO-Skill.git
cd Agentic-SEO-Skill
# 2) Install for your target
# Antigravity (project-local):
bash install.sh --target antigravity --project-dir /path/to/your/project
# Claude:
bash install.sh --target claude
# Codex:
bash install.sh --target codex
# Global user install (Claude + Codex):
bash install.sh --target global
# All targets (Antigravity + Claude + Codex):
bash install.sh --target all --project-dir /path/to/your/project
# Install from another local checkout:
bash install.sh --target codex --repo-path /path/to/Agentic-SEO-Skill
Install directly from GitHub (remote source mode):
curl -fsSL https://raw.githubusercontent.com/Bhanunamikaze/Agentic-SEO-Skill/main/install.sh | \
bash -s -- --target codex
Manual Installation
Step 1: Clone the Repository
git clone https://github.com/Bhanunamikaze/Agentic-SEO-Skill.git
Step 2: Install Python Dependencies
pip install requests beautifulsoup4
Optional, for visual analysis (screenshots & layout checks):
pip install playwright && playwright install chromium
Step 3: Choose Target Directory (Manual Install)
If you prefer not to use install.sh, copy or symlink manually:
Antigravity IDE (project-local)
mkdir -p .agent/skills
cp -r /path/to/Agentic-SEO-Skill .agent/skills/seo
# or: ln -s /path/to/Agentic-SEO-Skill .agent/skills/seo
Claude Code (user-global)
mkdir -p ~/.claude/skills
cp -r /path/to/Agentic-SEO-Skill ~/.claude/skills/seo
# or: ln -s /path/to/Agentic-SEO-Skill ~/.claude/skills/seo
Codex (user-global)
mkdir -p ~/.codex/skills
cp -r /path/to/Agentic-SEO-Skill ~/.codex/skills/seo
# or: ln -s /path/to/Agentic-SEO-Skill ~/.codex/skills/seo
Step 4: Verify Triggering
The skill will auto-trigger when you mention SEO-related keywords in your IDE. Try:
- "Run an SEO audit on example.com"
- "Check the schema markup on my homepage"
- "Analyze Core Web Vitals for my site"
- "Create an SEO plan for my SaaS product"
- "Run GitHub SEO analysis for owner/repo"
Example Prompts (hackingdream.net)
How Prompts Route to Agents & Scripts
The IDE uses an LLM orchestration layer to match your natural language intent to the correct underlying sub-skill (e.g., seo-hreflang.md, seo-schema.md). You do not need to use explicit flags or commands.
- To run a specific test: Ask for it specifically (e.g., "Check hreflang"). The LLM will only trigger the necessary scripts.
- To force ALL agents/tests: Ask for a "full, comprehensive audit running all checks". The LLM will route this to
seo-audit.md, which acts as the master orchestrator calling all available scripts and analyzing the combined output.
Here's how specific phrases map to the skill's capabilities:
| You type... | Scope | Agent(s) activated | Scripts used |
|-------------|-------|-------------------|--------------|
| "Run SEO audit" | π Full domain | All 6 core website agents (technical, content, schema, performance, sitemap, visual) | parse_html.py, pagespeed.py, robots_checker.py, security_headers.py, broken_links.py, readability.py |
| "Analyze this article" / blog post URL | π Single page | Content + Schema + Technical | article_seo.py, parse_html.py, readability.py |
| "Check technical SEO" | π§ Technical only | Technical | robots_checker.py, security_headers.py, redirect_checker.py, parse_html.py |
| "Review content quality" / "E-E-A-T" | π Content only | Content | article_seo.py, readability.py, entity_checker.py |
| "Check schema markup" | π·οΈ Schema only | Schema | parse_html.py, validate_schema.py |
| "Audit sitemap" | πΊοΈ Sitemap only | Sitemap | broken_links.py |
| "Check page speed" / "Core Web Vitals" | β‘ Performance only | Performance | pagespeed.py |
| "Take screenshots" / "mobile check" | π± Visual only | Visual | capture_screenshot.py, analyze_visual.py |
| "Check GEO readiness" / "AI search" | π€ GEO/AI only | Technical + Content | llms_txt_checker.py, robots_checker.py, parse_html.py |
| "Analyze links" / "backlink profile" | π Links only | Technical | link_profile.py, internal_links.py, broken_links.py |
| "Check hreflang" | π i18n only | Technical | hreflang_checker.py |
| "Create SEO plan" / "SEO strategy" | π Strategy | None (LLM reasoning) | competitor_gap.py (optional) |
| "AEO analysis" / "Featured Snippets" | π― AEO only | Content | article_seo.py, parse_html.py |
| "Entity SEO" / "Knowledge Graph" | ποΈ Entity only | Content + Schema | entity_checker.py, parse_html.py |
| "Check IndexNow" | π‘ IndexNow only | Technical | indexnow_checker.py |
| "Find content gaps" / "competitor analysis" | π Gap analysis | None (LLM reasoning) | competitor_gap.py |
| "Check for duplicates" / "thin content" | π Dupe check | Content | duplicate_content.py |
| "GSC data" / "Search Console" | π GSC only | None | gsc_checker.py |
| "GitHub SEO" / "optimize this repo" | π Repository | GitHub Analyst + Benchmark + Data + Verifier | github_repo_audit.py, github_readme_lint.py, github_community_health.py, github_search_benchmark.py, github_competitor_research.py, github_traffic_archiver.py, github_seo_report.py, finding_verifier.py (outputs GITHUB-SEO-REPORT.md + GITHUB-ACTION-PLAN.md) |
Domain vs URL vs Blog Post β What's Different?
| Input type | What happens | Example |
|-----------|-------------|---------|
| Domain (hackingdream.net) | Crawls multiple pages, checks robots.txt, sitemap, site-wide patterns | Full audit, link profile, sitemap check |
| URL (hackingdream.net/page) | Single page deep-dive: HTML, meta, schema, content, CWV | Page audit, schema check, technical check |
| Blog post URL | Article-specific: readability, keyword density, heading structure, JSON-LD Article/BlogPosting schema, publish date | Article analysis, AEO check |
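A rough input classifier matching the table above might look like this. It is a heuristic sketch only: the skill's real detection also inspects page content (e.g. Article/BlogPosting JSON-LD), which a URL-only check cannot see.

```python
import re
from urllib.parse import urlparse

def classify_input(target):
    """Heuristically classify a target as domain, url, or blog post."""
    # Normalize bare domains so urlparse sees a scheme.
    parsed = urlparse(target if "//" in target else f"https://{target}")
    path = parsed.path.strip("/")
    if not path:
        return "domain"
    # Dated paths like /2026/02/... are a common blog-post signal.
    if re.match(r"\d{4}/\d{2}/", path) or "blog" in path:
        return "blog post"
    return "url"
```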
Full Domain Audit
Run a full SEO audit for https://hackingdream.net and prioritize fixes by impact.
Single Page / Blog Post Analysis
Analyze this article: https://www.hackingdream.net/2026/02/cobalt-strike-beacon-commands-red-team-field-guide.html
Do a single-page SEO analysis of https://hackingdream.net and show critical issues first.
Technical SEO
Analyze technical SEO for https://hackingdream.net (robots, crawlability, canonicals, redirects, headers).
Content Quality & E-E-A-T
Review content quality and E-E-A-T signals on https://hackingdream.net and suggest concrete rewrites.
Schema Markup
Check schema markup on https://hackingdream.net, validate errors, and generate corrected JSON-LD.
Performance & Core Web Vitals
Run Core Web Vitals analysis on https://hackingdream.net and break down LCP subparts.
GEO / AI Search Readiness
Evaluate GEO readiness for https://hackingdream.net (AI crawler access, llms.txt, citation structure).
Answer Engine Optimization (AEO)
Analyze AEO signals for https://hackingdream.net: Featured Snippet targeting, PAA optimization, Knowledge Panel readiness.
Link Profile Analysis
Analyze internal link structure and backlink profile for https://hackingdream.net.
Entity SEO / Knowledge Graph
Check entity SEO for https://hackingdream.net: Wikidata presence, sameAs links, Knowledge Graph signals.
Competitor Topic Gap
Find content gaps between https://hackingdream.net and competitors https://hackerone.com https://portswigger.net.
Hreflang / International SEO
Validate hreflang implementation on https://hackingdream.net: BCP-47 tags, bidirectional links, x-default.
IndexNow
Check IndexNow implementation for https://hackingdream.net with key abc123def456.
Topical Cluster Planning
Create a topical authority cluster plan for https://hackingdream.net covering cybersecurity topics.
Google Search Console (requires credentials)
Pull GSC performance data for https://hackingdream.net and identify striking-distance keywords.
Sitemap Audit
Audit sitemap quality for https://hackingdream.net and flag missing, redirected, or noindex URLs.
Image SEO
Run image SEO checks for https://hackingdream.net (alt text, lazy loading, dimensions, format suggestions).
Strategic SEO Plan
Create a 6-month SEO strategy for https://hackingdream.net with milestones and KPIs.
Visual / Mobile Analysis
Take desktop and mobile screenshots of https://hackingdream.net and analyze above-the-fold content.
Run Everything at Once
To run all analysis types on a single URL:
Run a complete SEO audit on https://hackingdream.net: include technical, content, schema, performance,
links, GEO, AEO, entity SEO, and sitemap analysis. Provide a prioritized action plan.
Example generated outputs:
- `FULL-AUDIT-REPORT.md`: comprehensive findings
- `ACTION-PLAN.md`: prioritized fixes
Report Generation
You can generate reports in two ways:
- LLM-first report in your IDE (Antigravity / Claude / Codex) (recommended for strategy + prioritization):
Run a full SEO audit for https://hackingdream.net and produce a prioritized action plan with evidence for each finding.
- Interactive HTML dashboard (recommended for shareable technical snapshots):
python3 scripts/generate_report.py "https://hackingdream.net" --output seo-report-hackingdream.html
The HTML report includes:
- overall score and category breakdown
- environment detection (platform/runtime inference)
- environment-specific fix plan
- section-level issues and recommendations
- readability "what to replace" suggestions
Example generated dashboard:
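The readability "what to replace" suggestions imply a readability score. As a minimal sketch, a Flesch Reading Ease calculation can flag passages to rewrite; this is a generic formula with a crude syllable heuristic, not necessarily what `readability.py` computes.

```python
import re

def flesch_reading_ease(text):
    """Rough Flesch Reading Ease score (higher = easier to read).
    Syllable counting is a vowel-group heuristic, so treat the
    result as indicative, not exact."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))

    def syllables(word):
        # Count runs of vowels as syllables; every word has at least one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (total_syllables / n)
```

Low-scoring paragraphs are the natural candidates for the report's replacement suggestions.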
Optional Script Workflow
Use scripts when you need additional verification or structured JSON outputs.
# GitHub auth setup for repository SEO scripts (choose one)
export GITHUB_TOKEN="ghp_xxx" # or: export GH_TOKEN="ghp_xxx"
# or authenticate gh CLI:
gh auth login -h github.com
gh auth status -h github.com
# Example target
URL="https://example.com"
# Fetch + parse HTML
python3 scripts/fetch_page.py "$URL" --output /tmp/page.html
python3 scripts/parse_html.py /tmp/page.html --url "$URL" --json
# Core checks
python3 scripts/robots_checker.py "$URL" --json
python3 scripts/llms_txt_checker.py "$URL" --json
python3 scripts/pagespeed.py "$URL" --strategy mobile --json
python3 scripts/security_headers.py "$URL" --json
python3 scripts/redirect_checker.py "$URL" --json
python3 scripts/social_meta.py "$URL" --json
# Content + structure checks
python3 scripts/readability.py /tmp/page.html --json
python3 scripts/internal_links.py "$URL" --depth 1 --max-pages 20 --json
python3 scripts/broken_links.py "$URL" --workers 5 --json
python3 scripts/article_seo.py "$URL" --json
# New analysis scripts
python3 scripts/hreflang_checker.py "$URL" --json
python3 scripts/entity_checker.py "$URL" --json
python3 scripts/duplicate_content.py "$URL" --json
python3 scripts/link_profile.py "$URL" --json
python3 scripts/competitor_gap.py "$URL" --competitor https://competitor.com --json
# python3 scripts/gsc_checker.py "$URL" --credentials creds.json --json # requires GSC credentials
# python3 scripts/indexnow_checker.py "$URL" --key YOUR_KEY --json # requires IndexNow key
# GitHub repository SEO scripts (provider fallback: auto|api|gh)
python3 scripts/github_repo_audit.py --repo owner/repo --provider auto --json
python3 scripts/github_readme_lint.py README.md --json
python3 scripts/github_community_health.py --repo owner/repo --provider auto --json
# Provide query/competitor inputs from LLM/web-search discovery when possible:
python3 scripts/github_search_benchmark.py --repo owner/repo --query "<llm_or_web_query>" --provider auto --json
python3 scripts/github_competitor_research.py --repo owner/repo --query "<llm_or_web_query>" --provider auto --top-n 6 --json
python3 scripts/github_competitor_research.py --repo owner/repo --competitor owner/repo --competitor owner/repo --provider auto --json
python3 scripts/github_traffic_archiver.py --repo owner/repo --provider auto --archive-dir .github-seo-data --json
# github_seo_report.py auto-derives repo-specific benchmark queries if none are provided
python3 scripts/github_seo_report.py --repo owner/repo --provider auto --markdown GITHUB-SEO-REPORT.md --action-plan GITHUB-ACTION-PLAN.md --json
# Optional: tune auto-derived query count (default: 6)
# python3 scripts/github_seo_report.py --repo owner/repo --provider auto --auto-query-max 8 --markdown GITHUB-SEO-REPORT.md --action-plan GITHUB-ACTION-PLAN.md --json
# Generic verifier stage (can be used by any workflow before final reporting)
python3 scripts/finding_verifier.py --findings-json raw-findings.json --json
Generate a single HTML dashboard if needed:
python3 scripts/generate_report.py "$URL"
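The verifier stage above suppresses duplicate findings before reporting. The idea can be sketched as a first-occurrence dedupe; `finding_verifier.py`'s actual logic (including contradiction suppression) and JSON schema may differ.

```python
# Sketch of the dedupe idea behind the verifier stage; the key fields
# (url, Finding) are assumptions, not finding_verifier.py's real schema.
def dedupe_findings(findings):
    """Keep only the first occurrence of each (url, Finding) pair."""
    seen, unique = set(), []
    for f in findings:
        key = (f.get("url"), f.get("Finding"))
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique
```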
Critical Rules Enforced

| Rule | Detail |
|------|--------|
| INP not FID | FID was removed Sept 2024; INP is the sole interactivity metric. |
| FAQ schema restricted | FAQPage only for government/healthcare authority sites (Aug 2023). |
| HowTo deprecated | Rich results removed Sept 2023. |
| JSON-LD only | Never recommend Microdata or RDFa. |
| E-E-A-T everywhere | Applies to ALL competitive queries since Dec 2025. |
| Mobile-first complete | 100% mobile-first indexing since July 2024. |
| Location page limits | Warning at 30+ pages, hard stop at 50+. |
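The location-page limit is the one rule above with numeric thresholds, so it lends itself to a direct check. Thresholds mirror the table (warning at 30+, hard stop at 50+); the function name is illustrative.

```python
# Illustrative quality gate for the location-page rule above.
# Thresholds come from the Critical Rules table; the name is made up.
def location_page_gate(page_count):
    """Classify a programmatic location-page count against the rule."""
    if page_count >= 50:
        return "hard stop"
    if page_count >= 30:
        return "warning"
    return "ok"
```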
Requirements
| Requirement | Version |
|-------------|---------|
| Python | 3.8+ |
| requests | Any |
| beautifulsoup4 | Any |
| Playwright | Optional (for visual analysis) |
Credits
This project builds heavily on claude-seo by AgriciDaniel. All core SEO logic, reference data, agent definitions, utility scripts, and sub-skill instructions originate from that project.
This repository restructures and adapts the content to function as a compatible skill package for Antigravity IDE, Claude Code, and Codex, while preserving the same core skill layout (SKILL.md + scripts/ + resources/).
License
Licensed under the MIT License. See LICENSE.
Portions are derived from claude-seo, which is also MIT-licensed.