
SEO Improvement Plan v2 — Closed-Loop AI/GEO Automation

Date: 2026-04-28

Owner: Cinder (autonomous after Costa greenlight)

Goal: Continuously improve organic + AI-Overview citations across all 29 sites without Costa in the loop.


What we have today (running)

| Cron | What it does | Action surface |
|---|---|---|
| gsc-weekly-pull (Mon 11:00) | Pulls 28d clicks/impressions/CTR/position per site | JSON + Telegram digest |
| gsc-striking-distance (daily) | Identifies queries in pos 4-20 (one-content-push-from-ranking) + risers/fallers | JSON + Telegram digest |
| ga4-feedback-loop (daily) | Sessions + conversions per site | JSON + Telegram digest |
| thin-content-dryrun (daily) | Identifies pages under 700 words | DRY-RUN ONLY — no fixes |
| indexnow-daily (daily) | Pings IndexNow for crawl signals | 27/29 succeed (Orlando + Bloomington 403) |
| innovation-sprint (Mon 06:00) | Weekly research digest covering Google updates, AI Overview shifts, R&R industry, competitor moves | Telegram |

The gap: Every loop is one-way. Signal → report → done. Nothing closes back to action.


What's missing — close-the-loop

Loop 1 — Striking-distance fallers → page refresh

Trigger: Query falls 3+ positions in gsc-striking-distance daily run.

Action: Enqueue (domain, page, query, current_pos, prior_pos) to data/action-queue/seo-refresh.jsonl.
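A minimal sketch of the enqueue step, using the JSONL path above (the function name and record shape are illustrative; the worker drains the file FIFO):

```python
import json
from pathlib import Path

QUEUE = Path("data/action-queue/seo-refresh.jsonl")

def enqueue_refresh(domain, page, query, current_pos, prior_pos):
    """Append one faller record to the refresh queue for the nightly worker."""
    QUEUE.parent.mkdir(parents=True, exist_ok=True)
    item = {
        "domain": domain,
        "page": page,
        "query": query,
        "current_pos": current_pos,
        "prior_pos": prior_pos,
    }
    with QUEUE.open("a") as f:
        f.write(json.dumps(item) + "\n")
    return item
```

JSONL keeps the queue append-only, so the daily striking-distance cron and the nightly worker never contend for the same write.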

Worker (tools/seo-refresh-worker.py, runs nightly 02:30):

  1. Pulls top item from queue
  2. Reads target page, identifies query intent + content gap (Sonnet via Claude API)
  3. Drafts a 200-400 word content block that answers the query directly
  4. Inserts block via Edit tool
  5. Triggers IndexNow ping
  6. Logs to data/action-queue/seo-refresh-completed.jsonl
  7. Telegram morning brief surfaces what shipped
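The worker's main step can be sketched as below. `draft_block` stands in for the Sonnet call, and the IndexNow ping and Telegram steps are omitted; everything here is a sketch, not the production worker:

```python
import json
from pathlib import Path

QUEUE = Path("data/action-queue/seo-refresh.jsonl")
DONE = Path("data/action-queue/seo-refresh-completed.jsonl")

def draft_block(page_text: str, query: str) -> str:
    # Placeholder: production sends page_text + query to the Claude API
    # and gets back a 200-400 word block answering the query directly.
    return f"<!-- refresh: {query} -->\n..."

def process_one():
    """Pop the oldest queue item, append a drafted block to the page, log it."""
    lines = QUEUE.read_text().splitlines() if QUEUE.exists() else []
    if not lines:
        return None
    item, rest = json.loads(lines[0]), lines[1:]
    page = Path(item["page"])
    block = draft_block(page.read_text(), item["query"])
    page.write_text(page.read_text() + "\n" + block)  # step 4: insert block
    DONE.parent.mkdir(parents=True, exist_ok=True)
    with DONE.open("a") as f:                         # step 6: completion log
        f.write(json.dumps(item) + "\n")
    QUEUE.write_text("\n".join(rest) + ("\n" if rest else ""))
    return item
```

Processing one item per night keeps the blast radius small while the loop is still earning trust.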

Loop 2 — Thin-content auto-expand

Trigger: Page identified by thin-content-dryrun AND has GSC impressions but 0 clicks (= ranking but not earning clicks).

Action: Same as Loop 1, except prompt focuses on E-E-A-T expansion (case studies, FAQ, pricing transparency).
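The Loop 2 trigger is just the intersection of two signals we already collect. A sketch, assuming GSC rows with `page`/`impressions`/`clicks` keys:

```python
def thin_expand_candidates(thin_pages, gsc_rows):
    """Pages flagged thin by the dry-run cron that also rank
    (impressions > 0) but get zero clicks."""
    ranking_no_clicks = {
        r["page"] for r in gsc_rows
        if r["impressions"] > 0 and r["clicks"] == 0
    }
    return sorted(set(thin_pages) & ranking_no_clicks)
```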

Safety rails:

Loop 3 — AI Overview / GEO citation tracking

Trigger: Weekly innovation-sprint pulls competitor SERP samples + AI Overview snippets for top-10 queries per site.

Action:

  1. Detect AI Overview citations (us vs. competitors)
  2. If site is mentioned in AI Overview: log win to data/geo-wins.jsonl
  3. If competitor is mentioned and we're not: queue the relevant page for GEO-optimization rewrite (uses geo-optimize skill)
  4. GEO-optimize prompt focuses on: structured data (FAQ, HowTo, Product schema), question-answer format, clear factual claims with citations, conversational language matching how AI synthesizes answers
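Steps 1-3 reduce to a classification per snippet. A toy sketch (real detection would parse the cited source URLs rather than substring-match raw text):

```python
def classify_overview(snippet: str, our_domain: str, competitors: list) -> str:
    """'win' if we are cited, 'gap' if a competitor is cited and we are not
    (queue a GEO rewrite), 'absent' otherwise."""
    text = snippet.lower()
    if our_domain.lower() in text:
        return "win"
    if any(c.lower() in text for c in competitors):
        return "gap"
    return "absent"
```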

Schema deployment:

Loop 4 — Internal-link sweep

Trigger: Page with rising impressions but flat clicks (i.e., CTR is falling).

Action: Auto-add 2-3 contextual internal links FROM higher-authority pages on the same site TO the rising page. Concentrates link equity.
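Donor selection can be sketched as: rank same-site pages by 28d clicks (an assumed authority proxy) and take the top few, excluding the target itself:

```python
def pick_link_donors(pages, target, n=3):
    """Choose up to n same-site donor pages for contextual internal links
    to `target`, ranked by 28d GSC clicks as the authority proxy."""
    donors = [p for p in pages if p["page"] != target]
    donors.sort(key=lambda p: p["clicks"], reverse=True)
    return [p["page"] for p in donors[:n]]
```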

Loop 5 — IndexNow + GSC URL Inspection (the Costa-asked-for relay)

Trigger: Any of Loop 1/2/3/4 ships a page edit.

Action:

  1. Ping IndexNow daemon (existing)
  2. Queue URL to data/gsc-submit-queue.jsonl for Playwright + Chrome user-data-dir relay (per docs/gsc-relay-options-2026-04-28.md, Option 1)
  3. GSC relay daemon (Mac mini, 03:00 CT) processes queue, submits via URL Inspection, screenshots verification
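The post-edit hook is two cheap operations: build the IndexNow ping (the `api.indexnow.org` GET endpoint with `url` and `key` parameters) and append to the relay queue. A sketch; in production the worker actually GETs `ping_url`:

```python
import json
from pathlib import Path
from urllib.parse import urlencode

GSC_QUEUE = Path("data/gsc-submit-queue.jsonl")

def on_page_shipped(url: str, indexnow_key: str) -> str:
    """After any loop edits a page: build the IndexNow ping URL and queue
    the page for the overnight GSC URL Inspection relay."""
    ping_url = "https://api.indexnow.org/indexnow?" + urlencode(
        {"url": url, "key": indexnow_key}
    )
    GSC_QUEUE.parent.mkdir(parents=True, exist_ok=True)
    with GSC_QUEUE.open("a") as f:
        f.write(json.dumps({"url": url, "status": "pending"}) + "\n")
    return ping_url
```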

AI/GEO-specific layer

GEO = Generative Engine Optimization (ChatGPT, Perplexity, Claude, Google AI Overviews).

What changes for AI vs traditional SEO

| Traditional SEO | GEO addition |
|---|---|
| Keywords in title/H1 | Q&A blocks matching how users ask AI assistants |
| Backlinks for authority | Mentions on authoritative third-party content (Wikipedia, industry directories, .gov, .edu) |
| Schema for rich snippets | Schema for fact-extraction (Product, Service, FAQ, HowTo, BreadcrumbList) |
| Page speed | Crawlable structured facts above the fold |
| Title CTR | Direct answer in first 60 words |
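The fact-extraction schema in the table is the easiest piece to generate mechanically. A sketch that renders schema.org FAQPage JSON-LD from question/answer pairs (helper name is illustrative):

```python
import json

def faq_jsonld(pairs):
    """Render schema.org FAQPage JSON-LD from (question, answer) pairs,
    ready to drop into a <script type="application/ld+json"> tag."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)
```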

Auto-implementation

Each site gets a per-page "AI-readability score" computed weekly:

Below 60/100: auto-queued for GEO-optimization rewrite via Loop 3.
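The scoring rubric isn't pinned down above, so the weights below are illustrative assumptions only, chosen to reward the GEO signals from the table (direct answer, Q&A blocks, schema, non-thin length):

```python
def ai_readability(page: dict) -> int:
    """Illustrative 0-100 AI-readability score. Weights are assumptions,
    not the production rubric."""
    score = 0
    if page.get("answer_in_first_60_words"):
        score += 30
    score += min(page.get("qa_blocks", 0), 3) * 10    # up to 30
    if page.get("has_schema"):
        score += 20
    if page.get("word_count", 0) >= 700:              # matches thin threshold
        score += 20
    return score
```

Anything below 60 gets queued for the Loop 3 GEO rewrite.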

AI-detection prompt (runs weekly per site)

For each site, query Perplexity/Claude API with:

Score: are we mentioned? In what position? With what description?

Track in data/geo-citations.jsonl over time. Trend = our GEO trajectory.
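The trend itself can be a crude comparison over the citation counts logged per run, for example last value versus the mean of earlier values (labels and method are illustrative):

```python
def geo_trend(citation_counts):
    """Trajectory from a series of per-run citation counts out of
    geo-citations.jsonl: last value vs. mean of the earlier ones."""
    if len(citation_counts) < 2:
        return "insufficient-data"
    baseline = sum(citation_counts[:-1]) / (len(citation_counts) - 1)
    last = citation_counts[-1]
    if last > baseline:
        return "rising"
    if last < baseline:
        return "falling"
    return "flat"
```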


Per-site dashboard (single source of truth)

Build tools/site-health-dashboard.py (cron daily 06:30):

For each of 29 sites, write data/site-health/<domain>.json:

{
  "domain": "elkhornhardwood.com",
  "as_of": "2026-04-28",
  "gsc_28d": { "clicks": 13, "impressions": 985, "ctr": 1.3, "position": 17.3 },
  "ga4_7d": { "sessions": 24, "conversions": 1 },
  "twilio_7d": { "real_calls": 0, "spam_calls": 0 },
  "thin_pages": 3,
  "ai_readability": 52,
  "geo_citations_30d": 0,
  "queued_actions": [
    {"type": "seo-refresh", "page": "services/refinishing", "reason": "fall -3.4 pos"},
    {"type": "thin-expand", "page": "locations/papillion", "current_words": 312}
  ],
  "trend": "stable"
}
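Turning those per-site JSON files into the exception report is a filter over a handful of alert conditions. A sketch, using the fields shown above (the specific conditions are assumptions matching the example alerts):

```python
def exceptions(sites, max_rows=5):
    """Filter site-health records down to the morning-brief exception
    report: zero real calls in 7d, queued actions, or a declining trend."""
    flagged = [
        s for s in sites
        if s["twilio_7d"]["real_calls"] == 0
        or s["queued_actions"]
        or s["trend"] == "declining"
    ]
    return flagged[:max_rows]
```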

Surfaced in morning brief as a single 5-row exception report:

> Site health alerts (3):

> - elkhornhardwood.com — 0 calls 7d, 24 GA4 sessions, 2 actions queued for tonight

> - phxpoolresurfacing.com — fallers detected on 3 queries, refresh queued

> - orlandoconcretedriveway.com — IndexNow 403 for 7 days running, key needs to be regenerated


Implementation timeline (autonomous, no Costa input needed)

| Tonight (overnight-v7) | Tomorrow | Day 3 | Week 2 |
|---|---|---|---|
| Build action-queue/seo-refresh.jsonl + worker | Wire striking-distance → queue (Loop 1) | Wire thin-content live mode (Loop 2) on services/* whitelist | Loop 3 (AI Overview citation tracking) |
| Fix IndexNow 403s on Orlando + Bloomington | Build site-health-dashboard.py + cron | First 5 Loop 1/2 runs Costa-reviewed | Loop 4 + Loop 5 wiring |
| Patch 4 wrong-tel: numbers (DONE earlier today) | Greenlight Loop 1 production | Greenlight Loop 2 production | GEO citation tracker live |

Costa input required ONLY at:

  1. First 5 Loop 1/2 runs — verify auto-rewrites are quality (Telegram: "approved" / "reject")
  2. Confirming dedicated ITD Chrome profile for GSC relay daemon (per gsc-relay-options-2026-04-28.md)
  3. Sourcerow domain + budget approval (separate ticket, not SEO)

After greenlight, all 5 loops run autonomously. Telegram morning brief surfaces exceptions only.


Why this works (ranked by impact)

  1. Loop 1 (striking-distance refresh) — biggest mover. ~10-20 queries per site sit at pos 4-20. Each one closing into top-3 = 5-10x the click volume. With 29 sites × 10-20 queries = 300-600 active opportunities, automated refresh of even 20% = 60-120 ranking lifts in 90 days.
  2. Loop 3 (GEO citations) — biggest 12-month opportunity. As AI Overviews replace traditional SERPs (already 30% of US queries by Apr 2026), being cited by AI = the new SEO. Most R&R competitors aren't optimizing for this yet.
  3. Loop 2 (thin-content) — quick wins. ~7 thin pages per site × 29 sites = ~200 pages that could get to a baseline. Even +50 sessions/page/month = significant aggregate.
  4. Loop 4 (internal links) — compounding effect on Loop 1.
  5. Loop 5 (IndexNow + GSC relay) — accelerator for all 4 above. Without it, edits take 2-7 days to reflect in rankings; with it, often 24-48 hr.

Costa's role (what I need from you)

Once:

Ongoing (weekly, ~5 min):

That's it. Everything else is mine.