We Let an AI Agent Submit Our SaaS to 135 Directories: Results and Playbook

A detailed breakdown of using an AI agent for directory distribution: what worked, what failed, and what we changed after 135 submission attempts.

Most early-stage founders say they need "more distribution."

I ran a short distribution sprint for BeVisible to answer one question:

Can an agent do directory distribution faster than me, without losing traceability?

The workflow had two parts: Codex CLI for execution, and one structured submission tracker (a sheet or CSV) as the single source of truth.

The tracker logged every attempt with:

  • directory
  • URL
  • status (submitted, blocked, needs_verification)
  • notes
  • last attempt timestamp
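As a minimal sketch, the tracker above can be a plain CSV with one append-only logging helper. The file name, field names, and example values here are illustrative, not the exact ones we used:

```python
import csv
from datetime import datetime, timezone

TRACKER = "submissions.csv"  # assumed file name
FIELDS = ["directory", "url", "status", "notes", "last_attempt"]

def log_attempt(directory, url, status, notes=""):
    """Append one attempt. Status is one of:
    submitted, blocked, needs_verification."""
    with open(TRACKER, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "directory": directory,
            "url": url,
            "status": status,
            "notes": notes,
            "last_attempt": datetime.now(timezone.utc).isoformat(),
        })

# Hypothetical example attempt
log_attempt("Example Directory", "https://example.com/submit",
            "blocked", "login wall")
```

Append-only logging matters here: the agent never edits history, so every attempt stays traceable even when a directory is retried.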

What We Ran

  • 48-hour sprint
  • 135 directory attempts
  • agent-first execution, manual review on uncertain cases

Results

  • 135 attempted
  • 53 submitted
  • 7 needs verification
  • 75 blocked

[Figure: Excalidraw-style scoreboard of directory submission outcomes — 135 attempts, 53 submitted, 7 needs verification, 75 blocked]

Why 75 Were Blocked

Mostly channel constraints, not agent failure:

  • auth/login walls
  • paid-only listing flows
  • broken/dead forms
  • captcha/anti-bot barriers
  • category/policy mismatch

[Figure: Excalidraw-style blocker map showing why 75 attempts were blocked]

What Actually Worked

The agent was strong at:

  1. Repetitive execution at speed
  2. Consistent status logging
  3. Quickly eliminating dead-end directories

The tracker was the real asset. Without it, this would have been "busy work" with no learning.

What Did Not Change

Automation improved throughput, not demand quality.

An agent can submit forms. It cannot make a weak channel convert.

Directory listings are useful for baseline presence, but they are not our primary growth engine to $10k MRR.

What We Changed After the Sprint

We now treat directory distribution as:

  • one-time coverage + light maintenance
  • not a core weekly growth focus

Founder time moved to:

  • high-intent communities
  • conversion-focused pages
  • trial-to-paid and churn improvements

If You Want to Copy This

Keep it simple:

  1. Use an agent for repetitive submission work.
  2. Track every attempt in one structured tracker.
  3. Cap time spent on directories.
  4. Evaluate channels on paid outcomes, not activity.

The Logic (So Anyone Can Run This)

  1. Prepare a directory list and a structured tracker.
  2. Give Codex CLI one explicit prompt:
    • process the full list end-to-end
    • write status + notes + timestamp for every attempt
    • keep going until every directory has a final status
  3. Let Codex run until completion (no manual batching).
  4. After completion, review only needs_verification items manually.
  5. Count totals by status and decide channel allocation from that data.
  6. Keep directory work as maintenance, not your primary growth engine.
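Step 5 (count totals by status) is a one-liner against the tracker. A sketch, using assumed column names and made-up sample rows standing in for the real CSV:

```python
import csv
import io
from collections import Counter

# Stand-in for the real tracker file; column names assumed
sample = """directory,url,status,notes,last_attempt
DirA,https://a.example/submit,submitted,,2024-01-01T00:00:00Z
DirB,https://b.example/submit,blocked,captcha,2024-01-01T00:05:00Z
DirC,https://c.example/submit,needs_verification,email pending,2024-01-01T00:10:00Z
"""

# One pass over the rows gives the per-status totals
totals = Counter(row["status"] for row in csv.DictReader(io.StringIO(sample)))
print(dict(totals))
```

With the real file you would swap `io.StringIO(sample)` for `open("submissions.csv")`; the resulting counts are exactly the scoreboard numbers used to decide channel allocation.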

Bottom Line

This sprint was valuable because it removed uncertainty fast.

Using Codex CLI + a structured tracker, we turned a vague distribution task into concrete data:

  • what to keep
  • what to stop
  • where founder time should actually go