Tool UI

Run AlphaCrawler on any domain

Enter a URL, launch the crawler, and review the report with pages found, broken links, redirects, and metadata issues.

  • Free crawl workflow
  • No installation
  • Shareable report URLs

What the report highlights

Pages: Coverage and discovered URLs
Links: Broken, internal, and external signals
Redirects: Chains, stale targets, and mixed paths
Metadata: Titles, descriptions, canonicals, and headings
API

AlphaCrawler API documentation

The API page documents how AlphaCrawler crawl data can be requested programmatically and how the response format can be used in reporting or QA workflows.

10 Focused tools
10 Learn hub guides
1 Shareable crawl format

API overview

The AlphaCrawler API is designed for teams that want to move crawl data into dashboards, QA workflows, content operations, or release checks. The public documentation starts with a simple crawl endpoint so developers and technical marketers can understand the request format quickly, then expands into the response fields that are most commonly used in post-crawl automation.

The value of documenting the API on the public site is not just product clarity. It also gives the website a dedicated section for integration intent, which is important for search coverage and for users who discover AlphaCrawler through developer-facing or operations-focused queries rather than pure SEO tool queries.

Endpoint: GET /api/crawl?url=example.com
Purpose: Start or request crawl data for a website URL
Primary outputs: pages, broken_links, redirects, meta_issues

Example request

A request should use the canonical domain you want to audit. In a more advanced implementation you might also include options for crawl scope, page limits, or authentication, but the public example focuses on the minimal shape needed to explain the contract clearly.

curl -G 'https://alphacrawler.pro/api/crawl' \
  --data-urlencode 'url=example.com'
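The same request can be issued from Python. A minimal sketch using only the standard library, assuming the endpoint shape shown in the curl example above (the `build_crawl_url` helper is illustrative, not part of the API):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://alphacrawler.pro/api/crawl"

def build_crawl_url(domain: str) -> str:
    """Build the crawl request URL, URL-encoding the target domain."""
    return f"{BASE}?{urlencode({'url': domain})}"

request_url = build_crawl_url("example.com")
# report = json.load(urlopen(request_url))  # uncomment to issue the live request
```

URL-encoding the domain mirrors what `--data-urlencode` does in the curl example, so domains with unusual characters are passed safely.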

Example response format

The response format is intentionally summary-first. The page count gives immediate scale. Broken link and redirect fields capture the most common operational issues. The meta issues object provides a starting point for template-level remediation where missing or duplicated tags are affecting multiple URLs at once.

{
    "pages": 412,
    "broken_links": 17,
    "redirects": 29,
    "meta_issues": {
        "missing_title": 6,
        "missing_description": 14,
        "duplicate_h1": 3
    }
}
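For reporting use, the summary can be flattened into one line per metric. A short sketch assuming the response shape above (field names are taken from the example response; the `summarize` helper is illustrative):

```python
def summarize(report: dict) -> list[str]:
    """Turn a crawl summary dict into human-readable lines, one per metric."""
    lines = [
        f"pages crawled: {report['pages']}",
        f"broken links: {report['broken_links']}",
        f"redirects: {report['redirects']}",
    ]
    # Each meta issue becomes its own line, e.g. "meta issue missing_title: 6".
    for issue, count in report.get("meta_issues", {}).items():
        lines.append(f"meta issue {issue}: {count}")
    return lines

sample = {
    "pages": 412,
    "broken_links": 17,
    "redirects": 29,
    "meta_issues": {"missing_title": 6, "missing_description": 14, "duplicate_h1": 3},
}
```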

Implementation notes

Teams typically use this kind of output in two ways. The first is reporting: sending crawl summaries into dashboards or stakeholder updates. The second is gating: checking release candidates, migrations, or content launches for unacceptable changes in broken links, redirects, or metadata completeness before the change ships broadly.
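The gating use can be sketched as a threshold check over the summary fields. The thresholds and the `gate` helper below are illustrative assumptions, not product defaults:

```python
def gate(report: dict, max_broken: int = 0, max_missing_titles: int = 0) -> bool:
    """Return True if the crawl summary is within the release thresholds."""
    meta = report.get("meta_issues", {})
    return (
        report.get("broken_links", 0) <= max_broken
        and meta.get("missing_title", 0) <= max_missing_titles
    )

# A release candidate with known issues fails the strict default gate
# but passes a looser one tuned for a large migration.
release_candidate = {"pages": 412, "broken_links": 17,
                     "meta_issues": {"missing_title": 6}}
```

In practice the thresholds would come from team policy; the useful property is that the check is a single boolean a CI step can act on.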

As AlphaCrawler grows, the API section can become a natural home for authentication docs, rate limits, webhooks, and endpoint-specific references. The route and content structure added here prepares the site for that expansion without having to redesign the information architecture later.

Next Step

Validate the API use case with a live crawl

Run a crawl first, then use the documentation to think through how the resulting metrics should be consumed inside dashboards, QA, or release workflows.

Launch AlphaCrawler