How to Find All URLs on a Domain

Discover two efficient ways to map an entire website using the Link Grabber Chrome extension: sitemap extraction and automated crawling.

Why Scan an Entire Domain?

Finding every URL on a website is essential for SEO audits, site migrations, and competitor research. Whether you need to check for broken links, analyze site structure, or build a list of pages for content inventory, having a complete list of URLs is the first step.

Manually copying URLs is impractical for large sites, but the Link Grabber Chrome extension offers two automated methods to find all URLs on a domain in minutes.

Method 1: Extract from Sitemap (The Fast Way)

Most websites have a `sitemap.xml` file that lists all their important pages. According to Google Search Central, sitemaps are a crucial way to tell search engines about pages on your site that might not otherwise be discovered. This is often the fastest way to get a clean list of URLs without crawling the entire site.
  1. Open the target website's sitemap (usually `domain.com/sitemap.xml`).
  2. Select the text content of the sitemap (Ctrl+A or Command+A).
  3. Right-click the selection and choose "Grab links from selection".
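If you prefer to script this step instead of using the extension, the same extraction can be sketched with Python's standard library. This is a minimal illustration (the inline sitemap snippet stands in for a real `sitemap.xml` you would fetch):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace, per the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def extract_sitemap_urls(xml_text: str) -> list[str]:
    """Parse raw sitemap XML and return every <loc> URL."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

# Inline sample so the example runs without a network request.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

urls = extract_sitemap_urls(sample)
print(urls)  # → ['https://example.com/', 'https://example.com/about']
```

For a live site, you would download the sitemap first (e.g. with `urllib.request`) and pass the response text to the same function.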

Why use the Sitemap method?

Link Grabber will instantly parse the raw XML text, strip away the tags, and present you with a clean list of URLs in the extension window. You can extract 500+ URLs in under 30 seconds using this method. You can then filter, sort, or export them to Excel/CSV.
  • Speed

    Instant extraction, no waiting for a crawler
  • Accuracy

    Gets exactly what the site owner tells Google to index
  • Efficiency

    Works even on huge sitemaps with thousands of URLs
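The filter-and-export step can also be reproduced in a script. A short sketch, assuming you already have a list of URLs (the `/blog/` filter keyword is purely illustrative):

```python
import csv
import io

urls = [
    "https://example.com/",
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
    "https://example.com/contact",
]

# Filter to blog URLs (illustrative), deduplicate, and sort.
blog_urls = sorted(set(u for u in urls if "/blog/" in u))

# Write CSV in memory; swap StringIO for open("urls.csv", "w", newline="")
# to save an actual file for Excel.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["url"])
writer.writerows([u] for u in blog_urls)
print(buf.getvalue())
```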

Method 2: Link Crawler (The Thorough Way)

If a site doesn't have a sitemap, or you want to find "orphan" pages that aren't listed in the sitemap, the Link Crawler is your best tool. It automates the process of visiting pages and collecting links.
  1. Open the Link Grabber extension popup and click the Link Crawler button.
  2. Set the seed (start) URL of the site to crawl (normally the root domain URL, e.g. `example.com`).
  3. Set your Nav-Link Filter to "Same Origin" or "Same Domain". Same Origin keeps the crawler strictly on the current subdomain (e.g., `blog.example.com`); Same Domain allows crawling across all subdomains (e.g., both `blog.example.com` and `www.example.com`).
  4. Click "Grab Links".
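Conceptually, the steps above amount to a breadth-first crawl with an origin filter. The sketch below stubs the page-fetching step with a hard-coded link graph so it runs offline; a real crawler would download and parse each page instead:

```python
from collections import deque
from urllib.parse import urlparse

# Stub: pretend these are the links found on each fetched page.
LINK_GRAPH = {
    "https://example.com/": ["https://example.com/about", "https://blog.example.com/"],
    "https://example.com/about": ["https://example.com/"],
    "https://blog.example.com/": [],
}

def same_origin(a: str, b: str) -> bool:
    """True when scheme and host (including subdomain) match."""
    pa, pb = urlparse(a), urlparse(b)
    return (pa.scheme, pa.netloc) == (pb.scheme, pb.netloc)

def crawl(seed: str) -> set[str]:
    """Breadth-first crawl from seed, collecting every discovered URL."""
    seen, queue = {seed}, deque([seed])
    while queue:
        page = queue.popleft()
        for link in LINK_GRAPH.get(page, []):
            # "Same Origin" filter: skip links that leave the seed's subdomain.
            if same_origin(seed, link) and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

print(sorted(crawl("https://example.com/")))
# → ['https://example.com/', 'https://example.com/about']
```

Note how `blog.example.com` is excluded by the Same Origin filter; relaxing the check to compare only the registrable domain would mimic the "Same Domain" setting.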

For a more detailed Link Crawler user guide → Read here

Why use the Link Crawler method?

The extension will automatically navigate through the website, following links and collecting every URL it finds.

  • Auto-Pilot

    Runs in the background while you do other work.
  • Deep Control

    Configure crawl depth and limits to target just the pages you want and keep large site crawls efficient.
  • Deep Discovery

    Finds internal links that simple scanners might miss.
  • Export

    Download the full crawl data, including source URLs and anchor text.

Bonus: Tree View Visualization
Once the crawl is complete, click the Tree View icon next to the results. This visualizes the site structure as a hierarchical tree, helping you understand how pages are linked together.
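The Tree View's grouping can be approximated by splitting each URL's path into segments and nesting them. A simplified sketch (Link Grabber's actual rendering may differ):

```python
from urllib.parse import urlparse

def build_tree(urls: list[str]) -> dict:
    """Nest URL path segments into a dict-of-dicts hierarchy."""
    tree: dict = {}
    for url in urls:
        parsed = urlparse(url)
        # Root the hierarchy at the host, then one level per path segment.
        segments = [parsed.netloc] + [s for s in parsed.path.split("/") if s]
        node = tree
        for seg in segments:
            node = node.setdefault(seg, {})
    return tree

urls = [
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
    "https://example.com/contact",
]
tree = build_tree(urls)
print(tree)
# → {'example.com': {'blog': {'post-1': {}, 'post-2': {}}, 'contact': {}}}
```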

Which Method Should You Use?

For a complete audit, we recommend starting with Method 1 to get the official page list, then running Method 2 to find any extra or broken links that shouldn't be there.
| Feature | Sitemap Extraction | Link Crawler |
| --- | --- | --- |
| Best For | Quick content inventory | Deep SEO audits & hidden pages |
| Speed | Instant | Slower (depends on site size) |
| Completeness | Limited to sitemap entries | Finds everything linked on pages |
| Effort | Low | Medium (requires configuration) |

FAQ

Can the Link Crawler find every page on a site?

The Link Crawler can find any page that is linked to from another page on the site. It cannot find pages that are completely isolated (orphaned) and not listed in the sitemap.