Screaming Frog SEO Spider is an SEO tool that you can install on PC, Mac or Linux. The program analyzes a website's links, images, CSS, scripts and applications from an SEO perspective. It fetches the components that matter for SEO, groups them into tabs by type, and lets you filter for common SEO issues or break the data down however you see fit by exporting it to Excel. Because data is collected and updated continuously during a crawl, you can view, analyze and filter it as you go in the program's user interface.
The Screaming Frog SEO Spider allows you to quickly crawl, analyze and audit a website from an on-site SEO perspective. It is especially good for analyzing medium and large websites, where checking every page manually would not be feasible.
The SEO Spider allows you to export on-site SEO elements (URL, page title, meta description, etc.) to Excel, so the data can easily be used as the basis for recommendations.
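To give a feel for that export workflow, here is a minimal sketch that writes crawl rows to a CSV file (a stand-in for the Excel export). The rows and column names are hypothetical examples, not the SEO Spider's actual export format.

```python
import csv

# Hypothetical crawl rows; in practice this data would come from the
# SEO Spider's own export. Column names here are illustrative only.
rows = [
    {"url": "https://example.com/", "page_title": "Home",
     "meta_description": "Welcome.", "status_code": "200"},
    {"url": "https://example.com/about", "page_title": "About",
     "meta_description": "", "status_code": "200"},
]

with open("crawl_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["url", "page_title", "meta_description", "status_code"]
    )
    writer.writeheader()   # column headers, so the file opens cleanly in Excel
    writer.writerows(rows)
```

A file like this opens directly in Excel or any spreadsheet tool, where empty cells (such as the missing meta description above) are easy to filter and fix.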
A quick summary of some of the data collected during a crawl includes:
- Errors – Client errors such as broken links and server errors (no responses, 4XX, 5XX).
- Redirects – Permanent or temporary redirects (3XX responses).
- Blocked URLs – View & audit URLs disallowed by the robots.txt protocol.
- External Links – All external links and their status codes.
- Protocol – Whether the URLs are secure (HTTPS) or insecure (HTTP).
- URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
- Duplicate Pages – Exact duplicate detection via hash values / MD5 checksums.
- Page Titles – Missing, duplicate, over 65 characters, short, pixel width truncation, same as h1, or multiple.
- Meta Description – Missing, duplicate, over 156 characters, short, pixel width truncation or multiple.
- Meta Keywords – Mainly for reference, as they are not used by Google, Bing or Yahoo.
- File Size – Size of URLs & images.
- Response Time.
- Last-Modified Header.
- Page Depth Level.
- Word Count.
- H1 – Missing, duplicate, over 70 characters, multiple.
- H2 – Missing, duplicate, over 70 characters, multiple.
- Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc.
- Meta Refresh – Including target page and time delay.
- Canonical link element & canonical HTTP headers.
- rel="next" and rel="prev".
- AJAX – The SEO Spider obeys Google’s AJAX Crawling Scheme.
- Inlinks – All pages linking to a URI.
- Outlinks – All pages a URI links out to.
- Anchor Text – All link text, plus alt text from images with links.
- Follow & Nofollow – At page and link level (true / false).
- Images – All URIs containing image links and all images on a given page; flags images over 100 KB, missing alt text, and alt text over 100 characters.
- User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
- Configurable Accept-Language Header – Supply an Accept-Language HTTP header to crawl locale-adaptive content.
- Redirect Chains – Discover redirect chains and loops.
- Custom Source Code Search – The SEO Spider allows you to find anything you want in the source code of a website, whether that's Google Analytics code, specific text, or other markup.
- Custom Extraction – You can collect any data from the HTML of a URL using XPath, CSS Path selectors or regex.
- Google Analytics Integration – You can connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
- Google Search Console Integration – You can connect to the Google Search Analytics API and collect impression, click and average position data against URLs.
- XML Sitemap Generator – You can create an XML sitemap and an image sitemap using the SEO spider.
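To make a few of the checks above concrete, the sketch below audits a page's HTML the way items in this list describe: page-title length (over 65 characters), meta-description length (over 156 characters), multiple H1s, and an MD5 hash for exact-duplicate detection. This is an illustration of the kinds of checks the tool performs, not Screaming Frog's actual implementation; the function and field names are made up for this example.

```python
import hashlib
from html.parser import HTMLParser

class PageAuditParser(HTMLParser):
    """Collects the title, meta description and H1 count from raw HTML."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit_html(html):
    """Run a handful of the on-page checks listed above on one page's HTML."""
    parser = PageAuditParser()
    parser.feed(html)
    return {
        # Thresholds follow the character limits mentioned in this article.
        "title_missing": not parser.title.strip(),
        "title_too_long": len(parser.title) > 65,
        "meta_description_missing": not parser.meta_description.strip(),
        "meta_description_too_long": len(parser.meta_description) > 156,
        "multiple_h1": parser.h1_count > 1,
        # Identical MD5 hashes of the raw markup flag exact duplicate pages.
        "md5": hashlib.md5(html.encode("utf-8")).hexdigest(),
    }
```

Running `audit_html` over every crawled page and comparing the `md5` values across pages is essentially how an exact-duplicate check like the one described above can work; the length checks map directly to the Page Titles and Meta Description filters.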