





Netpeak Spider is your personal SEO crawler that helps you perform a fast, comprehensive technical audit of an entire website.
Thousands of SEOs and webmasters all over the world use Netpeak Spider to perform everyday SEO tasks in the most efficient way. Try it free for 14 days!
62 issues and 54 parameters checked by Netpeak Spider
Parameters (54) | Description |
---|---|
General | |
# | URL sequence number in the results table. |
URL | URL (Uniform Resource Locator) is the unified address of a document on the World Wide Web. In this column, the maximum severity of SEO issues found on a page is highlighted with the appropriate color. Note that you always see decoded URLs in the program interface. |
Status Code | Part of the first line of the HTTP response headers: consists of the status code number and its description. If necessary, special status codes are added to this field after the '&' symbol, showing indexation instructions (Disallowed, Canonicalized, Refresh Redirected and Noindex / Nofollow). |
Content-Type | The value is taken from the 'Content-Type' field in the HTTP response headers or the <meta http-equiv="content-type" /> tag in the <head> section. It tells the user what the type of the returned content actually is. For example, text/html, image/png, application/pdf, etc. |
Issues | Number of all issues (errors, warnings and notices) found on the page. |
Response Time | Time (in milliseconds) taken by a website server to respond to user's or visitor's request. It is the same as Time To First Byte (TTFB). |
Content Download Time | Time (in milliseconds) taken by a website server to return an HTML code of the page. |
Depth | Number of clicks from initial URL to the current one. '0' value is initial URL's depth; '1' stands for pages received after following links from initial URL for the first time, etc. |
URL Depth | Number of segments in the inspected page's URL. Unlike the 'Depth' parameter, URL Depth is static and does not depend on the initial URL. For instance, the URL depth of https://example.com/category/ is 1, of https://example.com/category/product/ it is 2, and so forth (see the sketch after this table). |
Last-Modified | Content of the 'Last-Modified' field in HTTP response headers: indicates file's last modification date and time. |
Indexation | |
Allowed in robots.txt | Accessibility of the URL according to the robots.txt file, if it exists. TRUE means that the URL is allowed to be indexed; FALSE means it is disallowed in the robots.txt file. |
Meta Robots | Content of <meta name="robots" /> tag in <head> section of the document. |
Canonical URL | Content of the Canonical directive in HTTP response header or <link rel="canonical" /> tag in <head> section of the document. |
Redirects | Number of redirects from the current URL: can be useful to determine chains of redirects. |
Target redirect URL | Target URL of single redirect or redirect chain if it exists. |
X-Robots-Tag | Content of the 'X-Robots-Tag' field in HTTP response header. It contains indexation instructions and is equivalent to Meta Robots in <head> section. |
Refresh | Content of the Refresh directive in HTTP response header or <meta http-equiv="refresh"> tag in <head> section of the document. |
Canonicals | Number of URLs in a Canonical chain starting from the current page. The check is performed automatically when crawling is paused and after it has been successfully completed. The check can be canceled and then switched back on via the 'Analysis' menu → 'Checking Canonical chains'. |
Links | |
Internal PageRank | Relative weight of the page determined by the PageRank algorithm. It considers all main indexation instructions, link attributes, and link juice distribution. This parameter is calculated automatically when crawling is paused and after it has been successfully completed. Calculation can be canceled and then switched back on via the 'Analysis' menu → 'Calculate internal PageRank'. To see extended features and apply advanced settings for this parameter, go to 'Tools' → 'Internal PageRank calculation'. A simplified calculation sketch follows this table. |
Incoming Links | All links to current page from the crawled URLs. Calculation is automatically performed when the crawling is paused and after it has been successfully completed. Check can be canceled and then switched on via 'Analysis' menu → 'Count incoming links'. |
Outgoing Links | All links from the current URL. |
Internal Links | Links from current URL to other URLs of crawled website. |
External Links | Links from current URL to other websites. |
Head Tags | |
Title | Content of the <title> tag in <head> section of the document. It is a name of webpage and one of the most important tags in SEO. |
Title Length | Number of characters (including spaces) in the <title> tag on the target URL. |
Description | Content of the <meta name="description" /> tag in <head> section of the document. Usually displayed in SERP for a relevant query to current page, thus affecting CTR. |
Description Length | Number of characters (including spaces) in the <meta name="description" /> tag on the target URL. |
Base Tag | Content of the <base> tag in <head> section of the document. |
Keywords | Content of the <meta name="keywords" /> tag in <head> section of the document. |
Keywords Length | Number of characters (including spaces) in the <meta name="keywords" /> tag on the target URL. |
Rel Next URL | Content of the <link rel="next" /> tag in <head> section of the document. |
Rel Prev URL | Content of the <link rel="prev" /> tag in <head> section of the document. |
AMP HTML | Indicates whether the target document is an AMP HTML page. This is determined by the presence of the ⚡ or amp attribute on the <html> tag of the document (<html ⚡> or <html amp>). |
Link to AMP HTML | Content of the <link rel="amphtml" /> tag in <head> section of the document. |
Content | |
Images | Number of images found in <img> tags on the target page. Along with the number of images, ALT attributes and the initial source view of each image URL are collected. |
Content-Length | Content of the 'Content-Length' field in HTTP response headers. Indicates the size of the document in bytes. |
Content-Encoding | Content of the 'Content-Encoding' field in HTTP response headers. Indicates encodings applied to document. |
H1 Content | Content of the first non-empty <h1> tag on the target URL. |
H1 Length | Number of characters (including spaces) in the first non-empty <h1> tag on the target URL. |
H1 Headers | Number of <h1> headers on the target URL. |
H2 Headers | Number of <h2> headers on the target URL. |
H3 Headers | Number of <h3> headers on the target URL. |
H4 Headers | Number of <h4> headers on the target URL. |
H5 Headers | Number of <h5> headers on the target URL. |
H6 Headers | Number of <h6> headers on the target URL. |
HTML Size | Number of characters in <html> section of the target page including HTML tags. |
Content Size | Number of characters (including spaces) in the <body> section of the target page, excluding HTML tags. To put it simply, the size of the text on the page, including spaces. |
Text/HTML Ratio | Percentage of plain text in the whole page size ('Content Size' to 'HTML Size' parameters). |
Characters | Number of characters (excluding spaces) in the <body> section of the target page, excluding HTML tags. To put it simply, the size of the text on the page, excluding spaces. |
Words | Number of words in <body> section of the document. |
Characters in <p> | Number of characters (excluding spaces) in <p></p> tags in <body> section of the target page. |
Words in <p> | Number of words in <p></p> tags in <body> section of the target page. |
Page Hash | Unique key for the content of the entire page: allows you to find duplicates according to this parameter. |
Text Hash | Unique key for the text content in the <body> section: allows you to find duplicates according to this parameter. A sketch of these content metrics follows this table. |
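
For illustration, here is a minimal sketch of how the 'URL Depth' parameter can be computed, assuming depth simply equals the number of non-empty path segments (the helper name is ours, not part of Netpeak Spider):

```python
from urllib.parse import urlsplit

def url_depth(url: str) -> int:
    """Count non-empty path segments: /category/ -> 1, /category/product/ -> 2."""
    path = urlsplit(url).path
    return sum(1 for segment in path.split("/") if segment)

assert url_depth("https://example.com/category/") == 1
assert url_depth("https://example.com/category/product/") == 2
```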
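
The content metrics above ('HTML Size', 'Content Size', 'Characters', 'Words', 'Text/HTML Ratio', 'Page Hash', 'Text Hash') can be approximated as shown below. This is only a sketch: the hashing algorithm and the exact text-extraction rules used by Netpeak Spider are not documented here, so MD5 and plain string handling are assumptions.

```python
import hashlib

def content_metrics(html: str, body_text: str) -> dict:
    """Rough equivalents of the 'Content' parameters described above."""
    html_size = len(html)                         # characters incl. HTML tags
    content_size = len(body_text)                 # text incl. spaces
    characters = len(body_text.replace(" ", ""))  # text excl. spaces
    words = len(body_text.split())
    ratio = (content_size / html_size * 100) if html_size else 0.0
    return {
        "HTML Size": html_size,
        "Content Size": content_size,
        "Characters": characters,
        "Words": words,
        "Text/HTML Ratio": round(ratio, 2),
        # Identical hashes point to 'Duplicate Pages' / 'Duplicate Text' candidates.
        "Page Hash": hashlib.md5(html.encode("utf-8")).hexdigest(),
        "Text Hash": hashlib.md5(body_text.encode("utf-8")).hexdigest(),
    }
```

Pages sharing the same 'Page Hash' or 'Text Hash' value would then be grouped into the 'Duplicate Pages' and 'Duplicate Text' reports described below.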
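
The 'Internal PageRank' parameter is based on the classic PageRank iteration. The sketch below is a deliberately simplified version that ignores indexation instructions, nofollow attributes, and dead-end redistribution, which the tool does take into account; the graph and function names are illustrative only.

```python
def internal_pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """Iterative PageRank over an internal link graph (URL -> list of linked URLs)."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, targets in links.items():
            if not targets:        # dead end: its weight is simply dropped in this sketch
                continue
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "/": ["/category/", "/about/"],
    "/category/": ["/", "/category/product/"],
    "/category/product/": ["/category/"],
    "/about/": [],                 # no outgoing links: a 'PageRank: Dead End' candidate
}
print(internal_pagerank(graph))
```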
Issues (62) | Description |
---|---|
Errors | |
Broken Links | Indicates unavailable pages, as well as the ones returning 4xx and higher HTTP status codes. |
PageRank: Dead End | Indicates URLs that were marked by the internal PageRank algorithm as dead ends. These pages have incoming links but no outgoing ones, causing an imbalance in link juice distribution. |
Duplicate Pages | Indicates all pages that have the same page hash value. URLs in this report are grouped by 'Page Hash' parameter. |
Duplicate Text | Indicates all pages that have the same text content in <body> section. URLs in this report are grouped by 'Text Hash' parameter. |
Duplicate Titles | Indicates all pages with duplicate <title> tag content. URLs in this report are grouped by 'Title' parameter. |
Missing or Empty Title | Indicates all pages without <title> tag or with the empty one. |
Duplicate Descriptions | Indicates all pages with duplicate <meta name="description" /> tag content. URLs in this report are grouped by 'Description' parameter. |
Missing or Empty Description | Indicates all pages without <meta name="description" /> tag or with the empty one. |
4xx Error Pages: Client Error | Indicates all pages that return 4xx HTTP status code. |
Broken Redirect | Indicates all pages that redirect to unavailable URLs or URLs with 4xx or higher status code. |
Endless Redirect | Indicates all pages redirecting to themselves and thereby generating infinite redirect loop. |
Max Redirections | Indicates all pages that redirect more than 4 times (by default). |
Redirect Blocked by robots.txt | Indicates pages that return a redirect to URL blocked by robots.txt. |
Redirects with Bad URL Format | Indicates pages that return a redirect with bad URL format in HTTP response headers. |
Bad Base Tag Format | Indicates pages that contain the <base> tag in an incorrect format. Note that relative links cannot be used in this tag since they are not supported by search engine robots. |
Max URL Length | Indicates all pages with more than 2000 characters in URL (by default). |
Missing Internal Links | Indicates 'dead ends' – all pages without internal links. Note that such pages do get link juice but do not pass it. |
Links with Bad URL Format | Indicates pages that contain internal links with bad URL format. |
Broken Images | Indicates unavailable images, as well as the ones returning 4xx and higher HTTP status codes. Note that 'Images' Content Type should be enabled on 'General' tab of crawling settings to detect this issue. |
Canonical Blocked by robots.txt | Indicates pages that contain a <link rel="canonical" /> tag pointing to URLs blocked by robots.txt. Note that if the target URL starts a Canonical chain leading to a blocked page, the report will contain each URL from this chain. |
Warnings | |
Multiple Titles | Indicates all pages with more than one <title> tag. |
Multiple Descriptions | Indicates all pages with more than one <meta name="description" /> tag. |
Missing or Empty H1 | Indicates all pages without <h1> header tag or with the empty one. |
Multiple H1 | Indicates all pages with more than one <h1> header tag. |
Duplicate H1 | Indicates all pages with duplicate <h1> header tags content. URLs in this report are grouped by 'H1 Content' parameter. |
Min Content Size | Indicates all pages with less than 500 characters (by default) in the <body> section (excluding HTML tags). |
PageRank: Redirect | Indicates URLs marked by the internal PageRank algorithm as redirecting link juice. It could be pages that return a 3xx redirect or have canonical / refresh tags pointing to another URL. |
3xx Redirected Pages | Indicates all pages that return 3xx redirection status code. |
Redirect Chain | Indicates all pages that redirect more than once (see the request-level sketch after this table). |
Refresh Redirected | Indicates all pages with a redirect to another URL in the Refresh directive of the HTTP response header or the <meta http-equiv="refresh"> tag in the <head> section of the document. |
Canonical Chain | Indicates pages starting Canonical chain or taking part in it. To view detailed information, open an additional table 'Canonicals'. |
External Redirect | Indicates all pages that return a 3xx redirect to external website which is not a part of the crawled one. |
Blocked by robots.txt | Indicates all pages disallowed in robots.txt file. |
Blocked by Meta Robots | Indicates all pages that contain <meta name="robots" content="noindex"> directive in the <head> section of the document. |
Blocked by X-Robots-Tag | Indicates all pages that contain the 'noindex' directive in the X-Robots-Tag of the HTTP response header. |
Missing Images ALT Attributes | Indicates all pages that contain images without ALT attribute or with an empty one. |
Max Image Size | Indicates images with a size exceeding 100 kB. Take into account that the 'Images' Content Type should be enabled on the 'General' tab of the crawling settings to detect this issue. |
5xx Error Pages: Server Error | Indicates all pages that return 5xx HTTP status code. |
Long Server Response Time | Indicates pages with TTFB (time to first byte) exceeding 500 ms (by default). |
Bad AMP HTML Format | Indicates AMP HTML documents that do not meet the AMP Project documentation standards. Note that there are at least 8 markup requirements for each AMP HTML page. |
Notices | |
Percent-Encoded URLs | Indicates pages that contain percent-encoded (non-ASCII) characters in the URL. For instance, the URL https://example.com/例 is encoded as https://example.com/%E4%BE%8B (see the encoding example after this table). |
Duplicate Canonical URLs | Indicates all pages with duplicate <link rel="canonical" /> tag content. URLs in this report are grouped by 'Canonical URL' parameter. |
PageRank: Orphan | Indicates URLs that were marked by the internal PageRank algorithm as orphans – these pages have no incoming links. |
PageRank: Missing Outgoing Links | Indicates URLs with no outgoing links found after calculating internal PageRank. It usually happens when outgoing links on the page have not been crawled yet. |
Same Title and H1 | Indicates all pages that have identical <title> and <h1> header tags. |
Max Title Length | Indicates all pages with <title> tag exceeding 70 characters (by default). |
Short Title | Indicates all pages that have less than 10 characters (by default) in <title> tag. |
Max Description Length | Indicates all pages with <meta name="description" /> tag exceeding 320 characters (by default). |
Short Description | Indicates all pages that have less than 50 characters (by default) in <meta name="description" /> tag. |
Max H1 Length | Indicates all pages with <h1> header tag exceeding 65 characters (by default). |
Max HTML Size | Indicates all pages with more than 200k characters (by default) in the <html> section (including HTML tags). |
Max Content Size | Indicates all pages with more than 50k characters (by default) in <body> section (excluding HTML tags). |
Min Text/HTML Ratio | Indicates all pages with less than 10% ratio (by default) of the text ('Content Size' parameter) to HTML ('HTML Size' parameter). |
Nofollowed by Meta Robots | Indicates all pages that contain <meta name="robots" content="nofollow"> directive in the <head> section. |
Nofollowed by X-Robots-Tag | Indicates all pages that contain 'nofollow' directive in X-Robots-Tag of the HTTP response header. |
Canonicalized Pages | Indicates all pages where URL in <link rel="canonical" /> tag differs from the Page URL. |
Non-HTTPS Protocol | Indicates the list of URLs without secure HTTPS protocol. |
Max Internal Links | Indicates all pages with more than 100 internal links (by default). |
Max External Links | Indicates all pages with more than 10 external links (by default). |
Internal Nofollow Links | Indicates all pages that contain internal links with rel="nofollow" attribute. |
External Nofollow Links | Indicates all pages that contain external links with rel="nofollow" attribute. |
Missing or Empty robots.txt File | Indicates all URLs related to missing or empty robots.txt file. Note that different subdomains and protocols (http / https) can contain different robots.txt files. This issue may occur when robots.txt redirects to any other URL or when it returns a status code other than 200 OK. |
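
As an illustration of how the redirect-related issues and 'Long Server Response Time' could be checked for a single URL, here is a sketch using the Python requests library; the thresholds mirror the defaults mentioned above, and the function name is ours, not part of Netpeak Spider.

```python
import requests

def redirect_report(url: str, max_redirects: int = 4, max_ttfb_ms: int = 500) -> dict:
    """Follow a URL and report redirect hops and an approximate response time."""
    response = requests.get(url, timeout=30)          # redirects are followed by default
    hops = len(response.history)                      # number of redirects followed
    chain = [r.url for r in response.history] + [response.url]
    # response.elapsed measures time until the headers are parsed,
    # which is close to (but not exactly) Time To First Byte.
    ttfb_ms = response.elapsed.total_seconds() * 1000
    return {
        "status_code": response.status_code,
        "redirect_chain": chain if hops > 1 else None,   # 'Redirect Chain'
        "max_redirections": hops > max_redirects,        # 'Max Redirections'
        "long_server_response_time": ttfb_ms > max_ttfb_ms,
    }

print(redirect_report("https://example.com/"))
```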
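
The 'Percent-Encoded URLs' notice boils down to standard URL encoding of non-ASCII characters, which can be reproduced with Python's standard library:

```python
from urllib.parse import quote, unquote

encoded = quote("https://example.com/例", safe=":/")
print(encoded)           # https://example.com/%E4%BE%8B
print(unquote(encoded))  # https://example.com/例
```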
Plans and Pricing

Netpeak Spider licenses are available for 1, 3, 6, or 12 months, each billed as one payment per period. Longer subscriptions come with a long-term discount: 10% off for 3 months, 20% off for 6 months, and 30% off for 12 months (best value). Long-term, volume, and loyalty discounts are combined in the final price.
Frequently Asked Questions
What is Netpeak Spider?
How does the free trial work?
How do I start using Netpeak Spider?
– To start using Netpeak Spider you need to:
- Create a Netpeak Software Account.
- Download Netpeak Launcher and install it.
- Log in to Netpeak Launcher and install Netpeak Spider.
Is it necessary to have a Netpeak Software Account to use Netpeak Spider?
What is Netpeak Launcher?
Can I use Netpeak Spider on more than one device?
– You can use Netpeak Spider on several devices, as long as they are not running the program at the same time. If you wish to use the software on multiple devices simultaneously, you need to buy separate licenses.
To change or adjust the devices used with Netpeak Spider, please visit the 'Device Management' section in the User Control Panel.