Example Robots.txt Directives

Robots.txt files are an important and necessary part of SEO, but they can be confusing. To help with some common use cases, we’ve put together example robots.txt directives that you can adapt for your own site:

Allow all web crawlers access to all content:
User-agent: *
Disallow:

Block all web crawlers from all website content:
User-agent: *
Disallow: /

Block all web crawlers from PDF files, site-wide:
User-agent: *
Disallow: /*.pdf

Block all web crawlers from JPG files, only within a specific subfolder:
User-agent: *
Disallow: /subfolder/*.jpg$

Note: URLs and filenames are case-sensitive, so in the above example, .JPG files would still be allowed.
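
If you also want to block uppercase .JPG filenames, one option is to add a rule for each casing, for example:
User-agent: *
Disallow: /subfolder/*.jpg$
Disallow: /subfolder/*.JPG$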

Block a specific web crawler from a specific folder:
User-agent: bingbot
Disallow: /blocked-subfolder/

Block a specific web crawler from a specific web page:
User-agent: baiduspider
Disallow: /subfolder/page-to-block

Note: Be careful with the above example. If a trailing slash is added, the rule is treated as a directory rather than a single page, so it would block everything inside that folder but no longer match the page itself.
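
For comparison, this is what the directory form looks like; it blocks any URL under /subfolder/page-to-block/ but not the page /subfolder/page-to-block itself:
User-agent: baiduspider
Disallow: /subfolder/page-to-block/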
