Problem is that you’re also blocking search engines from indexing your site, no?
Nope. Search engines should follow the robots.txt.
You misunderstand. Sometimes you want your public website to be indexed by search engines but not scraped for the next LLM model. If you disallow scraping altogether, then you won’t be indexed by search engines either. That can be a problem.
I know that. That’s why I don’t ban everyone, only those who don’t follow the rules inside my robots.txt. All “sane” search engine crawlers should follow those, so it’s no problem.
Not if they obeyed the rules.
No. That’s why they wrote “this particular area”.
The point is to have an area of the site that serves no purpose other than to catch bots that ignore the rules in robots.txt. Legit search engine indexers will respect directives in robots.txt to avoid that area; they will still index everything else. Bad bots will ignore the directives, index the forbidden area anyway, and by doing so, reveal themselves in the server logs.
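For illustration, a minimal robots.txt sketch of that setup (the /trap/ path is made up here; any area that nothing legitimate ever links to or needs would do):

    # Well-behaved crawlers read this and skip /trap/.
    # Anything that fetches /trap/ anyway has outed itself.
    User-agent: *
    Disallow: /trap/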
That’s the trap, aka honeypot.
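And the log side could look roughly like this. A hedged sketch, assuming a standard combined-format access log at access.log and the hypothetical /trap/ path from the robots.txt above:

    #!/usr/bin/env python3
    # Collect client IPs that requested the disallowed trap area.
    # Assumes the common/combined log format: the remote address is
    # the first field and the request line is quoted.
    import re

    TRAP_PATH = "/trap/"     # hypothetical; match your Disallow rule
    LOG_FILE = "access.log"  # adjust to your server's log location

    offenders = set()
    with open(LOG_FILE) as log:
        for line in log:
            m = re.match(r'(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)', line)
            if m and m.group(2).startswith(TRAP_PATH):
                offenders.add(m.group(1))

    # Feed these into whatever ban mechanism you use (fail2ban, firewall, etc.).
    for ip in sorted(offenders):
        print(ip)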