
What is Nicecrawler?


Nicecrawler is an archiver operated by NiceCrawler. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us.


Operator: NiceCrawler


Snapshots websites for historical databases

Expected Behavior

Archivers visit websites on a roughly regular cadence, since evenly spaced snapshots make a more useful historical record. Popular websites receive more frequent visits, since they are more likely to be queried in the historical database later.
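As a rough illustration only (the actual schedule isn't published, and the function, rank scale, and base interval below are assumptions), a scheduler along these lines would revisit popular sites more often:

```python
# Hypothetical snapshot scheduler: lower popularity rank = more popular site.
# The 100,000 rank scale and 30-day base interval are illustrative guesses.
def snapshot_interval_days(popularity_rank, base_days=30, min_days=1):
    """Return days between snapshots, shorter for more popular sites."""
    # Scale the interval linearly with rank, then clamp to [min_days, base_days]
    interval = base_days * popularity_rank / 100_000
    return max(min_days, min(base_days, interval))

print(snapshot_interval_days(1))        # a top-ranked site: revisit daily
print(snapshot_interval_days(100_000))  # an obscure site: revisit monthly
```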


Activity on Your Website

Half of your website's traffic probably comes from artificial agents, and they're becoming more intelligent every day.


Other Websites

Some of the top websites currently block Nicecrawler in some way.

Access Control

Should I Block Nicecrawler?

It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want your website to be part of that record, you can block archivers.

Using Robots.txt

User Agent Token    Description
Nicecrawler         Should match instances of Nicecrawler

You can block Nicecrawler or limit its access by setting user agent token rules in your website's robots.txt.

# robots.txt
# This should block Nicecrawler

User-agent: Nicecrawler
Disallow: /
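You can check rules like these locally before deploying them. Python's standard `urllib.robotparser` evaluates robots.txt rules against a user agent token (the URL below is just an example):

```python
from urllib.robotparser import RobotFileParser

# The same rules as the robots.txt example above
rules = """\
User-agent: Nicecrawler
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Nicecrawler is blocked from every path...
print(parser.can_fetch("Nicecrawler", "https://example.com/page"))    # False
# ...while agents that don't match the token are unaffected
print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))   # True
```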

Instead of doing this manually, you can generate a robots.txt that stays up to date with the agent list automatically.
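A generator of that kind boils down to rendering one User-agent/Disallow group per token. As a sketch of the idea (the agent list and function name here are hypothetical, and a real setup would fetch the current agent list rather than hardcode it):

```python
# Hypothetical list of agent tokens to block; a real generator would pull
# an up-to-date list from an agent directory instead of hardcoding it.
BLOCKED_AGENTS = ["Nicecrawler", "SomeOtherArchiver"]

def build_robots_txt(agents, disallow="/"):
    """Render a robots.txt that disallows each agent token."""
    lines = []
    for agent in agents:
        lines.append(f"User-agent: {agent}")
        lines.append(f"Disallow: {disallow}")
        lines.append("")  # blank line separates groups
    return "\n".join(lines)

print(build_robots_txt(BLOCKED_AGENTS))
```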
