Arquivo-web-crawler
About
Arquivo-web-crawler is the Portuguese web archive's bot that systematically crawls and preserves Portuguese websites for historical research, creating a comprehensive digital heritage of Portugal's web presence. You can see how often Arquivo-web-crawler visits your website by setting up Dark Visitors Agent Analytics.
Expected Behavior
Archivers visit websites on a roughly regular cadence, since evenly spaced snapshots make a more useful historical record. Popular websites are visited more often because they are more likely to be queried in the archive later.
Type | Detail
Operated By | Arquivo
Last Updated | 9 hours ago
Insights
[Charts: top website robots.txts, country of origin, and global traffic (the percentage of all internet traffic coming from archivers).]
Robots.txt
Should I Block Arquivo-web-crawler?
It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want to be part of that record for some reason, you can block archivers.
How Do I Block Arquivo-web-crawler?
You can block Arquivo-web-crawler or limit its access by setting user agent token rules in your website's robots.txt. Set up Dark Visitors Agent Analytics to check whether it's actually following them.
User Agent String | Arquivo-web-crawler (compatible; heritrix/3.4.0-20200304 +https://arquivo.pt/faq-crawling)
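If you want to log or rate-limit this crawler separately, a minimal sketch of matching its user agent token, assuming you can read the request's User-Agent header (the helper name here is illustrative, not part of any real API):

```python
# Sketch: identify Arquivo-web-crawler from a request's User-Agent header.
# The token matches the user agent string listed above.
ARQUIVO_TOKEN = "arquivo-web-crawler"

def is_arquivo_crawler(user_agent: str) -> bool:
    """Case-insensitive substring match on the crawler's UA token."""
    return ARQUIVO_TOKEN in user_agent.lower()

ua = "Arquivo-web-crawler (compatible; heritrix/3.4.0-20200304 +https://arquivo.pt/faq-crawling)"
print(is_arquivo_crawler(ua))  # True
```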
# In your robots.txt ...
User-agent: Arquivo-web-crawler # https://darkvisitors.com/agents/arquivo-web-crawler
Disallow: /
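Before deploying a rule like the one above, you can sanity-check it with Python's standard-library robots.txt parser. A sketch (example.com is a placeholder domain):

```python
import urllib.robotparser

# Verify that the robots.txt rules above block Arquivo-web-crawler
# while leaving other agents unaffected.
rules = """\
User-agent: Arquivo-web-crawler
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# can_fetch matches the agent token against the crawler's user agent,
# so it should deny Arquivo-web-crawler everywhere and allow other bots.
print(parser.can_fetch("Arquivo-web-crawler", "https://example.com/page"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))         # True
```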
⚠️ Manual Robots.txt Editing Is Not Scalable
New agents are created every day. We recommend setting up Dark Visitors Automatic Robots.txt if you want to block all agents of this type.