Arquivo-web-crawler

What is Arquivo-web-crawler?

About

Arquivo-web-crawler is the Portuguese web archive's bot that systematically crawls and preserves Portuguese websites for historical research, creating a comprehensive digital heritage of Portugal's web presence. You can see how often Arquivo-web-crawler visits your website by setting up Dark Visitors agent analytics.

Expected Behavior

Archivers visit websites on a roughly regular cadence, since snapshots are more useful when they are evenly spaced over time. Popular websites receive more frequent visits because they are more likely to be queried in the historical database in the future.

Type

Archiver
Snapshots websites for historical databases

Detail

Operated By Arquivo

Insights

Top Website Robots.txts

1%
1% of top websites are blocking Arquivo-web-crawler

Country of Origin

Portugal
Arquivo-web-crawler normally visits from Portugal

Global Traffic

The percentage of all internet traffic coming from Archivers


Robots.txt

Should I Block Arquivo-web-crawler?

It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want to be part of that record for some reason, you can block archivers.

How Do I Block Arquivo-web-crawler?

⚠️ Manual Robots.txt Edits Are Not Scalable
New agents are created every day. Instead, serve a continuously updating robots.txt that blocks new agents automatically.
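One way to do that is to have your web server fetch a maintained robots.txt from an upstream source on a schedule and serve the cached copy at /robots.txt, instead of editing the file by hand. Below is a minimal Node/TypeScript sketch; ROBOTS_SOURCE_URL is a placeholder for whatever regularly regenerated source you use (for example, one produced by an agent-analytics service), and the fallback rule is only illustrative.

// serve-robots.ts: minimal sketch of serving a continuously updating robots.txt.
// ROBOTS_SOURCE_URL is a placeholder; point it at whatever regularly
// regenerated robots.txt source you trust.
import { createServer } from "node:http";

const ROBOTS_SOURCE_URL = "https://example.com/generated-robots.txt"; // placeholder
const REFRESH_MS = 60 * 60 * 1000; // re-fetch once an hour

// Fallback rules, served until the first successful fetch.
let cachedRobots = "User-agent: Arquivo-web-crawler\nDisallow: /\n";

async function refreshRobots(): Promise<void> {
  try {
    const res = await fetch(ROBOTS_SOURCE_URL);
    if (res.ok) {
      cachedRobots = await res.text();
    }
  } catch {
    // Keep serving the last known copy if the upstream is unreachable.
  }
}

refreshRobots();
setInterval(refreshRobots, REFRESH_MS);

createServer((req, res) => {
  if (req.url === "/robots.txt") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(cachedRobots);
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);

Adapt the handler into whatever server or framework you already run; the point is that the rules refresh themselves without manual edits.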

You can block Arquivo-web-crawler or limit its access by setting user agent token rules in your website's robots.txt. Set up Dark Visitors agent analytics to check whether it's actually following them.

User Agent String
Arquivo-web-crawler (compatible; heritrix/3.4.0-20200304 +https://arquivo.pt/faq-crawling)

# robots.txt
# This should block Arquivo-web-crawler

User-agent: Arquivo-web-crawler
Disallow: /
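
To check whether the crawler actually honors these rules, you can also scan your own access logs for requests carrying the user agent string above and see which paths it keeps requesting after the block went live. A rough sketch, assuming a combined-format log at ./access.log (both the log path and the script are hypothetical):

// check-crawler.ts: rough sketch that lists paths requested by Arquivo-web-crawler,
// assuming a combined-format access log at ./access.log (hypothetical path).
import { readFileSync } from "node:fs";

const AGENT_TOKEN = "Arquivo-web-crawler";
const lines = readFileSync("./access.log", "utf8").split("\n");

const hits = new Map<string, number>();
for (const line of lines) {
  if (!line.includes(AGENT_TOKEN)) continue;
  // Combined log format records the request as "GET /path HTTP/1.1"
  const match = line.match(/"(?:GET|POST|HEAD) (\S+) HTTP/);
  if (match) hits.set(match[1], (hits.get(match[1]) ?? 0) + 1);
}

// Paths still showing up after you added the Disallow rule suggest the
// crawler has not re-read (or is not honoring) your robots.txt.
for (const [path, count] of [...hits].sort((a, b) => b[1] - a[1])) {
  console.log(`${count}\t${path}`);
}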
