Arquivo-web-crawler
What is Arquivo-web-crawler?
About
Arquivo-web-crawler is an archiver operated by Arquivo.pt. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us.
You can set up agent analytics to see when Arquivo-web-crawler visits your website.
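If you just want a quick signal before setting up full agent analytics, you can look for the crawler's user agent token in your server's access logs. This is a minimal sketch under assumed conditions: the log format shown is hypothetical, and matching on a substring of the User-Agent header is a rough heuristic (it does not verify that the visitor genuinely is Arquivo-web-crawler).

```python
# Minimal sketch: count Arquivo-web-crawler hits in an access log.
# The sample log lines below are hypothetical; real logs vary by server.
def count_crawler_hits(log_lines, token="Arquivo-web-crawler"):
    """Return how many log lines contain the crawler's user agent token."""
    return sum(1 for line in log_lines if token in line)

sample = [
    '1.2.3.4 - - [01/Jan/2024] "GET / HTTP/1.1" 200 "-" "Arquivo-web-crawler/1.0"',
    '5.6.7.8 - - [01/Jan/2024] "GET /about HTTP/1.1" 200 "-" "Mozilla/5.0"',
]
print(count_crawler_hits(sample))  # → 1
```

A dedicated analytics tool will do more robust verification than substring matching, but this is enough to see whether the token appears in your traffic at all.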
Detail
| Field | Value |
|---|---|
| Operator | Arquivo.pt |
| Documentation | https://arquivo.pt/faq-crawling |
| Type | Archiver |
Expected Behavior
Archivers visit websites on a roughly regular cadence, since snapshots are most useful when they are evenly spaced over time. Popular websites receive more frequent visits, since they are more likely to be queried in the historical database later.
Insights
Arquivo-web-crawler Visiting Your Website
A large share of your traffic likely comes from artificial agents, and their numbers grow every day. Track their activity with the API or WordPress plugin.
Set Up Agent Analytics
Access Control
Should I Block Arquivo-web-crawler?
It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want to be part of that record for some reason, you can block archivers.
Using Robots.txt
You can block Arquivo-web-crawler or limit its access by setting user agent token rules in your website's robots.txt. We recommend setting up agent analytics to check whether it's actually following them.
| User Agent Token | Description |
|---|---|
| Arquivo-web-crawler | Should match instances of Arquivo-web-crawler |
```
# robots.txt
# This should block Arquivo-web-crawler
User-agent: Arquivo-web-crawler
Disallow: /
```
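You can sanity-check a rule like this before deploying it with Python's standard-library `urllib.robotparser`, which applies the same user-agent matching your robots.txt relies on. The URLs below are placeholders:

```python
import urllib.robotparser

# Parse a robots.txt that blocks only Arquivo-web-crawler.
rules = [
    "User-agent: Arquivo-web-crawler",
    "Disallow: /",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# The blocked crawler is denied; agents with no matching group are allowed.
print(rp.can_fetch("Arquivo-web-crawler", "https://example.com/page"))  # → False
print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))         # → True
```

Note that robots.txt is advisory: well-behaved crawlers like this one are expected to honor it, but nothing enforces it, which is why checking your logs afterward is worthwhile.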
Instead of doing this manually, you can use the API or WordPress plugin to keep your robots.txt automatically updated with the latest known AI scrapers, crawlers, and assistants.
Set Up Automatic Robots.txt