Arquivo-web-crawler


What is Arquivo-web-crawler?

About

Arquivo-web-crawler is an archiver operated by Arquivo.pt. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us.

Detail

Operator: Arquivo.pt
Documentation: https://arquivo.pt/faq-crawling

Type

Archiver
Takes snapshots of websites for historical databases

Expected Behavior

Archivers visit websites on a roughly regular cadence, since snapshots are more useful when they're evenly spaced over time. Popular websites receive more frequent visits because they are more likely to be queried in the historical database later.

Insights

Activity on Your Website

Half of your website's traffic probably comes from artificial agents, and there are more of them every day. Track their activity with the API or WordPress plugin.


Other Websites

0% of top websites are currently blocking Arquivo-web-crawler in some way.

Access Control

Should I Block Arquivo-web-crawler?

It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want to be part of that record for some reason, you can block archivers.

Using Robots.txt

User Agent Token: Arquivo-web-crawler
Description: Should match instances of Arquivo-web-crawler

You can block Arquivo-web-crawler or limit its access by setting user agent token rules in your website's robots.txt.

# robots.txt
# This should block Arquivo-web-crawler

User-agent: Arquivo-web-crawler
Disallow: /
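If you only want to keep the archiver out of part of your site rather than blocking it entirely, you can disallow specific paths instead. A minimal sketch (the /private/ path here is just a placeholder for whatever you want excluded):

```
# robots.txt
# Allow Arquivo-web-crawler everywhere except one directory

User-agent: Arquivo-web-crawler
Disallow: /private/
```

Most well-behaved crawlers treat `Disallow` as a path prefix, so this rule covers everything under /private/.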

Instead of doing this manually, you can generate a robots.txt using the API or WordPress plugin that stays up to date with the agent list automatically. The WordPress plugin can also enforce your robots.txt and block agents that try to ignore the rules.
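Before deploying a robots.txt, it can be worth sanity-checking that the rules actually match the user agent token you intend. A small sketch using Python's standard-library robots.txt parser (the URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# The rules from the example above
rules = """\
User-agent: Arquivo-web-crawler
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The crawler's token is denied everywhere; an unrelated bot is unaffected
print(parser.can_fetch("Arquivo-web-crawler", "https://example.com/page"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))         # True
```

Note that robots.txt is advisory: the parser only tells you what a compliant crawler would do, which is why server-side enforcement (as the WordPress plugin offers) is a separate step.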
