Alexa Archive
What is Alexa Archive?
About
Alexa Archive is a search engine crawler operated by Alexandria.org. It's not currently known to be artificially intelligent or AI-related. If you think that's incorrect or can provide more detail about its purpose, please contact us. You can see how often Alexa Archive visits your website by setting up Dark Visitors agent analytics.
Detail
Operator | Type |
---|---|
Alexandria.org | Search Engine Crawler |
Expected Behavior
Search engine crawlers do not adhere to a fixed visitation schedule. How often a website is crawled varies widely based on several factors, including its popularity, how frequently its content is updated, and its overall trustworthiness. Websites with fresh, high-quality content tend to be crawled more frequently, while less active or less reputable sites may be visited less often.
Agent Analytics
Visits to Your Website
Half of your traffic probably comes from artificial agents, and there are more of them every day. Track their activity with agent analytics.
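As a rough self-check alongside analytics, the sketch below tallies daily requests in a server access log whose user agent contains the crawler's token. The log path, the combined log format, and the assumption that the token "Alexa Archive" appears verbatim in the User-Agent header are all assumptions; adjust them to match your setup.
# count_alexa_archive.py
# Minimal sketch: count daily requests whose user agent contains a given token.
# The log location and combined log format are assumptions.
import re
from collections import Counter

ACCESS_LOG = "/var/log/nginx/access.log"  # assumed log location
UA_TOKEN = "Alexa Archive"                # assumed to appear verbatim in the User-Agent header

# Combined log format wraps the timestamp in square brackets, e.g. [10/Oct/2024:13:55:36 +0000]
DAY_RE = re.compile(r"\[(?P<day>[^:]+):")

def visits_per_day(path=ACCESS_LOG, token=UA_TOKEN):
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if token not in line:
                continue
            match = DAY_RE.search(line)
            if match:
                counts[match.group("day")] += 1
    return counts

if __name__ == "__main__":
    for day, hits in sorted(visits_per_day().items()):
        print(f"{day}: {hits} request(s)")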
Robots.txt
Should I Block Alexa Archive?
Probably not. Search engine crawlers power search engines, which are a useful way for users to discover your website. In fact, blocking search engine crawlers could severely reduce your traffic.
How Do I Block Alexa Archive?
You can block Alexa Archive or limit its access by setting user agent token rules in your website's robots.txt. Set up Dark Visitors agent analytics to check whether it's actually following them.
User Agent Token | Description |
---|---|
Alexa Archive | Should match instances of Alexa Archive |
# robots.txt
# This should block Alexa Archive
User-agent: Alexa Archive
Disallow: /
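Before deploying the rules, you can sanity-check them locally with Python's standard urllib.robotparser, which evaluates a robots.txt body against a user agent token. This is a minimal sketch; the example path is illustrative only.
# check_rules.py
# Minimal sketch: confirm the rules above deny the "Alexa Archive" token
# while leaving other agents unaffected.
from urllib import robotparser

RULES = """\
User-agent: Alexa Archive
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

print(parser.can_fetch("Alexa Archive", "/"))          # False: blocked everywhere
print(parser.can_fetch("Alexa Archive", "/any/page"))  # False (illustrative path)
print(parser.can_fetch("SomeOtherBot", "/"))           # True: other agents are unaffected
Keep in mind that robots.txt is advisory; compliant crawlers honor it, but you still need log data or agent analytics to confirm the crawler is actually following the rules.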
Recommended Solution
Instead of doing this manually, use automatic robots.txt to keep your rules updated with the latest AI scrapers, crawlers, and assistants.
Global Bot & LLM Traffic
[Chart: the overall volume of internet traffic coming from Search Engine Crawlers]
See Global Bot & LLM Traffic →