Turnitin
What is Turnitin?
About
The Turnitin crawler gathers web content to build a comprehensive database for its plagiarism detection service, which compares student papers against internet content to check academic integrity. You can see how often Turnitin visits your website by setting up Dark Visitors Agent Analytics.
Agent Type
Expected Behavior
Archivers crawl websites to create historical snapshots for preservation purposes. They typically visit on a regular cadence to build a chronological record of how content changes over time. Crawl frequency varies based on site popularity and content update patterns. Unlike search crawlers, archivers aim to capture and store complete page states rather than extract information for indexing.
Detail
Operated By: Turnitin
Last Updated: 1 day ago
Robots.txt
Should I Block Turnitin?
It's up to you. Digital archiving is generally done to preserve a historical record. If you don't want to be part of that record for some reason, you can block archivers.
How Do I Block Turnitin?
You can block Turnitin or limit its access by setting user agent token rules in your website's robots.txt. Set up Dark Visitors Agent Analytics to check whether it's actually following them.
User Agent String: Turnitin (https://bit.ly/2UvnfoQ)
# In your robots.txt ...
User-agent: Turnitin # https://darkvisitors.com/agents/turnitin
Disallow: /
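Note that robots.txt is advisory, so a crawler may ignore it. If you want to enforce the rule server-side, you can match the User-Agent header of incoming requests yourself. A minimal sketch in Python, assuming the crawler identifies itself with a User-Agent string containing the "Turnitin" token shown above (the helper name is hypothetical):

```python
def is_turnitin(user_agent: str) -> bool:
    """Return True if a request's User-Agent header looks like the
    Turnitin crawler. Assumes the crawler sends a User-Agent string
    containing the token "Turnitin", per the string listed above."""
    return "turnitin" in user_agent.lower()


# Example: inside your request handler, you could rate-limit, log,
# or reject requests where is_turnitin(request.headers["User-Agent"])
# returns True.
```

This complements rather than replaces the robots.txt rule: well-behaved crawlers honor robots.txt, while a server-side check catches those that don't.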
⚠️ Manual Robots.txt Editing Is Not Scalable
New agents are created every day. We recommend setting up Dark Visitors Automatic Robots.txt if you want to block all agents of this type.