ICC-Crawler
What is ICC-Crawler?
ICC-Crawler is the research crawler of NICT, Japan's National Institute of Information and Communications Technology. It automatically collects web pages from the Internet for academic research. You can see how often ICC-Crawler visits your website by setting up Dark Visitors agent analytics.
Expected Behavior
It's generally unclear how AI data scrapers choose which websites to crawl and how often to crawl them. They might visit websites with a higher information density more frequently, depending on the type of AI models they're training. For example, an agent gathering training data for an LLM (large language model) would plausibly favor sites with a lot of regularly updated text content.
Type | Detail |
Operated By | NICT |
Last Updated | 13 hours ago |
Insights
[Charts: top website robots.txts, country of origin, and global traffic (the percentage of all internet traffic coming from AI data scrapers)]
Robots.txt
Should I Block ICC-Crawler?
It's up to you. AI data scrapers usually download publicly available internet content, which is freely accessible by default. However, you might want to block them if you're concerned about attribution or how your creative work could be used in the resulting AI model.
How Do I Block ICC-Crawler?
You can block ICC-Crawler or limit its access by setting user agent token rules in your website's robots.txt. Set up Dark Visitors agent analytics to check whether it's actually following them.
User Agent String | ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html) |
# robots.txt
# This should block ICC-Crawler
User-agent: ICC-Crawler
Disallow: /
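To check whether ICC-Crawler keeps visiting after you add the rule, you can scan your web server's access log for its user agent token. Below is a minimal sketch in Python, assuming a Combined Log Format access log; `count_crawler_hits` and the sample log lines are hypothetical, not part of any library or of the Dark Visitors service:

```python
import re

# User agent token to look for (from the table above)
UA_TOKEN = "ICC-Crawler"

def count_crawler_hits(log_lines):
    """Return (hit_count, requested_paths) for log lines whose
    user agent field contains the ICC-Crawler token.

    Assumes Combined Log Format:
    ip - - [date] "METHOD /path HTTP/x" status size "referer" "user-agent"
    """
    pattern = re.compile(
        r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]*" '  # request line
        r'\d+ \S+ '                                   # status and size
        r'"[^"]*" '                                   # referer
        r'"(?P<ua>[^"]*)"'                            # user agent
    )
    hits = 0
    paths = []
    for line in log_lines:
        m = pattern.search(line)
        if m and UA_TOKEN in m.group("ua"):
            hits += 1
            paths.append(m.group("path"))
    return hits, paths

# Hypothetical sample log lines for illustration
sample = [
    '203.0.113.5 - - [01/Jan/2025:00:00:01 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
    '203.0.113.9 - - [01/Jan/2025:00:00:02 +0000] "GET /articles/a.html HTTP/1.1" 200 2048 "-" "ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html)"',
]
hits, paths = count_crawler_hits(sample)
print(hits, paths)
```

If the crawler still requests disallowed paths after your robots.txt change propagates, it is not honoring the rule; Dark Visitors agent analytics automates the same kind of check.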