What Is ICC-Crawler?
ICC-Crawler is a research crawler operated by Japan's National Institute of Information and Communications Technology (NICT) that automatically collects web pages from the Internet for academic research. You can see how often ICC-Crawler visits your website by setting up Dark Visitors Agent Analytics.
Agent Type
Expected Behavior
AI data scrapers systematically crawl websites to collect training data for machine learning models. Unlike search engine crawlers that index for retrieval, these scrapers download content specifically for model training. Their crawling patterns are typically opaque. Operators rarely disclose site selection, frequency, or priorities. Scrapers may crawl more aggressively than traditional search engines, and the collected data becomes part of training datasets with limited transparency about attribution or usage.
Detail
| Operated By | NICT |
| Last Updated | 9 hours ago |
Top Website Robots.txts
Country of Origin
Top Website Blocking Trend Over Time
The percentage of the world's top 1000 websites that are blocking ICC-Crawler
Overall AI Data Scraper Traffic
The percentage of all internet traffic coming from AI data scrapers
User Agent String
| Example | ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html) |
Access other known user agent strings and recent IP addresses using the API.
Robots.txt
In this example, all pages are blocked. To restrict only part of your site, replace / with the path you want disallowed.
User-agent: ICC-Crawler # https://darkvisitors.com/agents/icc-crawler
Disallow: /
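For example, to block ICC-Crawler from only part of a site, you can disallow specific paths instead of the root. The /private/ and /drafts/ directories below are placeholders; substitute the paths you actually want off-limits.

```
User-agent: ICC-Crawler # https://darkvisitors.com/agents/icc-crawler
Disallow: /private/
Disallow: /drafts/
```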
Frequently Asked Questions About ICC-Crawler
Should I Block ICC-Crawler?
Consider your priorities. ICC-Crawler collects content for training machine learning models. While this content is publicly accessible, you may want to block it if you're concerned about attribution, compensation, or how your creative work might be used in AI systems or generated outputs.
How Do I Block ICC-Crawler?
You can block or limit ICC-Crawler's access by adding user agent token rules to your robots.txt file. The best way to do this is using Automatic Robots.txt, which blocks all agents of this type and updates continuously as new agents are released. While the vast majority of agents operated by reputable companies honor these robots.txt directives, bad actors may ignore them entirely. In that case, you'll need alternative blocking methods such as firewall rules or server-level restrictions. You can verify whether ICC-Crawler is respecting your rules by setting up Agent Analytics to monitor its visits to your website.
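As one sketch of a server-level restriction, an nginx configuration can return 403 for requests whose user agent contains the crawler's token. This only matches the self-reported user agent string, so it stops honest crawlers but not spoofed ones:

```nginx
# Inside a server block: reject requests identifying as ICC-Crawler.
# Matching on the user agent header is best-effort; a bad actor can
# send a different string.
if ($http_user_agent ~* "ICC-Crawler") {
    return 403;
}
```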
Will Blocking ICC-Crawler Hurt My SEO?
Blocking AI data scrapers has minimal direct SEO impact since these tools don't contribute to search engine indexing. However, if your content is used to train models that power AI search engines, blocking scrapers might reduce your representation in AI-generated responses, potentially affecting future discoverability.
Does ICC-Crawler Access Private Content?
AI data scrapers typically focus on publicly available content for training data collection. However, some may attempt to access password-protected areas, API endpoints, or content behind paywalls. The scope varies widely depending on the operator's goals and technical sophistication. Most respect authentication barriers, but some may use techniques to bypass access controls.
How Can I Tell if ICC-Crawler Is Visiting My Website?
Setting up Agent Analytics will give you real-time visibility into ICC-Crawler's visits to your website, along with hundreds of other AI agents, crawlers, and scrapers. It will also let you measure human traffic to your website coming from AI search and chat platforms like ChatGPT, Perplexity, and Gemini.
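If you'd rather check your own server logs directly, a minimal sketch is to count requests whose user agent contains the crawler's token. The sample log lines below are hypothetical; the user agent string matches the example shown above.

```python
# Hypothetical combined-format access log lines for illustration.
LOG_LINES = [
    '203.0.113.5 - - [01/Jan/2025:00:00:01 +0000] "GET / HTTP/1.1" 200 1234 '
    '"-" "ICC-Crawler/3.0 (Mozilla-compatible; ; https://ucri.nict.go.jp/en/icccrawler.html)"',
    '198.51.100.7 - - [01/Jan/2025:00:00:02 +0000] "GET /about HTTP/1.1" 200 2048 '
    '"-" "Mozilla/5.0"',
]

def count_agent_hits(lines, agent_token="ICC-Crawler"):
    """Count log lines whose user agent contains the given token."""
    return sum(1 for line in lines if agent_token in line)

print(count_agent_hits(LOG_LINES))  # -> 1
```

In practice you would stream lines from your web server's access log file instead of a hardcoded list.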
Why Is ICC-Crawler Visiting My Website?
ICC-Crawler likely found your site through systematic web discovery methods like following links from other indexed sites, processing sitemaps, or using seed URLs from publicly available website lists. Your site may have been selected because it contains the type of content useful for training AI models.
How Can I Authenticate Visits From ICC-Crawler?
Agent Analytics authenticates visits from many known agents, letting you know whether each visit actually came from the agent it claims to be or was spoofed by a bad actor. This helps you identify suspicious traffic patterns and make informed decisions about blocking or allowing specific user agents.
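One common authentication technique for crawlers in general is forward-confirmed reverse DNS: resolve the visiting IP to a hostname, check that the hostname belongs to the operator's domain, then resolve the hostname back and confirm the original IP appears. The "nict.go.jp" suffix below is an assumption for illustration; confirm the hostnames ICC-Crawler's IPs actually resolve to (for example via the Dark Visitors API) before relying on it.

```python
import socket

def hostname_matches(hostname, allowed_suffixes):
    """Check whether a reverse-DNS hostname ends with an expected domain suffix."""
    host = hostname.rstrip(".")
    return any(host == s or host.endswith("." + s) for s in allowed_suffixes)

def verify_crawler_ip(ip, allowed_suffixes):
    """Forward-confirmed reverse DNS check for a visiting IP address."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except (socket.herror, socket.gaierror):
        return False
    if not hostname_matches(hostname, allowed_suffixes):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward confirm
    except socket.gaierror:
        return False
    return ip in forward_ips

# "nict.go.jp" is an assumed suffix, not a confirmed one.
print(hostname_matches("crawler1.nict.go.jp", ["nict.go.jp"]))  # True
print(hostname_matches("evil.example.com", ["nict.go.jp"]))     # False
```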