Web Robot - Crawler


What is a web crawler?

A web crawler is an application that reads web resources (mostly web pages) and parses them to extract meaningful information.


A crawl cycle consists of 4 steps:

  • Select: selects the urls to fetch.
    • All urls are partitioned by domain, host, or IP. This means that all urls from the same domain (host, IP) end up in the same partition and are handled by the same (reduce) task. Within each partition, the urls are sorted by score (best first).
    • At most topN urls are selected.
  • Fetch: downloads the selected urls.
  • Parse: parses all fetched web pages (scrape).
  • Persist: persists the parse output in a database.
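The select and parse steps above can be sketched in a few lines of Python. This is an illustrative sketch, not a real crawler: the frontier urls, the scores, and the title-only parser are all invented for the example, and a real cycle would fetch each selected url (e.g. with an HTTP client) before parsing and persisting.

```python
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urlparse

def select(frontier, top_n):
    """Step 1: partition candidate urls by host, then keep the
    best-scored top_n urls within each partition."""
    by_host = defaultdict(list)
    for url, score in frontier:
        by_host[urlparse(url).hostname].append((url, score))
    return {host: sorted(urls, key=lambda u: u[1], reverse=True)[:top_n]
            for host, urls in by_host.items()}

class TitleParser(HTMLParser):
    """Step 3 (minimal): extract the page title from fetched HTML."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Hypothetical frontier: (url, score) pairs accumulated by earlier cycles.
frontier = [("https://example.com/a", 0.9),
            ("https://example.com/b", 0.4),
            ("https://example.org/c", 0.7)]
batch = select(frontier, top_n=1)
# → one best-scored url per host: example.com/a and example.org/c

# Fetch would happen here; we parse canned HTML instead.
parser = TitleParser()
parser.feed("<html><head><title>Hello</title></head></html>")
# parser.title == "Hello"; the persist step would write this to a database.
```

The per-host partitioning is what makes politeness enforceable later: every url of a given host goes through the same task, so that task can throttle its own request rate.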

A crawler needs to respect the rate-limiting configuration (for instance, a minimum delay between requests to the same host).
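A minimal per-host rate limiter can be sketched as follows. The delay value and the class name are illustrative assumptions, not a prescribed configuration format:

```python
import time
from urllib.parse import urlparse

class RateLimiter:
    """Enforce a minimum delay between requests to the same host
    (a simple politeness policy; the delay value is illustrative)."""
    def __init__(self, min_delay_seconds=1.0):
        self.min_delay = min_delay_seconds
        self.last_fetch = {}  # host -> monotonic timestamp of the last request

    def wait(self, url):
        """Block until it is polite to hit the url's host again."""
        host = urlparse(url).hostname
        now = time.monotonic()
        elapsed = now - self.last_fetch.get(host, float("-inf"))
        if elapsed < self.min_delay:
            time.sleep(self.min_delay - elapsed)
        self.last_fetch[host] = time.monotonic()

limiter = RateLimiter(min_delay_seconds=0.1)
limiter.wait("https://example.com/a")  # first request to the host: no wait
limiter.wait("https://example.com/b")  # same host: sleeps ~0.1 s
```

Because the select step groups urls by host, a single limiter instance per task is enough to keep the whole crawl polite toward each server.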


Crawlers are built with a headless browser library, so that JavaScript-rendered content can also be read.
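As one possible sketch, a headless fetch could look like the following with the Playwright library (assumed installed via `pip install playwright` plus `playwright install chromium`; the function name is illustrative):

```python
def render_page(url: str) -> str:
    """Fetch a page with a headless Chromium and return the rendered HTML.

    Unlike a plain HTTP GET, the browser executes the page's JavaScript,
    so content injected at runtime is present in the returned markup.
    """
    # Imported inside the function so the module loads even when
    # Playwright is not installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        html = page.content()  # fully rendered DOM
        browser.close()
    return html

# Usage (performs a real browser fetch):
# html = render_page("https://example.com")
```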



Documentation / Reference

Discover More
Text Mining
Natural Language - Crawler

A crawler is an application (bot) that reads a document (such as a web page, a Word file, ..) and parses it to extract meaningful information. Software for scanning large bodies of text such as collections...
Web - Robots.txt

robots.txt is a file that controls and gives permissions to web bots when they crawl your website. Googlebot should not crawl all sub-directories. All other...
Web Search - Googlebot

googlebot is the crawler bot of Google that searches and feeds the index of the Google search engine. When Googlebot renders a page, it flattens the shadow DOM and light DOM content. Googlebot...
