A collection of resources for building low-latency, scalable web crawlers on Apache Storm

StormCrawler is an open source SDK for building distributed web crawlers based on Apache Storm. The project is licensed under the Apache License v2 and consists of a collection of reusable resources and components, written mostly in Java.

The aim of StormCrawler is to help build web crawlers that are scalable, low latency, easy to extend, and polite yet efficient.

StormCrawler is a library and collection of resources that developers can leverage to build their own crawlers. The good news is that doing so can be pretty straightforward! Have a look at the Getting Started section for more details.

Apart from the core components, we provide external resources that you can reuse in your project, such as our spouts and bolts for Elasticsearch, or a ParserBolt which uses Apache Tika to parse various document formats.
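To give a feel for how these components fit together, here is a minimal sketch of a Storm topology wired from StormCrawler building blocks. The class and package names used below (`MemorySpout`, `FetcherBolt`, `JSoupParserBolt` under `com.digitalpebble.stormcrawler`, and the `http.agent.name` setting) reflect one version of the project and may differ in yours; treat this as an illustrative wiring sketch rather than a drop-in build, and see the Getting Started section for the canonical setup.

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.spout.MemorySpout;

public class CrawlTopologySketch {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Seed URLs held in memory: handy for a local demo,
        // replaced by e.g. an Elasticsearch spout in production.
        builder.setSpout("spout",
                new MemorySpout(new String[] { "https://example.com/" }));

        // Fetch pages (robots.txt handling, per-host politeness).
        builder.setBolt("fetch", new FetcherBolt())
               .shuffleGrouping("spout");

        // Parse the fetched HTML and extract outlinks.
        builder.setBolt("parse", new JSoupParserBolt())
               .localOrShuffleGrouping("fetch");

        Config conf = new Config();
        // A user agent string is required by the fetcher.
        conf.put("http.agent.name", "my-test-crawler");

        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("crawl", conf, builder.createTopology());
            Thread.sleep(60_000); // let the local topology run for a minute
        }
    }
}
```

In a real deployment the topology would be submitted to a Storm cluster rather than a `LocalCluster`, and the in-memory spout would be swapped for one backed by a persistent URL frontier.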

StormCrawler is perfectly suited to use cases where the URLs to fetch and parse come as streams, but it is also an appropriate solution for large-scale recursive crawls, particularly where low latency is required. The project is used in production by many organisations and is actively developed and maintained.

The Presentations page contains links to some recent presentations made about this project.

We are very grateful to our sponsors for their continued support.