Quickstart


NOTE: These instructions assume that you have Apache Maven installed. You will also need to install Apache Storm to run the crawler. The version of Storm must match the one defined in the pom.xml file of your topology. The major version of StormCrawler mirrors that of Apache Storm: StormCrawler 1.x used Storm 1.2.3, whereas the current version requires Storm 2.4.0. Our Ansible-Storm repository contains resources to install Apache Storm using Ansible.
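
If you are unsure which Storm release is on your path, the Storm command line can tell you. This is just a quick sanity check (the exact output format varies between releases):

storm version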

Once Storm is installed, the easiest way to get started is to generate a brand-new StormCrawler project using:

mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-archetype -DarchetypeVersion=2.4

You'll be asked to enter a groupId (e.g. com.mycompany.crawler), an artifactId (e.g. stormcrawler), a version and a package name.
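
If you'd rather skip the interactive prompts, the standard Maven archetype plugin also accepts these values as properties on the command line; for example (the groupId, artifactId, version and package below are placeholders, pick your own):

mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-archetype -DarchetypeVersion=2.4 -DgroupId=com.mycompany.crawler -DartifactId=stormcrawler -Dversion=1.0-SNAPSHOT -Dpackage=com.mycompany.crawler -DinteractiveMode=false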

This creates a fully formed project containing a POM with the StormCrawler dependency, the default resource files, a default CrawlTopology class and a configuration file. Enter the directory you just created (its name should be the same as the artifactId you specified earlier, e.g. stormcrawler) and follow the instructions in the README file.
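
The generated project looks roughly like this. This is an indicative layout only; file names and resources may differ slightly between archetype versions:

stormcrawler/
    pom.xml
    README.md
    crawler-conf.yaml
    src/main/java/com/mycompany/crawler/CrawlTopology.java
    src/main/resources/    (URL and parse filter definitions, e.g. urlfilters.json, parsefilters.json)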

Alternatively, if you can't or don't want to use the Maven archetype above, you can simply copy the files from archetype-resources.

Have a look at the code of the CrawlTopology class, the crawler-conf.yaml file and the files in 'src/main/resources/': they are all that is needed to run a crawl topology, as all the other components come from the core module.

What this CrawlTopology does is very simple: it keeps a list of URLs in a spout, which emits them on the topology. These URLs are then partitioned by hostname to enforce politeness, and then fetched. The next bolt (SiteMapParserBolt) checks whether they are sitemap files and, if not, passes them on to an HTML parser. The parser extracts the text from the document and passes it to a dummy indexer which simply prints a representation of the content to standard output. The last component of the topology gathers information about newly discovered URLs (found by the parsing bolts) or changes to the status of the URLs emitted by the spout (redirections, errors, success) and adds that to the data structure held in memory by the spout, as illustrated in the sketch below.
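
As a rough illustration only, such a topology can be wired up along the following lines using classes from the StormCrawler core module. This is a simplified sketch, not a copy of the generated class: the archetype's CrawlTopology may differ in its details, and the seed URL is just a placeholder.

    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.tuple.Fields;

    import com.digitalpebble.stormcrawler.ConfigurableTopology;
    import com.digitalpebble.stormcrawler.Constants;
    import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
    import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
    import com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt;
    import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
    import com.digitalpebble.stormcrawler.indexing.StdOutIndexer;
    import com.digitalpebble.stormcrawler.persistence.MemoryStatusUpdater;
    import com.digitalpebble.stormcrawler.spout.MemorySpout;

    public class CrawlTopology extends ConfigurableTopology {

        public static void main(String[] args) throws Exception {
            ConfigurableTopology.start(new CrawlTopology(), args);
        }

        @Override
        protected int run(String[] args) {
            TopologyBuilder builder = new TopologyBuilder();

            // in-memory spout holding the seed URLs (placeholder URL)
            builder.setSpout("spout", new MemorySpout("http://example.com/"));

            // partition the URLs by hostname so that politeness can be enforced per host
            builder.setBolt("partitioner", new URLPartitionerBolt())
                   .shuffleGrouping("spout");

            // fetch the URLs, grouped on the partition key
            builder.setBolt("fetch", new FetcherBolt())
                   .fieldsGrouping("partitioner", new Fields("key"));

            // detect and parse sitemap files, pass everything else through
            builder.setBolt("sitemap", new SiteMapParserBolt())
                   .localOrShuffleGrouping("fetch");

            // parse HTML: extract text and outlinks
            builder.setBolt("parse", new JSoupParserBolt())
                   .localOrShuffleGrouping("sitemap");

            // dummy indexer which prints a representation of the documents to stdout
            builder.setBolt("index", new StdOutIndexer())
                   .localOrShuffleGrouping("parse");

            // collect status updates (discovered URLs, redirections, errors, successes)
            // and feed them back into the in-memory structure used by the spout
            builder.setBolt("status", new MemoryStatusUpdater())
                   .localOrShuffleGrouping("fetch", Constants.StatusStreamName)
                   .localOrShuffleGrouping("sitemap", Constants.StatusStreamName)
                   .localOrShuffleGrouping("parse", Constants.StatusStreamName)
                   .localOrShuffleGrouping("index", Constants.StatusStreamName);

            return submit("crawl", conf, builder);
        }
    }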

Of course this topology is very primitive and its purpose is merely to give you an idea of how StormCrawler works. In reality you'd use a different spout and index the documents into a proper backend. Look at the external modules to see what's already available. Another limitation of this topology is that it will only work in local mode or with a single worker.

Once the project has been built (e.g. with mvn clean package), you can run the topology in local mode with:

storm jar target/_INSERTJARNAMEHERE_.jar CrawlTopology -conf crawler-conf.yaml -local
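
To submit the same topology to a distributed Storm cluster, drop the -local flag and keep the rest of the command unchanged (the generated class submits to the cluster when the flag is absent), e.g.:

storm jar target/_INSERTJARNAMEHERE_.jar CrawlTopology -conf crawler-conf.yaml

Bear in mind, though, that this particular topology is only really useful in local mode or with a single worker, as noted above.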


The WIKI pages contain useful information on the components and configuration and should help you go further.


If you want to use StormCrawler with Elasticsearch, the tutorial below should be a good starting point.