crawl - a small and efficient HTTP crawler
The crawl utility starts a depth-first traversal of the web at
the specified URLs. It stores all JPEG images that match the configured
constraints. Crawl is fairly fast and allows for graceful termination.
After terminating crawl, it is possible to restart it at exactly the
same spot where it was terminated. Crawl keeps a persistent database
that allows multiple crawls without revisiting sites.
The main reason for writing crawl was the lack of simple
open source web crawlers. Crawl is only a few thousand lines of code
and fairly easy to debug and customize.
Features
- Saves encountered images or other media types
- Media selection based on regular expressions and size constraints (sketched below)
- Resume previous crawl after graceful termination
- Persistent database of visited URLs
- Very small and efficient code
- Asynchronous DNS lookups
- Supports robots.txt
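Media selection is driven by the crawl configuration; see the distribution for the exact syntax. As a rough illustration of the idea (the pattern, size bounds, and function name below are made up for this sketch, not taken from crawl's source), a filter might accept an object only when its URL matches a regular expression and its size falls inside a configured window:

#include <sys/types.h>
#include <regex.h>
#include <string.h>

/* Hypothetical filter: store an object only if its URL matches the
 * configured pattern and its size falls within the configured bounds.
 * Pattern and limits are examples, not crawl's actual defaults.
 */
static int
want_media(const char *url, size_t len)
{
    static const char *pattern = "\\.jpe?g$";      /* JPEG images */
    static const size_t minlen = 8 * 1024;         /* skip tiny thumbnails */
    static const size_t maxlen = 4 * 1024 * 1024;  /* skip huge files */
    regex_t re;
    int match;

    if (len < minlen || len > maxlen)
        return (0);
    if (regcomp(&re, pattern, REG_EXTENDED | REG_ICASE | REG_NOSUB) != 0)
        return (0);
    match = (regexec(&re, url, 0, NULL, 0) == 0);
    regfree(&re);
    return (match);
}

int
main(void)
{
    /* A 200 KB JPEG URL should pass the filter. */
    return (!want_media("http://www.w3.org/Icons/photo.jpg", 200 * 1024));
}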
The current version of Crawl identifies itself to web servers as Crawl/0.4 libcrawl/0.1.
Its default configuration also limits how often fetches are issued against the same
web server.
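That per-server limit amounts to spacing out consecutive requests to the same host. A minimal sketch of the idea, assuming a simple host table and an illustrative 30-second interval (neither is crawl's actual implementation or default):

#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAXHOSTS        1024
#define FETCH_INTERVAL  30      /* illustrative: seconds between fetches per host */

struct hoststate {
    char   name[256];
    time_t lastfetch;
};

static struct hoststate hosts[MAXHOSTS];

/* Return non-zero if a fetch to this host is currently allowed and record
 * the fetch time; otherwise the caller would requeue the URL for later.
 */
static int
fetch_allowed(const char *host)
{
    time_t now = time(NULL);
    int i;

    for (i = 0; i < MAXHOSTS; i++) {
        if (hosts[i].name[0] == '\0') {
            /* First time we see this host: remember it and allow the fetch. */
            snprintf(hosts[i].name, sizeof(hosts[i].name), "%s", host);
            hosts[i].lastfetch = now;
            return (1);
        }
        if (strcmp(hosts[i].name, host) == 0) {
            if (now - hosts[i].lastfetch < FETCH_INTERVAL)
                return (0);
            hosts[i].lastfetch = now;
            return (1);
        }
    }
    return (1);  /* table full: fall back to allowing the fetch */
}

int
main(void)
{
    /* A second request within the interval should be rejected. */
    return (!(fetch_allowed("www.w3.org") && !fetch_allowed("www.w3.org")));
}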
Download
The crawl utility is distributed under a BSD license and is completely
free for any use, including commercial use.
Building
To build crawl, you need libevent, a library for asynchronous event notification.
You also need Berkeley DB compiled with --enable-compat185 for 1.85 compatibility.
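The --enable-compat185 option installs db_185.h and the classic dbopen() interface, which is what the 1.85 compatibility requirement refers to. A minimal sketch of keeping a persistent visited-URL database through that interface (the file name and helper are illustrative, not crawl's actual code):

#include <sys/types.h>
#include <fcntl.h>
#include <string.h>
#include <db_185.h>  /* installed when Berkeley DB is built with --enable-compat185 */

/* Hypothetical helper: return 1 if the URL was already visited,
 * otherwise record it and return 0.
 */
static int
mark_visited(DB *db, const char *url)
{
    DBT key, data;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = (void *)url;
    key.size = strlen(url);

    if (db->get(db, &key, &data, 0) == 0)
        return (1);          /* seen before */

    data.data = "";
    data.size = 1;
    db->put(db, &key, &data, 0);
    return (0);
}

int
main(void)
{
    /* "visited.db" is an example name, not crawl's database file. */
    DB *db = dbopen("visited.db", O_CREAT | O_RDWR, 0644, DB_BTREE, NULL);

    if (db == NULL)
        return (1);
    mark_visited(db, "http://www.w3.org/");
    db->close(db);
    return (0);
}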
Example
$ crawl -m 0 http://www.w3.org/
Searches for images on the index page of the World Wide Web Consortium without
following any other links.
Acknowledgements
This product includes software developed by Ericsson Radio Systems.
This product includes software developed by the University of California,
Berkeley and its contributors.
Support
If you are inclined, you can leave a tip for me with
PayPal.