author     peter <peter@pkgsrc.org>    2005-01-18 17:46:31 +0000
committer  peter <peter@pkgsrc.org>    2005-01-18 17:46:31 +0000
commit     17755172c88d50cc8537a0f7a3bdd4db84fd8fa0 (patch)
tree       567d63160e9c0a97b1563f6dbf544c6d13ba52aa /www/crawl/DESCR
parent     cfe5fc3c8433b06fc0ae38ed5ece78af13236e1a (diff)
download   pkgsrc-17755172c88d50cc8537a0f7a3bdd4db84fd8fa0.tar.gz
Initial import of crawl-0.4 into the NetBSD Packages Collection.
The crawl utility starts a depth-first traversal of the web at the
specified URLs. It stores all JPEG images that match the configured
constraints. Crawl is fairly fast and allows for graceful termination.
After terminating crawl, it is possible to restart it at exactly the
same spot where it was terminated. Crawl keeps a persistent database
that allows multiple crawls without revisiting sites.

The main features of crawl are:

 * Saves encountered images or other media types
 * Media selection based on regular expressions and size constraints
 * Resume previous crawl after graceful termination
 * Persistent database of visited URLs
 * Very small and efficient code
 * Asynchronous DNS lookups
 * Supports robots.txt
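As a rough illustration of the traversal described above (depth-first order
with a persistent database of visited URLs, so a later run resumes without
revisiting sites), here is a minimal Python sketch. It is not crawl's
implementation (crawl itself is written in C); the function and file names
(crawl_dfs, visited.db) and the depth limit are made up for the example, and
a dbm file stands in for the persistent database.

# Minimal sketch only: a depth-first traversal with a persistent
# visited-URL database, so a re-run with the same database file skips
# everything already seen.  Not crawl(1)'s code; names are illustrative.
import dbm
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collect the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_dfs(start_url, db_path="visited.db", max_depth=2):
    # dbm is a tiny persistent key/value store; keeping the visited set
    # in it is what makes the crawl resumable without revisiting sites.
    with dbm.open(db_path, "c") as visited:
        stack = [(start_url, 0)]
        while stack:
            url, depth = stack.pop()          # LIFO stack -> depth-first
            key = url.encode("utf-8")
            if key in visited or depth > max_depth:
                continue
            visited[key] = b"1"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    body = resp.read().decode("utf-8", "replace")
            except OSError:
                continue
            parser = LinkParser()
            parser.feed(body)
            for href in parser.links:
                child = urljoin(url, href)
                if child.startswith(("http://", "https://")):
                    stack.append((child, depth + 1))

if __name__ == "__main__":
    crawl_dfs("http://www.example.com/")

Running the same script twice with the same visited.db illustrates the
"resume" behaviour: the second run skips every URL the first one recorded.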
Diffstat (limited to 'www/crawl/DESCR')
-rw-r--r--   www/crawl/DESCR   16
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/www/crawl/DESCR b/www/crawl/DESCR
new file mode 100644
index 00000000000..854815c3ab0
--- /dev/null
+++ b/www/crawl/DESCR
@@ -0,0 +1,16 @@
+The crawl utility starts a depth-first traversal of the web at the specified
+URLs. It stores all JPEG images that match the configured constraints.
+Crawl is fairly fast and allows for graceful termination. After terminating
+crawl, it is possible to restart it at exactly the same spot where it was
+terminated. Crawl keeps a persistent database that allows multiple crawls
+without revisiting sites.
+
+The main features of crawl are:
+
+ * Saves encountered images or other media types
+ * Media selection based on regular expressions and size constraints
+ * Resume previous crawl after graceful termination
+ * Persistent database of visited URLs
+ * Very small and efficient code
+ * Asynchronous DNS lookups
+ * Supports robots.txt
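
The "media selection" and robots.txt items in the list above are easy to
picture in code. The sketch below, again in Python and not taken from crawl
itself, keeps an object only when its URL matches a regular expression and
its size lies within configured bounds, and asks urllib.robotparser whether
the site's robots.txt permits fetching a URL; the pattern, size limits, and
user-agent string are arbitrary example values.

# Illustrative sketch of the media-selection and robots.txt ideas from
# the feature list above; not crawl(1)'s code.  Pattern, size limits,
# and user-agent are arbitrary example values.
import re
import urllib.robotparser
from urllib.parse import urlsplit

def allowed_by_robots(url, user_agent="example-crawler"):
    # urllib.robotparser downloads and parses the site's robots.txt.
    parts = urlsplit(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

def want_media(url, size_bytes, pattern=r"\.jpe?g$",
               min_size=10_000, max_size=2_000_000):
    # Keep the object only if the URL matches the regular expression
    # and its size falls inside the configured bounds.
    return (re.search(pattern, url, re.IGNORECASE) is not None
            and min_size <= size_bytes <= max_size)

# A 150 kB JPEG passes the default constraints; a 2 kB thumbnail does not.
print(want_media("http://www.example.com/pics/cat.jpg", 150_000))   # True
print(want_media("http://www.example.com/pics/thumb.jpg", 2_000))   # False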