webspider's Introduction

webspider

An open web spider platform. It uses Akka Cluster for distributed processing, along with Distributed PubSub.

The webspider-demo module contains a simple web application that starts one task scheduler node and a couple of web processing nodes, and exposes its interface at http://localhost:8080/
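
As a rough illustration of that wiring (a sketch, not the project's actual code), a processing node can subscribe to a topic on the Distributed PubSub mediator while the scheduler node publishes links to it; the actor and topic names here are invented:

    import akka.actor.{Actor, ActorRef}
    import akka.cluster.pubsub.{DistributedPubSub, DistributedPubSubMediator}
    import akka.cluster.pubsub.DistributedPubSubMediator.{Subscribe, SubscribeAck}

    // Hypothetical worker node: subscribes to a "links" topic and processes each URL.
    class LinkWorker extends Actor {
      val mediator: ActorRef = DistributedPubSub(context.system).mediator
      mediator ! Subscribe("links", self)

      def receive = {
        case SubscribeAck(Subscribe("links", None, `self`)) =>
          context.system.log.info("worker subscribed to link topic")
        case url: String =>
          // fetch, parse and store the page here
      }
    }

    // The scheduler side would publish discovered links to the same topic:
    //   mediator ! DistributedPubSubMediator.Publish("links", "http://example.com/")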

Planned features

  • extract text from HTML/PDF documents
  • process only documents matching given patterns in names/content types
  • extract data using XPath expressions from both well-formed XHTML and malformed HTML pages (see the sketch after this list)
  • maintain a website graph (links between ancestor/successor pages)
  • process websites behind authentication (HTTP Basic/Digest, form-based authentication)
  • handle failures and restart processing from the point where the application was aborted
  • provide an extension API for document type handlers and protocol handlers
  • process website pages concurrently
  • minimize traffic by using bzip2/gzip encoding when possible, and avoid downloading the same link more than once
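
For the XPath item, one common approach (a sketch, not the project's implementation) is to normalize tag soup into a DOM first, e.g. with HtmlCleaner, and then run standard javax.xml.xpath queries over the result:

    import javax.xml.xpath.{XPathConstants, XPathFactory}
    import org.htmlcleaner.{CleanerProperties, DomSerializer, HtmlCleaner}
    import org.w3c.dom.NodeList

    object HtmlXPath {
      // Clean up malformed HTML, then extract all link targets with a plain XPath query.
      def extractHrefs(html: String): Seq[String] = {
        val tagNode = new HtmlCleaner().clean(html)
        val dom     = new DomSerializer(new CleanerProperties()).createDOM(tagNode)
        val nodes   = XPathFactory.newInstance().newXPath()
          .evaluate("//a/@href", dom, XPathConstants.NODESET)
          .asInstanceOf[NodeList]
        (0 until nodes.getLength).map(nodes.item(_).getNodeValue)
      }
    }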

Supported protocols:

  • HTTP(S)

webspider's People

Contributors

etorreborre, jdevelop, xdev-developer

webspider's Issues

Use ZooKeeper for configuration/queuing.

It must be possible to use ZooKeeper for configuration of the spider components. Not sure why and how yet, but why not? Yet another crazy idea: store links in the queue?
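
If the idea pans out, reading component settings from a shared znode could look roughly like this (a speculative sketch using Apache Curator; the connection string, paths and keys are all invented):

    import org.apache.curator.framework.CuratorFrameworkFactory
    import org.apache.curator.retry.ExponentialBackoffRetry

    object ZkConfig {
      // Hypothetical: spider components pull shared settings from ZooKeeper
      // instead of local configuration files.
      private val client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3))
      client.start()

      // e.g. a crawl-depth limit kept under an agreed-upon znode
      def maxDepth: Int =
        new String(client.getData.forPath("/webspider/config/maxDepth")).toInt
    }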

research | Design parallel processing of the same task

It should be possible to process the same website with several crawler processes (running in separate JVMs).

So they should somehow share the same link queue and the same processed-link storage.

Perhaps Hadoop might be an option? Or GridGain?
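
Whichever back end wins out (Hadoop, GridGain, or something else), the crawler processes would presumably share two contracts along these lines (a hypothetical sketch, not an existing API):

    // Shared by all crawler JVMs working on the same website.
    trait LinkQueue {
      def offer(url: String): Unit   // enqueue a newly discovered link
      def poll(): Option[String]     // atomically claim the next link to crawl
    }

    // Keeps two processes from fetching the same page twice.
    trait ProcessedLinks {
      def markProcessed(url: String): Unit
      def isProcessed(url: String): Boolean
    }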

Implement BDB JE link back end

Link storage should be persisted on the filesystem. In case of a spider failure, it should be possible to resume from the point where it failed.

No links should be duplicated in the pending-links queue.

Link redirects should be persisted in a separate collection.

It should be possible to postpone a link for later processing in case of failure, so the queue must preserve link ordering.

If some domain is marked as broken, links to that domain should be marked as broken as well.
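
Taken together, these requirements suggest a store interface roughly like the following (a hypothetical sketch; a BDB JE implementation would back each method with a persistent map):

    // Hypothetical persistent link store backing the pending-links queue.
    trait LinkStore {
      def enqueue(url: String): Boolean           // false if already pending (no duplicates)
      def next(): Option[String]                  // links come back in insertion order
      def postpone(url: String): Unit             // push a failed link to the back of the queue
      def recordRedirect(from: String, to: String): Unit  // kept in a separate collection
      def markDomainBroken(domain: String): Unit  // cascades to all links of that domain
    }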

Implement URL filters

There should be several types of pluggable filters:

  • URL filter: rejects links matching given patterns (regexes); also accepts custom filter functions.
  • header filter: rejects links whose responses carry headers we don't want to process (e.g. application/octet-stream when we only want text/html); also accepts custom filter functions.
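
Since both filter types boil down to predicates, the pluggable API could be as small as this (a sketch with invented names):

    import java.net.URI

    object Filters {
      // A filter is just a predicate, so custom functions plug in directly.
      type UrlFilter    = URI => Boolean
      type HeaderFilter = Map[String, String] => Boolean

      // Reject obvious binary downloads by URL pattern.
      val noArchives: UrlFilter =
        uri => !uri.getPath.matches(""".*\.(zip|gz|exe)""")

      // Keep only responses that declare text/html content.
      val htmlOnly: HeaderFilter =
        headers => headers.get("Content-Type").exists(_.startsWith("text/html"))
    }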

Implement content storage

The storage interfaces should be moved to a separate module: webspider-storage.

The storage interface has to define methods for saving streams into files and for transforming the streams along the way. The implementation is not clear at this point; however:

  • the content name should be customizable, e.g. a filename generated from a UUID, directory splitting, etc.
  • pluggable content filters should be defined, so that content can be processed as it streams through
  • content could be stored on the filesystem, written via JDBC, put into HDFS, or anywhere else

The definition of the interface should be discussed first :)
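
To get that discussion going, here is one possible shape for the interface (names are invented, nothing final):

    import java.io.InputStream

    // Draft of a webspider-storage interface: naming, filtering and the
    // actual back end (filesystem, JDBC, HDFS, ...) are all pluggable.
    trait ContentStorage {
      // Derives a storage key for a URL, e.g. a UUID-based filename split into directories.
      def nameFor(url: String): String

      // Streams content through the given filters and persists it; returns the storage key.
      def save(url: String, content: InputStream,
               filters: Seq[InputStream => InputStream] = Nil): String
    }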
