
Document scraping tools

A scraping framework to retrieve lists of documents from websites that present search results, such as arXiv or RFP portals.

What?

We want to scrape documents from websites that display a list of documents after a search, such as the arXiv search page.

The scraping tool should be configurable to allow:

  • retrieval of several linked documents for each item
  • retrieval of metadata for each item

We must also handle two kinds of websites:

  • "full html" websites that can be scraped with requests/lxml kind of tools
  • "javascript" modern websites that require tools like Selenium
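
One way to handle both kinds of website is to give the two engines a common interface that differs only in how a page is rendered. A minimal sketch of that idea (all class and method names here are illustrative, not the package's actual API; the fetch is stubbed to stay offline):

```python
from abc import ABC, abstractmethod

class ScraperEngine(ABC):
    """Hypothetical common interface shared by both engine kinds."""

    @abstractmethod
    def fetch(self, url: str) -> str:
        """Return the rendered HTML of a page."""

class StaticHtmlEngine(ScraperEngine):
    """For "full html" sites a plain HTTP GET is enough (stubbed here)."""

    def fetch(self, url: str) -> str:
        return f"<html><!-- GET {url} --></html>"
```

A Selenium-backed engine would implement the same `fetch` signature but drive a real browser, so the rest of the scraper does not need to know which kind of site it is crawling.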

The image below shows the scraping workflow.

Solution

The package contains a set of configurable scraping tools. Here is a simple example to launch a scraper:

from documentscraper import load_config, DocumentScraper, RequestsScraperEngine

# Load the arXiv configuration and run the scraper with the requests-based engine.
config = load_config("./arxiv.json")
scraper = DocumentScraper(RequestsScraperEngine(), verbose=True)
scraper.run(config, output_path=None)
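
Internally, `run` presumably walks the result pages by following the `nextPage` link until none is found, extracting items from each page along the way. A rough sketch of that pagination loop, with every function name hypothetical:

```python
def crawl(fetch_page, extract_items, next_url, start_url, max_pages=100):
    """Assumed control flow: fetch a page, collect its items,
    then follow the next-page link until it runs out."""
    url, results = start_url, []
    for _ in range(max_pages):
        page = fetch_page(url)
        results.extend(extract_items(page))
        url = next_url(page)  # None when the "nextPage" xpath matches nothing
        if url is None:
            break
    return results
```

The `max_pages` cap is a safety net against sites whose next-page link never disappears.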

Here is a configuration example for the arXiv website:

{
	"rootUrl": "https://arxiv.org/search/cs",
	"baseUrl": "https://arxiv.org",
	"form": {
		"term": "deep learning"
	},
	"nextPage": {
		"xpath": "xpath_to_next_page_link"
	},
	"item": {
		"selector": "xpath_to_element_in_list",
		"navigation": [
			"xpath_to_link_of_subpage"
		],
		"output": {
			"id": {
				"xpath": "xpath_to_id_element"
			},
			"metadata": {
				"author": {
					"xpath": "xpath_expression_to_author"
				},
				"date": {
					"xpath": "xpath_expression_to_date",
					"regex": "Submitted on (.*)"
				}
			},
			"files": [
				{
					"navigation": [ "xpath_to_subpage" ],
					"xpath": "xpath_expression_to_file",
					"format": "auto",
					"filename": {
						"xpath": "xpath_to_filename",
						"regex": "/e-print/(.*)"
					}
				},
				{
					"xpath": "xpath_expression_to_file",
					"format": "pdf",
					"filename": {
						"xpath": "xpath_to_filename",
						"regex": "/pdf/(.*)"
					}
				}
			]
		}
	}
}
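
The optional `regex` key under `date` and `filename` suggests a post-processing step: after the `xpath` expression extracts raw text, the first capture group of the pattern is kept. A small sketch of that assumed behaviour (the fallback to raw text on no match is an assumption, not documented semantics):

```python
import re

def apply_regex(text: str, pattern: str) -> str:
    # Keep group 1 of the first match; fall back to the raw text otherwise.
    m = re.search(pattern, text)
    return m.group(1) if m else text

print(apply_regex("Submitted on 12 Mar 2021", r"Submitted on (.*)"))
# prints "12 Mar 2021"
```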

Develop

Install requirements:

pip install -r requirements.txt

Run sample:

python samples/sample_arxiv.py

Run tests

Install test requirements:

pip install -r requirements-test.txt

Run tests with pytest:

pytest tests

Contributors

gfournier, marion-pr
