wikiget's Introduction

wikiget

Something like wget for downloading a file from MediaWiki sites (like Wikipedia or Wikimedia Commons) using only the file name or the URL of its description page.

Installation

Requires Python 3.7+ and pip. Install the latest version with:

pip install wikiget

For the latest features, at the risk of bugs and undocumented behavior, you can install the development version directly from GitHub:

pip install https://github.com/clpo13/wikiget/archive/refs/heads/master.zip

Alternatively, if you have Homebrew installed:

brew tap clpo13/clpo13
brew install wikiget

Usage

wikiget [-h] [-V] [-q | -v] [-f] [-s SITE] [-P PATH] [-u USERNAME] [-p PASSWORD] [-o OUTPUT | -a] [-l LOGFILE] [-j THREADS] FILE

The only required parameter is FILE, which is the file you want to download. It can be either the name of the file on the wiki, including the namespace prefix, or a link to the file description page. If FILE is in the form File:Example.jpg or Image:Example.jpg, it will be fetched from the default site, which is "commons.wikimedia.org". If it's the fully-qualified URL of a file description page, like https://en.wikipedia.org/wiki/File:Example.jpg, the file is fetched from the site in the URL, in this case "en.wikipedia.org". Use of a fully-qualified URL like this may require setting the --path flag (see the next paragraph).

Note: full URLs may contain characters your shell interprets differently, so either escape those characters with a backslash \ or surround the entire URL with single ' or double " quotes.
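
For instance, parentheses in a URL would otherwise be interpreted by the shell, so a quoted invocation might look like this (the filename here is made up for illustration):

wikiget 'https://en.wikipedia.org/wiki/File:Example_(1).jpg'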

The site can also be specified with the --site flag, though this has no effect if a full URL is given. Non-Wikimedia sites should work, but you may need to specify the wiki's script path with --path (the directory where index.php and api.php live; on Wikimedia sites it's /w/, but other sites may use / or something else entirely). Private wikis (those requiring login even for read access) are also supported via the --username and --password flags.
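
As a sketch, fetching from a hypothetical third-party wiki whose script path is the site root, and from a private wiki with credentials, might look like this (the site name, username, and password are made up):

wikiget --site wiki.example.org --path / File:Example.jpg
wikiget --site wiki.example.org --path / --username alice --password secret File:Example.jpg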

More detailed information, such as the site used and full URL of the file, can be displayed with -v or --verbose. Use -vv to display even more detail, mainly debugging information or API messages. -q can be used to silence warnings. A logfile can be specified with -l or --logfile. If this option is present, the logfile will contain the same information as -v along with timestamps. New log entries will be appended to an existing logfile.
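
For instance, to print verbose output on the console while also appending timestamped entries to a logfile (the logfile name is arbitrary):

wikiget -v -l wikiget.log File:Example.jpg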

By default, the program won't overwrite existing files with the same name as the target, but this can be forced with -f or --force. Additionally, the file can be downloaded to a different name with -o.
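
For instance, to overwrite an existing local copy, or to save under a different name (the output name here is chosen for illustration):

wikiget -f File:Example.jpg
wikiget -o my-example.jpg File:Example.jpg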

Files can be batch downloaded with the -a or --batch flag. In this mode, FILE will be treated as an input file containing multiple files to download, one filename or URL per line. Blank lines and lines starting with "#" are ignored. If an error is encountered, execution stops immediately and the offending filename is printed. For large batches, the process can be sped up by downloading files in parallel. The number of parallel downloads can be set with -j. For instance, with -a -j4, wikiget will download four files at once. Without -j or with -j by itself without a number, wikiget will download the files one at a time.
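
For instance, given a list file named files.txt (the name is arbitrary) containing:

# one filename or URL per line; blank lines and comments are ignored
File:Example.jpg
https://en.wikipedia.org/wiki/File:Example.jpg

the whole list can be downloaded four files at a time with:

wikiget -a -j4 files.txt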

Example usage

wikiget File:Example.jpg
wikiget --site en.wikipedia.org File:Example.jpg
wikiget https://en.wikipedia.org/wiki/File:Example.jpg -o test.jpg

Future plans

  • optional machine-readable (JSON) log output
  • batch download by (Commons) category or user uploads
  • maybe: download Wikipedia articles, in plain text, wikitext, or other formats

Contributing

Pull requests, bug reports, or feature requests are more than welcome.

See CONTRIBUTING for more info.

License

Copyright (C) 2018-2023 Cody Logan and contributors

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

wikiget's People

Contributors

clpo13, mirguest

wikiget's Issues

Add option to download with “Multi-threads” [Feature Request]

Currently, the script is very slow when downloading numerous files (tested on Manjaro KDE).

I have a list of 730,000 pronunciations to download from the German Wiktionary:
https://commons.wikimedia.org/wiki/User_talk:Jeuwre#Method_to_download_all_%22Jeuwre%22_audios_on_Commons

If such a large list is run in a single terminal, the process takes forever: the bandwidth consumed is around 20 Kb/s despite a download speed of 700 Kb/s being available.

A workaround is to split the large .csv/.txt into small files, then open ~20 terminal windows and run them simultaneously. However, this is very time-consuming, because any error interrupts the whole process and requires a manual restart.

Would it be possible to add a multithreading option to query and download many files simultaneously (e.g. 20)?

Thanks for your great work! Your code has been really helpful for me 😃 I am downloading the pronunciations from the German Wiktionary to make an offline pronunciation .mdx dictionary for GoldenDict (freely available).

Write a .log or .json file while running wikiget

It would be useful to have a .log or .json file containing all the information about the scraping. Of special interest would be recording the "File URLs" from which audio files were actually downloaded (the "Page URL" and "File URL" are different).

Having the file URLs in a .log file would make it much faster to re-download files in the future or to recover if some files turn out to be corrupted.
