This is Levitation, a project to convert Wikipedia database dumps into Git
repositories. It has been successfully tested with a small wiki
(bar.wikipedia.org) with 12,200 articles and 104,000 revisions. Importing
those took 6 minutes on a Core 2 Duo 1.66 GHz. RAM usage is minimal: pages
are imported one after the other, so at most it needs enough memory to hold
all revisions of a single page. You should be safe with 1 GB of RAM.

See below (“Things that work”) for the current status.

Some knowledge of Git is required to use this tool. And you will probably
need to edit some variables in the source code.

You need at least Python 2.5; my tests run with 2.6.


How it should be done:

You can get recent dumps of all Wikimedia wikis at:
http://download.wikimedia.org/backup-index.html

The pages-meta-history.xml file is what we want. (In case you’re wondering:
Wikimedia does not offer content SQL dumps anymore, and there is no full-
history dump for en.wikipedia.org because of its size.) It includes all pages
in all namespaces and all of their revisions.
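
If you want to script the download, the following sketch does it with nothing
but the standard library (Python 2, like the tool itself). The wiki name, dump
date and URL layout are placeholders; look up the real paths on the backup
index page.

  # Sketch only: fetch and unpack a dump. The wiki name, date and URL layout
  # below are illustrative placeholders, not guaranteed paths.
  import bz2
  import urllib

  url = ("http://download.wikimedia.org/pdcwiki/20091103/"
         "pdcwiki-20091103-pages-meta-history.xml.bz2")
  urllib.urlretrieve(url, "dump.xml.bz2")

  # Decompress to the plain XML that import.py reads from stdin.
  src = bz2.BZ2File("dump.xml.bz2")
  dst = open("pdcwiki-20091103-pages-meta-history.xml", "wb")
  while True:
      chunk = src.read(1 << 20)
      if not chunk:
          break
      dst.write(chunk)
  src.close()
  dst.close()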

Alternatively, you may use the “Special:Export” page of any MediaWiki
installation to create an XML dump of certain pages.


Things that work:

 - Read a Wikipedia XML full-history dump and output it in a format suitable
   for piping into git-fast-import(1). The resulting repository contains one
   file per page. All revisions are available in the history. There are some
   restrictions, read below.

 - Use the original modification summary as commit message.

 - Read the Wiki URL from the XML file and set user mail addresses accordingly.

 - Use the author name in the commit instead of the user ID.

 - Store additional information in the commit message that specifies page and
   revision ID as well as whether the edit was marked as “minor”.

 - Use the page’s name as file name instead of the page ID. Non-ASCII
   characters and some ASCII ones will be replaced by “.XX”, where XX is their
   hex value (see the sketch after this list).

 - Put pages in namespace-based subdirectories.

 - Put pages in a configurably deep subdirectory hierarchy.

 - Use command line options instead of hard-coded magic behavior. Thanks to
   stettberger for adding this.

 - Use a locally timezoned timestamp for the commit date instead of a UTC one.

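The file name escaping works roughly like the following sketch. This is an
illustration only, not the code from import.py, and the exact set of
characters kept verbatim may differ:

  # -*- coding: utf-8 -*-
  # Illustration only (Python 2): keep a conservative set of ASCII characters
  # and escape every other byte of the UTF-8 encoded title as ".XX".
  SAFE = ("abcdefghijklmnopqrstuvwxyz"
          "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
          "0123456789_-")

  def escape_title(title):
      out = []
      for byte in title.encode("utf-8"):  # byte-wise: non-ASCII gives several ".XX"
          if byte in SAFE:
              out.append(byte)
          else:
              out.append(".%02X" % ord(byte))
      return "".join(out)

  print escape_title(u"Käsespätzle (Rezept)")
  # prints something like: K.C3.A4sesp.C3.A4tzle.20.28Rezept.29
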

Things that are still missing:

 - Allow IPv6 addresses as IP edit usernames. (Although, as far as I can see,
   MediaWiki itself cannot handle IPv6 addresses, so there is still some time.)


Things that are strange:

 - Since we use subdirectories, the Git repo is no longer larger than the
   uncompressed XML file, but instead about 30% of it. This is good. However,
   it is still way larger than the bz2 compressed file, and I don’t know why.


Things that are cool:

 - “git checkout master~30000” takes you back 30,000 edits in time — and on my
   test machine it only took about a second.

 - The XML data might be in the wrong order to directly create commits from it,
   but it is in the right order for blob delta compression: when passing blobs
   to git-fast-import, delta compression is tried based on the previous blob,
   which is the same page, one revision earlier. Therefore, delta compression
   succeeds and saves tons of storage (see the sketch below).

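To illustrate the blob ordering, here is a rough sketch (not Levitation’s
actual code) of how the revisions of a single page end up as consecutive
blobs in the fast-import stream:

  # Sketch only (Python 2): emit git-fast-import "blob" commands for the
  # revisions of a single page, oldest first, exactly as they appear in the
  # XML dump. Each blob is then a delta base candidate for the next one,
  # which is the same page one revision later. Commits referencing the marks
  # would follow later in the stream.
  import sys

  def emit_blob(mark, text):
      data = text.encode("utf-8")
      sys.stdout.write("blob\nmark :%d\ndata %d\n" % (mark, len(data)))
      sys.stdout.write(data)
      sys.stdout.write("\n")

  # Hypothetical revisions of one page, in dump order.
  revisions = [
      u"First draft of the article.",
      u"First draft of the article, with a typo fixed.",
      u"First draft of the article, with a typo fixed and a new section.",
  ]
  for mark, text in enumerate(revisions):
      emit_blob(mark + 1, text)
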

Example usage:

Please note the variable IMPORT_MAX right at the beginning of import.py. By
default it is set to 100, so Levitation will import no more than 100 pages.
This protects you from filling your disk when you’re too impatient. ;) Set it
to -1 when you’re ready for a “real” run.

This will import the pdc.wikipedia.org dump into a new Git repository “repo”:

  rm -rf repo; git init --bare repo && \
    ./import.py < ~/pdcwiki-20091103-pages-meta-history.xml | \
    GIT_DIR=repo git fast-import | \
    sed 's/^progress //'

Execute “import.py --help” to see all available options.


Storage requirements:

Let “maxrev” be the highest revision ID in the file.

Let “maxpage” be the highest page ID in the file.

Let “maxuser” be the highest user ID in the file.

The revision metadata storage needs maxrev*17 bytes.

The revision comment storage needs maxrev*257 bytes.

The author name storage needs maxuser*257 bytes.

The page title storage needs maxpage*257 bytes.

Those files can be deleted after an import.

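As a back-of-the-envelope example of the formulas above, with made-up IDs
(use the highest IDs from your own dump instead):

  # Worked example (Python 2). The IDs are hypothetical, loosely based on the
  # 12,200-page / 104,000-revision test wiki mentioned at the top.
  maxrev  = 104000
  maxpage = 12200
  maxuser = 5000    # made-up value

  metadata = maxrev  * 17    # revision metadata storage
  comments = maxrev  * 257   # revision comment storage
  authors  = maxuser * 257   # author name storage
  titles   = maxpage * 257   # page title storage

  total = metadata + comments + authors + titles
  print "temporary storage: %.1f MiB" % (total / 1024.0 / 1024.0)
  # with these numbers: about 31 MiB, all of which can be deleted afterwards
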
Additionally, the content itself needs some space. My test repo was about 15%
the size of the uncompressed XML, that is about 300% the size of the bz2
compressed XML data (see “Things that are strange”).

Note that if you want to check out a working copy, the filesystem it will be
living on needs quite a few free inodes. If you get “no space left on device”
errors with plenty of space available, that’s what hit you.


Contacting the author:

This monster is written by Tim “Scytale” Weber. It is an experiment to find
out whether the current “relevance war” in the German Wikipedia can be ended
by decentralizing content.

Find the guys and gals around this project in freenode #levitation, or talk
directly to me via http://scytale.name/contact/ or Twitter (@Scytale).

Get the most up-to-date code at http://scytale.name/proj/levitation/.

Please also read the FAQ if you have any questions, including “how can I help”.


This whole bunch of tasty bytes is licensed under the terms of the WTFPLv2.
