shablona's Introduction

shablona

Shablona is a template project for small scientific python projects. The recommendations we make here follow the standards and conventions of much of the scientific Python ecosystem. Following these standards and recommendations will make it easier for others to use your code, and can make it easier for you to port your code into other projects and collaborate with other users of this ecosystem.

To use it as a template for your own project, click the green "use this template" button at the top of the front page of this repo.

First, let me explain all the different moving parts that make up a small scientific python project, and all the elements which allow us to effectively share it with others, test it, document it, and track its evolution.

Organization of the project

The project has the following structure:

shablona/
  |- README.md
  |- shablona/
     |- __init__.py
     |- shablona.py
     |- due.py
     |- data/
        |- ...
     |- tests/
        |- ...
  |- doc/
     |- Makefile
     |- conf.py
     |- sphinxext/
        |- ...
     |- _static/
        |- ...
  |- setup.py
  |- .travis.yml
  |- .mailmap
  |- appveyor.yml
  |- LICENSE
  |- Makefile
  |- ipynb/
     |- ...

In the following sections we will examine these elements one by one. First, let's consider the core of the project. This is the code inside of shablona/shablona.py. The code provided in this file is intentionally rather simple. It implements some simple curve-fitting to data from a psychophysical experiment. It's not too important to know what it does, but if you are really interested, you can read all about it here.

Module code

We place the module code in a file called shablona.py, in a directory called shablona. This structure is a bit confusing at first, but it is a simple way to create a structure where, when we type import shablona as sb in an interactive Python session, the classes and functions defined inside of the shablona.py file are available in the sb namespace. For this to work, we need to also create a file called __init__.py which contains code that imports everything in that file into the namespace of the package:

from .shablona import *

In the module code, we follow the convention that all functions are either imported from other places, or are defined in lines that precede the lines that use that function. This helps readability of the code, because you know that if you see some name, the definition of that name will appear earlier in the file, either as a function/variable definition, or as an import from some other module or package.
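As a small illustration of this convention (the function names here are hypothetical, not from shablona):

```python
import math  # imports come first, so every name is defined before use

def transform(x):
    """A hypothetical helper function, defined first."""
    return math.sqrt(x)

def analyze(values):
    """Uses transform(), whose definition already appeared above."""
    return [transform(v) for v in values]

print(analyze([1.0, 4.0, 9.0]))  # [1.0, 2.0, 3.0]
```

A reader scanning the file top to bottom always meets a name's definition (or import) before its first use.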

In the case of the shablona module, the main classes defined at the bottom of the file make use of some of the functions defined in preceding lines.

Remember that code will probably be read more times than it is written. Make it easy to read (for others, but also for yourself when you come back to it), by following a consistent formatting style. We strongly recommend following the PEP8 code formatting standard, and we enforce this by running a code-linter called flake8, which automatically checks the code and reports any violations of the PEP8 standard (and checks for other general code hygiene issues), see below.

Project Data

In this case, the project data is rather small, and recorded in csv files. Thus, it can be stored alongside the module code. Even if the data that you are analyzing is too large to be effectively tracked with github, you might still want to store some small data files for testing purposes.

Either way, you can create a shablona/data folder in which you can organize the data. As you can see in the test scripts, and in the analysis scripts, this provides a standard file-system location for the data at:

import os.path as op
import shablona as sb
data_path = op.join(sb.__path__[0], 'data')
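To see why this idiom is useful, here is the same pattern applied to a package that is certainly installed (the stdlib json package stands in for shablona here, purely for illustration):

```python
import os.path as op
import json  # stand-in for shablona: any package exposes its location via __path__

# The resulting path is anchored at the installed package, not at the
# current working directory, so tests and analysis scripts find the data
# no matter where they are run from.
data_path = op.join(json.__path__[0], 'data')
print(op.isabs(data_path))  # True
```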

Testing

Most scientists who write software constantly test their code. That is, if you are a scientist writing software, I am sure that you have tried to see how well your code works by running every new function you write, examining its inputs and outputs, to see whether the code runs properly (without error), and whether the results make sense.

Automated code testing takes this informal practice, makes it formal, and automates it, so that you can make sure that your code does what it is supposed to do, even as you go about making changes around it.

Most scientists writing code are not really in a position to write a complete specification of their software, because when they start writing their code they don't quite know what they will discover in their data, and these chance discoveries might affect how the software evolves. Nor do most scientists have the inclination to write complete specs - scientific code often needs to be good enough to cover our use-case, and not any possible use-case. Testing the code serves as a way to provide a reader of the code with a very rough specification, in the sense that it at least specifies certain input/output relationships that will certainly hold in your code.

We recommend using the 'pytest' library for testing. The py.test application traverses the directory tree in which it is issued, looking for files with names that match the pattern test_*.py (typically, something like our shablona/tests/test_shablona.py). Within each of these files, it looks for functions with names that match the pattern test_*. Typically, each function in the module would have a corresponding test (e.g. test_transform_data). This is sometimes called 'unit testing', because it independently tests each atomic unit in the software. Other tests might run a more elaborate sequence of functions ('end-to-end testing' if you run through the entire analysis), and check that particular values in the code evaluate to the same values over time. This is sometimes called 'regression testing'. We have one such test in shablona/tests/test_shablona.py called test_params_regression. Regressions in the code are often canaries in the coal mine, telling you that you need to examine changes in your software dependencies, the platform on which you are running your software, etc.

Test functions should contain assertion statements that check certain relations in the code. Most typically, they will test for equality between an explicit calculation of some kind and the return value of some function. For example, in the test_cumgauss function, we test that our implementation of the cumulative Gaussian function evaluates to approximately (1-0.68)/2 at one standard deviation below the mean, which is the theoretical value this calculation should have. We recommend using functions from the numpy.testing module (which we import as npt) to assert certain relations on arrays and floating point numbers. This is because npt contains functions that are specialized for handling numpy arrays, and they allow you to specify the tolerance of the comparison through the decimal key-word argument.
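A test sketched along these lines might look like the following (normalize is a hypothetical helper, not part of shablona):

```python
import numpy as np
import numpy.testing as npt

def normalize(vec):
    """Hypothetical helper: scale a vector to unit length."""
    return vec / np.linalg.norm(vec)

def test_normalize():
    v = normalize(np.array([3.0, 4.0]))
    # npt functions compare arrays element-wise, with the tolerance of the
    # comparison controlled by the decimal keyword argument:
    npt.assert_almost_equal(np.linalg.norm(v), 1.0, decimal=6)
    npt.assert_almost_equal(v, np.array([0.6, 0.8]), decimal=6)

test_normalize()
```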

To run the tests on the command line, change your present working directory to the top-level directory of the repository (e.g. /Users/arokem/code/shablona), and type:

py.test shablona

This will exercise all of the tests in your code directory. If a test fails, you will see a message such as:

shablona/tests/test_shablona.py .F...

=================================== FAILURES ===================================
________________________________ test_cum_gauss ________________________________

  def test_cum_gauss():
      sigma = 1
      mu = 0
      x = np.linspace(-1, 1, 12)
      y = sb.cumgauss(x, mu, sigma)
      # A basic test that the input and output have the same shape:
      npt.assert_equal(y.shape, x.shape)
      # The function evaluated over items symmetrical about mu should be
      # symmetrical relative to 0 and 1:
      npt.assert_equal(y[0], 1 - y[-1])
      # Approximately 68% of the Gaussian distribution is in mu +/- sigma, so
      # the value of the cumulative Gaussian at mu - sigma should be
      # approximately equal to (1 - 0.68/2). Note the low precision!
>       npt.assert_almost_equal(y[0], (1 - 0.68) / 2, decimal=3)
E       AssertionError:
E       Arrays are not almost equal to 3 decimals
E        ACTUAL: 0.15865525393145707
E        DESIRED: 0.15999999999999998

shablona/tests/test_shablona.py:49: AssertionError
====================== 1 failed, 4 passed in 0.82 seconds ======================

This indicates to you that a test has failed. In this case, the calculation is accurate up to 2 decimal places, but not beyond, so the decimal key-word argument needs to be adjusted (or the calculation needs to be made more accurate).

As your code grows and becomes more complicated, you might develop new features that interact with your old features in all kinds of unexpected and surprising ways. As you develop new features of your code, keep running the tests, to make sure that you haven't broken the old features. Keep writing new tests for your new code, and recording these tests in your testing scripts. That way, you can be confident that even as the software grows, it still keeps doing correctly at least the few things that are codified in the tests.

We have also provided a Makefile that allows you to run the tests with more verbose and informative output from the top-level directory, by issuing the following from the command line:

make test
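The corresponding Makefile rule might look something like this (a sketch; the flags in the repository's actual Makefile may differ):

```make
test:
	py.test --verbose shablona
```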

Styling

It is a good idea to follow the PEP8 standard for code formatting. Common code formatting makes code more readable, and using tools such as flake8 (which combines the tools pep8 and pyflakes) can help make your code more readable, avoid extraneous imports and lines of code, and overall keep a clean project code-base.

Some projects include flake8 inside their automated tests, so that every pull request is examined for code cleanliness.

In this project, we have run flake8 on most (but not all) files, and with most (but not all) checks:

flake8 --ignore N802,N806 `find . -name "*.py" | grep -v setup.py | grep -v /doc/`

This means, check all .py files, but exclude setup.py and everything in directories named "doc". Do all checks except N802 and N806, which enforce lowercase-only names for variables and functions.
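The same policy could also be recorded in a configuration file, so that contributors running plain flake8 pick it up automatically; a hypothetical setup.cfg sketch:

```ini
[flake8]
; same policy as the command line above (hypothetical sketch)
ignore = N802, N806
exclude = setup.py, doc
```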

The Makefile contains an instruction for running this command as well:

make flake8

Documentation

Documenting your software is a good idea. Not only as a way to communicate to others about how to use the software, but also as a way of reminding yourself what the issues are that you faced, and how you dealt with them, in a few months/years, when you return to look at the code.

The first step in this direction is to document every function in your module code. We recommend following the numpy docstring standard, which specifies in detail the inputs/outputs of every function, and specifies how to document additional details, such as references to scientific articles, notes about the mathematics behind the implementation, etc.
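A function documented in this style might look like the following sketch (a stdlib-only reimplementation for illustration; the real cumgauss lives in shablona/shablona.py and may differ in detail):

```python
import math

def cumgauss(x, mu, sigma):
    """
    The cumulative Gaussian evaluated at x.

    Parameters
    ----------
    x : float
        The value at which to evaluate the function.
    mu : float
        The mean of the Gaussian distribution.
    sigma : float
        The standard deviation of the Gaussian distribution.

    Returns
    -------
    float
        The value of the cumulative Gaussian at x.

    Notes
    -----
    Uses the closed form in terms of the error function:
    0.5 * (1 + erf((x - mu) / (sigma * sqrt(2)))).
    """
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
```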

This standard also plays well with a system that allows you to create more comprehensive documentation of your project. Writing such documentation allows you to provide more elaborate explanations of the decisions you made when you were developing the software, as well as provide some examples of usage, explanations of the relevant scientific concepts, and references to the relevant literature.

To document shablona we use the sphinx documentation system. You can follow the instructions on the sphinx website, and the example here to set up the system, but we have also already initialized and committed a skeleton documentation system in the doc directory, that you can build upon.

Sphinx uses a Makefile to build different outputs of your documentation. For example, if you want to generate the HTML rendering of the documentation (web pages that you can upload to a website to explain the software), you will type:

make html

This will generate a set of static webpages in doc/_build/html, which you can then upload to a website of your choice.

Alternatively, readthedocs.org (careful, not readthedocs.com) is a service that will run sphinx for you, and upload the documentation to their website. To use this service, you will need to register with RTD. After you have done that, you will need to "import your project" from your github account, through the RTD web interface. To make things run smoothly, you will also need to go to the "admin" panel of the project on RTD, and navigate into the "advanced settings", so that you can tell it that your Python configuration file is in doc/conf.py. The documentation will then appear at:

http://shablona.readthedocs.org/en/latest/

Installation

For installation and distribution we will use the python standard library setuptools module. This module uses a setup.py file to figure out how to install your software on a particular system. For a small project such as this one, managing installation of the software modules and the data is rather simple.

The shablona/version.py file contains all of the information needed for the installation and for setting up the PyPI page for the software. This also makes it possible to install your software using pip and easy_install, which are package managers for Python software. The setup.py file reads this information from version.py and passes it to the setup function, which takes care of the rest.
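The pattern can be sketched as follows (the variable names NAME and VERSION are assumptions here; check version.py for the names actually defined there):

```python
def read_metadata(path):
    """Execute a version.py file and return its module-level names as a dict."""
    ns = {}
    with open(path) as f:
        exec(f.read(), ns)
    return ns

# In setup.py, the resulting dict would then feed setuptools, e.g.:
#
#     opts = read_metadata(os.path.join('shablona', 'version.py'))
#     setup(name=opts['NAME'], version=opts['VERSION'], ...)
```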

Much more information on packaging Python software can be found in the Hitchhiker's guide to packaging.

Continuous integration

Travis-CI is a system that can be used to automatically test every revision of your code directly from github, including testing of github pull requests, before they are merged into the master branch. This provides you with information needed in order to evaluate contributions made by others. It also serves as a source of information for others interested in using or contributing to your project about the degree of test coverage of your project.

You will need a .travis.yml file in your repo. This file contains the configuration of your testing environment. This includes the different environments in which you will test the source code (for example, we test shablona against Python 2.7, Python 3.3 and Python 3.4). It includes steps that need to be taken before installation of the software. For example, installation of the software dependencies. For shablona, we use the Miniconda software distribution (not to be confused with Anaconda, though they are similar and both produced by Continuum).
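A minimal .travis.yml might be sketched like this (hypothetical; shablona's real file additionally sets up Miniconda and the scientific dependencies before running the tests):

```yaml
language: python
python:
  - "2.7"
  - "3.4"
install:
  - pip install -r requirements.txt
  - pip install -e .
script:
  - py.test shablona
```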

For details on setting up Travis-CI with github, see Travis-CI's getting started page. To summarize:

First, go to the Travis-CI website and get a Travis user account, linked to your github user account.

You will need to set up your github repo to talk to Travis (More explanation + pictures will come here).

You will need to go back to travis-ci, and flip on the switch on that side as well.

The travis output will also report to you about test coverage, if you set it up that way.

You will start getting emails telling you the state of the testing suite on every pull request for the software, and also when you break the test suite on the master branch. That way, you can be pretty sure that the master is working (or at least know when it isn't...).

You can also continuously test your code on a Windows system. This is done on another CI system called Appveyor. In principle, it does something that is very similar to what Travis does: downloads your code, installs it on a Windows machine, with various versions of python, and runs the tests. Appveyor is controlled through another configuration file: the appveyor.yml. In addition to committing this file into the repository, you will need to activate Appveyor for your project. This is done by signing into the Appveyor interface with your Github account, clicking on the "projects" tab at the top of the page, then clicking on the "+" sign for "New project" and selecting the project you would like to add from the menu that appears (you might need to give Appveyor the permission to see projects in your Github account).

Distribution

The main venue for distribution of Python software is the Python Package Index, or PyPI, also lovingly known as "the cheese-shop".

To distribute your software on PyPI, you will need to create a user account on PyPI. It is recommended that you upload your software using twine.

Using Travis, you can automatically upload your software to PyPI every time you push a tag of your software to github. The instructions on setting this up can be found here. You will need to install the travis command-line interface.

Licensing

License your code! A repository like this without a license maintains copyright to the author, but does not provide others with any conditions under which they can use the software. In this case, we use the MIT license. You can read the conditions of the license in the LICENSE file. As you can see, this is not an Apple software license agreement (has anyone ever actually tried to read one of those?). It's actually all quite simple, and boils down to "You can do whatever you want with my software, but I take no responsibility for what you do with my software."

For more details on what you need to think about when considering choosing a license, see this article!

Getting cited

When others use your code in their research, they should probably cite you. To make their life easier, we use duecredit. This is a software library that allows you to annotate your code with the correct way to cite it. To enable duecredit, we have added a file due.py into the main directory. This file does not need to change at all (though you might want to occasionally update it from duecredit itself. It's here, under the name stub.py).
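The idea behind such a stub can be sketched as follows (this is an illustration of the pattern, not the contents of due.py itself):

```python
# Use duecredit when it is installed, and otherwise fall back to no-op
# stand-ins, so that importing the package never fails just because
# duecredit is absent.
try:
    from duecredit import due, Doi
except ImportError:
    class _InactiveDueCollector:
        """No-op stand-in mirroring the small part of the API the code uses."""
        def cite(self, *args, **kwargs):
            pass

        def dcite(self, *args, **kwargs):
            def decorator(func):
                return func
            return decorator

    def Doi(doi):
        return doi

    due = _InactiveDueCollector()
```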

In addition, you will want to provide a digital object identifier (DOI) to the article you want people to cite.

To get a DOI, use the instructions in this page

Another way to get your software cited is by writing a paper. There are several journals that publish papers about software.

Scripts

A scripts directory can be used as a place to experiment with your module code, and as a place to produce scripts that contain a narrative structure, demonstrating the use of the code, or producing scientific results from your code and your data and telling a story with these elements.

For example, this repository contains an IPython notebook that reads in some data, and creates a figure. Maybe this is Figure 1 from some future article? You can see this notebook fully rendered here.

Git Configuration

Currently there are two files in the repository which help working with this repository, and which you could extend further:

  • .gitignore -- specifies intentionally untracked files (such as compiled *.pyc files), which should not typically be committed to git (see man gitignore)
  • .mailmap -- if any of the contributors used multiple names/email addresses or their git commit identity is just an alias, you could specify the ultimate name/email(s) for each contributor, so such commands as git shortlog -sn could take them into account (see git shortlog --help)

Using shablona as a template

Let's assume that you want to create a small scientific Python project called smallish. Maybe you already have some code that you are interested in plugging into the module file, and some ideas about what the tests might look like.

To use this repository as a template, click the green "use this template" button on the front page of the "shablona" repository.

In "Repository name" enter the name of your project. For example, enter smallish here. After that, you can hit the "Create repository from template" button.

You should then be able to clone the new repo onto your machine. You will want to change the names of the files. For example, you will want to rename shablona/shablona.py to smallish/smallish.py:

git mv shablona smallish
git mv smallish/shablona.py smallish/smallish.py
git mv smallish/tests/test_shablona.py smallish/tests/test_smallish.py

Make a commit recording these changes. Something like:

git commit -a -m 'Moved names from `shablona` to `smallish`'

You will probably want to remove all the example data:

git rm smallish/data/*
git commit -a -m 'Removed example `shablona` data'

Possibly, you will want to add some of your own data in there.

You will want to edit a few more places that still have shablona in them. Type the following to see where all these files are:

git grep shablona

You can replace shablona for smallish quickly with:

git grep -l 'shablona' | xargs sed -i 's/shablona/smallish/g'

This very file (README.md) should be edited to reflect what your project is about.

Other places that contain this name include the doc/conf.py file, which configures the sphinx documentation, as well as the doc/Makefile file (edit carefully!), and the doc/index.rst file.

The .coveragerc file contains a few mentions of that name, as well as the .travis.yml file. The latter will also have to be edited to reflect your PyPI credentials (see the Distribution section above).

Edit all the mentions of shablona in the shablona/__init__.py file, and in the shablona/version.py file as well.

Finally, you will probably want to change the copyright holder in the LICENSE file to be you. You can also replace the text of that file, if it doesn't match your needs.

At this point, make another commit, and continue to develop your own code based on this template.

shablona's People

Contributors

andim, andyfaff, arokem, choldgraf, effigies, erramuzpe, gvwilson, jakevdp, kamuish, mandel01, manodeep, mvdoc, patrickmineault, yarikoptic

shablona's Issues

General structure

The README file currently has the following sections (in this order):

  • Module code
  • Data
  • Testing (nosetests)
  • Documentation (sphinx)
  • Installation (setup.py)
  • Continuous integration (travis)
  • Distribution (PyPI and travis)
  • Licensing
  • Scripts

And finally:

  • How to use the repo

Is this a good structure? Any elements missing? Or too much?

Feature Request: dependencies

Since importing the wrong package version (e.g. numpy or pandas) can quickly break code (and hinder reproducible outputs), can you suggest a standard for handling dependencies? For example, pinning dependencies in a requirements.txt file added to the root directory, and/or including an install_requires key in the setup.py file.
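A sketch of what such pinning might look like (package names and versions below are purely illustrative):

```text
# requirements.txt: pin what the analysis actually needs,
# so that installs are reproducible
numpy==1.11.0
scipy==0.17.1
pandas==0.18.1
```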

Does the shablona project structure allow for running a script as the main program?

Thanks for this project. Using it helped me to work out imports for testing a small python project. Does this structure allow you to run a script as a program (e.g., shablona.py)?

Using Python 3.5 from Anaconda on Mac OS, I get an import error if I try this:

bash-3.2$ python shablona/shablona.py
/Users/markmandel/anaconda/lib/python3.5/site-packages/pandas/computation/__init__.py:19: UserWarning: The installed version of numexpr 2.4.4 is not supported in pandas and will be not be used

  UserWarning)
Traceback (most recent call last):
  File "shablona/shablona.py", line 6, in <module>
    from .due import due, Doi
SystemError: Parent module '' not loaded, cannot perform relative import

I get the error even if I add the following at the end of shablona/shablona.py:

if __name__ == '__main__':
    print('oh hello there')

Is this the expected & desired behavior here? Thanks for any input.

requirements.txt doesn't match the requirements of the default code

It's a little confusing that the requirements in version.py differ from those in requirements.txt. You have to both install requirements.txt and install the package with -e for the code to work, and then you get a scary error about missing duecredit, so you also have to pip install duecredit. I can submit a PR; what would be the preferred route? Adding a -e . to requirements.txt, or changing the requirements in version.py?

Why not fork and clone?

I am confused about the following bit in the instructions:

To use this repository as a template, start by cloning it to your own computer under the name you will want your project to have:

git clone https://github.com/uwescience/shablona smallish
cd smallish

To point to your own repository on github you will have to issue something like the following:

git remote rm origin
git remote add origin https://github.com/arokem/smallish

(replace arokem with your own Github user name).

Why not just fork and clone? It seems that's what you are trying to achieve anyway by removing origin and adding a new remote.

Is there an example of readthedocs using conda installs?

Typically, readthedocs needs to import the project in order to build the documentation. This is fine on your own computer, or on the readthedocs server if you have dependencies that can be pip installed. It seems that they now support conda installs as well. Is there an example which shows how to deal with this when the dependencies are conda installed? (or require an install script?)

Dealing with more data

Very cool and helpful project! I have created similar ones that I don't make installable since they contain more data, which I setup to be downloaded from figshare as needed within the code, i.e., most analyses only require the smaller processed data CSVs. I'm still not totally happy with this however, since users can't really use the package without being in the project root directory.

Do you think maybe the data should be put in each user's home directory (assuming the data doesn't change) under a folder like $HOME/.shablona/data? This would help save space if users are using the package in conda or virtual envs, right?

I was also considering having users install such that the code is used in place, i.e., python setup.py develop or pip install -e shablona. This way, the data directory would always be known relative to the package directory (I see you've already implemented something similar), and the Python directory won't become bloated with data.

Any thoughts on how to effectively work with more data?

Add absolute_import to __init__

In one of my projects in python 2.7, I ran into a problem distinguishing between a module in the package and a package of the same name on the python path. I wanted to import an external package, but since I had a module of the same name in the package directory (which was a bad naming choice), it was importing that module.

from module import MyClass  # is module a module in the package, or a package on the python path?

The solution was to add the line

from __future__ import absolute_import
# Now the imports are clear
from module import MyClass  # imports MyClass from the package called module
from .module import MyPackageClass  # imports MyPackageClass from the module called module inside this package

This also allows you to import a module from within the package in the following way, which I think is one of the recommended ways in python 3:

from . import module

Refresh doc section (and integrate with Docathon Sphinx template)

The parts of Shablona referring to documentation are in a pretty sad state.

Might be good to pull in things that are codified in @choldgraf's https://github.com/choldgraf/sphinx_template

The full wish-list (in order of priority):

I'd prefer the latter to auto-deployment from Travis, because I found that auto-deployment from master tends to confuse users that installed the released software (e.g., from PyPI). But am open to discussion on the topic.

Best-practice for package version.py and setup.py

First of all, thanks for creating the package. I have attempted to follow the instructions to use this package as the template for my own project.

I had a couple of questions about the setup.py:

  • Is there any way to avoid the exec on the text read in from version.py?
  • Related, would it be better practice to move the package info into setup.py (or setup.cfg) and then auto-generate the version.py (something like what numpy does)

I have a suggestion as well, regarding the mismatch in the choice of documentation markup format. README.md uses markdown; however, the LONG_DESCRIPTION within shablona/version.py uses reStructuredText. Presumably, this is because PyPI does not play well with markdown, but the mismatch can be avoided by simply changing the md format to rst. One advantage would then be that long_description could be populated directly with the contents of the README.rst file during setup/upload.

I apologize in advance if I have missed any steps or mis-understood the intent of the package template. Thanks again for creating this template.

make an introductory video

Make an introductory video actually showing how to use this repo. It may help with common queries.
This would be easy to do with asciinema, or with any video recording tool, like the wonderful Kazam.

Conda dependencies from multiple channels

I was looking at the travis yml file and noticed that it adds conda_deps and pip_deps to the script itself. That is quite cool as a one-liner instead of a script.

  • Is there an easy way to handle multiple conda channels in such dependencies?
  • I had heard that at some point conda install would default to pip install for packages not found in conda. I am guessing that has not happened in the conda releases yet?

sphinx.ext.pngmath docs error

When I run make html in the doc directory I get the following error:

Config value 'math_number_all' already present

Commenting out line 62, which reads sphinx.ext.pngmath, following NYUCCL/psiTurk@e31484e, appears to fix the issue.

Add data directory as variable in shablona namespace

Instead of

import os.path as op
import shablona as sb
data_path = op.join(sb.__path__[0], 'data')

It would be nice to be able to use

import shablona as sb
data_dir = sb.data_dir

Some clever use of os.path and __file__ should make this easy enough, but I forget the exact syntax.
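One common pattern along those lines (a sketch, not necessarily what shablona should adopt):

```python
import os.path as op

# Anchor the data directory at the file that defines it, rather than at
# the current working directory.
data_dir = op.join(op.dirname(op.abspath(__file__)), 'data')
print(data_dir.endswith('data'))  # True
```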

I think this will also require that users install in editable mode, unless data is copied to site-packages with -e omitted:

pip install -e .

Documentation/readthedocs

Several issues are outstanding, if we are to use RTD as a documentation hosting service:

What is the best way to create documentation of the API?

This is currently using scripts that I cribbed from nipy projects, and then ported to be compatible with python 3. These scripts require having the software installed on the machine, and are triggered upon running make html. RTD uses sphinx-build directly (I think), so it's not obvious how this would patch into the process.

Latex in documentation

Building documentation with latex on RTD seems to be a no-go, because their system doesn't seem to be able to build/install PIL, which is required. See here: https://readthedocs.org/builds/shablona/2882654/

Add CircleCI support

Some folks prefer CircleCI nowadays for their projects. It would be nice if shablona also included a template supporting it.

Run flake8

Great to have a template that follows PEP8 and passes pyflakes, even if most projects themselves choose not to follow the cleanliness of such a sparkling template :)

Repo vs. script?

Perhaps this isn't appropriate as an issue, but the model here is to clone the repo and do various renaming operations. An alternative strategy is to have Shablona be a collection of scripts that prepare the template, like shablona --package-name smallish --doc sphinx or something like that.

Don't recommend travis caching for conda

I know I initially added Travis caching, but after a few negative experiences I'd now recommend against it. Basic rationale:

  1. a script that fails on a clean environment might succeed when parts of an old environment are cached
  2. in most cases, retrieving a cached environment takes about as long as creating that environment from scratch, so there's very little benefit.

I'm happy to do a quick PR removing the caching if people agree.

Feature Request: setuptools

Is there a reason for your choice of distutils rather than setuptools? I've found that setuptools is needed to create the newer wheels, using the python setup.py bdist_wheel command. Granted, pip standardization is currently in a state of flux; perhaps the following is a better, more flexible standard for importing setup:

try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

[1] Beazley, D. Python Essential Reference. 4th ed. Upper Saddle River: Addison-Wesley, 2009. p.154
[2] https://stackoverflow.com/questions/25337706/setuptools-vs-distutils-why-is-distutils-still-a-thing
