stingraysoftware / stingray

Anything can happen in the next half hour (including spectral timing made easy)!

Home Page: https://stingray.science/stingray

License: MIT License

Python 59.77% Jupyter Notebook 39.94% TeX 0.29%
x-ray astronomy blackhole pulsars timeseries astrophysics fourier-analysis fourier-transform time-series time-series-analysis

stingray's People

Contributors

abigailstev achillesrasquinha analyticalmonk anonymouscodes911 anuraghota bsipocz cadair davis191 devanshshukla99 dhruv9vats dhuppenkothen evandromr gaurav17joshi haroonrashid235 jdswinbank kr-2003 matteobachetti mgullik mihirtripathi97 nitish6174 omargamal8 orkohunter pabell parkma99 pbalm pupperemeritus swapsha96 tappina theand9 usmanwardag


stingray's Issues

Cross Spectra

We should include cross spectra ASAP for spectral timing capabilities. Ideally, the interface should be similar to that of Powerspectrum.
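A minimal sketch of what such an interface could look like, assuming Lightcurve-like inputs with counts and dt attributes (class body and attribute names are illustrative only, not a final API):

import numpy as np

class Crossspectrum(object):
    """Sketch of an interface mirroring Powerspectrum."""

    def __init__(self, lc1, lc2):
        # unnormalized cross power: FT(lc1) * conj(FT(lc2))
        fourier_1 = np.fft.fft(lc1.counts)
        fourier_2 = np.fft.fft(lc2.counts)
        freqs = np.fft.fftfreq(len(lc1.counts), d=lc1.dt)
        pos = freqs > 0
        self.freq = freqs[pos]
        self.power = (fourier_1 * np.conj(fourier_2))[pos]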

Additions to `Lightcurve` class

We had a very informative discussion at the Python in Astronomy meeting today, and one thing that came up was the functionality of the Lightcurve class. I think some of these things currently live in pull requests, but I'm adding them here for completeness.

  • Rebin (including recalculating uncertainties)
  • Time index column with both middle-of-bin and bin-edge values
  • Time shifting/frame conversion (is there a WCS standard for time?)
  • Sort itself
  • Truncate
  • concatenate (add two together)
  • Interpolate to new time stamps
  • add/subtract light curves from one another
  • Metadata (need to keep some information about energy ranges etc)
  • Regions of interest (in the light curve) or masks?
  • Values need units and uncertainties
  • Plot itself
  • Support for variable time bin sizes
  • optional attribute or method for sky background

All of these are open for discussion.
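A rough skeleton covering a few of these bullet points (uncertainties, sorting, truncation, plotting); all names are illustrative only, assuming Poisson-distributed counts for the uncertainties:

import matplotlib.pyplot as plt
import numpy as np

class Lightcurve(object):
    def __init__(self, time, counts, dt):
        self.time = np.asarray(time)              # bin midpoints
        self.counts = np.asarray(counts)
        self.counts_err = np.sqrt(self.counts)    # Poisson uncertainties
        self.dt = dt

    def truncate(self, tstart, tstop):
        mask = (self.time >= tstart) & (self.time <= tstop)
        return Lightcurve(self.time[mask], self.counts[mask], self.dt)

    def sort(self):
        order = np.argsort(self.time)
        return Lightcurve(self.time[order], self.counts[order], self.dt)

    def plot(self):
        plt.errorbar(self.time, self.counts, yerr=self.counts_err, fmt="o")
        plt.xlabel("Time")
        plt.ylabel("Counts")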

Docs with Sphinx are not compiling on Travis

This is the error message:

/home/travis/build/StingraySoftware/stingray/stingray/__init__.py:docstring of stingray.AveragedPowerspectrum:5: ERROR: Unexpected indentation.

Should be easy to fix.

Implement read and write functionalities in all classes

Lightcurve, Powerspectrum, Eventlist, etc. objects should have a way to be read and written with no loss of information, preferably using a high-performance file format such as HDF5 directly. We might start with the simpler pickle.
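A minimal sketch of the pickle-based starting point (method names are placeholders; an HDF5 backend could later implement the same round-trip contract):

import pickle

class Lightcurve(object):  # same idea for Powerspectrum, EventList, ...
    def write(self, filename):
        # dump the full object, with no loss of information
        with open(filename, "wb") as fobj:
            pickle.dump(self, fobj)

    @staticmethod
    def read(filename):
        # return the object exactly as it was written
        with open(filename, "rb") as fobj:
            return pickle.load(fobj)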

Make package pip-installable

Is the package installable? Relatedly, it should probably at some point also go on the python package index. This might become important if we want to get help from astronomers in terms of testing new code.

Tests doc looks wrong

The nosetests command is not found because nose is not included in requirements.txt.
I guess nose is currently pulled in as a dependency of some other library in that file, even though it should be set as a requirement of the project itself.

Additionally, I've had some problems installing nose on OS X El Capitan - I'm trying to fix that. It's probably not a stingray problem, but maybe it should be covered in the installation instructions.
If the installation section of the README gets too long, I guess we should extract it to a separate doc file.

Do not assume a power-of-2 number of bins

Modern FFT algorithms do not assume a power-of-two number of bins. Although they are generally faster for power-of-two binning, they work with any binning. This has some implications. First of all, we can allow input light curves to be of any length, which is convenient in particular when memory is an issue; second, if we allow this, the frequency array in the PDS class has to be defined differently than it is now. The function scipy.fftpack.fftfreq should do the trick, as it gives the right frequency array for any number of bins. We can then filter the PDS with a mask on the positive frequencies when relevant.
Docs: http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.fftpack.fft.html
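For example, a sketch of the proposed approach:

import numpy as np
from scipy.fftpack import fft, fftfreq

counts = np.random.poisson(100, size=1000)  # any length, not just a power of 2
dt = 0.1                                    # bin size in seconds

freqs = fftfreq(len(counts), d=dt)          # correct for any number of bins
powers = np.abs(fft(counts)) ** 2

mask = freqs > 0                            # keep only positive frequencies
freq, ps = freqs[mask], powers[mask]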

Compatible read/write methods

In their current form, read/write methods are a little messy.
Example 1:
_save_fits_object takes a class instance and saves it in a FITS file.
_retrieve_fits_object returns a dictionary. I would expect to get back the same kind of object I saved in the first place!

Example 2:
If I want to save a Lightcurve, I can use the write method and retrieve it with read. Great! But... if I save and retrieve in ASCII format, I get back an Astropy Table. If I save and retrieve in pickle or HDF5 format, I get no return value; the light curve is modified in place instead.

etc.

I think a discussion about I/O is needed, and we should find a common, predictable behavior for all classes. We might want to take a look at Astropy, again ;)

Travis-CI not enabled

Hello Developers,

Tests must be run on every Pull Request before they are merged. I see a .travis.yml file in the repo but Travis did not run on #42. I think the integration is not done yet for this repository.

`AveragedPowerspectrum.rebin()` doesn't work

Using rebin on an AveragedPowerspectrum raises the error:

TypeError                                 Traceback (most recent call last)
<ipython-input-12-d9030309c98d> in <module>()
----> 1 raps = aps.rebin(df=6, method='mean')

/home/evandromr/github/stingray/stingray/powerspectrum.py in rebin(self, df, method)
    172 
    173     def rebin(self, df, method="mean"):
--> 174         bin_ps = Crossspectrum.rebin(self, df=df, method=method)
    175         bin_ps.nphots = bin_ps.nphots1
    176 

/home/evandromr/github/stingray/stingray/crossspectrum.py in rebin(self, df, method)
    232         # make an empty cross spectrum object
    233         # note: syntax deliberate to work with subclass Powerspectrum
--> 234         bin_cs = self.__class__()
    235 
    236         # store the binned periodogram in the new object

TypeError: __init__() missing 2 required positional arguments: 'lc' and 'segment_size'

This happens because:

  1. AveragedPowerspectrum requires lc and segment_size but AveragedCrossspectrum doesn't (they default to None).
     • Proposed fix: make lc and segment_size default to None in AveragedPowerspectrum and allow an 'empty' instantiation of the class.
  2. Crossspectrum.rebin tries to assign nphots2, which doesn't exist in AveragedCrossspectrum.
     • Proposed fix: Crossspectrum.rebin() checks the attribute self.type and assigns self.nphots1 and self.nphots2 if self.type == 'crossspectrum', or self.nphots if self.type == 'powerspectrum' (and raises a TypeError if self.type is anything else).

I can implement these fixes and also add one or two tests for the binning of AveragedPowerspectrum.
Until then, feel free to comment on this or suggest a different approach to the fix.

Method to apply dead-time correction to PSDs

At some point, it would be useful to implement a method in Powerspectrum that allows users to apply the dead-time correction from Zhang et al (1995) to the power spectrum. It should probably also throw a warning that this is only strictly valid for RXTE data.

Not urgent; more a reminder that this would be useful to have.

Cloning - documentation improvement

The first thing I saw when I tried to build the sphinx docs was:

Exception occurred:
  File "conf.py", line 44, in <module>
ImportError: No module named astropy_helpers.sphinx.conf

astropy_helpers is a git submodule, and I think the README should include some info about it and how to clone submodules.

Power spectrum or periodogram?

How do we want to call that class? "Powerspectrum" or "Periodogram"?
I used to call it Powerspectrum, and have sort of convinced myself that "Periodogram" makes
more sense ("power spectrum" == the process that created what we see; "periodogram" == the realization of the process that was actually observed), but I could be convinced either way.
I guess power spectrum is the more commonly used term?

Warnings and logging

Let's use either the logging module or astropy.log for logging, and warnings for warnings. These modules can handle very consistently, for example, the redirection of warnings to certain files or to an interface with the user, and handle the level of verbosity automatically. Much better than a simple print, at least. print should be used sparingly for corner cases, but not for the bulk of output. http://docs.astropy.org/en/stable/development/codeguide.html#standard-output-warnings-and-errors
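A sketch of the proposed division of labor (the function and messages are illustrative only):

import logging
import warnings

import numpy as np

logger = logging.getLogger(__name__)

def rebin(data, factor):
    data = np.asarray(data)
    if factor <= 1:
        # user-facing caveat: recoverable, but the user should know
        warnings.warn("Rebinning factor <= 1; returning data unchanged")
        return data
    # diagnostic detail, shown only at the chosen verbosity level
    logger.debug("rebinning %d samples by a factor of %d", len(data), factor)
    n = (len(data) // factor) * factor
    return data[:n].reshape(-1, factor).sum(axis=1)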

Pulsar methods

This is for the future: for QPO studies, in the future implementing pulsar timing methods will be very useful. We should think in detail about which methods to include and how to best incorporate them.

Note: Matteo Bachetti tells me that there's some code in MaLTPyNT that we might be able to port.

Specify versions in requirements.txt

A trivial issue but this could come in handy nevertheless.
We can decide on and specify the (specific or lowest) versions of the required packages. This would help avoid ambiguity during setup and potential compatibility issues.

Add a 'dataset' module

I propose adding a dataset module that allows users to import sample light curves. I have located the data here, which we can use freely. Maybe, we can start off by allowing users to import data of certain celestial bodies. Any suggestions?

Use `pandas` or `astropy.tables` in Lightcurve?

There was a suggestion today to use either pandas or astropy.tables within the Lightcurve class. pandas has the advantage that it has lots of functionality built-in.

astropy.tables has the advantage that it connects to other useful astropy concepts such as units and time concepts (which might allow us to provide time conversions more easily).

Not something that needs to be fixed instantaneously, more a call for discussion.

Energy intervals

Many methods (Powerspectrum, Bispectrum, etc.) need to be evaluated only in user-specified energy intervals, but once we extract a light curve from the events without applying interval filters, we lose that information.

I think we can tackle this issue in two ways:

  1. Create an intermediate structure (or a more general light curve?) that defines standard energy channels (link1, link2) at whatever granularity we require, and stores the relevant counts in each channel for every individual timestamp. This would allow fast querying and construction of a light curve for given intervals, but we might waste memory if counts are sparse across the defined channels.

  2. Alternatively, create no intermediate structure, so that the event list remains the standard point of access; every time different intervals are provided, we traverse the whole event list to create the light curve, which is an unnecessary burden if the data isn't sparse. The small advantage is that we don't have to stick to standard channels and a fixed granularity when interval boundaries need to be strict.

Open for discussion on better ways to go about it.
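For reference, a minimal sketch of approach 2 (the helper name and signature are hypothetical):

import numpy as np

def lightcurve_from_events(times, energies, emin, emax, dt):
    # traverse the full event list for each requested energy interval
    mask = (energies >= emin) & (energies < emax)
    selected = times[mask]
    bins = np.arange(times.min(), times.max() + dt, dt)
    counts, _ = np.histogram(selected, bins=bins)
    return bins[:-1] + dt / 2, counts  # bin midpoints, counts per bin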

Alter read method in io.py

Currently, in order to read FITS files for missions like RXTE and XMM, the load_events_and_gtis() method in io.py is called by the user. This returns a dictionary object with key-value pairs. This is useful but inconsistent with the read() functionality of stingray.

I am wondering if we should call the load_events_and_gtis() method from within read() in io.py to make it consistent. One approach could be to add a mission parameter to the read() method. It would look as follows:

def read(filename, format_='pickle', mission=None):
    if mission is None:
        pass  # follow the normal routine for the given format_
    else:
        # delegate to the mission-specific FITS reader
        return load_events_and_gtis(filename)

For now, the mission parameter would just decide whether to use the normal routine or the mission-specific load_events_and_gtis() method. Later, however, it could be used to take specific actions depending on the mission.

Thoughts?

Examples of existing classes and methods

The current implementations of classes and methods do not have examples for them. While this can be a long term process, we can start by

  • Writing examples in the docstrings and getting them tested by doctests.
  • Have a tutorial page in the documentation which may act as a user guide.
  • Have a notebooks repository in the organization, cross referenced, which contains numerous examples in the form of IPython notebooks.

Lightcurve.rebin and Powerspectrum.rebin return x, y with different lengths

lc = old_lc.rebin(new_dt) returns a new Lightcurve object with the lc.time array one element longer than lc.counts (or lc.countrate).

ps = old_ps.rebin(new_df) returns a new Powerspectrum object with the ps.freq array one element longer or shorter than ps.ps.

Only with Powerspectrum.rebin did I see the result vary (i.e. freq is one element longer or shorter, depending on the binning factor), but I only used one data set (with different binning factors).

I believe that, in the scope of the class, the rebin method should return equal-length arrays, even if that means dropping data at the extremities; in that case it might be useful to raise a Warning to the user.
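A sketch of the suggested behavior (hypothetical helper, not the current implementation):

import warnings

import numpy as np

def rebin_equal_length(x, y, factor):
    # rebin by an integer factor, dropping leftover samples so that
    # x and y stay the same length
    x, y = np.asarray(x), np.asarray(y)
    n = (len(x) // factor) * factor
    if n < len(x):
        warnings.warn("Dropping %d samples at the end of the light curve"
                      % (len(x) - n))
    x_new = x[:n].reshape(-1, factor).mean(axis=1)  # new bin midpoints
    y_new = y[:n].reshape(-1, factor).sum(axis=1)   # summed counts
    return x_new, y_new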

"Join" operation with empty EventList crashes

Minimal example:

In [6]: e = EventList()

In [7]: e0 = EventList(time=np.array([0,1]))

In [8]: e.join(e0)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-4e555906d9ca> in <module>()
----> 1 e.join(e0)

/Users/meo/devel/spyder_projects/stingray/stingray/events.py in join(self, other)
    269 
    270         if (self.time is None) or (other.time is None):
--> 271             raise ValueError('Times of both event lists must be set before joining.')
    272 
    273         ev_new.time = np.concatenate([self.time, other.time])

ValueError: Times of both event lists must be set before joining.

Typo in stingray/simulator/tests/test_simulator.py ?

Hi,

@usmanwardag, you had the last commit in the file so maybe you can help me.

In line 255 of the file stingray/simulator/tests/test_simulator.py:

Shouldn't it be v_cutoff = 1.0/(2*15.0), instead of v_cutoff = 1.0/(2*10.0) ?

I copied the relevant test function below to make it easier. I can change it in a PR myself but I wanted to check if it's really a typo or I got it wrong, since the test only failed after I did a small change in the utils.rebin() method.

245 def test_simple_lag_spectrum(self):
246     """
247     Simulate light curve from simple impulse response and
248     compute lag spectrum.
249     """
250     lc = sampledata.sample_data()
251     h = self.simulator.simple_ir(start=14, width=1)
252     delay = int(15/lc.dt)
253
254     lag = self.calculate_lag(lc, h, delay)
255     v_cutoff = 1.0/(2*10.0)               <<<----------------- HERE
256     h_cutoff = lag[int((v_cutoff-0.0075)*1/0.0075)]
257
258     assert np.abs(15-h_cutoff) < np.sqrt(15)

Export LightCurves in `fits` format

The simulator project requires porting maltpynt's MPfake functionality to create event lists from light curves. Since maltpynt accepts light curves in FITS format, I think it would be a good idea to add functionality to export LightCurves in this format. This would make the two libraries much more compatible.

Take GTIs into account when calculating power/cross spectra

Up to now, there is no mention of GTIs when calculating power spectra or cross spectra. Data are assumed to be without gaps. This might work for long XMM observations, but it will not work, for example, for NuSTAR where typical observations are much longer than the orbital period of the satellite.

In MaLTPyNT, time intervals to calculate the power spectrum are chosen so that the usage of GTIs is optimized. Segments start at the start of each GTI and they never go out of GTIs.
See decide_spectrum_lc_intervals and decide_spectrum_intervals here: https://github.com/matteobachetti/MaLTPyNT/blob/master/maltpynt/fspec.py
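In outline, the strategy looks like this (an illustrative helper, not the ported function; gtis is a sequence of (start, stop) pairs):

def spectrum_intervals(gtis, segment_size):
    # segments start at the beginning of each GTI and never cross
    # a GTI boundary
    starts = []
    for gti_start, gti_stop in gtis:
        t = gti_start
        while t + segment_size <= gti_stop:
            starts.append(t)
            t += segment_size
    return starts

# e.g. spectrum_intervals([(0, 250), (300, 700)], 128)
#      -> [0, 300, 428, 556]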

Absolute rms normalization

Need to add absolute rms normalization for power spectra (and re-name current rms normalization to fractional rms normalization).
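For reference, a sketch of the two normalizations under the usual conventions (e.g. van der Klis 1989); the names and factors should be double-checked against whatever convention we adopt:

def normalize(unnorm_power, dt, n_bin, mean_rate, norm="frac"):
    # unnorm_power: |FFT|^2 of the counts; mean_rate: count rate in ct/s
    mean_counts = mean_rate * dt
    frac = unnorm_power * 2 * dt / (n_bin * mean_counts ** 2)
    if norm == "frac":
        return frac                   # fractional rms: (rms/mean)^2 per Hz
    elif norm == "abs":
        return frac * mean_rate ** 2  # absolute rms: rms^2 per Hz
    raise ValueError("unknown normalization: %s" % norm)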

Move tests to subdirectory

I think it would be better if the tests relevant to a module were in a tests/ directory at the same path as the module. This will be particularly useful when the codebase grows into more submodules.

Uncertainties in Lightcurve and (Averaged) Powerspectrum

Hi,

I believe it's important to implement attributes to store uncertainties in both light curves and power spectra.

For a light curve this can easily be done by taking the square root of the counts, since the data are Poisson-distributed.
(Check the branch patch_errors in my fork for a very rushed implementation.)

For a power spectrum it might depend on conventions. I believe the behavior of the HEASARC FTOOL powspec is to assign a 100% error bar to an individual power spectrum (from a single segment), and for an averaged power spectrum it uses the standard deviation of the power across the averaged segments' power spectra.
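Both conventions in a nutshell (variable names are illustrative, with stand-in data):

import numpy as np

# light curve: Poisson errors
counts = np.random.poisson(100, size=1000)
counts_err = np.sqrt(counts)

# averaged power spectrum: scatter of the power across M averaged segments
M, n_freq = 16, 512
segment_powers = np.random.chisquare(2, size=(M, n_freq))  # stand-in data
power = segment_powers.mean(axis=0)
# standard deviation across segments; divide by sqrt(M) for the error
# on the mean, depending on the convention adopted
power_err = segment_powers.std(axis=0, ddof=1) / np.sqrt(M)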

Light curve should throw warning if the input data is not on regular grid

I think that when making a Lightcurve object, the code should check whether the input time array defines bins of equal size all the way through, and throw a warning if it doesn't.

Many of the algorithms we define here depend critically on a regular time grid, so that not having one might make the results unpredictable. Hence it would be good to warn the user that they put in data that is unexpected and that they might not get sensible results.
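A minimal check along these lines (illustrative helper):

import warnings

import numpy as np

def check_regular_grid(time, rtol=1e-5):
    # warn if the time stamps do not define bins of equal size
    dt = np.diff(time)
    if not np.allclose(dt, dt[0], rtol=rtol):
        warnings.warn("Input time array is not on a regular grid; "
                      "FFT-based results may be unreliable.")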

Inconsistency in the results of the binning methods

As far as I could understand:

  • On powerspectrum._fourier_modulus we are excluding the zeroth frequency (when doing freqs[freqs > 0])
    • This means that the first element of the frequency array is supposed to be equal to the frequency resolution (freqs[0] = df).
    • This also means that the frequency array represents the central value of each frequency bin.
  • On a Lightcurve object I also expect the time array to represent the central value of each time bin.

So far so good, my problems are:

  • When we call Lightcurve.rebin_lightcurve, the x-array (time bins) is not what I would expect if we want the central value. For example: if my time array is [1,2,3,4,5,6,7,8,9,10] (dt = 1) and I want to bin by a factor of 2 (dt = 2), to keep the same logic I would expect to end up with [1.5, 3.5, 5.5, 7.5, 9.5] (dt = 2, with bins centered on the mean of the combined bins); instead we get [2,4,6,8,10]. I believe this is connected to the expected behavior that I mention in the next bullet point.

  • When calling Powerspectrum.rebin we do (lines 283 and 284):

        bin_ps.freq = np.hstack([binfreq[0] - self.df, binfreq])
        bin_ps.ps = np.hstack([self.ps[0], binps])
    

    Since we already excluded the zeroth frequency, there's no need to stack the binned arrays that come as output from utils.rebin_data; the resulting binfreq array already has the new resolution df_new at its first index (assuming the original frequency array starts at the original df_old, as expected, though this is not checked). This would make the test that I reference in this comment pass.

I may be able to make the necessary modifications but I need input in what and where to change.
For example, should we change the behavior of Lightcurve.rebin_lightcurve, Powerspectrum.rebin or utils.rebin_data?

Or is it working as expected and I'm just confused?

Also I would suggest to either rename Lightcurve.rebin_lightcurve to Lightcurve.rebin, or Powerspectrum.rebin to Powerspectrum.rebin_powerspectrum, thoughts?

python 2.7 and 3.5 compatibility

I ran nosetests using 3.5 today and found two major incompatibility errors in the code:

  • the built-in function xrange does not exist in python 3. Instead, range (which creates a static list in python 2) works like xrange in python 3 (where it produces a lazy sequence instead). We could in principle replace xrange with range entirely; however, if someone on python 2.7 generates a very large list with, say, a billion entries, they might run into memory errors. It's unlikely, but not impossible.
  • one of the tests uses func_defaults, which does not exist in python 3, either (the python 3 syntax is __defaults__).

We need to make a policy decision on how to deal with these problems. I'd rather not have the code break in either python 2.7 or python 3.5. @jdswinbank suggested we could check for the existence of xrange in the __init__ file and define range there, or, if this happens quite often, we could use a tool to help us make the code compliant with both.
My first instinct is to look at it on a case-by-case basis for now, and if we get swamped with problems, we might switch to an automatic tool. Thoughts?
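For reference, the check suggested above could look like this (a sketch, e.g. in stingray/__init__.py):

try:
    range = xrange  # python 2: use the lazy xrange
except NameError:
    pass            # python 3: built-in range is already lazy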

Porting MaLTPyNT as scripts

To start the merge between MaLTPyNT and Stingray, I suggest beginning by porting MaLTPyNT into the same codebase, but as a separate package called scripts.

Then, before starting with the full merge, I would put the following priorities:

  1. Make MaLTPyNT's tests work on Travis CI from Stingray's codebase
  2. Make MaLTPyNT's docs merge with Stingray's

Then, we could start changing MaLTPyNT's internals in order to use more and more of Stingray's API:

  • lightcurves returned by functions -> `Lightcurve` objects
  • pds, cpds -> `Powerspectrum`s, `Crossspectrum`s (depends on #34)
  • ... and so on. EventLists are the last, as they haven't yet been fully defined. Depends on #40

Finally, I/O should also change: from the current dictionaries saved as pickle or NetCDF-4 files to a full use of Stingray's objects' read and write functions (something might be recycled). Depends on #56

Modify Readme - add link to doc and example of usage

Hello, It's my first issue (proposal) for this project.
EDITED:
I already found http://stingray.readthedocs.org/index.html - is there a link to this documentation in the GitHub repository?
What do you think about creating a more user-friendly doc?

ORIGINAL:
What do you think about publishing the documentation on GitHub Pages and adding some CI actions (I see you use Travis CI) to build the documentation automatically?
It would definitely be more convenient for new users who'd like to just see what the library can do.
I can try to do this.

Plan:

  1. Build script to push current sphinx documentation on GitHub pages
  2. Create a Travis CI job (I've never used Travis, only Jenkins, so maybe there is no such thing as a "job", but I hope it's understandable)
  3. Integrate this job with PRs

Coding guidelines?

We should also decide on some coding guidelines, I think, to make our lives easier (and code more consistent).

Here are some questions we might want to answer, or some suggestions, all of which are up for discussion:

  • follow astropy coding guidelines in general
  • indentation is ONLY with spaces (four spaces, to be precise), and no tabs-and-spaces-mixing.
  • follow a yet-to-be-decided subset of PEP8?
  • the maximum length of a line should be 79 characters
  • functions/methods should be lower-case only, those consisting of more than one word should be separated by a "_" (e.g. def this_is_my_method() )
  • make variable names as descriptive as possible (readability > economy). Only loop iteration variables are allowed to be a single letter.
  • classes start with an upper-case letter and can be CamelCase
  • never use "from mypackage import *" to avoid namespace clashes (except in the stingray init file)
  • all code should be submitted via pull request only (i.e. fork, branch, work on stuff, submit pull request). The point is that code reviews are super-useful: the other person can then review the code, which means we'll both be up to date with how everything fits together, and we can get better by reading each other's code! :)
  • all methods/functions/classes should have docstrings, written in reStructured Text format since this is what sphinx supports
  • all docstrings should be numpy-style https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
  • include examples in docstrings wherever applicable
  • use sphinx to build documentation
  • all extra documentation should go into a /docs subdirectory under the main stingray directory
  • line comments should start with # (one hash)
  • all tests should be py.test compliant: http://pytest.org/latest/
  • keep all tests in a /tests subdirectory under the main stingray directory
  • write one test script per module in the package
  • extra examples can go into an /examples folder in the main stingray directory; scripts that gather various data analysis tasks into longer procedures go into a /scripts folder in the same location

Anything vital I missed, @abigailStev? Any additional suggestions or comments welcome!

First Release Plan

If we really aim for a Jan 31 first release, I think it'd help us stay on track to have a plan for a minimal release, i.e. everything we'd like to have included in our first release.

Here's my list:

  • make light curves
  • make (& average) power spectra
  • fit/sample power spectra with empirical models (power laws, Lorentzians, ...)
  • make (& average) cross spectra, compute coherence and lags
  • Timmer+Koenig implementation?
  • anything else that's essential that I am missing?

It should also include

  • documentation for all of the above
  • tests for all of the above
  • jupyter notebook tutorials for each of the above bullet points

I'd vote for not including too much in the first release, but making sure that all of it is well-documented and well-tested. I already have some code (and a few tests) for the first three bullet points, which I could slowly commit over the next couple of weeks, unless @abigailStev thinks we should write it from scratch, or use her version (both of which I'd be fine with, too).

Discuss! :)

Parallelization

Maybe it would be interesting to parallelize some methods (especially those handling massive data) to improve performance. Or maybe the performance is already quite good, and this would only complicate things without reason.

Shouldn't rebin and rebin_log both return `*Spectrum`s?

As things are now, Crossspectrum.rebin returns a Crossspectrum object while Crossspectrum.rebin_log returns a tuple (frequency, power, number). Is there a specific reason for this? Shouldn't they return the same kind of value (preferably a Crossspectrum?)

Use distributed computing for large data sets

One thing that came out of the Lorentz Center workshop was the need to think about new data sets that are much larger than the ones we currently have. Specifically, Astrosat is having some concerns about both runtime and memory for things like power spectra and cross spectra.

This is a reminder to us to explore these issues in the future. One library that might be useful for doing this is dask.

Code cleaning/refactoring

I'm looking through stingray's code and I saw some things I think we should avoid.

utils.py has 2 private methods which are used in the io.py file.
Not all imports are at the top of the file.
Sometimes there are spaces between operators, sometimes not:
1+2 vs 1 + 2
(I prefer the second version)

I know that issues like these are not really important, but they might be copied by other contributors and end up consuming more time later as bigger "problems".

Let me know if this should be "fixed"; I will try to do it.

Move GTI related code into its own module

Currently, io.py is fairly big, partly because there's a lot of code related to Good Time Intervals (GTIs) in there. This should go into its own module so we can keep io.py clean for the basic input/output functions.
