
:snake: Python visualization tools for openPMD files

Home Page: https://openpmd-viewer.readthedocs.io/

License: Other

Languages: Python 99.45%, Jupyter Notebook 0.55%
Topics: openpmd, openscience, visualization, jupyter-notebook, research, community

openpmd-viewer's Introduction

openPMD-viewer


Overview

This package contains a set of tools to load and visualize the contents of a set of openPMD files (typically, a timeseries).

The routines of openPMD-viewer can be used in two ways:

  • Use the Python API, in order to write a script that loads the data and produces a set of pre-defined plots (see the short example after this list).

  • Use the interactive GUI inside the Jupyter Notebook, in order to interactively visualize the data.
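For instance, a minimal, hedged sketch of the Python API (the path below is only an example, and the exact signatures, keyword arguments and return values may differ between versions):

from opmd_viewer import OpenPMDTimeSeries

# Open a time series of openPMD files (example path)
ts = OpenPMDTimeSeries('./diags/hdf5/')

# Load and plot a field at the output closest to t=0
rho = ts.get_field(t=0., field='rho', plot=True)

# Load particle quantities for a given species
x, uz = ts.get_particle(var_list=['x', 'uz'], species='electrons', t=0.)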

Usage

Tutorials

The notebooks in the folder tutorials/ demonstrate how to use both the API and the interactive GUI. You can view these notebooks online here.

Alternatively, you can even run our tutorials online!

You can also download and run these notebooks on your local computer (when viewing them with the above link, click on Raw to save them locally). In order to run them locally, please install openPMD-viewer first (see below), as well as wget (pip install wget).

Notebook quick-starter

If you wish to use the interactive GUI, the installation of openPMD-viewer provides a convenient executable which automatically creates a new pre-filled notebook and opens it in a browser. To use this executable, simply type in a regular terminal:

openPMD_notebook

(This executable is installed by default when installing openPMD-viewer.)

Installation

Installation on a local computer

Installation with conda

In order to install openPMD-viewer with conda, please install the Anaconda distribution, and then type

conda install -c conda-forge openpmd-viewer

If you are using JupyterLab, please also install the jupyter-matplotlib extension (See installation instructions here).

Installation with pip

You can also install openPMD-viewer using pip

pip install openpmd-viewer

In addition, if you wish to use the interactive GUI, please type

pip install jupyter

Installation on a remote scientific cluster

If you wish to install the openPMD-viewer on a remote scientific cluster, please make sure that the packages numpy, scipy and h5py are available in your environment. This is typically done by a set of module load commands (e.g. module load h5py) -- please refer to the documentation of your scientific cluster.

Then type

pip install openPMD-viewer --user

Note: The package jupyter is only required for the interactive GUI, and thus it does not need to be installed if you are only using the Python API. For NERSC users, access to Jupyter notebooks is provided when logging in at https://ipython.nersc.gov.

Contributing to the openPMD-viewer

We welcome contributions to the code! Please read this page for guidelines on how to contribute.

openpmd-viewer's People

Contributors

angelfp, ax3l, benwibking, berceanu, fchapoton, franzpoeschel, hightower8083, hvincenti, iliancs, jgerity, juliettepech, marc-marcos, maxthevenet, pordyna, prometheuspi, remilehe, rtsandberg, soerenjalas


openpmd-viewer's Issues

Keep zoom level when changing time step

Currently, the zoom level resets to unzoomed when the time step is changed. This can be inconvenient when one wants to focus on a detail and study its change over time.
Keeping the zoom level, just as all the other settings are kept, would be very helpful.

Off-topic: Cool tool!!!!

OpenPMDTimeSeries: Unit Systems

To improve the generality of the viewer without harming its usefulness for a specific domain, we could make the following adjustment regarding unit systems:

By default, the OpenPMDTimeSeries should not convert, rename or exclude records.
But we could allow setting a unit system that is used for reading data and formatting plots, for example:

from opmd_viewer import OpenPMDTimeSeries
import opmd_viewer.unit_systems

ts = OpenPMDTimeSeries('...')
# change to lambda0 & c based system
ts.set_unitsystem(unit_systems.LPA(800.e-9))
# change back to SI
ts.set_unitsystem(unit_systems.SI())
# change a plasma system
ts.set_unitsystem(unit_systems.Plasma(1.e15))

# read or plot data now

The important thing is that we apply these transformations as we read the raw data, together with the unitSI multiplications, etc.

What can be Converted

  • change record scaling (maybe by reading additional records such as mass of particles)
  • change record naming (e.g., "momentum/x" -> "ux")
  • change base-unit labels ("m" -> "lambda_0" or "m" -> "micron")

Example Unit Systems (Pre-Defined)

  • SI(): just forward what we read scaled by unitSI
  • Microns(): use microns instead of meters for lengths
  • LPA(lambda0): scale length to lambda0, times to lambda0/c, speed to c, ...
  • Plasma(omega_pe): scale time by (2 pi) / omega_pe, speed by c, length by (c * 2 pi) / omega_pe, ...
  • Raw(): ignore all scalings including unitSI (for debugging codes)
  • CGS(): I am just kidding, cgs units are deprecated and will be removed in future versions of science 😈
  • ...

If we document the interface of those classes properly, power users could even plug in their own unit-system converters, including ones we have not implemented (yet).
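For illustration only, here is a rough sketch of what such a documented interface could look like (classes and method names are hypothetical, not an existing API):

# Hypothetical interface sketch -- none of these classes exist yet.
class UnitSystem(object):
    """Convert raw openPMD records into the target unit system."""

    def scale_record(self, name, data, unit_si):
        # Default: plain SI, i.e. only apply the unitSI factor
        return data * unit_si

    def rename_record(self, name):
        # Default: keep the openPMD naming, e.g. "momentum/x"
        return name

    def axis_label(self, unit_label):
        # Default: keep SI base-unit labels, e.g. "m"
        return unit_label


class LPA(UnitSystem):
    """Laser-plasma units, based on a reference wavelength lambda0."""

    def __init__(self, lambda0):
        self.lambda0 = lambda0

    def scale_record(self, name, data, unit_si):
        data = data * unit_si            # convert to SI first
        if name.startswith('position'):
            data = data / self.lambda0   # lengths in units of lambda0
        return data

    def axis_label(self, unit_label):
        return 'lambda_0' if unit_label == 'm' else unit_label

A power user could then subclass UnitSystem and pass an instance to the proposed set_unitsystem call above.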

Re-Naming of Records

During the change of the unit system we can also rename the records, since, e.g., a momentum in beta*gamma (ux) is more readable and better known in a specific domain.

current_i: assumes zero-base & is dump number

We might need to update OpenPMDTimeSeries to use dictionaries keyed by iteration instead of plain arrays.

Right now, the viewer assumes that all simulations are zero-based in iterations. This still works when restarting (e.g. for high-resolution output) with output dumps only in a specific interval [i_min:i_max], but variables like current_i are then wrong.

Also, because of this, current_i is actually the index of the n-th output and not the iteration of that output.

We can also generalize _find_output to return the found time and iteration again, instead of modifying the members directly (this would allow implementing new member functions such as find_time(iteration) and find_iteration(time)); see the sketch below.
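A very rough sketch of such a generalized lookup, assuming the time series stores parallel arrays self.t and self.iterations (the latter is a hypothetical attribute):

import numpy as np

def _find_output(self, t=None, iteration=None):
    """Return the (time, iteration) of the closest available output."""
    if iteration is not None:
        i = np.argmin(np.abs(self.iterations - iteration))
    elif t is not None:
        i = np.argmin(np.abs(self.t - t))
    else:
        i = self.current_i
    return self.t[i], self.iterations[i]

# find_time / find_iteration then become thin wrappers:
def find_time(self, iteration):
    return self._find_output(iteration=iteration)[0]

def find_iteration(self, t):
    return self._find_output(t=t)[1]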

wget in tutorials not a standard python (2.7) module

Hi,
this is really a minor thing: I noticed that my Anaconda Python does not ship with wget, which is used to download the sample data in the tutorials. I'm not sure whether this is because my Anaconda is outdated or whether this is standard.
This could maybe lead to some confusion for inexperienced Python users. I think adding it to the requirements.txt is a bit overkill, since it is technically not needed for the viewer itself, but we could maybe add a side note in the tutorials.
If wget now comes with python, forget all I've said above :)

Include particle selection in dev branch

I noticed that the dev branch is a few commits behind the master branch. I think we should try to keep the branch up to date, since the contribution guidelines say to fork and update from this branch. Could you merge the current master branch into the dev branch?

Slider: Record Exclude List

The slider() could accept an optional exclude_particle_records=list argument, which could be used to exclude charge and mass, as is currently hard-coded.
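For example, the proposed (not yet existing) argument could be used like this:

# Hypothetical call -- exclude_particle_records is only a proposal
ts.slider(exclude_particle_records=['charge', 'mass'])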

Slider: Naming Slice

With "slice direction" in the OpenPMDTimeSeries.slider we actually mean the slice normal, don't we? Should we rename it, I find the term direction to be quite arbitrary in the widget (because a plane would need at least to base vectors pointing somewhere in the plane to span a slice).

PIConGPU 2D x and y axis ranges are wrong

When using openPMD-viewer's OpenPMDTimeSeries with the slider() method in 2D, the ranges of the x and y axes are interchanged.

Full call: [screenshot openpmd_viewer_error01]

Zoomed into the plot output: [screenshot openpmd_viewer_error02]

In PIConGPU, the laser propagates in the +y direction, as correctly marked by the axis labels. However, the ranges given are interchanged.
(In this LWFA simulation, I used a moving window)

unitSI Handling

Currently, unitSI does not seem to be applied while reading data, since Warp already creates data in SI (making unitSI == 1.0).

We could combine the multiplication (scaling) with the scaling by weighting to make it more efficient.

avail_circ_modes set to None causes OpenPMDTimeSeries.slider() to fail

It appears that for Cartesian datasets, avail_circ_modes is set to None in params_reader.py, which causes an error when calling OpenPMDTimeSeries.slider() for interactive exploration in an IPython (4.2.1-aed0eae) notebook with widgets. This can be reproduced with the example datasets.

Replacing avail_circ_modes with [] as needed after creating the OpenPMDTimeSeries object seems to yield the correct behavior:

import opmd_viewer

ts = opmd_viewer.OpenPMDTimeSeries('./example-3d/hdf5/')

%matplotlib inline
if ts.avail_circ_modes is None:
    ts.avail_circ_modes = []
ts.slider()

Unless I've misunderstood something about this object, using the empty list [] instead of None at params_reader.py:L99 will resolve the issue without the need for intervention on the user's part.
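A minimal sketch of that change (the surrounding code is paraphrased, not quoted from the actual file):

# Hypothetical sketch around params_reader.py:L99
if geometry == 'thetaMode':
    avail_circ_modes = ['0', '1', 'all']
else:
    avail_circ_modes = []   # empty list instead of None for Cartesian data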

Add particle tracker in openPMD-viewer

For one of the sets of simulations that I am currently doing, I would need to be able to track particles in postprocessing.

Assuming that the ids of the particles have been stored in the openPMD data, this should be possible with openPMD-viewer, if we add this feature.

I would definitely volunteer for this, but I would like us to first agree on the API. I thought about introducing a ParticleTracker object. This object would be used in the following way, for instance in order to select electrons having a longitudinal momentum between 10 and 100 at iteration 2000, and then fetch the positions of these electrons at iterations 30000 and 50000.

ts = OpenPMDTimeSeries( 'some/directory' )

# Select particles at iteration 2000
pt = ParticleTracker( ts, species='electrons', select={'uz':[10,100]}, iteration=2000 )

# Fetch them at iteration 30000 and 50000
x1, y1, z1 = ts.get_particle( ['x', 'y', 'z'], particle_tracker=pt, iteration=30000 )
x2, y2, z2 = ts.get_particle( ['x', 'y', 'z'], particle_tracker=pt, iteration=50000 )

In this case, x1, y1, etc. would be arrays that have the same length as the initial number of particles selected at iteration 2000, and a given tracked particle would have the same position within the arrays x1 and x2, for instance. There would be NaNs in x1 if some of the particles were present at iteration 2000 but no longer at iteration 30000.

@ax3l @soerenjalas @MKirchen @jgerity : Does the above API make sense to you? Do you have suggestions for improvements, before I start coding?

`check_all_files=False` breaks slider and time access of data

Hi,
in its current implementation, check_all_files=False prevents the usage of the slider widget as well as the ability to call get_field(t=50years...). This happens because self.t is acquired in the same for loop where the data checking is done:

self.t = np.zeros( N_files )
...
if check_all_files:
    for k in range(1, N_files):
        t, params = read_openPMD_params(self.h5_files[k])
        self.t[k] = t
        for key in params0.keys():
            if params != params0:
                print("Warning: File %s has different openPMD "
                      "parameters than the rest of the time series."
                      % self.h5_files[k])

This results in a self.t array filled with zeros, if check_all_files=False.

Sorry, I am not too familiar with the standard, but if the timestep is saved within the openPMD file, we could use it to determine the time from the iteration data. Otherwise (if the timestep is not allowed to change; again, I need to get more familiar with this), we could open only two files and get the timestep from the difference in time and iteration.

EDIT:
"READ THE FREAKING MANUAL!!"

Turns out this is in fact stated in the standard, but not to the benefit of the problem:

dt
type: (float)
description: The latest time step (that was used to reach this iteration). This is needed at the iteration level, since the time step may vary from iteration to iteration in certain codes. 

Okay, then I see no other way than opening all files to acquire the times.

Technically, the API should work without the times, so if we want a "fast" mode we could throw a custom exception whenever a user is in fast mode and passes a time argument.

EDIT 2:
This could be done for example in _find_output(self, t, iteration)

def _find_output(self, t, iteration):
...
        # If a time is requested
        elif (t is not None):
            # NEW
            if self.t is None:
                raise APIError(
                    'Fast-read mode does not support time data acquisition')

            # Make sure the time requested does not exceed the allowed bounds
            if t < self.tmin:
                self.current_i = 0
            elif t > self.tmax:
                self.current_i = len(self.t) - 1
...

To conclude: I think that this flag is very nice, since checking large datasets can take a great amount of time, so we should try to find an elegant solution to this problem. I also tried to use multiprocessing for the initialisation step, but my attempts failed due to the not-so-nice implementation of the Python multiprocessing module... What are your thoughts on this?
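As a possible alternative to multiprocessing, here is a minimal sketch using threads from concurrent.futures (assuming read_openPMD_params is importable from the data reader; the import path below is a guess):

from concurrent.futures import ThreadPoolExecutor

# Hypothetical import path -- adjust to the actual module layout
from opmd_viewer.openpmd_timeseries.data_reader.params_reader import \
    read_openPMD_params

def read_all_times(h5_files):
    """Read the time of every file in parallel (h5py releases the GIL
    during I/O, so threads are usually sufficient)."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(read_openPMD_params, h5_files))
    return [t for t, params in results]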

Add fit to get_main_frequency

Currently, the method get_main_frequency in LpaDiagnostics simply searches for the maximum in the FFT. This produces quite unusable data when studying e.g. the redshift of the pulse in plasma. As you can see below, the value changes quite erratically.
[plot: main frequency over time, showing the erratic values]

I think that by fitting the spectrum one could probably get better results, as more data is taken into account.
The question is whether a Gaussian fit is sufficient for this purpose, as the pulse can change quite violently or people might use exotic pulses. I can work out a PR so we could test whether this method gives better results. @RemiLehe any thoughts on this?
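A rough sketch of such a fit (assuming freq and spectrum are the frequency axis and spectral amplitude already computed inside get_main_frequency; this is not the existing implementation):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, amp, f0, sigma):
    return amp * np.exp(-(f - f0)**2 / (2 * sigma**2))

def fit_main_frequency(freq, spectrum):
    # Use the argmax as the initial guess, then refine with a Gaussian fit
    i_max = np.argmax(spectrum)
    p0 = [spectrum[i_max], freq[i_max], 0.1 * abs(freq[i_max]) + 1e-30]
    popt, _ = curve_fit(gaussian, freq, spectrum, p0=p0)
    return popt[1]   # fitted central frequency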

pip install fails

Currently the pip install is failing with the following error

pip install openPMD-viewer
Collecting openPMD-viewer
  Using cached openPMD-viewer-0.5.1.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-2_8yy6dj/openPMD-viewer/setup.py", line 47, in <module>
        "opmd_viewer/openpmd_timeseries/cython_function.pyx"),
      File "/data/home/branco77/python/data_analysis/lib/python3.4/site-packages/Cython/Build/Dependencies.py", line 818, in cythonize
        aliases=aliases)
      File "/data/home/branco77/python/data_analysis/lib/python3.4/site-packages/Cython/Build/Dependencies.py", line 704, in create_extension_list
        for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern):
      File "/data/home/branco77/python/data_analysis/lib/python3.4/site-packages/Cython/Build/Dependencies.py", line 108, in nonempty
        raise ValueError(error_msg)
    ValueError: 'opmd_viewer/openpmd_timeseries/cython_function.pyx' doesn't match any files
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-2_8yy6dj/openPMD-viewer/

pandoc dependency

Pandoc

I upgraded a Python 2.7 install of openPMD-viewer 0.3.3 to 0.5.3 via pip today (note: there is still a tag for 0.5.3 missing on GitHub -> Releases) and noted the following install problem:

$ pip install openPMD-viewer --user -U
[....]

Collecting openPMD-viewer
  Using cached openPMD-viewer-0.5.3.tar.gz
    Complete output from command python setup.py egg_info:
    /home/axel/.local/lib/python2.7/site-packages/pkg_resources/__init__.py:1869: UserWarning: /usr/lib/pymodules/python2.7/rpl-1.5.5.egg-info could not be properly decoded in UTF-8
      warnings.warn(msg)
    Maybe try:
    
        sudo apt-get install pandoc
    See http://johnmacfarlane.net/pandoc/installing.html
    for installation options

pandoc is missing from the (pip) requirements.txt, and besides the Python package it also needs the binary mentioned in the printed message to be installed. Anyway, it's probably only used for documentation building or RST file creation for packaging?

Not sure when this issue was introduced, because it has been in setup.py for a long time. Note that I installed previous versions (0.3.3) via python setup.py install from the sources directly; that might be the reason it was not triggered.

Python 3.4

Repeating the same install via Python 3.4 seemed fine even without pandoc, but only installs 0.5.2 o.0

Also, are we sure we want to drop support and CI for Python 3.4 already (see the setup.py meta information)? It's still the stable Python release in many distributions.

field slider inverted

The slicing slider in the field view of the slider() method is inverted.
It goes from -1 to +1, with 0 representing the center. At least for slicing in x in 3D, +1 corresponds to the smallest x cell index while -1 corresponds to the highest x cell index. This is a bit counter-intuitive.

License Headers

We should add a license header to each file, stating that the 3-Clause-BSD-LBNL license is used.
This is good practice since derivatives might only clone a specific file.

I can prepare a snippet later on, not urgent.

"""
File description 
...

__authors__ = "Remi Lehe, Soeren Jalas, Axel Huebl, ..." (from `git log` of each file)
__copyright__ = "Copyright 2015-2016"
__credits__ = ["Remi Lehe", "Soeren Jalas", "Axel Huebl"]
__license__ = "3-Clause-BSD-LBNL"
"""

(addLicense tool in PIConGPU)

Also: avoid UTF-8 for now and write names in pure ASCII (it sometimes crashes people's setups otherwise).

Particles: Original & Derived Attributes

  • handle "raw" and "derived" particle attributes separately (but transparent for the user)
  • add an example "ene" quantity for kinetic energy (if we want; already in get_gamma)
  • the raw "momentum/x" can not be queried right now (but not all stored momentum might be relativistic or with a mass != 0)
  • make each quantity in avail_ptcl_quantities a dictionary (with macroWeighted yes/no)
  • make this a dictionary of each species
  • slider: filter only for a limited set (e.g., not the momentum/x)
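A sketch of what such a per-species dictionary could look like (layout and names are purely illustrative, not the current API):

# Hypothetical structure -- one dictionary per species, with a
# macroWeighted flag per quantity, which the slider could filter on
avail_record_components = {
    'electrons': {
        'x':  {'macroWeighted': False},
        'ux': {'macroWeighted': False},
        'w':  {'macroWeighted': True},
    },
    'photons': {
        'x': {'macroWeighted': False},
        'w': {'macroWeighted': True},
    },
}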

API & Slider: Coord->Component

Follow-up to #148: the "coord" option in the field API and interactive ipywidget should be renamed to component (or comp) since a component of a field does not necessarily reflect a certain quantity along a vector component of the chosen geometry. The current naming can be confusing, e.g. for tensor fields.

This is also the naming used in the standard, for the same reason: record and record component.

Need normalization for zero-mass particles

With the introduction of bremsstrahlung and Compton scattering into PIConGPU, a photon model has been added to the available particle species. Since a photon's mass is zero, we encounter a division by zero during the normalization of the momentum data by the factor (mc)^-1.

/lib/python3.4/site-packages/opmd_viewer/openpmd_timeseries/data_reader/particle_reader.py:84: 
RuntimeWarning: divide by zero encountered in true_divide
  norm_factor = 1. / (get_data(species_grp['mass']) * constants.c)

In this case it would be desirable to have an arbitrary normalization, and maybe a logarithmic plot scaling as the default. Users could then set a normalization value themselves, after seeing the logarithmic plot and deciding where the regions of interest are, and maybe change back to linear scaling. 😀

positionOffset: int32 and unitSI

When trying to read a PIConGPU file with the current dev branch (as of 135809c), reading the position via

y,x = ts.get_particle( var_list=['y','x'], t=0, species='e', plot=True, nbins=300, vmax=5e7)

leads to:

openPMD-viewer/opmd_viewer/openpmd_timeseries/main.pyc in get_particle(self, var_list, species, t, iteration, select, output, plot, nbins, **kw)
    239         for quantity in var_list:
    240             data_list.append(read_species_data(
--> 241                 file_handle, species, quantity, self.extensions))
    242         # Apply selection if needed
    243         if select is not None:

openPMD-viewer/opmd_viewer/openpmd_timeseries/data_reader/particle_reader.pyc in read_species_data(file_handle, species, record_comp, extensions)
     72     # - Return positions in microns, with an offset
     73     if record_comp in ['x', 'y', 'z']:
---> 74         offset = get_data(species_grp['positionOffset/%s' % record_comp])
     75         data += offset
     76         data *= 1.e6

/openPMD-viewer/opmd_viewer/openpmd_timeseries/data_reader/utilities.pyc in get_data(dset, i_slice, pos_slice)
     92     # Scale by the conversion factor
     93     if dset.attrs['unitSI'] != 1.0:
---> 94         data *= dset.attrs['unitSI']
     95 
     96     return(data)

TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int32') with casting rule 'same_kind'

That is likely due to the recent changes in #81 and #85, and caused by our internal storage of positionOffset as an int32 :)

The result of positionOffset[()] (int32) times unitSI (float64) should of course be promoted to float64 if unitSI != 1.0. Since *= is an in-place operation, we can just add a reallocation & cast beforehand if necessary:

if dset.attrs['unitSI'] != 1.0:
    if data.dtype != np.float64:
        data = data.astype(np.float64)
    data *= dset.attrs['unitSI']

In recent versions of numpy there is also a copy=False flag (e.g. for astype): see this answer. Note: view() will only work if the two cast types have the same size in bytes.
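With that flag, the cast above could be written as follows (assuming numpy is imported as np, as in the snippet above):

if dset.attrs['unitSI'] != 1.0:
    # astype(copy=False) returns the same array if it is already float64
    data = data.astype(np.float64, copy=False)
    data *= dset.attrs['unitSI']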

Get_fields returns extents in different order than the field array

I noticed an unexpected behaviour in how get_fields returns the fields and the extent of the fields:
while the extents are returned as (z_min, z_max, r_min, r_max), or similar for other coordinates, the shape of the field array is (Nr, Nz), i.e. "flipped" in order. I think this behaviour can lead to confusion and, if we don't change it, we should at least explicitly document the structure of the returned quantities in the docstring.
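For illustration, a hedged sketch of the current behaviour, assuming the call returns the 2D array together with the extents in (z_min, z_max, r_min, r_max) order as described above (the exact call signature and return values may differ between versions):

import matplotlib.pyplot as plt

# ts is an OpenPMDTimeSeries opened on a cylindrical (thetaMode) dataset;
# hypothetical call -- signature and return values may differ
Er, extent = ts.get_field(t=0., field='E', coord='r', output=True)

Nr, Nz = Er.shape   # rows are r, columns are z ("flipped" w.r.t. the extent)

# The extent order happens to match imshow's (left, right, bottom, top)
plt.imshow(Er, extent=extent, origin='lower', aspect='auto')
plt.xlabel('z')
plt.ylabel('r')
plt.show()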

User-friendly installation of the package

There are several tasks related to the installation that we should go through:

  • Prepare and test installation/uninstallation with pip.
  • Prepare and test installation/uninstallation with conda.

In particular, we need to add install instructions in the README for both pip and conda. In the case of pip, it would be good to distinguish between:

  • installation on a cluster (which requires module load instructions for hdf5 parallel)
  • installation on a personal computer (which requires that h5py be properly installed, for instance through pip if possible)

Consistent module naming: `openpmd_viewer`

For more consistency, the import statement should be changed from import opmd_viewer to import openpmd_viewer.

In addition:

  • The name of the folder which contains the source code should also be changed to openpmd_viewer (since it is the module name)
  • The name of the package in setup.py should be changed to openpmd-viewer (update from @ax3l: see #223)

OpenPMDTimeSeries: Skip Check of other Files

To save time while opening an OpenPMDTimeSeries, we could add an optional parameter that queries only the first file for, e.g., the available particle records, and assumes that the other files do not change the available attributes for each species over time.

Plot kwargs are not used

When using the slider of openpmd-viewer, one can pass keyword arguments, but they are not actually used.

Implement Wigner Transform in `get_spectrogram`

The current implementation of get_spectrogram seems to return wrong results.
Example: [plot of the spectrogram with its projections]
The white curves in the spectrogram show its projections and should give the pulse spectrum and envelope.
For comparison, the spectrum is calculated with numpy.fft.fft() and the envelope with get_envelope; these definitely fit the input data of the simulation. As you can see, they don't really agree with the spectrogram.
I'm not yet sure what's causing this discrepancy, as it could either be an error in the algorithm or some other issue. One idea I had is that the FROG method just doesn't work well with the given pulse. In that case, one could try a different gating function. Or maybe you have some other idea about what might be the cause.
I just wanted to warn you about this issue and will look further into this. You can assign this issue to me.

utilities `list_h5_files` relies on time step in file name

Currently, all time series extractions are done using the list_h5_files function from opmd_viewer.openpmd_timeseries.main. This function extracts the time step from the last number before the suffix, not from the time step in the HDF5 files (see the code here). Thus, files like libSplash serial output will result in wrong time steps (the serial output writes files for a given MPI rank as filename_[timestep]_[rankx]_[ranky]_[rankz].h5).

Better parsing of the basePath and the iterations

Right now, openPMD-viewer can handle only the case where there is one iteration per file, and moreover, the iteration is obtained from the filename, not from the basePath (e.g. /data/100). Finally, if there are additional paths in /data/, this can cause the reader to fail (e.g. if /data/ contains /data/100/ and /data/some_other_path).

These are clearly limitations, and they should be fixed.

Exception for erroneous `field` and `var_list` arguments

Hi,
I came across some minor annoyances when using the API in rather long Python scripts. When passing wrong field or var_list arguments to get_field or get_particle, we currently issue a warning in the form of a Python print:

if valid_var_list == False:
    quantity_list = '\n - '.join( self.avail_ptcl_quantities )
    print("The argument `var_list` is missing or erroneous.\nIt "
          "should be a list of strings representing particle "
          "quantities.\n The available quantities are: "
          "\n - %s" %quantity_list )
    print("Please set the argument `var_list` accordingly.")
    return(None)

This has the downside of not knowing where exactly in the code the error occurs. Something like

field = get_field(t, field=bogus)

is then set to None and will lead to errors further down the road.
My suggestion would be to additionally raise an exception, in order to get a proper Python traceback. Would this cause problems with the interactive interface?
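For instance, a possible sketch (the exception name is just an example, not an existing class):

class OpenPMDException(Exception):
    """Raised when an invalid `field`/`var_list` argument is passed."""
    pass

# ...inside get_particle, instead of only printing:
if not valid_var_list:
    quantity_list = '\n - '.join(self.avail_ptcl_quantities)
    raise OpenPMDException(
        "The argument `var_list` is missing or erroneous.\n"
        "It should be a list of strings representing particle "
        "quantities.\nThe available quantities are:\n - %s" % quantity_list)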

Slider Layout with Long Record Names

I wanted to report this for a long time but always forgot. As seen here, we like to have rather long field and particle attribute names, which breaks the layout a bit. Can it be adjusted a little so it is more flexible with regard to long record names?

Also, the range "number input" boxes in the Plotting options are sometimes too narrow to see all the digits one entered (e.g. the "19" in "1e19"). The last point might also just be a Chrome issue.

Synchronize two sliders with traitlets.dlink

The function dlink from the traitlets package provides the ability to link the attributes of two widgets. For instance, we could link the time sliders of two different simulations.

However, in order to do so, I think the method slider of OpenPMDTimeSeries needs to return the slider, not just display it, but this can easily be changed in openPMD-viewer. I'll investigate this.
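A minimal sketch of what this could look like, assuming slider() is changed to return its main time/iteration widget (it currently only displays the GUI):

from traitlets import dlink

# Hypothetical: slider() returns the underlying ipywidgets slider
slider_a = ts_a.slider()
slider_b = ts_b.slider()

# Drive simulation B's slider from simulation A's slider
dlink((slider_a, 'value'), (slider_b, 'value'))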

Replace the time slider by an iteration slider

Right now the GUI has a time slider, which indicates the time in femtoseconds.

However, in version 1.0.0 of openPMD-viewer, we are planning to enforce unit consistency and return everything in SI. This means that, for LWFA simulations, the time in the slider will be of the order of 1.e-12.

It turns out that, in this case, the slider does not work anymore! (See for instance: jupyter-widgets/ipywidgets#259)

Therefore, for version 1.0.0, I am planning to replace the time slider by an iteration slider (which will be more robust to small values of the time). @soerenjalas @ax3l: any objection to this?
