imagine-consortium / imagine

Interstellar Magnetic Field Inference Engine, a modular open source framework for doing inference on generic parametric models of the Galaxy.

Home Page: https://imagine-code.readthedocs.io/

License: GNU General Public License v3.0

Dockerfile 0.09% Python 10.36% C++ 0.08% Jupyter Notebook 89.47%
galactic-magnetic-fields inference statistics parametric-models gmf imagine bayesian-inference

imagine's People

Contributors

1313e, ashley-stock, dependabot[bot], gioacchinowang, luizfelippesr, trjaffe


imagine's Issues

Dependencies between Fields

Often, one may want to write Fields which are interdependent. Support for this should be added.

For example, one may have a cosmic ray distribution which is obtained by launching tracer particles in the presence of a given magnetic field. Another example is the SN remnants project (IMAGINE project 5), where the magnetic field and gas distribution are computed simultaneously.

These two examples show two scenarios which are likely to be common (and already relevant for the problems we are working on at the moment); a sketch of the first scenario is given after the list:

  1. to evaluate a Field (e.g. a passive CR distribution), one needs to know the values of another field (in this example, the magnetic field). In this case, the CR Field object has to be called after all the MagneticField objects, and somehow have access to the sum of the evaluated MagneticFields.
  2. to evaluate a Field (e.g. the magnetic field of a SNR), one needs full access to another Field object (e.g. the gas distribution in the SN shell).
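
A minimal sketch of scenario 1, assuming a hypothetical dependencies mechanism (nothing below is current IMAGINE API): the pipeline would evaluate all MagneticFields first and hand their summed values to the dependent Field.

class PassiveCRDistribution:
    # Hypothetical: declares that this Field must be evaluated after all
    # 'magnetic_field' Fields and receive their summed values
    dependencies = ['magnetic_field']

    def compute_field(self, seed, dependencies):
        b_total = dependencies['magnetic_field']  # summed MagneticField data
        # Toy placeholder: CR density proportional to magnetic energy density
        return (b_total**2).sum(axis=-1)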

Better integration of Masks

At present, Masks objects can be supplied to Likelihood objects, which use them to ignore the masked pixels in the likelihood calculation. However, it would be interesting to use, when possible, the same masks in the Simulators. The end of tutorial_masks illustrates (using low-level operations) how hammurabi X can make use of a mask.

Plans for the Mask integration:

  • The Simulator base class should include an argument for a masks dictionary in its __init__
  • The Pipeline base should pass the likelihood to the Simulator
  • There must be a way of letting the user know whether the Simulator can only handle a single mask
    • as this is the case for hammurabi X, and may be common,
    • one (crude but simple) possibility is requiring the masks of all keys to be equal, raising an error otherwise (see the sketch after this list)
  • Whenever a Masks dictionary is provided by the user, the Hammurabi simulator must be updated internally, following the exact same procedure shown at the end of tutorial_masks (using the same temp directory used for dumping the Fields)
  • Finally, tutorial_masks must be updated accordingly
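
A standalone sketch of the single-mask check described above; the ALLOW_MULTIPLE_MASKS flag is an assumption, not existing IMAGINE API.

import abc
import numpy as np

class SimulatorSketch(metaclass=abc.ABCMeta):
    ALLOW_MULTIPLE_MASKS = False  # e.g. hammurabi X handles a single mask

    def __init__(self, measurements, masks=None):
        self.measurements = measurements
        self.masks = masks
        if masks is not None and not self.ALLOW_MULTIPLE_MASKS:
            # Crude but simple check: the masks for all keys must coincide
            arrays = [np.asarray(m) for m in masks.values()]
            if any(not np.array_equal(a, arrays[0]) for a in arrays[1:]):
                raise ValueError('this Simulator can only handle one mask')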

Pipeline interface improvements

The Pipeline sub-classes need better, hopefully standardized, interfaces, to avoid forcing the user to dig through each sampler's docs to choose the sampling_controllers. A sketch of the intended usage follows the checklist below.

Specification/checklist

  • The main parameters controlling the samplers should become keyword arguments
  • Docstrings should describe each of these
    • MultinestPipeline
    • DynestyPipeline
    • UltranestPipeline
  • Standard temporary (or permanent, if set) directory for the chains
  • Progress monitoring (still difficult to do in a general way)
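
A sketch of what the standardized interface could look like from the user's side; all keyword names here are illustrative, not the final choices.

pipeline = UltranestPipeline(simulator=simulator,
                             likelihood=likelihood,
                             factory_list=factory_list,
                             n_live_points=400,           # main convergence control
                             dlogz=0.5,                   # stopping criterion
                             chains_directory='chains/')  # standard chains dir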

Hammurabi Simulator must handle non-dummy fields

Currently, the Hammurabi Simulator (in the new_field branch) does not work with any non-dummy field (i.e. one can only use it with its built-in fields). This needs to change.

Specification
Because a Python SWIG interface for hammurabiX is not yet ready, the Hammurabi simulator will dump the fields to disk and edit hammurabi's XML files so that it can use them. This should allow the user to include any IMAGINE field in a Hammurabi run by simply adding it to the fields list.

The temporary directory for this is provided in the initialization of the Hammurabi simulator (potentially, a different directory for the temporary XML files could also be set). If absent, it should default to /run/shm (or /dev/shm) or some other alternative. The temporary directory must be tested when the simulator is initialized, and the temporary files must be removed when the simulator is deleted (i.e. during __del__). A sketch follows:
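
A sketch of the temporary-directory handling described above; the names and the fallback order are illustrative rather than the final implementation.

import os
import shutil
import tempfile

def choose_temp_dir(requested=None):
    """Pick and validate a directory for dumping fields and XML files."""
    for base in (requested, '/run/shm', '/dev/shm', tempfile.gettempdir()):
        if base is not None and os.path.isdir(base) and os.access(base, os.W_OK):
            return tempfile.mkdtemp(prefix='imagine_hamx_', dir=base)
    raise IOError('no usable temporary directory found')

class HammurabiSketch:
    def __init__(self, temp_dir=None):
        self.temp_dir = choose_temp_dir(temp_dir)  # tested at initialization

    def __del__(self):
        # Temporary files are removed when the simulator is deleted
        shutil.rmtree(self.temp_dir, ignore_errors=True)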

Joint prior distributions

Currently, IMAGINE supports flexibly specifying marginal priors for each parameter. However, in most cases the parameters, and our prior expectations for them, exhibit correlations.

In practice, for nested sampling applications, the joint prior must be expressed as a pipeline.prior_transform(cube) method which operates on multiple rows of the incoming "cube" at once, instead of the simple independent-parameter case implemented now.

I suspect the simplest way to approach this problem is using copulas to represent the dependence between variables. When defining a Prior, the user would specify the copula and the names of the dependent variables. From this, it should be possible to construct the joint pdf and, hopefully, the joint prior transform function. A sketch is given below:
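
A minimal sketch of a Gaussian-copula prior transform, assuming each parameter provides a marginal inverse CDF (ppf); the helper name and interface are illustrative, not an existing IMAGINE API.

import numpy as np
from scipy.stats import norm

def make_prior_transform(corr, marginal_ppfs):
    L = np.linalg.cholesky(corr)  # encodes the copula correlations

    def prior_transform(cube):
        z = norm.ppf(cube)   # unit cube -> independent standard normals
        u = norm.cdf(L @ z)  # correlate, then map back to uniforms
        # apply each parameter's marginal inverse CDF
        return np.array([ppf(ui) for ppf, ui in zip(marginal_ppfs, u)])

    return prior_transform

# Example: two correlated parameters, both with uniform marginals on [0, 10]
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
transform = make_prior_transform(corr, [lambda u: 10*u, lambda u: 10*u])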

Observables plotting tools

It would be useful to have a set of tools for plotting Observables.

This could be a simple function/method which takes an ObservableDict (i.e. either Measurements or Simulations) and shows the associated images, with appropriate labels and title.
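
A minimal sketch of such a helper for HEALPix observables, assuming each entry exposes its map through .global_data (as in the Hammurabi example further down this page); everything else is illustrative.

import healpy as hp
import matplotlib.pyplot as plt

def show_observables(observable_dict):
    for key, obs in observable_dict.items():
        # keys are tuples such as ('sync', 23.0, 32, 'I')
        name, freq, _, tag = key
        hp.mollview(obs.global_data[0],  # first ensemble realisation
                    title='{} ({}) at {}'.format(name, tag, freq))
    plt.show()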

Implement the CosmicRayElectronDensity base Field

This should represent cosmic ray electron density at different energies.

The basic idea is to be able to return a 4-array: the first 3 axes for the coordinate grid, and the last one for energy bins.

Choice of energy bins

The difficulty, however, is choosing the energy bins and communicating this choice to the simulator in a clean way.

Currently, the Simulator does not have direct access to the field objects, only to the data produced by them. This was a deliberate design choice: keeping the details of fields hidden from simulators reduces the chance of simulators requesting overly specialised fields, thereby preserving future modularity.

In my view the two simplest solutions are the following.

  1. Different spectral binning = different Field. If there are only a few standard choices for the CR spectrum, one could have a different base field for each of them.
    • This would require no change in the present infrastructure
    • Multiple compatible CR Fields could be summed up in the usual way
    • Some simulators could be limited to specific choices of binning
    • Oddly, different choices of binning would have to be associated with different field_type strings.
  2. Return (array, bin_array). Instead of returning an array, the get_data() method associated with CR Fields could return a tuple or list containing the 4-array and an extra array with the binning information (see the sketch after this list)
    • The Simulator base class will need to deal with this!
    • Only arrays with a common bin_array should be summed up
    • If multiple bin_arrays are detected by the Simulator prepare_fields method, we need to choose how it will behave. Some possibilities:
      a. The Simulator raises an exception (only one choice of CR binning is allowed)
      b. Instead of saving an array to self.fields[field.field_type], one could save a dictionary with tuples of bin edges as keys.
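
A minimal sketch of option 2's bookkeeping on the Simulator side; only the prepare_fields name comes from the discussion above, everything else is illustrative.

def prepare_cr_fields(cr_field_outputs):
    """cr_field_outputs: list of (array, bin_array) tuples from CR Fields."""
    by_binning = {}
    for data, bins in cr_field_outputs:
        key = tuple(bins)  # bin edges as a dictionary key (possibility b)
        by_binning[key] = by_binning.get(key, 0) + data  # sum compatible fields
    if len(by_binning) > 1:
        # possibility (a): only one choice of CR binning allowed
        raise ValueError('Fields with different CR energy binnings supplied')
    return by_binning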

Flux versus density

Another question that needs to be addressed before implementing the CR field is whether we would like to model the flux or simply a density.

If we were really modelling a flux, instead of a 4-array, we would need a 5-array, with an extra axis for the 3 components of the flux. Nevertheless, usually the CR flux is (to a good approximation) assumed to be isotropic, so we could just work with a density instead. This may, perhaps, be different if we want to include a UHECR "detector" simulator, but in this case a separate CosmicRayElectronFlux could be added.

Units

Despite the usual assumption of isotropy, the usual units for the CR distribution are (GeV m^2 s sr)^-1, i.e. a flux. The conversion to a differential density distribution involves multiplying by 4 pi (i.e. assuming isotropy) and m_e/v, where v = beta*c is the velocity associated with CRs in that energy bin. A more physical choice of standard units would be simply cm^-3 erg^-1.

Regardless of the choice of units, we must include a tool to convert between the two options, to make later development easier for users.

A related point is the choice of units and spacing for the spectrum. The most common approach is to have logarithmic bins in particle energy. Alternatively, one could have bins in the relativistic Lorentz factor gamma. I suspect, however, that the first option is both more convenient and more common.

Random seeds need checking

When I execute the convergence test at the end of tutorial_one, I always get the same answer, despite the fact that pipeline.random_type = 'free' was used.

(This refers to the new_field branch)

Fix the unit tests

Contents of the tests directory are out-of-date. A set of unit tests needs to be written.

Pipeline saving, and resuming after interruption

There is still no implemented standard way of saving the state of the IMAGINE Pipeline object. This would be very useful.

In particular, this could be done automatically just before the sampler starts running and if an interruption signal is caught. As most samplers allow resuming a run, saving the state of the assembled Pipeline (and thus the final likelihood function and prior transform) will allow continuing the run.

As the Pipeline also stores results, its state should probably also be saved after the run finishes. This would allow one to submit a script which assembles and runs a Pipeline on a cluster and later examine the results by inspecting the loaded Pipeline.
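
A minimal sketch of the interruption handling, assuming a hypothetical pipeline.save() method (discussed elsewhere on this page).

import signal

def install_autosave(pipeline):
    def handler(signum, frame):
        pipeline.save()  # persist the assembled Pipeline before exiting
        raise SystemExit(signum)
    # signals a cluster scheduler typically sends before killing a job
    for sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(sig, handler)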

MultinestPipeline logs to STDOUT

As observed by @trjaffe in #60 (comment), when the sampling parameter verbose is set to True, the MultinestPipeline logs to STDOUT. It would be good if we could find a way of redirecting that output (though it is not completely clear whether this is possible).

icy decorator - remove it?

The @icy decorator prevents new attributes from being added to an object/class. While this may prevent some specific types of error (e.g. typos in attribute names), it makes the code less modular (as it prevents one from adding extra structures to derived classes).

Is there any strong argument for keeping the @icy anywhere in the code?

Abstract Base Classes

As @1313e pointed out, it would be useful (and good practice) to write IMAGINE base classes as abstract base classes.

An Abstract Base Class cannot be directly instantiated, and its subclasses must provide implementations for any methods carrying the @abstractmethod decorator. This prevents many possible mistakes.

Here is a list of the classes which should become ABCs (a minimal illustration follows the list):

  • GeneralField
  • GeneralFieldFactory
  • Likelihood
  • Simulator
  • Pipeline
  • ObservableDict
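
A minimal illustration of the pattern; the method name here is just an example, not the actual Simulator interface.

import abc

class Simulator(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def simulate(self, key):
        """Subclasses must produce the observable associated with key."""

class BrokenSimulator(Simulator):
    pass  # forgot to implement simulate()

# BrokenSimulator()  # raises TypeError: can't instantiate abstract class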

tutorial_one failing with Ultranest

After 7045112, tutorial_one stopped working with UltraNest.

The main change in that commit was that the random state was reset every time the likelihood function was evaluated, allowing different points in the parameter space to be evaluated with the same set of random seeds.

The changes in the commit were tested with a local modified version of tutorial_one which used MultiNest instead of UltraNest. This local version of the tutorial was actually converging quickly, without any problems.

The CI caught a problem in the standard version of tutorial_one which runs with UltraNest. The sampler gets stuck and evaluates the same point over and over again, without progressing.

Tutorials 4, 5, and 2

There are 3 tutorials which still need to be updated:

  • Tutorial 4, which demonstrates the use of Masks with Hammurabi
    • better masks integration with Hammurabi needs to be implemented (became #26)
  • Tutorial 5, which illustrates the use of the pipeline with Hammurabi
  • Tutorial 2, which illustrates MPI parallelisation

Saved Pipelines include path to hammurabi

When a saved pipeline is loaded, the Hammurabi simulator keeps the absolute path to the hamx binary, as observed by @trjaffe in #60 (comment)

As we support Hammurabi as part of IMAGINE, save_pipeline should be instructed to check the path to hamx and adjust it when loading.

rc variables

There should be an easy-to-find section in the docs describing the imagine.rc variables (i.e. global settings variables) and how they can be set using environment variables.

"Function" Fields

At present, Field objects are evaluated on an entire coordinate grid before being passed to a Simulator. More specifically, the get_field method (which relies on the user-defined compute_field method) will always return a Quantity array evaluated over the whole coordinate grid that was provided.

This design choice was based on simplicity (it is easy to visualise pre-evaluating everything on a fixed grid) and the present way that Hammurabi works (it can read gridded fields from binary files). In the future, however, there may be room for fields which are only evaluated at the specific points where the Simulator needs them. Below is a draft design specification of this future feature:

Specification

  • Fields should have an attribute function_field which, when False, falls back to the current behaviour
  • If function_field==True
    • The subclass has to include a standardized function (or similar name) method, which takes a coordinate point as argument and returns the evaluated field at that specific point
    • The get_field method will keep backwards compatibility, simply evaluating the function method instead of relying on compute_field
  • The Simulator will have an attribute indicating whether or not it supports "function fields"
  • If it does, instead of evaluating and summing the fields, a list of functions is passed to the simulator (which will be able to evaluate them wherever it wants)
  • There is room for Simulator subclasses which support only "grid fields" (the present behaviour), "grid and function fields", or "only function fields"
    • If a particular Simulator subclass supports only "grid fields", any "function fields" will be evaluated on the grid before being handed to it.
    • If a particular Simulator supports only "function fields", a fake "function field" can be automatically constructed from any "grid-only fields" using interpolation
    • In the case where both "grid" and "function fields" are supported, both are provided to the subclasses

Field dependencies are expected to be a major technical challenge for this implementation. For example, a "grid field" which depends on another field type would force the "function fields" of that type to behave as "grid fields" (or is there a better alternative?). A minimal sketch of the proposed flag and method follows:
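
A minimal sketch of the proposal; the function_field attribute and function method are part of the specification above, while the toy field itself is purely illustrative.

import numpy as np

class FunctionFieldSketch:
    function_field = True  # proposed flag; False falls back to grids

    def function(self, coords):
        """Evaluate the field at arbitrary points of shape (N, 3)."""
        r = np.linalg.norm(coords, axis=-1)
        return np.exp(-r / 5.0)  # toy exponential-disc profile

    def get_field(self, grid_coords):
        # Backwards compatibility: evaluate `function` over the whole grid
        return self.function(grid_coords)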

Repeatable runs with Hammurabi simulator

If the fields that are supplied have identical ensemble_seeds, the output of the Hammurabi simulator should be exactly the same, even if the field is stochastic. This is not what is happening at the moment.

The following example illustrates the bogus behaviour

import numpy as np
import imagine as img
import astropy.units as u
import os
from imagine.fields.hamx import BregLSA, CREAna, TEregYMW16, BrndES

# Creates some empty fake datasets 
size = 12*32**2 
sync_dset = img.observables.SynchrotronHEALPixDataset(data=np.empty(size)*u.mK, 
                                              frequency=23*u.GHz, typ='I')

# Appends them to an Observables Dictionary
fakeMeasureDict = img.observables.Measurements()
fakeMeasureDict.append(dataset=sync_dset)

# Initializes Hammurabi
simer = img.simulators.Hammurabi(measurements=fakeMeasureDict)

ensemble_size = 2


## CRE and TE models
paramlist_cre = {'alpha': 3.0, 'beta': 0.0, 'theta': 0.0,
                 'r0': 5.6, 'z0': 1.2,
                 'E0': 20.5,
                 'j0': 0.03}
cre_ana = CREAna(parameters=paramlist_cre, ensemble_size=ensemble_size)
fereg_ymw16 = TEregYMW16(parameters={}, ensemble_size=ensemble_size)

# Parameters for B-field
paramlist_Brnd = {'rms': 6., 'k0': 0.5, 'a0': 1.7, 
                  'k1': 0.5, 'a1': 0.0,
                  'rho': 0.5, 'r0': 8., 'z0': 1.}
paramlist_Breg = {'b0': 6.0, 'psi0': 27.9, 'psi1': 1.3, 'chi0': 24.6}

# Sets Hammurabi to run on 8 cores
os.environ['OMP_NUM_THREADS']='8'

print('\n','-'*30, 'Deterministic case','-'*30, sep='\n')
for i in range(4):
    # Resets numpy (legacy) random number generator
    np.random.seed(42)
    # Regular B-field
    breg_wmap = BregLSA(parameters=paramlist_Breg, ensemble_size=ensemble_size)
    
    print('\nRun',i+1, '\n\tensemble seeds:', *breg_wmap.ensemble_seeds)
    
    maps = simer([breg_wmap, cre_ana, fereg_ymw16])
    sync_I_data = maps[('sync', 23.0, 32, 'I')].global_data
    print('\tsync_I_data (sum):', *sync_I_data.sum(axis=1))
    
    
print('\n','-'*30, 'Stochastic case','-'*30, sep='\n')
for i in range(4):
    # Resets numpy (legacy) random number generator
    np.random.seed(42)
    #  Random B-field
    brnd_es = BrndES(parameters=paramlist_Brnd, ensemble_size=ensemble_size,
                     grid_nx=100, grid_ny=100, grid_nz=40)
    
    print('\nRun',i+1, '\n\tensemble seeds:', *brnd_es.ensemble_seeds)
    
    maps = simer([brnd_es, cre_ana, fereg_ymw16])
    sync_I_data = maps[('sync', 23.0, 32, 'I')].global_data
    print('\tsync_I_data (sum):', *sync_I_data.sum(axis=1))    

# Sets Hammurabi to run on a single core
os.environ['OMP_NUM_THREADS']='1'

print('\n','-'*30, 'Stochastic case (serial Hammurabi)','-'*30, sep='\n')
for i in range(4):
    # Resets numpy (legacy) random number generator
    np.random.seed(42)
    brnd_es = BrndES(parameters=paramlist_Brnd, ensemble_size=ensemble_size,
                     grid_nx=100, grid_ny=100, grid_nz=40)
    print('\nRun',i+1, '\n\tensemble seeds:', *brnd_es.ensemble_seeds)
    
    maps = simer([brnd_es, cre_ana, fereg_ymw16])
    sync_I_data = maps[('sync', 23.0, 32, 'I')].global_data
    print('\tsync_I_data (sum):', *sync_I_data.sum(axis=1))

the output I get is

------------------------------
Deterministic case
------------------------------

Run 1 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 2115.1283515418586 2115.1283515418586

Run 2 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 2115.1283515418586 2115.1283515418586

Run 3 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 2115.1283515418586 2115.1283515418586

Run 4 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 2115.1283515418586 2115.1283515418586


------------------------------
Stochastic case
------------------------------

Run 1 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5385.946091701992 5667.601821674344

Run 2 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5384.209698423045 5663.441694022036

Run 3 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5391.711379836459 5675.507874895424

Run 4 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5374.580945159507 5662.862508687077


------------------------------
Stochastic case (serial Hammurabi)
------------------------------

Run 1 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5322.912641998637 5423.7247891422

Run 2 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5322.912641998637 5423.7247891422

Run 3 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5322.912641998637 5423.7247891422

Run 4 
        ensemble seeds: 1608637542 1273642419
        sync_I_data (sum): 5322.912641998637 5423.7247891422

The sum of sync_I_data should always be the same, as the ensemble seeds are the same.

UPDATED on 3/9 to include the serial case (see comment by @shutsch )

Global switch for distributed objects

It would be helpful to be able to control whether distributed arrays will be used or not.

At present, all covariance arrays are distributed using MPI, and calculations with Covariances, Measurements and Simulations often involve collective MPI operations.

It is unclear in which circumstances this distribution leads to an enhancement in performance and under which conditions the extra overhead actually harms the performance. Most likely, the answer will depend on the size of the covariance matrices of a given problem and on the choice of sampler.

This all motivates the implementation of a global switch controlling the behaviour of these operations. This could be a global variable in the root imagine module, perhaps part of an rc dictionary with other useful settings. Thus, at the beginning of a script one could do:

import imagine as img
img.rc['distributed_covariances'] = True

Alternatively, the user could have this set in a special .imaginerc file or as an environment variable:

export IMAGINE_DISTRIBUTED_COVARIANCE=TRUE

Pipeline test-run

Before running an IMAGINE pipeline there are many basic questions which need (at least approximate) answering:

  • Which grid resolution is adequate?
  • What is the optimal ensemble size?
  • Roughly how long will a pipeline run take?

It is possible to get an idea of the answers to these questions by looking at the final likelihood function being computed, which can be accessed in IMAGINE 2.0.0 through the method pipeline._likelihood_function(). If the grid resolution and/or the ensemble size are too small, the likelihood becomes extremely noisy; if they are too large, the computational cost of each evaluation becomes huge.

A good testing tool (this could be a new pipeline method) would choose a few neighbouring points in the parameter space and evaluate the likelihood function at them. This would be repeated for different ensemble sizes and grid sizes, and the variability and run-times associated with each would be reported. A sketch follows:
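
A minimal sketch of such a tool, relying only on the pipeline._likelihood_function() access mentioned above; everything else, including the assumption that ensemble_size can simply be reset, is illustrative.

import time
import numpy as np

def test_run(pipeline, centre, spread=0.01, n_points=5, ensemble_sizes=(8, 32)):
    centre = np.asarray(centre, dtype=float)
    for n_ens in ensemble_sizes:
        pipeline.ensemble_size = n_ens  # assumed to be adjustable
        values, t0 = [], time.time()
        for _ in range(n_points):
            # pick a neighbouring point in parameter space
            point = centre * (1 + spread*np.random.uniform(-1, 1, centre.size))
            values.append(pipeline._likelihood_function(point))
        print('ensemble {}: std(logL) = {:.3g}, {:.2f} s/evaluation'.format(
            n_ens, np.std(values), (time.time() - t0)/n_points))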

Restore logging capabilities

Base classes should have logging capability. A lot of this was already there before the refactoring.

At the very least, the logs should allow one to know which parameters were used by each component at each stage. Later, we could consider logging function calls. A standard "logging interface" for new samplers would also be a good idea.
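
One possible shape for such a base-class logging hook (illustrative only, not an existing interface):

import logging

logger = logging.getLogger('imagine')

class LoggedComponent:
    def log_parameters(self, stage, parameters):
        # records which parameters were used by this component at this stage
        logger.info('%s [%s]: %s', type(self).__name__, stage, parameters)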

Dockerfile and travis CI

At the moment, the Dockerfile always clones the master branch. This means that when a pull request is made, Travis CI will check the master branch instead of the actual PR branch (i.e. it will test the current version of the code instead of the proposed code).

This needs to be adjusted. One way of doing this is to introduce environment variables in the Dockerfile containing the name of the branch Travis is looking at. Relevant instructions can be found here and here.

Implement non-Hammurabi random fields

Currently, there are no proper IMAGINE random fields implemented.

Hammurabi has a few built-in options of random field, which IMAGINE can manipulate using Dummy fields, but that may not be enough. Because Hammurabi is a Simulator, it has to run on every evaluation of the likelihood (to generate the observables), which means that the costly random field generation is redone for each ensemble realisation at every likelihood evaluation.

If the random field is generated outside Hammurabi, by a dedicated Field class, there are several ways we can optimize the problem (perhaps allowing the use of much larger ensemble sizes). In particular: multiple realisations can be produced by adding a phase, instead of recomputing everything; and parts of the calculation could be cached and reused during the sampling.

Thanks @amitseta90 for the discussion and suggestions!

Using Masks in Likelihood leads to "unsupported data type" error

@ashley-stock reported the following error on the slack channel

I am getting an error (picture included) when I run EnsembleLikelihood(measurement_dict=data,mask_dict=masks) that the covariance dictionary is an unsupported data type (i.e. not a numpy array or Observable). If I just do EnsembleLikelihood(measurement_dict=data) it works without errors

[screenshot of the error traceback]

This bug was probably introduced in #65

Standard plotting methods/functions?

Corner plots (and, perhaps also, traceplots) are essential tools for working with samplers.

Tutorial one illustrates how to use two different tools to show corner plots, but it would be useful to have these as methods of the pipeline itself, allowing the user a quick diagnostic. In fact, by default, the Pipeline should perhaps show a corner plot after it runs in a Jupyter notebook (forcing the user to at least have a quick look at it).

One attractive possibility is using the ChainConsumer package (suggested by @1313e) for this and other plotting/analysis tasks.

Finish main docs

ToDo list:

  • Installation section
    • Improve Docker subsection
  • Components - Datasets
  • Components - Measurements
  • Components - Simulations
  • Components - Masks
  • Components - Covariances
  • Components - Simulators (generic)
  • Components - Simulators (Hammurabi)
  • Components - Likelihoods
  • Components - Pipeline
  • Constraining parameters
    • Simple example and a bit of advice
  • Model comparison
    • Simple example and a bit of advice
  • Parallelisation
    • Ensemble distribution, array distribution and built-in samplers parallelisation
  • Contributors section
    • Tests, teams, extensions

Dealing with NaNs in measurements

It would be useful, when dealing with image data, to have an option which automatically masks NaNs before computing the likelihood.
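
A minimal sketch of what that option could do internally; the 1 = keep, 0 = masked-out convention is an assumption here.

import numpy as np

def nan_mask(data):
    """Build a mask flagging NaN pixels for exclusion from the likelihood."""
    return (~np.isnan(data)).astype(float)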

Check and correct installation procedure

Someone should check the code dependencies and simplify installation.

Perhaps our aims should be the following:

  1. ensure easy pip installation on a system which already has hammurabiX installed
  2. find out how (or whether it is possible) to upload hammurabiX to conda-forge and make the whole project conda-installable
  3. update the dockerfile (potentially, using 2) and upload a docker image to dockerhub, using an IMAGINE collaboration account
  4. Set up "GitHub actions" to update the Docker image and PyPI package

Tentative task-list:

  • Check and update requirements.txt
  • Check and update imagine_conda_env.yml
  • Adjust the installation docs
  • Set up a PyPI IMAGINE package (For later?)
  • Update the dockerfile
  • Upload docker image to GitHub Packages (instead of DockerHub)
  • Set up GitHub Actions (instead of Travis CI)
  • Make the bundle IMAGINE+HammurabiX conda-installable (uploading to conda-forge)

Image datasets

Currently, there are two types of dataset compatible with IMAGINE: tabular datasets and HEALPix datasets. A much more basic dataset type is missing: image datasets, where the dataset comprises an array of pixels which do not cover the full sky.

"Core IMAGINE" projects tend to deal with the Milky Way, where one would either work with discrete sources (i.e. tabular datasets) or sky maps (either full or masked), which should work neatly with HEALPix. However, when looking at external galaxies or localized systems within the MW (e.g. SN remnants), it is convenient to have simple(r) image datasets too.

Save pipeline "can't pickle abc_data objects" error

As first reported by @ashley-stock, trying to save a Pipeline object which includes FieldFactory classes defined within the running notebook leads to an error. A minimal version of such an error can be seen below:

import imagine as img
import imagine_datasets as img_data

original_bregLSA = img.fields.hamx.BregLSAFactory()

class LocalBregLSAFactory(img.fields.FieldFactory):
    """This is fully equivalent to BregLSAFactory!"""
    FIELD_CLASS = original_bregLSA.field_class
    DEFAULT_PARAMETERS = original_bregLSA.default_parameters
    PRIORS = original_bregLSA.priors

factory_list = [LocalBregLSAFactory()]
measurement = img.observables.Measurements(img_data.HEALPix.fd.Oppermann2012(Nside=2))
simulator = img.simulators.Hammurabi(measurement)
likelihood = img.likelihoods.EnsembleLikelihood(measurement)
pipeline = img.pipelines.MultinestPipeline(simulator=simulator,
                                           likelihood=likelihood, 
                                           factory_list=factory_list)
pipeline.save()

this leads to

TypeError: can't pickle _abc_data objects

Interestingly, if one replaces LocalBregLSAFactory() by original_bregLSA, everything works normally.

The error is actually a problem with dill and the way it serializes objects defined in __main__. We can get the same error by simply doing (using the same definitions as above):

import dill

with open('/tmp/test.pkl', 'wb') as f:
    dill.dump(LocalBregLSAFactory(), f)

Issue uqfoundation/dill#332 is related to this.
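
A common workaround (not specific to IMAGINE): define the factory in an importable module instead of __main__, so that dill serializes the class by reference rather than by value. E.g., with the class above moved to a hypothetical file local_factories.py:

from local_factories import LocalBregLSAFactory

factory_list = [LocalBregLSAFactory()]
# ... assemble the pipeline as before ...
pipeline.save()  # no longer attempts to pickle _abc_data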
