
modopt's Introduction

ModOpt


ModOpt is a series of Modular Optimisation tools for solving inverse problems.

See documentation for more details.

Installation

To install using pip run the following command:

  $ pip install modopt

To clone the ModOpt repository from GitHub run the following command:

  $ git clone https://github.com/CEA-COSMIC/ModOpt.git

Dependencies

All packages required by ModOpt should be installed automatically. Optional packages, however, will need to be installed manually.

Required Packages

In order to run the code in this repository the following packages must be installed:

Optional Packages

The following packages can optionally be installed to add extra functionality:

For (partial) GPU compliance the following packages can also be installed. Note that none of these are required for running on a CPU.

Citation

If you use ModOpt in a scientific publication, we would appreciate citations to the following paper:

PySAP: Python Sparse Data Analysis Package for multidisciplinary image processing, S. Farrens et al., Astronomy and Computing 32, 2020

The BibTeX citation is the following:

@Article{farrens2020pysap,
  title={{PySAP: Python Sparse Data Analysis Package for multidisciplinary image processing}},
  author={Farrens, S and Grigis, A and El Gueddari, L and Ramzi, Z and Chaithya, GR and Starck, S and Sarthou, B and Cherkaoui, H and Ciuciu, P and Starck, J-L},
  journal={Astronomy and Computing},
  volume={32},
  pages={100402},
  year={2020},
  publisher={Elsevier}
}

modopt's People

Contributors

chaithyagr, lelgueddari, paquiteau, pyup-bot, sfarrens, sylvainlan, tobias-liaudat, zaccharieramzi


modopt's Issues

[HELP] Linear regression with errors

Hi Samuel. I have run through the notebook modopt_example.ipynb, as well as inverse_problems_1.ipynb and sparsity_1.ipynb in the CosmoStat/Tutorials ada branch, playing around with parameters to improve my understanding. I am wondering how one takes the next step of fitting the curve when there are known random errors in y (e.g. a weighted fit). Is it a term added to the cost function? Is there a ModOpt class to handle this?
thanks,
Marc
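
A hedged sketch of the standard approach, assuming known per-point standard deviations sigma on y: a weighted fit minimizes ||(Xw - y)/sigma||^2, which amounts to rescaling the rows of X and y before handing them to GradBasic.

import numpy as np
from modopt.opt.gradient import GradBasic

# Hypothetical data: design matrix X, observations y, known errors sigma
X = np.random.randn(50, 5)
y = np.random.randn(50)
sigma = np.full(50, 0.1)

X_w = X / sigma[:, None]  # whitened design matrix
y_w = y / sigma           # whitened observations

grad = GradBasic(
    input_data=y_w,
    op=lambda w: X_w @ w,
    trans_op=lambda res: X_w.T @ res,
)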

[BUG][URGENT] Metrics don't work right in the new release

System setup
OS: [e.g] macOS v10.14.1
Python version: v3.6.10
Python environment (if any): [e.g.] conda v4.5.11

Describe the bug
I was recently moving from the old to the new version of ModOpt for some other packages that use it, and I see the following error on metrics:

I believe the error is coming from here:

@metrics.setter
def metrics(self, metrics):
    if isinstance(metrics, type(None)):
        self._metrics = {}
    elif not isinstance(metrics, dict):
        raise TypeError(
            'Metrics must be a dictionary, not {0}.'.format(type(metrics)),
        )

We don't have an else branch! If metrics is a dictionary and not None, then we don't set any metrics at all, i.e. for metrics = {}.
I am quite surprised that testing missed this! Surely I might be missing something here.

I checked: the reason most of the code passes the tests is that we don't test the metrics, and SetUp by default passes None as the metrics input.

So, effectively, ModOpt would fail for any algorithm with metrics defined as a dictionary (which is how it must be used)!

To Reproduce

from modopt.opt.algorithms import SetUp
A = SetUp(metrics={})

Results in:
AttributeError: 'SetUp' object has no attribute '_metrics'

To fix
we just need to add an else: self._metrics = metrics branch, as sketched below. However, @sfarrens, how did we miss this in the refactoring?

@sfarrens, in my opinion this needs to be fixed at the highest priority, and the current release on PyPI must be removed and updated
with this. Perhaps adding a dummy test with metrics is also very much needed!
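
A minimal sketch of the proposed fix, adding the missing else branch to the setter quoted above:

@metrics.setter
def metrics(self, metrics):
    if isinstance(metrics, type(None)):
        self._metrics = {}
    elif isinstance(metrics, dict):
        self._metrics = metrics
    else:
        raise TypeError(
            'Metrics must be a dictionary, not {0}.'.format(type(metrics)),
        )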

Are you planning to submit a Pull Request?

  • Yes
  • No

[BUG] Sphinx bibtex error

Bug Summary

In a recent travis-ci job the following error was raised:

Extension error:
You must configure the bibtex_bibfiles setting
Error: Error building sphinx doc
The command "travis-sphinx -v -o docs/build build -n -s docs/source" exited with 1.
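
Not part of the original report, but sphinxcontrib-bibtex requires the .bib files to be declared in the Sphinx conf.py; a sketch (the refs.bib filename is an assumption):

# docs/source/conf.py
extensions = ['sphinxcontrib.bibtex']
bibtex_bibfiles = ['refs.bib']  # hypothetical filename; list the project's .bib files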

Are you planning to submit a Pull Request?

  • Yes
  • No

FISTA has no progress option?

follow-up on #46: when passing the progress kwarg to FISTA, it crashes.

I believe this is because FISTA inherits from object and not from SetUp...?
https://github.com/CEA-COSMIC/ModOpt/blob/master/modopt/opt/algorithms.py#L224

Sorry if I'm completely off!

    427                 weight_optim = optimalg.ForwardBackward(alpha, weight_grad, coeff_prox, cost=weight_cost,
    428                                                 beta_param=weight_grad.inv_spec_rad, auto_iterate=False,
--> 429                                                 progress=self.modopt_verb)
    430                 weight_optim.iterate(max_iter=self.nb_subiter_weights)
    431                 alpha = weight_optim.x_final

~/anaconda2/envs/shapepipe/lib/python3.6/site-packages/modopt-1.3.1-py3.6.egg/modopt/opt/algorithms.py in __init__(self, x, grad, prox, cost, beta_param, lambda_param, beta_update, lambda_update, auto_iterate, metric_call_period, metrics, linear, **kwargs)
    535         self._beta_update = beta_update
    536         if isinstance(lambda_update, str) and lambda_update == 'fista':
--> 537             fista = FISTA(**kwargs)
    538             self._lambda_update = fista.update_lambda
    539             self._is_restart = fista.is_restart

TypeError: __init__() got an unexpected keyword argument 'progress'

[BUG] Fix unsorting of data in the Ordered Weighted L1 norm, and also ensure we error out in case increasing weights are used

Describe the bug

# Unsorting the data
data_abs_unsorted = data_abs[data_abs_sort_idx]

The above code is supposed to unsort the data; however, it is wrong in its current state.

I will address this as proposed by @zaccharieramzi in #52 (comment):

self.weights = np.sort(weights.flatten())[::-1]

Here the code reorders the weights, which causes issues:
a) It prevents users from providing a constant weight as a single number rather than an array.
b) It results in bad and undesired output if the weights are not non-increasing, as required by the theory of this operator.

I will get a PR for both of these.
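
For reference, a minimal sketch of a correct unsort using the inverse permutation (variable names follow the snippet above):

import numpy as np

data_abs = np.abs(np.array([3.0, -1.0, 2.0]))
data_abs_sort_idx = np.argsort(data_abs)[::-1]  # indices sorting in decreasing order
data_abs_sorted = data_abs[data_abs_sort_idx]

# Unsorting: scatter the sorted values back to their original positions
data_abs_unsorted = np.empty_like(data_abs_sorted)
data_abs_unsorted[data_abs_sort_idx] = data_abs_sorted
assert np.array_equal(data_abs_unsorted, data_abs)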

[NEW FEATURE]: K-support norm proximity operator implementation

Is your feature request related to a problem? Please describe.
For multi-task learning or calibrationless methods, the k-support norm is a great structured sparsity norm.

Describe the solution you'd like
The solution was originally proposed by McDonald; the idea would be to implement Algorithm 1 (page 18) in the special case of the k-support norm.

Describe alternatives you've considered
An alternative idea would be to implement the method proposed by Argyriou

Are you planning to submit a Pull Request?

  • Yes
  • No

[BUG] Possible error in Ridge proximal operator

System setup
modopt version: v1.5.1

Describe the bug
Using Ridge in a reconstruction (with Condat in my case) raises a ValueError: dimension mismatch. Other similar proximity operators (SparseThreshold or ElasticNet) don't have this issue.
Ridge applies the _linear operator to the input_data before applying the thresholding operation, but when running a reconstruction this input_data is already in the image space of the linear operator, so only the thresholding step should be done. This is what SparseThreshold and ElasticNet do.

Screenshots

[Screenshot: error raised when using Condat with Ridge]

Module and lines involved
Replacing line 729 of modopt.opt.proximity:

return self._linear.op(input_data) / (1 + threshold)

with:

return input_data / (1 + threshold)

worked and gave results that looked like what we would expect.

Changing the name of the parameter to 'linear_coeffs' or something similar would maybe make it less confusing.

Are you planning to submit a Pull Request?

  • Yes
  • No

[BUG] Filter trimming doesn't work properly when using non square data

System setup
OS: Ubuntu 16.04
Python version: v3.6.8

Describe the bug
When using get_mr_filters with the trim option set to True on non-square data, the trimming doesn't work properly: the filter you get still has a lot of zeros in it and the non-zero part is not centred.

To Reproduce

import matplotlib.pyplot as plt
import numpy as np
from modopt.signal.wavelet import get_mr_filters

wav_filters = get_mr_filters((496, 336), opt=['-t 2', '-n 2'], coarse=True)

plt.figure()
plt.imshow(wav_filters[0])
plt.show()

print(wav_filters[0].shape)
print(np.where(wav_filters[0] != 0))

Expected behavior
We should trim even more in this situation.

Module and lines involved
The problem comes from the trimming which is done with the data being square in mind.

Are you planning to submit a Pull Request?

  • Yes
  • No

Warning for get_mr_filters

When getting the wavelet filters using the sparse2d wrappers, we get the following warning:

modopt/signal/wavelet.py:160: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  fake_data[list(zip(data_shape // 2))] = 1
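
Not from the original report, but the warning itself suggests the fix; assuming the intent is to place a single 1 at the centre of fake_data, a sketch:

import numpy as np

data_shape = np.array([128, 128])
fake_data = np.zeros(data_shape)
fake_data[tuple(data_shape // 2)] = 1  # index with a tuple instead of a list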

[BUG] Update `get_notify_observers_kwargs` for POGM to return at least one image-space variable

Describe the bug

Currently, POGM holds wavelet coefficients in all cases.
This causes issues when trying to define SSIM and other related metrics that need an image as input.
Although this seems more of an enhancement, I am marking it as a bug because it breaks the functionality of metrics with POGM.

return {
    'u_new': self._u_new,
    'x_new': self._x_new,
    'y_new': self._y_new,
    'z_new': self._z,
    'xi': self._xi,
    'sigma': self._sigma,
    't': self._t_new,
    'idx': self.idx,
}

This will mostly be a short PR, and @sfarrens, can we please wait for this PR before a release?

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] Make clear how to use metrics in algorithms

Is your feature request related to a problem? Please describe.
Some users may have a hard time understanding how to use metrics in optimization algorithms.

Describe the solution you'd like
It could be nice to have:

  • a clear documentation on this;
  • links to examples (see the hedged sketch below).
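
For instance, a hedged sketch of how a metrics dictionary is typically passed to a ModOpt algorithm (the exact schema should be checked against the documentation; the reference image is an assumption):

import numpy as np
from modopt.math.metrics import ssim

ref = np.random.randn(64, 64)  # hypothetical reference image

metrics = {
    'ssim': {
        'metric': ssim,                # callable computing the metric
        'mapping': {'x_new': 'test'},  # map the iterate onto the metric argument
        'cst_kwargs': {'ref': ref},    # fixed keyword arguments
        'early_stopping': False,
    },
}
# then e.g.: ForwardBackward(..., metrics=metrics, metric_call_period=5)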

Are you planning to submit a Pull Request?

  • Yes
  • No

[QoL] Consistent naming scheme for step size parameters in FB/GFB

Is your feature request related to a problem? Please describe.
The main step size parameter is called beta_param in FB but gamma_param in GFB. I know these are so they exactly match the papers cited, but I think it might be easier on the user if they had the same name?

Describe the solution you'd like
Maybe forego letters completely and just call them step_size...?

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] Call Metrics after last iteration

Currently the metrics are called only once per metric call period.

I would like to implement 2 things:

  1. A way to make the metric call period infinite, i.e. we never call the metrics at all.
  2. An option to always call the metrics at the end of the iterations.

These two features would help a lot, especially in a grid search: we could obtain the values at, say, the beginning and end of the iterations, or only at the end.

Are you planning to submit a Pull Request?

Have more basic operator classes

Currently, we have GradParent and ProxParent base classes.

However, it would help to have more generic base operator classes, particularly a linear operator class like the OperatorBase in pysap-mri:

https://github.com/CEA-COSMIC/pysap-mri/blob/90c9d12687660a266f029680359a532efdb0db3f/mri/operators/base.py#L9

A similar base class with just an op method could be used for Gradient and Proximity.

The reason this matters is that I have some more generic proximity and gradient operators which can't inherit from GradParent and ProxParent; this breaks the flow and also causes a lot of warnings.
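
A hedged sketch of what such base classes could look like, modelled loosely on pysap-mri's OperatorBase (the names are assumptions, not existing ModOpt API):

class OperatorBase:
    # Hypothetical generic operator exposing a single op method
    def op(self, data):
        raise NotImplementedError


class LinearOperatorBase(OperatorBase):
    # Hypothetical linear operator adding an adjoint operation
    def adj_op(self, coeffs):
        raise NotImplementedError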

Are you planning to submit a Pull Request?

  • Yes
  • No

Low rank constraint with highly rectangular matrix

Hello,

I would like to solve a low rank relaxed problem of the kind :

||Ex-y||^2 + ||x||_*

where my data x is of shape (N, M), with N typically 128*128 and M 32.

When I'm using the LowRankMatrix proximity in ModOpt I get a MemoryError, because scipy tries to calculate the full SVD and therefore wants to build a matrix with 128^4 entries, which I can't store.

I was wondering if I could change the way to calculate the svd by using the full_matrices argument from scipy.

I'm making a PR to open the discussion.
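
For reference, scipy's svd supports an economy mode via full_matrices=False that avoids forming the full N x N factor; a quick sketch:

import numpy as np
from scipy.linalg import svd

x = np.random.randn(128 * 128, 32)

# full_matrices=False returns U of shape (N, M) instead of (N, N)
u, s, vh = svd(x, full_matrices=False)
print(u.shape, s.shape, vh.shape)  # (16384, 32) (32,) (32, 32)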

[BUG][URGENT] FISTA and POGM converge in single iteration!

System setup
OS: [e.g] macOS v10.14.1
Python version: [e.g.] v3.6.7
Python environment (if any): [e.g.] conda v4.5.11

Describe the bug
Currently, the FISTA and POGM algorithms seem to converge in a single iteration.
The reason is that we added a trailing comma for the Python style guide:

if self._cost_func:
    self.converge = (
        self.any_convergence_flag()
        or self._cost_func.get_cost(self._x_new),
    )

This makes self.converge a tuple, i.e. (False,)!
Due to this, the later convergence check:

if self.converge:
    if self.verbose:
        print(' - Converged!')
    break

becomes true and the code ends early.

We need to remove the commas from the lines above; see the sketch below.
Also, @sfarrens, can we keep this issue open to add tests for this? It seems like a very important bug that we should not have missed.
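
A minimal sketch of the fix, dropping the trailing comma so that self.converge stays a boolean:

if self._cost_func:
    self.converge = (
        self.any_convergence_flag()
        or self._cost_func.get_cost(self._x_new)
    )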

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] leave possibility to keep data writeable

Is your feature request related to a problem? Please describe.
After calling a ModOpt solver on X, y, I can no longer call sklearn.linear_model.Lasso on them, because ModOpt has made X and y non-writeable.

Describe the solution you'd like
The check_npndarray function has a writeable parameter, but it's not exposed at the solver level, so I don't think the user can control it.

Code to reproduce:

from benchopt.datasets.simulated import make_correlated_data
import numpy as np
from numpy.linalg import norm

from modopt.opt.algorithms import ForwardBackward
from modopt.opt.proximity import SparseThreshold
from modopt.opt.linear import Identity
from modopt.opt.gradient import GradBasic
from sklearn.linear_model import Lasso


np.random.seed(0)
X = np.random.randn(10, 20)
y = np.random.randn(X.shape[0])
lmbd = norm(X.T @ y) / 20


def op(w):
    return X @ w


fb = ForwardBackward(
    x=np.zeros(X.shape[1]),  # this is the coefficient w
    grad=GradBasic(
        input_data=y, op=op,
        trans_op=lambda res: X.T @ res,
    ),
    prox=SparseThreshold(Identity(), lmbd),
    beta_param=1.0,
    min_beta=1,
    metric_call_period=None,
    xi_restart=0.96,
    restart_strategy='adaptive-1',
    s_greedy=1.01,
    p_lazy=1,
    q_lazy=1,
    auto_iterate=False,
    progress=False,
    cost=None,
)


L = np.linalg.norm(X, ord=2) ** 2
fb.beta_param = 1 / L
fb._beta = 1 / L
fb.iterate(max_iter=100)

# this fails:
Lasso(fit_intercept=False, alpha=lmbd/len(y)).fit(X, y)

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] Add support to ensure we never calculate the cost

Currently, we calculate the cost every N iterations. The call:

if self._cost_func:
    self.converge = self.any_convergence_flag() or \
        self._cost_func.get_cost(self._x_new)

And the actual every-N check:

ModOpt/modopt/opt/cost.py, lines 185 to 190 at 609ecde:

# Check if the cost should be calculated
if self._iteration % self._cost_interval:
    test_result = False
else:
However, there is no way to ensure that we never calculate the cost function at all.
I propose:

  1. Adding an extra check: if cost_interval is None, return without calculating the cost (see the sketch below).
  2. Making cost_interval default to None, i.e. never calculate, rather than calculating at every iteration. In my opinion we would gain a speedup of nearly 2x by doing this by default, allowing users to opt in only when they use the cost value.
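
A minimal sketch of the proposed check, reusing the attribute names from the snippet above:

# A cost_interval of None means the cost is never calculated
if self._cost_interval is None or self._iteration % self._cost_interval:
    test_result = False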

Are you planning to submit a Pull Request?

  • Yes, @zaccharieramzi and @sfarrens, can you please sign off on my proposals so that I can get this in as a PR?
    Also, @sfarrens, as it is a minor 2-3 line PR, we can have this in the release too (I know I am adding a lot at this point in time, but I find the impact quite high)
  • No

[BUG] filter_convolve is not compatible with latest scipy

With the recent scipy release from December 2019, filter_convolve seems to be having issues:

>>> from modopt.signal.wavelet import get_mr_filters, filter_convolve
>>> import numpy as np
>>> T = get_mr_filters((512,512),['-t24','-n4'],coarse=True)
>>> A = filter_convolve(np.zeros((512,512)), T)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/volatile/Chaithya/Codes/ModOpt/modopt/signal/wavelet.py", line 284, in filter_convolve
    return np.array([convolve(data, f, method=method) for f in filters])
  File "/volatile/Chaithya/Codes/ModOpt/modopt/signal/wavelet.py", line 284, in <listcomp>
    return np.array([convolve(data, f, method=method) for f in filters])
  File "/volatile/Chaithya/Codes/ModOpt/modopt/math/convolve.py", line 101, in convolve
    return scipy.signal.fftconvolve(data, kernel, mode='same')
  File "/neurospin/optimed/Chaithya/Environments/maniac_gpu/lib/python3.5/site-packages/scipy/signal/signaltools.py", line 542, in fftconvolve
    ret = _freq_domain_conv(in1, in2, axes, shape, calc_fast_len=True)
  File "/neurospin/optimed/Chaithya/Environments/maniac_gpu/lib/python3.5/site-packages/scipy/signal/signaltools.py", line 383, in _freq_domain_conv
    sp2 = fft(in2, fshape, axes=axes)
  File "/neurospin/optimed/Chaithya/Environments/maniac_gpu/lib/python3.5/site-packages/scipy/fft/_backend.py", line 23, in __ua_function__
    return fn(*args, **kwargs)
  File "/neurospin/optimed/Chaithya/Environments/maniac_gpu/lib/python3.5/site-packages/scipy/fft/_pocketfft/basic.py", line 186, in r2cn
    return pfft.r2c(tmp, axes, forward, norm, None, workers)
RuntimeError: unsupported data type
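
Not part of the original report, but one hedged workaround, assuming the failure comes from an array dtype that scipy's new FFT backend does not handle (for example the non-native byte order of data loaded from FITS files):

import numpy as np

# Hypothetical big-endian array, as typically loaded from a FITS file
data_from_fits = np.zeros((512, 512), dtype='>f4')

# Cast to a native-endian float64 copy before calling filter_convolve
data = data_from_fits.astype(np.float64)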

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE]: Ridge and Elastic Net as regularization

Is your feature request related to a problem? Please describe.
Ridge and Elastic Net are considered traditional regularizers for prediction; they also enable other applications such as parallel MRI reconstruction.

Are you planning to submit a Pull Request?

  • Yes
  • No

Implementing the Ordered Weighted l1-norm proximity operator

It would be great to have the Ordered Weighted L1-norm (OWL) proximity operator implemented in ModOpt. This norm is known to have good statistical properties as a sparsity-promoting regularizer, and to generalize the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR),
which overcomes one of the biggest weaknesses of LASSO regularization: its behaviour when variables become highly correlated.

Solution
The solution would be to implement Eq. (24) in the OWL paper. Moreover, the proximity operator could be unit-tested using its relation to the LASSO.

[NEW FEATURE] Trim filters to the minimum in get_mr_filters

Is your feature request related to a problem? Please describe.
I'm always frustrated when using get_mr_filters because it gives me huge filters if I have a huge data_shape (in my case (512, 512)). The convolution then takes so much time.

Describe the solution you'd like
I would like the filters to be trimmed to the minimum so that the convolutions take less time (it would also decrease memory requirements, but that's not really an issue).

Describe alternatives you've considered
I could trim them afterwards outside the function, but I think it makes sense to trim them in it.

[BUG] PowerMethod is not implemented correctly

Describe the bug
The PowerMethod algorithm implemented in math/matrix.py is not right: we divide by the old norm, not by the norm of the new estimate of the eigenvector.

see here

x_old = self._set_initial_x()

# Iterate until the L2 norm of x converges.
for i_elem in range(max_iter):

    xp = get_array_module(x_old)

    x_old_norm = xp.linalg.norm(x_old)

    x_new = self._operator(x_old) / x_old_norm  # <- should be x_new_norm

    x_new_norm = xp.linalg.norm(x_new)
The Pull Request is on the way.
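
For reference, a minimal sketch of the standard power iteration, normalizing by the norm of the new estimate (plain NumPy, independent of ModOpt's implementation):

import numpy as np

def power_method(operator, x0, max_iter=100):
    # Sketch: estimate the largest eigenvalue of a linear operator
    x = x0 / np.linalg.norm(x0)
    x_norm = np.linalg.norm(x)
    for _ in range(max_iter):
        x = operator(x)
        x_norm = np.linalg.norm(x)  # norm of the NEW estimate
        x = x / x_norm
    return x_norm, x

# Hypothetical usage with a symmetric matrix A
A = np.array([[2.0, 0.0], [0.0, 1.0]])
eig, vec = power_method(lambda v: A @ v, np.ones(2))
print(eig)  # ~2.0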

[BUG] Progressbar parameter max_value -> maxval

if self.progress:
    with ProgressBar(redirect_stdout=True, max_value=max_iter) as bar:
        self._iterations(max_iter, bar=bar)
else:
    self._iterations(max_iter)

I see that in progressbar it is maxval and not max_value. This is with Python 3.5 and progressbar version 2.5. Is this just a version mismatch? I already had things working fine on Python 3.6.

New release 1.5 broke some downstream code

Hello,

Thanks for this nice optimization package. We are developing a framework for automatic benchmarking of optimization libraries called benchopt, where we included modopt for the lasso (with the help of @zaccharieramzi). I just wanted to report here that the latest release 1.5 broke our downstream code for two reasons:

  • The API changed without a deprecation-warning cycle. When 1.5 was released, data changed to input_data in a lot of places. Software development best practice would have been to support both data and input_data for at least one release, with a deprecation warning, so that downstream code can be updated without breaking (see the sketch after this list). It is not a very big deal, but at least mentioning the change in the what's new would be nice.

  • We also realized that the requirements impose pinned versions of the dependencies, like numpy. This tends to break compatibility with many other packages installed in a distribution and should be avoided if possible. Using something like numpy>=1.19 would already help a lot instead of pinning the exact release number. It would be nice to remove the pinned versions from the requirements file.
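
A hedged sketch of such a deprecation cycle (the function is hypothetical; ModOpt's actual signatures will differ):

import warnings

def process(input_data=None, data=None):
    # Hypothetical function: 'data' was the old name, 'input_data' the new one
    if data is not None:
        warnings.warn(
            "'data' is deprecated and will be removed in a future release; "
            "use 'input_data' instead.",
            DeprecationWarning,
        )
        input_data = data
    return input_data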

I think fixing this and making sure to keep a stable API would make the package more robust for users. Hope this can help future development! :)

[BUG] Wavelet filter retrieval returns FileNotFoundError when using the -T option

System setup
OS: ubuntu v16.04.10
Python version: v3.6.7

Describe the bug
When trying to get the wavelet filters using the '-T' option, we get a FileNotFoundError, coming from the fact that the results have not been produced. The file not found is e.g. './mr_temp_2019.06.12_10.45.27.mr'.

To Reproduce

from modopt.signal.wavelet import get_mr_filters
data_shape = (512, 512)
get_mr_filters(data_shape, opt=[
    '-t 2', 
    '-n 2', 
    '-T 11',  # corresponds to the Haar wavelet
], coarse=True)

It doesn't work for Haar ('-T 11'), but it also doesn't work for Biorthogonal 7/9 filters ('-T 1'), which are supposed to be the default, when specifying them in opt. It's really specifying the type of filters that trips up mr_transform.

Expected behavior
We should see that the options were not used correctly.

Module and lines involved
The bug comes from check_call([executable] + opt + [file_fits, file_mr]) (line 113 in https://github.com/CEA-COSMIC/ModOpt/blob/master/modopt/signal/wavelet.py#L113) not generating file_mr. file_fits is also left behind, not removed.

It may be a bug with mr_transform, but it still needs to be handled in ModOpt imho.
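
Not from the original report, but a hedged sketch of defensive handling around the subprocess call (the wrapper name and error message are assumptions):

import os
from subprocess import check_call

def run_mr_transform(executable, opt, file_fits, file_mr):
    # Hypothetical wrapper: fail with a clear message if mr_transform
    # exits successfully but does not produce the expected output file
    check_call([executable] + opt + [file_fits, file_mr])
    if not os.path.isfile(file_mr):
        raise RuntimeError(
            'mr_transform did not produce {0}; check the options {1}'.format(
                file_mr, opt,
            ),
        )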

Are you planning to submit a Pull Request?

  • Yes
  • No

[BUG] Inconsistent return type from modopt.signal.wavelet.get_mr_filters

System setup
OS: Ubuntu 18.04
Python version: 3.7.6
Python environment (if any): conda v4.8.2
Numpy version: 1.18.1

Describe the bug
The return type of modopt.signal.wavelet.get_mr_filters depends on the value of parameter trim. If False, a 3D ndarray is returned, as expected from the function documentation, but if True, an object array containing ndarrays of different shapes is returned. The documentation leads the user to believe that operations such as np.sum can be applied to the output of this function, but this breaks when trim=True.

Module and lines involved
See line 226 of modopt.signal.wavelet.

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] Automated profiling framework

Issue

Implement an automated code profiling framework for ModOpt in order to test performance improvements. This framework needs to run on a GPU, either remotely (e.g. Google Colab, Kaggle Kernels) or locally (e.g. on a machine at NeuroSpin).

Are you planning to submit a Pull Request?

  • Yes
  • No

Replace the dot by vdot in the FISTA restarting criterion

The restarting FISTA implemented in #35 has a flaw: it doesn't compute a proper inner product because it doesn't use the conjugate.

This can be done by replacing dot with vdot (which will also allow us not to flatten the matrices ourselves).
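
For reference, np.vdot conjugates its first argument and flattens multidimensional inputs, unlike np.dot:

import numpy as np

a = np.array([[1 + 2j, 3j]])
b = np.array([[1 - 1j, 2]])

print(np.vdot(a, b))           # conj(a) . b, with inputs flattened
print(np.sum(np.conj(a) * b))  # same value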

Initial Update

The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.

[NEW FEATURE] Parallelize filter_convolve

Is your feature request related to a problem? Please describe.
I'm always frustrated when I run the filter_convolve function on big images (512 x 512), because it takes a lot of time.
Example: 412 ms (measured with %%timeit) for a starlet transform with 4 levels of decomposition (without the coarse scale).

Describe the solution you'd like
I would like to have the filter_convolve function parallelized. Indeed, each convolution could be run in its own thread or process.
I think using joblib, especially its embarrassingly parallel helper, could help us move in that direction; a sketch follows.
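
A hedged sketch of what this could look like with joblib (scipy's fftconvolve stands in here for ModOpt's convolution routine; the filter shapes are assumptions):

import numpy as np
from joblib import Parallel, delayed
from scipy.signal import fftconvolve

data = np.random.randn(512, 512)
filters = [np.random.randn(64, 64) for _ in range(4)]  # hypothetical filters

# Run each convolution as an independent parallel task
results = Parallel(n_jobs=-1)(
    delayed(fftconvolve)(data, filt, mode='same') for filt in filters
)
result = np.array(results)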

Describe alternatives you've considered
I haven't considered any alternatives.

[BUG]: skimage.metrics only exists for Python versions >= 3.6

System setup
Python version: [e.g.] v3.5

Describe the bug
scikit-image dropped support for Python 3.5, making SSIM unusable in Python 3.5 even if scikit-image is installed.

Module and lines involved
For more information, see the scikit-image release notes concerning SSIM.

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] GPU Versions of Optimization Algorithms

Describe the solution you'd like
We would want to implement GPU versions of the algorithms, which would carry out the complete descent on the GPU without any switching back and forth between GPU and CPU.

Describe alternatives you've considered
Something similar to cdfit in https://github.com/rapidsai/cuml/blob/branch-0.10/cpp/src/solver/cd.h
This is a simple stochastic gradient descent.

My ultimate goal is that we should finally have linear operators, gradient operators and Fourier operators working efficiently in GPU memory, so that ModOpt can then work completely on the GPU, giving a large speedup.

Are you planning to submit a Pull Request?

  • Yes, but as a low priority.
  • No

Implement FAASTA

The FAASTA algorithm allows us to perform a FISTA algorithm in the analysis formulation. In our case, we could see whether FAASTA still performs well with the new restarting options.

Since FISTA with an orthogonal transform is much faster than Condat-Vu, we are going to see whether this holds true even when we have to compute an extra inner iteration.

[NEW FEATURE] Drop support for Python < v3.5

Is your feature request related to a problem? Please describe.
The current versions of basic Python packages (e.g. numpy, scipy, etc.) no longer support Python versions less than 3.5. We should therefore focus on supporting Python >= 3.5 and drop fixes/hacks for Python 2.7 and 3.4.

Describe the solution you'd like

  • Set minimum dependency requirements to latest releases.
  • Drop future imports etc.
  • Drop import options depending on Python version.

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] Gradient Descent Algorithms and accelerations methods

With the integration of online forward-backward algorithms into pysap-mri also comes the need for machine-learning-inspired gradient descent accelerations (e.g. momentum, SAGA, etc.); see the sketch after the list below.

A solid draft of these algorithm implementations is available here.

This includes:

  • Vanilla Gradient Descent
  • Epoch descent (proximal step at the end of each epoch)
  • ADA
  • RMSProp
  • Momentum
  • ADAM
  • SAGA
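
For illustration, a hedged sketch of one such acceleration, a momentum update, in plain NumPy (not ModOpt API; the quadratic objective is a toy assumption):

import numpy as np

def momentum_step(x, velocity, grad, lr=0.01, momentum=0.9):
    # One hypothetical momentum gradient-descent step
    velocity = momentum * velocity - lr * grad(x)
    return x + velocity, velocity

# Toy objective f(x) = ||x||^2 / 2, whose gradient is x
x, v = np.ones(3), np.zeros(3)
for _ in range(100):
    x, v = momentum_step(x, v, grad=lambda z: z)
print(x)  # approaches 0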

I suggest refactoring the opt/algorithms.py file into a module, similar to what is done in pysap-mri:

   opt/algorithms/base.py -> SetUp
   opt/algorithms/forward_backward.py -> all forward-backward algorithms (and POGM)
   opt/algorithms/primal_dual.py -> Condat
   opt/algorithms/gradient_descent.py -> all the gradient descent algorithms mentioned above

To avoid breaking changes downstream, all these classes will be imported in algorithms/__init__.py.

Are you planning to submit a Pull Request?

  • Yes
  • No

[NEW FEATURE] Define verbose mode for warnings

Is your feature request related to a problem? Please describe.
This is requested to solve this issue on the MCCD package.

In summary, the MCCD algorithm uses some optimization algorithms from the ModOpt package. In particular, the gradient class GradParent() from .opt.gradient.py is used, and the function check_npndarray() from .base.types.py throws the warning 'Making input data immutable.' for each instance of the gradient class I have when I run the algorithm.

The main issue is that when the MCCD algorithm is run over a big amount of data, it outputs big, unwanted stdout files.

Describe the solution you'd like
It would be possible to add a verbose attribute to the GradParent() class that gets passed to check_npndarray() in order to control its behaviour: check_npndarray() has a verbose parameter, but in this case it always defaults to True. A sketch follows.
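
A hedged sketch of the proposed passthrough (the simplified GradParent constructor here is an assumption):

from modopt.base.types import check_npndarray

class GradParent(object):
    # Hypothetical simplified constructor exposing verbose and forwarding it
    def __init__(self, data, verbose=True):
        self.verbose = verbose
        check_npndarray(data, writeable=False, verbose=self.verbose)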

Are you planning to submit a Pull Request?

  • Yes
  • No

@sfarrens, if you're OK with it, I can propose a pull request for this.

[NEW FEATURE] Composition Operator

Is your feature request related to a problem? Please describe.
We need a composition operator that takes in a list of operators and carries out their composition.
These individual operations can also be proximity or linear operators. This way we can implement SparseGroupLASSO as GroupLASSO(SparseThreshold), and so on.

Describe the solution you'd like
Implement a class as:

class CompositionOperator(Base):
    def __init__(self, operators):
        self.operators = operators

    def _op_method(self, data):
        # Apply each operator in turn to the result of the previous one
        result = data
        for operator in self.operators:
            result = operator.op(result)
        return result

Are you planning to submit a Pull Request?

  • Yes
  • No

please don't pin numpy versions

just sharing my experience now...

I just tried to install modopt in my conda env. As you pin requirements and don't provide conda packages,
I had to use pip. Doing so downgraded numpy in my env and simply broke my whole installation.

please please avoid pinning numpy versions (use >= and not ==) and consider pypi and conda packages

🙏

[NEW FEATURE] switch from progressbar to tqdm

When running multiple algorithms sequentially (for instance, reconstructing 800 volumes of an fMRI scan), the output gets cluttered by progress bars.

tqdm is the go-to package nowadays for progress bar display and management. It is faster, lighter, allows nested progress bars, and has great notebook support.

What I propose is to add a progressbar argument to the SetUp class in algorithms/base.py, so that the user can provide a customized tqdm object if needed (for nested progress bars, use of colour, notebook runs, a common progress bar for multiple runs of the algorithm, etc.).

The change will be mostly in place, as progressbar and tqdm expose almost identical APIs.

Also, we could display some extra information (like the cost evaluation) alongside the iteration progress; a sketch follows.
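
A hedged sketch of such a loop (the algo_update callable and cost handling are assumptions, not the SetUp API):

from tqdm.auto import tqdm

def iterate(algo_update, max_iter=150, progbar=None):
    # Hypothetical loop: accept a user-supplied tqdm instance, else create one
    own_bar = progbar is None
    if own_bar:
        progbar = tqdm(total=max_iter)
    for _ in range(max_iter):
        cost = algo_update()            # one iteration of the algorithm
        progbar.set_postfix(cost=cost)  # extra info alongside the progress
        progbar.update()
    if own_bar:
        progbar.close()

# Usage with a dummy update returning a fake cost value
iterate(lambda: 0.0, max_iter=10)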

Are you planning to submit a Pull Request?

  • Yes
