kul-optec / AbstractOperators.jl

Abstract operators for large scale optimization in Julia

License: Other

Topics: derivatives, back-propagation, automatic-differentiation, optimization, large-scale, julia-language

AbstractOperators.jl's Introduction

AbstractOperators.jl


Description

Abstract operators extend the syntax typically used for matrices to linear mappings of arbitrary dimensions and to nonlinear functions. Unlike matrices, however, abstract operators apply their mappings with specific, efficient algorithms that minimize memory requirements. This is particularly useful in iterative algorithms and in first-order large-scale optimization algorithms.

Installation

To install the package, press ] at the Julia REPL to enter the package manager, then

pkg> add AbstractOperators
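
Alternatively, the package can be added non-interactively with the Pkg API:

julia> using Pkg; Pkg.add("AbstractOperators")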

Usage

With using AbstractOperators the package provides several methods, such as multiplication * and adjoint ' (and their in-place counterpart mul!).

For example, one can create a 2-D Discrete Fourier Transform as follows:

julia> A = DFT(3,4)
ℱ  ℝ^(3, 4) -> ℂ^(3, 4)

Here, it can be seen that A has a domain of dimensions size(A,2) = (3,4) and type domainType(A) = Float64, and a codomain of dimensions size(A,1) = (3,4) and type codomainType(A) = Complex{Float64}.
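
These properties can be queried directly; the values in the comments below simply restate the ones above:

size(A, 2)        # (3, 4): domain dimensions
domainType(A)     # Float64
size(A, 1)        # (3, 4): codomain dimensions
codomainType(A)   # Complex{Float64}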

This linear transformation can be evaluated as follows:

julia> x = randn(3,4); #input matrix

julia> y = A*x
3×4 Array{Complex{Float64},2}:
  -1.11412+0.0im       3.58654-0.724452im  -9.10125+0.0im       3.58654+0.724452im
 -0.905575+1.98446im  0.441199-0.913338im  0.315788+3.29666im  0.174273+0.318065im
 -0.905575-1.98446im  0.174273-0.318065im  0.315788-3.29666im  0.441199+0.913338im

julia> mul!(y, A, x) == A*x #in-place evaluation
true

julia> all(A'*y - *(size(x)...)*x .< 1e-12) # A'*A == prod(size(x))*I for the unnormalized DFT
true

julia> mul!(x, A',y) #in-place evaluation
3×4 Array{Float64,2}:
  -2.99091   9.45611  -19.799     1.6327 
 -11.1841   11.2365   -26.3614   11.7261 
   5.04815   7.61552   -6.00498   6.25586

Notice that inputs and outputs are not necessarily Vectors.

It is also possible to combine multiple AbstractOperators using different calculus rules.

For example AbstractOperators can be concatenated horizontally:

julia> B = Eye(Complex{Float64},(3,4))
I  ℂ^(3, 4) -> ℂ^(3, 4)

julia> H = [A B]
[ℱ,I]  ℝ^(3, 4)  ℂ^(3, 4) -> ℂ^(3, 4)

In this case H has a domain of dimensions size(H,2) = ((3, 4), (3, 4)) and type domainType(H) = (Float64, Complex{Float64}).

When an AbstractOperator has multiple domains, it must be multiplied by an ArrayPartition (from the RecursiveArrayTools package) with matching sizes and domain types, for example:

julia> using RecursiveArrayTools

julia> H*ArrayPartition(x, complex(x))
3×4 Array{Complex{Float64},2}:
 -16.3603+0.0im      52.4946-8.69342im  -129.014+0.0im      44.6712+8.69342im
  -22.051+23.8135im  16.5309-10.9601im  -22.5719+39.5599im  13.8174+3.81678im
 -5.81874-23.8135im  9.70679-3.81678im  -2.21552-39.5599im  11.5502+10.9601im

Similarly, when an AbstractOperator has multiple codomains, multiplication returns an ArrayPartition, for example:

julia> V = VCAT(Eye(3,3),FiniteDiff((3,3)))
[I;δx]  ℝ^(3, 3) -> ℝ^(3, 3)  ℝ^(2, 3)

julia> V*ones(3,3)
([1.0 1.0 1.0; 1.0 1.0 1.0; 1.0 1.0 1.0], [0.0 0.0 0.0; 0.0 0.0 0.0])

A list of the available AbstractOperators and calculus rules can be found in the documentation.
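
As a small illustration of two such calculus rules (a sketch only; the scaling and composition rules are assumed to behave as described in the documentation, and the printed forms are omitted):

using AbstractOperators
S = 2.0 * Eye(3, 3)                  # scaling an operator by a scalar
C = FiniteDiff((3, 3)) * Eye(3, 3)   # composing two operators with *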

Related packages

Credits

AbstractOperators.jl is developed by Niccolò Antonello and Lorenzo Stella at KU Leuven, ESAT/Stadius.

AbstractOperators.jl's People

Contributors

1oly, eviatarbach, hakkelt, juliatagbot, lostella, nantonel


AbstractOperators.jl's Issues

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Are there any plans for GPU support?

Just to put this question into context: I recently developed a similar package (https://github.com/hakkelt/FunctionOperators.jl) before realizing that such a package already exists (which is all the more annoying given that I spent some time searching for one before starting my own). AbstractOperators seems to be a good package and is definitely more mature and feature-rich than mine, so I would be willing to contribute GPU support here (more specifically, support for CuArray input and output) rather than continue developing FunctionOperators.

I looked at the source code, and it appears to me that the main design barrier to GPU support is that the calculus rules allocate buffers when the expression is parsed, whereas it only becomes clear later, when the operator is applied to the input array, whether the input is a CPU or a GPU array. To avoid this problem in FunctionOperators, I allocate the buffers only when the operator is first applied to its inputs, and the buffer is of course re-used afterwards. I think this approach would be viable here as well, and it would need only a reasonable amount of refactoring; a sketch of the idea follows.
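
A minimal sketch of the lazy-allocation idea (the LazyBuffered type and apply! function are hypothetical, not part of AbstractOperators): the buffer is allocated on the first application, so it automatically matches the input's array type (Array on the CPU, CuArray on the GPU).

using LinearAlgebra

mutable struct LazyBuffered{O}
    op::O
    buf::Union{Nothing,AbstractArray}
end
LazyBuffered(op) = LazyBuffered(op, nothing)

function apply!(y, L::LazyBuffered, x)
    if L.buf === nothing
        L.buf = similar(x)    # first call: allocate a buffer with the same array type as x
    end
    copyto!(L.buf, x)         # use the buffer as scratch space
    mul!(y, L.op, L.buf)      # delegate to the wrapped operator
    return y
end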

avoid parsing strings ending with commas

As of Julia 1.1, "x," is considered incomplete input, which causes generated functions in this package to fail with parse errors. Trailing commas should be removed (or, better yet, expression quoting should be used instead of calling parse).
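
A small illustration of the behaviour described above (assuming Julia 1.1 or later):

ex = Meta.parse("x,")
Meta.isexpr(ex, :incomplete)   # true: the trailing comma makes the input incomplete

# Building the expression programmatically avoids string parsing altogether:
ex = Expr(:tuple, :x)          # the quoted one-element tuple (x,)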

matrix-vector product with Jacobian

I get an error when trying to execute the following:

S = Sigmoid((3,), 1.0)
x = [1.0, 2.0, 3.0]
J = Jacobian(S, x)
y = similar(x)
A_mul_B!(y, J, x)

Here J should be a LinearOperator, if I understand correctly, and have the A_mul_B! operation defined.
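
For reference, a sketch of the same call with the current in-place API (assuming Jacobian(S, x) supports the mul! mentioned in the Usage section; on Julia 0.7+ A_mul_B! was replaced by mul!):

using LinearAlgebra, AbstractOperators
S = Sigmoid((3,), 1.0)
x = [1.0, 2.0, 3.0]
J = Jacobian(S, x)
y = similar(x)
mul!(y, J, x)   # in-place Jacobian-vector product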

size() returns Tuple of Tuples instead of Tuple

I am using the Conv operator as the first argument matrix-like operator in the Krylov.jl optimization functions.

It fails because the size() function applied to a Conv object does not return the expected tuple. Instead of

(m, n)

The Krylov.jl optimizer gets:

((m,), (n,))

If I override Base.size, it all works beautifully. Shouldn't the size operator conform to what is expected of a Matrix-like operator? Am I doing something wrong?

Edit: in order to silence a warning, I also had to override eltype() to return Float64 (in my case) instead of Any.
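
A minimal sketch of such a wrapper (the MatrixLike name and its methods are hypothetical, not part of AbstractOperators or Krylov.jl): it flattens the tuple-of-tuples size and fixes eltype so the operator can be passed to code that expects a matrix-like object.

using AbstractOperators, LinearAlgebra

struct MatrixLike{T,O<:AbstractOperator}
    op::O
end
MatrixLike(op::AbstractOperator) = MatrixLike{codomainType(op),typeof(op)}(op)

Base.size(A::MatrixLike) = (prod(size(A.op, 1)), prod(size(A.op, 2)))
Base.size(A::MatrixLike, i::Int) = size(A)[i]
Base.eltype(::MatrixLike{T}) where {T} = T

# Matrix-vector product: reshape the flat vector to the operator's domain, flatten the result.
Base.:*(A::MatrixLike, x::AbstractVector) = vec(A.op * reshape(x, size(A.op, 2)))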

Fix Reshape of HCATs

Currently, Reshape of HCATs and of operators involving BlockArrays/ArrayPartitions is broken.
A possible solution is to add the following methods:

function AbstractOperators.permute(T::Reshape{N,L}, p::AbstractVector{Int}) where {N,L}
    A = AbstractOperators.permute(T.A,p)
    return Reshape(A,T.dim_out) 
end
AbstractOperators.Jacobian(R::Reshape{N,L},x) where {N,L} = Reshape(Jacobian(R.A,x),R.dim_out) 

Please register a new (patch) version with identical content

HEAD is still compatible with the newest versions of the DSP and FFTW libraries, and the [compat] section of the Project.toml file even permits them. However, the General registry recorded a narrower compatibility range, limiting DSP to version 0.6.

Bumping the patch version to 0.2.3 and reregistering the package would solve this issue.

Thanks!

DFT transform along dims...

Hi again,

This is related to #11, but it may be more of a feature request, if I'm not mistaken.

In FFTW.jl it is possible to specify the dimensions along which to transform with an optional dims argument. This option doesn't seem to be supported by the DFT operator (or IDFT). Is that correct?

MWE:

using FFTW, AbstractOperators
A = rand(3,3)
fftA = fft(A) # equivalent to fft(A,(1,2))
DF = DFT(ones(3,3))
DF*A == fftA # true
# but if I want to transform along only one dim
fft(A,1) == DF*A # false

Is there an easy way around this or would it make sense to add as a feature?

Thanks,
Oliver.

EDIT:

I can see you already have it in IRDFT...

2d convolution operator

Hi,

First of all, thanks for putting together this package (and ProximalOperators/Algorithms): it's really nice to play with and they work very well together.

I'm having a difficult time defining an operator for 2d convolution (to be used in basic image deconvolution solved by LeastSquare). Normally, I would have something like

ifft(fft(psf).*fft(x))

instead of A*x, where A is the 2d convolution matrix (block Toeplitz with Toeplitz blocks, BTTB) and psf is the point spread function (forgive me if this is too much irrelevant information).

I've tried to use Compose with

n = 10
A = IDFT(n,n)
B = DFT(n,n)
A = Compose(A,B)

but this gives a DomainError since domainType(A) != codomainType(B).

Would appreciate any tips you might have, and can also cook up a full MWE if the above is not clear.

Thanks!
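
One possible direction (a sketch only, not verified against the current API): build the circular-convolution operator as the composition IDFT ∘ DiagOp(fft(psf)) ∘ DFT, constructing the IDFT with a complex domain so that domain and codomain types match across the composition.

using FFTW, AbstractOperators
n = 10
psf = randn(n, n)
F  = DFT(n, n)                      # real input -> complex spectrum
D  = DiagOp(fft(psf))               # pointwise multiplication by the transfer function
Fi = IDFT(Complex{Float64}, n, n)   # complex spectrum -> complex output
A  = Compose(Fi, Compose(D, F))     # A*x corresponds to ifft(fft(psf) .* fft(x))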

Removing potential type ambiguities

The following script, which tries to extend the behaviour of AbstractOperator, raises an error:

import Base: *
using LinearAlgebra, AbstractOperators, RecursiveArrayTools

@inline LinearAlgebra.mul!(y::ArrayPartition, L::AbstractOperator, x::ArrayPartition) =
    LinearAlgebra.mul!(y.x, L, x.x)

@inline (*)(L::AbstractOperator, x::ArrayPartition) = ArrayPartition(L * x.x)

D = DCAT(MatrixOp(randn(3,3)), MatrixOp(randn(3,3)))
x = ArrayPartition(randn(3), randn(3))
y = similar(x)

mul!(y, D, x)

I'm getting:

ERROR: LoadError: MethodError: LinearAlgebra.mul!(::ArrayPartition{Float64,Tuple{Array{Float64,1},Array{Float64,1}}}, ::DCAT{2,Tuple{MatrixOp{Float64,Float64,Array{Float64,2}},MatrixOp{Float64,Float64,Array{Float64,2}}},Tuple{Int64,Int64},Tuple{Int64,Int64}}, ::ArrayPartition{Float64,Tuple{Array{Float64,1},Array{Float64,1}}}) is ambiguous. Candidates:
  mul!(y, H::DCAT{N,L,P1,P2}, b) where {N, L, P1, P2} in AbstractOperators at [...]/.julia/packages/AbstractOperators/7oLxM/src/calculus/DCAT.jl:67
  mul!(y::ArrayPartition, L::AbstractOperator, x::ArrayPartition) in Main at [...]/issue.jl:5
Possible fix, define
  mul!(::ArrayPartition, ::DCAT{N,L,P1,P2}, ::ArrayPartition)
Stacktrace:
 [1] top-level scope at none:0
 [2] include at ./boot.jl:317 [inlined]
 [3] include_relative(::Module, ::String) at ./loading.jl:1044
 [4] include(::Module, ::String) at ./sysimg.jl:29
 [5] exec_options(::Base.JLOptions) at ./client.jl:266
 [6] _start() at ./client.jl:425
in expression starting at [...]/issue.jl:16

This is probably due to the fact that the types of the first and third arguments of mul! for DCAT are unconstrained; see https://github.com/kul-forbes/AbstractOperators.jl/blob/master/src/calculus/DCAT.jl#L107

I believe this may be the case for several other concrete AbstractOperator types, and is limiting in the sense that it is hard to extend all AbstractOperator types at once, like I was trying to do in the above script.

If possible, we should consider restricting types.
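
In the meantime, one possible user-side workaround (a sketch, not the package's fix) is to define the intersection method suggested by the error message, delegating to the existing DCAT implementation on the underlying tuples:

using LinearAlgebra, AbstractOperators, RecursiveArrayTools

@inline LinearAlgebra.mul!(y::ArrayPartition, H::DCAT, x::ArrayPartition) =
    LinearAlgebra.mul!(y.x, H, x.x)   # resolves the ambiguity for the ArrayPartition case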
