ferrite-fem / Tensors.jl

Efficient computations with symmetric and non-symmetric tensors with support for automatic differentiation.

Home Page: https://ferrite-fem.github.io/Tensors.jl/

License: Other

Language: Julia 100.00%
Topics: finite-elements, cfd, tensor, symmetric-tensors, automatic-differentiation

tensors.jl's Introduction

Tensors.jl

Efficient computations with symmetric and non-symmetric tensors with support for automatic differentiation.


Introduction

This Julia package provides fast operations with symmetric and non-symmetric tensors of order 1, 2 and 4. The tensors are allocated on the stack, which means that there is no need to preallocate output results for performance. Unicode infix operators are provided so that tensor expressions in the source code look similar to mathematical notation. When possible, the symmetry of tensors is exploited for better performance. The package also supports automatic differentiation to easily compute first and second order derivatives of tensorial functions.
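For orientation, a minimal usage sketch (not taken from the documentation; it uses only functionality referenced elsewhere on this page: tensor constructors, the double contraction operator and gradient):

using Tensors

# minimal sketch, not from the README
A = rand(SymmetricTensor{2, 3})   # symmetric second order tensor
B = rand(Tensor{2, 3})            # general second order tensor

s = A ⊡ B                          # double contraction, returns a scalar
∇f = gradient(norm, A)             # derivative of norm(A) w.r.t. A via automatic differentiation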

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add Tensors

Or, equivalently, via the Pkg API:

julia> import Pkg; Pkg.add("Tensors")

Documentation

  • STABLE: most recently tagged version of the documentation.

Project Status

The package is tested against Julia 1.X on Linux, macOS, and Windows.

Contributing and Questions

Contributions are very welcome, as are feature requests and suggestions. Please open an issue if you encounter any problems.

Things to work on

If you are interested in contributing to Tensors.jl, here are a few topics that can get you started:

  • Implement support for third order tensors. These are used more rarely than first, second and fourth order tensors, but are still useful in some applications, so it would be good to support them.
  • Find a way to reduce code duplication without sacrificing performance or compilation time. Currently, there is quite a lot of code duplication in the implementation of the different operators. It should be possible to have a higher-level code generation framework that generates optimized functions from little more than the Einstein summation notation for the operation.
  • Tensors.jl has been developed mostly with applications in continuum mechanics in mind. For other fields, other tensor operations may be useful. Implement these in a performant manner and provide good test coverage and documentation for the new functionality.

Citing Tensors.jl

If you use Tensors.jl for research and publication, please cite the following article:

@article{Tensors.jl,
  title = {Tensors.jl -- Tensor Computations in Julia},
  author = {Carlsson, Kristoffer and Ekre, Fredrik},
  year = {2019},
  journal = {Journal of Open Research Software},
  doi = {10.5334/jors.182},
}

Related packages

Both of the packages below provide a convenience macro for Einstein summation notation with standard Julia Arrays:

tensors.jl's People

Contributors

femtocleaner[bot], fredrikekre, heiderich, jfbarthelemy, keitanakamura, kimauth, knutam, koehlerson, kristofferc, kshyatt, miguelraz, staticfloat, timholy, tkelman


tensors.jl's Issues

Divergence not working for second order tensors

MWE

julia> Tensors.divergence(x->Tensor{2,2}((x[1],0.0,0.0,x[2])), Vec((1.0,1.0)))

ERROR: MethodError: no method matching tr(::Tensor{3, 2, Float64, 8})

Closest candidates are:
  tr(::Number)
   @ LinearAlgebra ~/Tools/julia-1.9.3/share/julia/stdlib/v1.9/LinearAlgebra/src/generic.jl:1008
  tr(::Matrix{T}) where T
   @ LinearAlgebra ~/Tools/julia-1.9.3/share/julia/stdlib/v1.9/LinearAlgebra/src/dense.jl:344
  tr(::LinearAlgebra.Hermitian)
   @ LinearAlgebra ~/Tools/julia-1.9.3/share/julia/stdlib/v1.9/LinearAlgebra/src/symmetric.jl:366
  ...

Stacktrace:
 [1] divergence(f::var"#5#6", v::Vec{2, Float64})
   @ Tensors ~/.julia/packages/Tensors/ep7fg/src/automatic_differentiation.jl:554
 [2] top-level scope
   @ REPL[2]:1

Edit: Fix should be as easy as

Tensors.tr(t::Tensor{3,dim}) where dim = Vec{dim}(i->sum(t[:,i,:]))

but I have no good idea how to test this.
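One possible regression test sketch (hypothetical, not taken from the test suite): use a diagonal tensor field, for which either index convention of the divergence gives the same hand-computed value:

using Tensors, Test

# hypothetical test; S = diag(x[1], x[2]), so div(S) = (1, 1) under either convention
f(x) = Tensor{2,2}((x[1], 0.0, 0.0, x[2]))
@test Tensors.divergence(f, Vec((1.0, 1.0))) ≈ Vec((1.0, 1.0))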

Third order tensors and mutable tensors

Hello,

I am currently working with sensitivities in multi-scale modelling.
In one case I am considering, third order tensors are used.
However, third order tensors are apparently not fully supported, e.g. zero(Tensor{3,dim}) errors.
I created a quick solution for my code, but probably it would be nice to have that in general.

Furthermore, the sensitivities are computed part by part, so they are tensors that need to be adapted.
This adaptation is like inserting a tensor into another tensor of higher order.
In my case I do the operation using arrays, but maybe it would also be nice to have a general solution for that like mutable tensors or functions doing the insertion.

Just two suggestions. What do you think?
I am also willing to contribute, I am just still a bit inexperienced.

Promotion SymmetricTensor -> Tensor is very slow

julia> using Tensors, BenchmarkTools

julia> A = rand(Tensor{2, 3}); B = rand(SymmetricTensor{2, 3});

julia> @btime $A + $A;
  2.363 ns (0 allocations: 0 bytes)

julia> @btime $B + $B;
  1.972 ns (0 allocations: 0 bytes)

julia> @btime $A + $B;
  11.207 ns (0 allocations: 0 bytes)
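A possible workaround sketch until the promotion itself is made fast: convert the symmetric tensor to a full Tensor once, outside any hot loop, so the addition hits the Tensor + Tensor path (assuming convert between the two types is available, which the observed promotion implies):

Bf = convert(Tensor{2,3}, B)   # one-time conversion outside the hot loop (assumed available)
@btime $A + $Bf;               # expected to be close to the Tensor + Tensor timing above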

Inference error in Julia 1.8

Found in PkgEval:

using Test, Tensors

function Ψ(C)
    μ = 1e10
    Kb = 1.66e11
    detC = det(C)
    J = sqrt(detC)
    Ĉ = detC^(-1/3)*C
    return 1/2 * (μ * (tr(Ĉ) - 3) + Kb * (J - 1)^2)
end

const F = one(Tensor{2,2}) + rand(Tensor{2,2})
const C = tdot(F)

@inferred hessian(Ψ, C)

git bisect points to JuliaLang/julia@e3b681c (cc @aviatesk if you have time to have a look).

e3b681c6872d87d74db63dd7ec3da2beee8e3ae4 is the first bad commit
commit e3b681c6872d87d74db63dd7ec3da2beee8e3ae4
Author: Shuhei Kadowaki <[email protected]>
Date:   Wed Feb 2 10:39:19 2022 +0900

    inference: always use const-prop'ed result (#44001)
    
    Previously, for `invoke`/opaque closure callsite, we use constant-prop'ed
    method body only when the inferred return type gets strictly improved
    by const-prop. But since the inliner is now able to inline the method
    body from `InferenceResult`, I believe it is always better to use method
    body shaped up by const-prop' no matter if the return type is improved
    (as we already do for ordinal call sites).
    
    > use constant prop' result even when the return type doesn't get refined
    ```julia
    const Gx = Ref{Any}()
    Base.@constprop :aggressive function conditional_escape!(cnd, x)
        if cnd
            Gx[] = x
        end
        return nothing
    end
    @test fully_eliminated((String,)) do x
        Base.@invoke conditional_escape!(false::Any, x::Any)
    end
    ```

:040000 040000 c48f5010124361b197808e99d69f5cd6d8327c73 e1c3e4c63b4c7128dbb0b4acae99ba864a580983 M     base
:040000 040000 5699d8a554ff36c25a94d4e2ed9576a2c6aab040 519f5968c110b456bbdab1b57015111630741106 M     test
bisect run success

tomandel! throws an error

tomandel! throws an error for a symmetric tensor, whereas tomandel and tovoigt! work fine:

julia> A_sym = rand(SymmetricTensor{2,3});
julia> v = zeros(6);
julia> tomandel!(v, A_sym);
ERROR: UndefVarError: dim not defined
Stacktrace:
 [1] tomandel!(v::Vector{Float64}, A::SymmetricTensor{2, 3, Float64, 6})
   @ Tensors ~\.julia\packages\Tensors\nMiXd\src\voigt.jl:138
 [2] top-level scope
   @ REPL[9]:1
julia> tovoigt!(v, A_sym);
julia> tomandel(A_sym);

AD norm is NaN at origin

Not really specific to Tensors.jl, but maybe it's good to document it here as well. I ran into the following issue yesterday

julia> using Tensors

julia> Tensors.hessian(x->norm(x),zero(Tensor{2,2}),:all)
([NaN NaN; NaN NaN;;; NaN NaN; NaN NaN;;;; NaN NaN; NaN NaN;;; NaN NaN; NaN NaN], [NaN NaN; NaN NaN], 0.0)

Basically a duplicate of JuliaDiff/ChainRules.jl#576 and JuliaDiff/ForwardDiff.jl#547, just at a different repo. You can close it if you think it's too redundant.
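A common way around this in practice (an observation, not an official fix): where the model allows it, formulate things in terms of the squared norm, which is smooth at the origin:

using Tensors

# workaround sketch: the squared norm is differentiable everywhere, unlike norm at the origin
f(x) = x ⊡ x                            # == norm(x)^2
Tensors.hessian(f, zero(Tensor{2,2}))   # finite, no NaNs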

NaN in eigenvector computation

julia> σ = SymmetricTensor{2,3}([
           50.0 -10.0 0.0
           -10.0 40.0 0.0
           0.0 0.0 0.0]);

julia> eigvecs(σ)
3×3 Tensor{2, 3, Float64, 9}:
 NaN  NaN  NaN
 NaN  NaN  NaN
 NaN  NaN  NaN

julia> S = @SMatrix[
           50.0 -10.0 0.0
           -10.0 40.0 0.0
           0.0 0.0 0.0];

julia> eigvecs(S)
3×3 SMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 0.0  0.525731   0.850651
 0.0  0.850651  -0.525731
 1.0  0.0        0.0

Probably this has been fixed in StaticArrays after we borrowed that implementation.

`@generated` function generators don't follow the new rules for v0.6

The requirements for @generated and @pure functions have changed in Julia v0.6. For instance, AFAIK you can no longer call any function from a function generator which may be specialized in the future, or you risk getting a world age error. My understanding of this is that function generators are compiled just once, and immediately as they are read in by the system, so they only have access to methods defined before their definition. Furthermore, any method found from a later world age will result in a world age related error. I think the reason for this is that pure functions and generators do not populate back-edges in the dependency graph for automatic recompilation, but @vtjnash knows the exact details.

A specific example is this line of Tensors where one and zero may fail on user-defined types created after the user executes using Tensors. I haven't looked around for any more problems, but I thought I'd let you know after experiencing this in StaticArrays.

Order of eigenvectors for SymmetricTensors{2,2}

This is related to StaticArrays#523:

using LinearAlgebra, Tensors, Test
S = SymmetricTensor{2,2}((70., 0., 1/70))
E = eigen(S)
eigvals(E)
V = eigvecs(E)
@test S  ≈ V * Diagonal(eigvals(E)) * V'
M = Matrix(S)
F = eigen(M)
U = eigvecs(F)
@test M  ≈ U * Diagonal(eigvals(F)) * U'

Suggestion: Implement numerical differentiation

We already have automatic differentiation, which is definitely much better than numerical differentiation regarding both accuracy and speed. Some computations, however, require the use of numerical differentiation, for example the numerical computation of the consistent (algorithmic) tangent modulus:

If you would accept an implementation of numerical differentiation in Tensors.jl, I'm happy to code it.
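For reference, a minimal central-difference sketch for a scalar-valued function of a (non-symmetric) second order tensor could look like the following; numgrad and the step size h are hypothetical names, not an existing or proposed API:

using Tensors

# numgrad is a hypothetical helper: central differences over each component
function numgrad(f, A::Tensor{2,dim}; h = 1e-6) where {dim}
    return Tensor{2,dim}((i, j) -> begin
        δ = Tensor{2,dim}((k, l) -> (k == i && l == j) ? h : 0.0)
        (f(A + δ) - f(A - δ)) / 2h
    end)
end

A = rand(Tensor{2,3})
numgrad(norm, A) ≈ Tensors.gradient(norm, A)   # agreement up to O(h^2)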

Incorrect eigen vectors on symmetric tensors

On some 2D symmetric tensors with an off-diagonal term close but not equal to 0, the eigenvectors are not computed correctly and come out equal.
The error does not occur for the same tensor stored as a general (non-symmetric) tensor, see below.

julia> a = SymmetricTensor{2, 2, Float64}([0.003 5.8002786937773495e-16 0.003])
2×2 SymmetricTensor{2, 2, Float64, 3}:
0.003 5.80028e-16
5.80028e-16 0.003

julia> eigvecs(a)
2×2 Tensor{2, 2, Float64, 4}:
0.0 0.0
1.0 1.0

julia> a = Tensor{2, 2, Float64}([0.003 +5.8002786937773495e-16 +5.8002786937773495e-16 0.003])
2×2 Tensor{2, 2, Float64, 4}:
0.003 5.80028e-16
5.80028e-16 0.003

julia> eigvecs(a)
2×2 Matrix{Float64}:
-0.707107 0.707107
0.707107 0.707107

Potential bug in elasticity tensor?

Dear developers,
many thanks for this great tool! I just started trying out ferrite and I am impressed by the scope and the simplicity. However, I just stumbled over one question, which might be a potential bug in the computation of the rank-4 elasticity tensor.

I wrote a simple unit test to validate the correctness and used the book of Bonet and Wood as reference (see code below). While for most entries (I,J,K,L) the results match my reference solutions, the ferrite results deviate for certain indices. In particular, the ferrite results seem not to be fully symmetric, even though they should be according to BonetWood, Equation 6.37. As an example, consider the entry ∂S∂C[1,3,2,3]. Due to symmetry, this entry should match ∂S∂C[1,3,3,2]. However, in the example stated below, this is not the case. Instead, the results read

∂S∂C[1,3,2,3] = 0
∂S∂C[1,3,3,2] = -30.258

whereas the reference solution D reads

D[1,3,2,3] = -15.129
D[1,3,3,2] = -15.129

So it seems as if ferrite "lumps" the two entries to one.

While I don't know whether this will actually cause a problem for hyper-elasticity, it is certainly surprising. Could you help me to understand whether I am using the code correctly?

Many thanks
Nils

using Tensors
using Ferrite
using Test

using Tensors: Tensor

# All references refer to
# Bonet, Wood, 2016, Nonlinear Solid Mechanics for Finite Element Analysis: Statics

struct NeoHooke
  μ::Float64
  λ::Float64
end

function strainEnergy_NH_logJ(C, mp::NeoHooke)
  μ = mp.μ
  λ = mp.λ
  Ic = tr(C)
  J = sqrt(det(C))
  return μ / 2 * (Ic - 3) - μ * log(J) + λ / 2 * log(J)^2 # BonetWood, Eq. 6.27
end

  
@testset "Potential Bug" begin

  μ = 123
  λ = 456
  material = NeoHooke(μ,λ)

  # Test setup: pure shear straining ------------------------------------------------------------
  g = 0.123
  F = Tensor{2, 3}([1, 0, 0, 
                    g, 1, 0, 
                    0, 0, 1])
  J = 1
  
  # analytical solution -------------------------------------------------------------------------
  C = transpose(F) ⋅ F                                                          # Equation (4.15)
  invC = inv(C)
  I = Tensor{4,3}((I,J,K,L) -> 0.5*(invC[I,K]*invC[J,L] + invC[I,L]*invC[J,K])) # Equation (6.36)
  D = λ * invC ⊗ invC + 2*(μ-λ*log(J))*I                                        # Equation (6.30)

  # numerical solution ------------------------------------------------------------------------
  strainEnergyPotential = c -> strainEnergy_NH_logJ(c, material)
  ∂²Ψ∂C², ∂Ψ∂C = Tensors.hessian(y -> strainEnergyPotential(y), C, :all)
  S = 2.0 * ∂Ψ∂C
  ∂S∂C = 4.0 * ∂²Ψ∂C²

  # assert ---------------------------------------------------------------------------------------
  @test D[1,3,2,3] ≈ D[1,3,3,2]         # symmetry of reference solution: works
  @test ∂S∂C[1,3,2,3] ≈ ∂S∂C[1,3,3,2]   # symmetry of numerical solution: fails
  @test ∂S∂C[1,3,2,3] ≈ D[1,3,2,3]      # comparison between numerical and reference solution: fails

end

Problem with gradient performance when using anonymous functions

Hi,
I'm currently trying to calculate the gradient of a function with several input arguments (computing a laminate material solution) and I'm getting some weird performance results that I don't understand.

First my source code (using both Tensors.gradient and ForwardDiff.gradient functions):

using Tensors
using ForwardDiff
using LinearAlgebra


# need to simplify this operation into linear algebra operators
δ(i, j) = i == j ? 1.0 : 0.0
P(n) = SymmetricTensor{4,2}(
    (i, j, k, l) -> 1 / 2 * (n[i] * δ(j, k) * n[l]
                             + n[i] * δ(j, l) * n[k]
                             + n[j] * δ(i, k) * n[l]
                             + n[j] * δ(i, l) * n[k])
                    -
                    n[i] * n[j] * n[k] * n[l]
)

# inputs 
Cs = [rand(SymmetricTensor{4,2}) for _ in 1:3];
v_fractions = normalize(rand(3));

n = randn(Tensor{1,2});
I_tensor = one(SymmetricTensor{4,2});

function laminate_1(Cs::Vector{SymmetricTensor{4,2,Float64,9}}, v_fractions::Vector{Float64}, n)
    # create the P tensor with the normalized n
    P_ = P(normalize(n))
    # upper bound for largest eigenvalue (via frobenius norm)
    λ = maximum(norm.(Cs))
    rhs = zero(SymmetricTensor{4,2,Float64})
    for i in 1:3
        rhs += v_fractions[i] * inv(P_ + λ * inv(Cs[i] - λ * I_tensor))
    end
    return norm(inv(1 / λ * (inv(rhs) - P_)) + λ * I_tensor)
end

@time Tensors.gradient(n -> laminate_1(Cs, v_fractions, n), n)
@time ForwardDiff.gradient(n -> laminate_1(Cs, v_fractions, n), n)

function laminate_2(n)
    # create the P tensor with the normalized n
    P_ = P(normalize(n))
    # upper bound for largest eigenvalue (via frobenius norm)
    λ = maximum(norm.(Cs))
    rhs = zero(SymmetricTensor{4,2,Float64})
    for i in 1:3
        rhs += v_fractions[i] * inv(P_ + λ * inv(Cs[i] - λ * I_tensor))
    end
    return norm(inv(1 / λ * (inv(rhs) - P_)) + λ * I_tensor)
end

@time Tensors.gradient(laminate_2, n)
@time ForwardDiff.gradient(laminate_2, n)

The thing is, I get much better performance (basically fewer allocations) when using the second (unary) function with global input parameters, and I don't understand why. Does the use of an anonymous function also track some information for Cs and v_fractions? Should I define the unary function in another way?
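A sketch of the usual first check for this kind of result (an assumption about the cause, not a confirmed diagnosis): Cs, v_fractions and I_tensor are non-constant globals, so either declare them const or wrap everything in a function so all captured variables are local arguments, and benchmark with interpolation:

using BenchmarkTools

# sketch: wrap everything in a function so the captured variables are local and concretely typed
grad_laminate(Cs, v_fractions, n) = Tensors.gradient(x -> laminate_1(Cs, v_fractions, x), n)

@btime grad_laminate($Cs, $v_fractions, $n);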

Deviatoric and volumetric part of second order tensor for dimension not equal to 3

Hi,
I just stumbled over an issue which might lead to unexpected results for second order tensors with dimensions not equal to $3$.

To the best of my knowledge, the deviatoric part for a second order tensor $X \in \mathbb{R}^{d \times d}$ is defined by
$$\mathrm{dev}_d (X) = X - \frac{1}{d} \mathrm{tr} (X) I_d$$ with $I_d \in \mathbb{R}^{d\times d}$ denoting the identity tensor. My guess is that the denominator $3$ in the functions dev and mean could be changed to dim in order to make them applicable for general dimensions.

Tensors.jl/src/math_ops.jl

Lines 297 to 306 in 258540f

@inline function dev(S::SecondOrderTensor)
    Tt = get_base(typeof(S))
    trace = tr(S) / 3
    Tt(
        @inline function(i, j)
            @inbounds v = i == j ? S[i,j] - trace : S[i,j]
            v
        end
    )
end

Statistics.mean(S::SecondOrderTensor) = tr(S) / 3

If interested I could work on a PR.
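A sketch of what the proposed dim-generic version could look like (the _generic suffix marks these as hypothetical names; Tensors.get_base and Tensors.SecondOrderTensor are the internal names used by the quoted code, so this is an assumption about internals rather than a drop-in patch):

using Tensors

# hypothetical dim-generic variants of dev and mean from the quoted snippet
@inline function dev_generic(S::Tensors.SecondOrderTensor{dim}) where {dim}
    Tt = Tensors.get_base(typeof(S))
    trace = tr(S) / dim
    return Tt((i, j) -> i == j ? S[i, j] - trace : S[i, j])
end

mean_generic(S::Tensors.SecondOrderTensor{dim}) where {dim} = tr(S) / dim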

General transformation to voigt giving correct results for AD

Context: Taking the derivative wrt a symmetric tensor in Voigt notation (e.g. using the ForwardDiff package). This is typically required when solving a non-linear tensorial equation system, e.g. R_(X_(F), F)=0_ where X_=[x1, x2] and 0_=[0, 0] (_ indicate voigt notation and bold second order tensors), as noted in the documentation. Automatically (or numerically/analytically) differentiating R_ wrt X_ works fine for solving this problem.

However, if one needs to use the derivative for further expressions the result may become incorrect. Here are two use case examples:

Use case 1

Directly extracting the derivative from e.g. the inverse. Say that R_=[R1, R2] and we want to use the fourth order tensor that is part of (∂R_∂X_)^(-1) corresponding to x2 and R2 (in voigt form the lower right 6x6 matrix of (∂R_∂X_)^(-1)). The following code example shows that this will in fact not give the desired result, as using voigt notation will not return the correct "symmetric" identity tensor for a simple function returning the input argument. Note, however, that the mandel notation works:

using Tensors, ForwardDiff

dim = 2
R(X::AbstractTensor) = X
R_voigt(X::AbstractVector) = tovoigt(R(fromvoigt(SymmetricTensor{2,dim}, X)))
R_mandel(X::AbstractVector) = tomandel(R(frommandel(SymmetricTensor{2,dim}, X)))
A = rand(SymmetricTensor{2,dim})
println("∂R∂X using Tensors.jl AD with correct seeds")
display(tovoigt(gradient(R, A)))
println("∂R∂X calculated using voigt vectors with ForwardDiff")
display(ForwardDiff.jacobian(R_voigt, tovoigt(A)))
println("∂R∂X calculated using mandel vectors with ForwardDiff (shown in voigt format)")
∂R∂X_mandel = ForwardDiff.jacobian(R_mandel, tomandel(A))
# Convert into voigt for display
∂R∂X_m2t2v = tovoigt(frommandel(SymmetricTensor{4,dim}, ∂R∂X_mandel))
display(∂R∂X_m2t2v)

Output

∂R∂X using Tensors.jl AD with correct seeds
3×3 Matrix{Float64}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  0.5
∂R∂X calculated using voigt vectors with ForwardDiff
3×3 Matrix{Float64}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0
∂R∂X calculated using mandel vectors with ForwardDiff (shown in voigt format)
3×3 Matrix{Float64}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  0.5

This also works for a range of other functions as well, both with scalar and tensor outputs. Not tested for vector outputs as third order tensors are not supported.

Use case 2

Combining the derivative from "use case 1" with other derivatives that are obtained using forward diff.

E.g. when calculating ∂c∂X_ (∂R_∂X_)^(-1) ∂R_∂b, which is common in modeling plasticity, the result is incorrect. In code this becomes something like

∂R∂X = ForwardDiff.jacobian((X)->R(X,𝐛), X)
∂R∂b = ForwardDiff.jacobian((b)->R(X,fromvoigt(SymmetricTensor{2,3}, b)), tovoigt(𝐛))
∂c∂X = ForwardDiff.jacobian((X)->fromvoigt(SymmetricTensor{2,3}, calc_c(X)), X)
∂𝐜∂𝐛 = fromvoigt(SymmetricTensor{4,3}, ∂c∂X * (∂R∂X\∂R∂b))

Here, we probably have the issue from use case 1, but also that we do a double contraction as a matrix multiplication: ∂c∂X * (∂R∂X\∂R∂b) . While this is not the intended use for Tensors.jl, in this particular use case it is difficult to avoid it. However, also in this case, using the mandel instead of the voigt notation will result in the correct results.

Summary

So using mandel seems to be the solution to this problem. But the problem is that mandel is only supported for symmetric tensors, otherwise only the standard voigt transformation can be used. For full tensors, the voigt notation will give correct results. I therefore propose to have a transformation that will give the correct result in both these use cases, independently of whether the tensors are symmetric or full (i.e. use mandel if symmetric and voigt if full). Two options seem relevant:

  1. Define from/to mandel also for full tensors, in that case using the regular voigt (could result in misconceptions that a tensor will be symmetrized?)
  2. Create a new voigt-type (logical naming could be an issue)

implement custom gradient with multi-argument functions

From a Slack comment by @koehlerson: how to implement a custom gradient calculation for a multi-argument function.
It is a common case for autodiff, so it would be good to have a clear way of doing this.
The solution I can come up with now is

using Tensors
import ForwardDiff: Dual

# General setup for any function f(x, args...)
struct Foo{F,T<:Tuple} <: Function # <:Function optional
    f::F
    args::T
end
struct FooGrad{FT<:Foo} <: Function # <: Function required
    foo::FT
end

function (foo::Foo)(x)
    println("Foo with Any: ", typeof(x))  # To show that it works
    return foo.f(x, foo.args...)
end
function (foo::Foo)(x::AbstractTensor{<:Any,<:Any,<:Dual})
    println("Foo with Dual: ", typeof(x))  # To show that it works
    return Tensors._propagate_gradient(FooGrad(foo), x)
end
function (fg::FooGrad)(x)
    println("FooGrad: ", typeof(x)) # To show that it works
    return f_dfdx(fg.foo.f, x, fg.foo.args...)
end

# Specific example to setup for bar(x, a, b), must then also define f_dfdx(::typeof(bar), x, a, b):
bar(x, a, b) = norm(a*x)^b 
dbar_dx(x, a, b) = b*(a^b)*norm(x)^(b-2)*x
f_dfdx(::typeof(bar), args...) = (bar(args...), dbar_dx(args...))

# At the location in the code where the derivative will be calculated
t = rand(SymmetricTensor{2,3}); a = π; b = 2 # Typically inputs
foo = Foo(bar, (a, b))
gradient(foo, t) == dbar_dx(t, a, b)

But it is quite cumbersome, especially if only needed for one function, so a better method would be good.
(Tensors._propagate_gradient is renamed to propagate_gradient, exported, and documented in #181)

Slow otimesu/otimesl for symmetric tensors

otimesu and otimesl are more than 10 times slower for symmetric tensors than for full tensors.

x, xs = (rand(T{2,3}) for T in (Tensor, SymmetricTensor));

command                      time
@btime otimes($xs, $xs);     2.400 ns (0 allocations: 0 bytes)
@btime otimesu($xs, $xs);    249.734 ns (0 allocations: 0 bytes)
@btime otimesl($xs, $xs);    250.269 ns (0 allocations: 0 bytes)
@btime otimesl($x, $x);      11.512 ns (0 allocations: 0 bytes)
@btime otimesu($x, $x);      9.810 ns (0 allocations: 0 bytes)

Suggestion: Implement block like array

I'm writing code for return mapping in my research recently, and I thought it would be useful if we had a block-like array that can store tensors and scalars. Something like:

julia> A = one(SymmetricTensor{2,3});

julia> B = @block [A, 10.0]
  1.0
  1.0
  1.0
  0.0
  0.0
  0.0
 10.0

julia> isa(B, AbstractVector)
true

julia> @getblock B[1]
3×3 Tensors.SymmetricTensor{2,3,Float64,6}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0

julia> @getblock B[2]
10.0

julia> @setblock B[1] = 2A
  2.0
  2.0
  2.0
  0.0
  0.0
  0.0
 10.0

In the implementation, we could use mandel form for symmetric tensors. I'm not sure whether this block array should be in Tensors.jl or not, so I would like to ask your opinion.

Add a roadmap

It would be good to have a roadmap that outlines the direction of the package and what tasks are good to work on, to help out potential contributors.

Question: Co(ntra)variance, coordinates and pullbacks

Hi,

This seems like a cool package and the benchmarks blow me away. How can it be that much faster? :)

Anyway, I am looking into doing some research (still early stages) and am considering whether to use and contribute to this package, try to roll my own package from scratch or use this package as a basis for a new package. Or maybe there is another package more suitable for what I'm looking for?

From my personal perspective, the point of tensors is that they are coordinate independent and metric independent multilinear objects defined on smooth manifolds with both covariant (lower indices) and contravariant (upper indices) components. For example, a matrix is a once covariant and once contravariant tensor, vectors are once contravariant and covectors are once covariant. So the order of a tensor should not be a single integer, but something more like a tuple of booleans or something, e.g.

# v::Vector v^u
iscovariant(v) = (false,)
iscovariant(v') = (true,)
# m::Matrix m^u_v
iscovariant(m) = (false,true)
iscovariant(m') = (false,true)
 # t::Tensor T_{uv}^w
iscovariant(t) = (true,true,false)

I wish there was some way to express a tensor as a multilinear object on a smooth manifold without resorting to coordinates, but it is not obvious to me how to do that. So this package apparently starts by assuming a coordinate system and then encodes the tensor into an array of numbers. That makes sense and is fine, but I think that choice should probably be made more explicit. If you assume all tensors are Cartesian tensors with Euclidean metric then raising and lowering indices are trivial and the paragraph above is moot. If this package is intended to support only Cartesian tensors, that would be perfectly fine, of course, but maybe then it should be called CartesianTensors.jl or at least the README should make that clear?

Is there any desire to support mixed tensors with this package?

Also, since tensors are represented as numbers in an array implying a chosen coordinate system, I think the chosen coordinate system should be part of the tensor definition. For example, you cannot add a tensor expressed in Cartesian coordinates to a tensor expressed in spherical coordinates without first transforming them to the same coordinate system.

It would be nice if a tensor package (this one or maybe one built on top of this one) had a concept of coordinate system, e.g. Coordinates then we could incorporate that into the definition of the tensor, probably as a parameter, Tensor{order,dim,T,coord}.

Finally, one of the most important concepts involving tensors is pullback / pushforward, which generalize coordinate transforms to maps between different spaces.

Given smooth manifolds M and N, a smooth map F: M -> N, a covector a on N and a vector v on M, it would be great if we could express something like

<F^*(a),v> = <a,F_*(v)>

or in pseudocode

pullback(a)(v) == a(pushforward(v))

Anyway, these are the kinds of things I think about and would hope to have in a tensor package. Do you have any thoughts? Is there anything above that should / could be incorporated into this package? Would it be better to leave this package as it is and build another package on top that has things like mixed tensors, coordinate maps, pullback / pushforward? Or do something completely different?

Having said all that, I'd like to end by saying that if you are dealing with Cartesian tensors, this package seems fantastic and it is clear that a lot of thought has gone into performance 🙌

Support for Abaqus STRESS and STRAN conversion to Tensor

I am making an Abaqus UMAT package: https://github.com/JuliaFEM/UMAT.jl and I will need to convert stress and strain tensors. According to the Tensors.jl documentation the tovoigt index order is [11, 22, 33, 23, 13, 12, 32, 31, 21], while according to the Abaqus Analysis User's Guide, 1.2.2 "Convention used for stress and strain components", the index order is [11, 22, 33, 12, 13, 23]. Could we add methods toabaqus and fromabaqus?
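A rough sketch of such conversions (toabaqus and fromabaqus are hypothetical names), assuming the symmetric Voigt order [11, 22, 33, 23, 13, 12] used by Tensors.jl and the Abaqus order [11, 22, 33, 12, 13, 23]; note that Abaqus strains additionally use engineering shear components (a factor 2 on the off-diagonal entries), which is not handled here:

using Tensors

# hypothetical helpers; permutation between the two orderings (it is its own inverse)
const ABAQUS_ORDER = [1, 2, 3, 6, 5, 4]

toabaqus(A::SymmetricTensor{2,3}) = tovoigt(A)[ABAQUS_ORDER]
fromabaqus(v::AbstractVector) = fromvoigt(SymmetricTensor{2,3}, v[ABAQUS_ORDER])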

Change of component type of AbstractTensor for symbolic calculations

I have been trying to use the library in the framework of symbolic calculations. Unfortunately it doesn't work since the type of components of AbstractTensor is restricted to Real

abstract type AbstractTensor{order, dim, T <: Real} <: AbstractArray{T, order} end

However, changing T <: Real into T <: Number seems to solve the problem insofar as it allows Tensors.jl to be used together with SymPy.jl. Would it be relevant to make this change in the official repository?
Thank you very much.

Construct 2nd order tensor from vectors?

Quite often, I find myself doing something like

Q = Tensor{2,3}((e1..., e2..., e3...))

or

e1, e2, e3 = Q[:,1], Q[:,2], Q[:,3]

Would it make sense to implement these conversions to Tensors.jl?
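A sketch of what such helpers could look like (hypothetical names, not an existing API), matching the column-major component order used above:

using Tensors

# hypothetical helper names
columns_to_tensor(e1::Vec{3}, e2::Vec{3}, e3::Vec{3}) = Tensor{2,3}((e1..., e2..., e3...))
tensor_to_columns(Q::Tensor{2,3}) = (Q[:,1], Q[:,2], Q[:,3])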

assert if vec dim matches data

I had some trouble debugging my code because the stacktrace of the following error only provides the top-level function. It would be nice to throw an error if the data does not match the dim of the Vec:

julia> test = Vec{3}((0.0,))
ERROR: StackOverflowError:
Stacktrace:
 [1] Tensor{1,3,T,M} where M where T(::Tuple{Float64}) at /home/mkoehler/.julia/packages/Tensors/mZ0pw/src/constructors.jl:62 (repeats 79984 times)

This is not a problematic scenario. However, something like this is problematic:

julia> function some_nested_func()
           return Vec{3}((0.0,))
       end
some_nested_func (generic function with 1 method)

julia> function top_level_func()
           return some_nested_func()
       end
top_level_func (generic function with 1 method)

julia> top_level_func()
ERROR: StackOverflowError:
Stacktrace:
 [1] Tensor{1,3,T,M} where M where T(::Tuple{Float64}) at /home/mkoehler/.julia/packages/Tensors/mZ0pw/src/constructors.jl:62 (repeats 79984 times)

because you don't know where to start debugging.

I hope I'll understand the source code of Tensors soon, so that I can contribute something. As of now, I can just report things.

Inserting a known derivative: function needs to exclude dual numbers

I ran into the following problem today:

function tensor_exp(A::SymmetricTensor{2})
    E = eigen(A)
    A_exp = zero(A)
    for (i, λ) in enumerate(E.values)
        N = E.vectors[:,i]
        A_exp += exp(λ) * N ⊗ N
    end
    return A_exp
end

tensor_exp_gradient(A::AbstractTensor{2}) = tensor_exp(A), tensor_exp(A)

@implement_gradient tensor_exp tensor_exp_gradient
julia> A = rand(SymmetricTensor{2,3}) + one(SymmetricTensor{2,3})
julia> gradient(tensor_exp, A)

ERROR: MethodError: no method matching precision(::Type{ForwardDiff.Dual{ForwardDiff.Tag{typeof(tensor_exp), SymmetricTensor{2, 3, Float64, 6}}, Float64, 6}})
Closest candidates are:
  precision(::Type{Float16}) at C:\Users\auth\AppData\Local\Programs\Julia-1.7.2\share\julia\base\float.jl:686
  precision(::Type{Float32}) at C:\Users\auth\AppData\Local\Programs\Julia-1.7.2\share\julia\base\float.jl:687
  precision(::Type{Float64}) at C:\Users\auth\AppData\Local\Programs\Julia-1.7.2\share\julia\base\float.jl:688
  ...
Stacktrace:
 [1] eigen(R::SymmetricTensor{2, 3, ForwardDiff.Dual{ForwardDiff.Tag{typeof(tensor_exp), SymmetricTensor{2, 3, Float64, 6}}, Float64, 6}, 6})
   @ Tensors C:\Users\auth\.julia\packages\Tensors\fjqpn\src\eigen.jl:137
 [2] tensor_exp(A::SymmetricTensor{2, 3, ForwardDiff.Dual{ForwardDiff.Tag{typeof(tensor_exp), SymmetricTensor{2, 3, Float64, 6}}, Float64, 6}, 6})
   @ Main c:\Users\auth\OneDrive - Chalmers\Documents\courses\Phd courses\Computational nonlinear mechanics\computer assignments\cass3\tensor_log_exp.jl:29
 [3] gradient(f::typeof(tensor_exp), v::SymmetricTensor{2, 3, Float64, 6})
   @ Tensors C:\Users\auth\.julia\packages\Tensors\fjqpn\src\automatic_differentiation.jl:455
 [4] top-level scope
   @ REPL[6]:1

The problem here is that we run into the tensor_exp function with dual numbers (instead of using tensor_exp_gradient).
Looking at the methods of tensor_exp, we can see why:

julia> methods(tensor_exp)
# 2 methods for generic function "tensor_exp":
[1] tensor_exp(A::SymmetricTensor{2}) in Main at c:\Users\auth\OneDrive - Chalmers\Documents\courses\Phd courses\Computational nonlinear mechanics\computer assignments\cass3\tensor_log_exp.jl:28
[2] tensor_exp(x::Union{ForwardDiff.Dual, AbstractTensor{<:Any, <:Any, <:ForwardDiff.Dual}}) in Main at C:\Users\auth\.julia\packages\Tensors\fjqpn\src\automatic_differentiation.jl:253

The original tensor_exp is more specific than the one defined for Dual numbers by @implement_gradient. The solution could be to not allow dual numbers in the original function at all, e.g. by

function tensor_exp(A::SymmetricTensor{2,dim,Float64}) where dim

(This is of course not so nice if one doesn't own this function. )

Perhaps there is a better solution than specifying the number type of the Tensor. In case there isn't we should probably add a hint about it to the docs.
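One possible variant of that restriction (an assumption, not a documented recommendation): constrain the element type to AbstractFloat, which excludes ForwardDiff.Dual while still allowing Float32/Float64, so the method generated by @implement_gradient stays the most specific one for Dual-valued tensors:

using Tensors

# sketch: restrict the element type so the Dual method from @implement_gradient wins dispatch
function tensor_exp(A::SymmetricTensor{2, dim, T}) where {dim, T <: AbstractFloat}
    E = eigen(A)
    A_exp = zero(A)
    for (i, λ) in enumerate(E.values)
        N = E.vectors[:, i]
        A_exp += exp(λ) * N ⊗ N
    end
    return A_exp
end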

Possible bug when computing derivatives of principal directions with AD

We found that there is some inconsistency when trying to compute the derivative of the eigvecs dyadic product wrt. a symmetric tensor. To support this claim a comparison with an analytical expression for the derivative is used:

If we have a second order symmetric tensor $\bm{A}$ whose principal directions (eigenvectors) are $\bm{l}_1$, $\bm{l}_2$, $\bm{l}_3$, we would like to compute the derivative $\dfrac{\partial \bm{l}_i \otimes \bm{l}_i}{\partial \bm{A}}$.

Code to compare:

using ForwardDiff
Base.exponent(VD::ForwardDiff.Dual) = exponent(VD.value)
Base.precision(::Type{TT}) where{TT<:ForwardDiff.Dual{T,V}} where{T,V} = precision(V)

function M11f(A)
    l = eigvecs(A)
    return symmetric(l[:,1] ⊗ l[:,1])
end

function dM11fAnalytic(A)
    λ = eigvals(A)
    l = eigvecs(A)
    𝕀s = one(SymmetricTensor{4,3})
    M11 = symmetric(l[:,1] ⊗ l[:,1])
    M22 = symmetric(l[:,2] ⊗ l[:,2])
    M33 = symmetric(l[:,3] ⊗ l[:,3])

    ∂M11∂A = (otimesu(M11,M22) + otimesu(M22,M11))/(λ[1]-λ[2]) + (otimesu(M11,M33) + otimesu(M33,M11))/(λ[1]-λ[3])

    return 𝕀s ⊡ (∂M11∂A ⊡ 𝕀s)
end

A0 = SymmetricTensor{2,3}((1.2, 0., 0., 0.9, 0., 0.75))

gradient(M11f,A0)
dM11fAnalytic(A0)

Results:

julia> dM11fAnalytic(A0)[:,:,1,3]
3×3 Matrix{Float64}:
 -0.0      -0.0  -1.11111
 -0.0      -0.0  -0.0
 -1.11111  -0.0  -0.0

julia> gradient(M11f,A0)[:,:,1,3]
3×3 Matrix{Float64}:
 0.0  0.0  0.0
 0.0  0.0  0.0
 0.0  0.0  0.0

julia> dM11fAnalytic(A0)[:,:,2,3]
3×3 Matrix{Float64}:
 -0.0  -0.0      -0.0
 -0.0  -0.0      -3.33333
 -0.0  -3.33333  -0.0

julia> gradient(M11f,A0)[:,:,2,3]
3×3 Matrix{Float64}:
 0.0   0.0       0.0
 0.0  -0.0      -3.33333
 0.0  -3.33333   0.0

Additional checks:

ΔA = rand(SymmetricTensor{2,3})*1.e-6;
Danalytic = dM11fAnalytic(A0);
D_ad = gradient(M11f,A0);
V_Δ = M11f(A0+ΔA);
V = M11f(A0);
julia> V_Δ ≈ V + Danalytic ⊡ ΔA
true
julia> V_Δ ≈ V + D_ad ⊡ ΔA
false

Get rid of dimension restriction?

I was recommending this package in a fantastic WIP JuliaManifolds/Manifolds.jl#12, but forgot that there is a restriction on dim. A brief check seems to indicate that one could in principle get rid of that restriction, since many operators are written in terms of loops i = 1:dim and would work right away. It seems that only a few functions (like inv) are currently written assuming dim in (1, 2, 3). Would you consider PRs to the effect of getting rid of the dimension restriction? One could let certain continuum mechanics-related functions be restricted to dim in (1, 2, 3) and throw errors otherwise?

Question on building symmetric tensors

A general anisotropic linear elastic material has 21 stiffness coefficients, which in Tensors.jl may correspond to something like: majorsymmetric(SymmetricTensor{4,3,Float64}).

My question is, how can we build tensors with higher symmetries using this package?
For example isotropic (2 coefficients), transversely isotropic (5 coefficients), orthotropic (9 coefficients), etc.

Just to clarify, the question is not about how we can write the coefficients themselves but whether we can define specific tensor constructors for the higher symmetries mentioned.

Thank you.

Suggestion: add rotate for 2nd and 4th rank tensors

Currently rotate is defined for vectors only, but it would be nice to provide a way to rotate the tensor. The following would be very convenient:

  • Construct a Rotation matrix, given an axis vector and a rotation angle theta
  • Given a Rotation matrix (alternatively, an axis vector and a rotation angle) and a 2nd or 4th order tensor, return the rotated 2nd or 4th order tensor.

If this sounds good, I'd be happy to work on and submit an implementation.
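For reference, a minimal sketch of what the rotations would compute (rotate2 and rotate4 are hypothetical names; R is assumed to be a proper orthogonal second order tensor constructed elsewhere, e.g. from an axis and an angle):

using Tensors

# hypothetical helpers; R assumed proper orthogonal
rotate2(A::Tensor{2,3}, R::Tensor{2,3}) = R ⋅ A ⋅ R'

rotate4(C::Tensor{4,3}, R::Tensor{2,3}) =
    Tensor{4,3}((i, j, k, l) -> sum(R[i,a] * R[j,b] * R[k,c] * R[l,d] * C[a,b,c,d]
                                    for a in 1:3, b in 1:3, c in 1:3, d in 1:3))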

Support + and - operators for a Tensor and a literal value

I noticed that not all basic binary operators are defined for a Tensor and a literal value. I can do this:

using Tensors

T = Tensor{1, 3, Float64}(rand(3))
3-element Tensor{1,3,Float64,3}:
 0.769345151859792  
 0.49053491380559344
 0.5374616581398959

T * 5
3-element Tensor{1,3,Float64,3}:
 3.84672575929896  
 2.452674569027967 
 2.6873082906994794

T / 5
3-element Tensor{1,3,Float64,3}:
 0.1538690303719584 
 0.09810698276111869
 0.10749233162797918

But I cannot do this:

T + 1
ERROR: MethodError: no method matching +(::Tensor{1,3,Float64,3}, ::Int64)

or this:

T - 1
ERROR: MethodError: no method matching -(::Tensor{1,3,Float64,3}, ::Int64)

Symmetric 4th order tensor - minor or major symmetry?

Hi,

I was going through JuAFEM and my curiosity led me to this package to understand the parameter M in Tensor{order, dim, T, M}. It seems that the data is stored in a Tensor object as a tuple, rather than a static array, to save memory with symmetric tensors I suppose, and M is the number of elements in the data tuple.

So for a 4th order symmetric tensor, do you mean minor symmetry or major symmetry? Take for example the 4th order stiffness tensor, which has both minor and major symmetry, that is $C_{ijkm} = C_{jikm} = C_{ijmk} = C_{kmij}$. From the following piece of code starting at line 57 in the file Tensors.jl, it seems that you are using minor symmetry only. Is this intentional or a bug?

@pure function n_components(::Type{SymmetricTensor{4, dim}}) where {dim}
    n = n_components(SymmetricTensor{2, dim})
    return n*n
end

To use major symmetry as well, it would be something like this:

@pure function n_components(::Type{SymmetricTensor{4, dim}}) where {dim}
    n = n_components(SymmetricTensor{2, dim})
    return n_components(SymmetricTensor{2, n})
end
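As a quick component count for dim = 3: minor symmetry alone leaves 6 independent (i, j) pairs, so the quoted definition stores 6 × 6 = 36 components, whereas adding major symmetry would reduce this to 6 × 7 / 2 = 21, matching the 21 stiffness coefficients of a general anisotropic elasticity tensor mentioned in the issue above.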

Triggers a Julia assertion on nightly

It would be good to reduce this and report it upstream, if it hasn't been already.

cat Make.user && JULIA_PKGDIR=$PWD/../jlpkgtmp ./julia -e 'versioninfo(); Pkg.init(); Pkg.add("Tensors"); Pkg.test("Tensors")'
override LLVM_ASSERTIONS = 1
override MARCH = x86-64
CMAKE = /home/tkelman2/Julia/julia-0.6/deps/scratch/cmake-3.7.1-Linux-x86_64/bin/cmake
Julia Version 0.6.0-pre.beta.415
Commit 19b3f50* (2017-04-29 09:02 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, haswell)
INFO: Initializing package repository /home/tkelman2/Julia/jlpkgtmp/v0.6
INFO: Cloning METADATA from https://github.com/JuliaLang/METADATA.jl
INFO: Cloning cache of Calculus from https://github.com/johnmyleswhite/Calculus.jl.git
INFO: Cloning cache of Compat from https://github.com/JuliaLang/Compat.jl.git
INFO: Cloning cache of DiffBase from https://github.com/JuliaDiff/DiffBase.jl.git
INFO: Cloning cache of ForwardDiff from https://github.com/JuliaDiff/ForwardDiff.jl.git
INFO: Cloning cache of NaNMath from https://github.com/mlubin/NaNMath.jl.git
INFO: Cloning cache of SIMD from https://github.com/eschnett/SIMD.jl.git
INFO: Cloning cache of SpecialFunctions from https://github.com/JuliaMath/SpecialFunctions.jl.git
INFO: Cloning cache of Tensors from https://github.com/KristofferC/Tensors.jl.git
INFO: Installing Calculus v0.2.2
INFO: Installing Compat v0.24.0
INFO: Installing DiffBase v0.1.0
INFO: Installing ForwardDiff v0.4.2
INFO: Installing NaNMath v0.2.4
INFO: Installing SIMD v0.2.0
INFO: Installing SpecialFunctions v0.1.1
INFO: Installing Tensors v0.6.1
INFO: Package database updated
INFO: Computing test dependencies for Tensors...
INFO: Cloning cache of Crayons from https://github.com/KristofferC/Crayons.jl.git
INFO: Cloning cache of DocStringExtensions from https://github.com/JuliaDocs/DocStringExtensions.jl.git
INFO: Cloning cache of Documenter from https://github.com/JuliaDocs/Documenter.jl.git
INFO: Cloning cache of TimerOutputs from https://github.com/KristofferC/TimerOutputs.jl.git
INFO: Installing Crayons v0.2.0
INFO: Installing DocStringExtensions v0.3.3
INFO: Installing Documenter v0.10.0
INFO: Installing TimerOutputs v0.2.6
INFO: Testing Tensors
Test Summary: | Pass  Total
constructors  |  333    333
Test Summary: | Pass  Total
diagm, one    |  762    762
Test Summary: | Pass  Total
base vectors  |   45     45
julia: /home/tkelman2/Julia/julia-0.6/src/cgutils.cpp:527: llvm::Type* julia_struct_to_llvm(jl_value_t*, jl_unionall_t*, bool*): Assertion `llvm_alignment == julia_alignment' failed.

signal (6): Aborted
while loading /home/tkelman2/Julia/jlpkgtmp/v0.6/Tensors/test/test_misc.jl, in expression starting on line 123
raise at /build/eglibc-oGUzwX/eglibc-2.19/signal/../nptl/sysdeps/unix/sysv/linux/raise.c:56
abort at /build/eglibc-oGUzwX/eglibc-2.19/stdlib/abort.c:89
__assert_fail_base at /build/eglibc-oGUzwX/eglibc-2.19/assert/assert.c:92
__assert_fail at /build/eglibc-oGUzwX/eglibc-2.19/assert/assert.c:101
julia_struct_to_llvm at /home/tkelman2/Julia/julia-0.6/src/cgutils.cpp:527
julia_type_to_llvm at /home/tkelman2/Julia/julia-0.6/src/cgutils.cpp:388
julia_struct_to_llvm at /home/tkelman2/Julia/julia-0.6/src/cgutils.cpp:484
julia_type_to_llvm at /home/tkelman2/Julia/julia-0.6/src/cgutils.cpp:388
mark_julia_type at /home/tkelman2/Julia/julia-0.6/src/codegen.cpp:774
emit_function at /home/tkelman2/Julia/julia-0.6/src/codegen.cpp:5244
jl_compile_linfo at /home/tkelman2/Julia/julia-0.6/src/codegen.cpp:1256
jl_compile_for_dispatch at /home/tkelman2/Julia/julia-0.6/src/gf.c:1672
jl_compile_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:305 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:352 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
jl_apply at /home/tkelman2/Julia/julia-0.6/src/julia.h:1422 [inlined]
jl_f__apply at /home/tkelman2/Julia/julia-0.6/src/builtins.c:426
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
do_call at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:75
eval at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:242
eval_body at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:546
eval_body at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:592
jl_interpret_toplevel_thunk at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:695
jl_toplevel_eval_flex at /home/tkelman2/Julia/julia-0.6/src/toplevel.c:590 [inlined]
jl_toplevel_eval at /home/tkelman2/Julia/julia-0.6/src/toplevel.c:598
jl_toplevel_eval_in at /home/tkelman2/Julia/julia-0.6/src/builtins.c:496
macro expansion at /home/tkelman2/Julia/jlpkgtmp/v0.6/Tensors/test/test_misc.jl:119 [inlined]
macro expansion at ./test.jl:853 [inlined]
macro expansion at /home/tkelman2/Julia/jlpkgtmp/v0.6/Tensors/test/runtests.jl:9 [inlined]
macro expansion at /home/tkelman2/Julia/jlpkgtmp/v0.6/TimerOutputs/src/TimerOutput.jl:128 [inlined]
macro expansion at /home/tkelman2/Julia/jlpkgtmp/v0.6/Tensors/test/runtests.jl:8 [inlined]
anonymous at ./<missing> (unknown line)
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_toplevel_eval_flex at /home/tkelman2/Julia/julia-0.6/src/toplevel.c:587
jl_parse_eval_all at /home/tkelman2/Julia/julia-0.6/src/ast.c:873
jl_load at /home/tkelman2/Julia/julia-0.6/src/toplevel.c:614
include_from_node1 at ./loading.jl:539
unknown function (ip: 0x7fe660555852)
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
include at ./sysimg.jl:14
unknown function (ip: 0x7fe67c5e944b)
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
do_call at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:75
eval at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:242
jl_interpret_toplevel_expr at /home/tkelman2/Julia/julia-0.6/src/interpreter.c:34
jl_toplevel_eval_flex at /home/tkelman2/Julia/julia-0.6/src/toplevel.c:575
jl_parse_eval_all at /home/tkelman2/Julia/julia-0.6/src/ast.c:873
jl_load at /home/tkelman2/Julia/julia-0.6/src/toplevel.c:614
include_from_node1 at ./loading.jl:539
unknown function (ip: 0x7fe67c7544eb)
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
include at ./sysimg.jl:14
unknown function (ip: 0x7fe67c5e944b)
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
process_options at ./client.jl:305
_start at ./client.jl:371
unknown function (ip: 0x7fe67c75fc48)
jl_call_fptr_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:337 [inlined]
jl_call_method_internal at /home/tkelman2/Julia/julia-0.6/src/julia_internal.h:356 [inlined]
jl_apply_generic at /home/tkelman2/Julia/julia-0.6/src/gf.c:1930
jl_apply at /home/tkelman2/Julia/julia-0.6/ui/../src/julia.h:1422 [inlined]
true_main at /home/tkelman2/Julia/julia-0.6/ui/repl.c:127
main at /home/tkelman2/Julia/julia-0.6/ui/repl.c:264
__libc_start_main at /build/eglibc-oGUzwX/eglibc-2.19/csu/libc-start.c:287
unknown function (ip: 0x4015fa)
Allocations: 16955857 (Pool: 16951856; Big: 4001); GC: 37
====================================[ ERROR: Tensors ]====================================

failed process: Process(`/home/tkelman2/Julia/julia-0.6/usr/bin/julia -Cx86-64 -J/home/tkelman2/Julia/julia-0.6/usr/lib/julia/sys.so --compile=yes --depwarn=yes --check-bounds=yes --code-coverage=none --color=no --compilecache=yes /home/tkelman2/Julia/jlpkgtmp/v0.6/Tensors/test/runtests.jl`, ProcessSignaled(6)) [0]

==========================================================================================
INFO: Removing Crayons v0.2.0
INFO: Removing DocStringExtensions v0.3.3
INFO: Removing Documenter v0.10.0
INFO: Removing TimerOutputs v0.2.6
ERROR: Tensors had test errors

also happens on master of the package

grad, div, curl definitions

In the literature, there are different definitions of divergence and curl for second order and higher tensor fields. With the introduction of 3rd order Tensors (#205), we need to define this clearly. This issue is meant to collect an overview of the different sources, with the goal of deciding which definition should be used.

Let's denote a general second-order tensor as $\boldsymbol{S}$ for the discussion. Note that we always assume an orthonormal, right-handed Cartesian coordinate system.

Just comment below the additional definitions and references, and I'll try to keep the tables updated (ping me on Slack if I forget)

Gradient

AFAIK, this is not problematic (correct me if I'm wrong). To my knowledge, there are just different notations (which can be confusing in itself), i.e.
$$\mathrm{grad}(\boldsymbol{S}) = \nabla \boldsymbol{S} = \boldsymbol{S} \otimes \nabla = \frac{\partial S_{ij}}{\partial x_k} \boldsymbol{e}_i\otimes\boldsymbol{e}_j\otimes\boldsymbol{e}_k$$

Divergence

| Tensor form | Index form | Sources | Comment |
| --- | --- | --- | --- |
| $\nabla \cdot \boldsymbol{S}$ | $d_i = \frac{\partial S_{ji}}{\partial x_j}$ | [1] | |
| $\boldsymbol{S} \cdot \nabla$ | $d_i = \frac{\partial S_{ij}}{\partial x_j}$ | [2] (2.134), [3] (2.3.11), [4] (2.112) | Common in mechanics? |

Curl

Here it is important that our definition fulfills that the curl of a gradient field vanishes, $\mathrm{curl}(\mathrm{grad}(\,\cdot\,))=\boldsymbol{0}$. There exist definitions in the literature that don't. As a precursor, we haven't defined the cross product for 2nd-order tensors, so for the discussion, let's define the cross product with a vector $\boldsymbol{v}$ as

$$\begin{align*} \boldsymbol{S}\times\boldsymbol{v} &= S_{ij} v_k \boldsymbol{e}_i \otimes \boldsymbol{e}_j \times \boldsymbol{e}_k = S_{ij} v_k \boldsymbol{e}_i \otimes [\varepsilon_{jkm} \boldsymbol{e}_m] = S_{ij} v_k \varepsilon_{jkm} \boldsymbol{e}_i \otimes \boldsymbol{e}_m \\\ \boldsymbol{v}\times \boldsymbol{S} &= v_i S_{jk} \boldsymbol{e}_i \times \boldsymbol{e}_j \otimes \boldsymbol{e}_k = v_i S_{jk} [\varepsilon_{ijm} \boldsymbol{e}_m] \otimes \boldsymbol{e}_k = v_i S_{jk} \varepsilon_{ijm} \boldsymbol{e}_m \otimes \boldsymbol{e}_k \end{align*}$$
| Tensor form | Index form | Sources | Comment |
| --- | --- | --- | --- |
| $- \boldsymbol{S} \times \nabla$ | $d_{ij} = \varepsilon_{opj}\frac{\partial S_{ip}}{\partial x_o}$ | [3] (2.3.18) | |
| $\nabla \times \boldsymbol{S}$ | $d_{ij} = \varepsilon_{opi}\frac{\partial S_{jp}}{\partial x_o}$ | [1] | From the definition of the cross-product, this should be $\nabla \times \boldsymbol{S}^\mathrm{T}$ |

where $\varepsilon_{ijk}$ is the Levi-Civita symbol.

Sources

[1] https://en.wikipedia.org/wiki/Tensor_derivative_(continuum_mechanics)
[2] Bonet and Wood (2008)
[3] Rubin (2000)
[4] Itskov (2015)

`symmetric` breaks automatic differentiation

julia> Isym = one(SymmetricTensor{4,3});

julia> gradient(symmetric, rand(Tensor{2,3})) ≈ Isym
false

julia> gradient(symmetric, rand(SymmetricTensor{2,3})) ≈ Isym
true

julia> gradient(x -> (x+x')/2, rand(Tensor{2,3})) ≈ Isym
true

julia> gradient(x -> (x+x')/2, rand(SymmetricTensor{2,3})) ≈ Isym
true

This is because symmetric changes the type of the argument from Tensor to SymmetricTensor, and AD in Tensors doesn't support this change in _extract_gradient. I'm not sure whether this is really a problem in this package. Should I just handle this carefully myself?

Julia Crashes if typeof(dim) = Int32

I'm trying to use the gmsh api to convert nodes to JuAFEM Nodes and I noticed (at least for me) a somewhat strange behavior

Vec{2}((1,2)) works, since typeof(2) = Int64.

However, gmsh.model.getDimension() returns an Int32 for whatever reason. So, consider this code snippet:

dim = gmsh.model.getDimension()
Vec{dim}((1,2))

This will crash Julia without any error message. The case can be simplified even further:

dim = Int32(2)
Vec{dim}((1,2))

Is it intended that only Int64s are supported?
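A workaround sketch, assuming the crash is specific to the Int32 type parameter: convert the dimension to Int before using it as a type parameter.

using Tensors

dim = Int(gmsh.model.getDimension())  # convert Int32 -> Int64, as in the snippet above
Vec{dim}((1, 2))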

Missing test coverage

After #194, the following code paths with custom voigt order are not covered by tests (reminder to fix at some point)

  • tovoigt!(v, ::Tensor{2}; order::AbstractVecOrMat) (_tovoigt!(v::AbstractVector, A::Tensor{2, dim}, order::AbstractVecOrMat; offset::Int=0) where {dim})
  • tovoigt(SArray, ::AbstractTensor; order::AbstractVecOrMat, kwargs...) (_to_static_voigt(::AbstractTensor, ::AbstractVecOrMat; kwargs...))

SIMD promotion

It is possible to play with promotion a bit (for tensor products in particular) to streamline more calculations into simd.jl. This could potentially speed up things, if the promotion is cheap enough.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
