ERROR: LoadError: BoundsError: attempt to access 4-element Vector{Pair{QN, Int64}} at index [5]
Stacktrace:
[1] getindex
@ ./array.jl:861 [inlined]
[2] getindex
@ ./abstractarray.jl:1221 [inlined]
[3] blockdim
@ ~/.julia/packages/ITensors/5dcHw/src/qn/qnindex.jl:15 [inlined]
[4] blockdim
@ ~/.julia/packages/ITensors/5dcHw/src/qn/qnindex.jl:256 [inlined]
[5] blockdim
@ ~/.julia/packages/ITensors/5dcHw/src/qn/qnindex.jl:273 [inlined]
[6] blockdim
@ ~/.julia/packages/NDTensors/lbVmG/src/blocksparse/blockdims.jl:131 [inlined]
[7] #124
@ ~/.julia/packages/NDTensors/lbVmG/src/blocksparse/blockdims.jl:140 [inlined]
[8] macro expansion
@ ./ntuple.jl:74 [inlined]
[9] ntuple
@ ./ntuple.jl:69 [inlined]
[10] blockdims
@ ~/.julia/packages/NDTensors/lbVmG/src/blocksparse/blockdims.jl:140 [inlined]
[11] blockdim
@ ~/.julia/packages/NDTensors/lbVmG/src/blocksparse/blockdims.jl:149 [inlined]
[12] blockoffsets(blocks::Vector{Block{4}}, inds::NTuple{4, Index{Vector{Pair{QN, Int64}}}})
@ NDTensors ~/.julia/packages/NDTensors/lbVmG/src/blocksparse/blockoffsets.jl:71
[13] (NDTensors.BlockSparseTensor)(#unused#::Type{Float64}, blocks::Vector{Block{4}}, inds::NTuple{4, Index{Vector{Pair{QN, Int64}}}})
@ NDTensors ~/.julia/packages/NDTensors/lbVmG/src/blocksparse/blocksparsetensor.jl:97
[14] _Allreduce(#unused#::Type{NDTensors.BlockSparse{Float64, Vector{Float64}, 4}}, sendbuf::ITensor, op::Function, comm::MPI.Comm)
@ ITensorParallel ~/.julia/packages/ITensorParallel/63YWQ/src/mpi_projmposum.jl:44
[15] _Allreduce(sendbuf::ITensor, op::Function, comm::MPI.Comm)
@ ITensorParallel ~/.julia/packages/ITensorParallel/63YWQ/src/mpi_projmposum.jl:22
[16] eigsolve(A::MPISum{ProjMPO}, x₀::ITensor, howmany::Int64, which::Symbol, alg::KrylovKit.Lanczos{KrylovKit.ModifiedGramSchmidt2, Float64})
@ KrylovKit ~/.julia/packages/KrylovKit/kWdb6/src/eigsolve/lanczos.jl:11
It looks like the error changes slightly with the dimensions of the MPS: in the example above I used an 8x2 lattice, while with a 4x2 lattice I get the error below (a sketch of the model setup follows the trace):
ERROR: LoadError: MPIError(15): MPI_ERR_TRUNCATE: message truncated
Stacktrace:
[1] MPI_Allreduce
@ ~/.julia/packages/MPI/tJjHF/src/api/generated_api.jl:288 [inlined]
[2] Allreduce!(rbuf::MPI.RBuffer{Vector{Float64}, Vector{Float64}}, op::MPI.Op, comm::MPI.Comm)
@ MPI ~/.julia/packages/MPI/tJjHF/src/collective.jl:653
[3] _Allreduce(sendbuf::ITensor, op::Function, comm::MPI.Comm)
@ ITensorParallel ~/.julia/packages/ITensorParallel/63YWQ/src/mpi_projmposum.jl:22
[4] eigsolve(A::MPISum{ProjMPO}, x₀::ITensor, howmany::Int64, which::Symbol, alg::KrylovKit.Lanczos{KrylovKit.ModifiedGramSchmidt2, Float64})
@ KrylovKit ~/.julia/packages/KrylovKit/kWdb6/src/eigsolve/lanczos.jl:11
in expression starting at /mnt/home/nbaldelli/parheis.jl:61
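For context, here is a minimal sketch of the kind of model setup I have in mind, not the actual parheis.jl: the names Nx, Ny, the Heisenberg couplings, yperiodic=false, and the DMRG parameters are all assumptions for illustration. It only shows where the 8x2 / 4x2 lattice dimensions and the QN-conserving (block-sparse) indices from the stack trace come from; in the parallel run the Hamiltonian terms are split across the MPI ranks and wrapped by ITensorParallel (the MPISum{ProjMPO} appearing in the trace) before DMRG is called.

using ITensors

# minimal 2D Heisenberg setup (sketch, not the actual parheis.jl)
function main(; Nx=8, Ny=2)  # 8x2 gives the BoundsError, 4x2 the MPI_ERR_TRUNCATE
  N = Nx * Ny

  # spin-1/2 sites with Sz conservation; this produces the QN block-sparse
  # indices (Vector{Pair{QN, Int64}}) seen in the stack trace
  sites = siteinds("S=1/2", N; conserve_qns=true)

  # nearest-neighbour bonds of the Nx x Ny square lattice
  lattice = square_lattice(Nx, Ny; yperiodic=false)

  os = OpSum()
  for b in lattice
    os += 0.5, "S+", b.s1, "S-", b.s2
    os += 0.5, "S-", b.s1, "S+", b.s2
    os += "Sz", b.s1, "Sz", b.s2
  end
  H = MPO(os, sites)

  # Neel product state fixes the total-Sz sector
  psi0 = randomMPS(sites, [isodd(i) ? "Up" : "Dn" for i in 1:N]; linkdims=10)

  # serial reference run; the parallel version instead splits `os` over the
  # MPI ranks and wraps the pieces with ITensorParallel before calling dmrg
  return dmrg(H, psi0; nsweeps=5, maxdim=100, cutoff=1e-8)
end

main()

The SLURM script I use to submit the job is: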
#!/bin/bash
#SBATCH -N2                      # 2 nodes
#SBATCH --ntasks-per-node 1      # 1 MPI rank per node
#SBATCH --cpus-per-task 8        # 8 cores (Julia threads) per rank

module purge
module load slurm
module load openmpi/4
module load julia

# rebuild MPI.jl against the system OpenMPI
julia --project -e 'ENV["JULIA_MPI_BINARY"]="system"; using Pkg; Pkg.build("MPI"; verbose=true)'

# launch one Julia process per rank, each with 8 threads
mpirun julia -t 8 <name_file_here>
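So the job runs with 2 MPI ranks (one per node), each with 8 Julia threads. As a sanity check, not part of the actual run, a minimal hypothetical check_mpi.jl to confirm that both ranks pick up the system MPI and see each other could look like this:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
# with the job script above this should print rank 0 and rank 1, one per node
println("rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
MPI.Barrier(comm)
MPI.Finalize()

It would be launched the same way as the main script, e.g. mpirun julia --project check_mpi.jl.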