
amrex-fluids / iamr


A parallel, adaptive mesh refinement (AMR) code that solves the variable-density incompressible Navier-Stokes equations.

Home Page: https://amrex-fluids.github.io/IAMR/

Fortran 0.50% Mathematica 30.97% Makefile 1.03% C++ 64.65% Python 2.33% Shell 0.52%

iamr's People

Contributors: ajnonaka, asalmgren, atmyers, bcfriesen, burlen, cgilet, dcoveney, drummerdoc, emotheau, esclapez, etpalmer63, jbbel, jrood-nrel, maxpkatz, mic84, oscarantepara, vebeckner, vricchiu, weiqunzhang, wyphan


iamr's Issues

Force vector as a function of scalar gradients?

Hi,

I am trying to add a body force to the momentum equation, which is proportional to the gradients of an advected scalar (in this case Tracer). What I have noticed is that the getForce() function is called several times in each time step, across several files, depending on the procedure (scalar advection, velocity advection, initial time step estimate etc), with each function call using different ngrow, scomp, scalScomp etc.

I assume that I only need to add this body force in the functions where getForce() is called relating to the velocity updates. Hence at these points, I have tried to make a new MultiFab for the scalars that has more ghost cells than the force vector, e.g. in NavierStokesBase::initial_velocity_diffusion_update

MultiFab& U_old = get_old_data(State_Type);
MultiFab UOldBorder(grids, dmap, NUM_STATE, 4);
MultiFab::Copy(UOldBorder, U_old, 0, 0, NUM_STATE, U_old.nGrow());
UOldBorder.FillBoundary(geom.periodicity());

Then I MFIter-ate over UOldBorder and pass it as the Scal argument to getForce(). Then in FORT_MAKEFORCE:

      do k = f_lo(3), f_hi(3)
         do j = f_lo(2), f_hi(2)
            do i = f_lo(1), f_hi(1)
               dTdx = (scal(i+1,j,k,nTracScal)-scal(i-1,j,k,nTracScal))/(2*hx)
               dTdy = (scal(i,j+1,k,nTracScal)-scal(i,j-1,k,nTracScal))/(2*hy)
               force(i,j,k,nXvel) = someFunctionOf(dTdx)
               force(i,j,k,nYvel) = someFunctionOf(dTdy)
            enddo
         enddo
      enddo

When I try this I get strange behaviour, e.g. the pressure field is divided into square patches, with wrong values in the cells surrounding the squares. This may indicate that the way I am filling ghost cells or indexing the arrays is wrong, but I use a similar procedure for tagging based on density gradients which works okay (this may just be something to debug on my end).
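For reference, a minimal sketch of one way to get scalar data with all ghost cells (physical, periodic, and coarse-fine) filled is to use a FillPatchIterator instead of a raw copy plus FillBoundary. This is only a sketch, assuming it runs inside a NavierStokesBase member function; the names S_fpi, ScalBorder, and prev_time are mine, with prev_time standing for the old-state time:

// Sketch only: fill 4 ghost cells of the old state so gradients can be
// taken wherever the force is needed.
FillPatchIterator S_fpi(*this, get_old_data(State_Type), 4,
                        prev_time, State_Type, 0, NUM_STATE);
MultiFab& ScalBorder = S_fpi.get_mf();  // ghost cells are filled here
// ScalBorder could then be passed as the Scal argument to getForce().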

I am somewhat confused by the various places the function is called and the different values of scomp, ncomp, etc., so any suggestions would be much appreciated. Is there something I need to do in addition to what I have described above in order to obtain the correct forcing, say changing some other arguments of getForce? Is there an example of a similar method in any other branches or AMReX code? Thanks

CUDA version report error on certain inputs

Hi,

I compiled the CUDA version of the IAMR examples, but found that they report errors on certain inputs. For example, in IAMR/Exec/eb_run2d:
"./amr2d.gnu.MPI.CUDA.ex inputs.2d.double_shear_layer-rotate" runs correctly.
"./amr2d.gnu.MPI.CUDA.ex inputs.2d.flow_past_cylinder-x" reports the following errors:
No protocol specified
Initializing CUDA...
CUDA initialized with 1 GPU per MPI rank; 1 GPU(s) used in total
MPI initialized with 1 MPI processes
MPI initialized with thread support level 0
AMReX (21.12-dirty) initialized
xlo set to mass inflow.
xhi set to pressure outflow.
Warning: both amr.plot_int and amr.plot_per are > 0.!
NavierStokesBase::init_additional_state_types()::have_divu = 0
NavierStokesBase::init_additional_state_types()::have_dsdt = 0
NavierStokesBase::init_additional_state_types: num_state_type = 3
Initializing EB2 structs
Creating projector
Installing projector level 0
amrex::Abort::0::GPU last error detected in file ../../../amrex/Src/Base/AMReX_GpuLaunchFunctsG.H line 834: invalid device function !!!
SIGABRT
See Backtrace.0 file for details

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 6.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

Backtrace.0 is as follows:
=== If no file names and line numbers are shown below, one can run
addr2line -Cpfie my_exefile my_line_address
to convert my_line_address (e.g., 0x4a6b) into file name and line number.
Or one can use amrex/Tools/Backtrace/parse_bt.py.

=== Please note that the line number reported by addr2line may not be accurate.
One can use
readelf -wl my_exefile | grep my_line_address'
to find out the offset for that line.

0: ./amr2d.gnu.MPI.CUDA.ex(+0x2f20b5) [0x561c797640b5]
amrex::BLBackTrace::print_backtrace_info(_IO_FILE*) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_BLBackTrace.cpp:179

1: ./amr2d.gnu.MPI.CUDA.ex(+0x2f3e35) [0x561c79765e35]
amrex::BLBackTrace::handler(int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_BLBackTrace.cpp:85

2: ./amr2d.gnu.MPI.CUDA.ex(+0x62265) [0x561c794d4265]
std::__cxx11::basic_string<char, std::char_traits, std::allocator >::_M_is_local() const at /usr/include/c++/9/bits/basic_string.h:222
(inlined by) std::__cxx11::basic_string<char, std::char_traits, std::allocator >::_M_dispose() at /usr/include/c++/9/bits/basic_string.h:231
(inlined by) std::__cxx11::basic_string<char, std::char_traits, std::allocator >::~basic_string() at /usr/include/c++/9/bits/basic_string.h:658
(inlined by) amrex::Gpu::ErrorCheck(char const*, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_GpuError.H:54

3: ./amr2d.gnu.MPI.CUDA.ex(+0x7e41c) [0x561c794f041c]
amrex::Gpu::AsyncArray<amrex::Box, 0>::~AsyncArray() at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_GpuAsyncArray.H:64
(inlined by) void amrex::GpuBndryFuncFab::ccfcdoitamrex::FilccCell(amrex::Box const&, amrex::FArrayBox&, int, int, amrex::Geometry const&, double, amrex::Vector<amrex::BCRec, std::allocatoramrex::BCRec > const&, int, int, amrex::FilccCell&&) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_PhysBCFunct.H:393

4: ./amr2d.gnu.MPI.CUDA.ex(+0x71dc5) [0x561c794e3dc5]
amrex::GpuBndryFuncFab::operator()(amrex::Box const&, amrex::FArrayBox&, int, int, amrex::Geometry const&, double, amrex::Vector<amrex::BCRec, std::allocatoramrex::BCRec > const&, int, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_PhysBCFunct.H:204
(inlined by) dummy_fill(amrex::Box const&, amrex::FArrayBox&, int, int, amrex::Geometry const&, double, amrex::Vector<amrex::BCRec, std::allocatoramrex::BCRec > const&, int, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../Source/NS_bcfill.H:272

5: ./amr2d.gnu.MPI.CUDA.ex(+0x3b94fa) [0x561c7982b4fa]
amrex::StateData::FillBoundary(amrex::Box const&, amrex::FArrayBox&, double, amrex::Geometry const&, int, int, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_StateData.cpp:556

6: ./amr2d.gnu.MPI.CUDA.ex(+0x3bb61d) [0x561c7982d61d]
amrex::StateDataPhysBCFunct::operator()(amrex::MultiFab&, int, int, amrex::IntVect const&, double, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_StateData.cpp:909

7: ./amr2d.gnu.MPI.CUDA.ex(+0x3b2795) [0x561c79824795]
std::enable_if<amrex::IsFabArray<amrex::MultiFab, void>::value, void>::type amrex::FillPatchSingleLevel<amrex::MultiFab, amrex::StateDataPhysBCFunct>(amrex::MultiFab&, amrex::IntVect const&, double, amrex::Vector<amrex::MultiFab*, std::allocatoramrex::MultiFab* > const&, amrex::Vector<double, std::allocator > const&, int, int, int, amrex::Geometry const&, amrex::StateDataPhysBCFunct&, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/AmrCore/AMReX_FillPatchUtil_I.H:159

8: ./amr2d.gnu.MPI.CUDA.ex(+0x3a9c82) [0x561c7981bc82]
std::vector<double, std::allocator >::~vector() at /usr/include/c++/9/bits/stl_vector.h:677
(inlined by) amrex::Vector<double, std::allocator >::~Vector() at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_Vector.H:25
(inlined by) amrex::FillPatchIterator::FillFromLevel0(double, int, int, int, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_AmrLevel.cpp:1102

9: ./amr2d.gnu.MPI.CUDA.ex(+0x3aa29d) [0x561c7981c29d]
amrex::FillPatchIterator::Initialize(int, double, int, int, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_AmrLevel.cpp:1016

10: ./amr2d.gnu.MPI.CUDA.ex(+0x3ab441) [0x561c7981d441]
amrex::AmrLevel::FillPatch(amrex::AmrLevel&, amrex::MultiFab&, int, double, int, int, int, int) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_AmrLevel.cpp:2113

11: ./amr2d.gnu.MPI.CUDA.ex(+0xc69af) [0x561c795389af]
NavierStokesBase::computeGradP(double) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../Source/NavierStokesBase.cpp:4291

12: ./amr2d.gnu.MPI.CUDA.ex(+0x840dd) [0x561c794f60dd]
NavierStokes::initData() at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../Source/NavierStokes.cpp:371

13: ./amr2d.gnu.MPI.CUDA.ex(+0x390a41) [0x561c79802a41]
std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count() at /usr/include/c++/9/bits/shared_ptr_base.h:729
(inlined by) std::__shared_ptr<amrex::BoxList, (__gnu_cxx::_Lock_policy)2>::
__shared_ptr() at /usr/include/c++/9/bits/shared_ptr_base.h:1169
(inlined by) std::shared_ptramrex::BoxList::~shared_ptr() at /usr/include/c++/9/bits/shared_ptr.h:103
(inlined by) amrex::BoxArray::~BoxArray() at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Base/AMReX_BoxArray.H:556
(inlined by) amrex::Amr::defBaseLevel(double, amrex::BoxArray const*, amrex::Vector<int, std::allocator > const*) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_Amr.cpp:2504

14: ./amr2d.gnu.MPI.CUDA.ex(+0x39bc32) [0x561c7980dc32]
amrex::Amr::initialInit(double, double, amrex::BoxArray const*, amrex::Vector<int, std::allocator > const*) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_Amr.cpp:1274
(inlined by) amrex::Amr::init(double, double) at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../../amrex/Src/Amr/AMReX_Amr.cpp:1142

15: ./amr2d.gnu.MPI.CUDA.ex(+0x437bb) [0x561c794b57bb]
main at /home/lli/PR-DNS/IAMR/Exec/eb_run2d/../../Source/main.cpp:96

16: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fa97ec3a0b3]

17: ./amr2d.gnu.MPI.CUDA.ex(+0x4d5de) [0x561c794bf5de]
?? ??:0

Could you please help with this?

Type mismatch in 3D

Function extrapproj in PROJOUTFLOWBC_F.H vs. Fortran procedure in PROJOUTFLOWBC_3D.F
number of arguments 47 does NOT match 46.

Function filterp in PROJECTION_F.H vs. Fortran procedure in PROJECTION_3D.F
number of arguments 15 does NOT match 12.

Function radmpy in PROJECTION_F.H vs. Fortran procedure in PROJECTION_3D.F
number of arguments 13 does NOT match 11.
arg #9: C type ['int', 'pointer'] does NOT match Fortran type ('REAL 8', 'pointer', 'r').
arg #11: C type ['double', 'pointer'] does NOT match Fortran type ('INTEGER 4', 'pointer', 'n').

Function raddiv in PROJECTION_F.H vs. Fortran procedure in PROJECTION_3D.F
number of arguments 13 does NOT match 11.
arg #9: C type ['int', 'pointer'] does NOT match Fortran type ('REAL 8', 'pointer', 'r').
arg #11: C type ['double', 'pointer'] does NOT match Fortran type ('INTEGER 4', 'pointer', 'n').

Why is ScalMinMax called in the scalar_update subroutine?

Hello all,

I have a small question about the godunov->ConservativeScalMinMax function in the scalar_update subroutine; the comments say this function is used to correct the field to avoid undershoots or overshoots. Could someone explain this function in more detail? (In what numerical situations do we need it?)

In the corresponding Fortran subroutine, some min and max functions are used, and I am still confused about why they are needed.
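For what it's worth, here is a generic illustration (my own sketch, not IAMR's actual code) of the kind of clamp such a min/max correction applies: the updated scalar in a cell is kept within the minimum and maximum of the old values in that cell and its neighbours, so the advection update cannot introduce new extrema (overshoots or undershoots). The names s_old and s_new are hypothetical, and only the x-direction neighbours are shown for brevity:

// Generic limiter sketch; requires <algorithm> for std::min/std::max.
amrex::Real smin = std::min({s_old(i-1,j,k,n), s_old(i,j,k,n), s_old(i+1,j,k,n)});
amrex::Real smax = std::max({s_old(i-1,j,k,n), s_old(i,j,k,n), s_old(i+1,j,k,n)});
s_new(i,j,k,n) = std::max(smin, std::min(smax, s_new(i,j,k,n)));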

Thanks so much!

Questions about tagging higher error cells

Hello all,

Here are some small questions about tagging cells. I read these two lines in PROB_2D.F:

tag(i,j) = merge(set,tag(i,j),rho(i,j,1).lt.denerr)
tag(i,j) = merge(set,tag(i,j),adv(i,j,1).gt.adverr)
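(For reference, a rough C++ reading of these two lines, just to make the criteria explicit; I am assuming set is the "refine this cell" flag value passed into the tagging routine:)

tag(i,j) = (rho(i,j,1) < denerr) ? set : tag(i,j);  // tag where the density value is below denerr
tag(i,j) = (adv(i,j,1) > adverr) ? set : tag(i,j);  // tag where the tracer value exceeds adverr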

Q1:

I think denerr is a criterion based on density, but I am not sure whether it is based on the absolute density value or the absolute density gradient.

Q2:

I guess adverr is about some tracer error, which we can enable by setting ns.do_tracer_ref = 1. But I still do not know the physical meaning of this error. Is it based on the absolute gradient of the tracer? How can we choose a proper adverr value in the probin file?

Thanks so much.

(A small suggestion: at this line, https://github.com/AMReX-Codes/IAMR/blob/924ca222059e128dec9a11b20c53ace8c1344997/Exec/run2d/PROB_2D.F#L1117, the comments refer to the density gradient, yet the errors in the following subroutines are based on adverr. I am not sure whether the comments are wrong; it confuses me.)

IAMR Capability

Is IAMR capable of simulating mixing of two different miscible fluid species due to density difference? Thank you in advance for your response.

question about running IAMR on cori

Hi, I'm running the IAMR 3d rt example on Cori. I'd like to run in the 1k to 8k core range.

I'm seeing warnings

BOXLIB WARN: BiCGStab_SOLVE: failure 1
BOXLIB WARN: BICGSTAB_solve: breakdown in bicg, going with what I have

I've tried 256, 512, 1024, and 2048 MPI processes. I think I've seen these warnings in all of them. Should I be changing the input file as I use more cores?

Burlen

Questions about different diffusive schemes used in IAMR

Hello all,

I am now writing code to handle the diffusive term of the scalar equation. I noticed there are three different rho_flags in IAMR, depending on the diffusion type chosen. Based on the rho_flag, different flux terms are defined and stored, as in the code in Diffusion::diffuse_scalar.

Now I understand the scheme in the JCP paper, but it is difficult for me to distinguish these three diffusion types and their advantages. Could someone give me a short, general explanation of the three diffusion types? What are the underlying ideas behind their design?

Thanks so much.

Jordan

Questions about initbubble subroutine

Hello all,

When reading the subroutine initbubble in prob_2D.F, I am confused by the following lines:

        scal(i,j,1) = one + half*(denfact-one)*(one-tanh(30.*(dist-radblob)))
        do n = 2,nscal-1
           scal(i,j,n) = one
        end do
        scal(i,j,nscal) = merge(one,zero,dist.lt.radblob)

Questions are:

  1. Why do we have so many scalars?

  2. What is the meaning of the merge function? (I guess it is like an "if" condition; see the sketch below.)
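(For context, my understanding of the Fortran merge(tsource, fsource, mask) intrinsic is that it behaves like an elementwise conditional; a rough, hypothetical C++ equivalent of the last quoted line would be:

// merge(one, zero, dist.lt.radblob) picks "one" where the mask is true:
scal(i,j,nscal) = (dist < radblob) ? 1.0 : 0.0;

so the last scalar is 1 inside the bubble radius and 0 outside.)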

Thanks a lot!

NavierStokes::velocity_diffusion_update

Hi,

I am trying to understand the code and have a question. In the routine NavierStokes::velocity_diffusion_update, if we don't have variable viscosity, then the values of loc_viscn and loc_viscnp1 are zero, and diffuse_velocity_setup and diffusion->diffuse_velocity get evaluated with zeroes. If we have variable viscosity, then those values are nonzero. Does that seem right?

amrex::Abort::2721::MLMG failed !!!

Hi, I am attempting to generate some realistic data for an in situ visualization benchmark.

I'd like to run on 8192 cores and have the simulation run fairly quickly per iteration. I am trying to use the inputs.3d.rt file that comes with IAMR to set things up, but I've modified some of the parameters as follows:

amr.n_cell = 2048 2048 4096                                                                                                          
amr.max_grid_size = 256
amr.blocking_factor = 8
amr.max_level = 0
amr.ref_ratio = 2 2 2 2 2 2 2
amr.regrid_int = 2 

With this config at 8192 cores, things crash and I get the following message: amrex::Abort::2721::MLMG failed !!!

Can you help me figure out whether I've done something wrong, and/or get this config to work?

I've tested with amr.n_cell = 1024 1024 2048 on 1024 MPI ranks, and it runs well, but 1024 ranks is a smaller job than I'd like to test.

I've tested with amr.n_cell = 1024 1024 2048 and amr.max_level = 1 on 8192 ranks, but this runs too slowly because the in situ code is relatively fast by comparison. I think setting max_level=0 would make it run fast enough, but I would need to verify that.

I appreciate any advice/corrections you might have.

Problem on particle_count output

Hello all,

I ran into a problem when running a 3D lid-driven case with particles.

The case was run on 8 CPU ranks without AMR (amr.max_level=0).

It's interesting that the code only crashes when the particle is initially located at (0.5, 0.5, 0.5), and other locations are fine. The problem seems related to the 'particle_count' output.

Here is the screen output :

nohup: ignoring input
Successfully read inputs file ... 
Successfully read inputs file ... 
Starting to call amrex_probinit ... 
Successfully run amrex_probinit
Redistributing from processor 0 to 3
ParticleContainer<NStructReal, NStructInt, NArrayReal, NArrayInt> byte spread across MPI nodes: [0 (0) ... 0 (0)] total particles: (0)
ParticleContainer::Redistribute() time: 5.412101746e-05

Redistributing from processor 4 to 7
ParticleContainer<NStructReal, NStructInt, NArrayReal, NArrayInt> byte spread across MPI nodes: [0 (0) ... 56 (1)] total particles: (2)
ParticleContainer::Redistribute() time: 7.796287537e-05

Total number of particles: 2
ParticleContainer<NStructReal, NStructInt, NArrayReal, NArrayInt> byte spread across MPI nodes: [0 (0) ... 56 (1)] total particles: (2)
InitFromAsciiFile() time: 0.001892089844
Multiplying dt by init_shrink; dt = 0.00328125
ParticleContainer<NStructReal, NStructInt, NArrayReal, NArrayInt> byte spread across MPI nodes: [0 (0) ... 56 (1)] total particles: (2)
ParticleContainer::Redistribute() time: 2.098083496e-05

Multiplying dt by init_shrink; dt = 0.00328125
TracerParticleContainer::AdvectWithUmac() time: 1.59740448e-05
TracerParticleContainer::AdvectWithUmac() time: 3.099441528e-06
TracerParticleContainer::AdvectWithUmac() time: 3.099441528e-06
INITIAL GRIDS 
  Level 0   8 grids  262144 cells  100 % of domain
            smallest grid: 32 x 32 x 32  biggest grid: 32 x 32 x 32

CHECKPOINT: file = chk00000
amrex::UtilRenameDirectoryToOld():  chk00000 exists.  Renaming to:  chk00000.old.68820595741
Particle::Checkpoint: pdir levelDirectoriesCreated = chk00000.temp/Particles  0
ParticleContainer<NStructReal, NStructInt, NArrayReal, NArrayInt>::Checkpoint() time: 0.002763986588
checkPoint() time = 0.02367091179 secs.
PLOTFILE: file = plt00000
amrex::UtilRenameDirectoryToOld():  plt00000 exists.  Renaming to:  plt00000.old.71194791794
*** glibc detected *** ./amr3d.gnu.MPI.ex: double free or corruption (out): 0x0000000003030fd0 ***
======= Backtrace: =========
/lib64/libc.so.6[0x396d6760e6]
/lib64/libc.so.6[0x396d678c13]
./amr3d.gnu.MPI.ex[0x40d1f0]
./amr3d.gnu.MPI.ex[0x423896]
./amr3d.gnu.MPI.ex[0x52e327]
./amr3d.gnu.MPI.ex[0x44223a]
./amr3d.gnu.MPI.ex[0x442882]
./amr3d.gnu.MPI.ex[0x4136d9]
./amr3d.gnu.MPI.ex[0x419627]
./amr3d.gnu.MPI.ex[0x57f20e]
./amr3d.gnu.MPI.ex[0x58f055]
./amr3d.gnu.MPI.ex[0x409f64]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x396d61ecdd]
./amr3d.gnu.MPI.ex[0x409315]
======= Memory map: ========
00400000-00ab5000 r-xp 00000000 08:06 51034721                           /home/wangzhuo/workspace/ibamrbase/iamr/Exec/Lid3dPar/amr3d.gnu.MPI.ex
00cb4000-00cb8000 rw-p 006b4000 08:06 51034721                           /home/wangzhuo/workspace/ibamrbase/iamr/Exec/Lid3dPar/amr3d.gnu.MPI.ex
00cb8000-01563000 rw-p 00000000 00:00 0 
0252a000-03b43000 rw-p 00000000 00:00 0                                  [heap]
396ce00000-396ce20000 r-xp 00000000 08:03 7602505                        /lib64/ld-2.12.so
396d01f000-396d020000 r--p 0001f000 08:03 7602505                        /lib64/ld-2.12.so
396d020000-396d021000 rw-p 00020000 08:03 7602505                        /lib64/ld-2.12.so
396d021000-396d022000 rw-p 00000000 00:00 0 
396d600000-396d78a000 r-xp 00000000 08:03 7602222                        /lib64/libc-2.12.so
396d78a000-396d989000 ---p 0018a000 08:03 7602222                        /lib64/libc-2.12.so
396d989000-396d98d000 r--p 00189000 08:03 7602222                        /lib64/libc-2.12.so
396d98d000-396d98e000 rw-p 0018d000 08:03 7602222                        /lib64/libc-2.12.so
396d98e000-396d993000 rw-p 00000000 00:00 0 
396da00000-396da17000 r-xp 00000000 08:03 7602230                        /lib64/libpthread-2.12.so
396da17000-396dc17000 ---p 00017000 08:03 7602230                        /lib64/libpthread-2.12.so
396dc17000-396dc18000 r--p 00017000 08:03 7602230                        /lib64/libpthread-2.12.so
396dc18000-396dc19000 rw-p 00018000 08:03 7602230                        /lib64/libpthread-2.12.so
396dc19000-396dc1d000 rw-p 00000000 00:00 0 
396de00000-396de83000 r-xp 00000000 08:03 7602308                        /lib64/libm-2.12.so
396de83000-396e082000 ---p 00083000 08:03 7602308                        /lib64/libm-2.12.so
396e082000-396e083000 r--p 00082000 08:03 7602308                        /lib64/libm-2.12.so
396e083000-396e084000 rw-p 00083000 08:03 7602308                        /lib64/libm-2.12.so
396e600000-396e607000 r-xp 00000000 08:03 7602627                        /lib64/librt-2.12.so
396e607000-396e806000 ---p 00007000 08:03 7602627                        /lib64/librt-2.12.so
396e806000-396e807000 r--p 00006000 08:03 7602627                        /lib64/librt-2.12.so
396e807000-396e808000 rw-p 00007000 08:03 7602627                        /lib64/librt-2.12.so
7f258219a000-7f258274d000 rw-p 00000000 00:00 0 
7f258274d000-7f258278e000 rw-s 00000000 00:11 7944191                    /dev/shm/mpich_shar_tmpqcPPSI (deleted)
7f258278e000-7f25827cf000 rw-s 00000000 00:11 7993514                    /dev/shm/mpich_shar_tmpZsSESI (deleted)
7f25827cf000-7f2582810000 rw-s 00000000 00:11 8009874                    /dev/shm/mpich_shar_tmpoN80UI (deleted)
7f2582810000-7f25833f3000 rw-p 00000000 00:00 0 
7f25833f3000-7f25833ff000 r-xp 00000000 08:03 7602183                    /lib64/libnss_files-2.12.so
7f25833ff000-7f25835ff000 ---p 0000c000 08:03 7602183                    /lib64/libnss_files-2.12.so
7f25835ff000-7f2583600000 r--p 0000c000 08:03 7602183                    /lib64/libnss_files-2.12.so
7f2583600000-7f2583601000 rw-p 0000d000 08:03 7602183                    /lib64/libnss_files-2.12.so
7f2583601000-7f2585983000 rw-s 00000000 00:11 7997908                    /dev/shm/mpich_shar_tmpBojYFY (deleted)
7f2585983000-7f258598a000 rw-p 00000000 00:00 0 
7f258598a000-7f258599f000 r-xp 00000000 08:06 57965179                   /home/wangzhuo/mygcc/gcc5/lib64/libgcc_s.so.1
7f258599f000-7f2585b9f000 ---p 00015000 08:06 57965179                   /home/wangzhuo/mygcc/gcc5/lib64/libgcc_s.so.1
7f2585b9f000-7f2585ba0000 rw-p 00015000 08:06 57965179                   /home/wangzhuo/mygcc/gcc5/lib64/libgcc_s.so.1
7f2585bd6000-7f2585bd7000 rw-p 00000000 00:00 0 
7f2585bd7000-7f2585d55000 r-xp 00000000 08:06 57965941                   /home/wangzhuo/mygcc/gcc5/lib64/libstdc++.so.6.0.21
7f2585d55000-7f2585f55000 ---p 0017e000 08:06 57965941                   /home/wangzhuo/mygcc/gcc5/lib64/libstdc++.so.6.0.21
7f2585f55000-7f2585f5f000 r--p 0017e000 08:06 57965941                   /home/wangzhuo/mygcc/gcc5/lib64/libstdc++.so.6.0.21
7f2585f5f000-7f2585f61000 rw-p 00188000 08:06 57965941                   /home/wangzhuo/mygcc/gcc5/lib64/libstdc++.so.6.0.21
7f2585f61000-7f2585f66000 rw-p 00000000 00:00 0 
7f2585f66000-7f2585f85000 r-xp 00000000 08:06 51018902                   /home/wangzhuo/software/mpich3gnu48/lib/libmpicxx.so.12.1.0
7f2585f85000-7f2586185000 ---p 0001f000 08:06 51018902                   /home/wangzhuo/software/mpich3gnu48/lib/libmpicxx.so.12.1.0
7f2586185000-7f2586188000 rw-p 0001f000 08:06 51018902                   /home/wangzhuo/software/mpich3gnu48/lib/libmpicxx.so.12.1.0
7f2586188000-7f25861c6000 r-xp 00000000 08:06 57966035                   /home/wangzhuo/mygcc/gcc5/lib64/libquadmath.so.0.0.0
7f25861c6000-7f25863c5000 ---p 0003e000 08:06 57966035                   /home/wangzhuo/mygcc/gcc5/lib64/libquadmath.so.0.0.0
7f25863c5000-7f25863c6000 rw-p 0003d000 08:06 57966035                   /home/wangzhuo/mygcc/gcc5/lib64/libquadmath.so.0.0.0
7f25863c6000-7f25864db000 r-xp 00000000 08:06 51022208                   /home/wangzhuo/software/gcc48/lib64/libgfortran.so.3.0.0
7f25864db000-7f25866db000 ---p 00115000 08:06 51022208                   /home/wangzhuo/software/gcc48/lib64/libgfortran.so.3.0.0
7f25866db000-7f25866dd000 rw-p 00115000 08:06 51022208                   /home/wangzhuo/software/gcc48/lib64/libgfortran.so.3.0.0
7f25866dd000-7f25866de000 rw-p 00000000 00:00 0 
7f25866de000-7f2586911000 r-xp 00000000 08:06 51018892                   /home/wangzhuo/software/mpich3gnu48/lib/libmpi.so.12.1.0
7f2586911000-7f2586b10000 ---p 00233000 08:06 51018892                   /home/wangzhuo/software/mpich3gnu48/lib/libmpi.so.12.1.0
7f2586b10000-7f2586b22000 rw-p 00232000 08:06 51018892                   /home/wangzhuo/software/mpich3gnu48/lib/libmpi.so.12.1.0
7f2586b22000-7f2586b5c000 rw-p 00000000 00:00 0 
7f2586b5c000-7f2586b91000 r-xp 00000000 08:06 51018897                   /home/wangzhuo/software/mpich3gnu48/lib/libmpifort.so.12.1.0
7f2586b91000-7f2586d91000 ---p 00035000 08:06 51018897                   /home/wangzhuo/software/mpich3gnu48/lib/libmpifort.so.12.1.0
7f2586d91000-7f2586d92000 rw-p 00035000 08:06 51018897                   /home/wangzhuo/software/mpich3gnu48/lib/libmpifort.so.12.1.0
7f2586d92000-7f2586d94000 rw-p 00000000 00:00 0 
7fffe897c000-7fffe899f000 rw-p 00000000 00:00 0                          [stack]
7fffe89d2000-7fffe89d4000 r--p 00000000 00:00 0                          [vvar]
7fffe89d4000-7fffe89d6000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
7::SIGABRT !!!
See Backtrace.rg_7_rl_7 file for details
application called MPI_Abort(comm=0x84000002, 6) - process 7

Here is the Backtrace log :

=== If no file names and line numbers are shown below, one can run
            addr2line -Cfie my_exefile my_line_address
    to convert `my_line_address` (e.g., 0x4a6b) into file name and line number.

=== Please note that the line number reported by addr2line may not be accurate.
    One can use
            readelf -wl my_exefile | grep my_line_address'
    to find out the offset for that line.

 0: ./amr3d.gnu.MPI.ex() [0x55dc75]
    amrex::BLBackTrace::print_backtrace_info(_IO_FILE*)
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_BLBackTrace.cpp:105

 1: ./amr3d.gnu.MPI.ex() [0x55e737]
    amrex::BLBackTrace::handler(int)
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_BLBackTrace.cpp:51

 2: /lib64/libc.so.6() [0x396d632920]
    ??
    ??:0

 3: /lib64/libc.so.6(gsignal+0x35) [0x396d6328a5]
    ??
    ??:0

 4: /lib64/libc.so.6(abort+0x175) [0x396d634085]
    ??
    ??:0

 5: /lib64/libc.so.6() [0x396d6707b7]
    ??
    ??:0

 6: /lib64/libc.so.6() [0x396d6760e6]
    ??
    ??:0

 7: /lib64/libc.so.6() [0x396d678c13]
    ??
    ??:0

 8: ./amr3d.gnu.MPI.ex() [0x40d1f0]
    clear
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_BaseFab.H:1552
    ~BaseFab
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_BaseFab.H:1473
    ~FArrayBox
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_FArrayBox.H:221
    ~FArrayBox
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_FArrayBox.H:221

 9: ./amr3d.gnu.MPI.ex() [0x423896]
    amrex::FabArray<amrex::FArrayBox>::clear()
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_FabArray.H:898

10: ./amr3d.gnu.MPI.ex() [0x52e327]
    ~FabArray
    /home/wangzhuo/software/gcc48/include/c++/4.8.2/bits/stl_vector.h:161
    ~MultiFab
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Base/AMReX_MultiFab.cpp:449

11: ./amr3d.gnu.MPI.ex() [0x44223a]
    NavierStokesBase::ParticleDerive(std::string const&, double, amrex::MultiFab&, int)
    /home/wangzhuo/workspace/ibamrbase/iamr/Exec/Lid3dPar/../../Source/NavierStokesBase.cpp:4408

12: ./amr3d.gnu.MPI.ex() [0x442882]
    NavierStokesBase::ParticleDerive(std::string const&, double, int)
    /home/wangzhuo/software/gcc48/include/c++/4.8.2/tuple:140

13: ./amr3d.gnu.MPI.ex() [0x4136d9]
    NavierStokes::derive(std::string const&, double, int)
    /home/wangzhuo/workspace/ibamrbase/iamr/Exec/Lid3dPar/../../Source/NavierStokes.cpp:1349

14: ./amr3d.gnu.MPI.ex() [0x419627]
    NavierStokes::writePlotFile(std::string const&, std::ostream&, amrex::VisMF::How)
    /home/wangzhuo/workspace/ibamrbase/iamr/Exec/Lid3dPar/../../Source/NavierStokes.cpp:1327

15: ./amr3d.gnu.MPI.ex() [0x57f20e]
    amrex::Amr::writePlotFile()
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Amr/AMReX_Amr.cpp:852

16: ./amr3d.gnu.MPI.ex() [0x58f055]
    amrex::Amr::init(double, double)
    /home/wangzhuo/workspace/ibamrbase/amrex/Src/Amr/AMReX_Amr.cpp:1108

17: ./amr3d.gnu.MPI.ex() [0x409f64]
    main
    /home/wangzhuo/workspace/ibamrbase/iamr/Exec/Lid3dPar/../../Source/main.cpp:56

18: /lib64/libc.so.6(__libc_start_main+0xfd) [0x396d61ecdd]
    ??
    ??:0

19: ./amr3d.gnu.MPI.ex() [0x409315]
    _start
    ??:0

Pressure bc at outflow

Can the condition of no tangential acceleration (phi_MAC=0) be removed and just a Neumann bc used for the pressure at outflows? The outflow velocity can be corrected, after computing the total inflow, by the difference needed for global mass conservation. Is there a problem with this approach when the pressure lives at the nodes?

IAMR segfaults in amrex::Distribute() when running with > 1 MPI process with Cray compiler

When compiled with the Cray compiler (CCE), IAMR segfaults on line 929 of AMReX_DistributionMapping.cpp, which is the following statement:

vol += tokens[K].m_vol;

The crash does not occur if any of the following are true:

  • print vol to STDOUT immediately before line 929, e.g., amrex::Print() << vol << std::endl;
  • run with 1 MPI process
  • compile with DEBUG=TRUE
  • compile with Intel or GCC (any optimization level)

These data points, especially the first, suggest to me that this is a bug in the Cray compiler, but I would first like to know if there is anything obviously wrong with amrex::Distribute().

To reproduce, use these commits:

amrex: 8516211a5
IAMR: 97929af

and build with COMP=cray USE_MPI=TRUE USE_OMP=FALSE, using the inputs file inputs.taygre. If the code is correct, then I will close this issue and report the bug to Cray instead.

amrex::Abort::0::MLMGBndry::setBoxBC: Unknown LinOpBCType !!! SIGABRT

Hello. I am trying to replicate the "Non-reacting flow past a cylinder" tutorial provided by amrex-combustion under PeleLMeX. I did everything as specified in the tutorial; the only change I made was turning off MPI by setting USE_MPI = FALSE in the GNUmakefile.
When I try to run the case with the command "./PeleLMeX2d.gnu.ex input.2d-Re500", it gives this error:

Doing initial projection(s)

amrex::Abort::0::MLMGBndry::setBoxBC: Unknown LinOpBCType !!!
SIGABRT
See Backtrace.0 file for details

Can you please tell me where I am going wrong?

Thank you for your time.

IAMR postprocessing bug?

Hey,

Can someone help check this potential bug? A short three-minute video is attached showing how to reproduce it.

It shows that we cannot see the meshes and the number of cells on the finest level in ParaView 5.10.1. I am guessing the bug is in the postprocessing part of IAMR.

Best.

Question about setting physical boundary value while doing the initial projection

Hello all,

I have a question about the function doMLMGNodalProjection. At this line,
https://github.com/AMReX-Codes/IAMR/blob/29f74453b9ef0bb4c69db43354887a840cbc6fa4/Source/Projection.cpp#L2467 ,
it uses the function set_boundary_velocity. But it appears that this function sets the velocity in ghost cells to zero except at inflow. I am confused about why it should be like this.

Here are my thoughts. To do the initial cell-centered velocity projection, we need to fill the ghost cells. But the ghost cells should be set to reflect the physical boundary condition (if only the coarsest level is considered), not simply set to 0. So I do not understand the above function.

Could someone please give me some comments and tips?

Thanks so much.

Ruohai

Questions about where FORT_XVELFILL and FORT_PRESFILL are used in the source code

Hello all,

I am struggling with the boundary conditions. I am trying to understand where the source code uses the Fortran subroutines FORT_XVELFILL and FORT_PRESFILL.

Here are my thoughts.

For the lid-driven example, the physical boundary condition is no-slip, which maps to the EXT_DIR mathematical boundary condition for both the normal and tangential velocity. Here, https://github.com/AMReX-Codes/IAMR/blob/56efa2ce53539acf3aa3a355ced05690eba7cd02/Source/NS_setup.cpp#L186,
BndryFunc(FORT_XVELFILL) is set. FORT_XVELFILL is useful in this case since the x velocity should always be 1 on the upper boundary in the y-direction.

Guided by my intuition, I think this subroutine should be used to reset the boundary velocity at every time advance or Poisson solve. But I just do not know where and how the source code (AMReX and IAMR) uses this subroutine.
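My best guess at the call path so far (a sketch only; ngrow and cur_time are placeholder names) is that any FillPatch-style access of the state, for example

FillPatchIterator fpi(*this, get_new_data(State_Type), ngrow,
                      cur_time, State_Type, Xvel, 1);

eventually goes through StateDataPhysBCFunct and StateData::FillBoundary, which invoke the registered BndryFunc (FORT_XVELFILL for the x velocity) to fill ghost cells on the physical boundary, so the fill routine would be called implicitly whenever patches are filled rather than directly from the time-advance code. Is that right?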

Thanks so much.

Jordan

Compile with CustomFunc(Particles) in Debug Mode

Hi

When using IAMR, I wrote a set of particle-processing methods using the particle module of AMReX. When compiling, I encountered the following problem: if USE_PARTICLES = TRUE is enabled in the GNUmakefile, the original particle part of IAMR is also activated. However, if I instead use USE_PARTICLES = FALSE and only include $(AMREX_HOME)/Src/Particle/Make.package, it compiles successfully in Release mode but encounters compilation errors in DEBUG mode. Here is a partial compilation error message.

../../../amrex/Src/Particle/AMReX_ParticleUtil.H:32:44: error: ‘IsParticleIterator’ was not declared in this scope; did you mean ‘IsMultiFabIterator’?
   32 | template <class Iterator, std::enable_if_t<IsParticleIterator<Iterator>::value, int> foo = 0>
      |                                            ^~~~~~~~~~~~~~~~~~
      |                                            IsMultiFabIterator
../../../amrex/Src/Particle/AMReX_ParticleUtil.H:32:71: error: template argument 1 is invalid
   32 | plate <class Iterator, std::enable_if_t<IsParticleIterator<Iterator>::value, int> foo = 0>
      |                                                                    ^

../../../amrex/Src/Particle/AMReX_ParticleUtil.H:32:86: error: ‘foo’ does not name a type
   32 |  Iterator, std::enable_if_t<IsParticleIterator<Iterator>::value, int> foo = 0>
      |                                                                       ^~~

Compiling hydro_compute_edgestate_and_flux.cpp ...
../../../amrex/Src/Particle/AMReX_ParticleUtil.H:50:44: error: ‘IsParticleIterator’ was not declared in this scope; did you mean ‘IsMultiFabIterator’?
   50 | template <class Iterator, std::enable_if_t<IsParticleIterator<Iterator>::value && !Iterator::ContainerType::ParticleType::is_soa_particle, int> foo = 0>
      |                                            ^~~~~~~~~~~~~~~~~~
      |                                            IsMultiFabIterator
../../../amrex/Src/Particle/AMReX_ParticleUtil.H:50:71: error: template argument 1 is invalid
   50 | plate <class Iterator, std::enable_if_t<IsParticleIterator<Iterator>::value && !Iterator::ContainerType::ParticleType::is_soa_particle, int> foo = 0>
      |                                                                    ^

../../../amrex/Src/Particle/AMReX_ParticleUtil.H:50:80: error: expected ‘>’ before ‘&&’ token
   50 | ass Iterator, std::enable_if_t<IsParticleIterator<Iterator>::value && !Iterator::ContainerType::ParticleType::is_soa_particle, int> foo = 0>
      |                                                                    ^~

../../../amrex/Src/Particle/AMReX_ParticleUtil.H:50:145: error: ‘foo’ does not name a type
   50 | alue && !Iterator::ContainerType::ParticleType::is_soa_particle, int> foo = 0>
      |                                                                       ^~~

I would like more flexibility here: enabling the particle calculation part of IAMR through an additional macro such as -DIAMR_PARTICLE, instead of having it activated simply by setting USE_PARTICLES = TRUE.

Questions about AMR level and Multigrid level in the JCP paper

Hello all,

Happy new year.

When reading the paper [1], I have some thoughts and confusion about the AMR level and the multigrid level.
[image: pseudocode of the multilevel solver from the paper]

My question is: why is the "Else if" condition needed in the pseudocode?

Here is what I am thinking. If the finest AMR level is 2, then L_hi = 2 and L_lo = 0. If a level solver is considered, then for each AMR level a multigrid hierarchy is introduced for coarsening. We can get m(2) = 2, but generally m(0) != 0, since AMR level 0 can still be coarsened. I am not sure if I understand this right. How is m(l) determined?

Thanks so much!

Reference:

  1. Almgren, Ann S., et al. "A conservative adaptive projection method for the variable density incompressible Navier–Stokes equations." Journal of Computational Physics 142.1 (1998): 1-46.

Compile error in Getting Started

Hi,

I'm trying to run IAMR following the Getting Started guide. When I execute make, I encounter the following error:

Compiling AMReX_MLEBNodeFDLaplacian.cpp ...
g++ -MMD -MP  -Werror=return-type -g -O3 -std=c++14  -pthread    -DBL_SPACEDIM=2 -DAMREX_SPACEDIM=2 -DBL_FORT_USE_UNDERSCORE -DAMREX_FORT_USE_UNDERSCORE -DBL_Linux -DAMREX_Linux -DNDEBUG -DAMREX_NO_PROBINIT -Itmp_build_dir/s/2d.gnu.EXE -I. -I. -I../../Source -I../../Source/prob -I../../Source/Utilities -I/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro/Slopes -I/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro/Utils -I/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro/MOL -I/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro/Godunov -I/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro/BDS -I/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro/Projections -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/Base -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/Base/Parser -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/AmrCore -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/Amr -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/Boundary -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/LinearSolvers/MLMG -I/dfs/user/takashi279/plasma/design/multiscale/amrex/Tools/C_scripts  -c /dfs/user/takashi279/plasma/design/multiscale/amrex/Src/LinearSolvers/MLMG/AMReX_MLEBNodeFDLaplacian.cpp -o tmp_build_dir/o/2d.gnu.EXE/AMReX_MLEBNodeFDLaplacian.o
In file included from /dfs/user/takashi279/plasma/design/multiscale/amrex/Src/LinearSolvers/MLMG/AMReX_MLEBNodeFDLaplacian.cpp:1:0:
/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/LinearSolvers/MLMG/AMReX_MLEBNodeFDLaplacian.H:122:71: error: array must be initialized with a brace-enclosed initializer
     GpuArray<Real,AMREX_SPACEDIM> m_sigma{AMREX_D_DECL(1_rt,1_rt,1_rt)};
                                                                       ^
/dfs/user/takashi279/plasma/design/multiscale/amrex/Src/LinearSolvers/MLMG/AMReX_MLEBNodeFDLaplacian.H:122:71: error: too many initializers for ‘amrex::GpuArray<double, 2u>’
/dfs/user/takashi279/plasma/design/multiscale/amrex/Tools/GNUMake/Make.rules:255: recipe for target 'tmp_build_dir/o/2d.gnu.EXE/AMReX_MLEBNodeFDLaplacian.o' failed
make: *** [tmp_build_dir/o/2d.gnu.EXE/AMReX_MLEBNodeFDLaplacian.o] Error 1

The options I set in the GNUmakefile are as follows:

#AMREX_HOME defines the directory in which we will find the BoxLib directory
AMREX_HOME=/dfs/user/takashi279/plasma/design/multiscale/amrex
AMREX_HYDRO_HOME=/dfs/user/takashi279/plasma/design/multiscale/AMReX-Hydro

#TOP defines the directory in which we will find Source, Exec, etc.
TOP = ../..

#
# Variables for the user to set ...
#

PRECISION   = DOUBLE

DIM         = 2
COMP        = gnu

DEBUG       = FALSE
USE_MPI     = FALSE
USE_OMP     = FALSE
PROFILE     = FALSE

USE_CUDA = FALSE

USE_SENSEI_INSITU = FALSE

EBASE = amr

Blocs   := .

include $(TOP)/Exec/Make.IAMR

Am I missing any dependencies? Any help would be really appreciated.

AMReX CUDA issue with CUDA versions 11.2/11.3/11.6/11.7

I built the AMReX/amrex/Tests/GPU/Vector code for an NVIDIA A100 GPU with the command make CUDA_ARCH=80. It built successfully but threw the error below at runtime. I tried CUDA versions 11.2/11.3/11.6/11.7 but face the same issue every time. Please help in this regard.


[/AMReX/amrex/Tests/GPU/Vector]$ ./main3d.gnu.CUDA.ex inputs
Initializing CUDA...
CUDA initialized with 1 device.
amrex::Abort::0::GPU last error detected in file ../../..//Src/Base/AMReX_GpuLaunchFunctsG.H line 885: invalid argument !!!
SIGABRT
See Backtrace.0 file for details
(cuda-11.7) aglnisha@scn37-mn:~/AMReX/amrex/Tests/GPU/Vector$ exit
exit

MLMG solver issues at high density ratio

Hi,

I frequently run into MLMG errors when working with high density ratios (~816:1) in IAMR.

The error message I get is
amrex::Abort::0::MLMG failed !!!

The preceding line above the error message is:
... Projection::level_project() at level 0

If I increase the tolerances, e.g. proj_tol etc, the error just occurs at later time steps.

Are there any recommendations you can make for this? Thanks

High density ratios with EB

Hi,

I encounter errors when I try to include EB geometry in a problem with large density ratios (816:1).

The issue seems to be with the nodal projection: it zeroes out the velocity and gradp fields, which makes it impossible to obtain a time step estimate:

NavierStokes::advance(): before nodal projection 
max(abs(u/v))  = 0.2998059427  0.2482103605
max(abs(gpx/gpy/p)) = 13270.61685  16865.77065  1158.284095
... Projection::level_project() at level 0
Projection::level_project(): lev: 0, time: 0.142975462
NavierStokes::advance(): after velocity update
max(abs(u/v))  = 0  0
max(abs(gpx/gpy/p)) = 0  0  0

...

NavierStokesBase::estTimeStep() failed to provide a good timestep (probably because initial velocity field is zero with no external forcing).
Use ns.init_dt to provide a reasonable timestep on coarsest level.
Note that ns.init_shrink will be applied to init_dt.
amrex::Abort::0::
 !!!

If I change parameters to do with the projection, e.g. the bottom solver or tolerances, the projection fails outright:

Projection::initialVelocityProject(): iteration 0
After nodal projection:
  lev 0: max(abs(u,v)) = 0 0 
Projection::initialVelocityProject(): time: 0.002017629
done calling initialVelocityProject
calling initialPressureProject
Projection::initialPressureProject(): levels = 0  0
amrex::Abort::0::MLMG failed !!!
SIGABRT

The projection issues seem to occur when the more dense region of fluid interacts with the EB. I do not get this issue when I run with a lower density ratio (I've done it with 10:1 but haven't found the cut-off where it stops working).

I have tried tweaking things such as num_pre_smooth, which works for challenging problems without EB but not when I include geometry.

If there are any suggestions for things to try they would be most welcome. Is this a known limitation of IAMR with EB? I haven't found any examples of EB cases that use larger density ratios.

Thanks!

Small question about using the UpdateArg1 function

Hello all,

I have a small question about UpdateArg1; see here:
https://github.com/AMReX-Codes/IAMR/blob/29f74453b9ef0bb4c69db43354887a840cbc6fa4/Source/Projection.cpp#L1636 .
It is used to update the pressure after doing the synchronization projection in the initial step. But for updating the pressure, I think the equation is Pnew = Pnew + Pold, not Pnew = Pnew + Pold/dt, according to the formulas in the JCP paper. So the input parameter should not be 1.0/dt. I am not sure whether this is a bug or I am missing something important.

Thanks in advance for any suggestions and comments.

Viscous terms

Does IAMR assume constant viscosity? We are looking at implementing LES models and would like to know if the viscous term is already implemented as del.(mu*del u). That would make things easier.

function syncadvforcing

Function syncadvforcing in GODUNOV_F.H vs. Fortran procedure in GODUNOV_2D.F
number of arguments 42 does NOT match 43.

Broken dependency on `create_constrained_umac_grown` function for main branch

I think this change, AMReX-Fluids/AMReX-Hydro#52, broke the main branch of IAMR. The development branch compiles without error so the change probably just needs to be applied to the main branch.

When I compile run2d on IAMR's main branch, this is the error I get:

g++ -MMD -MP  -Werror=return-type -g -O3  -pthread    -DAMREX_GIT_VERSION=\"21.12-69-gbe0d73dd6b67\" -DAMREX_RELEASE_NUMBER=211200 -DBL_SPACEDIM=2 -DAMREX_SPACEDIM=2 -DBL_FORT_USE_UNDERSCORE -DAMREX_FORT_USE_UNDERSCORE -DBL_Linux -DAMREX_Linux -DNDEBUG -DAMREX_NO_PROBINIT -Itmp_build_dir/s/2d.gnu.EXE -I. -I. -I../../Source -I../../Source/Src_2d -I../../Source/prob -I../../Source/Utilities -I../../../AMReX-Hydro/Slopes -I../../../AMReX-Hydro/Utils -I../../../AMReX-Hydro/MOL -I../../../AMReX-Hydro/Godunov -I../../../AMReX-Hydro/Projections -I/home/epalmer/amrex//Src/Base -I/home/epalmer/amrex//Src/Base/Parser -I/home/epalmer/amrex//Src/AmrCore -I/home/epalmer/amrex//Src/Amr -I/home/epalmer/amrex//Src/Boundary -I/home/epalmer/amrex//Src/LinearSolvers/MLMG -I/home/epalmer/amrex//Tools/C_scripts  -c ../../Source/NS_LES.cpp -o tmp_build_dir/o/2d.gnu.EXE/NS_LES.o
../../Source/NavierStokesBase.cpp: In member function ‘void NavierStokesBase::create_umac_grown(int, const amrex::MultiFab*)’:
../../Source/NavierStokesBase.cpp:1056:15: error: ‘create_constrained_umac_grown’ is not a member of ‘HydroUtils’
 1056 |   HydroUtils::create_constrained_umac_grown (level, nGrow, grids, crse_geom, fine_geom,
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Compiling 

Incorrect finite difference stencils in computing vorticity magnitude

Hi all,

It appears that the one-sided difference stencils used in computing the vorticity magnitude in IAMR/DERIVE_ND.F90 are incorrect. I would have expected:
uylo(i,j,k) = (U(i,j+1,k)+three*U(i,j,k)-four*U(i,j-1,k))/(three*dy)
to be
uylo(i,j,k) = (-three*U(i,j,k)+four*U(i,j+1,k)-U(i,j+2,k))/(two*dy),
etc. Maybe I am misinterpreting the code? I am happy to make the corrections if this is an issue.
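As a quick sanity check (my own Taylor-expansion sketch, not taken from the file), expanding about the point j gives

u_{j+1} = u_j + \Delta y\, u'_j + \tfrac{1}{2}\Delta y^2\, u''_j + O(\Delta y^3)
u_{j+2} = u_j + 2\Delta y\, u'_j + 2\Delta y^2\, u''_j + O(\Delta y^3)

so that

u'_j = \frac{-3 u_j + 4 u_{j+1} - u_{j+2}}{2\,\Delta y} + O(\Delta y^2),

which matches the proposed second-order one-sided stencil.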

Thanks!
Mike

Question about mask techniques in the IAMR codes

Hello all,

When I studied the mask code, I realized it is a technique to differentiate between kinds of boundaries (fine-fine, coarse-fine, physical). From the different mask values, we know which kind of boundary we are dealing with. Then, as I understand it, we can re-fill the ghost cell values according to that boundary type. The advantage is that we can use unified Fortran subroutines for the operations (e.g. compRHS and updatevel) without considering the differences between boundary types. I mean, the Fortran subroutines in AMReX and IAMR do not contain information about the parallel layout or the boundary conditions. It seems like a kind of "encapsulation".

I am not sure whether I understand this correctly. If it is like this, could someone point me to some code fragments (operations) that would help me learn the mask technique better?

Error when setting ns.init_iter = 0 in the 2D lid_driven case

Hello all,

As in the title, if the initialization process for setting the pressure is skipped in the lid_driven case, an error occurs:
"0::Assertion `old_data != 0' failed, file "../../../amrex/Src/Amr/AMReX_StateData.H", line 234 !!!
0::SIGABRT !!!"
It seems that some state data is not filled. Maybe it would be better to require that init_iter be larger than 0.

Some further questions: is the multilevel initialization process really necessary and important? What if one just sets the old and new pressure to 0 and lets the code run? Maybe the initialization process reduces the convergence time for the level and composite solvers, but I am not sure.

Jordan

question about velocity perturbations

Hello all, I am new to IAMR.
Here are my small questions about velocity perturbations. I want to superimpose a velocity perturbation on the inlet velocity, so I changed one line in the FORT_YVELFILL function in PROB_3D.F:
v(i,j,k) = adv_vel+vturb
where vturb is random noise. My case always stops in the initial step with a BOXLIB ERROR: Multigrid Solve: failed to converge in max_iter iterations, even when the perturbation is very small.
Is this method wrong, or is there another way to superimpose a velocity perturbation?

IAMR regression tests out of date

Would it be OK with the folks currently working on IAMR if we switched the CCSE regression tests for IAMR to be compile-only, instead of having them fail night after night? We could switch them back at any time.

MOREGENGETFORCE

I've noticed a lot of preprocessor flags involving GENGETFORCE and MOREGENGETFORCE. What is the difference between these? Is there a listing of the preprocessor flags and their effects anywhere? (I couldn't find any in the user guide or in the comments.)
Thanks

Pressure boundary condition

Hi,

I learned from the JCP paper that describes the IAMR formulation that an outflow uses phi_MAC=0 (no tangential acceleration condition). I would like to know if IAMR does a global mass conservation correction when dealing with inflow-outflow cases.

Errors Building on OSX

Just pulled today. Want to check this code out. It's not building for me though. I'm getting missing symbols during linking:

Undefined symbols for architecture x86_64:
"set_lohi", referenced from:
ns_basicstats_nctrac in SLABSTAT_NS_2D.o
ns_basicstats_ctrac in SLABSTAT_NS_2D.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status

Build environment is macOS 10.13.4 with gcc-5 via MacPorts. I attempted to build with and without MPI, and in both release and debug. Further details of the build can be found in the attached file.
compile.txt

Spatially varying viscosity that is a function of a state variable?

Hi,

I am looking to run multiphase incompressible simulations in IAMR.

As per many commonly used formulations, the idea is for the two fluids to be indicated by a passive advected scalar (e.g. the volume fraction) that is used to compute the viscosity:

[image: viscosity formula; based on the code below, mu = muG*c + muL*(1 - c), where c is the advected scalar]

where the scalar varies between 0 and 1.

As far as I can see from the documentation/comments, this is supported in IAMR, through the calcViscosity function in the NavierStokes class. From what I understand, the entire MultiFab referenced by the pointer visc[dir] is filled with a constant viscosity coefficient ParmParsed in from the inputs file (vel_visc_coef), as below:

for (int dir=0; dir<AMREX_SPACEDIM; dir++)
{
    visc[dir]->setVal(visc_coef[Xvel], 0, visc[dir]->nComp(), visc[dir]->nGrow());
}

What I want to do is rewrite this so that I can dereference the MultiFab pointer and extract the data using the array() functionality. Additionally, I would like the viscosity to be calculated using the Tracer variable in the MultiFab containing the state data. The code I have written for this is shown below (it would live inside calcViscosity):

ParmParse pp("ns");
pp.query("muG", muG);
pp.query("muL", muL);

MultiFab& S_new = get_new_data(State_Type);
auto whichTime = which_time(State_Type,time);
BL_ASSERT(whichTime == AmrOldTime || whichTime == AmrNewTime);
auto visc = (whichTime == AmrOldTime ? viscn : viscnp1);

for (int dir = 0; dir < AMREX_SPACEDIM; dir++)
{
    MultiFab& viscMF = *visc[dir];
#ifdef _OPENMP
#pragma omp parallel
#endif
    for (MFIter viscnewmfi(viscMF,true); viscnewmfi.isValid(); ++viscnewmfi)
    {
        const Box& vbx = viscnewmfi.tilebox();
        FArrayBox& Sfab = S_new[viscnewmfi];
        FArrayBox& viscFab = viscMF[viscnewmfi];
        auto lo = lbound(vbx);
        auto hi = ubound(vbx);
        Array4<Real> const& state = Sfab.array();
        Array4<Real> const& viscArray = viscFab.array();
        for (int i = lo.x; i <= hi.x; i++)
        {
            for (int j = lo.y; j <= hi.y; j++)
            {
                for (int k = lo.z; k <= hi.z; k++)
                {
                    viscArray(i,j,k,0) = muG*state(i,j,k,Tracer) + muL*(1.0-state(i,j,k,Tracer));
                }
            }
        }
    }
}
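For comparison, here is a sketch of the same fill written with MFIter tiling and amrex::ParallelFor. This is only a hypothetical alternative (not IAMR's own implementation); it assumes muG, muL, visc, S_new, and Tracer exactly as in the snippet above, and that visc[dir] and S_new are defined on the same BoxArray (which is exactly question 1 below):

// Sketch only: GPU/OpenMP-friendly version of the loop above.
Real muG_loc = muG;     // local copies so the lambda captures plain values
Real muL_loc = muL;
const int trac = Tracer;
for (int dir = 0; dir < AMREX_SPACEDIM; ++dir)
{
    MultiFab& viscMF = *visc[dir];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
    for (MFIter mfi(viscMF, TilingIfNotGPU()); mfi.isValid(); ++mfi)
    {
        const Box& bx = mfi.tilebox();
        Array4<Real> const& mu = viscMF.array(mfi);
        Array4<Real const> const& s = S_new.const_array(mfi);
        // Elementwise: mu = muG*c + muL*(1 - c), with c the advected Tracer.
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k) noexcept
        {
            mu(i,j,k) = muG_loc*s(i,j,k,trac) + muL_loc*(1.0 - s(i,j,k,trac));
        });
    }
}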
            

My questions are:

  1. Would the MultiFab referenced by visc[dir] necessarily have the same spatial indices as S_new in this case?
  2. Is there native support for variable viscosity, or a simpler solution than the code I have presented above that I haven't noticed? I saw that the April 2020 release notes say "no longer supporting constant mu", so I was wondering whether there is, or whether this comment refers to another aspect of the code such as Diffusion.cpp.
  3. Assuming my code is correct, is there anything else I would have to change in the code? The only comment is in the code above calcViscosity:
// Functions for calculating the variable viscosity and diffusivity.
// These default to setting the variable viscosity and diffusivity arrays
// to the values in visc_coef and diff_coef.  These functions would
// need to be replaced in any class derived from NavierStokes that
// wants variable coefficients.

So I'm assuming not.

Thank you for your support, I look forward to your response.

Low Mach constraint options in IAMR

Hello,

I am currently developing a model of atmospheric plume rise, for which I had previously implemented the conservative projection method (Almgren et al. 1998) in my own code, along with a variety of low Mach constraints (incompressible, Boussinesq, anelastic, and pseudo-incompressible). I'm keen now to take advantage of AMReX and develop IAMR for this purpose. My question is: are all these constraints currently implemented in IAMR, or is there only an option for incompressible at present?
I understand that MAESTROeX does have the low Mach pseudo-incompressible constraint, but not the viscous terms. I intend to keep the diffusive parts available in IAMR to develop a suitable subgrid turbulence model, so my preference is to use and modify IAMR. Thanks in advance for your help.
