
geodynamics / aspect


A parallel, extensible finite element code to simulate convection in both 2D and 3D models.

Home Page: https://aspect.geodynamics.org/

License: Other

CMake 1.39% C++ 93.30% Shell 1.15% Makefile 0.56% MATLAB 0.04% Python 1.26% Gnuplot 0.54% Perl 0.08% Jupyter Notebook 1.41% Dockerfile 0.08% sed 0.16% Groovy 0.03%
c-plus-plus geoscience mantle-convection cig geodynamics high-performance-computing

aspect's People

Contributors

alarshi, anne-glerum, bangerth, bobmyhill, cedrict, class4kayaker, cmills1095, danieldouglas92, djneu, eheien, gassmoeller, hfmark, hlokavarapu, ian-r-rose, jdannberg, jperryhouts, kiralyagi, ljhwang, ludovicjnnt, marinelasbleis, mfraters, mibillen, naliboff, rrgrove6, sac-bsa, shangxin-liu, siqizhang, spco, tjhei, zjiaqi2018


aspect's Issues

Problems with tracers and MPI

I've encountered crashes related to using tracers with MPI with two or more processes. Attached you'll find a simple parameter file that produces an error after several hundred (800-900) timesteps when run with two MPI processes. This occurred on Ubuntu 14.04 and also on OS X. I compiled deal.II and ASPECT with clang 3.4-1ubuntu3. I'm using the OpenMPI bundled with Ubuntu (which uses gcc 4.8.2 by default, but I've used the OMPI_CXX environment variable to make the wrappers use clang, because compiling deal.II with gcc results in internal compiler errors).

------- Input file:

# At the top, we define the number of space dimensions we would like to
# work in:
set Dimension                              = 2

# There are several global variables that have to do with what
# time system we want to work in and what the end time is. We
# also designate an output directory.
set Use years in output instead of seconds = true
set End time                               = 3e9
set Output directory                       = output
set Resume computation                     = false


# Then come a number of sections that deal with the setup
# of the problem to solve. The first one deals with the
# geometry of the domain within which we want to solve.
# The sections that follow all have the same basic setup
# where we select the name of a particular model (here,
# the box geometry) and then, in a further subsection,
# set the parameters that are specific to this particular
# model.
subsection Geometry model
  set Model name = box
  subsection Box
    set X periodic = false
    set X extent = 4.2e6
    set Y extent = 3e6
  end
end


# The following section deals with the discretization of
# this problem, namely the kind of mesh we want to compute
# on. We here use a globally refined mesh without
# adaptive mesh refinement.
subsection Mesh refinement
  set Initial global refinement          = 3
  set Initial adaptive refinement        = 2
  set Strategy                           = temperature
  set Time steps between mesh refinement = 3
  set Refinement fraction                = 0.3
  set Coarsening fraction                = 0.05
end


# The following two sections describe first the
# direction (vertical) and magnitude of gravity and the
# material model (i.e., density, viscosity, etc).
subsection Gravity model
  set Model name = vertical
  subsection Vertical
    set Magnitude = 9.81
  end
end

subsection Material model
  set Model name = simple
  subsection Simple model
    set Viscosity                  = 1.0E22
    set Thermal viscosity exponent = 4.60517
    set Reference temperature      = 1250
    set Reference density          = 3300
  end
end

# 7.38e-12 W/kg yields a mantle heat production of 22 TW
subsection Heating model
  set Model name = constant heating
  subsection Constant heating
    set Radiogenic heating rate = 7.38e-12
  end
end


# The next section deals with the initial conditions for the
# temperature (there are no initial conditions for the
# velocity variable since the velocity is assumed to always
# be in a static equilibrium with the temperature field).
# There are a number of models with the 'function' model
# a generic one that allows us to enter the actual initial
# conditions in the form of a formula that can contain
# constants. We choose a linear temperature profile that
# matches the boundary conditions defined below plus
# a small perturbation:
subsection Initial conditions
  set Model name = function
  subsection Function
    set Variable names      = x,y
    set Function constants  = p=-0.01, L=4.2e6, D=3e6, pi=3.1415926536, k=1, T_top=0, T_bottom=2500
    set Function expression = T_top + (T_bottom-T_top)*(1-(y/D) - p*sin(k*pi*x/L)*sin(pi*y/D))
  end
end


# We then also have to prescribe several other parts of the model
# such as which boundaries actually carry a prescribed boundary
# temperature (as described in the documentation of the `box'
# geometry, boundaries 2 and 3 are the bottom and top boundaries)
# whereas all other parts of the boundary are insulated (i.e.,
# no heat flux through these boundaries; this is also often used
# to specify symmetry boundaries).
subsection Model settings
  set Fixed temperature boundary indicators   = 2,3

  # The next parameters then describe on which parts of the
  # boundary we prescribe a zero or nonzero velocity and
  # on which parts the flow is allowed to be tangential.
  # Here, all four sides of the box allow tangential
  # unrestricted flow but with a zero normal component:
  set Zero velocity boundary indicators       =
  set Prescribed velocity boundary indicators =
  set Tangential velocity boundary indicators = 0,1,2,3
  set Remove nullspace = net x translation  

  # The final part of this section describes whether we
  # want to include adiabatic heating (from a small
  # compressibility of the medium) or from shear friction,
  # as well as the rate of internal heating. We do not
  # want to use any of these options here:
  set Include adiabatic heating               = false
  set Include shear heating                   = false
end


# Then follows a section that describes the boundary conditions
# for the temperature. The model we choose is called 'box' and
# allows to set a constant temperature on each of the four sides
# of the box geometry. In our case, we choose something that is
# heated from below and cooled from above. (As will be seen
# in the next section, the actual temperature prescribed here
# at the left and right does not matter.)
subsection Boundary temperature model
  set Model name = box
  subsection Box
    set Bottom temperature = 2500
    set Top temperature    = 0
  end
end


# The final part is to specify what ASPECT should do with the
# solution once computed at the end of every time step. The
# process of evaluating the solution is called `postprocessing'
# and we choose to compute velocity and temperature statistics,
# statistics about the heat flux through the boundaries of the
# domain, and to generate graphical output files for later
# visualization. These output files are created every time
# a time step crosses time points separated by 1e7 years.
subsection Postprocess
  set List of postprocessors = velocity statistics, temperature statistics, heat flux statistics, visualization, tracers, basic statistics
  subsection Visualization
    set Time between graphical output = 1e6
    set Output format = hdf5
    set List of output variables =  viscosity, density
  end
  subsection Tracers
    set Number of tracers = 1000
    set Time between data output = 1e6
    set Data output format = hdf5
  end
end

subsection Checkpointing
  set Steps between checkpoint = 200
end
------- Error messages

*** Timestep 37:  t=1.78152e+07 years
   Solving temperature system... 16 iterations.
   Rebuilding Stokes preconditioner...
   Solving Stokes system... 30+5 iterations.

   Postprocessing:
     RMS, max velocity:                  0.162 m/year, 0.56 m/year
     Temperature min/avg/max:            0 K, 1270 K, 2504 K
     Heat fluxes through boundary parts: 238 W, 302.3 W, -1.821e+05 W, 1.72e+04 W
     Advecting particles:                done

*** Timestep 38:  t=1.78973e+07 years
   Solving temperature system... 16 iterations.
   Rebuilding Stokes preconditioner...
   Solving Stokes system... 30+5 iterations.

   Postprocessing:


----------------------------------------------------
Exception on MPI process <1> while running postprocessor <
N6aspect11Postprocess14PassiveTracersILi2EEE>: 

----------------------------------------------------
Exception on MPI process <0
--------------------------------------------------------
An error occurred in line <732> of file </opt/aspect/include/aspect/particle/world.h> in function
    void aspect::Particle::World<2, aspect::Particle::BaseParticle<2> >::check_particle_count() [dim = 2, T = aspect::Particle::BaseParticle<2>]
The violated condition was: 
    global_particles==global_num_particles
The name and call sequence of the exception was:
    ExcMessage ("Particle count unexpectedly changed.")
Additional Information: 
Particle count unexpectedly changed.
--------------------------------------------------------
> while running postprocessor <
N6aspect11Postprocess14PassiveTracersILi2EEE>: 
Aborting!

--------------------------------------------------------
An error occurred in line <732> of file </opt/aspect/include/aspect/particle/world.h> in function
    void aspect::Particle::World<2, aspect::Particle::BaseParticle<2> >::check_particle_count() [dim = 2, T = aspect::Particle::BaseParticle<2>]
The violated condition was: 
    global_particles==global_num_particles
The name and call sequence of the exception was:
    ExcMessage ("Particle count unexpectedly changed.")
Additional Information: 
Particle count unexpectedly changed.
--------------------------------------------------------

Aborting!
----------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
----------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[merckx:28810] *** Process received signal ***
[merckx:28810] Signal: Aborted (6)
[merckx:28810] Signal code:  (-6)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[merckx:28811] *** Process received signal ***
[merckx:28811] Signal: Aborted (6)
[merckx:28811] Signal code:  (-6)
[merckx:28810] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36ff0) [0x7f59e7e07ff0]
[merckx:28810] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7f59e7e07f79]
[merckx:28810] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7f59e7e0b388]
[merckx:28810] [ 3] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbose+0) [0x7f59e9149c09]
[merckx:28810] [ 4] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbosef+0) [0x7f59e9149c9e]
[merckx:28810] [ 5] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_collective+0) [0x7f59e9149dbb]
[merckx:28810] [ 6] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbosev+0) [0x7f59e9149d46]
[merckx:28810] [ 7] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_memory_check+0xc1) [0x7f59e91494e2]
[merckx:28810] [ 8] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_package_unregister+0x3b) [0x7f59e914a0f6]
[merckx:28810] [ 9] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_finalize+0x3c) [0x7f59e914a69a]
[merckx:28810] [10] /usr/local/deal.II-dev/lib/libdeal_II.g.so.8.1.0(_ZN6dealii8internal5p4est12InitFinalize9SingletonD2Ev+0x2c) [0x7f59f0d6d84c]
[merckx:28810] [11] /lib/x86_64-linux-gnu/libc.so.6(+0x3c509) [0x7f59e7e0d509]
[merckx:28810] [12] /lib/x86_64-linux-gnu/libc.so.6(+0x3c555) [0x7f59e7e0d555]
[merckx:28810] [13] /usr/lib/libmpi.so.1(orte_ess_base_app_abort+0x20) [0x7f59e7b0ac00]
[merckx:28810] [14] /usr/lib/libmpi.so.1(+0xba2a9) [0x7f59e7b0a2a9]
[merckx:28810] [15] /usr/lib/libmpi.so.1(ompi_mpi_abort+0x249) [0x7f59e7aa9b69]
[merckx:28810] [16] ./aspect(_ZN6aspect11Postprocess7ManagerILi2EE7executeERN6dealii12TableHandlerE+0x3ac) [0xb1a68c]
[merckx:28810] [17] ./aspect(_ZN6aspect9SimulatorILi2EE11postprocessEv+0xe3) [0xa3e513]
[merckx:28810] [18] ./aspect(_ZN6aspect9SimulatorILi2EE3runEv+0x705) [0xa3a335]
[merckx:28810] [19] ./aspect(main+0x53b) [0xbe0a4b]
[merckx:28810] [20] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f59e7df2ec5]
[merckx:28810] [21] ./aspect() [0x837f26]
[merckx:28810] *** End of error message ***
[merckx:28811] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36ff0) [0x7f4b9de33ff0]
[merckx:28811] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7f4b9de33f79]
[merckx:28811] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7f4b9de37388]
[merckx:28811] [ 3] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbose+0) [0x7f4b9f175c09]
[merckx:28811] [ 4] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbosef+0) [0x7f4b9f175c9e]
[merckx:28811] [ 5] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_collective+0) [0x7f4b9f175dbb]
[merckx:28811] [ 6] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbosev+0) [0x7f4b9f175d46]
[merckx:28811] [ 7] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_memory_check+0xc1) [0x7f4b9f1754e2]
[merckx:28811] [ 8] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_package_unregister+0x3b) [0x7f4b9f1760f6]
[merckx:28811] [ 9] /opt/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_finalize+0x3c) [0x7f4b9f17669a]
[merckx:28811] [10] /usr/local/deal.II-dev/lib/libdeal_II.g.so.8.1.0(_ZN6dealii8internal5p4est12InitFinalize9SingletonD2Ev+0x2c) [0x7f4ba6d9984c]
[merckx:28811] [11] /lib/x86_64-linux-gnu/libc.so.6(+0x3c509) [0x7f4b9de39509]
[merckx:28811] [12] /lib/x86_64-linux-gnu/libc.so.6(+0x3c555) [0x7f4b9de39555]
[merckx:28811] [13] /usr/lib/libmpi.so.1(orte_ess_base_app_abort+0x20) [0x7f4b9db36c00]
[merckx:28811] [14] /usr/lib/libmpi.so.1(+0xba2a9) [0x7f4b9db362a9]
[merckx:28811] [15] /usr/lib/libmpi.so.1(ompi_mpi_abort+0x249) [0x7f4b9dad5b69]
[merckx:28811] [16] ./aspect(_ZN6aspect11Postprocess7ManagerILi2EE7executeERN6dealii12TableHandlerE+0x3ac) [0xb1a68c]
[merckx:28811] [17] ./aspect(_ZN6aspect9SimulatorILi2EE11postprocessEv+0xe3) [0xa3e513]
[merckx:28811] [18] ./aspect(_ZN6aspect9SimulatorILi2EE3runEv+0x705) [0xa3a335]
[merckx:28811] [19] ./aspect(main+0x53b) [0xbe0a4b]
[merckx:28811] [20] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f4b9de1eec5]
[merckx:28811] [21] ./aspect() [0x837f26]
[merckx:28811] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 28810 on node merckx exited on signal 6 (Aborted).
--------------------------------------------------------------------------
[merckx:28809] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[merckx:28809] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages


link.h not found on OS X

It looks like link.h is not available on OS X:

/Users/Jule/Documents/PhD/aspect/source/main.cc:30:12: fatal error: 'link.h' file not found
# include <link.h>

It is probably enough to skip this shared_lib validation on OS X.
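
A minimal sketch of such a guard (hedged; the macro and function names are hypothetical, and the actual check lives in main.cc):

    // <link.h> is glibc-specific, so only include it where it exists and
    // skip the shared-library validation elsewhere.
    #ifdef __linux__
    #  include <link.h>
    #  define ASPECT_HAVE_LINK_H
    #endif

    void validate_shared_libs ()          // hypothetical name
    {
    #ifdef ASPECT_HAVE_LINK_H
      // ... inspect the loaded shared objects via dl_iterate_phdr() ...
    #else
      // OS X and other systems without <link.h>: nothing to check.
    #endif
    }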

Problem with hanging node position

Hi all,

After updating to the development version, I found that the hanging node positions in the output are separated from the lines they sit on. I wonder if there is a bug in the mapping of hanging nodes, or if something is wrong with the dependent packages I built.
Here are the versions of the main packages I used:
p4est 1.1
Trilinos 11.12.1
deal.II 8.2.1

To reproduce this problem:
Use tests/simple-incompressible.prm
Change two lines in "subsection Mesh refinement" to get some refinement:
set Initial adaptive refinement = 3
set Strategy = topography

Would anyone give it a try to see if it is repeatable?

Regards,

Siqi

[attached image: hanging_node_error]

'Write a lot of output files' bug?

This is mostly a reminder to myself to come back to this issue, but maybe there is still a bug when writing a lot of visualization output per model run. The idea came up due to a model that did not run during our tutorial (it is here: https://github.com/gassmoeller/aspect/blob/output_file_number/tutorial.prm ), and Juliane and Menno reported similar issues. I first thought it would be solved by #201, but it is still around. The model crashes when writing around the 1000th visualization file (1007 in my case; not always the same number, but around 1000), and it does not crash when we write no visualization output, or fewer output steps. Maybe there is a memory leak, or an unclosed file handle somewhere?

Error message:

*** Timestep 1008: t=1.16328e+11 years
Solving temperature system... 3 iterations.
Solving Stokes system... 0 iterations.

Postprocessing:


Exception on MPI process <0> while running postprocessor :


An error occurred in line <6743> of file </home/rengas/Software/deal.II-8.1/source/base/data_out_base.cc> in function
void dealii::DataOutInterface<dim, spacedim>::write_visit_record(std::ostream&, const std::vector<std::vector<std::basic_string<char> > >&) const [with int dim = 2, int spacedim = 2, std::ostream = std::basic_ostream<char>]
The violated condition was:
out
The name and call sequence of the exception was:
ExcIO()
Additional Information:

(none)

Aborting!


MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on

exactly when Open MPI kills them.

[rengas-laptop:10673] *** Process received signal ***
[rengas-laptop:10673] Signal: Aborted (6)
[rengas-laptop:10673] Signal code: (-6)
[rengas-laptop:10673] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x7f1f4cbcfcb0]
[rengas-laptop:10673] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7f1f4b94e0d5]
[rengas-laptop:10673] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b) [0x7f1f4b95183b]
[rengas-laptop:10673] [ 3] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbose+0) [0x7f1f4d3b3a9d]
[rengas-laptop:10673] [ 4] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbosef+0) [0x7f1f4d3b3b32]
[rengas-laptop:10673] [ 5] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_collective+0) [0x7f1f4d3b3c4f]
[rengas-laptop:10673] [ 6] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_abort_verbosev+0) [0x7f1f4d3b3bda]
[rengas-laptop:10673] [ 7] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_memory_check+0xc1) [0x7f1f4d3b33aa]
[rengas-laptop:10673] [ 8] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_package_unregister+0x3b) [0x7f1f4d3b3fa1]
[rengas-laptop:10673] [ 9] /home/rengas/Software/p4est-0.3.4.2/DEBUG/lib/libsc.so.0(sc_finalize+0x3c) [0x7f1f4d3b453f]
[rengas-laptop:10673] [10] /home/rengas/Software/deal.II-8.1/lib/libdeal_II.g.so.8.1.0(_ZN6dealii8internal5p4est12InitFinalize9SingletonD1Ev+0x22) [0x7f1f53f76bec]
[rengas-laptop:10673] [11] /lib/x86_64-linux-gnu/libc.so.6(+0x3b5b1) [0x7f1f4b9535b1]
[rengas-laptop:10673] [12] /lib/x86_64-linux-gnu/libc.so.6(+0x3b635) [0x7f1f4b953635]
[rengas-laptop:10673] [13] /usr/lib/libopen-rte.so.0(orte_ess_base_app_abort+0x20) [0x7f1f480c5200]
[rengas-laptop:10673] [14] /usr/lib/libopen-rte.so.0(orte_errmgr_base_error_abort+0xfd) [0x7f1f480c473d]
[rengas-laptop:10673] [15] /usr/lib/libmpi.so.0(ompi_mpi_abort+0x255) [0x7f1f4c50cec5]
[rengas-laptop:10673] [16] /home/rengas/Software/Aspect-Versionen/aspect/debug/aspect(_ZN6aspect11Postprocess7ManagerILi2EE7executeERN6dealii12TableHandlerE+0x2f9) [0x9b0bf1]
[rengas-laptop:10673] [17] /home/rengas/Software/Aspect-Versionen/aspect/debug/aspect(_ZN6aspect9SimulatorILi2EE11postprocessEv+0xd2) [0xc3200e]
[rengas-laptop:10673] [18] /home/rengas/Software/Aspect-Versionen/aspect/debug/aspect(_ZN6aspect9SimulatorILi2EE3runEv+0x689) [0xc2d6f5]
[rengas-laptop:10673] [19] /home/rengas/Software/Aspect-Versionen/aspect/debug/aspect(main+0x3c6) [0xb57f7d]
[rengas-laptop:10673] [20] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f1f4b93976d]
[rengas-laptop:10673] [21] /home/rengas/Software/Aspect-Versionen/aspect/debug/aspect() [0x897e79]

[rengas-laptop:10673] *** End of error message ***

mpirun noticed that process rank 0 with PID 10673 on node rengas-laptop exited on signal 6 (Aborted).

[rengas-laptop:10672] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[rengas-laptop:10672] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Use curved mapping also in the interior

From a mailing list thread today:

We create the mapping in core.cc like this:

    mapping (parameters.free_surface_enabled ? 1 : 4),

so this calls this constructor:

    MappingQ (const unsigned int p,
              const bool use_mapping_q_on_all_cells = false);

I think it still uses the straight mapping in the interior. Of course, this may well be a good target for a patch in itself.
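
A minimal sketch of the corresponding change, assuming the optional second constructor argument is all that is needed:

    // Hedged sketch: request the higher-order mapping on all cells,
    // not only on cells at the curved boundary.
    mapping (parameters.free_surface_enabled ? 1 : 4,
             /* use_mapping_q_on_all_cells = */ true),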

Check viscous dissipation formula

If the viscous dissipation formula is sigma : \dot\varepsilon, then eq. (3) is not correct, because it contains the term -1/3 (div u) I on both sides of the double contraction. Double-check the manual and the code.
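
Spelled out in LaTeX (a sketch of the two candidate forms; whether and when they agree is exactly what needs to be checked):

    \Phi = \sigma : \dot\varepsilon
    \qquad \text{vs.} \qquad
    \Phi = 2\eta \left( \dot\varepsilon - \tfrac{1}{3} (\nabla \cdot \mathbf{u})\, \mathbf{1} \right)
                : \left( \dot\varepsilon - \tfrac{1}{3} (\nabla \cdot \mathbf{u})\, \mathbf{1} \right)

The second form carries the deviatoric correction on both sides of the double contraction, which is what eq. (3) currently does.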

optional simplified adiabatic heating

We are using the total pressure instead of the static pressure in the adiabatic heating term. This is in contrast to other codes, so it might be a good idea to supply an option that lets the user decide which one to use. See #207.
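
Written out (hedged; the exact signs and notation should be checked against the manual), the two variants of the term would be:

    \alpha T \, (\mathbf{u} \cdot \nabla p)
    \qquad \text{vs.} \qquad
    \alpha T \rho \, (\mathbf{u} \cdot \mathbf{g}),

where the simplified variant replaces the gradient of the total pressure by the hydrostatic \nabla p_{\text{static}} = \rho \mathbf{g}.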

Velocity output in m/year

When the option 'Use years in output instead of seconds' is chosen, the velocity is scaled when it is read from the input file; however, it is not scaled back in the visualization output.
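
A minimal sketch of the missing back-conversion (all names hypothetical, assuming velocities are stored internally in SI units):

    // If "Use years in output instead of seconds" is set, scale the
    // velocity from m/s back to m/year before writing graphical output.
    const double year_in_seconds = 60.0 * 60.0 * 24.0 * 365.2425;
    const double output_velocity = parameters.convert_to_years
                                   ? velocity_si * year_in_seconds
                                   : velocity_si;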

Restart lets all CPUs write the welcome message

When restarting a model with MPI on 192 cores, I get 192 welcome messages ("This is ASPECT ...") in log.txt, but only one on stdout. Maybe some if (cpu == 0) guard only works for stdout but not for log.txt?

Manual references non-existent files

The manual has these lines:

\lstinputlisting[language=prmfile]{cookbooks/overview/boundary-conditions.part.prm}
\lstinputlisting[language=prmfile]{cookbooks/benchmarks/burstedde/burstedde.prm.out}

However, the first of these files does not exist at all, and the second one exists only in the .prm form but isn't checked in in the .prm.out form.

I'm going to address the second issue in a second, but don't know what to do about the first. @tjhei : do you recall how this happened?

Model time as time between checkpoints

Sometimes users may want checkpoints at fixed intervals of model time; currently, however, they can only specify the number of time steps or the wall time between checkpoints.
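
A minimal sketch of the additional trigger (names hypothetical):

    // Also checkpoint whenever a given amount of *model* time has elapsed,
    // in addition to the existing step-count and wall-time criteria.
    if (time - last_checkpoint_model_time >= parameters.checkpoint_time_interval)
      {
        create_snapshot ();
        last_checkpoint_model_time = time;
      }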

entropy viscosity

Right now we use the same stabilisation terms for the temperature and the composition advection, so in the place where we calculate the advection system residual, the additional composition terms (like the reaction term) as well as the latent heat terms are missing.

Exploit more parallelism

Currently, we run ASPECT with one thread per MPI process. What we should really do is run it with n_cpus/n_mpi_processes_on_this_machine threads per process, to make optimal use of each node.
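
A minimal sketch of what this could look like, assuming deal.II's task scheduler handles the intra-process parallelism (the two counting variables are hypothetical):

    // Give each MPI rank its fair share of the cores on the node it runs on.
    const unsigned int n_threads = n_cpus / n_mpi_processes_on_this_machine;
    dealii::MultithreadInfo::set_thread_limit (n_threads);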

Governing equation approximations

Hi all,

So we have discussed this in one form or another a couple of times, but I thought I would bring it up again. As we know, there are several approximations to the equations for mass, momentum, and energy that are commonly used in mantle convection (Boussinesq, anelastic, etc.). Some are more thermodynamically consistent than others. Discussing this with Juliane and Rene this week, I think we agreed that this could be clarified in ASPECT. It recently reared its head when trying to reproduce the Zhong et al. 2008 benchmark, as the "simple" material model is not Boussinesq in the strictest sense (it does not use the reference density for the thermal diffusivity, as I recall? I have not verified this, but it is what Rene indicated to me). I think it is important to be able to use these approximations with confidence.

I see two different ways of addressing this:

  1. The easier solution. Try to introduce a Boussinesq material model which does exactly that and nothing more. The reference density is used everywhere except for the terms which multiply gravity. Additionally, it would be nice for Aspect to query the material model about its compressibility, look at whether viscous heating and adiabatic heating are included, and then infer which approximation is being used. This would be useful output information at the start of a run.

  2. Fully support the relevant approximations that are used (Boussinesq, extended Boussinesq, anelastic) so that it can be specified in a parameter file. This would require some extensive changes to the assembly of the matrix, as well as some checks for compressibility in the material model, but it may be worth it.

I am willing to work on this, but I'd like to hear thoughts from people as to what would be most appropriate before starting.
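
A minimal sketch of option 1 (member names hypothetical): the reference density is used everywhere except where density multiplies gravity:

    // Hedged sketch of a strictly Boussinesq material model.
    double density (const double temperature) const
    {
      // enters only the buoyancy term rho * g:
      return reference_rho * (1.0 - thermal_alpha * (temperature - reference_T));
    }

    double thermal_diffusivity () const
    {
      // everywhere else the constant reference density is used, so that
      // kappa = k / (rho_0 c_p) does not vary with temperature:
      return thermal_k / (reference_rho * reference_specific_heat);
    }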

log.txt gets created on every core

It seems we are creating/opening log.txt on every process (core.cc:176), although only process 0 will later write into it. Would anyone oppose the idea of only creating log.txt on process 0? I recently had some models where I wrote output into local /tmp directories and later copied it together, and I always lost the important log.txt because the empty log.txt files overwrote its content. It was no big issue, because I had saved the screen output, but anyway.
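
A minimal sketch of the proposed change (hedged; the actual stream handling in core.cc is more involved):

    // Open log.txt only on process 0. The other ranks keep a closed
    // stream; writes to it set the failbit but create no file.
    std::ofstream log_file;
    if (dealii::Utilities::MPI::this_mpi_process (mpi_communicator) == 0)
      log_file.open ((parameters.output_directory + "log.txt").c_str());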

extract spherical_surface_coordinates

Move functions such as 'spherical_surface_coordinates' in 'initial_conditions/harmonic_perturbation' and 'velocity_boundary_conditions/gplates' to somewhere central, so that we do not need to write them inside different classes.

Improve AMG for variable viscosity

As shown in issue #234, we don't currently do a particularly good job of preconditioning the A block of the matrix. An approach that does better is discussed in May & Moresi, PEPI 171, pp. 33-47. It would just have to be implemented :-)

Let Aspect create its output directory

It might sound trivial, but a dozen times now I have submitted a model to a cluster queue and noticed the next day that I had forgotten to create the output directory. Could we make it possible for ASPECT to create its output directory at run time? Or are there compatibility problems with different operating systems or file systems?

number of depth slices in seismic_anomalies

The computation of seismic anomalies in seismic_anomalies.cc is confusing and currently uses a fixed number of 50 depth slices. Document this better, and make n_slices a run-time parameter.
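
A minimal sketch of the declaration, following the declare_entry pattern used elsewhere in the code (name and documentation text are suggestions):

    prm.declare_entry ("Number of depth slices", "50",
                       Patterns::Integer (1),
                       "The number of depth slices used to compute "
                       "the reference profile from which the seismic "
                       "anomalies are measured.");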

Boundary and Initial conditions directory structure of Aspect

Considering that we already have a number of significant changes for the next release version of ASPECT, I would like to propose one more rather large change. The current naming structure of source and include has developed in a suboptimal way for the boundary and initial condition plugins. We have three different naming conventions (1: boundary_composition/boundary_temperature, 2: initial_conditions, 3: compositional_initial_conditions/velocity_boundary_conditions), and especially when explaining the code to new users and introducing them to where to make changes, this is confusing.
I would personally consider naming convention 1 the best; if the folders were renamed that way, we would have the following folders:
boundary_composition, boundary_temperature, boundary_velocity, initial_temperature, initial_composition
I am aware that this change introduces a lot of incompatibility (for user-written plugins that are not in master) and work (updating many include statements, also in shared libraries for benchmarks and test cases) and should be considered carefully before implementing it, but I also think that the improvement in structure and user-friendliness would be worth the effort, especially considering that this change would no longer be possible once we are in a later, more stable stage of development.
Please let me know if you find this idea unfeasible or unnecessary; I appreciate any comment.

Sort the names of plugins

There are quite a number of plugins these days, so we should sort them alphabetically in all places where they are listed (as elements of a Selection/MultipleSelection, as well as when generating the manual).

min refinement function may crash on input files

There is a design flaw in the min refinement function. The problem is that we do

     prm.declare_entry ("Coordinate system", "depth",
                         Patterns::Selection ("depth|cartesian|spherical"),
                         "A selection that determines the assumed coordinate "
                         "system for the function variables. Allowed values "
                         "are 'depth', 'cartesian' and 'spherical'. 'depth' "
                         "requires a function expression that only "
                         "depends on one variable, which is interpreted to "
                         "be the depth of the point (in meters). 'spherical' coordinates "
                         "are interpreted as r,phi or r,phi,theta in 2D/3D "
                         "respectively with theta being the polar angle.");

      Functions::ParsedFunction<dim>::declare_parameters (prm, 1);

The last call declares a parameter "Variables" that is preset to "x,y" or "x,y,z", depending on dim. But later on, we do

          if (coordinate_system == depth)
            min_refinement_level_depth.parse_parameters (prm);
          else
            min_refinement_level_position.parse_parameters (prm);

where min_refinement_level_depth is a Functions::ParsedFunction<1>. So if we use 'depth' (the default), the parse_parameters() call will error out, saying that "x,y,z" is not a valid choice for the 'Variables' input parameter.

This means that when one selects "minimum refinement function", one gets an error on the input file unless one does something particular. I think we should at least make the default case work right, which would require doing

    Functions::ParsedFunction<1>::declare_parameters (prm, 1);

instead of

    Functions::ParsedFunction<dim>::declare_parameters (prm, 1);

The problem is that we can't make every case work :-(

The same is probably true for the maximum refinement function case as well.

Logging of output

  • all output should be copied into output/log.txt
  • optionally: -q option to suppress most of the output to the screen.

Think about: what happens when you snapshot/resume? Append to log.txt?

Models with constant composition do not converge

The composition solver produces problems for constant compositional fields, because we use the right-hand side of the equation for the tolerance computation, and that right-hand side is approximately zero in many of these cases. For now I always use slightly perturbed fields, but in some cases this may not be possible (e.g., if you track the sum of something), and it is nowhere documented that compositional fields should have at least some perturbation, so new users fall into this pit as well.

Any ideas for a better tolerance computation, or for a criterion to skip the solve right away if the new solution would equal the old one?
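
One possible criterion, as a hedged sketch (names hypothetical): skip the solve outright when the right-hand side is numerically zero, since the old solution then already satisfies the system and a relative tolerance based on the norm of the right-hand side is meaningless:

    const double rhs_norm = system_rhs.l2_norm ();
    if (rhs_norm <= 1e-300)          // essentially zero
      solution = old_solution;       // field does not change; nothing to solve
    else
      solver.solve (system_matrix, solution, system_rhs, preconditioner);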

implementing a stress post-processor

If anyone has time to do this, it would be great to have a postprocessor that outputs the stress tensor, its principal values (sigma_1, sigma_2, sigma_3), and the normal stress.
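
A hedged sketch of the core computation using deal.II's tensor helpers (the surrounding postprocessor plumbing and variable names are hypothetical):

    // Full stress sigma = 2 eta dev(eps(u)) - p I at one evaluation point.
    const SymmetricTensor<2,dim> strain_rate = symmetrize (grad_u);
    const SymmetricTensor<2,dim> stress =
      2.0 * eta * deviator (strain_rate)
      - pressure * unit_symmetric_tensor<dim> ();
    // The principal stresses sigma_1 >= sigma_2 >= sigma_3 are the
    // eigenvalues of 'stress'; the normal stress on a plane with unit
    // normal n is n * (stress * n).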

initialize() in plugins is not useful

In many plugins we call the user's initialize() function immediately after constructing the plugin, which is before the plugin has simulator access. This is rather useless, because one could use the constructor instead.
I would like to suggest moving the initialize() call to after simulator access has been set up.

reaction term and mass conservation

We have the option of reactions between compositional fields, and in this process the density of the fields might change, which is taken into account, e.g., in the buoyancy term. Do we need an additional term in the mass conservation equation to account for this?
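
Written out, the question is whether the usual simplification still holds (a sketch of the question, not an answer):

    \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
    \quad \Longleftrightarrow \quad
    \nabla \cdot \mathbf{u} = -\frac{1}{\rho} \frac{D\rho}{Dt}.

If reactions change the density via the compositional fields, D\rho/Dt acquires a reaction contribution, so \nabla \cdot \mathbf{u} = 0 may no longer be consistent even for an otherwise incompressible model.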

Improve initialization of initial conditions and adiabatic conditions

Our current initialization of initial conditions and adiabatic conditions is rather confusing. Since the initial conditions might depend on the adiabatic conditions, we hand over a pointer to the adiabatic conditions at creation time (probably a leftover from the time when no SimulatorAccess was possible). The pointer to the initial conditions is reset after the creation of the adiabatic conditions (another reset). The adiabatic conditions themselves cannot be created in the initialization list, since they depend on a lot of plugin pointers, which might need SimulatorAccess, so SimulatorAccess needs to be set up before creating the AdiabaticConditions. Additionally, the files of AdiabaticConditions live in the main directories of ASPECT, which is not a good place for them (since they are not that essential).
I have created two branches in my aspect fork (simulator_access_initial_conditions and flexible_adiabatic_conditions, the latter depending on the former) that deal with these problems at the cost of backward compatibility for user-written plugins. They are intended as suggestions; if anyone sees a better way to resolve these structural problems, which we have carried around for more than a year, that would be great.
A short description of the changes:

  1. Most initial conditions are now derived from SimulatorAccess, so they do not depend on the calculation of the AdiabaticConditions, but can still use them when they are called (we could keep the other pointers I removed to improve compatibility, but then people would use them, and we have SimulatorAccess for this kind of thing).
  2. The adiabatic conditions were changed to a plugin architecture. This removes the need to hand over pointers to the AdiabaticConditions, because plugins can be derived from SimulatorAccess. The AdiabaticConditions can then be initialized later on, like any other plugin, and even updated over time (with the averaged composition/temperature) or defined by whatever function the user would like to use.

Any comment or thought is welcome!

Deal with the reference_ functions in the material model interface

The get_reference_...() functions in the material model are somewhat strange, since they are either only important for postprocessing (reference_thermal_expansivity, reference_density) or are used for things not really related to the material model (reference_viscosity, for pressure scaling). Still, in the current state they have to be implemented. We should either find a consistent way to use them, or discard them straight away and handle the places where they are called in a different way.

Support Trilinos direct solvers

Since r33118, deal.II supports several kinds of direct solvers that are interfaced through Trilinos. We should make it possible to use these in ASPECT, in particular SuperLUdist, in the same way as we currently support direct solvers through PETSc.
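
A minimal sketch of what using the deal.II wrapper could look like (hedged; the matrix and vector names are hypothetical, and the real work is the integration into the Stokes solver):

    SolverControl solver_control;
    TrilinosWrappers::SolverDirect::AdditionalData data
      (/* output_solver_details = */ false,
       /* solver_type           = */ "Amesos_Superludist");
    TrilinosWrappers::SolverDirect direct_solver (solver_control, data);
    direct_solver.solve (system_matrix, solution, system_rhs);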

Need to initialize SimulatorAccess of plugins before parse_parameters()

We now have a neat, uniform way of initializing plugins for SimulatorAccess -- but it happens too late. We need to do this before we call parse_parameters(). To do this, we could either let the create_..._plugin() functions do it right after the plugin is created, or we could move the call to parse_parameters() out of create_..._plugin() to right after we call initialize(*this) in core.cc.

Ideas?

Rationale: I'm working on being able to use symbolic names for boundary components. To do this, one needs to be able to ask the geometry model which boundary indicator corresponds to the string "top". The problem is that some plugins may want to do this already at the time they call parse_parameters(). An example is boundary_temperature/constant.cc, which allows providing a list of pairs

    boundary_id : value

where one can specify the constant temperature for each boundary component. Previously, boundary_id was a number, but now it may be a string that we need to translate using the geometry model -- but we can't, because at the point where we want to do this, we have no access to the geometry model :-( I suspect that other plugins will want to do similar things...
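
A hedged sketch of the first option (names follow the description above; details hypothetical):

    // Inside create_..._plugin(): hand out simulator access first, then
    // parse parameters, so that parse_parameters() can already query the
    // geometry model for symbolic boundary names.
    Interface<dim> *plugin = registered_factory_function ();
    if (SimulatorAccess<dim> *access
          = dynamic_cast<SimulatorAccess<dim>*> (plugin))
      access->initialize (simulator);
    plugin->parse_parameters (prm);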

periodic_box.prm fails

see test output here: http://cdash.kyomu.43-1.org/testDetails.php?test=4826757&build=2653

The matrix A is not invertible because of the periodic boundary conditions. We already know that the coarse-grid solver of the AMG (set to "KLU") is the problem here.

This test started failing when we changed the order of the mapping from Q4 to Q1, even though we are operating on a box, so that shouldn't matter. See #242 for a lengthy discussion without a solution so far.

Improve particle code

The current state of particles has two things that I'd like to improve:

Particles are distributed throughout the domain. It would be useful if we could provide a density function, so that one can place them only in a single region (e.g., an LLSVP at the base of the mantle) and see where they go. (See the sketch below.)

Right now, particles only carry a single attribute: their ID, which is essentially a random number. What one would like to have is one (or more) attributes one can assign to each particle. This way, one could (using the same example) assign one particular value to particles in an LLSVP and another value to all of the other particles, and see where they go with the advection.
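
For the first point, a hedged sketch of a density-weighted generator via rejection sampling (the helper functions are hypothetical):

    // Draw particle positions distributed according to a user-supplied,
    // unnormalized density function (e.g., nonzero only inside an LLSVP).
    template <int dim>
    Point<dim> random_particle_location (const Function<dim> &density,
                                         const double max_density)
    {
      while (true)
        {
          const Point<dim> p = random_point_in_domain<dim> (); // hypothetical
          if (uniform_01 () * max_density <= density.value (p))
            return p;
        }
    }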

shell_simple_3d crashes

When running shell_simple_3d.prm with 32 processors, it works for 16 time steps but then I get this:

*** Timestep 15: t=7.83037e+06 years
Solving temperature system... 21 iterations.
Solving Stokes system... 30+14 iterations.

Postprocessing:
RMS, max velocity: 0.0434 m/year, 0.162 m/year
Temperature min/avg/max: 973 K, 1409 K, 1973 K
Heat fluxes through boundary parts: -7.917e+12 W, 3.016e+12 W
Writing depth average output/depth_average.vtu

Number of active cells: 74,324 (on 6 levels)
Number of degrees of freedom: 3,053,696 (2,213,988+101,712+737,996)

*** Timestep 16: t=8.35066e+06 years
Solving temperature system... 19 iterations.
Rebuilding Stokes preconditioner...
Solving Stokes system... 30+19 iterations.

Postprocessing:
Writing graphical output: output/solution-00008
RMS, max velocity: 0.00439 m/year, 0.0298 m/year
Temperature min/avg/max: 973 K, 1408 K, 1973 K
Heat fluxes through boundary parts: -7.68e+12 W, 2.922e+12 W

*** Timestep 17: t=1.00836e+07 years
Solving temperature system...


An error occurred in line <828> of file </u/bangerth/p/deal.II/3/install-bottom/include/deal.II/lac/solver_gmres.h> in function
void dealii::SolverGMRES<VECTOR>::solve(const MATRIX&, VECTOR&, const VECTOR&, const PRECONDITIONER&) [with MATRIX = dealii::TrilinosWrappers::SparseMatrix; PRECONDITIONER = dealii::TrilinosWrappers::PreconditionILU; VECTOR = dealii::TrilinosWrappers::MPI::Vector]
The violated condition was:
false
The name and call sequence of the exception was:
SolverControl::NoConvergence (this->control().last_step(), this->control().last_value())
Additional Information:

Iterative method reported convergence failure in step 737996 with residual 3.30962e+26

Does anyone else see this? I'm running with
mpirun -np 32 aspect ../Aspect/cookbooks/shell_simple_3d.prm

Make the ASPECT virtual machine compatible with VMware

The current ASPECT virtual machine uses the .ova file format, which in principle is compatible with most virtualization software. Currently, however, Oracle-specific extensions are included that seem to prevent the image from running with, for example, VMware software (I just got a question from a potential new ASPECT user about this; I have not checked it myself). Would it be possible to remove these extensions in future releases to increase the compatibility of the image? I am too little involved in the process of creating these images to know the problems/work involved, but Jonathan or Timo, could you take a look at this at some point?

Better document compositional fields

The manual presently assumes that the compositional field equations have a zero right-hand side, but this is no longer necessarily true. Furthermore, make a note somewhere that the default assumption in material models is that the compositional fields are in equilibrium, and that they therefore need to return increments rather than reaction rates.
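
The distinction to document, written out (hedged):

    \text{increment (what material models return):} \quad c_i^{n+1} = c_i^{n} + \Delta c_i
    \qquad \text{vs.} \qquad
    \text{rate:} \quad \frac{\partial c_i}{\partial t} = r_i .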
