
miniAMR's Introduction

miniAMR - Adaptive Mesh Refinement Mini-App

miniAMR applies a stencil calculation on a unit cube computational domain, which is divided into blocks. The blocks all have the same number of cells in each direction and communicate ghost values with neighboring blocks. With adaptive mesh refinement, the blocks can represent different levels of refinement in the larger mesh. Neighboring blocks can be at the same level or one level apart, which means that the cell lengths in neighboring blocks can differ by at most a factor of two in each direction. The calculation on the variables in each cell is an average of the values in the chosen stencil. The refinement and coarsening of the blocks is driven by objects that are pushed through the mesh: if a block intersects the surface or the volume of an object, that block can be refined. There is also an option to uniformly refine the mesh. Each cell contains a number of variables, each of which is evaluated independently.
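
As an illustration only (the array names below are hypothetical, not taken from the miniAMR source), the per-cell update described above amounts to replacing each interior cell of a block with the average of itself and its six face neighbors, with index 0 and index block_size+1 holding the ghost values exchanged with neighboring blocks:

/* Illustrative 7-point stencil average over the interior cells of one block. */
for (i = 1; i <= x_block_size; i++)
   for (j = 1; j <= y_block_size; j++)
      for (k = 1; k <= z_block_size; k++)
         new_val[i][j][k] = (old_val[i-1][j][k] + old_val[i+1][j][k] +
                             old_val[i][j-1][k] + old_val[i][j+1][k] +
                             old_val[i][j][k-1] + old_val[i][j][k+1] +
                             old_val[i][j][k]) / 7.0;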

Questions? Contact Courtenay Vaughan ([email protected])

miniAMR's People

Contributors

ctvaugh, hughes-c, moehre2, nmhamster

miniAMR's Issues

OpenMP version of miniAMR

Hello,

I am from Samsung Semiconductor, Inc., and I am analyzing proxy apps that are of interest to Sandia.

I am able to build and run the reference MPI version of miniAMR, but I am having issues running the OpenMP version, which looks like a work in progress based on the paper referenced in the README.

I also applied the patch here:
#8

but that seems to get us only part of the way there.

I used the one sphere moving diagonally on 27 processors as my test case.
(https://github.com/Mantevo/miniAMR/tree/master/ref)

Thanks in advance.

Potential data race

Hello, my group is working on a race detection tool, and it reported a potential data race in this repository. We wanted to check with you to see whether the race is real.

The reported race is in the OpenMP parallel region at stencil.c:91.

#pragma omp parallel default(shared) private(i, j, k, bp)
{
      for (in = 0; in < sorted_index[num_refine+1]; in++) {
         bp = &blocks[sorted_list[in].n];
         for (i = 1; i <= x_block_size; i++)
            for (j = 1; j <= y_block_size; j++)
               for (k = 1; k <= z_block_size; k++)
                  work[i][j][k] = (bp->array[var][i-1][j  ][k  ] +
                                   bp->array[var][i  ][j-1][k  ] +
                                   bp->array[var][i  ][j  ][k-1] +
                                   bp->array[var][i  ][j  ][k  ] +
                                   bp->array[var][i  ][j  ][k+1] +
                                   bp->array[var][i  ][j+1][k  ] +
                                   bp->array[var][i+1][j  ][k  ])/7.0;
         for (i = 1; i <= x_block_size; i++)
            for (j = 1; j <= y_block_size; j++)
               for (k = 1; k <= z_block_size; k++)
                  bp->array[var][i][j][k] = work[i][j][k];
      }
}

The induction variable of the outer for loop, in, appears to be shared among all threads. This causes a write/write race when in is incremented, which could leave the value of in undefined.

Also, it may be possible for bp to point to the same location on multiple threads. If multiple threads set in to 0 for the initial loop iteration, the threads will read the same value from sorted_list and use the same offset into blocks for bp at line 94:

// bp points to the same location if in has the same value on different threads
bp = &blocks[sorted_list[in].n]; 

This potentially leads to many races: when multiple threads hold the same value in bp, the write to bp->array[var][i][j][k] at line 108 races with itself and with all the reads of bp->array on lines 98-104.

I am not familiar with this code, so I am not sure that I understand what is happening correctly.
Does this look like a real race to you?
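
For what it is worth, one common way to remove this kind of race is to let OpenMP workshare the outer block loop, so that the loop index, the block pointer, and the scratch buffer are all per-thread. The sketch below reuses the names from the snippet above; the per-thread work buffer and the assumption that each block appears only once in sorted_list are mine, and this is not presented as the developers' fix.

#pragma omp parallel default(shared) private(i, j, k, bp)
{
   /* Per-thread scratch buffer (an assumption; in the quoted code, work is shared). */
   double work[x_block_size+2][y_block_size+2][z_block_size+2];
#pragma omp for schedule(static)
   for (in = 0; in < sorted_index[num_refine+1]; in++) {
      /* 'in' is the loop index of the omp for construct, so it is private to each
         thread, and each block is processed by exactly one thread. */
      bp = &blocks[sorted_list[in].n];
      for (i = 1; i <= x_block_size; i++)
         for (j = 1; j <= y_block_size; j++)
            for (k = 1; k <= z_block_size; k++)
               work[i][j][k] = (bp->array[var][i-1][j  ][k  ] +
                                bp->array[var][i  ][j-1][k  ] +
                                bp->array[var][i  ][j  ][k-1] +
                                bp->array[var][i  ][j  ][k  ] +
                                bp->array[var][i  ][j  ][k+1] +
                                bp->array[var][i  ][j+1][k  ] +
                                bp->array[var][i+1][j  ][k  ])/7.0;
      for (i = 1; i <= x_block_size; i++)
         for (j = 1; j <= y_block_size; j++)
            for (k = 1; k <= z_block_size; k++)
               bp->array[var][i][j][k] = work[i][j][k];
   }
}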

miniAMR running problem

I want to run miniAMR with 64 or 512 processors, but when I submit the job it is killed for unknown reasons. I suspect the run parameters are the problem, because I can run miniAMR successfully with 16 processors.

So how can I change the running parameters if I want to use 512 processors to run miniAMR?
Can anyone help me?

Thanks
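
A hedged suggestion, not an answer from the developers: miniAMR takes the processor decomposition on the command line, and the product of the three dimensions has to equal the number of MPI ranks the job is launched with. Assuming the binary name and the --npx/--npy/--npz decomposition options described in ref/README, a 512-rank run would look something like:

mpirun -np 512 ./miniAMR.x --npx 8 --npy 8 --npz 8 [remaining options as in your 16-rank run]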

MiniAMR versions

In the README, the miniAMR versions are described as miniAMR_ref and miniAMR_serial. However, the repository contains only the openmp and ref directories, and the README defines miniAMR_ref as the self-contained MPI-parallel version. What is the difference between these two source directories?

Compilation error when Open MPI is upgraded to 4.0.0

I was able to build and test with Open MPI 3.0.0 and it worked fine. However, when I moved my system to 4.0.0, the build failed at this line.

ierr = MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

SOLUTION:

I removed the line from main and everything compiled. However, since I am not one of the application developers, I would like you to take a look and confirm that removing the line does not cause anything else to fail in other parts of the application.
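
For context, and as a suggestion rather than the developers' fix: MPI_Errhandler_set was deprecated in MPI-2.0 and has been removed from Open MPI 4.0, which is why the build breaks on upgrade. Replacing the call with its standard successor keeps the fatal-error setting explicit:

/* MPI_Comm_set_errhandler is the MPI-2 replacement for the removed
   MPI_Errhandler_set; the arguments are unchanged. */
ierr = MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

Since MPI_ERRORS_ARE_FATAL is already the default error handler on MPI_COMM_WORLD, simply deleting the line as described above should also preserve behavior; the replacement just keeps the intent visible in the code.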

MPI run problem with miniAMR

application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
number of processors used does not match number allocated

I am hitting this MPI_Abort and cannot work out the cause. Can you help me?
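
A hedged reading of this message: the code below is an illustrative reconstruction, not the actual miniAMR source, but an abort with this wording most likely means that the processor grid requested on the command line does not multiply out to the number of MPI ranks the job was launched with. The usual fix is to make the product of the --npx, --npy, and --npz values equal to the count passed to mpirun -np.

/* Illustrative reconstruction (not miniAMR's code) of the kind of check
   that produces "number of processors used does not match number allocated". */
#include <mpi.h>
#include <stdio.h>

void check_decomposition(int npx, int npy, int npz)
{
   int num_pes;
   MPI_Comm_size(MPI_COMM_WORLD, &num_pes);
   if (npx*npy*npz != num_pes) {
      printf("number of processors used does not match number allocated\n");
      MPI_Abort(MPI_COMM_WORLD, -1);
   }
}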
