
exapde / exasim


Exasim: Generating Discontinuous Galerkin Codes For Extreme Scalable Simulations

License: MIT License

Julia 5.36% MATLAB 33.98% Python 6.47% C++ 41.44% Cuda 10.58% C 1.67% M 0.04% GLSL 0.34% Objective-C 0.02% CMake 0.07% Shell 0.04%


exasim's Issues

Error "no method matching *(::Char, ::Tuple{String,Int64})" in install.jl

I am trying to install Exasim on macOS (10.15.7), running Julia 1.5.3.

My first attempt to run

julia> include("install.jl")

failed because

Error: Cannot install mpich because conflicting formulae are installed.
  open-mpi: because both install MPI compiler wrappers

Please `brew unlink open-mpi` before continuing.

After running

$ brew unlink open-mpi

installation seemed to proceed more smoothly, until I hit

ERROR: LoadError: MethodError: no method matching *(::Char, ::Tuple{String,Int64})
Closest candidates are:
  *(::Any, ::Any, ::Any, ::Any...) at operators.jl:538
  *(::Union{AbstractChar, AbstractString}, ::Union{AbstractChar, AbstractString}...) at strings/basic.jl:251
  *(::Union{Regex, AbstractChar, AbstractString}, ::Union{Regex, AbstractChar, AbstractString}...) at regex.jl:656
Stacktrace:
 [1] *(::Char, ::Tuple{String,Int64}, ::Char) at ./operators.jl:538
 [2] top-level scope at /Users/gregorywagner/Projects/Exasim/Installation/install.jl:94
 [3] include(::String) at ./client.jl:457
 [4] top-level scope at REPL[1]:1
in expression starting at /Users/gregorywagner/Projects/Exasim/Installation/install.jl:94

This comes from

p = q * mpi * q;

I guess mpi is a tuple:

julia> mpi
("", 1)

and q is a char:

julia> q
'"': ASCII/Unicode U+0022 (category Po: Punctuation, other)

I'm not sure whether mpi is meant to be a tuple, but the cause is that findinstallexec here:

mpi = findinstallexec("mpicxx", "openmpi", brew, 0);

returns a tuple:

return dirname, appinstall;
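
Since findinstallexec returns both a directory name and an install flag, one possible fix is to unpack the tuple before building the quoted path. A minimal sketch, assuming only the directory string is needed (the names mpidir and mpiinstall are illustrative, not taken from the actual install.jl):

mpidir, mpiinstall = findinstallexec("mpicxx", "openmpi", brew, 0);  # unpack the (dirname, appinstall) tuple
p = q * mpidir * q;  # Char * String * Char concatenation now yields a quoted path string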

AD matvec MPI

MPI functionality still needed for exact Jacobian-vector multiplication using automatic differentiation

Create CONTRIBUTING.md

We should establish some rules for the core development team and any outside contributors. We'll follow standard GitHub Flow procedures, but it'll still be helpful to have the process for updating code written out.

non-reflecting boundary condition in Application/NS/naca0012 case

Hello.

First of all, thank you for your work.

I would like some guidance on the non-reflecting boundary condition.

When running the naca0012 example provided in Application/NS, the solution showed what looked like boundary-induced oscillations and eventually diverged.

Could you check the above case so that it converges?

Also, from the references mentioned in the paper, I would expect the characteristic terms, which currently appear in (ubou), to be part of the boundary flux (fbou).

Could you give any pointers or references on why the Jacobian "A" is applied in (ubou) rather than in (fbou)?

Thank you.

P.S. I found a small bug when executing on the GPU; after some more testing I will post an issue about it.

AD Matvec GPU

Exact Jacobian-vector multiplications with autodiff on GPU

gmshcall.py to output physical groups if present in .geo specification

Regarding Exasim/src/Python/Mesh/gmshcall.py:

The current gmshcall function does not read in physical groups that are specified in the input .geo file. Physical groups are collections of model entities used to alias domains and boundaries, which avoids having to enter them manually via logical operations on coordinate values in the pdeapp script. If the geometry is specified with Gmsh, being able to read the physical groups from the .geo file will save significant user time when setting up a case.

Compile flags to App files and core libraries

Right now the app.cpuflags and app.gpuflags fields allow the user to add flags to the compilation of main.cpp. I think it would also be helpful to let the user add flags to compile opuApp/gpuApp and the opuCore/gpuCore libs.

The first case is helpful when using a GPU compiler that is not the default, for example using LLVM Clang. Compiling the App files requires specifying --cuda-path and --cuda-gpu-arch. We could add these to the app object, or we could just let the user specify them by something like gpuappflags.

Allowing extra flags for the Core library compilations is helpful when developing on an M1 Mac, for example. To use MPI on my machine, I've only had luck building everything for the x86_64 architecture, which means adding the flag "-arch x86_64" to every compilation string, including those in genlib.
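
As a rough illustration of what this could look like in a pdeapp script: app.cpuflags and app.gpuflags already exist per the description above, while gpuappflags, cpulibflags, and gpulibflags are hypothetical field names suggested here (the last two mirror the genlib arguments visible in the stack trace of the next issue), not existing Exasim fields:

app.cpuflags = "-arch x86_64";      # existing field: extra flags for compiling main.cpp
app.gpuflags = "-arch x86_64";      # existing field: extra flags for compiling main.cpp (GPU build)
app.gpuappflags = "--cuda-path=/usr/local/cuda --cuda-gpu-arch=sm_70";  # hypothetical: flags for opuApp/gpuApp compilation
app.cpulibflags = "-arch x86_64";   # hypothetical: flags for the opuCore library build
app.gpulibflags = "-arch x86_64";   # hypothetical: flags for the gpuCore library build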

macos m1 arm64: ERROR: LoadError: TypeError: non-boolean (Int64) used in boolean context

==> Exasim ...
[ Info: Precompiling Preprocessing [top-level]
[ Info: Precompiling Mesh [top-level]
[ Info: Precompiling Postprocessing [top-level]
Using g++ compiler for CPU source code
Generating CPU core libraries.
ar: creating archive commonCore.a
a - commonCore.o
ERROR: LoadError: TypeError: non-boolean (Int64) used in boolean context
Stacktrace:
[1] genlib(cpucompiler::String, gpucompiler::String, coredir::String, cpulibflags::String, gpulibflags::String)
@ Gencode ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Gencode/genlib.jl:19
[2] genlib
@ ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Gencode/genlib.jl:5 [inlined]
[3] setcompilers(app::Preprocessing.PDEStruct)
@ Gencode ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Gencode/setcompilers.jl:136
[4] exasim(pde::Preprocessing.PDEStruct, mesh::Preprocessing.MESHStruct)
@ Postprocessing ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Postprocessing/exasim.jl:11
[5] top-level scope
@ ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/Applications/Euler/EulerVortex/pdeapp.jl:44
[6] include(fname::String)
@ Base.MainInclude ./client.jl:476
[7] top-level scope
@ REPL[16]:1
in expression starting at /Users/wjq/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/Applications/Euler/EulerVortex/pdeapp.jl:44
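
For context, this class of error arises because Julia, unlike MATLAB or C, does not treat integers as booleans in conditionals. A minimal illustration of the error and the usual fix (not the actual genlib.jl code):

flag = 1
# if flag ... end          # ERROR: TypeError: non-boolean (Int64) used in boolean context
if flag != 0               # an explicit comparison (or any Bool-valued expression) is required
    println("flag is set")
end

The fix in genlib.jl:19 would presumably be to replace an integer-valued condition with an explicit comparison of this kind.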

Julia GPU compilation warnings NavierStokes/naca

A warning message pops up in the Julia/Python versions, but not in MATLAB. [#26]
It resulted in the warnings below, but the simulation does not seem to be affected (to be verified):
compile code...
ar: creating opuApp.a
a - opuApp.o
gpuInitu.cu(11): warning: variable "xdg1" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(29): here

gpuInitu.cu(12): warning: variable "xdg2" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(29): here

gpuInitu.cu(11): warning: variable "xdg1" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(30): here

gpuInitu.cu(12): warning: variable "xdg2" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(30): here

Julia and Python apps fail GPU compilation

There are a few issues compiling GPU code with Julia or Python apps.

  1. Compilation will fail with 4 errors of the form
gpuFbou.cu(32): error: identifier "gpuFbou1" is undefined

In the generated file gpuUbou.cu in the app folder, the function is called kernelgpuFbou1, not gpuFbou1. These functions are generated by the Gencode/gencodeface files. They are generated correctly in the MATLAB version, but not in Julia or Python for versions 0.1-0.3.

  2. After that, an error appears saying that opuApp.a does not exist. This is due to missing lines in compilecode.jl and compilecode.py. Lines 110-113 in the Julia code are
elseif app.platform == "gpu"
    run(string2cmd(compilerstr[3]));
    run(string2cmd(compilerstr[4]));
    if app.mpiprocs==1
        run(string2cmd(compilerstr[7]));
    else
        run(string2cmd(compilerstr[8]));
    end
end

while the same code block in MATLAB is

elseif app.platform == "gpu"
   eval(char("!" + compilerstr{1}));
   eval(char("!" + compilerstr{2}));       
   eval(char("!" + compilerstr{3}));
   eval(char("!" + compilerstr{4}));
   if app.mpiprocs==1
       eval(char("!" + compilerstr{7}));
   else
       eval(char("!" + compilerstr{8}));
   end
end

The Julia code also needs to run compilerstr[1] and compilerstr[2]; the same holds for Python.
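
A sketch of what the corrected block in compilecode.jl might look like, assuming compilerstr[1] and compilerstr[2] compile and archive opuApp as the corresponding entries do in the MATLAB version:

elseif app.platform == "gpu"
    run(string2cmd(compilerstr[1]));   # added: assumed to compile the opuApp sources
    run(string2cmd(compilerstr[2]));   # added: assumed to create the opuApp.a archive
    run(string2cmd(compilerstr[3]));
    run(string2cmd(compilerstr[4]));
    if app.mpiprocs==1
        run(string2cmd(compilerstr[7]));
    else
        run(string2cmd(compilerstr[8]));
    end
end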

initial field to dataout

@rloekvh @exapde
Suggestion: also write the initial field data to dataout.
The following settings start writing output from t = 0+dt onwards, not from t = 0:
pde.dt = dt*ones(1,nt); % time step sizes
pde.soltime = 1:length(pde.dt); % output is written at steps 1..nt, i.e. from t = 0+dt onward

Missing flags leading to wrong solutions for Python and Julia interfaces

Python and Julia app structs are missing flags for external Fhat, Uhat, and Stab terms

This leads to incorrect solutions for time-dependent problems specifically when using Python or Julia to generate GPU code. The bug was discovered for 2D Euler but most likely applies to other problems. It does not show up when using MATLAB, when using Python or Julia for CPU code, or when solving steady problems.

GPU compilation error and Possible solution: Applications/Poisson/MuiltipleEquations/pdeapp.jl

Error:
In file included from gpuApp.cu:4:
gpuFlux.cu:1:10: fatal error: gpuFlux1.cpp: No such file or directory
 #include "gpuFlux1.cpp"
          ^~~~~~~~~~~~~~
compilation terminated.
ERROR: LoadError: failed process: Process(`nvcc -D_FORCE_INLINES -O3 -c --compiler-options "'-fPIC'" gpuApp.cu`, ProcessExited(1)) [1]

Suggested Solution @exapde
After the line

strgpu = replace(stropu, "opu" => "gpu");

I added the following line, which replaces "cpp" with "cu" so that the generated GPU sources compile properly:
strgpu = replace(strgpu, "cpp" => "cu");

It worked, but I am not sure how it affects the other examples.
Are there any other examples of multiple-equation models?

Modification to Version0.3/Kernel/Main/main.cpp

It seems the following modifications are required for successful generation of production codes in v0.3

Line 61:
printf("Usage: ./cppfile nomodels InputFile OutputFile\n");
and
Line 303:
string filename = pdemodel[i]->disc.common.fileout[i] + "_np" + NumberToString(pdemodel[i]->disc.common.mpiRank) + ".bin";

Also, the following code snippets in Sec. 6.2 of the Exasim 0.3 manual require an update:
mpirun -np mpiprocs ./mpiapp nomodels ../datain/ ../dataout/out
mpirun -gpu -np mpiprocs ./gpumpiapp nomodels ../datain/ ../dataout/out
