exapde / exasim
Exasim: Generating Discontinuous Galerkin Codes For Extreme Scalable Simulations
License: MIT License
Trying to install Exasim on macOS (10.15.7). Running julia 1.5.3.
My first attempt to run
julia> include("install.jl")
failed because
Error: Cannot install mpich because conflicting formulae are installed.
open-mpi: because both install MPI compiler wrappers
Please `brew unlink open-mpi` before continuing.
After running
$ brew unlink open-mpi
installation seemed to proceed more smoothly, until I hit
ERROR: LoadError: MethodError: no method matching *(::Char, ::Tuple{String,Int64})
Closest candidates are:
*(::Any, ::Any, ::Any, ::Any...) at operators.jl:538
*(::Union{AbstractChar, AbstractString}, ::Union{AbstractChar, AbstractString}...) at strings/basic.jl:251
*(::Union{Regex, AbstractChar, AbstractString}, ::Union{Regex, AbstractChar, AbstractString}...) at regex.jl:656
Stacktrace:
[1] *(::Char, ::Tuple{String,Int64}, ::Char) at ./operators.jl:538
[2] top-level scope at /Users/gregorywagner/Projects/Exasim/Installation/install.jl:94
[3] include(::String) at ./client.jl:457
[4] top-level scope at REPL[1]:1
in expression starting at /Users/gregorywagner/Projects/Exasim/Installation/install.jl:94
This comes from Exasim/Installation/install.jl, line 94 in fb02634.
I guess mpi is a tuple:
julia> mpi
("", 1)
and q is a char:
julia> q
'"': ASCII/Unicode U+0022 (category Po: Punctuation, other)
I'm not sure whether mpi is meant to be a tuple, but the cause is that findinstallexec (called at Exasim/Installation/install.jl, line 34 in fb02634) returns a tuple (Exasim/Installation/findinstallexec.jl, line 87 in fb02634).
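A minimal Python analogue of the failure and a possible fix (a sketch only: the function body below is a stand-in for findinstallexec.jl, and the names mpipath/found are assumptions):

```python
# Hypothetical sketch, mirroring the Julia logic: findinstallexec returns a
# (path, status) tuple, so the path must be unpacked before quoting.
def findinstallexec():
    # stand-in for Installation/findinstallexec.jl, which returns a tuple
    return ("", 1)

q = '"'
mpi = findinstallexec()

# Buggy pattern (mirrors install.jl line 94): q + mpi raises TypeError,
# just as *(::Char, ::Tuple) has no matching method in Julia.
# quoted = q + mpi + q

# Fixed: unpack the path out of the tuple first.
mpipath, found = mpi
quoted = q + mpipath + q
print(quoted)
```

The same unpacking in install.jl (taking the first element of the tuple before concatenating with `q`) should resolve the MethodError.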
MPI functionality still needed for exact Jacobian-vector multiplication using automatic differentiation
We should establish some rules for the core development team and any outside contributors. We'll follow standard GitHub Flow procedures, but it'll still be helpful to have the process for updating code written out.
Hello.
First of all, thank you for your work.
I would like some guidance on non-reflecting boundary conditions.
When running the naca0012 example provided in Application/NS, the solution showed what looks like boundary-induced oscillations and eventually diverged.
Could you check this case and help make it converge?
On the other hand, from the references mentioned in the paper, I guess the characteristic terms currently in (ubou) need to appear within the flux (fbou) in the form of a boundary flux.
Could you provide any pointers or references on why the Jacobian "A" is applied in (ubou) rather than (fbou)?
Thank you.
P.S. I found a small bug when executing on the GPU; after testing it, I will post an issue about it.
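For context on the question above, here is a sketch of the standard characteristic far-field construction (an illustration of the general approach, not necessarily Exasim's exact formulation): the Jacobian A enters the boundary *state*, and the boundary flux is then simply the flux evaluated at that state. With the normal flux Jacobian and its eigendecomposition

$$A = \frac{\partial F_n}{\partial u} = R \Lambda R^{-1}, \qquad \Lambda = \Lambda^{+} + \Lambda^{-},$$

define projectors onto outgoing and incoming characteristics,

$$P^{\pm} = R\,\mathrm{diag}\!\left(\tfrac{1}{2}\left(1 \pm \operatorname{sign}\lambda_i\right)\right) R^{-1},$$

and build the boundary state from interior data $u$ and free-stream data $u_\infty$:

$$u_b = P^{+} u + P^{-} u_\infty, \qquad f_b = F_n(u_b).$$

Under this construction, placing A (through the projectors) in (ubou) and leaving (fbou) as a plain flux evaluation at $u_b$ is equivalent to prescribing a characteristic boundary flux.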
Exact Jacobian-vector multiplications with autodiff on GPU
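As background for the feature above, an exact Jacobian-vector product via forward-mode automatic differentiation can be sketched in a few lines (a pure-Python dual-number illustration, not Exasim's implementation):

```python
# Forward-mode AD with dual numbers: evaluating f at u + eps*v, where
# eps^2 = 0, yields f(u) in the value part and J(u) v in the derivative part.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def jvp(f, u, v):
    """Exact J(u) v for a componentwise-defined f, with no finite differencing."""
    duals = f([Dual(ui, vi) for ui, vi in zip(u, v)])
    return [d.dot for d in duals]

# toy residual f(u) = [u0*u1, u0 + 3*u1], so J = [[u1, u0], [1, 3]]
f = lambda u: [u[0] * u[1], u[0] + 3 * u[1]]
print(jvp(f, [2.0, 5.0], [1.0, 0.0]))  # first column of J: [5.0, 1.0]
```

The same seeding idea carries over to GPU kernels: each thread propagates a (value, derivative) pair, giving Jacobian-vector products exact to machine precision.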
Hi,
Please provide the mesh file 'grid.bin' needed to run the naca0012 test cases in the Euler and Navier-Stokes applications.
Dear authors,
Thank you very much for such a powerful tool. Do you have any plans to support more PDEs, such as the Maxwell equations in electromagnetics, or even coupled multiphysics PDEs?
Thanks,
Tang Laoya
Regarding Exasim/src/Python/Mesh/gmshcall.py:
The current gmshcall function doesn't read in physical groups specified in the input .geo file. Physical groups are collections of model entities used to alias domains and boundaries, which avoids having to enter them manually via logical operations on the coordinate values in the pdeapp script. If the geometry is specified using Gmsh, reading the physical groups from the .geo file would save users significant setup time.
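As an illustration of the information available, here is a minimal sketch that scrapes physical-group names and tags directly from .geo text (a stopgap only; the robust route would be the Gmsh API, and the helper name here is an assumption, not part of gmshcall.py):

```python
import re

# Matches lines like: Physical Surface("inlet") = {1, 2};
_PHYS = re.compile(r'Physical\s+(Point|Curve|Line|Surface|Volume)\s*'
                   r'\(\s*"([^"]+)"\s*\)\s*=\s*\{([^}]*)\}')

def read_physical_groups(geo_text):
    """Return {name: (entity_type, [tags])} for each Physical group."""
    groups = {}
    for etype, name, tags in _PHYS.findall(geo_text):
        groups[name] = (etype, [int(t) for t in tags.split(",") if t.strip()])
    return groups

geo = 'Physical Surface("inlet") = {1, 2};\nPhysical Volume("domain") = {1};'
print(read_physical_groups(geo))
```

With the name-to-tag map in hand, gmshcall could translate group names into the boundary identifiers that pdeapp currently builds by hand from coordinate tests.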
Right now the app.cpuflags and app.gpuflags fields let the user add flags to the compilation of main.cpp. I think it would also be helpful to let the user add flags for compiling opuApp/gpuApp and the opuCore/gpuCore libraries.
The first case is helpful when using a non-default GPU compiler, for example LLVM Clang, where compiling the App files requires specifying --cuda-path and --cuda-gpu-arch. We could add these to the app object, or simply let the user specify them via something like gpuappflags.
Allowing flags to be added to the Core compilations is helpful when developing on an M1 Mac, for example. To use MPI on my machine, I've only had luck targeting the x86 architecture throughout, which means adding the flag "-arch x86_64" to every compilation string, including those in genlib.
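The idea can be sketched as follows (a sketch only: build_compile_cmd and the flag values are assumptions for illustration, not Exasim's API):

```python
# Hypothetical assembly of a compile command that appends optional
# user-supplied flags to App/Core compilations, not just main.cpp.
def build_compile_cmd(compiler, source, base_flags, extra_flags=""):
    parts = [compiler, base_flags, extra_flags, "-c", source]
    # drop empty flag strings so they don't leave stray tokens in the command
    return " ".join(p for p in parts if p)

# e.g. Clang CUDA on a non-default toolchain, plus an architecture override
cmd = build_compile_cmd("clang++", "gpuApp.cu",
                        "-O3 --cuda-gpu-arch=sm_70",
                        "-arch x86_64")
print(cmd)
```

Filtering out empty strings also keeps the default case (no user flags) producing the same command as today.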
==> Exasim ...
[ Info: Precompiling Preprocessing [top-level]
[ Info: Precompiling Mesh [top-level]
[ Info: Precompiling Postprocessing [top-level]
Using g++ compiler for CPU source code
Generating CPU core libraries.
ar: creating archive commonCore.a
a - commonCore.o
ERROR: LoadError: TypeError: non-boolean (Int64) used in boolean context
Stacktrace:
[1] genlib(cpucompiler::String, gpucompiler::String, coredir::String, cpulibflags::String, gpulibflags::String)
@ Gencode ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Gencode/genlib.jl:19
[2] genlib
@ ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Gencode/genlib.jl:5 [inlined]
[3] setcompilers(app::Preprocessing.PDEStruct)
@ Gencode ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Gencode/setcompilers.jl:136
[4] exasim(pde::Preprocessing.PDEStruct, mesh::Preprocessing.MESHStruct)
@ Postprocessing ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/src/Julia/Postprocessing/exasim.jl:11
[5] top-level scope
@ ~/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/Applications/Euler/EulerVortex/pdeapp.jl:44
[6] include(fname::String)
@ Base.MainInclude ./client.jl:476
[7] top-level scope
@ REPL[16]:1
in expression starting at /Users/wjq/Documents/WJQ_DATA/2_Skill/19_Exaim/Exasim/Applications/Euler/EulerVortex/pdeapp.jl:44
A warning message pops up in the Julia/Python versions, but not in Matlab. [#26]
Running the case resulted in the warnings below, but the simulation seems unaffected (to be verified):
compile code...
ar: creating opuApp.a
a - opuApp.o
gpuInitu.cu(11): warning: variable "xdg1" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(29): here
gpuInitu.cu(12): warning: variable "xdg2" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=double]"
(29): here
gpuInitu.cu(11): warning: variable "xdg1" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(30): here
gpuInitu.cu(12): warning: variable "xdg2" was declared but never referenced
detected during:
instantiation of "void kernelgpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(26): here
instantiation of "void gpuInitu(T *, T *, T *, T *, int, int, int, int, int, int) [with T=float]"
(30): here
There are a few issues compiling GPU code with Julia or Python apps.
gpuFbou.cu(32): error: identifier "gpuFbou1" is undefined
In the generated file gpuUbou.cu in the app folder, the function is called kernelgpuFbou1, not gpuFbou1. These functions are generated by the Gencode/gencodeface files. They are generated correctly in the Matlab version, but not in Julia or Python for versions 0.1-0.3.
opuApp.a does not exist. This is due to missing lines in compilecode.jl and compilecode.py. Lines 110-113 in the Julia code are
elseif app.platform == "gpu"
run(string2cmd(compilerstr[3]));
run(string2cmd(compilerstr[4]));
if app.mpiprocs==1
run(string2cmd(compilerstr[7]));
else
run(string2cmd(compilerstr[8]));
end
end
while the same code block in Matlab is
elseif app.platform == "gpu"
eval(char("!" + compilerstr{1}));
eval(char("!" + compilerstr{2}));
eval(char("!" + compilerstr{3}));
eval(char("!" + compilerstr{4}));
if app.mpiprocs==1
eval(char("!" + compilerstr{7}));
else
eval(char("!" + compilerstr{8}));
end
end
The Julia code also needs to run compilerstr[1] and compilerstr[2]. The same holds for Python.
Python and Julia app structs are missing flags for external Fhat, Uhat, and Stab terms
This leads to incorrect solutions for time-dependent problems when using Python or Julia to generate GPU code specifically. The bug was discovered for 2D Euler but most likely applies to other problems. It does not show up when using Matlab, when using Python or Julia for CPU code, or when solving steady problems.
Julia compilation crashes when the flags introduced in PR #30 are empty strings. For Julia, these flags need to be appended only when the fields are non-empty. This does not seem to be an issue for Matlab or Python.
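The needed guard can be sketched like this (a Python illustration of the logic; the Julia version would apply the same check before string concatenation):

```python
# Hypothetical guard: append a compiler flag only when it is a non-empty
# string, so empty defaults never inject stray tokens into the command.
def append_flag(cmd, flag):
    return cmd + " " + flag if flag else cmd

print(append_flag("g++ -O3", ""))       # unchanged when the flag is empty
print(append_flag("g++ -O3", "-fPIC"))  # appended otherwise
```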
Error:
In file included from gpuApp.cu:4:
gpuFlux.cu:1:10: fatal error: gpuFlux1.cpp: No such file or directory
#include "gpuFlux1.cpp"
^~~~~~~~~~~~~~
compilation terminated.
ERROR: LoadError: failed process: Process(`nvcc -D_FORCE_INLINES -O3 -c --compiler-options "'-fPIC'" gpuApp.cu`, ProcessExited(1)) [1]
Suggested solution @exapde:
After adding the line
strgpu = replace(strgpu, "cpp" => "cu");
it worked. But I am not sure how it affects the other examples.
Any other examples of multiple-equation models?
The Exasim documentation says it can solve electromagnetic problems. Can you provide some guidance or examples? Thanks very much.
It seems the following modifications are required for successful generation of production codes in v0.3
Line 61:
printf("Usage: ./cppfile nomodels InputFile OutputFile\n");
and
Line 303:
string filename = pdemodel[i]->disc.common.fileout [i] + "_np" + NumberToString(pdemodel[i]->disc.common.mpiRank) + ".bin";
Also, the following code snippets in Sec 6.2 of the Exasim 0.3 manual require updating:
mpirun -np mpiprocs ./mpiapp nomodels ../datain/ ../dataout/out
mpirun -gpu -np mpiprocs ./gpumpiapp nomodels ../datain/ ../dataout/out