
husseinaluie / flowsieve

FlowSieve coarse-graining code base

Home Page: https://flowsieve.readthedocs.io/en/latest/

License: Other

C++ 87.07% Makefile 0.12% Python 0.54% Shell 0.10% C 0.01% TeX 0.02% Jupyter Notebook 12.15%

flowsieve's Introduction

FlowSieve


About FlowSieve

FlowSieve is developed as an open resource by the Complex Flow Group at the University of Rochester, under the sponsorship of the National Science Foundation and the National Aeronautics and Space Administration. Continued support for FlowSieve depends on demonstrable evidence of the code's value to the scientific community. We kindly request that you cite the code in your publications and presentations. FlowSieve is made available under the Open Software License 3.0 (OSL-3.0) (see licence.md or the human-readable summary at the end of this README), which means it is open to use but requires attribution.

The following citations are suggested:

For journal articles, proceedings, etc., we suggest:

Other articles that may be relevant to the work are:

  • Aluie, Hussein, Matthew Hecht, and Geoffrey K. Vallis (2018). Mapping the energy cascade in the North Atlantic Ocean: The coarse-graining approach. Journal of Physical Oceanography, 48(2), 225-244. (https://doi.org/10.1175/JPO-D-17-0100.1)

For presentations, posters, etc., we suggest acknowledging:

  • FlowSieve code from the Complex Flow Group at University of Rochester

Community Guidelines

  • Contributing: At the current stage of development, anyone seeking to contribute to the FlowSieve codebase is asked to contact the main developers (see Seeking Support) to discuss the best way to integrate their contributions. The codebase is maintained on GitHub, and contributions are ultimately merged into the main branch. Using a forked repository for active development is recommended, since it allows testing in a separate environment before merging.
  • Reporting Issues: Please report issues using the GitHub issue tracker (https://github.com/husseinaluie/FlowSieve/issues). Issues can also be submitted by email (see Seeking Support), but the issue tracker is preferred.
  • Seeking Support: The best way to obtain support is to contact Hussein Aluie or Benjamin Storer by email. Contact information is available on the Complex Flow Group webpage (http://www.complexflowgroup.com/people/).

FlowSieve documentation is available at https://flowsieve.readthedocs.io/en/latest/.


Licence

This is a brief human-readable summary of the licence, not the actual licence. See licence.md for the full licence details.

You are free:

  • To share: To copy, distribute and use the database.
  • To create: To produce works from the database.
  • To adapt: To modify, transform and build upon the database.

As long as you:

  • Attribute: You must attribute any public use of the database, or works produced from the database, in the manner specified in the license. For any use or redistribution of the database, or works produced from it, you must make clear to others the license of the database and keep intact any notices on the original database.

flowsieve's People

Contributors

bastorer, hmkhatri, husseinaluie, noraloose


flowsieve's Issues

Documentation: Tree structure

There is something funky going on with the documentation tree. The red circled parts don't seem to be actual subsections of the "Installation" section (but rather belong to the License text etc.).

[Screenshot (2022-04-15): documentation tree with the misplaced subsections circled in red]

This issue is part of the JOSS review process over openjournals/joss-reviews#4277.

Running the basic tutorial

  • Starting from the "default" constants.hpp, I modified variables as in the documentation, but in addition needed to make the following modification:
EXTEND_DOMAIN_TO_POLES = false

I think it would be useful to include this in the docs too.

  • It would be nice if the tutorial included the required Python dependencies, for example by providing an environment.yml file to set up a conda environment.
  • How about including a Jupyter notebook that executes the steps currently done with Python? It could be a great way to make the tutorial more interactive, and give the user some nice figures to look at and reproduce. It would only take a few lines of code to get started, e.g.,
import xarray as xr
ds = xr.open_dataset('/glade/u/home/noraloose/FlowSieve/Tutorial/Basic/velocity_sample.nc')
ds.uo.plot()

but you could also consider making the two Python scripts more interactive and stepping through them in the notebook.

This issue is part of the JOSS review process over at openjournals/joss-reviews#4277.

''from matpy import FiniteDiff'' failed

The command ''from matpy import FiniteDiff'' appears in Spherical Demo/Helmholtz/postprocessing/Plot_Spectral.ipynb.

I can't import matpy, even though I'm sure I have the matpy library installed.

So far I haven't been able to successfully run "from matpy import FiniteDiff". Does the FiniteDiff.py provided in the directory yield the same result?

Software paper

JOSS requires the software paper to include the following elements:

  • Summary of software: While your summary section is a great motivation for why coarse-graining is needed, it does not describe the high-level functionality and purpose of the software (but rather of the coarse-graining technique in general). Could you adapt the summary to be more specific to the software package FlowSieve?
  • State of the field: Could you include a discussion on how FlowSieve compares to other commonly-used packages?

This issue is part of the JOSS review process over openjournals/joss-reviews#4277.

Helmholtz decomposition in a high-resolution current

Dear all,
I am using high-resolution currents from an MITgcm simulation. Before applying the coarse-graining filter, I used a Helmholtz decomposition to obtain the Helmholtz scalars (Psi and Phi), following the GitHub documentation for high-resolution velocities. Li, Chao, & McWilliams (2006) (Computation of the streamfunction and velocity potential for limited and irregular domains) describe a methodology for obtaining Psi and Phi that I believe is similar to the one used in Helmholtz_projection.cpp. However, I do not understand the step that refines the projection using a seed. Could you give me more details about this step? Thank you so much.

Best regards

How to make an appropriate system.mk file

Hi!
I am new here! I don't know how to compile Case_Files/coarse_grain.x. Should I make a system.mk file before compiling?
I would be grateful for your suggestions!
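
As a starting point, a minimal system.mk might look like the sketch below. It reuses the variable names from the jasmin.mk example shown further down this page; the compiler names and the library/include paths are placeholders that must be adapted to your system.

# Minimal system.mk sketch -- all paths below are placeholders
CXX    ?= g++
MPICXX ?= mpicxx

# Linking flags for netcdf / hdf5
LINKS:=-lnetcdf -lhdf5_hl -lhdf5 -lm -ldl -lz -fopenmp

# Compiler and optimization flags
CFLAGS:=-Wall -std=c++14
DEBUG_FLAGS:=-g
DEBUG_LDFLAGS:=-g
OPT_FLAGS:=-O3
EXTRA_OPT_FLAGS:=
ALGLIB_OPT_FLAGS:=-O3

# Library / include directories for netcdf and hdf5 (adapt to your system)
NETCDF_LIBS="-L/path/to/netcdf/lib"
NETCDF_INCS="-I/path/to/netcdf/include"
HDF5_LIBS="-L/path/to/hdf5/lib"
HDF5_INCS="-I/path/to/hdf5/include"

LIB_DIRS:=${NETCDF_LIBS} ${HDF5_LIBS}
INC_DIRS:=${NETCDF_INCS} ${HDF5_INCS}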

Add a copy of `constants.hpp` with the specific values in Tutorials

This is not a bug, but a suggestion. There are a lot of parameters to edit manually to get through the tutorial, and I had to change them for every tutorial.

I think a new user would be best served by a copy of the constants.hpp file with the tutorial values already set. There may also be other solutions for getting started quickly; a sketch of what such a copy might contain follows below.
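
To illustrate, a tutorial copy could collect the handful of grid and filtering switches that come up repeatedly in these issues. The fragment below is only a sketch: the option names are taken from elsewhere on this page, but the values and the exact declaration syntax would need to be checked against the real constants.hpp.

// Illustrative constants.hpp fragment -- values are examples, not verified tutorial settings
const bool CARTESIAN              = false;  // spherical (lat/lon) coordinates
const bool UNIFORM_LON_GRID       = true;   // regular longitude spacing
const bool UNIFORM_LAT_GRID       = true;   // regular latitude spacing
const bool EXTEND_DOMAIN_TO_POLES = false;  // needed for the basic tutorial (see issue above)
const bool DEFORM_AROUND_LAND     = false;  // do not reshape the kernel near land
const bool FILTER_OVER_LAND       = true;   // include land points when filtering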

Installation

I think it would be helpful for many (possibly inexperienced) users if the first step in the installation instructions were:

  • To clone the FlowSieve repository (or a fork) onto the HPC system they want to use. You could also provide a link that explains how to clone/fork GitHub repositories in general.
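
For example, the first step could be as simple as:

# Clone the FlowSieve repository (or your own fork) onto the HPC system
git clone https://github.com/husseinaluie/FlowSieve.git
cd FlowSieve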

This issue is part of the JOSS review process over openjournals/joss-reviews#4277.

Successful compile, but 'Assertion `input_nc_format == (3)' failed.' on JASMIN HPC

Hello,

This query is probably related to issues #20 and #22. I don't think anyone has ever got FlowSieve working on JASMIN?

I am testing the BASIC Tutorial but after running

mpirun ./coarse_grain.x --input_file velocity_sample.nc --filter_scales "1e3 15e3 50e3 100e3"

in an sbatch script, the error is as follows:

coarse_grain.x: NETCDF_IO/read_var_from_file.cpp:85: void read_var_from_file(std::vector<double>&, const string&, const string&, std::vector<bool>*, std::vector<int>*, std::vector<int>*, int, int, bool, int, double, MPI_Comm): Assertion `input_nc_format == (3)' failed.
[host424:164200] *** Process received signal ***
[host424:164200] Signal: Aborted (6)
[host424:164200] Signal code:  (-6)
[host424:164200] [ 0] /lib64/libpthread.so.0(+0xf630)[0x7f0174ef3630]
[host424:164200] [ 1] /lib64/libc.so.6(gsignal+0x37)[0x7f0174b4c387]
[host424:164200] [ 2] /lib64/libc.so.6(abort+0x148)[0x7f0174b4da78]
[host424:164200] [ 3] /lib64/libc.so.6(+0x2f1a6)[0x7f0174b451a6]
[host424:164200] [ 4] /lib64/libc.so.6(+0x2f252)[0x7f0174b45252]
[host424:164200] [ 5] ./coarse_grain.x[0x4aa8d3]
[host424:164200] [ 6] ./coarse_grain.x[0x4b3e1b]
[host424:164200] [ 7] ./coarse_grain.x[0x4a0828]
[host424:164200] [ 8] /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f0174b38555]
[host424:164200] [ 9] ./coarse_grain.x[0x4a397e]
[host424:164200] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 164200 on node host424 exited on signal 6 (Aborted).
--------------------------------------------------------------------------

What I have gathered from the previous issues is that --has-parallel should be yes. Running nc-config --all gives

nc-config --all

This netCDF 4.8.1 has been built with the following features: 

  --cc            -> x86_64-conda-linux-gnu-cc
  --cflags        -> -I/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/include
  --libs          -> -L/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/lib -lnetcdf
  --static        -> -lmfhdf -ldf -lhdf5_hl -lhdf5 -lm -lcurl -lzip

  --has-c++       -> no
  --cxx           -> 

  --has-c++4      -> yes
  --cxx4          -> /home/conda/feedstock_root/build_artifacts/netcdf-cxx4_1659035179945/_build_env/bin/x86_64-conda-linux-gnu-c++
  --cxx4flags     -> -I/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/include
  --cxx4libs      -> -L/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718//apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/lib -lnetcdf-cxx4 -lnetcdf

  --has-fortran   -> yes
  --fc            -> /home/conda/feedstock_root/build_artifacts/netcdf-fortran_1674656969142/_build_env/bin/x86_64-conda-linux-gnu-gfortran
  --fflags        -> -I/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/include -I/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/include
  --flibs         -> -L/apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/lib -lnetcdff -lnetcdf -lnetcdf
  --has-f90       -> TRUE
  --has-f03       -> yes

  --has-dap       -> yes
  --has-dap2      -> yes
  --has-dap4      -> yes
  --has-nc2       -> yes
  --has-nc4       -> yes
  --has-hdf5      -> yes
  --has-hdf4      -> yes
  --has-logging   -> no
  --has-pnetcdf   -> no
  --has-szlib     -> no
  --has-cdf5      -> yes
  --has-parallel4 -> no
  --has-parallel  -> no
  --has-nczarr    -> yes

  --prefix        -> /apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718
  --includedir    -> /apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/include
  --libdir        -> /apps/jasmin/jaspy/mambaforge_envs/jaspy3.10/mf-22.11.1-4/envs/jaspy3.10-mf-22.11.1-4-r20230718/lib
  --version       -> netCDF 4.8.1

which shows that it was not built with parallel netCDF. Could it be that JASMIN does not have a pre-compiled parallel version of netCDF installed, and that this is what causes the issue? I would like to rule out other problems first, e.g. with the jasmin.mk file:

# Specify compilers
CXX    ?= g++
MPICXX ?= mpicxx

# Linking flags for netcdf
LINKS:= -lnetcdf-cxx4 -lnetcdf -lhdf5_hl -lhdf5 -lm -ldl -lz -fopenmp

# Default compiler flags
CFLAGS:=-Wall -std=c++14

# Debug flags
DEBUG_FLAGS:=-g
DEBUG_LDFLAGS:=-g

# Basic optimization flags
OPT_FLAGS:=-O3

# Extra optimization flags (intel inter-process optimizations)
EXTRA_OPT_FLAGS:=

# Specify optimization flags for ALGLIB
ALGLIB_OPT_FLAGS:=-O3

# Modules are automatically on lib dir
NETCDF_LIBS="-L/apps/sw/eb/software/netCDF/4.8.0-gompi-2021a/lib"
NETCDF_INCS="-I/apps/sw/eb/software/netCDF/4.8.0-gompi-2021a/include"

HDF5_LIBS="-L/apps/sw/eb/software/HDF5/1.12.1-gompi-2021b/lib"
HDF5_INCS="-I/apps/sw/eb/software/HDF5/1.12.1-gompi-2021b/include"

LIB_DIRS:=${NETCDF_LIBS} ${HDF5_LIBS}
INC_DIRS:=${NETCDF_INCS} ${HDF5_INCS}

Any advice appreciated.
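
One quick check before digging into the build itself: the assertion appears to test the netCDF format code of the input file, so it may be worth confirming that velocity_sample.nc is in the format the executable expects. This is only a guess, but the standard netCDF utilities make it easy to inspect and convert:

# Show which netCDF format the input file uses
ncdump -k velocity_sample.nc

# If it is a classic-format file, rewrite it as netCDF-4 and retry
nccopy -k netCDF-4 velocity_sample.nc velocity_sample_nc4.nc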

Example of usage with MPI parallelism

While in the article you state that

In particular, MPI is used to divide time and depth

I did not find an example demonstrating this in the Tutorials. We only see OpenMP being used, and SLURM_NTASKS is always set to 1. Would it be possible to construct a simple example that shows MPI parallelism? This is needed to check off the following item from openjournals/joss-reviews#4277 (comment):

Functionality: Have the functional claims of the software been confirmed?
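
For illustration, such an example might look like the sketch below. It assumes that coarse_grain.x accepts the same --Nprocs_in_time and --Nprocs_in_depth flags that coarse_grain_scalars.x reports in its output elsewhere on this page; the rank counts are arbitrary.

# Run on 4 MPI ranks, splitting the work 2 ways in time and 2 ways in depth
mpirun -np 4 ./coarse_grain.x --input_file velocity_sample.nc \
    --filter_scales "1e3 15e3 50e3 100e3" \
    --Nprocs_in_time 2 --Nprocs_in_depth 2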

OSI approved license

JOSS requires the LICENSE file to have the contents of an OSI-approved software license. I can't find the ODC-BY 1.0 license among the licenses in the OSI-approved list.

This issue is part of the JOSS review process over here.

Cannot find -lhdf5

Hi Ben,

I'm still trying 😄. I received some help from the university IT team and now parallel is enabled (#20). I have the output of nc-config --all at the bottom in case it is helpful.

Unfortunately, I now run into this error:

...
icpc: command line warning #10148: option '-Wdate-time' not supported
ld: cannot find -lhdf5_hl
ld: cannot find -lhdf5
Makefile:208: recipe for target 'Case_Files/coarse_grain.x' failed
make: *** [Case_Files/coarse_grain.x] Error 1

I did:

make clean

module load intel-compilers/2022
module load openmpi/4.1.4-intel
module load hdf5/1.12.2-intel-parallel
module load netcdf/netcdf-c-4.9.0-parallel

make Case_Files/coarse_grain.x

Any help would be deeply appreciated!

All the best,
Salah

This netCDF 4.9.0 has been built with the following features:

  --cc            -> mpicc
  --cflags        -> -I/network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel/include -I/network/software/ubuntu_bionic/hdf5/1.12.2-intel-parallel/include
  --libs          -> -L/network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel/lib -L/network/software/ubuntu_bionic/hdf5/1.12.2-intel-parallel/lib -lnetcdf -lhdf5_hl -lhdf5 -lm -lz -lsz -lbz2 -lxml2 -lcurl
  --static        -> -lhdf5_hl -lhdf5 -lm -lz -lsz -lbz2 -lxml2 -lcurl

  --has-c++       -> no
  --cxx           ->

  --has-c++4      -> yes
  --cxx4          -> g++
  --cxx4flags     -> -I/usr/include -Wdate-time -D_FORTIFY_SOURCE=2
  --cxx4libs      -> -L/usr/lib/x86_64-linux-gnu -lnetcdf_c++4 -lnetcdf

  --has-fortran   -> yes
  --fc            -> gfortran
  --fflags        -> -I/usr/include
  --flibs         -> -L/usr/lib -lnetcdff -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -lnetcdf -lnetcdf
  --has-f90       -> no
  --has-f03       -> yes

  --has-dap       -> yes
  --has-dap2      -> yes
  --has-dap4      -> yes
  --has-nc2       -> yes
  --has-nc4       -> yes
  --has-hdf5      -> yes
  --has-hdf4      -> no
  --has-logging   -> no
  --has-pnetcdf   -> no
  --has-szlib     -> yes
  --has-cdf5      -> yes
  --has-parallel4 -> yes
  --has-parallel  -> yes
  --has-nczarr    -> yes
  --has-zstd      -> no
  --has-benchmarks -> no

  --prefix        -> /network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel
  --includedir    -> /network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel/include
  --libdir        -> /network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel/lib
  --version       -> netCDF 4.9.0
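
One hedged guess based on the nc-config output above: the linker may simply not be pointed at the HDF5 library directory provided by the loaded module. Setting the library and include paths in the system.mk to the directories that nc-config reports, for example

# Use the HDF5/netCDF paths reported by nc-config above
HDF5_LIBS="-L/network/software/ubuntu_bionic/hdf5/1.12.2-intel-parallel/lib"
HDF5_INCS="-I/network/software/ubuntu_bionic/hdf5/1.12.2-intel-parallel/include"
NETCDF_LIBS="-L/network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel/lib"
NETCDF_INCS="-I/network/software/ubuntu_bionic/netcdf/netcdf-c-4.9.0-parallel/include"

LIB_DIRS:=${NETCDF_LIBS} ${HDF5_LIBS}
INC_DIRS:=${NETCDF_INCS} ${HDF5_INCS}

might resolve the "cannot find -lhdf5" error.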

Compilation error

Hi,
I updated to the latest code version. A compilation error came up when I tried 'make Case_Files/coarse_grain.x'.

The range over which coarse-graining is performed locally

Hello, everyone!
I am confused about the search range of the coarse-graining operation. Suppose the filter scale is 100 km and Kernel_opt = 4, which is a tanh kernel. I noticed that there is a KernPad = 2.5, which is a scale factor for the kernel search radius. Does this mean that I am filtering over a range of 250 km (100 km × 2.5) in diameter? Does this go against my original goal of filtering within 100 km?
Any reply would be greatly appreciated!

Question about irregular longitude/latitude grids and many time steps

Dear all,
I have two questions:

  1. In constants.hpp I can choose UNIFORM_LON_GRID=false and UNIFORM_LAT_GRID=false. Does this mean that longitude and latitude would have the form lon(i,j) and lat(i,j)?

  2. My netCDF file has 2208 time steps. When I run coarse_grain_scalars.x I add --Nprocs_in_time 2208; I assume this option enables the parallelization. Is that correct?

Best regards

Installation errors with gcc 10

With gcc 10.3.0, I get

mpic++   -c -DDEBUG=0  -O3    -o NETCDF_IO/add_attr_to_file.o NETCDF_IO/add_attr_to_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
NETCDF_IO/add_attr_to_file.cpp: In function 'void add_attr_to_file(const char*, double, const char*, MPI_Comm)':
NETCDF_IO/add_attr_to_file.cpp:23:38: error: format not a string literal and no format arguments [-Werror=format-security]
   23 |         snprintf(buffer, 50, filename);
      |                                      ^

If I set CFLAGS := -w to suppress warnings, it gets a bit further, but errors again:

mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/add_attr_to_file.o NETCDF_IO/add_attr_to_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/add_var_to_file.o NETCDF_IO/add_var_to_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/check_file_existence.o NETCDF_IO/check_file_existence.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/initialize_output_file.o NETCDF_IO/initialize_output_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/initialize_particle_file.o NETCDF_IO/initialize_particle_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/initialize_postprocess_file.o NETCDF_IO/initialize_postprocess_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/initialize_projected_particle_file.o NETCDF_IO/initialize_projected_particle_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/initialize_regions_file.o NETCDF_IO/initialize_regions_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/initialize_subset_file.o NETCDF_IO/initialize_subset_file.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/load_region_definitions.o NETCDF_IO/load_region_definitions.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/nc_err.o NETCDF_IO/nc_err.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
mpic++   -c -DDEBUG=0 -w -O3    -o NETCDF_IO/package_field.o NETCDF_IO/package_field.cpp -lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl -fopenmp 
NETCDF_IO/package_field.cpp: In function 'void package_field(std::vector<short int>&, double&, double&, const std::vector<double>&, const std::vector<bool>*, MPI_Comm)':
NETCDF_IO/package_field.cpp:59:54: error: 'fmiddle' not specified in enclosing 'parallel'
   59 |                 local_double = (original.at(index) - fmiddle) / frange;
      |                                                      ^~~~~~~
NETCDF_IO/package_field.cpp:49:13: error: enclosing 'parallel'
   49 |     #pragma omp parallel \
      |             ^~~
NETCDF_IO/package_field.cpp:59:65: error: 'frange' not specified in enclosing 'parallel'
   59 |                 local_double = (original.at(index) - fmiddle) / frange;
      |                                                                 ^~~~~~
NETCDF_IO/package_field.cpp:49:13: error: enclosing 'parallel'
   49 |     #pragma omp parallel \
      |             ^~~
make: *** [Makefile:78: NETCDF_IO/package_field.o] Error 1
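
The first error comes from gcc's -Werror=format-security: a variable is being passed directly as the format string. A hedged one-line fix for the line quoted above would be to supply an explicit format:

// NETCDF_IO/add_attr_to_file.cpp, line 23: pass "%s" explicitly rather than
// using the variable as the format string (sketch of a possible fix)
snprintf(buffer, 50, "%s", filename);

The second error looks like an OpenMP data-sharing problem: with newer gcc versions, variables such as fmiddle and frange apparently need to be listed explicitly in the enclosing #pragma omp parallel data-sharing clauses. I have not verified this against the current source.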

Filtering on a Cartesian grid

I want to coarse-grain a scalar on a regular Cartesian grid. I compiled coarse_grain_scalars.x after setting CARTESIAN = true in the constants.hpp file. However, when I try to coarse-grain the field, I get the following output:

Commandline flag "--input_file" got value "./input_file.nc"
Commandline flag "--time" received no value - will use default "time"
Commandline flag "--depth" received no value - will use default "depth"
Commandline flag "--latitude" got value "y"
Commandline flag "--longitude" got value "x"
Commandline flag "--is_degrees" received no value - will use default "true"
Commandline flag "--do_PEKE_conversion" received no value - will use default "false"
Commandline flag "--Nprocs_in_time" received no value - will use default "1"
Commandline flag "--Nprocs_in_depth" received no value - will use default "1"
Commandline flag "--region_definitions_file" received no value - will use default "region_definitions.nc"
Commandline flag "--region_definitions_dim" received no value - will use default "region"
Commandline flag "--region_definitions_var" received no value - will use default "region_definition"
Commandline flag "--variables" got value "var"
Commandline flag "--filter_scales" got value "1 5 10 20"
Filter scales (20) are: 1m, 5m, 10m, 20m,

The first line of output says "--is_degrees" received no value - will use default "true", even though I set CARTESIAN = true.

Am I missing something here?
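
One thing worth trying (an untested guess: the output above shows that --is_degrees is a command-line flag with a default of "true", so it may need to be set explicitly rather than being inferred from CARTESIAN):

# Explicitly tell the executable that x/y are not in degrees (untested guess)
mpirun ./coarse_grain_scalars.x --input_file ./input_file.nc \
    --longitude x --latitude y --variables var \
    --filter_scales "1 5 10 20" --is_degrees false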

Errors in compute_KE_spectra_and_slopes.cpp: undefined reference to functions such as `potential_vel_from_F`

Hi all,
I tried to compile the code but failed due to errors occurring in compute_KE_spectra_and_slopes.cpp. Several errors about undefined references to functions can be seen, as follows:
/usr/bin/ld: Functions/compute_KE_spectra_and_slopes.o: in function compute_KE_spectra_and_slopes(std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, dataset const&, double)': compute_KE_spectra_and_slopes.cpp:(.text+0x114f): undefined reference to toroidal_vel_from_F(std::vector<double, std::allocator >&, std::vector<double, std::allocator >&, std::vector<double, std::allocator > const&, std::vector<double, std::allocator > const&, std::vector<double, std::allocator > const&, int, int, int, int, std::vector<bool, std::allocator > const&)'
/usr/bin/ld: compute_KE_spectra_and_slopes.cpp:(.text+0x11a2): undefined reference to potential_vel_from_F(std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> > const&, int, int, int, int, std::vector<bool, std::allocator<bool> > const&)' /usr/bin/ld: compute_KE_spectra_and_slopes.cpp:(.text+0x1356): undefined reference to toroidal_vel_from_F(std::vector<double, std::allocator >&, std::vector<double, std::allocator >&, std::vector<double, std::allocator > const&, std::vector<double, std::allocator > const&, std::vector<double, std::allocator > const&, int, int, int, int, std::vector<bool, std::allocator > const&)'
/usr/bin/ld: compute_KE_spectra_and_slopes.cpp:(.text+0x139a): undefined reference to `potential_vel_from_F(std::vector<double, std::allocator >&, std::vector<double, std::allocator >&, std::vector<double, std::allocator > const&, std::vector<double, std::allocator > const&, std::vector<double, std::allocator > const&, int, int, int, int, std::vector<bool, std::allocator > const&)'
collect2: error: ld returned 1 exit status
make: *** [Makefile:213: Case_Files/coarse_grain.x] Error 1

Could you kindly provide some help to fix this problem? Thanks!

Installation error

make_CoarseGrainX.log

Dear All,

Recently I tried my best to install FlowSieve on our HPC, but I failed.

The error is as follows:
NETCDF_IO/write_field_to_output.cpp: In function ‘void write_field_to_output(const std::vector<double>&, const string&, const size_t*, const size_t*, const string&, const std::vector<bool>*, MPI_Comm)’:
NETCDF_IO/write_field_to_output.cpp:134:23: error: ‘fmiddle’ is predetermined ‘shared’ for ‘shared’
         private(index)
                       ^
make: *** [NETCDF_IO/write_field_to_output.o] Error 1

Attached is the detailed error log. Would you mind giving me a hand? Many thanks!

Sincerely,
Ran

Makefile for Jasmin

Hi there,

I am using the JASMIN HPC service in the UK. I don't really know how to write a system.mk file for it and would like some guidance if possible.

I used the Bluehive_open.mk. I ran into the following when I tried make Case_Files/coarse_grain.x, fatal error: netcdf_par.h: No such file or directory.

I tried getting a copy of this header file off GitHub and putting it in the main directory; the error output became longer and included error: 'num_regions' not specified in enclosing 'parallel' and error: 'len_fields' not specified in enclosing 'parallel', so I can see that I am missing something here.

Salah

When does filter commute with derivatives?

What settings do I have to choose in constants.hpp to guarantee that the filter commutes with derivatives? I suspect

  • DEFORM_AROUND_LAND = false (does this mean: don't change the kernel shape close to land?)
  • FILTER_OVER_LAND = true (or is this option merely about whether the output is masked or not, and does not impact the actual filter algo?)

These questions are not part of the JOSS review process, just for my own curiosity.
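
For background, the standard coarse-graining identity (independent of FlowSieve's specific settings) is that filtering defined as a convolution with a spatially uniform kernel $G_\ell$ commutes with spatial derivatives:

$$ \overline{\partial_i u}(\mathbf{x}) \;=\; \int G_\ell(\mathbf{r})\, \partial_i u(\mathbf{x}-\mathbf{r})\, d\mathbf{r} \;=\; \partial_i \overline{u}(\mathbf{x}). $$

A kernel whose shape changes near land, or a mask applied before filtering, breaks this property. Whether the two flags above are sufficient to guarantee a spatially uniform kernel in FlowSieve is best confirmed by the developers.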

Documentation: Methods

There are a few inconsistencies and typos in the method section of the documentation, and it would be helpful to give it a bit more structure (via section headers). Specifically:

  • Two different versions of phi are used interchangeably. It would make the documentation clearer to stick to only one of the two symbols.
  • What is "L_pad"?
  • "currently implementation" --> "current implementation"
  • "An identical equation": identical to what? It may be clearer to label the equation for Delta phi_N above, and then you could write: "An identical equation to [labeled equation]".
  • Before writing down the large-scale KE budget, it may be helpful to start a new section with a new section header, e.g., "Large-scale KE budget". It would also be helpful to refer the user to, e.g., Aluie et al. (2018) for the derivation.
  • New section header before you introduce the Lambda^m equation.
  • It would be helpful to introduce a new section header before the sentence "This section outlines some of the parallelizations that are used.", e.g., "Parallelizing computations".
  • reoutines --> routines

This issue is part of the JOSS review process over openjournals/joss-reviews#4277.

Community guidelines

JOSS asks to include

clear community guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support.

Could you please add community guidelines to this effect? I think the README would be a great place for that.

This issue is part of the JOSS review process over openjournals/joss-reviews#4277.

Automated tests

I see that the package contains tests (in Tests/). I have a few questions:

  • Are the tests automated?
  • If the tests rely on manual testing, are there instructions for how the user can run these tests?

This issue is part of the JOSS review process over at openjournals/joss-reviews#4277.

Running the tutorial on the sphere

  • Is there a chance that you could include a second submission script for folks who have to use PBS (rather than Slurm) schedulers? I think this could lower the startup hurdle for new users; a sketch is included after this list.
  • Can this tutorial be simplified? I imagine that most users do not want to wait three hours for the scripts to finish.
  • Independent of whether the previous two points can be done, including a Jupyter notebook for analysis would be great. A notebook with included figures would help users get an idea of what the scripts and the package do, even before they run the full tutorial.
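
To illustrate the first point, a minimal PBS submission script might look something like the sketch below; the resource requests, module names, and thread/rank counts are placeholders to adapt.

#!/bin/bash
#PBS -N flowsieve_tutorial
#PBS -l select=1:ncpus=8
#PBS -l walltime=02:00:00

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Placeholder: load whatever MPI / netCDF / HDF5 modules your system provides
# module load openmpi netcdf hdf5

export OMP_NUM_THREADS=8
mpirun -np 1 ./coarse_grain.x --input_file velocity_sample.nc \
    --filter_scales "1e3 15e3 50e3 100e3"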

openjournals/joss-reviews#4277
