
wrfchem-leeds / WRFotron

Tools to automatise WRF-Chem runs with re-initialised meteorology

Home Page: https://wrfchem-leeds.github.io/WRFotron/

License: GNU Affero General Public License v3.0

Languages: Shell 63.36%, Python 27.30%, NCL 9.16%, Roff 0.18%
Topics: wrf, wrf-chem, wrfchem, wrfotron

WRFotron's People

Contributors: bjsilver, cemachelen, luke-conibear, lukeconibear


WRFotron's Issues

Bug causing unrealistic sulfate levels in MOZART-MOSAIC

There is an issue with sulphate emissions from biomass burning when using MOZART-MOSAIC.

The solution (a sketch for locating the relevant lines follows this list) is to:

  • Comment out ebu_sulf in module_plumerise1.F on line 542 and recompile WRFChem
  • Set aem_so4 = 0 for biomass burning emissions in module_mosaic_addemiss.F
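A minimal sketch of finding the lines to edit before recompiling (the WRF-Chem source path is an assumption; inspect the code before patching anything):

# Hypothetical WRF-Chem source tree; adjust the path to your build.
cd /path/to/WRFChem/chem

# Inspect the biomass-burning sulfate emission line before commenting it out.
grep -n "ebu_sulf" module_plumerise1.F

# Check where aem_so4 is set for biomass burning before changing it to zero.
grep -n "aem_so4" module_mosaic_addemiss.F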

Merge our GitHub WRFotron with Christoph's GitLab WRFotron

Christoph has made an updated WRFotron publicly available on GitLab 🥳

It would be useful to combine efforts on this going forward.

Register (button at the top right of the GitLab page) to join, comment, add issues, etc.

There is a leeds_setup branch over there with the docs and other Leeds specific things.

Remaining things to do:

  • Host docs on GitLab Pages via readthedocs or similar.
  • Merge the leeds_setup branch into master.
  • Keep (and maintain) or archive this GitHub repo (e.g., for reference, old issues, etc.).

Aqueous chemistry in stratocumulus clouds does not work with WRFChem3.7.1.

What happened:
Aqueous chemistry in stratocumulus clouds (cldchem_onoff) does not work with WRFChem3.7.1.

Aqueous chemistry does work for cumulus clouds (conv_tr_aqchem).

This issue does not happen for WRFChem4.0.3.

What you expected to happen:
When using MOZART-MOSAIC with aqueous chemistry (chem_opt = 202), aqueous chemistry works with both cumulus clouds (conv_tr_aqchem = 1) and stratocumulus clouds (cldchem_onoff = 1).

Minimal Complete Verifiable Example:
Running WRFChem3.7.1 with chem_opt = 202 and cldchem_onoff = 1, the rsl.error.0000 returns:

 chem_init: calling wetscav_mozcart_init for domain            1
   wetscav_mozcart_init: hetcnt =           42
   wetscav_mozcart_init: hno3_ndx =            1
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     328
ERROR: cloud chemistry option requires chem_opt = 8 through 13 or 31 to 36 or 41 to 43 to function.
-------------------------------------------
Abort(1) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

Potential solutions/workarounds?

  1. Edit the relevant Fortran code (?) to include chem_opt = 202 in the conditional above (see the search sketch after this list).

  2. Fix the performance issue with WRFChem4 and move over to that.
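A hedged way to locate the conditional that raises this error in the WRF-Chem 3.7.1 source (the source path is an assumption); the fatal message itself is the most reliable search string:

# Hypothetical WRF-Chem source tree; adjust to your installation.
cd /path/to/WRFChem3.7.1

# Find the file that prints the cloud-chemistry fatal message and the
# surrounding chem_opt check that would need to include 202.
grep -rn "cloud chemistry option requires" chem/ dyn_em/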

Possible bug in master.bash/namelist

I have done a test run using the default WRFotron and, although the output looks reasonable, I'm getting the following error in my rsl.error files:

d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Attribute not found
d01 2017-09-15_23:00:00 NetCDF error in ext_ncd_get_dom_ti.code CHAR, line 83 Element TITLE
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Attribute not found
d01 2017-09-15_23:00:00 NetCDF error in ext_ncd_get_dom_ti.code CHAR, line 83 Element TITLE
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_ISOP
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_APIN
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_SO4I
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_SO4J
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_NO3I
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_NO3J
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_NH4I
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_NH4J
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_NAI
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_NAJ
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_CLI
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_CLJ
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_CO_A
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_ORGI_A
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_ORGJ_A
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_CO_BB
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_ORGI_BB
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_ORGJ_BB
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_GLY
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_sulf
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_MACR
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_MGLY
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_MVK
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_HCOOH
d01 2017-09-15_23:00:00 NetCDF error: NetCDF: Variable not found
d01 2017-09-15_23:00:00 NetCDF error in wrf_io.F90, line 2883 Varname E_HONO

I also don't have any 'E_xx' variables in my wrfinput_d01 file.

I'm not sure if it is linked, but when I checked the namelist.input file in the run directory I noticed that frames_per_auxinput5 and auxinput5_inname, which give the emissions filename settings, are commented out in namelist.input despite being included in the namelist.wrf.blueprint file in the WRFotron folder:

namelist.input
.......
! CHEM
io_form_auxinput5 = 2,
auxinput5_interval_m = 60,
! frames_per_auxinput5 = 1,
! auxinput5_inname = 'wrfchemi_d_'

io_form_auxinput6 = 2,
auxinput6_inname = 'wrfbiochemi_d'
auxinput6_interval_d = 600,
io_form_auxinput5 = 2, ! file type as NetCDF, anthropogenic
io_form_auxinput6 = 2, ! file type as NetCDF, biogenic
io_form_auxinput7 = 2, ! file type as NetCDF, biomass burning
io_form_auxinput12 = 2, ! file type as NetCDF, restart
auxinput5_inname = 'wrfchemi_d', ! file name, anthropogenic
auxinput6_inname = 'wrfbiochemi_d', ! file name, biogenic
auxinput7_inname = 'wrffirechemi_d_', ! file name, biomass burning
auxinput5_interval_m = 60, ! time interval in mins, anthropogenic
auxinput6_interval_d = 600, ! time interval in days, biogenic
auxinput7_interval_m = 60, 60, 60, ! time interval in mins, biomass burning
frames_per_auxinput5 = 1, ! files per time interval, anthropogenic
frames_per_auxinput7 = 1, 1, 1, ! files per time interval, biomass burning
io_form_auxinput12 = 0, ! restart file type
force_use_old_data = T, ! allow WRFChem4 to run with data from WRFChem3
/

namelist.wrf.blueprint

! CHEM
io_form_auxinput5 = 2, ! file type as NetCDF, anthropogenic
io_form_auxinput6 = 2, ! file type as NetCDF, biogenic
io_form_auxinput7 = 2, ! file type as NetCDF, biomass burning
io_form_auxinput12 = 2, ! file type as NetCDF, restart
auxinput5_inname = 'wrfchemi_d_', ! file name, anthropogenic
auxinput6_inname = 'wrfbiochemi_d', ! file name, biogenic
auxinput7_inname = 'wrffirechemi_d_', ! file name, biomass burning
auxinput5_interval_m = 60, ! time interval in mins, anthropogenic
auxinput6_interval_d = 600, ! time interval in days, biogenic
auxinput7_interval_m = 60, 60, 60, ! time interval in mins, biomass burning
frames_per_auxinput5 = 1, ! files per time interval, anthropogenic
frames_per_auxinput7 = 1, 1, 1, ! files per time interval, biomass burning
io_form_auxinput12 = ISRESTARTVALUE, ! restart file type
force_use_old_data = T, ! allow WRFChem4 to run with data from WRFChem3

Those two variables seem to be commented out by the settings in master.bash, which overwrite the settings from namelist.wrf.blueprint when it is copied from namelist.wrf.prep.real to namelist.input for the second real run with chemistry:

master.bash

meh - auxinput_interval_d might need max_domains values!

cat > patchy << PATCH_END
io_form_auxinput5 = 2,
auxinput5_interval_m = 60,
! frames_per_auxinput5 = 1,
! auxinput5_inname = 'wrfchemi_d_'

io_form_auxinput6 = 2,
auxinput6_inname = 'wrfbiochemi_d'
auxinput6_interval_d = 600,
PATCH_END

sed -e "/! CHEM/r patchy" namelist.wrf.prep.real > tmp; mv tmp namelist.wrf.prep.real
sed -e "/! CHEM/r patchy" namelist.wrf.prep.chem > tmp; mv tmp namelist.wrf.prep.chem
sed -e "/! CHEM/r patchy" namelist.wrf.prep.chem_cold > tmp; mv tmp namelist.wrf.prep.chem_cold
rm -f patchy

I'm doing a test run with these commented back in to check whether this is a problem in my test simulation, but I wondered whether you know if this is actually an error or not.
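For reference, a hedged sketch of how the patch block in master.bash could be amended if those two settings do need to stay active. This is only an illustration: with io_style_emissions = 2 (as in the chem namelist shown further down this page) WRF-Chem reads the fixed wrfchemi_00z/12z file names and may not need auxinput5_inname at all, so check which behaviour your setup relies on before changing anything.

# Sketch only: same heredoc as in master.bash, with frames_per_auxinput5 and
# auxinput5_inname left uncommented (values copied from namelist.wrf.blueprint).
cat > patchy << PATCH_END
io_form_auxinput5 = 2,
auxinput5_interval_m = 60,
frames_per_auxinput5 = 1,
auxinput5_inname = 'wrfchemi_d_'

io_form_auxinput6 = 2,
auxinput6_inname = 'wrfbiochemi_d'
auxinput6_interval_d = 600,
PATCH_END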

Changing wrfbiochemi file for ingestion into WRFChem not working

What happened:
My run with an altered wrfbiochemi_domain_{projectTag} file is not working. The run stopped during pre.bash: the message at the end of pre.bash.e was

rm: cannot remove 'namelist.input': No such file or directory
starting wrf task 0 of 1
starting wrf task 0 of 1

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

in module_wrfchem_lib ...
cp: cannot stat '/nobackup/cm17nav/simulation_WRFChem4.2_b1/restart/base/wrfrst_d01_2015-06-01_00:00:00': No such file or directory

In my WRFotron's pre.bash, I commented out lines 107-129.

(I am not sure whether this is helpful or not: rsl.error.0000 had an error message at the end, which was

FATAL CALLED FROM FILE: <stdin> LINE: 314
Possibly missing file for = auxinput6)

The path to my WRFotron is /nobackup/cm17nav/WRFotron2.3.0_WRFChem4.2_b1.
The path to my simulation folder is /nobackup/cm17nav/simulation_WRFChem4.2_b1.

What you expected to happen:
The altered wrfbiochemi_domain_{projectTag} file to be incorporated into WRFChem when pre.bash lines 107-129 were commented out.
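A hedged first check (the chunk directory name and layout are assumptions based on other run directories quoted on this page): confirm that the altered file is present in the run directory under the name WRF expects for auxinput6, since the fatal message points at that stream.

# Assumed layout: the chunk directory name contains the start and end dates.
cd /nobackup/cm17nav/simulation_WRFChem4.2_b1/run/base/<start>-<end>

ls -l wrfbiochemi*                 # is the altered file here under the expected name?
grep -n auxinput6 namelist.input   # which filename (auxinput6_inname) does WRF look for?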

netcdf error

Hi,
I recently started running WRFV4.2 (manually compiled) using the EDGARHTAP2-MEIC2015 anthro emissions files in /nobackup/WRFChem/ on ARC4. I have an error in my pre.bash.exxxx that reads as follows:

Name of source model =>ECMWF                           
Name of source model =>ECMWF                           
Name of source model =>ECMWF                           
starting wrf task            0  of            1
ncks: INFO nco_fl_open() reports current extended filetype = NC_FORMATX_NC3 does not equal previous extended filetype = NC_FORMATX_HDF5. This is expected when NCO is instructed to convert filetypes, i.e., to read from one type and write to another. And when NCO generates grids or templates (which are always netCDF3) when the input file is netCDF4. It is also expected when multi-file operators receive files known to be of different types. However, it could also indicate an unexpected change in input dataset type of which the user should be cognizant.
Netcdf error
ln: failed to create symbolic link ‘./anthro_emis.inp’: File exists
Traceback (most recent call last):
  File "sum_sector_emiss_wrfchemi.py", line 11, in <module>
    if '00z' in wrfchemi_files[0]:
IndexError: list index out of range
 starting wrf task            0  of            1

I think this netcdf error might be causing problems with the anthro emissions; the tail of anthro_emis.out reads:

data_file_init: Initializing type for src emission file EDGARHTAP2_MEIC2015_CO_
 2010.0.1x0.1.nc
 data_file_init:  nlon_src, nlat_src =         3600        1800
 data_file_init: data_dx,dx,has_area_map =    11117.58       30000.00     T
 data_file_init: file EDGARHTAP2_MEIC2015_CO_2010.0.1x0.1.nc is a new grid
data_file_init: xcen_src(1,2)  =  -179.949996948242     -179.850006103516    
data_file_init: xedge(1,2) =  -179.999992370605     -179.900001525879    
data_file_init: dx =                     10
data_file_init: nlon_src,nlat_src =   3600   1800
data_file_init: ycen_src  =   89.8499984741211      89.9499969482422    
data_file_init: yedge_src =   89.8999938964844      89.9999961853027    
 ====================================================
 get_units: con_fac(1,2) =   3.6000000E+12  1.0000000E+09
 ====================================================
  
 next_flnm; trying to increment file EDGARHTAP2_MEIC2015_CO_2010.0.1x0.1.nc
 next_flnm; il, iu =           35          35
 next_flnm; file_number =            1
 next_flnm; new file = EDGARHTAP2_MEIC2015_CO_2010.0.1x0.0.nc
 data_file_timing_init : Failed to open 
 /nobackup/chmltf/WRFChem/emissions/EDGAR-HTAP2_MEIC2015/MOZART/EDGARHTAP2_MEIC2
 015_CO_2010.0.1x0.0.nc
 No such file or directory    

So it does find the emissions files, but for some reason the next file it looks for is EDGARHTAP2_MEIC2015_CO_2010.0.1x0.0.nc, which I do not understand.

Here is my module list:

  1) licenses              3) intel/19.0.4          5) intelmpi/2019.4.243   7) ncl/6.5.0             9) wrfchemconda/3.7
  2) sge                   4) user                  6) netcdf/4.6.3          8) nco/4.6.0
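One hedged check, without claiming this is the cause: anthro_emis moves on to a "next" file when the current one does not cover the requested time, so it is worth confirming that the run dates, after applying data_yrs_offset, fall inside the time axis of the emission files. The path below is copied from the error message; this only inspects, it changes nothing.

emisFile=/nobackup/chmltf/WRFChem/emissions/EDGAR-HTAP2_MEIC2015/MOZART/EDGARHTAP2_MEIC2015_CO_2010.0.1x0.1.nc

ncdump -h "$emisFile"   # how many time records are there, and is there a date variable?
grep -n -E "data_yrs_offset|start_data_time|stop_data_time" anthro_emis.inp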

Run not producing output files

Hi - just a minor issue and the problem is probably between the computer and the chair, but I could do with some help getting past it please:

I have been trying a test run in which I switched the domain to cover the Nile Delta and the start date to 01/01/2016 (I had not tried the second step before, so I think I might be getting something wrong there). I changed the background surface temperature and pressure-level files to what I think are the appropriate ones, with December 2015 included to account for a 6-hour spin-up time. The files I am pointing to are here:

image

Whenever I try to run this, the output folder produced only contains pp_concat_regrid.py and pp_concat_regrid.bash. I get the following pre.bash.e* error:

image

I'm aware the first bit of the error can often come from not requesting enough memory, but I don't think that's the case here as I requested 4gb for main.bash and it's a 24 hour test run.

I don't think the problem is in pre.bash, because when I try grep -i 'success' *.out *.log, I get the following:
image.

The rsl.error files are within the chem_out folder, which I believe usually means it hasn't broken mid run? main.bash.e* mostly looks fairly normal, but has a bit in it that looks like this:

image

post.bash.e* is completely blank.

If you need me to send through any other info, let me know

Thank you

Connor

pp_concat_regrid: open_mf_wrf_dataset IOError

Hello,

I submitted pp_concat_regrid.bash, but received the following error from pp_concat_regrid.bash.e:

Traceback (most recent call last):
  File "pp_concat_regrid.py", line 24, in <module>
    with salem.open_mf_wrf_dataset(filelist, chunks={'west_east':'auto', 'south_north':'auto'}) as ds:
  File "/nobackup/cm17nav/miniconda3/envs/python3_ncl_nco/lib/python3.8/site-packages/salem/sio.py", line 1169, in open_mf_wrf_dataset
    raise IOError('no files to open')
OSError: no files to open

And lines 1166-1169 of sio.py, which follow the command to define open_mf_wrf_dataset, are

if isinstance(paths, basestring):
    paths = sorted(glob(paths))
if not paths:
    raise IOError('no files to open')

My wrfout files are in the output directory too, so I'm not sure why they can't be found to be opened. Would a solution be for me to list the wrfout files within sio.py?

Many thanks,
Nadia
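A hedged check (the directory is an assumption; the variable name filelist is taken from the traceback): run this from the directory the job is submitted in, to confirm the pattern used to build filelist actually matches the wrfout files.

ls wrfout_d01_* | head                  # are the output files visible from here?
grep -n "filelist" pp_concat_regrid.py  # what path/pattern does line 24's filelist come from?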

PM emissions not being processed properly in anthro_emis

I just noticed there may be a problem within anthro_emis when using the default set-up: I get no PM emissions in the generated wrfchemi files. I think this may be because there are no units on the variables in the input files for POM, OIN_PM2.5 and PM2.5_PM10 (which should be kg m-2 s-1), so the scaling factors can't be derived in anthro_emis:

ncdump -h /nobackup/WRFChem/emissions/EDGAR-HTAP2_MEIC2015/MOZART/EDGARHTAP2_MEIC2015_POM_2010.0.1x0.1.nc
netcdf EDGARHTAP2_MEIC2015_POM_2010.0.1x0.1 {
dimensions:
        time = 12 ;
        lat = 1800 ;
        lon = 3600 ;
variables:
        float emis_tot(time, lat, lon) ;
                emis_tot:_FillValue = NaNf ;
        float lat(lat) ;
                lat:_FillValue = NaNf ;
                lat:standard_name = "latitude" ;
                lat:long_name = "latitude" ;
                lat:units = "degrees_north" ;
                lat:comment = "center_of_cell" ;
        float lon(lon) ;
                lon:_FillValue = NaNf ;
                lon:standard_name = "longitude" ;
                lon:long_name = "longitude" ;
                lon:units = "degrees_east" ;
                lon:comment = "center_of_cell" ;

data_file_init: Initializing type for src emission file EDGARHTAP2_MEIC2015_OIN
 _PM2.5_2010.0.1x0.1.nc

 data_file_init:  nlon_src, nlat_src =         3600        1800
 data_file_init: data_dx,dx,has_area_map =    11117.58       30000.00     T
 ====================================================
 get_units: con_fac(1,2) =   0.0000000E+00  0.0000000E+00
 ====================================================
 

I don't know if it will affect everyone, but I had the same issue previously and found it was solved by adding units to each variable.
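A minimal sketch of adding the missing units attribute with NCO (the variable name, units and file are taken from the description above; apply the same command to each affected file):

# Add a units attribute to emis_tot so anthro_emis can derive its conversion factor.
ncatted -a units,emis_tot,c,c,"kg m-2 s-1" EDGARHTAP2_MEIC2015_POM_2010.0.1x0.1.nc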

Automatic email reports that the cronjob cannot touch linked files

What happened:
The automatic email reports that the cronjob cannot touch linked files.

touch: cannot touch ‘./amazon_simulation_WRFChem4.2_test/run/base/2010-06-11_18:00:00-2010-06-13_00:00:00/scheme_input_emission_data.ncl’: Permission denied
touch: cannot touch ‘./amazon_simulation_WRFChem4.2_test/run/base/2010-06-11_18:00:00-2010-06-13_00:00:00/preprocess_emissions_routines.ncl’: Permission denied
touch: cannot touch ‘./amazon_simulation_WRFChem4.2_test/run/base/2010-06-11_18:00:00-2010-06-13_00:00:00/README.exo_coldens’: Permission denied
touch: cannot touch ‘./amazon_simulation_WRFChem4.2_test/run/base/2010-06-11_18:00:00-2010-06-13_00:00:00/moz0002.nc’: Permission denied
touch: cannot touch ‘./amazon_simulation_WRFChem4.2_test/run/base/2010-06-11_18:00:00-2010-06-13_00:00:00/sum_sector_emiss_wrfchemi.py’: Permission denied

What you expected to happen:
The cronjob to touch all files, including linked files.

Environment:
Compilation = CEMAC.
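A hedged workaround sketch (the directory name is taken from the touch errors above; the assumption is that it is acceptable to skip shared linked files that belong to other accounts rather than trying to touch them):

# Touch only regular files owned by the current user; linked files belonging to
# other accounts are skipped instead of producing "Permission denied".
find ./amazon_simulation_WRFChem4.2_test -user "$USER" -type f -exec touch {} +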

Issue with regridding wrfoutput using 'regrid_wrfout.py' script

When running the regrid_wrfout.py script I keep getting an error which I can't find any online help for (see below). It seems to occur when writing the regridded output to the new netCDF file. I wondered if anyone has seen this before? I have checked the output to ensure there isn't an issue with the files that were concatenated.

<CF Field: PM2_5_DRY(time(744), atmosphere_sigma_coordinate(32), latitude(404), longitude(492)) ug m^-3> netCDF: PM2_5_DRY_e
Traceback (most recent call last):
File "regrid_wrfout.py", line 66, in
cf.write([var_010], 'wrfout_2019_oct_chemopt202_cf_regrid_pm25.nc', verbose=True, fmt='NETCDF4')
File "/nfs/see-fs-01_teaching/ee15amg/anaconda2/envs/esmpy/lib/python2.7/site-packages/cf/write.py", line 283, in write
HDF_chunks=HDF_chunksizes, unlimited=unlimited)
File "/nfs/see-fs-01_teaching/ee15amg/anaconda2/envs/esmpy/lib/python2.7/site-packages/cf/netcdf/write.py", line 428, in write
_write_a_field(f, g=g)
File "/nfs/see-fs-01_teaching/ee15amg/anaconda2/envs/esmpy/lib/python2.7/site-packages/cf/netcdf/write.py", line 2209, in _write_a_field
g=g)
File "/nfs/see-fs-01_teaching/ee15amg/anaconda2/envs/esmpy/lib/python2.7/site-packages/cf/netcdf/write.py", line 1836, in _create_netcdf_variable
raise NetCDFError(message)
cf.netcdf.write.NetCDFError: Can't create variable in NETCDF4 file from PM2_5_DRY field summary

Data : PM2_5_DRY(time(744), atmosphere_sigma_coordinate(32), latitude(404), longitude(492)) ug m^-3
Axes : atmosphere_sigma_coordinate(32) = [0.996500015259, ..., 0.0039653792046]
: latitude(404) = [-49.553848, ..., -9.253848] degrees_north
: longitude(492) = [129.41962, ..., 178.51962] degrees_east
: time(744) = [2019-10-01T00:00:00Z, ..., 2019-10-31T23:00:00Z] standard
Coord refs :
(NetCDF: Bad chunk sizes.)

How can I speed up WRFChem?

So, if I want to make a run go faster, can I increase the memory per core for main.bash (e.g. from 4G to 6G)? Would doing that make everyone else's runs slower?!

Adding variable calculated in module to wrfout files

I am hoping to add a variable, aerosol surface area (A), which I believe is already calculated in MOSAIC, to the wrfout files. It is calculated (or estimated) in the module_mosaic_gly module for the purposes of calculating the glysoa_sfc_a0X species, and is located in the .../WRFChem/chem/ directory.

So far I have added it to the registry.chem file under "Additional MOSAIC aerosol variables inside the chem array"
state real A ikjftb chem 1 - i0{12}rhusdf=(bdy_interp:dt) "A" "aerosol surface area" "cm^2/cm^3"

I have also added it to the iofields.chem file in the wrfotron.
+:h:0:PHOTR4,PHOTR7,A

I was just wondering if anybody had experience with this. Thanks!
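Not specific to this variable, but as a general reminder (this is standard WRF build procedure rather than anything WRFotron-specific): edits to registry.chem only take effect after a clean rebuild, because the Registry generates source code at compile time.

# Rebuild WRF-Chem after editing the Registry (source path is a placeholder).
cd /path/to/WRFChem
./clean -a
./configure                      # re-select the previous configuration
./compile em_real >& compile.log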

Issue with post.bash output file dates

For the past week or so I have noticed that all of my post-processed files have strange time values (e.g. 'days since 2000' values of e60, e-28, etc.). I am using a copy of pp.ncl from the WRFotron folder and the updated post.bash. I think the date values are linked to an error I started getting in post.bash (shown below). I thought the issue might be due to the change of nco version to 4.6.0 (to fix the restart file bug), based on googling the error message and help threads suggesting it is a library bug (and ncl/nco being dependent on each other?), but when I re-run post.bash with nco version 4.8.1 the error still appears.

The errors I get for every chunk are:

PP'ing 20190915000000
deleting tmp_wrfout_d01_2019-09-15_00:00:00.nc
fatal:Could not create (tmp_wrfout_d01_2019-09-15_00:00:00.nc)

fatal:ut_inv_calendar: Invalid specification string

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 55 in file $NCARG_ROOT/lib/ncarg/nclscripts/wrf/WRF_contributed.ncl

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 283 in file $NCARG_ROOT/lib/ncarg/nclscripts/wrf/WRF_contributed.ncl

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 22 in file /nobackup/ee15amg/Australia_fires/WRFotron2.0_2019/pp.ncl

fatal:Error writing file variable attribute, either the file or the variable (timestamp) are undefined

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 23 in file /nobackup/ee15amg/Australia_fires/WRFotron2.0_2019/pp.ncl

fatal:Error writing file variable attribute, either thefile or the variable (timestamp) are undefined

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 24 in file /nobackup/ee15amg/Australia_fires/WRFotron2.0_2019/pp.ncl

fatal:Either file (fileout) isn't defined or variable (td_2m) is not a variable in the file

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 33 in file /nobackup/ee15amg/Australia_fires/WRFotron2.0_2019/pp.ncl

fatal:Either file (fileout) isn't defined or variable (td) is not a variable in the file

fatal:["Execute.c":8637]:Execute: Error occurred at or near line 35 in file /nobackup/ee15amg/Australia_fires/WRFotron2.0_2019/pp.ncl

I get the same error for every wrfout file. Has anyone ever seen this before? Is it worth trying the python processing instead to diagnose whether this is an nco problem or an issue with my output (I hope this isn't the case :( )?

Main_restart.bash won't restart at 06z or 18z

What happened:
main_restart.bash won't restart at 06z or 18z, only at either 00z or 12z.

Minimal Complete Verifiable Example:
Trying to restart at 18z, main_restart.bash breaks and the bottom of rsl.error.0000 reads:

 HOURLY EMISSIONS UPDATE TIME    21600.0       0.0
mediation_integrate: med_read_wrf_chem_emissions: Read emissions for time 2014-08-15_18:00:00
d01 2014-08-15_18:00:00 mediation_integrate: calling input_auxinput5
d01 2014-08-15_18:00:00  input_wrf: begin
d01 2014-08-15_18:00:00 module_io.F: in wrf_inquire_filename
d01 2014-08-15_18:00:00  input_wrf: filestate =            0
d01 2014-08-15_18:00:00  input_wrf: dryrun =  F  switch           31
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for CEN_LAT returns   0.0000000E+00
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for CEN_LON returns   -59.83400
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for TRUELAT1 returns   0.0000000E+00
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for TRUELAT2 returns   0.0000000E+00
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for MOAD_CEN_LAT returns   0.0000000E+00
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for STAND_LON returns   -59.83400
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for POLE_LAT returns    90.00000
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for POLE_LON returns   0.0000000E+00
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_real_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_real for GMT returns    12.00000
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for JULYR returns         2014
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for JULDAY returns          227
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for MAP_PROJ returns            3
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_char_arr
d01 2014-08-15_18:00:00 mminlu = 'USGS'
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_char for MMINLU returns USGS
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for ISWATER returns           16
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for ISLAKE returns           -1
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for ISICE returns           24
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for ISURBAN returns            1
d01 2014-08-15_18:00:00 module_io.F (md_calls.m4) : in wrf_get_dom_ti_integer_sca
d01 2014-08-15_18:00:00  input_wrf: wrf_get_dom_ti_integer for ISOILWATER returns           14
d01 2014-08-15_18:00:00 module_io.F: in wrf_get_next_time
d01 2014-08-15_18:00:00            0  input_wrf: wrf_get_next_time current_date: 2014-08-15_18:00:00 Status =         -102
           0  input_wrf: wrf_get_next_time current_date: 2014-08-15_18:00:00 Status =         -102
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     939
 ... Could not find matching time in input file
-------------------------------------------
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

Error when changing resolution

Hi

I'm trying to get a run working with a Western European domain and a 50km resolution. Whenever I try to run it, I get no output files. The bottom of the rsl.error.0000 file gives me this:
image

I'm not entirely sure what I am doing wrong here. I have changed the timesteps proportionally in namelist.chem.blueprint up to 5.0, 50 and 5.0 respectively, and changed the meteorological timestep to 300, as this is also an increase of two-thirds over the default value. I have also run this with the default 180-second meteorological timestep and got the same error. The commas in the namelists all seem to be where they should be, so I don't think it's a typo. I have changed the resolution for other simulations before using the same method and not had this issue.

The pre-processors all worked, so I don't believe the issue is there. I also have a simulation which is running the same domain, except at the default 30km resolution and timesteps, which is working fine. The error message itself doesn't seem to be mentioning anything to do with resolution, but it's only changing the resolution and timesteps that seem to be causing it to appear.

Do you know what this error means and how I should go about fixing it?

Thank you

Restart files not created for the end of run.

What happened:
Restart files not created for the end of run.

Minimal Complete Verifiable Example:

# using the test submission
. master.bash 2016 10 12 00 24 06

# creates:
wrfrst_d01_2016-10-12_18:00:00

# instead of:
wrfrst_d01_2016-10-13_00:00:00
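A hedged reading of the timing (restart_interval = 1440 is taken from the namelist excerpts elsewhere on this page; the chunk layout is inferred from the test submission arguments): with a 6-hour spin-up the model clock starts at 18:00 the previous day, so a 1440-minute restart interval writes its restart 24 hours later at 18:00 rather than at the chunk end. An interval that divides the full 30-hour run would also place a restart at the end, for example:

# ". master.bash 2016 10 12 00 24 06" gives (assumed from the WRFotron arguments):
#   model start = 2016-10-11_18:00   (6 h spin-up before 2016-10-12_00:00)
#   model end   = 2016-10-13_00:00   (24 h chemistry run after spin-up)
# With restart_interval = 1440 min, the only restart falls at start + 24 h,
# i.e. wrfrst_d01_2016-10-12_18:00:00. A 360-minute interval divides the 30 h
# run exactly, so the last restart would coincide with the run end
# (a sketch, not a tested fix):
echo $(( (24 + 6) * 60 % 360 ))   # 0, i.e. a restart is written at 2016-10-13_00:00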

pp_concat_regrid.bash not concatenating output- giving a key error associated with the word "longitude"

Hi

I've been trying to do a test run of WRF-Chem, but I can't get pp_concat_regrid.bash to concatenate the hourly output files. My test run is a single day - 2016 10 12 00 24 06 using an automatic simulation.

My submission ends early and I just get a pp_concat_regrid.bash.e* file and a pp_concat_regrid.bash.o* file. I think the issue is something to do with the variable names, but I'm not really sure how to translate the error or where I'd look to fix it. I got the following error in pp_concat_regrid.bash.e*:
Traceback (most recent call last):
  File "home/home01/eecjc/miniconda3/envs/python3_ncl_nco/lib/python3.8/site-packages/xarray/core/dataarray.py", line 687, in _getitem_coord
    var = self.coords[key]
KeyError: 'longitude'

It then gives further errors that also seem to relate to the key/variable being called 'longitude', including "DataArray.cf does not understand the key 'longitude'" and "ValueError('dataset must include lon/lat')".

I am loading miniconda from my home folder and I am trying to use an environment I created this way:

conda create -n python3_ncl_nco -c conda-forge xarray salem xesmf numpy scipy pandas matplotlib rasterio affine ncl nco wrf-python dask geopandas descartes

If it helps, this is the full error:
image

I tried reducing the number of domains in pp_concat_regrid.py and resubmitting, as I am only running with 1 domain, but got the same issue.

I'm not really used to Unix or using Arc, so apologies if this is a fairly garbled description or something I should be able to sort easily.

Cheers

Connor
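A hedged first check (file location and naming are assumptions): confirm what the latitude/longitude coordinates are actually called in the files being concatenated, since the error suggests the regridding step is looking for coordinates named lon/lat (or longitude/latitude) that are not present under those names.

# Inspect the coordinate names in the first output file found (adjust the path/pattern).
f=$(ls wrfout_d01_* 2>/dev/null | head -n 1)
ncdump -h "$f" | grep -iE "lat|lon"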

Errors on clean WRFotron

Hello

I recently cloned a clean WRFotron repo (after the recent bug fixes) and tried to run the default domain/time. I had some errors which seem to lead to main crashing, and I think they may be related to the latest bugfix. I have tried to work out what is going on, without success.

In pre.bash all log files are fine except diurnal_emiss.out, which has the error

(0)     ==== Preparing the emission files ====
(0)     == Emission source files not accessible, failure in emission_file_setup, check these paths:
(0)     /nobackup/eebjs/simulation_WRFChem4.2_test/run/base/2015-10-11_18:00:00-2015-10-13_00:00:00/wrfchemi_00z_d02
(0)     /nobackup/eebjs/simulation_WRFChem4.2_test/run/base/2015-10-11_18:00:00-2015-10-13_00:00:00/wrfchemi_12z_d02
processing /nobackup/eebjs/simulation_WRFChem4.2_test/run/base/2015-10-11_18:00:00-2015-10-13_00:00:00/wrfchemi_00z_d01
writing updated /nobackup/eebjs/simulation_WRFChem4.2_test/run/base/2015-10-11_18:00:00-2015-10-13_00:00:00/wrfchemi_00z_d01
processing /nobackup/eebjs/simulation_WRFChem4.2_test/run/base/2015-10-11_18:00:00-2015-10-13_00:00:00/wrfchemi_12z_d01
writing updated /nobackup/eebjs/simulation_WRFChem4.2_test/run/base/2015-10-11_18:00:00-2015-10-13_00:00:00/wrfchemi_12z_d01

I also noticed this message in the file:
(0) No diurnal cycle applied to the following emission variables, because of lack of sector information (was this intended?):
Here is the full file: diurnal_emiss.out

When main started, it created the first wrfout file fine, then crashed on the second one. In the rsl files there is the error:

 mediation_integrate: med_read_wrf_chem_emissions: Open file wrfchemi_12z_d01
 HOURLY EMISSIONS UPDATE TIME        0.0       0.0
mediation_integrate: med_read_wrf_chem_emissions: Read emissions for time 2015-10-11_18:00:00
mediation_integrate: med_read_wrf_chem_emissions: Skip emissions    1
d01 2015-10-11_18:00:00  input_wrf: begin
d01 2015-10-11_18:00:00 module_io.F: in wrf_inquire_filename
d01 2015-10-11_18:00:00  input_wrf: filestate =          103
d01 2015-10-11_18:00:00  input_wrf: dryrun =  F  switch           31
d01 2015-10-11_18:00:00 module_io.F: in wrf_inquire_filename
d01 2015-10-11_18:00:00  Error trying to read metadata

which I think is what causes this error to show up in main.bash.e*

MPI_ABORT was invoked on rank 11 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[d8s0b2.arc4.leeds.ac.uk:04943] 127 more processes have sent help message help-mpi-api.txt / mpi-abort
[d8s0b2.arc4.leeds.ac.uk:04943] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Does anyone know what might be causing the diurnal_emiss error, and whether this is what causes the crash in main?
Also, has anyone run the test case since the bug fix, and does it work OK for you? It could be an issue at my end.

Cheers

anthro_emis segmentation fault

Running anthro_emis (the /nobackup/WRFChem/anthro_emis version) with new emissions causes a segmentation fault (see error below).
The emissions netCDF files follow the same format as EDGAR-HTAP2 (but include an extra sector, 'emis_tot_no_awb'). The segmentation fault occurs both when reading in the individual sectors (of which there are 14) and when reading only the total (i.e. 1 sector). The fault seems to occur once 12 files have been read in (each file is 3.5 GB in size), and it always occurs after reading in the last file (so if more than 12 files are read in, e.g. 15, it occurs on the final file, file 15). The segmentation fault prevents the final statement 'anthro_emis complete' from being printed. However, the wrfchemi_00z and wrfchemi_12z files are generated and look reasonable.
I have tried following the online help on the GEOS-Chem website (http://wiki.seas.harvard.edu/geos-chem/index.php/Segmentation_faults) to find the root of the error. The GEOS-Chem website suggests the error arises from either:

  • an issue with values being out of bounds (e.g. negative) - this can't be the case for any of the files as minimum values are set to zero.
  • invalid memory access where there is a size mismatch between the input array and the array it is being added to in anthro_emis. However, this can't be the case either, as the same error occurs when only reading in the total emissions (i.e. only 1 sector).

error:

will use source file for C2H6

get_src_time_ndx; src_dir,src_fn =
/nobackup/ee15amg/wrf3.7.1_data/emissions/EDGARv52015_CAMS2016_MEIC2017/EDGARv5
_2015_CAMS_v4.2_2016_MEIC_v1.3_2017_Malley_C2H6_monthly_0.1x0.1.nc
get_src_time_ndx; interp_date,datesec,ntimes = 20170904 0
12
get_src_time_ndx; tndx = 9
aera_interp: raw dataset max value = 1.7067602E-08
aera_interp: raw dataset max indices = 2188 991
aera_interp: raw dataset max value = 7.5946782E-10
aera_interp: raw dataset max indices = 2048 1322
aera_interp: raw dataset max value = 3.3164188E-10
aera_interp: raw dataset max indices = 2314 1258
aera_interp: raw dataset max value = 1.1780937E-10
aera_interp: raw dataset max indices = 2178 1460
aera_interp: raw dataset max value = 8.0596892E-12
aera_interp: raw dataset max indices = 1111 1339
aera_interp: raw dataset max value = 0.0000000E+00
aera_interp: raw dataset max indices = 1 1
aera_interp: raw dataset max value = 1.3037157E-13
aera_interp: raw dataset max indices = 2854 851
aera_interp: raw dataset max value = 1.7423774E-08
aera_interp: raw dataset max indices = 2188 991
aera_interp: raw dataset max value = 1.7423773E-08
aera_interp: raw dataset max indices = 2188 991
aera_interp: raw dataset max value = 4.7226974E-13
aera_interp: raw dataset max indices = 1796 1415
aera_interp: raw dataset max value = 4.8074793E-13
aera_interp: raw dataset max indices = 1886 1401
aera_interp: raw dataset max value = 2.9458816E-12
aera_interp: raw dataset max indices = 922 1320
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
anthro_emis 0000000000479C33 for__signal_handl Unknown Unknown
libpthread-2.17.s 00007F9FA5BD45D0 Unknown Unknown Unknown
libc-2.17.so 00007F9FA587D71C cfree Unknown Unknown
anthro_emis 00000000004AE590 for_dealloc_alloc Unknown Unknown
anthro_emis 000000000042D658 Unknown Unknown Unknown
anthro_emis 000000000044A318 Unknown Unknown Unknown
anthro_emis 000000000040C5E2 Unknown Unknown Unknown
libc-2.17.so 00007F9FA581A495 __libc_start_main Unknown Unknown
anthro_emis 000000000040C4E9 Unknown Unknown Unknown

My anthro_emis.inp file is as follows:

anthro_dir = '/nobackup/ee15amg/wrf3.7.1_data/emissions/EDGARv52015_CAMS2016_MEIC2017'
domains = 1

src_file_prefix = 'EDGARv5_2015_CAMS_v4.2_2016_MEIC_v1.3_2017_Malley_'
src_file_suffix = '_monthly_0.1x0.1.nc'

src_names = 'CO(28)','NOx(30)','SO2(64)','NH3(17)','BC(12)','OC(12)','PM2.5(1)','BIGALK(72)','BIGENE(56)',
'C2H4(28)','C2H5OH(46)','C2H6(30)'

sub_categories = 'emis_ind', ! CAMS, industrial non-power+CAMS, fugitive emissions+CAMS, solvent emissions
'emis_dom', ! CAMS, residential energy and other + CAMS, solid waster and waste water
'emis_tra', ! CAMS, off road transport+CAMS, road transport
'emis_ene', ! CAMS, power generation
'emis_ship', ! CAMS, shipping
'emis_agr', ! CAMS, Agricultural soils+CAMS, Agricultural livestock
'emis_awb', ! CAMS, Agricultural waste burning
'emis_tot', ! CAMS, total with awb
'emis_tot_no_awb', ! CAMS, total without awb
'emis_cds', ! EDGAR-v5, aircraft - climbing and descent
'emis_crs', ! EDGAR-v5, aircraft - cruise
'emis_lto', ! EDGAR-v5 aircraft - landing and take off
'emis_1A1_1A2', ! EDGAR-HTAPv2.2, CH4 only, Energy manufacturing transformation
'emis_1A3a_c_d_e', ! EDGAR-HTAPv2.2, CH4 only, Non-road transportation
'emis_1A3b', ! EDGAR-HTAPv2.2, CH4 only, Road transportation
'emis_1A4', ! EDGAR-HTAPv2.2, CH4 only, Energy for buildings
'emis_1B1', ! EDGAR-HTAPv2.2, CH4 only, Fugitive from solid
'emis_1B2a', ! EDGAR-HTAPv2.2, CH4 only, Oil production and refineries
'emis_1B2b', ! EDGAR-HTAPv2.2, CH4 only, Gas production and distribution
'emis_2', ! EDGAR-HTAPv2.2, CH4 only, Industrial process and product use
'emis_4A', ! EDGAR-HTAPv2.2, CH4 only, Enteric fermentation
'emis_4B', ! EDGAR-HTAPv2.2, CH4 only, Manure management
'emis_4C_4D', ! EDGAR-HTAPv2.2, CH4 only, Agricultural soils
'emis_4F', ! EDGAR-HTAPv2.2, CH4 only, Agricultural waste burning
'emis_6A_6C', ! EDGAR-HTAPv2.2, CH4 only, Solid waste disposal
'emis_6B', ! EDGAR-HTAPv2.2, CH4 only, Waste water
'emis_7A' ! EDGAR-HTAPv2.2, CH4 only, Fossil Fuel Fires

serial_output = .false.
!data_yrs_offset = 2 ! make sure to update this!
data_yrs_offset = 1 ! make sure to update this!
emissions_zdim_stag = 1

! make sure to update these dates!
start_data_time = '2016-01-01_00:00:00'
stop_data_time = '2016-12-31_00:00:00'

emis_map = !'CO->CO(emis_tot)',
!'NO->0.8*NOx(emis_tot)',
!'NO2->0.2*NOx(emis_tot)',
!'SO2->SO2(emis_tot)',
!'NH3->NH3(emis_tot)',
!'ECI(a)->0.1*BC(emis_tot)',
!'ECJ(a)->0.9*BC(emis_tot)',
!'ORGI(a)->0.1*OC(emis_tot)',
!'PM25I(a)->0.1*PM2.5(emis_tot)'

    'CO_TRA->CO(emis_tra)','CO_IND->CO(emis_ind)',
        'CO_RES->CO(emis_dom)','CO_POW->CO(emis_ene)',
        'CO_SHP->CO(emis_ship)','CO_CDS->CO(emis_cds)',
        'CO_CRS->CO(emis_crs)','CO_LTO->CO(emis_lto)',
        'CO->CO(emis_tot)','CO_NO_AWB->CO(emis_tot_no_awb)',
    'CO_AWB->CO(emis_awb)','CO_AGR->CO(emis_agr)',

        'NO_TRA->0.8*NOx(emis_tra)','NO_IND->0.8*NOx(emis_ind)',
        'NO_RES->0.8*NOx(emis_dom)','NO_POW->0.8*NOx(emis_ene)',
        'NO_SHP->0.8*NOx(emis_ship)','NO_CDS->0.8*NOx(emis_cds)',
        'NO_CRS->0.8*NOx(emis_crs)','NO_LTO->0.8*NOx(emis_lto)',
        'NO->0.8*NOx(emis_tot)','NO_NO_AWB->0.8*NOx(emis_tot_no_awb)',
    'NO_AWB->0.8*NOx(emis_awb)','NO_AGR->0.8*NOx(emis_agr)',

        'NO2_TRA->0.2*NOx(emis_tra)','NO2_IND->0.2*NOx(emis_ind)',
        'NO2_RES->0.2*NOx(emis_dom)','NO2_POW->0.2*NOx(emis_ene)',
        'NO2_SHP->0.2*NOx(emis_ship)','NO2_CDS->0.2*NOx(emis_cds)',
        'NO2_CRS->0.2*NOx(emis_crs)','NO2_LTO->0.2*NOx(emis_lto)',
        'NO2->0.2*NOx(emis_tot)','NO2_NO_AWB->0.2*NOx(emis_tot_no_awb)',
    'NO2_AWB->0.2*NOx(emis_awb)','NO2_AGR->0.2*NOx(emis_agr)',

        'SO2_TRA->SO2(emis_tra)','SO2_IND->SO2(emis_ind)',
        'SO2_RES->SO2(emis_dom)','SO2_POW->SO2(emis_ene)',
        'SO2_SHP->SO2(emis_ship)','SO2_CDS->SO2(emis_cds)',
        'SO2_CRS->SO2(emis_crs)','SO2_LTO->SO2(emis_lto)',
        'SO2->SO2(emis_tot)','SO2_NO_AWB->SO2(emis_tot_no_awb)',
    'SO2_AWB->SO2(emis_awb)','SO2_AGR->SO2(emis_agr)',

    'NH3_TRA->NH3(emis_tra)','NH3_IND->NH3(emis_ind)',
        'NH3_RES->NH3(emis_dom)','NH3_POW->NH3(emis_ene)',
        'NH3_SHP->NH3(emis_ship)','NH3_CDS->NH3(emis_cds)',
        'NH3_CRS->NH3(emis_crs)','NH3_LTO->NH3(emis_lto)',
        'NH3->NH3(emis_tot)','NH3_NO_AWB->NH3(emis_tot_no_awb)',
        'NH3_AWB->NH3(emis_awb)','NH3_AGR->NH3(emis_agr)',

    'ECI_TRA(a)->0.1*BC(emis_tra)','ECI_IND(a)->0.1*BC(emis_ind)',
        'ECI_RES(a)->0.1*BC(emis_dom)','ECI_POW(a)->0.1*BC(emis_ene)',
        'ECI_SHP(a)->0.1*BC(emis_ship)','ECI_CDS(a)->0.1*BC(emis_cds)',
        'ECI_CRS(a)->0.1*BC(emis_crs)','ECI_LTO(a)->0.1*BC(emis_lto)',
        'ECI(a)->0.1*BC(emis_tot)','ECI_NO_AWB(a)->0.1*BC(emis_tot_no_awb)',
        'ECI_AGRI(a)->0.1*BC(emis_agr)','ECI_AWB(a)->0.1*BC(emis_awb)',
   
        'ECJ_TRA(a)->0.9*BC(emis_tra)','ECJ_IND(a)->0.9*BC(emis_ind)',
        'ECJ_RES(a)->0.9*BC(emis_dom)','ECJ_POW(a)->0.9*BC(emis_ene)',
        'ECJ_SHP(a)->0.9*BC(emis_ship)','ECJ_CDS(a)->0.9*BC(emis_cds)',
        'ECJ_CRS(a)->0.9*BC(emis_crs)','ECJ_LTO(a)->0.9*BC(emis_lto)',
        'ECJ(a)->0.9*BC(emis_tot)','ECJ_NO_AWB(a)->0.9*BC(emis_tot_no_awb)',
        'ECJ_AGRI(a)->0.9*BC(emis_agr)','ECJ_AWB(a)->0.9*BC(emis_awb)',
               
    'ORGI_TRA(a)->0.1*OC(emis_tra)','ORGI_IND(a)->0.1*OC(emis_ind)',
        'ORGI_RES(a)->0.1*OC(emis_dom)','ORGI_POW(a)->0.1*OC(emis_ene)',
        'ORGI_SHP(a)->0.1*OC(emis_ship)','ORGI_CDS(a)->0.1*OC(emis_cds)',
        'ORGI_CRS(a)->0.1*OC(emis_crs)','ORGI_LTO(a)->0.1*OC(emis_lto)',
        'ORGI(a)->0.1*OC(emis_tot)','ORGI_NO_AWB(a)->0.1*OC(emis_tot_no_awb)',
        'ORGI_AGR(a)->0.1*OC(emis_agr)','ORGI_AWB(a)->0.1*OC(emis_awb)',

        'PM25I_TRA(a)->0.1*PM2.5(emis_tra)','PM25I_IND(a)->0.1*PM2.5(emis_ind)',
        'PM25I_RES(a)->0.1*PM2.5(emis_dom)','PM25I_POW(a)->0.1*PM2.5(emis_ene)',
        'PM25I_SHP(a)->0.1*PM2.5(emis_ship)','PM25I_CDS(a)->0.1*PM2.5(emis_cds)',
        'PM25I_CRS(a)->0.1*PM2.5(emis_crs)','PM25I_LTO(a)->0.1*PM2.5(emis_lto)',
        'PM25I(a)->0.1*PM2.5(emis_tot)','PM25I_NO_AWB(a)->0.1*PM2.5(emis_tot_no_awb)',
        'PM25I_AGR(a)->0.1*PM2.5(emis_agr)','PM25I_AWB(a)->0.1*PM2.5(emis_awb)',

        'BIGALK_TRA->BIGALK(emis_tra)','BIGALK_IND->BIGALK(emis_ind)',
        'BIGALK_RES->BIGALK(emis_dom)','BIGALK_POW->BIGALK(emis_ene)',
        'BIGALK_SHP->BIGALK(emis_ship)','BIGALK_CDS->BIGALK(emis_cds)',
        'BIGALK_CRS->BIGALK(emis_crs)','BIGALK_LTO->BIGALK(emis_lto)',
        'BIGALK->BIGALK(emis_tot)','BIGALK_NO_AWB->BIGALK(emis_tot_no_awb)',
        'BIGALK_AGR->BIGALK(emis_agr)','BIGALK_AWB->BIGALK(emis_awb)',

        'BIGENE_TRA->BIGENE(emis_tra)','BIGENE_IND->BIGENE(emis_ind)',
        'BIGENE_RES->BIGENE(emis_dom)','BIGENE_POW->BIGENE(emis_ene)',
        'BIGENE_SHP->BIGENE(emis_ship)','BIGENE_CDS->BIGENE(emis_cds)',
        'BIGENE_CRS->BIGENE(emis_crs)','BIGENE_LTO->BIGENE(emis_lto)',
        'BIGENE->BIGENE(emis_tot)','BIGENE_NO_AWB->BIGENE(emis_tot_no_awb)',
        'BIGENE_AGR->BIGENE(emis_agr)','BIGENE_AWB->BIGENE(emis_awb)',

        'C2H4_TRA->C2H4(emis_tra)','C2H4_IND->C2H4(emis_ind)',
        'C2H4_RES->C2H4(emis_dom)','C2H4_POW->C2H4(emis_ene)',
        'C2H4_SHP->C2H4(emis_ship)','C2H4_CDS->C2H4(emis_cds)',
        'C2H4_CRS->C2H4(emis_crs)','C2H4_LTO->C2H4(emis_lto)',
        'C2H4->C2H4(emis_tot)','C2H4_NO_AWB->C2H4(emis_tot_no_awb)',
        'C2H4_AGR->C2H4(emis_agr)','C2H4_AWB->C2H4(emis_awb)',

        'C2H6_TRA->C2H6(emis_tra)','C2H6_IND->C2H6(emis_ind)',
        'C2H6_RES->C2H6(emis_dom)','C2H6_POW->C2H6(emis_ene)',
        'C2H6_SHP->C2H6(emis_ship)','C2H6_CDS->C2H6(emis_cds)',
        'C2H6_CRS->C2H6(emis_crs)','C2H6_LTO->C2H6(emis_lto)',
        'C2H6->C2H6(emis_tot)','C2H6_NO_AWB->C2H6(emis_tot_no_awb)',
        'C2H6_AGR->C2H6(emis_agr)','C2H6_AWB->C2H6(emis_awb)'

/

mpirun unable to find real.exe

The runs I have been submitting do not create the wrfinput_d01 and wrfbdy_d01 files made from real.exe. The executable is copied to the run folder but it still cannot "find" it.

The pre.bash.exxxxxx reads:
Name of source model =>NCEP GFS Analysis GRID 4

mpirun was unable to find the specified executable file, and therefore
did not launch the job. This error was first reported for process
rank 0; it may have occurred for other processes as well.

NOTE: A common cause for this error is misspelling a mpirun command
line parameter option (remember that mpirun interprets the first
unrecognized command line token as the executable).

Node: d8s0b4
Executable: real.exe
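A hedged set of checks (run them in the directory pre.bash was working in): mpirun reporting "Executable: real.exe" usually means the file is missing, not executable, or not runnable on that node, so it is worth ruling each of these out in turn.

ls -l real.exe     # is the file present, and does it have execute permission?
file real.exe      # is it a real binary, or a symlink whose target is missing?
ldd real.exe       # do all of its shared libraries resolve on this node?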

From what I can tell, real.exe mostly requires a namelist.input file, which is:
&time_control ! time
start_year = 2014, 2014, ! start year
start_month = 01, 01, ! start month
start_day = 02, 02, ! start day
start_hour = 12, 12, ! start hour
start_minute = 00, 00, ! start minute
start_second = 00, 00, ! start second
end_year = 2014, 2014, ! end year
end_month = 01, 01, ! end month
end_day = 05, 05, ! end day
end_hour = 00, 00, ! end hour
end_minute = 00, 00, ! end minute
end_second = 00, 00, ! end second
interval_seconds = 21600, ! interval between meteorological data files
input_from_file = .true., .true., ! whether the nested run will have input files for domains other than domain 1
history_interval = 60, 60, ! history output file interval in minutes (integer only)
frames_per_outfile = 1, 1, ! number of output times bulked into each history file
restart = .false., ! whether this run is a restart
restart_interval = 1440, ! restart output file interval in minutes
io_form_history = 2 ! NetCDF
io_form_restart = 2 ! NetCDF
io_form_input = 2 ! NetCDF
io_form_boundary = 2 ! NetCDF
debug_level = 400 ! debugging level
iofields_filename = "iofields" ,"iofields" ! an option to request particular variables to appear in output
io_form_auxinput7 = 2, ! biomass burning-emissions input (wrffirechemi_d01) data format is WRF netCDF
auxinput7_inname = 'wrffirechemi_d_' ! biomass burning filename
auxinput7_interval_m = 60, ! biomass burning time interval, minutes
frames_per_auxinput7 = 1, ! biomass burning files per time interval
io_form_auxinput12 = 0, ! restart file type
/

&domains ! domains - dimensions, nesting, parameters
time_step = 180, ! timestep, meteorology, seconds
time_step_fract_num = 0, ! numerator for fractional time step
time_step_fract_den = 1, ! denominator for fractional time step
max_dom = 1, ! number of domains
e_we = 140, 154, ! westeast dimension
e_sn = 140, 154, ! southnorth dimension
e_vert = 33, 33, ! vertical dimension
num_metgrid_levels = 27, ! number of vertical levels in WPS output
num_metgrid_soil_levels = 4, ! number of soil levels or layers in WPS output
dx = 30000, 2777.778, ! westeast resolution, metres
dy = 30000, 2777.778, ! southnorth resolution, metres
grid_id = 1, 2, ! grid ID
parent_id = 1, 1, ! parent ID
i_parent_start = 1, 70, ! x coordinate of the lower-left corner
j_parent_start = 1, 60, ! y coordinate of the lower-left corner
parent_grid_ratio = 1, 9, ! nesting ratio relative to the domain’s parent
parent_time_step_ratio = 1, 9, ! parent-to-nest time step ratio
feedback = 1, ! feedback from nest to its parent domain
/

&physics ! physics
mp_physics = 10, 10, ! microphysics scheme, 10 = Morrison 2-moment scheme
progn = 1, 1, ! prognostic number density, switch to use mix-activate scheme
ra_lw_physics = 4, 4, ! longwave radiation scheme, 4 = RRTMG
ra_sw_physics = 4, 4, ! shortwave radiation scheme, 4 = RRTMG
radt = 30, 30, ! minutes between radiation physics calls, recommended 1 minute per km of dx (e.g. 10 for 10 km grid); use the same value for all nests
sf_sfclay_physics = 5, 5, ! surface layer physics option, 5 = MYNN (Ravan's suggestion)
sf_surface_physics = 2, 2, ! land surface physics option, 2 = NOAH
bl_pbl_physics = 5, 5, ! boundary layer physics option, 5 = MYNN 2.5
bldt = 0, 0, ! minutes between boundary-layer physics calls, 0 = call every timestep
cu_physics = 5, 0, ! cumulus parameterization option, 5 = Grell 3D, 0 = off
cudt = 0, ! minutes between cumulus physics calls; should be set to 0 when using all cu_physics except Kain-Fritsch
cugd_avedx = 1, ! number of grid boxes over which subsidence is spread, set to 3 for 4km run, 1 for 36km
isfflx = 1, ! heat and moisture fluxes from the surface for real-data cases and when a PBL is used
ifsnow = 1, ! snow-cover effects
icloud = 1, ! cloud effect to the optical depth in radiation
surface_input_source = 1, ! where landuse and soil category data come from, 1 = WPS
num_soil_layers = 4, ! number of soil levels or layers in WPS output
sf_urban_physics = 1, ! activate urban canopy model, 1 = single layer, 2 = multi layer
mp_zero_out = 2, ! this keeps moisture variables above a threshold value ≥0
mp_zero_out_thresh = 1.e-8, ! critical value for moisture variable threshold, below which moisture arrays (except for Qv) are set to zero
cu_rad_feedback = .true., .false., ! sub-grid cloud effect to the optical depth in radiation
cu_diag = 1, 0, ! Additional time-averaged diagnostics from cu_physics
slope_rad = 0, 1, ! use slope-dependent radiation
topo_shading = 0, 1, ! applies neighboring-point shadow effects
num_land_cat = 20, ! number of land categories in input data
/

&fdda ! FDDA - options for grid, obs and spectral nudging
grid_fdda = 1, 0, ! grid nudging
gfdda_inname = "wrffdda_d", ! fdda filenames produced
gfdda_end_h = 10000, 0, ! time (hr) to stop nudging after the start of the forecast
gfdda_interval_m = 360, 0, ! time interval (in mins) between analysis times
if_no_pbl_nudging_uv = 1, 0, ! nudging of u and v in the PBL, 0 = yes, 1 = no
if_no_pbl_nudging_t = 1, 0, ! nudging of t in the PBL, 0 = yes, 1 = no
if_no_pbl_nudging_q = 1, 0, ! nudging of q in the PBL, 0 = yes, 1 = no
if_zfac_uv = 0, 0, ! nudge u and v in all layers, 0 = yes, 1 = limit to k_zfac_uv layers
k_zfac_uv = 2, ! model level below which nudging is switched off for u and v
if_zfac_t = 0, 0, ! nudge t in all layers, 0 = yes, 1 = limit to k_zfac_t layers
k_zfac_t = 2, ! model level below which nudging is switched off for t
if_zfac_q = 0, 0, ! nudge q in all layers, 0 = yes, 1 = limit to k_zfac_q layers
k_zfac_q = 2, ! model level below which nudging is switched off for q
guv = 0.0006, 0.0006, ! nudging coefficient for u and v (s-1)
gt = 0.0006, 0.0006, ! nudging coefficient for t (s-1)
gq = 0.0006, 0.0006, ! nudging coefficient for q (s-1)
if_ramping = 0, ! 0 = nudging ends as a step function, 1 = ramping nudging down at the end of the period
dtramp_min = 360, ! time (min) for ramping function
io_form_gfdda = 2, ! 2 = NetCDF
/

&dynamics ! dynamics - diffusion, damping options, advection options
rk_ord = 3, ! time-integration scheme option, 3 = Runge-Kutta 3rd order
w_damping = 1, ! vertical velocity damping flag, 1 = with damping
diff_opt = 1, 1, ! turbulence and mixing option, 1 = evaluates 2nd order diffusion term on coordinate surfaces
km_opt = 4, 4, ! eddy coefficient option, 4 = horizontal Smagorinsky first order closure
diff_6th_opt = 0, 0, ! 6th-order numerical diffusion, 0 = none
diff_6th_factor = 0.12, ! 6th-order numerical diffusion nondimensional rate
base_temp = 290. ! base state temperature (K)
damp_opt = 3, ! upper-level damping flag, 3 = Rayleigh damping
zdamp = 5000., 5000., ! damping depth (m) from model top
dampcoef = 0.2, 0.2, ! damping coefficient
khdif = 0, 0, ! horizontal diffusion constant (m2/s)
kvdif = 0, 0, ! vertical diffusion constant (m2/s)
non_hydrostatic = .true., ! running the model in nonhydrostatic mode
moist_adv_opt = 2, 2, ! advection options for moisture, 2 = monotonic
chem_adv_opt = 2, 2, ! advection options for chemistry, 2 = monotonic
scalar_adv_opt = 2, 2, ! advection options for scalars, 2 = monotonic
tke_adv_opt = 2, 2, ! advection options for TKE, 2 = monotonic
do_avgflx_em = 1, 1, ! outputs time-averaged masscoupled advective velocities, 1 = on
/

&bdy_control ! Boundary condition control
spec_bdy_width = 5, ! total number of rows for specified boundary value nudging
spec_zone = 1, ! number of points in specified zone
relax_zone = 4, ! number of points in relaxation zone
specified = .true., ! specified boundary condition
nested = .false., .true., ! nested boundary conditions
/

&grib2
/

&namelist_quilt ! options for asynchronized I/O for MPI applications
nio_tasks_per_group = 0, ! # of processors used for IO quilting per IO group
nio_groups = 0 ! number of quilting groups
/

&chem ! chemistry
kemit = 1, ! number of vertical levels in the emissions input data file
chem_opt = 202, 202, 202, ! chemistry option, 201 = MOZART-MOSAIC (4 bins + simplified SOA + no aqueous chemistry), 202 = MOZART-MOSAIC (4 bins + VBS SOA + aqueous chemistry).
bioemdt = 3.0, 3.0, 3.0, ! timestep, biogenic, minutes
photdt = 30., 10., 10., ! timestep, photolysis, minutes
chemdt = 3.0, 3.0, 3.0, ! timestep, chemistry, minutes
io_style_emissions = 2, ! anthropogenic emissions, files, two 12-h emissions data files used
emiss_inpt_opt = 102, 102, 102, ! RADM2 emission speciation adapted after reading data file to follow the RADM2/SORGAM framework (including isoprene)
emiss_opt = 10, 10, 10, ! anthropogenic emissions, setting, 10 = MOZART (MOZART + aerosols) emissions
chem_in_opt = 1, 1, 1, ! initialize chemistry, 1 = uses previous simulation data
phot_opt = 4, 4, 4, ! photolysis option, 1 = Full TUV, 3 = Madronich F-TUV, 4 = New full TUV scheme
gas_drydep_opt = 1, 1, 1, ! dry deposition of gas species, 1 = on
aer_drydep_opt = 1, 1, 1, ! dry deposition of aerosols, 1 = on
bio_emiss_opt = 3, 3, 3, ! includes MEGAN biogenic emissions online based upon the weather, land use data
gas_bc_opt = 1, 1, 1, ! gas boundary conditions, 1 = default
gas_ic_opt = 1, 1, 1, ! gas initial conditions, 1 = default
aer_bc_opt = 1, 1, 1, ! aerosol boundary conditions, 1 = default
aer_ic_opt = 1, 1, 1, ! aerosol initial conditions, 1 = default
gaschem_onoff = 1, 1, 1, ! gas phase chemistry, 1 = on
aerchem_onoff = 1, 1, 1, ! aerosol chemistry, 1 = on
wetscav_onoff = 1, 1, 1, ! wet scavenging in stratocumulus clouds, 1 = on
cldchem_onoff = 0, 1, 1, ! aqueous chemistry in stratocumulus clouds, 1 = on
vertmix_onoff = 1, 1, 1, ! vertical turbulent mixing, 1 = on
chem_conv_tr = 1, 0, 0, ! subgrid convective transport, 1 = on
conv_tr_wetscav = 1, 0, 0, ! wet scavenging in cumulus clouds, subgrid, 1 = on
conv_tr_aqchem = 1, 0, 0, ! aqueous chemistry in cumulus clouds, subgrid, 1 = on
seas_opt = 2, ! sea salt emissions, 2 = MOSAIC or MADE/SORGAM sea salt emissions
dust_opt = 3, ! dust emissions, 3 = GOCART dust emissions with AFWA modifications
dmsemis_opt = 1, ! include GOCART dms emissions from sea surface
biomass_burn_opt = 2, 2, 2, ! biomass burning emissions, 2 = MOZCART
plumerisefire_frq = 30, 30, 30, ! time interval for calling the biomass burning plume rise subroutine
scale_fire_emiss = .true., .true., .true., ! must be equal to .true. when running with FINN emissions
aer_ra_feedback = 1, 1, 1, ! feedback from the aerosols to the radiation schemes, 1 = on
ne_area = 500, ! total number of chemical species in the chemical namelist; best to set to a value larger than the number of chemical species
opt_pars_out = 1, ! include optical properties in output
have_bcs_chem = .true., .false., .false., ! gets lateral boundary data from wrfbdy (.true.) or idealized profile (.false.)
have_bcs_upper = .false., .false., .false., ! upper boundary condition for chemical species
aer_op_opt = 2, 2, 2, ! aerosol optical properties, 1 = volume, 2 = approximate Maxwell-Garnet, 3 = complex volume-mixing, 4 = complex Maxwell-Garnet, 5 = complex core-shell
bbinjectscheme = 1, 1, 1, ! 0 = plumerise (biomass_burn_opt), 1 = all ground level (recommended), 2 = flaming evenly in BL, 3 = flaming top BL, 4 = flaming injected at specific height
/

Dust (dust_opt = 3) is not working in WRFChem4.2

What happened:
Dust from GOCART with AFWA modifications (dust_opt = 3) appears not to be working in WRFChem4.2: concentrations of coarse other inorganic aerosol (oin_a04) are too low in high-dust areas. The variable EROD (dust_erosion_dimension) is present in the geogrid output file (geo_em.d01.nc), so dust is being preprocessed and the problem is not with GEOGRID.TBL. The issue is that dust is not being used/activated in the main WRFChem run.

What you expected to happen:
Dust to be included when using dust_opt = 3 with chem_opt = 202, and for reasonable concentrations of oin_a04.

Minimal Complete Verifiable Example:
Default CEMAC WRFotron for WRFChem4.2.
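
As a quick way to confirm both halves of this diagnosis, the geogrid and wrfout file headers can be inspected directly (a sketch; the wrfout filename is a placeholder for whichever output time you check):

# EROD should be present (and non-zero over dust source regions) in the geogrid output
ncdump -h geo_em.d01.nc | grep -i erod
ncdump -v EROD geo_em.d01.nc | tail

# the coarse other-inorganic bin in the WRFChem output (replace <date> with an actual output time)
ncdump -h wrfout_d01_<date> | grep -i oin_a04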

./real.exe: error while loading shared libraries: libifport.so.5: cannot open shared object file: No such file or directory

I am trying to install WRF-Chem version 4.2.1 using an Intel compiler on an HPC. I installed mpich, netcdf-c, netcdf-fortran, zlib and jasper to do so. However, after compiling WRF-Chem, I tried to run real.exe and got the error below:
./real.exe: error while loading shared libraries: libifport.so.5: cannot open shared object file: No such file or directory

I ran the "ldd real.exe" command, and here is the result:
ldd real.exe
linux-vdso.so.1 => (0x00007fffc5acf000)
libnetcdff.so.7 => /backup3/seti/setii/WRF-CHEM/LIBRARIES/netcdf/lib/libnetcdff.so.7 (0x00002b5db2dc8000)
libmpifort.so.12 => /backup3/seti/setii/WRF-CHEM/LIBRARIES/mpich/lib/libmpifort.so.12 (0x00002b5db3415000)
libmpi.so.12 => /backup3/seti/setii/WRF-CHEM/LIBRARIES/mpich/lib/libmpi.so.12 (0x00002b5db3716000)
libm.so.6 => /lib64/libm.so.6 (0x0000003943600000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003943e00000)
libc.so.6 => /lib64/libc.so.6 (0x0000003943200000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x0000003946e00000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000003943a00000)
libifport.so.5 => not found
libifcoremt.so.5 => not found
libimf.so => not found
libsvml.so => not found
libintlc.so.5 => not found
librt.so.1 => /lib64/librt.so.1 (0x0000003944600000)
/lib64/ld-linux-x86-64.so.2 (0x00002b5db2ba5000)
libifport.so.5 => not found
libifcoremt.so.5 => not found
libimf.so => not found
libintlc.so.5 => not found
libsvml.so => not found
libimf.so => not found
libsvml.so => not found
libirng.so => not found
libintlc.so.5 => not found

module list
Currently Loaded Modulefiles:

  1) mathematica11   2) intel-mpi-5   3) intel-ics-2015

conda list

packages in environment at /opt/intel/intelpython3:

asn1crypto 0.24.0 py36_3 file:///opt/intel/conda_channel
bzip2 1.0.6 17 intel
certifi 2018.1.18 py36_2 file:///opt/intel/conda_channel
cffi 1.11.5 py36_3 file:///opt/intel/conda_channel
chardet 3.0.4 py36_3 file:///opt/intel/conda_channel
conda 4.3.31 py36_3 file:///opt/intel/conda_channel
conda-env 2.6.0 0 intel
cryptography 2.3 py36_1 file:///opt/intel/conda_channel
cycler 0.10.0 py36_7 intel
cython 0.29.6 py36h7b7c402_0 intel
daal 2019.4 intel_243 file:///opt/intel/conda_channel
daal4py 2019.4 py36h7b7c402_0 intel
freetype 2.9 3 intel
funcsigs 1.0.2 py36_7 intel
icc_rt 2019.4 intel_243 file:///opt/intel/conda_channel
idna 2.6 py36_3 file:///opt/intel/conda_channel
impi_rt 2019.4 intel_243 file:///opt/intel/conda_channel
intel-openmp 2019.4 intel_243 file:///opt/intel/conda_channel
intelpython 2019.4 0 intel
ipp 2019.4 intel_243 file:///opt/intel/conda_channel
kiwisolver 1.0.1 py36_2 intel
libffi 3.2.1 11 intel
libpng 1.6.36 2 intel
llvmlite 0.27.1 py36_0 intel
matplotlib 3.0.3 py36_4 intel
mkl 2019.4 intel_243 file:///opt/intel/conda_channel
mkl-service 1.0.0 py36h7b7c402_11 intel
mkl_fft 1.0.11 py36h7b7c402_2 intel
mkl_random 1.0.2 py36h7b7c402_4 intel
mpi4py 3.0.0 py36_3 intel
numba 0.42.1 np116py36_2 intel
numexpr 2.6.8 py36_2 intel
numpy 1.16.2 py36h7b7c402_0 intel
numpy-base 1.16.2 py36_0 intel
openssl 1.0.2r 2 file:///opt/intel/conda_channel
pandas 0.24.1 py36_3 intel
pip 10.0.1 py36_0 file:///opt/intel/conda_channel
pycosat 0.6.3 py36_3 file:///opt/intel/conda_channel
pycparser 2.18 py36_2 file:///opt/intel/conda_channel
pyeditline 2.0.0 py36_0 intel
pyopenssl 17.5.0 py36_2 file:///opt/intel/conda_channel
pyparsing 2.2.0 py36_2 intel
pysocks 1.6.7 py36_1 file:///opt/intel/conda_channel
python 3.6.8 7 file:///opt/intel/conda_channel
python-dateutil 2.6.0 py36_12 intel
pytz 2018.4 py36_3 intel
pyyaml 4.1 py36_3 intel
requests 2.20.1 py36_1 file:///opt/intel/conda_channel
ruamel_yaml 0.11.14 py36_4 file:///opt/intel/conda_channel
scikit-learn 0.20.3 py36h7b7c402_5 intel
scipy 1.2.1 py36h7b7c402_3 intel
setuptools 39.0.1 py36_0 file:///opt/intel/conda_channel
six 1.11.0 py36_3 file:///opt/intel/conda_channel
smp 0.1.4 py36_0 intel
sqlite 3.27.2 4 intel
tbb 2019.6 intel_243 file:///opt/intel/conda_channel
tbb4py 2019.6 py36_intel_0 file:///opt/intel/conda_channel
tcl 8.6.4 24 intel
tk 8.6.4 29 intel
urllib3 1.24.1 py36_2 file:///opt/intel/conda_channel
wheel 0.31.0 py36_3 file:///opt/intel/conda_channel
wrf-python 1.3.2 py38h7eb8c7e_1 https://
xgboost 0.81 py36_0 intel
xz 5.2.3 2 intel
yaml 0.1.7 2 intel
zlib 1.2.11 5 intel

I also noticed a discussion related to a similar problem (#14), but I could not follow it because I am new to this field (I do not know what pre.bash and config.bash are). So may I ask you to kindly help me sort this issue out? I have been struggling with it for 3 months and have made no progress on my dissertation. Please help.

Any suggestions appreciated,
Best wishes,
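
Not an authoritative fix, but libifport.so.5, libifcoremt.so.5, libimf.so, libsvml.so and libintlc.so.5 are all Intel compiler runtime libraries, so the usual cause is that the Intel runtime is not on LD_LIBRARY_PATH when real.exe is launched. A minimal sketch, assuming the intel-ics-2015 module (or the compilervars.sh script shipped with the compiler) provides those paths on this system:

# load the same Intel environment that was used to compile WRF-Chem (module name from "module list" above)
module load intel-ics-2015
# or, if modules are unavailable inside the batch job, source the compiler environment directly
# (hypothetical install path - adjust to your system)
# source /opt/intel/composerxe/bin/compilervars.sh intel64

# the runtime libraries should now resolve before launching real.exe
ldd real.exe | grep "not found"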

Not finding .exe files

I am getting a whole lot of errors during pre.bash when trying the default run with a clean version of WRFotron.

openmpi/3.1.4 conflicts with loaded module 'intelmpi/2019.4.243'
utility.c(742):WARN:60: Flush error on file 'stdout'
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 64: ungrib.exe: command not found
[proxy:0:[email protected]] HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:117): execvp error on file geogrid.exe (No such file or directory)
[proxy:0:[email protected]] HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:117): execvp error on file metgrid.exe (No such file or directory)
rm: cannot remove ‘FILE*’: No such file or directory
rm: cannot remove ‘namelist.input’: No such file or directory
[proxy:0:[email protected]] HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:117): execvp error on file real.exe (No such file or directory)
mv: cannot stat ‘rsl*’: No such file or directory
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 112: megan_bio_emiss: command not found
mv: cannot stat ‘wrfbiochemi_d01’: No such file or directory
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 124: ncatted: command not found
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 129: ncks: command not found
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 134: wesely: command not found
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 136: exo_coldens: command not found
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 145: anthro_emis: command not found
ln: failed to create symbolic link ‘./anthro_emis.inp’: File exists
cp: cannot stat ‘/nobackup/WRFChem/WRF_UoM_EMIT/WRF_UoM_EMIT/final_output’: No such file or directory
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 151: ncl: command not found
Traceback (most recent call last):
  File "sum_sector_emiss_wrfchemi.py", line 2, in <module>
    import xarray as xr
ModuleNotFoundError: No module named 'xarray'
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 160: fire_emis: command not found
[proxy:0:[email protected]] HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:117): execvp error on file real.exe (No such file or directory)
mv: cannot stat ‘rsl*’: No such file or directory
/var/spool/sge_prod/d10s2b1/job_scripts/699115: line 212: mozbc: command not found
cp: cannot stat ‘/nobackup/eebjs/simulation_WRFChem4.2_test/restart/base/wrfrst_d01_2016-10-12_00:00:00’: No such file or directory
cp: cannot stat ‘/home/home01/eebjs/github_wrfotron/WRFotron_githubclean/WRFotron/pp_concat_regrid.py’: No such file or directory
cp: cannot stat ‘/home/home01/eebjs/github_wrfotron/WRFotron_githubclean/WRFotron/pp_concat_regrid.bash’: No such file or directory
rm: cannot remove ‘met_em*’: No such file or directory

The errors relating to missing or conflicting modules seemed to be fixed by switching to the '_manual' versions of config.bash and pre.bash (while keeping /nobackup/WRFChem/ instead of /nobackup/${USER}).

But the errors about not finding executables remain. Where should these executables be? I can't see them in /nobackup/WRFChem/
Maybe it would be better to have an absolute path constructed from variables in the bash scripts rather than using a relative path. But I'm not sure where the .exe files should come from when using the CEMAC version.
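
For illustration, a minimal sketch of that absolute-path idea, assuming config.bash exports (or could export) variables pointing at the WPS and WRF build directories; the variable names and paths here are placeholders, not necessarily what the CEMAC version uses:

# in config.bash (hypothetical variable names and paths)
export WPSdir=/nobackup/WRFChem/WPS
export WRFdir=/nobackup/WRFChem/WRFChem4.2

# in pre.bash, call the executables via absolute paths instead of relying on PATH
${WPSdir}/ungrib.exe
mpirun -np ${nproc} ${WPSdir}/geogrid.exe
mpirun -np ${nproc} ${WRFdir}/main/real.exe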

Thanks

Benchmarking and testing with each release

Following the WRFChem technical meeting on 13/07/2020, it was suggested that it would be useful to have a standard benchmark test for each WRFotron release to see how performance varies between releases. This would use the default settings and executables for each release and evaluate the model against surface measurements and satellite observations. It could also include other outputs of interest which are not directly evaluated. We discussed that this is likely to be a single domain over China, due to our extensive measurements there, though other suggestions are welcome too.

Please, add/edit/remove to the list below which you think would be useful for these benchmarks:

  • Air quality related concentrations:
    • PM2.5, PM10, O3, NO2, SO2, CO, aerosol components < 2.5 um (BC, OC, NH4, NO3, SO4, OIN)
  • Atmospheric burdens (sulphate, OH)
  • AOD550 (surface / column)
  • Meteorology:
    • Wind speed, wind direction, precipitation, temperature, pressure.
  • Metrics:
    • NMBF, NMAEF (see the sketch after this list)

Help for this work is welcome.
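
For reference, a minimal sketch of how NMBF and NMAEF could be computed from paired model/observation values, following the factor-based definitions of Yu et al. (2006); pairs.csv with model,obs columns is a hypothetical input file:

# NMBF = sum(M)/sum(O) - 1 when the model overestimates on average, else 1 - sum(O)/sum(M)
# NMAEF uses sum(|M - O|) over the same switched denominator
awk -F, '{ m += $1; o += $2; a += ($1 > $2 ? $1 - $2 : $2 - $1) }
         END { if (m >= o) printf "NMBF = %.3f  NMAEF = %.3f\n", m/o - 1, a/o;
               else        printf "NMBF = %.3f  NMAEF = %.3f\n", 1 - o/m, a/m }' pairs.csv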

Parallelise postprocessing.py

What happened:
postprocessing.py does not run in parallel due to KMP_AFFINITY disabling multi-threading.

Minimal Complete Verifiable Example:
post.bash returns:

OMP: Warning #181: GOMP_CPU_AFFINITY: ignored because KMP_AFFINITY has been defined
OMP: Warning #123: Ignoring invalid OS proc ID 10.
OMP: Warning #123: Ignoring invalid OS proc ID 12.
OMP: Warning #123: Ignoring invalid OS proc ID 14.

Potential solution?:
Utilising OpenMP within wrf-python.

Within postprocessing.py, this may be something similar to:

import sys

from wrf import omp_set_num_threads, omp_get_max_threads
from wrf import omp_set_schedule, omp_get_schedule, OMP_SCHED_GUIDED

omp_set_num_threads(int(sys.argv[4]))  # number of OpenMP threads, passed in as the 4th argument
omp_set_schedule(OMP_SCHED_GUIDED, 0)  # guided scheduling with the default chunk size
sched, modifier = omp_get_schedule()   # confirm the schedule that was set

And within post.bash, add a 4th positional argument:

python ${pyPpScript} ${inFile} tmp_${outFile} ${WRFdir} ${nprocPost}

However, parallelisation within post.bash is currently set up to loop over the whole postproc function, so this solution will need rethinking.

pp_concat_regrid.bash loading the right module

Hi Luke and Helen. Sorry I can't work this one out. @ailishgraham and I are instructing some MRes students on WRF-Chem and WRFotron, and I am writing some instructions on how to post process the output, and have been going through the process myself.

using_cemac_conda=$((module list -t) |& grep -i wrfchemconda | wc -c)
if [[ $using_cemac_conda -ne 0 ]]; then
module load wrfchemconda/3.7
else
conda activate python3_ncl_nco
fi

Even after I load the cemac module by running . /nobackup/cemac/cemac.sh, the output of ((module list -t) |& grep -i wrfchemconda | wc -c) is still 0 for some reason, so then the script tries to activate the conda environment python3_ncl_nco which I do not have, which results in an error.

If I do this

(base) [[email protected] ~]$ conda deactivate
[[email protected] ~]$ module purge
[[email protected] ~]$ . /nobackup/cemac/cemac.sh
[[email protected] ~]$ module list
No Modulefiles Currently Loaded.
[[email protected] ~]$

I end up with no module files, not sure if I have done it right?

In my own runs, I got around this by editing pp_concat_regrid.bash so that it loaded my own python environment with the correct modules. But for instructing the MRes students I'd prefer to understand how to load and use the CEMAC modules correctly. Any advice on how to get this to work? Which of the CEMAC modules need to be loaded for this script to work?
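
Not a definitive answer, but one way to see why the test evaluates to 0 is to run its two halves by hand after sourcing the CEMAC setup; the module name below is just the one the script itself looks for:

. /nobackup/cemac/cemac.sh
module avail -t 2>&1 | grep -i wrfchemconda   # is a wrfchemconda module visible at all?
module list -t 2>&1 | grep -i wrfchemconda    # is one already loaded? (this is what the script tests)
module load wrfchemconda/3.7 && module list -t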

Main.bash freezes

What happened:
Main.bash freezes.

What you expected to happen:
Main.bash to work.

Minimal Complete Verifiable Example:
Change the horizontal spatial resolution and meteorological timestep, and main.bash will freeze, with the last line at the bottom of rsl.error.0000 being:

d01 2014-08-08_14:25:30 kpp_mechanism_driver: calling mozart_mosaic_4bin_aq_interface

Permission error - WRFChem python 3.8

What happened: When running WRFChem 4.2, no output was generated. I checked pre.bash.e* and it showed the following error:

PermissionError: [Errno 13] Permission denied: '/nobackup/WRFChem/anaconda3/lib/python3.8/site-packages/xesmf-0.5.1.dist-info'

I am wondering if this is because I'm a new user?

I am using the default CEMAC WRFotron.

Meteorological spin-up is not working in CEMAC WRFotron

What happened:
The meteorological spin-up is not working in CEMAC WRFotron (meteo_out/rsl.error.0000) due to the wrfmeteo.exe executable looking for anthropogenic emissions (auxinput5).

In WRFChem4.2:

d01 2016-10-11_18:00:00 open_aux_u : opening auxinput5_d01_2016-10-11_18:00:00 for reading. DATASET DATASET=AUXINPUT5
d01 2016-10-11_18:00:00 calling wrf_open_for_read_begin in open_u_dataset
d01 2016-10-11_18:00:00  NetCDF error: No such file or directory
...
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     314
Possibly missing file for = auxinput5

In WRFChem3.7.1:

d01 2016-10-11_18:00:00 open_aux_u : opening auxinput5_d01_2016-10-11_18:00:00 for reading. DATASET DATASET=AUXINPUT5
d01 2016-10-11_18:00:00 calling wrf_open_for_read_begin in open_u_dataset
d01 2016-10-11_18:00:00  NetCDF error: No such file or directory
...
d01 2016-10-11_18:00:00            1  input_wrf: wrf_get_next_time current_date: 2016-10-11_18:00:00 Status =          -10
           1  input_wrf: wrf_get_next_time current_date: 2016-10-11_18:00:00 Status =          -10
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:     939
 ... Could not find matching time in input file wrfout_d01_2016-10-11_18:00:00
-------------------------------------------

What you expected to happen:
The wrfmeteo.exe executable should be compiled without chemistry; when run using namelist.wrf.prep.spinup as namelist.input in main.bash, it should not look for emissions and should run successfully, as below:

d01 2015-11-01_00:00:00 wrf: back from integrate
d01 2015-11-01_00:00:00 Entering ext_gr1_ioexit
d01 2015-11-01_00:00:00 in wrf_quilt_ioexit
d01 2015-11-01_00:00:00 wrf: back from med_shutdown_io
d01 2015-11-01_00:00:00 wrf: SUCCESS COMPLETE WRF

Minimal Complete Verifiable Example:
Run the default CEMAC WRFotron.
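
As a quick check (a sketch, not a definitive diagnosis), it is worth confirming which namelist the spin-up step actually ran with and whether it still requests the anthropogenic emission stream; the file locations below assume the meteo spin-up run directory referred to above:

# the spin-up namelist should not make WRF read auxinput5 (anthropogenic emissions)
grep -i "auxinput5\|emiss_opt\|io_style_emissions" meteo_out/namelist.input

# and the log shows which input stream triggered the failure
grep -i -A 2 "FATAL" meteo_out/rsl.error.0000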

Problem with real.exe

Using the same set-up that previously worked, I am now getting an error message from real.exe in pre.bash, and pre-processing subsequently fails because wrfinput_d01 is not created.


mpirun was unable to find the specified executable file, and therefore
did not launch the job. This error was first reported for process
rank 0; it may have occurred for other processes as well.

NOTE: A common cause for this error is misspelling a mpirun command
line parameter option (remember that mpirun interprets the first
unrecognized command line token as the executable).

Node: d10s3b3
Executable: real.exe

The command used to submit real.exe is 'mpirun' and is defined in config.bash. I load the following modules at the top of config.bash: 'intel openmpi WRFchem/3.7.1 ncl/6.5.0 nco/4.6.0 wrfchemconda/3.7'.
It seems odd that this has just stopped working when it previously did. Has anyone seen this issue before?

Thanks,
Ailish
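
One thing worth checking (a sketch, not a definitive fix): mpirun is reporting that it cannot find the real.exe file itself, so it is worth confirming the executable is actually present in (or linked into) the run directory and still provided by the loaded modules:

# from the pre.bash run directory
ls -l real.exe                    # is the copy/link there at all?
file real.exe                     # does it point at a valid binary?
command -v real.exe || echo "real.exe not on PATH - check the WRFchem/3.7.1 module and the link step in pre.bash"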

Anthro emissions don't work when starting simulation in December

Hi

I remember from the last issue I posted here that anthro emissions don't work too well in simulations that start in December. I was just wondering if there's a way around it? I've been running a few months of simulation in 1-2 week chunks, which worked fine for October and November, but the run doesn't seem to want to continue into December. I checked using grep -i and it's definitely the anthro emissions that aren't working. It's not a huge issue right now, but I thought I should ask for future reference.

I'm using WRF-Chem 4.2 and a default version of WRFotron with the domain switched to Europe.

Cheers

Connor

IO-bound issue with WRF-Chem4.0.3 every meteorological increment.

What happened:
The IO of boundary conditions every interval is taking 4 times the normal compute time per wrfout file.

The meteorological increment in hours (metInc) is set within config.bash. For ECMWF, this increment should be 6 hours. It is converted to seconds (__metIncSec__) within master.bash (metInc*3600) and used within namelist.wrf.blueprint of WRFotron.

The issue is driven by IO in WRFChem not being scalable.

This issue does not occur for WRFChem3.7.1.

What you expected to happen:
There to be no spike in the compute time per wrfout file.

Minimal Complete Verifiable Example:
Run WRF-Chem4.0.3 with WRFotron2.0; within rsl.error.0000, the lines below show the highly repetitive IO at every meteorological increment:

    inc/wrf_bdyin.inc ext_write_field pan memorder XSZ Status =            0
    inc/wrf_bdyin.inc ext_write_field pan memorder XEZ

Potential solutions/workarounds?:

  1. Use quilting and CPU affinities to dedicate cores to IO. This is achieved by adding the following settings to namelist.wrf.blueprint (the example below uses 10 dedicated cores for IO):
&namelist_quilt           ! options for asynchronized I/O for MPI applications
 nio_tasks_per_group = 5, ! number of cores used for IO quilting per IO group
 nio_groups =          2, ! number of quilting groups
/

The total number of cores requested (set within config.bash) would also need to be changed within main.bash and main_restart.bash. The number of cores = nproc_x * nproc_y + nio_groups * nio_tasks_per_group, so if the user requires 32 cores for the normal execution of WRF-Chem, then the user would request 42 cores in total (42 = 4 * 8 + 2 * 5); see the sketch after this list.

  2. Newer versions of WRFChem, e.g. 4.2.
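
A minimal sketch of that core arithmetic as it might look in config.bash (the variable names are illustrative, not necessarily those WRFotron uses):

# compute cores (domain decomposition) plus dedicated IO cores
nproc_x=4; nproc_y=8
nio_tasks_per_group=5; nio_groups=2
nproc_total=$(( nproc_x * nproc_y + nio_groups * nio_tasks_per_group ))   # 32 + 10 = 42
echo "request ${nproc_total} cores in main.bash and main_restart.bash"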

anthro_emis compilation error

There seems to be a problem with area_mapper which prevents anthro_emis from compiling (I'm just trying the standard anthro_emis version here).

cp -r /nobackup/WRFChem/anthro_emis .
(base) [[email protected] postdoc]$ cd anthro_emis
(base) [[email protected] anthro_emis]$ cp /nobackup/WRFChem/build_scripts/fix_makefiles.sh .
(base) [[email protected] anthro_emis]$ ./fix_makefiles.sh
(base) [[email protected] anthro_emis]$ ./make_anthro

Using ifort fortran compiler

Using ifort fortan90 compiler

netcdf top level directory = /usr

ifort -g -O0 -c -I/usr/include misc_definitions_module.f90
ifort -g -O0 -c -I/usr/include constants_module.f90
ifort -g -O0 -c -I/usr/include mo_calendar.f90
ifort -g -O0 -c -I/usr/include anthro_types.f90
ifort -g -O0 -c -I/usr/include mapper_types.f90
ifort -g -O0 -c -I/usr/include area_mapper.f90
area_mapper.f90(78): error #5102: Cannot open include file 'netcdf.inc'
include 'netcdf.inc'
-----------^
area_mapper.f90(2000): error #6404: This name does not have a type, and must have an explicit type. [NF_NOERR]
if( ret /= nf_noerr ) then
--------------^
area_mapper.f90(2001): error #6404: This name does not have a type, and must have an explicit type. [NF_STRERROR]
write(*,*) nf_strerror( ret )
-----------------^
compilation aborted for area_mapper.f90 (code 1)
make: *** [area_mapper.o] Error 1
Failed to build anthro_emis
(base) [[email protected] anthro_emis]$
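
A hedged suggestion: the build is picking /usr as the netCDF top-level directory, and /usr/include evidently has no netcdf.inc (the Fortran include file), which is why area_mapper.f90 fails and the NF_* symbols are then undefined. Assuming make_anthro takes the netCDF location from an environment variable, as the other WRF-Chem preprocessors generally do (check the script to confirm the exact name), pointing it at an install that contains the Fortran interface before rebuilding may help:

# point the build at a netCDF install that actually provides the Fortran interface
# (hypothetical variable name and path - confirm against make_anthro and your system)
export NETCDF_DIR=/nobackup/WRFChem/netcdf
ls ${NETCDF_DIR}/include/netcdf.inc    # this file must exist
./make_anthro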
