
leapr's Introduction

NJOY2016

The NJOY Nuclear Data Processing System is a modular computer code designed to read evaluated data in ENDF format, transform the data in various ways, and output the results as libraries designed to be used in various applications. Each module performs a well defined processing task. The modules are essentially independent programs, and they communicate with each other using input and output files, plus a very few common variables.

Documentation

The user manual for NJOY2016 can be found here: NJOY User Manual (pdf).

Release and development versions

For the latest version of NJOY2016 and an overview of the latest changes, please see the Release Notes or the release page.

The latest release version of NJOY2016 can always be found at the head of the main branch of this repository, and every release is associated with a release tag. New versions are released on a regular basis (we aim to provide updates at least every three months). The latest development version of NJOY2016, containing the latest updates and changes, can be found at the head of the develop branch. This development version should be used with caution.

Installation

Prerequisites:

The following are the prerequisites for compiling NJOY2016:

  • git
  • cmake 3.15 or higher
  • a Fortran 2003 compliant compiler such as gcc-7 or higher

Note: gcc-11.3 has been known to produce an internal compiler error while compiling NJOY2016, so as a result this specific version of gcc is not supported. Other versions of gcc (version 7 or higher) seem to be capable of compiling NJOY2016.

Instructions:

To compile the latest NJOY2016 version, you can use the following instructions:

git clone https://github.com/njoy/NJOY2016.git
cd NJOY2016
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ../
make -j8

The above instructions will produce a release build consisting of a dynamic library and a dynamically linked executable. To compile a static version (i.e. a statically linked executable), replace the cmake command shown above with the following:

cmake -DCMAKE_BUILD_TYPE=Release -Dstatic_libraries=ON
      -Dstatic_njoy=ON -DCMAKE_EXE_LINKER_FLAGS=-static ../

When you have already cloned the NJOY2016 repository and wish to update to the latest version, you can use the following instructions (inside the build folder):

git pull
make -j8

Module overview

  • NJOY directs the flow of data through the other modules and contains a library of common functions and subroutines used by the other modules.
  • RECONR reconstructs pointwise (energy-dependent) cross sections from ENDF resonance parameters and interpolation schemes.
  • BROADR Doppler broadens and thins pointwise cross sections.
  • UNRESR computes effective self-shielded pointwise cross sections in the unresolved energy range.
  • HEATR generates pointwise heat production cross sections (KERMA coefficients) and radiation-damage cross sections.
  • THERMR produces cross sections and energy-to-energy matrices for free or bound scatterers in the thermal energy range.
  • GROUPR generates self-shielded multigroup cross sections, group-to-group scattering matrices, photon-production matrices, and charged-particle cross sections from pointwise input.
  • GAMINR calculates multigroup photoatomic cross sections, KERMA coefficients, and group-to-group photon scattering matrices.
  • ERRORR computes multigroup covariance matrices from ENDF uncertainties.
  • COVR reads the output of ERRORR and performs covariance plotting and output formatting operations.
  • MODER converts ENDF "tapes" back and forth between ASCII format and the special NJOY blocked-binary format.
  • DTFR formats multigroup data for transport codes that accept formats based in the DTF-IV code.
  • CCCCR formats multigroup data for the CCCC standard interface files ISOTXS, BRKOXS, and DLAYXS.
  • MATXSR formats multigroup data for the newer MATXS material cross-section interface file, which works with the TRANSX code to make libraries for many particle transport codes.
  • RESXSR prepares pointwise cross sections in a CCCC-like form for thermal flux calculators.
  • ACER prepares libraries in ACE format for the Los Alamos continuous-energy Monte Carlo code MCNP.
  • POWR prepares libraries for the EPRI-CELL and EPRI-CPM codes.
  • WIMSR prepares libraries for the thermal reactor assembly codes WIMS-D and WIMS-E.
  • PLOTR reads ENDF-format files and prepares plots of cross sections or perspective views of distributions for output using VIEWR.
  • VIEWR takes the output of PLOTR, or special graphics from HEATR, COVR, DTFR, or ACER, and converts the plots into Postscript format for printing or screen display.
  • MIXR is used to combine cross sections into elements or other mixtures, mainly for plotting.
  • PURR generates unresolved-resonance probability tables for use in representing resonance self-shielding effects in the MCNP Monte Carlo code.
  • LEAPR generates ENDF scattering-law files (File 7) for moderator materials in the thermal range. These scattering-law files can be used by THERMR to produce the corresponding cross sections.
  • GASPR generates gas-production cross sections in pointwise format from basic reaction data in an ENDF evaluation. These results can be converted to multigroup form using GROUPR, passed to ACER, or displayed using PLOTR.

License and Copyright

This software is distributed and copyrighted according to the LICENSE file.

leapr's People

Contributors

ameliajo, jlconlin, nathangibson14, whaeck

leapr's Issues

contin computing effective temperature is uncomfortable

The computed effective temperature doesn't really seem to match the manual. Maybe the manual is wrong? It states that

teff ~ int b^2 P(b) exp(-b) db

but the code calls fsum with a tau of 0.5, meaning that the exponent in the calculated integral is -b/2. This is strange; please look into it.

discre prepareParams not that great?

Consider just getting rid of this and evaluating things where they're used? Precomputing here is a rather roundabout approach, especially since the ar and dbw vectors each only seem to be used in one place.

contin's interpolation checking out of bounds

This is a very small thing, but the interpolation function in interpolate.h that contin uses only checks whether the x value is above the upper end of the table, not below the lower end. This seems odd; please look into whether this was done for a reason.

coldHydrogen_util.betaLoop_util failed

I tried compiling/testing. It compiles fine, but one test fails:

1: /Users/jlconlin/NJOY21/Code/leapr/src/coldHydrogen/coldHydrogen_util/betaLoop_util/test/jprimeLoop.test.cpp:98: FAILED:
1:   REQUIRE( out == Approx(0.161404).epsilon(1e-6) )
1: with expansion:
1:   0.161370993 == Approx( 0.161404 )
1:
1: ===============================================================================
1: test cases:  2 |  1 passed | 1 failed
1: assertions: 12 | 11 passed | 1 failed
1:
1/1 Test #1: coldHydrogen.coldHydrogen_util.betaLoop_util ...***Failed    0.01 sec

trans is relatively slow

My trans, for a simple water test case, takes about 0.05-0.06 seconds, while the legacy trans takes more like 0.0002 seconds. Definitely worth finding out what's causing that.

if oscillator energy is too high (even like 205) the legacy implementation of tanh fails

When discre does its prepareParams, the legacy code writes out its hyperbolic functions explicitly: it calculates sinh(oscillator energy/2tev) and cosh(oscillator energy/2tev) separately, then uses those to calculate tanh(oscillator energy/2tev). When the oscillator energy is too big, this gets set to Inf, and the S(a,b) from discre gets set to all zeros.

My implementation uses tanh(oscillator energy/2tev) directly, which doesn't overflow. But we should definitely note that my code doesn't fail at the same point the legacy code does, so as to account for that during comparison.

Note that if you replace the line
dbw(i)=ar(i)*cn
with
dbw(i)=adel(i)/(tanh(bdeln(i)/2)*bdeln(i))
then the legacy code won't fail. Also note that I encountered this problem using two oscillators, one with energy 205.0 and the other 0.48, and that the failure seems to set in at an oscillator energy somewhere around 35-36.

Trans use of ndmax

In trans.h, ndmax is set to max(1e6, beta.size()). In s_table_generation, nds / nbt (it goes by both names depending on which file we're in) is the number of S_t(a,b) values that were computed (determined by how quickly the S_t(a,bi) values get small).

According to s_table_generation, nds / nbt could be as big as ndmax. But when we inevitably get to sbfill.h, it throws an error if nds is greater than 0.5*ndmax. So that's already a problem that could be caught further upstream.

What's more, I don't see why we need to define ndmax so early; could we just set ndmax equal to 2*nds? The only complication is that ndmax is used before we actually calculate nds, but I think we could work around that. I just want to avoid throwing the user an error that could have been avoided. Also, 1e6 is a lot of space to reserve that might not actually be needed.

coher bcc factor calculation

Lines 2679-2680 of leapr.f90 redefine i1m to be two different things, and I'm not completely sure whether that was a mistake. Strange. Try to figure out whether i1m=15 is actually what we want. This behavior is in bccLatticeFactors.h in the C++ translation of coher.

discre's exb vector has useless factor of 2

exb, which is populated in discre_util/prepareParams.h, is equal to exp( -beta * sc / 2 ), but the only place it seems to be used (when populating the sex vector) squares it. So the factor of 2 in the exponent is redundant; maybe change that.

coldh ifree capability

In order to take advantage of Eq. 569 and Eq. 570 (which assume that the molecular transitions are free), you have to go in and manually change "ifree" to 1. This is not good.

3 lines in oscLoopFuncs causing grief

Change these 3 lines and we pass oscLoopFuncs.test.cpp and discre.test.cpp; leave them alone and we pass leapr.test.cpp.

Do fix this; I can't untangle it right now.

coher hex function -- on the .ge. vs. .gt. situation

There's a location in the hexagonal part of coher (I have it in hexLatticeFactors.h, or rather in hexLatticeFactorsHelper.h) where we check whether tsq >= b[ifl+2*i-3]. The equality part of this is really causing me trouble.

The legacy code treats tsq and b[x] values of 1.5800000165571461E+018 and 1.5800000430651474E+018 (respectively) as different, so it doesn't enter the if conditional. Mine treats them as the same.

Honestly, the best thing to do is probably to take the equality out of the legacy comparison, see what answers it produces, check that they're not wildly off, and then design my code to match.

trans's sbfill

Just some things to make it cleaner:

  • don't take in sb; create it in sbfill and return it
  • don't take in ndmax; compute it
  • don't take in be; take b

and change the test cases and call sites accordingly.

discre effective temperature calculation

discre may (?) be calculating the effective temperature wrong. In the original source code, around line 1343, dist(i) is defined to be

weight * ( beta / 2 ) * coth ( normalized_beta / 2 )

(This code is presently in my discre --> discre_util --> oscLoopFuncs.h.)
But according to Eq. 544, I think this should instead be

weight * ( normalized_beta / 2 ) * coth ( normalized_beta / 2 ) or
weight * ( beta / 2 ) * coth ( beta / 2 )

meaning that legacy line 1343 should likely have bdeln(i) instead of bdel(i)

Worry that trans warps the beta grid

trans changes the beta spacing: it defines a different delta at the beginning of trans.h. The existing S(a,b) that was generated in contin is then scaled to reflect this change, and the rescaled S(a,b) is output from sbfill into the vector previously known as sb (I renamed it sab). Then Eq. 535 is calculated and the result is put into the final sym_sab vector. This worries me, just because we're changing the beta grid halfway through the inelastic calculations. Please look into this; it might be nothing, but just to be sure.

contin/contin_util/checkMoments.h and sum0

Try to determine just how important sum0 is, and whether we need to get it working. It's a bit of a pain to get sum0 and sum1 working with ranges, so I'm going to push that off until I figure out how necessary it is.
