nestcollaboration / nest
Noble Element Simulation Technique is used to simulate noble-element energy deposition microphysics.
Home Page: http://nest.physics.ucdavis.edu
License: Other
Two months ago, I forked nestpy
(source: https://github.com/xxiang4/nestpy) for a LUX project and added a function run_NEST_vec()
to my own nestpy. This function takes a list of energies and positions as input and outputs S1 and S2 variables.
Would it be possible to add a similar function to the NEST C++ code, i.e. a function that takes in a vector of energies and positions and outputs S1 and S2 variables? The Python binding could then simply mirror the NEST version with minimal changes.
Looping in C++ is often a better idea than looping in Python.
In NEST, the single-photoelectron (SPE) size is drawn from a Gaussian, which allows negative values for the SPE pulse area. It needs to be changed to a truncated Gaussian to avoid negative values.
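A minimal sketch of the truncated draw, using the standard library rather than NEST's own RandomGen API (the name DrawTruncatedSPE is illustrative, and rejection sampling is just one simple implementation choice):

```cpp
#include <random>

// Rejection-sample a Gaussian truncated at zero: negative SPE pulse
// areas are redrawn instead of being returned.
double DrawTruncatedSPE(double mean, double sigma, std::mt19937_64& rng) {
  std::normal_distribution<double> gauss(mean, sigma);
  double area;
  do {
    area = gauss(rng);
  } while (area < 0.);
  return area;
}
```

For SPE means well above zero the rejection loop is cheap; if the mean were near zero, an inverse-CDF truncated sampler would be more efficient.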
The new Kr83m deltaT warning alerts users when deltaT is out of bounds, but it should only be fired once, so that those simulating many Kr83m yields do not get flooded with warnings.
Relevant PR: #92
Code:
The proposed solution is as follows (so that the warning appears only once, whether or not you are in a loop or vectorizing). Replace the new code with something to the effect of:
bool reported_low_deltaT = false; // determining where this goes will be the trick, but I have a few ideas

GetYieldKr83m(...) {
  // do a bunch of stuff in between here
  if (deltaT_ns < 100 && energy < 41.5 && !reported_low_deltaT) {
    reported_low_deltaT = true;
    cerr << "\tWARNING! Past Kr83m model fit validity region. Details:"
         << " deltaT_ns is <100 ns and your input energy is either 32.1 or 9.4 keV."
         << " Data for separated Kr83m decays does not yet exist for deltaT_ns <100 ns."
         << " 9.4 & 32.1 keV yields still sum to a physically accurate result, but individually they will be nonsensical."
         << endl;
  }
}
This ideally means the warning only outputs once, as long as we initialize the reported boolean outside of the function (inside would be a slightly different story, but also an option).
I probably do not have time to fix this this week, but as we discussed, this should be an open issue so we don't forget.
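As an alternative to placing the flag outside the function, a function-local static keeps the once-only state self-contained; a sketch (WarnLowDeltaTOnce is a hypothetical stand-in, not the NEST API; it returns whether it warned so the behavior is testable, the static is per-process, and it is not thread-safe without std::once_flag):

```cpp
#include <iostream>

// Sketch: the warning fires only on the first out-of-validity call.
// Returns true if the warning was emitted on this call.
bool WarnLowDeltaTOnce(double deltaT_ns, double energy) {
  static bool reported_low_deltaT = false;  // initialized exactly once
  if (deltaT_ns < 100. && energy < 41.5 && !reported_low_deltaT) {
    reported_low_deltaT = true;
    std::cerr << "\tWARNING! Past Kr83m model fit validity region." << std::endl;
    return true;
  }
  return false;
}
```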
There's a delete detector;
command in the runNESTvec(...) function of execNEST.cpp https://github.com/NESTCollaboration/nest/blob/master/src/execNEST.cpp#L307 (pointed out by Albert Baker of Imperial College London).
Since detector is an input argument, it shouldn't be deleted here; the delete detector line in main at line 233 should suffice. I recommend just removing this command at line 307 in the next commit. Otherwise, I can quickly make a new commit with just this fix.
The Tools directory needs a README.md explaining what these are and how to use them.
include/NEST.hh is going to be the first thing that people look at, and it has no comments. I really think that unless we add documentation comments, most people will get frustrated.
When NEST is built with G4=ON, libNEST.so is not linked against the G4 libraries, which causes errors:
Linking CXX executable ../bin/LZNESTUtilsTests
/cvmfs/lz.opensciencegrid.org/external/NEST/2.0.0/x86_64-slc6-gcc48-opt/lib/libNEST.so: undefined reference to `G4VProcess::DumpInfo() const'
/cvmfs/lz.opensciencegrid.org/external/NEST/2.0.0/x86_64-slc6-gcc48-opt/lib/libNEST.so: undefined reference to `G4VProcess::BuildWorkerPhysicsTable(G4ParticleDefinition const&)'
/cvmfs/lz.opensciencegrid.org/external/NEST/2.0.0/x86_64-slc6-gcc48-opt/lib/libNEST.so: undefined reference to `CLHEP::HepRandom::createInstance()'
/cvmfs/lz.opensciencegrid.org/external/NEST/2.0.0/x86_64-slc6-gcc48-opt/lib/libNEST.so: undefined reference to `typeinfo for G4UserStackingAction'
...
This can be remedied by linking anything that links against libNEST.so also against the G4 libraries, but that creates a lot of overhead. Instead, the following needs to be added to the if(G4) block:
target_link_libraries(NEST ${Geant4_LIBRARIES})
Is it possible to control the verbosity level in NEST using some integer?
There is an error at the cmake-configuration stage when using NEST as an external package:
CMake Error at CMakeLists.txt:5 (find_package):
Found package configuration file:
....../lib64/cmake/NEST/NESTConfig.cmake
but it set NEST_FOUND to FALSE so package "NEST" is considered to be NOT
FOUND. Reason given by package:
The following imported targets are referenced, but are missing: gcem
Issue:
I tried to build the latest version of NEST+rootNEST; when running make install, this line appears:
file INSTALL cannot find /home/aspelene/nest-final/rootNEST
and rootNEST (which is created by make) disappears.
When just make is used (without make install), rootNEST works fine, as does the main NEST.
Full log of the error:
[aspelene@aspelene nest-final]$ make install
[ 33%] Built target NEST_core
[ 44%] Built target testNEST
[ 77%] Built target NEST
[ 88%] Built target rootNEST
[100%] Built target bareNEST
Install the project...
-- Install configuration: ""
-- Installing: /home/aspelene/nest-final/lib/libNEST.so
-- Up-to-date: /home/aspelene/nest-final/include/analysis.hh
-- Up-to-date: /home/aspelene/nest-final/include/NEST.hh
-- Up-to-date: /home/aspelene/nest-final/include/RandomGen.hh
-- Up-to-date: /home/aspelene/nest-final/include/TestSpectra.hh
-- Up-to-date: /home/aspelene/nest-final/include/DetectorExample_XENON10.hh
-- Up-to-date: /home/aspelene/nest-final/include/VDetector.hh
CMake Error at cmake_install.cmake:141 (file):
file INSTALL cannot find "/home/aspelene/nest-final/rootNEST".
make: *** [install] Error 1
In the drift velocity calculations, could NEST throw an error instead of exiting? That way clients can handle errors in the way of their choosing.
When trying to build programs that use the NEST library, an issue I encounter is the scattered locations of the header files. Can we do the following:
In the code, you are handling errors using the exit function.
I would suggest throwing exceptions instead, to allow more flexible error handling in code calling NEST.
See this for reference: https://en.cppreference.com/w/cpp/error/exception
Here is an example to replace these lines https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L1337:
if (gasGap <= 0. && E_liq > 0.) {
throw std::runtime_error("ERR: The gas gap in the S2 calculation broke!!!!");
}
As this is in the function GetS2, I would recommend putting a try ... catch in execNEST.cpp, such as:
try {
  scint2 = n.GetS2(quanta.electrons, truthPos[0], truthPos[1], truthPos[2],
                   smearPos[0], smearPos[1], smearPos[2], driftTime, vD, i,
                   useField, 0, verbosity, wf_time, wf_amp, g2_params);
} catch (const std::exception& e) {
  // Handle the exception here; you can then exit or use a return statement.
}
You will also need the headers for std::runtime_error (which lives in <stdexcept>) and the std::exception base class:
#include <stdexcept>
#include <exception>
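Putting the pieces together, a compilable sketch of the pattern (GetS2Sketch and SafeGetS2 are stand-ins, not the real NESTcalc::GetS2 signature):

```cpp
#include <iostream>
#include <stdexcept>  // std::runtime_error

// The low-level routine throws instead of calling exit(); the
// placeholder computation stands in for the real S2 calculation.
double GetS2Sketch(double gasGap, double E_liq) {
  if (gasGap <= 0. && E_liq > 0.)
    throw std::runtime_error("ERR: The gas gap in the S2 calculation broke!!!!");
  return gasGap * E_liq;  // placeholder
}

// The caller decides how to react: log and return a sentinel here,
// but it could equally rethrow, exit, or retry.
double SafeGetS2(double gasGap, double E_liq) {
  try {
    return GetS2Sketch(gasGap, E_liq);
  } catch (const std::exception& e) {
    std::cerr << e.what() << std::endl;
    return -1.;  // caller-chosen error sentinel
  }
}
```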
Due to stricter rules, beta is considered ambiguous and should be NEST::beta instead.
It would be even better to define it as an enum class INTERACTION_TYPE, i.e. beta → NEST::INTERACTION_TYPE::beta, which is much clearer.
/code/sim/NEST_LZ/testNEST.cpp: In function ‘int main(int, char**)’:
/code/sim/NEST_LZ/testNEST.cpp:160:16: error: reference to ‘beta’ is ambiguous
type_num = beta; // default electron recoil model
^~~~
In file included from /code/sim/NEST_LZ/testNEST.cpp:14:0:
/code/sim/NEST_LZ/include/NEST.hh:112:3: note: candidates are: NEST::INTERACTION_TYPE beta
beta = 8,
^~~~
In file included from /cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/cmath:1914:0,
from /cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/math.h:36,
from /code/sim/NEST_LZ/include/RandomGen.hh:9,
from /code/sim/NEST_LZ/include/NEST.hh:65,
from /code/sim/NEST_LZ/testNEST.cpp:14:
/cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/bits/specfun.h:343:5: note: template<class _Tpa, class _Tpb> typename __gnu_cxx::__promote_2<_Tp, _Up>::__type std::beta(_Tpa, _Tpb)
beta(_Tpa __a, _Tpb __b)
^~~~
/code/sim/NEST_LZ/testNEST.cpp:238:31: error: reference to ‘beta’ is ambiguous
yieldsMax = n.GetYields(beta, energyMaximum, rho, centralField,
^~~~
In file included from /code/sim/NEST_LZ/testNEST.cpp:14:0:
/code/sim/NEST_LZ/include/NEST.hh:112:3: note: candidates are: NEST::INTERACTION_TYPE beta
beta = 8,
^~~~
In file included from /cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/cmath:1914:0,
from /cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/math.h:36,
from /code/sim/NEST_LZ/include/RandomGen.hh:9,
from /code/sim/NEST_LZ/include/NEST.hh:65,
from /code/sim/NEST_LZ/testNEST.cpp:14:
/cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/bits/specfun.h:343:5: note: template<class _Tpa, class _Tpb> typename __gnu_cxx::__promote_2<_Tp, _Up>::__type std::beta(_Tpa, _Tpb)
beta(_Tpa __a, _Tpb __b)
^~~~
/code/sim/NEST_LZ/testNEST.cpp:468:30: error: reference to ‘beta’ is ambiguous
yields = n.GetYields(beta, refEnergy, rho, detector->FitEF(xx, yy, zz),
^~~~
In file included from /code/sim/NEST_LZ/testNEST.cpp:14:0:
/code/sim/NEST_LZ/include/NEST.hh:112:3: note: candidates are: NEST::INTERACTION_TYPE beta
beta = 8,
^~~~
In file included from /cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/cmath:1914:0,
from /cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/math.h:36,
from /code/sim/NEST_LZ/include/RandomGen.hh:9,
from /code/sim/NEST_LZ/include/NEST.hh:65,
from /code/sim/NEST_LZ/testNEST.cpp:14:
/cvmfs/sft.cern.ch/lcg/releases/gcc/7.3.0-90605/x86_64-centos7-gcc62-opt/include/c++/7.3.0/bits/specfun.h:343:5: note: template<class _Tpa, class _Tpb> typename __gnu_cxx::__promote_2<_Tp, _Up>::__type std::beta(_Tpa, _Tpb)
beta(_Tpa __a, _Tpb __b)
^~~~
make[2]: *** [CMakeFiles/NEST_core.dir/testNEST.cpp.o] Error 1
make[1]: *** [CMakeFiles/NEST_core.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[2]: *** [CMakeFiles/NEST.dir/testNEST.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/NEST.dir/all] Error 2
make: *** [all] Error 2
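The enum class suggestion can be sketched as follows; only the beta = 8 value comes from the compiler notes above, and the other enumerator is a placeholder:

```cpp
#include <cmath>  // declares std::beta (C++17), the source of the collision

namespace NEST {
// A scoped enum: its enumerators do not leak into the enclosing scope,
// so an unqualified 'beta' can no longer be ambiguous with std::beta.
enum class INTERACTION_TYPE { NR = 0 /* placeholder */, beta = 8 };
}

// Explicit qualification and an explicit cast are now required:
int TypeToInt(NEST::INTERACTION_TYPE t) { return static_cast<int>(t); }
```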
I was recently trying to produce two separate Kr83m lines and found a couple of peculiar features for times below 100 ns. One of them is the behavior of the electron and photon yields for times around 50 ns, where the total number of electrons for the two lines drops from ~1300 to ~700 over just a 10 ns time difference. Is there a reason it looks like this, or is it a glitch of the underlying model?
Also, I was expecting the total number of electrons + photons to stay the same regardless of decay time, but it seems to have a feature at 50-100 ns.
I understand that the two lines should not be trusted individually at such time scales, but it would still be nice to have a somewhat more realistic extrapolation into that region, as different experiments might be sensitive to double-peak photon timing at rather short time scales. I have created a plot for Kr83m with a 30 ns time difference and an "equivalent" pair of gamma rays with the same energies to demonstrate this potential effect.
All of the plots are made with NESTpy 1.4.8.
Many thanks for the help!
Modern C++ calls for the fixed-width integer types introduced in C++11: it is recommended to use int64_t instead of long int. The header is <cstdint>: documentation.
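A small sketch of why the fixed width matters (MixSeed and its LCG constants are illustrative, not NEST code):

```cpp
#include <cstdint>

// uint64_t/int64_t are guaranteed to be exactly 64 bits on every
// platform, whereas 'long int' is 32 bits on some ABIs (e.g. 64-bit
// Windows), which would silently truncate 64-bit state like this.
uint64_t MixSeed(uint64_t seed) {
  // unsigned arithmetic wraps mod 2^64 by definition (no UB)
  return seed * UINT64_C(6364136223846793005) + UINT64_C(1442695040888963407);
}
```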
To be able to access D_L and D_T as functions of the electric field, please make one dedicated function within NEST.cpp for accessing D_T(field) and another for accessing D_L(field). This will enable clean calls to these in code that builds Garfield++ transport-parameter tables.
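A sketch of the requested interface shape; the function names and the linear placeholder formulas are hypothetical, since the real field dependences live inside NEST.cpp:

```cpp
#include <cmath>

// Placeholder D_T(field): a dedicated accessor that Garfield++
// table-building code could call directly. Linear form is illustrative.
double GetDiffusionTransverse(double dfield_Vcm) {
  return 50. + 0.01 * dfield_Vcm;
}

// Placeholder D_L(field), likewise illustrative only.
double GetDiffusionLongitudinal(double dfield_Vcm) {
  return 20. + 0.005 * dfield_Vcm;
}
```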
When NEST draws binomial random numbers, it currently uses a Gaussian approximation for N > 10 (https://github.com/NESTCollaboration/nest/blob/master/NEST.cpp#L18). This is inaccurate if p is near 0 or 1. The figure below shows the binomial PMF (solid) and its Gaussian approximation (dashed) for N = 100, for different values of p:
Perhaps it would be possible to implement one of the rules from here?
At this line: https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L787
there is the definition of the variable dt, which is never used within the scope of GetS1.
@tunnell One of the Travis checks is ~instantaneous as usual, but recently (I unfortunately do not recall the exact revision, but it was within the last week, during Oct.) the second one takes an eternity to complete. See the attached screenshot.
Too many if statements, introduced for new muon (and continuous beta) code, may be slowing NEST down
NEST::beta and INTERACTION_TYPE::beta seem to be two solutions to the same CentOS 7 disambiguation problem. LZ did it one way, but LUX another. Which one is right?? @tunnell @kreczko @riffard @jpbrodsky @jecutter ??
At request, turning this question into an issue.
It seems that according to these lines:
Lines 804 to 808 in 3322903
There is an "effective" SPE efficiency that depends on the number of hits, nHits. This comes into the S1 pulse area in this timing-less S1 case:
Lines 887 to 889 in 3322903
The question that @robertsjames and I have is whether this "effective" efficiency is derived analytically or fitted (where did its functional form come from?), and whether we can re-parameterize it in terms of something other than nHits. For example, in an ideal case, we would like eff to depend instead on the number of photoelectrons produced or detected, such as this variable:
Line 886 in 3322903
This eff is important because it has a ~few% effect for detectors with an SPE efficiency of <99%, which affects energy reconstruction, simulations, etc. So we want to get this right on our end, but also to ensure that NEST has it right, too!
The pulseShape script seems to be segfaulting when run on the photon_times.txt output of execNEST. I'm running execNEST with verbosity=true and useTiming=1. photon_times.txt was generated using ./execNEST 30000 beta 0 15 300 -1 123. The valgrind output for the code is as follows:
==2345== Command: ./pulseShape 10 80
==2345==
==2345== Invalid read of size 4
==2345== at 0x5751601: getc (in /usr/lib64/libc-2.17.so)
==2345== by 0x4010F6: main (in /local/users/ishira/NEST/build/examples/pulseShape)
==2345== Address 0x0 is not stack'd, malloc'd or (recently) free'd
I've been trying to debug this for a bit, and I'm not entirely sure where the error stems from. It is potentially reading from a reallocated pointer, but I can't figure out exactly where.
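For what it's worth, the valgrind trace (getc reading address 0x0) is the classic signature of calling getc on a null FILE*, i.e. an fopen whose failure was never checked (for example, photon_times.txt not in the working directory). A defensive sketch, assuming stdio is used — CanOpen is illustrative, not the actual pulseShape code:

```cpp
#include <cstdio>

// Returns true only if 'path' can be opened for reading; reporting
// the failure instead of passing a null FILE* to getc avoids the crash.
bool CanOpen(const char* path) {
  std::FILE* f = std::fopen(path, "r");
  if (f == nullptr) {
    std::fprintf(stderr, "ERROR: cannot open %s\n", path);
    return false;
  }
  std::fclose(f);
  return true;
}
```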
On https://zenodo.org/record/3905382#.Xv2SDShKjAQ
You can see that version 2.1.0 has the same published date as version v2.0.0 and version v0.0.test.
Not sure how this can happen, but it might be worth fixing it for citations.
I ran into a major problem on my desktop computer at work, where the latest NEST repo at the time of writing (git commit 1bd3d70) refused to compile at all during testing of gcc and g++. There was a lot of misinformation or partial information on the web that did not apply to my case; I had to merge commands and ideas from multiple sources, like stackoverflow.com and Kate K. The way I was finally able to solve the problem was exactly as follows, step by step (the procedure would very likely have to be modified for different flavors of Linux/Unix systems, and/or Apple Mac, of course). The CERN software repo was the key solution here, but yum does not work directly (homebrew for Mac?) without some prerequisites:
wget http://linuxsoft.cern.ch/cern/slc61/x86_64/extras/RPMS/sl-release-scl-rh-1-2.slc6.noarch.rpm
sudo rpm -Uvh sl-release-scl-rh-1-2.slc6.noarch.rpm
wget http://linuxsoft.cern.ch/cern/slc62/i386/yum/extras/sl-release-scl-1-2.slc6.noarch.rpm
sudo rpm -Uvh sl-release-scl-1-2.slc6.noarch.rpm
sudo yum install sl-release-scl
sudo yum install devtoolset-7
scl enable devtoolset-7 bash
Version 6 should also work, but I believe this is the bleeding-edge latest at the time of writing. Note above that downloading and installing is not enough: the final step is to enable. I'm not sure how general this is for everybody, but perhaps the README needs to be updated in the next release to make sure everyone is aware of the minimum version prerequisites for g++ (and clang for Mac) at the very least. rootNEST also no longer compiles the old way with the gcc command given in the README, but works fine via cmake now if you turn ROOT support on with the proper switch. But again, the commands above will have slightly different names by OS, so I'm unsure what to say in the README without being too specific to my own personal case, or what to add to Travis.
Currently, testNEST.cpp returns 1 if everything runs fine and 0 if an error occurs.
This is inverted compared to Unix conventions, e.g. http://tldp.org/LDP/abs/html/exitcodes.html.
testNEST.cpp should return 0 if everything runs fine and 1 if an error occurs.
This also means that the current travis-tests are false-positives:
https://travis-ci.com/NESTCollaboration/nest/jobs/168868777
$ ./testNEST
*** Detector definition message ***
You are currently using the default XENON10 template detector.
This program takes 6 (or 7) inputs, with Z position in mm from bottom of detector:
./testNEST numEvts type_interaction E_min[keV] E_max[keV] field_drift[V/cm] x,y,z-position[mm] {optional:seed}
For 8B, numEvts is kg-days of exposure with everything else same. For WIMPs:
./testNEST exposure[kg-days] {WIMP} m[GeV] x-sect[cm^2] field_drift[V/cm] x,y,z-position[mm] {optional:seed}
For cosmic-ray muons or other similar particles with elongated track lengths:
./testNEST numEvts {MIP} LET[MeV*cm^2/gram] x,y-position[mm](Initial) field_drift[V/cm] x,y,z-position[mm](Final) {optional:seed}
The command "./testNEST" exited with 0.
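The desired convention can be sketched with the portable macros from <cstdlib> (RunStatus is an illustrative helper, not testNEST code):

```cpp
#include <cstdlib>

// Unix convention: 0 = success, non-zero = failure. EXIT_SUCCESS and
// EXIT_FAILURE make the intent explicit and portable.
int RunStatus(bool ranFine) {
  return ranFine ? EXIT_SUCCESS : EXIT_FAILURE;
}
```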
For the method GetYieldNR, it would be great to pass the NuisParam vector by reference:
virtual YieldResult GetYieldNR(double energy, double density, double dfield, double massNum,
    const std::vector<double> &NuisParam = {11.,1.1,0.0480,-0.0533,12.6,0.3,2.,0.3,2.,0.5,1.,1.}); // const: a default argument cannot bind to a non-const reference
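One subtlety when combining by-reference passing with a default argument: the default is a temporary, and a temporary can only bind to a const lvalue reference, so the by-reference form with a default value must be const. A minimal sketch under that assumption (SumNuis is a hypothetical function, not the NEST API):

```cpp
#include <vector>

// 'const std::vector<double>&' is required here; a non-const
// reference with a default value would not compile.
double SumNuis(const std::vector<double>& NuisParam = {
                   11., 1.1, 0.0480, -0.0533, 12.6, 0.3,
                   2., 0.3, 2., 0.5, 1., 1.}) {
  double sum = 0.;
  for (double p : NuisParam) sum += p;  // read-only access, no copy
  return sum;
}
```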
Dear @Gratt55,
In the function GammaHandler::photoIonization, is the first dimension of sourceInfo expected to be large (e.g. > 10)?
If yes, it might be worth changing
double GammaHandler::photoIonization(vector<vector<double>> sourceInfo, vector<double> xyTry) {
//implement simple delta function to the spectrum
double fValue = 0.0;
for(int i = 0; i < sourceInfo.size(); i++) {
double initialEnergy = sourceInfo[i][0];
double br = sourceInfo[i][1];
double pe = sourceInfo[i][2];
double co = sourceInfo[i][3];
double pp = sourceInfo[i][4];
if(abs(xyTry[0]-initialEnergy) < brThresh) {
fValue = yMax*br*(pe/(pe+co+pp));
}
}
return fValue;
}
to
double GammaHandler::photoIonization(const vector<vector<double>>& sourceInfo,
                                     const vector<double>& xyTry) {
  // implement simple delta function to the spectrum
  double value {0.0};
  std::size_t index {0};
  bool found {false};
  for (std::size_t i = 0; i < sourceInfo.size(); i++) {
    double initialEnergy {sourceInfo[i][0]};
    if (abs(xyTry[0] - initialEnergy) < brThresh) {
      index = i;
      found = true;
    }
  }
  if (found) {  // preserve the original behavior of returning 0.0 when no line matches
    double br = sourceInfo[index][1];
    double pe = sourceInfo[index][2];
    double co = sourceInfo[index][3];
    double pp = sourceInfo[index][4];
    value = yMax * br * ( pe / (pe + co + pp) );
  }
  return value;
}
(I also changed fValue → value, as the f<Name> notation is usually used for class members, not local variables.)
into std::array or vectors, everywhere in the code (might be big job)
Here is the definition of the GetDriftVelocity_Liquid function:
Line 1588 in c622b09
To my knowledge, Density is not used anywhere in the function. Can we remove it?
There's a problem with the energy-spectrum calculation for sub-2 GeV WIMP masses in TestSpectra.cpp. If mass < 2 GeV, the number of sample points becomes large (> 10,000), and the EnergySpec array can only take 10,000 entries. Plus, if the bin size is too small for any WIMP, the calculation can break down and conclude that the maximum recoil energy is too small, truncating the expected recoil spectrum. I suggest replacing the WIMP_prep_spectrum function with these edits:
TestSpectra::WIMP_spectrum_prep TestSpectra::WIMP_prep_spectrum(double mass,
double eStep) {
WIMP_spectrum_prep spectrum;
double divisor, x1, x2;
vector<double> EnergySpec;
int numberPoints;
if (mass < 2.0) { // GeV/c^2
divisor = 10. / eStep;
numberPoints = int(1000. / eStep);
} else if (mass < 10.) {
divisor = 10. / eStep;
numberPoints = int(1000. / eStep);
} else {
divisor = 1.0 / eStep;
numberPoints = int(100. / eStep);
}
int nZeros = 0; //keep track of the number of zeros in a row
for (int i = 0; i < (numberPoints + 1); i++) {
EnergySpec.push_back( WIMP_dRate(double(i) / divisor, mass) );
if ( EnergySpec[i] == 0. ) nZeros++;
else nZeros = 0; //reset the count if EnergySpec[i] != zero
if ( nZeros == 100 ) break; //quit the for-loop once we're sure we're only getting zeros
}
for (long i = 0; i < 1000000; i++) {
spectrum.integral += WIMP_dRate(double(i) / 1e4, mass) / 1e4;
}
for (int i = 0; i < (int) EnergySpec.size() - 2; i++) {
x1 = double(i) / divisor;
x2 = double(i + 1) / divisor;
spectrum.base[i] = EnergySpec[i + 1] *
pow(EnergySpec[i + 1] / EnergySpec[i], x2 / (x1 - x2));
spectrum.exponent[i] = log(EnergySpec[i + 1] / EnergySpec[i]) / (x1 - x2);
if (spectrum.base[i] > 0. && spectrum.base[i] < DBL_MAX &&
spectrum.exponent[i] > 0. && spectrum.exponent[i] < DBL_MAX )
; // spectrum.integral+=spectrum.base[i]/spectrum.exponent[i]*(exp(-spectrum.exponent[i]*x1)-exp(-spectrum.exponent[i]*x2));
else {
if ( EnergySpec[i+2] > 10. ) { //i.e. the calculation stopped before event rate was low
cerr << "ERROR: WIMP E_step is too small! Increase it slightly to avoid noise in the calculation." << endl;
exit(1); // non-zero exit on error, per Unix convention
}
spectrum.xMax = double(i - 1) / divisor;
if (spectrum.xMax <= 0.0) {
cerr << "ERROR: The maximum possible WIMP recoil is negative, which "
"usually means your E_step is too small."
<< endl;
exit(1); // non-zero exit on error, per Unix convention
}
break;
}
}
spectrum.divisor = divisor;
return spectrum;
}
where the main changes are the step size for sub-2 GeV masses and the warning statement for when the energy-spectrum calculation breaks off a bit early. In addition, for low-mass WIMPs, the current calculation fills EnergySpec with many, many zeros. I added a quick if statement that stops the loop once there has been a long run of zeros, indicating that the calculation is done.
Some header files are hh and others hpp. There should be one convention.
In a few spots in NESTcalc/NEST, we have the following issue:
We have if statements comparing energy to floats, which will often evaluate to false due to slight rounding errors (see this link for more info about this process).
Suggestion: we change all of these float if statements to the following (not perfect, but we can probably get by with it in our simplest cases):
if (std::fabs(energy - 32.1) < 1e-9)
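An absolute tolerance like 1e-9 is fine at keV scales; a combined absolute-plus-relative check also behaves for large magnitudes. A sketch (ApproxEqual is illustrative, not an existing NEST helper):

```cpp
#include <cmath>

// True if a and b agree within an absolute tolerance (near-zero case)
// or within a relative tolerance scaled by the larger magnitude.
bool ApproxEqual(double a, double b, double absTol = 1e-9,
                 double relTol = 1e-9) {
  double diff = std::fabs(a - b);
  if (diff <= absTol) return true;
  return diff <= relTol * std::fmax(std::fabs(a), std::fabs(b));
}
```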
Hi all!
If I'm interpreting the release notes of 2.1 correctly, there is no change to the NR best-fit mean yields compared with v0.2.1, but I wanted to confirm this (not sure if this is the proper avenue, sorry if not!).
I don't think this folder does anything, but I'm not certain. If it serves no purpose, delete it; otherwise it just pollutes things.
In the NESTcalc::GetS1 function, there is a basic, no-timing block of code here:
Lines 886 to 891 in 3322903
In these lines, pulseArea is smeared by sPEres * sqrt(Nphe), where Nphe is defined:
Line 886 in 3322903
To my understanding, this slightly (very, very slightly) overestimates the smearing effect, but it should probably not be ignored. My suggestion would be to replace the linked lines above with something like the following, to make the pulseArea calculation more statistically accurate. But perhaps I am missing something.
Nphe = nHits + BinomFluct(nHits, fdetector->get_P_dphe());
Nphe_det = BinomFluct(Nphe, 1. - (1. - eff) / (1. + fdetector->get_P_dphe()));
pulseArea = RandomGen::rndm()->rand_gauss(
Nphe_det,
fdetector->get_sPEres() * sqrt(Nphe_det));
This is in regards to the question I asked @mszydagis and then directed toward @riffard, but I figured that a git issue is more trackable for everyone. (Tagging @robertsjames here because he had the same question as well.)
Making the NESTcalc method WorkFunction static would be nice, allowing users to call it without having to instantiate a NESTcalc object.
In NEST, ownership of the VDetector object is attributed to the NESTcalc object. In some applications, the ownership of the VDetector object may differ. I therefore suggest avoiding the delete of the VDetector object in the NESTcalc destructor and deleting it externally instead.
Proposed modification:
NESTcalc::~NESTcalc() {
if (pulseFile) pulseFile.close();
}
The VDetector class and any user-generated subtypes hold a bunch of different detector specs. These specs are referred to inconsistently in the NESTCalc class.
Some functions (GetYields) take the electric field as an input parameter, but the EField is also part of the detector class (FitEF()).
Some functions take density as an input parameter. This is not included in the detector class at all, but many similar specs are included, such as temperature.
Meanwhile, whether the xenon is gas or liquid is controlled entirely by the detector spec class (get_inGas()), rather than as an input. As a result, even low-level NESTCalc functions like GetYield(...) require a detector class to be instantiated purely to store the phase.
NESTCalc::SetDensity(temperature, pressure) takes temperature/pressure as inputs, but these are available in the detector class. Same with SetDriftVelocity.
Hopefully in a month or two I'll have some time to make a branch with a lot of specific suggestions on how to revamp the core NEST code for readability and flexibility, which could include changes that address the question of how exactly we want to use the detector class, but I'm posting this issue for now to note down some of the goals I'll have then.
When executing bareNEST, it says:
ERROR: You need a minimum of 12 nuisance parameters for the mean yields.
I think this is because bareNEST is still an example for the old NEST 2.0 version, which only required 2 NuisParam.
Once we go public, I can grab a DOI for the project. If you want to know more about software citation more generally, see this talk by Daniel Katz from NCSA.
Essentially, the Zenodo integration is set up so it is quick to ship with the release. I will also add a CITATION file saying that people using NEST should cite the original NEST paper and then the DOI for this version. There will be a badge in the README. This all waits until we are public.
With GetYields and GetQuanta, there needs to be a check that the number and values of the nuisance parameters make sense. If the array is too short, there is a buffer-overflow problem.
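A sketch of such an up-front guard (CheckNuisParam is a hypothetical helper; the count of 12 matches the "minimum of 12 nuisance parameters" error that bareNEST already prints):

```cpp
#include <stdexcept>
#include <vector>

// Validate the nuisance-parameter vector before any indexing, so a
// short array raises a catchable error instead of overflowing a buffer.
void CheckNuisParam(const std::vector<double>& NuisParam) {
  if (NuisParam.size() < 12) {
    throw std::invalid_argument(
        "GetYields/GetQuanta require at least 12 nuisance parameters");
  }
}
```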
Thanks to Luke K. on LZ, we have a couple of smaller enhancements to add to our new floating-point equality function. They shouldn't change any results, but will make the code prettier/more logical.
if (a == b) { // shortcut, handles infinities
return true;
} else {
There are a lot of calls to the vector copy constructor in NESTcalc.
For example, here:
NESTresult NESTcalc::FullCalculation(INTERACTION_TYPE species, double energy,
double density, double dfield, double A,
double Z,
vector<double> NuisParam /*={11.,1.1,0.0480,-0.0533,12.6,0.3,2.,0.3,2.,0.5,1.,1.}*/,
vector<double> FreeParam /*={1.,1.,0.1,0.5,0.19,2.25}*/,
bool do_times /*=true*/)
The suggested modification is:
NESTresult NESTcalc::FullCalculation(INTERACTION_TYPE species, double energy,
double density, double dfield, double A,
double Z,
vector<double>& NuisParam /*={11.,1.1,0.0480,-0.0533,12.6,0.3,2.,0.3,2.,0.5,1.,1.}*/,
vector<double>& FreeParam /*={1.,1.,0.1,0.5,0.19,2.25}*/,
bool do_times /*=true*/)
There is the same issue here:
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L38
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L134
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L173
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L402
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L457
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L659
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L1413
https://github.com/NESTCollaboration/nest/blob/master/src/NEST.cpp#L1813
I may have missed some...
If you want to prevent the vector from being modified in the function, declare it as const, like this:
const vector<double>& NuisParam
The line here:
Line 27 in 9363b96
Reference: https://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation
"One rule is that for n > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 1/3"
This is the least rigorous of the approximation conditions. Either we should raise the condition to higher statistics to play it safe, or go with a more stringent approximation condition. As-is, this causes a mismatch between BinomFluct and other out-of-the-box binomial sampling packages (in Python). At the very least, we need to change the line I link above to a <= condition.
I think that xoroshiro doesn't work on 32-bit machines. I'm writing this here just for the record; I don't think anybody runs 32-bit anymore. At least, when I compile on the test server I get:
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:38: error: ‘__uint128_t’ was not declared in this scope
 xoroshiro_detail::xoroshiro_plus<__uint128_t, __uint128_t, 105, 36, 70>;
                                  ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:51: error: ‘__uint128_t’ was not declared in this scope
 xoroshiro_detail::xoroshiro_plus<__uint128_t, __uint128_t, 105, 36, 70>;
                                               ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:75: error: template argument 1 is invalid
 xoroshiro_detail::xoroshiro_plus<__uint128_t, __uint128_t, 105, 36, 70>;
                                                                        ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:75: error: template argument 2 is invalid
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:606:38: error: ‘__uint128_t’ was not declared in this scope
 xoroshiro_detail::xoroshiro_plus<__uint128_t, uint64_t, 105, 36, 70>;
                                  ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:606:72: error: template argument 1 is invalid
 xoroshiro_detail::xoroshiro_plus<__uint128_t, uint64_t, 105, 36, 70>;
                                                                     ^
In file included from /tmp/pip-req-build-mc5OaQ/src/nestpy/RandomGen.hh:8:0,
                 from /tmp/pip-req-build-mc5OaQ/src/nestpy/NEST.hh:65,
                 from /tmp/pip-req-build-mc5OaQ/src/nestpy/NEST.cpp:2:
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:38: error: ‘__uint128_t’ was not declared in this scope
 xoroshiro_detail::xoroshiro_plus<__uint128_t, __uint128_t, 105, 36, 70>;
                                  ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:51: error: ‘__uint128_t’ was not declared in this scope
 xoroshiro_detail::xoroshiro_plus<__uint128_t, __uint128_t, 105, 36, 70>;
                                               ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:75: error: template argument 1 is invalid
 xoroshiro_detail::xoroshiro_plus<__uint128_t, __uint128_t, 105, 36, 70>;
                                                                        ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:602:75: error: template argument 2 is invalid
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:606:38: error: ‘__uint128_t’ was not declared in this scope
 xoroshiro_detail::xoroshiro_plus<__uint128_t, uint64_t, 105, 36, 70>;
                                  ^
/tmp/pip-req-build-mc5OaQ/src/nestpy/xoroshiro.hh:606:72: error: template argument 1 is invalid
 xoroshiro_detail::xoroshiro_plus<__uint128_t, uint64_t, 105, 36, 70>;
Dear all,
I have to admit I am not aware of the NEST versioning policy, so please let me know.
NEST uses an X.Y.Z versioning scheme, which usually means relatively frequent releases (which I am a fan of). However, the last release is already 10 months old; all the new and improved bits are only available in master.
Would it be possible to please release a new version? At LZ we have developers eager to use the latest features and bug fixes:
v2.0.1...master
Hi all,
I've encountered a problem after the new commits: I get a segmentation fault for any model (for all fields and energy ranges; pure build without ROOT/Geant), plus warnings during compilation about the long long int format (only in the output part (sprintf), not in the "main" code itself). I tried changing uint64_t and int64_t to the old unsigned long/long (according to the logs), but it didn't help.
Please see the compilation log with warnings attached. OS is Ubuntu 20.10; CXX compiler is GNU 10.2.0.
log_nest.txt