enzo-project / enzo-dev
The Enzo adaptive mesh-refinement simulation code.
License: Other
Original report by chummels (Bitbucket: chummels, GitHub: chummels).
I was looking through the new 2.2 documentation, and it seems like there still remain some holdovers from past eras of enzo which no longer apply. Examples include:
Fortunately, with our new use of readthedocs.org, any time the docs are modified they will immediately be rebuilt and posted to enzo.readthedocs.org, ensuring people have access to the most current version of the documentation.
Original report by John Regan (Bitbucket: john_regan).
Invoking dark-matter-only particle splitting causes a segfault at line 355 of particle_splitter.F. This is likely because attributes are not correctly allocated before being passed to the Fortran routine. The bug is easily reproduced by setting ParticleSplitterIterations = 1 and restarting. This was discovered in a non-star-particle run.
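As a hedged illustration of the suspected failure mode (all names here are hypothetical, not Enzo's actual API), the caller could verify that every attribute array is allocated before handing the raw pointers to the Fortran routine:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical guard, sketching the fix suggested above: in a run with
// no star particles, some particle attribute arrays may never have been
// allocated, so check them before passing raw pointers to Fortran.
// AttributesReadyForSplitting() is an illustrative name, not Enzo code.
bool AttributesReadyForSplitting(float **attributes, int numAttributes) {
  if (attributes == nullptr)
    return false;
  for (int i = 0; i < numAttributes; i++)
    if (attributes[i] == nullptr)
      return false;  // an attribute was never allocated
  return true;
}
```

The splitter would then skip (or first allocate) the attribute arrays whenever this check fails, instead of segfaulting inside particle_splitter.F.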
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
#!txt
Currently this hits the Cycle limit of 100,000. dt goes to 0.0.
Early on I get this warning:
MPI_Init: NumberOfProcessors = 1
warning: the following parameter line was not interpreted:
SubcycleSafetyFactor = 2 //
Starts showing signs of failure on cycle 64:
TopGrid dt = 7.908641e-04 time = 0.10322590090186 cycle = 64
Level[0]: dt = 0.000790864 0.000790864 (0.000790864/0.000790864)
eu1 4 1 -1.29180098360730866E-005 3.03920109296982320E-011 2.96298713606934521E-011 -8.19497254999048720E-003 3.0787524511602937 2.46618969282207936E-003 -1.31676979731730542E-005
eu1 103 1 -1.29180098360658241E-005 3.03920109296985551E-011 2.96298713606937752E-011 8.19497254998794410E-003 -3.0787524511602271 -2.46618969282202602E-003 -1.31676979731664863E-005
EvolveLevel[0]: NumberOfSubCycles = 1 (65 total)
RebuildHierarchy: level = 0
CPUTime-output: Frac = 1.000000, Current = 0.0364301 (0.0364144), Stop = 2592000.000000, Last = 0.000526905
dt, Initialdt: 0.000784819 0
TopGrid dt = 7.848190e-04 time = 0.10401676499578 cycle = 65
By cycle 10000, we are taking tiny timesteps:
TopGrid dt = 3.855508e-56 time = 0.34083068217033 cycle = 10000
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
The --suite=full answer testing run takes ~2 days with -g optimization. The tests should be examined to determine the bottlenecks, and adjusted.
Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).
The goal is to improve the must-refine particle machinery so that a user can more easily flag particles as must-refine under a variety of non-trivial conditionals. Currently, must-refine particles are selected either by particle type (membership in a list of particle types) or by particle mass, with the conditional for whether a given particle is must-refine handled in a Fortran routine.
The improvement would be to move the conditionals entirely into the C function that calls the Fortran routine. The C function would generate a flagging array that marks particles as must-refine on the fly, and pass only this flagging array to the Fortran routine (rather than both the particle mass and type arrays). This would allow for more complex conditionals: for example, making a star particle must-refine only at the end of its life, when it injects feedback.
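A minimal sketch of the proposed approach (names and the example conditional are hypothetical, not Enzo's actual code): the C++ side evaluates an arbitrary per-particle conditional and fills an integer flagging array, which is then the only thing the Fortran routine needs:

```cpp
#include <cassert>

// Hypothetical particle type code for star particles.
const int PARTICLE_TYPE_STAR = 2;

// Build a per-particle must-refine flag array from an arbitrary
// conditional. Here the (illustrative) rule flags star particles only
// near the end of their life, when they are about to inject feedback.
void BuildMustRefineFlags(const int *type, const float *creation_time,
                          float current_time, float lifetime,
                          int n, int *flags) {
  for (int i = 0; i < n; i++) {
    bool end_of_life = (type[i] == PARTICLE_TYPE_STAR) &&
                       (current_time - creation_time[i] > 0.9f * lifetime);
    flags[i] = end_of_life ? 1 : 0;
  }
}
```

Only the flag array would then be passed down to the Fortran flagging routine, in place of the particle mass and type arrays.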
Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).
I just noticed that many of the star formation routines are passed the cooling time but never do anything with it. This is a pretty minor thing, but in light of all the PR activity and the upcoming workshop, I felt motivated to suggest cleaning this up a bit if it is worthwhile.
I'm happy to do this and submit the PR, just wanted to know if the PR would be accepted before starting. If I do this, I'll likely clean other unused parameters in the SF routines if I spot them.
Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).
When we update our testing infrastructure, we need to include:
Original report by Greg Bryan (Bitbucket: gbryan, GitHub: gbryan).
It looks like the last three arguments in the call to star_feedback_ssn are missing in Grid_StarParticleHandler:
Grid_StarParticleHandler.C has:
#!c++
extern "C" void FORTRAN_NAME(star_feedback_ssn)(
int *nx, int *ny, int *nz,
float *d, float *dm, float *te, float *ge, float *u, float *v,
float *w, float *metal,
int *idual, int *imetal, hydro_method *imethod, float *dt,
float *r, float *dx, FLOAT *t, float *z,
float *d1, float *x1, float *v1, float *t1,
float *sn_param, float *m_eject, float *yield,
int *nmax, FLOAT *xstart, FLOAT *ystart, FLOAT *zstart,
int *ibuff, int *level,
FLOAT *xp, FLOAT *yp, FLOAT *zp, float *up, float *vp, float *wp,
float *mp, float *tdp, float *tcp, float *metalf, int *type,
int *explosionFlag,
float *smthresh, int *willExplode, float *soonestExplosion,
float *gamma, float *mu,
float *te1, float *metalIIfield, float *metalIIfrac, int *imetalII,
float *s49_tot, int *maxlevel);
while star_maker_ssn.F has:
#!FORTRAN
subroutine star_feedback_ssn(nx, ny, nz,
& d, dm, te, ge, u, v, w, metal,
& idual, imetal, imethod, dt, r, dx, t, z,
& d1, x1, v1, t1, sn_param, retfr, yield,
& npart, xstart, ystart, zstart, ibuff, level,
& xp, yp, zp, up, vp, wp,
& mp, tdp, tcp, metalf, type,
& explosionFlag,smthresh,
& willExplode, soonestExplosion, gam, mu,
& te1, metalSNII,
& metalfSNII, imetalSNII,
& s49_tot, maxlevel,
& distrad, diststep, distcells)
I think the last three arguments (distrad, diststep, distcells) are simply missing from the C prototype and its call (though I'm not sure whether anything else is missing as well)? I think Nathan used and tested this, so I'm guessing the error snuck in during the merge...
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
The enzo.exe binary should be copied once to the root of the testing directory (the hash dir), then linked to for each test. That way the link doesn't point to a binary that could change when the repo is updated. @brittonsmith, @chummels, could one of you change this?
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
#!txt
acceleration-boundary-yes currently fails for several answer testing tests. Primarily, from the push suite, the following tests fail:
ExtremeAdvectionTest
GravityTestSphere
GravityTest
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
The current documentation claims that this value is the Initialdt of the current timestep. In fact, this parameter is the Initialdt the simulation should use when starting or restarting, and it is immediately reset to 0 afterwards (it is used for logic). The documentation should make this clearer.
Should the initial top grid timestep for the current/last timestep be saved during output?
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
We should improve how the CUDA ppm solver handles the numbers of ghost zones, which is now a runtime parameter.
Original report by John Wise (Bitbucket: jwise77, GitHub: jwise77).
When the Fortran solvers crash, many values are printed but without any descriptions beyond single letters. We should add prefix descriptions to these write() statements.
Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).
The documentation on compile-time options needs improvement. The existing descriptions could be more precise. In particular, parameters that sound important but may or may not do anything should be defined better (like max-tasks-per-node-N).
I think three general things could be improved:
Clearer descriptions on all (or mostly all) parameters
Denoting which parameters really should never be moved from the default, and why (and in what situation you may want to do this)
Some may not be used at all, and changing others from their defaults may break everything. The long-term fix would be to remove these compile-time options and their associated code; the short-term fix is to mark them as "do not touch" or "does nothing".
I'm happy to make the changes myself and issue a PR as long as people include updated descriptions on parameters here. I can collate and update.
Original report by Daniel Reynolds (Bitbucket: drreynolds, GitHub: drreynolds).
When generating a local gold-standard of the "push" suite of test problems, the CoolingTest_Grackle test problem immediately fails due to a missing input file, metal_cool.dat. It looks like this file is missing from the enzo-dev repository, so if someone could add it in, I imagine that this test would pass.
That said, the default configuration uses "grackle-no" by default, so I wonder whether this test should run in the first place?
As this was the only test that failed when generating the local standard, once this is fixed then I think everything should be fine.
Anyways, here's the estd.out file from running CoolingTest_Grackle:
$ cat enzo-gold/5d6653715fb6/Cooling/CoolingTest_Grackle/estd.out
MPI_Init: NumberOfProcessors = 1
warning: the following parameter line was not interpreted:
use_grackle = 1
warning: the following parameter line was not interpreted:
UVbackground = 0
InitializeRateData: NumberOfTemperatureBins = 600
InitializeRateData: RadiationFieldType = 0
****** ReadUnits: 4.906565e+31 1.670000e-24 3.085700e+18 3.155700e+11 *******
Caught fatal exception:
'Error opening metal cooling table metal_cool.dat
'
at ReadMetalCoolingRates.C:40
Backtrace:
BT symbol: ./enzo.exe() [0x40aaa3]
BT symbol: ./enzo.exe() [0x88510c]
BT symbol: ./enzo.exe() [0x7ed79e]
BT symbol: ./enzo.exe() [0x891fb9]
BT symbol: ./enzo.exe() [0x7e8599]
BT symbol: ./enzo.exe() [0x40a372]
BT symbol: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fc145438ec5]
BT symbol: ./enzo.exe() [0x409229]
*** Error in `./enzo.exe': free(): invalid pointer: 0x00000000068565c8 ***
[0]0:Return code = 0, signaled with Aborted
Original report by John Wise (Bitbucket: jwise77, GitHub: jwise77).
The test's parameter files have several parameters (e.g. units, dtPhoton) as NaNs, causing yt to fail loading the datasets during the test suite.
Original report by chummels (Bitbucket: chummels, GitHub: chummels).
I think it can be confusing for new (and old) users to find various locations for sometimes disparate information about a code like enzo. We've done a pretty good job of removing references to the LCA page and all of its old versions of enzo. But now we have http://enzo-project.org and enzo.googlecode.com, which are two separate places that we need to keep up to date (and are currently not in sync with each other). What further complicates issues is that there are virtually no references that I can find about us using bitbucket (only 1 in the dev section), even though it is the main avenue by which we all interact with the code.
It seems to me the only reason we keep up the enzo.googlecode.com website is to provide a location for the "stable" versions of the code to be downloaded (e.g. 2.0, 2.1, 2.2, etc.). It also seems like enzo.googlecode.com was preferred before we had built up the enzo-project.org website (which IMO is much nicer), but now there is so much crosstalk between the two that it seems very confusing (and difficult to keep everything up to date if we must update the docs, and then two websites with relevant information every time we modify something).
So what I'm asking is, can we migrate everything to just sit on the enzo-project.org website (boot camp, tarballs, content); delete the enzo.googlecode.com website; remove all references to enzo.googlecode.com; and continue to do everything through enzo-project.org with a short rope to bitbucket for those who want to get the code?
I may have missed some significant reasons for keeping the googlecode website, so please correct me if I'm wrong, but I think my proposition would streamline our public face a lot for new users.
Original report by Danielle Skinner (Bitbucket: drenniks, GitHub: drenniks).
I've noticed there are many parameters that are either missing or have incomplete descriptions. I've compiled a list of parameters from a simulation that I have been working with that are not in the parameter list. This may not include all parameters on the webpage that don't have descriptions.
I think it would be useful to get these updated. What I am asking is for people to take a look at the Google Doc file at the end of this description, and add a description to whatever parameters they can. Once all the parameters are finished, I will submit a pull request to update the documentation. That way all the parameter updates will be contained in a single pull request.
After each parameter, I put a small description of the parameter's status in the parameter list.
Here is a link to the google doc: https://docs.google.com/document/d/1sbv_67BV_koOsldsjpx1oycpGvZTFB2ZjdpE9_LcKvo/edit?usp=sharing
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
I've seen a few parameters that are only read in and written out. These should be removed.
Here is a running list (Please edit as found):
#!text
GreensFunctionMaxNumber
GreensFunctionMaxSize
And here is a list of all global_data.h parameter variables that appear in fewer than 4 files (lots of false positives for unused, but maybe still a useful list),
found using http://paste.yt-project.org/show/3338/:
#!text
PreviousMaxTask
debug2
CurrentProblemType
TimestepSafetyVelocity
DimUnits
DimLabels
BaryonSelfGravityApproximation
S2ParticleSize
GreensFunctionMaxNumber
GreensFunctionMaxSize
GloverRadiationBackground
GloverOpticalDepth
EvolveRefineRegionNtimes
EvolveRefineRegionTime
EvolveRefineRegionLeftEdge
EvolveRefineRegionRightEdge
StaticPartitionNestedGrids
First_Pass
DepositPositionsParticleSmoothRadius
ExternalBoundaryField
NodeMem
NodeMap
PrevParameterFileName
WaitComm
filePtr
tracename
Start_Wall_Time
End_Wall_Time
flagging_count
in_count
out_count
moving_count
flagging_pct
moving_pct
memtracePtr
traceMEM
memtracename
StarParticlesOnProcOnLvl_Position
StarParticlesOnProcOnLvl_Velocity
StarParticlesOnProcOnLvl_Mass
StarParticlesOnProcOnLvl_Attr
StarParticlesOnProcOnLvl_Type
StarParticlesOnProcOnLvl_Number
RKOrder
SmallEint
CoolingCutOffDensity1
CoolingCutOffDensity2
CoolingPowerCutOffDensity1
CoolingPowerCutOffDensity2
CoolingCutOffTemperature
HaloMass
HaloConcentration
HaloRedshift
HaloCentralDensity
HaloVirialRadius
ExternalGravityConstant
ExternalGravityPosition
ExternalGravityOrientation
ShiningParticleID
TotalSinkMass
NBodyDirectSummation
StageInput
LocalPath
GlobalPath
yt_parameter_file
conversion_factors
my_processor
pix2x
pix2y
x2pix
y2pix
PhotonMemoryPool
TotalEscapedPhotonCount
PhotonEscapeFilename
IsothermalSoundSpeed
RefineByJeansLengthUnits
MBHParticleIOTemp
OutputWhenJetsHaveNotEjected
current_error
ClusterSMBHAccretionEpsilon
ExtraOutputs
I think it would be good to compile a list and do it all in one go.
Original report by yoshiki takahashi (Bitbucket: yoshiki-takahashi, GitHub: yoshiki-takahashi).
Lines 22 and 36 fail under Python 3 with:
SyntaxError: Missing parentheses in call to 'print'
The fix is to change these print statements to the function form, e.g. print("WARNING: could not get version information. Please install mercurial.")
Original report by dcollins4096 (Bitbucket: dcollins4096, GitHub: dcollins4096).
Setting MaximumGravityRefinementLevel will cause incorrect results, because the SiblingList is not repopulated. Please stop using MaximumGravityRefinementLevel until this is resolved.
PrepareDensityField, line 116,
level = min(level, MaximumGravityRefinementLevel);
and then the grid array is set from the level (which in my case was level=1, while I had MaximumRefinementLevel = 2)
It then calls
PrepareGravitatingMassField2a(Grids[grid1], grid1, SiblingList,
MetaData, level, When);
where everything except SiblingList references level=1, but SiblingList was generated from level=2. This is then passed into PrepareGravitatingMassField2a, which does some particle overlap work, namely calling CheckForOverlap on the GridList (on level 1) and things in the SiblingList (from level 2).
Possible solutions:
-- Storing the SiblingList of MaximumGravityRefinementLevel (as a global? Passing it through the recursive call to EvolveLevel)
-- Recomputing the SiblingList as it is needed
This will need testing. In the meantime, please do not use MaximumGravityRefinementLevel; it will lead to incorrect results.
Original report by Forrest Glines (Bitbucket: forrestglines, GitHub: forrestglines).
When compiling with single precision, several tests from the test suite either fail their checks or fail to complete the simulations without errors. However, different tests fail on different machines. This is using the make configuration outlined for compiling for CUDA, but without actually running with CUDA, i.e. with this make config:
#!bash
make integers-32
make precision-32
make particles-32
make particle-id-32
make inits-32
make io-32
and with this in the makefile
#!bash
MACH_FFLAGS_INTEGER_32 =
MACH_FFLAGS_INTEGER_64 = -i8
MACH_FFLAGS_REAL_32 =
MACH_FFLAGS_REAL_64 = -r8
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
This page:
https://enzo.readthedocs.org/en/latest/developer_guide/HowToAddNewBaryonField.html
should be updated with comments on which pieces of the code need to be changed for conservation/interpolation/prolongation. Ideally it would then be updated again once the field objects are in place.
Original report by John Wise (Bitbucket: jwise77, GitHub: jwise77).
When using star objects, there are a lot of undocumented behaviors of the particle types that indicate whether the star object is living or dead, whether it has had a supernova, etc. Document this!
Original report by John Wise (Bitbucket: jwise77, GitHub: jwise77).
At very high redshifts, the radiation energy density will have some cumulative effects. There have been some offline requests for this feature, so I'm making an issue. I don't believe it should be too hard to add, because only a few files (CosmologyCompute*) would have to be modified.
Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).
This issue is triggered when DetermineSubgridSizeExtrema is called during the very first RebuildHierarchy. Since the hierarchy doesn't exist yet, the NumberOfCells array is zero for all levels but level 0. This causes MaximumSubgridSize and MinimumSubgridEdge to be floored to the smallest allowed values.
For most problems, this will create inefficient AMR hierarchies dominated by small grids with large surface-area-to-volume ratios. Since SubgridSizeAdjust is on by default, new users will tend to be bitten by this issue, as they are more likely to be running test problems than cosmology simulations, which have static initial hierarchies and do not have this issue.
My workaround is simply to turn off SubgridSizeAdjust during initialization.
I can see two ways to fix this. One would be to alter DetermineSubgridSizeExtrema to respect the MinimumSubgridEdge and MaximumSubgridSize parameters supplied by the user in their parameter file rather than overwriting them.
The other would be to patch RebuildHierarchy so that DetermineSubgridSizeExtrema is never called during initialization. Since this would still create tiny grids on a new AMR level the first time the code reaches it, one would also have to patch the call to DetermineSubgridSizeExtrema to pass in (for example) NumberOfCells[i] when NumberOfCells[i+1] is zero.
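A hedged sketch of that fallback (a hypothetical helper, not Enzo's actual routine): when the next-finer level has no cells yet, use the current level's own cell count instead of letting the extrema floor out to their minimum values:

```cpp
#include <cassert>

// Hypothetical helper sketching the fallback described above: pick the
// cell count DetermineSubgridSizeExtrema should use for a given level.
// If the finer level has no cells yet (e.g. during the very first
// RebuildHierarchy), fall back to this level's own count.
long long CellCountForSizeExtrema(const long long *NumberOfCells,
                                  int level, int max_level) {
  long long n = (level + 1 <= max_level) ? NumberOfCells[level + 1] : 0;
  if (n == 0)
    n = NumberOfCells[level];  // finer level not built yet
  return n;
}
```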
Original report by dcollins4096 (Bitbucket: dcollins4096, GitHub: dcollins4096).
At line 191 in euler.F, there is a somewhat disconcerting "max" statement. It is possible that this statement is causing other poor behaviors in the code. Possible things to test:
-- remove the line, see what the code does
-- Install write statements at that point to see if it ever actually gets triggered (my suspicion is that it won't, given the CFL and cooling-time criteria on dt)
-- try reformulating this as a timestep criterion
All of @samskillman @gbryan @jwise77 have expressed interest in this one.
d.
Original report by Duncan Christie (Bitbucket: dachrist, GitHub: dachrist).
Updating NumPy to recent versions (I tried 1.15.3, but not the 1.16.0 that was released a few days ago) seems to break performance_tools.py. It works without errors with 1.11.3, which I had previously been using.
The specific error returned is:
Traceback (most recent call last):
File "/home/dachristie/ENZO-Adding-AD/enzo-dev-adding-ad/src/performance_tools/performance_tools.py", line 1014, in <module>
p = perform(filename)
File "/home/dachristie/ENZO-Adding-AD/enzo-dev-adding-ad/src/performance_tools/performance_tools.py", line 187, in __init__
self.data = self.build_struct(filename)
File "/home/dachristie/ENZO-Adding-AD/enzo-dev-adding-ad/src/performance_tools/performance_tools.py", line 276, in build_struct
data[line_key][i] = line_value
ValueError: setting an array element with a sequence.
Original report by Philipp Grete (Bitbucket: pgrete, GitHub: pgrete).
The CUDA build variable(s) are slightly off. For example, MACH_LIBS_INCLUDES in
Make.config.assemble:836 ASSEMBLE_CUDA_INCLUDES = $(MACH_LIBS_INCLUDES)
is never used and probably should read MACH_INCLUDES_CUDA, as in other parts of the build machinery.
Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).
Subject says it all; this is important for the 2.4 release.
Original report by John Wise (Bitbucket: jwise77, GitHub: jwise77).
The streaming data format conflicts with parallel HDF5 installations.
Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).
Hi everyone,
One thing that has always bothered me is the separate definitions of the solar mass throughout the code. It is 1.989e33 in many places, but is defined in physical_constants.h as SolarMass = 1.9891e33.
I would assume there may be similar inconsistencies with other physical constants. Should we push to have everything uniformly defined as in physical_constants.h? If so, I can go through the code and replace all locally defined physical constants with the constants defined in physical_constants.h.
I wouldn't be surprised if this leads to large enough changes to answers to fail the test-suite.
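For scale, the two values differ by roughly 5e-5 in relative terms, which is small but enough to shift answers. A sketch of the intended pattern (GramsToSolarMasses is a hypothetical helper; only the SolarMass value comes from physical_constants.h as quoted above):

```cpp
#include <cassert>
#include <cmath>

// Define the constant once (the value quoted above from
// physical_constants.h) and use it everywhere, instead of locally
// hard-coding 1.989e33 in individual routines.
const double SolarMass = 1.9891e33;  // grams

// Hypothetical helper using the single shared constant.
double GramsToSolarMasses(double grams) { return grams / SolarMass; }
```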
Original report by chummels (Bitbucket: chummels, GitHub: chummels).
Right now there is only a small amount of useful information dropped into the test_results.txt file after the completion of a test suite run. There is far more useful information that drops to STDOUT during runtime. It would be beneficial to clean up test_results.txt and provide more information for debugging possible failures / errors.
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
Many tests are currently very sensitive to compilers/optimizations. An incomplete list of these failing tests includes: AdiabaticExpansion, CollideTest, ProtostellarCollapse_Std, PhotonTestAMR
We should figure out a way to handle different sensitivities. Perhaps a flag during testing such as: --rtol=1.0e-7, much like what is used in the nose testing framework.
Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).
Please comment on this issue to suggest ways that we can improve the Enzo testing infrastructure (documented here). Some ideas are:
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
#!txt
calc_dt returns NaNs. Probably has something to do with initialization:
warning: the following parameter line was not interpreted:
GravityBoundaryFaces = 1 1 1 // isolating in all directions
warning: the following parameter line was not interpreted:
GravityBoundaryRestart = 0 // read boundary restart if possible
warning: the following parameter line was not interpreted:
GravityBoundaryName = potbdry // default boundary restart file
****** ReadUnits: 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 *******
DATA dump: ./DD0001/pc_amr_
WriteAllData: writing group file ./DD0001/pc_amr_0001.cpu0000
DATA dump: dumpdirname=(./DD0001) == unixresult=0
Continuation Flag = 1
calc_dt NaN NaN 4 4 4
Original report by Daegene Koh (Bitbucket: dkoh, GitHub: dkoh).
Currently Enzo supports up to HDF5 1.8, while 1.10 breaks it.
There are also concerns with the future roadmap of HDF5 and keeping up with the various changes that are coming.
This could be resolved by simply updating Read/Write routines or perhaps reworking them altogether.
Original report by MattT (Bitbucket: MatthewTurk, GitHub: MatthewTurk).
The IO in WriteAllData can be refactored to be a bit more efficient.
Responsible: A DC & MJT joint effort
Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).
Currently 1D AMR problems crash on some platforms (OS X seems to be particularly affected):
The InteractingBlastWaves problem crashes very quickly with the following traceback:
#0 0x00007fff94c72866 in __pthread_kill ()
#1 0x00007fff9408535c in pthread_kill ()
#2 0x00007fff92f3ab1a in abort ()
#3 0x00007fff95336690 in szone_error ()
#4 0x00007fff9533819c in tiny_free_list_remove_ptr ()
#5 0x00007fff95334127 in szone_free_definite_size ()
#6 0x000000010038c657 in ProtoSubgrid::ShrinkToMinimumSize (this=0x105a0cd30) at ProtoSubgrid_ShrinkToMinimumSize.C:101
#7 0x000000010031ca3d in IdentifyNewSubgridsBySignature (SubgridList=0x10128ab10, NumberOfSubgrids=@0x7fff5f5e2d58) at IdentifyNewSubgridsBySignature.C:52
#8 0x00000001000e2884 in FindSubgrids (Grid=0x104f6dc50, level=1, TotalFlaggedCells=@0x7fff5fbfdd38, FlaggedGrids=@0x7fff5fbfdd30) at FindSubgrids.C:126
#9 0x00000001003b316c in RebuildHierarchy (MetaData=0x7fff5fbff388, LevelArray=0x7fff5fbfe410, level=0) at RebuildHierarchy.C:397
#10 0x00000001000b49f5 in EvolveHierarchy (TopGrid=@0x7fff5fbff368, MetaData=@0x7fff5fbff388, Exterior=0x7fff5fbfe5a0, ImplicitSolver=0x522f6412, LevelArray=0x7fff5fbfe410, Initialdt=0) at EvolveHierarchy.C:282
#11 0x0000000100002257 in main (argc=3, argv=0x7fff5fbff7c8) at enzo.C:753
The specific line it's crashing on is where the GridFlaggingField is deleted during subgrid construction.
I also see crashes for ShockInABox (traceback) and all of the AMR Toro tests except for Toro3 and Toro5 (traceback for Toro1 AMR).
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
Initial conditions should be made to match the method paper. This is a simple fix, but one that will break answer testing. DomainLeft/RightEdge should be (-0.5,-0.5) and (0.5, 0.5).
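Assuming the standard Enzo domain parameters, the parameter file change would presumably look like this (a sketch; the test's actual parameter file is not shown here):

#!txt
DomainLeftEdge  = -0.5 -0.5
DomainRightEdge =  0.5  0.5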
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
Simple plots should be updated to use new plotting machinery available in yt.
Original report by Brian O'Shea (Bitbucket: bwoshea, GitHub: bwoshea).
This issue is a place to identify documentation for enzo-dev (Enzo 2.x) that could be added or improved, or that is inaccurate. This includes docs for parameters, physics, setting up and using the code, and the test suite. Please make suggestions below, and if you have a particular page in mind please include a link to the appropriate web page.
The Enzo documentation can be found online at https://enzo.readthedocs.io/en/latest/ .
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
There is a bug introduced in pull request #219 that was pointed out in this thread: https://groups.google.com/forum/#!topic/enzo-dev/0Z17eRLJFME
Simulations run with more than 3 cores seem to fail in at least some cosmological simulations.
Original report by Daegene Koh (Bitbucket: dkoh, GitHub: dkoh).
In particular, the error comes from Grid_RotatingDiskInitializeGrid.C. The prototype for RotatingDiskInitializeGrid() has its parameters declared as FLOAT, while in Grid.h the parameters are float. I'm not sure what the intended types are.
Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).
Currently Enzo's build system directly imports mercurial:
Since Enzo is BSD-licensed, this is not allowed because directly importing mercurial implies that the python code that imports it must be GPL licensed. See https://www.mercurial-scm.org/wiki/MercurialApi for more details.
Instead, we should be talking to mercurial over the python-hglib command server. This will also allow us to support python installations based on python3, since python-hglib is available under python3.
Original report by Sam Skillman (Bitbucket: samskillman, GitHub: samskillman).
There are still references to lagos, which is several years out of date at this point.
Original report by Andrew Emerick (Bitbucket: aemerick, GitHub: aemerick).
RT simulations can crash when RadiativeTransferLoadBalance is ON but there are either no sources or no photons (I'm not entirely sure which one is the cause). This could likely be an easy(ish) fix: just add a check that any sources are present before doing any load balancing.
Plans are to address this at the Enzo workshop 2017.
Original report by Nathan Goldbaum (Bitbucket: ngoldbaum, GitHub: ngoldbaum).
Now that we're moving away from splitting code up into two repos and just maintaining two named branches in the same repository, we need to update the install instructions in the docs to reflect the new practice.
Original report by Greg Bryan (Bitbucket: gbryan, GitHub: gbryan).
Star Particle Method 1 documentation incorrectly says that star particle velocity is set to 0. This was corrected some time ago (but not changed in docs).