
LiLF's People

Contributors

ccstuardi, henedler, jurjen93, martijnoei, revoltek, tpasini, vcuciti


LiLF's Issues

Add BLsmooth.py to scripts directory?

Maybe I am missing it, but after pulling the current LiLF GitHub repository I don't see BLsmooth.py anywhere in the tree, so the user has to get it from somewhere else and add it to the $PATH for the pipeline to run.

It would be nice to add BLsmooth.py to the LiLF/scripts/ directory so that everything is kept together.

memory error while running a deep field

Hi, I'm getting the following error running LOFAR_dd-serial.py on a deep (72h) observation:

 - 15:50:29 - ClassJones         | Build solution Dico for killMS
 - 15:50:29 - ClassJones         |   Parsing solutions ddcal/c00/solutions/interp.h5:sol000/phase000+amplitude000
 - 15:50:29 - ClassJones         | Parsing h5file pattern ddcal/c00/solutions/interp.h5
 - 15:50:29 - ClassJones         |   Applying ddcal/c00/solutions/interp.h5 solset ['sol000'] soltabs {'tec000': False, 'phase000': True, 'amplitude000': True}
 - 15:55:01 - AsyncProcessPool   | process io00: exception raised processing job DATA:0:0: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Other/AsyncProcessPool.py", line 870, in _dispatch_job
    result = call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Data/ClassVisServer.py", line 545, in _handler_LoadVisChunk
    JonesMachine.InitDDESols(DATA)
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Data/ClassJones.py", line 170, in InitDDESols
    DicoSols, TimeMapping, DicoClusterDirs = self.MakeSols("killMS", DATA, quiet=quiet)
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Data/ClassJones.py", line 299, in MakeSols
    DicoClusterDirs_killMS, DicoSols = self.GiveKillMSSols()
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Data/ClassJones.py", line 459, in GiveKillMSSols
    DicoClusterDirs, DicoSols, VisToJonesChanMapping = self.GiveKillMSSols_SingleFile(
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Data/ClassJones.py", line 794, in GiveKillMSSols_SingleFile
    VisToJonesChanMapping,DicoClusterDirs,DicoSols,G=self.ReadH5(SolsFile)
  File "/usr/local/lib/python3.8/dist-packages/DDFacet/Data/ClassJones.py", line 671, in ReadH5
    solset_gains.append(np.exp(1j * phase))
MemoryError: Unable to allocate array with shape (15304, 29, 35, 768, 2) and data type complex128

Reinout's guess is that the solutions for all MS files are stored in a single H5 file with a giant time axis (15304 time slots). Could this be the issue?
The pipeline runs fine on the same node for a shorter (4h) observation.
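
As a quick sanity check (this calculation is mine, not part of LiLF or DDFacet), the array in the traceback is indeed far too large to hold in memory on a typical node:

import numpy as np

# Size of the array DDFacet tries to allocate in ReadH5; the shape is taken
# from the MemoryError above and complex128 uses 16 bytes per element.
shape = (15304, 29, 35, 768, 2)
nbytes = np.prod(shape, dtype=np.int64) * np.dtype(np.complex128).itemsize
print(f"{nbytes / 1e9:.0f} GB")  # ~382 GB

So if the giant time axis is the cause, splitting the solutions over several H5 files or time chunks would bring the allocation back to a manageable size.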

PS: I had to add --Misc-ConserveMemory=1 to the DDF call, otherwise I was getting a different error (AsyncProcessPool | worker 'compXX' killed by signal SIGKILL), but that should be related to DDF rather than to LiLF.

Complete LBA IS calibrator models

We need proper sub-arcsecond models for the calibrators.

  • 3c196: model already exists
  • 3c295: create a model based on Christian's 30-80 MHz model; it still requires re-scaling and improvement
  • 3c380: create a model based on Christian's model

3c295-christian.txt

Reference antenna 001

Hi,

A bunch of scripts have the reference antenna hard-coded to CS001LBA, but since that station is currently being used for LOFAR2.0 testing, new data will crash the pipeline. It would be nice to be able to set the reference antenna with an argument, or maybe just switch the hard-coded CS001LBA to CS002LBA?
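
A minimal sketch of how a configurable reference antenna could look, assuming a hypothetical refant option in the config file (the option name and file name are illustrative, not existing LiLF settings):

import configparser

# Hypothetical: read the reference antenna from the pipeline config,
# falling back to CS002LBA when the option is not set.
parset = configparser.ConfigParser()
parset.read('lilf.config')  # assumed config file name
refant = parset.get('LOFAR_cal', 'refant', fallback='CS002LBA')
print('Using reference antenna:', refant)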

LOFAR_ddfacet pipeline has no matching parset

When running the LOFAR_ddfacet.py pipeline, I got the following error:

Singularity> python3 /home/mknapp/software/LiLF/pipelines/LOFAR_ddfacet.py
2023-11-26 10:47:29 - INFO - Logging initialised in /Download/mss/id693094_-_v830tau (file: pipeline-ddfacet.logger_2023-11-26_10:47:29.logger)
2023-11-26 10:47:29 - WARNING - Hostname GALATEA unknown.
2023-11-26 10:47:29 - INFO - Scheduler initialised for cluster Unknown: GALATEA (maxThreads: 12, qsub (multinode): False, max_processors: 12).
Traceback (most recent call last):
  File "/home/mknapp/software/LiLF/pipelines/LOFAR_ddfacet.py", line 19, in <module>
    parset_dir = parset.get('LOFAR_ddfacet','parset_dir')
  File "/usr/lib/python3.8/configparser.py", line 781, in get
    d = self._unify_values(section, vars)
  File "/usr/lib/python3.8/configparser.py", line 1149, in _unify_values
    raise NoSectionError(section) from None
configparser.NoSectionError: No section: 'LOFAR_ddfacet'

I checked the parset directory and indeed there are no parsets for LOFAR_ddfacet. Is this pipeline deprecated? Should I be using the LOFAR_dd.py pipeline instead?
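
Until that is clarified, a possible workaround (a sketch of my own, not current LiLF behaviour) would be to guard the lookup so that a missing section gives a clear message instead of a traceback:

import configparser

parset = configparser.ConfigParser()
parset.read('lilf.config')  # assumed config file name

# Fail with an explicit message if the pipeline has no matching section.
if not parset.has_section('LOFAR_ddfacet'):
    raise SystemExit('No [LOFAR_ddfacet] section found: this pipeline may be '
                     'deprecated; consider the LOFAR_dd.py pipeline instead.')
parset_dir = parset.get('LOFAR_ddfacet', 'parset_dir')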

LOFAR_dd.py pipeline hangs at 'predict' step without error message

I'm running the LOFAR_dd.py pipeline on an Ubuntu 22 system with 128 GB RAM and 12 cores, using the 20220805 Singularity image for the environment. The pipeline runs fine until it gets to the "Predict full model" step: it works through the first few parts of that step, but then hangs silently at "Adding model data column...". I see that entry in the wsclean log and then nothing else happens. This has recurred at least 3 times, and it gets stuck in exactly the same place with no error message. When I check resource utilization, the machine is at idle levels, so it isn't a case of this step just taking a really long time.

Let me know if seeing the logs either from the pipeline or wsclean would be helpful.

Add example parset

Add a lilf.parset as an example, containing all the options a user might want to change.
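
A hedged sketch of what such an example file could contain; the [LOFAR_cal] section and the skymodel option mirror the override shown in another issue on this page, while the remaining section names and paths are placeholders that would need to be checked against the pipelines:

# Illustrative lilf.parset sketch -- section and option names must be
# verified against the actual pipeline code before use.
[LOFAR_cal]
parset_dir = /path/to/LiLF/parsets/LOFAR_cal
skymodel = /path/to/LiLF/models/calib-simple.skydb

[LOFAR_self]
parset_dir = /path/to/LiLF/parsets/LOFAR_self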

Extraction: issue with fits mask size

Currently the user can specify a .reg file as the cleaning mask. We are trying to also allow a .mask.fits file, but its size in degrees has to be exactly the same as that of the images produced by the script, otherwise we get a WSClean error.
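
A minimal sketch (file names are placeholders, and it assumes the usual NAXIS/CDELT FITS keywords) of how one could check that a mask covers the same extent in degrees as a pipeline image before handing it to WSClean:

from astropy.io import fits

def extent_deg(path):
    # On-sky extent in degrees along each axis: NAXIS * |CDELT|.
    hdr = fits.getheader(path)
    return (abs(hdr['CDELT1']) * hdr['NAXIS1'],
            abs(hdr['CDELT2']) * hdr['NAXIS2'])

mask_extent = extent_deg('target.mask.fits')        # placeholder file names
image_extent = extent_deg('pipeline-image.fits')
print(mask_extent, image_extent)  # the two must match to avoid the WSClean error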

Amp DIE step fails near end in LOFAR_self pipeline

I'm running in a singularity container (pill20220805.simg) on Ubuntu 22; the machine has 12 cores, 128 GB RAM, a GPU, and a large RAID with ~6 TB free.

I'm getting an error near the end of the "slow G" step. From the logs (attached), the error occurs when two or three of the four timeslots have already finished processing. After I got the error the first time, I rebooted my system and ran the pipeline again - I got the same error and the logs look similar, though not quite identical.

Error:

2023-11-03 20:23:29 - ERROR - DP3 run problem on:
logs_pipeline-self_2023-10-31_11:59:41/TC00_solG-c0.log
Traceback (most recent call last):
  File "/home/mknapp/software/LiLF/pipelines/LOFAR_self.py", line 243, in <module>
    MSs.run('DP3 '+parset_dir+'/DP3-solG.parset msin=$pathMS sol.h5parm=$pathMS/g.h5 sol.solint='+str(120*base_solint)+' sol.nchan='+str(16*base_nchan),
  File "/home/mknapp/software/LiLF/LiLF/lib_ms.py", line 141, in run
    self.scheduler.run(check = True, maxThreads = maxThreads)
  File "/home/mknapp/software/LiLF/LiLF/lib_util.py", line 741, in run
    self.check_run(log, commandType)
  File "/home/mknapp/software/LiLF/LiLF/lib_util.py", line 813, in check_run
    raise RuntimeError(commandType+' run problem on:\n'+out)
RuntimeError: DP3 run problem on:
logs_pipeline-self_2023-10-31_11:59:41/TC00_solG-c0.log

Looking at the logs for each of the timeslots, three of the four look fine but TC01_solG-c0.log says "killed" right at the end. The machine did not reboot and I was not doing anything else on it at the time.

Processing 899 time slots ...

0%....10.Killed

Any guidance would be appreciated! The singularity container has been working great for me up until this point. I did note that the code in this branch for this pipeline step looks a little different from what's quoted in the error message - any chance that changes made since this docker/singularity image was built could be causing the issue?

TC00_solG-c0.log
TC01_solG-c0.log
TC02_solG-c0.log
TC03_solG-c0.log
pipeline-self_2023-11-03_212042.logger.txt

time splitting in integer units of integration time

for timerange in np.array_split(sorted(set(t.getcol('TIME'))), round(hours)):

I think that splitting this way destroys the regular structure (time[i+1] - time[i] = constant) at the boundaries of the split. That means that after recombining the MSs in time (which one sometimes wants to do), the time axis has become irregular. An easy solution, I think, would be to split with TaQL in integer units of timesteps. A simple example that splits off a selection of time steps, say from 120 to 140, with TaQL is:

taql 'select from bla.MS where TIME in (select distinct TIME from bla.MS offset 120 limit 20) giving bla-subset.MS as plain'
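
A sketch of how this could be generalised with python-casacore (the MS name and chunk size are placeholders; this illustrates the idea rather than proposing pipeline code):

from casacore.tables import table, taql

ms = 'bla.MS'        # placeholder input MS
chunk = 900          # integer number of timesteps per split
ntimes = len(set(table(ms).getcol('TIME')))

# Split into pieces of exactly `chunk` timesteps so the time axis stays
# regular within each piece and the pieces can later be recombined.
for i, start in enumerate(range(0, ntimes, chunk)):
    taql(f"select from {ms} where TIME in "
         f"(select distinct TIME from {ms} offset {start} limit {chunk}) "
         f"giving bla-chunk{i:02d}.MS as plain")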

Calibrator model searched in the wrong file

When running /LiLF/pipelines/LOFAR_cal.py, the model for the 3C380 calibrator was searched for in calib-highres.skydb while it was actually in calib-simple.skydb.

> more logs_pipeline-cal_2023-06-22_13:06:10/3c380_SB110_pre.log
std exception detected: Something went wrong while reading the source model. 
The error was: Couldn't find patch for direction [3C380]

I solved this issue manually by adding the following to the lil.config file:

[LOFAR_cal]
skymodel = /LiLF/models/calib-simple.skydb

Improve flagging of bad periods

Aim: improve the flagging of the few % of data with quickly varying ionospheric conditions.
Can we transfer those flags from the calibrator? One possible solution is to include a fast amplitude solve on the calibrator whose solutions can be used to identify periods of bad scintillation/decorrelation. Alternatively, we can look at identifying periods with quickly varying phases.
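
A minimal numpy sketch of the second idea, assuming the calibrator phase solutions are already available as an array of shape (time, antenna) in radians (reading them from the h5parm is left out):

import numpy as np

def flag_fast_phases(phases, window=10, max_std=1.0):
    # phases: array of shape (ntime, nant), in radians.
    # Returns a boolean array of shape (ntime,): True marks a bad period.
    # Wrapped phase differences between consecutive time slots.
    dphi = np.angle(np.exp(1j * np.diff(phases, axis=0)))
    # Running scatter of the phase rate over a trailing window, all antennas.
    scatter = np.array([dphi[max(0, i - window):i + 1].std()
                        for i in range(len(dphi))])
    bad = scatter > max_std
    # diff() is one element shorter than the time axis; pad the first slot.
    return np.concatenate([[bad[0]], bad])

The window length and threshold here are arbitrary and would need tuning on real solutions.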

Container issue

Hi,
I am running the LiLF pipeline by following the instructions on the GitHub page and using the container built with docker_build.sh. Everything goes well until the DD calibration step, where it crashes after a deprecation warning (it seems to be a hard crash because the scipy functionality in question is already deprecated):
[screenshot of the traceback attached in the original issue]

Do you have a container with an earlier scipy version? Or a fix to go around this issue?
