bcalden / clusterpyxt

The Galaxy Cluster ‘Pypeline’ for X-ray Temperature Maps

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%
galaxy-clusters pipeline astronomy x-ray-astronomy astrophysics spectral-analysis cosmology

clusterpyxt's Introduction

Current beta version for CIAO 4.14

To use this version of ClusterPyXT, install CIAO 4.14 using conda as outlined in the CIAO installation instructions below. This version of ClusterPyXT requires Python 3.8 or greater, the default for CIAO 4.14.

Introduction

ClusterPyXT is a software pipeline to automate the creation of X-ray temperature maps, pressure maps, surface brightness maps, and density maps of galaxy clusters. It is open source and under active development. Please feel free to contribute! See the contribution section below for more details. (Even if you're new to everything!)

Overview

Requirements

This version of ClusterPyXT requires CIAO 4.14. The full calibration database (CALDB) is also required and can be installed alongside CIAO. To do so, change caldb_main to caldb during step 2 of the conda environment creation.

CIAO Installation

These instructions are for CIAO 4.14. Follow the installation instructions at the Chandra X-ray Center (CXC). Note that the custom installation option should be used, as it allows for the full CALDB installation. Make sure to select all CALDB options before downloading the installation script. Additionally, it is recommended you install the latest version of Python during installation; Python 3.8 is the required minimum.

ClusterPyXT also requires the astropy Python library within the CIAO environment. CIAO 4.14 allows for easy installation of this library: after installing CIAO, start the CIAO environment and run conda install astropy tqdm.

Download ClusterPyXT

To download ClusterPyXT, simply run git clone https://github.com/bcalden/ClusterPyXT.git.

Running ClusterPyXT

System Configuration

After following the instructions above, go to the ClusterPyXT directory and run python clusterpyxt.py to initialize the system configuration.

Cluster Initialization

Next, you must initialize a cluster. At a minimum, you need a name for the cluster and the Chandra Observation IDs you plan on using. Names can be whatever you want (e.g. A85, A115, Bullet, Toothbrush); just note that the name is used in directory and file names, so avoid slashes and other characters disallowed by your filesystem. Observation IDs can be found using the Chandra Data Archive. While you can start the pipeline with just this information, the redshift, hydrogen column density, and metallicity of the cluster are required to complete the spectral fitting. Redshift information can be found at the NASA/IPAC Extragalactic Database (NED). Hydrogen column density information can be found with the NASA HEASARC Tools. Solar abundance can usually be estimated at 0.3, although you can check the literature to see if a better value for your cluster should be used.
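The initialization inputs listed above can be summarized in a short sketch. The class and attribute names below are purely illustrative, not ClusterPyXT's actual API, and the A85 values are examples you should look up yourself at NED/HEASARC:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClusterInputs:
    """Illustrative container for the initialization inputs; not ClusterPyXT's API."""
    name: str                                        # used in file/directory names: no slashes
    obsids: List[str] = field(default_factory=list)  # Chandra Observation IDs
    redshift: Optional[float] = None                 # from NED
    hydrogen_column_density: Optional[float] = None  # from HEASARC, in 10^22 cm^-2
    abundance: float = 0.3                           # solar metallicity; 0.3 is a common default

# Example (values are illustrative; check them for your cluster):
a85 = ClusterInputs(name="A85", obsids=["904"], redshift=0.055,
                    hydrogen_column_density=0.028)
```

Only the name and obsids are needed to start; the remaining values can be filled in before spectral fitting.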

There are various ways to initialize a cluster. To use the CLI GUI (beta), just run python clusterpyxt.py.

Processing a Cluster

After a cluster is initialized, you can start or continue the pipeline by running python clusterpyxt.py and selecting the continue option. The pipeline will prompt for additional input (point source region files, exclusion areas, etc.). A detailed walkthrough of each stage follows below.

Stage 1

Required: Cluster initialized following the Cluster Initialization process above.

Main output: Merged X-ray surface brightness map.

After cluster initialization, the pipeline downloads, reprojects, and merges the observations and backgrounds. This process can take on the order of tens of minutes to hours (for large numbers of observations).

Stage 2

Required: sources.reg and exclude.reg

Main output: X-ray surface brightness map with sources removed.

At this stage, the data has been downloaded and the observations merged into a surface brightness map: ../[pipeline_data_dir]/[cluster_name]/[cluster_name]_broad_flux.img. Now it is time to filter out point sources and high-energy flares. To do so, first open the surface brightness map and create regions around sources you want excluded from the data analysis. These are typically foreground point sources one does not want to consider when analyzing the cluster. Save these regions as a DS9 region file named ../[pipeline_data_dir]/[cluster_name]/sources.reg.

Additionally, you need to create a region file containing any regions you want excluded from the deflaring process. This would include areas such as the peak of cluster emission, as these regions may contain high-energy events you want to keep in the analysis. Save this region file as ../[pipeline_data_dir]/[cluster_name]/exclude.reg.
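Both files are plain-text DS9 region files saved from the viewer. A minimal sources.reg might look like the following (the coordinates and radii here are purely illustrative):

```
# Region file format: DS9 version 4.1
global color=green
fk5
circle(0:41:37.8,-9:20:33.2,4.0")
circle(0:42:02.1,-9:18:11.5,6.0")
```

exclude.reg has the same format, typically with a single region over the cluster emission peak.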

After both files are saved, you can continue ClusterPyXT by running python clusterpyxt.py --continue or through the CLI GUI.

Stage 3

Required: acisI_region_0.reg file for each observation.

Main output: RMF and ARF files.

This stage extracts the RMF and ARF files. Before continuing the pipeline you need to create a region file for each observation. Each observation will need its own region file named acisI_region_0.reg and saved in the respective analysis directory (../[pipeline_data_dir]/[cluster_name]/[observation_id]/analysis/acisI_region_0.reg).

To create this file, select the Run Stage 3 button and then the Make ACIS Region Files button; this opens the respective acisI_clean.fits file (../[pipeline_data_dir]/[cluster_name]/[observation_id]/analysis/acisI_clean.fits) for each observation. When it opens, draw a small circular region containing some of each of the ACIS-I CCDs. This region does not need to contain ALL of the chips, just a piece of each; ~40 arcseconds is enough (a bigger circle means a longer runtime). Save this region file as acisI_region_0.reg, overwriting the file in the observation's analysis directory. The save dialog should open to the right folder, with only the acisI_region_0.reg file inside. Once you are finished with all observations, click the button to run Stage 3.

Stage 4

Required: master_crop-ciaowcs.reg

Main output: Filtered data (0.7-8.0 keV)

Now you need to create a region file enclosing the region you would like to crop the final analysis to. To do so, open the surface brightness file (../[pipeline_data_dir]/[cluster_name]/main_output/[cluster_name]_xray_surface_brightness_nosrc.fits) and create a box region containing all parts of the image you want included in the analysis.

Save this file as: ../[pipeline_data_dir]/[cluster_name]/main_output/master_crop-ciaowcs.reg

After this region file is created, continue running ClusterPyXT by running python clusterpyxt.py --continue or through the CLI GUI.

Note: Due to processing complexities, during this stage you may encounter an error where two observations have slightly different dimensions (usually by ~1 pixel) and the pipeline cannot combine them. This happens because the crop region can split pixels, and a split pixel may be counted in some observations but not in others. If this happens, draw a new crop region, save it, and re-run.

Stage 5

Required: All previous stages completed.

Main output: Scale map and regions used for spectral fitting

This stage only requires all previous stages to be completed. Stage 5 calculates the adaptive circular bins, generates the scale map, and calculates exposure corrections. It can take a long time (on the order of tens of hours).
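The adaptive circular binning idea behind the scale map can be pictured as follows: at each pixel, grow a circle until the enclosed counts reach a signal-to-noise target, and record that radius. The sketch below is conceptual only; the function name, threshold, and loop structure are assumptions, not ClusterPyXT's implementation (which is described in Alden et al. 2019):

```python
import numpy as np

def scale_map_sketch(counts, target_sn=40, max_radius=50):
    """At each pixel of a counts image, grow a circular bin until
    sqrt(total counts) >= target_sn; the returned map of radii is
    the 'scale map'. Conceptual sketch only, not ClusterPyXT's code."""
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    radii = np.full((ny, nx), np.nan)
    min_counts = target_sn ** 2          # S/N ~ sqrt(N) for Poisson counts
    for y in range(ny):
        for x in range(nx):
            for r in range(1, max_radius + 1):
                mask = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
                if counts[mask].sum() >= min_counts:
                    radii[y, x] = r      # smallest radius meeting the target
                    break
    return radii
```

Bright regions get small bins (high spatial resolution) while faint outskirts get large bins, which is part of why this stage and the subsequent fitting take so long on real images.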

After this stage is complete, you are ready for spectral fitting.

Spectral Fitting

As there can be anywhere from 10^3 to 10^5 regions to fit, there are multiple ways to create a temperature map. One can process every region serially on a local computer, run them in parallel on a single multicore computer, or even in parallel on a supercomputer. As scheduling on a supercomputer is highly specific to each environment, only a general description is provided; feel free to contact us for help.

To do the spectral fitting, only a subset of the data is required. If processing on a remote machine, ensure the remote machine has the required software and that ClusterPyXT is configured (see above). Make a directory for your cluster in the remote cluster data directory (set in the system configuration on first run, also set in ClusterPyXT/pypeline_config.ini). The files required are the configuration file (ClusterName_pypeline_config.ini) and the acb folder within the cluster directory. Upload both of these to the remote machine's cluster folder you just created. You are now ready for spectral fitting.

To run the spectral fitting portion of the pipeline in serial, run python spectral.py --cluster_config_file 'path/to/cluster_config_file' --resolution 2.

To run in parallel (recommended), run python spectral.py --parallel --num_cpus N --cluster_config_file 'path/to/cluster_config_file' --resolution 2. If you need to restart for any reason, simply add the --continue argument to the above spectral.py command and ClusterPyXT will begin where it left off without re-fitting any region.

The resolution is set as either 1 (low resolution), 2 (medium resolution), or 3 (high resolution).

To run on a supercomputer, you can make use of the command file generated (commands_ClusterName.lis) in the acb directory. This command file has a line for each region to be fit that directly calls the spectral fitting routine on that region. You can write a simple script to parse this command file and send it to each of the nodes used in the supercomputer.
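As a concrete illustration of that last step, the sketch below round-robins the lines of a commands_ClusterName.lis-style file into per-node batches. The file path and batching scheme are assumptions; real job submission would depend on your scheduler (SLURM, PBS, etc.):

```python
def split_commands(commands, num_nodes):
    """Distribute the per-region fitting commands round-robin across num_nodes batches."""
    batches = [[] for _ in range(num_nodes)]
    for i, cmd in enumerate(commands):
        batches[i % num_nodes].append(cmd)
    return batches

# Usage sketch (paths are hypothetical):
# with open("acb/commands_A85.lis") as f:
#     commands = [line.strip() for line in f if line.strip()]
# for n, batch in enumerate(split_commands(commands, num_nodes=4)):
#     with open(f"node_{n}.sh", "w") as out:
#         out.write("\n".join(batch) + "\n")
```

Each batch file can then be dispatched to one node, since every line in the command file is an independent fit.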

Temperature Map Creation

After spectral fitting, the last thing to do is create the temperature map. If you did the spectral fitting on a remote machine, you need to download the three .csv files created within the remote acb directory. Next, simply run python acb.py --temperature_map --cluster_config_file 'path/to/cluster_config_file' --resolution 2. Check the clustername/main_output/ directory for the output.

Pressure Map Creation

After generating the temperature map, there is enough data to generate the pseudo-pressure map. Simply run python acb.py --make_pressure_map --cluster_config_file 'path/to/config/file'. Check the clustername/main_output directory for the output.

License

This software is licensed under a 3-clause BSD-style license; see the LICENSE.md file.

Citation

If you make use of this code in its original form or portions of it, please cite:

Alden et al., 2019, Astronomy and Computing, 27 (2019), 147-155 doi: 10.1016/j.ascom.2019.04.001

Also available on the arXiv: arXiv:1903.08215

Contributions

All levels of contributions are welcome. From text documentation to parallel processing algorithms, your input and help is desired. Please see CONTRIBUTING.md for more information.

clusterpyxt's People

Contributors: bcalden, mendozad31

clusterpyxt's Issues

Fully comment the code

At a minimum, every function should have a comment describing its inputs, outputs, and what it does/is intended to do.

Unable to generate Temperature and pressure maps.

Even after executing every step without any error except one (WARNING: failed to import sherpa.astro.xspec; XSPEC models will not be available), my output temperature and pressure maps are blank. Maybe something is wrong with the spectral fitting, because it took less than a minute with only 6478 region files and shows "no fit" for many regions.

CIAO4.14

Dear sir,
Could I use it for CIAO version 4.14 ?

Blank temperature and pressure map

@bcalden
Solved the pychips import error by placing these git files in "/home/zareef/anaconda3/envs/ciao-4.13/lib/python3.8/site-packages" and replacing pychips with Pychips in the cluster.py file.
Solved the sherpa import error by reinstalling the Sherpa package using conda.

After that, I ran these commands for the spectral fitting (178 iterations), temperature map, and pressure map:

python spectral.py --parallel --cluster_config_file /home/zareef/gc/xray/a3444/a3444_pypeline_config.ini --resolution 3

(ciao-4.13) zareef@zareef:~/soft/ClusterPyXT$ python acb.py --temperature_map --resolution 3 --cluster_config_file /home/zareef/gc/xray/a3444/a3444_pypeline_config.ini

(ciao-4.13) zareef@zareef:~/soft/ClusterPyXT$ python acb.py --make_pressure_map --cluster_config_file /home/zareef/gc/xray/a3444/a3444_pypeline_config.ini

But when I opened a3444_temperature_map.fits and a3444_pressure.fits using ds9 both of them are blank.

Errors occur at the beginning of preparing to merge the observations.

After downloading the cluster data from Chandra Data Archive , when it is doing preparing to merge the observations:

Reprocessing Abell 85
Reprocessing Abell 85/4881
Traceback (most recent call last):
  File "clusterpyxt.py", line 117, in <module>
    menu.make_menu()
  File "/home/wanglei/xray/ClusterPyXT-master/menu.py", line 301, in make_menu
    display_menu(main_menu)
  File "/home/wanglei/xray/ClusterPyXT-master/menu.py", line 198, in display_menu
    selected_action['function']()
  File "/home/wanglei/xray/ClusterPyXT-master/menu.py", line 171, in continue_cluster
    ciao.start_from_last(current_cluster)
  File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 1085, in start_from_last
    success = function_steps[pypeline_progress_index](cluster)
  File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 406, in merge_observations
    reprocess_cluster(cluster)
  File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 230, in reprocess_cluster
    copy_event_files(observation.reprocessing_directory, observation.analysis_directory)
  File "/home/wanglei/xray/ClusterPyXT-master/ciao.py", line 278, in copy_event_files
    evt2_filename = evt2_filename[-1]
IndexError: list index out of range

PS: Computer system is Linux Ubuntu 64-bit, ciao-4.11.

Pipeline freezes during Stage 1 download

New user here... getting to grips with ClusterPyXT for data processing. I've set everything up according to the instructions on the main page, and when I set up a cluster for processing, the pipeline freezes during the download.

For context, I'm running CIAO-4.15 via the instructions on https://cxc.cfa.harvard.edu/ciao/download/ with the full CALDB installation on OSX 10.15.7 (Catalina), and CIAO runs successfully (I ran a different pipeline yesterday). After launching the virtualenv and running python3 clusterpyxt.py the GUI boots up correctly, but after entering information for my cluster target and clicking "Run Stage 1" I get the rainbow wheel and the Download message doesn't shift from 0%, even after waiting upwards of half an hour.

The download stage of the other pipeline only took a matter of a few minutes for each dataset, so I don't understand what the problem is. Any help would be appreciated!

Implement unit tests with a known good cluster/obsids

Implement unit tests for each stage of the pipeline with a known good cluster dataset at that stage. This should be implemented so that the tests are done during commits (as well as can be run during local development.)

Error parameter

I'm using ciao-4.14 to do an analysis but when I run my pipeline to do the analysis I get the following return:

gabriel@gabriel-Aspire-A515-51G:~/astrosoft/ClusterPyXT$ python clusterpyxt.py
/home/gabriel/astrosoft/ClusterPyXT/pypeline_config.ini
Cluster data written to /home/gabriel/aglomerados/dados.clusterpyxt/A119/A119_pypeline_config.ini
Making directories
Cluster data written to /home/gabriel/aglomerados/dados.clusterpyxt/A119/A119_pypeline_config.ini
Downloading files for ObsId 4180, total size is 58 Mb.

Type Format Size 0........H.........1 Download Time Average Rate

vvref pdf 25 Mb #################### 8 s 3295.2 kb/s
evt1 fits 21 Mb #################### 9 s 2449.3 kb/s
asol fits 3 Mb #################### 2 s 1393.9 kb/s
evt2 fits 2 Mb #################### 4 s 694.4 kb/s
mtl fits 520 Kb #################### 2 s 254.8 kb/s
cntr_img jpg 517 Kb #################### 2 s 277.8 kb/s
bias fits 428 Kb #################### 2 s 216.3 kb/s
bias fits 426 Kb #################### 2 s 277.8 kb/s
bias fits 426 Kb #################### 2 s 242.0 kb/s
bias fits 426 Kb #################### 2 s 223.8 kb/s
bias fits 425 Kb #################### 2 s 230.8 kb/s
stat fits 372 Kb #################### 2 s 209.5 kb/s
osol fits 356 Kb #################### 2 s 201.1 kb/s
osol fits 355 Kb #################### 2 s 170.9 kb/s
osol fits 355 Kb #################### 2 s 182.6 kb/s
eph1 fits 282 Kb #################### 2 s 178.6 kb/s
eph1 fits 275 Kb #################### 2 s 153.4 kb/s
eph1 fits 259 Kb #################### 2 s 154.1 kb/s
aqual fits 200 Kb #################### 2 s 125.9 kb/s
full_img jpg 77 Kb #################### 1 s 57.5 kb/s
osol fits 64 Kb #################### 1 s 52.4 kb/s
cntr_img fits 56 Kb #################### 1 s 45.9 kb/s
vv pdf 51 Kb #################### 1 s 41.7 kb/s
full_img fits 44 Kb #################### 1 s 41.7 kb/s
bpix fits 21 Kb #################### < 1 s 21.9 kb/s
oif fits 20 Kb #################### < 1 s 21.2 kb/s
readme ascii 10 Kb #################### < 1 s 10.8 kb/s
eph1 fits 7 Kb #################### < 1 s 10.3 kb/s
fov fits 7 Kb #################### < 1 s 11.0 kb/s
flt fits 6 Kb #################### < 1 s 11.2 kb/s
msk fits 5 Kb #################### < 1 s 6.4 kb/s
pbk fits 4 Kb #################### < 1 s 6.5 kb/s

  Total download size for ObsId 4180 = 58 Mb
  Total download time for ObsId 4180 = 1 m 2 s

Reprocessing A119.

Running chandra_repro
version: 15 March 2022

Processing input directory '/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180'

No boresight correction update to asol file is needed.
Resetting afterglow status bits in evt1.fits file...

Running acis_build_badpix and acis_find_afterglow to create a new bad pixel file...

Running acis_process_events to reprocess the evt1.fits file...
Filtering the evt1.fits file by grade and status and time...
Applying the good time intervals from the flt1.fits file...
The new evt2.fits file is: /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro/acisf04180_repro_evt2.fits

Updating the event file header with chandra_repro HISTORY record
Creating FOV file...

Cleaning up intermediate files

Any issues pertaining to data quality for this observation will be listed in the Comments section of the Validation and Verification report located in:
/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro/axaff04180N004_VV001_vv2.pdf

The data have been reprocessed.
Start your analysis with the new products in
/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro
Running ccd_sort on A119.
Working on A119/4180
evt1 : ['/home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/secondary/acisf04180_000N005_evt1.fits']
evt2 : /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/repro/acisf04180_repro_evt2.fits
detname : ACIS-01236
A119/4180: Making level 2 event file for ACIS Chip id: 0
A119/4180: Making level 2 event file for ACIS Chip id: 1
A119/4180: Making level 2 event file for ACIS Chip id: 2
A119/4180: Making level 2 event file for ACIS Chip id: 3
A119/4180: Making level 2 event file for ACIS Chip id: 6
Running ciao_back on A119.
Finding background for /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/analysis/acis_ccd1.fits
Found background at /home/gabriel/miniconda3/envs/ciao-4.14/CALDB/data/chandra/acis/bkgrnd/acis1iD2000-12-01bkgrnd_ctiN0005.fits
Running dmkeypar /home/gabriel/aglomerados/dados.clusterpyxt/A119/4180/analysis/acis_ccd1.fits "GAINFILE" echo=True
pset: cannot convert parameter value : rval
/tmp/tmpiutbyeed.dmkeypar.par: cannot convert parameter value : rval
Error getting parameter file in CIAO. Please close ClusterPyXT and re-try the stage. If the problem persists, please file a bug report on https://github.com/bcalden/ClusterPyXT with the following error message:
Traceback (most recent call last):
File "/home/gabriel/astrosoft/ClusterPyXT/clusterpyxt.py", line 523, in run_stage_1
ciao.run_stage_1(self._cluster_obj)
File "/home/gabriel/astrosoft/ClusterPyXT/ciao.py", line 1290, in run_stage_1
merge_observations(cluster)
File "/home/gabriel/astrosoft/ClusterPyXT/ciao.py", line 435, in merge_observations
ciao_back(cluster)
File "/home/gabriel/astrosoft/ClusterPyXT/ciao.py", line 138, in ciao_back
acis_gain = rt.dmkeypar(infile=acis_file,
File "/home/gabriel/miniconda3/envs/ciao-4.14/lib/python3.9/site-packages/ciao_contrib/runtool.py", line 1836, in __call__
stackfiles = self._update_parfile(parfile)
File "/home/gabriel/miniconda3/envs/ciao-4.14/lib/python3.9/site-packages/ciao_contrib/runtool.py", line 1396, in _update_parfile
self._update_parfile_verify(parfile, stackfiles)
File "/home/gabriel/miniconda3/envs/ciao-4.14/lib/python3.9/site-packages/ciao_contrib/runtool.py", line 1326, in _update_parfile_verify
oval = _to_python(ptype, pio.pget(fp, oname))
ValueError: pget() Parameter not found
Aborted (core dumped)

units of the pixels in product maps

Hi @bcalden. What are the units of the pixels in the product maps (surface_brightness_nosrc_cropped.fits, _temperature_map.fits, _density.fits, _pressure.fits, broad_thresh.expmap, broad_flux.img, merged_evt.fits, etc.)? Would it be possible to add the units of the pixels to the headers? Maybe this question is obvious to X-ray experts, but I can't find the answers for some of the maps.

notes on updating to CIAO 4.16

We have just released CIAO 4.16 which comes with native macOS ARM support (which makes your mac laptops go brrrrr).

I note that for CIAO 4.15 we changed how the conda installation works (making use of conda-forge for various technical/legal issues) so the installation steps will likely need updating.

I would hope that the Sherpa code doesn't need much updating but if you do use group_counts (or any of the other group_xxx calls) then the behavior may change (it depends if you group then filter or filter then group, as the latter is where the behavior changes). There may be other changes, including from CIAO 4.15 (the notice/ignore/group/set_analysis/... calls now report the selected filter range for one).

error in stage 2

Dear Sir,

I tried to run the code. I have created the source and exclude regions. I proceeded through GUI mode as well as command-line mode (python clusterpyxt.py --continue).
When I clicked on Run Stage 2, I encountered the following error. I am unable to resolve it.
It would be very useful if you could provide an example.

Traceback (most recent call last):
File "clusterpyxt.py", line 533, in run_stage_2
ciao.run_stage_2_parallel(self._cluster_obj, get_arguments())
File "/home/sk/ClusterPyXT-master/ciao.py", line 1357, in run_stage_2_parallel
remove_sources_in_parallel(cluster,args)
File "/home/sk/ClusterPyXT-master/ciao.py", line 671, in remove_sources_in_parallel
with mp.Pool(args.num_cpus) as pool:
File "/home/sk/CIAO/ciao-4.14/ots/lib/python3.8/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/home/sk/CIAO/ciao-4.14/ots/lib/python3.8/multiprocessing/pool.py", line 205, in __init__
raise ValueError("Number of processes must be at least 1")
ValueError: Number of processes must be at least 1
Aborted (core dumped)

Regions with zero area in the sources.reg file cause a crash

If the sources.reg file contains regions of zero area, a warning is thrown and passed as part of a filename string (near line 478 in ciao.py). This string is then used in the dmextract command at line 478, which crashes the application since it contains warning messages not expected by dmextract.
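A possible workaround until this is fixed is to pre-filter the region file. The sketch below is a hypothetical helper, not part of ClusterPyXT, and only handles simple circle entries; it drops zero-radius circle regions before they reach dmextract:

```python
import re

def drop_zero_area_circles(region_lines):
    """Return region-file lines with zero-radius circle regions removed.
    Hypothetical pre-filter; only matches simple circle(x,y,r) entries."""
    kept = []
    for line in region_lines:
        m = re.match(r'\s*circle\([^,]+,[^,]+,([\d.]+)"?\)', line)
        if m and float(m.group(1)) == 0.0:
            continue  # zero-area region: skip it
        kept.append(line)
    return kept
```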

Error starting pipeline

I did all the installations as indicated on the site, using ciao-4.14, but when I enter the directory and run python clusterpyxt.py I get the following message:

gabriel@gabriel-Aspire-A515-51G:~/astrosoft/ClusterPyXT$ python3 clusterpyxt.py
/home/gabriel/astrosoft/ClusterPyXT/pypeline_config.ini
qt.qpa.plugin: Could not load the Qt platform plugin "offscreen" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

Aborted (core dumped)

AttributeError while running the spectral fitting portion of the pipeline in serial/parallel

AttributeError is showing while doing spectral fitting using the following command (replaced the 'path/to/cluster_config_file' with my cluster config file path):
Note: Running in a 2 core 4 thread processor computer.

python spectral.py --parallel --num_cpus 4 --cluster_config_file 'path/to/cluster_config_file' --resolution 2
or
python spectral.py --cluster_config_file 'path/to/cluster_config_file' --resolution 2

Error:

Traceback (most recent call last):
  File "/home/beerus/anaconda3/envs/ciao-4.12/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/beerus/anaconda3/envs/ciao-4.12/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/beerus/ClusterPyXT/cluster.py", line 1750, in fit_region_number
    sherpa.set_analysis('energy')
AttributeError: 'NoneType' object has no attribute 'set_analysis'
4690:	Loading data pulse invariant files (PI files)
4690:	Loading background PI files

ModuleNotFoundError: No module named 'pychips' while running cluster.py

Successfully installed pychips and tested it by importing it in python.

But "ModuleNotFoundError: No module named 'pychips'" is showing while trying to run cluster.py inside ciao-4.13 environment from ClusterPyXT (deleted the try except block from cluster.py)

command: python3 cluster.py

Error:

Traceback (most recent call last):
  File "cluster.py", line 5, in <module>
    import config
  File "/home/beerus/ClusterPyXT/config.py", line 2, in <module>
    import cluster
  File "/home/beerus/ClusterPyXT/cluster.py", line 12, in <module>
    import pychips
ModuleNotFoundError: No module named 'pychips'

Update the graphical user interface

Allow for the editing of cluster config files from within the GUI. Further, allow for progress check on the clusters with the option of repeating stages from w/in the GUI.

norm map & free paremater

Hi.

  1. I see that the best-fit parameters are saved in acb/*spectral_fits.csv files. Is there any existing way to export the norm obtained from the spectral fitting as a fits image? Probably like the way the temperature map is generated.

  2. In the code, the abundance and other parameters are kept fixed; only temperature and normalization are free. Is there any plan to allow spectral fitting with the abundance as a free parameter as well?

Thanks.

No module named 'ciao_contrib'

Hi Brian, I'm a new user of ClusterPyXT, and when I run clusterpyxt.py Linux shows me the following message:

File "/home/alisson/ClusterPyXT-master/acb.py", line 9, in <module>
    import ciao_contrib.runtool as rt
ModuleNotFoundError: No module named 'ciao_contrib'

Apparently I don't have the ciao-contrib package installed. However, I am unable to install it with

pip install ciao-contrib

or

sudo python ciao-contrib install

Could you help me with this issue?

I'm using ciao-4.16 and when I run ciaover -v the output is:

The current environment is configured for:
  CIAO        : CIAO 4.16.0 Tuesday, December 05, 2023
  Contrib     : Package release 0  Tuesday, November 28, 2023
  bindir      : /home/alisson/miniconda3/envs/ciao-4.16/bin
  Python path : /home/alisson/miniconda3/envs/ciao-4.16/bin
  CALDB       : 4.11.0
System information:
Linux alisson-550XBE-350XBE 6.5.0-14-generic #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 18:15:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

I hope this information helps

Best,
Alisson

Error when creating temperature map. Variable referenced before assignment.

Last Step Completed: 5
Creating temperature map.
Traceback (most recent call last):
  File "acb.py", line 1132, in <module>
    make_temperature_map(clstr, args.resolution)
  File "acb.py", line 963, in make_temperature_map
    _update_completed_things(i, len(regions), "regions")
UnboundLocalError: local variable 'i' referenced before assignment

Problem with --continue flag

Program crashes with --continue argument.

Traceback (most recent call last):
  File "clusterpyxt.py", line 116, in <module>
    args = process_commandline_arguments(cluster_obj)
  File "clusterpyxt.py", line 45, in process_commandline_arguments
    ciao.start_from_last(cluster_obj)
  File "/home/jzuhone/Source/ClusterPyXT/ciao.py", line 1055, in start_from_last
    success = function_steps[pypeline_progress_index](cluster)
TypeError: 'NoneType' object is not callable

Program pauses during deflare command

After printing "Cleaning the lightcurve for {OBSID}" the program pauses until the user presses enter. This is undesired and may be related to either the CIAO command punlearn, or the deflare command. (Lines 484-492 in ciao.py)

Short term fix may be to just add "press enter to continue", but this should only be considered short term.

There should be no pause.

Refactor pipeline to better reflect the pipeline overview

The codebase needs refactoring to better follow the pipeline overview graphic. The code still has many vestiges of the bash-script program flow. Break it down by the stages in the graphic. While a further breakdown may be necessary, at a minimum the code naming should more closely mirror the stages.

Error: pget() Parameter not found

I am using ClusterPyXT (https://github.com/bcalden/ClusterPyXT/tree/dev-CIAO-4.12) to produce spectral maps for Chandra data on a Linux Ubuntu 18.04.5 machine.
To do so I installed CIAO (version 4.12) using anaconda3 and Python 3.8.3.
The system downloads the observation data and then presents the following message:


The data have been reprocessed.
Start your analysis with the new products in
/home/user/Doutorado/A2199/10748/repro
Running ccd_sort on A2199.
Working on A2199/10748
evt1 : ['/home/user/Doutorado/A2199/10748/secondary/acisf10748_001N002_evt1.fits']
evt2 : /home/user/Doutorado/A2199/10748/repro/acisf10748_repro_evt2.fits
detname : ACIS-01236
A2199/10748: Making level 2 event file for ACIS Chip id: 0
A2199/10748: Making level 2 event file for ACIS Chip id: 1
A2199/10748: Making level 2 event file for ACIS Chip id: 2
A2199/10748: Making level 2 event file for ACIS Chip id: 3
A2199/10748: Making level 2 event file for ACIS Chip id: 6
Running ciao_back on A2199.
Finding background for /home/user/Doutorado/A2199/10748/analysis/acis_ccd2.fits
Found background at /home/user/anaconda3/envs/ciao-4.12/CALDB/data/chandra/acis/bkgrnd/acis2iD2009-09-21bkgrnd_ctiN0003.fits
pset: cannot convert parameter value : rval
/tmp/tmpamechstz.dmkeypar.par: cannot convert parameter value : rval
Error getting parameter file in CIAO. Please close ClusterPyXT and re-try the stage. If the problem persists, please file a bug report on https://github.com/bcalden/ClusterPyXT with the following error message:
Traceback (most recent call last):
File "clusterpyxt.py", line 496, in run_stage_1
ciao.run_stage_1(self._cluster_obj)
File "/home/user/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 1313, in run_stage_1
merge_observations(cluster)
File "/home/user/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 485, in merge_observations
ciao_back(cluster)
File "/home/user/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 188, in ciao_back
echo=True)
File "/home/user/anaconda3/envs/ciao-4.12/lib/python3.5/site-packages/ciao_contrib/runtool.py", line 1810, in __call__
stackfiles = self._update_parfile(parfile)
File "/home/user/anaconda3/envs/ciao-4.12/lib/python3.5/site-packages/ciao_contrib/runtool.py", line 1365, in _update_parfile
self._update_parfile_verify(parfile, stackfiles)
File "/home/user/anaconda3/envs/ciao-4.12/lib/python3.5/site-packages/ciao_contrib/runtool.py", line 1294, in _update_parfile_verify
oval = _to_python(ptype, pio.pget(fp, oname))
ValueError: pget() Parameter not found
Abort (core dumped)


Do you have any idea why this error?

Parallel fitting appears to have a memory leak

When the fitting procedure is run in parallel mode it appears to have a memory leak. Within about an hour or two the user will have to restart the procedure. The pipeline allows for this without having to refit already completed regions but it should not be an issue in the first place.
