Starlink Software Collection
This directory contains the following directories:

   libraries      Starlink Fortran/C libraries
   applications   Starlink Fortran/C applications
   etc            Starlink initialisation scripts
   buildsupport   Starlink support applications required to build
                  configure-based Starlink applications
   thirdparty     Third party applications and libraries required to
                  build Starlink classic applications
   pubs           Starlink publications
   docs           General documentation not associated with a
                  particular application or library

Building the Starlink source set from scratch
---------------------------------------------

To build the COMPLETE set of Starlink classic applications the
following steps are required.  If you wish to do more elaborate
things, then you should refer to Starlink document SSN/78, which also
contains some FAQs.  Details of this document are at the end of this
README.

This procedure will not build the Starlink _Java_ applications.  They
are built separately, using the procedure described in the Java README
(currently these are available from the git repository at
https://github.com/Starlink/starjava.git).  You do not need to build
any of the Starlink classic applications to build the Java ones,
unless you need to rebuild the native parts of JNIAST, JNIHDS or
SPLAT.  To build the documents for the Starjava applications FROG and
SPLAT you will need the Starlink application star2html and the latex
classes provided in latexsupport.

Preparation
-----------

- Ensure the required software development tools and headers are
  installed on your system (see the "Prerequisites" section below).

- Specify where you want the installed files to end up (_PREFIX) and
  where you want the build to find any previously installed Starlink
  tree (_STARLINK).  If there is no previously existing tree, then set
  the two variables to the same value.  Both variables default to
  /star.  You must have write access to the directory you name here.
  Here and below we use /local-star as an example; in the examples
  below substitute your own choice for this directory.

      % setenv STARCONF_DEFAULT_STARLINK /local-star    # csh
      % setenv STARCONF_DEFAULT_PREFIX   /local-star
  or
      % export STARCONF_DEFAULT_STARLINK=/local-star    # sh
      % export STARCONF_DEFAULT_PREFIX=/local-star

- Delete any previous Starlink environment variables.

      % unsetenv INSTALL    # csh
      % unsetenv STARLINK
  or
      % unset INSTALL       # sh
      % unset STARLINK

  Also make sure that an old-style Starlink system is not in your PATH
  (since tools like messgen will get very confused).  You may also run
  into difficulties if you have previously sourced the
  ${STARLINK}/etc/login and ${STARLINK}/etc/cshrc scripts that existed
  from a previous classic mk-build Starlink install, as even with
  $STARLINK removed from your PATH some applications may still get
  confused at build time.  If the login script in question is a `new'
  one, from a previous autoconf install, you will probably be fine
  (although you may need to remove installed manifest files).

- Review any other environment variables that might affect the build.
  For example, the variables F77, FC and CC will force the build
  system to use the specified compilers.  In particular, the default
  IRAF installation sets the F77 variable to be a script which works
  happily in most cases, but which will _not_ work with libtool, and
  will cause the build to fail with opaque error messages about being
  `unable to infer tagged configuration'.  Unless you do wish to force
  certain compilers to be used, it is probably better to unset these
  variables before building the software.  See './configure --help'
  for a list of `some influential environment variables'.

  This is a good time to settle which compilers you are going to use,
  especially if you're not working on a GNU/Linux platform.  See the
  platform-specific instructions later in this file.
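For sh-like shells, the preparation steps above can be collected into a single snippet. This is only a sketch: /local-star is an example path, and the list of compiler variables to clear is illustrative, not exhaustive.

```shell
# Example preparation for sh-like shells; /local-star is a placeholder
# for your own installation directory.
STARCONF_DEFAULT_STARLINK=/local-star
STARCONF_DEFAULT_PREFIX=/local-star
export STARCONF_DEFAULT_STARLINK STARCONF_DEFAULT_PREFIX

# Clear variables left over from an old-style Starlink installation.
unset INSTALL STARLINK

# Let configure choose the compilers unless you really mean to force
# particular ones (see the IRAF/F77 caveat above).
unset F77 FC CC
```

Run these in the shell from which you will later run ./bootstrap, so the settings are inherited by the whole build.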
  At the time of writing (February 2009) it is known that the gfortran
  compiler will work, but only from GCC 4.3 onwards.  If an older
  gfortran is picked up by default on your system you need to
  re-define FC and F77 to select another compiler.  For other versions
  of GCC 4 you will need to use g95 (thanks to Andy Vaught for his
  help with this), which is available from the g95 web site:
  www.g95.org.  For GCC 3 based systems g77 should be used.

- Ensure that /tmp has sufficient free space and is writable.

- An example "sourceme" file for csh/tcsh is included in the
  repository.  You can customize it for your system and then use it to
  set up your environment as described above.

The build sequence
------------------

To build the complete source distribution, you go through the steps:

    % ./bootstrap
    % make configure-deps
    % ./configure -C
    % make world
    % cd thirdparty/perlsys/perlmods
    % setenv STARLINK_DIR $STARCONF_DEFAULT_PREFIX  # For csh-like shells.
    % ./build-modules.sh

Each of these steps is described in detail below.

- First bootstrap the system.  This downloads third-party software,
  builds and installs the buildsupport system (the tools in the
  buildsupport directory, plus the third-party applications autoconf,
  automake and libtool), and then goes on to generate the configure
  scripts which will be used in the next step.  The autoconf, automake
  and libtool applications have been patched specifically for building
  this tree, so any versions shipped with the OS, or installed by you,
  will _not_ work.  These applications will be installed into
  $STARCONF_DEFAULT_PREFIX/buildsupport, so you should add
  $STARCONF_DEFAULT_PREFIX/buildsupport/bin to the front of your PATH
  at this point, BEFORE you run the ./bootstrap script.  In order for
  the `make' below to work, you also need to add the default bin
  directory to your PATH, so you should do that now.
      # csh-like
      % setenv PATH $STARCONF_DEFAULT_PREFIX/bin:$STARCONF_DEFAULT_PREFIX/buildsupport/bin:$PATH
      % ./bootstrap
  or
      # sh-like
      % PATH=$STARCONF_DEFAULT_PREFIX/bin:$STARCONF_DEFAULT_PREFIX/buildsupport/bin:$PATH
      % ./bootstrap

  This step takes a long while.

  Note that, although an important part of the ./bootstrap script is
  running autoreconf in the top-level directory, you should not run
  autoreconf yourself in this directory.

- Build the configure dependencies.  There are a _few_ components
  which have to be built and installed before the main tree is
  configured.  You will have been advised of this at the end of the
  bootstrap process above, but in case you missed that, the command is

      % make configure-deps

- Configure and build everything.  Now configure and build the
  system.  The dependencies within the Makefile ensure that everything
  is built in the correct order.

      % ./configure -C    # -C means caching results
      % make world

  If you need to give ./configure some help to find include files and
  libraries (for example, you are on a Mac and have installed Fink in
  /sw or OpenDarwin in /opt/local, or are on a Sun and have extra
  software in /opt), then do so by setting (for example)
  CFLAGS=-I/opt/local/include and LDFLAGS=-L/opt/local/lib either in
  the environment or, better, as arguments to the ./configure script.
  Don't do this unless you discover you have to.

  Each of these steps can also take some non-trivial time.

- Finally, install the Perl modules.  The Perl modules can be
  installed using the build-modules.sh script in the perlmods
  directory.  This script requires that the environment variable
  STARLINK_DIR be set to the Starlink installation into which you
  would like to install the Perl modules, i.e.
  $STARCONF_DEFAULT_PREFIX.

      % cd thirdparty/perlsys/perlmods
      % ./build-modules.sh

  This will install (or update if needed) the dependencies from CPAN
  and then install all the modules present in the perlmods directory
  itself.
Additional hints
----------------

- Disabling shared libraries

  If you wish to disable the building of shared libraries, use the
  --disable-shared configure option when you give the ./configure
  command above.

      % ./configure -C --disable-shared

- Disabling the documentation

  By default, the system will be configured so that all documentation
  is built at the same time as the software.  For this to work, you
  must have LaTeX installed on your machine.  Building the
  documentation is rather slow, and you can speed up the build
  somewhat by omitting it.  You do that as follows:

      % ./configure -C --without-stardocs

- Disabling the build of VTK and associated software

  By default VTK is built as a requirement of GAIA 3D, and mesa is
  built if OpenGL libraries are not present.  These components may be
  omitted using the following option:

      % ./configure -C --without-vtk

- Building a single library or application

  If you wish to build only as far as a given component, then specify
  it by giving the name of the associated `manifest' file.

      % make /local-star/manifests/ast

  This will build, and install under STARCONF_DEFAULT_PREFIX, this
  component and _everything it depends on to be built_.

Using the Newly-built System
----------------------------

To activate the Starlink system which you have just built, set the
STARLINK_DIR environment variable to the install location (as chosen
with STARCONF_DEFAULT_PREFIX) and source the cshrc and login files
(for csh/tcsh) or profile file (for bash).

    % setenv STARLINK_DIR /local-star
    % source ${STARLINK_DIR}/etc/cshrc
    % source ${STARLINK_DIR}/etc/login

Developing individual components
--------------------------------

Note that the sequence ./bootstrap; make configure-deps;
./configure -C; make world is indeed a `make world' command -- it
builds everything in the repository that has been brought into the
configure-based system, and will fail if some components have not
been checked out.
If you wish to build or develop a specific component, the
instructions are slightly different.

- Specify the _PREFIX and _STARLINK variables as before, though this
  time it might be appropriate to give them different values, if you
  want to build against one installation tree (_STARLINK), but
  install into another (_PREFIX).  As above, unset the INSTALL and
  STARLINK variables, and make sure there is no old-style Starlink
  system in your PATH.

- If you have already built the buildsupport tools (autoconf,
  automake, libtool and starconf), then make sure these are in your
  PATH.  If they are not built, or if you are not sure they are
  up-to-date, you can build just these by going to the top level of
  your checkout and giving the command

      % ./bootstrap --buildsupport

- Now you can go into a specific directory and build the library or
  application as normal (a bootstrap is required in the directory if
  you are building from a git checkout):

      % ./bootstrap
      % ./configure
      % make
      % make install

- After updating a component from the repository, it is possible that
  some generated files will be out of date (if configure.ac or
  Makefile.am had been updated).  Any required updating is generally
  handled automatically by makefile rules, but if you wish to
  guarantee that everything is up to date, then you can give the
  command `autoreconf'.  This does no harm in any case, since it does
  nothing if it is not required.  As noted above, the exception is
  that you should not run autoreconf in the top-level directory.

Updating the source set
-----------------------

The `make world' command will not _re_build the tree after you do an
update from the source repository, possibly unexpectedly.  For a
detailed explanation of why this is so, see the `General FAQs' in
SSN/78, described below.

You should also run the ./update-modules script after performing an
update of the source set from the main repository.  This will make
sure that any third-party code is also updated.
Platform specific build notes
-----------------------------

- GNU/Linux

  Currently (February 2009) the source set is known to build on many
  different flavours of GNU/Linux and is actively developed using
  these.  The only expected issue is that of Fortran compiler support
  (C, C++ and Fortran compilers are required to build the complete
  collection).  After the release of GCC 4 the Fortran compiler "g77"
  was replaced with the completely new "gfortran" (which now
  implements Fortran 90, 95 and some 2003 features), and gfortran was
  not compatible with Starlink Fortran until the release of version
  4.3.  Happily this turned out not to be a major problem (thanks to
  Andy Vaught), as the other free Fortran compiler, "g95", is
  compatible.  So to build the collection with a GCC 4+ compiler
  you'll need either a copy of "g95", which is available from
  "www.g95.org", or a sufficiently recent version of gfortran (this
  can be checked by running gfortran -v).  To make sure you use the
  "g95" or "gfortran" compilers as required it is best to define the
  environment variables "FC" and "F77":

      % setenv FC g95          # csh
      % setenv F77 g95
  or
      % export FC=g95          # sh
      % export F77=g95
  or
      % setenv FC gfortran     # csh
      % setenv F77 gfortran
  or
      % export FC=gfortran     # sh
      % export F77=gfortran

  before running "configure".

- Ubuntu (>= 11.04 "Natty Narwhal") / Debian

  As of version 11.04 of Ubuntu the default handling of shared
  library linking has changed.  In the past indirect linking was
  enabled, such that a program could link library "A", which in turn
  uses library "B", and without explicitly linking the latter,
  routines from "B" would be available (a situation that occurs in
  Starlink).  Now that this capability has been disabled, you will
  likely see strange linker errors if you follow verbatim the build
  instructions as described earlier in this document.
  To remedy this situation you need to re-activate indirect linking
  with LDFLAGS when configuring the build:

      % ./configure -C LDFLAGS=-Wl,--no-as-needed

  Ubuntu explain the issue here:
  https://wiki.ubuntu.com/NattyNarwhal/ToolchainTransition

  According to the following web page, this problem will also occur
  with Debian (as of the Wheezy release), although it has not been
  tested: http://wiki.debian.org/ToolChain/DSOLinking

  The bootstrap in thirdparty/kitware/vtk may fail during the
  bootstrap of cmake if Qt 5 devel packages are installed on the
  local system.  The answer seems to be to remove such packages
  before building Starlink, reinstalling them again afterwards if
  necessary.

- macOS

  The build situation under macOS follows much like that of
  GNU/Linux, but you'll need to install your own Fortran compiler.
  In 10.3 the "g77" compiler from "Fink" or "MacPorts" has been used
  successfully; in 10.4 you'll need a copy of "g95".  For 10.5 or
  later you can use g95 or gfortran.  The latter is available from
  hpc.sourceforge.net as well as MacPorts.  Note you'll also need to
  install X11 (see http://xquartz.macosforge.org), and a functioning
  TeX and Ghostscript if you want to build the documents.  The build
  on macOS is not relocatable.

- Solaris

  The collection is known to build under Solaris 8 (sparc) and 10
  (intel), using the SUN compilers (Workshop 6 and Studio 11
  respectively).  To make sure the correct compilers are picked up,
  you should define:

      % setenv FC f77      # csh
      % setenv F77 f77
      % setenv CC cc
      % setenv CXX CC
  or
      % export FC=f77      # sh
      % export F77=f77
      % export CC=cc
      % export CXX=CC

Further information
-------------------

You should consult the project web pages at
<http://www.starlink.ac.uk> and <http://starlink.eao.hawaii.edu>, and
consider subscribing to the Starlink development and user mailing
lists: see <http://www.jiscmail.ac.uk/archives/starlink.html> and
<http://www.jiscmail.ac.uk/archives/stardev.html>.
Starlink document SSN/78 gives much more detailed information on the
process of building Starlink classic applications, though it is
primarily concerned with documenting the build system itself and
describing how to add new components to the build-system repository.
The source for this document is in the repository at docs/ssn/078,
and a built version should be available on the Starlink web pages.  A
version is also currently (May 2006) available at
<http://www.astro.gla.ac.uk/users/norman/star/ssn78/>.  This document
contains some FAQs: a few of these will likely be of interest to
those building the system from a checkout, though most address quite
specific details of how to configure software to work within the
Starlink source tree.

git repository
--------------

In February 2009 the Starlink source code was moved to a git
repository on github.  This is described by a wiki at:
http://starlink.eao.hawaii.edu/

The build procedures described above are still correct, but much of
the associated documentation (SSN/78 etc.) is rapidly becoming
out-of-date.  Consult the wiki for download instructions and how to
access things like specific releases.

Prerequisites
-------------

This section lists some of the software development tools and headers
that are required to build Starlink.  The list is not exhaustive, but
is just a suggestion of some packages to check that may not be
provided by default by your operating system.

To build the documentation on any system will require a reasonably
complete, up-to-date TeX install that includes the TeX4HT system --
for example, a recent TeX Live installation which includes a variety
of standard packages.
Ubuntu:

  git
  build-essential (includes libc6-dev libc-dev gcc g++ make dpkg-dev)
  gfortran (or g95)
  libxext-dev libxau-dev libx11-dev libxt-dev xutils-dev
  libncurses-dev flex bison byacc
  latex texlive-latex-base texlive-latex-extra texlive-science
  latex-color pgf tex4ht texlive-fonts-extra cm-super
  texinfo texi2html
  libglu1-mesa-dev freeglut3-dev mesa-common-dev
  zlib1g-dev curl libssl-dev libexpat1-dev libjpeg-turbo8-dev

Fedora:

  Mainly as for Ubuntu, but note that package names usually change
  from "libxyz-dev" to "libXyz-devel".

  libXt-devel ncurses-devel makedepend

  For the new documentation build, it may be necessary to install the
  following (at least on Fedora 20 and 21):

  texlive-latex texlive-tex4ht.noarch texlive-siunitx.noarch
  texlive-titlesec.noarch texlive-abstract.noarch
  texlive-multirow.noarch texlive-mdframed.noarch
  texlive-titling.noarch texlive-tocloft.noarch
  texlive-eqparbox.noarch

Fedora 31:

  flex byacc bison libXt-devel libXau-devel libXext-devel imake
  libpng-devel openssl-devel expat-devel texi2html texinfo
  texlive-scheme-full (a subset may be sufficient)

  To build the perl-JCMT-Tau package in perlmods, the following
  modules may need to be added to the text file
  thirdparty/perlsys/perlmods/cpan_deps:

  XML::Parser
  SOAP::Lite

Debian 10:

  This list (kindly provided by Paul Kerry) may not be complete, and
  some packages may be superfluous.
  bison byacc ed flex g++ gcc git gfortran libc6-dev
  libcurl4-openssl-dev libexpat1-dev libxext-dev libffi-dev
  libfreetype6-dev libgfortran5 libice-dev libjpeg62-turbo-dev
  libncurses-dev libpng-dev libsm-dev libssl-dev libx11-dev
  libxau-dev libxcb1-dev libxdmcp-dev libxt-dev make texinfo
  texlive-full uuid-dev xutils-dev zlib1g-dev

Arch Linux:

  base-devel ed gcc-fortran ghostscript git libx11 libxt netpbm tcsh
  texlive-most

CentOS 7:

  libXext-devel libXau-devel libX11-devel libXt-devel libxml2-devel
  ncurses-devel texlive-multirow

Rocky Linux 8:

  Group: "Development Tools"
  gfortran libX11-devel libXext-devel libXt-devel libxml2-devel
  ncurses-devel texinfo libglvnd-opengl mesa-libGLw freeglut-devel
  openssl-devel expat-devel libjpeg-turbo-devel

macOS:

  gfortran
  Xcode
  Xcode command line tools
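Before starting a long build it can save time to confirm that the basic tools from the lists above are actually on your PATH. A minimal sketch follows; the helper name check_tools and the particular tool list are examples only, not part of the Starlink tree.

```shell
# check_tools: print the names of any commands from the argument list
# that are not found on PATH, or "ok" if everything is present.
# (Illustrative helper, not part of the Starlink source set.)
check_tools() {
    missing=""
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
    else
        echo "ok"
    fi
}

# Typical pre-flight check before a build:
check_tools gcc g++ gfortran make flex bison
```

Extend the argument list to cover whichever packages from the lists above matter for your platform.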
/stardev in Hilo is at 3204cbb and produces responsivity files with no FPLANE coordinate frame. Unfortunately this breaks the pipeline (which mosaics them in FPLANE coordinates). Here are the command line arguments used by the pipeline (with suitable dummy names):
% calcflat in=^inlist method=polynomial order=1 resist=^$STARLINK_DIR/share/smurf/resist.cfg \
respmask=true snrmin=3 resp=resp1 out=flat1
The output responsivity images have only 4 coordinate frames (grid, pixel, axis and fraction), where they should also have BOLO and FPLANE.
The bug was introduced after daf2bb2, as /stardev at the JCMT works fine.
Peter (or other GAIA maintenance enthusiasts???),
there is a dynamically generated FITS cube I'm trying to load into GAIA from SAMP, at the following URL:
http://dc.zah.uni-heidelberg.de/califa/q2/dl/dlget?ID=ivo%3A%2F%2Forg.gavo.dc%2F~%3Fcalifa%2Fdatadr2%2FIC1528.V1200.rscube.fits&DEC=-7.097008036747518%20-7.089047402267772&RA=1.2644871315905835%201.2799343705069166&&&&&&&
If I send a message using jsamp (e.g. starjava/bin/jsamp or java -jar starjava/lib/jsamp/jsamp.jar) like this:
jsamp messagesender \
-mtype image.load.fits \
-targetname 'gaia' \
-param url 'http://dc.zah.uni-heidelberg.de/califa/q2/dl/dlget?ID=ivo%3A%2F%2Forg.gavo.dc%2F~%3Fcalifa%2Fdatadr2%2FIC1528.V1200.rscube.fits&DEC=-7.097008036747518%20-7.089047402267772&RA=1.2644871315905835%201.2799343705069166&&&&&&&'
I get a samp.ok response from gaia (indicating that the load was successful), and gaia then starts an asynchronous load, popping up a window with an indeterminate progress bar that says:
Downloading: [big ugly url]
waiting for result from dc.zah.uni-heidelberg.de...
and then after a few minutes a different popup informs me:
Error: cannot map zero length file: /tmp/cat33146.fits
If I download the cube to a local file and then use the same jsamp invocation with a file://... URL, it works fine.
I suspect the issue is to do with the horrible characters in the URL (something similar but different happens in ds9, which I've also reported).
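To see whether those characters really are the problem, it helps to look at what the encoded ID parameter actually decodes to. A rough percent-decoder in portable shell/awk (urldecode is a hypothetical helper; it handles %XX escapes only, with no '+'-to-space conversion and no validation):

```shell
# Percent-decode a URL-encoded string (minimal sketch).
urldecode() {
    printf '%s\n' "$1" | awk '
        function hexval(h) { return index("0123456789ABCDEF", toupper(h)) - 1 }
        {
            out = ""
            n = length($0)
            for (i = 1; i <= n; i++) {
                c = substr($0, i, 1)
                if (c == "%" && i + 2 <= n) {
                    # Combine the two hex digits into one character code.
                    code = 16 * hexval(substr($0, i + 1, 1)) + hexval(substr($0, i + 2, 1))
                    out = out sprintf("%c", code)
                    i += 2
                } else {
                    out = out c
                }
            }
            print out
        }'
}

# The ID parameter from the URL above:
urldecode 'ivo%3A%2F%2Forg.gavo.dc%2F~%3Fcalifa%2Fdatadr2%2FIC1528.V1200.rscube.fits'
```

This makes it easy to compare what the service was asked for against what the loader actually fetched.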
Ideally the samp response should get transmitted after successful or unsuccessful completion of the load, but that may be a bit more effort to fix.
Note, this is a refiling of an issue that I initially filed in the wrong place, starjava (Starlink/starjava#4).
For the two tiles at the north and south corners of the facet at RA=12h, which is split between the lower-left and upper-right corners of the FITS HPX projection plane (in the nested scheme with Nside=64 these are tiles 24576 and 28671), the region output by TILEINFO, when given to TILELIST, matches many tiles. This is shown in the picture below by the red speckly area (it's not solid only because, for plotting speed, I plotted just intermittent tiles there). Tile 24576 is marked by the large green dot.
To reproduce the problem:
$ tileinfo instrument="SCUBA-2(850)" itile=24576 region=region.ast
$ tilelist instrument="SCUBA-2(850)" region=region.ast
From the picture above it looks like the extra tiles are matching because the uncertainty is just over 2 facets wide. Inserting these lines into smf_jsatiles_region before the KeyMap loop makes the problem go away:
/* Workaround: give the Region a zero-sized uncertainty, overriding the
   default (which appears to be just over two facets wide). */
double circle_centre[] = {0, 0};
double circle_point[] = {0};   /* form 1: point[0] is the circle radius */
astSetUnc(region, astCircle(fs, 1, circle_centre, circle_point, NULL, ""));
But that probably isn't the proper solution.
ORAC-DR writes SCUBA-2 JSA tiles with history information:
prov-problem-in $ hislist s20140301_00007_850_healpix001244.sdf
...
2: 2014 Apr 29 00:08:40.682 - SETTITLE (KAPPA 2.1-12)
Parameters: NDF=@s20140301_00007_850_fmos_1244 TITLE='CRL618'
MSG_FILTER=! QUIET=!
Software: /net/ipu/export/data/gbell/star/bin/kappa/ndfpack_mon
And jsawrapdr converts it to FITS including the history:
prov-problem-in $ fitshead jcmts20140301_00007_850_healpix001244_obs_000.fits
...
HISTORY 2: 2014 Apr 29 00:08:40.682 - SETTITLE (KAPPA 2.1-12)
HISTORY User: gbell Host: ikaika Width: 72
HISTORY Dataset: /export/data/visitors/gbell/scratch19/aIN5sFKqj5/s20140301_0000
HISTORY 7_85...
HISTORY Parameters: NDF=@s20140301_00007_850_fmos_1244 TITLE='CRL618'
HISTORY MSG_FILTER=! QUIET=!
HISTORY Software: /net/ipu/export/data/gbell/star/bin/kappa/ndfpack_mon
Then jsawrapdr converts it back to SDF and the history is missing:
prov-problem-out $ hislist s20140301_00007_850_healpix001238.sdf
...
2: 2014 Apr 28 23:51:05.076 - SETTITLE (KAPPA 2.1-12)
(and PROVSHOW works on that file).
However, when ORAC-DR processes that file, I get a file which likewise lacks the history:
prov-problem-out $ hislist gs850um_healpix001244.sdf
...
2: 2014 Apr 28 23:51:05.218 - SETTITLE (KAPPA 2.1-12)
But now PROVSHOW segfaults:
prov-problem-out $ provshow gs850um_healpix001244.sdf
Segmentation fault (core dumped)
(Which prevents jsawrapdr from post-processing that file.)
I was able to put an if statement around the line which I identified with gdb
as causing the segfault, so I have the following, which works:
ndg_provenance.c:
3169 if (hrec->text) {
3170 astMapPut0C( kmrec, TEXT_NAME, hrec->text, NULL );
3171 }
3172 else {
3173 astMapPut0C( kmrec, TEXT_NAME, "", NULL );
3174 }
But I'm not sure whether HistRec.text is supposed to be allowed to be NULL or
not. If it is then I could commit this change, but if it's supposed to be
guaranteed to not be NULL then this isn't the right fix.
Echomop builds a tar file of all the tex support documents and installs that into the documentation tree as sun152.latex_tar. Leaving aside that the other documents install the EPS files explicitly rather than using a tar file (it could be argued that it would be more convenient if the other documents used the tar scheme instead), the tar file is not correct:
sun152.htx/
sun152.htx/*.*
sun152.tex
sun152_01.eps
sun152_02.eps
sun152_03.eps
sun152_04.eps
sun152_05.eps
sun152_06.eps
sun152_07.eps
sun152_08.eps
sun152_cover.eps
sun152_glossy1.eps
sun152_glossy2.eps
and includes the HTX component, which is already installed (I haven't listed the contents of the HTX directory, but the tar file includes them all). The HTX files should not be there.
In some sense I'm not sure why we continue to install the document source since it's all in the git repository. The only reason I can think of would be for grepping but the PDF files are searchable.
Presumably at some point we need to change all references to JACH on github, to be EAO. See https://help.github.com/articles/renaming-an-organization/
Could be quite a fragile process...
Even though the documents in docs/ work with bootstrap, configure and make, they are not part of make world. For example sgp38.tex is not built. Looking at the Makefile there are only entries for:
$(MANIFESTS)/sc2 \
$(MANIFESTS)/sc3 \
$(MANIFESTS)/sc4 \
$(MANIFESTS)/sc5 \
$(MANIFESTS)/sc6 \
$(MANIFESTS)/sc7 \
$(MANIFESTS)/sc9 \
$(MANIFESTS)/sc12 \
$(MANIFESTS)/sc14 \
$(MANIFESTS)/sc15 \
$(MANIFESTS)/sc17 \
$(MANIFESTS)/sc21 \
$(MANIFESTS)/sg8 \
$(MANIFESTS)/sg9
but Makefile.dependencies lists all the documents properly. I think that the new documents were never added to Makefile.in. I see that @pwdraper added the cookbooks and sg8 and sg9 in 2f49bb3, but there was no corresponding followup commit to 963e493, which was the one that added all the other documents to the dependencies file in 2008.
I'm amazed I only just noticed. It came up because I needed an online link to SGP/38.
On a completely clean build, the SMURF file 'params.tex', generated by make_pardocs, is empty.
This file should contain the prolat-generated documentation of all the smurf dimmconfig parameters.
This problem occurs on both OSX (built by @sfsgraves) and on Linux (built by @grahambell). I'd run across the problem previously, but @dsberry couldn't reproduce it so I assumed it was something weird in my setup.
The file 'make_pardocs.in' expects the make system to replace @prolat@ with the path to the prolat binary. In this build, however, it has just been replaced with the word 'prolat'.
When running smurf_makemap on non-SCUBA-2 data using the configuration dimmconfig_bright_compact.lis, I came across the following error:
!! Unknown configuration parameter 'FILT_EDGE_LARGESCALE' specified via
! environment parameter CONFIG.
! Text was '450.flt.filt_edge_largescale=600' read from file
! '$<STARLINK_DIR>/share/smurf/dimmconfig.lis'.
! Application exit status SAI__ERROR, Error
Upon further investigation, the problem is that since the instrument is unknown (not SCUBA-2), no 450.* or 850.* configuration options are registered from the defaults file, and thus flt.filt_edge_largescale is not a known option (only the 450 and 850 alternatives are listed in the defaults file). The actual error is the line flt.filt_edge_largescale = 200 in the dimmconfig_bright_compact.lis configuration. kpg1Config mistakenly reports the error as coming from dimmconfig.lis. The line quoted from that file is not the error, as it is ignored; this is confirmed by running makemap specifying dimmconfig.lis and getting no error.
Reported by a visiting observer, and verified here:
Gaia can't load data if it's in a directory with a space in its name:
a) If you try and open a file from the command line that is in a directory with a space in it, you get an HDS_OPEN error:
$ gaia Test\ Directory/scuba2_generic_bothmaps.sdf
GAIA_DIR = /stardev/bin/gaia
!! Error accessing file '/home/sgraves/Test.sdf' - No such file or directory
! HDS_OPEN: Error opening an HDS container file.
b) The Select File window doesn't list the directory. If you manually type the directory name in, it will correctly list the file, but you will then get the same HDS_OPEN error as above.
This was verified on the EAO Hilo /stardev build, which claims a starversion of:
master @ 5c8256a (2016-01-23T05:39:31)
Not sure if this would be an awful lot of work to fix, but it did seem to be very annoying for the users (and initially confusing, until they'd worked out what was causing the problem).
With commit ed5ae8d GAIA no longer assumes .sdf in the C and Fortran code. The Tcl code is full of .sdf though, and this needs to be cleaned up. Not sure how best to refactor the code at the moment (a Tcl interface to dat_par.h? A Tcl library that at least defines the file extension in a single place in GAIA?). @pwdraper, how would you approach it?
This would be relevant if someone is writing an HDS emulation library on top of HDF5 and that person thinks it would be confusing to continue to call the files .sdf.
When cfitsio is built individually (bootstrap, configure, make and make install in thirdparty/heasarc/cfitsio) it doesn't install the versioned library libcfitsio.so.4. When it is built as part of a complete build (bootstrap, configure -C, make world in the top-level directory, installing to a new empty target directory) this file is created, but it doesn't appear in the cfitsio manifest.
It's not entirely clear how it's installed, because the Makefile.am specifies cfitsio/libcfitsio.4@SHLIB_SUFFIX@, where the name is the wrong way round: this expands (in Makefile) to cfitsio/libcfitsio.4.so, whereas the file we actually have is cfitsio/libcfitsio.so.4. It seems that installing sla or atl might actually create the versioned library. The corresponding commit message (ca719b0) mentions that the versioned libraries end up as copies, but what we actually seem to get is a symlink going the wrong way: from libcfitsio.so.4 to libcfitsio.so.
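For clarity, the reported layout can be reproduced in a scratch directory. Conventionally the versioned file is the real library and the unversioned name is the symlink; here it is the reverse (names mirror the report, the directory is throwaway):

```shell
# Recreate the reported state: the *versioned* name is a symlink
# pointing at the unversioned file, the reverse of the usual
# convention (which would be libcfitsio.so -> libcfitsio.so.4).
tmp=$(mktemp -d)
cd "$tmp"
touch libcfitsio.so                    # the real installed file
ln -s libcfitsio.so libcfitsio.so.4    # the wrong-way-round symlink
readlink libcfitsio.so.4
```

Checking the output of readlink on an installed tree is a quick way to confirm which way round the link was actually made.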
itcl cannot be installed into a tree where itcl is already installed, because it fails to deal with the directories that are already present:
mkdir /var/folders/ll/34nyr16s1w5_3m896tr29cvr0000gn/T/starconf-565/star/lib/iwidgets4.0.1
mkdir /var/folders/ll/34nyr16s1w5_3m896tr29cvr0000gn/T/starconf-565/star/lib/iwidgets4.0.1/scripts
mkdir /var/folders/ll/34nyr16s1w5_3m896tr29cvr0000gn/T/starconf-565/star/lib/iwidgets4.0.1/demos
mkdir /var/folders/ll/34nyr16s1w5_3m896tr29cvr0000gn/T/starconf-565/star/lib/iwidgets4.0.1/demos/images
mkdir /var/folders/ll/34nyr16s1w5_3m896tr29cvr0000gn/T/starconf-565/star/lib/iwidgets4.0.1/demos/html
cp: cannot overwrite directory /star/./lib/iwidgets with non-directory ./lib/iwidgets
Installation of component /star/manifests/itcl failed
The only solution is to remove the iwidgets directory and re-run make install.
In 289b69d SUN/209 was converted to use the new latex style but there are two outstanding problems:
When using the GWM/SGS interface after measuring an aperture, PHOTOM crashes
with a segv in aptop.f at the DAT_UNMAP (line 680) while freeing up HDS
workspace. Operating with the same parameters from GAIA there is no crash.
The code probably needs valgrinding, or even updating to use a modern PSX or
AST approach to obtaining workspace.
Recorded here as we're too busy at present to investigate now.
Jane Buckle reports that she has memory corruption in a FITS file:
I've created a fits file (written using pyfits, copying a header), which gaia is happy to display, but kappa commands (such as ndftrace) complain about it.
The error starts with:
*** glibc detected *** /stardev/bin/convert/fits2ndf: double free or corruption (!prev): 0x0000000004505e50 ***
Then there is a long listing which it calls a memory map, then it complains about the fits2ndf:
2b80815dd000-2b80815de000 rw-p 00000000 00:1
!! Requested data extends beyond the end of the record; record length is 0
! bytes (possible corrupt HDS container file
! HDS_OPEN: Error opening an HDS container file.
! Failed to convert the FITS format file
! NDFTRACE: Error displaying the attributes of an NDF data structure.
! Application exit status NDF__CVTER, foreign format conversion error
I don't believe that the file is actually corrupt, since gaia has no problems.
I have recently encountered this issue on Ubuntu 14.04 with the locale set to en_ZA.UTF-8, while using software which uses the AST library as a dependency.
This happens because the native sscanf which AST uses to parse the header is locale-aware. I can see that AST provides its own replacement function which it substitutes on some platforms; however, it was not selected on my platform when I installed AST (8.0.7 and several older versions) from the source tarball.
In addition to this, when this error is encountered, AST attempts to print the affected card in an error message, but does not truncate the char array after 80 characters, so it actually attempts to print the entire remainder of the header, which causes a buffer overflow.
Setting the LC_NUMERIC environment variable to e.g. C fixes my problem locally. Is this the correct way of fixing the problem, or should AST use its replacement function on platforms where it detects a locale with a nonstandard separator? My understanding is that FITS files are not locale-aware and should always use the decimal point as a separator, in which case AST's current behaviour seems to be a bug.
Converting an SDFITS file to NDF I get the following structure:
CNV_OTF3223S004 <NDF>
DATA_ARRAY <ARRAY> {structure}
DATA(1) <_UBYTE> *
ORIGIN(1) <_INTEGER> 1
MORE <EXT> {structure}
FITS(8) <_CHAR*80> 'SIMPLE = T / fi...'
... 'COM...','ORIGIN =
'Supercam'','END'
FITS_EXT_1 <TABLE> {structure}
NROWS <_INTEGER> 64
COLUMNS <COLUMNS> {structure}
CDELT2 <COLUMN> {structure}
COMMENT <_CHAR*19> 'label for field 1'
DATA(64) <_DOUBLE> 0.098173249432384,
... -0.022024647309758
UNITS <_CHAR*8> 'deg Glon'
CDELT3 <COLUMN> {structure}
COMMENT <_CHAR*19> 'label for field 2'
DATA(64) <_DOUBLE> -0.014729787100485,
... -0.015228609176948
UNITS <_CHAR*8> 'deg Glat'
TSYS <COLUMN> {structure}
COMMENT <_CHAR*19> 'label for field 3'
DATA(64) <_DOUBLE> 41524.657977196,0,
... 8152.7942924153
UNITS <_CHAR*1> 'K'
TRX <COLUMN> {structure}
COMMENT <_CHAR*19> 'label for field 4'
DATA(64) <_DOUBLE> 0,0,24.422266182385,0,0,0,0,
... 0,0,90219.208127594
UNITS <_CHAR*1> 'K'
...
End of Trace.
(where the last few columns have been left off). This conversion has a problem, though, in that it's missing a FITS header. The SDFITS file has a primary HDU and then a secondary HDU associated with the binary table. The secondary HDU is not included in the conversion. For all I know this is intentional, but I wasn't expecting a problem with a FITS file this simple. Am I meant to be using the EXTABLE parameter or something in order to import the HDU from another extension?
SOFA has a license that makes it difficult for Starlink software to be included in Linux distributions such as Red Hat and Debian. The solution is to use ERFA, a rebadged version of the SOFA library with a permissive license. thirdparty/sofa/sofa/ should be replaced with thirdparty/erfa/erfa/, and a similar change should be made to PAL and AST.
git status keeps going on about untracked .so files (e.g. in cfitsio):
[dsb@lurga cfitsio]$ git status
# Not currently on any branch.
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# libcfitsio.so.1
# libcfitsio.so.1.3.35
nothing added to commit but untracked files present (use "git add" to track)
The .gitignore file contains:
# Build files
*.o
*.a
*.so
*.dylib
*.cache/
Should it not also include *.so.*?
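A minimal sketch of the suggested addition, keeping the existing entries:

```
# Build files
*.o
*.a
*.so
*.so.*
*.dylib
*.cache/
```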
The build-modules script to install extra Perl modules regularly fails when installing the LWP::Protocol::https package. This is a dependency needed when installing Astro-Catalog-4.31, so the problem is only encountered when installing into a new build.
The error message written into the log is:
# Failed test at t/apache.t line 18.
# 'Can't connect to www.apache.org:443 (certificate verify failed)
#
# LWP::Protocol::https::Socket: SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed at /net/kamaka/export/data/star/star-legacy-r1/star-2015A/Perl/lib/perl5/site_perl/5.18.2/LWP/P\
rotocol/http.pm line 47.
# '
# doesn't match '(?^:Apache Software Foundation)'
The workaround is to install that dependency manually, using --force to ignore the test results, and then re-run the build-modules script. Should this behaviour be included in the build-modules script (as is done with the Tk module already)? Or is there a better way to avoid that error which I've missed? The problem occurs both on Mac and on Linux, at least when building in Hilo.
In commit 0b368ea we are left with the following code:
/* Get the name for the Q NDF for this block. Start off with "Q" followed by
   the block index. */
iblock++;
nc = sprintf( ndfname, "Q%d", iblock );
/* Append the subarray name to the NDF name. */
nc += sprintf( ndfname + nc, "_%s", subarray );
/* Append the chunk index to the NDF name. */
nc += sprintf( ndfname + nc, "_%d", (int) ichunk );
in smurf_calcqu.c. The second two sprintf calls were historically protected by if-statements and so were distinct; now this could all be done in a single sprintf. Furthermore, one_snprintf should be used to protect against buffer overflow.

There are many unprotected sprintf uses in SMURF. Particular care should be taken when using %s.
Reported by one of the JCMT support scientists:
If you run gaia with a non-existent file name, you have to click 'Okay' on two identical boxes telling you that the file doesn't exist, instead of just one. You can't interact with gaia until you've clicked 'Okay' on both boxes.
(To add to the inconsistency: If you use gaiadisp, the gaiadisp script actually checks if the file exists and puts a message on the command line instead of using a dialogue box indicating the file was not there. )
@pwdraper Any interest in fixing it? I don't think it was super urgent, it was just irritating them a little.
In astlib version 7.3.2 setting CarLin=0 or CarLin=1 on FitsChan does not seem to have any effect. In version 7.0.6 setting CarLin seemed to work. Here is an example FITS header on which it does not work for me (i.e. calling astTranN returns the same result whether I set CarLin=0 or CarLin=1):
SIMPLE  =                    T / file does conform to FITS standard
BITPIX  =                  -32 / number of bits per data pixel
NAXIS   =                    3 / number of data axes
NAXIS1  =                   81 / length of data axis 1
NAXIS2  =                   59 / length of data axis 2
NAXIS3  =                    1 / length of data axis 3
EXTEND  =                    T / FITS dataset may contain extensions
COMMENT   FITS (Flexible Image Transport System) format is defined in 'Astronomy
COMMENT   and Astrophysics', volume 376, page 359; bibcode: 2001A&A...376..359H
OBJECT  = 'GALFACTS Field 1 Polarized Intensity' / Object name
CTYPE1  = 'RA---CAR'           / 1st axis type
CRVAL1  =            63.224998 / Reference pixel value
CRPIX1  = -6.150000000000000E+01 / Reference pixel
CDELT1  =           -0.0166667 / Pixel size in world coordinate units
CROTA1  =               0.0000 / Axis rotation in degrees
CTYPE2  = 'DEC--CAR'           / 2nd axis type
CRVAL2  =            28.650000 / Reference pixel value
CRPIX2  = 1.650000000000000E+01 / Reference pixel
CDELT2  =            0.0166667 / Pixel size in world coordinate units
CROTA2  =               0.0000 / Axis rotation in degrees
CTYPE3  = 'FREQ'               / 3rd axis type
CRVAL3  =    1451974016.000000 / Reference pixel value
CRPIX3  =                 1.00 / Reference pixel
CDELT3  =    126000000.0000000 / Pixel size in world coordinate units
CROTA3  =               0.0000 / Axis rotation in degrees
EQUINOX =              2000.00 / Equinox of coordinates (if any)
BUNIT   = 'Kelvin'             / Units of pixel data values
BTYPE   = 'Polarized Intensity'
END
The online AST Grid Plotter (http://starlink.jach.hawaii.edu/cgi-bin/ast/fits-plotter) seems to work fine though.
VTK v6.x does not seem to be backwards compatible with v5.10 so in order to keep up with VTK we need to modify GAIA accordingly. A naive build attempt resulted in the following errors:
clang++ -g -O2 -g -O0 -Wall -Wextra -fstack-protector -ftrapv -DHAVE_CONFIG_H -I. -I./generic -I../gaia/generic -I/star/include/skycat -I/star/include/rtd -I/star/include/cat -I/star/include/astrotcl -I/star/include/tclutil -I/star/include/vtk -I"/star/include" -I"/star/include" -I/opt/X11/include -g -O0 -Wall -Wextra -fstack-protector -ftrapv -Os -Wall -Wno-implicit-int -fno-common -g -O2 -c `echo ./generic/Gaia3dVtkTcl.C` -o Gaia3dVtkTcl.o
./generic/Gaia3dVtkTcl.C:41:17: warning: using directive refers to implicitly-defined namespace 'std'
using namespace std;
^
In file included from ./generic/Gaia3dVtkTcl.C:64:
./generic/vtkAstTransform.h:49:25: error: variable has incomplete type 'class VTK_COMMON_EXPORT'
class VTK_COMMON_EXPORT vtkAstTransform : public vtkWarpTransform
^
./generic/vtkAstTransform.h:49:7: note: forward declaration of 'VTK_COMMON_EXPORT'
class VTK_COMMON_EXPORT vtkAstTransform : public vtkWarpTransform
^
./generic/vtkAstTransform.h:49:41: error: expected ';' after top level declarator
class VTK_COMMON_EXPORT vtkAstTransform : public vtkWarpTransform
^
;
./generic/vtkAstTransform.h:49:43: error: expected unqualified-id
class VTK_COMMON_EXPORT vtkAstTransform : public vtkWarpTransform
^
./generic/Gaia3dVtkTcl.C:858:34: error: use of undeclared identifier 'vtkAstTransform'; did you mean 'vtkTransform'?
vtkAstTransform *transform = vtkAstTransform::New();
^~~~~~~~~~~~~~~
vtkTransform
/star/include/vtk/vtkProp3D.h:37:7: note: 'vtkTransform' declared here
class vtkTransform;
^
./generic/Gaia3dVtkTcl.C:858:34: error: incomplete type 'vtkTransform' named in nested name specifier
vtkAstTransform *transform = vtkAstTransform::New();
^~~~~~~~~~~~~~~~~
/star/include/vtk/vtkProp3D.h:37:7: note: forward declaration of 'vtkTransform'
class vtkTransform;
^
./generic/Gaia3dVtkTcl.C:859:5: error: reference to overloaded function could not be resolved; did you mean to call it?
transform->SetMapping( (AstMapping *) astCopy( mapping ) );
^~~~~~~~~
./generic/Gaia3dVtkTcl.C:862:25: error: address of overloaded function 'transform' does not match required type 'vtkAbstractTransform'
tpdf->SetTransform( transform );
^~~~~~~~~
I'm not sure how trivial the modifications will be to get this working.
VTK v6.1 does build okay (the mirrored repo has the code in it already).
If the regions output by TILEINFO for the tiles at the southern corners of facets 6 and 4 (the equatorial facets at RA 12 and 0) are given to ASTOVERLAP, it considers them to be identical:
$ tileinfo instrument="SCUBA-2(850)" itile=24576 region=24576.ast
$ tileinfo instrument="SCUBA-2(850)" itile=16384 region=16384.ast
$ astoverlap 24576.ast 16384.ast
The Regions are identical to within their uncertainties.
Maybe it's not looking at the different reference position for tile 6? Because this is one of the differences between the region files:
$ diff -U0 24576.ast 16384.ast
[...]
@@ -77 +77 @@
- SRef1 = 3.14159265358979 # Ref. pos. RA 12:00:00.0
+ SRef1 = 0 # Ref. pos. RA 0:00:00.0
[...]
The same is true for some other pairs such as 24579 and 16387.
The postscript/PDF version of SUN/152 does not get installed.
There doesn't seem to be any reason to keep echwind and kaprh in the main tree. I asked on the Starlink list about echwind and no-one responded. No-one can seriously recommend using anything from kaprh at this point (the only issue being some old documents referring to old commands).
OS X builds do not seem to be detecting the JPEG library properly. The test should either determine that the library is installed and not build it, or else build the private Starlink JPEG. This causes problems on OS X because the JPEG library can come from multiple different locations: a build done against Homebrew won't work with MacPorts. The solution, at least on Mac, may be to trigger the third-party build of JPEG regardless of the existence of the library on the system.
If I try to use PROVADD to set the parent for a file, I don't see it in the PROVSHOW output. For example:
/tmp/test $ creframe x1 mode=bl accept
/tmp/test $ ndfcopy x1 x2
/tmp/test $ provadd x2 x1 isroot
/tmp/test $ provshow x2
0: /tmp/test/x2
Parents: <unknown>
Date: 2014-05-06 00:56:17
Creator: <unknown>
Starlink has a few documents that were written in XML rather than LaTeX. These documents use a completely different build system and rely on components that are no longer supported (e.g. Jade). Currently they are not installed and not included in the updates to the build system when fixing #4.
#5 has some comments from @nxg and @MalcolmCurrie concerning the XML documentation.
The documents contain some useful information, so the issue is what to do with them. One option is to mark the documents as frozen and generate PDF versions that are installed without attempting an automated build. That would seem to be the pragmatic solution, and certainly less work than attempting to rejig the build using modern tools.
As I was adding an old link to SC/20, it reminded me that we need to
switch jach.hawaii.edu links to the new eaobservatory.org in our
documents at a suitable date.
It has been noted that the EXT model causes slightly different maps to be created by SMURF makemap when the number of threads changes. The discrepancy is of the order of the noise, and so is quite noticeable.

I think this is caused by the code that splits the data into chunks not splitting on WVM measurement boundaries. The WVM is read out every 1.2 seconds or so, so there can be 200 TCS readings for every WVM reading. The WVM calculation depends on the airmass of the telescope, but once calculated it should not change as the telescope moves until the next reading is made. If we split the data between threads between readings, the tau value at the start of the next chunk will differ from the value at the end of the previous chunk, despite them being nominally the same reading. The number of threads therefore controls how many of these jumps there are (and the telescope can move quite a long way in a second).

After an estimate for the thread boundaries has been calculated, the actual boundary should be finessed by running forward until the WVM measurement changes.
To be pedantic, we should really be reading the telescope position for WVM_TIME, but at the moment we simply assume that the TCS_AIRMASS reading is appropriate for the associated WVM_* readings, despite WVM_TIME indicating (by looking at TCS_TAI) that an earlier telescope reading should be used.
We recently had a report that the pipeline wasn't working:
!! Error reading file names from stream attached to shell process -
! Interrupted system call
!! No files found matching the file specification
! '/Users/person/Oph_H2D+/cube_recipe/adam_61767/ndfpack_mon'.
!! HDS_OPEN: Error opening an HDS container file.
The problem was that HDS was entering the wildcard-match part of the code and failing to find the file, the culprit being the + in the directory name. It's not clear whether this is fixable without a lot of work (splitting the path up and checking each component in turn to see if it exists), but I'm logging this for reference.
It also made clear to me that HDS is still forking processes to look for wildcard files (rec1_find_file dynamically creates a shell script) rather than using wordexp() like the rest of Starlink.
At some point in the past year the Starlink documentation has ended up with a strange byte sequence in it. Every time SUN95 is updated and the index rebuilt I get this error:
Updating index file for document: sun95
sed: RE error: illegal byte sequence
I only see this on OS X (Mountain Lion; not tested on older systems, so the problem may have been around for a long time and only now results in an error) and not on Linux. I am assuming sed has got cleverer recently and complains if a non-ASCII character is seen.
Filing this as a reminder.
etc/init/profile.in doesn't create an alias for frog, whereas cshrc.in does.
(Problem found testing 2015A-RC2 but probably not necessary to fix for the 2015A release.)
Building Starlink on Ubuntu 12.04 (with tcsh as my shell) I ran into the above error, which comes from running hlib on the .hlp files. The tcsh man page says that the status check, $?, is a tcsh enhancement and is not in csh. The first line of hlib is:

#!/bin/csh -f

and if I change csh to tcsh, hlib runs fine. Alternatively, changing $? to $status works. Any preferences for which change to implement? I'm not sure of the ubiquity of tcsh, so perhaps the latter would be preferable.
What I find odd is that I've never run into this error before, even with a previous build on the same OS, and I've always used tcsh.
While checking for notable improvements since Hikianalia, I could not find the release tag by browsing in gitk. starversion says:

hikianalia @ 6a47df7 (2013-04-04T04:26:09)

This SHA1 id isn't known. Has it been lost, or am I being thick at the end of a tiring week?
When FINDCLUMPS (using METHOD=GaussClumps) rejects clumps, it seems always to report this as due to touching the data array edge, e.g.:
1 clump rejected because it touches an edge of the data array.
However it can also reject them due to the clump peak being too low, but this is only seen when debugging output is enabled, e.g.:
Iteration 3:
Integrated clump intensity: 1708.34785167207 (in 441 pixels)
(the clump peak is too low)
And in this case the main message (normally the only one you see) is very misleading. It looks like the message pre-dates the addition of the "thresh" parameter.
Background information: I have a SCUBA-2 450um observation with only one significant feature which is very close to "thresh" (in this case 10) * the RMS and nowhere near the edge. Re-reducing the observation seems to cause enough change in the map for the clump to randomly switch between being accepted and being rejected. With the message indicating that this was due to touching the edge this was very confusing.
Currently the document creation system generates Postscript documents by using latex and dvips. It would be much more convenient if we generated PDF documents directly with pdflatex.
I think this needs:

- Removal of the .ps target and addition of a .pdf target for LaTeX files.
- Use of the includegraphics command provided by graphicx in LaTeX2e.
- Installation of the resulting PDFs into /star/docs.

This seems like a fairly well-constrained task and shouldn't take very long. Many of the documents don't need any changes at all to work.
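For the graphicx step, the usual pattern (a generic LaTeX2e sketch, not taken from any particular Starlink document) is:

```latex
\usepackage{graphicx}
% With pdflatex, \includegraphics selects a .pdf/.png/.jpg figure
% automatically when no file extension is given:
\includegraphics[width=0.8\textwidth]{figure1}
```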
SExtractor is currently at v2.5-1. @talister would like SExtractor to be upgraded to V2.19.5, particularly as this would allow catalogues to be created in a form that would allow us to compare autoastrom with scamp.
@pwdraper It's not at all clear to me how to do the SExtractor merge with upstream as there seem to be two SExtractor submodules and I'm confused as to the distinction and which files are really needed to provide NDF support. Can you please give me some guidance?
Trying to build /stardev at the summit from a fresh clone, make world hung at libraries/cnf (as if it were vtk). Going in manually, it works until partway through the hypertext-document building:
47/194:subsection:...."Sun" for node47.html
;.;
48/194:subsubsection:...."General" for node48.html
;..;
49/194:subsubsection:...."Data Types" for node49.html
;..,,,..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
The fullstops go on ad infinitum.
smf_check_smfHead has this braindead logic:
/* Detector names */
if (ohdr->detname == NULL ){
ohdr->detname = astMalloc( ihdr->ndet*
( strlen( ohdr->detname ) + 1 ) );
if( ohdr->detname ) {
memcpy( ohdr->detname, ihdr->detname,
ihdr->ndet*( strlen( ohdr->detname ) + 1 ) );
}
}
/* OCS Config */
if (ohdr->ocsconfig == NULL ){
ohdr->ocsconfig = astMalloc( ihdr->ndet*
( strlen( ohdr->ocsconfig ) + 1 ) );
if( ohdr->ocsconfig ) {
strcpy( ohdr->ocsconfig, ihdr->ocsconfig );
}
}
where the strlen call will obviously get itself into trouble, given that we know ohdr->ocsconfig is NULL. I imagine this is meant to be looking at ihdr and not ohdr, but it should presumably do nothing if ihdr->ocsconfig is itself NULL. I'm not sure why it's multiplying by ihdr->ndet.
It looks like the detname code from ddb48b7 was copied for the ocsconfig case in c38a437 (the detname section looks wrong as well).
The crehlp command no longer works on OS X Mavericks. The reason is that strncpy is being used to copy characters within the same buffer, and the behaviour of this is undefined. This is the code from creh.c:
382 iobuf [ 0 ] = (char) c1;
383 iobuf [ 1 ] = (char) ' ';
384 l = ito - ifrom + 1;
-> 385 strncpy ( iobuf + 2, iobuf + ifrom, l );
386 for ( i = 2 + l; i < LIN - 1; iobuf [ i++ ] = (char) ' ' );
387 iobuf [ i ] = (char) '\0';
In line 385 we copy characters from iobuf into iobuf. For the example in convert/tasks.hlp, ifrom has a value of 2, and so this line ends up copying the contents from the same position. It seems that Mavericks now has overlap-detection code in place that triggers an abort at runtime. If we really intend to shift characters from the end of iobuf to position 2 of iobuf, then we should probably change the code to use memmove and terminate the resultant buffer ourselves at position l+2:
diff --git a/libraries/hlp/creh.c b/libraries/hlp/creh.c
index c9912f1..e139c2a 100644
--- a/libraries/hlp/creh.c
+++ b/libraries/hlp/creh.c
@@ -382,7 +382,8 @@ int hlpCreh ( int ( * nametr ) ( int, char*, int, char* ),
iobuf [ 0 ] = (char) c1;
iobuf [ 1 ] = (char) ' ';
l = ito - ifrom + 1;
- strncpy ( iobuf + 2, iobuf + ifrom, l );
+ memmove( &(iobuf[2]), &(iobuf[ifrom]), l );
+ iobuf[l+2] = '\0';
for ( i = 2 + l; i < LIN - 1; iobuf [ i++ ] = (char) ' ' );
iobuf [ i ] = (char) '\0';
The above patch seems to be enough to get the help library created again.
The build system doesn't seem to care that crehlp is failing, and I think that is because of a related bug in hlib, where the return value from crehlp is not trapped:
foreach file ($*)
crehlp $file $file:r.shl
end
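A possible fix is to check $status after each crehlp invocation; a sketch in hlib's own csh (untested here, since crehlp is a Starlink tool):

```csh
foreach file ($*)
   crehlp $file $file:r.shl
   if ( $status != 0 ) then
      echo "hlib: crehlp failed for $file"
      exit 1
   endif
end
```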
Line 638 of smf_rebin1map.c:
} else if( pdata->operation == 1 ) {
should be
} else if( pdata->operation == 2 ) {
as can be seen by comparison with line 431.
On OS X the Starlink init scripts (starlink.login and starlink.sh) set the DYLD_LIBRARY_PATH variable to point to the Starlink installed libraries. This causes problems for programs installed with Fink. For example, Fink's Python uses version 8.5.0 of the Tk library, which is installed under /sw/lib. If DYLD_LIBRARY_PATH includes /star/lib, Python attempts to use the Tk version under /star/lib, and it crashes if that version is older than the one it was compiled against.

Indeed, if the binaries are linked properly one should not need to set this variable at all, because each executable contains the full path to its libraries.
The latest /stardev at JAC in Hilo (cfb0eea) is giving me the following error when running sc2filtermap:
hdr is NULL for calc_wvm. Possible programming error.
Command line:
sc2filtermap in=pg20130111_1_cal out=pg20130111_1_whiten whiten \
whiterefmap=pg20130111_1_jkmap\(0~574,0~574\)
Examples files at JAC: ~agibb/scuba2/
I'm trying to figure out how to take a WCS that I have in a FrameSet (in python) and write it into a FITS header. Following the documentation in the help for PyFITSAdapter, I came up with the following:
import starlink.Ast
import starlink.Atl
import pyfits
f = pyfits.open('1904-66_TAN.fits')
# File from http://www.atnf.csiro.au/people/mcalabre/WCS/example_data.html
fc = starlink.Ast.FitsChan( starlink.Atl.PyFITSAdapter(f[0]) )
wcs = fc.read()
f.close()
hdu = pyfits.PrimaryHDU()
fc2 = starlink.Ast.FitsChan( None, starlink.Atl.PyFITSAdapter(hdu) )
fc2.write(wcs)
fc2.writefits()
print hdu.header
But at the end of this, the header is empty.
Before the writefits() command, the header still has its original value, with keys like SIMPLE, BITPIX and NAXIS; after writefits() it is empty. So that function is clearly doing something, just not what I expected.
Could you please explain what I am doing wrong here? Thanks so much for your help!
This was caused by a bug in the PyFITSAdapter class. Also note that the "fc2" FitsChan will use "NATIVE" encoding by default (i.e. it will use AST-specific keywords for the WCS information). If you want standard FITS-WCS keywords, you need to append "Encoding=FITS-WCS" to the end of the FitsChan constructor call.
Resize the qual array along with the other data components in fts2init to avoid segmentation faults when writing out this component within NDF files.

A temporary patch to circumvent this problem was first developed by Graham Bell in December 2012 and applied to the repository by Matt Sherwood (commit 92dc7e8) on 5 September 2013.