slrdata

Python package for satellite laser ranging file formats

This package contains functions to parse CPF (predicts) and CRD (ranging observations) files in Python.

See the ILRS documentation of the file formats for their definitions and details.

These were written for personal use, implementing mostly just the features that were needed at the time. No warranty or future maintenance is promised. Feature requests will be considered. Packaging for PyPI is also a possibility if there is demand.

Installation

Python 3 is required, as well as the packages listed in requirements.txt. I recommend using a virtualenv and running:

$ pip install -r requirements.txt
$ python setup.py install

Usage

Consolidated Prediction Format

The CPF parser was written in order to interpolate predicts in the way described in the CPF format specification (see https://ilrs.cddis.eosdis.nasa.gov/data_and_products/formats/cpf.html).

It works relatively simply: the parse_CPF function takes the raw contents of a CPF file and produces a Prediction object, which wraps the data together with an interpolator (scipy's BarycentricInterpolator).

Calling the interpolate(t) method, where t is a datetime object, returns the interpolated position as a numpy array. A ValueError is raised if t is outside the valid interpolation range of the prediction file.

The method write_data(filename, delim=",") will write the raw prediction data to the given file, using the given delimiter (defaults to a comma).

Example

>>> from SLRdata import parse_CPF
>>> from datetime import datetime

>>> raw_data = open("todays_predict_file.cpf", "r").read()
>>> my_predict = parse_CPF(raw_data)

>>> timestamp = datetime.utcnow()
>>> my_predict.interpolate(timestamp)
array([  712048.91907249,  5057248.90826335, -5784879.0715739 ])

>>> my_predict.write_data("predict.csv")
>>> my_predict.write_data("predict_tab-separated.txt", "\t")
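The interpolation scheme described above can be sketched without a real CPF file. The snippet below uses synthetic ephemeris nodes with scipy's BarycentricInterpolator (the interpolator named above); the toy one-dimensional "orbit", the 60-second node spacing, and the explicit range check are illustrative assumptions, not the package's actual implementation.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Synthetic ephemeris nodes standing in for a CPF file: positions
# sampled every 60 s from a toy one-dimensional "orbit" (not real data).
t_nodes = np.arange(0.0, 601.0, 60.0)                    # seconds of day
x_nodes = 7000e3 * np.cos(2 * np.pi * t_nodes / 5400.0)  # metres

interp = BarycentricInterpolator(t_nodes, x_nodes)

t = 125.0
# Mimic the validity check described above: interpolating outside
# the tabulated span raises a ValueError.
if not (t_nodes[0] <= t <= t_nodes[-1]):
    raise ValueError("epoch outside the prediction range")

x = float(interp(t))
print(x)
```

With smooth orbit-like data and closely spaced nodes, the barycentric polynomial reproduces the underlying curve to well below a metre here.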

Consolidated Ranging Data

The CRD format is more complex, and the parser's output is structured accordingly:

  1. The parser returns a collection of dictionaries representing observation "units" (defined by the H1 header lines, ended by the H9 header).
  2. A unit dictionary has a list of "sessions", defined by H4 headers, ended by the H8 header.
  3. Each session is a dictionary containing session parameters and a numpy array with the observed data.

Currently only the bare minimum of ranging data is parsed: the timestamp (second of day) and the observed delay. More values will be added as needed.

Station and target data are also parsed into dictionaries, accessed with the "station" and "target" keys. The CRD specification allows station and target data either at the unit level (after an H1 header but before any H4 header) or separately for each session (after an H4 header). This flexibility leads to a complication: the parser places the "station" and "target" keys either in the unit dictionary or in the session dictionary, depending on where the data appear in the file. It is up to the user to look them up in the right place (see the example below).

If a timestamp (usually end time of observations) is not specified, it will be None.
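The three-level structure described above can be walked with plain loops. The sketch below uses a hand-built stand-in for the output of parse_CRD, matching the unit/session layout described here; the values are illustrative, not real observations.

```python
import numpy as np

# Hand-built stand-in for parse_CRD output: a list of units, each with
# a station dictionary and a list of sessions holding data arrays.
units = [{
    "station": {"name": "BORL", "ID": 7811},
    "sessions": [
        {"data": np.array([[65600.7011, 0.0142108],
                           [65602.1011, 0.0141865]])},
    ],
}]

for unit in units:
    for session in unit["sessions"]:
        data = session["data"]
        # Column 0: timestamp (second of day); column 1: observed delay (s).
        print(unit["station"]["name"], len(data), data[:, 1].mean())
```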

Examples

>>> from SLRdata import parse_CRD

>>> raw_data = open("ranging_file.crd", "r").read()
>>> Units = parse_CRD(raw_data)

>>> unit = Units[0]

>>> unit.keys()
dict_keys(['format', 'version', 'time', 'sessions', 'station', 'target'])

>>> unit["station"]
{'name': 'BORL', 'ID': 7811, 'system': 38, 'occupancy': 2, 'timescale': 7}

>>> unit["sessions"][0]["data"]
array([[  6.56007011e+04,   1.42107683e-02],
       [  6.56021011e+04,   1.41864711e-02],
       [  6.56028011e+04,   1.41743928e-02],
       ...,
       [  6.58627011e+04,   1.37161761e-02],
       [  6.58628011e+04,   1.37175978e-02],
       [  6.58630011e+04,   1.37204455e-02]])

Here is how to get the station of a session, falling back to the unit level if the session does not define one. A dictionary's get method returns its second argument when the requested key is not found:

>>> session = unit["sessions"][0]

>>> session["station"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'station'

>>> session.get("station", unit["station"])
{'name': 'BORL', 'ID': 7811, 'system': 38, 'occupancy': 2, 'timescale': 7}
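If this lookup is needed in several places, the get() fallback can be wrapped in a small helper. The function name resolve_metadata is hypothetical and not part of SLRdata; this is just one way to package the pattern shown above.

```python
# Hypothetical convenience wrapper around the get() fallback shown
# above; resolve_metadata is NOT part of SLRdata.
def resolve_metadata(unit, session, key):
    """Return session-level metadata if present, else fall back to the unit."""
    return session.get(key, unit.get(key))

# Toy unit with station data defined only at the unit level.
unit = {"station": {"name": "BORL", "ID": 7811}, "sessions": [{}]}
session = unit["sessions"][0]
print(resolve_metadata(unit, session, "station"))
```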

The raw ranging data in a unit can be dumped into a CSV file with the dump_unit function:

>>> from SLRdata import dump_unit
>>> dump_unit(unit, "unit_data.csv", delim=",")

Some metadata will be written as comment lines starting with a # symbol.
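Because the metadata lines start with #, such a file can be read back with numpy's default comment handling. The sketch below uses an in-memory stand-in for a dumped file; the comment line and data values are illustrative.

```python
import io
import numpy as np

# Stand-in for a file produced by dump_unit: '#' comment lines
# followed by delimiter-separated data rows (contents illustrative).
csv_text = (
    "# station: BORL\n"
    "65600.7011,0.0142108\n"
    "65602.1011,0.0141865\n"
)

# np.loadtxt skips lines starting with '#' by default.
data = np.loadtxt(io.StringIO(csv_text), delimiter=",")
print(data.shape)
```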

Troposphere correction

A troposphere correction function is included, based on IERS Technical Note 13.

The function troposphere_correction takes the following parameters:

  • Temperature, in Kelvin,
  • Pressure, in hPa,
  • Relative humidity, as a percentage,
  • Station latitude, in radians,
  • Station geodetic height, in kilometers,
  • Satellite elevation, in radians,
  • Laser wavelength, in nanometers.

Examples

from math import pi
from SLRdata import troposphere_correction

T = 273.15
P = 1013.25
RH = 50.0
lat = 30 * pi/180
height = 0.500
elevation = 60 * pi/180
lbd = 532.0

deltaR = troposphere_correction(T, P, RH, lat, height, elevation, lbd)


