The SeisComP application layer for data processing and analysis


main's Introduction

SeisComP

About

SeisComP is a seismological software package for data acquisition, processing, distribution, and interactive analysis that has been developed by the GEOFON Program at Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, and gempa GmbH.

License

SeisComP is primarily released under the AGPL 3.0. Please check the license agreement.

Asking Questions

Please ask questions in the forums and use appropriate topics to get help on usage or to discuss new features.

If you have found a concrete issue in the code or if you have code-related questions, please use the GitHub issue tracker of the corresponding repository, e.g. the GitHub issue tracker of this repository.

Check out the repositories

The SeisComP software collection is distributed among several repositories. This repository only contains the build environment, the runtime framework (seiscomp control script) and the documentation.

To check out all repositories required to build a complete SeisComP distribution, the following script can be used:

#!/bin/bash

if [ $# -eq 0 ]
then
    echo "Usage: $0 <target-directory>"
    exit 1
fi

target_dir="$1"
repo_path=https://github.com/SeisComP

echo "Cloning base repository into $target_dir"
git clone "$repo_path/seiscomp.git" "$target_dir"

echo "Cloning base components"
cd "$target_dir/src/base" || exit 1
git clone "$repo_path/seedlink.git"
git clone "$repo_path/common.git"
git clone "$repo_path/main.git"
git clone "$repo_path/extras.git"

echo "Cloning external base components"
git clone "$repo_path/contrib-gns.git"
git clone "$repo_path/contrib-ipgp.git"
git clone https://github.com/swiss-seismological-service/sed-SeisComP-contributions.git contrib-sed

echo "Done"

cd ../../

echo "If you want to use 'mu', call 'mu register --recursive'"
echo "To initialize the build, run 'make'."

To keep track of the state of each subrepository, using mu-repo is recommended.
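For example, after running the checkout script, the subrepositories could be registered and inspected like this (a sketch; mu-repo forwards common git commands such as status or pull to every registered repository):

cd <target-directory>
mu register --recursive   # register all cloned repositories with mu-repo
mu status                 # run 'git status' in every registered repository
mu pull                   # update all registered repositories at once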

Build

Prerequisites

The following packages should be installed to compile SeisComP (an example install command is given after the list):

  • g++
  • git
  • cmake + cmake-gui
  • libboost
  • libxml2-dev
  • flex
  • libfl-dev
  • libssl-dev
  • crypto-dev
  • python-dev (optional)
  • python-numpy (optional)
  • libqt4-dev (optional)
  • qtbase5-dev (optional)
  • libmysqlclient-dev (optional)
  • libpq-dev (optional)
  • libsqlite3-dev (optional)
  • ncurses-dev (optional)
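
On a Debian or Ubuntu system, most of these packages could be installed along the following lines (a sketch only; package names vary between distributions and releases, and the Qt4 and Python 2 variants are only relevant on older systems):

# Example for a recent Ubuntu/Debian release; adjust package names to your distribution.
sudo apt-get update
sudo apt-get install g++ git cmake cmake-qt-gui \
    libboost-all-dev libxml2-dev flex libfl-dev libssl-dev \
    python3-dev python3-numpy qtbase5-dev \
    libmysqlclient-dev libpq-dev libsqlite3-dev libncurses-dev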

The Python development libraries are required if the Python wrappers are to be compiled, which is the default configuration. The development files must match the Python interpreter used on the system. If the system uses Python 3, then the Python 3 development files must be present in exactly the same version as the Python 3 interpreter in use. The same holds for Python 2.

Python-numpy is required if Numpy support is enabled, which is also the default configuration.

Configuration

The SeisComP build system provides several build options which can be controlled with a CMake GUI or from the command line by passing -D[OPTION]=ON|OFF to cmake.
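
For example, a non-interactive configuration from the command line could look like this (a sketch; the install prefix and option values are placeholders to adjust to your setup):

# Configure an out-of-source build, overriding the install prefix and disabling GUI components.
mkdir -p build && cd build
cmake -DCMAKE_INSTALL_PREFIX="$HOME/seiscomp" \
      -DSC_GLOBAL_GUI=OFF \
      ..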

In addition to standard CMake options such as CMAKE_INSTALL_PREFIX, the following global options are available:

  • SC_GLOBAL_UNITTESTS (default: ON) - Whether to build unit tests. If enabled, use ctest in the build directory to run them.
  • SC_GLOBAL_PYTHON_WRAPPER (default: ON) - Build Python wrappers for the C++ libraries. You should not turn off this option unless you know exactly what you are doing.
  • SC_GLOBAL_PYTHON_WRAPPER_NUMPY (default: ON) - Add Numpy support to the Python wrappers. If enabled, all SeisComP arrays provide a method numpy() which returns a Numpy array representation.
  • SC_ENABLE_CONTRIB (default: ON) - Enable inclusion of external contributions into the build. This includes all directories in src/extras.
  • SC_GLOBAL_GUI (default: ON) - Enable compilation of GUI components. This requires the Qt libraries to be installed. Either Qt4 or Qt5 is supported. The build prefers Qt5 if found and falls back to Qt4 if the Qt5 development libraries are not installed on the host system.
  • SC_GLOBAL_GUI_QT5 (default: ON) - If SC_GLOBAL_GUI is enabled, Qt5 support is enabled if this option is active; otherwise only Qt4 is supported.
  • SC_DOC_GENERATE (default: OFF) - Enable generation of documentation.
  • SC_DOC_GENERATE_HTML (default: ON) - Enable generation of HTML documentation.
  • SC_DOC_GENERATE_MAN (default: ON) - Enable generation of man pages.
  • SC_DOC_GENERATE_PDF (default: OFF) - Enable generation of PDF documentation.

Compilation

  1. Clone all required repositories (see above)
  2. Run make
  3. Configure the build
  4. Press 'c' repeatedly until 'g' appears
  5. Press 'g' to generate the Makefiles
  6. Enter the build directory and run make
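
Put together, a typical first build could look like the following shell session (a sketch assuming the repositories were cloned with the script above and that the default build directory is used):

cd <target-directory>   # the directory created by the checkout script
make                    # sets up the build directory and starts the cmake configuration
# in the cmake dialog: press 'c' until 'g' appears, then 'g' to generate the Makefiles
cd build                # the build directory created in the previous step
make -j"$(nproc)"       # compile using all available cores
# continue with 'make install' as described in the Installation section below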

Installation

  1. Enter the build directory and run make install to install SeisComP

Contributing improvements and bug fixes

Please consider contributing to the code.

main's People

Contributors

acarapetis, aemanov, andres-h, brtle, donavin97, fmassin, gempa-dirk, gempa-enrico, gempa-jabe, gempa-lukas, gempa-stephan, janisozaur, jollyfant, jordi-domingo, jsaul, kaestli, luca-s, marcopovitch, megies, ogalanis, pevans-gfz, quiffman, salichon


main's Issues

lib boost error during update-config

Sorry if 'main' is the incorrect area for this.

Near the end of the update-config output I'm getting this error, right after the warnings about plugins being loaded already:

python3.8: /usr/include/boost/smart_ptr/intrusive_ptr.hpp:199: T* boost::intrusive_ptr::operator->() const [with T = Seiscomp::DataModel::Config]: Assertion `px != 0' failed.

It looks like there is maybe a typo with ` being used instead of ', but I'm just guessing.

Ubuntu 20.04, libboost-dev 1.71.0.0ubuntu2, latest master branch as of this post.

[fdsnws] junk after XML data

When using Python 3, fdsnws output sometimes has junk after the XML data when non-ASCII characters appear in the XML. The reason seems to be that in Sink.write(), data is a unicode string, but size is given in raw bytes.

The following workaround seems to solve the problem:

# Note: py3bstr, py3ustr and writeTS are assumed to be the helper functions
# already available in the fdsnws module where this Sink is defined.
class Sink(seiscomp.io.ExportSink):
    def __init__(self, request):
        seiscomp.io.ExportSink.__init__(self)
        self.request = request
        self.written = 0

    def write(self, data, size):
        try:
            if self.request._disconnected:
                return -1
            # Convert to bytes first so that 'size' (a byte count) can be
            # applied correctly, then convert back for writeTS().
            tmp = py3bstr(data)
            data = py3ustr(tmp[:size])
            writeTS(self.request, data)
            self.written += size
            return size

        except Exception as e:
            seiscomp.logging.error(str(e))
            return -1  # report the failure to the caller instead of returning None

[scrttv] a couple of changes in the behaviour of the new version

Dear developers,

I have just noticed a couple of changes in scrttv that appeared in version 5.4.0, and I am not sure they were intended.

While using scrttv to visualize miniSEED data (scrttv data.mseed) I noticed the following changes:

1 - The "Restore default display" function doesn't restore the zoom to the default like it used to. If this is the intended behaviour, how can I restore the default zoom now?

2 - When I load picks from an XML file, NOT all of the picks are loaded (I see this in the scrttv loading dialog) and displayed (no picks appear in the waveforms). I do not know the reason, since nothing is printed in the logs. However, I can see that all the displayed picks are at the end of my miniSEED data: is it possible that there is an initial period for which picks are discarded?

thanks

Luca

FDSNWS-Dataselect stuck on multi-gap/overlap file

We have found this problem in our Seiscomp4 (4.5) installation that is used for FDSNWS Dataselect:
The fdsnws request is:

wget -O mseed "http://webservices.ingv.it/fdsnws/dataselect/1/query?network=MN&station=AQU&location=--&channel=BHZ&starttime=1995-01-02T00:00:00&endtime=1995-01-02T01:00:00"

During the analysis of the data file we spotted that it has a lot of gaps and overlaps (like some of our other files), shown in the following figure.

mseed-request-block-seiscomp

So we have discovered that whenever we request these particular files, our SeisComP4 fdsnws freezes. Moreover, we have noticed that one CPU goes to 100% while the other CPU remains idle (100% idle).

Our idea is that probably the fdsnws process freezes when reading these files.

Example file :
MN.AQU..BHZ.D.1995.zip

fdsnws & twisted threadpool error

Hi, this error is repeatedly showing up in our fdsnws.log (forgive the formatting)

2021/05/06 00:33:12 [error/log] 'RequestOptions' object has no attribute 'matchTimeSeries' [' File "/usr/lib/python3/dist-packages/twisted/python/threadpool.py", line 250, in inContext\n result = inContext.theWork()\n', ' File "/usr/lib/python3/dist-packages/twisted/python/threadpool.py", line 266, in <lambda>\n inContext.theWork = lambda: context.call(ctx, func, *args, **kw)\n', ' File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 122, in callWithContext\n return self.currentContext().callWithContext(ctx, func, *args, **kw)\n', ' File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 85, in callWithContext\n return func(*args,**kw)\n', ' File "/Rdata/seiscomp/lib/python/seiscomp/fdsnws/station.py", line 381, in _processRequestExp\n skipRestricted)\n', ' File "/Rdata/seiscomp/lib/python/seiscomp/fdsnws/station.py", line 706, in _processStation\n for stream in ro.streamIter(net, sta, loc, True, dac):\n', ' File "/Rdata/seiscomp/lib/python/seiscomp/fdsnws/station.py", line 228, in streamIter\n if dac is not None and ro.matchTimeSeries:\n']

fdsnws is otherwise up and running, seemingly normally. The only recent change is that the "availability" service was added recently; possibly related?

The python3-twisted version is 17.9.0-2ubuntu0.1 and it is the latest 4.5.0 built on Python 3.6.

Any help appreciated as usual

scamp: amplitude skipping should be avoided when Time Grammar is used

Dear developers,

I just wanted to report a possible issue in scamp. In particular, scamp skips amplitude re-computations when the same pick id is used by multiple origins. However, in SeisComP 6 the Time Grammar has been introduced, which makes the amplitude noise/signal windows dependent on the origin. That means the same pick id can produce multiple amplitude values, one for each origin the pick is referenced by.

This issue doesn't affect me since I am not using SeisComP v6 yet. I just wanted to make you aware of this (small?) issue.

Luca

scevent: split origin issue (EvSplitOrg action)

Dear developers,

I would like to report a strange behaviour I noticed on scevent as a result of a split origin.

We had an event (smi:ch.ethz.sed/sc3a/2020wtaffs) at 2020-11-17 23:11:28 in "Elm GL" that contained an origin (smi:ch.ethz.sed/sc3a/origin/NLL.20201117231135.77423.15151) that was actually a different event. So we split that origin to create an event on its own (smi:ch.ethz.sed/sc3a/2020wtafhp) at 2020-11-17 23:11:08 in "Bourg-Saint_pierre VS". That worked fine.

Then we manually reviewed origin smi:ch.ethz.sed/sc3a/origin/NLL.20201117231135.77423.15151 and created a manual origin smi:ch.ethz.sed/sc3a/origin/NLL.20201118113244.164437.127249. Again, that worked fine. You can see the results in these screenshots:

Screenshot from 2020-11-20 12-09-57

Screenshot from 2020-11-20 11-43-41

Screenshot from 2020-11-20 11-41-53

The strange behaviour started after that.

We have an automatic relocation module (scrtdd) which works similarly to screloc: it listens for origins and provides new, relocated ones.

The strange behaviour is that the automatic origins generated by scrtdd (smi:ch.ethz.sed/sc3a/Origin/RTDD.20201118111112.972178.48930, smi:ch.ethz.sed/sc3a/Origin/RTDD.20201118113406.830309.49067, smi:ch.ethz.sed/sc3a/Origin/RTDD.20201118113424.101546.49072) as a relocation result of the origins belonging to event smi:ch.ethz.sed/sc3a/2020wtafhp at 2020-11-17 23:11:08 were wrongly associated to the event smi:ch.ethz.sed/sc3a/2020wtaffs at 2020-11-17 23:11:28.

I looked at the logs and this is what I found out.

Here is the log of the event smi:ch.ethz.sed/sc3a/2020wtafhp creation as a result of the split of smi:ch.ethz.sed/sc3a/origin/NLL.20201117231135.77423.15151 (nothing suspicious here):

Screenshot from 2020-11-20 13-22-59

Now let's see the scevent log for the origin smi:ch.ethz.sed/sc3a/Origin/RTDD.20201118111112.972178.48930 that was wrongly associated to event smi:ch.ethz.sed/sc3a/2020wtaffs, although the triggering origin smi:ch.ethz.sed/sc3a/origin/NLL.20201117231135.77423.15151 was already part of event smi:ch.ethz.sed/sc3a/2020wtafhp.

Screenshot from 2020-11-20 14-33-54

We can see the problem in the log: the matching picks seem to be wrong. For some reason scevent believes that there are more matching picks in event smi:ch.ethz.sed/sc3a/2020wtaffs (4/10) than in smi:ch.ethz.sed/sc3a/2020wtafhp (0/10 !?!?). It seems there is some sort of stale cached information here.

The same issue happened to origins smi:ch.ethz.sed/sc3a/Origin/RTDD.20201118113406.830309.49067 and smi:ch.ethz.sed/sc3a/Origin/RTDD.20201118113424.101546.49072 that were associated to the wrong event smi:ch.ethz.sed/sc3a/2020wtaffs, although the triggering origin smi:ch.ethz.sed/sc3a/origin/NLL.20201118113244.164437.127249 was already part of event smi:ch.ethz.sed/sc3a/2020wtafhp.

Screenshot from 2020-11-20 11-39-37

Screenshot from 2020-11-20 11-37-47

Thank you for your help

Luca

fdsnxml2inv: inconsistent timespan behavior when aggregating locations

This report is derived from an observation we made while resolving a webservices problem.

Our station inventory is managed as native FDSN StationXML and converted into SC Inventory XML as part of the configuration process. In one specific case the coordinates for different epochs differ, while the location code remains unchanged. We understand that, while FDSN does not define rules on the exact usage of the location code, the SC Inventory data model requires a location to have the same coordinates. This situation is usually handled by fdsnxml2inv by issuing a warning and using only one set of coordinates for all comprised epochs. In our case this information is not used, so this modification of the metadata seems acceptable.

Processing /tmp/20231124_161019-build_seiscomp3_db_from_stationxml/stationxml/IV_.stationxml
 - parsing StationXML
 - converting into SeisComP-XML
W  IV.CPOZ location '' starting 2011-07-19 10:55:00 has conflicting coordinates: using the last read
   lat 40.8211 != 40.8211
   lon 14.1191 != 14.1187
   elevation 52 != 2

However, the problem we encountered is that the timespans in this situation are then handled in an incoherent way. The station correctly reports an open timespan. On the other hand, the location reports an endtime corresponding to the first epoch, thus excluding the later two epochs. The two epochs are still retained in the resulting Inventory XML. Below is the output from scinv ls:

station CPOZ   Darsena Pozzuoli - Stazione Osservatorio Vesuviano
      epoch 2011-07-19 10:55:00
      location __
        epoch 2011-07-19 10:55:00 - 2022-03-31 10:00:00
        channel HHE
          epoch 2011-07-19 10:55:00 - 2022-03-31 10:00:00
        channel HHE
          epoch 2022-04-11 11:00:00 - 2023-03-01 00:18:00
        channel HHE
          epoch 2023-03-01 00:18:00
        channel HHN
          epoch 2011-07-19 10:55:00 - 2022-03-31 10:00:00
        channel HHN
          epoch 2022-04-11 11:00:00 - 2023-03-01 00:18:00
        channel HHN
          epoch 2023-03-01 00:18:00
        channel HHZ
          epoch 2011-07-19 10:55:00 - 2022-03-31 10:00:00
        channel HHZ
          epoch 2022-04-11 11:00:00 - 2023-03-01 00:18:00
        channel HHZ
          epoch 2023-03-01 00:18:00
        channel HNE
          epoch 2023-03-01 00:18:00
        channel HNN
          epoch 2023-03-01 00:18:00
        channel HNZ
          epoch 2023-03-01 00:18:00

This behaviour seems unexpected and may be considered a bug. We understand that the SC Inventory cannot represent the original StationXML information unchanged. However, if the change of representation is acceptable, the location timespan should be set accordingly.

I attach the problematic files.

CPOZ_failure.tgz

fdsnws bug: event id is not reported anymore

Dear developers,

I've noticed that in recent versions of SeisComP a bug was introduced in fdsnws and the event id is not reported anymore. That applies to both the text and XML formats. The last version that I know was working fine is 4.4.0; I then tested version 4.7.0 and saw this bug. I didn't have the time to check which commit caused the bug, though.

thanks
Luca

[scdb] cannot read XML notifiers in seiscomp 5 (it works in seiscomp 6)

I created an XML notifier file with scdispatch:

scdispatch -i event.xml -O merge --create-notifier > notifier.xml

When I try to apply the changes with scdb, it fails to read the notifier.xml file:

$scdb -i notifier.xml
Parsing file 'notifier.xml'...
Error: no valid object found in file 'notifier.xml'

The issue occurs on SeisComP 5. If I run scdb with SeisComP 6 (same input file), it works.

Do you already know what the source of the issue could be? Was there a commit fixing this particular error? I would like to backport the fix.

[scdbstrip] linting went wrong

I have just noticed the results of linting scdbstrip and something went wrong. That doesn't break the code and it doesn't bother me. However, since the purpose of the change was "linting", it would be better to check why the tool made such modifications and to avoid that in the future.

Inconsistent use of \ at the end of a line (some lines have it, but others don't):

Examples (but there are many more):
[image]

[image]

Spurious trailing comma ",":

Examples (but there are many more):
[image]

fdsnws auth Python3 -> KeyError: "b'z7vvsZcSkQkRE0No2O3CEe6K'"

We are having a problem with authenticated fdsnws downloads. We run SeisComP 4.5.0 from the master branch and Python 3.8 on Ubuntu 18.04.5 LTS.

sysop@eida:~$ seiscomp-python 
Python 3.8.0 (default, Feb 25 2021, 22:10:10) 
[GCC 8.4.0] on linux

We got the following errors:

HTTPAuthSessionWrapper.getChildWithDefault encountered unexpected error
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/twisted/web/_auth/wrapper.py",
line 162, in _login
    d = self._portal.login(credentials, None, IResource)
  File "/usr/lib/python3/dist-packages/twisted/cred/portal.py", line
119, in login
    return maybeDeferred(self.checkers[i].requestAvatarId, credentials
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line
320, in addCallback
    return self.addCallbacks(callback, callbackArgs=args,
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line
310, in addCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line
653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python/seiscomp/fdsnws/dataselect.py", line 373,
in requestAvatar
    self.__userdb.getAttributes(avatarId)),
  File "/usr/local/bin/fdsnwsauth", line 164, in getAttributes
    return self.__users[name][1]
builtins.KeyError: "b'ttz_Ye1Jyw-aJ-JxmXG2tn04'"

HTTPAuthSessionWrapper.getChildWithDefault encountered unexpected error
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/twisted/web/_auth/wrapper.py",
line 162, in _login
    d = self._portal.login(credentials, None, IResource)
  File "/usr/lib/python3/dist-packages/twisted/cred/portal.py", line
119, in login
    return maybeDeferred(self.checkers[i].requestAvatarId, credentials
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line
320, in addCallback
    return self.addCallbacks(callback, callbackArgs=args,
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line
310, in addCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line
653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python/seiscomp/fdsnws/dataselect.py", line 373,
in requestAvatar
    self.__userdb.getAttributes(avatarId)),
  File "/usr/local/bin/fdsnwsauth", line 164, in getAttributes
    return self.__users[name][1]
builtins.KeyError: "b'z7vvsZcSkQkRE0No2O3CEe6K'"

I figured out the problem is the character encoding of "name" passed to getAttributes() in fdsnws.py on line 165.

The name is expected to be a byte string containing the user name. Instead, it is a str that contains the b at the beginning and the two single quotes.

We have a workaround that removes the b and the two single quotes and makes it work as expected.

    def getAttributes(self, name):
        name = name[2:len(name)-1].encode('ascii') # ugly workaround to remove b'...' around name and convert to a correct ascii byte array
        return self.__users[name][1]

But I am sure that is the wrong place for the fix. I can't figure out where the name gets messed up.

I have also tried to modify dataselect.py line 373 to use utils.py3ustr(avatarId) instead of only avatarId, but that didn't work either, giving the error "'str' object has no attribute 'decode'". I then gave up because I couldn't figure out who calls requestAvatar().

I am not sure if the error is in SeisComP or in one of the Python libraries used. I tried Twisted both from the Ubuntu packages and from pip3 install, with the same issue.

scinv "_networkTypeFirewall" build error

Trying to install master. Probably just bad timing on my part and this is already known but FWIW

Scanning dependencies of target scinv
[ 90%] Building CXX object src/base/main/apps/tools/inventory/scinv/CMakeFiles/scinv.dir/main.cpp.o
/Rdata/seiscomp/SRC/src/base/main/apps/tools/inventory/scinv/main.cpp: In member function 'virtual bool InventoryManager::validateParameters()':
/Rdata/seiscomp/SRC/src/base/main/apps/tools/inventory/scinv/main.cpp:499:4: error: '_networkTypeFirewall' was not declared in this scope
  499 |    _networkTypeFirewall = Util::StringFirewall();
      |    ^~~~~~~~~~~~~~~~~~~~
/Rdata/seiscomp/SRC/src/base/main/apps/tools/inventory/scinv/main.cpp:500:4: error: '_stationTypeFirewall' was not declared in this scope
  500 |    _stationTypeFirewall = Util::StringFirewall();
      |    ^~~~~~~~~~~~~~~~~~~~
make[2]: *** [src/base/main/apps/tools/inventory/scinv/CMakeFiles/scinv.dir/build.make:63: src/base/main/apps/tools/inventory/scinv/CMakeFiles/scinv.dir/main.cpp.o] Error 1

fdsnws crash when availability enabled

Hi, I'm noticing now in 6.0.3 that when the availability service is enabled, fdsnws crashes and won't start.

Here is the fdsnws.log message:

2023/12/15 10:22:37 [error/Core] Time::fromString: buffer size exceeded: 171 > 63
2023/12/15 10:22:37 [error/Core] Time::fromString: buffer size exceeded: 171 > 63
2023/12/15 10:52:54 [error/Core] Time::fromString: buffer size exceeded: 134 > 63
2023/12/15 10:52:54 [error/Core] Time::fromString: buffer size exceeded: 134 > 63

It's just those four messages. This follows a fresh run of scardac... I suspect a datetime string got garbled somewhere in the (MySQL) database, but I am not fluent enough in MySQL to know how to check. I never had this issue previously.

Adding a merge option to scdb

Dear developers,

I would like to add an option to scdb to merge objects, similarly to what scdispatch does in "update" mode. That is, scdb should compare the "objects to be written" to the content of the database and merge the differences.

Ideally I would copy the logic (visitor code) from scdispatch, but I have a doubt regarding the update of objects. Currently scdb saves new objects to the database but fails when the same object id is already found in the db. So I am not sure I would be able to update an existing object.

Do you know if what I am planning to do is feasible?

Thanks!

`scamp` offline mode doesn't provide origins to amplitude processors

Prior to 5.5.0, scamp --ep ... would provide hypocenters to AmplitudeProcessor::setEnvironment(). It seems that when the --picks option was introduced, the behaviour changed so that setEnvironment is now never passed a hypocenter, even when --picks is disabled.

I think the culprit is setting pick here: https://github.com/SeisComP/main/blob/master/apps/processing/scamp/amptool.cpp#L741
This means the if (pick) here will always be true: https://github.com/SeisComP/main/blob/master/apps/processing/scamp/amptool.cpp#L849

ew2sc vs. ew2sc3

Sorry if this isn't the right sc git branch to report this small version 4 issue.

It is "ew2sc" in scconfig's "System" tab; however, in the Modules tab it is "ew2sc3".

One can work around this with

seiscomp alias create ew2sc3 ew2sc

and then start "ew2sc3" in the System tab, but clearly this just needs a simple fix in the right place in the code to make the Modules tab look for ew2sc.

scautopick stops working after ~3 hours

Dear developers,

this is an issue that I am going to debug myself, but I would like to double-check with you first in case you already have an idea of what the problem could be.

The bug happens all the time: I start scautopick and after ~3 hours it stops producing picks. The process is still alive, though. The interesting thing is that I am using data at 200.000 Hz. Do you know if there are internal counters that might reach their limits too soon with this high-frequency data?

thanks
Luca

mysql: ResponsePAZ normalizationFactor should allow negative numbers

Hi, currently I see that the normalizationFactor field in ResponsePAZ is type "double unsigned" BUT for some (older?) Guralp sensors the normalization factor (e.g. A0) is actually negative, and thus cannot be inserted into our database.

(IMO it is EXTREMELY unwise of Guralp to have allowed negative A0 factors in the first place, but that's neither here nor there.)

Can anyone recommend a method to quickly alter this column to plain double? Also, are unsigned float/double etc. column types deprecated in MySQL now anyway?

edit: ok I figured it out,

  1. log into mysql e.g. mysql -u root -p

use your_sc_db_name; 
ALTER TABLE ResponsePAZ MODIFY COLUMN normalizationFactor DOUBLE;
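
To verify the column type before and after the change, something like this could be used (a sketch; substitute your database name):

mysql -u root -p your_sc_db_name -e "SHOW COLUMNS FROM ResponsePAZ LIKE 'normalizationFactor';"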

Handling of empty location codes on the database layer (postgresql): NULL vs. [empty string]

I am not actually sure where to place this report, as I have no clue whatsoever where this issue is coming from.

We observe that, in the case of "empty" location codes, their representation in a postgres database, namely in the data fields stationmagnitude.m_waveformid_locationcode, amplitude.m_waveformid_locationcode, and pick.m_waveformid_locationcode, is sometimes NULL and sometimes '' [empty string]:

  • Both versions are present in data created between 2014 and 2022, over many versions of seiscomp(3/4).
  • Some of our seiscomp installations (in any area) produce only one variant, some just the other, some both.
  • For amplitudes and station magnitudes, just the ML family (Ml, Mlv, Mlh, MLhc) and MVS are affected; mb is always [empty string].
  • For amplitudes and station magnitudes, both representations are found with data created by human authors (interactive analysis) as well as by automatic processing, sometimes on the same system (however, for some "authors" of a system we may find both versions, for other authors just one).
  • For picks, NULL m_waveformid_locationcode values are found only in (a part of the) manual picks, never in automatic picks.
  • The issue is not related to the inventory referred to; the m_code attribute of the sensorlocation entity is always [empty string] (if not a non-empty string); it is never NULL (all systems, all seiscomp versions).
  • In relatively rare cases, the representation of the amplitude's m_waveformid_sensorlocationcode even differs between a stationmagnitude.m_waveformid_sensorlocationcode and the m_waveformid_sensorlocationcode of the amplitude it refers to. These cases exist for both automatically and manually processed station magnitudes. For the automatically processed ones we can, afaiu, completely exclude system or metadata modifications between the creation of the amplitude and the creation of the corresponding station magnitude (proof: http://mercalli.ethz.ch/~kaestli/sensorlocation_null_vs_emptystring.png).

We are actually not sure whether this has any impact on the seiscomp stack - the difference can be handled in the ORM. However, there is an impact if the m_waveformid_sensorlocation field is ever used in join statements (postgres evaluates (NULL = '') to NULL, i.e. not true), so e.g. metadata for seismic data would not be found any more. The impact on external applications varies over different versions of postgres, as e.g. the text concatenation NULL || 'MYTEXT' in postgres 9 would implicitly typecast and evaluate to 'MYTEXT', while in postgres 12 it would be an implicit type mismatch and evaluate to NULL.

BUG: scdbstrip doesn't consider --days option

Dear developers,

a bug has recently been introduced in scdbstrip: the --days option is not considered if it is the only option provided, and the code defaults to the 30-day period instead.

The bug was introduced with the inclusion of the new --datetime option, and the line responsible for the bug is this, which makes the period 30 days regardless of the value of self._daysToKeep.

Luca

inv2dlsv crash

When I run inv2dlsv, it crashes with the following traceback:

Traceback (most recent call last):
  File "/Users/seiscomp/seiscomp/bin/inv2dlsv", line 21, in <module>
    from seiscomp.legacy.db.seiscomp import sc3wrap
ModuleNotFoundError: No module named 'seiscomp.legacy.db.seiscomp'

I think there is just a small typo (two seiscomp3 -> seiscomp renames too many, if I am correct) in lines 21 & 22 of inv2dlsv, which should read:

from seiscomp.legacy.db.seiscomp3 import sc3wrap
from seiscomp.legacy.db.seiscomp3.inventory import Inventory

feature request: scxmldump should read a list of event/origin ids from a file

When trying to export a large number of events I often run into the eventID string being too long for my system kernel (command-line length limit). Would it be possible to introduce an option to read from a file containing a list of IDs?

something like scxmldump -d $conn -fPAMF -EF eventids.txt -o out.scxml

I would like to help, but I don't know how reading a file works in C at all.
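
A possible interim workaround is to dump one event at a time and merge the results, for example (a sketch; it assumes one event ID per line in eventids.txt and that scxmlmerge, which writes the merged document to stdout, is available):

# Dump each event separately, then merge the per-event files into one document.
while read -r evid; do
    scxmldump -d "$conn" -fPAMF -E "$evid" -o "dump_${evid//[^A-Za-z0-9._-]/_}.scxml"
done < eventids.txt
scxmlmerge dump_*.scxml > out.scxml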

Thanks in advance,

updating scdbstrip and scdispatch on seiscomp v5?

Dear developers,

I don't know if you plan to make an additional v5.x.y release eventually, but in case you do, and if it doesn't create any issues on your side, would you please consider back-porting the latest changes of scdbstrip and scdispatch from master (or v6) to the v5 branch?

thanks
Luca

[scart] Exception when using `resample|dec` record streams

When running scart with either a resample:// or a dec:// record stream configured, the program prints an exception, i.e.

Exception: <built-in function Array_bytes> returned a result with an error set

and no data is written. Besides testing with a file:// proxy stream, I also tested with an fdsnws:// proxy stream, leading to the same issue. The data itself shouldn't be the problem, since I tested with different input data (from different data centers), too.

Reproducer:

  • Python 3.8.10
  • SeisComP upstream from master
$ cat scart.list
2020-10-25T19:30:00;2020-10-25T19:35:00;GR.BFO..HHZ
$ scart -v -I "resample://file/$(realpath GR.BFO..HHZ.mseed)" --list scart.list --stdout 
Archive: /home/damb/seiscomp/var/lib/archive/
Mode: IMPORT
10:03:55 [debug/Environment] Setting installdir from $SEISCOMP_ROOT: /home/damb/seiscomp
10:03:55 [info/Environment] using local config dir: /home/damb/.seiscomp
10:03:55 [debug/RecordStream] trying to open stream resample://file//home/damb/tmp/GR.BFO..HHZ.mseed
adding stream: 2020-10-25 19:30:00.000 2020-10-25 19:35:00.000 GR.BFO..HHZ
10:03:55 [debug/Resample] [dec] caching 41 coefficents for N=2
10:03:56 [debug/Resample] [dec] caching 1001 coefficents for N=50
Exception: <built-in function Array_bytes> returned a result with an error set
10:03:56 [debug/Resample] Closing proxy source

Resources:

  • GR.BFO..HHZ.txt (the file contains miniseed records covering the time window specified in scart.list; renamed due to GitHub upload policies)

Flipped vertical component

Normally for a vertical-component stream the dip is -90 degrees. If a vertical component is flipped, this is indicated in the inventory by a dip value of +90 degrees. An example is IU.TIXI.00.BHZ.

When plotting the traces, e.g. in the scolv picker window, flipped traces need to be inverted. This does not happen.

As a consequence, e.g. when measuring onset polarities, the wrong polarity is obtained.
