
unidata / ldm


The Unidata Local Data Manager (LDM) system includes network client and server programs designed for event-driven data distribution, and is the fundamental component of the Unidata Internet Data Distribution (IDD) system.

Home Page: http://www.unidata.ucar.edu/software/ldm

License: Other

Shell 0.68% Puppet 0.02% C 74.02% HTML 5.94% PHP 0.01% CSS 0.10% Tcl 0.98% C++ 11.53% Perl 0.02% Yacc 0.22% Lex 0.07% Makefile 1.70% M4 0.65% Roff 2.00% RPC 0.33% Python 1.05% R 0.53% CMake 0.14% Vim Script 0.03%

ldm's Introduction

                          LDM README FILE

INTRODUCTION:

    This package contains the source-code for the Unidata Program Center's Local
    Data Manager (LDM).

RELEASE IDENTIFICATION:

    The release identifier for this package is in the file VERSION and has the
    following form:

        <major>.<minor>.<rev>

    where:

        <major>     Is the major release-number (e.g., 6).  Changes to this
                    component indicate a major departure from previous releases
                    (such as a change to the LDM protocol).  Such changes are
                    often not compatible with previous releases.

        <minor>     Is the minor release-number (e.g., 1).  Changes to this
                    component indicate the addition of new features.  The
                    package remains compatible with previous releases having the
                    same major release-number.

        <rev>       Is the revision-level (e.g., 0).  Changes to this component
                    indicate bug-fixes and/or performance improvements that are
                    functionally compatible with previous releases having the
                    same major and minor release-numbers.

LEGAL:

    Licensing and copyright information are contained in the file COPYRIGHT,
    which is in the top-level source directory.

INFORMATION:

    HOMEPAGE -- INCLUDING INSTALLATION & CONFIGURATION INSTRUCTIONS:

        The homepage of the LDM package is

                http://www.unidata.ucar.edu/software/ldm/

        Click on the appropriate release identifier to go to the release-specific
        homepage, where you will find detailed LDM installation and
        configuration instructions as well as other useful information.

    CHANGE LOG:

        Changes to the package are documented in the file CHANGE_LOG, which is
        in the top-level source-directory.

LDM DECODERS:

    LDM-compatible decoders for local processing of received data products are
    available for the McIDAS package. See

            http://www.unidata.ucar.edu/software/mcidas/mcidd/

SUPPORT:

    You may request support by sending an email inquiry to
    
        [email protected]
        
    Please include a description of the type of platform (hardware and operating
    system) and any relevant information that can help us answer your question
    (error messages, symptoms, etc.).

RULES FOR CONTRIBUTING:

    See the file CONTRIBUTING.md in the top-level source-directory.

ldm's People

Contributors

akrherz, bradh, brian-m-rapp, cfstras, mustbei, naderchaser, oxelson, semmerson, stonecooper, yt4xb


ldm's Issues

ldmget(1) utility

Like notifyme(1), but it would actually retrieve the data-product(s) that matched either the MD5 checksum or the conjunction of the feedtype, pattern, and time-offset, and put them in a product-queue (the default queue unless otherwise specified).

LDM losing products in queue on start

As noticed from our AWS NEXRAD level 2 machine, when the LDM starts, it compares the wall clock against the product creation time, rather than the product insertion time. This causes us to lose a few NEXRAD chunks when restarting the LDM.

Support package management

yum(1), apt-get(1), and whatever BSD and Solaris use.

Can have the main LDM package depend on an LDM configuration package without a version in order to obtain default files in ~ldm/etc that won't be overridden.

This might be problematic for the GRIB tables. They would need to be versioned or made part of the main LDM package.
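In RPM terms, the dependency described above might look like the following spec-file fragment (the package name "ldm-config" is hypothetical):

```
# Main LDM package depends on an unversioned configuration package so
# that the default files in ~ldm/etc are not overridden on upgrade.
Requires: ldm-config
```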

Modify rsyslogd(8) configuration-file for UTC timestamps

Beginning with version 5 of rsyslogd(8), timestamps of log messages are rewritten in local time by default. The LDM installation procedure should ensure that the rsyslogd(8) configuration-file has been modified to use UTC timestamps for LDM logging.
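One possible change, sketched below, uses an rsyslog template whose "date-utc" property option renders timestamps in UTC. The template name, the drop-in file location, and the use of the local0 facility are assumptions for illustration, not taken from the LDM documentation, and the template() syntax requires a newer rsyslog release:

```
# Hypothetical /etc/rsyslog.d/ldm.conf fragment
template(name="UTCTimestamp" type="string"
         string="%TIMESTAMP:::date-utc% %HOSTNAME% %syslogtag%%msg%\n")
local0.*    /var/log/ldmd.log;UTCTimestamp
```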

Figure-out PATH problem

Something about $LDMHOME/bin being necessary but not wanting to add it to PATH. See ESupport ticket MXK-668269.

Waiting for the LDM server to terminate...

This has been a long-standing problem with the LDM in the AWIPS project: attempting to stop the LDM results in this message being posted repeatedly, sometimes for 10 or more minutes:

[root@edextest ~]# ldmadmin stop
Stopping the LDM server...
Waiting for the LDM server to terminate...
Waiting for the LDM server to terminate...
Waiting for the LDM server to terminate...
Waiting for the LDM server to terminate...
Waiting for the LDM server to terminate...
[previous message repeated many more times]

primary/secondary

It would be nice in LDM to have the ability to select which upstream site you prefer to connect to, and to have an alternate upstream which could be connected to if the preferred connection is unavailable. This would need a configurable time-out.

It would also be nice to be able to force the switch "on-the-fly" if you knew your preferred upstream had a problem.

This would be instead of primary/secondary, in which LDM decides based on whichever upstream is quicker (and can lead to "flapping" if they are very close).
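For reference, the current primary/alternate behavior arises from identical REQUEST entries in ldmd.conf (hostnames below are hypothetical); the LDM pairs them based on whichever upstream delivers first, which is the source of the "flapping" described:

```
# ldmd.conf: redundant requests for the same feed from two upstreams
REQUEST NEXRAD2 ".*" preferred.example.edu
REQUEST NEXRAD2 ".*" alternate.example.edu
```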

Eliminate the "libxml2" subpackage

Because Linux and Solaris come with a "libxml2" library, this subpackage is no longer necessary. Eliminating it would greatly reduce the size of the LDM package, the time it takes to build, and the engineering overhead of maintaining a 3rd-party package (security, updates, etc.).

Eliminate use of exitIfDone()

The use of exitIfDone() should be eliminated because 1) it complicates thinking about the flow-of-control because it's, basically, a "goto"; and 2) it prevents resources (particularly malloc()ed ones) from being reclaimed -- thus, complicating the use of valgrind(1).

Functions that use exitIfDone() will have to be modified to return an appropriate status to their calling function. All functions in the calling-sequence will have to be modified to unwind the sequence -- freeing resources as it's unwound.
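A minimal sketch of the status-returning style, with hypothetical function names (not the LDM's actual call-sequence):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Each function reports failure to its caller instead of calling
 * exitIfDone(), and resources are freed as the calling sequence
 * unwinds -- so valgrind(1) sees no leaks.
 */
static int
read_product(char** buf)
{
    *buf = malloc(64);
    if (*buf == NULL)
        return -1;               /* was: exitIfDone(1) */
    strcpy(*buf, "product");
    return 0;
}

int
process_product(void)
{
    char* buf;
    int   status = read_product(&buf);

    if (status == 0) {
        /* ... act on buf ... */
        free(buf);               /* reclaimed on the way out */
    }
    return status;               /* caller continues the unwinding */
}
```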

Have notifyme(1) exit on SIGHUP

Apparently, it doesn't, which causes notifyme(1) processes to hang around after the user has logged off even if output is to the controlling terminal.

This conflicts with issue #40

Create "upgrade" web-page

For consistency with other package web-pages, the LDM needs an "upgrade" web-page to which the "downloads" web-page can link. Currently, that information is divided between the "installation" web-page and the "starting" web-page. The "upgrade" web-page should reference those two pages with some narrative glue.

Enhance pqing(1) to ingest GRIB-2 products

From ESupport ticket ZNI-140417:

Within the pqing source directory, there is a file, wmo_message.c. Within that file, there is a function, ids_len, which is called to calculate the length of the GRIB bulletin. The original function assumed that the data is in GRIB 1 format. I made the following changes to the function (accounting for the GRIB version) to calculate the message length, recompiled pqing, and the changes corrected the problem.

static int
ids_len(const char *cp)
{
    const unsigned char *up;
    const int            vers = ((const unsigned char *)cp)[7];
    unsigned long long   len = 0;

    if (vers == 1) {
        /* GRIB-1: 3-byte total length */
        up = (const unsigned char *)cp + IDS_LEN - 1;
        len = ((unsigned long long)up[0] << 16) + (up[1] << 8) + up[2];
    }
    else {
        /* GRIB-2: 8-byte total length in octets 9-16; accumulate in a
         * 64-bit type to avoid the integer overflow of repeated
         * multiplication by 256 */
        up = (const unsigned char *)cp + 8;
        for (int i = 0; i < 8; i++)
            len = (len << 8) + up[i];
    }
    return (int)len;    /* return type kept to match existing callers */
}

Adapt to use the "Log4c" logging package

This would obviate the need to support syslog, rsyslog, and syslog-ng. The "ulog.h" and "log.c" API-s could still be used, but a new module would need to be written to replace "ulog.c" ("ulog2log4c.c"?).

This is a better solution than issue #40 but will require more effort.

Clean-up "pqact/filel.c"

In ticket GNY-856216, Gilbert reported that pqact(1) received a SIGSEGV -- apparently due to a decoder not existing. This might be due to pqact/filel.c accessing an entry after it's been freed by delete_entry(). The following might help:

  • *_open() functions should modify entry atomically.
  • free_fl_entry() shouldn't call entry->ops->close() and delete_entry() should.
  • The fl_ops.close() function should remove the entry and free it.
  • Upon failure, the fl_ops.sync() and fl_ops.put() functions should remove the entry and free it.

Create binary repositories

Create binary yum(1) and apt-get(1) repositories for the package and incorporate them into the continuous-delivery pipeline.

RFE: Add -wait option to pqact PIPE actions

Hi Unidata/Steve,

I would really like to see a "-wait" or equivalent option added to PIPE/EXEC actions to effectively limit the number of processes one pqact could have active at one time. This flag would cause pqact to not recycle the slot until that process has exited. I think I discussed this with you many years ago, and you were not enthusiastic about it, since misbehaving processes could wedge and effectively jam up pqact while it waits for them to exit...

The issue is that any process following the one-product-per-invocation execution model could effectively DOS a system, as pqact exec's off one process per product received. Starting up the LDM after considerable downtime is one example. Another is products that arrive in rapid succession...

Currently, users have two options:

  1. Allow their process to handle more than one product on stdin, effectively making it long running
  2. Add some locking mechanism that checks to see if others like it are currently running and then sleeps for a bit waiting for those to exit.

I personally loathe option 2 as having potentially hundreds of scripts writing lock files and sleeping is a race condition waiting to happen. I have written lots of processes that do option 1, but not all are well suited for it. For example, satellite data processors.

A nice aspect of this is that pqact could then log non-zero exit statuses from these '-wait' processes, which would help users debug this. Perhaps some other logging would already kick in if pqact had no available slots over some given amount of time; I am unsure of that one.

I think a reasonable exception is for '-wait' to imply a '-close' as well. I'd be happy to provide feedback if there are other edge cases you anticipate. Thanks for your consideration :)
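For illustration, the proposed flag might appear in a pqact.conf entry like this. The "-wait" flag is hypothetical, and the pattern and decoder path are made up:

```
# PIPE action that would not recycle its slot until the decoder exits;
# per the suggestion above, "-wait" would imply "-close".
NEXRAD2 ^L2-BZIP2/(....)/
        PIPE    -wait
        util/process_level2 \1
```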

Add deletion of product-queue by systemd(8) when LDM user logs out to "LDM user" documentation

On systems using systemd(8), various inter-process communication objects of the LDM system -- including the memory-mapped LDM product-queue -- will be deleted when the LDM user logs out if

  • The value of the parameter RemoveIPC is yes; and
  • The LDM user is not a "system user".

A "system user" is one whose UID lies in the range delimited by the parameters SYS_UID_MIN and SYS_UID_MAX in the file /etc/login.defs when the systemd(8) utility was compiled.

On such systems, either the value of the RemoveIPC parameter must be no or the LDM user must be a "system user" in order for the LDM to work correctly.
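The first remedy is a one-line change to systemd-logind's configuration (followed by restarting systemd-logind):

```
# /etc/systemd/logind.conf: keep the LDM's memory-mapped product-queue
# and other IPC objects when the LDM user logs out
[Login]
RemoveIPC=no
```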

pqact state files should be accurate to the process and not all processes

With the current release, all pqact processes appear to write the same .state file rather than a .state file that is accurate for that pqact process. For example, on a machine that has 9 pqact processes, the .state file written out for each at shutdown time has the same timestamp!

Here's the problem. Yesterday, my system fed NEXRAD2 from idd.unidata.ucar.edu and other feeds from idd.ssec.wisc.edu. The NEXRAD2 feed became very latent, with a feed latency at shutdown time of ~30 minutes. I have 7 pqact processes that only look at that feedtype, so their queue time should have been 30 minutes in the past. The IDS feedtype was current, so this resulted in .state files with the current timestamp. So when I started the LDM back up, the 7 pqact processes ignored the incoming new NEXRAD2 data because the pqinsert times were within that recent 30-minute window. After some cussing, I realized what had happened, stopped the LDM, removed the state files, and started the LDM again. Of course, data was missed during this period :(

Please modify pqact to write state files that represent exactly where that pqact process is within the LDM queue so that I don't lose data during restarts. Even in the normal case of a machine with multiple pqact processes, the current behavior can result in products being missed.

Add thread-ID to log messages

With the advent of the multicast feature, multiple threads are logging, and it would be beneficial to be able to grep the messages for those from a particular thread, as well as to differentiate messages from different threads.

This can be done by invoking pthread_once() in ulog.c to create a THREAD_ID key, setting the value of the key to a mutex-protected, automatically-incremented integer, and then including the integral value in a message prefix (e.g., (file,line,thread-ID)).

Unfortunately, creating an elegant message prefix would seem to require modifications to both ulog.c and log.c: either ulog.c would have to know about incoming messages from log.c or the messaging functions of ulog.c would have to become macros.

Could the PID be added to the tuple? E.g., (PID,TID,file,line). This would break backward compatibility in terms of message format.
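The key-based scheme described above can be sketched as follows; the function and variable names are hypothetical, not the LDM's:

```c
#include <pthread.h>
#include <stdint.h>

/*
 * A pthread_once()-created key holds a per-thread integer drawn from a
 * mutex-protected counter; log-writing code would include the value in
 * the message prefix.
 */
static pthread_key_t  threadIdKey;
static pthread_once_t threadIdOnce = PTHREAD_ONCE_INIT;

static void
createThreadIdKey(void)
{
    (void)pthread_key_create(&threadIdKey, NULL);
}

unsigned
getThreadId(void)
{
    static unsigned        nextId = 1;
    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    void*                  value;

    (void)pthread_once(&threadIdOnce, createThreadIdKey);
    value = pthread_getspecific(threadIdKey);
    if (value == NULL) {
        (void)pthread_mutex_lock(&mutex);
        value = (void*)(uintptr_t)nextId++;   /* first call in this thread */
        (void)pthread_mutex_unlock(&mutex);
        (void)pthread_setspecific(threadIdKey, value);
    }
    return (unsigned)(uintptr_t)value;
}
```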

Move ~ldm/etc to /etc/opt/ldm and ~ldm/var to /var/opt/ldm

In order to conform to the Filesystem Hierarchy Standard.

Issues:

  • Creation of /etc/opt/ldm and /var/opt/ldm with the proper permissions when necessary
  • Honoring or moving ~ldm/etc and ~ldm/var from a previous installation
  • Creating symbolic links when necessary
  • Doing all this in both source and RPM installs
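The first three items above can be sketched as a small shell helper. This is a sketch only; a real install would also set ldm:ldm ownership and proper permissions, and the invocations at the bottom require appropriate privileges:

```shell
#!/bin/sh
# migrate() copies an old directory's contents to the new FHS location
# and leaves a compatibility symlink behind.
migrate() {
    old=$1; new=$2
    mkdir -p "$new"
    if [ -d "$old" ] && [ ! -L "$old" ]; then
        cp -R "$old/." "$new/"            # honor a previous installation's files
        rm -rf "$old"
    fi
    [ -e "$old" ] || ln -s "$new" "$old"  # keep ~ldm/etc and ~ldm/var working
}
# Intended use:
#   migrate /home/ldm/etc /etc/opt/ldm
#   migrate /home/ldm/var /var/opt/ldm
```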

Preventing the LDM from "daemonizing"

The '-l' ("minus el") option on ldmd allows one to choose where the LDM logs go: stderr, a log file, or the system log (if the option is omitted).

However, this option also has a second function: If it's set to stderr, the LDM chooses not to daemonize itself. If either of the other two settings are chosen, then it will daemonize itself.

It would be nice if these two settings could be separated into two options, one to control log location and the other to control daemonizing.

Replace MAX_CIRCBUFSIZE in pqing(1) with command-line argument

The C macro MAX_CIRCBUFSIZE in file "src/pqing/fxbuf.c" hard-codes the maximum size of a WMO message (1048576 bytes). Replace that parameter with a command-line argument. Also, vet the current code that allows the buffer to grow beyond that limit.

Remove precision specifications from rtstats(1)'s sprintf(3) call that creates product-identifier

Line 147 in rtstats/binstats.c is

        sprintf(stats_data, "%14.14s %14.14s %32.32s %7.10s %32.32s %12.0lf %12.0lf %.8g %10.2f %4.0f@%4.4s %20.20s\n",

The "%32.32s" is truncating the "origin_v_upstream" field. It should be 64 characters. This might cause problems for subsequent processing.

LDM versions 6.10.1 and earlier don't have the precision specifications.

LDM version 6.11.7 doesn't give a reason for the change in the CHANGE_LOG file.
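The fix described above might look like the following; only the affected conversion is shown, and the helper function is hypothetical:

```c
#include <stdio.h>

/* Widening the precision for the origin_v_upstream field from
 * "%32.32s" to "%64.64s" keeps the field from being truncated at 32
 * characters. */
void
format_origin(char* buf, size_t size, const char* origin_v_upstream)
{
    (void)snprintf(buf, size, "%64.64s", origin_v_upstream);
}
```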

Support RPM

Some LDM users would like to install from an RPM but the current support for that is poor. Here's an example of a problem (there might be some line formatting errors):

My name is James Brenton and I am a contractor at Marshall Space Flight Center. We are using LDM on our older machines to get meteorological data from a server at the Eastern Range at Cape Canaveral. We're working on migrating to newer virtual machines using the same build of LDM. The virtual machine is running Redhat Linux 7.2 and we're trying to build and install LDM 6.11.2.

We've been trying to build an RPM to install LDM, using these directions where "~" is "/home/ldm/":

1). Make sure the build system is at the same level as the install system. It's okay to build and install on same system.

2). mkdir -p ~/rpmbuild/BUILD ~/rpmbuild/RPMS ~/rpmbuild/RPMS/i386 ~/rpmbuild/RPMS/i686 ~/rpmbuild/RPMS/noarch ~/rpmbuild/SOURCES ~/rpmbuild/SPECS ~/rpmbuild/SRPMS ~/rpmbuild/tmp

3). vi ~/.rpmmacros

     %_topdir               ~/rpmbuild
     %_tmppath              ~/rpmbuild/tmp

4). tar -xvzf ldm-6.11.2.tar.gz

5). Copy the ldm.spec file to the ~/rpmbuild/SPECS directory as ldm.spec.  Update the ldm.spec if applicable.

6). Copy the ldm-6.11.2.tar.gz file to the ~/rpmbuild/SOURCES directory.

7). rpmbuild -bb ~/rpmbuild/SPECS/ldm.spec

But we get the following error when we get to step 7:

/usr/bin/install -c -m 644  rpc/rpc.h rpc/types.h rpc/xdr.h rpc/auth.h rpc/clnt.h rpc/rpc_msg.h rpc/auth_unix.h rpc/svc.h rpc/svc_auth.h rpc/pmap_clnt.h rpc/pmap_prot.h '/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/include/rpc'
make  install-data-hook
make[3]: Entering directory `/home/ldm/rpmbuild/BUILD/ldm-6.11.2'
./ensureLdmhomeLinks /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr ldm-6.11.2
/usr/bin/mkdir -p /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/var
./ensureVar /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr

NOTE: The command "make root-actions" will have to be executed by the
superuser in order to complete the installation process.

make[3]: Leaving directory `/home/ldm/rpmbuild/BUILD/ldm-6.11.2'
make[2]: Leaving directory `/home/ldm/rpmbuild/BUILD/ldm-6.11.2'
make[1]: Leaving directory `/home/ldm/rpmbuild/BUILD/ldm-6.11.2'
+ sed -e s:/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr:/home/ldm:g /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/registry.xml
+ mv -f /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/registry.xml.new /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/registry.xml
+ /usr/lib/rpm/check-buildroot
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/libxml2.la:libdir='/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib'
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/xml2Conf.sh:XML2_LIBDIR="-L/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib"
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/xml2Conf.sh:XML2_INCLUDEDIR="-I/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/include/libxml2"
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/pkgconfig/libxml-2.0.pc:prefix=/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/libldm.so.0.0.0 matches
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/libldm.la:libdir='/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib'
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/lib/libldm.a matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/xmllint matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/xmlcatalog matches
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/xml2-config:prefix=/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/feedme matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/hupsyslog matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ldmd matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ldmping matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ldmsend matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/notifyme matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqact matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqcat matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqcheck matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqcopy matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqcreate matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqexpire matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqing matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqinsert matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqmon matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqsend matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqsurf matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/pqutil matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/regutil matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/rtstats matches
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/netcheck:$ldmhome = "/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr";
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/syscheck:$ldmhome = "/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr";
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ldmadmin:$ldmhome = "/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr";
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ldmadmin:$bin_path = "/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin";
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ldmfail:$ldmhome = "/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr" ;
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/uldbutil matches
Binary file /home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/bin/ulogger matches
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/var/logs/ldmd.log</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/var/logs/metrics.txt</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/var/logs/metrics.txt*</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/pqact.conf</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/pqsurf.conf</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/var/queues/ldm.pq</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/scour.conf</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/etc/ldmd.conf</tt></td>
/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64/usr/ldm-6.11.2/share/doc/ldm/basics/LDM-registry.html:<td><tt>/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7
Found '/home/ldm/rpmbuild/BUILDROOT/ldm-6.11.2-1.el7.x86_64' in installed files;aborting
error: Bad exit status from /home/ldm/rpmbuild/tmp/rpm-tmp.V9fUY3 (%install)

Replace flushing NULLPROC with nil data-product

Currently, a synchronous, round-trip NULLPROC message is used to flush the connection every time the end of the product-queue is hit or every 30 seconds, whichever comes first.

Replace the use of this NULLPROC with the sending of an empty (i.e., "nil") data-product using the "message passing" capability of the RPC layer (i.e., non-NULL reply decoder but a zero timeout value). If the MD5 signature of the nil data-product is the same as the last successfully-transmitted data-product, then this should be backward-compatible.

Doing this will use the network more efficiently because the upstream LDM will not wait for a reply from the downstream LDM.
