xrootd4j's Introduction

xrootd4j

Implementation of the xrootd data access protocol in Java. The project provides a library for integration and a standalone xrootd data server.

About the library

xrootd is the native data access protocol of the ROOT data analysis framework. The reference implementation of the protocol is provided by SLAC National Accelerator Laboratory.

dCache is a distributed storage system frequently used in the Worldwide LHC Computing Grid, high energy physics, photon sciences, and other communities.

This project provides our implementation of the xrootd data access protocol in Java. The library is used to implement the xrootd support in dCache.

A standalone data server is provided. Its primary purpose is testing, both interoperability testing and as a platform for trying out plugins without having to install dCache.

xrootd4j depends heavily on Netty, a high-performance asynchronous event-driven network application framework.

Compiling the project

To compile the project simply execute:

mvn package

Installing the library

To install the core library (xrootd4j) into your local Maven repository, run:

mvn -am -pl xrootd4j install

Using the library

Add the following Maven dependency to your project:

<dependency>
    <groupId>org.dcache</groupId>
    <artifactId>xrootd4j</artifactId>
    <version>2.0.0</version>
</dependency>

To automatically download the dependency, add our Maven repository to your project:

<repositories>
  <repository>
    <id>xrootd4j.repository</id>
    <url>https://download.dcache.org/nexus/content/repositories/releases/</url>
  </repository>
</repositories>

Alternatively, download or build the JAR by hand and add it to the build classpath.
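
Because the library builds on Netty, using it from another application amounts to installing the xrootd4j channel handlers in a Netty server pipeline. The skeleton below shows only the generic Netty scaffolding on the conventional xrootd port 1094; the handler added to the pipeline is a placeholder, not the actual xrootd4j API, so consult the xrootd4j-standalone module to see how the project itself wires the pipeline.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

// Generic Netty scaffolding only; the xrootd4j-specific handlers are left
// as a placeholder comment.
public class XrootdServerSkeleton {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // Install the xrootd4j handshake, encoder, decoder and
                        // request handlers here; see xrootd4j-standalone for
                        // the pipeline the project actually uses.
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter());
                    }
                });
            bootstrap.bind(1094).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}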

Starting the standalone server

The standalone server may be executed as follows:

java -Dlog=debug -jar xrootd4j-standalone/target/xrootd4j-standalone-4.3.0-SNAPSHOT-jar-with-dependencies.jar

Please adjust the log level as needed. Add the -h option at the end of the command to get a brief synopsis of available options.

Creating plugins from Maven archetypes

We provide templates for authorization and channel handler plugins. To instantiate such a template, run:

mvn -DarchetypeCatalog=https://download.dcache.org/nexus/content/groups/public -Dfilter=org.dcache: archetype:generate

Select the appropriate archetype from the list.

Contributing

For code formatting we use an adapted version of the Google style guide for Java, provided as a settings file for IntelliJ. Reformatting involves optimization of imports (reordering) and application of all syntactic-sugar settings, but does not include code rearrangement (fields, methods, classes) or code cleanup of existing code. Reformatting should be applied to the changed code before submitting a patch.

Authors

The code was originally written by Martin Radicke and sponsored by DESY. It has since been maintained by Gerd Behrmann and Thomas Zangerl, sponsored by NDGF.

xrootd4j's People

Contributors

alrossi, dependabot[bot], dmitrylitvintsev, gbehrmann, kofemann, lemora, mksahakyan, paulmillar

xrootd4j's Issues

more fun with AES cipher pad block errors, XrootD 4.9

Using the same cipher for DH session-key encryption and decryption as in the pre-4.9 implementation, the following issues were encountered (in this sequence):

  1. When sigver is turned on, the first signed hash produces the decryption-side pad block corrupted error.
  2. If we initialize decryption only for signed hashes using NoPadding, error 1 goes away. After 50 or so iterations, however, the dCache door complains that it cannot decrypt the main buffer in the GSI handshake from the xrdcp client.
  3. So now we try decrypting everything using NoPadding (in 4.9). After 50 or so iterations, our own TPC client receives a complaint from the xrootd source server: [4003] Secgsi: ErrParseBuffer: error decrypting main buffer with session cipher: kXGC_cert. So encryption from dCache is bad during the GSI handshake.
  4. If we use NoPadding for encryption, however, the xrootd server complains that the buffer is not block aligned.

That is where I currently stand in wrestling with this issue.
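
The padding interplay described above can be reproduced with plain JCE, independent of the GSI code; the sketch below uses a made-up key, IV and payload purely for illustration. Decrypting PKCS5-padded ciphertext with a NoPadding cipher "works" but leaves the padding bytes in the output, while encrypting a non-block-aligned buffer with NoPadding fails outright, analogous to the "not block aligned" complaint in point 4. A padded decryption of data whose final block does not contain valid padding is what surfaces as BadPaddingException ("pad block corrupted").

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Illustration of the padding interplay described above, using plain JCE
// (key, IV and payload are made up; this is not the GSI handshake code).
public class PaddingDemo {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];              // all-zero key, for illustration only
        byte[] iv = new byte[16];
        SecretKeySpec k = new SecretKeySpec(key, "AES");
        IvParameterSpec p = new IvParameterSpec(iv);
        byte[] msg = "signed-hash-payload".getBytes(StandardCharsets.US_ASCII); // 19 bytes, not block aligned

        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, k, p);
        byte[] ct = enc.doFinal(msg);           // padded to 32 bytes

        // Decrypting with NoPadding "works" but returns the padding bytes too,
        // so both sides must agree on who strips them.
        Cipher dec = Cipher.getInstance("AES/CBC/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, k, p);
        System.out.println(dec.doFinal(ct).length);   // prints 32, not 19

        // Encrypting with NoPadding requires block-aligned input; this throws
        // IllegalBlockSizeException, analogous to the "not block aligned" complaint.
        Cipher encNoPad = Cipher.getInstance("AES/CBC/NoPadding");
        encNoPad.init(Cipher.ENCRYPT_MODE, k, p);
        try {
            encNoPad.doFinal(msg);
        } catch (javax.crypto.IllegalBlockSizeException e) {
            System.out.println("NoPadding encryption of 19 bytes: " + e);
        }
    }
}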

handling stat request

Hi Gerd,

I don't know your new e-mail address, so I'm writing this way. We have now installed dCache 2.6.16 at MWT2, the one shipping with the newest xrootd4j. It still does not support stat, and we really need it.
I tested it:
~ >xrdfs uct2-s6.uchicago.edu:1096 stat /atlas/rucio/user/hito:user.hito.xrootd.mwt2-1M
Path: /atlas/rucio/user/hito:user.hito.xrootd.mwt2-1M
Id: 0
Size: 1310720
Flags: 48 (IsReadable|IsWritable)
~ >xrdfs uct2-s6.uchicago.edu:1096 stat /atlas/rucio/user/hito:asdfgsdfgsdfgsdfg
Path: /atlas/rucio/user/hito:asdfgsdfgsdfgsdfg
Id: 0
Size: 512
Flags: 19 (XBitSet|IsDir|IsReadable)
~ >xrdfs uct2-s6.uchicago.edu:1096 stat /pnfs/uchicago.edu/atlaslocalgroupdisk/rucio/data12_8TeV/35/44/NTUP_SMWZ.01128232._000029.root.1
Path: /pnfs/uchicago.edu/atlaslocalgroupdisk/rucio/data12_8TeV/35/44/NTUP_SMWZ.01128232._000029.root.1
Id: 0
Size: 512
Flags: 19 (XBitSet|IsDir|IsReadable)

Can you tell me when we could expect this?

Cheers,
Ilija
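
For reference, the Flags values in the output above decompose into the stat flag bits defined by the xrootd protocol: 48 = 32 + 16 = IsWritable | IsReadable, and 19 = 1 + 2 + 16 = XBitSet | IsDir | IsReadable. The constants in the sketch below follow the protocol definition and are shown only as an illustration, not as the xrootd4j class names.

// Stat flag bits as defined by the xrootd protocol (illustrative constants,
// not the xrootd4j class names).
public class StatFlags {
    static final int XBIT_SET = 1;
    static final int IS_DIR   = 2;
    static final int OTHER    = 4;
    static final int OFFLINE  = 8;
    static final int READABLE = 16;
    static final int WRITABLE = 32;

    static String decode(int flags) {
        StringBuilder sb = new StringBuilder();
        if ((flags & XBIT_SET) != 0) sb.append("XBitSet|");
        if ((flags & IS_DIR)   != 0) sb.append("IsDir|");
        if ((flags & OTHER)    != 0) sb.append("Other|");
        if ((flags & OFFLINE)  != 0) sb.append("Offline|");
        if ((flags & READABLE) != 0) sb.append("IsReadable|");
        if ((flags & WRITABLE) != 0) sb.append("IsWritable|");
        return sb.length() == 0 ? "none" : sb.substring(0, sb.length() - 1);
    }

    public static void main(String[] args) {
        System.out.println(decode(48));  // IsReadable|IsWritable
        System.out.println(decode(19));  // XBitSet|IsDir|IsReadable
    }
}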

Stack trace logged by mistake

In user-forum, we have reports like:

21 Aug 2017 10:17:47 (Xrootd-uct2-xrootd) [door:Xrootd-uct2-xrootd@uct2-xrootdDomain-1094:AAVXRPs0mFA] Error during decrypting/server-side key exchange: {}
javax.crypto.BadPaddingException: pad block corrupted
	at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source) ~[bcprov-jdk15on-1.50.jar:1.50.0]
	at javax.crypto.Cipher.doFinal(Cipher.java:2165) ~[na:1.8.0_91]
	at org.dcache.xrootd.plugins.authn.gsi.DHSession.decrypt(DHSession.java:200) ~[xrootd4j-gsi-3.2.2.jar:3.2.2]
	at org.dcache.xrootd.plugins.authn.gsi.GSIAuthenticationHandler.handleCertStep(GSIAuthenticationHandler.java:302) [xrootd4j-gsi-3.2.2.jar:3.2.2]

Note how the log message is "Error during decrypting/server-side key exchange: {}". This points to a simple mistake: the intent was to log the exception's message, but the stack trace is logged instead and the {} placeholder is left unfilled.
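
With SLF4J, when the last argument is a Throwable it is consumed as the attached exception, so the {} placeholder stays literal and the full stack trace is printed. A minimal illustration of the mistake and the intended call (class and logger names are arbitrary):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingExample.class);

    void handle(Exception e) {
        // Mistake: the Throwable is consumed as the exception argument, so the
        // placeholder stays literal and the stack trace is logged.
        LOG.error("Error during decrypting/server-side key exchange: {}", e);

        // Intended: log only the exception's message.
        LOG.error("Error during decrypting/server-side key exchange: {}", e.getMessage());
    }
}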

Update parser so all paths may contain CGI/query elements

As clarified in xrootd/xrootd#850, all paths in any xrootd request may contain CGI elements (a query string).

Currently all operations (except kXR_open and kXR_mv) treat paths as if they have no CGI elements, since the xrootd protocol specification currently doesn't mention this behaviour.

This results in clients potentially sending valid paths to xrootd4j that then fail.

To fix this, all operations should be examined and the query string should be stripped from any path; additional getter methods should be added to provide access to these optional elements.
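
A minimal sketch of the proposed handling, using a hypothetical helper class rather than the actual xrootd4j request classes: split the raw request path at the first '?', use the plain path for the operation, and expose the optional query string through a getter.

// Hypothetical helper illustrating the proposed behaviour: every request path
// is split into a plain path and an optional CGI/query component.
public final class PathAndQuery {
    private final String path;
    private final String query;   // null when the request carries no CGI elements

    private PathAndQuery(String path, String query) {
        this.path = path;
        this.query = query;
    }

    public static PathAndQuery parse(String rawPath) {
        int i = rawPath.indexOf('?');
        return i < 0
            ? new PathAndQuery(rawPath, null)
            : new PathAndQuery(rawPath.substring(0, i), rawPath.substring(i + 1));
    }

    public String getPath()  { return path; }
    public String getQuery() { return query; }
}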

possible race in get checksum

Not sure whether this is xrootd4j or dcache-xrootd, so I am posting in both places.

When running multiple backgrounded transfers on my desktop, I encountered:

[1.203GB/2.051GB][ 58%][=============================> ][6.099MB/s]
Run: [ERROR] Server responded with an error: [3012] Internal server error (ChecksumChannel must not be written to after getChecksums)

This is with a two-party write from the client to the pool.
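
Whichever layer turns out to be at fault, one way to make the interaction safe is to latch a "finalized" flag under the same lock that serializes digest updates, so that a late write fails deterministically instead of racing. The sketch below is only an illustration of that guard (using Adler32 from the JDK), not the actual dCache ChecksumChannel:

import java.util.zip.Adler32;

// Illustrative guard: once the digest has been read, further writes are refused
// instead of silently corrupting the checksum (not the real dCache class).
public class GuardedChecksum {
    private final Adler32 digest = new Adler32();
    private boolean finalized;

    public synchronized void update(byte[] data, int off, int len) {
        if (finalized) {
            throw new IllegalStateException("must not be written to after getChecksum");
        }
        digest.update(data, off, len);
    }

    public synchronized long getChecksum() {
        finalized = true;
        return digest.getValue();
    }
}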

Support TPC for ALICE tokens

Originally RT #9501 (3rd party xrootd transfers to dCache endpoints)

Hi,

This is to keep track of this request.

And a bit more info on our use case as feedback to Paul and Al (btw,
many thanks to Al for working on the TPC feature!).

ALICE transfers don't use X.509 certificates; instead we provide (as you can see in the command line) both the source and the target access envelope for the respective operations.

Simple writes work fine, so I think the target part is also OK in TPC mode. It might then simply be a question of not trying X.509 in this case but making sure the source URL is fully passed to the other side, including the authz parameter.

Cheers,

.costin

optimization of buffer usage on pools

I am including below the discussion(s) and testing which I have engaged in with KIT and ATLAS concerning direct memory usage by xrootd/Netty. The aggregated data per test case is in this tgz:

https://drive.google.com/file/d/1Y2sYOtrPnYH7c6p8YxQhLyHjaSSD__Gt/view?usp=sharing

This needs further study.

See also #137

Original testing:

PROCEDURE 

A) added metrics logging at INFO level
B) start pool with the config
C) \s dcatest08-5Domain log set stdout org.dcache.xrootd INFO
   to obtain the netty allocator metrics lines, e.g.:
   03 Aug 2022 12:35:53 (dcatest08-5) [] allocator PooledByteBufAllocator.935483729 –– after chunked write: PooledByteBufAllocatorMetric(usedHeapMemory: 0; usedDirectMemory: 77515264; numHeapArenas: 16; numDirectArenas: 16; smallCacheSize: 256; normalCacheSize: 64; numThreadLocalCaches: 1; chunkSize: 131072)

D) run 50 concurrent xrdcp writes of a file (1 GiB, except for case #4) to the pool
E) grep "after chunked write" ~/Desktop/dcatest08-5Domain.log | cut -d ' ' -f 16 | sort -h | uniq -c
F) delete files from namespace
G) stop pool, change config, restart (repeat from B)

The JVM direct memory is always: dcache.java.memory.direct=1024m

Critical value to be observed: the maximum direct memory allocated by the pooled allocator (PooledByteBufAllocator).
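
The metrics lines referred to in step C come from Netty's pooled-allocator metrics. A minimal sketch of reading the same counters follows; the real pool logs its own allocator instance, so the use of the default allocator here is just for illustration.

import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;

// Reads the same counters that appear in the metrics log lines above.
public class AllocatorMetricsProbe {
    public static void main(String[] args) {
        PooledByteBufAllocatorMetric m = PooledByteBufAllocator.DEFAULT.metric();
        System.out.println("usedDirectMemory=" + m.usedDirectMemory()
                + " usedHeapMemory=" + m.usedHeapMemory()
                + " numDirectArenas=" + m.numDirectArenas()
                + " chunkSize=" + m.chunkSize());
    }
}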


PARAMETERS

1. -Dio.netty.maxDirectMemory
2. -Dio.netty.allocator.maxOrder
3. pool.mover.xrootd.frame-size=131072
4. size of the file written
5. pool.mover.xrootd.threads
6. number of connections


RESULTS (all peak-usage figures are in bytes)

1. Has no effect on the maximum the allocator requests.

-Dio.netty.maxDirectMemory

		Peak Usage (1 thread)

Default (none)	855638016		
512m		855638016


2. This translates into the number of places to shift the page size to get the chunk size. The default is 11, which results in 16MiB chunks; 4 results in 128KiB chunks:

-Dio.netty.allocator.maxOrder

		Peak Usage (1 thread)	Samples

Default (11)	855638016		5090
128KiB  (4)	637796352		5556

It also increases the amount of time the allocation stays at maximum (number of samples).

This modification was suggested by https://github.com/oracle/helidon/pull/3826.


3. Does not affect the max direct memory allocated.

pool.mover.xrootd.frame-size

		Peak Usage (1 thread)	Samples

Default (8MiB)	855638016		5090
128KiB		855638016		5697

It does increase the amount of time the allocation stays at maximum (number of samples).


4. Does not affect the max direct memory allocated.  With maxOrder = 4, i.e., chunk size 128KiB:

		Peak Usage (1 thread)	Samples

1 GiB file	637796352		5556
2 GiB file	637796352		11510

Obviously, with double the data being written, the amount of time spent in the max state is about double.


5. With maxOrder = 4, i.e., chunk size 128KiB:

		Peak Usage		Samples

1  thread       637796352		5556
2  threads      646578176		2741
5  threads      646578176		2741
10 threads	698220544		1
20 threads	707264512		1

So increasing the mover threads does have some impact, but it is sublinear (note the smallish increase in the peak from 10 to 20).


6. Here we vary the number of xrdcp clients exec'd in parallel, with maxOrder = 4, i.e., chunk size 128KiB:

		Peak Usage (1 thread)	Samples

50		637796352		5556
60		763625472		6716
70		889716736		7632

Taking the differences between these cases, we arrive at a requirement of roughly 12 MiB per connection.

So if we expect to sustain 1000 simultaneous writes, we need 12 GiB of direct memory.

Doing the same number of simultaneous reads of a single 1 GiB file:

		Peak Usage (1 thread)	Samples
		
70		587726848		1678

Reads seem to require about 33% less memory than writes (or, equivalently, writes require about 50% more than reads). I think this might be due to greater synchronization (I see the reads completing in a more serialized manner).

That squares with what KIT announced: 66% of 12 MiB is about 8 MiB.

Conclusions reported to email thread:

After doing some more investigation, I wanted to report back to the group my findings.  I have already been conversing with Petr and Brian about this, so they have heard some of it already.

My aim has been to observe the maximum/peak direct memory allocated on the pool during reads and writes using the xroot protocol.  To that end, I have been conducting some trials with a 1GiB file and the xrdcp client.  

These potential factors have been parameterized:

1.    Size of chunk used by the Netty pooled direct memory allocator;
2.    Maximum size of the buffer/frame used by dCache;
3.    Number of threads given to the Netty socket group;
4.    Number of actual concurrent connections (number of concurrent clients).

Two notes on the dCache implementation.  

First, for writes, dCache does not currently try to break up the arriving data frame into smaller units, even serially.  The amount of data from the packet/frame is immediately written to a direct memory buffer. Since the goal is keeping the differential of direct memory allocation/deallocation below the threshold of the total allocated direct memory for the JVM, it may be possible to chunk the write so that smaller pieces are allocated and deallocated independently, though it would have to be done without incurring another copy in user space.  This would, however, require some significant changes to the code. For the moment, the only way to achieve some downscaling of memory usage on writes is to set the client env var XRD_CPCHUNKSIZE to something less than 8 MiB (the documentation says the default is 16KiB, but if you do not set this env var, the client sends 8 MiB frames).

Second, #2 is currently dual-purpose: it is the max size of the buffer allocated, but also the size of the frame returned to the client. Hence, lowering #2 means smaller chunks are sent back on the connection as well.  Once again, this could possibly be changed so that we repackage smaller disk-to-memory chunks into larger TCP data frames, but I think this would mean adding a series of copies into user space, which defeats the purpose of Java NIO.  A lower max frame size may nevertheless be advantageous with reads (see below), despite some potential penalty in latency.

The whole situation seems ripe for further optimization study. I believe current work by Tigran for NFS is actually moving in this direction, using JNI (Java Native Interface) to call the NIO library's transferTo, which bottoms out in Linux sendfile64 and thus relies on an in-kernel copy between disk and NIC. I am not sure whether we can or should do something similar for xroot (TLS presents complications as well).

Now, on to the profiling results.  

The defaults for the first three parameters are currently: 16 MiB, 8 MiB and 20 threads.   As I mentioned earlier, the Netty allocator chunk is determined by multiplying the page size (platform dependent; on the SL 7/RH 7 node I am using it is 8 KiB) by some power of two, the default being 11 (8 KiB << 11 = 16 MiB).  #4 (number of connections) of course dominates the result; #3 has some effect on memory but more on latency.  I would say it is safe to leave it at its default setting.  
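
The chunk-size arithmetic works out as follows, assuming the 8 KiB page size of the node mentioned above:

// Netty pooled-allocator chunk size = page size << maxOrder.
public class ChunkSize {
    public static void main(String[] args) {
        int pageSize = 8 * 1024;             // 8 KiB, as on the node used above
        System.out.println(pageSize << 11);  // 16777216 bytes = 16 MiB (default maxOrder)
        System.out.println(pageSize << 4);   // 131072 bytes  = 128 KiB (maxOrder=4)
    }
}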

For reads, the following comparison should serve to illustrate what the lower buffer sizes can accomplish:

70 clients/connections
20 threads (default)
8M frame/buffer size
16M Netty allocator chunk size

720M of direct memory is peak usage.

vs.

70 clients/connections
20 threads (default)
128K frame/buffer size
128K Netty allocator chunk size

16M of direct memory is peak usage.

The savings are pretty significant. There is, however, as mentioned, some performance penalty in doing this. I imagine one would need to do some calibration based on the typical load on the system to see where the sweet spot is.

These changes correspond to doing the following in the pool layout file:

[${host.name}-5Domain]
dcache.java.options.extra=-Dio.netty.allocator.maxOrder=4   [1]
dcache.java.memory.heap=1024m
dcache.java.memory.direct=1024m
[${host.name}-5Domain/pool]
pool.name=${host.name}-5
pool.path=/diske/pool5
pool.wait-for-files=${pool.path}/setup
pool.tags=hostname=${host.name}.fnal.gov rack=rack2
pool.mover.xrootd.frame-size=131072                         [2]

That is, I have matched a smaller xrootd frame size (128 KiB) [2] with a smaller maximum chunk size for the Netty allocator (8 KiB << 4 = 128 KiB) [1].

I am not claiming this is indeed the optimal size, but merely saying that for reads, a smaller chunk size matched to the allocator may offer some breathing room on how much dcache.java.memory.direct needs to be set on each pool.

Without changing anything, the observation that each connection requires about 8 MiB (about 50% more for writes, actually) holds, largely as a function of the chunk size used by the client, but also because of the allocation settings as I have described.

Implement kXR_fattr and extended info for kXR_stat

Hello,

I'm currently working on a proof-of-concept project to interface XRootD with Jefferson Lab's (www.jlab.org) tape library. My initial approach is to use the xroot.redirect directive to fetch file metadata from the tape library, and possibly the file residency manager to control staging and migrating. I have already added support for extended stat info in my personal copy of xrootd4j 4.4.0; this was fairly simple. But it looks like adding support for fattr will take a bit more effort because little of the required scaffolding is present in the current code.

Can someone offer some suggestion on the best way to proceed? I would like to contribute this new functionality back to the repository when it is ready.

Thanks!

Chris

TLS with readv

There appears to be an issue with vector reads when TLS is activated (that is, SSL encryption on the channel).

The attached logs show the scenario in which this was discovered.

Client does a series of reads, one of which is kXR_readv.

The other reads succeed, but the vector read generates a socket error; the client reopens and retries the vector read repeatedly until it gives up.

client-al.log
dcatest04-5Domain.log
tls-read.log

XrootD client implementation

I poked around the code but didn't see any client implementations. Is there any plan for this project to provide an XrootD client implementation in Java as well?

malicious client can crash a pool on write

The handling of direct memory writes on write requests can cause an OOM if XRD_CPCHUNKSIZE is large.

Setting that to 1 GiB, for instance, as observed by the xrdcp client:

[dcatest08.fnal.gov:33141] Sending message kXR_write (handle: 0x00000000, offset: 0, size: 1073741824)

The pool crashes:

04 Aug 2022 10:55:07 (dcatest08-5) [] Restarting due to fatal JVM error: java.lang.OutOfMemoryError: Direct buffer memory

This is a vulnerability which needs to be fixed, though I am not sure what the best approach is: outright rejecting a request to write a chunk that exceeds a certain size, or somehow trying to accommodate it (the latter would be similar to what I described in my message to the original thread). One way or another, this needs to be addressed.
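
If the first option (rejecting oversized write requests up front) is chosen, the check can be made before any direct buffer is allocated, because the 24-byte xrootd client request header already advertises the payload length in its last four bytes. The following Netty-style sketch is only an illustration of that idea; the limit, the error handling and the handler wiring are assumptions, not the actual xrootd4j code path.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Illustrative early check: inspect the dlen field of a request header and
// refuse anything above a configured ceiling before buffering the payload
// (not the actual xrootd4j decoder).
public class WriteSizeGuard extends ChannelInboundHandlerAdapter {
    private static final int HEADER_SIZE = 24;   // xrootd client request header
    private final int maxPayload;

    public WriteSizeGuard(int maxPayload) {
        this.maxPayload = maxPayload;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof ByteBuf) {
            ByteBuf buf = (ByteBuf) msg;
            if (buf.readableBytes() >= HEADER_SIZE) {
                // dlen is the last 4 bytes of the 24-byte header, big-endian.
                int dlen = buf.getInt(buf.readerIndex() + HEADER_SIZE - 4);
                if (dlen > maxPayload) {
                    buf.release();
                    ctx.close();                 // or reply with an xrootd error response
                    return;
                }
            }
        }
        ctx.fireChannelRead(msg);
    }
}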

issue with dCache 2.10.8

Hi,

At AGLT2 they upgraded to 2.10.8 and the xrootd door does not work any more.
Here's the issue:
2014-10-24 18:35:05 Launching /usr/bin/java -server -Xmx915m -XX:MaxDirectMemorySize=1g -Dsun.net.inetaddr.ttl=1800 -Dorg.globus.tcp.port.range=20000,50000 -Dorg.dcache.dcap.port=0 -Dorg.dcache.net.tcp.portrange=33115:33145 -Dorg.globus.jglobus.delegation.cache.lifetime=30000 -Dorg.globus.jglobus.crl.cache.lifetime=60000 -Djava.security.krb5.realm= -Djava.security.krb5.kdc= -Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.auth.login.config=/etc/dcache/jgss.conf -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/dcache/xrootd-dcdum01Domain-oom.hprof -javaagent:/usr/share/dcache/classes/aspectjweaver-1.8.1.jar -Djava.awt.headless=true -DwantLog4jSetup=n -d64 -Xss256k -XX:+UseParallelGC -XX:ParallelGCThreads=20 -Ddcache.home=/usr/share/dcache -Ddcache.paths.defaults=/usr/share/dcache/defaults org.dcache.boot.BootLoader start xrootd-dcdum01Domain
INFO - dcache.conf:160: Property site is not a standard property
18:35:06,850 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
18:35:06,850 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
18:35:06,850 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/usr/share/dcache/classes/logback-console-config-2.10.8.jar!/logback.xml]
18:35:06,873 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@4926f2ea - URL [jar:file:/usr/share/dcache/classes/logback-console-config-2.10.8.jar!/logback.xml] is not of type file
18:35:06,932 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
18:35:06,937 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
18:35:06,949 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDERR]
18:35:06,973 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:35:07,023 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
18:35:07,023 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDERR] to Logger[ROOT]
18:35:07,023 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
18:35:07,024 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@1c16062b - Registering current configuration as safe fallback point
18:35:07,550 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
18:35:07,550 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
18:35:07,550 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [stdout]
18:35:07,551 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:35:07,553 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [dmg.util.PinboardAppender]
18:35:07,554 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [pinboard]
18:35:07,559 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.PatternLayout] for [layout] property
18:35:07,567 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.FileAppender]
18:35:07,568 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [events]
18:35:07,571 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:35:07,572 |-INFO in ch.qos.logback.core.FileAppender[events] - File property is set to [/tmp/events-domain.name_IS_UNDEFINED.out]
18:35:07,573 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
18:35:07,574 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [traceFile]
18:35:07,588 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@69eeff74 - No compression will be used
18:35:07,596 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:35:07,596 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[traceFile] - Active log file name: /tmp/trace-domain.name_IS_UNDEFINED.out
18:35:07,596 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[traceFile] - File property is set to [/tmp/trace-domain.name_IS_UNDEFINED.out]
18:35:07,596 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.net.SocketAppender]
18:35:07,599 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [remote]
18:35:07,721 |-WARN in ch.qos.logback.core.joran.util.PropertySetter@49c54f01 - Failed to set property [port] to value "alarms.server.port_IS_UNDEFINED". ch.qos.logback.core.util.PropertySetterException: Conversion to type [int] failed.
at ch.qos.logback.core.util.PropertySetterException: Conversion to type [int] failed.
at at ch.qos.logback.core.joran.util.PropertySetter.setProperty(PropertySetter.java:159)
at at ch.qos.logback.core.joran.util.PropertySetter.setProperty(PropertySetter.java:120)
at at ch.qos.logback.core.joran.action.NestedBasicPropertyIA.body(NestedBasicPropertyIA.java:92)
at at ch.qos.logback.core.joran.spi.Interpreter.callBodyAction(Interpreter.java:295)
at at ch.qos.logback.core.joran.spi.Interpreter.characters(Interpreter.java:175)
at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:57)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:49)
at at org.dcache.boot.Domain.initializeLogging(Domain.java:163)
at at org.dcache.boot.Domain.start(Domain.java:120)
at at org.dcache.boot.BootLoader.main(BootLoader.java:122)
Caused by: java.lang.NumberFormatException: For input string: "alarms.server.port_IS_UNDEFINED"
at at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at at java.lang.Integer.parseInt(Integer.java:492)
at at java.lang.Integer.(Integer.java:677)
at at ch.qos.logback.core.joran.util.StringToObjectConverter.convertArg(StringToObjectConverter.java:61)
at at ch.qos.logback.core.joran.util.PropertySetter.setProperty(PropertySetter.java:157)
at ... 12 common frames omitted
18:35:07,721 |-ERROR in ch.qos.logback.classic.net.SocketAppender[remote] - No remote address was configured for appenderremote For more information, please visit http://logback.qos.ch/codes.html#socket_no_host
18:35:07,721 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [stdout] to Logger[ROOT]
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [pinboard] to Logger[ROOT]
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [remote] to Logger[ROOT]
18:35:07,722 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [events] to false
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [events] to Logger[events]
18:35:07,722 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [logger.dev] to false
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [traceFile] to Logger[logger.dev]
18:35:07,722 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [dev] to false
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [traceFile] to Logger[dev]
18:35:07,722 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [dummy] to OFF
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [stdout] to Logger[dummy]
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [pinboard] to Logger[dummy]
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [traceFile] to Logger[dummy]
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [events] to Logger[dummy]
18:35:07,722 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [remote] to Logger[dummy]
18:35:07,737 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,740 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,741 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,742 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,742 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,743 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,743 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,744 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,747 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,748 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,749 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,749 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,750 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,751 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,751 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,752 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [dmg.util.logback.Threshold] for [threshold] property
18:35:07,797 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
18:35:07,798 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@792ad7e - Registering current configuration as safe fallback point

24 Oct 2014 18:35:08 (System) [] Cell created: lm
24 Oct 2014 18:35:08 (System) [] Client started
24 Oct 2014 18:35:08 (lm) [] Sending to head01.aglt2.org/192.41.230.44:11111 : whatToDo xrootd-dcdum01Domain -serial=0
24 Oct 2014 18:35:08 (lm) [] Reasonable reply arrived (0) :
24 Oct 2014 18:35:08 (lm) [] whatToDo got : -serial=0 -- do * nl d:dCacheDomain c:dCacheDomain
24 Oct 2014 18:35:08 (lm) [] Got 'route added' event: * * *@dCacheDomain Default
24 Oct 2014 18:35:08 (lm) [] Default route was added
24 Oct 2014 18:35:08 (lm) [] update requested to upstream Domains
24 Oct 2014 18:35:08 (lm) [] Resending to RoutingMgr: [xrootd-dcdum01Domain]
24 Oct 2014 18:35:08 (lm) [] ts=2014-10-24T18:35:08.157-0400 event=org.dcache.cells.send.begin uoid=1414190108153:102 lastuoid=1414190108153:101 session= mode=async message=String[] source=[>RoutingMgr@xrootd-dcdum01Domain] destination=[>RoutingMgr@local]
24 Oct 2014 18:35:08 (lm) [] ts=2014-10-24T18:35:08.169-0400 event=org.dcache.cells.send.end uoid=1414190108153:102 session=
24 Oct 2014 18:35:08 (lm) [] LocationManager starting connector with -domain=dCacheDomain -lm=lm
24 Oct 2014 18:35:08 (lm) [] Cell created: c-dCacheDomain-101
24 Oct 2014 18:35:08 (lm) [] Created : disconnected
24 Oct 2014 18:35:08 (c-dCacheDomain-101) [] ts=2014-10-24T18:35:08.178-0400 event=org.dcache.cells.send.begin uoid=1414190108178:104 lastuoid=1414190108178:103 session= mode=callback message=""where is dCacheDomain"" source=[>c-dCacheDomain-101@xrootd-dcdum01Domain] destination=[>lm@local]
24 Oct 2014 18:35:08 (c-dCacheDomain-101) [] ts=2014-10-24T18:35:08.180-0400 event=org.dcache.cells.queue.begin uoid=1414190108178:104 lastuoid=1414190108178:103 session= source=[>c-dCacheDomain-101@xrootd-dcdum01Domain] destination=[>lm@local]
24 Oct 2014 18:35:08 (lm) [] ts=2014-10-24T18:35:08.182-0400 event=org.dcache.cells.queue.end uoid=1414190108178:104 session=
24 Oct 2014 18:35:08 (lm) [c-dCacheDomain-101] ts=2014-10-24T18:35:08.183-0400 event=org.dcache.cells.deliver.begin uoid=1414190108178:104 lastuoid=1414190108178:103 session= message=""where is dCacheDomain"" source=[>c-dCacheDomain-101@xrootd-dcdum01Domain] destination=[>lm@local]
24 Oct 2014 18:35:08 (lm) [c-dCacheDomain-101] ts=2014-10-24T18:35:08.184-0400 event=org.dcache.cells.deliver.end uoid=1414190108178:104 session=
24 Oct 2014 18:35:08 (lm) [] Sending to head01.aglt2.org/192.41.230.44:11111 : whereIs dCacheDomain -serial=1
24 Oct 2014 18:35:08 (lm) [] Reasonable reply arrived (1) :
24 Oct 2014 18:35:08 (lm) [] ts=2014-10-24T18:35:08.188-0400 event=org.dcache.cells.send.begin uoid=1414190108188:105 lastuoid=1414190108178:104 session= mode=async message=""-serial=1 -- location dCacheDomain head01.aglt2.org:11111"" source=[>lm@xrootd-dcdum01Domain] destination=[>c-dCacheDomain-101@xrootd-dcdum01Domain]
24 Oct 2014 18:35:08 (c-dCacheDomain-101) [] ts=2014-10-24T18:35:08.189-0400 event=org.dcache.cells.send.end uoid=1414190108178:104 session=
24 Oct 2014 18:35:08 (lm) [] ts=2014-10-24T18:35:08.189-0400 event=org.dcache.cells.send.end uoid=1414190108188:105 session=
24 Oct 2014 18:35:08 (System) [] Cell created: Xrootd-dcdum01
24 Oct 2014 18:35:08 (System) [] Cell exported: Xrootd-dcdum01
24 Oct 2014 18:35:08 (System) [] update requested to upstream Domains
24 Oct 2014 18:35:08 (System) [] Resending to RoutingMgr: [xrootd-dcdum01Domain, Xrootd-dcdum01]
24 Oct 2014 18:35:08 (System) [] ts=2014-10-24T18:35:08.195-0400 event=org.dcache.cells.send.begin uoid=1414190108195:107 lastuoid=1414190108195:106 session= mode=async message=String[] source=[>RoutingMgr@xrootd-dcdum01Domain] destination=[>RoutingMgr@local]
24 Oct 2014 18:35:08 (c-dCacheDomain-101) [] Using clear text channel
24 Oct 2014 18:35:08 (System) [] ts=2014-10-24T18:35:08.196-0400 event=org.dcache.cells.send.end uoid=1414190108195:107 session=
24 Oct 2014 18:35:08 (c-dCacheDomain-101) [] Cell created: c-dCacheDomain-101-102
24 Oct 2014 18:35:08 (Xrootd-dcdum01) [] Cell message monitoring set to false
24 Oct 2014 18:35:08 (Xrootd-dcdum01) [] Cell classification set to XrootdDoor
24 Oct 2014 18:35:08 (c-dCacheDomain-101-102) [] Got 'route added' event: * dCacheDomain c-dCacheDomain-101-102 Domain
24 Oct 2014 18:35:08 (c-dCacheDomain-101-102) [] Downstream route added to domain dCacheDomain
24 Oct 2014 18:35:08 (c-dCacheDomain-101-102) [] update requested to upstream Domains
24 Oct 2014 18:35:08 (c-dCacheDomain-101-102) [] Resending to RoutingMgr: [xrootd-dcdum01Domain, Xrootd-dcdum01]
24 Oct 2014 18:35:08 (c-dCacheDomain-101-102) [] ts=2014-10-24T18:35:08.213-0400 event=org.dcache.cells.send.begin uoid=1414190108213:109 lastuoid=1414190108213:108 session= mode=async message=String[] source=[>RoutingMgr@xrootd-dcdum01Domain] destination=[>RoutingMgr@local]
24 Oct 2014 18:35:08 (c-dCacheDomain-101-102) [] ts=2014-10-24T18:35:08.214-0400 event=org.dcache.cells.send.end uoid=1414190108213:109 session=
24 Oct 2014 18:35:08 (Xrootd-dcdum01) [] Setup controller set to none
24 Oct 2014 18:35:08 (Xrootd-dcdum01) [] Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@7834986: startup date [Fri Oct 24 18:35:08 EDT 2014]; root of context hierarchy
24 Oct 2014 18:35:08 (Xrootd-dcdum01) [] Loading XML bean definitions from class path resource [org/dcache/xrootd/door/xrootd.xml]
24 Oct 2014 18:35:08 (Xrootd-dcdum01) [] JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
24 Oct 2014 18:35:09 (Xrootd-dcdum01) [] ts=2014-10-24T18:35:09.481-0400 event=org.dcache.cells.send.begin uoid=1414190109480:111 lastuoid=1414190109480:110 session= mode=async message=LoginBrokerInfo source=[>Xrootd-dcdum01@xrootd-dcdum01Domain] destination=[>LoginBroker@local]
24 Oct 2014 18:35:09 (Xrootd-dcdum01) [] ts=2014-10-24T18:35:09.482-0400 event=org.dcache.cells.send.end uoid=1414190109480:111 session=

I cannot test this now, as I have no server with this version.
Ilija
