tkowalcz / tjahzi

Java clients, log4j2 and logback appenders for Grafana Loki

License: MIT License

Java 100.00%
allocation-free grafana-loki java log4j-appender log4j2-appender logback logback-appender logging loki protobuf

tjahzi's Introduction

Hi there 👋

tjahzi's People

Contributors

efimmatytsin, jeantil, tkowalcz


tjahzi's Issues

GraalVM native image compatibility

Hello!

Thank you for such an amazing library, I'm using it for some Micronaut projects.

I found that this library does not support the GraalVM native image feature. I tried to build an application with logback-appender-nodep and it failed with an exception:

Error: Detected a direct/mapped ByteBuffer in the image heap. A direct ByteBuffer has a pointer to unmanaged C memory, and C memory from the image generator is not available at image runtime. A mapped ByteBuffer references a file descriptor, which is no longer open and mapped at run time. To see how this object got instantiated use --trace-object-instantiation=java.nio.DirectByteBuffer. The object was probably created by a class initializer and is reachable from a static field. You can request class initialization at image runtime by using the option --initialize-at-run-time=<class-name>. Or you can write your own initialization methods and call them explicitly from your main entry point.
[application:38538]    Detailed message:
Trace: Object was reached by
        reading field pl.tkowalcz.tjahzi.org.agrona.concurrent.UnsafeBuffer.byteBuffer of
                constant pl.tkowalcz.tjahzi.org.agrona.concurrent.UnsafeBuffer@f8a283 reached by
        reading field pl.tkowalcz.tjahzi.LogBufferSerializer.buffer of
                constant pl.tkowalcz.tjahzi.LogBufferSerializer@6740bb47 reached by
        reading field pl.tkowalcz.tjahzi.TjahziLogger.serializer of
                constant pl.tkowalcz.tjahzi.TjahziLogger@4da08d11 reached by
        reading field pl.tkowalcz.tjahzi.logback.LokiAppender.logger of
                constant pl.tkowalcz.tjahzi.logback.LokiAppender@1c61966b reached by
        indexing into array
                constant java.lang.Object[]@2da676b6 reached by
        reading field java.util.concurrent.CopyOnWriteArrayList.array of
                constant java.util.concurrent.CopyOnWriteArrayList@7516bfd8 reached by
        reading field ch.qos.logback.core.util.COWArrayList.underlyingList of
                constant ch.qos.logback.core.util.COWArrayList@4aa193f reached by
        reading field ch.qos.logback.core.spi.AppenderAttachableImpl.appenderList of
                constant ch.qos.logback.core.spi.AppenderAttachableImpl@7a6d3c78 reached by
        reading field ch.qos.logback.classic.Logger.aai of
                constant ch.qos.logback.classic.Logger@3a065022 reached by
        reading field ch.qos.logback.classic.LoggerContext.root of
                constant ch.qos.logback.classic.LoggerContext@61d7f698 reached by
        reading field ch.qos.logback.classic.Logger.loggerContext of
                constant ch.qos.logback.classic.Logger@658579fb reached by
        scanning method io.micronaut.runtime.Micronaut.lambda$null$0(Micronaut.java:111)
Call path from entry point to io.micronaut.runtime.Micronaut.lambda$null$0(EmbeddedApplication, CountDownLatch, boolean, Thread):
        at io.micronaut.runtime.Micronaut.lambda$null$0(Micronaut.java:111)
        at io.micronaut.runtime.Micronaut$$Lambda$1044/0x00000007c26c8b08.run(Unknown Source)
        at java.lang.Thread.run(Thread.java:833)
        at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:596)
        at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:192)
        at com.oracle.svm.core.code.IsolateEnterStub.PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df(generated:0)

Do you have any plans to add support for GraalVM?

If you need, I can provide a sample app to reproduce this error.

Ability to configure size of thread local buffer to avoid message fragmentation

The class ByteBufferDestinations, which is used as a thread local, contains a buffer object that is initialised to 10 KB. All messages larger than 10 KB will be fragmented.

It is possible that someone logs a message larger than 10 KB and does not want it fragmented. The size of that buffer should be configurable. We do not want to support unbounded buffer growth because of the possibility of a DoS attack caused by an erroneous message.
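A minimal sketch of what a configurable, bounded thread-local buffer could look like. The property name `tjahzi.buffer.size` and the 1 MiB hard cap are assumptions for illustration, not existing Tjahzi configuration:

```java
import java.nio.ByteBuffer;

public class ThreadLocalBuffers {

    // Hypothetical hard cap to keep buffer growth bounded (DoS protection).
    private static final int MAX_SIZE = 1 << 20; // 1 MiB

    // Default 10 KiB, overridable via a (hypothetical) system property,
    // but never larger than the cap.
    private static final int SIZE = Math.min(
            Integer.getInteger("tjahzi.buffer.size", 10 * 1024),
            MAX_SIZE);

    private static final ThreadLocal<ByteBuffer> BUFFER =
            ThreadLocal.withInitial(() -> ByteBuffer.allocate(SIZE));

    /** Returns this thread's buffer, reset and ready for a new message. */
    public static ByteBuffer get() {
        ByteBuffer buffer = BUFFER.get();
        buffer.clear();
        return buffer;
    }
}
```

Messages larger than the configured size would still have to be fragmented or rejected; the cap only moves the threshold.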

Complete Netty pipeline

Now it is unidirectional (we do not wait for requests or process responses). This is clearly an invalid implementation. Fix it.

Support for finer timestamp resolution

Currently Tjahzi timestamps all logs using a millisecond-precision timer, provided by Log4j via the getTimeMillis method on LogEvent.

Log4j and Loki support finer-precision timestamps. On the Log4j side these are available via the getInstant method. We should use that instead of getTimeMillis.
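The change amounts to computing the epoch-nanosecond value Loki expects from the second/nanosecond pair instead of multiplying milliseconds. A sketch with plain longs standing in for the accessors on Log4j's Instant:

```java
public class Timestamps {

    /** Epoch nanoseconds from an (epochSecond, nanoOfSecond) pair. */
    public static long toEpochNanos(long epochSecond, int nanoOfSecond) {
        return epochSecond * 1_000_000_000L + nanoOfSecond;
    }

    /** The current millisecond path: the sub-millisecond part is always zero. */
    public static long millisToEpochNanos(long epochMillis) {
        return epochMillis * 1_000_000L;
    }
}
```

The second form is what produces the `000000` micro/nanosecond suffix reported below.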

Discussed in #95

Originally posted by thgau August 26, 2022
I use Tjahzi with log4j and I see that the Loki timestamp always has 000000 in the micro/nanoseconds part.
log4j is able to provide timestamps with finer resolution. I can put these into the text part, but they still do not reach the Loki timestamp.

Spark applications using the Loki appender hang at shutdown

I am using the Loki log appender in an Apache Spark application deployed with spark-submit. The Spark application hangs indefinitely at shutdown. When running the application from the command line I have to explicitly kill the task using CTRL-C in order to terminate the program.

This is the Scala code of my application:

import org.apache.logging.log4j.LogManager
import org.apache.spark.TaskContext
import org.apache.spark.sql.SparkSession

object TestApplication {

  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder.getOrCreate()
    import session.implicits._
    val logger = LogManager.getLogger("TestApplicationLogger")
    logger.info("starting application ")
    val data = Seq.fill[Int](100)(1)
    val count = data
      .toDS()
      .map(x => {
        val message = String.format(
          "process on node %s %s",
          java.net.InetAddress.getLocalHost().getHostName(),
          TaskContext.getPartitionId().toString())
        val l = LogManager.getLogger("TestApplicationLogger")
        l.info(message)
        x
      })
      .count
    logger.info(String.format("Processed %s elements", count.toString()))
  }
}

And here are the last lines of the command line output:

22/12/07 15:57:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 2: Stage finished
22/12/07 15:57:52 INFO DAGScheduler: Job 1 finished: count at TestApplication.scala:19, took 0.368057 s
2022-12-07 15:57:52,364 main TRACE Log4jLoggerFactory.getContext() found anchor interface org.apache.spark.internal.Logging
22/12/07 15:57:52 INFO TestApplicationLogger: Processed 100 elements

- HERE THE PROGRAM STOPS AND CONTINUES ONLY AFTER MANUALLY TERMINATING IT WITH CTRL-C -

22/12/07 15:59:09 INFO SparkContext: Invoking stop() from shutdown hook
2022-12-07 15:59:09,128 SparkUI-71 TRACE Log4jLoggerFactory.getContext() found anchor class org.sparkproject.jetty.util.log.Slf4jLog
2022-12-07 15:59:09,131 shutdown-hook-0 TRACE Log4jLoggerFactory.getContext() found anchor interface org.apache.spark.internal.Logging
22/12/07 15:59:09 INFO SparkUI: Stopped Spark web UI at http://192.168.88.194:4040
22/12/07 15:59:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/12/07 15:59:09 INFO MemoryStore: MemoryStore cleared
22/12/07 15:59:09 INFO BlockManager: BlockManager stopped
22/12/07 15:59:09 INFO BlockManagerMaster: BlockManagerMaster stopped
22/12/07 15:59:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/12/07 15:59:09 INFO SparkContext: Successfully stopped SparkContext
22/12/07 15:59:09 INFO ShutdownHookManager: Shutdown hook called
22/12/07 15:59:09 INFO ShutdownHookManager: Deleting directory /tmp/spark-ae372578-3098-4051-b54b-64444a536497
22/12/07 15:59:09 INFO ShutdownHookManager: Deleting directory /tmp/spark-37b54d5b-12bd-42e5-a828-daa55179b647

As you can see, the program stops after the last line in the Scala code and waits there indefinitely until I press CTRL-C.
Is this a known problem?

outdated netty-jars in package

Hey,
there is an open vulnerability (https://nvd.nist.gov/vuln/detail/CVE-2022-24823) for io.netty:netty-codec-http prior to version 4.1.77.Final. The newest log4j-appender ships Netty in version 4.1.52.Final.

Is there any possibility that the included packages can be updated?

Currently we delete the included older version and manually add the 4.1.77.Final version via Gradle.

Thanks in advance!

How to find logs in Grafana Cloud

We have had success using Promtail to get logs from our Java app log files. For Promtail we have called the Job log4j, and when using this panel https://grafana.com/grafana/dashboards/13639 which we have imported, we can see the logs by entering log4j in the App field.

I believe this is similar to entering {job="log4j"} in the Log browser of Grafana > Explore

For reference, our scrape_configs section of our Promtail config.yaml is like the below:

scrape_configs:
- job_name: app
  static_configs:
  - targets:
      - localhost
    labels:
      job: log4j
      __path__: ../logs/*.log

How do we set the job value in the XML file for the Tjahzi log4j2 appender? Or, more broadly, how are we to view these logs in Grafana Cloud?

For reference, our xml file looks very similar to the below:

<?xml version="1.0" encoding="UTF-8"?>
<configuration packages="pl.tkowalcz.tjahzi.log4j2">
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="Loki"/>
        </Root>
    </Loggers>

    <appenders>
        <Loki name="Loki">
            <host>grafanaCloudHostSameAsWhatweUseInPromtail(without https:// at the front)</host>
            <port>443</port>
            <username>usernameSameAsWhatweUseInPromtail</username>
            <password>passwordSameAsWhatweUseInPromtail</password>

            <PatternLayout>
                <Pattern>%X{tid} [%t] %d{MM-dd HH:mm:ss.SSS} %5p %c{1} - %m%n%exception{full}</Pattern>
            </PatternLayout>

            <Label name="server" value="127.0.0.1"/>
        </Loki>
    </appenders>
</configuration>

Reimplement subset of TextBuilder

Javolution is a fine library that helps us be more efficient by not allocating Strings and StringBuilders. However, we want to keep a minimal dependency set, and we use only a small subset of the library: the TextBuilder class, and only a few of its methods.

It should be easy to do a clean-room implementation of such a (minimal) class.

Dynamic label substitution

Hello again :)

I'm posting this after having read this:

Lookups / variable substitution
Contents of the properties are automatically interpolated by Log4j2 (see here). All environment, system etc. variable references will be replaced by their values during initialization of the appender. Alternatively this process could have been executed for every log message. The latter approach was deemed too expensive. If you need a mechanism to replace a variable after logging system initialization I would lvie to hear your use case - please file an issue.

I'll do my best to explain our use case: we have a distributed multi-tenant application (many running application instances, each one serving different tenants) and we would like to attach a "tenant" label to each message, in order to easily group messages (potentially produced by different instances) but all belonging to the same tenant.

For now, we will simply log the tenant in each message payload (using MDC/ThreadContext), but it would be great to take advantage of Loki labels to make it easier when it comes to monitoring.
An idea would be to be able to write something like this:
<Label name="tenant" value="${ctx:tenant}"/>

I also understand that there is a possible impact in terms of performance having to evaluate the label every message.

passing parameter instead of hard coded value for url

Hi tkowalcz,

I saw in your sample that you used, for example, ${sys:loki.url} instead of a real value in the Loki appender in the log4j example (log4j2-appender/README.md).

I need to use something like this, passing config from a Spring Boot YAML file, but I don't know how to pass a parameter into the Log4j2 context so that it resolves to the real value, as in your usage. I also don't know how you fill the ${sys:loki.host} value in your example.
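As a sketch (not Tjahzi-specific API): a `${sys:...}` lookup in log4j2.xml reads a JVM system property, so the value can be supplied with `-Dloki.host=...` on the command line, or set programmatically before Log4j2 initializes:

```java
public class LokiHostSetup {

    /**
     * Sets the system properties that ${sys:loki.host} / ${sys:loki.port}
     * lookups in log4j2.xml resolve against. Must run before the first
     * logger is created, because the appender interpolates lookups at
     * initialization time.
     */
    public static void configure(String host, String port) {
        System.setProperty("loki.host", host);
        System.setProperty("loki.port", port);
    }
}
```

In a Spring Boot application this is the tricky part: logging typically initializes before application.yaml is read, so values from the YAML file arrive too late for appender initialization unless you set the properties in main() before SpringApplication.run, or pass them as -D flags.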

Thank you for helping me.

logback-appender creating odd slf4j messages during initialization.

It seems that just adding the Loki appender to logback causes the following messages to appear at startup. Any ideas?

  <appender name="LOKI" class="pl.tkowalcz.tjahzi.logback.LokiAppender">
    <host>${loki.host}</host>
    <port>${loki.port}</port>

    <efficientLayout>
      <pattern>%msg</pattern>
    </efficientLayout>

    <label>
      <name>server</name>
      <value>${HOSTNAME}</value>
    </label>
  </appender>
[error] SLF4J: A number (57) of logging calls during the initialization phase have been intercepted and are
[error] SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
[error] SLF4J: See also http://www.slf4j.org/codes.html#replay
[info] Hello world!

Serialize static labels to string representation and reuse

We have a set of static labels that is logged with every stream. Right now these are the only labels logged, ever. First we optimised serialisation by not passing them through the log buffer but setting them in the Agent's constructor. We should also serialise them to a string representation so they are ready whenever we send a message.

Support default value syntax for ctx/MDC reference

Log4j2 allows the definition of a "fallback" value when performing property substitution:
https://logging.apache.org/log4j/2.x/manual/configuration.html#DefaultProperties
And the syntax is quite simple:
${lookupName:key:-defaultValue}

When it comes to MDC properties, which are computed at runtime, it makes sense to allow a default fallback value, but tjahzi does not allow this yet.
My use case is again the "tenant" property: sometimes the tenant may not be present, such as for log messages related to the system (e.g. application startup, periodic tasks, etc.).

Long story short, I would like to be able to write something like this:
<Label name="tenant" value="${ctx:tenant:-system}"/>

Implement protobuf wire protocol and get rid of that dependency

Protobuf increases the size of the release jar and has an inefficient API that forces us to allocate a lot of Strings. We need to create a protobuf message template with enough logic that we can send valid Loki updates without depending on protobuf itself.
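The bulk of a hand-rolled encoder is varint and field-key encoding from the protobuf wire format. A minimal sketch (not Tjahzi's actual implementation) covering the two primitives a fixed Loki push template would lean on:

```java
import java.io.ByteArrayOutputStream;

public class ProtoWire {

    /** Base-128 varint encoding, least-significant 7-bit group first. */
    public static byte[] varint(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80)); // continuation bit set
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }

    /** Field key: (fieldNumber << 3) | wireType; wire type 2 = length-delimited. */
    public static byte[] lengthDelimitedKey(int fieldNumber) {
        return varint((fieldNumber << 3) | 2);
    }
}
```

With these, a string or nested message is emitted as key, varint length, then raw bytes, which avoids allocating protobuf builder objects per log line.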

race condition in Log4jAdapterLabelPrinter

There is a problem with multiple threads in the class Log4jAdapterLabelPrinter.
The StringBuilder allocated there is shared between threads. The threads might corrupt its data if both use it at the same time.
I think the StringBuilder should be allocated once per thread and kept in ThreadLocal storage.
Best Regards,
thgau
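The suggested fix can be sketched as follows; the class name is illustrative, not the real Log4jAdapterLabelPrinter:

```java
public class PerThreadBuilder {

    // One StringBuilder per thread; no sharing, so no cross-thread corruption.
    private static final ThreadLocal<StringBuilder> BUILDER =
            ThreadLocal.withInitial(StringBuilder::new);

    /** Returns this thread's builder, emptied for reuse. */
    public static StringBuilder get() {
        StringBuilder sb = BUILDER.get();
        sb.setLength(0); // reuse the internal char buffer, drop old content
        return sb;
    }
}
```

Resetting with setLength(0) keeps the builder's grown capacity, so steady-state label printing stays allocation-free per thread.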

Patterns in labels

Is it possible to use patterns such as %C{1.} as values in labels, which are then dynamically resolved?

Investigate 'host' parameter resolution point

We need to check what happens if a hostname provided in the configuration resolves to a different IP while the application is running.
One scenario where this may happen is when the hostname points to a load balancer and the underlying physical machines change over time. We should still be able to re-resolve and connect.

log4j2-appender not checking for success response from Loki

Hi,
we were debugging the log transmission of tjahzi to Loki to find out why certain logs were not transmitted.
After a good while of digging, we found a network issue that caused larger batches of logs not to arrive at the Loki server.
Smaller batches that got though always got acknowledged by Loki with a 204 NO CONTENT response. The larger ones lost in the depths of our network did not trigger this response.

Is tjahzi checking for this response to determine the successful transmission of a batch of logs?
If it is, does it log unsuccessful transmissions somewhere?

Sadly I cannot check this myself since I have absolutely no experience with Java itself.

Thank you for your work, now that it's working it'll save us a lot of headaches!

Better send logic

Tjahzi makes too many HTTP calls to Loki: it sends whenever any message is present, and it checks for messages typically at a 1 ms interval. We need to throttle that.

Introduce logic similar to promtail that will send logs when they reach some critical size or timeout is reached.
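A promtail-like policy boils down to flushing when the pending batch is either big enough or old enough. A sketch with illustrative thresholds (the real batch-size and wait values would be configuration):

```java
public class SendPolicy {

    private final int maxBatchBytes;
    private final long maxWaitMillis;

    public SendPolicy(int maxBatchBytes, long maxWaitMillis) {
        this.maxBatchBytes = maxBatchBytes;
        this.maxWaitMillis = maxWaitMillis;
    }

    /**
     * Called from the 1 ms poll loop; sends only when the batch hits the
     * size threshold or has waited long enough, never for empty batches.
     */
    public boolean shouldFlush(int pendingBytes, long millisSinceLastSend) {
        if (pendingBytes == 0) {
            return false;
        }
        return pendingBytes >= maxBatchBytes
                || millisSinceLastSend >= maxWaitMillis;
    }
}
```

This turns one HTTP call per poll tick into at most one call per batch interval, while the timeout bounds latency for low-volume loggers.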

All sorts of lookups in labels don't work

When trying the configuration below, the appender doesn't resolve the lookups and just puts the lookup name as the label value.
I'm using the latest appender version and applied all the configs as in the docs.

I'm using log4j2 via the Spring library and running it as a container in k8s, so the lookups should be valid. Even copying the example from the docs doesn't work.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" monitorInterval="30" packages="pl.tkowalcz.tjahzi.log4j2">
    <Properties>
        <Property name="pod_name">${k8s:podName}</Property>
        <Property name="hostname">${k8s:hostname}</Property>
        <Property name="container_name">${k8s:containerName}</Property>
        <Property name="app">${sys:spring.application.name}</Property>
        <Property name="environment">${sys:spring.profiles.active}</Property>
    </Properties>
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT" follow="true">
            <PatternLayout pattern="%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} %highlight{${LOG_LEVEL_PATTERN:-%5p}}{FATAL=red blink, ERROR=red, WARN=yellow bold, INFO=green, DEBUG=green bold, TRACE=blue} %style{${sys:PID}}{magenta} [%15.15t] %style{%-40.40C{1.}}{cyan} : %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
        </Console>
        <Loki name="loki-appender">
            <host>loki.kube-prometheus-stack.svc.cluster.local</host>
            <port>3100</port>

            <PatternLayout>
                <Pattern>%X{tid} [%t] %d{MM-dd HH:mm:ss.SSS} %5p %c{1} - %m%n%exception{full}</Pattern>
            </PatternLayout>

            <Label name="app" value="${app}"/>
            <Label name="server" value="${sys:hostname}"/>
<!--            <Label name="server" value=${sys:hostname}/>-->
            <Label name="pod_name" value="${pod_name}"/>
            <Label name="hostname" value="${hostname}"/>
            <Label name="container_name" value="${container_name}"/>
            <Label name="environment" value="${environment}"/>
            <LogLevelLabel>level</LogLevelLabel>
        </Loki>

    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="ConsoleAppender" />
            <AppenderRef ref="loki-appender"/>
        </Root>
        <Logger name="org.springframework" level="WARN" />
    </Loggers>
</Configuration>

AtomicBuffer is not correctly aligned: addressOffset=12 is not divisible by 8

Hello, I use version 9.20 and the following error appears at startup. What is the problem? Thanks.
ERROR in ch.qos.logback.core.joran.spi.Interpreter@108:16 - RuntimeException in Action for tag [appender] java.lang.IllegalStateException: AtomicBuffer is not correctly aligned: addressOffset=12 is not divisible by 8
at org.springframework.boot.logging.logback.LogbackLoggingSystem.loadConfiguration(LogbackLoggingSystem.java:169)
at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithSpecificConfig(AbstractLoggingSystem.java:66)
at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:57)
at org.springframework.boot.logging.logback.LogbackLoggingSystem.initialize(LogbackLoggingSystem.java:118)
at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:318)
... 24 more

Using MDC for labels

Hey there,

I want to migrate from the ELK stack to Loki.
Is it possible to send the MDC parameters (set dynamically via ThreadContext.put(key, value)) as labels to Loki? And how would I achieve this with log4j2?

Thanks in advance

Rewrite how statistics are exposed for consumption

After implementing the current approach to metrics I have identified some drawbacks in how they are exposed.

To access the metrics, the user has to get hold of the appender object (through ((org.apache.logging.log4j.core.LoggerContext) LogManager.getContext(false))) and set a MonitoringModule implementation.

The proposal is to add a configuration entry that enables statistics in the appender and specifies a transient file (possibly in /dev/shm) that will be mmapped and used to dump statistics in real time. We will provide utility classes to decode that file and send the stats via e.g. Dropwizard Metrics.
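The mmapped-statistics idea can be sketched as follows; the one-long-counter-per-slot layout and class name are assumptions for illustration, not a proposed file format:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MmapMetrics {

    private final MappedByteBuffer buffer;

    private MmapMetrics(MappedByteBuffer buffer) {
        this.buffer = buffer;
    }

    /** Maps `slots` 8-byte counters in a fresh temp file (stand-in for /dev/shm). */
    public static MmapMetrics createTemp(int slots) {
        try {
            Path file = Files.createTempFile("tjahzi-metrics", ".bin");
            try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
                // The mapping stays valid after the channel is closed.
                return new MmapMetrics(raf.getChannel()
                        .map(FileChannel.MapMode.READ_WRITE, 0, slots * 8L));
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public void increment(int slot) {
        buffer.putLong(slot * 8, buffer.getLong(slot * 8) + 1);
    }

    public long read(int slot) {
        return buffer.getLong(slot * 8);
    }
}
```

A decoder process could map the same file read-only and ship the counters to any metrics backend without ever touching the appender object.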

TjahziInitializer causing Out of Memory Error, when running in Docker Container

When running the project in a Docker container, these are the error logs:

Setting Active Processor Count to 8
Calculating JVM memory based on 18761956K available memory
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx18103971K -XX:MaxMetaspaceSize=145984K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 18761956K, Thread Count: 250, Loaded Class Count: 23360, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=8 -XX:MaxDirectMemorySize=10M -Xmx18103971K -XX:MaxMetaspaceSize=145984K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
Exception in thread "main" java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:109)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
	at java.base/java.nio.Bits.reserveMemory(Unknown Source)
	at java.base/java.nio.DirectByteBuffer.<init>(Unknown Source)
	at java.base/java.nio.ByteBuffer.allocateDirect(Unknown Source)
	at pl.tkowalcz.tjahzi.TjahziInitializer.allocateJavaBuffer(TjahziInitializer.java:97)
	at pl.tkowalcz.tjahzi.TjahziInitializer.createLoggingSystem(TjahziInitializer.java:29)
	at pl.tkowalcz.tjahzi.log4j2.LokiAppenderBuilder.build(LokiAppenderBuilder.java:137)
	at pl.tkowalcz.tjahzi.log4j2.LokiAppenderBuilder.build(LokiAppenderBuilder.java:31)
	at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
	at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:618)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:691)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:708)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:263)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
	at org.apache.commons.logging.LogAdapter$Log4jLog.<clinit>(LogAdapter.java:155)
	at org.apache.commons.logging.LogAdapter$Log4jAdapter.createLog(LogAdapter.java:122)
	at org.apache.commons.logging.LogAdapter.createLog(LogAdapter.java:89)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:67)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:59)
	at org.springframework.boot.SpringApplication.<clinit>(SpringApplication.java:196)
	at com.sara.extractservice.ExtractServiceApplication.main(ExtractServiceApplication.java:36)
	... 8 more

I am using Ubuntu 20.04.2 LTS with 32 GB RAM. The Docker image was created using mvn spring-boot:build-image. Could you share what changes I need to make to fix this issue? Thanks.

How to use the log4j-appender with property files and labels

My existing project is already configured in the log4j2 properties format. I would like to introduce the Loki appender, but I'm failing to add the labels.

This is my current properties file:

monitorInterval=30

rootLogger.level = INFO
rootLogger.appenderRef.file.ref = FileAppender
rootLogger.appenderRef.loki.ref = loki-appender

appender.loki.name = loki-appender
appender.loki.type = Loki
appender.loki.host = loki
appender.loki.port = 3100
appender.loki.layout.type = PatternLayout
appender.loki.layout.pattern = %X{tid} [%t] %d{MM-dd HH:mm:ss.SSS} %5p %c{1} - %m%n%exception{full}
appender.loki.label.source.type = label
appender.loki.label.source.name = source
appender.loki.label.source.value = log4j

And this is the error I'm facing:

Exception in thread "main" java.lang.ExceptionInInitializerError
Caused by: org.apache.logging.log4j.core.config.ConfigurationException: No type attribute provided for component label
	at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.createComponent(PropertiesConfigurationBuilder.java:334)
	at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.processRemainingProperties(PropertiesConfigurationBuilder.java:348)
	at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.createAppender(PropertiesConfigurationBuilder.java:225)
	at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.build(PropertiesConfigurationBuilder.java:158)
	at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:56)
	at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:35)
	at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:458)
	at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:385)
	at org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:293)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:647)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
	at org.apache.flink.client.cli.CliFrontend.<clinit>(CliFrontend.java:89)
