openzipkin / zipkin-aws
Reporters and collectors for use in Amazon's cloud
License: Apache License 2.0
X-Ray has nice support for exceptions. It would be cool to support it.
Stack trace includes:
Caused by: java.lang.NullPointerException: null
at brave.instrumentation.aws.AwsClientTracing$TracingExecutorFactory.<init>(AwsClientTracing.java:69)
at brave.instrumentation.aws.AwsClientTracing.build(AwsClientTracing.java:47)
This is because getClientConfiguration() does not get initialized in the default builder, so it blows up when TracingExecutorFactory calls clientConfiguration.getMaxConnections(). Instead, Amazon uses AwsSyncClientParams to provide defaults for any missing values right before the build method is called:
@Override
public final TypeToBuild build() {
  return configureMutableProperties(build(getSyncClientParams()));
}
Temporary work-around: Have async builders include: .withClientConfiguration(new ClientConfigurationFactory().getConfig());
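To illustrate, here is a minimal sketch of the null guard the instrumentation could apply instead of the user-side workaround. The class, method name, and default value below are stand-ins (the AWS SDK's real default happens to be 50 connections, but treat that as an assumption), not the real SDK types:

```java
// Hypothetical sketch: fall back to a default when the async builder never
// initialized the client configuration, instead of dereferencing null.
public class ClientConfigGuard {
  static final int SDK_DEFAULT_MAX_CONNECTIONS = 50; // assumption: SDK default

  static int maxConnections(Integer configured) {
    // Guard against the uninitialized-builder case described above.
    return configured != null ? configured : SDK_DEFAULT_MAX_CONNECTIONS;
  }
}
```

With this guard, TracingExecutorFactory could size its executor even when the builder left the configuration unset.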
I haven't been able to reproduce locally.
This is likely caused by a dependency conflict of some kind. The iteration time on this will not be fun, since you have to wait for Circle on every change.
From @cemo on May 26, 2017 10:59
I would like to see DynamoDB support for Zipkin. I usually let AWS services store my data and handle the rest of the services myself. DynamoDB seems like a good and cheap alternative for Zipkin storage. Is it possible to support it as well?
Copied from original issue: openzipkin/zipkin#1599
Looks like the module is missing from the root POM, and the version number in brave-instrumentation-aws-java-sdk-core is incorrect.
Looks like, to use Error Prone, we need to make the compiler configuration conditional. cc @shakuzen
With the SDK instrumentation finished I think this is a good time to summarize what we feel should be included or addressed before we release a version 1.0 of this library.
- ErrorHandler impl, and associated storage code
The following are features I can come up with by reading through the service list:
- FinishedSpanHandler for tagging spans with host/container metadata
- TraceContext.Extractor for Lambda requests from API Gateway
I'll take this on, but basically we need to have publishing set up so that this can eventually go to Maven Central.
version: '2'
services:
  zipkin:
    image: openzipkin/zipkin-aws
    container_name: zipkin-aws
    ports:
      - 9411:9411
    networks:
      - docker-net
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=https://search-*****************.us-east-2.es.amazonaws.com
      - KINESIS_APP_NAME=zipkin-kinesis
      - KINESIS_STREAM_NAME=kinesis-zipkin
      - AWS_ACCESS_KEY_ID=keyid
      - AWS_SECRET_ACCESS_KEY=secretkeyid
      - KINESIS_AWS_STS_REGION=us-east-2
      - AWS_DEFAULT_REGION=us-east-2
      - AWS_CBOR_DISABLE=1
networks:
  docker-net:
    driver: bridge
My region is us-east-2, but the Kinesis collector always tries to connect to us-east-1.
zipkin-aws | 2018-07-20 10:34:34.799 INFO 5 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
zipkin-aws | 2018-07-20 10:34:35.258 INFO 5 --- [ main] o.s.b.w.e.u.UndertowServletWebServer : Undertow started on port(s) 9411 (http) with context path ''
zipkin-aws | 2018-07-20 10:34:35.277 INFO 5 --- [ main] z.s.ZipkinServer : Started ZipkinServer in 19.063 seconds (JVM running for 20.253)
zipkin-aws | 2018-07-20 10:34:35.468 INFO 5 --- [inesis-zipkin-0] c.a.s.k.c.l.w.Worker : Syncing Kinesis shard info
zipkin-aws | 2018-07-20 10:34:35.709 ERROR 5 --- [inesis-zipkin-0] c.a.s.k.c.l.w.ShardSyncTask : Caught exception while sync'ing Kinesis shards and leases
zipkin-aws |
zipkin-aws | com.amazonaws.services.kinesis.model.ResourceNotFoundException: Stream kinesis-zipkin under account not found. (Service: AmazonKinesis; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: ********************************)
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[aws-java-sdk-core-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2388) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2364) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.executeListShards(AmazonKinesisClient.java:1337) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.AmazonKinesisClient.listShards(AmazonKinesisClient.java:1312) ~[aws-java-sdk-kinesis-1.11.348.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy.listShards(KinesisProxy.java:304) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxy.getShardList(KinesisProxy.java:365) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.getShardList(ShardSyncer.java:319) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.syncShardLeases(ShardSyncer.java:121) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncer.checkAndCreateLeasesForNewShards(ShardSyncer.java:90) ~[amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncTask.call(ShardSyncTask.java:71) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:49) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.initialize(Worker.java:635) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.run(Worker.java:566) [amazon-kinesis-client-1.9.1.jar!/:?]
zipkin-aws | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
zipkin-aws | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
zipkin-aws | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
When building my PR, which doesn't touch SQS, I encountered a test flake in the new tests added in #79.
Not sure what happened. The Circle build can be seen here: https://circleci.com/gh/openzipkin/zipkin-aws/426
Hello,
Would you be able to add functionality for an Elasticsearch domain inside a VPC?
In this file you are referencing DomainStatus.Endpoint, but when a VPC is in use the API returns DomainStatus.Endpoints.vpc instead.
Thanks
Following openzipkin/brave#602, we should make the method field in the X-Ray reporter use the route template, when available, instead of the HTTP method.
In the V1 SDK instrumentation we extracted errors from the intermediate HTTP requests so that we could see errors when retries occurred. This was not immediately obvious when building the V2 instrumentation, so we should add it for feature parity.
Changes in openzipkin/brave#846 made HexCodec.writeHexByte package-private and therefore no longer accessible from AWSPropagation. They also added more efficient access to string span/trace ID values, which we should probably use to benefit from the cached ID strings.
Create Spring auto-configuration to make the SQS collector easy to integrate with Zipkin server.
One of the more important things about Zipkin is that it is an architecture-level abstraction rather than a framework. We should be careful to review that the way we encode messages (and headers) is natural for any language.
For example, in Kafka, we don't even encode the representation type; rather, we peek at the bytes.
https://github.com/openzipkin/zipkin/tree/master/zipkin-collector/kafka#encoding-spans-into-kafka-messages While this is more about Kafka's metadata limitations, it is a somewhat unusual approach; in HTTP, by contrast, we look at media-type headers to tell which codec to use.
An anti-pattern would be encoding Java class names or something else hard to describe in pseudocode.
I'm not saying we are doing anything wrong here, just calling out something that may not be very explicit.
Food for thought. cc @eirslett @basvanbeek @mjbryant @jcarres-mdsol @rogeralsing @abesto
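As a concrete illustration of byte-peeking, here is a hedged sketch. The real Zipkin detection rules are richer than this two-way split; only the "JSON lists start with '['" fact is taken from how Zipkin's codecs actually frame span lists:

```java
// Sketch of metadata-free format detection: inspect the payload's first
// byte rather than relying on a wrapper or content-type header.
public class FormatPeek {
  static String detect(byte[] message) {
    if (message.length == 0) throw new IllegalArgumentException("empty message");
    // A JSON-encoded span list always begins with '['.
    if (message[0] == '[') return "json";
    // Anything else is treated as a thrift-encoded list in this sketch.
    return "thrift";
  }
}
```

The point is that this rule is describable in one line of pseudocode in any language, unlike, say, a serialized Java class name.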
The TracingRequestHandler in brave-instrumentation/aws-java-sdk-core can throw an NPE due to an application span not being present in both the beforeAttempt and afterAttempt callbacks.
The root cause is that the S3 client executes a stealth HEAD request for many operations, like CreateBucket and doesBucketExistV2, to do some preliminary checking and caching. Unfortunately, these HEAD requests do not invoke beforeExecution, so an application span is never created for them, but they do invoke beforeAttempt.
This means the overall aws-sdk span isn't available, which results in the NPE.
A couple of options:
beforeExecution
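A sketch of what guarding the attempt callbacks could look like, with illustrative names rather than the real handler API:

```java
// Hypothetical sketch: if beforeExecution never ran (stealth HEAD request),
// there is no application span to parent the attempt under, so the attempt
// callbacks should no-op instead of throwing an NPE.
public class AttemptGuard {
  static String beforeAttempt(Object applicationSpan) {
    if (applicationSpan == null) {
      return "skipped"; // stealth request: nothing to attach an attempt to
    }
    return "attempt-span-started";
  }
}
```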
I noticed that when you boot up an Elasticsearch instance, you can still hit the root and health-check URLs with no signature. Of course, if you goof a signature, it will yell.
It would be cool if someone could help dig out when exactly we need to sign requests, especially as pertains to health checks.
Dear developers,
I am currently using Spring Cloud Sleuth and Spring AWS Messaging to send and receive messages from AWS SQS.
I find that the X-Ray trace cannot be propagated across the SQS message producer and consumer, and it appears as 2 independent traces.
Is there any feature-enhancement roadmap to support trace propagation across SQS, and even SNS?
Regards,
Alex Wong
The convention in Zipkin server (and most YAML config) is to use kebab-case names. The SQS configuration should be changed to follow suit.
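For illustration only — the property keys below are hypothetical, not the SQS module's actual configuration:

```yaml
# Hypothetical example of the kebab-case convention:
zipkin:
  collector:
    sqs:
      queue-url: https://sqs.us-east-1.amazonaws.com/123456789012/zipkin
      wait-time-seconds: 20   # kebab-case, not waitTimeSeconds or wait_time_seconds
```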
I have configured 2 apps which are traced by Zipkin, with data stored in AWS Elasticsearch. If I give full access, or allow unsigned requests for IAM users, Zipkin data is stored in AWS ES. But if I restrict access to a specific IAM user and try to save to ES, I am unable to store data. I was using the autoconfigure aws-elasticsearch module, which takes care of signing requests from Zipkin to ES, but it is failing. However, if I make a manually signed request to ES, it works. Is there an issue with the auto-signer? Please help.
As part of #59, a check was added that sends "unknown" as the segment name to X-Ray when the received span doesn't have a remote service name. This solves the issue that X-Ray segments must have a name, but creates issues on the X-Ray side. For example:
In this case the "unknown" segment belongs to the security-gateway-aws service and not to the test-app, and in the service map "unknown" services appear as dependencies:
As part of building PRs and master, we should be running all of our tests. I'm not sure if this worked in the past, but we should get it back to working.
This probably only involves enabling and configuring the Failsafe plugin.
I get this exception when trying to start up with all the latest deps:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [zipkin2.reporter.Sender]: Factory method 'kinesisSender' threw exception; nested exception is java.lang.NoClassDefFoundError: zipkin2/reporter/internal/BaseCall
Amazon Kinesis is a service similar to Kafka. It offers lower message costs and a large amount of data per message (1 MB, just like Kafka). It persists all data for a minimum of 24 hours, which means you have to pay certain costs to keep the stream alive.
I've not heard anyone request this specifically, just noting some things as we go along.
In Brave, the minimum language level for core code is 1.6, as there are a number of legacy apps and/or agents that cannot move beyond that. This is also the case in zipkin-reporter. The minimum level for collectors is Java 7, as custom servers needn't go so low.
There are libraries which have a higher language level, such as OkHttp (Java 7), and I'm not sure of the minimum level for the AWS SDK (haven't looked).
Whatever we decide here is important, and should be noted in the README.
Dear developers,
I am currently using zipkin-aws, Brave, and Spring Cloud Sleuth extensively together to send traces to AWS X-Ray. I found that if I include the X-Ray SDK and zipkin-aws together, their tracing contexts are separate and each independently records its trace to X-Ray.
Is there any roadmap for integration between zipkin-aws and the AWS X-Ray SDK in the near future?
Regards,
Alex Wong
Migration is via a custom layout, per this ticket: spring-projects/spring-boot#8107
As mentioned in the README, a region method is not available on SQSSender.builder.
In order to use AWS-managed resources for as much of the process as possible, it would be nice to support pulling spans off of an SQS queue.
This will include a span reporter and a span collector for SQS.
When the SQS collector encounters a message that fails to deserialize, it should log and delete the offending message. If this doesn't happen, the bad message will cycle back through the queue and continue to fail.
Stack trace for reference
java.lang.RuntimeException: Cannot decode spans
at zipkin.internal.Collector.doError(Collector.java:144) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.errorReading(Collector.java:119) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.errorReading(Collector.java:114) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.acceptSpans(Collector.java:59) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.V2Collector.acceptSpans(V2Collector.java:43) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.collector.Collector.acceptSpans(Collector.java:112) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.collector.sqs.SQSSpanProcessor.process(SQSSpanProcessor.java:109) [zipkin-collector-sqs-0.8.7.jar!/:na]
at zipkin.collector.sqs.SQSSpanProcessor.run(SQSSpanProcessor.java:75) [zipkin-collector-sqs-0.8.7.jar!/:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_152]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
Caused by: java.lang.IllegalArgumentException: Empty endpoint at $[3].remoteEndpoint reading List from json
at zipkin2.internal.JsonCodec.exceptionReading(JsonCodec.java:229) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.JsonCodec.readList(JsonCodec.java:142) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.codec.SpanBytesDecoder$1.decodeList(SpanBytesDecoder.java:38) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin.internal.V2Collector.decodeList(V2Collector.java:48) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.V2Collector.decodeList(V2Collector.java:29) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
at zipkin.internal.Collector.acceptSpans(Collector.java:57) [io.zipkin.java-zipkin-2.4.5.jar!/:na]
... 9 common frames omitted
Caused by: java.lang.IllegalArgumentException: Empty endpoint at $[3].remoteEndpoint
at zipkin2.internal.V2SpanReader$1.fromJson(V2SpanReader.java:134) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.V2SpanReader$1.fromJson(V2SpanReader.java:109) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.V2SpanReader.fromJson(V2SpanReader.java:59) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.V2SpanReader.fromJson(V2SpanReader.java:22) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
at zipkin2.internal.JsonCodec.readList(JsonCodec.java:138) ~[io.zipkin.zipkin2-zipkin-2.4.5.jar!/:na]
... 13 common frames omitted
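The log-and-delete behavior proposed above can be sketched as follows, using hypothetical stand-ins for the SQS client and the span decoder (nothing here is the real collector API):

```java
import java.util.function.Consumer;

// Sketch: decode failures are logged and the message deleted anyway, so a
// poison message cannot reappear after the visibility timeout and fail forever.
public class PoisonMessageHandling {
  interface Deleter { void delete(String receiptHandle); }

  static boolean process(String receiptHandle, byte[] body,
                         Consumer<byte[]> decode, Deleter queue) {
    try {
      decode.accept(body);
      queue.delete(receiptHandle); // normal path: processed, then deleted
      return true;
    } catch (RuntimeException e) {
      System.err.println("dropping undecodable message: " + e);
      queue.delete(receiptHandle); // delete rather than let SQS requeue it
      return false;
    }
  }
}
```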
While trying to get docker-zipkin-aws updated with both collectors, I ran into a problem. Both have their own configuration for their credentials provider. Unfortunately this causes a problem with Spring at startup, because they are both able to create a bean of the same type and trample on each other.
Supporting SNS for reporting allows us to subscribe any number of SQS queues in any region to our span stream. This would also allow a fan-out to be used for any real-time analytics purposes that might arise.
This will include an SNS span reporter.
AWS just announced support for an API/UI to configure sampling rules. We should implement this feature to make use of Brave more seamless in an X-Ray world.
Depends on the existence of a reservoir sampler: openzipkin/brave#705
https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html
Batch APIs are useful in SQS when you have multiple messages you want to send at one time. The zipkin.reporter.Sender is already designed for batch operations. For example, if you give it a timeout and a threshold, it will collect as many messages as possible to meet them.
After looking carefully, I noticed that not only is the current AwsBufferedSqsSender redundant to this, but it also implies higher overhead (lower signal). For example, regardless of whether you are using batches or not, an API request cannot be larger than 256 KiB. This is a lower figure than most transports, so collectors are more than capable of accepting a single list of 256 KiB of spans.
Rather than confuse configuration with a second tier of batching (which won't be effective anyway when using the AsyncReporter), we should revert to single-message sends (with up to 256 KiB of spans).
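A sketch of why one tier of packing suffices: a greedy packer bounded by the 256 KiB request limit. The sizes here count only span bytes; real requests also carry encoding overhead, so the true budget is smaller:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: pack encoded spans into as few requests as possible, each no
// larger than the SQS request limit. No second batching tier is needed.
public class RequestPacker {
  static final int MAX_REQUEST_BYTES = 256 * 1024;

  static List<List<byte[]>> pack(List<byte[]> encodedSpans) {
    List<List<byte[]>> requests = new ArrayList<>();
    List<byte[]> current = new ArrayList<>();
    int bytes = 0;
    for (byte[] span : encodedSpans) {
      if (!current.isEmpty() && bytes + span.length > MAX_REQUEST_BYTES) {
        requests.add(current);   // flush the full request
        current = new ArrayList<>();
        bytes = 0;
      }
      current.add(span);
      bytes += span.length;
    }
    if (!current.isEmpty()) requests.add(current);
    return requests;
  }
}
```

For example, five 100 KiB spans fit two per request, yielding three requests.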
As part of the move to the Apache Software Foundation, this repository will transfer ownership to apache and be renamed incubator-zipkin-aws.
Zipkin's SQSSender doesn't work with an AWS SQS FIFO queue.
It throws the following exception in AsyncReporter at this.sender.sendSpans(nextMessage).execute():
com.amazonaws.services.sqs.model.AmazonSQSException: The request must contain the parameter MessageGroupId. (Service: AmazonSQS; Status Code: 400; Error Code: MissingParameter; Request ID: 82b6a6cc-c2e8-5905-b27e-f45be9c97ea0)
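A sketch of how a sender could detect this case up front. The ".fifo" URL suffix is how SQS names FIFO queues; whether the sender should fail fast or send a fixed MessageGroupId (which would serialize the whole span stream into one group) is an open design choice:

```java
// Sketch: a FIFO queue is identifiable by its ".fifo" URL suffix, and every
// SendMessage to it must carry a MessageGroupId.
public class FifoCheck {
  static boolean requiresMessageGroupId(String queueUrl) {
    return queueUrl.endsWith(".fifo");
  }
}
```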
When projects already import the V2 AWS SDK for other things, it would be nice to have the option of using the V2 SDK for the SQS reporter as well. Besides not inflating the project's dependencies, reportedly importing V1 and V2 in the same project produces conflicts.
Originally requested by @pims as openzipkin/brave#473
Hi there,
I’ve been trying to instrument the Amazon S3 Client from the Java AWS-SDK.
New versions of the SDK offer hooks via the RequestHandler2 abstract class, which implements the following interface:
public interface IRequestHandler2 {
  AmazonWebServiceRequest beforeExecution(AmazonWebServiceRequest request);
  AmazonWebServiceRequest beforeMarshalling(AmazonWebServiceRequest request);
  void beforeRequest(Request<?> request);
  HttpResponse beforeUnmarshalling(Request<?> request, HttpResponse httpResponse);
  void afterResponse(Request<?> request, Response<?> response);
  void afterError(Request<?> request, Response<?> response, Exception e);
}
I’ve tried something along those lines, but couldn't get it to work properly. @adriancole suggested raising the issue here.
public class ZipkinRequestHandler extends RequestHandler2 {
  private final Tracer tracer;
  private final CurrentTraceContext currentTraceContext;
  private final HttpClientHandler<Request, Response> handler;
  private final TraceContext.Injector<Request> injector;
  private final TraceContext.Extractor<Request> extractor;

  public static ZipkinRequestHandler create(final HttpTracing httpTracing,
      final HttpClientAdapter<Request, Response> adapter) {
    return new ZipkinRequestHandler(
        httpTracing.tracing().tracer(),
        httpTracing.tracing().currentTraceContext(),
        HttpClientHandler.create(httpTracing, adapter),
        httpTracing.tracing().propagation().injector(new Propagation.Setter<Request, String>() {
          @Override
          public void put(Request carrier, String key, String value) {
            carrier.addHeader(key, value);
          }
        }),
        httpTracing.tracing().propagation().extractor(new Propagation.Getter<Request, String>() {
          @Override
          public String get(Request carrier, String key) {
            final Map<String, String> headers = carrier.getHeaders();
            return headers.get(key);
          }
        })
    );
  }

  private ZipkinRequestHandler(Tracer tracer, CurrentTraceContext ctc,
      HttpClientHandler<Request, Response> handler,
      TraceContext.Injector<Request> injector, TraceContext.Extractor<Request> extractor) {
    this.tracer = tracer;
    this.currentTraceContext = ctc;
    this.handler = handler;
    this.injector = injector;
    this.extractor = extractor;
  }

  @Override
  public void beforeRequest(Request<?> request) {
    TraceContext parent = currentTraceContext.get();
    try (CurrentTraceContext.Scope scope = currentTraceContext.newScope(parent)) {
      Span span = handler.handleSend(injector, request);
      span.annotate("start" + LocalDateTime.now().toString());
      System.out.println(LocalDateTime.now() + " beforeRequest " + span.toString());
    }
  }

  @Override
  public void afterResponse(Request<?> request, Response<?> response) {
    final Span span = tracer.joinSpan(extractor.extract(request).context());
    span.annotate("end-" + LocalDateTime.now().toString());
    handler.handleReceive(response, null, span);
    System.out.println(LocalDateTime.now() + " afterResponse " + span.toString());
  }

  @Override
  public void afterError(Request<?> request, Response<?> response, Exception ex) {
    final Span span = tracer.joinSpan(extractor.extract(request).context());
    handler.handleReceive(null, ex, span);
    System.out.println("afterError " + span.toString());
  }
}
@devinsba noticed the error handling wasn't right when his account was missing permissions for Elasticsearch. We should note in the README (well, create a README first, then note) the IAM permissions needed. Let's add an error nicer than an NPE when they aren't there.
There was also a report of a hang on a newly provisioned cluster. This might be a smell of an infinite socket or otherwise-missing timeout. Something to look into.
cc @sethp-jive
If you look deeply into the code, you'll notice Amazon's SQS client uses Base64 to encode messages, similarly to how we do in Scribe. Even JSON has encoding issues. For example, there are constraints on which Unicode characters are permitted. After all that... there's either URL or POST encoding!
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html
Because Zipkin doesn't put constraints on UTF-8, a user can create a message that cannot be published (without encoding). One way to solve this is to only permit thrift, and then always Base64 (which is fine, albeit inefficient). Another way is to blindly try JSON, and assume users won't use restricted Unicode characters.
Either way, we have to reflect the (Base64) encoding overhead in Sender.messageSizeInBytes, and note in the docs any constraints beyond that and what people should expect (even if the answer is just "watch for dropped messages").
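The Base64 overhead to reflect in messageSizeInBytes follows a fixed formula, checked here against the JDK encoder:

```java
import java.util.Base64;

// Padded Base64 emits 4 output bytes for every 3 input bytes, rounded up to
// a multiple of 4 -- roughly a 33% size penalty on the encoded spans.
public class Base64Overhead {
  static int encodedLength(int rawBytes) {
    return 4 * ((rawBytes + 2) / 3);
  }

  public static void main(String[] args) {
    byte[] raw = new byte[1000];
    // Verify the formula against the JDK encoder.
    System.out.println(Base64.getEncoder().encodeToString(raw).length()); // 1336
    System.out.println(encodedLength(raw.length));                        // 1336
  }
}
```

So a sender budgeting against the 256 KiB request limit should compare the encoded length, not the raw span bytes.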
Builder methods should return the concrete type, not the interface type.
With zipkin-sparkstreaming moving to the attic, we should perhaps deprecate or move this module, like was just done for zipkin-zookeeper.
Thoughts, @adriancole?
For those of us that use gRPC, it would be nice to have the encoder support handling spans from the brave-grpc instrumentation.
I wonder if there is some abstraction we can add so we don't have to keep manually implementing these things as new RPC mechanisms are instrumented in Brave; obviously HTTP is fairly well understood and its tags are standardized.
It isn't used in any important way, and complicates the build.
From @cemo on October 18, 2017 21:20
I am in the process of customizing Brave and putting it into a production system, but I came across a problem.
Despite creating my interceptor with hardcoded labels, it always displays "remote" in the console. I could not find time to check it, but it seems there is a bug there. I might give it a try tomorrow to find the culprit.
Copied from original issue: openzipkin/brave#524
Hi,
This issue is more of a question:
My app sends data to Kinesis with the sender-kinesis (zipkin-aws) utility. I'm using the Kinesis collector with Elasticsearch storage as an independent process; now I want a hybrid: run another Kinesis collector with X-Ray storage.
The reason for the hybrid is that I still want to keep using the Zipkin UI for traces and the AWS X-Ray console for other purposes.
I don't see a Kinesis collector with X-Ray storage. Does it exist? If it exists, how can I do it?
kicked https://circleci.com/gh/openzipkin/zipkin-aws/567 due to
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/circleci/project/collector-sqs && /usr/lib/jvm/java-9-openjdk-amd64/bin/java -jar /home/circleci/project/collector-sqs/target/surefire/surefirebooter15158461991753914067.jar /home/circleci/project/collector-sqs/target/surefire 2018-07-31T23-48-15_167-jvmRun1 surefire9318205394990007620tmp
This was the image
Status: Downloaded newer image for circleci/openjdk:9-jdk
using image circleci/openjdk@sha256:c53ae15adb4c48727b6b7ea1763e2d95ed6414f90
Oddly, that image doesn't show up in CircleCI, at least as far as I can tell: https://hub.docker.com/r/circleci/openjdk/tags/
The SQS instrumentation for Brave uses queue.url, where the X-Ray storage converter expects aws.queue_url.
I think the X-Ray encoder should support Brave's tags in addition to the X-Ray-named counterparts.
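One possible shape for that support, as a sketch. Only the tag pair named in this issue is mapped; any further mappings would be assumptions needing the same verification:

```java
import java.util.Map;

// Sketch of a name-normalization step the X-Ray encoder could apply before
// converting Brave tags into X-Ray fields.
public class TagNameMap {
  static final Map<String, String> BRAVE_TO_XRAY = Map.of("queue.url", "aws.queue_url");

  static String toXRayName(String braveTag) {
    // Unknown tags pass through unchanged.
    return BRAVE_TO_XRAY.getOrDefault(braveTag, braveTag);
  }
}
```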