mspnp / microservices-reference-implementation

A reference implementation demonstrating microservices architecture and best practices for Microsoft Azure

Home Page: https://docs.microsoft.com/azure/architecture/microservices/

License: Other

Mustache 39.22% Bicep 60.78%
microservices-architecture microservice azure kubernetes microservices-reference azure-pipelines aks cicd

microservices-reference-implementation's Introduction

Microservices Reference Implementation

Microsoft patterns & practices

This reference implementation shows a set of best practices for building and running a microservices architecture on Microsoft Azure, using Kubernetes.

❗ The previous advanced Microservices Reference Implementation is now known as the AKS Fabrikam Drone Delivery reference implementation. The AKS Fabrikam Drone Delivery reference implementation is built on top of the guidance forming the AKS Baseline Cluster. This basic Microservices Reference Implementation will remain here for your reference, but we recommend that you consider basing your work on the AKS Fabrikam Drone Delivery reference implementation.

Guidance

This project has a companion set of articles that describe the challenges, design patterns, and best practices for building a microservices architecture. You can find these articles on the Azure Architecture Center.

Scenario

Fabrikam, Inc. (a fictional company) is starting a drone delivery service. The company manages a fleet of drone aircraft. Businesses register with the service, and users can request a drone to pick up goods for delivery. When a customer schedules a pickup, a backend system assigns a drone and notifies the user with an estimated delivery time. While the delivery is in progress, the customer can track the location of the drone, with a continuously updated ETA.

The Drone Delivery app

The Drone Delivery application is a sample application that consists of several microservices. Because it's a sample, the functionality is simulated, but the APIs and the interactions between microservices are intended to reflect real-world design patterns.

  • Ingestion service. Receives client requests and buffers them.
  • Scheduler service. Dispatches client requests and manages the delivery workflow.
  • Supervisor service. Monitors the workflow for failures and applies compensating transactions.
  • Account service. Manages user accounts.
  • Third-party Transportation service. Manages third-party transportation options.
  • Drone service. Schedules drones and monitors drones in flight.
  • Package service. Manages packages.
  • Delivery service. Manages deliveries that are scheduled or in-transit.
  • Delivery History service. Stores the history of completed deliveries.

Test results and metrics

The Drone Delivery application has been tested up to 2000 messages/sec:

| Service | Replicas | ~Max CPU (mc) | ~Max Mem (MB) | Avg. Throughput* | Max. Throughput* | Avg (ms) | 50th (ms) | 95th (ms) | 99th (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Nginx | 1 | N/A | N/A | serve: 1595 reqs/sec | serve: 1923 reqs/sec | N/A | N/A | N/A | N/A |
| Ingestion | 10 | 474 | 488 | ingest: 1275 msgs/sec | ingest: 1710 msgs/sec | 251 | 50.1 | 1560 | 2540 |
| Workflow (receive messages) | 35 | 1445 | 79 | egress: 1275 msgs/sec | egress: 1710 msgs/sec | 81.5 | 0 | 25.7 | 121 |
| Workflow (call backend services + mark message as complete) | 35 | 1445 | 79 | complete: 1100 msgs/sec | complete: 1322 msgs/sec | 561.8 | 447 | 1350 | 2540 |
| Package | 50 | 213 | 78 | N/A | N/A | 67.5 | 53.9 | 165 | 306 |
| Delivery | 50 | 328 | 334 | N/A | N/A | 93.8 | 82.4 | 200 | 304 |
| Dronescheduler | 50 | 402 | 301 | N/A | N/A | 85.9 | 72.6 | 203 | 308 |

*sources:

  1. Serve: Visual Studio Load Test Throughput Requests/Sec
  2. Ingest: Azure Service Bus metrics Incoming Messages/Sec
  3. Egress: Azure Service Bus metrics Outgoing Messages/Sec
  4. Complete: Application Insights Service Bus Complete dependencies
  5. Avg/50th/95th/99th: Application Insights dependencies
  6. CPU/Mem: Azure Monitor for Containers

Deployment

To deploy the solution, follow the steps listed here.


This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

microservices-reference-implementation's People

Contributors

ckittel, codestellar, dragon119, ferantivero, francischeung, fsimonazzi, hallihan, ilkerd93, kirpasingh, lastcoolnameleft, magrande, mic-max, msftgits, neilpeterson, nithinpnp, tungbq, v-fearam, veronicawasson


microservices-reference-implementation's Issues

Error building ingestion service

When I run this step:

docker build -t openjdk_and_mvn-build:8-jdk -f $INGESTION_PATH/Dockerfilemaven $INGESTION_PATH

I get the following error:

curl: (22) The requested URL returned error: 404 Not Found
The command '/bin/sh -c mkdir -p /usr/share/maven /usr/share/maven/ref && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz && echo "${SHA} /tmp/apache-maven.tar.gz" | sha256sum -c - && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 && rm -f /tmp/apache-maven.tar.gz && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn' returned a non-zero code: 22

The problem is that it's looking for https://apache.osuosl.org/maven/maven-3/3.5.2/binaries/apache-maven-3.5.2-bin.tar.gz but this file doesn't exist. See https://apache.osuosl.org/maven/maven-3/

[Helm charts] mark AI_KEY as required

If AI_KEY is empty, the ingestion service and the others should not be allowed to deploy. This will prevent HTTP 503 errors down the road in the case of ingestion.

related: #138

possible Shipping cascading failures scenario under heavy loads

Under heavy loads, the Delivery microservice fails to store items in Redis. This happens when you create in-flight deliveries and never complete or cancel them.

Although it is not a very realistic scenario, let's consider this as a possible opportunity to improve resiliency. The idea would be to add two new fallback strategies (a sketch of the second one follows the list):

  1. Free up space when this situation occurs.
  2. Implement Delivery History round trips when in-flight deliveries are not present.
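
A minimal sketch of the second fallback, assuming hypothetical DeliveryCache and DeliveryHistoryClient interfaces rather than the repository's actual types: on a Redis miss, round-trip to the Delivery History service instead of failing the lookup.

import java.util.Optional;

// Hypothetical sketch (not the repository's actual types): fall back to the
// Delivery History service when an in-flight delivery is missing from Redis.
interface DeliveryCache { Optional<String> get(String deliveryId); }         // Redis-backed in-flight store
interface DeliveryHistoryClient { Optional<String> get(String deliveryId); } // completed-deliveries store

public class DeliveryLookup {
    private final DeliveryCache cache;
    private final DeliveryHistoryClient history;

    public DeliveryLookup(DeliveryCache cache, DeliveryHistoryClient history) {
        this.cache = cache;
        this.history = history;
    }

    public Optional<String> findById(String deliveryId) {
        // Try the in-flight store first; on a miss, fall back to Delivery History
        // instead of returning "not found" outright.
        return cache.get(deliveryId).or(() -> history.get(deliveryId));
    }
}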

provide other ways to get our RI deployed

Now that we have documented how to get our RI deployed step by step, it would be great to provide other ways to deploy it faster and more easily in the next version:

  1. A Helm chart.
  2. A shell script for creation and cleanup.
  3. A one-shot kubectl apply.

Add -o json to command

A small enhancement to the command:
export ACR_SERVER=$(az acr show -g $RESOURCE_GROUP -n $ACR_NAME --query "loginServer")

Add -o json:
export ACR_SERVER=$(az acr show -g $RESOURCE_GROUP -n $ACR_NAME --query "loginServer" -o json)

Potentially dangerous async code in the scheduler

The service callers are implemented using a pattern that seems problematic (it would definitely be problematic if mapped literally to .NET).

For example, the package service caller (the same pattern is used for the other services): the entry method performs an asynchronous call that returns a future, registers an asynchronous callback on that future to update an instance variable, and then returns that instance variable as the result of the method.

Two main concerns:

  • The value set by the callback is used without knowing whether the callback has executed at all.
  • The logic relies on shared state (not even closures, but instance variables), so depending on how this caller is shared, this could be a problem.
    ** Given that these callers are initialized statically, these instances will be shared unless Akka serializes processing (which would be problematic in itself).
    ** Also, sharing state across threads without synchronization is usually problematic.
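
An illustrative sketch of the pattern described above, using hypothetical names rather than the repository's actual code; the instance field is written by the callback at an unknown later time and read by the caller with no ordering or visibility guarantee:

import java.util.concurrent.CompletableFuture;

// Hypothetical illustration of the problematic caller pattern; names do not
// match the repository's actual code.
public class PackageServiceCaller {
    private String lastResponse;  // instance state shared across concurrent requests

    public String getPackage(String packageId) {
        CompletableFuture<String> future = callPackageServiceAsync(packageId);

        // The callback mutates the shared field at some later, unknown time...
        future.whenCompleteAsync((response, error) -> {
            if (error == null) {
                lastResponse = response;
            }
        });

        // ...but the field is returned immediately: it may be null, stale, or a
        // value written by a callback that belongs to a different concurrent request.
        return lastResponse;
    }

    private CompletableFuture<String> callPackageServiceAsync(String packageId) {
        // Stand-in for the real asynchronous HTTP call.
        return CompletableFuture.supplyAsync(() -> "package:" + packageId);
    }
}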

[Deployment] Improve monitoring section fluentd-ES

Azure Monitor for Containers is now being prescribed as the main monitoring solution for the Reference Implementation.

The fluentd-ES section can now be removed, or a note can be added to clarify how these two monitoring solutions will coexist.

Scheduler under stress causes resource runaway on gc activity

@kirpasingh
We are moving to WebClient, but in the meantime we need to fix this issue.

Solution:
In Main.java, set the global properties for the proxy:

	// Set the JVM-wide HTTP proxy properties from the scheduler settings.
	if (StringUtils.isNotEmpty(SchedulerSettings.HttpProxyValue)) {
		String[] address = SchedulerSettings.HttpProxyValue.split("\\s*:\\s*");
		System.setProperty("http.proxyHost", address[0]);
		System.setProperty("http.proxyPort", address[1]);
	}

In ServiceCallerImpl, return a new AsyncRestTemplate per request:

public AsyncRestTemplate getAsyncRestTemplate() {
	// Create a fresh AsyncRestTemplate per request so no state is shared across threads.
	AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate();

	asyncRestTemplate.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
	asyncRestTemplate.getMessageConverters().add(new StringHttpMessageConverter());
	asyncRestTemplate.setErrorHandler(new ServiceCallerResponseErrorHandler());
	return asyncRestTemplate;
}

In the constructor, only set the headers:

public ServiceCallerImpl() {
	this.requestHeaders = new HttpHeaders();
	this.requestHeaders.setAccept(Collections.singletonList(MediaType.APPLICATION_JSON));
}

Failed delivery should contain metadata so as to be retriable

If a delivery fails at any stage, it should be put onto a compensation queue together with metadata such as the service name and the error message, so that the compensation handler thread can retry the delivery using that metadata.
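
A minimal sketch of such a compensation-queue message, with hypothetical field names (not taken from the repository):

import java.time.Instant;

// Hypothetical compensation-queue message; field names are illustrative only.
public class FailedDeliveryMessage {
    private final String deliveryId;
    private final String failedServiceName;  // the service whose call failed
    private final String errorMessage;
    private final int attemptCount;
    private final Instant failedAtUtc;

    public FailedDeliveryMessage(String deliveryId, String failedServiceName,
                                 String errorMessage, int attemptCount, Instant failedAtUtc) {
        this.deliveryId = deliveryId;
        this.failedServiceName = failedServiceName;
        this.errorMessage = errorMessage;
        this.attemptCount = attemptCount;
        this.failedAtUtc = failedAtUtc;
    }

    // The compensation handler uses these to decide whether to retry the failed
    // step or to apply a compensating transaction instead.
    public String getDeliveryId() { return deliveryId; }
    public String getFailedServiceName() { return failedServiceName; }
    public String getErrorMessage() { return errorMessage; }
    public int getAttemptCount() { return attemptCount; }
    public Instant getFailedAtUtc() { return failedAtUtc; }
}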

Callback is added to newly created completed future

The delivery processor creates a completed future only to attach a callback on completion. Since the future is already completed, the callback runs right away. Because the "async" variant is used, the callback is scheduled for execution on a different thread; that's a useful side effect, but if that is the goal of this construct, there are more direct and explicit mechanisms to schedule work in the thread pool (see the sketch after the snippet).

	CompletableFuture.completedFuture(deliverySchedule).whenCompleteAsync((ds, error) -> {
		if (ds == null) {
			Log.error("Failed Delivery");
			superviseFailureAsync(deliveryRequest, ServiceName.DeliveryService,
					error == null ? "Unknown error" : ExceptionUtils.getStackTrace(error).toString())
							.thenAcceptAsync(result -> Log.debug(result));
		} else {
			Log.info("Completed Delivery", ds.toString());
		}
	});
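
For instance, if the intent is simply to run this handling on the thread pool, something like the following is more direct (a sketch that reuses the names from the snippet above, not the repository's actual code):

	// Schedule the work on the common pool directly instead of wrapping an
	// already-known value in a completed future just to hang a callback on it.
	CompletableFuture.runAsync(() -> {
		if (deliverySchedule == null) {
			Log.error("Failed Delivery");
			superviseFailureAsync(deliveryRequest, ServiceName.DeliveryService, "Unknown error")
					.thenAcceptAsync(result -> Log.debug(result));
		} else {
			Log.info("Completed Delivery", deliverySchedule.toString());
		}
	});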

Latest Helm version is not working with the the commands listed

Hi,
I'm encountering a couple of issues with the Helm commands.
I have installed Helm version 3.1, and the following commands and parameters do not exist in the new version.

helm init --service-account tiller

helm install stable/nginx-ingress --name nginx-ingress-dev --namespace ingress-controllers --set rbac.create=true --set controller.ingressClass=nginx-dev

I think the helm repo add command is also missing, since stable/nginx-ingress in the command above is not recognized either.

Thanks,
Mohammad

Cannot deploy delivery service

I followed Deploying the Reference Implementation and I am stuck at Deploy the Delivery service:

helm install $HELM_CHARTS/delivery/ \
     --set image.tag=0.1.0 \
     --set image.repository=delivery \
     --set dockerregistry=$ACR_SERVER \
     --set identity.clientid=$DELIVERY_PRINCIPAL_CLIENT_ID \
     --set identity.resourceid=$DELIVERY_PRINCIPAL_RESOURCE_ID \
     --set cosmosdb.id=$DATABASE_NAME \
     --set cosmosdb.collectionid=$COLLECTION_NAME \
     --set keyvault.uri=$DELIVERY_KEYVAULT_URI \
     --set reason="Initial deployment" \
     --namespace backend \
     --name delivery-v0.1.0

When I invoke the command, it results in this error message:
"Error: azureidentities.aadpodidentity.k8s.io "delivery-identity" already exists"

I tried to delete delivery-identity by replacing DELIVERY_PRINCIPAL_RESOURCE_ID and DELIVERY_PRINCIPAL_CLIENT_ID in /charts/delivery/templates/delivery-identity.yaml and then invoking kubectl delete -f delivery-identity.yaml, but this results in azureidentities.aadpodidentity.k8s.io "delivery-identity" not found. Not sure why it says "delivery-identity" exists but then it is not found...

Design principles not followed

It looks like the design principles stay limited to the documents and are not implemented in the code. Why are we using deliveryRepository in DeliveriesController?

Please do not say it is a mock implementation. Even interns can write good mocks. We need to show real-life code so that other devs can learn and take home the right approach.

Thanks,
Mandeep

HTTP 404 while trying to check the request status

Hi,
when I POST a new delivery request, I receive an HTTP 202, which seems to be expected.

HTTP/2 202 
server: nginx/1.15.6
date: Wed, 13 Nov 2019 11:15:43 GMT
content-type: application/json;charset=UTF-8
location: http://deliveries/api/deliveries/d3e64649-fae2-4fc5-8498-09b34de70227
request-context: appId=cid-v1:b338a12e-bd41-4560-aa9c-1508ed91a24e
strict-transport-security: max-age=15724800; includeSubDomains

If I try to check the request status, I keep getting HTTP 404.

curl "https://$EXTERNAL_INGEST_FQDN/api/deliveries/d3e64649-fae2-4fc5-8498-09b34de70227" --header 'Accept: application/json' -k -i
HTTP/2 404 
server: nginx/1.15.6
date: Wed, 13 Nov 2019 11:21:16 GMT
content-length: 0
request-context: appId=cid-v1:b338a12e-bd41-4560-aa9c-1508ed91a24e
strict-transport-security: max-age=15724800; includeSubDomains

All services seem to be available.

helm ls
NAME                       REVISION  UPDATED                   STATUS    CHART                  APP VERSION  NAMESPACE
delivery-v0.1.0-dev        1         Wed Nov 13 10:59:13 2019  DEPLOYED  delivery-v0.1.0        v0.1.0       backend-dev
dronescheduler-v0.1.0-dev  1         Wed Nov 13 11:27:00 2019  DEPLOYED  dronescheduler-v0.1.0  v0.1.0       backend-dev
ingestion-v0.1.0-dev       1         Wed Nov 13 11:21:38 2019  DEPLOYED  ingestion-v0.1.0       v0.1.0       backend-dev
nginx-ingress              1         Wed Nov 13 10:49:01 2019  DEPLOYED  nginx-ingress-1.0.1    0.21.0       ingress-controllers
package-v0.1.0-dev         1         Wed Nov 13 11:03:43 2019  DEPLOYED  package-v0.1.0         v0.1.0       backend-dev
workflow-v0.1.0-dev        1         Wed Nov 13 11:13:15 2019  DEPLOYED  workflow-v0.1.0        v0.1.0       backend-dev

ACS->AKS

Now that AKS is generally available, can we please use AKS instead of acs-engine in this repo?

Error running Ingestion build image to produce jar file

I am running on Win10 Enterprise v1809 with Docker for Windows v18.09.2 installed and configured for Linux containers.

I am able to successfully build the Ingestion service's build image (openjdk_and_mvn-build:8-jdk), whose base image is openjdk:8-jdk.

When I run this step:

$INGESTION_PATH="${SOURCE_CODE_ROOT}\src\shipping\ingestion"
docker run -it -v ${INGESTION_PATH}:/sln openjdk_and_mvn-build:8-jdk

I get the following error:

Error: Could not find or load main class org.apache.maven.surefire.booter.ForkedBooter

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:21 min
[INFO] Finished at: 2019-03-06T13:58:30Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on project Ingestion: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test failed: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /sln && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -jar /sln/target/surefire/surefirebooter2156255563599099968.jar /sln/target/surefire/surefire2972241147860250331tmp /sln/target/surefire/surefire_02516224541232345229tmp

l5d support for multiple namespaces

Linkerd supports multiple namespaces. We could implement some of the following approaches to get servicemesh config ready for bc-shipping:

  1. Ship a custom YAML for l5d.
  2. Add some instructions on how to modify servicemesh.yml.
  3. Add some instructions on how to remove the integration with l5d (temporary).

For now, to work around this, please use this configuration instead:

EDIT:

wget https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd.yml && \
sed -i "s#/default#/bc-shipping#g" linkerd.yml && \
kubectl apply -f linkerd.yml

Delivery service is failing with HTTP 500 in a brand new deployment

Setup:

  • Deploy the delivery service as per the deployment steps.

Repro:

  • Deploy all the services in a K8S cluster as per instructions. Examine the logs of the delivery service pod.

Output:

info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 8.9961ms 500
{"@t":"2018-01-31T22:00:01.1236257Z","@mt":"{HostingRequestStartingLog:l}","@r":["Request starting HTTP/1.1 PUT http://deliveryservice/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74 application/json;charset=UTF-8 500"],"Protocol":"HTTP/1.1","Method":"PUT","ContentType":"application/json;charset=UTF-8","ContentLength":500,"Scheme":"http","Host":"deliveryservice","PathBase":"","Path":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","QueryString":"","HostingRequestStartingLog":"Request starting HTTP/1.1 PUT http://deliveryservice/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74 application/json;charset=UTF-8 500","EventId":{"Id":1},"SourceContext":"Microsoft.AspNetCore.Hosting.Internal.WebHost","RequestId":"0HLB8UIHU9U9K:00000065","RequestPath":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","CorrelationId":"Y2vWzYy/VEafVQJH/zFLnCy8cOeIeVCJAAAAAAAAAAA="}
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 PUT http://deliveryservice/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74 application/json;charset=UTF-8 500
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
      Executing action method Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService) with arguments (Fabrikam.DroneDelivery.DeliveryService.Models.Delivery, 655e091b-8bcd-497e-8214-c77304ad6b74) - ModelState is Valid
{"@t":"2018-01-31T22:00:01.1248330Z","@mt":"Executing action method {ActionName} with arguments ({Arguments}) - ModelState is {ValidationState}","ActionName":"Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService)","Arguments":["Fabrikam.DroneDelivery.DeliveryService.Models.Delivery","655e091b-8bcd-497e-8214-c77304ad6b74"],"ValidationState":"Valid","EventId":{"Id":1},"SourceContext":"Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker","ActionId":"c8520b88-e24c-4c13-b469-31c5f84ae975","RequestId":"0HLB8UIHU9U9K:00000065","RequestPath":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","CorrelationId":"Y2vWzYy/VEafVQJH/zFLnCy8cOeIeVCJAAAAAAAAAAA="}
info: Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController[0]
      In Put action with delivery 655e091b-8bcd-497e-8214-c77304ad6b74: {"Id":"655e091b-8bcd-497e-8214-c77304ad6b74","Owner":{"UserId":"user id for logging","AccountId":"some-owner-id"},"Pickup":{"Altitude":0.60163037537661235,"Latitude":0.94598246496548966,"Longitude":0.09818297902715134},"Dropoff":{"Altitude":0.049069503989147,"Latitude":0.86240977793214368,"Longitude":0.0977955656165792},"Deadline":"DeadlyQueueOfZombiatedDemons","Expedited":true,"ConfirmationRequired":0,"DroneId":"\"AssignedDroneIdeac1212c-d9e3-4ce3-b187-5f935bfbfdab\""}
{"@t":"2018-01-31T22:00:01.1251718Z","@mt":"In Put action with delivery {Id}: {@DeliveryInfo}","Id":"655e091b-8bcd-497e-8214-c77304ad6b74","DeliveryInfo":"{\"Id\":\"655e091b-8bcd-497e-8214-c77304ad6b74\",\"Owner\":{\"UserId\":\"user id for logging\",\"AccountId\":\"some-owner-id\"},\"Pickup\":{\"Altitude\":0.60163037537661235,\"Latitude\":0.94598246496548966,\"Longitude\":0.09818297902715134},\"Dropoff\":{\"Altitude\":0.049069503989147,\"Latitude\":0.86240977793214368,\"Longitude\":0.0977955656165792},\"Deadline\":\"DeadlyQueueOfZombiatedDemons\",\"Expedited\":true,\"ConfirmationRequired\":0,\"DroneId\":\"\\\"AssignedDroneIdeac1212c-d9e3-4ce3-b187-5f935bfbfdab\\\"\"}","SourceContext":"Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController","ActionId":"c8520b88-e24c-4c13-b469-31c5f84ae975","ActionName":"Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService)","RequestId":"0HLB8UIHU9U9K:00000065","RequestPath":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","CorrelationId":"Y2vWzYy/VEafVQJH/zFLnCy8cOeIeVCJAAAAAAAAAAA="}
info: RedisCache[0]
      Start: storing item in Redis
{"@t":"2018-01-31T22:00:01.1254558Z","@mt":"Start: storing item in Redis","SourceContext":"RedisCache","ActionId":"c8520b88-e24c-4c13-b469-31c5f84ae975","ActionName":"Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService)","RequestId":"0HLB8UIHU9U9K:00000065","RequestPath":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","Scope":["CreateItemAsync"],"CorrelationId":"Y2vWzYy/VEafVQJH/zFLnCy8cOeIeVCJAAAAAAAAAAA="}
info: RedisCache[0]
      End: storing item in Redis
{"@t":"2018-01-31T22:00:01.1325360Z","@mt":"End: storing item in Redis","SourceContext":"RedisCache","ActionId":"c8520b88-e24c-4c13-b469-31c5f84ae975","ActionName":"Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService)","RequestId":"0HLB8UIHU9U9K:00000065","RequestPath":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","Scope":["CreateItemAsync"],"CorrelationId":"Y2vWzYy/VEafVQJH/zFLnCy8cOeIeVCJAAAAAAAAAAA="}
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService) in 9.2041ms
{"@t":"2018-01-31T22:00:01.1333130Z","@mt":"Executed action {ActionName} in {ElapsedMilliseconds}ms","ActionName":"Fabrikam.DroneDelivery.DeliveryService.Controllers.DeliveriesController.Put (Fabrikam.DroneDelivery.DeliveryService)","ElapsedMilliseconds":9.2041,"EventId":{"Id":2},"SourceContext":"Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker","ActionId":"c8520b88-e24c-4c13-b469-31c5f84ae975","RequestId":"0HLB8UIHU9U9K:00000065","RequestPath":"/api/Deliveries/655e091b-8bcd-497e-8214-c77304ad6b74","CorrelationId":"Y2vWzYy/VEafVQJH/zFLnCy8cOeIeVCJAAAAAAAAAAA="}
fail: Fabrikam.DroneDelivery.DeliveryService.Middlewares.GlobalLoggerMiddleware[0]
      An internal handled exception has occurred: Object reference not set to an instance of an object.
System.NullReferenceException: Object reference not set to an instance of an object.
   at Fabrikam.DroneDelivery.DeliveryService.Services.RedisCache`1.<CreateItemAsync>d__12.MoveNext() in /src/Fabrikam.DroneDelivery.DeliveryService/Services/RedisCache.cs:line 98

Even the Swagger UI returns 500.

Events should be named in the simple past tense. An event happened in the past.

I see this event named DeliveryStatusEvent:
https://github.com/mspnp/microservices-reference-implementation/blob/master/src/bc-shipping/delivery/Fabrikam.DroneDelivery.DeliveryService/Models/DeliveryStatusEvent.cs

Events (any kind: Domain Events, Integration Events, etc.) should be named in the simple past tense. An event is something that has happened in the past.
As Greg Young highlights here:
http://codebetter.com/gregyoung/2010/04/11/what-is-a-domain-event/
An event is something that has happened in the past.
All events should be represented as verbs in the past tense such as CustomerRelocated, CargoShipped, etc.

Also, see here:
https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/microservice-ddd-cqrs-patterns/domain-events-design-implementation

fluentd in status CrashLoopBackOff

Hi,
I followed the manual setup, but my fluentd pod keeps getting into the CrashLoopBackOff state. Here is the output of kubectl describe pods fluentd-krm4s -n kube-system. Any ideas?

Name:           fluentd-krm4s
Namespace:      kube-system
Priority:       0
Node:           aks-agentpool-13756522-2/10.240.0.4
Start Time:     Wed, 13 Nov 2019 11:33:50 +0100
Labels:         controller-revision-hash=6b9df4c48d
                k8s-app=fluentd-logging
                pod-template-generation=1
                version=v1
Annotations:    <none>
Status:         Running
IP:             10.244.0.12
IPs:            <none>
Controlled By:  DaemonSet/fluentd
Containers:
  fluentd:
    Container ID:   docker://7f711e2cabb05900246d339e7997a135a3b4f1e744d47cb605224041fac6a6e5
    Image:          fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
    Image ID:       docker-pullable://fluent/fluentd-kubernetes-daemonset@sha256:5d3bca81124cf99825aa8c5db6258ae7a591c9954ed78097027954ace2b8747e
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 25 Nov 2019 08:17:02 +0100
      Finished:     Mon, 25 Nov 2019 08:17:03 +0100
    Ready:          False
    Restart Count:  3331
    Limits:
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Environment:
      FLUENT_ELASTICSEARCH_HOST:        elasticsearch
      FLUENT_ELASTICSEARCH_PORT:        9200
      FLUENT_ELASTICSEARCH_SCHEME:      http
      FLUENT_ELASTICSEARCH_SSL_VERIFY:  true
      LOGZIO_TOKEN:                     ThisIsASuperLongToken
      LOGZIO_LOGTYPE:                   kubernetes
      KUBERNETES_PORT_443_TCP_ADDR:     onlgvhaxguqdo-afa5ed90.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:                  tcp://onlgvhaxguqdo-afa5ed90.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:          tcp://onlgvhaxguqdo-afa5ed90.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:          onlgvhaxguqdo-afa5ed90.hcp.westeurope.azmk8s.io
    Mounts:
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-58fhf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:  
  default-token-58fhf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-58fhf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason   Age                      From                               Message
  ----     ------   ----                     ----                               -------
  Warning  BackOff  2m45s (x78930 over 11d)  kubelet, aks-agentpool-13756522-2  Back-off restarting failed container

Cannot find COSMOSDB_COL_NAME

There is no definition for $COSMOSDB_COL_NAME in microservices-reference-implementation/deployment.md, line 247:

--set cosmosDb.collectionName=$COSMOSDB_COL_NAME \
