palantir / k8s-spark-scheduler

A Kubernetes Scheduler Extender to provide gang scheduling support for Spark on Kubernetes

License: Apache License 2.0

Shell 4.56% Go 94.74% Dockerfile 0.70%

k8s-spark-scheduler's Introduction

Archived

This project is no longer maintained.

Kubernetes Spark Scheduler Extender


k8s-spark-scheduler-extender is a Kubernetes Scheduler Extender that is designed to provide gang scheduling capabilities for running Apache Spark on Kubernetes.

Running Spark applications at scale on Kubernetes with the default kube-scheduler is prone to resource starvation and oversubscription: naively scheduled driver pods can occupy space that should be reserved for their executors. Using k8s-spark-scheduler-extender guarantees that a driver will only be scheduled if there is space in the cluster for all of its executors. It can also guarantee the scheduling order of drivers with respect to their creation timestamps.

Requirements:

  • Kubernetes: 1.11.0
  • Spark: Any snapshot build that includes commit f6cc354d83. This is expected to be in Spark 3.x

The Spark scheduler extender is a Witchcraft server and uses Godel for testing and building. It is meant to be deployed with a new kube-scheduler instance running alongside the default scheduler. This way, non-Spark pods continue to be scheduled by the default scheduler, while opt-in pods are scheduled by spark-scheduler.

Usage

To set up the scheduler extender as a new scheduler named spark-scheduler, run:

kubectl apply -f examples/extender.yml

This will create a new service account, a ClusterRoleBinding for permissions, a config map, and a deployment, all in the spark namespace. Note that this example sets up the new scheduler with a super user. k8s-spark-scheduler-extender groups nodes in the cluster by a label specified in its configuration; nodes that this scheduler should consider must have this label set. FIFO order is preserved for pods that have a node affinity or a node selector set for the same instance-group label. The example configuration sets this label key to instance-group.
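For example, assuming the instance-group label key from the example configuration, a node can be opted in with a standard kubectl label command (the node name and label value below are placeholders):

kubectl label node <node-name> instance-group=main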

Refer to Spark's website for documentation on running Spark with Kubernetes. To schedule a Spark application using spark-scheduler, you must apply the following metadata to the driver and executor pods.

driver:

apiVersion: v1
kind: Pod
metadata:
  labels:
    spark-app-id: my-custom-id
  annotations:
    spark-driver-cpu: "1"
    spark-driver-mem: "1Gi"
    spark-executor-cpu: "2"
    spark-executor-mem: "4Gi"
    spark-executor-count: "8"
spec:
  schedulerName: spark-scheduler

executor:

apiVersion: v1
kind: Pod
metadata:
  labels:
    spark-app-id: my-custom-id
spec:
  schedulerName: spark-scheduler

As of f6cc354d83, Spark supports specifying pod templates for drivers and executors. Although Spark configuration can also be used to apply labels and annotations, the pod template feature is the only way to set schedulerName. To apply the above overrides, save them as files and set these configuration overrides:

"spark.kubernetes.driver.podTemplateFile": "/path/to/driver.template",
"spark.kubernetes.executor.podTemplateFile": "/path/to/executor.template"

Dynamic Allocation

k8s-spark-scheduler-extender also supports running Spark applications in dynamic allocation mode. You can find more information about how to configure Spark to make use of dynamic allocation in the Spark documentation.
To inform k8s-spark-scheduler-extender that you are running an application with dynamic allocation enabled, you should omit setting the spark-executor-count annotation on the driver pod, and instead set the following three annotations:

  • spark-dynamic-allocation-enabled: "true"
  • spark-dynamic-allocation-min-executor-count: minimum number of executors to always reserve resources for. Should be equal to the spark.dynamicAllocation.minExecutors value you set in the Spark configuration
  • spark-dynamic-allocation-max-executor-count: maximum number of executors to allow your application to request at a given time. Should be equal to the spark.dynamicAllocation.maxExecutors value you set in the Spark configuration

If dynamic allocation is enabled, k8s-spark-scheduler-extender will guarantee that your application is only scheduled if the driver and the minimum number of executors fit in the cluster. Executors beyond the minimum are not reserved for, and are only scheduled if there is capacity for them when the application requests them.
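As a sketch, a driver pod template for a dynamic allocation application could combine the annotations above like this (the app id, resource values, and executor counts are illustrative):

apiVersion: v1
kind: Pod
metadata:
  labels:
    spark-app-id: my-custom-id
  annotations:
    spark-driver-cpu: "1"
    spark-driver-mem: "1Gi"
    spark-executor-cpu: "2"
    spark-executor-mem: "4Gi"
    spark-dynamic-allocation-enabled: "true"
    spark-dynamic-allocation-min-executor-count: "2"
    spark-dynamic-allocation-max-executor-count: "10"
spec:
  schedulerName: spark-scheduler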

Configuration

k8s-spark-scheduler-extender is a Witchcraft service and supports the configuration options detailed in that project's GitHub documentation. Additional configuration options are listed below, with an illustrative configuration sketch after the list:

  • fifo: a boolean flag that turns on FIFO processing of Spark drivers. With this enabled, younger Spark drivers are blocked from scheduling until the cluster has space for the oldest Spark driver. Executor scheduling is unaffected by this.
  • kube-config: path to a kube-config file
  • binpack: the algorithm used to binpack the pods of a Spark application onto the free space in the cluster. Currently available options are distribute-evenly and tightly-pack, the former being the default. They differ in how they distribute executors: distribute-evenly round-robins across available nodes, whereas tightly-pack fills one node before moving to the next.
  • qps and burst: rate-limiting parameters for the Kubernetes clients, used directly in client construction.
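As a rough sketch only, these options would sit in the extender's runtime configuration; the exact file layout follows the Witchcraft conventions and examples/extender.yml, and the values below are illustrative assumptions rather than recommended settings:

fifo: true
binpack: tightly-pack
kube-config: /path/to/kubeconfig
qps: 5
burst: 10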

Development

Use ./godelw docker build to build an image from the Dockerfile template. The built image uses the default configuration. The deployment created by kubectl apply -f examples/extender.yml can be used to iterate locally.

Use ./examples/submit-test-spark-app.sh <id> <executor-count> <driver-cpu> <driver-mem> <driver-nvidia-gpus> <executor-cpu> <executor-mem> <executor-nvidia-gpus> to mock a Spark application launch. Created pods will have a node selector for instance-group: main, so the desired nodes in the cluster must have this label set.
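For example (all argument values below are illustrative):

./examples/submit-test-spark-app.sh my-test-app 2 1 1Gi 0 1 1Gi 0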

Use ./godelw verify to run tests and style checks.

Contributing

The team welcomes contributions! To make changes:

  • Fork the repo and make a branch
  • Write your code (ideally with tests) and make sure the CircleCI build passes
  • Open a PR (optionally linking to a GitHub issue)

License

This project is made available under the Apache 2.0 License.

k8s-spark-scheduler's People

Contributors

a-k-g, adambalogh, alexis-d, alibiyeslambek, ashrayjain, bergeoisie, blakehawkins, chia7712, chrisbattarbee, codekarma, cosmin-ionita, gouthamreddykotapalle, j-baker, k-simons, laflechejonathan, onursatici, palaska, pisarenko-net, rbotarleanu, rkaram, svc-excavator-bot, vvkot


k8s-spark-scheduler's Issues

Work with Cluster Autoscaler?

Hi, I'm trying to figure out whether k8s-spark-scheduler works in conjunction with the cluster-autoscaler.

My scenario is that I want a node pool that starts small but can grow (up to a point) when I submit Spark jobs: the number of nodes increases, the job is fulfilled, and then the cluster-autoscaler scales it back down.

The Spark job is always Pending

I followed the readme and called kubectl apply -f examples/extender.yml, and the scheduler was created.

spark-scheduler-f575548bb-7fs5w   2/2     Running   0          52m
spark-scheduler-f575548bb-l2bdn   2/2     Running   0          52m

Then I created pod templates for the driver and executor.

apiVersion: v1
kind: Pod
metadata:
  labels:
    spark-app-id: my-custom-id
  annotations:
    spark-driver-cpus: 1
    spark-driver-mem: 1g
    spark-executor-cpu: 2
    spark-executor-mem: 4g
    spark-executor-count: 8
spec:
  schedulerName: spark-scheduler

apiVersion: v1
kind: Pod
metadata:
  labels:
    spark-app-id: my-custom-id
spec:
  schedulerName: spark-scheduler

The command to submit the Spark job is shown below.

./bin/spark-submit \
    --master k8s://https://spark10:6443 \
    --deploy-mode cluster \
    --name my-custom-id \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=1 \
    --conf spark.kubernetes.container.image=chia7712/spark:latest \
    --conf spark.kubernetes.container.image.pullPolicy=Never \
    --conf spark.kubernetes.driver.podTemplateFile=/home/chia7712/driver.yaml \
    --conf spark.kubernetes.executor.podTemplateFile=/home/chia7712/executor.yaml \
    local:///opt/spark/examples/jars/spark-examples_2.12-3.1.2.jar

However, the job is always Pending. The spec of the driver is shown below.

Name:         my-custom-id-d759047a14c4c0ce-driver
Namespace:    default
Priority:     0
Node:         <none>
Labels:       spark-app-id=my-custom-id
              spark-app-selector=spark-8e9a9108444e47878de54a64a1849f46
              spark-role=driver
Annotations:  spark-driver-cpus: 1
              spark-driver-mem: 1g
              spark-executor-count: 8
              spark-executor-cpu: 2
              spark-executor-mem: 4g
Status:       Pending
IP:           
IPs:          <none>
Containers:
  spark-kubernetes-driver:
    Image:       chia7712/spark:latest
    Ports:       7078/TCP, 7079/TCP, 4040/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Args:
      driver
      --properties-file
      /opt/spark/conf/spark.properties
      --class
      org.apache.spark.examples.SparkPi
      local:///opt/spark/examples/jars/spark-examples_2.12-3.1.2.jar
    Limits:
      memory:  1408Mi
    Requests:
      cpu:     1
      memory:  1408Mi
    Environment:
      SPARK_USER:                 chia7712
      SPARK_APPLICATION_ID:       spark-8e9a9108444e47878de54a64a1849f46
      SPARK_DRIVER_BIND_ADDRESS:   (v1:status.podIP)
      SPARK_LOCAL_DIRS:           /var/data/spark-364d254a-6342-468e-8c65-74439134c645
      SPARK_CONF_DIR:             /opt/spark/conf
    Mounts:
      /opt/spark/conf from spark-conf-volume-driver (rw)
      /opt/spark/pod-template from pod-template-volume (rw)
      /var/data/spark-364d254a-6342-468e-8c65-74439134c645 from spark-local-dir-1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9xk4g (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  pod-template-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-custom-id-d759047a14c4c0ce-driver-podspec-conf-map
    Optional:  false
  spark-local-dir-1:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  spark-conf-volume-driver:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      spark-drv-07a9b27a14c4c432-conf-map
    Optional:  false
  kube-api-access-9xk4g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

Did I miss any configuration?

Spark scheduler should keep the pod and resource reservation mapping up to date

When pods die or exit, the associated resource reservations currently stay assigned to the pod. It would be more desirable for the Spark scheduler to behave like a controller and unassign the resource reservation in these cases, so that other consumers of resource reservations get a consistent view of the world.

Separate out failure-fits from failure-fifo

Currently we bucket FIFO-related failures (for example, when there are younger drivers present that we need to schedule first) into the failure-fit category for both logs and metrics.

We may want to separate this out into its own bucket (such as failure-fifo).

[Feature] Extending the scheduler-extender to support non-Spark workload

Hi Team,

Now that we have the scheduling framework that supports multiple profiles, why not also add support for non-Spark pods to utilize the scheduler extender's reservation objects, so that pods from all workloads/profiles can use their reservations to maintain a universal view of cluster resources and achieve multi-tenancy?

In short, all the different scheduler profiles would share the extender and make use of the resource reservation feature via the resourceReservations CRD objects. Users could then use the same scheduler binary to get maximum utilization of their cluster resources.

We can fork away from the repository to add this feature or have it implemented as a pluggable configuration.

Automatically deleting resourcereservation object when spark-driver completed

The resourcereservation object reserves space on the nodes for the correct launch of the driver and all of its executors. The resourcereservation object continues to exist even after the driver's pod has completed, so while the spark-scheduler is running, the resourcereservation object still affects scheduling decisions.
In the case of the default scheduler, the spark-driver pod goes into the "Completed" status when its task is done. After that, the spark-driver does not use any CPU or memory resources and allows other Spark tasks to run.

Executor pod scheduling stuck despite enough resources

When I submit a batch of Spark jobs, they do not run as expected.
Some executor pods are stuck, even though there are enough resources on each node for them to run.
This puzzles me, and I wonder whether there is something I have not considered.
P.S. I run these Spark jobs like the example, and a single job runs fine.

Compatibility with K8S 1.25

spark-scheduler is using an outdated version of client-go which calls api/policy/v1beta1/PodDisruptionBudget. This is a breaking change in K8S 1.25 and stops the scheduler from functioning.

https://kubernetes.io/docs/reference/using-api/deprecation-guide/#poddisruptionbudget-v125

Here is what the scheduler logs look like

kube-scheduler I0314 18:39:34.199516       1 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
kube-scheduler E0314 18:39:34.201584       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource

I've created PR #241 to fix this issue by bumping the K8S dependencies to work on 1.25 and above. I welcome any feedback or suggestions to get it merged.

Configuration options from install.yml do not seem to work

Hello team,

I added the binpack: tightly-pack config to install.yml and rebuilt the image. The newly built image still does not seem to support this binpacking option. The same happens with fifo: true.

I currently have a 4-node cluster, with each node containing 4 allocatable cores and 2g of allocatable memory. I submitted my Spark job with the following configurations:

spark.driver.memory=1g
spark.executor.memory=512m

spark.driver.cores=1
spark.executor.cores=1

spark.executor.instances=2

Ideally, with binpack: tightly-pack, all the executors should be scheduled onto the same node, which does not seem to happen.

Scheduled pods stuck in Pending state

I'm attempting to run spark-thriftserver using this scheduler extender. If you're not familiar, spark-thriftserver runs in client mode (local driver, remote executors). The Thrift server exposes a JDBC connection which receives queries and turns them into Spark jobs.

The command to run this looks like:

/opt/spark/sbin/start-thriftserver.sh \
  --conf spark.master=k8s://https://my-EKS-server:443 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.kubernetes.container.image=my-image \
  --conf spark.kubernetes.file.upload.path=file:///tmp \
  --conf spark.app.name=sparkthriftserver \
  --conf spark.kubernetes.executor.podTemplateFile=/path/to/executor.template \
  --verbose

spark-defaults.conf looks like:

spark.sql.catalogImplementation hive
spark.kubernetes.allocation.batch.size 5
spark.shuffle.service.enabled true
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.executorIdleTimeout 30s
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 50

So far, I've applied the extender.yaml file as-is, without any modifications. This instantiates two new pods under the spark namespace, both in the Running state, with names starting with "spark-scheduler-". describe pod XXX yields some troubling information about them:

Events:
  Type     Reason     Age                From                                             Message
  ----     ------     ----               ----                                             -------
  Normal   Scheduled  15m                fargate-scheduler                                Successfully assigned spark/spark-scheduler-7bbb5bb979-fhktn to fargate-ip-XXX-XXX-XXX-XXX.ec2.internal
  Normal   Pulling    15m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Pulling image "gcr.io/google_containers/hyperkube:v1.13.1"
  Normal   Pulled     15m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Successfully pulled image "gcr.io/google_containers/hyperkube:v1.13.1"
  Normal   Created    14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Created container kube-scheduler
  Normal   Started    14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Started container kube-scheduler
  Normal   Pulling    14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Pulling image "palantirtechnologies/spark-scheduler:latest"
  Normal   Pulled     14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Successfully pulled image "palantirtechnologies/spark-scheduler:latest"
  Warning  Unhealthy  14m (x3 over 14m)  kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Liveness probe failed: Get https://XXX.XXX.XXX.XXX:8484/spark-scheduler/status/liveness: dial tcp XXX.XXX.XXX.XXX
:8484: connect: connection refused
  Normal   Killing    14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Container spark-scheduler-extender failed liveness probe, will be restarted
  Normal   Created    14m (x2 over 14m)  kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Created container spark-scheduler-extender
  Normal   Pulled     14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Container image "palantirtechnologies/spark-scheduler:latest" already present on machine
  Normal   Started    14m (x2 over 14m)  kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Started container spark-scheduler-extender
  Warning  Unhealthy  14m (x4 over 14m)  kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Readiness probe failed: Get https://XXX.XXX.XXX.XXX:8484/spark-scheduler/status/readiness: dial tcp XXX.XXX.XXX.XXX:8484: connect: connection refused
  Warning  Unhealthy  14m                kubelet, fargate-ip-XXX-XXX-XXX-XXX.ec2.internal  Liveness probe failed: HTTP probe failed with statuscode: 503

When I attempt to run the driver above (which launches properly), the driver immediately requests a single executor pod at startup because spark.dynamicAllocation.minExecutors is set to 1. The pod itself remains indefinitely in a Pending state.
describe pod XXX seems to suggest that no nodes satisfy the pod's scheduling criteria:

Events:
  Type     Reason            Age                   From             Message
  ----     ------            ----                  ----             -------
  Warning  FailedScheduling  3m49s (x37 over 14m)  spark-scheduler  0/4 nodes are available: 4 Insufficient pods.

What I'm having trouble figuring out is:

  1. What exactly are the criteria that cause every node to be insufficient? I am not making use of any instance-group labels, nor any custom labels. All the nodes accept the spark namespace. Sorry to ask, but I am struggling to find the proper steps to narrow down the issue.
  2. Do the "liveness" error messages above signify that the issue resides with an unhealthy scheduler? I can SSH into the two scheduler instances if needed, but I am not sure which logs to take a closer look at once I open a shell.

If it helps, this setup uses AWS Fargate as the compute resource behind Kubernetes, but based on what I know so far that shouldn't be an issue.

Bug in the examples/extender.yml example

The example did not work for me on the first try: the spark service account referenced in the ClusterRoleBinding is in the default namespace, but we create it in the spark namespace, so the scheduler's service account lacked the required cluster-admin role and the deployment failed every time due to missing permissions. I would open a pull request to change the example, but wanted to check whether I was missing something. The example works for me now that I have changed the service account's namespace in the ClusterRoleBinding to spark. Other than that, the example works great; thank you for the great tool.
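For reference, the fix described above amounts to pointing the ClusterRoleBinding subject at the spark namespace; a rough sketch, with object and service account names assumed rather than taken from the actual example file:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-scheduler          # assumed name; use the one from examples/extender.yml
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: spark                    # assumed service account name from the example
  namespace: spark               # was default; must match the namespace where the account is created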

[METRICS] - No information on the README.md about a basic metric client setup

Hello, guys!

I'm going through this code and even a non-Go guy like me can quickly realise how well-crafted it is! Great work!

I've been able to run the extender in minikube pretty easily using the instructions provided in the readme, but I'd like more insight into how the extender performs. I see that there is already a /metrics directory in the project where a lot of metrics seem to be incremented, but I'm still having a hard time understanding how I can get those metrics out of the pod so I can see them in a dashboard.

I tracked down the dependencies and found that you built a new library on top of go-metrics, and from reading that repo's readme I see that I need to set up a client that exposes those metrics to the desired metrics engine. But this client takes a metrics Registry to expose, and I'm not really able to figure out where in the code that Registry is.

I understand that the metrics setup was probably left undocumented because it uses go-metrics, which I suppose is a metrics standard in the Go world, or because it is considered out of scope for this project, but I really think it would be helpful to include at least a minimal metrics setup (with Prometheus, for example). I'd be happy to open a PR with this setup once I figure it out.

What do you think?

Thanks,
Cosmin

Autoscaling

Many thanks for this project!

I had read that there’s an autoscaler associated with this project. Is that available anywhere?

Pods go into Pending state intermittently; a scheduler restart solves the issue

We are facing an issue in our environment where Spark pods go into the Pending state intermittently. We have to restart the Spark scheduler pods to fix the issue.
We are seeing the errors below in the spark-scheduler-extender logs; we are not sure whether this is related to the issue.
Looking for some pointers to explain this odd behaviour.

k8s version: v1.23
spark-scheduler version: v0.58.0

"stacktrace": "error when looking for already bound reservations\nfailed to get resource reservations podName:agg-spark-350zvn28en0u-b29f74875b02ba23-exec-1, podNamespace:prod01\n\ngithub.com/palantir/k8s-spark-scheduler/internal/extender.(*ResourceReservationManager).FindAlreadyBoundReservationNode\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/internal/extender/resourcereservations.go:141\ngithub.com/palantir/k8s-spark-scheduler/internal/extender.(*SparkSchedulerExtender).selectExecutorNode\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/internal/extender/resource.go:382\ngithub.com/palantir/k8s-spark-scheduler/internal/extender.(*SparkSchedulerExtender).selectNode\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/internal/extender/resource.go:210\ngithub.com/palantir/k8s-spark-scheduler/internal/extender.(*SparkSchedulerExtender).Predicate\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/internal/extender/resource.go:151\ngithub.com/palantir/k8s-spark-scheduler/cmd.registerExtenderEndpoints.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/cmd/endpoints.go:36\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2109\ngithub.com/palantir/witchcraft-go-server/wrouter.(*rootRouter).Register.func1.1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:136\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRouteLogTraceSpan.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/route.go:107\ngithub.com/palantir/witchcraft-go-server/wrouter.(*routeRequestHandlerWithNext).HandleRequest\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:150\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRouteRequestLog.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/route.go:32\ngithub.com/palantir/witchcraft-go-server/wrouter.(*routeRequestHandlerWithNext).HandleRequest\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:150\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRequestMetricRequestMeter.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/request.go:168\ngithub.com/palantir/witchcraft-go-server/wrouter.(*routeRequestHandlerWithNext).HandleRequest\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:150\ngithub.com/palantir/witchcraft-go-server/wrouter.(*rootRouter).Register.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:139\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2109\ngithub.com/julienschmidt/httprouter.(*Router).Handler.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/julienschmidt/httprouter/router.go:275\ngithub.com/julienschmidt/httprouter.(*Router).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/
github.com/julienschmidt/httprouter/router.go:387\ngithub.com/palantir/witchcraft-go-server/wrouter/whttprouter.(*router).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/whttprouter/routerimpl.go:71\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRequestExtractIDs.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/request.go:139\ngithub.com/palantir/witchcraft-go-server/wrouter.(*requestHandlerWithNext).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:250\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRequestContextLoggers.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/request.go:73\ngithub.com/palantir/witchcraft-go-server/wrouter.(*requestHandlerWithNext).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:250\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRequestContextMetricsRegistry.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/request.go:84\ngithub.com/palantir/witchcraft-go-server/wrouter.(*requestHandlerWithNext).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:250\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRequestPanicRecovery.func1.1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/request.go:42\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/negroni.(*Recovery).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/negroni/recovery.go:193\ngithub.com/palantir/witchcraft-go-server/witchcraft/internal/middleware.NewRequestPanicRecovery.func1\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/witchcraft/internal/middleware/request.go:41\ngithub.com/palantir/witchcraft-go-server/wrouter.(*requestHandlerWithNext).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:250\ngithub.com/palantir/witchcraft-go-server/wrouter.(*rootRouter).ServeHTTP\n\t/home/circleci/go/src/github.com/palantir/k8s-spark-scheduler/vendor/github.com/palantir/witchcraft-go-server/wrouter/router_root.go:103\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.initALPNRequest.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:3556\nnet/http.(*http2serverConn).runHandler\n\t/usr/local/go/src/net/http/h2_bundle.go:5910",
