
Rundeck Kubernetes Plugin

This project provides integration between Rundeck and Kubernetes. It contains a number of providers that let job writers use steps to call various Kubernetes API actions.

Use cases:

  • Create Kubernetes Deployments, Services and Jobs
  • Run ad hoc command executions inside Kubernetes containers.

Requirements

These plugins require the Python Kubernetes SDK to be installed on the Rundeck server. For example, you can install it with pip install kubernetes.

The plugin requires version 11 of the Python Kubernetes client library. You can confirm the installed version with python -m pip list | grep kubernetes.
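
You can also check the version from Python itself; a quick sanity check (kubernetes.__version__ is provided by the client package):

import kubernetes

# The plugin expects major version 11 of the client library.
print(kubernetes.__version__)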

Further information here: https://github.com/kubernetes-client/python.

Authentication for Tectonic Environments

There is an open pull request for the Kubernetes Python SDK to support authenticating with the Kubernetes API using OIDC (which is used by Tectonic).

For now, you can install the Kubernetes Python SDK from this repository to get OIDC support:

git clone --recursive https://github.com/ltamaster/python
cd python
python setup.py install

Build and Install

Run gradle build to build the zip file. Then copy the zip file to the $RDECK_BASE/libext folder.

Authentication

By default, if no authentication parameters are set, the plugin reads the authentication settings from the ~/.kube/config file.

Otherwise, you can set the following parameters (a connection sketch follows the list):

  • Kubernetes Config File Path: a custom path for the kubernetes config file
  • Cluster URL: Kubernetes Cluster URL
  • Kubernetes API Token: Token to connect to the kubernetes API
  • Verify SSL: Enable/Disable the SSL verification
  • SSL Certificate Path: SSL Certificate Path for SSL connections
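
For reference, the fallback logic behaves roughly like the sketch below. This is an illustration only, not the plugin's exact common.py; apart from RD_CONFIG_CONFIG_FILE (which does appear in the plugin), the environment variable names are assumptions:

import os
from kubernetes import client, config

def connect():
    # Hypothetical environment variables mirroring the plugin's settings.
    config_file = os.environ.get("RD_CONFIG_CONFIG_FILE")
    url = os.environ.get("RD_CONFIG_URL")
    token = os.environ.get("RD_CONFIG_TOKEN")
    verify_ssl = os.environ.get("RD_CONFIG_VERIFY_SSL") == "true"
    ssl_ca_cert = os.environ.get("RD_CONFIG_SSL_CERT_PATH")

    if url and token:
        # Explicit cluster URL + API token.
        configuration = client.Configuration()
        configuration.host = url
        configuration.api_key = {"authorization": "Bearer " + token}
        configuration.verify_ssl = verify_ssl
        if ssl_ca_cert:
            configuration.ssl_ca_cert = ssl_ca_cert
        client.Configuration.set_default(configuration)
    else:
        # Fall back to a kubeconfig file (custom path or ~/.kube/config).
        config.load_kube_config(config_file=config_file)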

Resource Model

This plugin retrieves container pods from Kubernetes as Rundeck nodes.

  • Default attributes: List of key=value pairs, example: username=root

  • Custom Mapping: custom attribute mapping added to the Rundeck nodes, for example: nodename.selector=default:Name,hostname.selector=default:pod_id

  • Tags: List of tags. You can add static and custom tags, for example: tag.selector=default:image, tag.selector=default:status, kubernetes

  • Namespace: retrieve only pods from the desired namespace (an empty value lists pods from all namespaces). For example: default will list only the pods in the "default" namespace.

  • Field Selector: filter the list of pods using the API response's fields (see the SDK docs for further information). For example: metadata.uid=123 will show only the pod with uid 123.

  • Just Running Pods?: only list pods in the Running state

This plugin generates a list of default pod attributes so you can reference them in the plugin's custom config parameters (e.g. default:status, default:image). The default available attributes are:

default:pod_id: Pod ID
default:host_id: Host ID
default:started_at: Started At
default:name: Pod Name
default:namespace: Pod Namespace
default:labels: Deployment Labels
default:image: Image
default:status: Pod Status
default:status_message: Pod Status Message
default:container_id: Container ID
default:container_name: Container Name

For example, if you want to add a custom tag for the container's image name, use tag.selector=default:image on the Tags config attribute. Or if you want to define the hostname node attribute using the POD ID, use hostname.selector=default:pod_id on the Custom Mapping config attribute.
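
To make the selectors concrete, here is a rough sketch of how pods might be mapped to Rundeck nodes with the Python client. This is an illustration of the idea, not the plugin's pods-resource-model.py, and the attribute mappings are assumptions:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Example filters mirroring the plugin's options: namespace "default",
# only running pods (both values are assumptions for illustration).
pods = v1.list_namespaced_pod("default", field_selector="status.phase=Running")

for pod in pods.items:
    defaults = {
        "name": pod.metadata.name,              # default:name
        "namespace": pod.metadata.namespace,    # default:namespace
        "image": pod.spec.containers[0].image,  # default:image
        "status": pod.status.phase,             # default:status
        "pod_id": pod.status.pod_ip,            # default:pod_id (assumed mapping)
    }
    # hostname.selector=default:pod_id would map like this:
    node = {"nodename": defaults["name"], "hostname": defaults["pod_id"]}
    print(node)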

Node Executor

This plugin runs commands/scripts on a container pod from Rundeck.

Configurations:

  • Shell: Shell used on the POD to run the command. Default value: /bin/bash
  • Debug?: Write debug messages to stderr

File Copier

This plugin copies files from Rundeck to a pod. For now, only script and text files can be copied to a remote pod.

Configurations:

  • Shell: Shell used on the POD to run the command. Default value: /bin/bash
  • Debug?: Write debug messages to stderr

Workflow Steps

The following workflow step plugins allow you to deploy/undeploy applications and run/re-run jobs on Kubernetes. For example, you can create deployments, services, ingresses, etc., and update or delete those Kubernetes resources.

Create / Update / Delete / Check / Wait a Deployment

These steps manage Deployment resources: you can create, update, or delete a deployment and check its status.

There is also a step to wait for a deployment to become ready after it is created. This requires that the deployment define a readiness probe (further information here).

Create / Update / Delete Services

These steps manage Service resources: you can create, update, or delete a service.

Create / Delete / Re-run Jobs

These steps manage Job resources: you can create or delete a Job.

You can also re-run jobs that already exist. Kubernetes doesn't allow re-running a Job, so this step gets the job definition, deletes the job, and creates it again.
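
A minimal sketch of that delete-and-recreate flow with the Python client (the names and the cleanup of server-generated fields are assumptions, not the plugin's exact job re-run code):

import time
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

name, namespace = "helloworld", "default"  # example values

# 1. Fetch the existing job definition.
job = batch.read_namespaced_job(name=name, namespace=namespace)

# 2. Strip server-generated and immutable fields so the spec can be reposted.
job.metadata.resource_version = None
job.metadata.uid = None
job.status = None
# The controller-uid selector/labels are set by Kubernetes and must go too.
job.spec.selector = None
job.spec.template.metadata.labels = None

# 3. Delete the old job (and its pods) ...
batch.delete_namespaced_job(name=name, namespace=namespace,
                            propagation_policy="Foreground")
time.sleep(5)  # naive wait for the deletion to complete

# 4. ... and create it again.
batch.create_namespaced_job(namespace=namespace, body=job)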

Generic Steps

These steps provide a generic way to create/delete resources on Kubernetes using a YAML script (a sketch of the underlying call follows the list). The resources that this plugin can create are:

  • Deployment
  • Service
  • Ingress
  • Job
  • StorageClass
  • PersistentVolume
  • PersistentVolumeClaim
  • Secret
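
Under the hood this kind of step can lean on the SDK's create_from_yaml helper (the plugin ships a create-from-yaml.py script); a minimal sketch, assuming a local manifest file:

from kubernetes import client, config, utils

config.load_kube_config()
k8s_client = client.ApiClient()

# Creates whatever resources the manifest defines (Deployment, Service, ...).
utils.create_from_yaml(k8s_client, "manifest.yaml")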


Issues

Support configMaps

Why isn't there any support in this plugin to create/delete ConfigMaps?
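
For reference, creating and deleting ConfigMaps directly with the Python client is straightforward; a minimal sketch of what such a step could wrap (the names and data are example values):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="example-config"),
    data={"app.properties": "debug=false"},
)
v1.create_namespaced_config_map(namespace="default", body=cm)
# Deletion would use v1.delete_namespaced_config_map(name, namespace).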

Wait For Job

I see the Kubernetes / Deployment / Waitfor task.
I have assumed this can't be used to wait for a just-created Job to complete; is that correct?

Would a feature Kubernetes / Job / Waitfor that waits for a job to complete make sense for general usage?
A bonus would be also reporting the log to Rundeck.
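
Such a step could simply poll the job status until it finishes; a rough sketch with the Python client (a hypothetical Waitfor, not an existing step):

import time
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

def wait_for_job(name, namespace, poll_seconds=10):
    """Block until the job succeeds (True) or fails (False)."""
    while True:
        status = batch.read_namespaced_job_status(name, namespace).status
        if status.succeeded:
            return True
        if status.failed:
            return False
        time.sleep(poll_seconds)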

Support private registry

In order to use a private Docker registry, you need to specify ImagePullSecrets.
Support for this is needed in both the job and deployment steps.
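
With the Python client this amounts to setting image_pull_secrets on the pod spec; a sketch (the secret name "regcred" and the image are example values):

from kubernetes import client

pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="app",
                                   image="registry.example.com/app:1.0")],
    # References a docker-registry secret created beforehand.
    image_pull_secrets=[client.V1LocalObjectReference(name="regcred")],
)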

Inline script step fails on Kubernetes pod node

When I try to run a 'script' node step on a pod I get

DEBUG: kubernetes-plugin: config file
--
18:42:08 | DEBUG: kubernetes-plugin: /home/rundeck/.kube/config-rd-K1
18:42:08 | DEBUG: kubernetes-plugin: -------------------
18:42:08 | DEBUG: kubernetes-plugin: getting settings from file /home/rundeck/.kube/config-rd-K1
18:42:08 | DEBUG: kubernetes-model-source: --------------------------
18:42:08 | DEBUG: kubernetes-model-source: Pod Name:  rundeck-worker-replicaset-c57jv
18:42:08 | DEBUG: kubernetes-model-source: Namespace: rundeck-workers
18:42:08 | DEBUG: kubernetes-model-source: Container: rundeck-worker
18:42:08 | DEBUG: kubernetes-model-source: --------------------------
18:42:08 | DEBUG: kubernetes-model-source: Copying file from /home/rundeck/var/tmp/dispatch2691833068787593122.tmp to /tmp/72-77-rundeck-worker-replicaset-c57jv-rundeck-worker-dispatch-script.tmp.sh
18:42:08 | Traceback (most recent call last):
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/ws_client.py", line 249, in websocket_call
18:42:08 | client = WSClient(configuration, get_websocket_url(url), headers)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/ws_client.py", line 72, in __init__
18:42:08 | self.sock.connect(url, header=header)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/websocket/_core.py", line 226, in connect
18:42:08 | self.handshake_response = handshake(self.sock, *addrs, **options)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/websocket/_handshake.py", line 79, in handshake
18:42:08 | status, resp = _get_resp_headers(sock)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/websocket/_handshake.py", line 160, in _get_resp_headers
18:42:08 | raise WebSocketBadStatusException("Handshake status %d %s", status, status_message, resp_headers)
18:42:08 | websocket._exceptions.WebSocketBadStatusException: Handshake status 403 Forbidden
18:42:08 |  
18:42:08 | During handling of the above exception, another exception occurred:
18:42:08 |  
18:42:08 | Traceback (most recent call last):
18:42:08 | File "/home/rundeck/libext/cache/kubernetes-plugin-1.10.1-SNAPSHOT/pods-copy-file.py", line 69, in <module>
18:42:08 | main()
18:42:08 | File "/home/rundeck/libext/cache/kubernetes-plugin-1.10.1-SNAPSHOT/pods-copy-file.py", line 65, in main
18:42:08 | common.copy_file(name, container, source_file, destination_path, destination_file_name)
18:42:08 | File "/home/rundeck/libext/cache/kubernetes-plugin-1.10.1-SNAPSHOT/common.py", line 374, in copy_file
18:42:08 | _preload_content=False)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/stream.py", line 32, in stream
18:42:08 | return func(*args, **kwargs)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/apis/core_v1_api.py", line 835, in connect_get_namespaced_pod_exec
18:42:08 | (data) = self.connect_get_namespaced_pod_exec_with_http_info(name, namespace, **kwargs)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/apis/core_v1_api.py", line 935, in connect_get_namespaced_pod_exec_with_http_info
18:42:08 | collection_formats=collection_formats)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api_client.py", line 321, in call_api
18:42:08 | _return_http_data_only, collection_formats, _preload_content, _request_timeout)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api_client.py", line 155, in __call_api
18:42:08 | _request_timeout=_request_timeout)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/stream.py", line 27, in _intercept_request_call
18:42:08 | return ws_client.websocket_call(config, *args, **kwargs)
18:42:08 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/ws_client.py", line 255, in websocket_call
18:42:08 | raise ApiException(status=0, reason=str(e))
18:42:08 | kubernetes.client.rest.ApiException: (0)
18:42:08 | Reason: Handshake status 403 Forbidden

'Command' node steps work fine on those pods with the same settings; this is just another step in the same job.

I have set up the project to use "Kubernetes / Pods / File Copier" with the very same settings I have for 'Kubernetes / Pods / Node Executor'.

Openshift BuildConfig support

Hi,
We had a need in our organization to add support for creating BuildConfigs in OpenShift using create_from_yaml.py. I'd be willing to contribute it back upstream, along with any changes needed to fit the upstream code. Would you be interested in a PR?

How to connect to multiple k8s clusters?

Hi

I want to connect to multiple Kubernetes clusters to run jobs on.
I can add more than one Kubernetes / Pods / Resource Model to use nodes from.
But how should I configure the Default Node Executor and Default File Copier to make this work across many clusters?

Thanks

client.Configuration.set_default AttributeError: 'function' object has no attribute 'set_default'

When attempting to run a Kubernetes / Job / Create step with the following project and job authentication settings:

project:

project.nodeCache.enabled=false
project.plugin.NodeExecutor.Kubernetes-node-executor.config_file=/var/lib/rundeck/.kube/config
project.plugin.NodeExecutor.Kubernetes-node-executor.shell=/bin/bash
project.plugin.NodeExecutor.Kubernetes-node-executor.token=keys/k8s/rundeck-token-nwkmp
project.plugin.NodeExecutor.Kubernetes-node-executor.url=k8s.mycluster.com

job:

- name: CONFIG_PATH
  required: true
  value: s3://pathtofile.yaml
scheduleEnabled: true
sequence:
  commands:
  - configuration:
      api_version: batch/v1
      container_image: 1234.mycontainer
      container_name: my-data-export
      debug: 'true'
      environments: CONFIG_PATH=${option.CONFIG_PATH}
      image_pull_policy: Always
      name: my-data-export-service
      namespace: default
      token: keys/k8s/rundeck-token-nwkmp
      url: k8s.mycluster.com
      verify_ssl: 'false'
    nodeStep: true
    type: Kubernetes-Create-Job
  keepgoing: true
  strategy: node-first

it fails with an error that points to the client configuration set by the plugin:

[Kubernetes-Create-Job] executing: [python, -u, /var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/job-create.py]

DEBUG: kubernetes-model-source: Log level configured for DEBUG
Traceback (most recent call last):
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/job-create.py", line 208, in <module>
    main()
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/job-create.py", line 132, in main
    common.connect()
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/common.py", line 60, in connect
    client.Configuration.set_default(configuration)
AttributeError: 'function' object has no attribute 'set_default'
[Kubernetes-Create-Job]: result code: 1
Failed: NonZeroResultCode: Script result code was: 1

Create deployment = 404 not found

Hello guys,
I have some trouble creating a deployment with the node step.
I can create pods and services, but I get a 404 error when I try to create a deployment.

Have you ever seen that?

Thanks for your help

missing modules for debian

Hi @ltamaster,
we are using the latest plugin on Debian (Stretch) and struggling with missing modules like

File "/opt/rundeck/libext/cache/kubernetes-plugin-1.0.9/create-from-yaml.py", line 6, in

import common
File "/opt/rundeck/libext/cache/kubernetes-plugin-1.0.9/common.py", line 11, in
from kubernetes import client, config
File "/usr/local/lib/python3.7/dist-packages/kubernetes/init.py", line 29, in
from _file_cache import _FileCache
ModuleNotFoundError: No module named '_file_cache'

What prerequisites need to be installed on Debian?
python3-yaml is already installed.
Other attempts with pip install ... were not successful.

create-from-yaml.py

Hi,
I got an error like this:
File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.7/create-from-yaml.py", line 114, in main()
File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.7/create-from-yaml.py", line 96, in main
print("Secret created '%s'" % str(resp.status))
AttributeError: 'V1Secret' object has no attribute 'status'

Evicted PODs cause the PODs resource model to fail

Evicted PODs do not have a status, so the PODs resource model is not able to iterate over them. This causes it to fail and return an old cached set of PODs.

for info in pod.status.conditions:

Traceback (most recent call last):
  File "/home/rundeck/libext/cache/kubernetes-plugin-1.0.13-SNAPSHOT/pods-resource-model.py", line 234, in <module>
    main()
  File "/home/rundeck/libext/cache/kubernetes-plugin-1.0.13-SNAPSHOT/pods-resource-model.py", line 219, in main
    boEmoticon
  File "/home/rundeck/libext/cache/kubernetes-plugin-1.0.13-SNAPSHOT/pods-resource-model.py", line 77, in nodeCollectData
    for info in pod.status.conditions:
TypeError: 'NoneType' object is not iterable

[2020-08-07 19:31:14,058] ERROR resources.ExceptionCatchingResourceModelSource ExceptionCatchingResourceModelSource - [ResourceModelSource: 2.source (kubernetes-resource-model), project: data-platform]
com.dtolabs.rundeck.core.resources.ResourceModelSourceException: failed to execute: /home/rundeck/libext/cache/kubernetes-plugin-1.0.13-SNAPSHOT/pods-resource-model.py: Script execution failed with result: 1
    at com.dtolabs.rundeck.core.resources.ScriptPluginResourceModelSource.getNodes(ScriptPluginResourceModelSource.java:81)
    at com.dtolabs.rundeck.core.resources.ExceptionCatchingResourceModelSource.getNodes(ExceptionCatchingResourceModelSource.java:57)
    at com.dtolabs.rundeck.core.resources.DelegateResourceModelSource.getNodes(DelegateResourceModelSource.java:35)
    at com.dtolabs.rundeck.core.common.ProjectNodeSupport.getNodeSet(ProjectNodeSupport.java:136)
    at rundeck.controllers.FrameworkController.nodesQueryAjax(FrameworkController.groovy:675)
    ...
Caused by: com.dtolabs.rundeck.core.resources.ResourceModelSourceException: Script execution failed with result: 1
    at com.dtolabs.rundeck.core.resources.ScriptResourceUtil.executeScript(ScriptResourceUtil.java:147)
    at com.dtolabs.rundeck.core.resources.ScriptPluginResourceModelSource.getNodes(ScriptPluginResourceModelSource.java:67)
    ... 116 more
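
A defensive guard in the resource model would tolerate evicted pods; a sketch of the idea (not the plugin's actual fix):

def pod_conditions(pod):
    """Return the pod's conditions, tolerating evicted pods whose
    status or conditions may be None."""
    if pod.status and pod.status.conditions:
        return pod.status.conditions
    return []

# usage inside nodeCollectData:
# for info in pod_conditions(pod): ...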

Testing framework

It would be nice to have some sort of automated testing framework for this plugin. A quick scan surfaced a few options.

Have you thought about any testing frameworks for this plugin? Are there any you would support and/or incorporate if I was able to contribute some code? Are there any you would refuse to support or incorporate?

Error management. Possibility to display Kubernetes' errors when they occur.

Hi team

Sometimes, when errors occur, we receive errors from Java or a specific library. It would be helpful to also get the errors thrown by Kubernetes to know what is going on. For instance, when the token isn't correct, the error displayed in the GUI isn't explicit, as it shows the following message:

The Node Source had an error:
failed to execute: /home/rundeck/libext/cache/kubernetes-plugin-2.0.4-SNAPSHOT/pods-resource-model.py: Script execution failed with result: 1

It would be more helpful to display the authentication error from Kubernetes in this example.

Using kubeconfig in Workflow steps

I have been trying to use a kubeconfig file in Workflow steps to authenticate against my Kubernetes cluster. Looking through the code, it appears the functionality has been implemented ($RD_CONFIG_CONFIG_FILE), but the "Kubernetes Config File Path" text field to enter it doesn't exist.

I've submitted a PR to include "Kubernetes Config File Path" in each Workflow step configuration.

Create job with Persistent Volume

When I create a Job with a Persistent Volume, the plugin tries to take the data for it from the Resource Requests field (version 1.0.7-SNAPSHOT).
I see the following lines in the log:

'volume_mounts': [{'mount_path': '1,memory=512Mi',
                'mount_propagation': None,
                'name': 'cpu',
                'read_only': None,
                'sub_path': None}],

ERROR: kubernetes-model-source: Exception creating job: (422)
Reason: Unprocessable Entity

In the plugin source, file job-create.py:

    if "volumes" in data:
        volumes_array = data["resources_requests"].splitlines()
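
Presumably that should read from the volumes field instead; the obvious one-line fix (inferred from the surrounding code, untested):

    if "volumes" in data:
        volumes_array = data["volumes"].splitlines()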

Cannot connect to OC Cluster but works well from bash

Hi Team,

We are facing an OC connection issue from the GUI, but oc login in bash works well.
Tested with Rundeck from 3.2.8 to 3.3.4 and Kube plugin 1.0.5 to 2.0.1.
Configurations tested: 'Kubernetes Config File', 'Cluster URL + Token', and both.

GUI error:


The Node Source had an error:
failed to execute: /home/rundeck/libext/cache/kubernetes-plugin-2.0.1/pods-resource-model.py: Script execution failed with result: 1

BASH works well:

root@rundeck-7cc95f4b77-9dxs2:/home/rundeck# oc login -u apikey -p  xxxxxxxx  --server=https://ce.eu.xxxxx.com:443
Login successful.

You have access to 78 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "dev-stage".

Error log:

Traceback (most recent call last):
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.1/pods-resource-model.py", line 262, in <module>
    main()
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.1/pods-resource-model.py", line 232, in main
    watch=False,
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api/core_v1_api.py", line 16864, in list_pod_for_all_namespaces
    return self.list_pod_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api/core_v1_api.py", line 16981, in list_pod_for_all_namespaces_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api_client.py", line 353, in call_api
    _preload_content, _request_timeout, _host)
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api_client.py", line 184, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/api_client.py", line 377, in request
    headers=headers)
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/rest.py", line 243, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/client/rest.py", line 216, in request
    headers=headers)
  File "/usr/local/lib/python3.5/dist-packages/urllib3/request.py", line 76, in request
    method, url, fields=fields, headers=headers, **urlopen_kw
  File "/usr/local/lib/python3.5/dist-packages/urllib3/request.py", line 97, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python3.5/dist-packages/urllib3/poolmanager.py", line 336, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 659, in urlopen
    conn = self._get_conn(timeout=pool_timeout)
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 279, in _get_conn
    return conn or self._new_conn()
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py", line 238, in _new_conn
    **self.conn_kw
  File "/usr/local/lib/python3.5/dist-packages/urllib3/connection.py", line 115, in __init__
    _HTTPConnection.__init__(self, *args, **kw)
TypeError: __init__() got an unexpected keyword argument 'assert_hostname'

Next release

Hi

When is the next release scheduled? There are some bug fixes in master (like #40) that I need in my setup.
Is master stable? If it is hard to make a release, can I build the plugin and use it in production? Are there new bugs since the last release?

Thank you

ImportError: cannot import name UnrewindableBodyError

I am attempting to create a pod which runs busybox and am getting the following error:

Traceback (most recent call last):
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.13-SNAPSHOT/pods-create.py", line 6, in <module>
    import common
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.13-SNAPSHOT/common.py", line 11, in <module>
    from kubernetes import client, config
  File "/usr/lib/python2.7/site-packages/kubernetes/__init__.py", line 19, in <module>
    import kubernetes.client
  File "/usr/lib/python2.7/site-packages/kubernetes/client/__init__.py", line 619, in <module>
    from .apis.admissionregistration_api import AdmissionregistrationApi
  File "/usr/lib/python2.7/site-packages/kubernetes/client/apis/__init__.py", line 4, in <module>
    from .admissionregistration_api import AdmissionregistrationApi
  File "/usr/lib/python2.7/site-packages/kubernetes/client/apis/admissionregistration_api.py", line 23, in <module>
    from ..api_client import ApiClient
  File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 28, in <module>
    from .configuration import Configuration
  File "/usr/lib/python2.7/site-packages/kubernetes/client/configuration.py", line 16, in <module>
    import urllib3
  File "/usr/lib/python2.7/site-packages/urllib3/__init__.py", line 10, in <module>
    from .connectionpool import (
  File "/usr/lib/python2.7/site-packages/urllib3/connectionpool.py", line 31, in <module>
    from .connection import (
  File "/usr/lib/python2.7/site-packages/urllib3/connection.py", line 45, in <module>
    from .util.ssl_ import (
  File "/usr/lib/python2.7/site-packages/urllib3/util/__init__.py", line 5, in <module>
    from .request import make_headers
  File "/usr/lib/python2.7/site-packages/urllib3/util/request.py", line 5, in <module>
    from ..exceptions import UnrewindableBodyError
ImportError: cannot import name UnrewindableBodyError
Failed: NonZeroResultCode: Script result code was: 1

Default container name when reading pod logs seems unnecessary, or should be omitted conditionally

When I try to fetch logs from Rundeck using the Kubernetes plugin, I get the error below.

ERROR: kubernetes-model-source: Exception error creating: (400)
20:05:58 | Reason: Bad Request
20:05:58 | HTTP response headers: HTTPHeaderDict({'Content-Length': '171', 'Access-Control-Allow-Headers': 'X-Auth-Token,Content-Type,Authorization', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains', 'Server': 'nginx/1.15.9', 'Connection': 'keep-alive', 'Access-Control-Allow-Credentials': 'true', 'Date': 'Mon, 06 Jan 2020 11:05:58 GMT', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET, PUT, POST, DELETE, PATCH, OPTIONS', 'Content-Type': 'application/json'})
20:05:58 | HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container None is not valid for pod app-xxxxxxxxxx-xxxxx","reason":"BadRequest","code":400}
20:05:58

I've investigated the cause and found that os.environ.get('RD_NODE_DEFAULT_CONTAINER_NAME') returns None and the Kubernetes client uses it to select the container inside the pod.

Below code is a current master branch code.

data = {}
data["name"] = os.environ.get('RD_CONFIG_NAME')
data["namespace"] = os.environ.get('RD_CONFIG_NAMESPACE')
data["container"] = os.environ.get('RD_NODE_DEFAULT_CONTAINER_NAME')

common.connect()

try:
    v1 = client.CoreV1Api()
    ret = v1.read_namespaced_pod_log(
        namespace=data["namespace"],
        name=data["name"],
        container=data["container"],
        _preload_content=False
    )
    print(ret.read())

But we can't specify a default container name in the Rundeck job window, and I fetched the pod logs successfully when I omitted the default container name.

So my opinion is that we can remove this default container name and fetch without a container name, like this:

    data = {}
    data["name"] = os.environ.get('RD_CONFIG_NAME')
    data["namespace"] = os.environ.get('RD_CONFIG_NAMESPACE')

    common.connect()

    try:
        v1 = client.CoreV1Api()
        ret = v1.read_namespaced_pod_log(
            namespace=data["namespace"],
            name=data["name"],
            _preload_content=False
        )
        print(ret.read())

    except ApiException as e:
        log.error("Exception error creating: %s\n" % e)
        sys.exit(1)
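
An alternative that keeps the option usable when it is set would be to pass container only when a value is present (a sketch, not code from the plugin):

kwargs = {
    "namespace": data["namespace"],
    "name": data["name"],
    "_preload_content": False,
}
# Only select a specific container when one was actually configured.
if data.get("container"):
    kwargs["container"] = data["container"]

ret = v1.read_namespaced_pod_log(**kwargs)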

Regression in wait-for-job: Bad Request

We use the custom resource steps to:

  1. create a new job
  2. wait for the job completion and spit out the logs
  3. delete the job resource

It seems that upgrading from version 15 to 16 introduced a regression that causes the Job/Waitfor step to error when pulling logs while the pod is spinning up.

Debug logs:

...
DEBUG: kubernetes-wait-job: Searching for pod associated with job
DEBUG: kubernetes-wait-job: Fetching logs from pod: rundeck-test-blah-9548-86vwq
INFO: kubernetes-wait-job: ========================== job log start ==========================
ERROR: kubernetes-wait-job: Exception waiting for job: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '1525ec26-f329-4675-800e-4d03061fd3be', 'Content-Type': 'application/json', 'Date': 'Wed, 24 Jun 2020 22:22:22 GMT', 'Content-Length': '221'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container \\"tests\\" in pod \\"rundeck-test-blah-9548-86vwq\\" is waiting to start: ContainerCreating","reason":"BadRequest","code":400}\n'
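
Until this is fixed, the wait step could treat this particular 400 as "not ready yet" and retry; a sketch of the idea (not the plugin's code):

import time
from kubernetes.client.rest import ApiException

def read_logs_when_ready(v1, name, namespace, poll_seconds=5):
    """Fetch pod logs, retrying while the container is still creating."""
    while True:
        try:
            return v1.read_namespaced_pod_log(name=name, namespace=namespace)
        except ApiException as e:
            # "waiting to start: ContainerCreating" surfaces as HTTP 400.
            if e.status == 400 and "ContainerCreating" in (e.body or ""):
                time.sleep(poll_seconds)
            else:
                raise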

Can't Delete Job

plugin version: 1.0.5

I had success creating a job using this plugin. 🎆 🎉 🎈
Unfortunately, I'm unable to delete a job using either Kubernetes / Generic / Delete or Kubernetes / Job / Delete (Kubernetes / Job / Re-Run also fails on its create step):

Using Kubernetes / Generic / Delete results in TypeError: delete_namespaced_job() takes exactly 4 arguments (3 given):

17:40:05 |   |   | 1: Workflow step executing: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
17:40:05 |   |   | preparing for sequential execution on 1 nodes
17:40:05 |   |   | Executing command on node: localhost, NodeEntryImpl{tags=[], attributes={nodename=localhost, hostname=localhost, osFamily=unix, osVersion=4.15.0-23-generic, osArch=amd64, description=Rundeck server node, osName=Linux, username=rundeck, tags=}, project='null'}
17:40:05 |   |   | [workflow] beginExecuteNodeStep(localhost): NodeDispatch: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
17:40:05 |   |   | [Kubernetes-Delete] step started, config: {name=helloworld, namespace=default, debug=true, verify_ssl=false, type=Job}
17:40:05 |   |   | [Kubernetes-Delete] executing: [python, -u, /var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.5/delete.py]
17:40:05 |   |   | DEBUG: kubernetes-model-source: Log level configured for DEBUG
17:40:05 |   |   | Traceback (most recent call last):
17:40:05 |   |   | File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.5/delete.py", line 116, in <module>
17:40:05 |   |   | main()
17:40:05 |   |   | File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.5/delete.py", line 68, in main
17:40:05 |   |   | pretty="true")
17:40:05 |   |   | TypeError: delete_namespaced_job() takes exactly 4 arguments (3 given)
17:40:05 |   |   | [Kubernetes-Delete]: result code: 1
17:40:05 |   |   | Failed: NonZeroResultCode: Script result code was: 1

Using Kubernetes / Job / Delete reports success:

17:32:35 |   |   | 1: Workflow step executing: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
-- | -- | -- | --
17:32:35 |   |   | preparing for sequential execution on 1 nodes
17:32:35 |   |   | Executing command on node: localhost, NodeEntryImpl{tags=[], attributes={nodename=localhost, hostname=localhost, osFamily=unix, osVersion=4.15.0-23-generic, osArch=amd64, description=Rundeck server node, osName=Linux, username=rundeck, tags=}, project='null'}
17:32:35 |   |   | [workflow] beginExecuteNodeStep(localhost): NodeDispatch: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
17:32:35 |   |   | [Kubernetes-Delete-Job] step started, config: {name=helloworld, namespace=default, debug=true, verify_ssl=false}
17:32:35 |   |   | [Kubernetes-Delete-Job] executing: [python, -u, /var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.5/job-delete.py]
17:32:35 |   |   | DEBUG: kubernetes-service-delete: Log level configured for DEBUG
17:32:35 |   |   | Deployment deleted '{u'completionTime': u'2018-06-21T23:25:57Z', u'conditions': [{u'status': u'True', u'lastProbeTime': u'2018-06-21T23:25:57Z', u'type': u'Complete', u'lastTransitionTime': u'2018-06-21T23:25:57Z'}], u'succeeded': 1, u'startTime': u'2018-06-21T23:25:53Z'}'
17:32:35 |   |   | [Kubernetes-Delete-Job]: result code: 0

but all future create attempts result in failure:


16:52:17 |   | 2. Kubernetes / Job / Create | [workflow] Begin step: 2,NodeDispatch
-- | -- | -- | --
16:52:17 |   |   | 2: Workflow step executing: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
16:52:17 |   |   | preparing for sequential execution on 1 nodes
16:52:17 |   |   | Executing command on node: localhost, NodeEntryImpl{tags=[], attributes={nodename=localhost, hostname=localhost, osFamily=unix, osVersion=4.15.0-23-generic, osArch=amd64, description=Rundeck server node, osName=Linux, username=rundeck, tags=}, project='null'}
16:52:17 |   |   | [workflow] beginExecuteNodeStep(localhost): NodeDispatch: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
16:52:17 |   |   | [Kubernetes-Create-Job] step started, config: {debug=false, container_name=alp, verify_ssl=false, image_pull_policy=Always, name=helloworld, namespace=default, container_command=echo hello k8s, api_version=batch/v1, job_restart_policy=Never, container_image=alpine}
16:52:17 |   |   | [Kubernetes-Create-Job] executing: [python, -u, /var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.5/job-create.py]
16:52:17 |   |   | ERROR: kubernetes-model-source: Exception creating job: (409)
16:52:17 |   |   | Reason: Conflict
16:52:17 |   |   | HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 21 Jun 2018 22:52:17 GMT', 'Content-Length': '245', 'Content-Type': 'application/json'})
16:52:17 |   |   | HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"object is being deleted: jobs.batch \"helloworld\" already exists","reason":"AlreadyExists","details":{"name":"helloworld","group":"batch","kind":"jobs"},"code":409}
16:52:17 |   |   |  
16:52:17 |   |   |  
16:52:17 |   |   |  
16:52:17 |   |   | [Kubernetes-Create-Job]: result code: 1

Using kubectl, the job is deleted without issue (same config used by rundeck plugin):

[root@kubernetes ~]# kubectl get jobs
NAME         DESIRED   SUCCESSFUL   AGE
helloworld   1         1            3m
[root@kubernetes ~]# kubectl describe jobs helloworld
Name:           helloworld
Namespace:      default
Selector:       controller-uid=fca539a8-75a4-11e8-88e2-fa163eaa4323
Labels:         controller-uid=fca539a8-75a4-11e8-88e2-fa163eaa4323
                job-name=helloworld
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Thu, 21 Jun 2018 16:46:50 -0600
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=fca539a8-75a4-11e8-88e2-fa163eaa4323
           job-name=helloworld
  Containers:
   alp:
    Image:  alpine
    Port:   <none>
    Command:
      echo
      hello
      k8s
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  6m    job-controller  Created pod: helloworld-rzdj8
[root@kubernetes ~]# kubectl delete jobs helloworld
job "helloworld" deleted
[root@kubernetes ~]# kubectl describe jobs helloworld
Error from server (NotFound): jobs.batch "helloworld" not found

Support nodes-executors in multiple Kubernetes clusters

On a project, through 'edit nodes' I can add pods from various clusters to the list of nodes.
When I want to execute a command on a pod, only the 'Kubernetes / Pods / Node Executor' settings at the project level are honoured.

What I would like is to use the 'Kubernetes Config File Path' from the node itself (we have it in the configuration used to fetch the node), not the 'Kubernetes Config File Path' from the project settings.
This way I would be able to access nodes on different K8s clusters and execute commands there.

Delete Service fails with kubernetes-plugin-2.0.3

Hi there,

I created a Kubernetes / Service / Delete job. It successfully deleted the service on Kubernetes, but in Rundeck it is reported as a failure.

Here are the logs:

Traceback (most recent call last):
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-2.0.3/service-delete.py", line 48, in <module>
    main()
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-2.0.3/service-delete.py", line 40, in main
    print(common.parseJson(api_response))
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 258, in parseJson
    return json.dumps(obj, cls=ObjectEncoder)
  File "/usr/lib/python2.7/json/__init__.py", line 251, in dumps
    sort_keys=sort_keys, **kw).encode(obj)
  File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
    return iterencode(o, 0)
  File "/var/lib/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 254, in default
    return {k.lstrip('_'): v for k, v in vars(obj).items()}
TypeError: vars() argument must have __dict__ attribute
Failed: NonZeroResultCode: Script result code was: 1


Error when trying to create a Deployment or a Pod

Hello,

I'm testing this plugin and I got the Resource Model working just fine.
But when I try to use one of the workflow steps to create a Deployment or a Pod I get the following error:

[Kubernetes-Create-Deployment] step started, config: {image=registry.example.com/busybox:1.33.0, debug=true, container_name=busybox, verify_ssl=false, replicas=1, name=busybox-deployment, namespace=rundeck-dev, api_version=apps/v1, labels=app=busybox}
[Kubernetes-Create-Deployment] executing: [python, -u, /home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py]
DEBUG: kubernetes-model-source: Log level configured for DEBUG
DEBUG: kubernetes-model-source: Creating job from data:
DEBUG: kubernetes-model-source: {'api_version': 'apps/v1', 'name': 'busybox-deployment', 'container_name': 'busybox', 'image': 'registry.example.com/busybox:1.33.0', 'ports': None, 'replicas': '1', 'namespace': 'rundeck-dev', 'labels': 'app=busybox'}
DEBUG: kubernetes-plugin: config file
DEBUG: kubernetes-plugin: None
DEBUG: kubernetes-plugin: -------------------
DEBUG: kubernetes-plugin: getting from default config file
Traceback (most recent call last):
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py", line 128, in <module>
    main()
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py", line 123, in main
    deployment = create_deployment_object(data)
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py", line 19, in create_deployment_object
    template_spec = common.create_pod_template_spec(data=data)
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 264, in create_pod_template_spec
    for port in data["ports"].split(','):
AttributeError: 'NoneType' object has no attribute 'split'
[Kubernetes-Create-Deployment]: result code: 1

And the deployment is not created in Kubernetes. This happens for any Pod with the ports field empty.
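
A guard around the ports handling would avoid this crash when the field is empty; a sketch of the fix (inferred, not the maintainers' patch):

ports = []
if data.get("ports"):
    for port in data["ports"].split(','):
        ports.append(client.V1ContainerPort(container_port=int(port)))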

But when I deploy nginx with the port field filled in, the following error occurs, yet for some reason the deployment is created in Kubernetes.

[Kubernetes-Create-Deployment] step started, config: {image=registry.example.com/nginx:1.19.10-alpine, debug=true, container_name=nginx, verify_ssl=false, replicas=1, name=nginx-deployment, namespace=rundeck-dev, api_version=apps/v1, ports=80, labels=app=nginx}
[Kubernetes-Create-Deployment] executing: [python, -u, /home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py]
DEBUG: kubernetes-model-source: Log level configured for DEBUG
DEBUG: kubernetes-model-source: Creating job from data:
DEBUG: kubernetes-model-source: {'api_version': 'apps/v1', 'name': 'nginx-deployment', 'container_name': 'nginx', 'image': 'registry.example.com/nginx:1.19.10-alpine', 'ports': '80', 'replicas': '1', 'namespace': 'rundeck-dev', 'labels': 'app=nginx'}
DEBUG: kubernetes-plugin: config file
DEBUG: kubernetes-plugin: None
DEBUG: kubernetes-plugin: -------------------
DEBUG: kubernetes-plugin: getting from default config file
Traceback (most recent call last):
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py", line 128, in <module>
    main()
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py", line 124, in main
    create_deployment(apiV1, deployment, data)
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/deployment-create.py", line 60, in create_deployment
    print(common.parseJson(api_response.status))
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 258, in parseJson
    return json.dumps(obj, cls=ObjectEncoder)
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 254, in default
    return {k.lstrip('_'): v for k, v in vars(obj).items()}
TypeError: vars() argument must have __dict__ attribute
[Kubernetes-Create-Deployment]: result code: 1

Using the Kubernetes/Generic/Create workflow step returns the same error in the Nginx deployment case. Again, in this case the deployment is created in Kubernetes.

DEBUG: kubernetes-model-source: Log level configured for DEBUG
Traceback (most recent call last):
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/create-from-yaml.py", line 139, in <module>
    main()
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/create-from-yaml.py", line 40, in main
    print(common.parseJson(resp.status))
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 258, in parseJson
    return json.dumps(obj, cls=ObjectEncoder)
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.3/common.py", line 254, in default
    return {k.lstrip('_'): v for k, v in vars(obj).items()}
TypeError: vars() argument must have __dict__ attribute
Failed: NonZeroResultCode: Script result code was: 1

The Kubernetes cluster is v1.14.5, the Rundeck instance has Python 3.6.9 and kubernetes==12.0.1 installed.
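
The common thread in these tracebacks is that ObjectEncoder.default calls vars() on values that have no __dict__ (datetimes, for example). A defensive fallback along these lines would avoid the TypeError (a sketch against the common.py shown above, not the actual fix):

import json

class ObjectEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, "__dict__"):
            # Kubernetes model objects carry their fields in __dict__.
            return {k.lstrip('_'): v for k, v in vars(obj).items()}
        # Fall back to a string for datetimes and other plain values.
        return str(obj)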

Typo in Waitfor Job step

Found in release 1.0.13.
In the logs of the Waitfor Job workflow step there is a message with a typo: "Wating for job completion".

Regards,
Willian.

can't build 1.0.11

$ gradle --version

------------------------------------------------------------
Gradle 5.3
------------------------------------------------------------

Build time:   2019-03-20 11:03:29 UTC
Revision:     f5c64796748a98efdbf6f99f44b6afe08492c2a0

Kotlin:       1.3.21
Groovy:       2.5.4
Ant:          Apache Ant(TM) version 1.9.13 compiled on July 10 2018
JVM:          1.8.0_191 (Oracle Corporation 25.191-b12)
OS:           Mac OS X 10.13.6 x86_64
$ git branch
* (HEAD detached at 1.0.11)
$ gradle build

Welcome to Gradle 5.3!

Here are the highlights of this release:
 - Feature variants AKA "optional dependencies"
 - Type-safe accessors in Kotlin precompiled script plugins
 - Gradle Module Metadata 1.0

For more details see https://docs.gradle.org/5.3/release-notes.html

Starting a Gradle Daemon (subsequent builds will be faster)

FAILURE: Build failed with an exception.

* Where:
Script 'https://raw.githubusercontent.com/rundeck-plugins/build-zip/master/build.gradle' line: 74

* What went wrong:
A problem occurred evaluating script.
> Could not find method leftShift() for arguments [build_5euplq0w1agkutluqphlqsf3j$_run_closure4@7cb03ba8] on task ':build' of type org.gradle.api.DefaultTask.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 16s

Can't mount several volumes for a job

I tried to create a job with several volumes; as Volume Mounts I set: scripts=/mnt/scripts, secrets=/mnt/secrets

Following instructions: Volume Mounts, name=mountPath format. Use comma for multiples volume

Then in my job, the result is:

        volumeMounts:
        - mountPath: /mnt/scripts, secrets
          name: scripts

instead of

        volumeMounts:
        - mountPath: /mnt/scripts
          name: scripts
        - mountPath: /mnt/secrets
          name: secrets

I took a look at the plugin code, and I think the error is that we loop over lines instead of commas.

Here:

volumes_array = data["volume_mounts"].splitlines()

We have

    if "volume_mounts" in data:
        volumes_array = data["volume_mounts"].splitlines()
        tmp = dict(s.split('=', 1) for s in volumes_array)

        mounts = []
        for key in tmp:
            mounts.append(client.V1VolumeMount(
                name=key,
                mount_path=tmp[key])
            )

instead of:

    if "volume_mounts" in data:
        volumes_array = data["volume_mounts"].split(",")
        tmp = dict(s.split('=', 1) for s in volumes_array)

        mounts = []
        for key in tmp:
            mounts.append(client.V1VolumeMount(
                name=key,
                mount_path=tmp[key])
            )
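
One nuance with the comma fix: with the documented ", " separator the second key keeps a leading space, so the pieces should also be stripped; a quick illustration:

volume_mounts = "scripts=/mnt/scripts, secrets=/mnt/secrets"

tmp = dict(s.strip().split('=', 1) for s in volume_mounts.split(','))
print(tmp)  # {'scripts': '/mnt/scripts', 'secrets': '/mnt/secrets'}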

Remote pod executions hang

There appears to be an issue with the latest Python client release (8.0.1) that causes remote pod executions to intermittently hang indefinitely:

kubernetes-client/python#602

In Rundeck, this issue manifests as remote pod execution jobs that hang indefinitely, preventing execution of subsequently scheduled jobs. The frequency of these occurrences appears to be roughly once every 10-20 executions.

I can provide more details if necessary, but I'm opening this issue to draw attention to the related Python client issue. This appears to be a feature-breaking issue.

Releasing 1.0.12

Hi @ltamaster
Are you planning to release 1.0.12 soon?
It would be great to have a new version with our changes in it :)

create custom resource

Hi @ltamaster
Is there a possibility to create a custom resource with a deployment job, like the command line 'kubectl create -f file.yaml'?
kind regards
Marco

kubernetes module v12.0.0 not working

This error:

Traceback (most recent call last):
  File "/home/jpla/rundeck/rd.repo/pro/3.3.4/libext/cache/kubernetes-plugin-2.0.1/pods-resource-model.py", line 262, in <module>
    main()
  File "/home/jpla/rundeck/rd.repo/pro/3.3.4/libext/cache/kubernetes-plugin-2.0.1/pods-resource-model.py", line 219, in main
    ret = v1.list_pod_for_all_namespaces(
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16864, in list_pod_for_all_namespaces
    return self.list_pod_for_all_namespaces_with_http_info(**kwargs)  # noqa: E501
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16967, in list_pod_for_all_namespaces_with_http_info
    return self.api_client.call_api(
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 239, in GET
    return self.request("GET", url,
  File "/home/jpla/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 212, in request
    r = self.pool_manager.request(method, url,
  File "/usr/lib/python3/dist-packages/urllib3/request.py", line 75, in request
    return self.request_encode_url(
  File "/usr/lib/python3/dist-packages/urllib3/request.py", line 97, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/poolmanager.py", line 330, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 654, in urlopen
    conn = self._get_conn(timeout=pool_timeout)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 274, in _get_conn
    return conn or self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 228, in _new_conn
    conn = self.ConnectionCls(
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 115, in __init__
    _HTTPConnection.__init__(self, *args, **kw)
TypeError: __init__() got an unexpected keyword argument 'assert_hostname'
[2020-10-21T10:54:51,579] ERROR resources.ExceptionCatchingResourceModelSource - [ResourceModelSource: 2.source (kubernetes-resource-model), project: mysql] 
com.dtolabs.rundeck.core.resources.ResourceModelSourceException: failed to execute: /home/jpla/rundeck/rd.repo/pro/3.3.4/libext/cache/kubernetes-plugin-2.0.1/pods-resource-model.py: Script execution failed with result: 1
	at com.dtolabs.rundeck.core.resources.ScriptPluginResourceModelSource.getNodes(ScriptPluginResourceModelSource.java:82) ~[rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.resources.ExceptionCatchingResourceModelSource.getNodes(ExceptionCatchingResourceModelSource.java:58) [rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.resources.DelegateResourceModelSource.getNodes(DelegateResourceModelSource.java:35) [rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.common.ProjectNodeSupport.getNodeSet(ProjectNodeSupport.java:139) [rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.common.ProjectNodeSupport$ProjectNodesSource.getNodes(ProjectNodeSupport.java:359) [rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.resources.ExceptionCatchingResourceModelSource.getNodes(ExceptionCatchingResourceModelSource.java:58) [rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.resources.ResourceModelSource$getNodes.call(Unknown Source) [rundeck-core-3.3.4-20201007.jar!/:?]
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:119) [groovy-2.5.6.jar!/:2.5.6]
	at rundeck.services.nodes.CachedProjectNodes.reloadNodeSet(CachedProjectNodes.groovy:44) [classes!/:?]
	at rundeck.services.nodes.CachedProjectNodes$reloadNodeSet.call(Unknown Source) [classes!/:?]
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:119) [groovy-2.5.6.jar!/:2.5.6]
	at rundeck.services.NodeService$_loadNodes_closure5.doCall(NodeService.groovy:313) [classes!/:?]
	at rundeck.services.NodeService$_loadNodes_closure5.doCall(NodeService.groovy) [classes!/:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_265]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_265]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_265]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_265]
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1041) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.Closure.call(Closure.java:405) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.Closure.call(Closure.java:399) [groovy-2.5.6.jar!/:2.5.6]
	at com.codahale.metrics.Timer.time(Timer.java:104) [metrics-core-4.0.7.jar!/:4.0.7]
	at sun.reflect.GeneratedMethodAccessor560.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_265]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_265]
	at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:188) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:127) [groovy-2.5.6.jar!/:2.5.6]
	at org.grails.plugins.metricsweb.MetricService.withTimer(MetricService.groovy:60) [metricsweb-3.3.4-20201007.jar!/:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_265]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_265]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_265]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_265]
	at org.codehaus.groovy.runtime.callsite.PlainObjectMetaMethodSite.doInvoke(PlainObjectMetaMethodSite.java:43) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrap.invoke(PogoMetaMethodSite.java:179) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.call(PogoMetaMethodSite.java:70) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:115) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:143) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callSafe(AbstractCallSite.java:103) [groovy-2.5.6.jar!/:2.5.6]
	at rundeck.services.NodeService$_loadNodes_closure6.doCall(NodeService.groovy:320) [classes!/:?]
	at rundeck.services.NodeService$_loadNodes_closure6.doCall(NodeService.groovy) [classes!/:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_265]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_265]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_265]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_265]
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:101) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323) [groovy-2.5.6.jar!/:2.5.6]
	at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1041) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.Closure.call(Closure.java:405) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.Closure.call(Closure.java:399) [groovy-2.5.6.jar!/:2.5.6]
	at groovy.lang.Closure.run(Closure.java:486) [groovy-2.5.6.jar!/:2.5.6]
	at org.springframework.core.task.SimpleAsyncTaskExecutor$ConcurrencyThrottlingRunnable.run(SimpleAsyncTaskExecutor.java:275) [spring-core-5.1.14.RELEASE.jar!/:5.1.14.RELEASE]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_265]
Caused by: com.dtolabs.rundeck.core.resources.ResourceModelSourceException: Script execution failed with result: 1
	at com.dtolabs.rundeck.core.resources.ScriptResourceUtil.executeScript(ScriptResourceUtil.java:147) ~[rundeck-core-3.3.4-20201007.jar!/:?]
	at com.dtolabs.rundeck.core.resources.ScriptPluginResourceModelSource.getNodes(ScriptPluginResourceModelSource.java:68) ~[rundeck-core-3.3.4-20201007.jar!/:?]
	... 61 more

WORKAROUND
pip3 install kubernetes==11.0.0
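
To confirm which client version the plugin scripts will actually import at runtime (a generic check, not part of the plugin):

    import kubernetes

    # the plugin currently requires the 11.x client line
    print(kubernetes.__version__)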

Fill the token field using a value stored in Key Storage in the node configuration

In the node configuration, when you are using the Kubernetes plugin, you have to fill the token field directly with your token value.

The aim of this request is to make it possible to fill this field with a value stored in Key Storage.

In the project settings, this option already exists when you define the Kubernetes plugin as your default node executor.

Cluster takes unusually long time scheduling pods

job-wait.py fails with the error below if the K8s cluster takes an unusually long time to schedule the pod.

Traceback (most recent call last):

17:06:45 |   | File "/home/rundeck/libext/cache/kubernetes-plugin-2.0.0/job-wait.py", line 77, in wait
17:06:45 |   | first_item = pod_list.items[0]
17:06:45 |   | IndexError: list index out of range
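
A defensive retry around the empty pod list would avoid the crash while the scheduler catches up. A rough sketch, not the plugin's actual code; the namespace and label selector are placeholders:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def wait_for_first_pod(namespace, selector, retries=30, delay=5):
        for _ in range(retries):
            pod_list = v1.list_namespaced_pod(namespace, label_selector=selector)
            if pod_list.items:  # guard the items[0] access while scheduling is slow
                return pod_list.items[0]
            time.sleep(delay)
        raise TimeoutError("pod was not scheduled within the retry window")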

Follow Kubernetes job execution logs

The current behavior is that, for example, "Kubernetes / Job / Wait for" returns the complete execution logs only once the job has completed. It would be a nice feature to be able to follow the k8s job's logs while it runs (like tail).
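
The SDK's watch helper can already stream pod logs line by line, so this looks feasible. A minimal sketch with placeholder pod and namespace names, not the plugin's implementation:

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # yields each new log line as it arrives, tail -f style
    w = watch.Watch()
    for line in w.stream(v1.read_namespaced_pod_log,
                         name="my-job-pod", namespace="default"):
        print(line)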

K8s Authentication through ServiceAccount

I'm running Rundeck inside my K8s cluster, and my pods are allowed to access the cluster through a ServiceAccount policy, but the plugin's authentication methods don't cover this form.

Any chance to implement authentication through Service Account for pods?
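
For reference, the underlying SDK supports this out of the box, so the plugin would mainly need to expose it as an authentication option. A minimal sketch of the in-cluster path:

    from kubernetes import client, config

    # reads the ServiceAccount token and CA certificate that Kubernetes
    # mounts into every pod under /var/run/secrets/kubernetes.io/serviceaccount/
    config.load_incluster_config()

    v1 = client.CoreV1Api()
    print([p.metadata.name for p in v1.list_namespaced_pod("default").items])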

Pod wait and Deployment Status get 401 errors

09:21:05 1. Watcher for ts-utils pod rundeck-cc7776b7f-79gcd Checking job status ...
09:21:05   ERROR: kubernetes-wait-pod: Exception waiting for job: (401)
09:21:05   Reason: Unauthorized
09:21:05   HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Length': '165', 'Audit-Id': '3b7692d2-d1f0-4420-90ce-9cdd8f3f802a', 'Content-Type': 'application/json', 'Date': 'Thu, 29 Oct 2020 09:21:05 GMT'})
09:21:05   HTTP response body: {
09:21:05   "kind": "Status",
09:21:05   "apiVersion": "v1",
09:21:05   "metadata": {
09:21:05    
09:21:05   },
09:21:05   "status": "Failure",
09:21:05   "message": "Unauthorized",
09:21:05   "reason": "Unauthorized",
09:21:05   "code": 401
09:21:05   }
09:21:05    
09:21:05    
09:21:05   Failed: NonZeroResultCode: Script result code was: 1
09:21:05   [wf:5edf6ad6-ac7e-4ae7-9213-2298b35cd4b6] Step [2] did not run. start conditions: [(after.step.1 == 'true')], skip conditions: [(step.2.completed == 'true')]
09:13:24 1. Watcher for deployment rundeck-cc7776b7f-79gcd ERROR: kubernetes-model-source: Exception deleting deployment: (401)
09:13:24   Reason: Unauthorized
09:13:24   HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Audit-Id': 'a08a5dae-cdfa-4942-aa6d-9b01177fc624', 'Date': 'Thu, 29 Oct 2020 09:13:24 GMT', 'Content-Type': 'application/json', 'Content-Length': '129'})
09:13:24   HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
09:13:24    
09:13:24    
09:13:24    
09:13:24   Failed: NonZeroResultCode: Script result code was: 1

TypeError: __init__() got an unexpected keyword argument 'assert_hostname'

Hi

I am having a lot of trouble setting up this plugin.
My setup: I am running Rundeck inside a Kubernetes cluster, which I want to manage with that same Rundeck instance, using it to run commands and kubernetes jobs on the cluster.

I have not succeeded in either listing the kubernetes nodes or configuring a simple "hello world" kubernetes job.

In the authentication section I have the following:
Cluster URL: mylab-master01.mylab.lab
Token: Token String of default token from kubernetes
Verify SSL: Unchecked
SSL Certificate Path: Blank

The following is the output of my job running.

13:23:20 1. Kubernetes / Job / Create [workflow] Begin step: 1,NodeDispatch
13:23:20 1: Workflow step executing: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
13:23:20 preparing for sequential execution on 1 nodes
13:23:20 Executing command on node: localhost, NodeEntryImpl{tags=[], attributes={nodename=localhost, hostname=localhost, osFamily=unix, osVersion=3.10.0-327.36.3.el7.x86_64, osArch=amd64, description=Rundeck server node, osName=Linux, username=rundeck, tags=}, project='null'}
13:23:20 [workflow] beginExecuteNodeStep(localhost): NodeDispatch: StepExecutionItem{type='NodeDispatch', keepgoingOnSuccess=false, hasFailureHandler=false}
13:23:20 [Kubernetes-Create-Job] step started, config: {debug=true, container_name=hello world Container, verify_ssl=false, image_pull_policy=Always, name=my job, namespace=default, api_version=batch/v1, job_restart_policy=Never, container_image=hello-world:latest, url=mylab-master01.mylab.lab, token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tenM5OHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI4YjAyMjRlLWY1NjctMTFlNy1hZjQ0LTAwNTA1NjhlNzU0NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.b7gfoLkWCwH_7KeuyGcLqyW9kJTtfJzjePt887xGFWz468y3jhMxU_QeID49d1x4PMeEbx73G35I3vQ6s3Nn0mh5Hrg0pVsWiyd6x0fIdxW9bNpNuJsHD5oIbbhM-GXKaUBJ2Inwqc5tloXjYnh67Vb4P816xYPnPkaw_ejDuEEHc3LrS4SQpNoy22z_URELhYGpRBEmcGODZm_VFxq8jt72B5xED0AO5jBRFaCNQns95ZIvFB0IfKuUzIk2PN369Wx-th3UvaKfpfjhTBbeF4wc4X5XLcRjTGWeYklwNXB2cLxiPbDlBsDQsj8RW-gHZz7peFIOViDcR1wrEEJZ6w}
13:23:20 [Kubernetes-Create-Job] executing: [python, -u, /var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/job-create.py]
13:23:20 DEBUG: kubernetes-model-source: Log level configured for DEBUG
13:23:20 DEBUG: kubernetes-model-source: Creating job
13:23:20 DEBUG: kubernetes-model-source: {'image_pull_policy': 'Always', 'name': 'my job', 'job_restart_policy': 'Never', 'namespace': 'default', 'container_name': 'hello world Container', 'container_image': 'hello-world:latest', 'api_version': 'batch/v1'}
13:23:20 DEBUG: kubernetes-model-source: new job:
13:23:21 DEBUG: kubernetes-model-source: {'api_version': 'batch/v1',
13:23:21 'kind': 'Job',
13:23:21 'metadata': {'annotations': None,
13:23:21 'cluster_name': None,
13:23:21 'creation_timestamp': None,
13:23:21 'deletion_grace_period_seconds': None,
13:23:21 'deletion_timestamp': None,
13:23:21 'finalizers': None,
13:23:21 'generate_name': None,
13:23:21 'generation': None,
13:23:21 'initializers': None,
13:23:21 'labels': None,
13:23:21 'name': 'my job',
13:23:21 'namespace': 'default',
13:23:21 'owner_references': None,
13:23:21 'resource_version': None,
13:23:21 'self_link': None,
13:23:21 'uid': None},
13:23:21 'spec': {'active_deadline_seconds': None,
13:23:21 'backoff_limit': None,
13:23:21 'completions': None,
13:23:21 'manual_selector': None,
13:23:21 'parallelism': None,
13:23:21 'selector': None,
13:23:21 'template': {'metadata': {'annotations': None,
13:23:21 'cluster_name': None,
13:23:21 'creation_timestamp': None,
13:23:21 'deletion_grace_period_seconds': None,
13:23:21 'deletion_timestamp': None,
13:23:21 'finalizers': None,
13:23:21 'generate_name': None,
13:23:21 'generation': None,
13:23:21 'initializers': None,
13:23:21 'labels': None,
13:23:21 'name': 'my job',
13:23:21 'namespace': None,
13:23:21 'owner_references': None,
13:23:21 'resource_version': None,
13:23:21 'self_link': None,
13:23:21 'uid': None},
13:23:21 'spec': {'active_deadline_seconds': None,
13:23:21 'affinity': None,
13:23:21 'automount_service_account_token': None,
13:23:21 'containers': [{'args': None,
13:23:21 'command': None,
13:23:21 'env': [],
13:23:21 'env_from': None,
13:23:21 'image': 'hello-world:latest',
13:23:21 'image_pull_policy': 'Always',
13:23:21 'lifecycle': None,
13:23:21 'liveness_probe': None,
13:23:21 'name': 'hello world Container',
13:23:21 'ports': None,
13:23:21 'readiness_probe': None,
13:23:21 'resources': None,
13:23:21 'security_context': None,
13:23:21 'stdin': None,
13:23:21 'stdin_once': None,
13:23:21 'termination_message_path': None,
13:23:21 'termination_message_policy': None,
13:23:21 'tty': None,
13:23:21 'volume_devices': None,
13:23:21 'volume_mounts': None,
13:23:21 'working_dir': None}],
13:23:21 'dns_config': None,
13:23:21 'dns_policy': None,
13:23:21 'host_aliases': None,
13:23:21 'host_ipc': None,
13:23:21 'host_network': None,
13:23:21 'host_pid': None,
13:23:21 'hostname': None,
13:23:21 'image_pull_secrets': None,
13:23:21 'init_containers': None,
13:23:21 'node_name': None,
13:23:21 'node_selector': None,
13:23:21 'priority': None,
13:23:21 'priority_class_name': None,
13:23:21 'restart_policy': 'Never',
13:23:21 'scheduler_name': None,
13:23:21 'security_context': None,
13:23:21 'service_account': None,
13:23:21 'service_account_name': None,
13:23:21 'share_process_namespace': None,
13:23:21 'subdomain': None,
13:23:21 'termination_grace_period_seconds': None,
13:23:21 'tolerations': None,
13:23:21 'volumes': None}}},
13:23:21 'status': None}
13:23:21 Traceback (most recent call last):
13:23:21 File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/job-create.py", line 208, in
13:23:21 main()
13:23:21 File "/var/lib/rundeck/libext/cache/kubernetes-plugin-1.0.4/job-create.py", line 197, in main
13:23:21 namespace=data["namespace"]
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/batch_v1_api.py", line 58, in create_namespaced_job
13:23:21 (data) = self.create_namespaced_job_with_http_info(namespace, body, **kwargs)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/batch_v1_api.py", line 143, in create_namespaced_job_with_http_info
13:23:21 collection_formats=collection_formats)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 321, in call_api
13:23:21 _return_http_data_only, collection_formats, _preload_content, _request_timeout)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 155, in __call_api
13:23:21 _request_timeout=_request_timeout)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 364, in request
13:23:21 body=body)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/rest.py", line 266, in POST
13:23:21 body=body)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/rest.py", line 166, in request
13:23:21 headers=headers)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 70, in request
13:23:21 **urlopen_kw)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 148, in request_encode_body
13:23:21 return self.urlopen(method, url, **extra_kw)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 321, in urlopen
13:23:21 response = conn.urlopen(method, u.request_uri, **kw)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 589, in urlopen
13:23:21 conn = self._get_conn(timeout=pool_timeout)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 251, in _get_conn
13:23:21 return conn or self._new_conn()
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 212, in _new_conn
13:23:21 strict=self.strict, **self.conn_kw)
13:23:21 File "/usr/local/lib/python2.7/dist-packages/urllib3/connection.py", line 125, in init
13:23:21 _HTTPConnection.init(self, *args, **kw)
13:23:21 TypeError: init() got an unexpected keyword argument 'assert_hostname'
13:23:21 [Kubernetes-Create-Job]: result code: 1
13:23:21 Failed: NonZeroResultCode: Script result code was: 1

Please help

Thank you

Yoav
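
This looks like the same assert_hostname TypeError reported above for kubernetes module v12.0.0: a mismatch between the installed kubernetes client and urllib3 versions. Checking the installed client version against the plugin's requirement, as in the earlier workaround, may be a good first step; that is an educated guess based on the similarity of the tracebacks, not a confirmed fix for this setup.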

stderr breaks the script step

Using the script step to execute on a pod, I noticed that even though the commands returned with exit code 0, the job failed.

I isolated the issue to commands that write to stderr without failing, like 'git clone' or even 'set -x'.

I understand the logic, but shouldn't there be a way around this?
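
A possible stopgap (untested assumption on my part): redirect stderr to stdout at the top of the script, e.g. exec 2>&1, so commands that are chatty but successful do not trip whatever treats stderr output as a failure.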

The gradle build failed

Hi

I have installed the kubernetes SDK and am trying to create the zip file for the kubernetes plugin using gradle build, but it is failing.

gradle build

FAILURE: Build failed with an exception.

* Where:
Build file '/root/kubernetes/build.gradle' line: 7

* What went wrong:
Plugin [id: 'pl.allegro.tech.build.axion-release', version: '1.7.1'] was not found in any of the following sources:

- Gradle Core Plugins (plugin is not in 'org.gradle' namespace)
- Plugin Repositories (could not resolve plugin artifact 'pl.allegro.tech.build.axion-release:pl.allegro.tech.build.axion-release.gradle.plugin:1.7.1')
  Searched in the following repositories:
    Gradle Central Plugin Repository

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 1m 1s
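
Since Gradle resolves third-party plugins such as axion-release from the Gradle Plugin Portal at build time, a "was not found in any of the following sources" failure usually means the build host could not reach the portal (offline machine, proxy, or firewall) rather than a problem with the project itself; checking Gradle's network and proxy settings may be a good first step.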
