tektoncd / catalog

Catalog of shared Tasks and Pipelines.

License: Apache License 2.0

Topics: tekton, pipeline, task, catalog, re-useable, hacktoberfest, k8s

catalog's Introduction

Tekton Catalog

If you want v1alpha1 resources, you need to go to the v1alpha1 branch. The main branch has been synced with v1beta1 since 19 June 2020.

This repository contains a catalog of Task resources (and someday Pipelines and Resources), which are designed to be reusable in many pipelines.

Each Task is provided in a separate directory along with a README.md and a Kubernetes manifest, so you can choose which Tasks to install on your cluster. A directory can hold one task and multiple versions.

See our project roadmap.

Hub provides an easy way to search and discover all Tekton resources

Catalog Structure

  1. Each resource follows this structure:

    ./task/                     👈 the kind of the resource
    
        /argocd                 👈 definition file must have same name
           /0.1
             /OWNERS            👈 owners of this resource
             /README.md
             /argocd.yaml       👈 the file name should match the resource name
             /samples/deploy-to-k8s.yaml
           /0.2/...
    
        /golang-build
           /OWNERS
           /README.md
           /0.1
             /README.md
             /golang-build.yaml
             /samples/golang-build.yaml
    
  2. The resource YAML file includes the following:

  • Labels include the version of the resource.
  • Annotations include the minimum pipeline version supported by the resource, the tags associated with the resource, and the displayName of the resource.
  labels:
    app.kubernetes.io/version: "0.1"                 👈 Version of the resource

  annotations:
    tekton.dev/pipelines.minVersion: "0.12.1"        👈 Minimum version of Tekton Pipelines the resource is compatible with
    tekton.dev/categories: CLI                       👈 Comma-separated list of categories
    tekton.dev/tags: "ansible, cli"                  👈 Comma-separated list of tags
    tekton.dev/displayName: "Ansible Tower Cli"      👈 displayName is optional
    tekton.dev/platforms: "linux/amd64,linux/s390x"  👈 Comma-separated list of platforms, optional

spec:
  description: |-
    ansible-tower-cli task simplifies
    workflow, jobs, manage users...                  👈 Summary

    Ansible Tower (formerly 'AWX') is a ...

Note: Categories are a generalized list and are maintained by Hub. To add new categories, please follow the procedure mentioned here.

Task Kinds

There are two kinds of Tasks:

  1. ClusterTask with a Cluster scope, which can be installed by a cluster operator and made available to users in all namespaces
  2. Task with a Namespace scope, which is designed to be installed and used only within that namespace.

Tasks in this repo are namespace-scoped Tasks, but can be installed as ClusterTasks by changing the kind.
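For example, a minimal sketch of that change for the golang-build Task (only the kind line differs; the rest of the manifest stays the same):

apiVersion: tekton.dev/v1beta1
kind: ClusterTask        # changed from "kind: Task" to make the task cluster-scoped
metadata:
  name: golang-build
spec:
  ...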

Using Tasks

First, install a Task onto your cluster:

$ kubectl apply -f golang/build.yaml
task.tekton.dev/golang-build created

You can see which Tasks are installed using kubectl as well:

$ kubectl get tasks
NAME           AGE
golang-build   3s

With the Task installed, you can define a TaskRun that runs that Task, being sure to provide values for required input parameters and resources:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-run
spec:
  taskRef:
    name: golang-build
  params:
  - name: package
    value: github.com/tektoncd/pipeline
  workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: my-source

Next, create the TaskRun you defined:

$ kubectl apply -f example-run.yaml
taskrun.tekton.dev/example-run created

You can check the status of the TaskRun using kubectl:

$ kubectl get taskrun example-run -oyaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-run
spec:
  ...
status:
  completionTime: "2019-04-25T18:10:09Z"
  conditions:
  - lastTransitionTime: "2019-04-25T18:10:09Z"
    status: True
    type: Succeeded
...

Using Tasks through Bundles

Tekton Bundles are an alpha feature of Tekton Pipelines that allows storing Tasks as bundles in a container registry instead of as custom resources in etcd in a Kubernetes cluster. With Tekton Bundles enabled, it is possible to reference any task in the catalog without installing it first (how to enable the feature is noted after the example below). Tasks are available at gcr.io/tekton-releases/catalog/upstream/<task-name>:<task-version>. For example:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-run
spec:
  taskRef:
    name: golang-build
    bundle: gcr.io/tekton-releases/catalog/upstream/golang-build:0.1
  params:
  - name: package
    value: github.com/tektoncd/pipeline
  workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: my-source
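Because Bundles are an alpha feature, the bundle field is only honored once the feature is switched on. A minimal sketch, assuming the default tekton-pipelines installation namespace, is to patch the existing feature-flags ConfigMap so that it contains:

apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  enable-tekton-oci-bundles: "true"   # lets taskRef.bundle references be resolved

Patching (rather than replacing) the ConfigMap keeps the other feature flags at their current values.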

Contributing and Support

If you want to contribute to this repository, please see our contributing guidelines.

If you are looking for support, file an issue or join our Slack workspace.

Status of the Project

This project is still under active development, so you might run into issues. If you do, please don't be shy about letting us know, or better yet, contribute a fix or feature. Its folder structure is not yet set in stone either.

See our project roadmap.

catalog's People

Contributors

afrittoli, barthy1, bendory, bittrance, bobcatfish, bradbeck, chanseokoh, chitrangpatel, chmouel, concaf, divyansh42, dlorenc, frerikandriessen, gijsvandulmen, iancoffey, imjasonh, jhonis, jimmyjones2, jromero, natalieparellano, navidshaikh, piyush-garg, pratap0007, pritidesai, puneetpunamiya, savitaashture, vdemeester, vinamra28, wlynch, yuege01


catalog's Issues

buildah: Need input parameter for build context

Expected Behavior

Suppose we have a repository like this:

./foo/Dockerfile
./foo/bar
./foo/bar/Makefile

where ./foo/Dockerfile has:

FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install make
COPY . /app
RUN make -f /app/bar/Makefile

In order to build this properly, there should be an input parameter available to specify the build context as ./foo.

Actual Behavior

However, the build command is currently:

    command: ['buildah', 'bud', '--tls-verify=$(inputs.params.TLSVERIFY)', '--layers', '-f', '$(inputs.params.DOCKERFILE)', '-t', '$(outputs.resources.image.url)', '.']

There is currently no way to specify the build context in a TaskRun.
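A possible fix, sketched here under stated assumptions (the parameter name CONTEXT is hypothetical, not part of the current task), is to add a param for the build context and pass it to buildah bud instead of the hard-coded '.':

  inputs:
    params:
    - name: CONTEXT
      description: Path of the directory to use as the build context
      default: .

  steps:
  - name: build
    command: ['buildah', 'bud', '--tls-verify=$(inputs.params.TLSVERIFY)', '--layers', '-f', '$(inputs.params.DOCKERFILE)', '-t', '$(outputs.resources.image.url)', '$(inputs.params.CONTEXT)']

A TaskRun could then set CONTEXT to ./foo for the repository layout above.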

Steps to Reproduce the Problem

Additional Info

Failing to run s2i task on kubernetes/minikube

Expected Behavior

S2i building application properly

Actual Behavior

When running s2i task under minikube or kubernetes we are getting this error :

[....]
---> Pruning the development dependencies
npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm timing stage:loadCurrentTree Completed in 33ms
npm timing stage:loadIdealTree:cloneCurrentTree Completed in 1ms
npm timing stage:loadIdealTree:loadShrinkwrap Completed in 6ms
npm timing stage:loadIdealTree:loadAllDepsIntoIdealTree Completed in 3ms
npm timing stage:loadIdealTree Completed in 21ms
npm timing stage:generateActionsToTake Completed in 7ms
npm timing stage:executeActions Completed in 56ms
npm timing stage:rollbackFailedOptional Completed in 2ms
npm timing stage:runTopLevelLifecycles Completed in 134ms
npm timing audit submit Completed in 449ms
npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/audits/quick 453ms
npm timing audit body Completed in 6ms
up to date in 0.533s
found 0 vulnerabilities

npm timing npm Completed in 1167ms
npm info ok
---> Cleaning up npm cache
---> Fix permissions on app-root
error committing container for step {Env:[PATH=/opt/app-root/src/node_modules/.bin/:/opt/app-root/src/.npm-global/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin SUMMARY=Platform for building and running Node.js 12.10.0 applications DESCRIPTION=Node.js  available as docker container is a base platform for building and running various Node.js  applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. STI_SCRIPTS_URL=image:///usr/libexec/s2i STI_SCRIPTS_PATH=/usr/libexec/s2i APP_ROOT=/opt/app-root HOME=/opt/app-root/src PLATFORM=el7 BASH_ENV=/opt/app-root/etc/scl_enable ENV=/opt/app-root/etc/scl_enable PROMPT_COMMAND=. /opt/app-root/etc/scl_enable NODEJS_SCL=rh-nodejs10 NPM_RUN=start NODE_VERSION=12.10.0 NPM_VERSION=6.10.3 NODE_LTS=false NPM_CONFIG_LOGLEVEL=info NPM_CONFIG_PREFIX=/opt/app-root/src/.npm-global NPM_CONFIG_TARBALL=/usr/share/node/node-v12.10.0-headers.tar.gz DEBUG_PORT=5858 LD_LIBRARY_PATH=/opt/rh/httpd24/root/usr/lib64] Command:run Args:[/usr/libexec/s2i/assemble] Flags:[] Attrs:map[] Message:RUN /usr/libexec/s2i/assemble Original:RUN /usr/libexec/s2i/assemble}: error copying layers and metadata for container "50b91ee8a2c440fa6467bb0e7afa1932ce596259d267ec7f03e8e6bbc965247c": Error initializing source containers-storage:e68f92fecf6fffae93b5eb830d66e4d832fbd9af572c5f89c8af524c5898ae66-working-container: error extracting layer "ee67b56b473b3521d9e30919a551d56d8e97cfc1843ae7f02306e78066b8520b": lstat /var/lib/containers/storage/overlay/ee67b56b473b3521d9e30919a551d56d8e97cfc1843ae7f02306e78066b8520b/merged/usr/bin/git: input/output error

Steps to Reproduce the Problem

  1. Create a resource, resource.yaml:
 ---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: nodejs-s2i-example
spec:
  type: git
  params:
  - name: revision
    value: master
  - name: url
    value: https://github.com/chmouel/nodejs-health-check
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: image
spec:
  type: image
  params:
  - name: url
    value: localhost:5000/nodejs-health-check-tekton
  2. Create a taskrun, taskrun.yaml:
 ---
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: s2i-run
spec:
  serviceAccount: default
  taskRef:
    name: s2i
  outputs:
    resources:
      - name: image
        resourceRef:
          name: image
  inputs:
    resources:
      - name: source
        resourceRef:
          name: nodejs-s2i-example
    params:
    - name: BUILDER_IMAGE
      value: nodeshift/centos7-s2i-nodejs:latest
    - name: TLSVERIFY
      value: "false"
  3. Apply the Task, task.yaml:
 apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: s2i
spec:
  inputs:
    resources:
     - name: source
       type: git
    params:
    - name: BUILDER_IMAGE
      description: The location of the s2i builder image.
    - name: PATH_CONTEXT
      description: The location of the path to run s2i from.
      default: .
    - name: TLSVERIFY
      description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
      default: "true"

  outputs:
    resources:
    - name: image
      type: image

  steps:
  - name: generate
    image: quay.io/openshift-pipeline/s2i
    workingdir: /workspace/source
    command: ['s2i', 'build', '$(inputs.params.PATH_CONTEXT)', '$(inputs.params.BUILDER_IMAGE)', '--as-dockerfile', '/gen-source/Dockerfile.gen']
    volumeMounts:
    - name: gen-source
      mountPath: /gen-source
  - name: build
    image: quay.io/buildah/stable
    workingdir: /gen-source
    command: ['buildah', 'bud', '--tls-verify=$(inputs.params.TLSVERIFY)', '--layers', '-f', '/gen-source/Dockerfile.gen', '-t', '$(outputs.resources.image.url)', '.']
    volumeMounts:
    - name: varlibcontainers
      mountPath: /var/lib/containers
    - name: gen-source
      mountPath: /gen-source
    securityContext:
      privileged: true
  - name: push
    image: quay.io/buildah/stable
    command: ['buildah', 'push', '--tls-verify=$(inputs.params.TLSVERIFY)', '$(outputs.resources.image.url)', 'docker://$(outputs.resources.image.url)']
    volumeMounts:
    - name: varlibcontainers
      mountPath: /var/lib/containers
    securityContext:
      privileged: true

  volumes:
  - name: varlibcontainers
    emptyDir: {}
  - name: gen-source
    emptyDir: {}

  4. Watch the logs.

Additional Info

  • I have tried this with different s2i git repos, with the same failures
  • The buildah task does build properly; only s2i is not working here
  • This was tested as working on OpenShift 4

Tekton dashboard access 403 Forbidden and 404

Expected Behavior

Dashboard web page could be accessed.

Actual Behavior

403 Forbidden

Steps to Reproduce the Problem

  1. Install Tekton Pipelines and the dashboard as the documentation says
kubectl get po,ep -n tekton-pipelines
NAME                                               READY   STATUS    RESTARTS   AGE
pod/tekton-dashboard-5b5864cf9b-8fl5c              1/1     Running   0          112m
pod/tekton-pipelines-controller-55c6b5b9f6-8xzp2   1/1     Running   0          7d20h
pod/tekton-pipelines-webhook-6794d5bcc8-v4h2f      1/1     Running   0          7d20h

NAME                                    ENDPOINTS           AGE
endpoints/tekton-dashboard              10.1.94.234:9097    112m
endpoints/tekton-pipelines-controller   10.1.127.224:9090   7d21h
endpoints/tekton-pipelines-webhook      10.1.127.225:8443   7d21h
  2. Access the dashboard with curl 10.1.94.234:9097/ and see 403 Forbidden:
curl 10.1.94.234:9097/
403 Forbidden
  3. Health checks are fine: curl 10.1.94.234:9097/health and curl 10.1.94.234:9097/readiness both succeed.
  4. I don't find any explicit error in the dashboard pod log messages.

Additional Info

Here is the describe pod result

kubectl -n tekton-pipelines describe po  tekton-dashboard-5b5864cf9b-8fl5c
Name:               tekton-dashboard-5b5864cf9b-8fl5c
Namespace:          tekton-pipelines
Priority:           0
PriorityClassName:  <none>
Node:               <hidden>
Start Time:         Mon, 26 Aug 2019 20:00:04 -0700
Labels:             app=tekton-dashboard
                    pod-template-hash=5b5864cf9b
Annotations:        kubernetes.io/psp: ibm-restricted-psp
                    seccomp.security.alpha.kubernetes.io/pod: docker/default
Status:             Running
IP:                 10.1.94.234
Controlled By:      ReplicaSet/tekton-dashboard-5b5864cf9b
Containers:
  tekton-dashboard:
    Container ID:   docker://1f2acdda139e5cd2ae582c03989e9616feb8bf654563a77fd3847b6cb481b368
    Image:          gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard@sha256:1bae097b49afa77fcbf9415a2e3484a6636818abf04d05dc817dc2f176510827
    Image ID:       docker-pullable://gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard@sha256:1bae097b49afa77fcbf9415a2e3484a6636818abf04d05dc817dc2f176510827
    Port:           9097/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 26 Aug 2019 20:00:06 -0700
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:9097/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:9097/readiness delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      PORT:                          9097
      WEB_RESOURCES_DIR:             /var/run/ko/web
      PIPELINE_RUN_SERVICE_ACCOUNT:
      INSTALLED_NAMESPACE:           tekton-pipelines (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tekton-dashboard-token-j2cfh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  tekton-dashboard-token-j2cfh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tekton-dashboard-token-j2cfh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

[Request] Have a task supporting bash commands

Feature request

The goal of this request is to have a task supporting the execution of multiple bash commands, as we can do, for example, using CircleCI. See: https://github.com/snowdrop/component-operator-demo/blob/master/.circleci/config.yml#L26-L32

Tekton currently proposes this approach: https://github.com/tektoncd/pipeline/blob/master/examples/taskruns/task-env.yaml
but we can't group the commands to be executed by a container, as it is currently restricted to what the k8s container spec allows.

A workaround is perhaps to mount the bash script to be executed, OR to have a parser converting multiline bash commands into separate args.
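One possible shape of such a workaround, sketched here (the task name and commands are illustrative only), is to hand a single multi-line string to a shell so several bash commands run in one step:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: run-bash-commands
spec:
  steps:
  - name: run
    image: ubuntu
    command: ['/bin/bash', '-c']
    args:
    - |
      set -e                    # stop on the first failing command
      echo "first command"
      echo "second command"
      echo "third command"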

Problems getting the cloud native buildpacks example to run because of missing builder image

Expected Behavior

Follow the example in the buildpacks directory and get it to run.

Actual Behavior

It looks like the builder Docker image that is used does not exist anymore. It is referenced in the TaskRun as "gcr.io/cncf-buildpacks-ci/tekton-cnb-test:bionic" but it looks like it got removed. Is there a new home for it somewhere?

Steps to Reproduce the Problem

  1. docker pull gcr.io/cncf-buildpacks-ci/tekton-cnb-test:bionic

Additional Info

kn-create task not receiving the service name parameter

Expected Behavior

Successful service creation using kn task at https://github.com/tektoncd/catalog/tree/master/kn

Actual Behavior

Service creation failing with error

[create] 'service create' requires the service name given as single argument

Steps to Reproduce the Problem

  1. Follow instructions at https://github.com/tektoncd/catalog/tree/master/kn
  2. Reference a built image and a service name in taskrun.yaml
  3. Create taskrun
17:38 ➜  catalog git:(master) ✗  kubectl create -f kn/kn-create.yaml
task.tekton.dev/kn-create created

17:38 ➜  catalog git:(master) ✗  cat taskrun.yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  generateName: kn-create-
spec:
  serviceAccount: kn-deployer-account  # <-- run as the authorized SA

  taskRef:
    name: kn-create
  inputs:
    params:
    - name: service
      value: hello
    resources:
    - name: image
      resourceSpec:
        type: image
        params:
        - name: url
          value: gcr.io/knative-samples/helloworld-go

17:38 ➜  catalog git:(master) ✗  kubectl create -f taskrun.yaml
taskrun.tekton.dev/kn-create-ncwqk created

17:38 ➜  catalog git:(master) ✗  tkn taskrun list
NAME              STARTED         DURATION   STATUS             
kn-create-ncwqk   4 seconds ago   ---        Running(Pending)   

17:39 ➜  catalog git:(master) ✗  tkn taskrun logs kn-create-ncwqk
[create] 'service create' requires the service name given as single argument

container step-create has failed  : Error

Doc enhancement: where git code is stored for the s2i task

Doc enhancement

The s2i task of the catalog specifies a resource of type git without a working dir. Is it possible to document where the cloned project is stored within the task's pod? Is it under the current path of the mounted volume? Is it somewhere else? Is it by default under /workspace/, as reported in the git container log?

It would be great to document this; otherwise it is very hard to figure out how the first step of the build should be defined in order to use the files of the cloned git project.

Example as defined within the s2i task - Git Resource

    resources:
     - name: source
       type: git

and steps

  steps:
  - name: generate
    image: quay.io/openshift-pipeline/s2i
    workingdir: /workspace/source
...

Some of the Tasks still use `workingdir` instead of `workingDir` in steps

Add initial Kubernetes "deployment" tasks

Similar to #36, but for Kubernetes: we should provide a set of tasks that makes it easy to deploy Kubernetes services (services, deployments, and more…) using Tekton (a rough sketch follows below).

I need to complete this issue 👼
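As a rough sketch of what one of these tasks could look like (the task name, parameter, and image are illustrative assumptions, not an existing catalog entry), a thin wrapper around kubectl apply:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: kubectl-apply
spec:
  inputs:
    resources:
    - name: source
      type: git
    params:
    - name: MANIFEST
      description: Path (relative to the source resource) of the manifest to apply
  steps:
  - name: apply
    image: lachlanevenson/k8s-kubectl
    workingDir: /workspace/source
    command: ['kubectl', 'apply', '-f', '$(inputs.params.MANIFEST)']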

Improvements for Maven Task

Motivation

The current maven task is simple. Based on @rhuss's comment, there is scope for more features in the Task (a rough parameter sketch follows the summary below). (ref: #141 (review))

summary from the comment

To summarize, the features that I would like eventually to show up would be:

  • Support for Java 8 and Java 11 and for multiple Maven versions
  • Support for running a local mvnw which is part of a project
  • Proxy support
  • More ways to influence settings.xml, e.g. to add credentials for accessing servers (e.g. for signing and deploying artefacts)
  • And the holy grail: how to reuse local .m2 dependencies from a previous run. One idea is to create an image only with the dependencies by running mvn dependencies:go-offline to only download dependencies into the volume and reuse that. But this is really a larger story.
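A sketch of how a few of these could surface as task parameters (all names and defaults here are hypothetical):

  inputs:
    params:
    - name: MAVEN_IMAGE
      description: Maven image to run the build with, e.g. a Java 8 or Java 11 based image
      default: gcr.io/cloud-builders/mvn
    - name: GOALS
      description: Maven goal to run
      default: package
    - name: HTTP_PROXY
      description: Optional HTTP proxy to write into settings.xml
      default: ""
  steps:
  - name: build
    image: $(inputs.params.MAVEN_IMAGE)
    workingDir: /workspace/source
    command: ['mvn', '$(inputs.params.GOALS)']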

[request] please support non privileged

I'm not allowed by the cluster-admin to add the privileged SCC to a service account.
What can I do to make an s2i build like I used to do in Jenkins with an OpenShift builder?

Epic: enhance the catalog CI

This issue represents an Epic to track work around the CI for the catalog.

  • Add more simple validation (like yamllint): #101
  • Add execution tests as much as possible: #103
  • Add continuous tests (validation and execution) on a daily or weekly basis

Add Task definition without PipelineResource

In the recent PipelineResource working group, as presented during the Oct. 2 working group, and related to tektoncd/pipeline#1369:

In the most recent PipelineResource working group we landed on the idea of keeping PipelineResources as Alpha when the rest of the Tekton resources move to Beta. So with that approach we'll need to make sure that our docs correctly reflect the "right" way to work with external resources.

In this direction, we may want to provide Task definitions in the catalog that do not depend on PipelineResources, alongside ones that do. This issue is here to track work related to that.

We could have YAML files alongside the ones that have PipelineResources:

~/s/g/t/catalog ± master:328b678 λ ls -l golang
.rw-rw-r--@ 1.3k vincent 24 Sep 17:26 build.yaml
.rw-rw-r--@ 1.3k vincent 24 Sep 17:26 build.pr.yaml
.rw-rw-r--@ 1.2k vincent 24 Sep 17:26 lint.yaml
.rw-rw-r--@ 1.2k vincent 24 Sep 17:26 lint.pr.yaml
.rw-rw-r--@ 3.7k vincent 11 Sep 13:21 README.md
.rw-rw-r--@ 1.3k vincent 24 Sep 17:26 tests.yaml
.rw-rw-r--@ 1.3k vincent 24 Sep 17:26 tests.pr.yaml

kn task: generateName doesn't work on v0.5.2

Expected Behavior

TaskRun being created

Actual Behavior

TaskRun is not being created

Steps to Reproduce the Problem

  1. Create task without any modifications from https://github.com/tektoncd/catalog/tree/master/kn
  2. Create TaskRun as advised in Readme.md, but got error:
$ oc apply -f kn-run.yaml                                                  
error: error when retrieving current configuration of:
Resource: "tekton.dev/v1alpha1, Resource=taskruns", GroupVersionKind: "tekton.dev/v1alpha1, Kind=TaskRun"
Name: "", Namespace: "test-pipelines"
Object: &{map["apiVersion":"tekton.dev/v1alpha1" "kind":"TaskRun" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "generateName":"kn-create-" "namespace":"test-pipelines"] "spec":map["inputs":map["params":[map["name":"service" "value":"my-service"]] "resources":[map["name":"image" "resourceSpec":map["params":[map["name":"url" "value":"gcr.io/knative-samples/helloworld-go"]] "type":"image"]]]] "serviceAccount":"kn-deployer-account" "taskRef":map["name":"kn-create"]]]}
  3. I had to change Metadata in Task from:
metadata:
  generateName: kn-create-

to:

metadata:
  name: kn-create

Additional Info

Running on OpenShift 4.1
(Kubernetes v1.13.4+3a25c9b)
Using Openshift-Pipelines Operator v 0.5.2

Templating typo on buildkit/buildah

It seems that some YAML files were not upgraded correctly to use parentheses instead of brackets.

Expected Behavior

Run a TaskRun for buildah or buildkit task with default parameters.

Actual Behavior

The YAML files have brackets instead of parentheses and the templating does not work:

Warning  InspectFailed  29s (x4 over 45s)  kubelet, ubuntu-bionic  Failed to apply default image tag "${inputs.params.BUILDKIT_CLIENT_IMAGE}": couldn't parse image reference "${inputs.params.BUILDKIT_CLIENT_IMAGE}": invalid reference format: repository name must be lowercase
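The fix is mechanical; a sketch using the BUILDKIT_CLIENT_IMAGE reference from the warning above:

  steps:
  - name: build
    # before (old ${} templating, no longer substituted):
    #   image: ${inputs.params.BUILDKIT_CLIENT_IMAGE}
    # after (current $() templating):
    image: $(inputs.params.BUILDKIT_CLIENT_IMAGE)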

Support kn commands as args for kn task

Expected Behavior

Users should be able to use all the commands kn provides with Tekton.

This proposal is to rename the present kn-create task to simply a kn task which takes its parameters as spec.inputs.params of type: array (a sketch follows below).
With this in place, we'll have a single task for kn in the catalog, with each command group and its flag usage documented in taskrun examples.

Actual Behavior

Presently we have the kn-create task with support for only a few parameters.
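A minimal sketch of the proposed kn task (the parameter name and exact argument handling are assumptions, not a final design):

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: kn
spec:
  inputs:
    params:
    - name: ARGS
      type: array
      description: kn command group, command, and flags, e.g. ["service", "create", "hello", "--image", "..."]
  steps:
  - name: kn
    image: gcr.io/knative-releases/github.com/knative/client/cmd/kn
    args: ['$(inputs.params.ARGS)']   # the array param expands into separate arguments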

Parameterize the kn image to be used in kn-create task

Expected Behavior

I should be able to choose the kn image to be used for the kn-create task at https://github.com/tektoncd/catalog/blob/master/kn/kn-create.yaml;
specifically, I'd like to run the nightly build. The default could be the latest available kn release image.

Actual Behavior

The current kn task points to the latest kn release available (a parameter sketch follows the Additional Info below).

Additional Info

  1. Latest released upstream kn image gcr.io/knative-releases/github.com/knative/client/cmd/kn
  2. Latest nightly released upstream kn image gcr.io/knative-nightly/knative.dev/client/cmd/kn
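A sketch of such a parameter (the name KN_IMAGE is hypothetical), defaulting to the released image and overridable with the nightly one:

  inputs:
    params:
    - name: KN_IMAGE
      description: kn image to run, e.g. gcr.io/knative-nightly/knative.dev/client/cmd/kn for nightly builds
      default: gcr.io/knative-releases/github.com/knative/client/cmd/kn
  steps:
  - name: kn
    image: $(inputs.params.KN_IMAGE)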

Vulnerability scanning task

It would be good to have an example vulnerability scanning task, as this is certainly something that will be widely used. We could look at using open source scanning engines, maybe something like Anchore(?).

Taskrun tests

We currently only apply the YAMLs, making sure they apply to the cluster, for our template test.

Ideally, we want to be able to run the TaskRuns and check that they succeed; for example, for kaniko or jjb, have them build an image and make sure the build works.

As a prereq we would need an external registry to output to.

Releasing process of the catalog

With the recent backward-incompatible changes (containerTemplate -> stepTemplate, templating using $(…) instead of ${…}), the catalog is, or appears, broken for different releases of tektoncd/pipeline.

This issue is here to discuss what we can do to, hopefully, fix that.

One solution would be to branch the catalog on a given cadence with the same major/minor version as tektoncd/pipeline, making, for example, the 0.6.0 catalog branch compatible with tektoncd/pipeline 0.6.0 for sure.

The rules would be the following:

  • Once we do a tektoncd/pipeline release
    • we make sure all tasks from the catalog work with it (i.e. they validate and are able to run); initially this might be done manually, but parallel work should be done to automate those tests (see #102)
    • we create a branch with that working state, that is guaranteed to work on the said release
    • we update the previous release branch README to inform that it's not supported anymore
  • The master branch can be updated alongside changes from tektoncd/pipeline
  • If a new task is added, we can also cherry-pick it to the latest release branch (and only this one, to reduce the maintenance burden); this is up to the author of the task and/or the main repository OWNERS

Another alternative could be "Design a system for annotating Tasks with Tekton version support"

Some tasks in the catalog may only work with recent or old versions of Tekton. We should come up with a clear way to display this to catalog users.

/kind feature
/area release
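One way to express such support, sketched here, is the annotation scheme shown at the top of this README:

  annotations:
    tekton.dev/pipelines.minVersion: "0.12.1"   # minimum Tekton Pipelines version the task is known to work with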

Add yamllint to CI test

Expected Behavior

Yaml nicely linted, tidy and clean

Actual Behavior

(screenshot of yamllint warnings)

Steps to Reproduce the Problem

  1. install yamllint
  2. yamllint **/*.yaml

Additional Info

  • A Makefile with a make lint target, so people can run it locally before sending the PR, would be nice.
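A minimal .yamllint configuration that such a make lint target could point at (the rule values are only a suggestion):

extends: default
rules:
  line-length:
    max: 120
  indentation:
    spaces: 2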

Decide on support "levels"

Is everything in here at the same "level" of support? Do we need a "playground" area? Or maybe "stable" and "incubation", like helm/charts?

terraform task(s)

We should provide Tasks that are able to use terraform to deploy things through it.

/kind feature

Add Tasks to cover PipelineResource use cases

In the recent PipelineResource working group, as presented during the Oct. 2 working group, and related to tektoncd/pipeline#1369:

In the most recent PipelineResource working group we landed on the idea of keeping PipelineResources as Alpha when the rest of the Tekton resources move to Beta. So with that approach we'll need to make sure that our docs correctly reflect the "right" way to work with external resources.

In this direction, we need to provide generic tasks that cover the use cases of PipelineResources today, e.g. a GitTask, a DockerImageTask, etc. This issue is here to track work related to that (a sketch of one such task follows at the end of this issue).

Current PipelineResource to cover

  • GitResource #123
  • ClusterResource
  • PullRequestResource
  • GCSResource
  • S3resource

Additional generic tasks

  • Other VCS, like mercurial, …

Feel free to add your ideas/needs in the comments, I'll update this issue's description 😉
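As an illustration only (not an existing catalog task), a generic Git task replacing the git PipelineResource could be shaped roughly like this, using the v1beta1 forms used elsewhere on this page:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone-sketch
spec:
  params:
  - name: url
    description: Repository URL to clone
  - name: revision
    description: Branch or tag to check out
    default: master
  workspaces:
  - name: output
    description: Workspace the repository is cloned into
  steps:
  - name: clone
    image: alpine/git
    workingDir: $(workspaces.output.path)
    command: ['git']
    args: ['clone', '--branch', '$(params.revision)', '$(params.url)', '.']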

E2E Test for S2I image is not working anymore

Expected Behavior

Test pass

Actual Behavior

Error log when building S2I image
namespace/s2i-67513 created
task.tekton.dev/s2i created
pipelineresource.tekton.dev/nodejs-s2i-example created
pipelineresource.tekton.dev/image created
securitycontextconstraints.security.openshift.io/privileged added to: ["system:serviceaccount:s2i-67513:builder"]
taskrun.tekton.dev/s2i-run created
FAILED: s2i task has failed to comeback properly
--- TR Dump
apiVersion: v1
items:
- apiVersion: tekton.dev/v1alpha1
  kind: TaskRun
  metadata:
    creationTimestamp: "2019-11-18T09:01:16Z"
    generation: 1
    labels:
      tekton.dev/task: s2i
    name: s2i-run
    namespace: s2i-67513
    resourceVersion: "86713"
    selfLink: /apis/tekton.dev/v1alpha1/namespaces/s2i-67513/taskruns/s2i-run
    uid: faa4f8cb-09e1-11ea-92a1-0655fa29ff3e
  spec:
    inputs:
      params:
      - name: BUILDER_IMAGE
        value: nodeshift/centos7-s2i-nodejs:latest
      - name: TLSVERIFY
        value: "false"
      resources:
      - name: source
        resourceRef:
          name: nodejs-s2i-example
    outputs:
      resources:
      - name: image
        resourceRef:
          name: image
    podTemplate: {}
    serviceAccount: builder
    serviceAccountName: ""
    taskRef:
      kind: Task
      name: s2i
    timeout: 1h0m0s
  status:
    completionTime: "2019-11-18T09:02:23Z"
    conditions:
    - lastTransitionTime: "2019-11-18T09:02:23Z"
      message: '"step-build" exited with code 1 (image: "quay.io/buildah/stable@sha256:5d8058ea6b2310924506834755f8bded36ec0ddab8e5a014e2bfc3c782d261fb");
        for logs run: kubectl -n s2i-67513 logs s2i-run-pod-b20377 -c step-build'
      reason: Failed
      status: "False"
      type: Succeeded
    podName: s2i-run-pod-b20377
    startTime: "2019-11-18T09:01:16Z"
    steps:
    - container: step-generate
      imageID: quay.io/openshift-pipeline/s2i@sha256:59172242f74d18af4cd325ca42b333f45456119c88ccc4b637d8a48ca2ecb1cc
      name: generate
      terminated:
        containerID: cri-o://031d8e93d98f2a2ae71fd5e855479d650d76ddf95c88d62efde412e6ef6caad6
        exitCode: 0
        finishedAt: "2019-11-18T09:01:35Z"
        reason: Completed
        startedAt: "2019-11-18T09:01:29Z"
    - container: step-build
      imageID: quay.io/buildah/stable@sha256:5d8058ea6b2310924506834755f8bded36ec0ddab8e5a014e2bfc3c782d261fb
      name: build
      terminated:
        containerID: cri-o://e3038bd32bd530d8d6df33267214a1a90e57069a404864f10109d13895a6323f
        exitCode: 1
        finishedAt: "2019-11-18T09:02:22Z"
        reason: Error
        startedAt: "2019-11-18T09:01:30Z"
    - container: step-push
      imageID: quay.io/buildah/stable@sha256:5d8058ea6b2310924506834755f8bded36ec0ddab8e5a014e2bfc3c782d261fb
      name: push
      terminated:
        containerID: cri-o://f805e42ca658fc27a462f7a615a552fee8963cbdc9916214f634d674fe345337
        exitCode: 0
        finishedAt: "2019-11-18T09:02:23Z"
        reason: Completed
        startedAt: "2019-11-18T09:01:32Z"
    - container: step-git-source-nodejs-s2i-example-tgxh5
      imageID: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init@sha256:00466e8ec7d8a289140893523d33261ba5006dfb1bd9b96aee2736fc739dba5a
      name: git-source-nodejs-s2i-example-tgxh5
      terminated:
        containerID: cri-o://da1193898f06b1f0a43334aad53c57e08acec0fdc4925a8f6761d6f1086a0a47
        exitCode: 0
        finishedAt: "2019-11-18T09:01:35Z"
        message: '[{"name":"","digest":"","key":"commit","value":"76ade011c9724969e5f941ef2262b0a4861375c6","resourceRef":{}}]'
        reason: Completed
        startedAt: "2019-11-18T09:01:28Z"
    - container: step-image-digest-exporter-tbk8z
      imageID: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter@sha256:23e2de68c86de494aba98dabf02b175efc051827c52350bdd9a89f6a3d969ea9
      name: image-digest-exporter-tbk8z
      terminated:
        containerID: cri-o://cc3b6f456ebb5b077d64b49194ef7ddd6b6b3724163e5a468036ffc177182d07
        exitCode: 0
        finishedAt: "2019-11-18T09:02:23Z"
        reason: Completed
        startedAt: "2019-11-18T09:01:32Z"
    - container: step-create-dir-image-jxtrw
      imageID: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/bash@sha256:a96b5840cdeb2a6598a8566a8607b925732286a8fdf15147be3591b7c7fb41f7
      name: create-dir-image-jxtrw
      terminated:
        containerID: cri-o://ace87af0235b65a6f9c0df06f927b2a3c5bb9dd276de8100bd190a0b1dba26fb
        exitCode: 0
        finishedAt: "2019-11-18T09:01:34Z"
        reason: Completed
        startedAt: "2019-11-18T09:01:28Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
--- Container Logs
{"level":"warn","ts":1574067684.8257823,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: \"ref: refs/heads/master\" is not a valid GitHub commit ID"}
{"level":"info","ts":1574067684.8269114,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized."}
{"level":"warn","ts":1574067685.7178762,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","ts":1574067685.7195227,"logger":"fallback-logger","caller":"bash/main.go:64","msg":"Successfully executed command \"sh -c mkdir -p /workspace/source\"; output "}
{"level":"warn","ts":1574067686.6667647,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","ts":1574067686.6682932,"logger":"fallback-logger","caller":"bash/main.go:64","msg":"Successfully executed command \"sh -c mkdir -p /builder/home/image-outputs/image\"; output "}
{"level":"warn","ts":1574067694.7497332,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","ts":1574067694.7511628,"logger":"fallback-logger","caller":"bash/main.go:64","msg":"Successfully executed command \"sh -c mkdir -p /workspace/output/image\"; output "}
{"level":"warn","ts":1574067694.9579082,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: \"ref: refs/heads/master\" is not a valid GitHub commit ID"}
{"level":"info","ts":1574067695.26,"logger":"fallback-logger","caller":"git/git.go:103","msg":"Successfully cloned https://github.com/chmouel/nodejs-health-check @ master in path /workspace/source"}
Application dockerfile generated in /gen-source/Dockerfile.gen
STEP 1: FROM nodeshift/centos7-s2i-nodejs:latest
Getting image source signatures
Copying blob sha256:7a6cdfcfad372eae4d6b9740ac4dcb12593285a2f7c42a95d31969b159b7366c
Copying blob sha256:8ba884070f611d31cb2c42eddb691319dc9facf5e0ec67672fcfa135181ab3df
Copying blob sha256:ee720ba20823ba4560916cf32bc06c5e31c230cb76f641be4a5fbfc7754d7574
Copying blob sha256:ebf1fb961f612b70f32c7d9184c8b3e06b9f427dff8c77385a489a4f2fbfac12
Copying blob sha256:c3dca185eb1482e83381d99881430ebc179e91f0969bc905207cd1e2312e5b57
Copying blob sha256:497ef6ea0fac8097af3363a9b9032f0948098a9fa2b9002eb51ac65f2ed29cf6
Copying blob sha256:b0d906f6aea242af239b222a9ec8a3b722c106334f61b802f81e7bcea1912ee8
Copying blob sha256:d1a3590852524d45a3664a9b171065e240b64b8c2c92321012374538b2904992
Copying config sha256:ecb99fab7c68fc8d1c76f4cf75951e4fdeeb2cf7e65eebff3db795c20e4af7c7
Writing manifest to image destination
Storing signatures
STEP 2: LABEL "io.openshift.s2i.build.source-location"="."       "io.openshift.s2i.build.image"="nodeshift/centos7-s2i-nodejs:latest"
5a0347b9845fab1f6224270e42a142581f6f17d6cd89d792164329575245704b
STEP 3: USER root
28027f2dfcee8426b83b3a6a28949c6c61c15d88bef6140ebf5a9dddea4d9669
STEP 4: COPY upload/src /tmp/src
d4f2bed41c4e32d3858669de035a471bdcf9470b24b6ab3518952df7359956de
STEP 5: RUN chown -R 1001:0 /tmp/src
b575cb905ba6239f2d3cabc1016255a7a322b52ca31d292c1254a003891efa64
STEP 6: USER 1001
6a9222eba876e0be679e880c25ea266afb7adbbeba0580de90a573253ae8d234
STEP 7: RUN /usr/libexec/s2i/assemble
---> Installing application source
---> Building your Node application from source
Current git config
/usr/libexec/s2i/assemble: line 52: git: command not found
subprocess exited with status 127
subprocess exited with status 127
error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
time="2019-11-18T09:01:33.213001105Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.11.2 instance.id=5e5a98b8-5d03-480a-b0dd-6a40e1c51036 service=registry version=v2.7.1
time="2019-11-18T09:01:33.213115101Z" level=info msg="redis not configured" go.version=go1.11.2 instance.id=5e5a98b8-5d03-480a-b0dd-6a40e1c51036 service=registry version=v2.7.1
time="2019-11-18T09:01:33.21338972Z" level=info msg="Starting upload purge in 24m0s" go.version=go1.11.2 instance.id=5e5a98b8-5d03-480a-b0dd-6a40e1c51036 service=registry version=v2.7.1
time="2019-11-18T09:01:33.224329419Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.11.2 instance.id=5e5a98b8-5d03-480a-b0dd-6a40e1c51036 service=registry version=v2.7.1
time="2019-11-18T09:01:33.224515148Z" level=info msg="listening on [::]:5000" go.version=go1.11.2 instance.id=5e5a98b8-5d03-480a-b0dd-6a40e1c51036 service=registry version=v2.7.1

Steps to Reproduce the Problem

  1. Enable S2I in the e2e tests

Decide/Include Submission Guidelines

  • What type of tasks/pipelines are appropriate for this catalog?
  • What are the requirements to become an OWNER of this catalog?
  • What are the requirements to become an OWNER of a specific item in the catalog?

Design: Sharing Tasks and Pipelines without Copy Pasta

Expected Behavior

We should have a recommended path for how we expect people to make use of the Tasks and Pipelines that we make available in this repo, as well as how they can share Tasks and Pipelines within a company.

We want to be able to uphold these attributes:

  • Config as code - i.e. checking in Pipelines and Tasks with the code they are used for (but what about a company with many repos that wants to share these? what about our official catalog?)
  • Updates to the definitions in this repo should be consumable for users (but how do we deal with changes and compatibility issues? should we have versions?)

Actual Behavior

At the moment the best way to use the Tasks and Pipelines in this repo would be to copy them (jokingly called "copy pasta") which would effectively fork them, meaning that copies can diverge over time.


Additional Info

One idea that might be worth exploring is using image registries (#29) to store Tasks and Pipelines as versioned artifacts. See https://stevelasker.blog/2019/01/25/cloud-native-artifact-stores-evolve-from-container-registries/.

kn-create task creation fails

Expected Behavior

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/kn/kn-create.yaml

works and kn-create task is available.

Actual Behavior

The task definition https://github.com/tektoncd/catalog/blob/master/kn/kn-create.yaml is not being created (a sketch of the likely fix is at the end of this issue).

โœ—  kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/kn/kn-create.yaml

Error from server (InternalError): error when creating "https://raw.githubusercontent.com/tektoncd/catalog/master/kn/kn-create.yaml": Internal error occurred: admission webhook "webhook.tekton.dev" denied the request: mutation failed: cannot decode incoming new object: json: cannot unmarshal bool into Go struct field ParamSpec.default of type string

Steps to Reproduce the Problem

  1. Follow the instructions at https://github.com/tektoncd/catalog/blob/master/kn/README.md
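The "cannot unmarshal bool into Go struct field ParamSpec.default" message suggests an unquoted boolean default on a string parameter somewhere in kn-create.yaml; a sketch of the likely fix (the parameter shown is illustrative):

    params:
    - name: FORCE
      description: Whether to pass --force to kn service create
      # before: default: true      <- parsed as a YAML bool and rejected by the webhook
      default: "true"              # quoted so it decodes as a string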

Cannot create step with script

I'm trying to create a step in a task that executes a script:

    - name: terraform-cli
      image: quay.io/rcmendes/terraform-cli:latest
      workingdir: /workspace/source
      script: |
        #!/usr/bin/env bash
        for f in * ; do echo "export $f=$(cat $f)" >> ~/.bashrc ; done
        source ~/.bashrc
        /usr/local/bin/terraform apply -auto-approve
      volumeMounts:
        - name: provider-credentials
          mountPath: "/workspace/source"

Getting the error message:

Internal error occurred: admission webhook "webhook.tekton.dev" denied the request: mutation failed: cannot decode incoming new object: json: unknown field "script"

I'm using CodeReady Containers 4.2 and operator version 0.8.
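The unknown-field error suggests the script field is newer than the Tekton Pipelines version shipped by that operator (an assumption based on the message). A possible workaround, sketched below, is to inline the same commands through a shell invocation instead of script:

    - name: terraform-cli
      image: quay.io/rcmendes/terraform-cli:latest
      workingdir: /workspace/source
      command: ['/bin/bash', '-c']
      args:
      - |
        for f in * ; do echo "export $f=$(cat $f)" >> ~/.bashrc ; done
        source ~/.bashrc
        /usr/local/bin/terraform apply -auto-approve
      volumeMounts:
        - name: provider-credentials
          mountPath: "/workspace/source"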

Tekton jib-maven fails with strange compilation error

Expected Behavior

I'm trying to run a taskrun with jib-maven from a test public git repo at the URL below.
Maven compile works flawlessly locally, and so does running jib locally.
URL of the Git repo:
https://github.com/einnovator-appstudio/superheros2

Actual Behavior

It fails miserably with Tekton.
Maven pulls all artifacts successfully from Maven Central and another public S3 Maven repo.
But then, when it is supposed to compile, it throws a strange ''.class' expected' error, and I'm not sure what that means.
See the output from the pod log below.

Any idea what this error is about?!? How can it be fixed?!?
I'm mostly/completely clueless at this point.
This is the last critical step needed to complete a 1y+ project and I got stuck on this...
Much appreciated in advance for any tips/help/fix/etc.

Additional Info

Downloaded from central: https://repo.maven.apache.org/maven2/com/google/collections/google-collections/1.0/google-collections-1.0.jar (640 kB at 976 kB/s)
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 21 source files to /workspace/source/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /workspace/source/src/main/java/com/demo/manager/SuperheroManagerImpl.java:[63,82] '.class' expected
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 26.225 s
[INFO] Finished at: 2019-10-31T18:18:43Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project superheros: Compilation failure
[ERROR] /workspace/source/src/main/java/com/demo/manager/SuperheroManagerImpl.java:[63,82] '.class' expected
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

E:\tmp\kube>kubectl -n einnovator get taskruns
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
superheros5-jib-maven False Failed 12m 11m

this is the task run:

E:\tmp\kube>kubectl -n einnovator describe taskruns
Name: superheros5-jib-maven
Namespace: einnovator
Labels: tekton.dev/task=jib-maven
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"tekton.dev/v1alpha1","kind":"ClusterTask","metadata":{"annotations":{},"name":"jib-maven"},"spec":{"inputs":{"params":[{"de...
API Version: tekton.dev/v1alpha1
Kind: TaskRun
Metadata:
Creation Timestamp: 2019-10-31T18:18:03Z
Generation: 1
Resource Version: 59658134
Self Link: /apis/tekton.dev/v1alpha1/namespaces/einnovator/taskruns/superheros5-jib-maven
UID: c72e5f88-fc0a-11e9-9e51-ca2ea1d44072
Spec:
Inputs:
Resources:
Name: source
Resource Ref:
Name: superheros5-git
Outputs:
Resources:
Name: image
Resource Ref:
Name: superheros5-image
Pod Template:
Service Account: jib-maven
Task Ref:
Kind: ClusterTask
Name: jib-maven
Timeout: 1h0m0s
Status:
Completion Time: 2019-10-31T18:18:44Z
Conditions:
Last Transition Time: 2019-10-31T18:18:44Z
Message: "step-build-and-push" exited with code 1 (image: "docker-pullable://gcr.io/cloud-builders/mvn@sha256:dbcfcc889ff012712a6affaba023271a0656e2d57584eff012306fd72f50aa70"); for logs run: kubectl -n einnovator logs superheros5-jib-maven-pod-d9b4e6 -c step-build-and-push
Reason: Failed
Status: False
Type: Succeeded
Pod Name: superheros5-jib-maven-pod-d9b4e6
Start Time: 2019-10-31T18:18:03Z
Steps:
Container: step-build-and-push
Image ID: docker-pullable://gcr.io/cloud-builders/mvn@sha256:dbcfcc889ff012712a6affaba023271a0656e2d57584eff012306fd72f50aa70
Name: build-and-push
Terminated:
Container ID: docker://5b440c97eef6994316b5e0f5943f78a56f548fd9c5bb43a94d9d5fb66221347f
Exit Code: 1
Finished At: 2019-10-31T18:18:43Z
Reason: Error
Started At: 2019-10-31T18:18:11Z
Container: step-create-dir-image-cx9b5
Image ID: docker-pullable://gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/bash@sha256:b183305a486aafbf207cf4dd969b38645b04e6fd18470f32fc7927d0a8035581
Name: create-dir-image-cx9b5
Terminated:
Container ID: docker://7998b5092e6f2b3ccf2ae9f0c904e88e6232acb3952a5f79e2faf6f05bdc5298
Exit Code: 0
Finished At: 2019-10-31T18:18:12Z
Reason: Completed
Started At: 2019-10-31T18:18:09Z
Container: step-git-source-superheros5-git-pjh8s
Image ID: docker-pullable://gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init@sha256:2aaaecd06986c7705f68f19435b8a913ef6701ac6b961df16d1535f45503cea5
Name: git-source-superheros5-git-pjh8s
Terminated:
Container ID: docker://c8f9830be5bc7cc71b3a92961f4c22aefedd398988f1cac58ef2d59af3ca2119
Exit Code: 0
Finished At: 2019-10-31T18:18:14Z
Reason: Completed
Started At: 2019-10-31T18:18:10Z
Container: step-image-digest-exporter-kw6wm
Image ID: docker-pullable://gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter@sha256:b86cfca770c60d6965dced6c36745f04cdfea2680b517fb067828e68292f13be
Name: image-digest-exporter-kw6wm
Terminated:
Container ID: docker://429d2177c8f8e9ae7f06a17cf6b6bf5c9f4f0b0e35f286a0df63e3c5c0d1b26f
Exit Code: 0
Finished At: 2019-10-31T18:18:44Z
Reason: Completed
Started At: 2019-10-31T18:18:11Z
Events:
Type Reason Age From Message


Warning Failed 16m taskrun-controller "step-build-and-push" exited with code 1 (image: "docker-pullable://gcr.io/cloud-builders/mvn@sha256:dbcfcc889ff012712a6affaba023271a0656e2d57584eff012306fd72f50aa70"); for logs run: kubectl
-n einnovator logs superheros5-jib-maven-pod-d9b4e6 -c step-build-and-push

These are the git and image resources:

E:\tmp\kube>kubectl -n einnovator describe pipelineresources
Name: superheros5-git
Namespace: einnovator
Labels:
Annotations:
API Version: tekton.dev/v1alpha1
Kind: PipelineResource
Metadata:
Creation Timestamp: 2019-10-31T13:48:26Z
Generation: 1
Resource Version: 59598664
Self Link: /apis/tekton.dev/v1alpha1/namespaces/einnovator/pipelineresources/superheros5-git
UID: 1d1b56c9-fbe5-11e9-9e51-ca2ea1d44072
Spec:
Params:
Name: url
Value: https://github.com/einnovator-appstudio/superheros2.git
Name: revision
Value: master
Type: git
Events:

Name: superheros5-image
Namespace: einnovator
Labels:
Annotations:
API Version: tekton.dev/v1alpha1
Kind: PipelineResource
Metadata:
Creation Timestamp: 2019-10-31T13:48:26Z
Generation: 2
Resource Version: 59616949
Self Link: /apis/tekton.dev/v1alpha1/namespaces/einnovator/pipelineresources/superheros5-image
UID: 1d0ad1d7-fbe5-11e9-9e51-ca2ea1d44072
Spec:
Params:
Name: url
Value: einnovator/superheros5
Type: image
Events:

Support kn flags for kn-create task

Expected Behavior

User should be able to specify the flags available for kn create in kn-create task.

Actual Behavior

Currently only optional flag supported is --force.

/assign

kn task does not work on v0.5.2

Expected Behavior

Knative service being deployed

Actual Behavior

Not able to deploy a simple helloworld Knative Service; it seems like there's a problem with passing arguments to the kn service image. I used the example Task and TaskRun from https://github.com/tektoncd/catalog/tree/master/kn without any modifications, but got this error:

$ oc logs kn-create-pod-6a16e9    
$ 'service create' requires the service name given as single argument

This part is probably not correctly parsed by tekton:
https://github.com/tektoncd/catalog/blob/master/kn/kn-create.yaml#L20

I have tried to run the kn image directly with the same arguments and it works as expected (a possible fix is sketched at the end of this issue).

Additional Info

Running on OpenShift 4.1
(Kubernetes v1.13.4+3a25c9b)
Using Openshift-Pipelines Operator v 0.5.2
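A sketch of the likely cause and fix, assuming the task currently passes the whole command line as one argument (the "before" line is hypothetical, not the file's actual content); each token needs to be its own list entry so kn receives the service name as a separate argument:

  steps:
  - name: kn
    image: gcr.io/knative-releases/github.com/knative/client/cmd/kn
    # before (everything arrives in a single argv entry):
    #   args: ['service create $(inputs.params.service) --image $(inputs.resources.image.url)']
    # after (each token is a separate argument):
    args: ['service', 'create', '$(inputs.params.service)', '--image', '$(inputs.resources.image.url)']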

Buildpacks: bypass detect step

Expected Behavior

There should be a way for a user to set the desired buildpack and skip the detection work. I wouldn't expect the Tekton step as a whole to be skipped; however, I would expect it to take my input and make the detection a no-op.

The pack CLI has the option to do this when building images:

pack build --buildpack

Actual Behavior

Steps to Reproduce the Problem

Additional Info

Tag Images for openshift-client Task

Currently, the openshift-client task uses a tag of latest for its image. It would be a better practice to tag the image based on the version of Tekton pipelines the task would work with and explicitly call out what the version tag is for the task as opposed to using the latest tag.

There should be documentation on the available tags and how those tags correspond to certain versions of Tekton pipelines/the task. The previous versions of the task should also be archived and use the appropriate image tag.
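A sketch of what pinning could look like (the image name and tag are assumptions for illustration, not the task's actual values):

  steps:
  - name: oc
    # before: image: quay.io/openshift/origin-cli:latest
    image: quay.io/openshift/origin-cli:v4.2   # documented as matching a given Tekton Pipelines / operator version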

This issue is in response to #74 where the task was updated for a new version of the OpenShift Tekton operator that is actually not available yet. I would like to ask that this task not be updated until the new version of an operator is available in the future.

Idea: TekDoc tool, similar to godoc

Desired state

godoc hosts documentation for Go projects. It finds these projects (scrapes them? not sure!) and looks for docstrings in the required format and uses that to generate (and I guess maintain? it must poll for changes) documentation; for example, here are some godoc-generated docs for a function in Tekton Pipelines: https://godoc.org/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1#ApplyArrayReplacements, specifically the ApplyArrayReplacements function.

What we would like is to be able to run and host a similar tool, maybe called TekDoc, that would show us, for all the Pipelines and Tasks in tektoncd/catalog:

  • The name of the Pipeline / Task
  • Link to the source
  • A description
  • All parameters with descriptions + defaults
  • All input/output resources with descriptions + defaults

More requirements

(note: we don't need to meet all of these right away, we can start with something super simple and iterate!)

  • Other people have their own catalogs too, e.g. https://github.com/garethr/snyk-tekton. We should make it so this tool can also display those docs, both by allowing our tool to point at multiple catalogs and by making it so that people can run the tool themselves
  • One day we might have more types in the catalog, like PipelineResources and tekton triggers types so we can't just assume Pipelines and Tasks only.
  • We are making changes to Pipelines and Tasks at the moment, so we need to make sure that it won't be too hard to change the tool to deal with those changes
  • We should allow for the tool to display versions of the Pipelines and Tasks

Current state

  1. At the moment all usage info is in READMEs that are submitted with the Pipelines and Tasks, e.g. this one https://github.com/tektoncd/catalog/tree/master/conftest#conftest
  2. We have description fields for params but not for Pipelines and Tasks themselves, and not for the PipelineResources that they use

Possible plan of attack

I tried to break it up into a few milestones, but some of it could happen in parallel!

[ Milestone 1: add the necessary info to the types ]

  1. Add description field to Pipeline type
  2. Add description field to Task type
  3. Add description field to input & output resources in Task (tektoncd/pipeline#1389)
  4. Add description field to resources in Pipeline
  5. Update Tasks in the catalog to use these fields (might have to wait for a Pipelines release to merge the change, our versioning policy for the catalog vs. pipelines is a bit adhoc at the moment, + @vdemeester )

Suggestion: assume markdown format for contents of descriptions (a sketch of where these fields would live follows below)
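For reference, a sketch of where those description fields would sit on a Task, matching the spec.description example at the top of this page (the Pipeline case is analogous):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example
spec:
  description: |-
    One-line summary that TekDoc could display,
    followed by a longer markdown description.
  params:
  - name: package
    description: The package to build
  workspaces:
  - name: source
    description: Where the source to build is checked out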

[ Milestone 2: create service and/or library that scrapes catalog info]

For this bit it would probably be good to understand how godoc scrapes & polls!

  1. The tool/service/library should be able to be given a github repo, find the Tasks and Pipelines
  2. The tool/service/library should from those Tasks and Pipelines be able to extract their docs

The idea is that the webserver in milestone 3 will use this, but we can develop and test it without needing to have the whole webserver completed.

[ Milestone 3: create webserver ]

We can probably just pick any ol' webserver, but we'd want to make sure we can deploy and run it in a container. It would be great if the webserver we chose could automatically emit metrics to let us know if it's up and running and if there are any errors.

  1. The webserver should invoke the service, then display all the Tasks + Pipelines + their docs

Additional info

Original description

We could automatically generate interface documentation for the Tasks/Pipelines in this repository from the input and output parameters, and display this usage documentation nicely!
