
Comments (7)

duglin commented on August 23, 2024

Number 1 was something I was wondering about too... the current design is ok for local dev, or even the first stage of a pipeline, but since I wouldn't want the build step to be defined/re-run as my app moves through testing, staging and prod, I would end up having to define a new service.yaml without the build step for those 2nd+ stages - thus requiring me to keep the two in sync, which is error prone and cumbersome.

Number 2 is interesting, and I would include in there the notion of a "build" doing more than just creating an image. As you mentioned, generating swagger docs (or docs in general), release zip files, or other artifacts could be needed. The question I wonder about is whether people would more naturally want to define these things in a "workflow/build" system or in something simpler like a Makefile. If the steps are as simple as "docker build ... && docker push ..." then I think it's easier to claim we can do it in Knative "build", but when it's more complex and people need multiple, non-trivial steps, I think they're going to feel constrained without something more like a Makefile or bash scripts. I also believe people will want to version these build steps along with their source code - meaning they'll want them placed in the same git repo as the source. Keeping the build steps in Knative means keeping two systems in sync.

I do agree that build and serving might be better if they are more loosely coupled and I wonder if eventing should be the bridge between them rather than any hard-coded link. For example, what if the workflow was something more like:
1. User pushes changes to GitHub.
2. Something detects the change, does a build, and pushes artifacts somewhere - one of those artifacts will be a new image in a registry.
3. The push of the image to the registry sends out an event.
4. Knative is subscribed to receive those events, and a KService is set up to redeploy when a new image becomes available.

Whether step 2 is a GitHub function, Travis, or Kn Build is an impl choice.

How the Kn Service gets linked to this event could be left to the user - e.g. setting up the DockerHub event source which calls a "rebuild function" that pokes the Kn Service to do a redeploy. Or we could make it easier with something like:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
spec:
  revisions:
  - image: duglin/helloworld:latest
    imageWatch: dockerhub://duglin/helloworld:latest

or to support a rolling upgrade:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
spec:
  revisions:
  - image: duglin/helloworld:v1
    imageWatch: dockerhub://duglin/helloworld:v1
  - image: duglin/helloworld:latest
    imageWatch: dockerhub://duglin/helloworld:latest
    traffic: 10%

where "imageWatch" sets up all of the eventing infrastructure for them.


sixolet commented on August 23, 2024

@duglin I think that putting an imageWatch in the Serving area gets us into a much harder dependency on Eventing than I was expecting, and putting a revisions list in there, each entry with its own spec, has all the problems of not having a history, of "what do you edit", etc.

Linking the two with eventing, especially eventing mediated through the image repo, seems like it's introducing a lot of indirection for little benefit if we want to use this as a PaaS, since we're left with no way other than an event for user intent to travel from build/pipeline to serving.


sixolet commented on August 23, 2024

@evankanderson

Point 1 above is really interesting and a challenge.

Point 2 seems easier to work with, and I'm not sure it's easier if we invert the flow: either way, there's some API point of contact that needs to be made; it's just a question of whose API expanded to include some part of the other group's API.

Point 3, I'm not sure. Does your answer change if we can't push code and configuration at the same time?

I have another idea about how to do the orchestration between source and serving that I'm working out on paper. Stay tuned.


duglin commented on August 23, 2024

imageWatch would be optional and just a convenience mechanism so the user doesn't need to define an EventSource, a Subscription and a ServicePoker function themselves. If they want some other way to kick off a redeploy they're free to do so. But given that I think an update to an image in a registry is going to be one of the more popular triggers people will want, I thought we should make it part of the core and easy.

What I like about using eventing for this is that we're then no longer dependent on build at all - meaning people can continue to use Kn Build, Jenkins, Travis or anything else. But in the end, IMO, what serving should be watching for is a new image to appear - which may, or may not, be related to some "source"/github repo. That's an impl detail/choice of how the user chooses to define that part of the pipeline. We're just concerned with how to serve/host the output of it.

Having said that, we could choose to optionally keep a reference to the source/build in the Service - I wouldn't be against it. But even then, I'm not sure watching for an event would be a bad idea, since the build process could result in no new image being uploaded despite there being a modification to the build section of the Service.


sixolet commented on August 23, 2024

Ok, the thing that having Serving orchestrate buys us is a single place to put configuration and code information, down to what source code we want to be serving. We can still kick off arbitrary CI flows from Serving, but it gives us an API where you describe what you want running and how, all in one place, which is valuable.

I keep trying to figure out, for example, what the flow is when you want to change an environment variable from a build-triggers-serving standpoint, and it doesn't seem quite right.

There are ways to have Serving orchestrate that are more in line with the Pipelines project, for example by borrowing some of their API.
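
For concreteness, something in the spirit of the below is what I mean by "all in one place" - every field under spec here is made up, not the actual v1alpha1 API, just to show code and configuration living in a single resource:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
spec:
  source:                    # hypothetical: the source code we want to be serving
    git:
      url: https://github.com/duglin/helloworld
      revision: master
  build:                     # hypothetical: how to turn that source into an image
    template: kaniko
  run:                       # hypothetical: runtime configuration for the revision
    env:
    - name: TARGET
      value: world

Changing an env var and changing the source revision are then edits to the same object, so the "what do you edit" question has one answer.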


mattmoor commented on August 23, 2024

> I keep trying to figure out, for example, what the flow is when you want to change an environment variable from a build-triggers-serving standpoint, and it doesn't seem quite right.

Looking at this in purely declarative terms, just because the user has only patched an environment variable doesn't mean they don't want a Build. This is a declaration of intent, and the user's intention is that the given source is deployed with the new environment variable.

Eliding the build is really an optimization that assumes (with debatable correctness) that we're further optimizing a hermetic / reproducible build that would be fully cached if executed.

Honestly, I feel like:

  1. Dispatching a Pipeline that culminates in a Service mutation on source changes, and
  2. Directly altering the Service on env-only changes

... are more correct ways of expressing these two intents, respectively.


It occurs to me writing this that 1. above has potential races like I mentioned in #29 if multiple folks trigger pipelines via independent CLI invocations. But with the orientation here, we could parameterize the Pipeline with the current metadata.generation (or even metadata.resourceVersion) of the Service at creation to allow optimistic concurrency to keep the final patches from conflicting (only one could succeed!).
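
Roughly (the Service spec shape below is approximate and the values are illustrative): the Pipeline is handed the Service's resourceVersion when it is dispatched, and its final step applies a merge patch that includes it as a precondition, so a stale pipeline loses the race instead of clobbering a newer change:

# Merge-patch body for the final "deploy" step of the Pipeline.
# Including resourceVersion makes it a precondition: the apiserver rejects
# the patch with a conflict if the Service has changed since dispatch, so
# only one of several concurrent pipelines can succeed.
metadata:
  resourceVersion: "123456"                # captured at Pipeline dispatch time (example value)
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: duglin/helloworld@sha256:deadbeef   # image produced by this run (illustrative digest)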


sixolet commented on August 23, 2024

Directly altering the service on env-only changes loses you any benefit you were getting from your CD system.

Dispatching a Pipeline that culminates in a Service mutation on source changes ends up working, but only for source changes that don't need to be accompanied by simultaneous env changes.

