openfaas / openfaas-cloud

The Multi-user OpenFaaS Platform

Home Page: https://docs.openfaas.com/openfaas-cloud/intro/

License: MIT License

Go 74.91% Makefile 0.26% Shell 0.41% HTML 1.23% Dockerfile 1.30% JavaScript 20.55% CSS 0.71% Mustache 0.62%
cicd multi-user faas openfaas kubernetes gitops swarm dashboard cloud

openfaas-cloud's Introduction

OpenFaaS Cloud

The Multi-user OpenFaaS Platform

Introduction

Build Status

OpenFaaS Cloud introduces an automated build and management system for your Serverless functions with native integrations into your source-control management system whether that is GitHub or GitLab.

With OpenFaaS Cloud, functions are managed by typing git push, which reduces the tooling and learning curve required for your team to operate functions. As soon as OpenFaaS Cloud receives a push event from git, it runs a build workflow which clones your repo, builds a Docker image, pushes it to a registry and then deploys your functions to your cluster. Each user can access and monitor their functions through a personal dashboard.

Features:

  • Portable - self-host on any cloud
  • Multi-user - use your GitHub/GitLab identity to log into your personal dashboard
  • Automates CI/CD triggered by git push (also known as GitOps)
  • Onboard new git repos with a single click by adding the GitHub App or a repository tag in GitLab
  • Immediate feedback on your personal dashboard and through GitHub Checks or GitLab Statuses
  • Sub-domain per user or organization with HTTPS
  • Runtime-logs for your functions
  • Fast, non-root image builds using buildkit from Docker

The dashboard page for a user:

Dashboard

The details page for a function:

Details page

Overview

Conceptual diagram

The high-level workflow for the OpenFaaS Cloud CI/CD pipeline.

KubeCon video

KubeCon: OpenFaaS Cloud + Linkerd: A Secure, Multi-Tenant Serverless Platform - Charles Pretzer & Alex Ellis

Blog posts

Documentation

Roadmap & Features

See the Roadmap & Features

Get started

You can set up and host your own OpenFaaS Cloud, or pay an expert to do that for you. OpenFaaS Ltd also offers custom development if you have new requirements.

Option 1: Expert installation

OpenFaaS Ltd provides expert installation and support for OpenFaaS Cloud. You can bring your own infrastructure, or we can install and configure OpenFaaS Cloud for your accounts on a managed cloud.

Get started today

Option 2: Automated deployment (self-hosted)

You can set up your own OpenFaaS Cloud with authentication and wildcard certificates in around 100 seconds using the ofc-bootstrap tool.

This method assumes that you are using Kubernetes, have a public IP available or are using the inlets-operator, and have a domain name. Some basic knowledge of how to set up a GitHub App and a GitHub OAuth App, along with a DNS service account on DigitalOcean, Google Cloud DNS, Cloudflare or AWS Route53, is also required.

A developer install is also available via this blog post, which disables OAuth and TLS. You will still need an IP address and domain name.

Deploy with: ofc-bootstrap

Getting help

For help join #openfaas-cloud on the OpenFaaS Slack workspace. If you need commercial support, contact [email protected]

openfaas-cloud's People

Contributors

acornies, akihirosuda, alexellis, bartsmykla, burtonr, dependabot[bot], doowb, ericstoekl, heyts, ivanayov, jagreehal, jaigouk, johnmccabe, kenfdev, kvuchkov, martindekov, matipan, naisanzaa, qolzam, rgee0, ryanbascom, rzr, s8sg, scottg489, viveksyngh, waterdrips, wilsonianb, zeerorg, zimme, zwovo


openfaas-cloud's Issues

Use short SHA for image version

At the moment the image versions are of the form 0.9.1-4456fab11f107a35ee60c4cd3cb6e1e1281cdf95
e.g.

"image": "ofcommunity/kenfdev-faas-contributors-page-contributors-page:0.9.1-4456fab11f107a35ee60c4cd3cb6e1e1281cdf95"

Since the version tag is expected to be short, and a short SHA is enough to reference a commit, do you think it would be better to truncate it?
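
Truncation could be as simple as slicing the first seven characters, the same short form git itself displays. A minimal sketch in Go (shortSHA is a hypothetical helper, not part of the current codebase):

```go
package main

import "fmt"

// shortSHA truncates a full 40-character commit SHA to the conventional
// 7-character short form that git uses for display.
func shortSHA(sha string) string {
	if len(sha) > 7 {
		return sha[:7]
	}
	return sha
}

func main() {
	sha := "4456fab11f107a35ee60c4cd3cb6e1e1281cdf95"
	fmt.Printf("0.9.1-%s\n", shortSHA(sha)) // prints 0.9.1-4456fab
}
```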

Gitlab payload handling convention

In the GitHub case, the PushEvent struct looks like this:

type PushEvent struct {
	Ref        string `json:"ref"`
	Repository struct {
		Name     string `json:"name"`
		FullName string `json:"full_name"`
		CloneURL string `json:"clone_url"`
		Owner    struct {
			Login string `json:"login"`
			Email string `json:"email"`
		} `json:"owner"`
	}
	AfterCommitID string `json:"after"`
	Installation  struct {
		ID int `json:"id"`
	}
}

and in GitLab, to handle the same payload, the struct looks something like this:

type PushEvent struct {
	Ref           string `json:"ref"`
	UserUsername  string `json:"user_username"`
	UserEmail     string `json:"user_email"`
	Project       Project // equivalent to Repository in the GitHub context
	Repository    Repository
	AfterCommitID string `json:"after"`
}

type Project struct {
	Name              string `json:"name"`                // the repo name
	PathWithNamespace string `json:"path_with_namespace"` // the repo full name
}

type Repository struct {
	GitHttpUrl string `json:"git_http_url"` // equivalent to CloneURL
}

I was wondering: should we stick to the same naming convention for every git hosting service (like the one provided in github-push), or should we follow the naming convention provided by each service?

Also, it seems the secret is sent in plain text rather than hashed. Is this a problem?

Note: this was tested with a normal webhook payload, not with a GitLab equivalent of the GitHub application payload, since I don't know whether one exists.

Tracking feature: Test mixed-case usernames

We need to test mixed-case usernames end-to-end including:

  • of-router (using wildcard DNS)

No instructions exist for Swarm, just a YAML file for Kubernetes, so update the docs/README.md too

  • UI dashboard
  • GitHub push events

You'll need a second GitHub account with a MixedCase username (don't rename your existing one) and then add it to the official CUSTOMERS list or turn validate_customers off.

If required, please propose a fix for anything left over from the work @burtonr did.

handle installation and remove event in git-event

Use the installation and removal events from the GitHub App to deploy and remove functions from OpenFaaS Cloud.

Current behaviour:
Currently only commit status events are handled.

Expected behaviour:
We can handle installation and removal events in git-event:

Deploy an application using openfaas-cloud when an installation event is received
Remove an application using openfaas-cloud when a removal event is received

Validate hmac trying to find secret in wrong path

While trying to run pipeline-log I noticed it uses the secret value rather than the name as the path, i.e. /var/openfaas/secrets/<some_sha_here> rather than /var/openfaas/secrets/thesecretname

The problematic code is in pipeline-log/handler.go in

hmacErr := sdk.ValidHMACWithSecretKey(&req, payloadSecret, os.Getenv("Http_X_Cloud_Signature"))

We pass payloadSecret, which is the content of the payload-secret, instead of the secret name, so the lookup hits a non-existent path as described above:

func ValidHMAC(payload *[]byte, secretKey string, digest string) error {
	key, err := ReadSecret(secretKey)
...

Expected Behaviour

Read from secret name rather than secret value

Current Behaviour

Get secret value as path

Possible Solution

I can think of two scenarios depending on the context in which the code is written

Option 1: remove

key, err := ReadSecret(secretKey)
	if err != nil {
		return fmt.Errorf("unable to load HMAC symmetric key, %s", err.Error())
	}

from function

func ValidHMAC(payload *[]byte, secretKey string, digest string) error 

in sdk/hmac.go

Option 2: pass "payload-secret" instead of payloadSecret to

hmacErr := sdk.ValidHMACWithSecretKey(&req, payloadSecret, os.Getenv("Http_X_Cloud_Signature"))

in pipeline-log/handler.go

Steps to Reproduce (for bugs)

  1. Deploy latest pipeline-log in openfaas-cloud instance
  2. Create payload-secret
  3. Trigger build
  4. Check logs

Context

Not able to run logs in dashboard which prevented me from testing related task

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
    18.06.0-ce
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Swarm
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux
  • Link to your project or a code example to reproduce issue:
    N.A.
  • Please also follow the troubleshooting guide and paste in any other diagnostic information you have:

Add Travis CI / releases for core functions / containers

Add CI via travis for:

  • of-builder daemon
  • router daemon
  • buildkit gRPC daemon
  • the functions that drive OpenFaaS-Cloud in the stack.yml file

These should all be pushed to the Docker Hub under the openfaas name on successful build.

Examples are available in the faas-cli project and faas project on how to build on tag and how to push up to the hub. The final step will be for me to add the password and enable the travis build.

Update GitHub statuses API for commits

We can update GitHub statuses API for commits and give feedback on whether a push / deployment succeeded or failed.

  • This needs an extended OAuth permission for the GitHub App to write commit statuses - everyone will have to accept the new permissions to update commit statuses on their repos

  • The Golang GitHub library used with Derek should be able to do this - vendor it if it's not already there

  • We have to authenticate to the GitHub API to write a status - use the Derek code that

  • We have to use a cert/pem from the GitHub App - use a secret to store it on Docker/Kubernetes and update the README with instructions on how to use it

  • We need a description for events to go back to GitHub - keep this brief right now just pass or fail will do

  • We have to extend the JSON "push message" structs to accept the installation ID - this is needed so we can authenticate later into an "installation", i.e. a GitHub repo. See the Derek package and vendor if needed.

I would suggest writing the status back via the final buildshiprun function (it will need the pem/cert secret attaching to it)

Show OpenFaaS Cloud building a Gist

Since Gists are just GitHub repos, we should support building a function from a Gist.

This is useful because it could become part of a playground or UI experiment where we can type code into a web editor which saves into a Gist and triggers a full build pipeline.

Gists don't appear to support folders, but I may be wrong.

That may mean setting the handler to "./"

Initial research

First off - try to create a Gist with a stack.yml and a handler of your choice, then clone it and see if you can build it locally with faas-cli.

If you can, then see if you can fire a webhook at OpenFaaS Cloud with HMAC turned off and fake the JSON data to point at the Gist rather than at your GitHub repo - then see what is missing and what changes we need.

Alex

Unable to read tests for func GetImageName

Hi @viveksyngh

Please could you refactor the names of the test cases in buildshiprun?

Currently they are named like "test case 1" and "test case 2" - these should be representative of what we're testing in each scenario.

Thanks,

Alex

Auto-detect buildshiprun limits based upon swarm/k8s

If KUBERNETES_SERVICE_PORT is present as an env-var then limits need to be formatted for Kubernetes, i.e. 30Mi; if not, then for Swarm, i.e. 30m.

Unfortunately sometimes a user (like myself) will have the wrong limits file set i.e. limits-swarm.yml when they needed limits-k8s.yml and the errors from either back-end are very vague.

We can autodetect this and then set the values as appropriate and prevent the UX issue from coming up again.

Optimisation (status): Use token forwarding and status grouping to avoid extra calls

This suggestion treats git-status as an OpenFaaS function for status updates.

Group multiple status updates in an array

When multiple statuses need to be updated at a time, rather than making a separate call for each status, they can be grouped and forwarded at once.

Possible Solution

1. Accept multiple commit statuses in git-status
git-status can be changed to accept more than one status in a single execution

2. SDK to combine multiple statuses into a single call
This can be abstracted by the SDK's Status type
Example:

status := BuildStatus()

// add statuses
status.Add(context1, status1)
status.Add(context2, status2)

// the same status object can store the token from an earlier call
status.Report()

In the above example status1 and status2 are forwarded to git-status in a single call.

3. Avoid statuses that will be overridden
GitHub maps statuses by context, overriding the previous status for the same context. This means that for a call sequence like the one below, status1 will be overridden by status2 on GitHub:

status := BuildStatus()

// add statuses for the same context
status.Add(context1, status1)
status.Add(context1, status2)

// the same status object can store the token from an earlier call
status.Report()

To avoid the extra call, grouped statuses can be mapped against their context, which skips statuses that are going to be overridden, i.e.

CommitStatuses map[string]CommitStatus // context -> CommitStatus

Avoid Authentication by Forwarding the Token

Authentication makes an extra HTTP call for each status update.
As the functions are stateless, it is not recommended (and may not be possible) to store the token in a function.

Possible Solution

1. Return the token on success
The auth token can be returned from the git-status function on authentication so that it can be reused

2. Status SDK
The status SDK is meant to abstract the call to git-status.
It can also abstract the token-reuse mechanism for multiple status updates within a single execution of a function
example:

status := BuildStatus()
for {
      // add status
      status.Add(..)
      // the same status object can store the token from an earlier call
      status.Report()
}

3. Forward token to FaaS function

The token can be forwarded from one function to another so that it can be reused.
This can be achieved by setting it either in an HTTP header or as part of the data,
and it can be passed while creating the status:

status := BuildStatus(os.Getenv("Http_Token"))

Possible Definition for Status sdk

const (
        Success = "success"
        Failure = "failure"
        Pending = "pending"
)

// context constant
const (
 //       Build  = "%s_build"
        Deploy = "%s_deploy"  // function1_deploy 
        Stack  = "stack-deploy"
)

type CommitStatus struct {
        Status      string `json:"status"`
        Description string `json:"description"`
        Context     string `json:"context"`
}

// Status to post github-status to git-status function
type Status struct {
        CommitStatuses map[string]CommitStatus `json:"commit-statuses"`
        EventInfo      Event                   `json:"event"`
        AuthToken      string                  `json:"auth-token"`
}

// BuildStatus builds a status
func BuildStatus(event *Event, token string) *Status

// UnmarshalStatus unmarshals a status from a JSON byte array
func UnmarshalStatus(data []byte) (*Status, error)

// AddStatus adds a status to the status object
func (status *Status) AddStatus(state string, desc string, context string)

// Marshal marshals a status object
func (status *Status) Marshal() ([]byte, error)

// Report sends a status update to the git-status function and stores the token in the object
func (status *Status) Report(gateway string) (string, error)

Update status if stack.yml not present in repo

Scenario

User generates a function named fn1 with fn1.yml and doesn't rename it to stack.yml. OpenFaaS Cloud always looks for stack.yml in the root of the repo, so this won't build and the user will never find out the reason.

Error

This generates a build error, so we should inform them about it via the Statuses API:

# docker service logs git-tar  -f
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 2018/04/29 07:45:36 Writing lock-file to: /tmp/.lock
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 2018/04/29 07:45:48 Forking fprocess.
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 2018/04/29 07:45:48 Query  
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 2018/04/29 07:45:48 Path  /
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 2018/04/29 07:45:48 Success=false, Error=exit status 255
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 2018/04/29 07:45:48 Out=2018/04/29 07:45:48 parseYAML  open /tmp/go-fns-tester/stack.yml: no such file or directory
git-tar.1.3y8ges2e9qma@openfaas-cloud    | 

Report status to Github via Checks API

The GitHub Checks API offers a great in-app experience for CI/CD by showing rich information about different build steps along with some actions (e.g. re-run). It is a superset of the Status API.

This would make openfaas-cloud feel even more natively integrated with GitHub by showing most of the information directly in the app. Moreover, there will be a separate tab on pull requests to support more advanced development flows.

Constraints

  • It is important to keep the interoperability of openfaas-cloud, i.e. consider future integrations with other SCM providers that do not offer the same style of API.
  • Build logs will still need to be persisted (e.g. in S3). The Checks API should be considered an output and not a source of build information.

Apply Kubernetes NetworkPolicy to prevent accessing other functions

We should apply a Kubernetes NetworkPolicy to prevent one user's functions accessing another's functions directly.

I.e. for an initial version the function should leave via egress and enter the public/private network IP of the gateway again if it wants to call another function.

Example:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-gateway
  namespace: openfaas
spec:
  podSelector:
    matchLabels:
      app: gateway
  ingress:
  - from:
    - podSelector:
        matchLabels:
          namespace: openfaas-fn

Block access to the gateway app via pods in the openfaas-fn (functions) namespace.

git-tar deploy does not return error on non-success http status code

When looking through the logs in git-tar, I found that my function deployment returned a 500 error; however, there is no error checking on the deploy status:

git-tar.1.8qapnsejzztn@playground    | Deploying service - spacex-info
git-tar.1.8qapnsejzztn@playground    | 2018/08/09 02:11:25 v1.53cf333d41169248e81195a8811570348ca18329
git-tar.1.8qapnsejzztn@playground    | Service deployed  spacex-info 500 Internal Server Error burtonr
git-tar.1.8qapnsejzztn@playground    | 2018/08/09 02:11:57 Functions owned by burtonr:
git-tar.1.8qapnsejzztn@playground    |  []Garbage collection ran for burtonr/test-function - 0 functions deleted.
git-tar.1.8qapnsejzztn@playground    | 
git-tar.1.8qapnsejzztn@playground    | 2018/08/09 02:11:58 Duration: 33.223507 seconds

The c.Do(httpReq) call does not return an error, so this goes unnoticed: the request itself was successful, but the response status is not a success code.

See here: https://github.com/openfaas/openfaas-cloud/blob/master/git-tar/function/ops.go#L252

Enable HMAC for auth on the import-secrets function

We should authenticate calls to the import-secrets function using HMAC.

In the same way that we have a symmetric secret which the gh-push function validates, we should have a shared secret that git-tar uses to sign the import secret request. In this way import-secrets will only import a secret signed by the owner of the new symmetric key.

Tasks

  • Create new named secret in openfaas-fn namespace (document this)

Note: I don't know whether we should re-use the existing value in github_webhook_secret or whether we should create a new named secret. This shouldn't really be an environmental variable anyway.

  • Bind secret to git-tar and import-secrets functions
  • Update DEV instructions in /docs/
  • Test and validate that secrets are only imported when signed

Use the existing library for HMAC signing / verification.
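
The mechanism can be sketched with the standard library rather than the project's sdk package (function names here are illustrative): git-tar signs the body with the shared key, and import-secrets recomputes and compares the digest in constant time.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// signPayload computes an HMAC-SHA1 digest of the request body using the
// shared symmetric key - this is what git-tar would attach to the request.
func signPayload(key, payload []byte) string {
	mac := hmac.New(sha1.New, key)
	mac.Write(payload)
	return "sha1=" + hex.EncodeToString(mac.Sum(nil))
}

// validPayload is what import-secrets would run: recompute the digest and
// compare it in constant time.
func validPayload(key, payload []byte, digest string) bool {
	return hmac.Equal([]byte(signPayload(key, payload)), []byte(digest))
}

func main() {
	key := []byte("shared-symmetric-key")
	body := []byte(`{"name":"my-secret"}`)
	digest := signPayload(key, body)
	fmt.Println(validPayload(key, body, digest))
}
```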

Research: How do we make logs accessible to users?

Feature

There are a couple of scenarios (at least) where we need to make logs available to users:

  • Docker build fails for reason X

These logs should be made available; the GitHub status API only provides a short summary

  • git-tar operation fails due to any number of reasons

If the message is short we can store this in the GitHub status

Related but out of scope in this issue:

  • Exposing function logs to users

Constraints

Can we do this without installing, maintaining and continually migrating a stateful database?

Potential implementations

  • GitHub Gists in user account

  • Separate branch in source repository to hold logs

  • Separate GitHub repo that we or the user owns

  • Use GitHub issues raised in the user's repo

  • One GitHub issue and many comments

Or something else completely.

The key point is that we get feedback to the user: as a next step, rather than having no logs at all, how can we get to some logs?

Proposal: Should build and release be split?

openfaas-cloud provides a pipeline that clones, builds, pushes and deploys functions. Deployment is part of the pipeline, although in some scenarios it might be justified to use a separate system (e.g. a release manager) to handle the deployment.

Expected Behaviour

By default, deployment is handled by the of-deployer function. If necessary, one may replace of-deployer with a tailor-made pipeline and handle the deploy call differently.

Current Behaviour

buildshiprun is responsible for:
build: build the function image
push: make the Docker image available in a registry
deploy: deploy the function in OpenFaaS

Currently it is not possible to integrate a 3rd-party release manager.

Possible Solution

Deployment can be handled by a separate function, say of-deployer.
buildshiprun would then become buildship, responsible for building and shipping the code (pushing it to a registry).

of-deployer takes the deployment responsibilities out of buildshiprun. The code stays the same, with the call abstracted as:

PUT /deploy
{
  "service" : "",
  "image" : "",
  "env-vars": "",
  "labels": {},
  "secrets": []
}

Context

A 3rd party release manager may be important in a production environment to perform different operations such as:

  • deploy
  • staging
  • perform rollback
  • push configuration
  • apply policies
  • configuration versioning
    etc.

A continuous-deployment pipeline may differ based on needs and should not be in scope for openfaas-cloud.

Limit # of functions per repo

An exhaustion attack on OpenFaaS Cloud could involve adding hundreds of functions to a stack file to exhaust resources and cause noise.

If these functions were deployed and created Pods then they would also cause the cluster to fill up.

We should look at adding a config limit for how many functions are allowed per repo/stack file.

I am thinking of a default limit of 5
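
A sketch of such a guard in Go (validateFunctionCount is hypothetical; the default of 5 comes from the proposal above):

```go
package main

import "fmt"

// validateFunctionCount rejects a parsed stack file that declares more
// functions than the configured limit, guarding against exhaustion attacks.
func validateFunctionCount(count, limit int) error {
	if count > limit {
		return fmt.Errorf("stack file declares %d functions, the limit is %d", count, limit)
	}
	return nil
}

func main() {
	const defaultLimit = 5
	fmt.Println(validateFunctionCount(100, defaultLimit))
}
```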

Update garbage-collect to recover old user functions

Right now garbage-collect just echoes what it would remove, but it should actually remove orphaned functions where someone has removed a function from a stack or renamed it within a repo.

Additional unit tests should also be added, which may need some abstractions to be added for the HTTP calls / queries to the list-functions function and the delete functions API.

Investigate non-root buildkit implementations

There is some work to patch runc to do non-root builds via buildkit

This currently looks too manual and bespoke to be useful - i.e. patching kernel modules/runc and other components, but would be an ideal fit for OpenFaaS Cloud builds when ready.

functions don't honour the gateway_url value provided by env

The environment file gateway_config.yml provides a configurable gateway_url. However, in many functions the gateway_url value from the environment is not being used; instead it has been hardcoded. This eventually fails the function call when deployed manually with a different URL.

Proposal: Case insensitive username handling

We should handle usernames from the CUSTOMERS file as case-insensitive:

Affix
affix
aFfIx

should all match to affix

gh-push handler.go line 62

for _, customer := range customers {
	if customer == pushEvent.Repository.Owner.Login {
		found = true
	}
}

should be

for _, customer := range customers {
	if strings.EqualFold(customer, pushEvent.Repository.Owner.Login) {
		found = true
	}
}

Re-write dashboard in React.js

Expected Behaviour

In order to scale, and to support the roadmap (#145) item for OAuth 2.0 with GitHub, we need a single-page app with its own API shim. @kenfdev is writing a replacement for the overview and pipeline-log functions which we can extend with functionality and later use to integrate with OAuth 2.0.

Current Behaviour

We use two separate functions with Go templating.

sdk.ReportStatus treated as singleton

This relates to the sdk.ReportStatus function from @s8sg, which is being treated as a singleton (an anti-pattern) and is preventing unit testing. There may be other similar pieces of code, but we should start with big pieces like this.

Expected Behaviour

Code using sdk.ReportStatus should be unit-testable by swapping the reporter out. This should be changed into a factory pattern.

Current Behaviour

Whatever code has used sdk.ReportStatus cannot be tested.

Possible Solution

Something like this:

// For use in code when testing (needs to be injected)
statusReporter := sdk.CreateStatusReporter(sdk.ReportType.Void)

// For regular use for deployment in live system
statusReporter := sdk.CreateStatusReporter(sdk.ReportType.Live)

Steps to Reproduce (for bugs)

  1. Try to write a unit test for "github-push"'s handle() function
  2. Get errors

Context

Unit test coverage is key to stability and preventing regression.

Signing releases?

It'd be a good idea to implement some sort of signing, even if it's sending a separate list of files related to the built function. I understand you could defer some of this to encrypted transport to ensure nothing has been tampered with in transit, but perhaps there should be a way to communicate to the server "hey, here's a new valid hash for a release".

garbage-collect must use internal trust

Expected Behaviour

garbage-collect should validate all requests using the internal trust secret

Current Behaviour

It doesn't, which means that if you have the URL you may be able to delete functions.

Possible Solution

@martindekov to investigate

Provide configuration for Kubernetes

The public and development version of OpenFaaS Cloud is currently hosted on Docker Swarm, but we should add a configuration for Kubernetes so that we can take advantage of network-policy and other features.

  • Kubernetes-style YAML files in a yaml folder (not helm)
  • Update any references to "gateway:8080" to an environmental variable i.e. gateway_url or something else if it's already been used in stack.yml

Notes:

  • Buildkit used in of-builder is a privileged container, so set that securityContext too

Support for private repos

First of all, thx for your amazing work! This is exactly what I've been looking for.
I have this dream of a netlify-like workflow where you just push function code a repo and everything else just works :D
What's the roadmap for openfaas-cloud?
It would be awesome If it could support private github repos, eg automatically add a deploy key to the repo and use that in the clone step, are there any plans in that direction?
I would be glad to help implementing!

Buildkit and Python template - missing pip

Using Buildkit and the Python template resulted in a missing pip module. I tried this with the requests module.

Built locally with Docker, it works fine.

To reproduce, deploy on Kubernetes and trigger a build for a Python 2/3 project which consumes and uses a pip module. You'll see the error at runtime.

Add NetworkPolicy to separate user functions

We need a NetworkPolicy which does the following:

  • Prevent functions calling system services - block (openfaas-fn) -> (openfaas)
  • Prevent functions calling other functions - block (openfaas-fn) -> (openfaas-fn)
  • Allow OpenFaaS Cloud "system" functions to do the above. Use a label to decide which functions belong to this group.

In addition we will have to prevent users from adding this "system" or "openfaas" label to their functions in the buildshiprun function.

Testing:

Show above conditions are satisfied in a deployment on Kubernetes.

Changes should be made to / added to the Kubernetes YAML files in ./yaml/ and documented in the docs/README.md file.

Update instructions to use github-event, not github-push

Expected Behaviour

github-event adds support for removing functions when an installation removed event is received.

It should be what we point webhooks at, rather than github-push.

  • Set its name as: system-github-event (don't rename the handler)
  • Set the name of system-github-push to github-push (only in the YAML)

Update the instructions in docs/README.md

Test end-to-end on your own cluster by using the router and the address system.domain.com/github-event.

Provide feedback via outgoing webhook

An outgoing webhook URL configured via an environment variable should allow events to be posted to a remote server for status.

  • Whenever a GH-push is received
  • Whenever buildshiprun executes
  • Whenever of-builder pushes a new image successfully or if it fails and how long it took

The payload format should be in JSON and include context:

i.e.

{
 "owner": "alexellis",
 "repo": "kubecon-tester",
 "source": "gh-push"
}

The URL to receive webhooks should be set in the stack.yml file - probably in the same file that has the gateway configuration URL.

A specific event would extend the JSON above with specific information like a "message" field:

{
 "owner": "alexellis",
 "repo": "kubecon-tester",
 "source": "gh-push",
 "message": "owner not in CUSTOMERS file"
}

We may also want some additional fields to represent whether this is a success / warning message such as in the example above.

Since all of the OpenFaaS Cloud functions are written in Go, we should have a common package with the struct representing the JSON event and an HTTP client/wrapper for easy auditing of events.

of-builder yaml is invalid

When deploying the latest of-builder you get an error:

error: error validating "yaml/of-builder-dep.yml": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]):unknown field "environment" in io.k8s.api.core.v1.Container;

Problem: The container section has environment: instead of env:
Solution: Change environment: to env:

Issue: case-sensitive usernames are not supported by the router

The HTTPS router that gives a sub-domain to each user will not support a mixed-case username like BurtonR - burtonr would be fine though.

An Internet address is only case sensitive for everything after the domain name.

https://www.computerhope.com/issues/ch000709.htm

It would be simple to have the functions deployed with lowercase metadata for usernames - this would resolve the problem. The Git-Owner label should be updated in the buildshiprun function.
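The fix could be as small as normalizing the label before deployment - a sketch, assuming the labels are carried as a map (the helper name is hypothetical):

```go
package main

import "strings"

// lowercaseOwner normalizes the Git-Owner label so that sub-domain
// look-ups in the router become case-insensitive.
func lowercaseOwner(labels map[string]string) map[string]string {
	if owner, ok := labels["Git-Owner"]; ok {
		labels["Git-Owner"] = strings.ToLower(owner)
	}
	return labels
}
```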

Move to buildkit gRPC builder

It looks like the binary used to do one-time builds has been removed from upstream buildkit. I've opened an issue with upstream to see what happened. moby/buildkit#340

The short answer is that we can use the gRPC builder daemon which is now available and update of-builder to make use of that.

John has started work in #16

Deploying functions in Kubernetes can't include {"Limits":{"Memory":"30Mi"}}

I tried to deploy a function by calling the POST method on system/functions:

request body:

{
	"Service":"lmxia-hello-python",
	"Image":"172.16.0.125:31115/lmxia/python-ci:latest-1b0af9f9c1166302e60c6ec4fe22b3ac89645c18",
	"Network":"func_functions",
	"Labels":{
		"Git-Cloud":"1",
		"Git-DeployTime":"2018-04-16",
		"Git-Owner":"lmxia",
		"Git-Repo":"faas-ci"
	},
	"Limits":{
		"Memory":"30Mi"
	}
}

A 502 status code was returned. The request succeeded after removing:

"Limits":{
	"Memory":"30Mi"
}

buildshiprun doesn't handle build failure

Overview

When of-builder returns an error, buildshiprun doesn't handle it properly.

In the log below, of-builder failed with the output failed to solve: rpc error: code = Unknown desc = exit code 2, yet buildshiprun used that string as the image name:

buildshiprun.1.l81fyjakncif@linuxkit-025000000001    | 2018/07/02 02:05:13 buildshiprun: image 'failed to solve: rpc error: code = Unknown desc = exit code 2'
buildshiprun.1.l81fyjakncif@linuxkit-025000000001    | 2018/07/02 02:05:13 Deploying failed to solve: rpc error: code = Unknown desc = exit code 2 as s8sg-regex_go
buildshiprun.1.l81fyjakncif@linuxkit-025000000001    | functionExists status: 200 OK
buildshiprun.1.l81fyjakncif@linuxkit-025000000001    | Deploying: failed to solve: rpc error: code = Unknown desc = exit code 2 as s8sg-regex_go
buildshiprun.1.l81fyjakncif@linuxkit-025000000001    | Deploy status: 400 Bad Request

Details

Problem 1
The image name here is invalid. In the code (buildshiprun/handler.go) we are just checking:

        if strings.Contains(imageName, "exit status") == true {
                msg := "Unable to build image, check builder logs"
                status.AddStatus(sdk.Failure, msg, sdk.FunctionContext(event.Service))
                reportStatus(status)
                log.Fatal(msg)
                auditEvent.Message = fmt.Sprintf("buildshiprun failure: %s", msg)
                sdk.PostAudit(auditEvent)
                return msg
        }

Which is not enough.

Problem 2
We need to check the response status in deployFunction(). The log clearly shows a 400 Bad Request, but the code doesn't check the response status, so no error is reported.

     res, err = c.Do(httpReq)

        if err != nil {
                fmt.Println(err)
                return "", err
        }

        defer res.Body.Close()
        fmt.Println("Deploy status: " + res.Status)

Possible Solution

Problem 1
exit status can be changed to exit code, although using a regex to validate the image name would be a more robust solution than string matching. The pattern below comes from the thread notaryproject/notary#105:

var imageValidator = regexp.MustCompile("(?:[a-zA-Z0-9]+(?:[._-][a-z0-9]+)*(?::[0-9]+)?/[a-zA-Z0-9]+(?:[._-][a-z0-9]+)*/)*[a-zA-Z0-9]+(?:[._-][a-z0-9]+)*")

func ValidImage(image string) bool {
        // the image is valid only when the pattern matches the whole string
        match := imageValidator.FindString(image)
        return len(match) == len(image)
}
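As a sanity check, the proposed pattern rejects the builder's error output while accepting a well-formed registry/owner/name reference (the validator is repeated here so the example is self-contained; note that tagged references such as name:tag would need the pattern extending):

```go
package main

import "regexp"

var imageValidator = regexp.MustCompile("(?:[a-zA-Z0-9]+(?:[._-][a-z0-9]+)*(?::[0-9]+)?/[a-zA-Z0-9]+(?:[._-][a-z0-9]+)*/)*[a-zA-Z0-9]+(?:[._-][a-z0-9]+)*")

// ValidImage returns true only when the pattern matches the whole string.
func ValidImage(image string) bool {
	match := imageValidator.FindString(image)
	return len(match) == len(image)
}
```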

Problem 2
We need to check in deployFunction() that the response status code indicates success:

if res.StatusCode < 200 || res.StatusCode > 299 {
        return "", fmt.Errorf("response code %d", res.StatusCode)
}

Router: Add annotation to stop Weave cloud from scraping

Weave cloud is creating erroneous requests to the router by attempting to scrape metrics from it.

See the faas-netes code and copy the approach into the Deployment YAML to prevent scraping using the required annotation.

Alex

Use separate HMAC key for inter-function trust

Expected Behaviour

Functions should use a separate named symmetric key for HMAC / trust i.e. function-hmac-key

Current Behaviour

We re-use the key shared with GitHub github-webhook-secret

Possible Solution

  • Add step for Swarm + Kubernetes to README to generate the new secret
  • Replace the secret in the list of secrets for all functions apart from github-push, which needs both
  • Update functions to sign payloads with the new key
  • Test end-to-end
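Signing and validating with the dedicated key could be sketched like this (HMAC-SHA1 with a sha1= prefix mirrors GitHub's X-Hub-Signature scheme; the Sign/Validate names are illustrative):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
)

// Sign computes the hex HMAC-SHA1 digest of payload under key,
// prefixed with "sha1=" in the style of GitHub's X-Hub-Signature header.
func Sign(payload, key []byte) string {
	mac := hmac.New(sha1.New, key)
	mac.Write(payload)
	return "sha1=" + hex.EncodeToString(mac.Sum(nil))
}

// Validate checks a received signature using a constant-time comparison.
func Validate(payload, key []byte, signature string) bool {
	return hmac.Equal([]byte(Sign(payload, key)), []byte(signature))
}
```

Each function would read function-hmac-key from its secret store and call Validate on every inbound payload.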

Feature: Secret management for user functions

Some functions may need to access private API tokens or secret keys.

Initial idea for validation using a separate HTTPS API for supplying the secret:

  • Functions are namespaced with the prefix of the GitHub username
  • JSON push events are validated as being from GitHub - therefore we could use the username as a prefix for named secrets
  • We'd need an API where the user can pre-share their secret with us ahead of time over HTTPS (encrypted connection) to be stored directly in the Kubernetes/Swarm secret store
  • This API could be a function, but it would need to have OAuth verified by a GitHub identity

Once stored, the user would reference the name of the secret in their stack.yml file and the buildshiprun function would bind it at deployment time. Secret names are prefixed with the username, so there is no possibility of attaching to another user's secret.
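The prefixing rule buildshiprun would apply at deployment time might look like this sketch (userSecretName is a hypothetical helper):

```go
package main

import "strings"

// userSecretName maps a secret name from stack.yml onto the namespaced
// name stored in the cluster, so a user can only ever bind secrets
// under their own prefix.
func userSecretName(owner, secret string) string {
	prefix := strings.ToLower(owner) + "-"
	if strings.HasPrefix(secret, prefix) {
		return secret
	}
	return prefix + secret
}
```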

Suggestion (status): Use a separate GitHub status context for each function

stack.yml can have multiple functions. If any one of the functions has a failure status, the overall status should be failure.

GitHub ensures this: a status allows a user-defined context to be specified:

{
  "state": "success",
  "target_url": "status page",
  "description": "The function has deployed as: user-function_name",
  "context": "function_name"
}

In GitHub, if one context's status is failure, the combined status is failure:

Additionally, a combined state is returned. The state is one of:

failure if any of the contexts report as error or failure
pending if there are no statuses or a context is pending
success if the latest status for all contexts is success

See the GitHub Statuses API documentation for more details.

Possible Implementation:
We can report a function-specific status by setting the context to the function name.

Once the YAML is parsed in git-tar, the function's status needs to be shown as pending. Once buildshiprun has successfully deployed the function, it needs to be shown as success. If any error occurs, the status should be failure.

i.e. for a stack which includes function-1 and function-2, the status will be updated as:

// For pending
Status: pending 
function-1: pending
function-2: pending

// For one pending
Status: pending 
function-1: success
function-2: pending

// For success
Status: success
function-1: success
function-2: success

// For failure
Status: failure
function-1: failure
function-2: success

Dealing with errors before the YAML is parsed:

An error might happen before stack.yml is even parsed, for example when stack.yml is not present, so we need a separate context that exists before parsing. We can have a context named Stack Deploy:

  • Stack Deploy is shown as pending as soon as a push event is accepted in gh-push
  • Stack Deploy becomes success once the YAML is parsed
  • Stack Deploy becomes failure in case of any error in between

i.e. for a stack which includes function-1 and function-2, the status will be updated as:

Stack Deploy (Pending)

[screenshot]

Stack Deploy (Failed)

[screenshot]

Stack Deploy Passed, Function (Pending)

[screenshot]

Stack Deploy Passed, Function (Failed)

[screenshot]

Stack Deploy Passed, One Function (Pending), One (Success)

[screenshot]

Stack Deploy Passed, One Function (Pending), One (Failed)

[screenshot]

Stack Deploy Passed, Both Functions Passed

[screenshot]

Add node8 express template to the `git-tar` function

It would be nice to be able to serve SPAs (single-page applications) in OpenFaaS Cloud. To do that, the new of-watchdog would be a requirement, because the function would need to serve the assets (index.html, *.js, *.css, etc.) and set a proper Content-Type header depending on the file type.
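The template itself is Node.js, where Express handles this when serving static files, but the Content-Type logic the function needs can be illustrated with Go's mime package (contentTypeFor is a hypothetical helper):

```go
package main

import (
	"mime"
	"path/filepath"
)

// contentTypeFor returns the Content-Type header for a static asset
// based on its file extension, falling back to a binary default.
func contentTypeFor(path string) string {
	if t := mime.TypeByExtension(filepath.Ext(path)); t != "" {
		return t
	}
	return "application/octet-stream"
}
```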

Expected Behaviour

The node8-express-template should work with OpenFaaS Cloud, too.

Current Behaviour

Only the default templates are supported.

Possible Solution

Pull the templates before the shrinkwrap in the git-tar function.

Context

I'm trying to serve a React SPA with OpenFaaS Cloud.

Audit meaningful data from functions

We should audit meaningful data from functions such as:

  • git-tar - what size is the tar-ball?
  • buildshiprun - what size is the Docker image pushed to the registry?

These are just ideas - if we have the data, let's audit it so we can learn how to operate the system.

@ericstoekl has a PR, #38 - Eric, please can you break it up into smaller pieces and raise them as PRs?

Thanks,

Alex
