openfaas / python-flask-template

HTTP and Flask-based OpenFaaS templates for Python 3

License: MIT License


python-flask-template's Introduction

OpenFaaS Python Flask Templates

Python and Flask templates for OpenFaaS that make use of the incubator project of-watchdog.

Templates available in this repository:

  • python3-http

  • python3-http-debian (ideal for compiled dependencies like numpy, pandas, pillow)

  • python3-flask

  • python3-flask-debian (ideal for compiled dependencies like numpy, pandas, pillow)

  • python27-flask (Python 2.7 is deprecated)

Notes:

  • To build and deploy a function for an ARM computer, you'll need to use faas-cli publish --platforms

SSH authentication for private Git repositories and Pip modules

If you need to install Pip modules from private Git repositories, we provide an alternative set of templates for OpenFaaS Pro customers:

Picking your template

The templates named python*-flask* are designed as a drop-in replacement for the classic python3 template, but using the more efficient of-watchdog. The move to use flask as an underlying framework allows for greater control over the HTTP request and response.

Those templates named python*-http* are designed to offer full control over the HTTP request and response. Flask is used as an underlying framework.

The waitress HTTP server is used along with Flask for all templates.

Are you referencing pip modules which require a native build toolchain? It's advisable to use the template with a -debian suffix in this case. The Debian images are larger, however they are usually more efficient for use with modules like numpy and pandas.

Python Versioning

We try to keep the default Python 3 version up-to-date, however, you can specify a specific python version using the PYTHON_VERSION build argument.

The current stable version of Python is 3.11; you might want to test the next pre-release using:

functions:
  pgfn:
    lang: python3-http-debian
    handler: ./pgfn
    image: pgfn:latest
    build_args:
      - PYTHON_VERSION=3.12

Or pin to an older version while you wait for your dependencies to be updated:

functions:
  pgfn:
    lang: python3-http-debian
    handler: ./pgfn
    image: pgfn:latest
    build_args:
      - PYTHON_VERSION=3.10

This can also be set using the --build-arg flag.

faas-cli build --build-arg PYTHON_VERSION=3.12

Downloading the templates

Using template pull with the repository's URL:

faas-cli template pull https://github.com/openfaas-incubator/python-flask-template

Using the template store:

# Either command downloads both templates
faas-cli template store pull python3-http

# Or
faas-cli template store pull python3-flask

Using your stack.yml file:

configuration:
    templates:
        - name: python3-http

Using the python3-http templates

Create a new function

export OPENFAAS_PREFIX=alexellis2
export FN="tester"
faas-cli new --lang python3-http $FN

Build, push, and deploy

faas-cli up -f $FN.yml

Test the new function

echo -n content | faas-cli invoke $FN

Event and Context Data

The function handler is passed two arguments, event and context.

event contains data about the request, including:

  • body
  • headers
  • method
  • query
  • path

context contains basic information about the function, including:

  • hostname
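
For reference, a minimal sketch of a handler that echoes those fields back (the attribute names follow the lists above; the response shape is just one possible choice):

def handle(event, context):
    return {
        "statusCode": 200,
        "body": {
            "method": event.method,
            "path": event.path,
            "query": dict(event.query),
            "hostname": context.hostname
        }
    }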

Response Bodies

By default, the template will automatically attempt to set the correct Content-Type header for you based on the type of response.

For example, returning a dict object type will automatically attach the header Content-Type: application/json and returning a string type will automatically attach Content-Type: text/html; charset=utf-8 for you.

Example usage

Return a JSON body with a Content-Type

def handle(event, context):
    return {
        "statusCode": 200,
        "body": {"message": "Hello from OpenFaaS!"},
        "headers": {
            "Content-Type": "application/json"
        }
    }

Custom status codes and response bodies

Successful response status code and JSON response body

def handle(event, context):
    return {
        "statusCode": 200,
        "body": {
            "key": "value"
        }
    }

Successful response status code and string response body

def handle(event, context):
    return {
        "statusCode": 201,
        "body": "Object successfully created"
    }

Failure response status code and JSON error message

def handle(event, context):
    return {
        "statusCode": 400,
        "body": {
            "error": "Bad request"
        }
    }

Custom Response Headers

Setting custom response headers

def handle(event, context):
    return {
        "statusCode": 200,
        "body": {
            "key": "value"
        },
        "headers": {
            "Location": "https://www.example.com/"
        }
    }

Accessing Event Data

Accessing request body

def handle(event, context):
    return {
        "statusCode": 200,
        "body": "You said: " + str(event.body)
    }

Accessing request method

def handle(event, context):
    if event.method == 'GET':
        return {
            "statusCode": 200,
            "body": "GET request"
        }
    else:
        return {
            "statusCode": 405,
            "body": "Method not allowed"
        }

Accessing request query string arguments

def handle(event, context):
    return {
        "statusCode": 200,
        "body": {
            "name": event.query['name']
        }
    }

Accessing request headers

def handle(event, context):
    return {
        "statusCode": 200,
        "body": {
            "content-type-received": event.headers.get('Content-Type')
        }
    }

Example with PostgreSQL:

stack.yml

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  pgfn:
    lang: python3-http-debian
    handler: ./pgfn
    image: pgfn:latest
    build_options:
      - libpq

Alternatively you can specify ADDITIONAL_PACKAGE in the build_args section for the function.

    build_args:
      ADDITIONAL_PACKAGE: "libpq-dev gcc python3-dev"

requirements.txt

psycopg2==2.9.3

Create a database and table:

CREATE DATABASE main;

\c main;

CREATE TABLE users (
    name TEXT
);

-- Insert the original PostgreSQL author's name into the test table:

INSERT INTO users (name) VALUES ('Michael Stonebraker');

handler.py:

import psycopg2

def handle(event, context):

    try:
        # in production, read the password from an OpenFaaS secret instead of hard-coding it (see note below)
        conn = psycopg2.connect("dbname='main' user='postgres' port=5432 host='192.168.1.35' password='passwd'")
    except Exception as e:
        print("DB error {}".format(e))
        return {
            "statusCode": 500,
            "body": str(e)
        }

    cur = conn.cursor()
    cur.execute("""SELECT * from users;""")
    rows = cur.fetchall()

    cur.close()
    conn.close()

    return {
        "statusCode": 200,
        "body": rows
    }

Always read credentials such as the database password from an OpenFaaS secret mounted at /var/openfaas/secrets/secret-name. Using environment variables for secrets is an anti-pattern, because their values are visible via the OpenFaaS API.
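
As an illustration, a minimal sketch of reading such a secret from within a handler; the secret name pg-passwd is a hypothetical placeholder for whatever you created with faas-cli secret create:

def read_secret(name):
    # OpenFaaS mounts each secret as a file under /var/openfaas/secrets/
    with open("/var/openfaas/secrets/" + name) as f:
        return f.read().strip()

# e.g. inside handle(): password = read_secret("pg-passwd")  # "pg-passwd" is a hypothetical secret name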

Using the python3-flask template

Create a new function

export OPENFAAS_PREFIX=alexellis2
export FN="tester"
faas-cli new --lang python3-flask $FN

Build, push, and deploy

faas-cli up -f $FN.yml

Test the new function

echo -n content | faas-cli invoke $FN

Example of returning a string

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """

    return "Hi" + str(req)

Example of returning a custom HTTP code

def handle(req):
    return "request accepted", 201

Example of returning a custom HTTP code and content-type

def handle(req):
    return "request accepted", 201, {"Content-Type":"binary/octet-stream"}

Example of accepting raw bytes in the request

Update stack.yml:

    environment:
      RAW_BODY: True

Note: the value for RAW_BODY is case-sensitive.

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """

    # req is bytes, so an input of "hello" returns i.e. b'hello'
    return str(req)

Testing

The python3 templates will run pytest using tox during the faas-cli build. There are several options for controlling this.
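
Before looking at those options, here is a minimal sketch of what such a test might look like, assuming a handler_test.py that sits next to handler.py and the string-returning python3-flask handler from the example above (the assertion is illustrative only):

from .handler import handle

def test_handle():
    # the python3-flask handler receives the raw request body as a string
    assert handle("OpenFaaS").startswith("Hi")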

Disabling testing

The template exposes the build arg TEST_ENABLED. You can completely disable testing during build by passing the following flag to the CLI

--build-arg 'TEST_ENABLED=false'

You can also set it permanently in your stack.yaml, see the YAML reference in the docs.

Changing the test configuration

The template creates a default tox.ini file; modifying this file gives you complete control over what happens during the test step. You can change the test command, for example switching to nose. See the tox docs for more details and examples.

Changing the test command

If you don't want to use tox at all, you can also change the test command that is used. The template exposes the build arg TEST_COMMAND. You can override the test command during build by passing the following flag to the CLI

--build-arg 'TEST_COMMAND=bash test.sh'

You can set the command to any other executable in the image, or to any script you have added to your function.

You can also set it permanently in your stack.yaml, see the YAML reference in the docs.

python-flask-template's People

Contributors

alexellis, burtonr, dsbibby, kturcios, lucasroesler, martindekov, mehyedes, nitishkumar71, omerzamir, rdimitrov, rgschmitz1, saikiran2603, ssullivan, telackey, viveksyngh, welteki


python-flask-template's Issues

Support request for long running functions

Hello everyone, I have a problem at present. My function is a tool with time-consuming operations; each run takes about 5 or 10 minutes. The following error occurs after I make two consecutive requests at the same time, and the container process exits. This error occurs whether the invocation is synchronous or asynchronous. (There is no problem when executing a single request.) I don't know why. Please give me some advice. Thank you very much.

2022/09/16 07:14:56 Upstream HTTP request error: Post "http://127.0.0.1:5000/": EOF
2022/09/16 07:14:56 Upstream HTTP request error: Post "http://127.0.0.1:5000/": EOF
2022/09/16 07:14:56 Forked function has terminated: exit status 114

Expected Behaviour

Both requests can be executed successfully at the same time (although I don't know whether that is possible)

Current Behaviour

2022/09/16 07:14:56 Upstream HTTP request error: Post "http://127.0.0.1:5000/": EOF
2022/09/16 07:14:56 Upstream HTTP request error: Post "http://127.0.0.1:5000/": EOF
2022/09/16 07:14:56 Forked function has terminated: exit status 114

Are you a GitHub Sponsor (Yes/No?)

Check at: https://github.com/sponsors/openfaas

  • Yes
  • [x] No

Which Solution Do You Recommend?

Please point out what I need to pay attention to or tell me what I need to configure

Context

Each request for a function is ultimately a time-consuming operation. I need two requests to be successfully executed at the same time without the above errors

2022/09/16 07:14:56 stderr: ASSERTION VIOLATION
2022/09/16 07:14:56 stderr: File: ../src/ast/rewriter/rewriter_def.h
2022/09/16 07:14:56 stderr: Line: 226
2022/09/16 07:14:56 stderr: UNEXPECTED CODE WAS REACHED.
2022/09/16 07:14:56 stderr: Z3 4.11.2.0
2022/09/16 07:14:56 stderr: Please file an issue with this message and more detail about how you encountered it at https://github.com/Z3Prover/z3/issues/new
2022/09/16 07:14:56 Upstream HTTP request error: Post "http://127.0.0.1:5000/": EOF
2022/09/16 07:14:56 Upstream HTTP request error: Post "http://127.0.0.1:5000/": EOF
2022/09/16 07:14:56 Forked function has terminated: exit status 114
2022/09/16 07:14:56 stdout: }

Your Environment

faas-cli version 0.14.2
docker version 20.10.14
kubectl version v1.23.1
ubuntu 20.04
template python3-flask-debian

python3-*-debian syntax error during TEST_ENABLED check

My actions before raising this issue

faas build fails with tox error #44 seems like it may be related

Expected Behaviour

Dockerfiles using python:3.7-slim-buster base image should skip testing when TEST_ENABLED is specified as false.

Current Behaviour

The TEST_ENABLED check is throwing a syntax error due to syntax inconsistencies between the alpine (busybox) and debian (dash) implementations of the default shell. This prevents bypassing the tests with the --build-arg "TEST_ENABLED=false" flag as suggested in the documentation.

Step 27/35 : RUN if [ "$TEST_ENABLED" == "false" ]; then     echo "skipping tests";    else     eval "$TEST_COMMAND";     fi
 ---> Running in 05bb4ca63b94
/bin/sh: 1: [: false: unexpected operator

Possible Solution

I suggest replacing the double equals (==) in the Dockerfile templates with a single equals (=) to be more POSIX friendly; this worked for me in both the alpine and debian versions of the python image.

RUN if [ "$TEST_ENABLED" = "false" ]; then \

Steps to Reproduce (for bugs)

  1. faas-cli template pull https://github.com/openfaas-incubator/python-flask-template
  2. faas-cli new --lang python3-http-debian hello-python
  3. faas-cli build -f hello-python.yml

Context

I'm a college student working on a capstone project involving deploying open-source cloud native applications in Kubernetes and doing a comparison against vendor specific solutions.

I'm using of-watchdog templates as a basis for my own custom functions and would love to contribute in any way I can.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|

CLI:
 commit:  b1c09c0243f69990b6c81a17d7337f0fd23e7542
 version: 0.14.2
  • Docker version docker version (e.g. Docker 17.0.05 ):
Docker version 20.10.14, build a224086
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):

NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Code example or link to GitHub repo or gist to reproduce problem:
    shared output snippet in current behavior above, let me know if a more complete log is required

  • Other diagnostic information / logs from troubleshooting guide

Next steps

You may join Slack for community support.

faas-cli publish -f my-function.yml --platforms linux/arm/v7 fails on pytest

My actions before raising this issue

I'm sorry that my Faas Friday contribution is an issue, but at least it has a workaround! 🤙

The error indicates pytest is failing. I am sharing detailed logs below after doing faas-cli template pull and creating a new function to do this test.

Expected Behaviour

This should pass for a newly generated function:

faas-cli publish -f my-function.yml --platforms linux/arm/v7

Current Behaviour

The publish command fails with the following errors from Docker:

Errors received during build:
- [testbot] received non-zero exit code from build, error: #1 [internal] load build definition from Dockerfile
#1 sha256:075a3d35e0cfd6f8ab29191ba59f72275e0de836100fbf47a4479d478589a10f
#1 transferring dockerfile: 1.15kB done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:c5c02775cf56ecbb91a4b522c245b084c179fca7d60edf301d56a840c8fd79e6
#2 transferring context: 2B done
#2 DONE 0.0s

#5 [auth] armhf/python:pull token for registry-1.docker.io
#5 sha256:3dc59a684d8d4154fb7662095cb4fb49b3254796e28ce4af271db93499b4fa7f
#5 DONE 0.0s

#6 [auth] openfaas/of-watchdog:pull token for registry-1.docker.io
#6 sha256:fc965b916b550c1c70d5127a16730dc7797be92605d259eb4d775900cb5b0dbe
#6 DONE 0.0s

#4 [internal] load metadata for docker.io/openfaas/of-watchdog:0.7.7
#4 sha256:bc34e2a38b7becdfdbb099e33e528b60e49b127082ee2115ee2bb3cf8404e0c6
#4 DONE 0.7s

#3 [internal] load metadata for docker.io/armhf/python:3.6-alpine
#3 sha256:e6d48735957efd2aca6bd015db24f7a4e25ff42e9137a2eedffa2864e2d879de
#3 DONE 0.7s

#14 [internal] load build context
#14 sha256:75d750aed9a9fecde279564471856cbc9904beaa7a60a6192034745d46cbad2c
#14 DONE 0.0s

#7 [stage-1  1/18] FROM docker.io/armhf/python:3.6-alpine@sha256:3f2dcb7c293c8fb2c7d58733bf323da02054977b07c7f50f1e274daf54c2971b
#7 sha256:d3e6d426630b31c375c08b79bc045333eb432358559e15b18198923007c7955b
#7 resolve docker.io/armhf/python:3.6-alpine@sha256:3f2dcb7c293c8fb2c7d58733bf323da02054977b07c7f50f1e274daf54c2971b 0.1s done
#7 DONE 0.0s

#8 [watchdog 1/1] FROM docker.io/openfaas/of-watchdog:0.7.7@sha256:f988f45b65b0282f457bed763525ec92ca493487cc033c2db0399eac17732ac4
#8 sha256:0966a2a0eb6bbfcb4864f9327ee06080148d634d81e99878e4dcd5c6b51c5e14
#8 resolve docker.io/openfaas/of-watchdog:0.7.7@sha256:f988f45b65b0282f457bed763525ec92ca493487cc033c2db0399eac17732ac4 0.1s done
#8 DONE 0.1s

#14 [internal] load build context
#14 sha256:75d750aed9a9fecde279564471856cbc9904beaa7a60a6192034745d46cbad2c
#14 transferring context: 63.18MB 1.3s done
#14 DONE 1.3s

#20 [stage-1 12/18] WORKDIR /home/app/function/
#20 sha256:5d4886239545062f7c5987311dc564c87064597224ed6a9d6dab7069f259e213
#20 CACHED

#10 [stage-1  3/18] RUN chmod +x /usr/bin/fwatchdog
#10 sha256:a917f761637ebfa961d596caee35ecf8f962252ed0a6ff413b539274b4647cf3
#10 CACHED

#9 [stage-1  2/18] COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
#9 sha256:9529337b26faa6c3fa0b2bd61c954359542dfe2f3a688e9bc930419290d56d37
#9 CACHED

#13 [stage-1  6/18] WORKDIR /home/app/
#13 sha256:2d34903acad8c20a4d2657108e671a46f6cfa708c5ee0e6824561208ea0e361e
#13 CACHED

#15 [stage-1  7/18] COPY index.py           .
#15 sha256:640f776c9056fbe53b26d009b2ac1ee1c613b9f01f11838368c7bdf2ef8c9ade
#15 CACHED

#16 [stage-1  8/18] COPY requirements.txt   .
#16 sha256:63a815bbf06530f21163b4718f5d47f55a96304e21858cf17efd9516927fea05
#16 CACHED

#19 [stage-1 11/18] RUN touch ./function/__init__.py
#19 sha256:d665af0632794f7e0fe82d0cade434b2fe0c73bc2fcf758aec1abf9f283513a3
#19 CACHED

#18 [stage-1 10/18] RUN mkdir -p function
#18 sha256:ee5110d7edcc7f389bc349a190a7fb599b44766aa68af9c14368909667e4568b
#18 CACHED

#21 [stage-1 13/18] COPY function/requirements.txt      .
#21 sha256:55d47fb5e2e878953ffe27f2d71f50efc9fa0f90beadd9bb00fe8c4e2cc6baca
#21 CACHED

#12 [stage-1  5/18] RUN chown app /home/app
#12 sha256:a4e2b41a9152ca3bcc83684ed7a2386f34d25511763a67fb387f5d6aab493b95
#12 CACHED

#17 [stage-1  9/18] RUN pip install -r requirements.txt
#17 sha256:48e4b20bc86026ac1165f7d9aaffdbda8c284d44fb8396647e8d797fb9b8b516
#17 CACHED

#11 [stage-1  4/18] RUN addgroup -S app && adduser app -S -G app
#11 sha256:0c5c95592af6724c8127d43533c7ea5905a0c80ee373b90b00e7a9c56df84bd4
#11 CACHED

#22 [stage-1 14/18] RUN pip install --user -r requirements.txt
#22 sha256:19170bae26414d676be587e449c9fdabf4836c2c944ef34014e97725c2a3e24c
#22 CACHED

#23 [stage-1 15/18] COPY function/   .
#23 sha256:a202f6f412b7f6421e99ef8e7be4bfccee2a2002e3100e9033efd64ce5094807
#23 DONE 2.8s

#24 [stage-1 16/18] RUN chown -R app:app ../
#24 sha256:44c0bdeccf4f45533be7ba76ad19b5b9dedf25036ebce41faf947c65f5245344
#24 DONE 7.6s

#25 [stage-1 17/18] RUN if [ "true" == "false" ]; then     echo "skipping tests";    else     eval "tox";     fi
#25 sha256:4b8af72c588d9de7175d86920a2242ca71cb2d8c6f246a03bac65694aa11bd7f
#25 4.573 lint recreate: /home/app/function/.tox/lint
#25 12.32 lint installdeps: flake8
#25 46.50 lint installed: flake8==3.9.2,importlib-metadata==4.6.0,mccabe==0.6.1,pycodestyle==2.7.0,pyflakes==2.3.1,typing-extensions==3.10.0.0,zipp==3.5.0
#25 46.50 lint run-test-pre: PYTHONHASHSEED='1279681707'
#25 46.50 lint run-test: commands[0] | flake8 .
#25 49.84 0
#25 49.99 test recreate: /home/app/function/.tox/test
#25 54.88 test installdeps: flask, pytest, -rrequirements.txt
#25 119.0 test installed: attrs==21.2.0,click==8.0.1,dataclasses==0.8,Flask==2.0.1,importlib-metadata==4.6.0,iniconfig==1.1.1,itsdangerous==2.0.1,Jinja2==3.0.1,MarkupSafe==2.0.1,packaging==20.9,pluggy==0.13.1,py==1.10.0,pyparsing==2.4.7,pytest==6.2.4,toml==0.10.2,typing-extensions==3.10.0.0,Werkzeug==2.0.1,zipp==3.5.0
#25 119.0 test run-test-pre: PYTHONHASHSEED='1279681707'
#25 119.0 test run-test: commands[0] | pytest
#25 121.6 Traceback (most recent call last):
#25 121.6   File "/home/app/function/.tox/test/bin/pytest", line 5, in <module>
#25 121.6     from pytest import console_main
#25 121.6   File "/home/app/function/.tox/test/lib/python3.6/site-packages/pytest/__init__.py", line 7, in <module>
#25 121.6     from _pytest.capture import CaptureFixture
#25 121.6   File "/home/app/function/.tox/test/lib/python3.6/site-packages/_pytest/capture.py", line 548, in <module>
#25 121.6     class MultiCapture(Generic[AnyStr]):
#25 121.6   File "/home/app/function/.tox/test/lib/python3.6/site-packages/_pytest/capture.py", line 616, in MultiCapture
#25 121.6     def readouterr(self) -> CaptureResult[AnyStr]:
#25 121.6   File "/usr/local/lib/python3.6/typing.py", line 510, in inner
#25 121.6     return cached(*args, **kwds)
#25 121.6   File "/usr/local/lib/python3.6/typing.py", line 1079, in __getitem__
#25 121.6     extra=self.__extra__)
#25 121.6   File "/usr/local/lib/python3.6/typing.py", line 944, in __new__
#25 121.6     self = super().__new__(cls, name, bases, namespace, _root=True)
#25 121.6   File "/usr/local/lib/python3.6/typing.py", line 118, in __new__
#25 121.6     return super().__new__(cls, name, bases, namespace)
#25 121.6   File "/usr/local/lib/python3.6/abc.py", line 133, in __new__
#25 121.6     cls = super().__new__(mcls, name, bases, namespace)
#25 121.6 ValueError: 'out' in __slots__ conflicts with class variable
#25 121.8 ERROR: InvocationError for command /home/app/function/.tox/test/bin/pytest (exited with code 1)
#25 121.8 ___________________________________ summary ____________________________________
#25 121.8   lint: commands succeeded
#25 121.8 ERROR:   test: commands failed
#25 ERROR: executor failed running [/bin/sh -c if [ "$TEST_ENABLED" == "false" ]; then     echo "skipping tests";    else     eval "$TEST_COMMAND";     fi]: exit code: 1
------
 > [stage-1 17/18] RUN if [ "true" == "false" ]; then     echo "skipping tests";    else     eval "tox";     fi:
------
Dockerfile:38
--------------------
  37 |     ARG TEST_ENABLED=true
  38 | >>> RUN if [ "$TEST_ENABLED" == "false" ]; then \
  39 | >>>     echo "skipping tests";\
  40 | >>>     else \
  41 | >>>     eval "$TEST_COMMAND"; \
  42 | >>>     fi
  43 |     
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c if [ "$TEST_ENABLED" == "false" ]; then     echo "skipping tests";    else     eval "$TEST_COMMAND";     fi]: exit code: 1

Possible Solution

I made this change to the template on my local machine, as a test, because I noticed armhf/python:3.6 is deprecated - and was pleasantly surprised arm32v7/python:3.6-alpine worked.

# I commented this out
# FROM armhf/python:3.6-alpine

FROM arm32v7/python:3.6-alpine

Steps to Reproduce (for bugs)

  1. faas-cli template store pull python3-http get the python3-http templates, if you haven't
  2. faas-cli new testbot --lang python3-http-armhf to build a new testbot function
  3. faas-cli publish -f testbot.yml --platforms linux/arm/v7 to try building it

Context

I am doing development on my Raspberry Pi k3s netbooting cluster running at home, switched my images to armhf, and bumped into this issue.

Changing the local template for python3-http-armhf is a good short term solution. Long term, I'm not sure if this impacts more people, and if there's something I can do to help.

I'll share on Slack to see what folks are seeing. 💯

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ): CLI:
 commit:  72816d486cf76c3089b915dfb0b66b85cf096634
 version: 0.13.13

Gateway
 uri:     http://127.0.0.1:8080
 version: 0.20.12
 sha:     a6dbb4cd0285f6dbc0bc3f43f72ceacdbdf6f227


Provider
 name:          faas-netes
 orchestration: kubernetes
 version:       0.13.4 
 sha:           6f34f27a2405798b5ee2846f1654bc7754991920
  • Docker version docker version (e.g. Docker 17.0.05 ):
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:56:38 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:50 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)? Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal
  • Code example or link to GitHub repo or gist to reproduce problem: shared above, let me know if more is needed?

  • Other diagnostic information / logs from troubleshooting guide the function builds with my workaround, and runs in OpenFaas.

Next steps

Let's talk. I hope I am doing something wrong.

You may join Slack for community support.

Pass in additional packages to be installed as a step in the template Dockerfile

My actions before raising this issue

Expected Behaviour

Ability to pass in additional packages to be installed as part of the template Dockerfile

Current Behaviour

Inability to pass additional packages to be installed as part of template Dockerfile

Possible Solution

Maybe this is already possible, but I have no idea how to pass that argument

Steps to Reproduce (for bugs)

Context

I need to do this, so that I can install the necessary dependencies to build cryptography

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):

  • Docker version docker version (e.g. Docker 17.0.05 ): 19.03.12

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)? k8s with openfaas

  • Operating System and version (e.g. Linux, Windows, MacOS): Alpine Linux v3.12 (containerized)

  • Code example or link to GitHub repo or gist to reproduce problem:

  • Other diagnostic information / logs from troubleshooting guide

Next steps

You may join Slack for community support.

Support for numpy

It is rather difficult to get numpy support since it has to be compiled inside the Docker image. Adding numpy to the requirements.txt for these alpine images causes an error like:

ERROR: Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-xjbzgbxn/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'Cython>=0.29.13' 'numpy==1.13.3; python_version=='"'"'3.6'"'"' and platform_system!='"'"'AIX'"'"'' 'numpy==1.14.5; python_version>='"'"'3.7'"'"' and platform_system!='"'"'AIX'"'"'' 'numpy==1.16.0; python_version=='"'"'3.6'"'"' and platform_system=='"'"'AIX'"'"'' 'numpy==1.16.0; python_version>='"'"'3.7'"'"' and platform_system=='"'"'AIX'"'"'' Check the logs for full command output.

Could you provide a build option like (not sure if all packages are necessary):

  - name: numpy
    packages:
      - gcc
      - gfortran
      - python
      - python-dev
      - py-pip
      - build-base
      - py3-numpy
      - wget
      - freetype-dev
      - libpng-dev
      - openblas-dev

Python function test

My actions before raising this issue

Expected Behaviour

Using the python3-http template, there is a file handler_test.py. I modified it so that the test does not pass:

from .handler import handle

def test_handle():
    assert False

I built the function and was expecting to get a test error. I did a local run and also got no test errors.

Current Behaviour

I didn't get any test errors.

Possible Solution

Steps to Reproduce (for bugs)

  1. Write failing tests
  2. Build and get no errors.

Context

Testing code is essential. Providing a solid way to write and run tests is a must.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):

CLI:
commit: f72db8de657001a2967582c535fe8351c259c5f6
version: 0.16.17

  • Docker version docker version (e.g. Docker 17.0.05 ):
    Docker Community 24.0.7

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux Ubuntu

  • Code example or link to GitHub repo or gist to reproduce problem:

  • Other diagnostic information / logs from troubleshooting guide

Next steps

You may join Slack for community support.

Templates not working in Openshift

I encountered the issue described in #25

The problem still exists, and is due to Openshift assigning random user ids to containers.

Expected Behaviour

Dependencies in requirements.txt are available within function

Current Behaviour

ModuleNotFoundError

Possible Solution

I think the simplest way to solve this issue is to install the requirements globally

I.e. replacing :

USER app
RUN pip install --user -r requirements.txt

by

USER root
RUN pip install -r requirements.txt

This fix has been successfully tested.

Your Environment

  • Openshift/OKD 4.7 (but should affect all versions of Openshift / OKD)

  • Faas-CLI version 0.13.9

Thanks,

Support multipart/form-data and json

The python3-http templates use an Event abstraction which represents the incoming request.
Currently the Event only holds the request.data as body, which is the body as byte array.

Suggestion: Leverage Flask's support for multipart/form-data, application/json (and x-www-form-urlencoded) by adding the following attributes to the Event object:

  1. request.files and request.form - they are needed for multipart/form-data requests.
    The request.files attribute is an ImmutableMultiDict of FileStorage (which is a file-ish ducktype) where every entry is a list of files.
    The request.form is an ImmutableMultiDict of strings which represent the textual multiparts which have no filename in their content-disposition (or the values of an x-www-form-urlencoded body).
    We already preserve ImmutableMultiDict in Event.query, so it seems straightforward to add files and form, too
  2. request.get_json() - it only gets filled when the request is of Content-Type: application/json
from flask import request

class Event:
    def __init__(self):
        self.body = request.get_data()
        self.json = request.get_json()
        self.form = request.form
        self.files = request.files
        self.headers = request.headers
        self.method = request.method
        self.query = request.args
        self.path = request.path

The format_body method also needs to be adjusted so that it uses Flask's support for file responses if the handler returns a file.

return send_file(fout, mimetype='application/pdf')

This gives us the ability to upload files to such functions and download the function result (would also enable us to have uploads in the OpenFaaS Dashboard). Depending on Flask's implementation this also gives us support for large files.
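
For illustration only, a handler consuming the proposed attributes might look like the sketch below; this assumes the attribute names land exactly as listed above and is not part of the current template:

def handle(event, context):
    if event.json is not None:
        # Content-Type: application/json
        name = event.json.get("name", "anonymous")
        return {"statusCode": 200, "body": {"hello": name}}

    # multipart/form-data: save each uploaded file and echo the text fields
    for field_name, file_storage in event.files.items():
        file_storage.save("/tmp/" + file_storage.filename)
    return {"statusCode": 200, "body": {"fields": dict(event.form)}}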

I am preparing a PR for the python3-http template, and I could do likewise for the IRequest in the Java Template.

Please let me know if you approve of the general idea.

faas build fails with tox error

My actions before raising this issue

When building a function with faas build, the build fails with the error below:
ERROR: tox config file (either pyproject.toml, tox.ini, setup.cfg) not found
The command '/bin/sh -c if [ "$TEST_ENABLED" == "false" ]; then echo "skipping tests"; else eval "$TEST_COMMAND"; fi' returned a non-zero code: 1
[0] < Building test-tox done in 8.28s.
[0] Worker done.

Total build time: 8.28s
Errors received during build:

  • [test-tox] received non-zero exit code from build, error: The command '/bin/sh -c if [ "$TEST_ENABLED" == "false" ]; then echo "skipping tests"; else eval "$TEST_COMMAND"; fi' returned a non-zero code: 1

Expected Behaviour

Function build should complete successfully.

Current Behaviour

The build completes with an error. When TEST_ENABLED is set to false, the build completes successfully. But in the default mode, when TEST_ENABLED is set to true, the build fails to find tox.ini, although the file is created by default by the CLI.

Possible Solution

Steps to Reproduce (for bugs)

  1. create a new function using python-flask-template repo "faas new --lang python3-http test-tox"
  2. Build function using faas build -f test-tox.yml
  3. Function build fails with above error

Context

Trying to use the new template format.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
    version: 0.13.9

  • Docker version docker version (e.g. Docker 17.0.05 ):
    19.03.8

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux (CentOS)

  • Code example or link to GitHub repo or gist to reproduce problem:
    n/A

  • Other diagnostic information / logs from troubleshooting guide
    Step 27/35 : ARG TEST_ENABLED=true
    ---> Using cache
    ---> c38444c37ac6
    Step 28/35 : RUN if [ "$TEST_ENABLED" == "false" ]; then echo "skipping tests"; else eval "$TEST_COMMAND"; fi
    ---> Running in c1b290bd6c83
    ERROR: tox config file (either pyproject.toml, tox.ini, setup.cfg) not found
    The command '/bin/sh -c if [ "$TEST_ENABLED" == "false" ]; then echo "skipping tests"; else eval "$TEST_COMMAND"; fi' returned a non-zero code: 1
    [0] < Building test-tox done in 2.11s.
    [0] Worker done.

Total build time: 2.11s
Errors received during build:

  • [test-tox] received non-zero exit code from build, error: The command '/bin/sh -c if [ "$TEST_ENABLED" == "false" ]; then echo "skipping tests"; else eval "$TEST_COMMAND"; fi' returned a non-zero code: 1

Next steps

You may join Slack for community support.

Build fails in python3-flask template

Description

When trying to build a function with the python3-flask template, step 15 fails with this error:

ERROR: Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-zh4yzal0/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools >= 40.8.0' wheel 'Cython >= 0.29.14' 'cffi >= 1.12.3 ; platform_python_implementation == '"'"'CPython'"'"'' 'greenlet>=0.4.14 ; platform_python_implementation == '"'"'CPython'"'"'' Check the logs for full command output.
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

The step which fails is:

RUN pip install -r requirements.txt

which installs:

flask
gevent

I can see that flask is fetched successfully, but gevent fails.

Full logs

The full logs are the following:

Step 15/32 : RUN pip install -r requirements.txt
 ---> Running in 5bcfc8dc5f75
Collecting flask
  Downloading Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting gevent
  Downloading gevent-20.4.0.tar.gz (5.5 MB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'error'
  ERROR: Command errored out with exit status 1:
   command: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-zh4yzal0/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools >= 40.8.0' wheel 'Cython >= 0.29.14' 'cffi >= 1.12.3 ; platform_python_implementation == '"'"'CPython'"'"'' 'greenlet>=0.4.14 ; platform_python_implementation == '"'"'CPython'"'"''
       cwd: None
  Complete output (109 lines):
  Collecting setuptools>=40.8.0
    Downloading setuptools-46.1.3-py3-none-any.whl (582 kB)
  Collecting wheel
    Downloading wheel-0.34.2-py2.py3-none-any.whl (26 kB)
  Collecting Cython>=0.29.14
    Downloading Cython-0.29.17-py2.py3-none-any.whl (971 kB)
  Collecting cffi>=1.12.3
    Downloading cffi-1.14.0.tar.gz (463 kB)
  Collecting greenlet>=0.4.14
    Downloading greenlet-0.4.15.tar.gz (59 kB)
  Collecting pycparser
    Downloading pycparser-2.20-py2.py3-none-any.whl (112 kB)
  Building wheels for collected packages: cffi, greenlet
    Building wheel for cffi (setup.py): started
    Building wheel for cffi (setup.py): finished with status 'error'
    ERROR: Command errored out with exit status 1:
     command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u0umok24/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u0umok24/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-ehri8a3m
         cwd: /tmp/pip-install-u0umok24/cffi/
    Complete output (36 lines):
    running bdist_wheel
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.7
    creating build/lib.linux-x86_64-3.7/cffi
    copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/verifier.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/recompiler.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/api.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/__init__.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/commontypes.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/cparser.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/lock.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/error.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/model.py -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/_embedding.h -> build/lib.linux-x86_64-3.7/cffi
    copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.7/cffi
    running build_ext
    building '_cffi_backend' extension
    creating build/temp.linux-x86_64-3.7
    creating build/temp.linux-x86_64-3.7/c
    gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.7m -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.7/c/_cffi_backend.o
    c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory
       15 | #include <ffi.h>
          |          ^~~~~~~
    compilation terminated.
    error: command 'gcc' failed with exit status 1
    ----------------------------------------
    ERROR: Failed building wheel for cffi
    Running setup.py clean for cffi
    Building wheel for greenlet (setup.py): started
    Building wheel for greenlet (setup.py): finished with status 'done'
    Created wheel for greenlet: filename=greenlet-0.4.15-cp37-cp37m-linux_x86_64.whl size=50272 sha256=61a30c0689692a1cdcc99cc917c3c9426baa1e6c5c33d1dbcba581799006d49b
    Stored in directory: /root/.cache/pip/wheels/9f/ef/08/f28669af76917de4c18abfd28a491c67c8ac3166d45d00660c
  Successfully built greenlet
  Failed to build cffi
  Installing collected packages: setuptools, wheel, Cython, pycparser, cffi, greenlet
      Running setup.py install for cffi: started
      Running setup.py install for cffi: finished with status 'error'
      ERROR: Command errored out with exit status 1:
       command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u0umok24/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u0umok24/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-rynut_o8/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-zh4yzal0/overlay --compile --install-headers /tmp/pip-build-env-zh4yzal0/overlay/include/python3.7m/cffi
           cwd: /tmp/pip-install-u0umok24/cffi/
      Complete output (36 lines):
      running install
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-3.7
      creating build/lib.linux-x86_64-3.7/cffi
      copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/verifier.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/recompiler.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/api.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/__init__.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/commontypes.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/cparser.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/lock.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/error.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/model.py -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/_embedding.h -> build/lib.linux-x86_64-3.7/cffi
      copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.7/cffi
      running build_ext
      building '_cffi_backend' extension
      creating build/temp.linux-x86_64-3.7
      creating build/temp.linux-x86_64-3.7/c
      gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.7m -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.7/c/_cffi_backend.o
      c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory
         15 | #include <ffi.h>
            |          ^~~~~~~
      compilation terminated.
      error: command 'gcc' failed with exit status 1
      ----------------------------------------
  ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-u0umok24/cffi/setup.py'"'"'; __file__='"'"'/tmp/pip-install-u0umok24/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-rynut_o8/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-zh4yzal0/overlay --compile --install-headers /tmp/pip-build-env-zh4yzal0/overlay/include/python3.7m/cffi Check the logs for full command output.
  ----------------------------------------
ERROR: Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-zh4yzal0/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools >= 40.8.0' wheel 'Cython >= 0.29.14' 'cffi >= 1.12.3 ; platform_python_implementation == '"'"'CPython'"'"'' 'greenlet>=0.4.14 ; platform_python_implementation == '"'"'CPython'"'"'' Check the logs for full command output.
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

Steps to reproduce

  1. Pull function from store
  2. Build function with the template
  3. See the above failure

Pass the path route argument to the handler

My actions before raising this issue

I'd like to suggest passing the path argument to the handler as a second argument, as it would prove quite valuable to my application.
I'm not really familiar with Flask, however, and I might be able to access the argument already via a property on the request object. If that's the case, please let me know!

Context

The function I'm developing returns a list of attributes for a username. I think the natural implementation would be to make a GET request to /username. It would be great if I could reuse it.
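
As a side note, the python3-http template documented earlier on this page already exposes the request path as event.path. A minimal sketch, assuming python3-http and that the gateway forwards the trailing path to the function:

def handle(event, context):
    # assumption: a GET to .../function/<fn>/alexellis2 arrives with event.path == "/alexellis2"
    username = event.path.lstrip("/")
    return {"statusCode": 200, "body": {"username": username}}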

Misconfigured multi-stage step name in all Dockerfiles

My actions before raising this issue

Expected Behaviour

The faas-cli build, having pulled the most recent templates for python3-http as of the last 30 minutes, should continue to build as it did earlier today.
(There could be a problem on my end, but it's failing on both my local M1 MacBook Pro and my GitLab instance.)
Functions I haven't modified at all in days are also failing redeployment.

Current Behaviour

It fails, seemingly because it is trying to pull from a remote "builder" image on Docker Hub, as opposed to using the builder "stage" of the multi-stage build. Error below:

#3 [internal] load metadata for docker.io/library/builder:latest
#3 sha256:261eb3ef077559ec61986b2391905264393ea6320a9bca56ed4d427da6e7cb59
#3 ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
------
 > [internal] load metadata for docker.io/library/builder:latest:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

Possible Solution

Explicitly label the previous stage as builder? I'm actually stumped as to why this has broken just now, perhaps due to a compatibility break introduced by using the --platform flag?

FROM --platform=${TARGETPLATFORM:-linux/amd64} python:3.7-alpine

Steps to Reproduce (for bugs)

  1. using this code https://github.com/emoulsdale/openfaas-test (sorry, it's really bad and unidiomatic)
  2. run faas-cli template store pull python3-http
  3. run faas-cli build
  4. observe this error appear!

Context

My test functions are no longer building with the newest templates at all. Sorry to be a pain :). I can revert to the previous version for now, though! I will have to check it in to my code manually and alter the CI scripts.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
    0.14.2

  • Docker version docker version (e.g. Docker 17.0.05 ):
    Docker version 20.10.13, build a224086

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):
    MacOS with M1 Pro

  • Code example or link to GitHub repo or gist to reproduce problem:

  • Other diagnostic information / logs from troubleshooting guide

https://github.com/emoulsdale/openfaas-test

Next steps

You may join Slack for community support.

Just wanted to add that this is still only the first problem I've had, so far, after months of trying silly things. I am really enjoying sponsoring this project!

support request: logging

As HTTP mode no longer needs stdio to exchange data with the watchdog, are stdout and stderr both free for logging?
I've tried using combine_output: false and writing to stderr, but I always get an error.

handler.py

import sys

def handle(event, context):
    sys.stderr.write('foo\n')
    sys.stdout.write('bar\n')
    return {
        "statusCode": 200,
        "body": "Hello from OpenFaaS!"
    }

function.yml

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  mytest:
    lang: python3-http
    handler: ./mytest
    image: mytest:latest

And the error output for first curl to the function

mytest.1.xov05txbh055@docker-desktop    | 2019/09/10 00:08:39 SIGTERM received.. shutting down server in 10s
mytest.1.xov05txbh055@docker-desktop    | 2019/09/10 00:08:39 Removing lock-file : /tmp/.lock
mytest.1.xov05txbh055@docker-desktop    | 2019/09/10 00:08:39 Error reading stderr: EOF

Subsequent requests do not log any errors, but they don't log any foo/bar either...

logging

Hi,

I tried passing app.logger to the function by adding

class Context:
    def __init__(self):
        self.hostname = os.getenv('HOSTNAME', 'localhost')
        self.logger = app.logger

When calling it inside the function I get no error, but the function does not actually get called, so there is a silent exception somewhere.

How do you log? print output does not show up even if I have the write_debug env variable set to true.

Support question on log aggregation

My actions before raising this issue

This is more a question rather than an issue.

I am trying to collect logs from a python flask function, and hope I can retain them as log files on disk, which will rotate once a day. During this process, I find that:

  1. I don't really know how to override the log provider to save logs to a file on the disk of a k8s master node. The only related tutorial I see is the static logger provider written in Golang. I hope I can get some tutorials in Python.
  2. I cannot get logs with faas-cli logs <func_name>; I can only see logs with docker logs <container_name>.

I hope someone could tell me how to implement log collection correctly.

A further question: if collecting logs to a file system is unrealistic, do I need to make sure my logs can be shown by faas-cli logs <func_name> before trying to collect them with Elasticsearch or Grafana Loki? I am not familiar with these two things.

Expected Behaviour

Some logs show when running faas-cli logs flask-faas or kubectl logs XXX

Current Behaviour

Run faas-cli logs flask-faas shows nothing

Run kubectl logs flask-faas-57f6449bbd-z5cld -n openfaas-fn shows an error. I am not sure if it is related with openfaas or not.

Error from server: Get "https://192.168.65.4:10250/containerLogs/openfaas-fn/flask-faas-57f6449bbd-z5cld/flask-faas": open /run/config/pki/apiserver-kubelet-client.crt: no such file or directory

Only when using docker logs do the logger and stdout/stderr work (2ef3e197a8b1 is the id of the function container).

$ docker logs 2ef3e197a8b1
2023/03/16 12:02:28 Version: 0.9.10     SHA: eefeb9dd8c979398a46fc0decc3297591362bfab
2023/03/16 12:02:28 Forking: python, arguments: [index.py]
2023/03/16 12:02:28 Started logging: stderr from function.
2023/03/16 12:02:28 Started logging: stdout from function.
2023/03/16 12:02:28 Watchdog mode: http fprocess: "python index.py"
2023/03/16 12:02:28 Timeouts: read: 10s write: 10s hard: 10s health: 10s
2023/03/16 12:02:28 Listening on port: 8080
2023/03/16 12:02:28 Writing lock-file to: /tmp/.lock
2023/03/16 12:02:28 Metrics listening on port: 8081
2023/03/16 12:02:29 stderr: global logger error
2023/03/16 12:06:51 stderr: global stderr
2023/03/16 12:06:51 stderr: INFO:function.handler:logger info: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:06:51 stderr: ERROR:function.handler:logger error: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:06:51 stderr: stderr: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:06:51 stderr: INFO:function.handler:inner logger info: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:06:51 stderr: ERROR:function.handler:inner logger error: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:06:51 GET /?filepath=data/10ibt_H.msgpack&filterStr=00000%2A%2A%2A%2A%2A - 200 OK - ContentLength: 189B (0.0171s)

If I add PYTHONUNBUFFERED=1 as mentioned in this post, the docker log includes print and stdout information as shown below, but the faas-cli logs and kubectl logs are still the same, giving no useful logs.

2023/03/16 12:40:09 Version: 0.9.10     SHA: eefeb9dd8c979398a46fc0decc3297591362bfab
2023/03/16 12:40:09 Forking: python, arguments: [index.py]
2023/03/16 12:40:09 Started logging: stderr from function.
2023/03/16 12:40:09 Started logging: stdout from function.
2023/03/16 12:40:09 Watchdog mode: http fprocess: "python index.py"
2023/03/16 12:40:09 Timeouts: read: 10s write: 10s hard: 10s health: 10s     
2023/03/16 12:40:09 Listening on port: 8080
2023/03/16 12:40:09 Writing lock-file to: /tmp/.lock
2023/03/16 12:40:09 Metrics listening on port: 8081
2023/03/16 12:40:09 stdout: global print
2023/03/16 12:40:09 stdout: global stdout
2023/03/16 12:40:09 stderr: global logger error
2023/03/16 12:40:09 stderr: global stderr
2023/03/16 12:43:37 stderr: INFO:function.handler:logger info: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])    
2023/03/16 12:43:37 stderr: ERROR:function.handler:logger error: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])  
2023/03/16 12:43:37 stderr: stderr: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stdout: print: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stdout: stdout: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stdout: inner print: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stdout: inner stdout: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stderr: INFO:function.handler:inner logger info: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stderr: ERROR:function.handler:inner logger error: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 stderr: inner stderr: ImmutableMultiDict([('filepath', 'data/10ibt_H.msgpack'), ('filterStr', '00000*****')])
2023/03/16 12:43:37 GET /?filepath=data/10ibt_H.msgpack&filterStr=00000%2A%2A%2A%2A%2A - 200 OK - ContentLength: 189B (0.0023s)

Steps to Reproduce (for bugs)

Create a function by template. I am using the flask template with of-watchdog:0.9.10.

faas-cli new --lang python3-http flask-faas --prefix lyudmilalala

Overwrite the handler.py in the flask-faas folder.

from flask import current_app
import sys
import logging

logger = logging.getLogger(__name__)
logger.setLevel("INFO")

logger.info("global logger info")
logger.error("global logger error")
print("global print") # cannot print
sys.stdout.write("global stdout\n")
sys.stderr.write("global stderr\n")

def handle(event, context):
    current_app.logger.info("app logger: " + repr(event.query)) # cannot print
    logger.info("logger info: " + repr(event.query))
    logger.error("logger error: " + repr(event.query))
    print("print: " + repr(event.query)) # cannot print
    sys.stdout.write("stdout: " + repr(event.query) + "\n")
    sys.stderr.write("stderr: " + repr(event.query) + "\n")

    inner(event)
    return {
        "statusCode": 200,
        "body": "Welcom to flask-demo v1.0!"
    }

def inner(event):
    current_app.logger.info("inner app logger: " + repr(event.query)) # cannot print
    logger.info("inner logger info: " + repr(event.query))
    logger.error("inner logger error: " + repr(event.query))
    print("inner print: " + repr(event.query)) # cannot print
    sys.stdout.write("inner stdout: " + repr(event.query) + "\n")
    sys.stderr.write("inner stderr: " + repr(event.query)  + "\n")

Overwrite the flask-faas.yml

192.168.1.124 is my local IP.

version: 1.0
provider:
  name: openfaas
  gateway: http://192.168.1.124:31112
functions:
  flask-faas:
    lang: python3-http
    handler: ./flask-faas
    image: lyudmilalala/flask-faas:1.0
    environment:
      write_debug: true 
      PYTHONUNBUFFERED: 1

Start the function.

faas-cli up -f flask-faas.yml

Call the function with curl.

curl --request GET 'http://192.168.1.124:31112/function/flask-faas?filepath=data/10ibt_H.msgpack&filterStr=00000*****'

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
    0.14.2

  • Docker version docker version (e.g. Docker 17.0.05 ):
    docker version 20.10.11

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    FaaS-netes with tag 0.16.4, k8s version v1.22.4

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Windows 10 64-bit, Docker Desktop 4.3.0 with kind

  • Code example or link to GitHub repo or gist to reproduce problem:

  • Other diagnostic information / logs from troubleshooting guide

Docs and posts about logging I have gone through

I did not find much about customized log providers. Also, because I am considering moving some of my business logic from Python to C++ for better performance, I want to know more about logging for a customized Docker image with of-watchdog. (In my case, I use a pistachio server behind of-watchdog, and I don't know whether I should use a logging library or just write logs to stdout.) I hope to see more documentation about this.
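
For what it's worth, here is a minimal sketch of a logging setup that plays well with of-watchdog, assuming the standard Python logging module (the logger name and handler signature below are illustrative, not part of the template): write log records straight to stderr so the watchdog can capture and prefix each line.

import logging
import sys

# Send log records straight to stderr; of-watchdog captures and prefixes
# both stderr and stdout from the forked process.
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

logger = logging.getLogger("function.handler")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def handle(event, context):
    logger.info("query: %s", event.query)
    return {"statusCode": 200, "body": "OK"}

For plain print() calls, either set PYTHONUNBUFFERED=1 as above or pass flush=True so the output is not held back in the stdout buffer.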

Next steps

You may join Slack for community support.

request.get_data() is always empty because of flask/werkzeug bug with chunked transfer encoding

The python3-flask template (and likely the python2) is bitten by this Flask issue: pallets/flask#2229

Flask / Werkzeug have a bug handling chunked transfer encoding (and perhaps any other case where Content-Length is not set) when combined with a WSGI server.

When the request is forwarded through the of-watchdog process, http.NewRequest is given a reference to the body io.ReadCloser. Since that is a stream, the outbound request will always use chunked transfer encoding and no Content-Length.

This could be avoided by changing of-watchdog to read the whole body into a buffer and then send it unchunked; but it would make more sense, I think, either to upgrade to a fixed version of Flask/Werkzeug (assuming there is one) or to revert to Flask's development server (app.run(...)) rather than using gevent's WSGIServer, as the built-in server does not seem to be affected.
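
To illustrate the buffering approach (this is only a sketch, not of-watchdog's or the template's actual code; the class name is made up), a small WSGI middleware could drain the chunked body up front and set CONTENT_LENGTH before Flask/Werkzeug see the request:

import io

class BufferedBodyMiddleware:
    """Read the whole request body up front so the downstream WSGI app
    sees an ordinary Content-Length request instead of a chunked stream."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        body = environ.get("wsgi.input")
        if body is not None and not environ.get("CONTENT_LENGTH"):
            data = body.read()  # drain the (possibly chunked) input stream
            environ["wsgi.input"] = io.BytesIO(data)
            environ["CONTENT_LENGTH"] = str(len(data))
        return self.app(environ, start_response)

# usage: app.wsgi_app = BufferedBodyMiddleware(app.wsgi_app)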

The following examples use the same of-watchdog and function code. The only difference is that the first uses Flask with gevent's WSGIServer and the second uses the Flask built-in server.

WSGI server

$ curl -i -XPOST -d "rawdsddddddddddddddddddddddd" 'http://127.0.0.1:9999/'
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/html; charset=utf-8
Date: Wed, 25 Jul 2018 23:33:36 GMT

Flask's server

$ curl -i -XPOST -d "rawdsddddddddddddddddddddddd" 'http://127.0.0.1:9999/'
HTTP/1.1 200 OK
Content-Length: 28
Content-Type: text/html; charset=utf-8
Date: Wed, 25 Jul 2018 23:31:29 GMT
Server: Werkzeug/0.14.1 Python/3.6.5

rawdsddddddddddddddddddddddd

Small typo in the Dockerfiles

My actions before raising this issue

In the Dockerfile for a couple of the templates the line:

#build function directory and install user specified componenets.

has the word components spelled as componenets.

I read the contribution guide and it said to raise an issue rather than a PR.

Support running as random UID in Docker image

You should be able to run with a --uid flag and a random number, and still be able to access your pip modules inside the function.

Right now that is probably not the case, but if you look at the way we changed openfaas/templates for python3 this fix can be applied to the templates in this repo also.

return statusCode ignored, move statusCode to body?

My actions before raising this issue

Expected Behaviour

The return status code should be returned along with the body message.

Current Behaviour

The returned 'statusCode' currently seems to be completely ignored when calling an OpenFaaS function endpoint with curl. There doesn't seem to be a mechanism to retrieve the statusCode (maybe I'm missing something in the documentation).
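
For reference, the status is carried on the HTTP response itself rather than in the body, so it can be inspected with curl -i or, in Python, with the requests library (the function URL below is a hypothetical example):

import requests

# Hypothetical deployed function URL
resp = requests.get("http://127.0.0.1:8080/function/my-fn")
print(resp.status_code)  # the HTTP status actually returned by the gateway
print(resp.text)         # the response body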

Possible Solution

Include the status code in the return body so it can be parsed by end users. As suggested here, openfaas/faas-netes#147 (comment)

i.e. in handler.py

def handle(event, context):
    return {
        "body": {
            "statusCode": 200,
            "message": "Hello from OpenFaaS!"
        }
    }

Steps to Reproduce (for bugs)

  1. Modify one of the template handler.py return status codes to be something other than 200 (e.g. 500)
  2. build, push, deploy a container using the template modified
  3. execute the function and check the return status

Context

I'm a graduate student working on a master's capstone comparing the performance and cost of cloud native applications running on open-source software to proprietary solutions from cloud vendors.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|

CLI:
 commit:  0074051aeb837f5f160ee8736341460468b5c190
 version: 0.15.4
  • Docker version docker version (e.g. Docker 17.0.05 ):
Client: Docker Engine - Community
 Version:           20.10.21
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):

NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Code example or link to GitHub repo or gist to reproduce problem:

  • Other diagnostic information / logs from troubleshooting guide

Remove unnecessary package cache files for apt/pip, implement multistage builds to minimize container image

My actions before raising this issue

Expected Behaviour

I'd recommend cleaning up the cache files left by apt and pip. tox testing can also be performed without padding the final image by using a multistage build.

Current Behaviour

Package cache files from apt and pip are inflating the container images. tox also significantly increases the final container image size, as it creates a virtual environment and installs several packages.

Possible Solution

  • Remove the apt cache files by chaining the install and cache-removal commands (this is the pattern given in the Docker best-practices documentation)
RUN apt-get -qy update && \
	apt-get -qy install ${ADDITIONAL_PACKAGE} && \
	rm -rf /var/lib/apt/lists/*
  • Remove the pip cache by using the --no-cache-dir flag when installing with pip.
RUN pip install --no-cache-dir -r requirements.txt
  • Implement three build stages in the Dockerfile (e.g. builder, testing, final): label the initial Python base image with FROM python:<tag> AS builder, start the testing stage with FROM builder AS testing, and start the final stage with FROM builder AS final. This still runs the tests but avoids the unnecessary .tox directory left behind by testing.
FROM openfaas/of-watchdog:0.9.6 AS watchdog
FROM python:3.7-slim-buster AS builder

<snippet>

FROM builder AS tester

ARG TEST_COMMAND=tox
ARG TEST_ENABLED=true
RUN [ "$TEST_ENABLED" = "false" ] && echo "skipping tests" || eval "$TEST_COMMAND"

FROM builder AS final

<snippet>

Doing the above reduced the deployed container image size by 28% for the python3-flask-debian template (results will likely vary depending on the additional packages installed). Build time did not appear to be affected by the modifications.

Steps to Reproduce (for bugs)

  1. Copy Dockerfile from gist.
  2. Use faas-cli to build a container image from above Dockerfile.
  3. Use faas-cli to build a container image with an unmodified Dockerfile.
  4. Compare container image sizes.

Context

I'm a graduate student working on a master's capstone comparing the performance and cost of cloud native applications running on open-source software to proprietary solutions from cloud vendors. I'm also working with a company (BioDepot LLC) that uses Docker for packaging their software. I'd like to pass along some helpful tips that I've learned from maintaining container images.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|

CLI:
 commit:  b1c09c0243f69990b6c81a17d7337f0fd23e7542
 version: 0.14.2
  • Docker version docker version (e.g. Docker 17.0.05 ):
Docker version 20.10.14, build a224086
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):

NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Error with Twilio example

Reported as: Not receiving request body in function handler

I'm trying to test the following python flask code.

https://github.com/chzbrgr71/kube-con-2018/blob/master/open-faas/sms-ratings/sms-ratings/handler.py

I've tried printing the request body and it appears to be empty. What am I doing wrong?

Here's the error I'm seeing

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "index.py", line 11, in main_route
    ret = handler.handle(request.get_data())
  File "/root/function/handler.py", line 10, in handle
    msgBody = req.values.get("Body")
AttributeError: 'str' object has no attribute 'values'

I've also used an HTTP GET to confirm the parameters are being sent correctly in the URL.

Here's a scrubbed set of parameters from the request.

ApiVersion=2010-04-01&SmsSid=<>&SmsStatus=<>&SmsMessageSid=<>&NumSegments=1&From=<>&ToState=OR&MessageSid=<>&AccountSid=<>&ToZip=<>&FromCountry=US&ToCity=<>&FromCity=<>&To=<>&FromZip=<>&Body=<>&ToCountry=US&FromState=<>&NumMedia=0
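
Because the classic python template's index.py calls handler.handle(request.get_data()), the handler receives the raw form-encoded body rather than a Flask request object, so req.values does not exist. A workaround sketch (not the template's own code) is to parse the body directly:

from urllib.parse import parse_qs  # Python 3; on Python 2.7 use `from urlparse import parse_qs`

def handle(req):
    # req is the raw form-encoded body passed in by index.py,
    # e.g. "ApiVersion=2010-04-01&...&Body=hello&..."
    if isinstance(req, bytes):  # the Python 3 templates pass bytes
        req = req.decode("utf-8")
    params = parse_qs(req)
    msg_body = params.get("Body", [""])[0]
    return "You sent: " + msg_body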

extra chown command pads image size and build time

My actions before raising this issue

Expected Behaviour

The --chown flag, introduced in Docker v17.09.0-ce, allows copying files and changing their ownership in one step, so files can be copied with non-root permissions without any additional container layer.

Current Behaviour

There's an additional RUN chown -R app:app ../ layer in several Dockerfiles that pads the overall image size and build time; this is especially noticeable as the user-specified requirements.txt grows.

Possible Solution

Add the --chown=app flag to any COPY command intended for non-root-user (app) files; this will reduce overall build time and container image size.

or alternative...

Create an additional build stage (e.g. FROM python:* AS production) and copy over the necessary build artifacts after testing finishes. This may require significant tweaking of the Dockerfiles (e.g. using a virtualenv), but it may result in a much smaller container image.

Steps to Reproduce (for bugs)

  1. faas-cli template pull https://github.com/openfaas-incubator/python-flask-template
  2. faas-cli new --lang python3-http-debian hello-python
  3. Add several requirements to the hello-python/requirements.txt, example below
boto3
botocore
click
gensim
jmespath
joblib
nltk
numpy
pandas
python-dateutil
pytz
regex
s3transfer
scipy
simplejson
six
smart-open
tqdm
urllib3
  1. time faas-cli build --no-cache -f hello-python.yml
  2. Modify any file COPY command intended for non-root user files to include --chown=app
diff --git a/template/python3-http-debian/Dockerfile b/template/python3-http-debian/Dockerfile
index 624fb03..6396fd3 100644
--- a/template/python3-http-debian/Dockerfile
+++ b/template/python3-http-debian/Dockerfile
@@ -19,8 +19,8 @@ ENV PATH=$PATH:/home/app/.local/bin

 WORKDIR /home/app/

-COPY index.py           .
-COPY requirements.txt   .
+COPY --chown=app index.py           .
+COPY --chown=app requirements.txt   .
 USER root
 RUN pip install -r requirements.txt
 USER app
@@ -32,8 +32,7 @@ COPY function/requirements.txt        .
 RUN pip install --user -r requirements.txt

 USER root
-COPY function/   .
-RUN chown -R app:app ../
+COPY --chown=app function/ .

 ARG TEST_COMMAND=tox
 ARG TEST_ENABLED=true
  1. time faas-cli build --no-cache -f hello-python.yml
  2. Compare image size and build time

Context

I'm a college student working on a capstone project involving deploying open-source cloud native applications in Kubernetes and doing a comparison against vendor specific solutions.

I'm using python-flask-templates as a basis for my own custom functions and would love to contribute in any way I can.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|

CLI:
 commit:  b1c09c0243f69990b6c81a17d7337f0fd23e7542
 version: 0.14.2
  • Docker version docker version (e.g. Docker 17.0.05 ):
Docker version 20.10.14, build a224086
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes

  • Operating System and version (e.g. Linux, Windows, MacOS):

NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Code example or link to GitHub repo or gist to reproduce problem:
    I can create a full gist if desired, but the requirements.txt provided in the "steps to reproduce" above matches my specific use case. To summarize my results, implementing the first proposed solution produced a ~30 second decrease in build time and a ~500 MB decrease in image size.

  • Other diagnostic information / logs from troubleshooting guide
