
apache / apisix-docker


Docker images for Apache APISIX

Home Page: https://apisix.apache.org/

License: Apache License 2.0


apisix-docker's Introduction

What is Apache APISIX API Gateway

Apache APISIX is a dynamic, real-time, high-performance API Gateway.

APISIX API Gateway provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.

You can use APISIX API Gateway to handle traditional north-south traffic, as well as east-west traffic between services. APISIX is already used across various industries by organizations such as NASA, Tencent Cloud, EU Digital Factory, Airbus, Airwallex, and iQIYI.

How to run Apache APISIX

Apache APISIX supports stand-alone mode and also supports using etcd as its configuration center.

How to run APISIX in stand-alone mode

In stand-alone mode, APISIX uses apisix.yaml as its configuration source, storing routes, upstreams, consumers, and other information. After starting, APISIX periodically reloads the apisix.yaml file to pick up configuration changes.

You can start an APISIX container in stand-alone mode with the following command:

docker run -d --name apache-apisix \
  -p 9080:9080 \
  -e APISIX_STAND_ALONE=true \
  apache/apisix

Add Route and Plugin configuration to the running APISIX container:

docker exec -i apache-apisix sh -c 'cat > /usr/local/apisix/conf/apisix.yaml <<_EOC_
routes:
  -
    id: httpbin
    uri: /*
    upstream:
      nodes:
        "httpbin.org": 1
      type: roundrobin
    plugin_config_id: 1

plugin_configs:
  -
    id: 1
    plugins:
      response-rewrite:
        body: "Hello APISIX\n"
    desc: "response-rewrite"
#END
_EOC_'

Test example:

curl http://127.0.0.1:9080/
Hello APISIX

For more configuration examples, refer to the stand-alone documentation.
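As a further hedged sketch of the same format (the upstream id and the /get path are made-up examples, not taken from this repository), a stand-alone apisix.yaml can also declare upstreams as a top-level section and reference them from routes:

```yaml
routes:
  -
    uri: /get
    upstream_id: 1

upstreams:
  -
    id: 1
    nodes:
      "httpbin.org:80": 1
    type: roundrobin
#END
```

The trailing #END marker signals to APISIX that the file was written completely, so a partially written file is not loaded.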

How to run APISIX using etcd as configuration center

Solution 1

APISIX also supports using etcd as the configuration center. Before starting the APISIX container, start the etcd container with the following command, specifying the host network as the container network. Make sure that all the required ports (default: 9080, 9443, and 2379) are available and not used by other system processes.

  1. Start etcd.
docker run -d \
  --name etcd \
  --net host \
  -e ALLOW_NONE_AUTHENTICATION=yes \
  -e ETCD_ADVERTISE_CLIENT_URLS=http://127.0.0.1:2379 \
  bitnami/etcd:latest
  2. Start APISIX.
docker run -d \
  --name apache-apisix \
  --net host \
  apache/apisix

Solution 2

Before starting the APISIX container, we need to create a Docker virtual network and start the etcd container.

  1. Create a network, view its subnet address, and start etcd:
docker network create apisix-network --driver bridge && \
docker network inspect -v apisix-network && \
docker run -d --name etcd \
  --network apisix-network \
  -p 2379:2379 \
  -p 2380:2380 \
  -e ALLOW_NONE_AUTHENTICATION=yes \
  -e ETCD_ADVERTISE_CLIENT_URLS=http://127.0.0.1:2379 \
  bitnami/etcd:latest
  2. The output of the previous step shows the subnet address. Create an APISIX configuration file in the current directory, setting allow_admin to the subnet address obtained in step 1.
cat << EOF > $(pwd)/config.yaml
deployment:
  role: traditional
  role_traditional:
    config_provider: etcd
  admin:
    allow_admin:
      - 0.0.0.0/0  # Please set it to the subnet address you obtained.
                  # If not set, by default all IP access is allowed.
  etcd:
    host:
      - "http://etcd:2379"
    prefix: "/apisix"
    timeout: 30
EOF
  3. Start APISIX, mounting the configuration file created in the previous step.
 docker run -d --name apache-apisix \
  --network apisix-network \
  -p 9080:9080 \
  -p 9180:9180 \
  -v $(pwd)/config.yaml:/usr/local/apisix/conf/config.yaml \
  apache/apisix
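The subnet value for allow_admin comes from the `docker network inspect` output in step 1. As a small sketch of how to pull it out automatically (run here against sample JSON rather than a live Docker daemon, so the exact field layout is an assumption matching typical inspect output):

```shell
# Sample of the IPAM block that `docker network inspect apisix-network` prints;
# with a live daemon you would pipe the real command output instead.
sample='[{"Name":"apisix-network","IPAM":{"Config":[{"Subnet":"172.18.0.0/16","Gateway":"172.18.0.1"}]}}]'

# Extract the first "Subnet" value -- this is what allow_admin should be set to.
subnet=$(printf '%s' "$sample" | sed -n 's/.*"Subnet":"\([^"]*\)".*/\1/p')
echo "$subnet"
```

Against a live daemon, replacing the `printf` with `docker network inspect apisix-network` gives the same result.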

Test example

Check that APISIX is running properly by running the following command on the host.

curl "http://127.0.0.1:9180/apisix/admin/services/" \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'

A response like the following indicates that APISIX is running successfully:

{
  "total": 0,
  "list": []
}
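Once the Admin API responds, you can manage routes through it. Below is a hedged sketch only (the route id, /get path, and httpbin.org upstream are hypothetical examples; the key is APISIX's well-known default from config-default.yaml): it builds the JSON body for a route and prints it, with the curl call shown commented out so the snippet runs without a live gateway.

```shell
# Default admin key from config-default.yaml; change it in any real deployment.
ADMIN_KEY=edd1c9f034335f136f87ad84b625c8f1

# Hypothetical route: proxy /get to httpbin.org with round-robin balancing.
route_body='{
  "uri": "/get",
  "upstream": {
    "type": "roundrobin",
    "nodes": { "httpbin.org:80": 1 }
  }
}'

# With APISIX running, send it like this:
# curl -s http://127.0.0.1:9180/apisix/admin/routes/1 \
#   -H "X-API-KEY: ${ADMIN_KEY}" -X PUT -d "$route_body"
printf '%s\n' "$route_body"
```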

To modify the default configuration of APISIX, use the following command to enter the APISIX container and edit the configuration file ./conf/config.yaml; the changes take effect after reloading APISIX. For details, refer to ./conf/config-default.yaml.

docker exec -it apache-apisix bash

For more information, refer to the APISIX website and the APISIX documentation. If you run into problems, you can ask for help on Slack or the mailing list.

Reload APISIX in a running container

If you change your custom configuration, you can reload APISIX (without downtime) by issuing:

docker exec -it apache-apisix apisix reload

This will run the apisix reload command in your container.

Kubernetes Ingress

Beyond the deployment options above, the APISIX project also provides apisix-ingress-controller, which makes it more convenient to deploy and use APISIX in a Kubernetes environment.
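As a hedged illustration only (the route name, service name, and port below are hypothetical, and the fields follow the apisix-ingress-controller v2 CRD), a route managed by the ingress controller is declared as a Kubernetes resource:

```yaml
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: httpbin-route
spec:
  http:
    - name: rule1
      match:
        paths:
          - /get
      backends:
        - serviceName: httpbin
          servicePort: 80
```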

License

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

apisix-docker's People

Contributors

alinsran, baoyuantop, biubiue, bzp2010, fukiki, guitu168, gxthrj, hazel6869, iamayushdas, imjoey, jbampton, kamly, kayx23, leslie-tsang, linsir, liuxiran, moonming, nic-chen, shreemaan-abhishek, shuaijinchao, sn0rt, soulbird, spacewander, tao12345666333, tinywan, totemofwolf, tzssangglass, vkill, xunzhuo, yiyiyimu


apisix-docker's Issues

nginx: [emerg] invalid number of arguments in "resolver_timeout" directive in /usr/local/apisix/conf/nginx.conf:54

Step 1

docker pull apache/apisix:1.2-alpine

Step 2

docker run --name gateway \
 -v `pwd`/example/apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml \
 -v `pwd`/example/apisix_log:/usr/local/apisix/logs  \
 -v `pwd`/example/dashboard:/usr/local/apisix/dashboard \
 -p 9080:9080 \
 -p 9443:9443 \
 -d apache/apisix:1.2-centos

The container exits; checking the log shows:

nginx: [emerg] invalid number of arguments in "resolver_timeout" directive in /usr/local/apisix/conf/nginx.conf:54

error parsing HTTP 403 response body

Using the **-region mirror image:

 image: gcr.azk8s.cn/etcd-development/etcd:v3.3.12

Reproduced on multiple versions:

macos 10.15.6
centos7.5

error message:

 error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx/1.14.0 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n"

No Docker logs.

bug: host cannot reach the APISIX Docker container

A 403 error occurs when the host sends requests to the APISIX Docker container,
because neither /example/apisix_conf/config.yaml nor the config.yaml created by gen-config-yaml.sh contains the following configuration:

allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
   - 0.0.0.0/0

relate #84

docker-compose fails with an error caused by a non-existent URL

Error: Error fetching file: Failed downloading https://github.com/iresty/apisix/raw/master/rockspec/apisix-master-0.rockspec - apisix-master-0.rockspec
Service 'apisix' failed to build: The command '/bin/sh -c apk add --no-cache --virtual .builddeps automake autoconf libtool pkgconfig cmake git && luarocks install https://github.com/iresty/apisix/raw/master/rockspec/apisix-${APISIX_VERSION}-0.rockspec --tree=/usr/local/apisix/deps && cp /usr/local/apisix/deps/lib/luarocks/rocks-5.1/apisix/${APISIX_VERSION}-0/bin/apisix /usr/bin/ && bin='#! /usr/local/openresty/luajit/bin/luajit' && sed -i "1s@.*@$bin@" /usr/bin/apisix && apk del .builddeps' returned a non-zero code: 1
Failed to deploy 'Compose: example': docker-compose process finished with exit code 1

The error occurs when executing the Dockerfile, which references a non-existent URL (https://github.com/iresty/apisix/raw/master/rockspec/apisix-${APISIX_VERSION}-0.rockspec) in the following place:

# for APISIX
RUN apk add --no-cache --virtual .builddeps \
    automake \
    autoconf \
    libtool \
    pkgconfig \
    cmake \
    git \
    && luarocks install https://github.com/iresty/apisix/raw/master/rockspec/apisix-${APISIX_VERSION}-0.rockspec --tree=/usr/local/apisix/deps \
    && cp /usr/local/apisix/deps/lib/luarocks/rocks-5.1/apisix/${APISIX_VERSION}-0/bin/apisix /usr/bin/ \
    && bin='#! /usr/local/openresty/luajit/bin/luajit' \
    && sed -i "1s@.*@$bin@" /usr/bin/apisix \
    && apk del .builddeps

/usr/bin/apisix:520: in function </usr/bin/apisix:497>

etcd run

docker run -it --name etcd-server \
-v /f/github/docker-apisix/example/etcd_conf/etcd.conf.yml:/opt/bitnami/etcd/conf/etcd.conf.yml \
-p 2379:2379 \
-p 2380:2380  \
--env ALLOW_NONE_AUTHENTICATION=yes bitnami/etcd:3.4.2

apisix run

docker run  --name test-api-gateway \
-v /f/github/docker-apisix/example/apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml  \
-v /f/github/docker-apisix/example/apisix_log:/usr/local/apisix/logs  \
-p 8880:9080 \
-p  8083:9443 registry.cn-beijing.aliyuncs.com/tinywan/apisix:alpine

/usr/local/openresty/luajit/bin/luajit: /usr/bin/apisix:520: curl http://192.168.1.3:2379/v2/keys/apisix/routes?prev_exist=false -X PUT -d dir=true --connect-timeout 1 --max-time 2 --retry 1 2>&1
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0404 page not found
100    27  100    19  100     8   1900    800 --:--:-- --:--:-- --:--:--  2700
stack traceback:
        [C]: in function 'error'
        /usr/bin/apisix:520: in function </usr/bin/apisix:497>
        /usr/bin/apisix:575: in main chunk
        [C]: at 0x55d94683a2c0

config.yaml

etcd:
  host: "http://192.168.1.3:2379"   # etcd address

curl result

$ curl http://192.168.1.3:2379/v2/keys/apisix/routes?prev_exist=false -X PUT -d dir=true --connect-timeout 1 --max-time 2 --retry 1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    27  100    19  100     8   2111    888 --:--:-- --:--:-- --:--:--  3375404 page not found


docker compose example start failed

sh gen-config-yaml.sh && docker-compose -p docker-apisix up -d

ERROR: for docker-apisix_apisix_1 UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for apisix UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

etcd-server could not be resolved (3: Host not found)

I created a network named gateway, then had etcd-server and test-api-gateway join this network.

docker exec -it test-api-gateway sh
/usr/local/apisix # ping etcd-server
PING etcd-server (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.272 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.071 ms
64 bytes from 172.19.0.2: seq=2 ttl=64 time=0.068 ms
64 bytes from 172.19.0.2: seq=3 ttl=64 time=0.116 ms
64 bytes from 172.19.0.2: seq=4 ttl=64 time=0.078 ms
64 bytes from 172.19.0.2: seq=5 ttl=64 time=0.066 ms

But the following request returns the error from the title:

curl http://127.0.0.1:8080/apisix/admin/services/1 -X PUT -d '
{
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:9081": 1
        }
    }
}'
{"error_msg":"etcd-server could not be resolved (3: Host not found)"}

Specifying the IP directly works.

test: need to add more tests in CI

There are few test cases in the CI of the apisix-docker project.
For both the all-in-one and alpine images, more tests should be added, such as using the Admin API to add, delete, modify, and query routes, and verifying the results.

after configuring SSL, cannot connect

2020/12/14 09:42:35 [error] 54#54: 960335 [lua] init.lua:180: http_ssl_phase(): failed to fetch ssl config: failed to fetch SSL certificate: not found, context: ssl_certificate_by_lua, client: 10.112.0.116, server: 0.0.0.0:9443
2020/12/14 09:42:58 [error] 54#54: 963324 [lua] init.lua:180: http_ssl_phase(): failed to fetch ssl config: failed to fetch SSL certificate: not found, context: ssl_certificate_by_lua, client: 10.112.0.116, server: 0.0.0.0:9443

the config.yaml is below:

apisix:
  node_listen: 9080              # APISIX listening port
  enable_ipv6: false

  allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
    - 0.0.0.0/0              # We need to restrict ip access rules for security. 0.0.0.0/0 is for test.

  admin_key:
    - name: "admin"
      key: edd1c9f034335f136f87ad84b625c8f1
      role: admin                 # admin: manage all configuration data
                                  # viewer: only can view configuration data
    - name: "viewer"
      key: 4054f7cf07e344346cd3f287985e76a2
      role: viewer
  ssl:
    enable: true                  # ssl is disabled by default
                                  # enable it to use your own cert and key
    enable_http2: true
    listen_port: 9443
    ssl_trusted_certificate: /usr/local/apisix/conf/cert/ca.pem # Specifies a file path with trusted CA certificates in the PEM format
                                                # used to verify the certificate when APISIX needs to do SSL/TLS handshaking
                                                # with external services (e.g. etcd)
    ssl_cert: /usr/local/apisix/conf/cert/server.pem
    ssl_cert_key: /usr/local/apisix/conf/cert/server.key
    ssl_protocols: "TLSv1.2 TLSv1.3"
    ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
    ssl_session_tickets: false              #  disable ssl_session_tickets by default for 'ssl_session_tickets' would make Perfect Forward Secrecy useless.
                                            #  ref: https://github.com/mozilla/server-side-tls/issues/135
    key_encrypt_salt: "edd1c9f0985e76a2"    #  If not set, will save origin ssl key into etcd.
                                            #  If set this, must be a string of length 16. And it will encrypt ssl key with AES-128-CBC
                                            #  !!! So do not change it after saving your ssl, it can't decrypt the ssl keys have be saved if you change !!
etcd:
  host:                           # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
    - "http://gistack-etcd:2379"     # multiple etcd address
  prefix: "/apisix"               # apisix configurations prefix
  timeout: 30                     # 30 seconds
docker ps :
dbb324ca4312        registry.cn-beijing.aliyuncs.com/gisuni/apisix:2.1-centos      "sh -c '/usr/bin/api…"   2 hours ago         Up 2 hours            0.0.0.0:9080->9080/tcp, 0.0.0.0:9443->9443/tcp   gistack-apisix
59c37ca479d6        registry.cn-beijing.aliyuncs.com/gisuni/etcd:3.4.9             "/entrypoint.sh etcd"    2 hours ago         Up 2 hours            0.0.0.0:2379->2379/tcp, 2380/tcp                 gistack-etcd

All the images used are official images.

discard openresty image

We have to drop the openresty base image and build directly from alpine, because openresty has no official image on Docker Hub.

request help: Docker deployment, cannot access the dashboard! Version 2.1

Issue description

Installation method: see https://github.com/apache/apisix-docker/blob/master/manual.md

Environment

Steps (using the configuration files under https://github.com/apache/apisix-docker/example/):

  1. docker pull bitnami/etcd

  2. docker pull apache/apisix

  3. docker network create --driver=bridge --subnet=172.18.0.0/16 --ip-range=172.18.5.0/24 --gateway=172.18.5.254 apisix

  4. docker run -it --name etcd-server -v /d/Work/docker-instance/apisix-docker/example/etcd_conf/etcd.conf.yml:/opt/bitnami/etcd/conf/etcd.conf.yml -p 2379:2379 -p 2380:2380 --network apisix --ip 172.18.5.10 --env ALLOW_NONE_AUTHENTICATION=yes bitnami/etcd

  5. docker run --name apisix -v /d/Work/docker-instance/apisix-docker/example/apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml -v /d/Work/docker-instance/apisix-docker/example/apisix_log:/usr/local/apisix/logs -p 9080:9080 -p 9443:9443 --network apisix --ip 172.18.5.11 -d apache/apisix
    Version and running commands: sh-4.2# apisix version; ps -ef | grep nginx
    2.1
    root 36 1 0 08:04 ? 00:00:00 nginx: master process /usr/local/openresty/bin/openresty -p /usr/local/apisix -g daemon off;
    nobody 37 36 0 08:04 ? 00:00:04 nginx: worker process
    nobody 38 36 0 08:04 ? 00:00:04 nginx: worker process
    nobody 39 36 0 08:04 ? 00:00:00 nginx: cache manager process
    root 41 36 0 08:04 ? 00:00:04 nginx: privileged agent process
    root 84 48 0 08:51 pts/1 00:00:00 grep nginx

nginx.conf :

# Configuration File - Nginx Server Configs
# This is a read-only file, do not try to modify it.

master_process on;

worker_processes auto;
worker_cpu_affinity auto;

error_log logs/error.log warn;
pid logs/nginx.pid;

worker_rlimit_nofile 20480;

events {
    accept_mutex off;
    worker_connections 10620;
}

worker_rlimit_core  16G;

worker_shutdown_timeout 240s;

env APISIX_PROFILE;


# main configuration snippet starts


# main configuration snippet ends


http {
    lua_package_path  "$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/usr/local/apisix/?.lua;/usr/local/apisix/?/init.lua;;/usr/local/apisix/?.lua;./?.lua;/usr/local/openresty/luajit/share/luajit-2.1.0-beta3/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/openresty/luajit/share/lua/5.1/?.lua;/usr/local/openresty/luajit/share/lua/5.1/?/init.lua;";
    lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";

    lua_shared_dict plugin-limit-req     10m;
    lua_shared_dict plugin-limit-count   10m;
    lua_shared_dict prometheus-metrics   10m;
    lua_shared_dict plugin-limit-conn    10m;
    lua_shared_dict upstream-healthcheck 10m;
    lua_shared_dict worker-events        10m;
    lua_shared_dict lrucache-lock        10m;
    lua_shared_dict skywalking-tracing-buffer    100m;
    lua_shared_dict balancer_ewma        10m;
    lua_shared_dict balancer_ewma_locks  10m;
    lua_shared_dict balancer_ewma_last_touched_at 10m;
    lua_shared_dict plugin-limit-count-redis-cluster-slot-lock 1m;
    lua_shared_dict tracing_buffer       10m; # plugin: skywalking
    lua_shared_dict plugin-api-breaker   10m;

    # for openid-connect plugin
    lua_shared_dict discovery             1m; # cache for discovery metadata documents
    lua_shared_dict jwks                  1m; # cache for JWKs
    lua_shared_dict introspection        10m; # cache for JWT verification results

    # for custom shared dict

    # for proxy cache
    proxy_cache_path /tmp/disk_cache_one levels=1:2 keys_zone=disk_cache_one:50m inactive=1d max_size=1G use_temp_path=off;

    # for proxy cache
    map $upstream_cache_zone $upstream_cache_zone_info {
        disk_cache_one /tmp/disk_cache_one,1:2;
    }

    lua_ssl_verify_depth 5;
    ssl_session_timeout 86400;

    underscores_in_headers on;

    lua_socket_log_errors off;

    resolver 127.0.0.11 valid=30;
    resolver_timeout 5;

    lua_http10_buffering off;

    lua_regex_match_limit 100000;
    lua_regex_cache_max_entries 8192;

    log_format main escape=default '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time "$upstream_scheme://$upstream_host$upstream_uri"';

    access_log logs/access.log main buffer=16384 flush=3;
    open_file_cache  max=1000 inactive=60;
    client_max_body_size 0;
    keepalive_timeout 60s;
    client_header_timeout 60s;
    client_body_timeout 60s;
    send_timeout 10s;

    server_tokens off;

    include mime.types;
    charset utf-8;

    real_ip_header X-Real-IP;

    set_real_ip_from 127.0.0.1;
    set_real_ip_from unix:;

    # http configuration snippet starts


    # http configuration snippet ends

    upstream apisix_backend {
        server 0.0.0.1;
        balancer_by_lua_block {
            apisix.http_balancer_phase()
        }

        keepalive 320;
    }

    init_by_lua_block {
        require "resty.core"
        apisix = require("apisix")

        local dns_resolver = { "127.0.0.11", }
        local args = {
            dns_resolver = dns_resolver,
        }
        apisix.http_init(args)
    }

    init_worker_by_lua_block {
        apisix.http_init_worker()
    }


    server {
        listen 9080 reuseport;



        # http server configuration snippet starts


        # http server configuration snippet ends

        set $upstream_scheme             'http';
        set $upstream_host               $host;
        set $upstream_uri                '';

        location = /apisix/nginx_status {
            allow 127.0.0.0/24;
            deny all;
            access_log off;
            stub_status;
        }

        location /apisix/admin {
                allow 0.0.0.0/0;
                deny all;

            content_by_lua_block {
                apisix.http_admin()
            }
        }

        location /apisix/dashboard {
                allow 0.0.0.0/0;
                deny all;

            alias dashboard/;

            try_files $uri $uri/index.html /index.html =404;
        }


        location / {
            set $upstream_mirror_host        '';
            set $upstream_upgrade            '';
            set $upstream_connection         '';

            access_by_lua_block {
                apisix.http_access_phase()
            }

            proxy_http_version 1.1;
            proxy_set_header   Host              $upstream_host;
            proxy_set_header   Upgrade           $upstream_upgrade;
            proxy_set_header   Connection        $upstream_connection;
            proxy_set_header   X-Real-IP         $remote_addr;
            proxy_pass_header  Date;

            ### the following x-forwarded-* headers is to send to upstream server

            set $var_x_forwarded_for        $remote_addr;
            set $var_x_forwarded_proto      $scheme;
            set $var_x_forwarded_host       $host;
            set $var_x_forwarded_port       $server_port;

            if ($http_x_forwarded_for != "") {
                set $var_x_forwarded_for "${http_x_forwarded_for}, ${realip_remote_addr}";
            }
            if ($http_x_forwarded_proto != "") {
                set $var_x_forwarded_proto $http_x_forwarded_proto;
            }
            if ($http_x_forwarded_host != "") {
                set $var_x_forwarded_host $http_x_forwarded_host;
            }
            if ($http_x_forwarded_port != "") {
                set $var_x_forwarded_port $http_x_forwarded_port;
            }

            proxy_set_header   X-Forwarded-For      $var_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto    $var_x_forwarded_proto;
            proxy_set_header   X-Forwarded-Host     $var_x_forwarded_host;
            proxy_set_header   X-Forwarded-Port     $var_x_forwarded_port;

            ###  the following configuration is to cache response content from upstream server

            set $upstream_cache_zone            off;
            set $upstream_cache_key             '';
            set $upstream_cache_bypass          '';
            set $upstream_no_cache              '';
            set $upstream_hdr_expires           '';
            set $upstream_hdr_cache_control     '';

            proxy_cache                         $upstream_cache_zone;
            proxy_cache_valid                   any 10s;
            proxy_cache_min_uses                1;
            proxy_cache_methods                 GET HEAD;
            proxy_cache_lock_timeout            5s;
            proxy_cache_use_stale               off;
            proxy_cache_key                     $upstream_cache_key;
            proxy_no_cache                      $upstream_no_cache;
            proxy_cache_bypass                  $upstream_cache_bypass;

            proxy_hide_header                   Cache-Control;
            proxy_hide_header                   Expires;
            add_header      Cache-Control       $upstream_hdr_cache_control;
            add_header      Expires             $upstream_hdr_expires;
            add_header      Apisix-Cache-Status $upstream_cache_status always;

            proxy_pass      $upstream_scheme://apisix_backend$upstream_uri;

            mirror          /proxy_mirror;

            header_filter_by_lua_block {
                apisix.http_header_filter_phase()
            }

            body_filter_by_lua_block {
                apisix.http_body_filter_phase()
            }

            log_by_lua_block {
                apisix.http_log_phase()
            }
        }

        location @grpc_pass {

            access_by_lua_block {
                apisix.grpc_access_phase()
            }

            grpc_set_header   Content-Type application/grpc;
            grpc_socket_keepalive on;
            grpc_pass         grpc://apisix_backend;

            header_filter_by_lua_block {
                apisix.http_header_filter_phase()
            }

            body_filter_by_lua_block {
                apisix.http_body_filter_phase()
            }

            log_by_lua_block {
                apisix.http_log_phase()
            }
        }

        location = /proxy_mirror {
            internal;

            if ($upstream_mirror_host = "") {
                return 200;
            }

            proxy_http_version 1.1;
            proxy_set_header Host $upstream_host;
            proxy_pass $upstream_mirror_host$request_uri;
        }
    }
}

  • apisix version (cmd: apisix version): 2.1
  • OS: Docker

WARNING: Ignoring http://mirrors.tuna.tsinghua.edu.cn/alpine/v3.9/main/x86_64/APKINDEX.tar.gz: network error (check Internet connection and firewall)

When I attempt to build Docker artifact from master

docker build -t apisix:master-alpine -f alpine/Dockerfile alpine

...OR I try to build from release

docker build -t apisix:0.8-alpine --build-arg APISIX_VERSION=0.8 -f alpine/Dockerfile alpine

I get the following issue

...
Step 14/22 : RUN apk add --no-cache --virtual .build-deps         build-base         coreutils         curl         gd-dev         geoip-dev         libxslt-dev         linux-headers         make         perl-dev         readline-dev         zlib-dev         ${RESTY_ADD_PACKAGE_BUILDDEPS}     && apk add --no-cache         gd         geoip         libgcc         libxslt         zlib     && cd /tmp     && curl -fSL https://www.openssl.org/source/openssl-${RESTY_OPENSSL_VERSION}.tar.gz -o openssl-${RESTY_OPENSSL_VERSION}.tar.gz     && tar xzf openssl-${RESTY_OPENSSL_VERSION}.tar.gz     && cd openssl-${RESTY_OPENSSL_VERSION}     && if [ $(echo ${RESTY_OPENSSL_VERSION} | cut -c 1-5) = "1.1.1" ] ; then         echo 'patching OpenSSL 1.1.1 for OpenResty'         && curl -s https://raw.githubusercontent.com/openresty/openresty/master/patches/openssl-1.1.1c-sess_set_get_cb_yield.patch | patch -p1 ;     fi     && if [ $(echo ${RESTY_OPENSSL_VERSION} | cut -c 1-5) = "1.1.0" ] ; then         echo 'patching OpenSSL 1.1.0 for OpenResty'         && curl -s https://raw.githubusercontent.com/openresty/openresty/ed328977028c3ec3033bc25873ee360056e247cd/patches/openssl-1.1.0j-parallel_build_fix.patch | patch -p1         && curl -s https://raw.githubusercontent.com/openresty/openresty/master/patches/openssl-1.1.0d-sess_set_get_cb_yield.patch | patch -p1 ;     fi     && ./config       no-threads shared zlib -g       enable-ssl3 enable-ssl3-method       --prefix=/usr/local/openresty/openssl       --libdir=lib       -Wl,-rpath,/usr/local/openresty/openssl/lib     && make -j${RESTY_J}     && make -j${RESTY_J} install_sw     && cd /tmp     && curl -fSL https://ftp.pcre.org/pub/pcre/pcre-${RESTY_PCRE_VERSION}.tar.gz -o pcre-${RESTY_PCRE_VERSION}.tar.gz     && tar xzf pcre-${RESTY_PCRE_VERSION}.tar.gz     && cd /tmp/pcre-${RESTY_PCRE_VERSION}     && ./configure         --prefix=/usr/local/openresty/pcre         --disable-cpp         --enable-jit         --enable-utf         
--enable-unicode-properties     && make -j${RESTY_J}     && make -j${RESTY_J} install     && cd /tmp     && curl -fSL https://github.com/openresty/openresty/releases/download/v${RESTY_VERSION}/openresty-${RESTY_VERSION}.tar.gz -o openresty-${RESTY_VERSION}.tar.gz     && tar xzf openresty-${RESTY_VERSION}.tar.gz     && cd /tmp/openresty-${RESTY_VERSION}     && eval ./configure -j${RESTY_J} ${_RESTY_CONFIG_DEPS} ${RESTY_CONFIG_OPTIONS} ${RESTY_LUAJIT_OPTIONS}     && make -j${RESTY_J}     && make -j${RESTY_J} install     && cd /tmp     && rm -rf         openssl-${RESTY_OPENSSL_VERSION}.tar.gz openssl-${RESTY_OPENSSL_VERSION}         pcre-${RESTY_PCRE_VERSION}.tar.gz pcre-${RESTY_PCRE_VERSION}         openresty-${RESTY_VERSION}.tar.gz openresty-${RESTY_VERSION}     && apk del .build-deps     && ln -sf /dev/stdout /usr/local/openresty/nginx/logs/access.log     && ln -sf /dev/stderr /usr/local/openresty/nginx/logs/error.log
 ---> Running in 486f295972da
fetch http://mirrors.tuna.tsinghua.edu.cn/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://mirrors.tuna.tsinghua.edu.cn/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://mirrors.tuna.tsinghua.edu.cn/alpine/v3.9/main/x86_64/APKINDEX.tar.gz: network error (check Internet connection and firewall)
WARNING: Ignoring http://mirrors.tuna.tsinghua.edu.cn/alpine/v3.9/community/x86_64/APKINDEX.tar.gz: network error (check Internet connection and firewall)
ERROR: unsatisfiable constraints:

My environment as follows

Docker version 19.03.4, build 9013bf5
macOS Mojave 10.14.6 (18G1012)

I am based in the United States in California.
I wonder if the file for APKINDEX.tar.gz could be hosted on Apache CDN?

Error: Could not load rockspec file /tmp/luarocks_luarocks-rockspec-apisix-master-0-kbdBgI/apisix-master-0.rockspec (Error loading file: [string "/tmp/luarocks_luarocks-rockspec-apisix-master..."]:7: unexpected symbol near '<')

OK: 326 MiB in 81 packages

ERROR: Service 'apisix' failed to build : The command '/bin/sh -c set -x && /bin/sed -i 's,http://dl-cdn.alpinelinux.org,https://mirrors.aliyun.com,g' /etc/apk/repositories && apk add --no-cache --virtual .builddeps automake autoconf libtool pkgconfig cmake git && luarocks install https://github.com/apache/apisix/blob/master/rockspec/apisix-${APISIX_VERSION}-0.rockspec --tree=/usr/local/apisix/deps && cp -v /usr/local/apisix/deps/lib/luarocks/rocks-5.1/apisix/${APISIX_VERSION}-0/bin/apisix /usr/bin/ && bin='#! /usr/local/openresty/luajit/bin/luajit\npackage.path = "/usr/local/apisix/?.lua;" .. package.path' && sed -i "1s@.*@$bin@" /usr/bin/apisix && mv /usr/local/apisix/deps/share/lua/5.1/apisix /usr/local/apisix && apk del .builddeps build-base make unzip' returned a non-zero code: 1
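The "unexpected symbol near '<'" error above usually means luarocks downloaded GitHub's HTML page instead of the rockspec: the failing command uses the `/blob/` page URL rather than the `/raw/` file URL. A minimal sketch of the fix (the rewrite itself, with the actual install command left commented since it needs network access and the build environment):

```shell
# The /blob/ URL serves GitHub's HTML page (hence the "unexpected symbol
# near '<'" when luarocks tries to parse it as Lua);
# the /raw/ URL serves the rockspec file itself.
blob_url="https://github.com/apache/apisix/blob/master/rockspec/apisix-master-0.rockspec"
raw_url=$(echo "$blob_url" | sed 's,/blob/,/raw/,')
echo "$raw_url"
# then, in the Dockerfile:
# luarocks install "$raw_url" --tree=/usr/local/apisix/deps
```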

Accessing port 8080 returns "404 Not Found"

I use Docker to start APISIX and then visit http://127.0.0.1:8080, but the response is "404 Not Found".
Is there a built-in dashboard in the APISIX Docker image?

My command to start the APISIX container is as follows:

docker run --name test-api-gateway -v ${PWD}/example/apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml -v ${PWD}/example/apisix_log:/usr/local/apisix/logs --link etcd-server -p 8080:9080 -p 8083:9443 -d iresty/apisix
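A plain 404 from APISIX generally means the gateway is running but no route matches the request, and the APISIX image does not bundle a dashboard (that lives in the separate apisix-dashboard project). With the mapping `-p 8080:9080` above, the gateway is reached via host port 8080. An illustrative sketch of registering a catch-all route (the curl call is commented because it needs a live gateway; the key is APISIX's well-known example key and should be replaced with the admin_key from your config.yaml):

```shell
# Build the route payload; with this route registered, requests would be
# proxied to httpbin.org instead of returning 404.
payload='{"uri":"/*","upstream":{"type":"roundrobin","nodes":{"httpbin.org:80":1}}}'
echo "$payload"
# curl -X PUT "http://127.0.0.1:8080/apisix/admin/routes/1" \
#   -H "X-API-KEY: edd1c9f034335f136f87ad84b625c8f1" -d "$payload"
```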


cannot access port 9080

# docker-compose -p docker-apisix up -d
Creating network "docker-apisix_apisix" with driver "bridge"
Creating docker-apisix_etcd_1 ... done
Creating docker-apisix_web1_1 ... done
Creating docker-apisix_web2_1 ... done
Creating docker-apisix_apisix_1 ... done

After the images are up and running:


# curl "http://127.0.0.1:9080/apisix/admin/services/" -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'
curl: (56) Recv failure: Connection reset by peer

# ping  http://127.0.0.1:9080
ping: http://127.0.0.1:9080: Name or service not known
# telnet  http://127.0.0.1:9080
telnet: could not resolve http://127.0.0.1:9080/telnet: Name or service not known

# lsof -i:9080
COMMAND     PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
docker-pr 26200 root    4u  IPv6 932639246      0t0  TCP *:9080 (LISTEN)
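Note that the last two commands fail for a different reason than the curl: ping and telnet expect a bare host name (and, for telnet, a separate port argument), not a URL, which is why they report "Name or service not known". A small sketch splitting the URL into the pieces those tools expect:

```shell
# ping/telnet take a bare host, not a URL; extract host and port first
url="http://127.0.0.1:9080"
host=$(echo "$url" | sed -E 's#^[a-z]+://([^:/]+).*#\1#')
port=$(echo "$url" | sed -E 's#^[a-z]+://[^:/]+:([0-9]+).*#\1#')
echo "$host $port"
# then: ping -c 1 "$host"    or: telnet "$host" "$port"
```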

request help: cannot find the apache/apisix:2.0-alpine image on Docker Hub

Issue description

I can't find the apache/apisix:2.0-alpine image on hub.docker.com after APISIX 2.0 was released. So I wonder whether the CentOS version is now more recommended than the Alpine version for production environments.

Environment

  • apisix version (cmd: apisix version): 2.0
  • OS:

debug.yaml can't be found

Steps to reproduce:

 $ docker pull apache/apisix:2.1-centos
 $ docker run --rm -itd apache/apisix:2.1-centos
 $ docker exec -it quizzical_grothendieck bash
[root@cc6dc5231827 apisix]# cd conf/
[root@cc6dc5231827 conf]# ls
cert  config-default.yaml  config.yaml  mime.types # <-- there should be a debug.yaml under conf/

Docker compose not working (nginx: [emerg] invalid parameter: valid= in /usr/local/apisix/conf/nginx.conf:48).

Hi All

I got the latest clone and tried to run apisix with docker-compose in the example folder.

I am getting the following log:

Deprecated: apisix.real_ip_from has been moved to nginx_config.http.real_ip_from. apisix.real_ip_from will be removed in the future version. Please use nginx_config.http.real_ip_from first.

nginx: [emerg] invalid parameter: valid= in /usr/local/apisix/conf/nginx.conf:48

Deprecated: apisix.real_ip_header has been moved to nginx_config.http.real_ip_header. apisix.real_ip_header will be removed in the future version. Please use nginx_config.http.real_ip_header first.

Illegal instruction (core dumped)

Hi, I have an error I don't know how to deal with.

My steps:

  1. docker run --rm -it apache/apisix:latest /bin/sh
  2. apisix version / apisix init
    then "Illegal instruction (core dumped)"


The host networking driver is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

The docker-apisix docker-compose.yml currently uses network_mode: host, but the host networking driver only works on Linux hosts; it is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server. See https://docs.docker.com/network/host/

network_mode: host is used for the best network performance, but if you run Docker for development or a quick demo on Mac/Windows, it does not work.

This means we can't reach the container services from the host machine when starting them with network_mode: host on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

A possible solution, which is what I have tried:

Change docker-compose.yml to use ports and networks:

    ports:
      - "9080:9080"
      - "9443:9443"
    networks:
      - apisix
    # network_mode: host

Then edit config.yaml to add the allowed IPs, and change the etcd host to use the container service name:

  allow_admin:
    - 127.0.0.0/24
    - 192.168.0.0/16

etcd:
  # host: "http://127.0.0.1:2379"
  host: "http://etcd:2379"
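The etcd host change can also be scripted. A minimal sketch (GNU sed; a scratch file stands in for example/apisix_conf/config.yaml, and the service name "etcd" is whatever your compose file defines):

```shell
# work on a scratch copy; in the real setup this is example/apisix_conf/config.yaml
cfg=$(mktemp)
printf 'etcd:\n  host: "http://127.0.0.1:2379"\n' > "$cfg"
# point APISIX at the compose service name instead of loopback
sed -i 's#http://127.0.0.1:2379#http://etcd:2379#' "$cfg"
grep 'host:' "$cfg"
```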

the Admin API returns 404 Not Found

port_admin in config.yaml is 9180, but the dashboard's API BASE_URL is /, so the dashboard returns a 404 error.
I found this in nginx.conf:

    server {
        listen 9180;

        location /apisix/admin {
            allow 127.0.0.0/24;
            deny all;

            content_by_lua_block {
                apisix.http_admin()
            }
        }
    }
    server {
        listen 9080;
        listen 9443 ssl;
        ssl_certificate      cert/apisix.crt;
        ssl_certificate_key  cert/apisix.key;
        ssl_session_cache    shared:SSL:1m;

        location /apisix/dashboard {
            index index.html;
            allow 127.0.0.0/24;
            deny all;

            root ../;

            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Real-PORT $remote_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            try_files $uri $uri/ /index.html;
        }

A simple solution is to cancel (comment out) port_admin.
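Commenting out port_admin can be done mechanically as well. A minimal sketch (GNU sed; a scratch file stands in for /usr/local/apisix/conf/config.yaml):

```shell
# scratch copy standing in for /usr/local/apisix/conf/config.yaml
cfg=$(mktemp)
printf 'apisix:\n  port_admin: 9180\n' > "$cfg"
# comment out port_admin so the Admin API is served on the main 9080 listener
sed -i 's/^\([[:space:]]*\)port_admin:/\1# port_admin:/' "$cfg"
grep 'port_admin' "$cfg"
```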

error sudo docker build -f alpine-dev/Dockerfile .

Error: Error fetching file: Failed downloading https://github.com/apache/apisix/raw/master/rockspec - Failed downloading https://github.com/apache/apisix/raw/master/rockspec/apisix-master-0.rockspec - /root/.cache/luarocks/https___github.com_apache_apisix_raw_master_rockspec/apisix-master-0.rockspec
/usr/local/openresty/luajit/bin/luajit: /usr/local/openresty/luajit/share/lua/5.1/luarocks/cmd.lua:172: bad argument #1 to 'exit' (number expected, got string)
stack traceback:
[C]: in function 'exit'
/usr/local/openresty/luajit/share/lua/5.1/luarocks/cmd.lua:172: in function 'die'
/usr/local/openresty/luajit/share/lua/5.1/luarocks/cmd.lua:627: in function 'run_command'
/usr/local/openresty/luajit/bin/luarocks:38: in main chunk
[C]: at 0x5581e355f320
The command '/bin/sh -c set -x && /bin/sed -i 's,http://dl-cdn.alpinelinux.org,https://mirrors.aliyun.com,g' /etc/apk/repositories && apk add --no-cache --virtual .builddeps automake autoconf libtool pkgconfig cmake git && luarocks install https://github.com/apache/apisix/raw/master/rockspec/apisix-master-0.rockspec --tree=/usr/local/apisix/deps && cp -v /usr/local/apisix/deps/lib/luarocks/rocks-5.1/apisix/master-0/bin/apisix /usr/bin/ && bin='#! /usr/local/openresty/luajit/bin/luajit\npackage.path = "/usr/local/apisix/?.lua;" .. package.path' && sed -i "1s@.*@$bin@" /usr/bin/apisix && mv /usr/local/apisix/deps/share/lua/5.1/apisix /usr/local/apisix && apk del .builddeps build-base make unzip' returned a non-zero code: 1

feat: build docker image with source code

Issue description

Sometimes we need to add special logic for our own business, so we need to build the Docker image from source code. Maybe we should add a Dockerfile to the source repository.

Environment

  • apisix version (cmd: apisix version):
  • OS:

bug: when set `apisix.ssl.enable: true`, can not start APISIX with docker 2.1-alpine

Issue description

Environment

  • apisix version: Docker image apache/apisix:2.1-alpine
  • OS:

Minimal test code / Steps to reproduce the issue

  1. set apisix.ssl.enable: true in config.yaml

  2. the APISIX Docker container failed to start

  3. Then I copied the nginx_config item into config.yaml and restarted APISIX; it failed again and reported another error


What's the actual result? (including assertion message & call stack if applicable)

What's the expected result?

Unable to start port 9443 in docker

Issue description

It may be that SSL is not enabled in the config.yaml configuration; when I enable it, I get the following error:

missing ssl cert for ssl


My configuration is shown in the attached screenshot.

Refer: https://github.com/apache/apisix/blob/master/conf/config-default.yaml

Environment

docker-compose.yml

version: "3"

services:
  apisix:
    image: apache/apisix:2.1-alpine
    restart: always
    volumes:
      - ./apisix_log:/usr/local/apisix/logs
      - ./apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
    depends_on:
      - etcd
    ##network_mode: host
    ports:
      - "80:9080/tcp"
      - "443:9443/tcp"
#      - "9080:9080/tcp"
#      - "9443:9443/tcp"
    networks:
      apisix:
        ipv4_address: 172.18.5.11

  etcd:
    image: bitnami/etcd:3.4.9
    user: root
    restart: always
    volumes:
      - ./etcd_data:/etcd_data
    environment:
      ETCD_DATA_DIR: /etcd_data
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2379"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
    ports:
      - "2379:2379/tcp"
    networks:
      apisix:
        ipv4_address: 172.18.5.10

CI: fails to fetch the dashboard

+ git clone -b master https://github.com/apache/apisix.git /tmp/apisix
Cloning into '/tmp/apisix'...
+ cd /tmp/apisix
+ git submodule init
+ git submodule update
+ cd dashboard
/bin/sh: cd: line 1: can't cd to dashboard: No such file or directory

We need to change the script to fetch it from https://github.com/apache/apisix-dashboard instead.
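One way to patch the fetch script is a simple URL rewrite; an illustrative sketch of the change (the clone itself needs network access, so only the rewritten command is printed):

```shell
old='git clone -b master https://github.com/apache/apisix.git /tmp/apisix'
# clone the dashboard from its own repository instead of the (removed) submodule
new=$(echo "$old" | sed 's#apache/apisix\.git /tmp/apisix#apache/apisix-dashboard.git /tmp/apisix-dashboard#')
echo "$new"
```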

How to connect etcd cluster with tls

Environment description:
rke version: v1.2.1
kubernetes: v1.18.3
I use a Kubernetes and etcd cluster (3 nodes) built by RKE. The etcd endpoints are https://10.1.7.51:2379, https://10.1.7.52:2379, and https://10.1.7.53:2379.

Use: curl -k --cert /etc/kubernetes/ssl/kube-etcd-10-1-7-51.pem --key /etc/kubernetes/ssl/kube-etcd-10-1-7-51-key.pem https://10.1.7.51:2379/version
Output: {"etcdserver":"3.4.3","etcdcluster":"3.4.0"}

Start up: docker-compose up -d

Output:

got malformed version message: "" from etcd
got malformed version message: "" from etcd
got malformed version message: "" from etcd
got malformed version message: "" from etcd
got malformed version message: "" from etcd

This is my complete config.yaml:

apisix:
  node_listen: 80              # APISIX listening port
  enable_heartbeat: true
  enable_admin: true
  enable_admin_cors: true         # Admin API support CORS response headers.
  enable_debug: false
  enable_dev_mode: false          # Sets nginx worker_processes to 1 if set to true
  enable_reuseport: true          # Enable nginx SO_REUSEPORT switch if set to true.
  enable_ipv6: true
  config_center: etcd             # etcd: use etcd to store the config value
                                  # yaml: fetch the config value from local yaml file `/your_path/conf/apisix.yaml`

  #proxy_protocol:                 # Proxy Protocol configuration
  #  listen_http_port: 9181        # The port with proxy protocol for http, it differs from node_listen and port_admin.
                                   # This port can only receive http request with proxy protocol, but node_listen & port_admin
                                   # can only receive http request. If you enable proxy protocol, you must use this port to
                                   # receive http request with proxy protocol
  #  listen_https_port: 9182       # The port with proxy protocol for https
  #  enable_tcp_pp: true           # Enable the proxy protocol for tcp proxy, it works for stream_proxy.tcp option
  #  enable_tcp_pp_to_upstream: true # Enables the proxy protocol to the upstream server

  proxy_cache:                     # Proxy Caching configuration
    cache_ttl: 10s                 # The default caching time if the upstream does not specify the cache time
    zones:                         # The parameters of a cache
    - name: disk_cache_one         # The name of the cache, administrator can be specify
                                   # which cache to use by name in the admin api
      memory_size: 50m             # The size of shared memory, it's used to store the cache index
      disk_size: 1G                # The size of disk, it's used to store the cache data
      disk_path: "/tmp/disk_cache_one" # The path to store the cache data
      cache_levels: "1:2"           # The hierarchy levels of a cache
  #  - name: disk_cache_two
  #    memory_size: 50m
  #    disk_size: 1G
  #    disk_path: "/tmp/disk_cache_two"
  #    cache_levels: "1:2"

#  allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
#    - 127.0.0.0/24              # If we don't set any IP list, then any IP access is allowed by default.
#    - 172.17.0.0/24
  #   - "::/64"
  # port_admin: 9180              # use a separate port

  # Default token when use API to call for Admin API.
  # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
  # Disabling this configuration item means that the Admin API does not
  # require any authentication.
  admin_key:
    -
      name: "admin"
      key: edd1c9f034335f136f87ad84b625c8f1
      role: admin                 # admin: manage all configuration data
                                  # viewer: only can view configuration data
    -
      name: "viewer"
      key: 4054f7cf07e344346cd3f287985e76a2
      role: viewer
  router:
    http: 'radixtree_uri'         # radixtree_uri: match route by uri(base on radixtree)
                                  # radixtree_host_uri: match route by host + uri(base on radixtree)
    ssl: 'radixtree_sni'          # radixtree_sni: match route by SNI(base on radixtree)
  # stream_proxy:                 # TCP/UDP proxy
  #   tcp:                        # TCP proxy port list
  #     - 9100
  #     - 9101
  #   udp:                        # UDP proxy port list
  #     - 9200
  #     - 9211
  dns_resolver:                   # default DNS resolver, with disable IPv6 and enable local DNS
    - 10.43.0.10
    - 223.5.5.5
    - 1.1.1.1
    - 8.8.8.8
  dns_resolver_valid: 30          # valid time for dns result 30 seconds
  resolver_timeout: 5             # resolver timeout
  ssl:
    enable: true
    enable_http2: true
    listen_port: 443
    ssl_protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
    ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"

nginx_config:                     # config to render the template and generate nginx.conf
  error_log: "logs/error.log"
  error_log_level: "warn"         # warn,error
  worker_rlimit_nofile: 20480     # the number of files a worker process can open, should be larger than worker_connections
  event:
    worker_connections: 10620
  http:
    access_log: "logs/access.log"
    keepalive_timeout: 60s         # timeout during which a keep-alive client connection will stay open on the server side.
    client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
    client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
    send_timeout: 10s              # timeout for transmitting a response to the client, then the connection is closed
    underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
    real_ip_header: "X-Real-IP"    # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
    real_ip_from:                  # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
      - 127.0.0.1
      - 'unix:'
    #lua_shared_dicts:              # add custom shared cache to nginx.conf
    #  ipc_shared_dict: 100m        # custom shared cache, format: `cache-key: cache-size`

etcd:
  host:                           # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
#    - "http://172.18.5.10:2379"     # multiple etcd address
    - "https://10.1.7.51:2379"     # multiple etcd address
  prefix: "/apisix"               # apisix configurations prefix
  timeout: 3                      # 3 seconds

plugins:                          # plugin list
  - example-plugin
  - limit-req
  - limit-count
  - limit-conn
  - key-auth
  - basic-auth
  - prometheus
  - node-status
  - jwt-auth
  - zipkin
  - ip-restriction
  - grpc-transcode
  - serverless-pre-function
  - serverless-post-function
  - openid-connect
  - proxy-rewrite
  - redirect
  - response-rewrite
  - fault-injection
  - udp-logger
  - wolf-rbac
  - proxy-cache
  - tcp-logger
  - proxy-mirror
  - kafka-logger
  - cors
stream_plugins:
  - mqtt-proxy
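A likely cause of the repeated "got malformed version message" errors above is that APISIX cannot complete the TLS handshake with etcd, since only the https:// host is configured. Newer APISIX releases can present a client certificate to etcd; a hedged config sketch (the `etcd.tls` field names follow recent config-default.yaml versions and may differ in yours; the certificate paths are the RKE ones from this report and must be mounted into the APISIX container):

```yaml
etcd:
  host:
    - "https://10.1.7.51:2379"
  tls:
    cert: /etc/kubernetes/ssl/kube-etcd-10-1-7-51.pem      # etcd client certificate
    key: /etc/kubernetes/ssl/kube-etcd-10-1-7-51-key.pem   # etcd client key
```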
