apache / apisix-ingress-controller

APISIX Ingress Controller for Kubernetes

Home Page: https://apisix.apache.org/

License: Apache License 2.0

Dockerfile 0.17% Go 98.28% Makefile 0.58% Shell 0.88% Open Policy Agent 0.09%
ingress controller kubernetes k8s apigateway microservices api loadbalancing apisix devops

apisix-ingress-controller's Introduction

Apache APISIX for Kubernetes


Use Apache APISIX for Kubernetes Ingress.

All configurations in apisix-ingress-controller are defined with Kubernetes CRDs (Custom Resource Definitions). It supports configuring plugins, upstream service registration and discovery, load balancing, and more in Apache APISIX.
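For instance, a route can be declared through the ApisixRoute CRD. A minimal sketch (names here are illustrative, and field details vary across controller versions):

```yaml
apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: example-route
  namespace: default
spec:
  rules:
  - host: example.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: example-svc   # a Service in the same namespace (assumed name)
          servicePort: 80
        path: /api/*                 # prefix match
```

Applying this manifest with `kubectl apply` is all that is needed; the controller translates it into APISIX route and upstream objects.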

apisix-ingress-controller is an Apache APISIX control plane component. Currently it serves Kubernetes clusters. In the future, we plan to split out the submodule to adapt to more deployment modes, such as virtual machine clusters.

The technical architecture of apisix-ingress-controller:

Architecture

Status

This project is currently generally available.

Features

  • Declarative configuration for Apache APISIX with Custom Resource Definitions (CRDs), using Kubernetes YAML structures with a minimal learning curve.
  • Hot-reload when applying YAML changes.
  • Native Kubernetes Ingress (both v1 and v1beta1) support.
  • Automatic registration of Kubernetes endpoints as upstream (Apache APISIX) nodes.
  • Load balancing across pods (upstream nodes).
  • Out-of-the-box support for node health checks.
  • Plugin extensions support hot configuration changes that take effect immediately.
  • SSL and mTLS support for routes.
  • Traffic splitting and canary deployments.
  • Layer-4 (TCP) proxying.
  • The ingress controller itself is a pluggable, hot-reloadable component.
  • Multi-cluster configuration distribution.

More about comparison among multiple Ingress Controllers.

Get started

Prerequisites

apisix-ingress-controller requires Kubernetes 1.16+ because it uses the stable v1 CustomResourceDefinition API. From version 1.0.0 onward, apisix-ingress-controller requires Apache APISIX 2.7+.

Works with APISIX Dashboard

Currently, APISIX Ingress Controller automatically manipulates some APISIX resources, which does not work well alongside APISIX Dashboard. In addition, users should not modify resources labeled managed-by: apisix-ingress-controllers via APISIX Dashboard.

Internal Architecture

module

Apache APISIX Ingress vs. Kubernetes Ingress Nginx

  • The control plane and data plane are separated, improving security and deployment flexibility.
  • Hot-reload when applying YAML changes.
  • More convenient canary deployments.
  • Configuration is validated for correctness, making it safe and reliable.
  • A rich set of plugins and a rich ecosystem.
  • Supports both APISIX custom resources and Kubernetes native Ingress resources.

Contributing

We welcome all kinds of contributions from the open-source community, individuals and partners.

How to contribute

Most of the contributions that we receive are code contributions, but you can also contribute to the documentation or simply report solid bugs for us to fix.

For new contributors, please take a look at issues with a tag called Good first issue or Help wanted.

How to report a bug

  • Ensure the bug was not already reported by searching on GitHub under Issues.

  • If you're unable to find an open issue addressing the problem, open a new one. Be sure to include a title and clear description, as much relevant information as possible, and a code sample or an executable test case demonstrating the expected behavior that is not occurring.

Contributor over time


Community

  • Mailing List: Mail to [email protected] and follow the reply to subscribe to the mailing list.
  • QQ Group - 578997126
  • Twitter Follow - follow and interact with us using hashtag #ApacheAPISIX
  • Bilibili video

Todos

  • More todos are tracked in issues

User stories

If you are willing to share some of the scenarios and use cases in which you use APISIX Ingress, please reply to the issue, or submit a PR to update the Powered-BY file.

Who Uses APISIX Ingress?

A wide variety of companies and organizations use APISIX Ingress for research, production, and commercial products; below are some of them:

  • AISpeech
  • European Copernicus Reference System
  • Jiakaobaodian(驾考宝典)
  • Horizon Robotics(地平线)
  • Tencent Cloud
  • UPYUN
  • Zoom

Milestone

Terminology

  • APISIX Ingress: the whole service that contains the proxy (Apache APISIX) and ingress controller (apisix-ingress-controller).
  • apisix-ingress-controller: the ingress controller component.

apisix-ingress-controller's People

Contributors

alinsran, chever-john, chzhuo, dependabot[bot], dickens7, donghui0, fgksgf, fhuzero, firstsawyou, gallardot, gxthrj, jiangfucheng, junnplus, kishanikandasamy, lianghao208, lingsamuel, mangogoforward, nevercase, nic-6443, pottekkat, revolyssup, ronething, shareinto, stillfox-lee, stu01509, tao12345666333, tokers, xiangtianyu, yiyiyimu, zaunist


apisix-ingress-controller's Issues

Issues with apisix's support for etcd clusters

I tested apisix's support for etcd clusters and found a problem. My test method: kill the etcd cluster's nodes one by one, checking the dashboard after each kill. The dashboard only starts reporting errors once all 3 nodes of the etcd cluster have been killed. However, when I then restart some of the etcd nodes, the dashboard keeps trying to connect to a node I have not started, and never tries the nodes I have already started.

Failed to startup APISIX ingress controller

When I execute the kubectl apply -f ingress_controller.yaml command, I get the following error:

[root@k8s-master eplat-yamls]# kubectl logs -f ingress-controller-5587c86b49-78bxg -n cloud
panic: failed to read configuration file: /go/src/github.com/api7/ingress-controller/conf/conf.json

goroutine 1 [running]:
github.com/iresty/ingress-controller/conf.init.0()
        /go/src/github.com/api7/ingress-controller/conf/init.go:78 +0x8bd

Here are the details of ingress controller pod:

[root@k8s-master eplat-yamls]# kubectl describe pod ingress-controller-5587c86b49-78bxg -n cloud
Name:         ingress-controller-5587c86b49-78bxg
Namespace:    cloud
Priority:     0
Node:         k8s-node1/10.55.78.50
Start Time:   Wed, 09 Sep 2020 09:55:10 +0800
Labels:       app=apisix
              pod-template-hash=5587c86b49
              tier=backend
Annotations:  <none>
Status:       Running
IP:           10.55.78.50
IPs:
  IP:           10.55.78.50
Controlled By:  ReplicaSet/ingress-controller-5587c86b49
Containers:
  ingress-controller:
    Container ID:   docker://81fbe2e58ec29f3cf7ea8e41e1f6ea2c380675f062574708cca22bcd107ce65d
    Image:          api7/ingress-controller:v3
    Image ID:       docker://sha256:ea32b3c55296c612734f39b0189f7790777ee07733e73db7a014ce93ed30c6e1
    Port:           8080/TCP
    Host Port:      8080/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 09 Sep 2020 10:00:56 +0800
      Finished:     Wed, 09 Sep 2020 10:00:56 +0800
    Ready:          False
    Restart Count:  6
    Environment Variables from:
      cloud-config  ConfigMap  Optional: false
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hct7v (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-hct7v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hct7v
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From                Message
  ----     ------     ----                  ----                -------
  Normal   Scheduled  9m13s                 default-scheduler   Successfully assigned cloud/ingress-controller-5587c86b49-78bxg to k8s-node1
  Normal   Pulled     7m41s (x5 over 9m4s)  kubelet, k8s-node1  Container image "api7/ingress-controller:v3" already present on machine
  Normal   Created    7m41s (x5 over 9m4s)  kubelet, k8s-node1  Created container ingress-controller
  Normal   Started    7m40s (x5 over 9m4s)  kubelet, k8s-node1  Started container ingress-controller
  Warning  BackOff    4m (x25 over 9m1s)    kubelet, k8s-node1  Back-off restarting failed container

Failed to list *v1.ApisixRoute: apisixroutes.apisix.apache.org is forbidden

When I try to define a route through ApisixRoute, I get the following error:

E0909 10:34:29.424696       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:cloud:default" cannot list resource "endpoints" in API group "" at the cluster scope
E0909 10:34:29.425328       1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.ApisixRoute: apisixroutes.apisix.apache.org is forbidden: User "system:serviceaccount:cloud:default" cannot list resource "apisixroutes" in API group "apisix.apache.org" at the cluster scope
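The errors show the controller is running under the namespace's default ServiceAccount, which lacks cluster-scope list permissions. A sketch of the kind of RBAC grant that would resolve it (names are illustrative; the project's real deployment manifests ship their own rules, which also cover more resources):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-ingress-controller
rules:
- apiGroups: [""]
  resources: ["endpoints", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apisix.apache.org"]
  resources: ["apisixroutes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: apisix-ingress-controller
subjects:
- kind: ServiceAccount
  name: default          # the account named in the error; a dedicated SA is preferable
  namespace: cloud
```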

misc: some basic goals

The ingress exists to forward k8s edge traffic; to achieve this, a proxy layer is needed, which we implement with apisix.
So the ingress controller itself leans toward management, while performance characteristics come from apisix itself.
Management goals to achieve:
1. YAML definitions compatible with k8s's existing ingress configuration;
(test whether existing ingress YAML can be executed correctly by the apisix ingress controller)
2. Support for the objects added in apisix, such as route, upstream, plugin, etc.;
(test whether apisix objects can be defined correctly via YAML)
3. Hot reloading in apisix;
(test whether YAML changes are applied as hot updates)
4. State synchronization with k8s;
(test whether the k8s service lifecycle is synchronized to apisix correctly)
Other functional tests:
5. apisix's own health checks, etc.;
(test whether apisix can adjust traffic promptly when a k8s pod fails)
6. Hot-standby switchover of the ingress controller;
(test the high availability of the ingress controller)

I got them from @gxthrj.

The APISIX series Resources should be maintained in a separate repository

So far both apisix-ingress-controller and apisix-dashboard maintain their own definitions for each APISIX resource, like Route, Upstream, Consumers and so on.

I propose to keep the definitions for those resources in a separate repository, and to version them according to the releases of the data plane.

Protobuf 3 is a good choice for defining them.

discuss: the design of yaml for APISIX

To define the objects apisix needs as YAML in k8s, the following structures are proposed.
If any functionality is not covered, it can be added or changed; everyone is welcome to join the discussion.

Structure examples

1. ApisixRoute (basic routing): structurally similar to Ingress, making it easy to migrate native ingress YAML

apiVersion: apisix.apache.org/v1
kind: ApisixRoute                                       # apisix route
metadata:
  annotations:
    k8s.apisix.apache.org/ingress.class: apisix_group   # grouping
    k8s.apisix.apache.org/ssl-redirect: 'false'         # ssl redirect
  name: httpserverRoute
  namespace: cloud                      # the namespace; one yaml may only configure backends under a single namespace
spec:
  rules:
  - host: test.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: httpserver       # combined with the namespace => cloud/httpserver (namespace/serviceName)
          servicePort: 8080
        path: /hello*                   # regex supported
        plugins:                        # plugin binding
          - httpserver-plugins          # httpserver-plugins is a custom plugin collection (kind: ApisixPlugin)
          - ...
      - backend:
          serviceName: httpserver       # multiple routes pointing at the same service
          servicePort: 8080
        path: /ws*

Supported

  • namespace, host, path, backend (service)
  • path supports exact match and deep prefix match
  • some annotations are supported:
SSL redirect      k8s.apisix.apache.org/ssl-redirect: 'true' or 'false'
ingress grouping  k8s.apisix.apache.org/ingress.class: string
access whitelist  k8s.apisix.apache.org/whitelist-source-range: 1.2.3.4/16,4.3.2.1/8

Not compatible

  • annotations: apart from those listed above, other ingress annotations are replaced by plugins via ApisixPlugins

2. ApisixService: corresponds to the service object in apisix

apiVersion: apisix.apache.org/v1
kind: ApisixService                     # apisix service
metadata:
  name: httpserver
  namespace: cloud
spec:
  upstream: httpserver                  # upstream = cloud/httpserver (namespace/upstreamName)
  port: 8080                            # the port is defined on the service
  plugins:                              # plugin binding
    - httpserver-plugins                # httpserver-plugins is a custom plugin collection (kind: ApisixPlugin)
    - ...

Supported

  • binding a service to an upstream within the specified namespace
  • binding the service port
  • multiple services may point to the same upstream

Validation

  • service names must be unique within a namespace

3. ApisixUpstream

apiVersion: apisix.apache.org/v1
kind: ApisixUpstream                    # apisix upstream
metadata:
  name: httpserver                      # cloud/httpserver
  namespace: cloud
spec:
  loadbalancer: roundrobin
  healthcheck:
    active:
      ...
    passive:
      ...

Supported

  • automatic registration of the nodeList under the upstream;
  • an upstream can define healthcheck and loadbalancer settings
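A filled-in sketch of what the elided healthcheck fields might look like; the field names follow APISIX's upstream health-check options, but the exact values here are illustrative, not part of the proposal:

```yaml
apiVersion: apisix.apache.org/v1
kind: ApisixUpstream
metadata:
  name: httpserver
  namespace: cloud
spec:
  loadbalancer: roundrobin
  healthcheck:
    active:
      http_path: /healthz        # probe path (assumed)
      healthy:
        successes: 2             # probes needed to mark a node healthy
      unhealthy:
        http_failures: 3         # probe failures before a node is ejected
    passive:
      unhealthy:
        http_failures: 5         # live-traffic failures before ejection
```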

4. ApisixPlugin

apiVersion: apisix.apache.org/v1
kind: ApisixPlugin                      # apisix plugin
metadata:
  name: httpserver-plugins              # cloud/httpserver-plugins
  namespace: cloud
spec:
  plugins:
  - plugin: limit-conn
    enable: true
    config:
      key: value
  - plugin: cors
    enable: true
    config:
      key: value

5. ApisixSSL

apiVersion: apisix.apache.org/v1
kind: ApisixSSL                         # apisix SSL
metadata:
  name: duiopen
spec:
  hosts:
  - asr.duiopen.com                     # wildcard domains are supported, e.g. *.duiopen.com
  - tts.duiopen.com
  secret:
    all.duiopen.com                     # k8s secret

6. Admission webhook

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: apisix-validations
webhooks:
- admissionReviewVersions:
  - v1beta1
  name: validations.apisix.apache.org
  namespaceSelector: {}
  rules:                                  # admission rules
  - apiGroups:
    - apisix.apache.org
    apiVersions:
    - '*'
    operations:
    - CREATE
    - UPDATE
    resources:
    - ApisixRoutes
    - ApisixPlugins
    scope: '*'
  failurePolicy: Fail
  clientConfig:                          # admission webhook     
    service:
      namespace: apisix
      name: apisix-ingress-controller
      path: '/validate'
      port: 80
    caBundle: 'jjyy'
  sideEffects: Unknown
  timeoutSeconds: 30

runtime error: invalid memory address or nil pointer dereference

An error occurred in the Ingress controller when I added the Ingress configuration:

[root@k8s-master ~]# kubectl logs -f ingress-controller-5587c86b49-5kz82 -n cloud
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x104bf0f]

goroutine 131 [running]:
github.com/gxthrj/seven/apisix.(*Route).convert(0xc000457c80, 0x0, 0x0, 0x1f045f8, 0xc0002e2390, 0x0)
        /go/pkg/mod/github.com/gxthrj/[email protected]/apisix/route.go:152 +0x1cf
github.com/gxthrj/seven/apisix.ListRoute(0x0, 0x0, 0x15c0580, 0xc0004c1150, 0x8, 0x10, 0xc0004c1120)
        /go/pkg/mod/github.com/gxthrj/[email protected]/apisix/route.go:54 +0x3c2
github.com/gxthrj/seven/apisix.FindCurrentRoute(0xc00024dc00, 0x13, 0xc0004c1120, 0x0)
        /go/pkg/mod/github.com/gxthrj/[email protected]/apisix/route.go:22 +0xe5
github.com/gxthrj/seven/state.(*routeWorker).trigger(0xc000401e60, 0x13f2fe9, 0x7, 0x13f269c, 0x6, 0x1274d40, 0xc0000ba550, 0x0, 0x0)
        /go/pkg/mod/github.com/gxthrj/[email protected]/state/builder.go:92 +0x1dd
github.com/gxthrj/seven/state.(*routeWorker).start.func1(0xc000401e60)
        /go/pkg/mod/github.com/gxthrj/[email protected]/state/route_worker.go:21 +0x8a
created by github.com/gxthrj/seven/state.(*routeWorker).start
        /go/pkg/mod/github.com/gxthrj/[email protected]/state/route_worker.go:17 +0x6c

ingress configuration:

kubectl apply -f - <<EOF
apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: foo-bar

spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
EOF

K8S Service: I think the service is all right

[root@k8s-master eplat-yamls]# kubectl get svc -o wide --show-labels
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                         AGE     SELECTOR        LABELS
apisix-gw-lb   NodePort    10.1.41.185   <none>        9080:31700/TCP,9443:30905/TCP   20h     app=apisix-gw   app=apisix-gw
http-svc       NodePort    10.1.83.151   <none>        80:31562/TCP                    93m     app=http-svc    app=http-svc
kubernetes     ClusterIP   10.1.0.1      <none>        443/TCP                         3d17h   <none>          component=apiserver,provider=kubernetes

[root@k8s-master eplat-yamls]# curl http://10.1.83.151 -H Host:foo.bar


Hostname: http-svc-674f6fb5b5-gmh9m

Pod Information:
        node name:      k8s-node1
        pod name:       http-svc-674f6fb5b5-gmh9m
        pod namespace:  default
        pod IP: 10.244.1.68

Server values:
        server_version=nginx: 1.13.3 - lua: 10008

Request Information:
        client_address=10.244.0.0
        method=GET
        real path=/
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://foo.bar:8080/

Request Headers:
        accept=*/*
        host=foo.bar
        user-agent=curl/7.29.0

Request Body:
        -no body in request-

cloud.yaml:

Maybe cloud.yaml needs to be modified. Because etcd runs as a pod on the master node and must be connected over https, it's possible the ingress controller didn't get the data.

Because etcd is a pod on the master node, I don't know what to put in ETCD_SERVER_INTERNAL here:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: cloud
data:
  ETCD_SERVER_INTERNAL: '["http://127.0.0.1:2379"]'
  SYSLOG_HOST: 127.0.0.1
  APISIX_BASE_URL: "http://10.25.78.50:31700/apisix/admin/route/apisix/admin"
  ENV: "prod"

apisix upstream node cannot be deleted dynamically

My configuration information is as follows

apisixroute Configuration:


apiVersion: v1
items:
- apiVersion: apisix.apache.org/v1
  kind: ApisixRoute
  metadata:
    creationTimestamp: "2020-07-17T05:51:42Z"
    generation: 7
    name: mall-apisix-ingress
    namespace: mall
    resourceVersion: "2771827"
    selfLink: /apis/apisix.apache.org/v1/namespaces/mall/apisixroutes/mall-apisix-ingress
    uid: 357ffbad-97d1-4f28-b801-247a46b4b226
  spec:
    rules:
    - host: ebs.test.com
      http:
        paths:
        - backend:
            serviceName: ebs-app-rest
            servicePort: 8100
          path: '*'

[root@m1 apisix-ingress]# kubectl describe svc ebs-app-rest -n mall
Name: ebs-app-rest
Namespace: mall
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"ebs-app-rest","namespace":"mall"},"spec":{"ports":[{"name":"http"...
Selector: app=ebs-app-rest
Type: ClusterIP
IP: 10.254.199.108
Port: http 8080/TCP
TargetPort: 8100/TCP
Endpoints: 172.30.88.170:8100
Session Affinity: None
Events:

(screenshot of the dashboard's upstream node list omitted)

172.30.88.169 and 172.30.88.143 are not automatically deleted!
How can this be solved?

Some questions about deploying APISIX Ingress Controller

Question 1: k8s is already deployed, with one master and one node. After that, can I simply apply the yaml files in the samples directory on the master node, or are other steps needed?

Question 2: We need a service authentication feature whose business logic we have to write ourselves. My understanding is that the APISIX ingress controller operates on etcd via API calls to synchronize configuration, while APISIX is what actually handles client requests, so we may need to develop a small plugin to meet our business needs. Is my understanding correct?

Question 3: If we need to develop our own business plugins or services, should I follow the "SDK" section of the APISIX ingress controller documentation to set up the development environment, rather than "deployment"? The documentation says APISIX should be deployed outside the cluster. What does "outside the cluster" mean: deployed on a virtual machine, or in docker? And should it be placed on the master or on a node?

We need a good CLI interface

So far we don't have a CLI interface; the first thing users may do is type ./apisix-ingress-controller --help to see the meaning of each option when they build it in their own dev environment.

use zapcore to encapsulate a log package

We need a good log package, which must contain the following features:

  • support for custom log levels and log files
  • support for custom log rotation options (can be added in the future)
  • support for transmitting logs to external servers such as a syslog server (can be added in the future)
  • as high performance as possible

I propose to use zapcore to implement it.

Service APIs support

We plan to support Service APIs but have no concrete steps yet; we need more feedback from the community and more communication with the Service APIs SIG.

This issue is used to track our progress on Service APIs.

find endpoint cloud/httpserver err%!(EXTRA string=endpoints "httpserver" not found)

I have finished deploying the APISIX ingress controller, but when I use this demo I get the following error:

demo:

kubectl apply -f - <<EOF
apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: httpserver-route
  namespace: cloud
spec:
  rules:
  - host: test.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: httpserver
          servicePort: 8080
        path: /hello*
EOF

ingress controller error log:

E0909 16:07:57.815863       1 ep.go:38] find endpoint cloud/httpserver err%!(EXTRA string=endpoints "httpserver" not found)
E0909 16:07:57.817280       1 builder.go:208] solver upstream failed, update upstream to etcd failed, err: http post failed, url: http://10.25.78.50:31700/apisix/admin/upstreams, err: status: 400, body: {"error_msg":"invalid configuration: property \"nodes\" validation failed: object matches none of the requireds"}

I'm sure the configuration has been saved to etcd on the k8s master node; I used kubectl get ApisixRoute httpserver-route -n cloud -o yaml to confirm:

[root@k8s-master eplat-yamls]# kubectl get ApisixRoute httpserver-route -n cloud -o yaml
apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apisix.apache.org/v1","kind":"ApisixRoute","metadata":{"annotations":{},"name":"httpserver-route","namespace":"cloud"},"spec":{"rules":[{"host":"test.apisix.apache.org","http":{"paths":[{"backend":{"serviceName":"httpserver","servicePort":8080},"path":"/hello*"}]}}]}}
  creationTimestamp: "2020-09-09T08:07:49Z"
  generation: 1
  managedFields:
  - apiVersion: apisix.apache.org/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:rules: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2020-09-09T08:07:49Z"
  name: httpserver-route
  namespace: cloud
  resourceVersion: "600363"
  selfLink: /apis/apisix.apache.org/v1/namespaces/cloud/apisixroutes/httpserver-route
  uid: 737ca448-88c5-4b34-a80b-6f51d8f980c3
spec:
  rules:
  - host: test.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: httpserver
          servicePort: 8080
        path: /hello*
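The `endpoints "httpserver" not found` error means the route itself resolved, but no Service named httpserver exists in the cloud namespace, so there are no endpoints to register as upstream nodes (hence the empty `nodes` rejection from the admin API). A sketch of the Service the route expects (the selector and backing pods are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpserver
  namespace: cloud
spec:
  selector:
    app: httpserver        # must match the labels of the backing pods
  ports:
  - port: 8080
    targetPort: 8080
```

Once the Service exists and has ready endpoints, the controller can populate the upstream's node list.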

Need to add retry, when synchronization

I occasionally encounter failures when requesting the admin API of the APISIX cluster.

{"error_msg":"timeout"} 

This causes the Ingress Controller to fail to create the corresponding resources, like this:

$ kubectl -n cloud logs ingress-controller-d978b79d4-s8kgk  
E1203 15:43:38.194554       1 upstream.go:26] list upstreams in etcd failed, group: apisix-gw-lb.infraop.svc:9080, err: json转换失败
E1203 15:43:38.196041       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc000725450 Group:0xc0007253c0 ResourceVersion:0xc0007253d0 Name:0xc000725420 Type:0xc000725460 HashOn:<nil> Key:<nil> Nodes:[0xc00092a0e0] FromKind:<nil>}, err: list upstreams failed, err: json转换失败
E1203 15:56:54.560663       1 upstream.go:26] list upstreams in etcd failed, group: apisix-gw-lb.infraop.svc:9080, err: json转换失败
E1203 15:56:54.560703       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc000724710 Group:0xc000724650 ResourceVersion:0xc000724670 Name:0xc0007246d0 Type:0xc000724720 HashOn:<nil> Key:<nil> Nodes:[0xc000b1a800] FromKind:<nil>}, err: list upstreams failed, err: json转换失败
E1203 16:08:28.243429       1 upstream.go:26] list upstreams in etcd failed, group: , err: json转换失败
E1203 16:08:28.243469       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc000b471c0 Group:0xc000b47160 ResourceVersion:0xc000b47170 Name:0xc000b47180 Type:0xc000b471d0 HashOn:<nil> Key:<nil> Nodes:[0xc000bb9f80] FromKind:0xc000b471a0}, err: list upstreams failed, err: json转换失败
E1203 16:10:57.544821       1 upstream.go:26] list upstreams in etcd failed, group: , err: json转换失败
E1203 16:10:57.544850       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc00028ab90 Group:0xc00028aa70 ResourceVersion:0xc00028aa80 Name:0xc00028aa90 Type:0xc00028aba0 HashOn:0xc00028abb0 Key:0xc00028abc0 Nodes:[0xc000b20820] FromKind:0xc00028ab30}, err: list upstreams failed, err: json转换失败
E1203 16:45:40.909181       1 upstream.go:26] list upstreams in etcd failed, group: apisix-gw-lb.infraop.svc:9080, err: json转换失败
E1203 16:45:40.909220       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc0000ff0a0 Group:0xc0000fed70 ResourceVersion:0xc0000feda0 Name:0xc0000fedf0 Type:0xc0000ff0b0 HashOn:<nil> Key:<nil> Nodes:[0xc000b7a840] FromKind:<nil>}, err: list upstreams failed, err: json转换失败
E1203 16:45:41.051755       1 upstream.go:26] list upstreams in etcd failed, group: , err: json转换失败
E1203 16:45:41.051784       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc00077e3d0 Group:0xc00077e370 ResourceVersion:0xc00077e380 Name:0xc00077e390 Type:0xc00077e3e0 HashOn:<nil> Key:<nil> Nodes:[0xc000b7ab20] FromKind:0xc00077e3b0}, err: list upstreams failed, err: json转换失败
E1203 16:45:41.137569       1 upstream.go:26] list upstreams in etcd failed, group: , err: json转换失败
E1203 16:45:41.137605       1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID:<nil> FullName:0xc0004a4d40 Group:0xc0004a4cb0 ResourceVersion:0xc0004a4cc0 Name:0xc0004a4cd0 Type:0xc0004a4d50 HashOn:0xc0004a4d80 Key:0xc0004a4d90 Nodes:[0xc000533520] FromKind:0xc0004a4d30}, err: list upstreams failed, err: json转换失败

Support kustomize

We need to support kustomize so users can install all the necessary API objects with a single command.
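With kustomize support, installation could be a single `kubectl apply -k .` against a kustomization such as the following sketch (the resource file names and namespace are assumptions, not the project's actual layout):

```yaml
# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-apisix
resources:
- namespace.yaml     # the install namespace
- crds.yaml          # ApisixRoute/ApisixService/ApisixUpstream definitions
- rbac.yaml          # ServiceAccount + ClusterRole(Binding)
- deployment.yaml    # the controller itself
```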

The default namespace `cloud` is not declared in documents

As per the configurations in sample/deploy, we have an implicit install namespace cloud for those API objects. We had better add a hint to create this namespace first, otherwise errors will occur while users install these resources.

kubectl create namespace cloud

What's more, I think the cloud namespace is not very specific; I recommend using ingress-apisix as the namespace.

Controller reports errors creating resources after a custom ApisixRoute is defined

I defined a Route resource:

apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: test-route
  namespace: testnamespace
  resourceVersion: '80141484'
spec:
  rules:
  - host: xxx.test-dev.com
    http:
      paths:
      - backend:
          serviceName: xxx
          servicePort: 8080
        path: /

The controller failed to add the route. Looking at the dashboard, the upstream had been created; after restarting the controller, the service was created; after restarting again, the route was created.

E0610 09:43:03.761447 1 builder.go:123] add route failed, route: &v1.Route{ID:(*string)(0xc00040b3d0), Group:(*string)(0xc0007c61d0), FullName:(*string)(0xc0007c6250), ResourceVersion:(*string)(0xc0007c61e0), Host:(*string)(0xc0007c61f0), Path:(*string)(0xc0007c6210), Name:(*string)(0xc0007c6220), Methods:[]*string(nil), ServiceId:(*string)(0xc0000cf280), ServiceName:(*string)(0xc0007c6230), UpstreamId:(*string)(nil), UpstreamName:(*string)(0xc0007c6240), Plugins:(*v1.Plugins)(0xc0008d60c0)}, err: status: 201, body: {"node":{"value":{"host":"xxx.test-dev.com","plugins":{},"uri":"\/","service_id":"00000000000000000059","desc":"xxx.test-dev.com\/","priority":0},"createdIndex":76,"key":"\/apisix\/routes\/00000000000000000076","modifiedIndex":76},"action":"create"}

[discuss] do we need to support parsing native ingress yaml?

It is not supported for now; if there is a need, we can consider adding it.
There are mainly the following reasons:
1. In k8s, nginx ingress and apisix ingress can coexist, so either can be selected;
2. Because ingress carries edge traffic, you may need to roll back during a migration from nginx ingress to apisix ingress. To facilitate a quick rollback to nginx ingress, you should avoid modifying the original nginx ingress configuration;
3. We are not planning to extend k8s native resources such as Ingress.

Therefore, we have defined ApisixRoute, which can be migrated one YAML at a time. Its data structure is basically the same as Ingress: only a few modifications are needed during migration, and rolling back is also convenient.
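To illustrate how close the structures are, here is a native Ingress (networking.k8s.io/v1beta1, current at the time) next to an ApisixRoute counterpart; this is a sketch, and the names are illustrative:

```yaml
# Native Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpserver
spec:
  rules:
  - host: test.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: httpserver
          servicePort: 8080
        path: /hello
---
# ApisixRoute: same rules layout, different apiVersion/kind
apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: httpserver
spec:
  rules:
  - host: test.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: httpserver
          servicePort: 8080
        path: /hello*
```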

English README for a larger audience

Dear Team,
I like the project and want to use APISIX as my ingress controller for a personal project.
I noticed that we only have a Chinese README, which limits the audience to Chinese readers.

Would it make sense to have an English version? I can help if we need one.

prometheus metrics support

We need to expose Prometheus metrics. This is important since we need monitoring and alerting to enhance observability and find problems in time.

I execute the yaml file in k8s and report such an error in the log on the controller

I execute the yaml file in k8s, and the controller log reports the following error.
ingress-controller error log:
E0917 16:16:11.276293 1 builder.go:154] solver upstream failed, find upstream from etcd failed, upstream: &{ID: FullName:0xc0004a4100 Group:0xc0004a4070 ResourceVersion:0xc0004a4080 Name:0xc0004a40d0 Type:0xc0004a4110 HashOn: Key: Nodes:[0xc00000f1a0] FromKind:}, err: list upstreams failed, err: http get failed, url: http://192.168.63.185:9180/apisix/admin/upstreams, err: status: 401, body:

<title>401 Authorization Required</title>
401 Authorization Required
openresty

my yaml file:

apiVersion: apisix.apache.org/v1
kind: ApisixRoute
metadata:
  name: httpserver-route
  namespace: default
spec:
  rules:
  - host: test.apisix.apache.org
    http:
      paths:
      - backend:
          serviceName: nginx-web
          servicePort: 80
        path: /
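A 401 from the admin API usually means requests are missing the admin key. On the APISIX side the key is defined in conf/config.yaml, and the controller must send the same key with its requests; shown below is APISIX's well-known default demo key (how the controller is given the key depends on its own configuration):

```yaml
# APISIX conf/config.yaml (excerpt)
apisix:
  admin_key:
    - name: admin
      key: edd1c9f034335f136f87ad84b625c8f1   # default demo key; replace in production
      role: admin
```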

Official APISIX Ingress Controller Image

In the sample ingress controller doc, I need to modify the APISIX ingress controller image. Is there a pre-built ingress controller image?

I cloned this repository and built an image with the Dockerfile,

git clone https://github.com/api7/ingress-controller.git
cd ingress-controller
docker build -t APISIX_INGRESS_CONTROLLER_IMAGE .

but the built image is 1.1 GB, which is a little big.

[DISCUSS] About donating api7/ingress-controller as a sub-project of Apache APISIX

Hello everyone,

I am the founder of api7/ingress-controller.

Recently, more and more friends have paid attention to this project, and it has also been used by some companies in the production environment. But we know that api7/ingress-controller still has a lot of room for improvement. In order to make api7/ingress-controller more useful, I want to start discussing donating api7/ingress-controller as a sub-project of Apache APISIX.

The benefits are:

By embracing the Apache community we can attract more attention to and participation in contributing to ingress-controller, and continue to develop more of its features.

Also, I hope that any decision will be based on discussion with us rather than being a personal decision, to ensure that the direction of ingress-controller stays correct.

If you have no objection, I will start writing a proposal.

Welcome to give me some feedback, thank you very much.

Discuss: use terratest for testing with Kubernetes

I propose using terratest for testing with Kubernetes. It can test Kubernetes logic in code (via APIs).

But ingress-controller has CRDs, and terratest does not support testing CRDs.

Does anybody have good ideas?

I found an issue to discuss the feature, need time to digest.
