
lxcfs-admission-webhook's People

Contributors

chenhuazhong, denverdino, thinkblue1991, wutz, xigang

lxcfs-admission-webhook's Issues

CrashLoopBackOff after a node reboot

After deploying this project, when a node is rebooted, the lxcfs DaemonSet pod on that node can no longer be started:
]# kubectl get pods -n kube-system |grep lxcfs
lxcfs-4m5fk 1/1 Running 0 7d19h
lxcfs-69ddw 0/1 CrashLoopBackOff 8 19m
lxcfs-8msgp 1/1 Running 0 7d20h
lxcfs-9bn8l 1/1 Running 1 7d18h
lxcfs-9kfnh 1/1 Running 0 10d
lxcfs-admission-webhook-deployment-7bc979694d-l9kvs 1/1 Running 0 13h
lxcfs-crbtc 0/1 CrashLoopBackOff 162 7d19h
lxcfs-fnzj8 0/1 Error 165 7d19h
lxcfs-k66k6 1/1 Running 0 10d
lxcfs-pxg56 0/1 CrashLoopBackOff 9 21m
lxcfs-ql6gb 0/1 CrashLoopBackOff 8 16m
lxcfs-xgvfn 0/1 CrashLoopBackOff 8 19m
lxcfs-z8kc2 0/1 CrashLoopBackOff 165 7d19h

The lxcfs pod logs show:
]# kubectl logs -f lxcfs-z8kc2 -n kube-system
mount namespace: 5
hierarchies:
0: fd: 6: pids
1: fd: 7: net_cls
2: fd: 8: hugetlb
3: fd: 9: memory
4: fd: 10: perf_event
5: fd: 11: blkio
6: fd: 12: cpu,cpuacct
7: fd: 13: freezer
8: fd: 14: cpuset
9: fd: 15: devices
10: fd: 16: name=systemd
11: fd: 17: unified
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option

How should the "nonempty" mount option suggested by fuse be configured in the DaemonSet?
Presumably the cause is that the mount directory already contains data when lxcfs is mounted (see the cleanup sketch after the listing below):

]# ls -alR
.:
total 4
drwxr-xr-x 3 root root 18 Jun 16 20:26 .
drwxr-xr-x. 36 root root 4096 Jun 6 12:22 ..
drwxr-xr-x 9 root root 107 Jun 16 20:26 proc

./proc:
total 0
drwxr-xr-x 9 root root 107 Jun 16 20:26 .
drwxr-xr-x 3 root root 18 Jun 16 20:26 ..
drwxr-xr-x 2 root root 6 Jun 16 20:26 cpuinfo
drwxr-xr-x 2 root root 6 Jun 16 20:26 diskstats
drwxr-xr-x 2 root root 6 Jun 16 20:26 loadavg
drwxr-xr-x 2 root root 6 Jun 16 20:26 meminfo
drwxr-xr-x 2 root root 6 Jun 16 20:26 stat
drwxr-xr-x 2 root root 6 Jun 16 20:26 swaps
drwxr-xr-x 2 root root 6 Jun 16 20:26 uptime
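
A common workaround for the "mountpoint is not empty" failure, rather than passing nonempty through the DaemonSet, is to clean the stale mountpoint on the node and let the pod restart. A minimal sketch (assuming the DaemonSet's default /var/lib/lxcfs hostPath; the pod name is taken from the listing above):

# On the affected node, as root:
# 1. Detach any stale FUSE mount left by the previous lxcfs instance.
umount -l /var/lib/lxcfs 2>/dev/null || true
# 2. Remove the leftover directory tree so the mountpoint is empty again.
rm -rf /var/lib/lxcfs && mkdir -p /var/lib/lxcfs
# 3. Recreate the DaemonSet pod; lxcfs can now mount onto an empty directory.
kubectl delete pod -n kube-system lxcfs-z8kc2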

After the lxcfs container crashed, it could not be restarted.

lxcfs container logs:

container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:53: mounting \\\"/var/lib/lxcfs/proc/uptime\\\" to rootfs \\\"/var/lib/docker/devicemapper/mnt/80cc788550e01d8690970b31e471b16d0c3093e35edba07d427ca305c3fd3e7c/rootfs\\\" at \\\"/var/lib/docker/devicemapper/mnt/80cc788550e01d8690970b31e471b16d0c3093e35edba07d427ca305c3fd3e7c/rootfs/proc/uptime\\\" caused \\\"not a directory\\\"\""
[root@192 proc]# ll
total 0
drwxr-xr-x. 2 root root 6 Dec 22 21:23 cpuinfo
drwxr-xr-x. 2 root root 6 Dec 22 21:23 diskstats
drwxr-xr-x. 2 root root 6 Dec 22 21:23 meminfo
drwxr-xr-x. 2 root root 6 Dec 22 21:23 stat
drwxr-xr-x. 2 root root 6 Dec 22 21:23 swaps
drwxr-xr-x. 2 root root 6 Dec 22 21:23 uptime

On the problem machine, /var/lib/lxcfs/proc/cpuinfo, which is originally a file, had been turned into a directory, which is strange. Presumably, once lxcfs died, the bind-mount source files disappeared, and the container runtime recreated them as directories when restarting containers that mount them.

@denverdino

Newly added node goes into CrashLoopBackOff; the actual error is "Check if the specified host path exists and is the expected type"

On a newly added node, the lxcfs command also fails to install automatically. Details below:

Error: failed to start container "lxcfs": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/var/lib/lxcfs/proc/stat\" to rootfs \"/var/lib/docker/overlay2/a5cd744e5e44f3576860feadc7eecaca95616aab669f91e84054f33778f63af4/merged\" at \"/var/lib/docker/overlay2/a5cd744e5e44f3576860feadc7eecaca95616aab669f91e84054f33778f63af4/merged/proc/stat\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
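
A quick way to confirm this on the new node is to check whether the entries under /var/lib/lxcfs/proc are regular files, as the bind mounts expect, or directories recreated while lxcfs was down. A diagnostic sketch (the path assumes the default hostPath):

# Each entry should be a regular file served by the lxcfs FUSE mount;
# "directory" means the runtime recreated it while lxcfs was not running.
for f in /var/lib/lxcfs/proc/*; do
  stat -c '%n: %F' "$f"
done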

Deploying web.yaml on k8s 1.22 fails: Internal error occurred: failed calling webhook "mutating.lxcfs-admission-webhook.aliyun.com": failed to call webhook: Post "https://lxcfs-admission-webhook-svc.default.svc:443/mutate?timeout=10s": x509: certificate specifies an incompatible key usage

Internal error occurred: failed calling webhook "mutating.lxcfs-admission-webhook.aliyun.com": failed to call webhook: Post "https://lxcfs-admission-webhook-svc.default.svc:443/mutate?timeout=10s": x509: certificate specifies an incompatible key usage
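
On 1.22 the API server rejects webhook serving certificates that lack the serverAuth extended key usage, and the legacy CSR signer used by the install script is gone. One workaround, a sketch only (file names are illustrative; the secret keys are assumed to match webhook-create-signed-cert.sh), is to issue the certificate yourself with serverAuth and a SAN, then recreate the webhook's secret:

# Self-signed CA plus a serving cert carrying "extendedKeyUsage = serverAuth".
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=lxcfs-webhook-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/CN=lxcfs-admission-webhook-svc.default.svc"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 3650 -out server-cert.pem \
  -extfile <(printf "subjectAltName=DNS:lxcfs-admission-webhook-svc.default.svc\nextendedKeyUsage=serverAuth")
# Recreate the secret (delete the old one first if it exists).
kubectl create secret generic lxcfs-admission-webhook-certs \
  --from-file=key.pem=server-key.pem --from-file=cert.pem=server-cert.pem
# Then patch the webhook's caBundle with the output of: base64 -w 0 ca.crt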

CPU values are not read correctly; memory values are

Node configuration: 8 CPUs, 32 GB memory
[root@VM_1_103_centos ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@VM_1_103_centos ~]# uname -r
4.14.105-19-0012

The lxcfs logs show the following errors:
bindings.c: 4149: read_cpuacct_usage_all: read_cpuacct_usage_all reading from /kubepods/burstable/pod6db043da-b4b6-11ea-a5c6-b65bbdc96ca7/9fc0fb03adacfc69b5f7a7129f627ef33b2690075a0b4421a6f18eccbe08b966/cpuacct.usage_all failed.
bindings.c: 4149: read_cpuacct_usage_all: read_cpuacct_usage_all reading from /kubepods/burstable/pod5b863638-b4b7-11ea-a5c6-b65bbdc96ca7/42a0a048389f016389bab0b65433ace5e6295471fe61661092e961d3f89f7cf8/cpuacct.usage_all failed.
bindings.c: 4149: read_cpuacct_usage_all: read_cpuacct_usage_all reading from /kubepods/burstable/pod64ab4316-b4b7-11ea-a5c6-b65bbdc96ca7/6b10e20c0c22d728587029312a6db44ee97567c029a0f0bfecdd6a152f5d6b22/cpuacct.usage_all failed.
bindings.c: 4149: read_cpuacct_usage_all: read_cpuacct_usage_all reading from /kubepods/burstable/podb76d2724-b4b7-11ea-a5c6-b65bbdc96ca7/69784ca0f8a43f9930dd64d71dd87db051530370751f51d7c50e032bd7b575ea/cpuacct.usage_all failed.
bindings.c: 4149: read_cpuacct_usage_all: read_cpuacct_usage_all reading from /kubepods/burstable/pod3ada9e2e-b4b7-11ea-a5c6-b65bbdc96ca7/c8e11c1f7b6eb6c66a7d82d3f8a9309241d3cc88e5d35c76a83f06f3d35a300e/cpuacct.usage_all failed.
bindings.c: 4149: read_cpuacct_usage_all: read_cpuacct_usage_all reading from /kubepods/burstable/pod6db043da-b4b6-11ea-a5c6-b65bbdc96ca7/9fc0fb03adacfc69b5f7a7129f627ef33b2690075a0b4421a6f18eccbe08b966/cpuacct.usage_all failed.
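
The failing read can be reproduced on the node itself, which helps tell a kernel limitation apart from an lxcfs bug. A diagnostic sketch (the cgroup path is copied from the first log line above):

# Run on the node: if this cat fails too, the kernel side is the problem,
# not lxcfs.
cat /sys/fs/cgroup/cpuacct/kubepods/burstable/pod6db043da-b4b6-11ea-a5c6-b65bbdc96ca7/9fc0fb03adacfc69b5f7a7129f627ef33b2690075a0b4421a6f18eccbe08b966/cpuacct.usage_all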

no cipher suite supported by both client and server

Hi
I deployed this admission webhook; the lxcfs-admission-webhook-deployment-xxx pod logs:

E0330 01:41:07.271820 1 main.go:27] Failed to load key pair: tls: failed to find "CERTIFICATE" PEM block in certificate input after skipping PEM blocks of the following types: [CERTIFICATE REQUEST]
I0330 01:41:07.273262 1 main.go:50] Server started
2020/03/30 01:41:59 http: TLS handshake error from 10.42.1.0:61039: tls: no cipher suite supported by both client and server.
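
The first log line is the real problem: the mounted secret contains a CERTIFICATE REQUEST (an unsigned CSR) rather than a signed certificate, so the server starts without a usable cert and every TLS handshake fails. A quick check, assuming the object names from this repo's install script:

# The CSR must be both approved and signed before its certificate exists.
kubectl get csr lxcfs-admission-webhook-svc.default
kubectl get csr lxcfs-admission-webhook-svc.default \
  -o jsonpath='{.status.certificate}' | base64 -d | openssl x509 -noout -subject -dates
# The secret should hold the signed cert, not the request:
kubectl get secret lxcfs-admission-webhook-certs \
  -o jsonpath='{.data.cert\.pem}' | base64 -d | head -1
# Expect "-----BEGIN CERTIFICATE-----", not "-----BEGIN CERTIFICATE REQUEST-----".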

Docker info: (screenshot)
kubectl version: (screenshot)
Thanks.

deployment/install.sh failed

environment:
os: centos 7.4
kubernetes: 1.11.2

lxcfs-daemonset.yaml can be applied normally.

However, when I run deployment/install.sh:

[[email protected] ~/lxcfs-admission-webhook]deployment/install.sh
creating certs in tmpdir /tmp/tmp.Fp7lKhMGHo 
Generating RSA private key, 2048 bit long modulus
................................................+++
........+++
e is 65537 (0x10001)
certificatesigningrequest.certificates.k8s.io/lxcfs-admission-webhook-svc.default created
NAME                                  AGE       REQUESTOR          CONDITION
lxcfs-admission-webhook-svc.default   0s        kubernetes-admin   Pending
certificatesigningrequest.certificates.k8s.io/lxcfs-admission-webhook-svc.default approved
secret/lxcfs-admission-webhook-certs created
NAME                            TYPE      DATA      AGE
lxcfs-admission-webhook-certs   Opaque    2         0s
deployment.apps/lxcfs-admission-webhook-deployment created
service/lxcfs-admission-webhook-svc created
error: error validating "deployment/mutatingwebhook-ca-bundle.yaml": error validating data: ValidationError(MutatingWebhookConfiguration.webhooks[0].clientConfig.caBundle): invalid type for io.k8s.api.admissionregistration.v1beta1.WebhookClientConfig.caBundle: got "array", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false

help
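
The caBundle error usually means the value substituted into mutatingwebhook-ca-bundle.yaml was not a single base64 string, e.g. because the kubeconfig has several clusters or the base64 output was line-wrapped. A sketch of regenerating it with one unwrapped value (assuming the ${CA_BUNDLE} placeholder used by this repo's webhook-patch-ca-bundle.sh):

# Take the CA from the first cluster only and strip any newlines.
export CA_BUNDLE=$(kubectl config view --raw --flatten \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | tr -d '\n')
sed -e "s|\${CA_BUNDLE}|${CA_BUNDLE}|g" \
  deployment/mutatingwebhook.yaml > deployment/mutatingwebhook-ca-bundle.yaml
kubectl apply -f deployment/mutatingwebhook-ca-bundle.yaml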

failed to deploy web.yaml

The environments are as below:
kubernetes1.16.2
CentOS Linux release 7.7.1908 (Core), kernel 3.10.0-1062.4.1.el7.x86_64

Normal Scheduled default-scheduler Successfully assigned default/web-7f4dfcc4f4-lq55x to k8s-node-02
Warning Failed 15s kubelet, k8s-node-02 Error: failed to start container "web": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused "rootfs_linux.go:58: mounting \"/var/lib/lxcfs/proc/loadavg\" to rootfs \"/data/docker/overlay2/08404158d190bc9e00e02b0278d7a03156fa2e0c5846e195832549c1877ff227/merged\" at \"/proc/loadavg\" caused \"\\\"/data/docker/overlay2/08404158d190bc9e00e02b0278d7a03156fa2e0c5846e195832549c1877ff227/merged/proc/loadavg\\\" cannot be mounted because it is located inside \\\"/proc\\\"\""": unknown
Warning Failed 11s kubelet, k8s-node-02 Error: failed to start container "web": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused "rootfs_linux.go:58: mounting \"/var/lib/lxcfs/proc/loadavg\" to rootfs \"/data/docker/overlay2/d29b975165bb68a36e5a243453048e8c2158292f342d7c2963dbff42f2787b37/merged\" at \"/proc/loadavg\" caused \"\\\"/data/docker/overlay2/d29b975165bb68a36e5a243453048e8c2158292f342d7c2963dbff42f2787b37/merged/proc/loadavg\\\" cannot be mounted because it is located inside \\\"/proc\\\"\""": unknown
Normal Pulling 0s (x3 over 19s) kubelet, k8s-node-02 Pulling image "httpd:2.4.32"
Normal Pulled (x3 over 15s) kubelet, k8s-node-02 Successfully pulled image "httpd:2.4.32"
Normal Created (x3 over 15s) kubelet, k8s-node-02 Created container web
Warning Failed kubelet, k8s-node-02 Error: failed to start container "web": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused "rootfs_linux.go:58: mounting \"/var/lib/lxcfs/proc/loadavg\" to rootfs \"/data/docker/overlay2/7de388819224ba39f4c3a19dab71ecc6970cf3c19849fe0dff83c73c4b0a2d1f/merged\" at \"/proc/loadavg\" caused \"\\\"/data/docker/overlay2/7de388819224ba39f4c3a19dab71ecc6970cf3c19849fe0dff83c73c4b0a2d1f/merged/proc/loadavg\\\" cannot be mounted because it is located inside \\\"/proc\\\"\""": unknown
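
Note that only the /proc/loadavg mount fails while the other lxcfs files mount fine. runc keeps a whitelist of /proc paths that may be bind-mounted over, and /proc/loadavg was added to it later than cpuinfo, meminfo, and the rest, so an older docker/runc rejects exactly this one mount. A sketch to check (the commands are illustrative, not from this repo):

# The loadavg mount needs a runc release whose /proc whitelist includes it;
# on older installs the binary may be called docker-runc.
docker info --format '{{.RuncCommit.ID}}'
runc --version
# Workarounds: upgrade docker/runc, or drop the /proc/loadavg mount that the
# webhook injects.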

Only /proc/cpuinfo is empty or null

Only /proc/cpuinfo is empty or null; the other files are readable as expected.

On the k8s node, /var/lib/lxc/lxcfs/proc: (screenshot)

Please tell me how to restore the node without rebooting it, so as not to affect the other pods running on it.

lxcfs DaemonSet pod log:
/usr/bin/lxcfs --debug --enable-loadavg --enable-cfs /var/lib/lxc/lxcfs
Running constructor lxcfs_init to reload liblxcfs
mount namespace: 5
hierarchies:
0: fd: 6: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Kernel supports pidfds
Kernel does not support swap accounting
api_extensions:

- cgroups
- sys_cpu_online
- proc_cpuinfo
- proc_diskstats
- proc_loadavg
- proc_meminfo
- proc_stat
- proc_swaps
- proc_uptime
- proc_slabinfo
- shared_pidns
- cpuview_daemon
- loadavg_daemon
- pidfds
FUSE library version: 3.16.2
nullpath_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.34
flags=0x33fffffb
max_readahead=0x00020000
INIT: 7.38
flags=0x0040f039
max_readahead=0x00020000
max_write=0x00100000
max_background=0
congestion_threshold=0
time_gran=1
unique: 2, success, outsize: 80
unique: 4, opcode: LOOKUP (1), nodeid: 1, insize: 45, pid: 1864049
LOOKUP /proc
getattr[NULL] /proc
NODEID: 2
unique: 4, success, outsize: 144
unique: 6, opcode: LOOKUP (1), nodeid: 2, insize: 48, pid: 1864049
LOOKUP /proc/cpuinfo
getattr[NULL] /proc/cpuinfo
NODEID: 3
unique: 6, success, outsize: 144
unique: 8, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 1864053
getattr[NULL] /
unique: 8, success, outsize: 120
unique: 10, opcode: LOOKUP (1), nodeid: 2, insize: 50, pid: 1864055
LOOKUP /proc/diskstats
getattr[NULL] /proc/diskstats
NODEID: 4
unique: 10, success, outsize: 144
unique: 12, opcode: LOOKUP (1), nodeid: 2, insize: 48, pid: 1864061
LOOKUP /proc/loadavg
getattr[NULL] /proc/loadavg
NODEID: 5
unique: 12, success, outsize: 144
unique: 14, opcode: LOOKUP (1), nodeid: 2, insize: 48, pid: 1864067
LOOKUP /proc/meminfo
getattr[NULL] /proc/meminfo
NODEID: 6
unique: 14, success, outsize: 144
unique: 16, opcode: LOOKUP (1), nodeid: 2, insize: 45, pid: 1864073
LOOKUP /proc/stat
getattr[NULL] /proc/stat
NODEID: 7
unique: 16, success, outsize: 144
unique: 18, opcode: LOOKUP (1), nodeid: 2, insize: 46, pid: 1864080
LOOKUP /proc/swaps
getattr[NULL] /proc/swaps
NODEID: 8
unique: 18, success, outsize: 144
unique: 20, opcode: LOOKUP (1), nodeid: 2, insize: 47, pid: 1864086
LOOKUP /proc/uptime
getattr[NULL] /proc/uptime
NODEID: 9
unique: 20, success, outsize: 144
unique: 22, opcode: LOOKUP (1), nodeid: 1, insize: 44, pid: 1864094
LOOKUP /sys
getattr[NULL] /sys
NODEID: 10
unique: 22, success, outsize: 144
unique: 24, opcode: LOOKUP (1), nodeid: 10, insize: 48, pid: 1864094
LOOKUP /sys/devices
getattr[NULL] /sys/devices
NODEID: 11
unique: 24, success, outsize: 144
unique: 26, opcode: LOOKUP (1), nodeid: 11, insize: 47, pid: 1864094
LOOKUP /sys/devices/system
getattr[NULL] /sys/devices/system
NODEID: 12
unique: 26, success, outsize: 144
unique: 28, opcode: LOOKUP (1), nodeid: 12, insize: 44, pid: 1864094
LOOKUP /sys/devices/system/cpu
getattr[NULL] /sys/devices/system/cpu
NODEID: 13
unique: 28, success, outsize: 144
unique: 30, opcode: LOOKUP (1), nodeid: 13, insize: 47, pid: 1864094
LOOKUP /sys/devices/system/cpu/online
getattr[NULL] /sys/devices/system/cpu/online
NODEID: 14
unique: 30, success, outsize: 144
unique: 32, opcode: LOOKUP (1), nodeid: 1, insize: 45, pid: 1864391
LOOKUP /proc
getattr[NULL] /proc
NODEID: 2
unique: 32, success, outsize: 144
unique: 34, opcode: STATFS (17), nodeid: 1, insize: 40, pid: 3382
unique: 34, success, outsize: 96
unique: 36, opcode: STATFS (17), nodeid: 1, insize: 40, pid: 2393381
unique: 36, success, outsize: 96
unique: 38, opcode: OPEN (14), nodeid: 14, insize: 48, pid: 1864861
open flags: 0x8000 /sys/devices/system/cpu/online
open[94652746107344] flags: 0x8000 /sys/devices/system/cpu/online
unique: 38, success, outsize: 32
unique: 40, opcode: READ (15), nodeid: 14, insize: 80, pid: 1864861
read[94652746107344] 1024 bytes from 0 flags: 0x8000
read[94652746107344] 4 bytes from 0
unique: 40, success, outsize: 20
unique: 42, opcode: FLUSH (25), nodeid: 14, insize: 64, pid: 1864861
flush[94652746107344]
unique: 42, success, outsize: 16
unique: 44, opcode: RELEASE (18), nodeid: 14, insize: 64, pid: 0
release[94652746107344] flags: 0x8000
unique: 44, success, outsize: 16
unique: 46, opcode: OPEN (14), nodeid: 9, insize: 48, pid: 1864861
open flags: 0x8000 /proc/uptime
open[94652746107536] flags: 0x8000 /proc/uptime
unique: 46, success, outsize: 32
unique: 48, opcode: READ (15), nodeid: 9, insize: 80, pid: 1864861
read[94652746107536] 8191 bytes from 0 flags: 0x8000
read[94652746107536] 20 bytes from 0
unique: 48, success, outsize: 36
unique: 50, opcode: OPEN (14), nodeid: 6, insize: 48, pid: 1864861
open flags: 0x8000 /proc/meminfo
open[94652746107344] flags: 0x8000 /proc/meminfo
unique: 50, success, outsize: 32

CrashLoopBackOff: exec user process caused "no such file or directory"

Following the instructions in the README, the pods have the status below:

kubectl get pod | grep lxcfs

lxcfs-admission-webhook-deployment-845cdc8c6-fng4j   0/1     ContainerCreating   0          92m
lxcfs-sfrjv                                          0/1     CrashLoopBackOff    23         94m

kubectl logs lxcfs-sfrjv

standard_init_linux.go:211: exec user process caused "no such file or directory"

Host System: Debian 9

kubectl version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T11:18:22Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

create daemonset failed

os: ubuntu 16.04
kubernetes: v1.11.10

Steps:

root@k8s-m:/home/www/server/kube-yamls/public# git clone https://github.com/denverdino/lxcfs-admission-webhook.git
Cloning into 'lxcfs-admission-webhook'...
remote: Enumerating objects: 28, done.
remote: Counting objects: 100% (28/28), done.
remote: Compressing objects: 100% (27/27), done.
remote: Total 28 (delta 1), reused 28 (delta 1), pack-reused 0
Unpacking objects: 100% (28/28), done.
Checking connectivity... done.

root@k8s-m:/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment# pwd
/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment
root@k8s-m:/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment# ls
deployment.yaml  install.sh  lxcfs-daemonset.yaml  mutatingwebhook.yaml  service.yaml  uninstall.sh  validatingwebhook.yaml  webhook-create-signed-cert.sh  webhook-patch-ca-bundle.sh  web.yaml

root@k8s-m:/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment# kubectl apply -f  lxcfs-daemonset.yaml 
daemonset.apps/lxcfs created

root@k8s-m:/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment# kubectl  api-versions |grep admissionregistration.k8s.io/v1beta1
admissionregistration.k8s.io/v1beta1

root@k8s-m:/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment# kubectl get pods |grep lxc      
lxcfs-5kv56                                                0/1       CrashLoopBackOff    2          1m
lxcfs-5nbrb                                                0/1       CrashLoopBackOff    1          1m
lxcfs-74txn                                                0/1       CrashLoopBackOff    2          1m
lxcfs-9f5sv                                                0/1       RunContainerError   3          1m
lxcfs-bzhgz                                                0/1       CrashLoopBackOff    2          1m
lxcfs-d7q5k                                                0/1       CrashLoopBackOff    3          1m
lxcfs-dqdw7                                                0/1       CrashLoopBackOff    3          1m
lxcfs-fdsvj                                                0/1       CrashLoopBackOff    3          1m
...

root@k8s-m:/home/www/server/kube-yamls/public/lxcfs-admission-webhook/deployment# kubectl describe po  lxcfs-d7q5k     
Name:               lxcfs-d7q5k
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               ...
Start Time:         Mon, 28 Oct 2019 17:53:42 +0800
Labels:             app=lxcfs
                    controller-revision-hash=1128831819
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                ...
Controlled By:      DaemonSet/lxcfs
Containers:
  lxcfs:
    Container ID:   docker://d58c2afae24d11d1313f9f7ceb8aa19db4351c3db884f98218e1be655873989e
    Image:          registry.cn-hangzhou.aliyuncs.com/denverdino/lxcfs:3.1.2
    Image ID:       docker-pullable://registry.cn-hangzhou.aliyuncs.com/denverdino/lxcfs@sha256:102ed1896c3bcd5325f293a2758568022c93dd32d8712bc397f48cd38012a441
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      linux mounts: path /var/lib/lxcfs is mounted on /var/lib/lxcfs but it is not a shared mount
      Exit Code:    128
      Started:      Mon, 28 Oct 2019 17:56:49 +0800
      Finished:     Mon, 28 Oct 2019 17:56:49 +0800
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /sys/fs/cgroup from cgroup (rw)
      /usr/local from usr-local (rw)
      /var/lib/lxcfs from lxcfs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gznmj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/cgroup
    HostPathType:  
  usr-local:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local
    HostPathType:  
  lxcfs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/lxcfs
    HostPathType:  DirectoryOrCreate
  default-token-gznmj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gznmj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason   Age              From                 Message
  ----     ------   ----             ----                 -------
  Normal   Pulling  1m (x5 over 3m)  kubelet, 10.1.56.56  pulling image "registry.cn-hangzhou.aliyuncs.com/denverdino/lxcfs:3.1.2"
  Normal   Pulled   1m (x5 over 3m)  kubelet, 10.1.56.56  Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/denverdino/lxcfs:3.1.2"
  Normal   Created  1m (x5 over 3m)  kubelet, 10.1.56.56  Created container
  Warning  Failed   1m (x5 over 3m)  kubelet, 10.1.56.56  Error: failed to start container "lxcfs": Error response from daemon: linux mounts: path /var/lib/lxcfs is mounted on /var/lib/lxcfs but it is not a shared mount
  Warning  BackOff  1m (x5 over 2m)  kubelet, 10.1.56.56  Back-off restarting failed container

When I try to create the DaemonSet, it fails as shown above.

I don't know what happened.

Help.
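
The key line is "path /var/lib/lxcfs is mounted on /var/lib/lxcfs but it is not a shared mount": the DaemonSet requests Bidirectional mount propagation, which requires the hostPath to be a shared mount on the node. A sketch of the usual fix, run on each affected node:

# Make the lxcfs mountpoint a shared mount so propagation can work.
mount --bind /var/lib/lxcfs /var/lib/lxcfs
mount --make-shared /var/lib/lxcfs
# Verify; PROPAGATION should read "shared".
findmnt -o TARGET,PROPAGATION /var/lib/lxcfs
# If docker's systemd unit sets MountFlags=slave, remove that setting and
# restart docker as well.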

How to modify the "name" section in validatingwebhook.yaml and mutatingwebhook.yaml?

The current "name" section value is: validation.lxcfs-admission-webhook.aliyun.com/mutating.lxcfs-admission-webhook.aliyun.com

As the k8s docs say: the name of a MutatingWebhookConfiguration or a ValidatingWebhookConfiguration object must be a valid DNS subdomain name.

So how should we set this field in our own k8s clusters? I am not sure whether "mutating.lxcfs-admission-webhook.aliyun.com" refers to a specific svc deployed in your clusters.
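
The name does not need to resolve or to match any Service; it is only an identifier that must be shaped like a DNS subdomain. The Service the API server actually calls is configured separately under clientConfig. A sketch to see both side by side (jsonpath assumed against this repo's manifests):

# Each webhook entry's name is an opaque DNS-subdomain-shaped identifier;
# routing is decided by clientConfig.service:
kubectl get mutatingwebhookconfiguration \
  -o jsonpath='{range .items[*]}{.webhooks[0].name}{" -> "}{.webhooks[0].clientConfig.service}{"\n"}{end}'
# So you can rename the entry to anything DNS-subdomain-valid, e.g.
# mutating.lxcfs-admission-webhook.mycompany.com, without touching the svc.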

Pod goes into CrashLoopBackOff

Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/var/lib/lxcfs/proc/cpuinfo\" to rootfs \"/var/lib/docker/overlay2/c8de89d9c388253c25184e2dccc283ee76762db1f65ddef9ec2b2d702b5cb9f3/merged\" at \"/var/lib/docker/overlay2/c8de89d9c388253c25184e2dccc283ee76762db1f65ddef9ec2b2d702b5cb9f3/merged/proc/cpuinfo\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

Installation and deployment completed, but it did not take effect: free still shows the host's resources

kubectl get po

NAME READY STATUS RESTARTS AGE
centosvnc-4-0 1/1 Running 0 58m
centosvnc-5-0 1/1 Running 0 58m
centosvnc-6-0 1/1 Running 0 58m
lxcfs-mlwnz 1/1 Running 1 3h
lxcfs-p9sk9 1/1 Running 0 3h
lxcfs-pnkzd 1/1 Running 0 3h

kubectl exec -it lxcfs-mlwnz sh

sh-4.2# free
total used free shared buff/cache available
Mem: 16431920 14309100 429176 393920 1693644 1293868
Swap: 0 0 0
sh-4.2#
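
Note that free was run inside the lxcfs DaemonSet pod itself, which the webhook never mutates, so host values are expected there. The webhook only injects the lxcfs mounts into pods it admits; a sketch of a proper check (the namespace label follows this repo's README and is an assumption, and the pod needs a memory limit for the numbers to differ):

# Enable injection for the namespace, then create a fresh test pod with limits.
kubectl label namespace default lxcfs-admission-webhook=enabled
kubectl run test-lxcfs --image=centos:7 --restart=Never \
  --limits=memory=256Mi -- sleep 3600
# free inside the mutated pod should now show the pod's limit, not the host's.
kubectl exec -it test-lxcfs -- free -m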

Pod crashes after deploying daemonset.yml

The pod crashes with the error below.
By the way, my k8s version is 1.22.1, with the containerd runtime.

[root@master ~]# kubectl logs -f lxcfs-4lnlk
mount namespace: 5
hierarchies:
  0: fd:   6: freezer
  1: fd:   7: memory
  2: fd:   8: hugetlb
  3: fd:   9: pids
  4: fd:  10: devices
  5: fd:  11: cpuset
  6: fd:  12: blkio
  7: fd:  13: cpuacct,cpu
  8: fd:  14: net_prio,net_cls
  9: fd:  15: perf_event
 10: fd:  16: name=systemd
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
