
kubeblocks-addons's Introduction

kubeblocks-addons

KubeBlocks add-ons.

Add-on Tutorial

NOTE: This tutorial applies to KubeBlocks version 0.9.0.

Supported Add-ons

| NAME | APP-VERSION | DESCRIPTION |
|------|-------------|-------------|
| apecloud-mysql | 8.0.30 | ApeCloud MySQL is a database that is compatible with MySQL syntax and achieves high availability through the RAFT consensus protocol. |
| apecloud-postgresql | latest | ApeCloud PostgreSQL is a database that is compatible with PostgreSQL syntax and achieves high availability through the RAFT consensus protocol. |
| clickhouse | 22.9.4 | ClickHouse is an open-source, column-oriented OLAP database management system that boosts database performance while providing linear scalability and hardware efficiency. |
| elasticsearch | 8.8.2 | Elasticsearch is a distributed, RESTful search engine optimized for speed and relevance on production-scale workloads. |
| etcd | 3.5.6 | etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. |
| greptimedb | 0.3.2 | An open-source, cloud-native, distributed time-series database with PromQL/SQL/Python support. |
| kafka | 3.3.2 | Apache Kafka is a distributed streaming platform designed to build real-time pipelines; it can be used as a message broker or as a replacement for a log aggregation solution for big data applications. |
| llm | baichuan-7b-q4, baichuan2-13b-q4, baichuan2-7b-4q, codeshell-7b-chat-q4, latest, replit-code-3b-f16, zephyr-beta-7b-q4 | Large language models. |
| mariadb | 10.6.15 | MariaDB is a high-performance, open-source relational database management system that is widely used for web and application servers. |
| milvus | 2.2.4 | A cloud-native vector database, storage for next-generation AI applications. |
| mongodb | 4.0, 4.2, 4.4, 5.0, 5.0.20, 6.0, sharding-5.0 | MongoDB is a document database designed for ease of application development and scaling. |
| mysql | 5.7.42, 8.0.33 | MySQL is a widely used, open-source relational database management system (RDBMS). |
| nebula | 3.5.0 | NebulaGraph is a popular open-source graph database that can handle large volumes of data with millisecond latency, scale quickly, and perform fast graph analytics. |
| neon | latest | Neon is a serverless, open-source alternative to AWS Aurora Postgres. It separates storage and compute and substitutes the PostgreSQL storage layer by redistributing data across a cluster of nodes. |
| oceanbase | 4.2.0.0-100010032023083021 | An unlimitedly scalable distributed database for data-intensive transactional and real-time operational analytics workloads, with ultra-fast performance that once set world records in the TPC-C benchmark test. OceanBase has served over 400 customers across the globe and supports all mission-critical systems in Alipay. |
| official-postgresql | 12.15, 14.7, 14.7-zhparser | An official PostgreSQL cluster definition Helm chart for Kubernetes. |
| openldap | 2.4.57 | The OpenLDAP Project is a collaborative effort to develop a robust, commercial-grade, fully featured, and open-source LDAP suite of applications and development tools. This chart provides KubeBlocks' ClusterDefinition API manifests. |
| opensearch | 2.7.0 | Open-source, distributed, RESTful search engine. |
| oracle-mysql | 8.0.32, 8.0.32-perf | MySQL is a widely used, open-source relational database management system (RDBMS). |
| orioledb | beta1 | OrioleDB is a new storage engine for PostgreSQL, bringing a modern approach to database capacity, capabilities, and performance to the world's most-loved database platform. |
| polardbx | 2.3 | PolarDB-X is a cloud-native distributed SQL database designed for high-concurrency, massive-storage, and complex-querying scenarios. |
| postgresql | 12.14.0, 12.14.1, 12.15.0, 14.7.2, 14.8.0 | A PostgreSQL (with Patroni HA) cluster definition Helm chart for Kubernetes. |
| pulsar | 2.11.2 | Apache Pulsar is an open-source, distributed messaging and streaming platform built for the cloud. |
| qdrant | 1.5.0 | High-performance, massive-scale vector database for the next generation of AI. |
| redis | 7.0.6 | Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps. |
| risingwave | v1.0.0 | RisingWave is a distributed SQL streaming database that enables cost-efficient and reliable processing of streaming data. |
| starrocks | 3.1.1 | StarRocks, a Linux Foundation project, is a next-generation data platform designed to make data-intensive real-time analytics fast and easy. |
| tdengine | 3.0.5.0 | A specific implementation of the TDengine chart for Kubernetes, provided by KubeBlocks' ClusterDefinition API manifests. |
| tidb | 7.1.2 | TiDB is an open-source, cloud-native, distributed, MySQL-compatible database for elastic scale and real-time analytics. |
| weaviate | 1.18.0 | Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects. |
| xinference | 1.16.0, cpu-latest | Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. |
| zookeeper | 3.7.1 | Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. |
| yashandb | personal-23.1.1.100 | YashanDB is a new database system independently designed and developed by SICS. Based on classical database theories, it incorporates original Bounded Evaluation, Approximation, Parallel Scalability, and Cross-Modal Fusion Computation theories; supports multiple deployment modes such as stand-alone/primary-standby, shared cluster, and distributed; and covers OLTP/HTAP/OLAP mixed-workload scenarios. |
| greatsql | 8.0.32-25 | GreatSQL is a high-performance, open-source relational database management system that can be used on common hardware for financial-grade application scenarios. |
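
Each add-on is delivered as a Helm chart and enabled through the KubeBlocks addon mechanism. As a minimal sketch (the kafka add-on is used as the example; the commands follow the kbcli usage shown in the issues below):

```bash
# Enable an add-on; this installs its ClusterDefinition/ClusterVersion.
kbcli addon enable kafka

# Confirm the cluster definition was registered.
kubectl get clusterdefinition kafka

# Create a cluster from the add-on's cluster definition.
kbcli cluster create --cluster-definition=kafka
```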

kubeblocks-addons's People

Contributors

1aal, ahjing99, caiq1nyu, dengshaojiang, earayu, fengluodb, free6om, haowen159, heng4fun, iziang, jairuigou, jashbook, kissycn, kizuna-lek, kubejocker, ldming, leon-inf, linghan-hub, lynnleelhl, nashtsai, nayutah, shanshanying, skyrise-l, sophon-zt, wangyelei, wusai80, xuriwuyun, y-rookie, yipeng1030, zhaodiankui

kubeblocks-addons's Issues

[Features] support backup and restore for elasticsearch

[Features] support elasticsearch logical backup&restore

What is the user interaction of your feature

https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html

[BUG] zookeeper cluster failed to be initialized on minikube

Describe the bug
The zookeeper cluster fails to initialize on minikube; both pods end up in CrashLoopBackOff.

To Reproduce
Steps to reproduce the behavior:

  1. Create a cluster with zookeeper-3.7.2 on minikube:
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: zkeeper-shuehr
  namespace: default
spec:
  clusterDefinitionRef: zookeeper
  clusterVersionRef: zookeeper-3.7.2
  terminationPolicy: DoNotTerminate
  componentSpecs:
    - name: zookeeper
      componentDefRef: zookeeper
      replicas: 2
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: data-log
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
  2. See error:
kubectl get pod
NAME                                            READY   STATUS             RESTARTS         AGE
zkeeper-shuehr-zookeeper-0                      0/1     CrashLoopBackOff   7 (77s ago)    17m
zkeeper-shuehr-zookeeper-1                      0/1     CrashLoopBackOff   7 (118s ago)   17m

describe pod

kubectl describe pod zkeeper-shuehr-zookeeper-0 
Name:         zkeeper-shuehr-zookeeper-0
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Mon, 15 Jan 2024 10:46:28 +0800
Labels:       app.kubernetes.io/component=zookeeper
              app.kubernetes.io/instance=zkeeper-shuehr
              app.kubernetes.io/managed-by=kubeblocks
              app.kubernetes.io/name=zookeeper
              app.kubernetes.io/version=
              apps.kubeblocks.io/component-name=zookeeper
              controller-revision-hash=zkeeper-shuehr-zookeeper-8688568656
              statefulset.kubernetes.io/pod-name=zkeeper-shuehr-zookeeper-0
Annotations:  apps.kubeblocks.io/component-replicas: 2
Status:       Running
IP:           10.244.0.53
IPs:
  IP:           10.244.0.53
Controlled By:  StatefulSet/zkeeper-shuehr-zookeeper
Containers:
  zookeeper:
    Container ID:  docker://fb1675f807a49e7ad188a0c1c98eed66ce3eb606e4c07037e7dc9c702a3937df
    Image:         bitnami/zookeeper:3.7
    Image ID:      docker-pullable://bitnami/zookeeper@sha256:3090426a46d9b9e91f94437a73fc821858eb2deeeddc469f228b07e336126c67
    Ports:         2181/TCP, 2888/TCP, 3888/TCP, 8080/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /kb-scripts/start-zookeeper.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Mon, 15 Jan 2024 11:01:21 +0800
      Finished:     Mon, 15 Jan 2024 11:02:51 +0800
    Ready:          False
    Restart Count:  7
    Limits:
      cpu:     100m
      memory:  512Mi
    Requests:
      cpu:      100m
      memory:   512Mi
    Liveness:   exec [/bin/bash -c echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/bash -c echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment Variables from:
      zkeeper-shuehr-zookeeper-env      ConfigMap  Optional: false
      zkeeper-shuehr-zookeeper-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:      zkeeper-shuehr-zookeeper-0 (v1:metadata.name)
      KB_POD_UID:        (v1:metadata.uid)
      KB_NAMESPACE:     default (v1:metadata.namespace)
      KB_SA_NAME:        (v1:spec.serviceAccountName)
      KB_NODENAME:       (v1:spec.nodeName)
      KB_HOST_IP:        (v1:status.hostIP)
      KB_POD_IP:         (v1:status.podIP)
      KB_POD_IPS:        (v1:status.podIPs)
      KB_HOSTIP:         (v1:status.hostIP)
      KB_PODIP:          (v1:status.podIP)
      KB_PODIPS:         (v1:status.podIPs)
      KB_POD_FQDN:      $(KB_POD_NAME).zkeeper-shuehr-zookeeper-headless.$(KB_NAMESPACE).svc
      ZOO_ENABLE_AUTH:  yes
    Mounts:
      /bitnami/zookeeper/data from data (rw)
      /bitnami/zookeeper/log from data-log (rw)
      /kb-scripts from scripts (rw)
      /opt/bitnami/zookeeper/conf/zoo.cfg from configs (rw,path="zoo.cfg")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqlkp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-zkeeper-shuehr-zookeeper-0
    ReadOnly:   false
  data-log:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-log-zkeeper-shuehr-zookeeper-0
    ReadOnly:   false
  configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zkeeper-shuehr-zookeeper-zookeeper-config
    Optional:  false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zkeeper-shuehr-zookeeper-zookeeper-scripts
    Optional:  false
  kube-api-access-rqlkp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 kb-data=true:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               17m                   default-scheduler        Successfully assigned default/zkeeper-shuehr-zookeeper-0 to minikube
  Normal   SuccessfulAttachVolume  17m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-7bc213be-3f7f-468c-b2ce-dc29d472753f"
  Normal   SuccessfulAttachVolume  17m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-a054d278-c748-4966-a903-d13f1a99f3a0"
  Normal   Pulling                 16m                   kubelet                  Pulling image "bitnami/zookeeper:3.7"
  Normal   Pulled                  15m                   kubelet                  Successfully pulled image "bitnami/zookeeper:3.7" in 15.366897007s (1m0.081202819s including waiting)
  Normal   Killing                 14m                   kubelet                  Container zookeeper failed liveness probe, will be restarted
  Warning  Unhealthy               14m (x3 over 15m)     kubelet                  Liveness probe failed: localhost [127.0.0.1] 2181 (?) : Connection refused
  Normal   Pulled                  14m                   kubelet                  Container image "bitnami/zookeeper:3.7" already present on machine
  Warning  Unhealthy               14m                   kubelet                  Readiness probe failed: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown
  Warning  Unhealthy               14m (x5 over 15m)     kubelet                  Readiness probe failed: command "/bin/bash -c echo \"ruok\" | timeout 2 nc -w 2 localhost 2181 | grep imok" timed out
  Normal   Started                 14m (x2 over 15m)     kubelet                  Started container zookeeper
  Normal   Created                 14m (x2 over 15m)     kubelet                  Created container zookeeper
  Warning  Unhealthy               6m50s (x29 over 15m)  kubelet                  Liveness probe failed: command "/bin/bash -c echo \"ruok\" | timeout 2 nc -w 2 localhost 2181 | grep imok" timed out
  Warning  Unhealthy               112s (x35 over 15m)   kubelet                  Readiness probe failed: localhost [127.0.0.1] 2181 (?) : Connection refused

logs pod

kubectl logs zkeeper-shuehr-zookeeper-0  --previous 
zookeeper 03:25:55.02 
zookeeper 03:25:55.42 Welcome to the Bitnami zookeeper container
zookeeper 03:25:55.72 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-zookeeper
zookeeper 03:25:56.02 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-zookeeper/issues
zookeeper 03:25:56.22 
zookeeper 03:25:56.51 INFO  ==> ** Starting ZooKeeper setup **
zookeeper 03:26:01.22 INFO  ==> Initializing ZooKeeper...
zookeeper 03:26:01.52 INFO  ==> User injected custom configuration detected!
zookeeper 03:26:02.32 INFO  ==> Deploying ZooKeeper with persisted data...
zookeeper 03:26:02.62 INFO  ==> ** ZooKeeper setup finished! **

zookeeper 03:26:04.42 INFO  ==> ** Starting ZooKeeper **
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
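
The liveness and readiness probes above exec ZooKeeper's `ruok` four-letter-word command. Since ZooKeeper 3.5, four-letter-word commands must be explicitly whitelisted, so one plausible thing to check (a troubleshooting sketch, not a confirmed root cause) is whether the rendered `zoo.cfg` whitelists `ruok`:

```bash
# The probes can only succeed if "ruok" answers with "imok", and ZooKeeper
# only answers whitelisted four-letter-word commands.
kubectl exec zkeeper-shuehr-zookeeper-0 -- \
  grep -i 4lw /opt/bitnami/zookeeper/conf/zoo.cfg

# Expected entry (would belong in the addon's config template if missing):
# 4lw.commands.whitelist=srvr, mntr, ruok
```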

Expected behavior
The zookeeper cluster initializes successfully on minikube.

[BUG] failed to create ob cluster: secret "ob-host-single-conn-credential" not found

$ k get pod                                  
NAME                           READY   STATUS                       RESTARTS   AGE
ob-host-single-ob-bundle-0-0   2/3     CreateContainerConfigError   0          61s

$ k describe pod ob-host-single-ob-bundle-0-0
...
Topology Spread Constraints:  kubernetes.io/hostname:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/instance=ob-host-single,apps.kubeblocks.io/component-name=ob-host-single-ob-bundle-0
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  14s               default-scheduler  Successfully assigned default/ob-host-single-ob-bundle-0-0 to k3d-demo-kb-0.8-test-server-0
  Normal   Pulled     11s               kubelet            Container image "docker.io/apecloud/obtools:4.2.1" already present on machine
  Normal   Created    11s               kubelet            Created container kb-tools
  Normal   Started    11s               kubelet            Started container kb-tools
  Normal   Pulled     11s               kubelet            Container image "docker.io/apecloud/obagent:4.2.1-100000092023101717" already present on machine
  Normal   Created    11s               kubelet            Created container metrics
  Normal   Started    10s               kubelet            Started container metrics
  Normal   Pulled     10s               kubelet            Container image "infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:0.8.0-beta.28" already present on machine
  Normal   Created    10s               kubelet            Created container config-manager
  Normal   Started    10s               kubelet            Started container config-manager
  Normal   Pulled     9s (x3 over 11s)  kubelet            Container image "docker.io/apecloud/oceanbase:4.2.0.0-100010032023083021" already present on machine
  Warning  Failed     9s (x3 over 11s)  kubelet            Error: secret "ob-host-single-conn-credential" not found
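
A quick confirmation of the failure mode (names taken from the events above) is to check whether the referenced connection-credential secret exists at all:

```bash
# CreateContainerConfigError is expected if the secret the container's
# env references is absent or created after the pod.
kubectl get secret ob-host-single-conn-credential -n default

# See which secrets the cluster actually created.
kubectl get secrets -n default | grep ob-host-single
```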

[Features] support dashboard for elasticsearch

[BUG] zookeeper cluster list-logs No such file or directory

Describe the bug
kbcli cluster list-logs cannot find the zookeeper cluster's log files and reports "No such file or directory".

To Reproduce
Steps to reproduce the behavior:

  1. Create the cluster:
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: zkeeper-pfkpxz
  namespace: default
spec:
  clusterDefinitionRef: zookeeper
  clusterVersionRef: zookeeper-3.7.2
  terminationPolicy: WipeOut
  componentSpecs:
    - name: zookeeper
      componentDefRef: zookeeper
      replicas: 2
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: data-log
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
  2. See error:
kbcli cluster list-logs zkeeper-pfkpxz
ls: cannot access '/opt/zookeeper/logs/zookeeper_audit.log': No such file or directory
ls: cannot access '/opt/zookeeper/logs/zookeeper-*-server-*.log': No such file or directory
ls: cannot access '/opt/zookeeper/logs/zookeeper_audit.log': No such file or directory
ls: cannot access '/opt/zookeeper/logs/zookeeper-*-server-*.log': No such file or directory
ls: cannot access '/opt/zookeeper/logs/zookeeper_audit.log': No such file or directory
ls: cannot access '/opt/zookeeper/logs/zookeeper-*-server-*.log': No such file or directory
No log files found. You can enable the log feature with the kbcli command below.
kbcli cluster update zkeeper-pfkpxz --enable-all-logs=true --namespace default
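
The errors show that kbcli probes `/opt/zookeeper/logs`, while this addon runs the Bitnami image, which keeps its files under `/opt/bitnami/zookeeper`. A hedged way to confirm the path mismatch (paths taken from the output above and the Bitnami layout):

```bash
# Path the log probe uses, per the errors above (expected to fail):
kubectl exec zkeeper-pfkpxz-zookeeper-0 -- ls /opt/zookeeper/logs

# Where the Bitnami image actually keeps ZooKeeper's files:
kubectl exec zkeeper-pfkpxz-zookeeper-0 -- ls /opt/bitnami/zookeeper/logs
```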

Expected behavior
kbcli cluster list-logs succeeds for the zookeeper cluster.

[Features] YashanDB stand-alone mode

YashanDB is a new database system independently designed and developed by SICS. It supports multiple deployment modes such as stand-alone/primary-standby, shared cluster, and distributed, and covers OLTP/HTAP/OLAP mixed-workload scenarios.

We shall support the stand-alone deployment mode first.

[Improvement] optimize the member leaving process for the Qdrant cluster.

[Features] Support open source proxy camellia

[Improvement] set PostgreSQL synchronous_commit to on

PostgreSQL's synchronous_commit currently defaults to off; set it to on to avoid data loss.

https://postgresqlco.nf/doc/en/param/synchronous_commit/14/

When set to on, commits wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and flushed it to durable storage. This ensures the transaction will not be lost unless both the primary and all synchronous standbys suffer corruption of their database storage.
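
For an already-running cluster the setting can also be flipped and verified from psql (a minimal sketch; the durable fix belongs in the addon's PostgreSQL configuration template):

```bash
# Turn synchronous commit on at the server level and reload the config.
psql -U postgres -c "ALTER SYSTEM SET synchronous_commit = 'on';"
psql -U postgres -c "SELECT pg_reload_conf();"

# Confirm the effective value.
psql -U postgres -c "SHOW synchronous_commit;"
```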


[Features] support mongodb sharding with proxy

[Features] support milvus backup and restore

What is the user interaction of your feature

https://milvus.io/docs/milvus_backup_cli.md

Error when adapting postgres's official-postgresql-cc file

I wrote my own addon based on the official-postgresql addon in this repo. When starting the cluster, configmap validation fails.
The error is: failed to validate configmap: [missing ',' in struct literal: 1127:29 expected '}', found 'EOF': 1133:2]

I made my changes on top of the official-postgresql files, and I don't understand why cluster creation keeps getting stuck here.
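
The message (`missing ',' in struct literal ... expected '}', found 'EOF'`) is a CUE parse error, which points to an unclosed struct in the modified configuration-constraint template. Assuming the template is a `.cue` file as in the official-postgresql addon (the file name below is hypothetical), it can be validated locally before installing:

```bash
# "cue vet" surfaces the same parse errors KubeBlocks reports at runtime.
cue vet my-postgresql-config-constraint.cue

# The error points at lines 1127-1133, so inspect the braces around there.
sed -n '1120,1135p' my-postgresql-config-constraint.cue
```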

[Features] Support Apache Doris

Support a Doris cluster with the following features:

  • Create Cluster
  • FailOver/SwitchOver
  • Vscale
  • Hscale
  • Volumeexpand
  • Stop/Start
  • Restart
  • Backup/Restore
  • Config
  • Monitoring
  • ...

[BUG] yashandb exp/yasdb/imp/yasldr/yaspwd/yasql/yasrman/yaswrap/yex_server cannot be used

Refer to #307 for the setup steps.

sh-4.4$ pwd
/home/yashan/bin
sh-4.4$ ls
exp  imp  yasagent  yasbak  yasboot  yasdb  yasldr  yasom  yaspwd  yasql  yasrman  yaswrap  yex_server
sh-4.4$ ./exp
./exp: error while loading shared libraries: libcsvexp.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yasdb
./yasdb: error while loading shared libraries: libyas_server.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./imp
./imp: error while loading shared libraries: libyascli.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yasldr
./yasldr: error while loading shared libraries: libyascli.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yaspwd
./yaspwd: error while loading shared libraries: libyas_infra.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yasql
./yasql: error while loading shared libraries: libyascli.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yasrman
./yasrman: error while loading shared libraries: libyascli.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yaswrap
./yaswrap: error while loading shared libraries: libyascli.so.0: cannot open shared object file: No such file or directory
sh-4.4$ ./yex_server
./yex_server: error while loading shared libraries: libyex_client.so: cannot open shared object file: No such file or directory
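
All of the loader errors say a bundled shared library cannot be found, which typically means `LD_LIBRARY_PATH` is only set for the database's entrypoint process, not for interactive shells. A hedged workaround (the `lib` directory location is an assumption about the image layout):

```bash
# Show which shared libraries are unresolved for one failing binary.
ldd /home/yashan/bin/yasql | grep "not found"

# Point the loader at YashanDB's bundled libraries (assumed path),
# then re-run the binary that previously failed to load.
export LD_LIBRARY_PATH=/home/yashan/lib:$LD_LIBRARY_PATH
/home/yashan/bin/yasql
```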

[BUG] kbcli enable addon kafka failed

Describe the bug
kbcli enable addon kafka failed.

Kubernetes: v1.26.3
KubeBlocks: 0.8.0-alpha.5
kbcli: 0.8.0-alpha.5

To Reproduce
Steps to reproduce the behavior:

  1. install kubeblocks
  2. See error
➜  ~ kbcli addon enable kafka
patching addon 'status.phase=Failed' to 'status.phase=' will result addon install spec (spec.install) not being updated
addon.extensions.kubeblocks.io/kafka enabled
➜  ~ kubectl get pod
NAME                                            READY   STATUS    RESTARTS   AGE
install-kafka-addon-fv6rr                       0/1     Error     0          3m45s
install-kafka-addon-jcm7d                       0/1     Error     0          4m24s
install-kafka-addon-mqk7b                       0/1     Error     0          3m19s
install-kafka-addon-px2xj                       0/1     Error     0          4m

➜  ~ kubectl logs install-kafka-addon-fv6rr
Defaulted container "helm" out of: helm, copy-charts (init)
Release "kb-addon-kafka" does not exist. Installing it now.
Error: release kb-addon-kafka failed, and has been uninstalled due to atomic being set: 1 error occurred:
	* ClusterDefinition.apps.kubeblocks.io "kafka" is invalid: [spec.componentDefs[3].podSpec.containers[0].env[0].value: Invalid value: "integer": spec.componentDefs[3].podSpec.containers[0].env[0].value in body must be of type string: "integer", <nil>: Invalid value: "null": some validation rules were not checked because the object was invalid; correct the existing errors to complete validation]
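
The validation message says an env value in `componentDefs[3]` is rendered as an integer, while Kubernetes requires env values to be strings. A minimal sketch of the kind of fix involved (the variable name is hypothetical; the point is the quoting in the chart template):

```bash
# Invalid: the value renders as an integer and fails CRD validation.
#   env:
#     - name: SOME_KAFKA_SETTING   # hypothetical variable
#       value: 1
#
# Valid: quote the value so it renders as a string.
#   env:
#     - name: SOME_KAFKA_SETTING
#       value: "1"
#
# Re-render the chart locally (path assumed) to verify before enabling:
helm template ./addons/kafka | grep -n 'value:'
```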

Expected behavior
kbcli addon enable kafka succeeds.

[Features] Support Halo

Support a Halo cluster addon with the following features:

  • Create Primary/Secondary Halo Cluster
  • Vscale
  • Hscale
  • Volumeexpand
  • Stop/Start
  • Restart
  • Backup/Restore
  • Logs
  • Config
  • Monitor
  • FailOver/SwitchOver

[Features] Support TiDB Cluster

Support a TiDB cluster addon with the following features:

  • Vscale
  • Hscale
  • Volumeexpand
  • Stop/Start
  • Restart
  • Backup/Restore
  • Logs
  • Config
  • Monitor

[Features] support elasticsearch cluster

[Features] Instances in the zookeeper cluster support roles

[BUG] kbcli cluster create tidb with default cpu/memory gets OOMKilled

➜ ~ kbcli version
Kubernetes: v1.27.3-gke.100
KubeBlocks: 0.8.0-alpha.8
kbcli: 0.8.0-alpha.8

The default resources for tikv are too small, so creating the cluster with the default values fails with OOM; we need to raise the default memory to something like 2Gi.

➜  ~ kbcli cluster create --cluster-definition=tidb
Info: --cluster-version is not specified, ClusterVersion tidb-v7.1.2 is applied by default
Cluster daisy79 created

➜  ~ k get pod
NAME                 READY   STATUS      RESTARTS      AGE
daisy79-pd-0         1/1     Running     0             53s
daisy79-tidb-0       2/2     Running     0             53s
daisy79-tikv-0       0/1     OOMKilled   2 (22s ago)   52s

➜  ~ kbcli cluster describe daisy79
Name: daisy79	 Created Time: Nov 21,2023 14:49 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION       STATUS     TERMINATION-POLICY
default     tidb                 tidb-v7.1.2   Updating   Delete

Endpoints:
COMPONENT   MODE        INTERNAL                                       EXTERNAL
pd          ReadWrite   daisy79-pd.default.svc.cluster.local:2379      <none>
                        daisy79-pd.default.svc.cluster.local:2380
tikv        ReadWrite   daisy79-tikv.default.svc.cluster.local:20160   <none>
tidb        ReadWrite   daisy79-tidb.default.svc.cluster.local:4000    <none>
                        daisy79-tidb.default.svc.cluster.local:10080

Topology:
COMPONENT   INSTANCE         ROLE     STATUS    AZ              NODE                                                CREATED-TIME
pd          daisy79-pd-0     <none>   Running   us-central1-c   gke-yjtest-default-pool-5dfa9cf3-17rn/10.128.0.59   Nov 21,2023 14:49 UTC+0800
tidb        daisy79-tidb-0   <none>   Running   us-central1-c   gke-yjtest-default-pool-5dfa9cf3-7k65/10.128.0.62   Nov 21,2023 14:49 UTC+0800
tikv        daisy79-tikv-0   <none>   Running   us-central1-c   gke-yjtest-default-pool-5dfa9cf3-w53m/10.128.0.58   Nov 21,2023 14:49 UTC+0800

Resources Allocation:
COMPONENT   DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
pd          false       1 / 1                1Gi / 1Gi               data:20Gi      kb-default-sc
tikv        false       1 / 1                1Gi / 1Gi               data:20Gi      kb-default-sc
tidb        false       1 / 1                1Gi / 1Gi               data:20Gi      kb-default-sc

Images:
COMPONENT   TYPE   IMAGE
pd          pd     docker.io/pingcap/pd:v7.1.2
tikv        tikv   docker.io/pingcap/tikv:v7.1.2
tidb        tidb   docker.io/pingcap/tidb:v7.1.2

Show cluster events: kbcli cluster list-events -n default daisy79
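
Until the chart default is raised, a hedged workaround is to pass explicit resources at creation time, reusing the `--set` flags shown for other clusters on this page:

```bash
# Give the components 2Gi so tikv is not OOMKilled at startup.
kbcli cluster create --cluster-definition=tidb --set cpu=1,memory=2Gi
```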

[Improvement] upgrade qdrant version to 1.7.3

[Features] pulsar broker&proxy component support nodeport service

[Features] pulsar support v3.0.2

[Feature] PolarDB-X member reconfiguration support

➜ ~ kbcli version
Kubernetes: v1.27.3-gke.100
KubeBlocks: 0.7.0-beta.18
kbcli: 0.7.0-beta.18

  1. Create PolarDB-X

      `helm repo add kubeblocks-kbcli  https://jihulab.com/api/v4/projects/150246/packages/helm/stable`

"kubeblocks-kbcli" already exists with the same configuration, skipping

      `helm repo update kubeblocks-kbcli `

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubeblocks-kbcli" chart repository
Update Complete. ⎈Happy Helming!⎈

      `helm upgrade --install polardbx kubeblocks-kbcli/polardbx --version 0.7.0-beta.18 `

Release "polardbx" has been upgraded. Happy Helming!
NAME: polardbx
LAST DEPLOYED: Fri Nov  3 11:57:54 2023
NAMESPACE: default
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
Thanks for installing PolarDB-X using KubeBlocks!


    `kbcli cluster create  polardbx-tjxuol             --termination-policy=Halt             --monitoring-interval=0 --enable-all-logs=false --cluster-definition=polardbx --cluster-version=polardbx-v1.4.1 --set cpu=500m,memory=1Gi,replicas=3,storage=5Gi  --namespace default `

Cluster polardbx-tjxuol created

➜  ~ kbcli cluster describe polardbx-tjxuol
Name: polardbx-tjxuol	 Created Time: Nov 03,2023 11:58 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION           STATUS    TERMINATION-POLICY
default     polardbx             polardbx-v1.4.1   Running   WipeOut

Endpoints:
COMPONENT   MODE        INTERNAL                                             EXTERNAL
gms         ReadWrite   polardbx-tjxuol-gms.default.svc.cluster.local:3306   <none>
                        polardbx-tjxuol-gms.default.svc.cluster.local:9104
dn          ReadWrite   polardbx-tjxuol-dn.default.svc.cluster.local:3306    <none>
cn          ReadWrite   polardbx-tjxuol-cn.default.svc.cluster.local:3306    <none>
                        polardbx-tjxuol-cn.default.svc.cluster.local:9104
cdc         ReadWrite   polardbx-tjxuol-cdc.default.svc.cluster.local:3306   <none>
                        polardbx-tjxuol-cdc.default.svc.cluster.local:9104

Topology:
COMPONENT   INSTANCE                ROLE       STATUS    AZ              NODE                                                CREATED-TIME
cdc         polardbx-tjxuol-cdc-0   <none>     Running   us-central1-c   gke-yijing-default-pool-3e14ea35-klwc/10.128.0.26   Nov 03,2023 11:58 UTC+0800
cn          polardbx-tjxuol-cn-0    <none>     Running   us-central1-c   gke-yijing-default-pool-3e14ea35-klwc/10.128.0.26   Nov 03,2023 11:58 UTC+0800
dn          polardbx-tjxuol-dn-0    follower   Running   us-central1-c   gke-yijing-default-pool-3e14ea35-hqtr/10.128.0.30   Nov 03,2023 11:58 UTC+0800
dn          polardbx-tjxuol-dn-1    leader     Running   us-central1-c   gke-yijing-default-pool-3e14ea35-hxpl/10.128.0.28   Nov 03,2023 11:58 UTC+0800
dn          polardbx-tjxuol-dn-2    follower   Running   us-central1-c   gke-yijing-default-pool-3e14ea35-klwc/10.128.0.26   Nov 03,2023 11:58 UTC+0800
gms         polardbx-tjxuol-gms-0   leader     Running   us-central1-c   gke-yijing-default-pool-3e14ea35-wg54/10.128.0.35   Nov 03,2023 11:58 UTC+0800
gms         polardbx-tjxuol-gms-1   follower   Running   us-central1-c   gke-yijing-default-pool-3e14ea35-wg54/10.128.0.35   Nov 03,2023 11:58 UTC+0800
gms         polardbx-tjxuol-gms-2   follower   Running   us-central1-c   gke-yijing-default-pool-3e14ea35-klwc/10.128.0.26   Nov 03,2023 11:58 UTC+0800

Resources Allocation:
COMPONENT   DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
gms         false       500m / 500m          1Gi / 1Gi               data:5Gi       kb-default-sc
dn          false       1 / 1                1Gi / 1Gi               data:20Gi      kb-default-sc
cn          false       1 / 1                1Gi / 1Gi               data:20Gi      kb-default-sc
cdc         false       1 / 1                1Gi / 1Gi               data:20Gi      kb-default-sc

Images:
COMPONENT   TYPE   IMAGE
gms         gms    polardbx/polardbx-engine-2.0:latest
dn          dn     polardbx/polardbx-engine-2.0:latest
cn          cn     polardbx/polardbx-sql:latest
cdc         cdc    polardbx/polardbx-cdc:latest

Show cluster events: kbcli cluster list-events -n default polardbx-tjxuol
  2. Restart
➜  ~ kbcli cluster restart polardbx-tjxuol
Please type the name again(separate with white space when more than one): polardbx-tjxuol
OpsRequest polardbx-tjxuol-restart-tqb2c created successfully, you can view the progress:
	kbcli cluster describe-ops polardbx-tjxuol-restart-tqb2c -n default

➜  ~ kbcli cluster describe-ops polardbx-tjxuol-restart-tqb2c -n default
Spec:
  Name: polardbx-tjxuol-restart-tqb2c	NameSpace: default	Cluster: polardbx-tjxuol	Type: Restart

Command:
  kbcli cluster restart polardbx-tjxuol --components=gms,dn,cn,cdc --namespace=default

Status:
  Start Time:         Nov 03,2023 12:10 UTC+0800
  Duration:           28m
  Status:             Running
  Progress:           2/8
                      OBJECT-KEY                  STATUS       DURATION    MESSAGE
                      Pod/polardbx-tjxuol-cdc-0   Succeed      3m21s       Successfully restart: Pod/polardbx-tjxuol-cdc-0 in Component: cdc
                      Pod/polardbx-tjxuol-cn-0    Succeed      3m4s        Successfully restart: Pod/polardbx-tjxuol-cn-0 in Component: cn
                      Pod/polardbx-tjxuol-dn-1    Pending      <Unknown>
                      Pod/polardbx-tjxuol-dn-2    Pending      <Unknown>
                      Pod/polardbx-tjxuol-dn-0    Processing   28m         Start to restart: Pod/polardbx-tjxuol-dn-0 in Component: dn
                      Pod/polardbx-tjxuol-gms-0   Pending      <Unknown>
                      Pod/polardbx-tjxuol-gms-2   Pending      <Unknown>
                      Pod/polardbx-tjxuol-gms-1   Processing   28m         Start to restart: Pod/polardbx-tjxuol-gms-1 in Component: gms

Conditions:
LAST-TRANSITION-TIME         TYPE          REASON                         STATUS   MESSAGE
Nov 03,2023 12:10 UTC+0800   Progressing   OpsRequestProgressingStarted   True     Start to process the OpsRequest: polardbx-tjxuol-restart-tqb2c in Cluster: polardbx-tjxuol
Nov 03,2023 12:10 UTC+0800   Validated     ValidateOpsRequestPassed       True     OpsRequest: polardbx-tjxuol-restart-tqb2c is validated
Nov 03,2023 12:10 UTC+0800   Restarting    RestartStarted                 True     Start to restart database in Cluster: polardbx-tjxuol

Warning Events: <none>

➜  ~ k describe sts polardbx-tjxuol-dn
Name:               polardbx-tjxuol-dn
Namespace:          default
CreationTimestamp:  Fri, 03 Nov 2023 11:58:23 +0800
Selector:           app.kubernetes.io/instance=polardbx-tjxuol,app.kubernetes.io/managed-by=kubeblocks,app.kubernetes.io/name=polardbx,apps.kubeblocks.io/component-name=dn
Labels:             app.kubernetes.io/component=dn
                    app.kubernetes.io/instance=polardbx-tjxuol
                    app.kubernetes.io/managed-by=kubeblocks
                    app.kubernetes.io/name=polardbx
                    apps.kubeblocks.io/component-name=dn
                    rsm.workloads.kubeblocks.io/controller-generation=2
Annotations:        config.kubeblocks.io/tpl-polardbx-scripts: polardbx-tjxuol-dn-polardbx-scripts
                    kubeblocks.io/generation: 1
Replicas:           3 desired | 3 total
Update Strategy:    OnDelete
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/component=dn
                    app.kubernetes.io/instance=polardbx-tjxuol
                    app.kubernetes.io/managed-by=kubeblocks
                    app.kubernetes.io/name=polardbx
                    app.kubernetes.io/version=polardbx-v1.4.1
                    apps.kubeblocks.io/component-name=dn
                    apps.kubeblocks.io/workload-type=Consensus
  Annotations:      kubeblocks.io/restart: 2023-11-03T04:10:57Z
  Service Account:  kb-polardbx-tjxuol
  Init Containers:
   tools-updater:
    Image:      polardbx/xstore-tools:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/ash
    Args:
      -c
      ./hack/update.sh /target
    Limits:
      cpu:     0
      memory:  0
    Environment Variables from:
      polardbx-tjxuol-dn-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                (v1:metadata.name)
      KB_POD_UID:                 (v1:metadata.uid)
      KB_NAMESPACE:               (v1:metadata.namespace)
      KB_SA_NAME:                 (v1:spec.serviceAccountName)
      KB_NODENAME:                (v1:spec.nodeName)
      KB_HOST_IP:                 (v1:status.hostIP)
      KB_POD_IP:                  (v1:status.podIP)
      KB_POD_IPS:                 (v1:status.podIPs)
      KB_HOSTIP:                  (v1:status.hostIP)
      KB_PODIP:                   (v1:status.podIP)
      KB_PODIPS:                  (v1:status.podIPs)
      KB_CLUSTER_NAME:           polardbx-tjxuol
      KB_COMP_NAME:              dn
      KB_CLUSTER_COMP_NAME:      polardbx-tjxuol-dn
      KB_CLUSTER_UID_POSTFIX_8:  690c6c10
      KB_POD_FQDN:               $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      NODE_NAME:                  (v1:spec.nodeName)
    Mounts:
      /target from xstore-tools (rw)
   role-agent-installer:
    Image:      msoap/shell2http:1.16.0
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
      /app/shell2http
      /role-probe/agent
    Environment:  <none>
    Mounts:
      /role-probe from role-agent (rw)
  Containers:
   engine:
    Image:       polardbx/polardbx-engine-2.0:latest
    Ports:       3306/TCP, 11306/TCP, 31600/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /scripts/xstore-setup.sh
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Startup:   tcp-socket :mysql delay=20s timeout=30s period=10s #success=1 #failure=60
    Environment Variables from:
      polardbx-tjxuol-dn-env      ConfigMap  Optional: false
      polardbx-tjxuol-dn-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                (v1:metadata.name)
      KB_POD_UID:                 (v1:metadata.uid)
      KB_NAMESPACE:               (v1:metadata.namespace)
      KB_SA_NAME:                 (v1:spec.serviceAccountName)
      KB_NODENAME:                (v1:spec.nodeName)
      KB_HOST_IP:                 (v1:status.hostIP)
      KB_POD_IP:                  (v1:status.podIP)
      KB_POD_IPS:                 (v1:status.podIPs)
      KB_HOSTIP:                  (v1:status.hostIP)
      KB_PODIP:                   (v1:status.podIP)
      KB_PODIPS:                  (v1:status.podIPs)
      KB_CLUSTER_NAME:           polardbx-tjxuol
      KB_COMP_NAME:              dn
      KB_CLUSTER_COMP_NAME:      polardbx-tjxuol-dn
      KB_CLUSTER_UID_POSTFIX_8:  690c6c10
      KB_POD_FQDN:               $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      LANG:                      en_US.utf8
      LC_ALL:                    en_US.utf8
      ENGINE:                    galaxy
      ENGINE_HOME:               /opt/galaxy_engine
      NODE_ROLE:                 candidate
      NODE_IP:                    (v1:status.hostIP)
      NODE_NAME:                  (v1:spec.nodeName)
      POD_IP:                     (v1:status.podIP)
      POD_NAME:                   (v1:metadata.name)
      LIMITS_CPU:                1000 (limits.cpu)
      LIMITS_MEM:                1073741824 (limits.memory)
      PORT_MYSQL:                3306
      PORT_PAXOS:                11306
      PORT_POLARX:               31600
      KB_SERVICE_USER:           polardbx_root
      KB_SERVICE_PASSWORD:       <set to the key 'password' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      RSM_COMPATIBILITY_MODE:    true
    Mounts:
      /data-log/mysql from data-log (rw)
      /data/mysql from data (rw)
      /etc/podinfo from podinfo (rw)
      /scripts/xstore-post-start.sh from scripts (rw,path="xstore-post-start.sh")
      /scripts/xstore-setup.sh from scripts (rw,path="xstore-setup.sh")
      /tools/xstore from xstore-tools (rw)
   exporter:
    Image:      prom/mysqld-exporter:v0.14.0
    Port:       9104/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     0
      memory:  0
    Environment Variables from:
      polardbx-tjxuol-dn-env      ConfigMap  Optional: false
      polardbx-tjxuol-dn-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                (v1:metadata.name)
      KB_POD_UID:                 (v1:metadata.uid)
      KB_NAMESPACE:               (v1:metadata.namespace)
      KB_SA_NAME:                 (v1:spec.serviceAccountName)
      KB_NODENAME:                (v1:spec.nodeName)
      KB_HOST_IP:                 (v1:status.hostIP)
      KB_POD_IP:                  (v1:status.podIP)
      KB_POD_IPS:                 (v1:status.podIPs)
      KB_HOSTIP:                  (v1:status.hostIP)
      KB_PODIP:                   (v1:status.podIP)
      KB_PODIPS:                  (v1:status.podIPs)
      KB_CLUSTER_NAME:           polardbx-tjxuol
      KB_COMP_NAME:              dn
      KB_CLUSTER_COMP_NAME:      polardbx-tjxuol-dn
      KB_CLUSTER_UID_POSTFIX_8:  690c6c10
      KB_POD_FQDN:               $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      MYSQL_MONITOR_USER:        <set to the key 'username' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      MYSQL_MONITOR_PASSWORD:    <set to the key 'password' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      DATA_SOURCE_NAME:          $(MYSQL_MONITOR_USER):$(MYSQL_MONITOR_PASSWORD)@(localhost:3306)/
    Mounts:                      <none>
   kb-role-probe:
    Image:       registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.7.0-beta.18
    Ports:       7373/TCP, 50101/TCP
    Host Ports:  0/TCP, 0/TCP
    Command:
      lorry
      --port
      7373
      --grpcport
      50101
    Readiness:  exec [/bin/grpc_health_probe -addr=:50101] delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:
      KB_RSM_USERNAME:               <set to the key 'username' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      KB_RSM_PASSWORD:               <set to the key 'password' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      KB_RSM_ACTION_SVC_LIST:        [36501]
      KB_SERVICE_USER:               <set to the key 'username' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      KB_SERVICE_PASSWORD:           <set to the key 'password' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      KB_RSM_SERVICE_PORT:           3306
      KB_SERVICE_PORT:               3306
      KB_RSM_ROLE_UPDATE_MECHANISM:  DirectAPIServerEventUpdate
      KB_RSM_ROLE_PROBE_TIMEOUT:     1
      KB_POD_NAME:                    (v1:metadata.name)
      KB_NAMESPACE:                   (v1:metadata.namespace)
      KB_POD_UID:                     (v1:metadata.uid)
      KB_NODENAME:                    (v1:spec.nodeName)
      KB_SERVICE_CHARACTER_TYPE:     custom
    Mounts:                          <none>
   action-0:
    Image:      arey/mysql-client:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /role-probe/agent
      -port
      36501
      -export-all-vars
      -form
      /role
      mysql -h127.0.0.1 -P3306 -uroot -N -B -e "select role from information_schema.alisql_cluster_local" | xargs echo -n
    Environment:
      KB_RSM_USERNAME:  <set to the key 'username' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
      KB_RSM_PASSWORD:  <set to the key 'password' in secret 'polardbx-tjxuol-conn-credential'>  Optional: false
    Mounts:
      /role-probe from role-agent (rw)
  Volumes:
   xstore-tools:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      metadata.annotations['runmode'] -> runmode
      metadata.name -> name
      metadata.namespace -> namespace
   scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      polardbx-tjxuol-dn-polardbx-scripts
    Optional:  false
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   data-log:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   role-agent:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Volume Claims:
  Name:          data
  StorageClass:  kb-default-sc
  Labels:        apps.kubeblocks.io/vct-name=data
  Annotations:   <none>
  Capacity:      20Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type     Reason               Age                From                    Message
  ----     ------               ----               ----                    -------
  Normal   SuccessfulCreate     41m                statefulset-controller  create Claim data-polardbx-tjxuol-dn-0 Pod polardbx-tjxuol-dn-0 in StatefulSet polardbx-tjxuol-dn success
  Normal   SuccessfulCreate     41m                statefulset-controller  create Claim data-polardbx-tjxuol-dn-1 Pod polardbx-tjxuol-dn-1 in StatefulSet polardbx-tjxuol-dn success
  Normal   SuccessfulCreate     41m                statefulset-controller  create Pod polardbx-tjxuol-dn-1 in StatefulSet polardbx-tjxuol-dn successful
  Normal   SuccessfulCreate     41m                statefulset-controller  create Claim data-polardbx-tjxuol-dn-2 Pod polardbx-tjxuol-dn-2 in StatefulSet polardbx-tjxuol-dn success
  Normal   SuccessfulCreate     41m                statefulset-controller  create Pod polardbx-tjxuol-dn-2 in StatefulSet polardbx-tjxuol-dn successful
  Normal   SuccessfulCreate     28m (x2 over 41m)  statefulset-controller  create Pod polardbx-tjxuol-dn-0 in StatefulSet polardbx-tjxuol-dn successful
  Warning  RecreatingFailedPod  28m (x8 over 28m)  statefulset-controller  StatefulSet default/polardbx-tjxuol-dn is recreating failed Pod polardbx-tjxuol-dn-0
  Normal   SuccessfulDelete     28m (x8 over 28m)  statefulset-controller  delete Pod polardbx-tjxuol-dn-0 in StatefulSet polardbx-tjxuol-dn successful
➜  ~
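
One detail in the dump worth noting: the StatefulSet's update strategy is `OnDelete`, so the restart only progresses when each pod is actually deleted. A speculative way to unstick the two `Processing` pods (a sketch, not a confirmed fix) is to delete them manually and let the StatefulSet recreate them with the new restart annotation:

```bash
# OnDelete strategy: pods are recreated only after explicit deletion.
kubectl delete pod polardbx-tjxuol-dn-0 polardbx-tjxuol-gms-1
kubectl get pods -l app.kubernetes.io/instance=polardbx-tjxuol -w
```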

[BUG]yashandb table lost after restart db

install db:

➜  ~ kbcli version
Kubernetes: v1.27.9-gke.1092000
KubeBlocks: 0.8.1
kbcli: 0.8.1

curl -fsSL https://kubeblocks.io/installer/install_cli.sh |bash -s 0.8.1
kbcli kubeblocks install
helm repo add kubeblocks https://jihulab.com/api/v4/projects/150246/packages/helm/stable
helm repo update kubeblocks
helm upgrade -i yashandb kubeblocks/yashandb --version v0.0.1
kbcli cluster create yashantest --cluster-definition=yashandb
kubectl exec -it yashantest-yashandb-compdef-0 sh
  1. Create table
sh-4.4$  /home/yashan/bin/yasboot sql -d sys/[email protected]:1688
YashanDB SQL Personal Edition Release 23.1.1.100 x86_64

Connected to:
YashanDB Server Personal Edition Release 23.1.1.100 x86_64 - X86 64bit Linux

SQL> create table test5(c1 int, c2 int, c3 int);
begin
    for i in 1..10000 loop
        insert into test5 values (i,i,i);
        end loop;
            commit ;
end;
/create table test5(c1 int, c2 int, c3 int);
begin
    for i in 1..10000 loop
        insert into test5 values (i,i,i);
        end loop;
            commit ;
end;

Succeed.

SQL>    2    3    4    5    6    7
/

PL/SQL Succeed.

SQL> select count(*) from test5;
select count(*) from test5;

             COUNT(*)
---------------------
                10000

1 row fetched.

SQL>
  2. Restart the cluster with kbcli
 ➜  ~ kbcli cluster restart yashantest
Please type the name again(separate with white space when more than one): yashantest
OpsRequest yashantest-restart-lbp24 created successfully, you can view the progress:
	kbcli cluster describe-ops yashantest-restart-lbp24 -n default

  3. The table is lost
➜  ~ kubectl exec -it yashantest-yashandb-compdef-0 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.4$ /home/yashan/bin/yasboot sql -d sys/[email protected]:1688
YashanDB SQL Personal Edition Release 23.1.1.100 x86_64

Connected to:
YashanDB Server Personal Edition Release 23.1.1.100 x86_64 - X86 64bit Linux

SQL> select count(*) from test5;
select count(*) from test5;

[1:22]YAS-02012 table or view does not exist

I also tried with shutdown and encountered the following problem:

SQL> create table test5(c1 int, c2 int, c3 int);
begin
    for i in 1..10000 loop
        insert into test5 values (i,i,i);
        end loop;
            commit ;
end;
/create table test5(c1 int, c2 int, c3 int);
begin
    for i in 1..10000 loop
        insert into test5 values (i,i,i);
        end loop;
            commit ;
end;

Succeed.

SQL>    2    3    4    5    6    7
/

PL/SQL Succeed.

SQL> select count(*) from test5;
select count(*) from test5;

             COUNT(*)
---------------------
                10000

1 row fetched.

SQL> shutdown immediate;
shutdown immediate;

Succeed.

SQL> command terminated with exit code 137

➜  ~ kubectl exec -it yashantest-yashandb-compdef-0 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
sh-4.4$ /home/yashan/bin/yasdb  open &
[1] 247
sh-4.4$ /home/yashan/bin/yasdb: error while loading shared libraries: libyas_server.so.0: cannot open shared object file: No such file or directory

[1]+  Done(127)               /home/yashan/bin/yasdb open
sh-4.4$
sh-4.4$  /home/yashan/bin/yasboot sql -d sys/[email protected]:1688
YashanDB SQL Personal Edition Release 23.1.1.100 x86_64

Connected to:
YashanDB Server Personal Edition Release 23.1.1.100 x86_64 - X86 64bit Linux

SQL> select count(*) from test5;
select count(*) from test5;

[1:22]YAS-02012 table or view does not exist

SQL>
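
A table vanishing after a restart usually means the data directory is not backed by a persistent volume, so the pod comes back with a fresh filesystem. A hedged first check (label and pod names taken from the steps above):

```bash
# If no PVC is bound for the yashandb component, data lives in the
# container layer and is lost on every restart.
kubectl get pvc -l app.kubernetes.io/instance=yashantest

# Inspect where the container actually mounts its volumes.
kubectl get pod yashantest-yashandb-compdef-0 \
  -o jsonpath='{.spec.containers[0].volumeMounts}'
```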

[Features] Support Solr

What is the user interaction of your feature
Solr is the popular, blazing-fast, open source enterprise search platform built on Apache Lucene.

[Features] Integration of MySQL Cluster Addons with WeScale Proxy

Background

Currently, out of the three MySQL Cluster Addons available (apecloud-mysql, mysql, oracle-mysql), only apecloud-mysql is integrated with the WeScale proxy.

Proposal

It is worth considering integrating the mysql and oracle-mysql addons with the WeScale proxy, similar to the apecloud-mysql addon. This integration would allow for the dynamic enabling or disabling of the proxy functionality for these addons as well.

To achieve this, we may need to make modifications to resources such as ClusterDefinition and ClusterVersion. By extending the integration to these addons, we can ensure consistent proxy functionality across all MySQL Cluster Addons.
