Management API for Apache Cassandra®

Introduction

Cassandra operations have historically been command line driven. The management of operational tools for Apache Cassandra has been mostly outsourced to the teams who manage their specific environments.

The result is a fragmented and tribal set of best practices, workarounds, and edge cases.

The Management API is a sidecar service layer that attempts to build a well-supported set of operational actions on Cassandra nodes that can be administered centrally. It currently works with official Apache Cassandra 3.11.x, 4.0.x, and 4.1.x (as well as DSE; see the supported image matrix below) via a drop-in Java agent.

  • Lifecycle Management
    • Start Node
    • Stop Node
  • Configuration Management (alpha)
    • Change YAML
    • Change jvm-opts
  • Health Checks
    • Kubernetes liveness/readiness checks
    • Consistency level checks
  • Per node actions
    • All nodetool commands

Design Principles

  • Secure by default
  • Simple to use and extend
  • CQL Only for all C* interactions
    • Operations: Use CALL method for invoking via CQL
    • Observations: Rely on System Views

The Management API has no configuration file; it is configured only through a small set of command line flags. By default, communication is possible only via a unix socket or via an HTTP(S) endpoint with optional TLS client authentication.

In a containerized setting, the Management API runs as PID 1 and is responsible for the lifecycle of Cassandra via the API.

Communication between the Management API and Cassandra is via a local unix socket using CQL as its only protocol. This means that, out of the box, Cassandra can be started securely with no open ports! Using only CQL also means operators can execute operations via CQL directly if they wish.

Each Management API is responsible for the local node only. Coordination across nodes is up to the caller. That being said, complex health checks can be added via CQL.

Supported Image Matrix

The following versions of Cassandra and DSE are published to Docker and supported:

  • Cassandra 3.11.x: 3.11.7, 3.11.8, 3.11.11, 3.11.12, 3.11.13, 3.11.14, 3.11.15, 3.11.16, 3.11.17
  • Cassandra 4.0.x: 4.0.0, 4.0.1, 4.0.3, 4.0.4, 4.0.5, 4.0.6, 4.0.7, 4.0.8, 4.0.9, 4.0.10, 4.0.11, 4.0.12, 4.0.13
  • Cassandra 4.1.x: 4.1.0, 4.1.1, 4.1.2, 4.1.3, 4.1.4, 4.1.5
  • DSE 6.8.x: 6.8.25, 6.8.26, 6.8.28, 6.8.29, 6.8.30, 6.8.31, 6.8.32, 6.8.33, 6.8.34, 6.8.35, 6.8.36, 6.8.37, 6.8.38, 6.8.39, 6.8.40, 6.8.41, 6.8.42, 6.8.43, 6.8.44, 6.8.46, 6.8.47, 6.8.48, 6.8.49, 6.8.50
  • DSE 6.9.x: 6.9.0

  • Apache Cassandra images are available in linux/amd64 or linux/arm64 formats. The DSE images are available only in the linux/amd64 format.
  • All images (with the exception of Cassandra trunk) are available as an Ubuntu based image or a RedHat UBI 8 based image. Cassandra trunk images are only RedHat UBI8 based.
  • All Cassandra 3.11.x images come with JDK 8
  • All Cassandra 4.0.x and 4.1.x images come with JDK 11
  • All DSE 6.8.x Ubuntu based images are available with either JDK 8 or JDK 11 (you have to pick, only one JDK is installed in an image)
  • All DSE 6.8.x RedHat UBI 8 based images come with JDK 8
  • All DSE 6.9.x Ubuntu based images come with only JDK 11
  • All DSE 6.9.x RedHat UBI 8 based images come with only JDK 11

Docker coordinates for Cassandra OSS images

Ubuntu based images (OSS)

For all Ubuntu based OSS Cassandra images, the Docker coordinates are as follows:

  k8ssandra/cass-management-api:<version>

Example for Cassandra 4.0.10

  k8ssandra/cass-management-api:4.0.10

RedHat UBI 8 based images (OSS)

For all RedHat UBI 8 based OSS Cassandra images, the Docker coordinates are as follows:

  k8ssandra/cass-management-api:<version>-ubi8

Example for Cassandra 4.0.10

  k8ssandra/cass-management-api:4.0.10-ubi8

Docker coordinates for DSE 6.8.x images

Ubuntu based images (DSE 6.8)

For all JDK 8 Ubuntu based DSE 6.8.x images, the Docker coordinates are as follows:

  datastax/dse-mgmtapi-6_8:<version>

Example for DSE 6.8.31

  datastax/dse-mgmtapi-6_8:6.8.31

For all JDK 11 Ubuntu based DSE 6.8.x images, the Docker coordinates are as follows:

  datastax/dse-mgmtapi-6_8:<version>-jdk11

Example for DSE 6.8.31

  datastax/dse-mgmtapi-6_8:6.8.31-jdk11

RedHat UBI 8 based images (DSE 6.8)

For all RedHat UBI 8 based DSE 6.8.x images, the Docker coordinates are as follows:

  datastax/dse-mgmtapi-6_8:<version>-ubi8

Example for DSE 6.8.31

  datastax/dse-mgmtapi-6_8:6.8.31-ubi8

Docker coordinates for DSE 6.9.x images

Ubuntu based images (DSE 6.9)

For all JDK 11 Ubuntu based DSE 6.9.x images, the Docker coordinates are as follows:

  datastax/dse-mgmtapi-6_8:<version>-jdk11

Example for DSE 6.9.0

  datastax/dse-mgmtapi-6_8:6.9.0-jdk11

RedHat UBI 8 based images (DSE 6.9)

For all RedHat UBI 8 based DSE 6.9.x images, the Docker coordinates are as follows:

  datastax/dse-mgmtapi-6_8:<version>-ubi8

Example for DSE 6.9.0

  datastax/dse-mgmtapi-6_8:6.9.0-ubi8

NOTE: The Docker repo is not a typo; it really is datastax/dse-mgmtapi-6_8 for 6.9 images.

Docker coordinates for Cassandra Trunk images

We also build and publish Nightly images for Cassandra trunk. These images are only published with a RedHat UBI 8 base platform, with JDK 11. NOTE: These are not production ready images. Use with caution!

The most recent nightly build is available at

  k8ssandra/cass-management-api:5.0-nightly-latest

You can also use an image built on a specific date

  k8ssandra/cass-management-api:5.0-nightly-YYYYMMDD

There is also an image tag for the specific Cassandra commit SHA (not all commits are built)

  k8ssandra/cass-management-api:5.0-nightly-<Short SHA>
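
For example, to pull the most recent nightly build:

  docker pull k8ssandra/cass-management-api:5.0-nightly-latest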

Building

Minimum Java Version

The project now requires JDK 11 or newer to build. The jar artifacts are still compiled for Java 8, as some Cassandra versions still ship with Java 8.
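
A quick way to confirm which JDK Maven will use before building:

  mvn -version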

Containers

First, you will need to have the Docker buildx plugin installed.

To build an image based on the desired Cassandra version see the examples below:

#Create a docker image with management api and C* 3.11 (version 3.11.7 and newer are supported, replace `3.11.16` with the version you want below)
docker buildx build --load --build-arg CASSANDRA_VERSION=3.11.16 --tag mgmtapi-3_11 --file cassandra/Dockerfile-3.11 --target cassandra --platform linux/amd64 .

#Create a docker image with management api and C* 4.0 (version 4.0.0 and newer are supported)
docker buildx build --load --build-arg CASSANDRA_VERSION=4.0.6 --tag mgmtapi-4_0 --file cassandra/Dockerfile-4.0 --target cassandra --platform linux/amd64 .

#Create a docker image with management api and C* 4.1 (version 4.1.0 and newer are supported)
docker buildx build --load --build-arg CASSANDRA_VERSION=4.1.4 --tag mgmtapi-4_1 --file cassandra/Dockerfile-4.1 --target cassandra --platform linux/amd64 .

To build a RedHat Universal Base Image (UBI) based Cassandra image, use the ubi8 Dockerfile. Examples:

#Create a UBI8 based image with management api and C* 3.11 (version 3.11.7 and newer are supported, replace `3.11.16` with the version you want below)
docker buildx build --load --build-arg CASSANDRA_VERSION=3.11.16 --tag mgmtapi-3_11_ubi8 --file cassandra/Dockerfile-3.11.ubi8 --target cassandra --platform linux/amd64 .

#Create a UBI8 based image with management api and C* 4.0 (version 4.0.0 and newer are supported)
docker buildx build --load --build-arg CASSANDRA_VERSION=4.0.6 --tag mgmtapi-4_0_ubi8 --file cassandra/Dockerfile-4.0.ubi8 --target cassandra --platform linux/amd64 .

#Create a UBI8 based image with management api and C* 4.1 (version 4.1.0 and newer are supported)
docker buildx build --load --build-arg CASSANDRA_VERSION=4.1.4 --tag mgmtapi-4_1_ubi8 --file cassandra/Dockerfile-4.1.ubi8 --target cassandra --platform linux/amd64 .

You can also build OSS Cassandra images for linux/arm64 based platforms. Both Ubuntu and UBI8 based images support this. Simply change the --platform argument above to --platform linux/arm64. Examples:

#Create an ARM64 docker image with management api and C* 3.11 (version 3.11.7 and newer are supported, replace `3.11.16` with the version you want below)
docker buildx build --load --build-arg CASSANDRA_VERSION=3.11.16 --tag mgmtapi-3_11 --file cassandra/Dockerfile-3.11 --target cassandra --platform linux/arm64 .

#Create an ARM64 UBI8 based image with management api and C* 4.0 (version 4.0.0 and newer are supported)
docker buildx build --load --build-arg CASSANDRA_VERSION=4.0.6 --tag mgmtapi-4_0_ubi8 --file cassandra/Dockerfile-4.0.ubi8 --target cassandra --platform linux/arm64 .

To build an image based on DSE, see the DSE README.

Standalone

mvn -DskipTests package
mvn test
mvn integration-test -Drun3.11tests=true -Drun4.0tests=true

NOTE 1: Running integration-tests will also run unit tests.

NOTE 2: Running integration-tests requires at least one of -Drun3.11tests, -Drun3.11testsUBI, -Drun4.0tests, -Drun4.0testsUBI, -Drun4.1tests, -Drun4.1testsUBI, -Drun5.0testsUBI, -DrunDSE6.8tests, -DrunDSE6.8testsUBI, -DrunDSE6.9tests, or -DrunDSE6.9testsUBI to be set to true (you can set any combination of them to true).

NOTE 3: In order to run DSE integration tests, you must also enable the dse profile:

mvn integration-test -P dse -DrunDSE6.8tests=true

Cassandra trunk

For building an image based on the latest from Cassandra trunk, see this README.

DSE 6.8.x/6.9.x

For building an image based on DSE 6.8, see the DSE 6.8 README.

For building an image based on DSE 6.9, see the DSE 6.9 README.

REST API

The current Swagger/OpenAPI documentation

It is also readable from the URL root: /openapi.json
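
For example, with the containerized service from the Usage section below listening on its default port, the spec can be fetched with:

  curl http://localhost:8080/openapi.json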

Usage

As of v0.1.24, Management API Docker images for Apache Cassandra are consolidated into a single image repository here:

For different Cassandra versions, you will need to specify the Cassandra version as an image tag. See the supported image matrix above.

Each of the above examples will always point to the latest Management API version for the associated Cassandra version. If you want a specific Management API version, you can append the desired version to the Cassandra version tag. For example, if you want v0.1.24 of Management API for Cassandra version 3.11.9:

 docker pull k8ssandra/cass-management-api:3.11.9-v0.1.24

For Management API versions v0.1.23 and lower, you will need to use the old Docker repositories, which are Cassandra version specific:

For DSE Docker images, see the DSE 6.8 README or the DSE 6.9 README.

For running standalone, the jars can be downloaded from the GitHub release: Management API Releases Zip

The Management API can be run as a standalone service or along with the Kubernetes cass-operator.

The Management API is configured from the CLI. To start the service with a C* version built above, run:

 > docker run -e USE_MGMT_API=true -p 8080:8080 -it --rm mgmtapi-4_0

 > curl http://localhost:8080/api/v0/probes/liveness
 OK

 # Check service and C* are running
 > curl http://localhost:8080/api/v0/probes/readiness
 OK

Specifying an alternate listen port

By default, all images will listen on port 8080 for Management API connections. This can be overridden by specifying the environment variable MGMT_API_LISTEN_TCP_PORT and setting it to your desired port. For example:

> docker run -e USE_MGMT_API=true -e MGMT_API_LISTEN_TCP_PORT=9090 -p 9090:9090 k8ssandra/cass-management-api:3.11.15

The above would run a Cassandra 3.11.15 image with Management API listening on port 9090 (instead of 8080).
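
You can then verify the service on the overridden port; it should return OK, as in the earlier liveness example:

 > curl http://localhost:9090/api/v0/probes/liveness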

Usage with DSE

Please see the DSE 6.8 README or the DSE 6.9 README for details.

Using the Service with a locally installed C* or DSE instance

To start the service with a locally installed C* or DSE instance, run the commands below. The Management API will figure out from --db-home whether it points to a C* or DSE folder.

# REQUIRED: Add management api agent to C*/DSE startup
> export JVM_EXTRA_OPTS="-javaagent:$PWD/management-api-agent/target/datastax-mgmtapi-agent-0.1.0-SNAPSHOT.jar"

> alias mgmtapi="java -jar management-api-server/target/datastax-mgmtapi-server-0.1.0-SNAPSHOT.jar"

# Start the service with a local unix socket only, you could also pass -H http://localhost:8080 to expose a port
> mgmtapi --db-socket=/tmp/db.sock --host=unix:///tmp/mgmtapi.sock --db-home=<pathToCassandraOrDseHome>
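
# The -H/--host flag can be repeated, so you can also expose an HTTP port in addition to the unix socket (illustrative)
> mgmtapi --db-socket=/tmp/db.sock --host=unix:///tmp/mgmtapi.sock -H http://localhost:8080 --db-home=<pathToCassandraOrDseHome>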

# Cassandra/DSE will be started by the service by default unless you pass --explicit-start flag

# Check the service is up
> curl --unix-socket /tmp/mgmtapi.sock http://localhost/api/v0/probes/liveness
OK

# Check C*/DSE is up
> curl --unix-socket /tmp/mgmtapi.sock http://localhost/api/v0/probes/readiness
OK

# Stop C*/DSE
curl -XPOST --unix-socket /tmp/mgmtapi.sock http://localhost/api/v0/lifecycle/stop
OK
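
To start C*/DSE again (for example after a stop, or when running with --explicit-start), there is a corresponding lifecycle endpoint; the example below assumes its path mirrors the stop path above, so check /openapi.json for the exact route:

# Start C*/DSE
curl -XPOST --unix-socket /tmp/mgmtapi.sock http://localhost/api/v0/lifecycle/start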

Making changes

Code Formatting

Google Java Style

The project uses google-java-format and enforces the Google Java Style for all Java source files. The Maven plugin is configured to check the style during the compile phase and will fail the build if it finds a file that does not adhere to the coding standard.

Checking the format

If you want to check the formatting from the command line after making changes, you can simply run:

mvn fmt:check

NOTE: If you are making changes in the DSE agent, you need to enable the dse profile:

mvn -Pdse fmt:check

Formatting the code

If you want to have the plugin format the code for you, you can simply run:

mvn fmt:format

NOTE: If you are making changes in the DSE agent, you need to enable the dse profile:

mvn -Pdse fmt:format

Using Checkstyle in an IDE

You can also install a checkstyle file in some popular IDEs to automatically format your code. The Google checkstyle file can be found here: google_checks.xml

Refer to your IDE's documentation for installing and setting up checkstyle.

Source code headers

In addition to Java style formatting, the project also enforces that source files have the correct header. Source files include .java, .xml, and .properties files. The header should be:

/*
 * Copyright DataStax, Inc.
 *
 * Please see the included license file for details.
 */

for Java files. For XML and Properties files, the same header should exist, with the appropriate comment characters replacing the Java comment characters above.
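
For example, in an XML file the header would look something like this (illustrative; exact whitespace may vary):

<!--
 Copyright DataStax, Inc.

 Please see the included license file for details.
-->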

Just like the coding style, the headers are checked at compile time and the build will fail if they aren't correct.

Checking the headers

If you want to check the headers from the command line after making changes, you can simply run:

mvn license:check

NOTE: If you are making changes in the DSE agent, you need to enable the dse profile:

mvn -Pdse license:check

Formatting the headers

If you want to have the plugin format the headers for you, you can simply run:

mvn license:format

NOTE: If you are making changes in the DSE agent, you need to enable the dse profile:

mvn -Pdse license:format

XML formatting

The project also enforces a standard XML format. Again, it is checked at compile time and the build will fail if XML files are not formatted correctly. See the plugin documentation for formatting details here: https://acegi.github.io/xml-format-maven-plugin/

Checking XML file formatting

If you want to check XML files from the command line after making changes, you can simply run:

mvn xml-format:xml-check

NOTE: If you are making changes in the DSE agent, you need to enable the dse profile:

mvn -Pdse xml-format:xml-check

Formatting XML files

If you want to have the plugin format XML files for you, you can simply run:

mvn xml-format:xml-format

NOTE: If you are making changes in the DSE agent, you need to enable the dse profile:

mvn -Pdse xml-format:xml-format

Design Summary

The architecture of this repository is laid out as follows, front to back:

  1. The management-api-server/doc/openapi.json documents the API.
  2. The server implements the HTTP verbs/endpoints under the management-api-server/src/main/java/com/datastax/mgmtapi/resources folder (e.g. NodeOpsResources.java).
  3. The server methods communicate back to the agents using cqlService.executePreparedStatement() calls, which are routed as plaintext through a local socket. These calls return ResultSet objects; to access scalar values, call .one() first, check for null, and then call .getObject(0). The resulting Java object can then be serialized into JSON for return to the client.
  4. The server communicates only with the management-api-agent-common sub-project, which holds the un-versioned CassandraAPI interface.
  5. The management-api-agent-common/src/main/java/com/datastax/mgmtapi/NodeOpsProvider.java routes commands through to specific versioned instances of CassandraAPI, which are implemented in the 3.x/4.x subprojects as CassandraAPI3x/CassandraAPI4x.

Any change to add endpoints or features will need to make modifications in each of the above components to ensure that they propagate through.

Changes to API endpoints

If you are adding a new endpoint, removing an endpoint, or otherwise changing the public API of an endpoint, you will need to re-generate the OpenAPI/Swagger document. The document lives at management-api-server/doc/openapi.json and is regenerated during the build's compile phase. If your changes to code cause the API to change, you will need to perform a local mvn compile to regenerate the document and then add the change to your git commit.

mvn clean compile
git add management-api-server/doc/openapi.json
git commit

API Client Generation

In addition to automatic OpenAPI document generation, a Golang client or a Java client can be generated during the build (unfortunately, only one of them can be generated at a time, but you can run the process-classes goal back-to-back to generate them both). The Java client generation is enabled by default (or can be explicitly enabled with the java-clientgen Maven profile). The Go client generation is disabled by default and can be enabled with the go-clientgen Maven profile. The clients are built using the OpenAPI Tools generator Maven plugin and can be used by projects to interact with the Management API. The client generation happens during the process-classes phase of the Maven build so that changes to the API implementation can be compiled into an OpenAPI document spec file during the compile phase of the build. The client code is generated in the target directory under the management-api-server sub-module and should be located at

management-api-server/target/generated-sources/openapi

To generate the Go client, run the following from the root of the project:

mvn process-classes -P go-clientgen

The Go client code will be generated in management-api-server/target/generated-sources/openapi/go-client

To generate the Java client, run the following from the root of the project:

mvn process-classes -P java-clientgen

or simply:

mvn process-classes

The Java client code will be generated in management-api-server/target/generated-sources/openapi/java-client

Maven coordinates for the Java generated client

This project also has a workflow_dispatch job that will publish the current master branch version of the Java generated client to the DataStax public Maven repository. To pull in this artifact in a Maven project, you will need to add the DataStax Artifactory repository to your Maven settings:

  <profiles>
    <profile>
      <id>datastax</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>datastax-artifactory</id>
          <name>DataStax Artifactory</name>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
            <checksumPolicy>warn</checksumPolicy>
          </releases>
          <url>https://repo.datastax.com</url>
          <layout>default</layout>
        </repository>
      </repositories>
    </profile>
  </profiles>

At the current time, the artifact for the Java client will have a version that contains the Git Hash of the commit it was built from. To add the artifact to your Maven project as a dependency, you will need something like this in your pom.xml:

<project>
  <dependencies>
    <dependency>
      <groupId>io.k8ssandra</groupId>
      <artifactId>datastax-mgmtapi-client-openapi</artifactId>
      <version>0.1.0-9d71b60</version>
    </dependency>
  </dependencies>
</project>

where 9d71b60 is the hash of the release you want.

Eventually, this artifact will be published into Maven Central and have a regular release version (i.e. 0.1.0).

Published Docker images

When PRs are merged into the master branch, if all of the integration tests pass, the CI process will build and publish all supported Docker images with GitHub commit SHA tags. These images are not intended to be used in production. They are meant for facilitating testing with dependent projects.

The format of the Docker image tag for OSS Cassandra based images will be <Cassandra version>-<git commit sha>. For example, if the SHA for the commit to master is 3e99e87, then the Cassandra 3.11.11 image tag would be 3.11.11-3e99e87. The full docker coordinates would be k8ssandra/cass-management-api:3.11.11-3e99e87. Once published, these images can be used for testing in dependent projects (such as cass-operator). Testing in dependent projects is a manual process at this time and is not automated.
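
For example, to pull that image for local testing:

  docker pull k8ssandra/cass-management-api:3.11.11-3e99e87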

Official Release process

When the master branch is ready for release, all that needs to be done is to create a git tag and push it. When a git tag is pushed, a GitHub Action kicks off that builds the release versions of the Docker images and publishes them to DockerHub. The release tag should be formatted as:

v0.1.X

where X is incremental for each release. If the most recent release version is v0.1.32, then to cut the next (v0.1.33) release, do the following:

git checkout master
git pull
git tag v0.1.33
git push origin refs/tags/v0.1.33

Once the tag is pushed, the release process will start and build the Docker images as well as the Maven artifacts. The images are automatically pushed to DockerHub and the Maven artifacts are published and attached to the GitHub release.

CLI Help

The CLI help covers the different options:

mgmtapi --help

NAME
        cassandra-management-api - REST service for managing an Apache
        Cassandra or DSE node

SYNOPSIS
        cassandra-management-api
                [ {-C | --cassandra-home | --db-home} <db_home> ]
                [ --explicit-start <explicit_start> ] [ {-h | --help} ]
                {-H | --host} <listen_address>...
                [ {-K | --no-keep-alive} <no_keep_alive> ]
                [ {-p | --pidfile} <pidfile> ]
                {-S | --cassandra-socket | --db-socket} <db_unix_socket_file>
                [ --tlscacert <tls_ca_cert_file> ]
                [ --tlscert <tls_cert_file> ] [ --tlskey <tls_key_file> ]

OPTIONS
        -C <db_home>, --cassandra-home <db_home>, --db-home <db_home>
            Path to the Cassandra or DSE root directory, if missing will use
            $CASSANDRA_HOME/$DSE_HOME respectively

            This options value must be a path on the file system that must be
            readable, writable and executable.


        --explicit-start <explicit_start>
            When using keep-alive, setting this flag will make the management
            api wait to start Cassandra/DSE until /start is called via REST

        -h, --help
            Display help information

        -H <listen_address>, --host <listen_address>
            Daemon socket(s) to listen on. (required)

        -K <no_keep_alive>, --no-keep-alive <no_keep_alive>
            Setting this flag will stop the management api from starting or
            keeping Cassandra/DSE up automatically

        -p <pidfile>, --pidfile <pidfile>
            Create a PID file at this file path.

            This options value must be a path on the file system that must be
            readable and writable.


        -S <db_unix_socket_file>, --cassandra-socket <db_unix_socket_file>,
        --db-socket <db_unix_socket_file>
            Path to Cassandra/DSE unix socket file (required)

            This options value must be a path on the file system that must be
            readable and writable.


        --tlscacert <tls_ca_cert_file>
            Path to trust certs signed only by this CA

            This options value must be a path on the file system that must be
            readable.


        --tlscert <tls_cert_file>
            Path to TLS certificate file

            This options value must be a path on the file system that must be
            readable.


        --tlskey <tls_key_file>
            Path to TLS key file

            This options value must be a path on the file system that must be
            readable.


COPYRIGHT
        Copyright (c) DataStax 2020

LICENSE
        Please see https://www.apache.org/licenses/LICENSE-2.0 for more
        information

Roadmap

  • CQL based configuration changes
  • Configuration as system table

License

Copyright DataStax, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Dependencies

For information on the packaged dependencies of the Management API for Apache Cassandra® and their licenses, check out our open source report.

management-api-for-apache-cassandra's Issues

Migrate source repository to k8ssandra GH org

As k8ssandra evolves a large swath of the growth in management-api will be driven via k8ssandra, so we'll be moving the management-api-for-apache-cassandra repo from datastax -> k8ssandra.

Add support for creating keyspaces

It would certainly be good in general to have support for creating keyspaces. In particular having it in the mgmt api will help simplify some things in Cass Operator.

When I added Reaper integration, one of the things that has to happen is to create the Reaper keyspace. This is currently accomplished in Cass Operator by running a Python script in a k8s job. It would be simpler and more efficient to be able to just make a call to the mgmt api.

K8SSAND-151 ⁃ Fix pom versioning to match release versioning

Right now, the POM version is still sitting at 0.1.0-SNAPSHOT, though the release tagging is now at v0.1.23. The POM should be updated to reflect the next release tag.

BONUS: set up a process to change the POM version from a SNAPSHOT version into a proper release version when tagging (this is likely a bit more involved).

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: K8SSAND-151
┆Priority: Medium

K8SSAND-155 ⁃ Add a PID check for "start" operation

Currently, there is no check that the Cassandra process starts and actually stays running when a POST to the "start" endpoint is made (see here). It would be better to check that the PID still exists/is running if possible.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: K8SSAND-155
┆Priority: Medium

K8SSAND-153 ⁃ Improve logging around driver connection attempts

When a Management API server instance starts up and starts Cassandra, it tries to establish a Driver CqlSession over the Unix socket that the Management API agent creates on Cassandra startup. Until Cassandra creates this socket however, the server will fail to establish a connection and thus a CqlSession. You will see logs similar to:

WARN  [epollEventLoopGroup-81-2] 2021-02-24 13:33:29,403 Loggers.java:39 - [s76] Error connecting to Node(endPoint=/tmp/cassandra.sock, hostId=null, hashCode=5f84d3e5), trying next node (FileNotFoundException: null)

This NullPointerException is misleading and can make you think that there is a problem. It is normal for this to occur while Cassandra is starting, though if it persists for a while, there may indeed be an issue preventing Cassandra from starting (i.e. not enough available memory for the container to start).

It would be better if the logs in this case indicated what is happening (and what might be a problem).

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: K8SSAND-153
┆Priority: Medium

Fix DockerHub image push process

Currently, the Docker image publishing process logs into DockerHub to push images with GitHub secrets stored in GitHub (see here).

      - name: Login to Docker Hub
        run: echo "${{ secrets.DOCKER_HUB_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_HUB_USERNAME }}" --password-stdin

The problem is that DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD are set to someone's personal credentials. These need to be changed to bot credentials (preferably datastaxdocker).

What needs to happen to resolve this ticket:

  1. Replace DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD secrets with the datastaxdocker bot credentials
  2. Ensure datastaxdocker can push to DockerHub k8ssandra org (https://hub.docker.com/orgs/k8ssandra)
  3. Update the docker-release.yaml script to use DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD for DSE pushes to the datastax DockerHub org.

K8SSAND-159 ⁃ Timeout on decommissioning node

Currently there is a 30-second timeout in the mgmt api for the ops/node/decommission endpoint. The call will time out because the server only waits 30 seconds, while decommissioning can take a while to finish depending on how much data is on the node.

Ideally we could change the decomm endpoint to a fire & forget mode instead.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: K8SSAND-159
┆Priority: Medium

K8SSAND-840 ⁃ Replication of system keyspaces should be configurable per datacenter

The following system properties can be set to configure replication of system keyspaces:

  • cassandra.system_distributed_replication_dc_names
  • cassandra.system_distributed_replication_per_dc

When these properties are set the SystemDistributedReplicationInterceptor class configures replication for the following keyspaces:

  • system_auth
  • system_traces
  • system_distributed

The cassandra.system_distributed_replication_per_dc property configures the replication factor. The problem is that this value is used for all DCs. If I have DCs with 1, 3, and 6 nodes, for example, the replication factor will have to be set to 1.
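
For reference, these are ordinary JVM system properties; a hedged example of how they might be passed at startup (values are illustrative, and JVM_EXTRA_OPTS is the same mechanism used for the agent earlier in this README):

export JVM_EXTRA_OPTS="$JVM_EXTRA_OPTS -Dcassandra.system_distributed_replication_dc_names=dc1,dc2 -Dcassandra.system_distributed_replication_per_dc=3"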

Maybe we could introduce a new system property that accepts a comma-delimited list of key-value pairs which are separated by colons. It might look something like this:

cassandra.system_distributed_replication=dc1:1,dc2:3,dc3:3

This would be easy to parse and provide the flexibility to configure the replication factor per DC.

┆Issue is synchronized with this Jira Task by Unito
┆Fix Versions: management-api-for-apache-cassandra-0.1.29
┆Issue Number: K8SSAND-840
┆Priority: Medium

Fix cassandra user's HOME directory

See this issue

Essentially, now that Management API runs as the cassandra user, that user's effective HOME directory should exist and be writable by that user. We can either:

  1. Create the default HOME directory of /home/cassandra and set the correct permissions, or
  2. Run a usermod command to alter the home directory to /opt/cassandra, which is already set up with the correct permissions.
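
A minimal sketch of option 2, assuming it is run as root during the image build (the actual Dockerfile change may differ):

usermod -d /opt/cassandra cassandra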

K8SSAND-160 ⁃ REST interface to replace nodetool rebuild

It would be useful to be able to talk to the Management API instead of using kubectl exec -- nodetool rebuild.

┆Issue is synchronized with this Jira Task by Unito
┆fixVersions: management-api-for-apache-cassandra-0.1.27,k8ssandra-operator-v1.0.0-alpha.2
┆friendlyId: K8SSAND-160
┆priority: Medium

K8SSAND-148 ⁃ Fix `--cassandra-home` in entrypoint.sh

The entrypoint script specifies /var/lib/cassandra as the --cassandra-home value here, but it should be $CASSANDRA_HOME, which is specified in the 4.0 Dockerfile here and in the 3.11 Dockerfile here. This hasn't caused a problem yet because the base Cassandra image used for Management API sets up a symlink so that /opt/cassandra is essentially the same as /var/lib/cassandra. However, it would be best to have the entrypoint script use the location set up in the Dockerfile.

┆Issue is synchronized with this Jira Bug by Unito
┆Issue Number: K8SSAND-148
┆Priority: Medium

K8SSAND-150 ⁃ Indicate startup failures in logs

In certain environments (Kubernetes comes to mind), Management API attempts to start Cassandra and, for one reason or another, Cassandra does not start and there is no clear indication as to why. Sometimes this is due to limited resources: there aren't enough for the process to start and it simply dies almost immediately. It would be much easier to identify these issues if some logs were generated where possible.

NOTE: This may be related to issue #76 in some cases.

┆Issue is synchronized with this Jira Task by Unito
┆Issue Number: K8SSAND-150
┆Priority: Medium

CASSANDRA-15299 breaks UnixSocketServer4x

CASSANDRA-15299 has been merged into trunk and will be in the next 4.0-beta4 release. The following classes were removed in that ticket (see this commit):

  • org.apache.cassandra.transport.Message.ProtocolEncoder
  • org.apache.cassandra.transport.Message.ProtocolDecoder

These classes are used directly in the org.apache.cassandra.transport.UnixSocketServer4x class.

This currently is not a problem for the Management API since it is built against 4.0-beta1, but it will break once we try to upgrade the Cassandra 4 dependency to 4.0-beta4, due to the changes in CASSANDRA-15299.

Publish ARM64 image for Cassandra 3.11.10

Currently, the official DockerHub Cassandra 3.11.10 image is only published for linux/amd64 and PPC. As soon as the base image is ready for ARM64, we need to re-enable ARM64 deployments. See this and this for more details.

high and critical CVE with resteasy libraries

Solve failing IntegrationTest for Cassandra 4.0-beta4

The testSuperuserWasNotSet test in IntegrationTest fails on the assertTrue(ready) assert, meaning all of the calls to the readiness probe are failing (each call is returning 500 Internal Server Error). Only happening for C* 4.0 beta 4.

Looking at logging, seems the mgmt api server (running inside the container) is logging

Internal connection to Cassandra closed

which indicates the server can't communicate over the unix socket for some reason TBD.

Add build for Cassandra 3.11.7 and fix Cassandra 3.11.6 build

Currently, Cassandra 3.11.6 is built and pushed to its own repo in dockerhub:

      - name: Publish 3.11 to Registry
        uses: elgohr/Publish-Docker-Github-Action@master
        with:
          name: datastax/cassandra-mgmtapi-3_11_6
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          tag_names: true
          dockerfile: Dockerfile-3_11

However, the Dockerfile-3_11 file uses the broader cassandra:3.11 base image:

FROM management-api-for-apache-cassandra-builder as builder

FROM cassandra:3.11

This means that if we were to create a new release today, we'd end up with a Cassandra 3.11.7 image in the datastax/cassandra-mgmtapi-3_11_6 repo, which is probably not what we want, though it is a nice way to sneak in an upgrade.

We should, at a minimum, not push a 3.11.7 Cassandra image masquerading as 3.11.6. Talking to @jimdickinson, I think there is room to debate whether or not we should support both 3.11.6 and 3.11.7 or just the latest 3.11.x for future versions of the management API. If it's easy, I'd personally just support both for now to make navigating the change control processes at enterprises easier.

Expose storage space used by nodes in endpoint metadata

For the k8s cass-operator, we need to be able to find out how much space each node is currently using. This will allow us to determine if it's safe to decommission a node and have the remaining nodes absorb its data.

I believe that this information would fit nicely in the existing endpoint metadata structure, but if it needs to be a new endpoint then that is fine too.

Make the location where the Docker image copies JARs configurable

In Dockerfile-3_11 and in Dockerfile-4_0 jars are copied as follows:

COPY --from=builder /build/management-api-common/target/datastax-mgmtapi-common-0.1.0-SNAPSHOT.jar /etc/cassandra/
COPY --from=builder /build/management-api-agent/target/datastax-mgmtapi-agent-0.1.0-SNAPSHOT.jar /etc/cassandra/
COPY --from=builder /build/management-api-server/target/datastax-mgmtapi-server-0.1.0-SNAPSHOT.jar /opt/mgmtapi/
COPY --from=builder /build/management-api-shim-3.x/target/datastax-mgmtapi-shim-3.x-0.1.0-SNAPSHOT.jar /opt/mgmtapi/
COPY --from=builder /build/management-api-shim-4.x/target/datastax-mgmtapi-shim-4.x-0.1.0-SNAPSHOT.jar /opt/mgmtapi/

I am working on Medusa integration with cass-operator for backup/restore support. Medusa needs access to /etc/cassandra/cassandra.yaml. I did some testing with creating a mount point at /etc/cassandra. The image copies the JARs into /etc/cassandra. The directory is then mounted over and the files are not accessible.

Can we change the location to /tmp and have docker-entrypoint.sh copy them over to /etc/cassandra?
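
A rough sketch of that proposal as an entrypoint step (jar names taken from the COPY lines above; illustrative only):

cp /tmp/datastax-mgmtapi-common-0.1.0-SNAPSHOT.jar /tmp/datastax-mgmtapi-agent-0.1.0-SNAPSHOT.jar /etc/cassandra/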

high and critical CVEs io.netty package

Repository Tag CVE ID Severity Packages Source Package Package Version Fix Status
datastax/cassandra-mgmtapi-3_11_7 v0.1.22 CVE-2019-20445 critical io.netty_netty-all   4.0.44.Final fixed in 4.1.44
datastax/cassandra-mgmtapi-3_11_7 v0.1.22 CVE-2019-20444 critical io.netty_netty-all   4.0.44.Final fixed in 4.1.44
datastax/cassandra-mgmtapi-3_11_7 v0.1.22 CVE-2019-16869 high io.netty_netty-all   4.0.44.Final fixed in 4.1.42.Final

K8SSAND-154 ⁃ Add support for running repairs

It would be useful to be able to run nodetool repair through the API.

There are a bunch of options that are supported by the CLI command; I don't know if all of them should be exposed or not.

┆Issue is synchronized with this Jira Task by Unito
┆Reviewer: Alexander Dejanovski
┆fixVersions: management-api-for-apache-cassandra-0.1.29
┆friendlyId: K8SSAND-154
┆priority: Medium

K8SSAND-158 ⁃ Some useful log messages are missing when MTLS is enabled

I have noticed in my testing that when MTLS certificates are enabled, there is less logging by the mgmt api. Specifically the "address= url= status=" messages are not logged when MTLS is enabled, and this has made automated test validations more difficult to perform.

The following logs were taken from a pod that was bootstrapping. Notice the missing readiness and liveness probe calls. This is a small example, but there are no "address=foo" messages in log for the entire session for the MTLS example.

Example when MTLS is enabled:

cassandra INFO  [epollEventLoopGroup-36-2] 2020-10-02 15:47:52,834 Uuids.java:194 - PID obtained through native call to getpid(): 19
cassandra WARN  [epollEventLoopGroup-36-2] 2020-10-02 15:47:53,400 AbstractBootstrap.java:452 - Unknown channel option 'TCP_NODELAY' for channel '[id: 0x72990ea9]'
server-system-logger tail: can't open '/var/log/cassandra/system.log': No such file or directory
server-system-logger tail: /var/log/cassandra/system.log has appeared; following end of new file

Same setup but with MTLS disabled:

cassandra INFO  [nioEventLoopGroup-2-1] 2020-10-02 15:58:04,580 Cli.java:617 - address=/10.244.4.5:55580 url=/api/v0/probes/cluster status=200 OK
cassandra INFO  [nioEventLoopGroup-2-2] 2020-10-02 15:58:06,574 Cli.java:617 - address=/10.244.5.1:48544 url=/api/v0/probes/readiness status=200 OK
cassandra INFO  [nioEventLoopGroup-2-1] 2020-10-02 15:58:06,596 Cli.java:617 - address=/10.244.4.5:55606 url=/api/v0/metadata/endpoints status=200 OK
cassandra INFO  [nioEventLoopGroup-2-2] 2020-10-02 15:58:06,622 Cli.java:617 - address=/10.244.4.5:55608 url=/api/v0/ops/seeds/reload status=200 OK
cassandra INFO  [nioEventLoopGroup-2-1] 2020-10-02 15:58:06,649 Cli.java:617 - address=/10.244.4.5:55612 url=/api/v0/probes/cluster status=200 OK
server-system-logger tail: can't open '/var/log/cassandra/system.log': No such file or directory
server-system-logger tail: /var/log/cassandra/system.log has appeared; following end of new file

Please enable the "address=foo" log messages, and any additional useful messages that are being omitted, when MTLS is enabled.

┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-158
┆priority: Medium

Create image for Cassandra 4.0-beta4

Cassandra 4.0-beta4 was released on 12/30/2020. It does not look like there are management-api images for any of the recent 4.0 betas. I would like to have a management-api image built for 4.0-beta4 for k8ssandra in preparation for the 4.0 GA release.

@emerkle826 can you take this?

Update MCAC for Cassandra 4.0 based images

The latest C* 4.0 image is based on Cassandra 4.0-beta4. However, the Metrics Collector (MCAC) jar that is bundled is not compatible with C* 4.0. There is a PR for updating MCAC here. Once that is merged and released, we should update the images for Management API include it.

Migrate docker image repository to k8ssandra docker org

As a companion to #95 we also want to consolidate the docker images generated for the management-api under the k8ssandra docker org.

Also included in this work should be a refactoring of the way the repositories are structured so that all management-api images are consolidated under a single repository with relevant tags applied denoting the various version options available.

We also should generate new naming conventions that avoid the Apache Cassandra trademarks.

This would consolidate all of the following repositories:

https://hub.docker.com/r/datastax/cassandra-mgmtapi-4_0_0
https://hub.docker.com/r/datastax/cassandra-mgmtapi-3_11_10
https://hub.docker.com/r/datastax/cassandra-mgmtapi-3_11_9
https://hub.docker.com/r/datastax/cassandra-mgmtapi-3_11_8
https://hub.docker.com/r/datastax/cassandra-mgmtapi-3_11_7

Note that we do not intend to move the DSE related management-api repository if possible:

https://hub.docker.com/r/datastax/dse-mgmtapi-6_8

Unix Socket and TCP socket driver conflict

ERROR [nioEventLoopGroup-3-4] 2020-05-29 09:56:16,269 NodeOpsResources.java:240 - Error when executing request
java.lang.IllegalArgumentException: Multiple entries with same key: 1c59a723-4cc4-4f16-bc07-d9b29d3eeedc=Node(endPoint=/10.244.217.176:0, hostId=1c59a723-4cc4-4f16-bc07-d9b29d3eeedc, hashCode=172d302f) and 1c59a723-4cc4-4f16-bc07-d9b29d3eeedc=Node(endPoint=/tmp/cassandra.sock, hostId=1c59a723-4cc4-4f16-bc07-d9b29d3eeedc, hashCode=44592dc4)
	at com.datastax.oss.driver.shaded.guava.common.collect.ImmutableMap.conflictException(ImmutableMap.java:215)
	at com.datastax.oss.driver.shaded.guava.common.collect.ImmutableMap.checkNoConflict(ImmutableMap.java:209)
	at com.datastax.oss.driver.shaded.guava.common.collect.RegularImmutableMap.checkNoConflictInKeyBucket(RegularImmutableMap.java:147)
	at com.datastax.oss.driver.shaded.guava.common.collect.RegularImmutableMap.fromEntryArray(RegularImmutableMap.java:110)
	at com.datastax.oss.driver.shaded.guava.common.collect.ImmutableMap$Builder.build(ImmutableMap.java:393)
	at com.datastax.oss.driver.internal.core.metadata.InitialNodeListRefresh.compute(InitialNodeListRefresh.java:81)
	at com.datastax.oss.driver.internal.core.metadata.MetadataManager.apply(MetadataManager.java:508)
	at com.datastax.oss.driver.internal.core.metadata.MetadataManager$SingleThreaded.refreshNodes(MetadataManager.java:328)
	at com.datastax.oss.driver.internal.core.metadata.MetadataManager$SingleThreaded.access$1700(MetadataManager.java:293)
	at com.datastax.oss.driver.internal.core.metadata.MetadataManager.lambda$refreshNodes$1(MetadataManager.java:166)
	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:387)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
INFO [nioEventLoopGroup-2-1] 2020-05-29 09:56:16,269 Cli.java:573 - address=/10.50.50.82:60720 url=/api/v0/probes/readiness status=500 Internal Server Error

K8SSAND-834 ⁃ Cassandra 4.1-SNAPSHOT fails to start

A recent PR for Medusa uncovered a non-backward compatible change in some internal Cassandra APIs that Management API uses.

Specifically, this change in the Dispatcher to the processRequest method breaks the call to that method in Management API here.

Either the 4.0 agent will have to handle this (maybe with some reflection tricks), or we may have to split out the 4.0 GA agent from the 4.1-SNAPSHOT agent. It will also be interesting to see if the Cassandra change makes it into a 4.0 patch release (and not just in 4.1), as that would mean a different agent for 4.0.0 GA vs 4.0.1.

┆Issue is synchronized with this Jira Bug by Unito
┆Fix Versions: management-api-for-apache-cassandra-0.1.29
┆Issue Number: K8SSAND-834
┆Priority: Medium

API v0.1.24 not compatible with C* 4.0-rc1

Running the API with a custom C* build off the cassandra-4.0-rc1 tag (3282f5ecf187ecbb56b8d73ab9a9110c010898b0) fails with:

ERROR [nioEventLoopGroup-3-11] 2021-04-26 14:58:34,013 NodeOpsResources.java:339 - Error when executing request
com.datastax.oss.driver.api.core.servererrors.ServerError: Failed to execute method NodeOps.checkConsistencyLevel
	at com.datastax.oss.driver.api.core.servererrors.ServerError.copy(ServerError.java:54)
	at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
	at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
	at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
	at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
	at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:54)
	at com.datastax.mgmtapi.CqlService.executePreparedStatement(CqlService.java:70)
	at com.datastax.mgmtapi.resources.K8OperatorResources.lambda$checkClusterConsistency$1(K8OperatorResources.java:93)
	at com.datastax.mgmtapi.resources.NodeOpsResources.handle(NodeOpsResources.java:331)
	at com.datastax.mgmtapi.resources.K8OperatorResources.checkClusterConsistency(K8OperatorResources.java:83)

DB home is not a writeable path

Running the API v0.1.24 on GKE fails with:

Running java -Xms128m -Xmx128m -jar /opt/management-api/datastax-mgmtapi-server-0.1.0-SNAPSHOT.jar --cassandra-socket /tmp/cassandra.sock --host tcp://0.0.0.0:8080 --host file:///tmp/oss-mgmt.sock --explicit-start true --cassandra-home /var/lib/cassandra/
Usage error: Option 'db_home' was given value '/var/lib/cassandra/' which is not a writeable path

Note: I'm running a slightly modified version of the Management API image because I need to run with a custom C* image instead of pulling the prebuilt one from dockerhub, but other than that my setup is pretty standard so I have no reasons to believe this wouldn't be a problem in general.

The issue is that when it mounts the volume /var/lib/cassandra in GKE, the folder ends up being owned by root (regardless of the chown that happened earlier). I confirmed that by logging the permissions at the very end of the Dockerfile execution which was correct, and then at the beginning of the entrypoint, which was incorrect. It looks like this is how k8s works: kubernetes/kubernetes#2630

In the entrypoint, it does have another chown call to fix the permissions, but that only happens if the user is root: https://github.com/k8ssandra/management-api-for-apache-cassandra/blob/v0.1.24/scripts/docker-entrypoint.sh#L17

I think we can fix this by removing/reverting the USER cassandra directive from the Dockerfile and then run the java command in the entrypoint with gosu.
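
A rough sketch of that approach in the entrypoint, assuming gosu is available in the image (the chown path and java arguments are the ones quoted above; the real script will differ):

chown -R cassandra: /var/lib/cassandra   # takes effect now that the entrypoint runs as root again
exec gosu cassandra java -Xms128m -Xmx128m -jar /opt/management-api/datastax-mgmtapi-server-0.1.0-SNAPSHOT.jar --cassandra-socket /tmp/cassandra.sock --host tcp://0.0.0.0:8080 --host file:///tmp/oss-mgmt.sock --explicit-start true --cassandra-home /var/lib/cassandra/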

Add experimental arm64 support for Cassandra 3.11.6

The official Cassandra images all support a variety of platforms including arm64. Support for arm64 is important for users who are using Cassandra in an embedded/IoT setting, are developing on their new ARM powered Apple laptop, or are looking to save money by using arm servers, such as with AWS Graviton. Ideally, we would align with OSS Cassandra to better enable these users.

Supporting arm64 comes with a number of challenges:

  1. Metrics collector does not presently support arm
  2. There is no official Cassandra 4.0 image and ours does not support arm
  3. Netty only provides arm64 builds for epoll support starting with 4.1.50
  4. We have no arm hardware for building
  5. We do not presently have arm hardware for testing

This ticket is narrowly focused on just getting Cassandra 3.11.6 to work, in any capacity, on arm64 with the Management API. 3.11.6 already has an official multiarch image and the Cass Operator does testing against 3.11.6 regularly, so it's a good candidate to start with.

For this ticket, I propose we address the above issues in the following way:

  1. Disable the metrics collector when running on arm
  2. Ignore Cassandra 4.0 for the moment
  3. Update the version of Netty for the management API
  4. We can use buildx, which will in turn use QEMU, to produce builds for arm without the need of arm hardware
  5. I can test on a Raspberry Pi, but it's not ideal. I think it's worth it to get an arm build out there even if we cannot test it regularly.
