
Hazelcast

What is Hazelcast

The world’s leading companies trust Hazelcast to modernize applications and take instant action on data in motion to create new revenue streams, mitigate risk, and operate more efficiently. Businesses use Hazelcast’s unified real-time data platform to process streaming data, enrich it with historical context and take instant action with standard or ML/AI-driven automation - before it is stored in a database or data lake.

Hazelcast is named in the Gartner Market Guide to Event Stream Processing and a leader in the GigaOm Radar Report for Streaming Data Platforms. To join our community of CXOs, architects and developers at brands such as Lowe’s, HSBC, JPMorgan Chase, Volvo, New York Life, and others, visit hazelcast.com.

When to use Hazelcast

Hazelcast provides a platform that can handle multiple types of workloads for building real-time applications.

  • Stateful data processing over streaming data or data at rest
  • Querying streaming and batch data sources directly using SQL
  • Ingesting data through a library of connectors and serving it using low-latency SQL queries
  • Pushing updates to applications on events
  • Low-latency queue-based or pub-sub messaging
  • Fast access to contextual and transactional data via caching patterns such as read/write-through and write-behind
  • Distributed coordination for microservices
  • Replicating data from one region to another or between data centers in the same region

Key Features

  • Stateful and fault-tolerant data processing and querying over data streams and data at rest using SQL or dataflow API
  • A comprehensive library of connectors such as Kafka, Hadoop, S3, RDBMS, JMS and many more
  • Distributed messaging using pub-sub and queues
  • Distributed, partitioned, queryable key-value store with event listeners, which can also be used to store contextual data for enriching event streams with low latency
  • A production-ready Raft implementation which provides linearizable (CP) concurrency primitives such as distributed locks (see the example after this list)
  • Tight integration for deploying machine learning models with Python to a data processing pipeline
  • Cloud-native, run everywhere architecture
  • Zero-downtime operations with rolling upgrades
  • At-least-once and exactly-once processing guarantees for stream processing pipelines
  • Data replication between data centers and geographic regions using WAN
  • Microsecond performance for key-value point lookups and pub-sub
  • Unique data processing architecture resulting in 99.99th-percentile latency of under 10ms for streaming queries with millions of events per second
  • Client libraries in Java, Python, Node.js, .NET, C++ and Go
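
For the Raft-based CP primitives mentioned above, here is a minimal Java sketch of acquiring a linearizable distributed lock. It assumes a running cluster with the CP subsystem enabled (at least three CP members); the lock name orders-lock is illustrative.

var hz = Hazelcast.bootstrappedInstance();
FencedLock lock = hz.getCPSubsystem().getLock("orders-lock"); // linearizable lock backed by Raft
lock.lock();
try {
    // critical section: at most one holder cluster-wide
} finally {
    lock.unlock();
}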

Operational Data Store

Hazelcast provides distributed in-memory data structures which are partitioned, replicated and queryable. One of the main use cases for Hazelcast is for storing a working set of data for fast querying and access.

The main data structure underlying Hazelcast, called IMap, is a key-value store with a rich set of features.

Hazelcast stores data in partitions, which are distributed to all the nodes. You can increase the storage capacity by adding additional nodes, and if one of the nodes goes down, the data is restored automatically from the backup replicas.

You can interact with maps using SQL or a programming language client of your choice. You can create and interact with a map as follows:

CREATE MAPPING myMap (name varchar EXTERNAL NAME "__key", age INT EXTERNAL NAME "this") 
TYPE IMap
OPTIONS ('keyFormat'='varchar','valueFormat'='int');
INSERT INTO myMap VALUES('Jake', 29);
SELECT * FROM myMap;

The same can be done programmatically using one of the supported programming languages. Here are some examples in Java and Python:

// Java
var hz = HazelcastClient.newHazelcastClient();
IMap<String, Integer> map = hz.getMap("myMap");
map.set("Alice", 25);

# Python
import hazelcast

client = hazelcast.HazelcastClient()
my_map = client.get_map("myMap")
age = my_map.get("Alice").result()

Other supported programming languages are C#, C++, Node.js and Go.

Alternatively, you can ingest data directly from the many sources supported using SQL:

CREATE MAPPING csv_ages (name VARCHAR, age INT)
TYPE File
OPTIONS ('format'='csv',
    'path'='/data', 'glob'='data.csv');
SINK INTO myMap
SELECT name, age FROM csv_ages;

Hazelcast also provides additional data structures such as ReplicatedMap, Set, MultiMap and List. For a full list, refer to the distributed data structures section of the docs.
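
For instance, a MultiMap stores multiple values under a single key. A minimal Java sketch (assuming a running cluster; the map name article-tags is illustrative):

var client = HazelcastClient.newHazelcastClient();
MultiMap<String, String> tags = client.getMultiMap("article-tags");
tags.put("article-1", "java");    // multiple values may be associated
tags.put("article-1", "caching"); // with the same key
Collection<String> allTags = tags.get("article-1"); // returns both values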

Stateful Data Processing

Hazelcast has a built-in data processing engine called Jet. Jet can be used to build both streaming and batch data pipelines that are elastic. You can use it to process large volumes of real-time events or huge batches of static datasets. To give a sense of scale, a single node of Hazelcast has been proven to aggregate 10 million events per second with latency under 10 milliseconds. A cluster of Hazelcast nodes can process billions of events per second.

An application which aggregates millions of sensor readings per second with 10-millisecond resolution from Kafka looks like the following:

var hz = Hazelcast.bootstrappedInstance();

var p = Pipeline.create();

p.readFrom(KafkaSources.<String, Reading>kafka(kafkaProperties, "sensors"))
 .withTimestamps(event -> event.getValue().timestamp(), 10) // use event timestamp, allowed lag in ms
 .groupingKey(reading -> reading.sensorId())
 .window(sliding(1_000, 10)) // sliding window of 1s by 10ms
 .aggregate(averagingDouble(reading -> reading.temperature()))
 .writeTo(Sinks.logger());

hz.getJet().newJob(p).join();

Use the following command to deploy the application to the server:

bin/hazelcast submit analyze-sensors.jar

Jet supports advanced streaming features such as exactly-once processing and watermarks.
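
For example, exactly-once processing can be requested per job via its configuration. A minimal sketch, reusing the pipeline p from the example above (the snapshot interval value is illustrative):

var config = new JobConfig()
        .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE)
        .setSnapshotIntervalMillis(10_000); // distributed state snapshot every 10 s
hz.getJet().newJob(p, config).join();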

Data Processing using SQL

Jet also powers the SQL engine in Hazelcast which can execute both streaming and batch queries. Internally, all SQL queries are converted to Jet jobs.

CREATE MAPPING trades (
    id BIGINT,
    ticker VARCHAR,
    price DECIMAL,
    amount BIGINT)
TYPE Kafka
OPTIONS (
    'valueFormat' = 'json',
    'bootstrap.servers' = 'kafka:9092'
);
SELECT ticker, ROUND(price * 100) AS price_cents, amount
  FROM trades
  WHERE price * amount > 100;
+------------+----------------------+-------------------+
|ticker      |           price_cents|             amount|
+------------+----------------------+-------------------+
|EFGH        |                  1400|                 20|

Messaging

Hazelcast provides lightweight options for adding messaging to your application. The two main constructs for messaging are topics and queues.

Topics

Topics provide a publish-subscribe pattern where each message is fanned out to multiple subscribers. See the examples below in Java and Python:

// Java
var hz = Hazelcast.bootstrappedInstance();
ITopic<String> topic = hz.getTopic("my_topic");
topic.addMessageListener(msg -> System.out.println(msg));
topic.publish("message");

# Python
import hazelcast

client = hazelcast.HazelcastClient()
topic = client.get_topic("my_topic")

def handle_message(msg):
    print("Received message %s" % msg.message)

topic.add_listener(on_message=handle_message)
topic.publish("my-message")

For examples in other languages, please refer to the docs.

Queues

Queues provide FIFO semantics; you can add items from one client and remove them from another. See the examples below in Java and Python:

// Java
var client = HazelcastClient.newHazelcastClient();
IQueue<String> queue = client.getQueue("my_queue");
queue.put("new-item");

# Python
import hazelcast

client = hazelcast.HazelcastClient()
q = client.get_queue("my_queue")
my_item = q.take().result()
print("Received item %s" % my_item)

For examples in other languages, please refer to the docs.

Get Started

Follow the Getting Started Guide to install and start using Hazelcast.

Documentation

Read the documentation for in-depth details about how to install Hazelcast and an overview of the features.

Get Help

You can use Slack for getting help with Hazelcast.

How to Contribute

Thanks for your interest in contributing! The easiest way is to just send a pull request. Have a look at the issues marked as good first issue for some guidance.

Building From Source

Building Hazelcast requires at minimum JDK 17. Pull the latest source from the repository and use Maven install (or package) to build:

$ git pull origin master
$ ./mvnw clean package -DskipTests

It is recommended to use the included Maven wrapper script. It is also possible to use a local Maven distribution with the same version as the one used in the Maven wrapper script.

Additionally, there is a quick build activated by setting the -Dquick system property. It skips validation tasks for faster local builds (e.g. tests, checkstyle validation, javadoc, source plugins, etc.) and does not build the extensions and distribution modules.
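
For example, a quick local build using the wrapper script might look like:

$ ./mvnw clean install -Dquick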

Testing

Take into account that the default build executes thousands of tests which may take a considerable amount of time. Hazelcast has 3 testing profiles:

  • Default:
    ./mvnw test

to run quick/integration tests (these can be run in parallel, without using the network, by using the -P parallelTest profile).

  • Slow Tests:
    ./mvnw test -P nightly-build

to run tests that are either slow or cannot be run in parallel.

  • All Tests:
    ./mvnw test -P all-tests

to run all tests serially using the network.

Some tests require Docker to run. Set -Dhazelcast.disable.docker.tests system property to ignore them.
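
For example (assuming the property acts as a simple flag, set here explicitly):

$ ./mvnw test -Dhazelcast.disable.docker.tests=true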

When developing a PR it is sufficient to run your new tests and some related subset of tests locally. Our PR builder will take care of running the full test suite.

Trigger Phrases in the Pull Request Conversation

When you create a pull request (PR), it must pass a build-and-test procedure. Maintainers will be notified about your PR, and they can trigger the build using special comments. These are the phrases you may see used in the comments on your PR:

  • run-lab-run - run the default PR builder
  • run-lts-compilers - compiles the sources with JDK 17 and JDK 21 (without running tests)
  • run-ee-compile - compile hazelcast-enterprise with this PR
  • run-ee-tests - run tests from hazelcast-enterprise with this PR
  • run-windows - run the tests on a Windows machine (HighFive is not supported here)
  • run-with-ibm-jdk-8 - run the tests with IBM JDK 8
  • run-cdc-debezium-tests - run all tests in the extensions/cdc-debezium module
  • run-cdc-mysql-tests - run all tests in the extensions/cdc-mysql module
  • run-cdc-postgres-tests - run all tests in the extensions/cdc-postgres module
  • run-mongodb-tests - run all tests in the extensions/mongodb module
  • run-s3-tests - run all tests in the extensions/s3 module
  • run-nightly-tests - run nightly (slow) tests. WARNING: Use with care as this is a resource consuming task.
  • run-ee-nightly-tests - run nightly (slow) tests from hazelcast-enterprise. WARNING: Use with care as this is a resource consuming task.
  • run-sql-only - run default tests in hazelcast-sql, hazelcast-distribution, and extensions/mapstore modules
  • run-docs-only - do not run any tests, check that only files with .md, .adoc or .txt suffix are added in the PR
  • run-sonar - run SonarCloud analysis
  • run-arm64 - run the tests on arm64 machine

Where not indicated, the builds run on a Linux machine with Oracle JDK 17.

Creating PRs for Hazelcast SQL

When creating a PR with changes located in the hazelcast-sql module and nowhere else, you can label your PR with SQL-only. This will change the standard PR builder to one that will only run tests related to SQL (see run-sql-only above), which will significantly shorten the build time vs. the default PR builder. NOTE: this job will fail if you've made changes anywhere other than hazelcast-sql.

Creating PRs which contain only documentation

When creating a PR which changes only documentation (files with suffix .md or .adoc), it makes no sense to run tests. For that case the label docs-only can be used. The job will fail if you've made changes other than to .md, .adoc or .txt files.

License

Source code in this repository is covered by one of two licenses:

  1. Apache License 2.0
  2. Hazelcast Community License

The default license throughout the repository is Apache License 2.0 unless the header specifies another license.

Acknowledgments

Thanks to YourKit for supporting open source software by providing us a free license for their Java profiler.

We owe (the good parts of) our CLI tool's user experience to picocli.

Copyright

Copyright (c) 2008-2024, Hazelcast, Inc. All Rights Reserved.

Visit www.hazelcast.com for more info.


charts's Issues

Question: CPU Sizing on Kubernetes?

According to the operations guidelines, the CPU Sizing is recommended to be 8 cores at a minimum.
How does this apply to a Kubernetes/OpenShift environment?

Management-center could not lock home directory

Hi,

I'm deploying the Hazelcast Helm chart version 3.4.3 (https://hazelcast-charts.s3.amazonaws.com/) on an EKS Kubernetes cluster.

To avoid another issue related to the adminCredentialsSecretName helm chart property, I have manually created a PVC and set the mancenter values to enable persistence and use that PVC.
The first deployment of Management Center works without problems: it uses that PVC, the pod is bound to it, and when I log in it works.
The issue occurs on subsequent creations of the pod: as the config is stored in the PV created by the PVC, the error below appears and Management Center never starts again.

using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
ERROR: Could not lock home directory. Make sure that Management Center web application is stopped (offline) before starting this command. If you are sure the application is stopped, it means that lock file was not deleted properly. Please delete 'mc.lock' file in the home directory manually before using the command.
To see the full stack trace, re-run with the -v/--verbose option.

That said, below I provide the config values applied to the Management Center deployment in this chart.
[screenshot of Helm values omitted]

Ingress host and tls values are also configured but omitted here.

This is the manually created PVC config:
[screenshot of PVC manifest omitted]

Storage class (default) is an EBS GP2

It seems like a command to delete the mc.lock file is needed, as Management Center is not working and I therefore consider it stopped.
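
A hypothetical one-off workaround sketch (the pod name is illustrative, and it assumes the Management Center home directory is mounted at /data as in the chart's launch command shown elsewhere on this page):

$ kubectl exec hazelcast-mancenter-0 -- rm /data/mc.lock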

Make autodiscovery work based on SVC names and/or FQDN

Problem: if the target endpoint is set to a name-based value, either an FQDN or a SVC name, member autodiscovery doesn't work. It does work, though, when using an IP-based host name: target-endpoints: "XX.239.105.XX:30XXX"

The reason for this request is a simplified setup for HA.

unable to validate against any pod security policy: [spec.securityContext.fsGroup: Invalid value:

I get

Events:
  Type     Reason        Age                     From                   Message
  ----     ------        ----                    ----                   -------
  Warning  FailedCreate  2m14s (x17 over 4m57s)  replicaset-controller  Error creating: pods "hazelcast-mancenter-5d5bcfd7d7-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.fsGroup: Invalid value: []int64{100100}: group 100100 must be in the ranges: [{1 65535}]]

when attempting to install the mancenter

sync script should not port changes which are already at official repo

When someone from the community sends a PR to our official helm chart, we need to port it to this repo:
#60
helm/charts#17193

Even if the developer copies changes from the official repo without any modification, git can find unimportant diffs (whitespace etc.). In such cases, the create-pr-at-official-helm-repo script should not create a nonsense PR at the official repo.

To prevent it, the developer can put a special string like [not-sync] into the commit message so the create-pr-at-official-helm-repo script can parse it. WDYT? @leszko @eminn

Management-center in Kubernetes: cannot connect to a specific cluster-name with DNS lookup

Hello,

I'm trying to deploy the Hazelcast Helm chart version 3.4.0 (https://hazelcast-charts.s3.amazonaws.com/) on a Kubernetes cluster (orchestrated via Rancher).

I tried different approaches to provide values for the management center (javaopts, configmap containing hazelcast-client.yaml file...), but I found no way to have the management center both:

  • being able to connect to a cluster having a specific cluster name (e.g. tomcat instead of the default one dev)
  • using DNS lookup (in our cluster we have restricted rights on Kubernetes API)

Consider that the cluster itself is up and running and the nodes can find each other.

Said that, here below I provide all the files related to the management center deployment in this chart.

  • content of the Helm values file:
    fullnameOverride: "hazelcast-mau"
    image:
      tag: "4.0.1"
    cluster:
      memberCount: 2
    metrics:
      enabled: true
    rbac:
      enabled: false
      create: false
    serviceAccount:
      create: false
    metrics:
      enabled: false
    mancenter:
      enabled: true
      image:
        tag: "4.0.2"
      javaOpts: "-Dhazelcast.mc.phone.home.enabled=false"
      persistence:
        enabled: true
        storageClass: "bronze"
        size: 2Gi
      service:
        type: ClusterIP
      ingress:
        enabled: true
        hosts:
          - "hazelcast-mau-mancenter.mau-test.test.swissid.xyz"
    hazelcast:
      # In this configmap, the Hazelcast cluster configuration is set
      existingConfigMap: hazelcast-configmap
    mancenter:
     existingConfigMap: hazelcast-mancenter-configmap
     # Force to use DNS Lookup strategy
     javaOpts: "-Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local"
  • Content of the hazelcast-mancenter-configmap ConfigMap, wherein hazelcast-client.yaml file is stored:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      annotations:
        field.cattle.io/projectId: "c-tddvl:p-s55rc"
      name: hazelcast-mancenter-configmap
      namespace: mau-test
    data:
      hazelcast-client.yaml: |-
        hazelcast-client:
          cluster-name: tomcat
          network:
            kubernetes:
              enabled: true

Having such configuration, the management center complains as:

using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
Successfully added Cluster Config.
##################################################
# initialisation complete, starting now....
##################################################
+ exec java --add-opens java.base/java.lang=ALL-UNNAMED -server -Dhazelcast.mc.home=/data -Djava.net.preferIPv4Stack=true -Dhazelcast.mc.healthCheck.enable=true -DserviceName=hazelcast-mau -Dhazelcast.mc.tls.enabled=false -Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -cp /opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war -Dhazelcast.mc.contextPath=/ -Dhazelcast.mc.http.port=8080 -Dhazelcast.mc.https.port=8443 com.hazelcast.webmonitor.Launcher
2020-06-16 10:25:03 [main INFO  c.h.webmonitor.config.BuildInfo - Hazelcast Management Center 4.0.2
2020-06-16 10:25:03 [main INFO  com.hazelcast.webmonitor.Launcher - Health check is enabled and available at http://localhost:8081/health
2020-06-16 10:25:07 [main INFO  c.h.webmonitor.config.SqlDbConfig - Checking DB for required migrations.
2020-06-16 10:25:07 [main INFO  c.h.webmonitor.config.SqlDbConfig - Number of applied DB migrations: 0.
2020-06-16 10:25:07 [main INFO  c.h.webmonitor.config.AppConfig - Creating cache with maxSize=768
2020-06-16 10:25:07 [main INFO  c.h.w.storage.DiskUsageMonitor - Monitoring /data [mode=purge, interval=1000ms, limit=512 MB]
2020-06-16 10:25:07 [main INFO  c.h.w.s.s.impl.DisableLoginStrategy - Login will be disabled for 5 seconds after 3 failed login attempts. For every 3 consecutive failed login attempts, disable period will be multiplied by 10.
2020-06-16 10:25:07 [main INFO  c.h.i.m.impl.MetricsConfigHelper - MC-Client-tomcat [tomcat [4.0.1 Overridden metrics configuration with system property 'hazelcast.client.metrics.enabled'='false' -> 'ClientMetricsConfig.enabled'='false'
2020-06-16 10:25:08 [main ERROR c.h.w.service.ClusterManager - Failed to start client for cluster tomcat.
com.hazelcast.config.InvalidConfigurationException: Invalid configuration
    at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:147)
    at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.<init>(DefaultDiscoveryService.java:57)
    at com.hazelcast.spi.discovery.impl.DefaultDiscoveryServiceProvider.newDiscoveryService(DefaultDiscoveryServiceProvider.java:29)
    at com.hazelcast.client.impl.clientside.ClusterDiscoveryServiceBuilder.initDiscoveryService(ClusterDiscoveryServiceBuilder.java:246)
    at com.hazelcast.client.impl.clientside.ClusterDiscoveryServiceBuilder.build(ClusterDiscoveryServiceBuilder.java:99)
    at com.hazelcast.client.impl.clientside.HazelcastClientInstanceImpl.initClusterDiscoveryService(HazelcastClientInstanceImpl.java:285)
    at com.hazelcast.client.impl.clientside.HazelcastClientInstanceImpl.<init>(HazelcastClientInstanceImpl.java:242)
    at com.hazelcast.client.HazelcastClient.constructHazelcastClient(HazelcastClient.java:458)
    at com.hazelcast.client.HazelcastClient.newHazelcastClientInternal(HazelcastClient.java:416)
    at com.hazelcast.client.HazelcastClient.newHazelcastClient(HazelcastClient.java:136)
    at com.hazelcast.webmonitor.service.client.ImdgClientManager.newClient(ImdgClientManager.java:122)
    at com.hazelcast.webmonitor.service.ClusterManager.newClient(ClusterManager.java:203)
    at com.hazelcast.webmonitor.service.ClusterManager.lambda$new$0(ClusterManager.java:74)
    at java.base/java.util.ArrayList.forEach(Unknown Source)
    at com.hazelcast.webmonitor.service.ClusterManager.<init>(ClusterManager.java:69)

CUT CUT CUT CUT

Caused by: com.hazelcast.config.properties.ValidationException: There is no discovery strategy factory to create 'DiscoveryStrategyConfig{properties={}, className='com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy', discoveryStrategyFactory=null}' Is it a typo in a strategy classname? Perhaps you forgot to include implementation on a classpath?
    at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.buildDiscoveryStrategy(DefaultDiscoveryService.java:186)
    at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:141)
    ... 59 common frames omitted
  • Here the content of environment variable MC_INIT_CMD: ./mc-conf.sh cluster add --lenient=true -H /data -cc /config/hazelcast-client.yaml

So here you can see it correctly took tomcat as the cluster name (see the content of the hazelcast-client.yaml file provided in the ConfigMap hazelcast-mancenter-configmap), but it complains about an invalid (?) configuration.

I tried another approach: remove the entry mancenter.existingConfigMap from the Helm values yaml file (leaving the javaOpts one with the same value), but this time the management center complains about failed authentication:

using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
Successfully added Cluster Config.
##################################################
# initialisation complete, starting now....
##################################################
+ exec java --add-opens java.base/java.lang=ALL-UNNAMED -server -Dhazelcast.mc.home=/data -Djava.net.preferIPv4Stack=true -Dhazelcast.mc.healthCheck.enable=true -DserviceName=hazelcast-mau -Dhazelcast.mc.tls.enabled=false -Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -cp /opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war -Dhazelcast.mc.contextPath=/ -Dhazelcast.mc.http.port=8080 -Dhazelcast.mc.https.port=8443 com.hazelcast.webmonitor.Launcher
2020-06-16 10:50:00 [main INFO  c.h.webmonitor.config.BuildInfo - Hazelcast Management Center 4.0.2
2020-06-16 10:50:01 [main INFO  com.hazelcast.webmonitor.Launcher - Health check is enabled and available at http://localhost:8081/health
2020-06-16 10:50:04 [main INFO  c.h.webmonitor.config.SqlDbConfig - Checking DB for required migrations.
2020-06-16 10:50:04 [main INFO  c.h.webmonitor.config.SqlDbConfig - Number of applied DB migrations: 0.
2020-06-16 10:50:04 [main INFO  c.h.webmonitor.config.AppConfig - Creating cache with maxSize=768
2020-06-16 10:50:04 [main INFO  c.h.w.storage.DiskUsageMonitor - Monitoring /data [mode=purge, interval=1000ms, limit=512 MB]
2020-06-16 10:50:04 [main INFO  c.h.w.s.s.impl.DisableLoginStrategy - Login will be disabled for 5 seconds after 3 failed login attempts. For every 3 consecutive failed login attempts, disable period will be multiplied by 10.
2020-06-16 10:50:05 [main INFO  c.h.i.m.impl.MetricsConfigHelper - MC-Client-dev [dev [4.0.1 Overridden metrics configuration with system property 'hazelcast.client.metrics.enabled'='false' -> 'ClientMetricsConfig.enabled'='false'
2020-06-16 10:50:05 [main INFO  c.h.c.i.spi.ClientInvocationService - MC-Client-dev [dev [4.0.1 Running with 2 response threads, dynamic=true
2020-06-16 10:50:05 [main INFO  com.hazelcast.core.LifecycleService - MC-Client-dev [dev [4.0.1 HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTING
2020-06-16 10:50:05 [main INFO  com.hazelcast.core.LifecycleService - MC-Client-dev [dev [4.0.1 HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-06-16 10:50:05 [MC-Client-dev.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Trying to connect to cluster: dev
2020-06-16 10:50:05 [main INFO  c.h.internal.diagnostics.Diagnostics - MC-Client-dev [dev [4.0.1 Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-06-16 10:50:05 [MC-Client-dev.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Unable to get live cluster connection, retry in 1000 ms, attempt: 1 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:05 [main INFO  com.hazelcast.webmonitor.Launcher - 
Hazelcast Management Center successfully started at http://localhost:8080/
2020-06-16 10:50:06 [MC-Client-dev.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Unable to get live cluster connection, retry in 2000 ms, attempt: 2 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:08 [MC-Client-dev.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Unable to get live cluster connection, retry in 4000 ms, attempt: 3 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:12 [MC-Client-dev.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Unable to get live cluster connection, retry in 8000 ms, attempt: 4 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:20 [MC-Client-dev.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Trying to connect to [hazelcast-mau:5701
2020-06-16 10:50:20 [MC-Client-dev.internal-1 WARN  c.h.c.i.c.nio.ClientConnection - MC-Client-dev [dev [4.0.1 ClientConnection{alive=false, connectionId=1, channel=NioChannel{/172.24.9.185:40709->hazelcast-mau/172.24.9.15:5701}, remoteEndpoint=null, lastReadTime=2020-06-16 10:50:20.331, lastWriteTime=2020-06-16 10:50:20.324, closedTime=2020-06-16 10:50:20.333, connected server version=null} closed. Reason: Failed to authenticate connection
com.hazelcast.client.AuthenticationException: Authentication failed. The configured cluster name on the client (see ClientConfig.setClusterName()) does not match the one configured in the cluster or the credentials set in the Client security config could not be authenticated
    at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.authenticateOnCluster(ClientConnectionManagerImpl.java:793)
    at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.getOrConnect(ClientConnectionManagerImpl.java:581)
    at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.connect(ClientConnectionManagerImpl.java:423)
    at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.doConnectToCandidateCluster(ClientConnectionManagerImpl.java:451)
    at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.doConnectToCluster(ClientConnectionManagerImpl.java:385)
    at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.lambda$submitConnectToClusterTask$1(ClientConnectionManagerImpl.java:359)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
    at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
    at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
2020-06-16 10:50:20 [MC-Client-dev.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Exception during initial connection to [hazelcast-mau:5701: com.hazelcast.client.AuthenticationException: Authentication failed. The configured cluster name on the client (see ClientConfig.setClusterName()) does not match the one configured in the cluster or the credentials set in the Client security config could not be authenticated
2020-06-16 10:50:20 [MC-Client-dev.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev [4.0.1 Trying to connect to [hazelcast-mau:5703

This seems to be correct, since here the cluster name is dev (default) and not tomcat.

I tried a final approach: put back the ConfigMap hazelcast-mancenter-configmap containing the hazelcast-client.yaml file, but this time with this content:

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    field.cattle.io/projectId: "c-tddvl:p-s55rc"
  name: hazelcast-mancenter-configmap
  namespace: mau-test
data:
  hazelcast-client.yaml: |-
    hazelcast-client:
      cluster-name: tomcat

Here the network.kubernetes.enabled: true setting is no longer provided.

So, the management center is now correctly trying to connect to a cluster named tomcat, but this time looking at 127.0.0.1:

using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
Successfully added Cluster Config.
##################################################
# initialisation complete, starting now....
##################################################
+ exec java --add-opens java.base/java.lang=ALL-UNNAMED -server -Dhazelcast.mc.home=/data -Djava.net.preferIPv4Stack=true -Dhazelcast.mc.healthCheck.enable=true -DserviceName=hazelcast-mau -Dhazelcast.mc.tls.enabled=false -Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -cp /opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war -Dhazelcast.mc.contextPath=/ -Dhazelcast.mc.http.port=8080 -Dhazelcast.mc.https.port=8443 com.hazelcast.webmonitor.Launcher
2020-06-16 10:55:32 [main INFO  c.h.webmonitor.config.BuildInfo - Hazelcast Management Center 4.0.2
2020-06-16 10:55:33 [main INFO  com.hazelcast.webmonitor.Launcher - Health check is enabled and available at http://localhost:8081/health
2020-06-16 10:55:36 [main INFO  c.h.webmonitor.config.SqlDbConfig - Checking DB for required migrations.
2020-06-16 10:55:36 [main INFO  c.h.webmonitor.config.SqlDbConfig - Number of applied DB migrations: 0.
2020-06-16 10:55:36 [main INFO  c.h.webmonitor.config.AppConfig - Creating cache with maxSize=768
2020-06-16 10:55:36 [main INFO  c.h.w.storage.DiskUsageMonitor - Monitoring /data [mode=purge, interval=1000ms, limit=512 MB]
2020-06-16 10:55:36 [main INFO  c.h.w.s.s.impl.DisableLoginStrategy - Login will be disabled for 5 seconds after 3 failed login attempts. For every 3 consecutive failed login attempts, disable period will be multiplied by 10.
2020-06-16 10:55:37 [main INFO  c.h.i.m.impl.MetricsConfigHelper - MC-Client-tomcat [tomcat [4.0.1 Overridden metrics configuration with system property 'hazelcast.client.metrics.enabled'='false' -> 'ClientMetricsConfig.enabled'='false'
2020-06-16 10:55:37 [main INFO  c.h.c.i.spi.ClientInvocationService - MC-Client-tomcat [tomcat [4.0.1 Running with 2 response threads, dynamic=true
2020-06-16 10:55:37 [main INFO  com.hazelcast.core.LifecycleService - MC-Client-tomcat [tomcat [4.0.1 HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTING
2020-06-16 10:55:37 [main INFO  com.hazelcast.core.LifecycleService - MC-Client-tomcat [tomcat [4.0.1 HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Trying to connect to cluster: tomcat
2020-06-16 10:55:37 [main INFO  c.h.internal.diagnostics.Diagnostics - MC-Client-tomcat [tomcat [4.0.1 Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Trying to connect to [127.0.0.1:5701
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Exception during initial connection to [127.0.0.1:5701: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused to address /127.0.0.1:5701
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Trying to connect to [127.0.0.1:5702
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Exception during initial connection to [127.0.0.1:5702: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused to address /127.0.0.1:5702
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 INFO  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Trying to connect to [127.0.0.1:5703
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Exception during initial connection to [127.0.0.1:5703: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused to address /127.0.0.1:5703
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1 WARN  c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat [4.0.1 Unable to get live cluster connection, retry in 1000 ms, attempt: 1 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000

So, concluding: is this a configuration issue, or is it effectively a bug?
I honestly found no other way to configure both these parameters together, and I was not able to find any relevant hint in the documentation.

Thank you for your time.

Make mancenter service type to default to ClusterIP

This is a minor security and configuration improvement. When setting the Service type for the mancenter to LoadBalancer, it opens a NodePort in the firewall by default. This could be considered a security risk, even though you can change it to ClusterIP. The request is to set the mancenter default to type ClusterIP, because it is also the Kubernetes default.

https://github.com/hazelcast/charts/blob/master/stable/hazelcast-enterprise/values.yaml#L336
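
A values.yaml override achieving this today might look like the following sketch (using the mancenter.service.type value that also appears elsewhere on this page):

mancenter:
  service:
    type: ClusterIP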

Logging configuration support

As a user, I would like to configure logging by specifying a logging framework (e.g. log4j, log4j2, logback) and a configuration. Some of the logging framework libraries can also be included in the lib jars.

Create a tool to improve Helm Chart maintenance

We already have a lot of similar Helm Charts:

  • stable/hazelcast
  • stable/hazelcast-jet
  • hazelcast/hazelcast
  • hazelcast/hazelcast-jet
  • hazelcast/hazelcast-enterprise
  • hazelcast/hazelcast-jet-enterprise
  • ibm
  • ibm community enterprise

We need to create some tooling to maintain them effectively. An idea would be to create a tool that applies changes everywhere from a *.patch file, or a tool to quickly produce a diff of the Charts.

We should also research what other tools do (e.g. MySQL).

Management Center Persistence in Helm Charts

The mancenter.persistence.enabled parameter is used to create a Persistent Volume for MC, but we need to create a PV every time MC is deployed with the helm chart.

I believe we should remove mancenter.persistence.enabled. @emre-aydin What do you think?

Hazelcast Helm Deploy on IKS not connecting

This might be a possible duplicate of #86

Deployed Hazelcast with helm to IKS Cluster.

helm version --short
v3.0.2+g19e47ee
helm ls
NAME     	NAMESPACE	REVISION	UPDATED                            	STATUS  	CHART           	APP VERSION
hazelcast	default  	1       	2020-01-21 17:44:03.50896 -0500 EST	deployed	hazelcast-2.10.0	3.12.4
kubectl logs hazelcast-0

########################################
# JAVA_OPTS=-Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -Dhazelcast.config=/data/hazelcast/hazelcast.yaml -DserviceName=hazelcast -Dnamespace=default -Dhazelcast.mancenter.enabled=true -Dhazelcast.mancenter.url=http://hazelcast-mancenter:8080/hazelcast-mancenter -Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait=600
# CLASSPATH=/opt/hazelcast/*:/opt/hazelcast/lib/*
# starting now....
########################################
+ exec java -server -Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -Dhazelcast.config=/data/hazelcast/hazelcast.yaml -DserviceName=hazelcast -Dnamespace=default -Dhazelcast.mancenter.enabled=true -Dhazelcast.mancenter.url=http://hazelcast-mancenter:8080/hazelcast-mancenter -Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait=600 com.hazelcast.core.server.StartServer
Jan 21, 2020 10:44:08 PM com.hazelcast.config.AbstractConfigLocator
INFO: Loading configuration '/data/hazelcast/hazelcast.yaml' from System property 'hazelcast.config'
Jan 21, 2020 10:44:08 PM com.hazelcast.config.AbstractConfigLocator
INFO: Using configuration file at /data/hazelcast/hazelcast.yaml
Jan 21, 2020 10:44:08 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12.5] Prefer IPv4 stack is true, prefer IPv6 addresses is false
Jan 21, 2020 10:44:08 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12.5] Picked [172.30.199.27]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Jan 21, 2020 10:44:08 PM com.hazelcast.system
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Hazelcast 3.12.5 (20191210 - 294ff46) starting at [172.30.199.27]:5701
Jan 21, 2020 10:44:08 PM com.hazelcast.system
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
Jan 21, 2020 10:44:08 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Backpressure is disabled
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: hazelcast, service-port: 0, service-label: null, service-label-value: true, namespace: default, pod-label: null, pod-label-value: null, resolve-not-ready-addresses: true, use-node-name-as-external-address: false, kubernetes-api-retries: 3, kubernetes-master: https://kubernetes.default.svc}
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Kubernetes Discovery activated with mode: KUBERNETES_API
Jan 21, 2020 10:44:09 PM com.hazelcast.instance.Node
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Activating Discovery SPI Joiner
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
Jan 21, 2020 10:44:09 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Jan 21, 2020 10:44:09 PM com.hazelcast.core.LifecycleService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] [172.30.199.27]:5701 is STARTING
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Kubernetes plugin discovered availability zone: wdc04
Jan 21, 2020 10:44:09 PM com.hazelcast.kubernetes.KubernetesClient
WARNING: Cannot fetch public IPs of Hazelcast Member PODs, you won't be able to use Hazelcast Smart Client from outside of the Kubernetes network
Jan 21, 2020 10:44:14 PM com.hazelcast.internal.cluster.ClusterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5]

Members {size:1, ver:1} [
	Member [172.30.199.27]:5701 - 18220a66-dc88-434d-9e9a-ab2fe1fb701c this
]

Jan 21, 2020 10:44:14 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Hazelcast will connect to Hazelcast Management Center on address:
http://hazelcast-mancenter:8080/hazelcast-mancenter
Jan 21, 2020 10:44:14 PM com.hazelcast.core.LifecycleService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] [172.30.199.27]:5701 is STARTED
Jan 21, 2020 10:44:19 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Failed to connect to: http://hazelcast-mancenter:8080/hazelcast-mancenter/collector.do
Jan 21, 2020 10:44:19 PM com.hazelcast.client.impl.ClientEngine
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Applying a new client selector :ClientSelector{any}
Jan 21, 2020 10:44:45 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Initialized new cluster connection between /172.30.199.27:5701 and /172.30.239.251:43381
Jan 21, 2020 10:44:52 PM com.hazelcast.internal.cluster.ClusterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5]

Members {size:2, ver:2} [
	Member [172.30.199.27]:5701 - 18220a66-dc88-434d-9e9a-ab2fe1fb701c this
	Member [172.30.239.251]:5701 - 805fe633-c23b-43da-b5e1-7090f3e1069e
]

Jan 21, 2020 10:45:22 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Initialized new cluster connection between /172.30.199.27:5701 and /172.30.199.43:40657
Jan 21, 2020 10:45:29 PM com.hazelcast.internal.cluster.ClusterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5]

Members {size:3, ver:3} [
	Member [172.30.199.27]:5701 - 18220a66-dc88-434d-9e9a-ab2fe1fb701c this
	Member [172.30.239.251]:5701 - 805fe633-c23b-43da-b5e1-7090f3e1069e
	Member [172.30.199.43]:5701 - 3b36c99b-7df8-4278-9b6c-2d58db0d05ba
]

Jan 21, 2020 10:46:24 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Failed to pull tasks from Management Center

Visiting http://$MANCENTER_IP:8080/hazelcast-mancenter gives:
[screenshot of error page omitted]

Not sure where to look for debugging.

Can't install hazelcast release in various namespaces

For development purposes we are using various namespaces for various environments.
In this case we can install only one hazelcast release; every other deployment leads to the following error:

helm upgrade --install --wait hazelcast hazelcast/hazelcast --set mancenter.persistence.enabled=true,mancenter.ingress.enabled=true,mancenter.ingress.hosts={hazelcast-uat.company.com},mancenter.ingress.annotations."kubernetes\.io/ingress\.class=nginx",cluster.memberCount=3,mancenter.service.type=ClusterIP --namespace=system-uat
Release "hazelcast" does not exist. Installing it now.

Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "hazelcast" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "system-uat": current value is "system-test"

I believe that the ability to install multiple releases in the same Kubernetes cluster (but in different namespaces) is a must for any chart.

Create Continuous Delivery process

Currently, the Hazelcast Helm Chart repo is published by the Helm-Chart-release pipeline in Jenkins. However, it has the following drawbacks compared to the official Helm Chart repo:

  • The Jenkins release is not triggered automatically (you need to go into Jenkins and click "Build" to release new chart versions)
  • There are no tests run (so you actually need to test each chart manually before releasing)
  • There is no check whether the Chart version was bumped (so, if you forget about it, the new chart will override the existing chart)
  • There are no notifications sent to the PR (because there are no tests).

Some guidelines how to create a Continuous Delivery pipeline for the Helm Charts:

PVC metadata.name length may prevent glusterfs storageclass use

glusterfs prefixes the storage service name with glusterfs-dynamic-, 18 characters.

https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/glusterfs/glusterfs.go#L72

The service name has a max length of 63, as per the error below.

Consider trimming the PVC metadata.name to a maximum of 45 characters (see the template sketch after the output below).

# kubectl -n dacleyra get pvc dacleyra-hazelcast-hazelcast-enterprise-mancenter -o=yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2019-05-15T21:51:47Z
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: hazelcast-enterprise
    chart: hazelcast-enterprise-1.0.1
    heritage: Tiller
    release: dacleyra-hazelcast
  name: dacleyra-hazelcast-hazelcast-enterprise-mancenter
  namespace: dacleyra
  resourceVersion: "81471163"
  selfLink: /api/v1/namespaces/dacleyra/persistentvolumeclaims/dacleyra-hazelcast-hazelcast-enterprise-mancenter
  uid: a313b277-775b-11e9-ac0c-6cae8b1be502
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 8Gi
  storageClassName: glusterfs
status:
  phase: Pending

# kubectl -n dacleyra describe pvc dacleyra-hazelcast-hazelcast-enterprise-mancenter
Name:          dacleyra-hazelcast-hazelcast-enterprise-mancenter
Namespace:     dacleyra
StorageClass:  glusterfs
Status:        Pending
Volume:
Labels:        app=hazelcast-enterprise
               chart=hazelcast-enterprise-1.0.1
               heritage=Tiller
               release=dacleyra-hazelcast
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age                 From                         Message
  ----     ------              ----                ----                         -------
  Warning  ProvisioningFailed  3m (x429 over 17h)  persistentvolume-controller  Failed to provision volume with StorageClass "glusterfs": failed to create volume: failed to create endpoint/service dacleyra/glusterfs-dynamic-dacleyra-hazelcast-hazelcast-enterprise-mancenter: error creating service: Service "glusterfs-dynamic-dacleyra-hazelcast-hazelcast-enterprise-mancenter" is invalid: metadata.name: Invalid value: "glusterfs-dynamic-dacleyra-hazelcast-hazelcast-enterprise-mancenter": must be no more than 63 characters
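
A hypothetical template-side fix (the hazelcast.fullname helper name is illustrative; trunc and trimSuffix are standard Helm template functions):

  name: {{ printf "%s-mancenter" (include "hazelcast.fullname" .) | trunc 45 | trimSuffix "-" }}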

Configuring service-dns fails when installed from helm chart

When running the Hazelcast server using the helm chart with the service-dns value configured, it fails with the following error:

Caused by: com.hazelcast.config.InvalidConfigurationException: Properties 'service-dns' and ('service-name' or 'service-label-name') cannot be defined at the same time

I override the config properties from the parent chart as follows

hazelcast:
  cluster:
    memberCount: 2
  service:
    clusterIP: "None"
  hazelcast:
    rest: true
    yaml:
      hazelcast:
        network:
          join:
            multicast:
              enabled: false
            kubernetes:
              enabled: true
              service-dns: ${serviceName}
        management-center:
          enabled: ${hazelcast.mancenter.enabled}
          url: ${hazelcast.mancenter.url}

I tried overriding the service-name property to null but it is still not working.

AWS EKS is binding MC to a DNS name

When you deploy the hazelcast chart to AWS EKS, the instructions provided in INDEX.txt were not correct for Management Center, so I had to find the MC URL by executing a describe command.

Please see `LoadBalancer Ingress` below.

$ kubectl describe svc dining-serval-hazelcast-enterprise-mancenter 
Name:                     dining-serval-hazelcast-enterprise-mancenter
Namespace:                default
Labels:                   app=hazelcast-enterprise
                          chart=hazelcast-enterprise-1.0.1
                          heritage=Tiller
                          release=dining-serval
Annotations:              <none>
Selector:                 app=hazelcast-enterprise,release=dining-serval,role=mancenter
Type:                     LoadBalancer
IP:                       10.100.98.225
LoadBalancer Ingress:     a539e0f709f3111e8b9d00af5b0ce326-465655592.us-west-2.elb.amazonaws.com
Port:                     mancenterport  8080/TCP
TargetPort:               mancenter/TCP
NodePort:                 mancenterport  31877/TCP
Endpoints:                192.168.246.76:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  6m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   6m    service-controller  Ensured load balancer

Create a way to mount a custom volume

Currently, to mount a volume which contains custom JARs or a keystore/truststore, a user needs to modify templates/*. Allow doing it just by modifying values.yaml.

Headless Kubernetes Service By Default

Hazelcast uses a Kubernetes Service to discover other Hazelcast members, so there is no need for LoadBalancer or ClusterIP as the service type. It should be headless by default; that is actually the recommended approach:

https://kubernetes.io/docs/concepts/configuration/overview/#services
Use headless Services (which have a ClusterIP of None) for easy service discovery when you don't need kube-proxy load balancing.

I also experienced that Hazelcast can only work with headless Services with Istio Service Mesh.
https://github.com/hazelcast-guides/hazelcast-istio
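
A values.yaml sketch for a headless member service (the service.clusterIP value is also used elsewhere on this page):

service:
  type: ClusterIP
  clusterIP: "None"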

Run kubesec.io for hazelcast and hazelcast-enterprise

Information

Run kubesec.io on hazelcast/hazelcast and hazelcast/hazelcast-enterprise Helm Charts and fix the obvious comments.

Comments from Mesut

kubectl plugin scan statefulset/pioneering-zebra-hazelcast

scanning statefulset pioneering-zebra-hazelcast

kubesec.io score: 3


Advise:

  1. .spec .volumeClaimTemplates[] .spec .accessModes | index("ReadWriteOnce")

  2. containers[] .securityContext .runAsNonRoot == true
     Force the running image to run as a non-root user to ensure least privilege

  3. containers[] .securityContext .capabilities .drop
     Reducing kernel capabilities available to a container limits its attack surface

  4. containers[] .securityContext .readOnlyRootFilesystem == true
     An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost

  5. containers[] .securityContext .runAsUser > 10000
     Run as a high-UID user to avoid conflicts with the host's user table

Hazelcast management center can't redeploy

After a successful first deploy, redeploying the management center fails with the following log message:

executing command specified by MC_INIT_CMD
ERROR: Could not add new Cluster Config. Reason: Cluster config dev already exists!
To see the full stack trace, re-run with the -v/--verbose option.

Expected behavior is that a restart works fine.
If persistence is disabled, it works fine, but then I have to create the master password every time the management center is restarted.

Helm version:
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}

Hazelcast chart version: hazelcast-3.0.5
App version: 4.0

Kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

values.yaml:

mancenter:
  persistence:
    storageClass: nfs-client
  service:
    type: NodePort

`customVolume` is not working

With Helm 2:

customVolume was working until this commit and stopped working with this commit. You can simply verify it by:

helm upgrade --install my-chart \
--set hazelcast.licenseKey=<key> \
--set customVolume.hostPath.path=/tmp/  \
hazelcast/hazelcast-enterprise --version 3.4.4 --debug --dry-run

It will throw Error: YAML parse error on hazelcast-enterprise/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 101: mapping values are not allowed in this context. Here, line 101 is not the 101st line in the YAML file; it is the relative line number in the dry-run output, which corresponds to:

volumes:
- name: hazelcast-storage
    configMap:
    name: huseyin-hz-hazelcast-enterprise-configuration
- name: hazelcast-custom
            hostPath:
    path: /tmp/
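For comparison, the correctly rendered volumes section would be consistently indented:

volumes:
- name: hazelcast-storage
  configMap:
    name: huseyin-hz-hazelcast-enterprise-configuration
- name: hazelcast-custom
  hostPath:
    path: /tmp/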

MC Pod does not start with helm installation

The helm install hazelcast2 hazelcast/hazelcast command fails to create the MC pod in minikube.

$ kubectl get all
NAME                         READY   STATUS             RESTARTS   AGE
pod/hazelcast2-0             1/1     Running            2          11m
pod/hazelcast2-1             1/1     Running            2          10m
pod/hazelcast2-2             1/1     Running            2          9m59s
pod/hazelcast2-mancenter-0   0/1     CrashLoopBackOff   3          77s

Output of kubectl describe pod/hazelcast2-mancenter-0:

  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  11m (x2 over 11m)     default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         11m                   default-scheduler  Successfully assigned default/hazelcast-mancenter-0 to minikube
  Warning  Unhealthy         8m19s                 kubelet, minikube  Liveness probe failed: Get http://172.17.0.3:8081/health: read tcp 172.17.0.1:39992->172.17.0.3:8081: read: connection reset by peer
  Normal   Pulled            6m40s (x5 over 11m)   kubelet, minikube  Container image "hazelcast/management-center:4.0" already present on machine
  Normal   Created           6m40s (x5 over 11m)   kubelet, minikube  Created container hazelcast-mancenter
  Normal   Started           6m40s (x5 over 11m)   kubelet, minikube  Started container hazelcast-mancenter
  Warning  BackOff           95s (x34 over 8m16s)  kubelet, minikube  Back-off restarting failed container
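
Since the scheduler reports an unbound immediate PersistentVolumeClaim, checking the claim status is a good first step (standard kubectl; the claim name below is a placeholder):

kubectl get pvc
kubectl describe pvc <mancenter-pvc-name>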

OpenSSL Read-only file system problem

How to reproduce

Create a Secret named keystore containing the key and certificates:

kubectl create secret generic keystore --from-file=key.pem --from-file=chain.pem --from-file=cert.pem

Install hazelcast-enterprise with the following command:

helm install --name hazelcast-openssl \
  --set hazelcast.licenseKey=<license-key>  \
  --set hazelcast.ssl=true \
  --set secretsMountName=keystore \
  --set hazelcast.yaml.hazelcast.network.ssl.factory-class-name=com.hazelcast.nio.ssl.OpenSSLEngineFactory \
  --set hazelcast.yaml.hazelcast.network.ssl.properties.keyFile=/data/secrets/key.pem \
  --set hazelcast.yaml.hazelcast.network.ssl.properties.trustCertCollectionFile=/data/secrets/cert.pem \
  --set hazelcast.yaml.hazelcast.network.ssl.properties.keyCertChainFile=/data/secrets/chain.pem \
  hazelcast/hazelcast-enterprise

The exception in the Hazelcast member log indicates a read-only filesystem:

	Caused by: java.io.IOException: Read-only file system
		at java.io.UnixFileSystem.createFileExclusively(Native Method)
		at java.io.File.createTempFile(File.java:2024)
		at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:183)

Workaround

Explicitly setting readOnlyRootFilesystem: false fixes the problem: Netty's NativeLibraryLoader extracts the OpenSSL native library to a temporary file, which fails on a read-only filesystem.
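Assuming the chart exposes the container securityContext through values.yaml (check the chart's values for the exact key), the workaround could be expressed as:

securityContext:
  readOnlyRootFilesystem: false   # let Netty extract its native OpenSSL library to a temp file

An alternative that keeps the root filesystem read-only would be mounting a writable emptyDir volume at the temp directory, but that requires template changes.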

Failed to connect to: http://hazelcast-mancenter:8080/hazelcast-mancenter/

Hi, I am Sri. I have installed Hazelcast using Helm in my EKS cluster, in the dev namespace. It was previously running 2 members and 1 Management Center. After trying to deploy another 2 members and 1 Management Center under a different name in the same namespace, I am getting errors in the member log files. Please help me with this.

Here are the logs I am getting from the Hazelcast member:

exec java -server -javaagent:/opt/hazelcast/lib/jmx_prometheus_javaagent.jar=8080:/opt/hazelcast/jmx_agent_config.yaml -Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -Dhazelcast.config=/data/hazelcast/hazelcast.yaml -DserviceName=hazelcast-dev-new -Dnamespace=dev -Dhazelcast.mancenter.enabled=true -Dhazelcast.mancenter.url=http://hazelcast-dev-new-mancenter:8080/hazelcast-mancenter -Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait=600 -Dhazelcast.jmx=true com.hazelcast.core.server.StartServer
Dec 12, 2019 6:18:39 AM com.hazelcast.config.AbstractConfigLocator
INFO: Loading configuration '/data/hazelcast/hazelcast.yaml' from System property 'hazelcast.config'
Dec 12, 2019 6:18:39 AM com.hazelcast.config.AbstractConfigLocator
INFO: Using configuration file at /data/hazelcast/hazelcast.yaml
Dec 12, 2019 6:18:40 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12] Prefer IPv4 stack is true, prefer IPv6 addresses is false
Dec 12, 2019 6:18:40 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12] Picked [ip]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Dec 12, 2019 6:18:40 AM com.hazelcast.system
INFO: [ip]:5701 [dev] [3.12] Hazelcast 3.12 (20190409 - 915d83a) starting at [ip]:5701
Dec 12, 2019 6:18:40 AM com.hazelcast.system
INFO: [ip]:5701 [dev] [3.12] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
Dec 12, 2019 6:18:40 AM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [ip]:5701 [dev] [3.12] Backpressure is disabled
Dec 12, 2019 6:18:40 AM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [ip]:5701 [dev] [3.12] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: hazelcast-dev-new, service-port: 0, service-label: null, service-label-value: true, namespace: dev, resolve-not-ready-addresses: true, kubernetes-master: https://kubernetes.default.svc}
Dec 12, 2019 6:18:40 AM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [ip]:5701 [dev] [3.12] Kubernetes Discovery activated resolver: KubernetesApiEndpointResolver
Dec 12, 2019 6:18:40 AM com.hazelcast.instance.Node
INFO: [ip]:5701 [dev] [3.12] Activating Discovery SPI Joiner
Dec 12, 2019 6:18:41 AM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [ip]:5701 [dev] [3.12] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
Dec 12, 2019 6:18:41 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [ip]:5701 [dev] [3.12] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Dec 12, 2019 6:18:41 AM com.hazelcast.core.LifecycleService
INFO: [ip]:5701 [dev] [3.12] [ip]:5701 is STARTING
Dec 12, 2019 6:18:41 AM com.hazelcast.kubernetes.KubernetesClient
WARNING: Cannot fetch public IPs of Hazelcast Member PODs, you won't be able to use Hazelcast Smart Client from outside of the Kubernetes network
Dec 12, 2019 6:18:47 AM com.hazelcast.internal.cluster.ClusterService
INFO: [ip]:5701 [dev] [3.12]

Members {size:1, ver:1} [
Member [ip]:5701 - 45c847bc-cd06-43eb-b2bb-709f2e849e3f this
]

Dec 12, 2019 6:18:47 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Hazelcast will connect to Hazelcast Management Center on address:
http://hazelcast-mancenter:8080/hazelcast-mancenter
Dec 12, 2019 6:18:47 AM com.hazelcast.internal.jmx.ManagementService
INFO: [ip]:5701 [dev] [3.12] Hazelcast JMX agent enabled.
Dec 12, 2019 6:18:47 AM com.hazelcast.core.LifecycleService
INFO: [ip]:5701 [dev] [3.12] [ip]:5701 is STARTED
Dec 12, 2019 6:18:52 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Failed to connect to: http://hazelcast-dev-new-mancenter:8080/hazelcast-mancenter/collector.do
Dec 12, 2019 6:18:52 AM com.hazelcast.client.impl.ClientEngine
INFO: [ip]:5701 [dev] [3.12] Applying a new client selector :ClientSelector{any}
Dec 12, 2019 6:19:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Initialized new cluster connection between /ip:5701 and /ip:44171
Dec 12, 2019 6:19:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=4, /ip:5701->/ip:43528, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:19:26 AM com.hazelcast.internal.cluster.ClusterService
INFO: [ip]:5701 [dev] [3.12]

Members {size:2, ver:2} [
Member [ip]:5701 - 45c847bc-cd06-43eb-b2bb-709f2e849e3f this
Member [ip]:5701 - c95eae99-b779-42a5-bf34-d21dd4bc64ae
]

Dec 12, 2019 6:19:37 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Connection to Management Center restored.
Dec 12, 2019 6:19:37 AM com.hazelcast.client.impl.ClientEngine
INFO: [ip]:5701 [dev] [3.12] Applying a new client selector :ClientSelector{any}
Dec 12, 2019 6:20:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=17, /ip:5701->/ip:44708, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:20:57 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Failed to pull tasks from Management Center
Dec 12, 2019 6:21:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=30, /ip:5701->/ip:46236, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:22:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=43, /ip:5701->/ip:47788, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:23:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=56, /ip:5701->/ip:49416, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:24:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=69, /ip:5701->/ip:51034, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.

Prometheus Scraping on 5701 in addition to 8080

When you install the Hazelcast Helm chart with metrics.enabled=true, Prometheus tries to scrape two ports on each Hazelcast member. Port 5701 should not be scraped, as Hazelcast publishes metrics on port 8080.
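Assuming annotation-based Prometheus discovery, restricting the scrape annotations to the metrics port would avoid this (a sketch; the chart's actual annotation keys may differ):

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"     # jmx_prometheus_javaagent port; do not scrape 5701
  prometheus.io/path: "/metrics"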


Change the default livenessProbe endpoint to `/health`

Currently the default livenessProbe endpoint is /health/node-state. It should actually be /health.

The change is trivial, but before applying it we need to double-check that it does not break rolling upgrades and scale-down.
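A sketch of the proposed probe, with the port and timings as placeholders:

livenessProbe:
  httpGet:
    path: /health              # instead of /health/node-state
    port: 8081                 # illustrative; use the chart's configured health-check port
  initialDelaySeconds: 30
  periodSeconds: 10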

Enable auto-merge via GitHub comments in the sync script after the testing period

Remove this line:

 hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='Please approve @hasancelik @leszko @mesutcelik @googlielmo @eminn'

and add the part below:

 hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='/ok-to-test'
 hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='/lgtm'

 export GITHUB_TOKEN=${APPROVER_GITHUB_TOKEN}
 hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='/lgtm'
