Hazelcast Platform Documentation

This repository contains the Antora components for the Hazelcast Platform documentation.

The documentation source files are marked up with AsciiDoc.

Docs Structure

This section describes how this repository is structured:

  • The component name, version, and start page are configured in each branch’s antora.yml file.

  • The navigation for all modules is stored in the ROOT module’s nav.adoc file.

  • The docs site playbook instructs Antora to automatically build the site using content in the main branch as well as any branches that are prefixed with v/.
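The per-branch configuration described above can be sketched as a minimal antora.yml. The field names match those referenced later in this README; the values are illustrative only, not taken from a live branch:

```yaml
# docs/antora.yml (hypothetical values for a v/5.0 branch)
name: hazelcast                  # component name
version: '5.0'                   # version key Antora uses for URLs
display_version: '5.0'
start_page: ROOT:index.adoc      # start page in the ROOT module
asciidoc:
  attributes:
    full-version: '5.0.3'        # full patch version shown in the docs
```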

Release Workflow

Documentation for new releases is hosted in versioned branches that are prefixed with v/. The latest-dev content (snapshot content) is stored in the main branch.

We support documentation for the latest patch releases of minor versions. For example, content for the 5.0 version is hosted in the v/5.0 branch. This branch contains content for the latest patch release of version 5.0.

Note
The documentation build process is triggered whenever you create a new branch with the v/ prefix, push to an existing v/ branch, or push to the main branch.

Snapshot Releases

Add the new snapshot version to the following:

  • hz-docs, main branch, docs/antora.yml:

      • version

      • display_version

      • full-version

      • asciidoc.attributes.page-latest-supported-java-client

  • hazelcast-docs, main and develop branches:

      • _redirects: the /hazelcast/latest-dev/* entry

      • search-config.json: start_urls.tags and start_urls.variables.version

  • management-center-docs, main branch, docs/antora.yml:

      • asciidoc.attributes.page-latest-supported-hazelcast
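For the _redirects entry, a Netlify-style redirect rule looks roughly like the following; the target path and status code are hypothetical, not copied from the real file:

```
# _redirects (hypothetical): point latest-dev at the current snapshot docs
/hazelcast/latest-dev/* /hazelcast/5.2-snapshot/:splat 200
```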

Latest Releases

Add the major.minor version to the following:

  • hz-docs, v/{major.minor version} branch, docs/antora.yml:

      • version

      • display_version

  • hazelcast-docs, main and develop branches:

      • _redirects: the /hazelcast/latest/* entry

      • search-config.json: create a new object in the start_urls array:

        {
          "url": "https://docs.hazelcast.com/hazelcast/(?P<version>.*?)/",
          "tags": ["hazelcast-{major.minor version}"],
          "variables": { "version": ["{major.minor version}"] },
          "selectors_key": "hz"
        }

  • management-center-docs, the v/{version} branch where version is the value of the asciidoc.attributes.page-latest-supported-mc field in the docs/antora.yml file of the hz-docs repository, docs/antora.yml:

      • asciidoc.attributes.page-latest-supported-hazelcast

Add the full version major.minor.patch to the following:

  • hz-docs, docs/antora.yml:

      • full-version

      • asciidoc.attributes.page-latest-supported-java-client

Patch Releases

In the v/ branch for the minor version whose patch you are releasing, update the asciidoc.attributes.full-version field in the antora.yml file to the new patch version. For example, if you are releasing version 5.0.3, find the v/5.0 branch and update the asciidoc.attributes.full-version field in the antora.yml file with 5.0.3.

Note
As soon as you push content to this branch, GitHub will trigger a new build of the site, which will include your new content.
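As a rough sketch of that edit with standard tools (the file below is a stand-in, not the real antora.yml):

```shell
# Create a stand-in antora.yml, then bump full-version from 5.0.2 to 5.0.3.
cat > antora.yml <<'EOF'
asciidoc:
  attributes:
    full-version: '5.0.2'
EOF
sed -i "s/full-version: '5.0.2'/full-version: '5.0.3'/" antora.yml
grep 'full-version' antora.yml
```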

Creating Release Branches

  1. If you are releasing a new major version, create a release branch from the main branch.

    For example, if you are releasing version 5.1, create a new release branch named 5.1 from the main branch.

  2. Update the fields mentioned in Latest Releases.

  3. Remove the prerelease: true field from the docs/antora.yml file of the hz-docs repository.

    Important
    If you are creating a branch for a beta release, do not remove this field.
  4. When you are ready to release, create a maintenance branch from the release branch.

    Note
    As soon as you push the maintenance branch to the repository, GitHub will trigger a new build of the site, which will include your new content.
  5. Make sure to delete the release branch.

    For example, if you released version 5.1, delete the 5.1 branch. This step helps to keep the repository clean of release branches.
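The steps above can be sketched with plain git commands; the version number and the throwaway repository are illustrative, and the push that triggers the build is omitted:

```shell
set -e
repo=$(mktemp -d)                    # throwaway repo instead of hz-docs
cd "$repo"
git init -q
git -c user.email=docs@example.com -c user.name=docs \
    commit -q --allow-empty -m "init"
git branch -M main

git checkout -q -b 5.1 main          # 1. release branch for the new version
# 2./3. update antora.yml fields and remove `prerelease: true` here
git checkout -q -b v/5.1 5.1         # 4. maintenance branch (pushing it triggers a build)
git branch -q -D 5.1                 # 5. delete the release branch
git branch --list
```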

GitHub Actions

To automate some elements of the build process, this repository includes the following GitHub Actions:

  • validate-site.yml: validates that all internal and external links are working. Triggered on a pull request to the main, archive, and v/ maintenance branches.

  • build-site.yml: builds the production documentation site by sending a build hook to Netlify (the hosting platform that we use). Triggered on a push to the main branch and any v/ maintenance branches.

  • backport.yml: backports commits to maintenance branches. Triggered on a push to the main branch that originated from a pull request with the backport label.
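The branch-based triggers above correspond to `on:` filters in the workflow files, along these lines (an illustrative sketch, not the actual workflow contents):

```yaml
# Sketch of a trigger block such as build-site.yml might use
on:
  push:
    branches:
      - main
      - 'v/**'
```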

Contributing

If you want to add a change or contribute new content, see our contributing guide.

To let us know about something that you’d like us to change, consider creating an issue.

License

All documentation is available under the terms of a Creative Commons License.


hz-docs's Issues

Docs: Feedback for Queue and Topic Pages

Hi, I have some feedback about this page

"Hazelcast distributed queue is an implementation of java.util.concurrent.BlockingQueue. Being distributed, Hazelcast distributed queue enables all cluster members to interact with it. Using Hazelcast distributed queue, you can add an item in one cluster member and remove it from another one."

This one talks only about embedded mode, and there is no need to mention BlockingQueue, as it's not relevant for non-Java users. The same problem exists on the Topic page, so I won't repeat it here.

hazelcast.discovery.public.ip.enabled can be improved

In https://docs.hazelcast.com/hazelcast/5.0-beta-1/clients/java.html#client-system-properties, hazelcast.discovery.public.ip.enabled is documented poorly. This property is a three-state boolean that can be null, false, or true. Every value's behavior should be documented. See the documentation in the Node.js client:

hazelcast/hazelcast-nodejs-client@c980b4d

When set to `true`, the client will assume that it needs to use public IP addresses reported by members. When set to `false`, the client will always use private addresses reported by members. If it is null, the client will try to infer how the cluster is structured and act accordingly. This inference is not 100% reliable and may result in false negatives.

Docs: Feedback for Listening to Events

Hi, I have some feedback about this page

I would prioritize the distributed object events and explain the use cases in a bit more detail. I think cluster events are not so relevant for most common use cases.

Consider merging distributed object events into the map section, as I think it's a more natural place there.

Docs: SQL is hard to find

It's hard to find the SQL docs in the left-hand nav. SQL deserves its own section. The comparison with Predicate is distracting, since SQL covers much broader use cases. I think the comparison could be made on the Predicate page instead.

I would expect something top-level for SQL, as we have even for Executors, a much more minor feature.

List all options for adding files to members' classpaths

We have the Adding Java Classes to Members section, which explains code deployment. But we should have an overview that lists all the ways of adding Java classes to members, such as adding JAR files to the lib folder or using the command line.

SQL Docs are spread over multiple places

SQL docs are spread over multiple places. Some are in Develop Solutions, some are in Reference, but they're more like guides. I found it very hard to navigate.

Docs: Feedback for Distributed Data Structures

Hi, I have some feedback about this page

We can change the ordering based on the usage patterns of the data structures. For example, Replicated Map is mentioned way down, but it's the third most popular data structure, whereas things like Cache and Set are pretty niche.

Here's data for 4.2 (usage chart attached in the original issue).

Provide documentation for logging in custom code

As a developer, I want to log. I want to use the common logging infrastructure so I don't have to change logging levels in multiple places. I want to use a standard API for logging (slf4j preferably).

Provide documentation on

  • how to change logging levels in the platform

  • the logging infrastructure in the platform so I can hook into it (which JARs I should depend on, what's in the runtime, etc.)

Protobuf Support on Solaris

More info - hazelcast/hazelcast#18767 (comment)

Protobuf is used by hazelcast-jet-kafka, and there is no protobuf library for Solaris, so we can't support it unless we find a workaround.

We should fix the Solaris Jenkins build so that it skips the hazelcast-jet-kafka extension, because right now zero tests run and the Jenkins job is disabled.

Docs: Feedback for INSERT INTO/SINK INTO

Hi, I have some feedback about this page

In the examples section, it only shows the very basic case where you're inserting a simple key-value pair. I would appreciate an example of inserting an object as well, possibly separating those two examples even though they overlap a bit.

I had a simple example like:

  • CREATE MAPPING persons (__key BIGINT, name VARCHAR, age INT) TYPE IMap OPTIONS ('keyFormat'='bigint','valueFormat'='java','valueJavaClass' = 'sql.Person')
  • INSERT INTO persons (__key, name, age) VALUES (1, 'Jake', 29);

Additionally, regarding the basic example of adding a key-value entry, I would like a more detailed explanation that the provided example will create an IMap with String keys and Integer values (i.e. IMap<String, Integer>) and that the key (= __key) is aliased as name and the value (this) is aliased as age. Also, there's no real explanation of EXTERNAL NAME. I know it's hidden under the CREATE MAPPING link, but it's still not really transparent.

Docs: Feedback for Production Checklist

Hi, I have some feedback about this page

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#hardware-recommendations

  • We suggest at least 8 CPU cores or equivalent per member...

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#operating-system-recommendations

  • Solaris Sparc should be Solaris SPARC in all cases (just like VMWare ESX below it is "ESX" everywhere). See these two Wikipedia articles if confirmation is needed.

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#vmware-esx

  • Add

    • - Avoid sharing one network interface card (NIC) between multiple virtual machine environments. A Hazelcast cluster is a distributed system and can be very network-intensive. Trying to share one physical NIC between multiple VMs may cause network-related performance problems.
  • Edit

    • Be careful about overcommitting CPU cores. Monitor CPU steal-time metrics.

    • Do not move guests while Hazelcast is running - for ESX this means disabling vMotion. If you want to use vMotion (live migration), first stop the Hazelcast cluster then restart it after the migration completes.

    • Always enable verbose garbage collection (GC) logs in the Java Virtual Machine. When "Real" time is higher than "User" time, this may indicate virtualization issues. The JVM is not using the CPU to execute application code during garbage collection, and is probably waiting on input/output (I/O) operations.

    • Use pass-through hard disks/partitions; do not use image files.

    • If you want to use automatic snapshots, first stop the Hazelcast cluster then restart it after the snapshot.

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#windows

  • The workaround for such cases...

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#jvm-recommendations

  • In order to avoid long garbage collection (GC) pauses and latencies from the Java Virtual Machine (JVM), we recommend 16GB or less of maximum JVM heap. If [High-Density Memory]() is enabled, no more than 8GB of maximum JVM heap is recommended. Horizontal scaling of JVM memory is recommended over vertical scaling if you wish to exceed these numbers.
    (Also add a hyperlink on "High-Density Memory", linking to https://docs.hazelcast.com/hazelcast/5.0-beta-1/storage/high-density-memory.html#latest-banner or equivalent)

  • Semicolons should be replaced with colons for General recommendations, For Java9+, and For Java 8.

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#jvm-recommendations

  • Total data size should be calculated based on the combination of primary data and backup data. For example, if you have configured your cluster with a backup count of 2, then total memory consumed is actually 3x larger than the primary data size (primary + backup + backup). Partition sizes of 50MB or less are recommended.

https://docs.hazelcast.com/hazelcast/latest/production-checklist.html#jvm-recommendations

  • Add

  • Edit

    • An optimal partition count and size establish a balance between the number of partitions...

    • The partition count should be a prime number. This helps to minimize...

    • A partition count which is too low constrains the cluster. The count should be...

    • A partition size of 50MB or less typically ensures good performance. Larger clusters may be able to use partition sizes of up to 100MB, but will likely also require larger JVM heap sizes to accommodate the increase in data flow.

    • The partition count cannot be easily changed after a cluster is created, so if you have a large cluster, be sure to test and set an optimum partition count prior to deployment. If you need to change the partition count after a cluster is already running, you will need to schedule a maintenance window to bring the cluster down entirely. If your cluster uses [Persistence]() or [CP Persistence]() features, those persistent files will need to be removed after the cluster is shut down, as they contain references to the previous partition count. Once all member configurations are updated, and any persistent data structure files are removed, the cluster can be safely restarted.

  • Remove "Large Cluster Configuration Requirements" section, as it doesn't add value and doesn't appear to directly correspond with any section from the Deployment & Operations Guide

Persistence: stale data - outdated information

https://github.com/hazelcast/hz-docs/blob/main/docs/modules/storage/pages/persistence.adoc
Synchronizing Persisted Data Faster:
Actual text
Otherwise, the crashed member aborts the initialization and shuts down. To be able to join the cluster, the Persistence directory previously used by the crashed member must be deleted manually. You can do so, using a force start.
Expected
Now, the client should connect due to the revised functionality - see issue https://github.com/hazelcast/hazelcast-enterprise/issues/4163#issuecomment-895087809 for details.

Dead links

Found some links not going to the proper places. I would have fixed them myself with a PR, but I actually can't find how to reference the other files. Or maybe I'm just too lazy to find out on a Friday evening, sorry :) I'll learn from the PRs that you'll send and won't repeat this again, I promise :)

Sink overview misses the sink names

In the sink overview, the table is missing the first column with the sink name. The table columns seem to be misaligned.

See here

The correct state is shown in the old Jet docs (screenshot omitted).

Docs: Feedback for On-Premise

Hi, I have some feedback about this page

It might be useful to list everything that comes with slim/full so users can decide which one they want and maybe find help on what certain files are for.

hazelcast-5.0-SNAPSHOT-slim
├── LICENSE
├── NOTICE
├── bin
│   ├── common.sh
│   ├── hz-cli
│   ├── hz-cli.bat
│   ├── hz-cluster-admin
│   ├── hz-cluster-cp-admin
│   ├── hz-healthcheck
│   ├── hz-start
│   ├── hz-start.bat
│   ├── hz-stop
│   └── hz-stop.bat
├── config
│   ├── examples
│   │   ├── hazelcast-client-full-example.xml
│   │   ├── hazelcast-client-full-example.yaml
│   │   ├── hazelcast-client.yaml
│   │   ├── hazelcast-full-example.xml
│   │   ├── hazelcast-full-example.yaml
│   │   ├── hazelcast-security-hardened.yaml
│   │   └── hazelcast.yaml
│   ├── hazelcast-client.xml
│   ├── hazelcast.xml
│   ├── jmx_agent_config.yaml
│   ├── jvm-client.options
│   ├── jvm.options
│   └── log4j2.properties
├── lib
│   ├── cache-api-1.1.1.jar
│   ├── hazelcast-5.0-SNAPSHOT.jar
│   ├── hazelcast-download.properties
│   ├── hazelcast-hibernate53-2.1.1.jar
│   ├── hazelcast-sql-5.0-SNAPSHOT.jar
│   ├── hazelcast-wm-4.0.jar
│   ├── jansi-2.1.0.jar
│   ├── jline-reader-3.19.0.jar
│   ├── jline-terminal-3.19.0.jar
│   ├── jline-terminal-jansi-3.19.0.jar
│   ├── jmx_prometheus_javaagent-0.14.0.jar
│   ├── log4j-api-2.14.0.jar
│   ├── log4j-core-2.14.0.jar
│   ├── log4j-slf4j-impl-2.14.0.jar
│   ├── picocli-3.9.0.jar
│   └── slf4j-api-1.7.30.jar
└── licenses
    ├── THIRD-PARTY.txt
    ├── apache-v2-license.txt
    └── hazelcast-community-license.txt
hazelcast-5.0-SNAPSHOT (full)
├── LICENSE
├── NOTICE
├── bin
│   ├── common.sh
│   ├── hz-cli
│   ├── hz-cli.bat
│   ├── hz-cluster-admin
│   ├── hz-cluster-cp-admin
│   ├── hz-healthcheck
│   ├── hz-start
│   ├── hz-start.bat
│   ├── hz-stop
│   └── hz-stop.bat
├── config
│   ├── examples
│   │   ├── hazelcast-client-full-example.xml
│   │   ├── hazelcast-client-full-example.yaml
│   │   ├── hazelcast-client.yaml
│   │   ├── hazelcast-full-example.xml
│   │   ├── hazelcast-full-example.yaml
│   │   ├── hazelcast-security-hardened.yaml
│   │   └── hazelcast.yaml
│   ├── hazelcast-client.xml
│   ├── hazelcast.xml
│   ├── jmx_agent_config.yaml
│   ├── jvm-client.options
│   ├── jvm.options
│   └── log4j2.properties
├── custom-lib
│   ├── hazelcast-3-connector-impl-5.0-SNAPSHOT.jar
│   ├── hazelcast-3.12.12.jar
│   └── hazelcast-client-3.12.12.jar
├── lib
│   ├── cache-api-1.1.1.jar
│   ├── hazelcast-3-connector-common-5.0-SNAPSHOT.jar
│   ├── hazelcast-3-connector-interface-5.0-SNAPSHOT.jar
│   ├── hazelcast-5.0-SNAPSHOT.jar
│   ├── hazelcast-download.properties
│   ├── hazelcast-hibernate53-2.1.1.jar
│   ├── hazelcast-jet-avro-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-cdc-debezium-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-cdc-mysql-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-cdc-postgres-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-csv-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-elasticsearch-7-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-files-azure-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-files-gcs-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-files-s3-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-grpc-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-hadoop-all-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-kafka-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-kinesis-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-protobuf-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-python-5.0-SNAPSHOT.jar
│   ├── hazelcast-jet-s3-5.0-SNAPSHOT.jar
│   ├── hazelcast-sql-5.0-SNAPSHOT.jar
│   ├── hazelcast-wm-4.0.jar
│   ├── jansi-2.1.0.jar
│   ├── jline-reader-3.19.0.jar
│   ├── jline-terminal-3.19.0.jar
│   ├── jline-terminal-jansi-3.19.0.jar
│   ├── jmx_prometheus_javaagent-0.14.0.jar
│   ├── log4j-api-2.14.0.jar
│   ├── log4j-core-2.14.0.jar
│   ├── log4j-slf4j-impl-2.14.0.jar
│   ├── picocli-3.9.0.jar
│   └── slf4j-api-1.7.30.jar
└── licenses
    ├── THIRD-PARTY.txt
    ├── apache-v2-license.txt
    └── hazelcast-community-license.txt

Docs: Feedback for Capacity Planning

Hi, I have some feedback about this page

Much of the text needs some small tweaks for readability. In addition, several places use Hazelcast terminology that isn't hyperlinked to provide an explanation of what it means.

Docs: Feedback for Cluster Utilities

Hi, I have some feedback about this page

I think this chapter contains content that is a bit out of place. To me, "Managing Cluster and Member States", "Defining Member Attributes", "Getting Member Events and Member Sets", and "Enabling Lite Members" have nothing to do with "Cluster Utilities". I agree it might be subjective and maybe needs proper thought, but I guarantee that nobody will find e.g. lite members under "Cluster Utilities" without a search.

should getDistributedObjects include `__sql.catalog` map ?

Describe the bug
I think this is an internal map for SQL and should be hidden.
Expected behavior
I expect it to be filtered from getDistributedObjects results.

To Reproduce

Steps to reproduce the behavior:

  1. Run the 5.0 snapshot server (latest): docker run -p 5701:5701 hazelcast/hazelcast:5.0-SNAPSHOT
  2. Run the following script
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;

public class Main {
    public static void main(String[] args) {
        ClientConfig c = new ClientConfig();
        HazelcastInstance h = HazelcastClient.newHazelcastClient(c);
        System.out.println(h.getDistributedObjects().size()); // 1
        System.out.println(h.getDistributedObjects().toArray()[0]); // ReplicatedMap{name='__sql.catalog'}
    }
}

Docs: Feedback for Persistence

Hi, I have some feedback about this page

We don't really tell the new user why Merkle trees are needed for map persistence. The page just has a config description for Merkle trees. We should go into more depth as to why we need them. Explain it.

Configuration examples should include all options as tabs

We always include examples for YAML and XML, but we are inconsistent about whether we include examples for Spring.

We also have the option of using system properties and environment variables, so these should be included as an option.

HazelcastJsonValue is documented only for the Predicates API

HazelcastJsonValue is a serialization option that's used by default in services such as SQL, but it doesn't appear in the serialization section.

As a user, I would have no idea that I can serialize my data to HazelcastJsonValue, which comes with faster queries, but slower inserts.

We should have a centralized place where we can discuss this serialization option and link to it where necessary.

All current information is here: https://github.com/hazelcast/hazelcast-platform/blob/master/docs/modules/query/pages/querying-maps-predicates.adoc#querying-json-strings
