infinispan / infinispan-helm-charts
License: Apache License 2.0
So I have a Quarkus pod that starts after my Infinispan pod is successfully up.
I would like Infinispan to already have a specific cache schema in place when it's created or restarted.
Is there a property in the chart I can use to supply a JSON structure or file to do this?
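For illustration, here is a sketch of what I mean, assuming the deploy.infinispan.cacheContainer.caches values shown in other issues in this tracker can be used for this (the cache name and settings are hypothetical):

deploy:
  infinispan:
    cacheContainer:
      caches:
        mycache:                 # hypothetical cache created at startup
          distributedCache:
            mode: "SYNC"
            encoding:
              mediaType: "application/x-protostream"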
networking.k8s.io/v1beta1/Ingress has been removed in k8s 1.22. We should utilise networking.k8s.io/v1/Ingress instead.
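For reference, the main schema change is in the path backend; under networking.k8s.io/v1 it becomes a service object (the service name below is a placeholder):

backend:
  service:
    name: infinispan   # placeholder
    port:
      number: 11222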
When I use the Infinispan chart as a dependency in my helm chart, I get the following error:
Error: INSTALLATION FAILED: found in Chart.yaml, but missing in charts/ directory: infinispan-infinispan
dependencies:
  - name: infinispan-infinispan
    alias: infinispan
    version: 0.3.0
    repository: https://charts.openshift.io/
The error occurs after running the helm dependency update and helm install commands.
Why: Based on my investigation, it is because the name of the chart is infinispan, but as a dependency we have to use the name infinispan-infinispan. I could not find any workaround; I think it should be solved in the infinispan chart itself.
Hello!
@ryanemerson
I have an error:
Unexpected error creating the cache with the provided configuration. "Unauthorized action."
What can be the cause?
I have added a secret with user creation:
security:
  secretName: ispn-connection
  batch: ""
In the doc https://infinispan.org/docs/helm-chart/main/helm-chart.html#adding-multiple-credentials_configuring-authentication the following is written:
deploy:
  security:
    authentication: true
    secretName: 'connect-secret'
Secret:
  identities-batch: |
    user create user_administrator -p -g admin
    user create user_keycloak -p -g application
    user create user_monitor -p --users-file metrics-users.properties --groups-file metrics-groups.properties
  password: password
  username: user_monitor
Maybe I am missing something?
In the Service:
infinispan-helm-charts/templates/service.yaml
Lines 5 to 6 in 9b38d8c
In the Route:
infinispan-helm-charts/templates/route.yaml
Lines 7 to 11 in 9b38d8c
In the Ingress:
infinispan-helm-charts/templates/route.yaml
Lines 31 to 35 in 9b38d8c
The metrics service doesn't need an additional route or load-balancing, so it could be a headless service to improve network routes on K8s. It could be a separate template from service.yaml, with a name like metrics-service.yaml.
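A minimal sketch of such a headless metrics service (the selector label is an assumption based on the generated output quoted in another issue here):

apiVersion: v1
kind: Service
metadata:
  name: infinispan-metrics
spec:
  clusterIP: None               # headless: no virtual IP, no load-balancing
  selector:
    clusterName: infinispan     # assumed selector
  ports:
    - name: metrics
      port: 11223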
For downstream docs, include values and descriptions from the schema and README in the docs for convenience.
Also include additional details about roles and permissions when configuring authentication, and procedures for configuring network services and connecting to clusters via CLI, Console, and clients.
Hello!
Can you create an affinity template for the deployment?
I don't need hostname-based anti-affinity, I need zone-based.
Is this possible?
Thank you
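For example, this is the kind of zone-based anti-affinity block I would like to be able to set (a sketch; the pod label used in the selector is an assumption):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              clusterName: infinispan              # assumed pod label
          topologyKey: topology.kubernetes.io/zone # spread across zones, not hostnames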
Hello!
I have written an issue, here is the link:
#93
Should I change something here?
As I understand, this is the default realm?
infinispan:
  cacheContainer:
    # [USER] Add cache, template, and counter configuration.
    name: default
    # [USER] Specify `security: null` to disable security authorization.
    security:
      authorization: {}
The Operator allows labels to be defined per service or per pod; however, the Helm Chart only allows labels to be applied to all created resources via deploy.labels.
We should introduce two new fields, podLabels and serviceLabels, to differentiate how labels are propagated:
deploy:
  labels: {}        # Applied to all resources
  podLabels: {}     # Applied to all Pods
  serviceLabels: {} # Applied to all services
This is a generalized solution for #97.
Error:
Helm install failed: chart requires kubeVersion: >= 1.21.0
which is incompatible with Kubernetes v1.24.13-eks-0a21954
Fix:
I tested that it can be fixed by changing the constraint to kubeVersion: >= 1.21.0-0, but I am not sure whether it works well with non-EKS clusters. (The EKS version string carries a pre-release suffix, and a semver range without a pre-release component excludes pre-release versions.)
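Concretely, the one-line Chart.yaml change I tested:

# Chart.yaml
kubeVersion: '>= 1.21.0-0'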
Thanks.
In some resources the infinispan-helm-charts.labels template is used to set the labels. This template already contains clusterName, but clusterName is also set individually for some resources, so the clusterName key is duplicated. There is no problem when installing with Helm, but when I use skaffold, which runs a post-render process on the Helm output and validates the YAML format, it throws an error.
The generated output in helm is:
apiVersion: v1
kind: Service
metadata:
  name: infinispan-metrics
  annotations:
    meta.helm.sh/release-name:
    meta.helm.sh/release-namespace: default
  labels:
    app: infinispan-service-metrics
    clusterName: infinispan
    clusterName: infinispan
    helm.sh/chart: infinispan-0.3.0
    meta.helm.sh/release-name: infinispan
    meta.helm.sh/release-namespace: default
    app.kubernetes.io/version: "14.0"
Here is the error I get in skaffold:
Error: INSTALLATION FAILED: error while running post render on files: error while running command /Users/tashkhisi/.asdf/installs/skaffold/2.0.1/bin/skaffold. error output:
reading Kubernetes YAML: yaml: unmarshal errors:
line 12: mapping key "clusterName" already defined at line 11
To avoid issues with duplicate resources we should add the following annotations:
meta.helm.sh/release-name: <the release name>
meta.helm.sh/release-namespace: <the current k8s namespace>
Without these annotations, the second execution of helm template . --validate | oc apply -f - will result in the following error:
helm template --validate . | oc apply -f -
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Secret "release-name-generated-secret" in namespace "alvaro-test09" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "release-name"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "alvaro-test09"
error: no objects passed to apply
We should define a helper template so that we can add these annotations to all resources, something like:
{{/*
Helm Annotations
*/}}
{{- define "infinispan.annotations" }}
meta.helm.sh/release-name: "{{ .Release.Name }}"
meta.helm.sh/release-namespace: "{{ .Release.Namespace }}"
{{- end }}
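Each resource template could then consume the helper with the usual include idiom, e.g.:

metadata:
  annotations:
    {{- include "infinispan.annotations" . | nindent 4 }}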
I think that this is partially covered by #25.
With the current latest version, defining a configuration as:
#Build configuration
images:
  server: quay.io/infinispan/server:13.0
  initContainer: registry.access.redhat.com/ubi8-micro
#Deployment configuration
deploy:
  replicas: 1
And creating the cluster with
helm install infinispan openshift-helm-charts/infinispan-infinispan -f rhdg-chart/minimal-values.yaml
When I try to update the cluster to have two replicas, the secret is gone and Helm cannot create the second pod:
helm upgrade infinispan openshift-helm-charts/infinispan-infinispan -f rhdg-chart/minimal-values.yaml --set deploy.replicas=2
If this is fully duplicated, please close this case.
Hello!
I have a question: I have deployed Infinispan, but the console lags a lot. Can you tell me what the problem is?
It freezes on loading when you click on any tab, as you can see on the screenshot.
I am sending you my values file:
# Default values for infinispan-helm-charts.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
images:
  # [USER] The container images for server pods.
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro
deploy:
  # [USER] Specify the number of nodes in the cluster.
  replicas: 2
  clusterDomain: cluster.local
  container:
    extraJvmOpts: ""
    libraries: ""
    # [USER] Define custom environment variables using standard K8s format
    # env:
    #  - name: STANDARD_KEY
    #    value: standard value
    #  - name: CONFIG_MAP_KEY
    #    valueFrom:
    #      configMapKeyRef:
    #        name: special-config
    #        key: special.how
    #  - name: SECRET_KEY
    #    valueFrom:
    #      secretKeyRef:
    #        name: special-secret
    #        key: special.how
    env:
    storage:
      size: 1Gi
      storageClassName: ""
      # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
      ephemeral: true
    resources:
      # [USER] Specify the CPU limit and the memory limit for each pod.
      limits:
        cpu: 1000m
        memory: 1024Mi
      # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
      requests:
        cpu: 1000m
        memory: 1024Mi
  security:
    secretName: ""
    batch: ""
  expose:
    # [USER] Specify `type: ""` to disable network access to clusters.
    type: Route
    nodePort: 0
    host: dummy
    annotations:
      - key: kubernetes.io/ingress.class
        value: alb
      - key: alb.ingress.kubernetes.io/group.name
        value: dummy
      - key: alb.ingress.kubernetes.io/group.order
        value: dummy
      - key: alb.ingress.kubernetes.io/scheme
        value: internal
      - key: alb.ingress.kubernetes.io/target-type
        value: ip
      - key: alb.ingress.kubernetes.io/listen-ports
        value: '[{"HTTP": 80}, {"HTTPS":443}]'
      - key: alb.ingress.kubernetes.io/certificate-arn
        value: dummy
      - key: alb.ingress.kubernetes.io/ssl-redirect
        value: '443'
  monitoring:
    enabled: false
  logging:
    categories:
      # [USER] Specify the FQN of a package from which you want to collect logs.
      - category: com.arjuna
        # [USER] Specify the level of log messages.
        level: warn
      # No need to warn about not being able to TLS/SSL handshake
      - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
        level: error
  makeDataDirWritable: false
  nameOverride: ""
  resourceLabels: []
  podLabels:
    - key: microservice
      value: infinispan
  svcLabels: []
  tolerations: []
  nodeAffinity: {}
  nodeSelector: {}
  infinispan:
    cacheContainer:
      # [USER] Add cache, template, and counter configuration.
      name: default
      # [USER] Specify `security: null` to disable security authorization.
      security:
        authorization: {}
      transport:
        cluster: ${infinispan.cluster.name:cluster}
        node-name: ${infinispan.node.name:}
        stack: kubernetes
    server:
      endpoints:
        # [USER] Hot Rod and REST endpoints.
        - securityRealm: default
          socketBinding: default
          connectors:
            rest:
              restConnector:
            hotrod:
              hotrodConnector:
            # [MEMCACHED] Uncomment to enable Memcached endpoint
            # memcached:
            #   memcachedConnector:
            #     socketBinding: memcached
        # [METRICS] Metrics endpoint for cluster monitoring capabilities.
        - connectors:
            rest:
              restConnector:
                authentication:
                  mechanisms: BASIC
          securityRealm: metrics
          socketBinding: metrics
      interfaces:
        - inetAddress:
            value: ${infinispan.bind.address:127.0.0.1}
          name: public
      security:
        credentialStores:
          - clearTextCredential:
              clearText: secret
            name: credentials
            path: credentials.pfx
        securityRealms:
          # [USER] Security realm for the Hot Rod and REST endpoints.
          - name: default
            # [USER] Comment or remove this properties realm to disable authentication.
            propertiesRealm:
              groupProperties:
                path: groups.properties
              groupsAttribute: Roles
              userProperties:
                path: users.properties
          # [METRICS] Security realm for the metrics endpoint.
          - name: metrics
            propertiesRealm:
              groupProperties:
                path: metrics-groups.properties
                relativeTo: infinispan.server.config.path
              groupsAttribute: Roles
              userProperties:
                path: metrics-users.properties
                relativeTo: infinispan.server.config.path
      socketBindings:
        defaultInterface: public
        portOffset: ${infinispan.socket.binding.port-offset:0}
        socketBinding:
          # [USER] Socket binding for the Hot Rod and REST endpoints.
          - name: default
            port: 11222
          # [METRICS] Socket binding for the metrics endpoint.
          - name: metrics
            port: 11223
          # [MEMCACHED] Uncomment to enable Memcached endpoint
          # - name: memcached
          #   port: 11221
When trying to create a cache from a cache template as described in https://infinispan.org/docs/stable/titles/configuring/configuring.html#cache-configuration and https://infinispan.org/docs/helm-chart/main/helm-chart.html#server-configuration-values_configuring-servers, the following error occurs:
java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.infinispan.server.loader.Loader.run(Loader.java:106)
    at org.infinispan.server.loader.Loader.main(Loader.java:51)
Caused by: org.infinispan.commons.CacheConfigurationException: ISPN000374: No such template 'distributed-cache-template' when declaring 'sessions'
    at org.infinispan.configuration.parsing.CacheParser.getConfigurationBuilder(CacheParser.java:1186)
    at org.infinispan.configuration.parsing.CacheParser.parseDistributedCache(CacheParser.java:1038)
    at org.infinispan.configuration.parsing.Parser.parseCaches(Parser.java:872)
    at org.infinispan.configuration.parsing.Parser.parseContainer(Parser.java:752)
    at org.infinispan.configuration.parsing.Parser.readElement(Parser.java:87)
    at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:209)
    at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:187)
    at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:175)
    at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:169)
    at org.infinispan.server.Server.parseConfiguration(Server.java:319)
    at org.infinispan.server.Server.<init>(Server.java:234)
    at org.infinispan.server.Bootstrap.runInternal(Bootstrap.java:171)
    at org.infinispan.server.tool.Main.run(Main.java:98)
    at org.infinispan.server.Bootstrap.main(Bootstrap.java:56)
The only changed configuration is as follows:
cacheContainer:
  name: default
  statistics: true
  distributedCacheConfiguration:
    name: "distributed-cache-template"
    mode: "SYNC"
    statistics: "true"
    encoding:
      mediaType: "application/x-protostream"
    expiration:
      lifespan: "5000"
      maxIdle: "1000"
    memory:
      maxCount: "1000000"
      whenFull: "REMOVE"
  distributedCache:
    name: "sessions"
    configuration: "my-dist-template"
Defining caches without a template works. What is the correct syntax for using cache templates?
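For what it's worth, my working assumption (not confirmed) is that the configuration attribute must reference the declared template by its exact name, so the two values need to match:

cacheContainer:
  distributedCacheConfiguration:
    name: "my-dist-template"            # template name
  distributedCache:
    name: "sessions"
    configuration: "my-dist-template"   # must match the template's name exactly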
I would like to access the console via an Ingress. I have decided not to use the expose part of the chart since there is currently no TLS support there. With port-forward everything seems to be OK; I can access the console.
Do you know what I am doing wrong with the Ingress setup?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-mb-infinispan
  annotations:
    meta.helm.sh/release-name: svc-mb-infinispan
    meta.helm.sh/release-namespace: foo
    "cert-manager.io/cluster-issuer": "letsencrypt"
    "kubernetes.io/ingress.class": "internal"
    "kubernetes.io/tls-acme": "true"
  labels:
    app: infinispan-ingress
    clusterName: svc-mb-infinispan
    helm.sh/chart: infinispan-0.3.2
    meta.helm.sh/release-name: svc-mb-infinispan
    meta.helm.sh/release-namespace: foo
    app.kubernetes.io/version: "14.0"
    app.kubernetes.io/managed-by: Helm
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-mb-infinispan
                port:
                  number: 11222
      host: "mb-infinispan.intra.mimacom.io"
  tls:
    - hosts:
        - "mb-infinispan.intra.mimacom.io"
      secretName: mb-infinispan.intra.mimacom.io-tls
For some reason it does not work. Here is the values file:
images:
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro
deploy:
  replicas: 3
  container:
    storage:
      ephemeral: true
  infinispan:
    cacheContainer:
      statistics: true # Global statistics
      distributedCacheConfiguration:
        name: default
        mode: ASYNC
        statistics: true # Cache statistics
        encoding:
          mediaType: "application/x-java-serialized-object"
        expiration:
          lifespan: 300000
          maxIdle: 120000
        memory:
          maxCount: 1000
          whenFull: REMOVE
Hello,
I am looking for a way to deploy Infinispan in cross-DC mode.
Does this chart support this mode? And if yes, are there any documentation pages available?
Thanks!
I deploy the Infinispan cluster using the following commands:
helm lint ./infinispan-helm-charts
helm install -n qa infinispan-server ./infinispan-helm-charts
And then port-forward to access it:
kubectl port-forward service/infinispan-server 11222:11222 -n qa
On Minikube it works fine: when I run the commands above, the Infinispan servers are created and join the cluster.
On EKS, clustering is not happening: the servers are created but do NOT join the cluster.
I have the following values.yaml, which defines a cache template default. On startup, the different services create caches based on this template.
I am wondering, however, why global statistics is not enabled in the console.
Do you know how I can enable it in the YAML definition?
images:
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro
deploy:
  replicas: 3
  container:
    storage:
      ephemeral: true
  infinispan:
    cacheContainer:
      distributedCacheConfiguration:
        name: default
        mode: ASYNC
        statistics: true
        expiration:
          lifespan: 300000
          maxIdle: 120000
        memory:
          maxCount: 1000
          whenFull: REMOVE
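For comparison, the values file in an earlier issue here sets statistics at the cacheContainer level with the comment # Global statistics, so presumably that is the missing flag (my assumption):

infinispan:
  cacheContainer:
    statistics: true   # global (cache manager) statistics
    distributedCacheConfiguration:
      name: default
      statistics: true # per-cache statistics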
Infinispan 13.0 allows the server and logging to be configured via YAML. Instead of utilising the _infinispan.xml and _log4j2.xml templates, we can embed the default configurations in the values.yaml.
The advantage of this approach is that it's now possible to update the configurations at chart creation time without having to modify any templates. This is necessary in order to better integrate with the Openshift Helm charts UI.
For example, values.yaml could look like the following:
images:
  server: 'registry.redhat.io/datagrid/datagrid-8-rhel8:1.2'
  initContainer: registry.access.redhat.com/ubi8-micro
deploy:
  replicas: 1
  ...
  infinispan:
    cache-container:
      ...
    server:
A consequence of this approach is that it's no longer possible to support the toggling of authentication (values.security.authentication: true | false), as template logic is not allowed in values.yaml.
The JGroups docs recommend that the service used by DNS_PING has spec.publishNotReadyAddresses=true so that JGroups discovery can happen before a Pod is marked as Ready by k8s.
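A minimal sketch of the recommended shape for the discovery service (the name, selector, and port are placeholders; 7800 is the usual JGroups transport port):

apiVersion: v1
kind: Service
metadata:
  name: infinispan-ping            # placeholder
spec:
  clusterIP: None                  # headless, as DNS_PING expects
  publishNotReadyAddresses: true   # resolvable before Pods are Ready
  selector:
    clusterName: infinispan        # placeholder
  ports:
    - name: ping
      port: 7800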
Hi,
I am integrating Keycloak with Infinispan, but when I create the sessions cache on Infinispan I get the following error.
I am using the following values.yaml:
#Build configuration
images:
  server: quay.io/infinispan/server:latest
  initContainer: registry.access.redhat.com/ubi8-micro
#Deployment configuration
deploy:
  infinispan:
    cacheContainer:
      distributedCacheConfiguration:
        name: "sessions-cfg"
        mode: "SYNC"
        statistics: "true"
        locking:
          acquire-timeout: "0"
        mediaType: "application/x-jboss-marshalling"
  #Add a user with full security authorization.
  security:
    batch: "user create myuser -p qwer1234 -g admin"
  #Create a cluster with two pods.
  replicas: 2
I get a similar error when I do it with cache definitions:
#Build configuration
images:
  server: quay.io/infinispan/server:latest
  initContainer: registry.access.redhat.com/ubi8-micro
#Deployment configuration
deploy:
  infinispan:
    cacheContainer:
      name: "keycloak"
      statistics: "true"
      caches:
        work:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        sessions:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        authenticationSessions:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        offlineSessions:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        clientSessions:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        offlineClientSessions:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        loginFailures:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
        actionTokens:
          distributedCache:
            mode: "SYNC"
            statistics: "true"
            encoding:
              mediaType: "application/x-jboss-marshalling"
  #Add a user with full security authorization.
  security:
    batch: "user create myuser -p qwer1234 -g admin"
  #Create a cluster with two pods.
  replicas: 2
In this case I get the following error:
2023-08-02 19:35:11,859 FATAL (main) [org.infinispan.SERVER] ISPN080028: Infinispan Server failed to start org.infinispan.manager.EmbeddedCacheManagerStartupException: ISPN000573: Cannot recreate persisted configuration for cache 'sessions' because configuration Configuration{simpleCache=false, clustering=[[mode=DIST_SYNC, remote-timeout=15000, invalidation-batch-size=128, bias-acquisition=ON_WRITE, bias-lifespan=300000], hash=[consistent-hash-factory=null, owners=2, segments=256, capacity-factor=1.0, key-partitioner=HashFunctionPartitioner{hashFunction=MurmurHash3, ns=256}], l1=[enabled=false, invalidation-threshold=0, l1-lifespan=600000, l1-cleanup-interval=60000], state-transfer=[enabled=true, timeout=240000, chunk-size=512, await-initial-transfer=true], partition-handling=[when-split=ALLOW_READ_WRITES, merge-policy=NONE]], customInterceptors=[interceptors=[]], encoding=[[media-type=application/x-protostream], key=[media-type=null], value=[media-type=null]], expiration=[lifespan=-1, max-idle=-1, reaperEnabled=true, interval=60000, touch=SYNC], query=[properties={}, default-max-results=100, hit-count-accuracy=10000], indexing=[properties={}, index=null, auto-config=false, key-transformers={}, indexed-entities=[], enabled=false, storage=filesystem, startup-mode=NONE, path=null], reader=[refresh-interval=0], writer=[[thread-pool-size=1, queue-count=1, queue-size=null, commit-interval=null, ram-buffer-size=null, max-buffered-entries=null, low-level-trace=false], index-merge=[max-entries=null, factor=null, min-size=null, max-size=null, max-forced-size=null, calibrate-by-deletes=null]], invocationBatching=[enabled=false], locking=[concurrency-level=32, isolation=REPEATABLE_READ, acquire-timeout=10000, striping=false], memory=[storage=HEAP, max-size=null, max-count=-1, when-full=NONE], modules={}, persistence=[passivation=false, availability-interval=1000, connection-attempts=10, connection-interval=50], stores=[], security=[authorization=[enabled=true, roles=[observer]]], sites=[[merge-policy=org.infinispan.xsite.spi.DefaultXSiteEntryMergePolicy@49153009, max-cleanup-delay=30000, tombstone-map-size=512000], backups=[], backup-for=[remote-cache=null, remote-site=null]], statistics=[statistics=true, statistics-available=true], transaction=[[auto-commit=true, stop-timeout=30000, locking=OPTIMISTIC, transaction-manager-lookup=org.infinispan.transaction.lookup.GenericTransactionManagerLookup@a1b7549, transaction-synchronization-registry-lookup=null, mode=NON_TRANSACTIONAL, synchronization=false, single-phase-auto-commit=false, reaper-interval=30000, complete-timeout=60000, notifications=true], recovery=[enabled=false, recovery-cache=__recoveryInfoCacheName__]], unsafe=[unreliable-return-values=false], template=false} is incompatible with the existing configuration Configuration{simpleCache=false, clustering=[[mode=DIST_SYNC, remote-timeout=17500, invalidation-batch-size=128, bias-acquisition=ON_WRITE, bias-lifespan=300000], hash=[consistent-hash-factory=null, owners=2, segments=256, capacity-factor=1.0, key-partitioner=HashFunctionPartitioner{hashFunction=MurmurHash3, ns=256}], l1=[enabled=false, invalidation-threshold=0, l1-lifespan=600000, l1-cleanup-interval=60000], state-transfer=[enabled=true, timeout=60000, chunk-size=512, await-initial-transfer=true], partition-handling=[when-split=ALLOW_READ_WRITES, merge-policy=NONE]], customInterceptors=[interceptors=[]], encoding=[[media-type=application/x-protostream], key=[media-type=null], value=[media-type=null]], expiration=[lifespan=-1,
max-idle=-1, reaperEnabled=true, interval=60000, touch=SYNC], query=[properties={}, default-max-results=100, hit-count-accuracy=10000], indexing=[properties={}, index=null, auto-config=false, key-transformers={}, indexed-entities=[], enabled=false, storage=filesystem, startup-mode=NONE, path=null], reader=[refresh-interval=0], writer=[[thread-pool-size=1, queue-count=1, queue-size=null, commit-interval=null, ram-buffer-size=null, max-buffered-entries=null, low-level-trace=false], index-merge=[max-entries=null, factor=null, min-size=null, max-size=null, max-forced-size=null, calibrate-by-deletes=null]], invocationBatching=[enabled=false], locking=[concurrency-level=1000, isolation=REPEATABLE_READ, acquire-timeout=15000, striping=false], memory=[storage=HEAP, max-size=null, max-count=-1, when-full=NONE], modules={}, persistence=[passivation=false, availability-interval=1000, connection-attempts=10, connection-interval=50], stores=[], security=[authorization=[enabled=false, roles=[]]], sites=[[merge-policy=org.infinispan.xsite.spi.DefaultXSiteEntryMergePolicy@49153009, max-cleanup-delay=30000, tombstone-map-size=512000], backups=[], backup-for=[remote-cache=null, remote-site=null]], statistics=[statistics=true, statistics-available=true], transaction=[[auto-commit=true, stop-timeout=30000, locking=OPTIMISTIC, transaction-manager-lookup=org.infinispan.transaction.lookup.GenericTransactionManagerLookup@a1b7549, transaction-synchronization-registry-lookup=null, mode=NON_TRANSACTIONAL, synchronization=false, single-phase-auto-commit=false, reaper-interval=30000, complete-timeout=60000, notifications=true], recovery=[enabled=false, recovery-cache=__recoveryInfoCacheName__]], unsafe=[unreliable-return-values=false], template=false} at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:781) at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:746) at org.infinispan.server.SecurityActions.lambda$startCacheManager$1(SecurityActions.java:68) at org.infinispan.security.Security.doPrivileged(Security.java:56) at org.infinispan.server.SecurityActions.doPrivileged(SecurityActions.java:40) at org.infinispan.server.SecurityActions.startCacheManager(SecurityActions.java:71) at org.infinispan.server.Server.run(Server.java:408) at org.infinispan.server.Bootstrap.runInternal(Bootstrap.java:173) at org.infinispan.server.tool.Main.run(Main.java:98) at org.infinispan.server.Bootstrap.main(Bootstrap.java:56) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.infinispan.server.loader.Loader.run(Loader.java:106) at org.infinispan.server.loader.Loader.main(Loader.java:51)
I am deploying this on my local machine using Minikube, with the following command:
helm install infinispan openshift-helm-charts/infinispan-infinispan --values values.yaml
Can anyone help me with what I am doing wrong?
It's a common use case to be able to set custom env values for a Deployment/StatefulSet, but it doesn't seem like that is possible with this helm chart.
Would you accept a PR with changes to enable that, from both a list defined in the chart and from a secret? If so, can you advise whether there is a preferred format for that?
Use case for adding env to the values.yaml config, e.g.:
deploy:
  container:
    envFromSecrets:
      - "mySecretNameContainingPostGresPassword"
    env:
      POSTGRES_URL: jdbc:postgresql://infinispan-postgres:5432/infinispan
      POSTGRES_USERNAME: infinispan
  # the infinispan config
  infinispan:
    cacheContainer:
      statistics: true
      caches:
        myTemplate:
          distributedCacheConfiguration:
            owners: ${env.STANDARD_OWNERS:2}
        myCacheName:
          distributedCache:
            configuration: myTemplate
    server:
      dataSources:
        - name: ds
          jndiName: 'jdbc/postgres'
          statistics: true
          connectionFactory:
            driver: org.postgresql.Driver
            url: '${env.POSTGRES_URL:jdbc:postgresql://infinispan-postgres:5432/infinispan}'
            username: '${env.POSTGRES_USERNAME:postgres}'
            password: '${env.POSTGRES_PASSWORD:mysecretpassword}'
We can then override the env for different environments with our own custom settings rather than having to update values throughout the charts.
Regular users don't have authorization to manipulate ServiceMonitors unless granted by an admin. This makes the default Helm Chart invalid for those users: it will fail because they are not authorized to create a ServiceMonitor. We should consider disabling monitoring by default in the Infinispan Helm Chart.
Other ref: https://issues.redhat.com/browse/JDG-6019
Currently the helm-charts utilise the Infinispan image IDENTITIES_PATH and CONFIG_PATH env variables to configure the underlying server. We should remove this in favour of generating the Infinispan xml directly and adding users via the cli.
We should provide a guide for installing the Infinispan Helm chart and then passing values to build and deploy clusters.
Is there a way to specify nodeSelector in values.yaml for the infinispan chart?
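For what it's worth, the default values file quoted in another issue here does include deploy.nodeSelector: {}, so presumably it can be set like this (the selector itself is an example):

deploy:
  nodeSelector:
    kubernetes.io/os: linux   # example selector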
Hello!
There is a strange thing: a user with the admin role doesn't have the Cache Setup creation tab, while a user with the application role does have it.
Is this okay?
We want to connect Keycloak to our external Infinispan; which role do we need to choose?
Thank you
We should explicitly define the minimum kubernetes version that's supported with a given chart release.
Hello!
I have a question about how to safely add a password to security.batch through values.yml.
The documentation only shows this:
deploy:
  security:
    batch: 'user create admin -p changeme'
I don't see how it is possible to add one safely now.
The only thing that comes to mind is doing it with --set deploy.security.batch="user create admin -p ${var.password}".
Will this work for me?
Thank you
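One pattern shown in the docs link from another issue here is to keep credentials out of values.yml entirely: pre-create a Secret with an identities-batch key and reference it via deploy.security.secretName. A sketch (the secret name is mine):

apiVersion: v1
kind: Secret
metadata:
  name: connect-secret   # hypothetical name
type: Opaque
stringData:
  identities-batch: |
    user create admin -p changeme -g admin

and then in the values:

deploy:
  security:
    authentication: true
    secretName: 'connect-secret'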
Hello!
I have created two different security realms, but after adding the keycloak securityRealm I can't log in to the default realm anymore.
I want to access the different realms through the console UI.
infinispan:
  cacheContainer:
    # [USER] Add cache, template, and counter configuration.
    name: default
    # [USER] Specify `security: null` to disable security authorization.
    security:
      authorization: {}
    transport:
      cluster: ${infinispan.cluster.name:cluster}
      node-name: ${infinispan.node.name:}
      stack: kubernetes
  server:
    endpoints:
      # [USER] Hot Rod and REST endpoints.
      - securityRealm: default
        socketBinding: default
        connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
          hotrod:
            hotrodConnector:
      - securityRealm: keycloak
        socketBinding: default
        connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
          hotrod:
            hotrodConnector:
          # [MEMCACHED] Uncomment to enable Memcached endpoint
          # memcached:
          #   memcachedConnector:
          #     socketBinding: memcached
      # [METRICS] Metrics endpoint for cluster monitoring capabilities.
      - connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
        securityRealm: metrics
        socketBinding: metrics
    interfaces:
      - inetAddress:
          value: ${infinispan.bind.address:127.0.0.1}
        name: public
    security:
      credentialStores:
        - clearTextCredential:
            clearText: secret
          name: credentials
          path: credentials.pfx
      securityRealms:
        # [USER] Security realm for the Hot Rod and REST endpoints.
        - name: default
          # [USER] Comment or remove this properties realm to disable authentication.
          propertiesRealm:
            groupProperties:
              path: groups.properties
            groupsAttribute: Roles
            userProperties:
              path: users.properties
        - name: keycloak
          # [USER] Comment or remove this properties realm to disable authentication.
          propertiesRealm:
            groupProperties:
              path: keycloak-groups.properties
            groupsAttribute: Roles
            userProperties:
              path: keycloak-users.properties
        # [METRICS] Security realm for the metrics endpoint.
        - name: metrics
          propertiesRealm:
            groupProperties:
              path: metrics-groups.properties
              relativeTo: infinispan.server.config.path
            groupsAttribute: Roles
            userProperties:
              path: metrics-users.properties
              relativeTo: infinispan.server.config.path
    socketBindings:
      defaultInterface: public
      portOffset: ${infinispan.socket.binding.port-offset:0}
      socketBinding:
        # [USER] Socket binding for the Hot Rod and REST endpoints.
        - name: default
          port: 11222
        # [METRICS] Socket binding for the metrics endpoint.
        - name: metrics
          port: 11223
What did I do wrong?
When auth is fully disabled and I try to query the cache using the Quarkus client, the following error occurs:
ISPN005003: Exception reported java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'null' lacks 'CREATE' permission
This is the chart values config:
infinispan:
  # Default values for infinispan-helm-charts.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.
  images:
    # [USER] The container images for server pods.
    server: quay.io/infinispan/server:14.0
    initContainer: registry.access.redhat.com/ubi8-micro
  deploy:
    # [USER] Specify the number of nodes in the cluster.
    replicas: 1
    container:
      extraJvmOpts: ""
      storage:
        size: 1Gi
        storageClassName: ""
        # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
        ephemeral: false
      resources:
        # [USER] Specify the CPU limit and the memory limit for each pod.
        limits:
          cpu: 500m
          memory: 512Mi
        # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
        requests:
          cpu: 500m
          memory: 512Mi
    security:
      secretName: ""
      batch: ""
    expose:
      # [USER] Specify `type: ""` to disable network access to clusters.
      type: ""
      nodePort: 0
      host: ""
      annotations: [ ]
    monitoring:
      enabled: true
    logging:
      categories:
        # [USER] Specify the FQN of a package from which you want to collect logs.
        - category: com.arjuna
          # [USER] Specify the level of log messages.
          level: warn
        # No need to warn about not being able to TLS/SSL handshake
        - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
          level: error
    makeDataDirWritable: false
    nameOverride: ""
    resourceLabels: [ ]
    podLabels: [ ]
    svcLabels: [ ]
    infinispan:
      cacheContainer:
        # [USER] Add cache, template, and counter configuration.
        name: default
        # [USER] Specify `security: null` to disable security authorization.
        security: null
        transport:
          cluster: ${infinispan.cluster.name:cluster}
          node-name: ${infinispan.node.name:}
          stack: kubernetes
      server:
        endpoints:
          # [USER] Hot Rod and REST endpoints.
          - securityRealm: default
            socketBinding: default
            connectors:
              rest:
                restConnector:
              hotrod:
                hotrodConnector:
              # [MEMCACHED] Uncomment to enable Memcached endpoint
              # memcached:
              #   memcachedConnector:
              #     socketBinding: memcached
          # [METRICS] Metrics endpoint for cluster monitoring capabilities.
          - connectors:
              rest:
                restConnector:
                  authentication:
                    mechanisms: BASIC
            securityRealm: metrics
            socketBinding: metrics
        interfaces:
          - inetAddress:
              value: ${infinispan.bind.address:127.0.0.1}
            name: public
        security:
          credentialStores:
            - clearTextCredential:
                clearText: secret
              name: credentials
              path: credentials.pfx
          securityRealms:
            # [USER] Security realm for the Hot Rod and REST endpoints.
            - name: default
              # [USER] Comment or remove this properties realm to disable authentication.
              # propertiesRealm:
              #   groupProperties:
              #     path: groups.properties
              #   groupsAttribute: Roles
              #   userProperties:
              #     path: users.properties
            # [METRICS] Security realm for the metrics endpoint.
            - name: metrics
              propertiesRealm:
                groupProperties:
                  path: metrics-groups.properties
                  relativeTo: infinispan.server.config.path
                groupsAttribute: Roles
                userProperties:
                  path: metrics-users.properties
                  relativeTo: infinispan.server.config.path
        socketBindings:
          defaultInterface: public
          portOffset: ${infinispan.socket.binding.port-offset:0}
          socketBinding:
            # [USER] Socket binding for the Hot Rod and REST endpoints.
            - name: default
              port: 11222
            # [METRICS] Socket binding for the metrics endpoint.
            - name: metrics
              port: 11223
            # [MEMCACHED] Uncomment to enable Memcached endpoint
            # - name: memcached
            #   port: 11221
It's an empty Infinispan instance, so the client should be able to create the cache automatically. This works when using Infinispan via docker compose with a custom config where security is disabled as per the docs: https://infinispan.org/docs/stable/titles/security/security.html
Also, if I examine the infinispan.xml settings file inside the container, I can see that it still contains the default auth-enabled settings. That means the above config to disable security either had no effect or I applied it incorrectly.
The Helm Chart generates a secret on Helm install containing the content of deploy.security.batch as well as credentials required by the monitoring endpoint. If deploy.security.batch is empty, then a default user "developer" is created. Both the "developer" user and the "monitor" user have a password value that is generated by Helm.
In order to prevent the generated password from being regenerated on calls to Helm Upgrade, the Secret is only generated if {{ .Release.IsInstall}}. Consequently, attempts to update deploy.security.batch on upgrade will be ignored as the old Secret is used.
In order to support the below workflow, we need to refactor how credentials are managed:
1. helm install -n helm datagrid . --set deploy.security.batch="user create djavan -p originalPass"
2. helm upgrade -n helm datagrid . --set deploy.security.batch="user create djavan -p newPass"
Also discussed: https://issues.redhat.com/browse/JDG-6015
I'm running into this error when deploying this helm chart to EKS and I'm not really sure what's going on. This is my first attempt at deploying it.
Our EKS cluster is on 1.26.
ISPN000512 Cannot acquire lock /opt/infinispan/server/data/___global.lck
Install command used:
helm install infinispan-server . -n token-cache
No values overrides are being used.
Full stack trace:
2023-11-13 21:13:23,197 ERROR (main) [org.infinispan.CONFIG] ISPN000660: DefaultCacheManager start failed, stopping any running components org.infinispan.commons.CacheConfigurationException: ISPN000512: Cannot acquire lock '/opt/infinispan/server/data/___global.lck' for persistent global state
at org.infinispan.globalstate.impl.GlobalStateManagerImpl.acquireGlobalLock(GlobalStateManagerImpl.java:88)
at org.infinispan.globalstate.impl.GlobalStateManagerImpl.start(GlobalStateManagerImpl.java:65)
at org.infinispan.globalstate.impl.CorePackageImpl$1.start(CorePackageImpl.java:34)
at org.infinispan.globalstate.impl.CorePackageImpl$1.start(CorePackageImpl.java:27)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:616)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:607)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:576)
at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:807)
at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:379)
at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:252)
at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:779)
at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:747)
at org.infinispan.server.SecurityActions.lambda$startCacheManager$1(SecurityActions.java:68)
at org.infinispan.security.Security.doPrivileged(Security.java:56)
at org.infinispan.server.SecurityActions.doPrivileged(SecurityActions.java:40)
at org.infinispan.server.SecurityActions.startCacheManager(SecurityActions.java:71)
at org.infinispan.server.Server.run(Server.java:417)
at org.infinispan.server.Bootstrap.runInternal(Bootstrap.java:173)
at org.infinispan.server.tool.Main.run(Main.java:98)
at org.infinispan.server.Bootstrap.main(Bootstrap.java:56)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.infinispan.server.loader.Loader.run(Loader.java:106)
at org.infinispan.server.loader.Loader.main(Loader.java:51)
Caused by: java.io.FileNotFoundException: /opt/infinispan/server/data/___global.lck (Permission denied)
at java.base/java.io.FileOutputStream.open0(Native Method)
at java.base/java.io.FileOutputStream.open(FileOutputStream.java:293)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:235)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:184)
at org.infinispan.globalstate.impl.GlobalStateManagerImpl.acquireGlobalLock(GlobalStateManagerImpl.java:82)
... 25 more
2023-11-13 21:13:23,198 WARN (main) [org.infinispan.CONTAINER] ISPN000574: Global state cannot persisted because it is incomplete (usually caused by errors at startup).
2023-11-13 21:13:23,384 FATAL (main) [org.infinispan.SERVER] ISPN080028: Infinispan Server failed to start org.infinispan.manager.EmbeddedCacheManagerStartupException: ISPN000512: Cannot acquire lock '/opt/infinispan/server/data/___global.lck' for persistent global state
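If it helps triage: the root cause in the trace is a java.io.FileNotFoundException (Permission denied) on the data directory. The default values file quoted elsewhere in this tracker contains a deploy.makeDataDirWritable: false flag; a sketch of the override I would try (an assumption on my part, not a confirmed fix):

deploy:
  makeDataDirWritable: true   # chart flag from the default values; assumed to make the data dir writable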
We should follow a similar format to the charts in https://github.com/redhat-developer/redhat-helm-charts.
This requires splitting the values.yaml into two main sections, images and deploy. The former defines all images that should be used by the charts, and the latter defines the runtime behaviour of the Infinispan cluster.
Splitting the values.yaml file in this way makes the Openshift UI much easier to navigate.
Hello!
Why are expose.annotations and podLabels arrays?
Why can't I add something like this:
podLabels:
  microservice: infinispan
I received this error:
values don't meet the specifications of the schema(s) in the following chart(s):
infinispan:
- deploy.expose.annotations: Invalid type. Expected: [array,null], given: object
- deploy.podLabels: Invalid type. Expected: [array,null], given: object
I have always added these without an array, as in the example above.
When using ClusterIP, only one record for the server is provided to the client. When using a LoadBalancer or a headless service in a StatefulSet, the client can receive the records for every replica of the server. With a headed ClusterIP service, the client will only receive one of the multiple replicas on every client request.
Hello!
There is a question: I saw that you are using a volume template, but is it possible to use a volume which already exists?
I mean one not created through the volumeTemplate.
Thank you
Currently a ServiceMonitor is created if the k8s cluster supports the type; however, in some cases this is not desirable, as the resource may not be wanted or the chart deployer may not have the required permissions to create such a resource.
We should allow users to disable ServiceMonitor creation via the values.yaml.
We can add a deploy.monitoring object with a sub-field enabled to determine whether the monitoring resources should be created. Monitoring should be enabled by default to provide backwards-compatibility:
deploy:
  ...
  monitoring:
    enabled: true
Hi,
I tried to create more than one cache at startup, but I received the following error:
Red Hat Data Grid Server failed to start org.infinispan.commons.configuration.io.ConfigurationReaderException: Missing required attribute(s): name[86,1]
My Helm YAML is the following:
deploy:
  infinispan:
    cacheContainer:
      distributedCache:
        - name: "mycache"
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
        - name: "mycache1"
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
Is it possible to create more than one cache?
Thank you for your help.
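In case it helps, other issues in this tracker define multiple caches with a caches: map keyed by cache name, rather than a list under distributedCache; a sketch of that shape (the cache names are mine):

deploy:
  infinispan:
    cacheContainer:
      caches:
        mycache:
          distributedCache:
            mode: "SYNC"
        mycache1:
          distributedCache:
            mode: "SYNC"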
Use a GH action to sync the content into the Helm docs.
Currently the helm-charts consume the 12.1.x images. We should transition to Infinispan 13.x as soon as possible, as this will be the first version that provides official support for the charts. This will require the xml schemas to be updated.
Make it clear in the docs how to add an authn secret or pass the CLI batch via yaml.
We should print the structure of all k8s objects as well as Pod logs in case of a test failure.