Comments (14)
Here is a simplified, public version of what is running here. It has been in production for about a week and has survived several node reboots so far.
Note that this does not include the deployment of the very first FreeIPA instance, the "origin instance" freeipa-origin-server.my.domain, because I have not done that on K8s. A chart would need to do this first before it can define replicas as shown below.
The commentary explains the things I was not able to automate, for example the placement of the ipa-replica-install-options file in each volume and the manual switching of the readinessProbe ... I don't know whether a chart can do (some of) that?
And as mentioned above: the pod replica count of replicas: 1 is mandatory for each RC object. Managing it the way a K8s admin might want to will wreak havoc.
---
#
# FreeIPA server instances to provide identity services, including and most
# importantly Domain Name Resolution for all hosts.
#
# NOTE:
# - each instance is defined as a ReplicationController because
#   IPA_SERVER_HOSTNAME and IPA_SERVER_IP must be specific to each instance,
#   see https://github.com/dharmendrakariya/freeipa-helm-chart/issues/1
#
# - why is dnsConfig used for the pod?
#
#   each FreeIPA instance acts as its own nameserver on 127.0.0.1; as fallback
#   a public server is given (9.9.9.9, dns9.quad9.net). This fallback serves
#   only the CentOS system in the pod container, IPA itself does not use it!
#
#   additional forwarders, that is external DNS servers, should be given via
#   the ipa-replica-install-options file, because dnsConfig supports at most 3
#   nameservers and IPA should be able to resolve public names on its own
#
#
# ROLLING A NEW INSTANCE:
#
# 1) prepare a .yaml based on the definitions below
#
# 2) ensure the new public IP is set up on the target node (see Service below)
#
# 3) ensure the storage volumes exist and each contains a specific
#    `ipa-replica-install-options` file
#
# 4) ensure the readinessProbe is short-circuited to /bin/true and
#    initialDelaySeconds is long enough for the init process to finish without
#    k8s killing the pod, see commentary below
#
# 5) run `kubectl apply -f ...` and watch var/log/*log on the data/ volume
#
# 6) when the instance has initialized itself successfully: revert the
#    readinessProbe shortcut and apply this manifest again
#
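# For example, steps 4) to 6) boil down to commands like the following
# sketch (the instance name `idm4` and its node path are assumptions of
# this example, following the /srv/idmN pattern of the volumes below):
#
#   kubectl apply -f idm4.yaml        # with readinessProbe set to /bin/true
#   tail -f /srv/idm4/var/log/*log    # on the node holding the data volume
#   # once ipa-replica-install has finished: restore the readinessProbe and
#   kubectl apply -f idm4.yaml
#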
---
# --- Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: idm
---
# --- Services
apiVersion: v1
kind: Service
metadata:
  name: idm1
  namespace: idm
  labels:
    app: freeipa
    instance: idm1
spec:
  selector:
    app: freeipa
    instance: idm1
  type: NodePort
  externalIPs:
    - 1.2.3.4
  externalTrafficPolicy: Local
  ports:
    - name: dns
      port: 53
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: http
      port: 80
    - name: krb5
      port: 88
    - name: krb5-udp
      port: 88
      protocol: UDP
    - name: ldap
      port: 389
    - name: ipa-admin
      port: 443
    - name: kpasswd
      port: 464
    - name: kpasswd-udp
      port: 464
      protocol: UDP
    - name: ldaps
      port: 636
apiVersion: v1
kind: Service
metadata:
  name: idm2
  namespace: idm
  labels:
    app: freeipa
    instance: idm2
spec:
  selector:
    app: freeipa
    instance: idm2
  type: NodePort
  externalIPs:
    - 1.2.3.5
  externalTrafficPolicy: Local
  ports:
    - name: dns
      port: 53
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: http
      port: 80
    - name: krb5
      port: 88
    - name: krb5-udp
      port: 88
      protocol: UDP
    - name: ldap
      port: 389
    - name: ipa-admin
      port: 443
    - name: kpasswd
      port: 464
    - name: kpasswd-udp
      port: 464
      protocol: UDP
    - name: ldaps
      port: 636
---
apiVersion: v1
kind: Service
metadata:
  name: idm3
  namespace: idm
  labels:
    app: freeipa
    instance: idm3
spec:
  selector:
    app: freeipa
    instance: idm3
  type: NodePort
  externalIPs:
    - 1.2.3.6
  externalTrafficPolicy: Local
  ports:
    - name: dns
      port: 53
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: http
      port: 80
    - name: krb5
      port: 88
    - name: krb5-udp
      port: 88
      protocol: UDP
    - name: ldap
      port: 389
    - name: ipa-admin
      port: 443
    - name: kpasswd
      port: 464
    - name: kpasswd-udp
      port: 464
      protocol: UDP
    - name: ldaps
      port: 636
---
# --- Controllers
apiVersion: v1
kind: ReplicationController
metadata:
  name: idm1
  namespace: idm
  labels:
    app: freeipa
    instance: idm1
spec:
  # DON'T TOUCH THIS - it cannot scale this way
  replicas: 1
  selector:
    app: freeipa
    instance: idm1
  template:
    metadata:
      name: idm
      namespace: idm
      labels:
        app: freeipa
        instance: idm1
    spec:
      dnsPolicy: ClusterFirst
      dnsConfig:
        nameservers:
          - 127.0.0.1
          - 9.9.9.9
      # ensure we always resolve ourselves: the pod is unaware of its public IP
      hostAliases:
        - ip: 1.2.3.4
          hostnames:
            - idm1.my.domain
      securityContext:
        sysctls:
          - name: net.ipv6.conf.lo.disable_ipv6
            value: "0"
      containers:
        - name: freeipa-server
          image: quay.io/freeipa/freeipa-server:centos-8
          imagePullPolicy: IfNotPresent
          ports:
            - name: dns
              containerPort: 53
            - name: dns-udp
              containerPort: 53
              protocol: UDP
            - name: http
              containerPort: 80
            - name: krb5
              containerPort: 88
            - name: krb5-udp
              containerPort: 88
              protocol: UDP
            - name: ldap
              containerPort: 389
            - name: ipa-admin
              containerPort: 443
            - name: kpasswd
              containerPort: 464
            - name: kpasswd-udp
              containerPort: 464
              protocol: UDP
            - name: ldaps
              containerPort: 636
          volumeMounts:
            - name: data
              mountPath: /data
            - name: cgroups
              mountPath: /sys/fs/cgroup
              readOnly: true
            - name: run
              mountPath: /run
            - name: run-systemd
              mountPath: /run/systemd
            - name: tmp
              mountPath: /tmp
          env:
            - name: IPA_SERVER_HOSTNAME
              value: idm1.my.domain
            - name: IPA_SERVER_IP
              value: 1.2.3.4
            # - name: DEBUG_TRACE
            #   value: "on"
          resources:
            requests:
              memory: 2.5Gi
            limits:
              memory: 3Gi
          readinessProbe:
            exec:
              # FIXME: systemctl status fails during ipa-replica-install because
              # the Service does not route to the pod as long as it is not
              # ready ... but it *must* talk to itself on its public IP
              command: ["/usr/bin/systemctl", "is-active", "--quiet", "ipa"]
              # command: ["/bin/true"]
            # initialDelaySeconds: 550  # for ipa-replica-install
            initialDelaySeconds: 60
            timeoutSeconds: 10
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            # initialDelaySeconds: 550  # for ipa-replica-install
            initialDelaySeconds: 60
            periodSeconds: 30
            httpGet:
              path: /
              port: 80
      volumes:
        - name: cgroups
          hostPath:
            path: /sys/fs/cgroup
        - name: run
          emptyDir:
            medium: Memory
        - name: run-systemd
          emptyDir:
            medium: Memory
        - name: tmp
          emptyDir:
            medium: Memory
        - name: data
          persistentVolumeClaim:
            claimName: idm1
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: idm2
  namespace: idm
  labels:
    app: freeipa
    instance: idm2
spec:
  replicas: 1
  selector:
    app: freeipa
    instance: idm2
  template:
    metadata:
      name: idm
      namespace: idm
      labels:
        app: freeipa
        instance: idm2
    spec:
      dnsPolicy: ClusterFirst
      dnsConfig:
        nameservers:
          - 127.0.0.1
          - 9.9.9.9
      hostAliases:
        - ip: 1.2.3.5
          hostnames:
            - idm2.my.domain
      securityContext:
        sysctls:
          - name: net.ipv6.conf.lo.disable_ipv6
            value: "0"
      containers:
        - name: freeipa-server
          image: quay.io/freeipa/freeipa-server:centos-8
          imagePullPolicy: IfNotPresent
          ports:
            - name: dns
              containerPort: 53
            - name: dns-udp
              containerPort: 53
              protocol: UDP
            - name: http
              containerPort: 80
            - name: krb5
              containerPort: 88
            - name: krb5-udp
              containerPort: 88
              protocol: UDP
            - name: ldap
              containerPort: 389
            - name: ipa-admin
              containerPort: 443
            - name: kpasswd
              containerPort: 464
            - name: kpasswd-udp
              containerPort: 464
              protocol: UDP
            - name: ldaps
              containerPort: 636
          volumeMounts:
            - name: data
              mountPath: /data
            - name: cgroups
              mountPath: /sys/fs/cgroup
              readOnly: true
            - name: run
              mountPath: /run
            - name: run-systemd
              mountPath: /run/systemd
            - name: tmp
              mountPath: /tmp
          env:
            - name: IPA_SERVER_HOSTNAME
              value: idm2.my.domain
            - name: IPA_SERVER_IP
              value: 1.2.3.5
          resources:
            requests:
              memory: 2.5Gi
            limits:
              memory: 3Gi
          readinessProbe:
            exec:
              command: ["/usr/bin/systemctl", "is-active", "--quiet", "ipa"]
              # command: ["/bin/true"]
            # initialDelaySeconds: 550
            initialDelaySeconds: 60
            timeoutSeconds: 10
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            # initialDelaySeconds: 550
            initialDelaySeconds: 60
            periodSeconds: 30
            httpGet:
              path: /
              port: 80
      volumes:
        - name: cgroups
          hostPath:
            path: /sys/fs/cgroup
        - name: run
          emptyDir:
            medium: Memory
        - name: run-systemd
          emptyDir:
            medium: Memory
        - name: tmp
          emptyDir:
            medium: Memory
        - name: data
          persistentVolumeClaim:
            claimName: idm2
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: idm3
  namespace: idm
  labels:
    app: freeipa
    instance: idm3
spec:
  replicas: 1
  selector:
    app: freeipa
    instance: idm3
  template:
    metadata:
      name: idm
      namespace: idm
      labels:
        app: freeipa
        instance: idm3
    spec:
      dnsPolicy: ClusterFirst
      dnsConfig:
        nameservers:
          - 127.0.0.1
          - 9.9.9.9
      hostAliases:
        - ip: 1.2.3.6
          hostnames:
            - idm3.my.domain
      securityContext:
        sysctls:
          - name: net.ipv6.conf.lo.disable_ipv6
            value: "0"
      containers:
        - name: freeipa-server
          image: quay.io/freeipa/freeipa-server:centos-8
          imagePullPolicy: IfNotPresent
          ports:
            - name: dns
              containerPort: 53
            - name: dns-udp
              containerPort: 53
              protocol: UDP
            - name: http
              containerPort: 80
            - name: krb5
              containerPort: 88
            - name: krb5-udp
              containerPort: 88
              protocol: UDP
            - name: ldap
              containerPort: 389
            - name: ipa-admin
              containerPort: 443
            - name: kpasswd
              containerPort: 464
            - name: kpasswd-udp
              containerPort: 464
              protocol: UDP
            - name: ldaps
              containerPort: 636
          volumeMounts:
            - name: data
              mountPath: /data
            - name: cgroups
              mountPath: /sys/fs/cgroup
              readOnly: true
            - name: run
              mountPath: /run
            - name: run-systemd
              mountPath: /run/systemd
            - name: tmp
              mountPath: /tmp
          env:
            - name: IPA_SERVER_HOSTNAME
              value: idm3.my.domain
            - name: IPA_SERVER_IP
              value: 1.2.3.6
          resources:
            requests:
              memory: 2.5Gi
            limits:
              memory: 3Gi
          readinessProbe:
            exec:
              command: ["/usr/bin/systemctl", "is-active", "--quiet", "ipa"]
              # command: ["/bin/true"]
            # initialDelaySeconds: 550
            initialDelaySeconds: 60
            timeoutSeconds: 10
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            # initialDelaySeconds: 550
            initialDelaySeconds: 60
            periodSeconds: 30
            httpGet:
              path: /
              port: 80
      volumes:
        - name: cgroups
          hostPath:
            path: /sys/fs/cgroup
        - name: run
          emptyDir:
            medium: Memory
        - name: run-systemd
          emptyDir:
            medium: Memory
        - name: tmp
          emptyDir:
            medium: Memory
        - name: data
          persistentVolumeClaim:
            claimName: idm3
---
# --- Storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: idm1
  labels:
    app: freeipa
    instance: idm1
spec:
  storageClassName: staticdisk
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/srv/idm1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: my.domain/data_idm
              operator: In
              values:
                - "idm1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: idm2
  labels:
    app: freeipa
    instance: idm2
spec:
  storageClassName: staticdisk
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/srv/idm2"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: my.domain/data_idm
              operator: In
              values:
                - "idm2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: idm3
  labels:
    app: freeipa
    instance: idm3
spec:
  storageClassName: staticdisk
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/zpool02/idm3"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: my.domain/data_idm
              operator: In
              values:
                - "idm3"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: idm1
  namespace: idm
  labels:
    app: freeipa
    instance: idm1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: staticdisk
  volumeName: idm1
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: idm2
  namespace: idm
  labels:
    app: freeipa
    instance: idm2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: staticdisk
  volumeName: idm2
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: idm3
  namespace: idm
  labels:
    app: freeipa
    instance: idm3
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: staticdisk
  volumeName: idm3
  resources:
    requests:
      storage: 1Gi
---
# --- Backup
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: idm-backup
  namespace: idm
spec:
  schedule: "@weekly"
  # schedule: "*/10 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          # NOTE:
          # - prepare this ServiceAccount with a Secret that configures kubectl
          #   to run against this cluster and has RBAC grants for this
          #   namespace
          # - the backups pile up on the storage volume of the idm3 instance -
          #   it is probably necessary to further process them from there and
          #   clean up old ones
          serviceAccountName: kubectl
          containers:
            - name: ipa-backup
              imagePullPolicy: IfNotPresent
              image: bitnami/kubectl
              command: ['kubectl', '-n', 'idm',
                        'exec', 'rc/idm3', '-c', 'freeipa-server',
                        '--',
                        '/usr/sbin/ipa-backup']
---
An ipa-replica-install-options
file looks like this:
--unattended
--ip-address=1.2.3.?
--no-host-dns
--no-ntp
--no-ssh
--no-sshd
--setup-ca
--setup-dns
--forwarder=9.9.9.9
--forwarder=9.9.9.10
--forwarder=...
--force-join
--skip-conncheck
--admin-password='*********'
--server=freeipa-origin-server.my.domain
--domain=MY.DOMAIN
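Since only a few values differ per instance, the files can be generated with a small shell loop and then copied onto each data volume. A sketch, assuming the instance-to-IP mapping of the Services above and a placeholder password (`CHANGE_ME`):

```shell
#!/bin/sh
# Generate one ipa-replica-install-options file per instance.
# The name:IP pairs match the Service externalIPs above; the admin
# password is a placeholder and must be replaced.
set -eu

for entry in idm1:1.2.3.4 idm2:1.2.3.5 idm3:1.2.3.6; do
  name=${entry%%:*}
  ip=${entry##*:}
  cat > "ipa-replica-install-options.$name" <<EOF
--unattended
--ip-address=$ip
--no-host-dns
--no-ntp
--no-ssh
--no-sshd
--setup-ca
--setup-dns
--forwarder=9.9.9.9
--forwarder=9.9.9.10
--force-join
--skip-conncheck
--admin-password='CHANGE_ME'
--server=freeipa-origin-server.my.domain
--domain=MY.DOMAIN
EOF
done
```

Each generated file then goes to the root of the corresponding data volume, e.g. /srv/idm1/ipa-replica-install-options on the node holding that PersistentVolume.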
from freeipa-helm-chart.
I will have to look into that, because I left this behind long ago, to be honest, and I don't know the workaround. Also, just curious: are you running this in prod?
no, but I want to. FreeIPA still runs in a VM here, which blocks many things.
... I did some research on what StatefulSet
can and cannot do: the workarounds EITHER go with init containers that tweak things on the same storage right before the pod starts, OR modify the actual container image to read in values from the storage volume, i.e. sourcing a bash file as the first thing in the entrypoint.
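The init-container variant could be sketched roughly like this - a hypothetical, untested pod-spec fragment; the busybox image, the first-start check and the file content are all assumptions:

```yaml
# Hypothetical init container that prepares instance-specific files on the
# data volume before the FreeIPA container starts.
initContainers:
  - name: prepare-options
    image: busybox
    command:
      - sh
      - -c
      - |
        # write the install-options file only on first start (assumed content)
        if [ ! -f /data/ipa-replica-install-options ]; then
          echo "--ip-address=1.2.3.4" > /data/ipa-replica-install-options
        fi
    volumeMounts:
      - name: data
        mountPath: /data
```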
it is months since I last tried this; looking some more reveals ...
I am using an ipa-replica-install-options
file on each storage volume, which holds instance-specific settings such as --ip-address,
giving the public IP of the Kubernetes Service
the instance will be reachable at from outside (it is not resolvable at this point, because the replica is not installed yet).
That would be a method / the correct place: I could put a file there specifying the IPA_SERVER_HOSTNAME, which effectively is going to be resolved to that given --ip-address
(after the replica has installed itself).
However, something would need to read it in, since ipa-replica-install
does not have such an option afaik, and the install-options file does not allow for anything else.
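The read-it-in step could be done by a few lines at the very top of a modified entrypoint. A sketch under assumptions: the file name (/data/ipa-server-env), its content, and the temp-file fallback (which only exists so the sketch runs outside a pod) are all hypothetical:

```shell
#!/bin/sh
# Sketch: source instance-specific values from a file on the data volume
# as the first thing in the entrypoint.
set -eu

ENV_FILE=${ENV_FILE:-/data/ipa-server-env}

# demo fallback so the sketch is runnable outside a pod: create an example file
if [ ! -f "$ENV_FILE" ]; then
  ENV_FILE=$(mktemp)
  echo 'IPA_SERVER_HOSTNAME=idm1.my.domain' > "$ENV_FILE"
fi

. "$ENV_FILE"
export IPA_SERVER_HOSTNAME
echo "IPA_SERVER_HOSTNAME=$IPA_SERVER_HOSTNAME"

# in the real image, the original entrypoint would be exec'd here, e.g.:
# exec /path/to/original/entrypoint "$@"
```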
First of all, I would not suggest going with a production K8s setup, as the FreeIPA project itself does not recommend this (the Docker image is not so viable afaik; they are treating the container as a VM: so many services, all in one container, which is not a Docker best practice at all since nothing is broken down into microservices - isn't that the first idea of Docker?).
Anyway, the main reason is that no K8s setup is provided by the project. You will have to maintain all of this yourself.
Sure, you can get it running by hook or by crook, but I would suggest refraining. I created this just to play around; later on we moved to a VM implementation.
Anyway, I will look into it and try to understand things.
First of all , I would not suggest to go with production k8s setup. As the freeipa project itself is not recommending this
yeah, my more-or-less naive approach was/is to just run three of them.
So far I have been running the infrastructure for years with only one FreeIPA VM plus a caching DNS in front, which all clients except the K8s cluster ask. That is stable, and I would keep that structure, just with 3 pods instead of the one VM.
( as the docker image is not so viable afaik, they are treating docker container as vm only, as so many services are there and all are in one container which is not at all docker best practices implementation as you are not breaking down the micro services, right? Isn't it the first idea of Docker).
yes, that's the idea.
By now I think it is not the solution to our problems, as this breaking-down-everything introduces more complexity, and I faced tons of problems caused by K8s Service
or Ingress
objects or the iptables rules being messed up while the actual app behind them ran fine. So there was no outage of the app itself, but the service still failed for the users.
Therefore I think: given that this containerizing + orchestrating exists, a good deal of thinking should be put into how to encapsulate an application. And that probably needs to derive from how the thing can be / must be scaled.
Given that FreeIPA has this concept of replicas - which is proven to work, afaik - it seems natural to me to just make the orchestrator run it that way.
(obviously with some advice to the deployer, like: run each pod on a different machine with different storage - perhaps a DaemonSet
would actually fit better than a StatefulSet?)
Anyway the k8s setup is not available by the project is the main reason. You will have to maintain all this.
Sure you can get it run by hook or crook. But I would suggest refrain. This was I created just to play around, later on we moved to VM implementation.
how do you run the VMs? I have only this one cluster, now operated with K8s, and I failed to get kube-virt
going - which in itself does not appear like the correct solution to me. So the current VM runs next to K8s on one of the nodes, without K8s knowing. That is a pain of its own.
I would contribute, that is, maintain a public version of my deployment ... if it works.
Anyway, will look into it and will try to understand the things.
yes, that would be great.
@x3nb63 I have the deployment in this chart, can you have a look? https://github.com/dharmendrakariya/freeipa-helm-chart-2/tree/master/freeipa
@x3nb63 I have deployment in this chart , can u have a look ? https://github.com/dharmendrakariya/freeipa-helm-chart-2/tree/master/freeipa
I see a Deployment
object used, and then a HorizontalPodAutoscaler
object that somehow finds it (I don't understand what {{ include "freeipa.fullname" . }}
results in) and will then scale the number of replicas by either CPU or memory usage.
-> that pretty much fits what I am trying here.
But I don't see how IPA_SERVER_HOSTNAME
can have a different value for each replica?
It becomes freeipa.example.testy
for each and every replica the HPA may trigger. So I assume it works with 1 replica, whereas on scale-up to 2 the second replica will fail to come up - because that hostname already exists in FreeIPA.
What happens if you make it scale to 2?
I got my three replicas running!
But I had to step back to using ReplicationController: one object definition per replica. That way I can have three different IPA_SERVER_HOSTNAME
values and three IPA_SERVER_IP
values too. As it turned out, IPA updates its own IP in the database at each startup. Without IPA_SERVER_IP
set this will be the pod IP, which then ends up as an A record in DNS ... thus making clients stop talking to the public IP held by the Service
object ...
Downside: this cannot scale on its own.
However, since you use Helm: I assume the chart could be made to produce N ReplicationController
objects from the values file? That way it would be possible to say how many "static" ones there should be.
(I don't use Helm, so I can't tell.)
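Such a values-driven set of controllers might be sketched in a Helm template roughly like this (untested; the values layout, the label scheme and the trimmed-down pod spec are assumptions of this sketch):

```yaml
# values.yaml (assumed layout):
#   instances:
#     - { name: idm1, hostname: idm1.my.domain, ip: 1.2.3.4 }
#     - { name: idm2, hostname: idm2.my.domain, ip: 1.2.3.5 }
#
# templates/replicationcontrollers.yaml:
{{- range .Values.instances }}
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: {{ .name }}
  namespace: idm
spec:
  replicas: 1   # fixed by design: one RC per FreeIPA instance
  selector:
    app: freeipa
    instance: {{ .name }}
  template:
    metadata:
      labels:
        app: freeipa
        instance: {{ .name }}
    spec:
      containers:
        - name: freeipa-server
          image: quay.io/freeipa/freeipa-server:centos-8
          env:
            - name: IPA_SERVER_HOSTNAME
              value: {{ .hostname }}
            - name: IPA_SERVER_IP
              value: {{ .ip }}
          # ... ports, probes and volumes as in the manifests above ...
{{- end }}
```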
Sorry, but I am not getting enough time and am not able to get into it!! Can you work with pure manifests and see if there is any workaround?? Later on we can always tweak the chart!! What do you think?
I understand the problem of replicas (the one which is FreeIPA's concept), so we can't really use the K8s replica feature here, right? So we need some tweaks and workarounds, but I am not getting anywhere right now; maybe you could help here for the betterment.
Sorry again :(
I understand the problem of replicas(one which is freeeipa's concept) so we can't really use the k8s feature of replica here right?
yes, K8s' understanding of "replica" is less than what a "FreeIPA replica" requires. In essence: K8s produces identical copies, that is "clones", of the very same config, whereas each FreeIPA replica requires its own unique identity.
so we need some tweaks and workaround [...]
I think that can't be tweaked or worked around. It has to stick to a replica count of 1 for the K8s objects (whichever kind they are) and create one object per FreeIPA instance. That's it. It is about accepting that we can't use K8s' scaling features, which essentially increase/decrease the object replica count. All we can do is prepare N fixed objects.
... unless changes to freeipa-container are made ... maybe ...
I will prepare a plain vanilla version of my manifest once it has been running productively for a while. I still need to solve a few more things, such as running ipa-backup via a CronJob
... stay tuned ...
cool then!! if that is done, then I guess at least people can rely on this. Until now the community has only created pure manifests, and those are all I used for this chart; just the StatefulSet, I would say, was done as an enhancement. Anyway, if you could improve the pure manifests, that would be very helpful for the people struggling with this. Kudos, buddy :) Thanks!!
@x3nb63 hail master!! this is deep; I will try to create a Helm chart based on this once I get some free time (will need to brush up). Anyway, I am leaving this open, this is helpful. Thank you :)