Comments (5)
@tudordumitriu I think this is outside the scope of the Helm chart, though I can see how it's related. I would be extremely wary of using an autoscaler to automate database cluster scaling - the chance of data loss is high and the timescales and nuances involved in moving shards around likely require manual supervision anyway.
In the general case, this is a difficult problem, which is probably why there are no community tools to address it (Kubernetes or not). One tool you could look at is couchdb-admin from Cabify. I haven't used it personally, but it appears to automate at least some of the management tasks.
Unfortunately, the process for moving shards described in the docs is tricky in Kubernetes because not many storage backends support `ReadWriteMany` access modes. You could try exposing port 22 between the CouchDB pods and using SCP. You're likely better off using CouchDB internal replication to create additional shard replicas instead of moving files/indexes directly, but it's a slower process.
Adding a node would require something like:
- Join the new node to the cluster
- Put the new node in maintenance mode.
- For each database in the cluster:
  - Figure out the new shard map to ensure an even spread of shards and replicas across machines/availability zones. For your new node, add the desired shard ranges/replicas to each shard map. This temporarily increases the number of replicas of each shard. For large numbers of databases/shards, you might need to serialize this process.
  - This should result in CouchDB internal replication creating and populating the database shards on the new node. When design documents are replicated, indexing jobs will trigger on the new node to create the required indexes. For large shards, this may take some time (hours/days).
  - Monitor the internal replication backlog and indexing process to observe when replication and indexing are complete. I'm not sure if metrics for these are exposed by `_stats` or whether you'd need to parse the log output (e.g. look for `[warning] <date> <existing node> <pid> -------- mem3_sync shards/<shard range>/<db name>.<epoch> <new node> {pending_changes,<value>}`) to determine the internal replication backlog. Indexing backlog can be queried using the `_active_tasks` endpoint.
- If/when internal replication and indexing are up to date, take the node out of maintenance mode.
- Update the shard map to remove the original replicas that are now redundant.
Removing a node from the cluster would be a similar process, updating the shard map to ensure enough shard replicas exist on the remaining nodes before removing it.
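The per-database loop above boils down to computing a balanced shard map (CouchDB keeps the actual map as `by_node`/`by_range` fields in each database's document in the `_dbs` database). A minimal sketch of the balancing step, assuming the replica count never exceeds the node count:

```python
from itertools import cycle

def balance_shards(nodes, shard_ranges, n_replicas):
    """Sketch: spread each shard range's replicas evenly across cluster nodes."""
    if n_replicas > len(nodes):
        raise ValueError("need at least n_replicas nodes")
    node_cycle = cycle(nodes)
    # Consecutive draws from the cycle are distinct while
    # n_replicas <= len(nodes), so each range lands on distinct nodes.
    return {r: [next(node_cycle) for _ in range(n_replicas)]
            for r in shard_ranges}
```

With 4 nodes, 8 shard ranges, and 3 replicas, this round-robin assignment leaves each node holding exactly 6 shard copies; the real procedure then diffs this target map against the current one and only adds/removes the changed replicas.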
from couchdb-helm.
Thank you @willholley! Truly appreciate it
Sorry for not being 100% within scope, but since the final goal is to deploy it within a cluster it made some sense to address it here (and honestly I didn't know where else to go).
So bottom line, because of the complexity of the process this job cannot be automated, and we should try to estimate the loads and anticipate the timings as best we can.
When time comes (loose terms warning):
- We add a new node in our k8s cluster
- Update the StatefulSet replica count (the new node WON'T be added to the CouchDB cluster yet)
- We switch the new couchdb node to maintenance mode (with appropriate settings - not 100% sure how the process can be serialized, would appreciate a hint)
- Wait for the sync jobs to finish (which might take a while), because, as you said, copying data directly doesn't make sense and might be error-prone
- Take the node out of maintenance mode
- Add the node to the cluster
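For reference, the maintenance-mode toggle and the backlog monitoring in these steps map onto CouchDB's HTTP admin API. A minimal sketch that only builds the requests (the base URL, node name, and credential handling are assumptions here):

```python
import json
from urllib.request import Request

COUCH = "http://127.0.0.1:5984"  # assumption: where the admin port is reachable

def set_maintenance_mode(node_name, enabled):
    """Build a PUT to /_node/{name}/_config/couchdb/maintenance_mode."""
    return Request(
        f"{COUCH}/_node/{node_name}/_config/couchdb/maintenance_mode",
        data=json.dumps("true" if enabled else "false").encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

def active_tasks():
    """Build a GET to /_active_tasks, which lists indexing/replication jobs."""
    return Request(f"{COUCH}/_active_tasks", method="GET")
```

Send these with `urllib.request.urlopen` (plus admin auth) or the equivalent `curl`; while `maintenance_mode` is `"true"` the node serves internal replication but is excluded from interactive quorum reads/writes.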
I still have some questions (some maybe out of scope as well):
- Should we have an odd number of nodes in the cluster (I've noticed occasional strange write errors due to quorum not being met)? That would mean we have to do the above for 2 extra nodes.
- Is the k8s service type LoadBalancer enough to handle load management for the CouchDB cluster deployed as a StatefulSet?
- I've noticed serious performance differences between running a single-node cluster and a 3-node cluster (on the same k8s node, though): the single node performs much better, I guess because the nodes need to keep in sync and lots of replication calls are made. So basically there should be one k8s node for each CouchDB node (considering that the CouchDB containers are quite heavy CPU consumers in load tests).
Thanks again!
You need to add the node to the cluster (your step 6) before putting it in maintenance mode and moving shards around.
Regarding the questions:
- Should we have an odd number of nodes in the cluster (I've noticed occasional strange write errors due to quorum not being met)? That would mean we have to do the above for 2 extra nodes.
Multiples of 3 is usually simplest because shard distribution is then equal, but it's not required.
- Is the k8s service type LoadBalancer enough to handle load management for the CouchDB cluster deployed as a StatefulSet?
Yes - no problems with the k8s service load balancer (it's just iptables/IPVS). If you expose CouchDB externally via an Ingress, performance will depend on which Ingress implementation you use, etc.
- I've noticed serious performance differences between running a single-node cluster and a 3-node cluster (on the same k8s node, though): the single node performs much better, I guess because the nodes need to keep in sync and lots of replication calls are made. So basically there should be one k8s node for each CouchDB node (considering that the CouchDB containers are quite heavy CPU consumers in load tests).
Yes - there's not really any benefit in having more than one CouchDB node per worker. The only exception I can think of is that you could "oversize" the cluster initially and then spread out CouchDB nodes amongst machines as you grow without needing to go through the cluster expansion steps described above, assuming you use remote storage.
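If you want the scheduler to enforce one CouchDB pod per worker, a required pod anti-affinity rule is the usual mechanism. A sketch of what you might pass through the chart's affinity value (the `affinity` value name and the `app: couchdb` label are assumptions; check the chart's values.yaml for the actual label keys):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: couchdb
        topologyKey: kubernetes.io/hostname  # at most one CouchDB pod per node
```

With `required...` scheduling, a pod that cannot find a free worker stays Pending, which also surfaces undersized clusters early.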
Thanks again!
Hi, mostly thinking out loud here, but would the following be a valid scaling strategy?
- Initially deploy the CouchDB cluster with a large number of CouchDB nodes (i.e. k8s pods), configuring their resource allocations quite low so that many pods can run on a single k8s node.
- When usage increases (as measured by global CPU utilization), increase the resource allocations of the pods so that they get spread out to more (and possibly more powerful) k8s nodes.
- Conversely, when usage decreases, reduce the resource allocations so that pods can be packed back onto fewer k8s nodes.
This way, there is no need for resharding. However (and please note I am a k8s beginner), I don't think this "migration" of pods to other nodes when their resource allocations change would be automatic, so it would probably require killing the pods to force them to be recreated elsewhere.
EDIT: just realized that changing the resource requests of pods according to actual usage and migrating them to other k8s nodes is the Vertical Pod Autoscaler's job, so it seems scaling could be achieved by implementing point 1 above and properly configuring a Vertical Pod Autoscaler (and a Cluster Autoscaler).
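For reference, a minimal VerticalPodAutoscaler sketch targeting the chart's StatefulSet (the resource names here are assumptions, and VPA must be installed separately since it ships its own CRDs):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: couchdb-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: couchdb        # assumption: release's StatefulSet name
  updatePolicy:
    updateMode: "Auto"   # evicts pods and recreates them with updated requests
```

Note that `"Auto"` mode applies new requests by evicting pods, so a PodDisruptionBudget is advisable to keep CouchDB quorum intact during resizes.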