
Comments (5)

willholley commented on June 14, 2024

@tudordumitriu I think this is outside the scope of the Helm chart, though I can see how it's related. I would be extremely wary of using an autoscaler to automate database cluster scaling - the chance of data loss is high and the timescales and nuances involved in moving shards around likely require manual supervision anyway.

In the general case, this is a difficult problem, which is probably why there are no community tools to address it (regardless of Kubernetes). One tool you could look at is couchdb-admin from Cabify. I haven't used it personally but it looks to automate at least some of the management tasks.

Unfortunately, the process for moving shards described in the docs is tricky in Kubernetes because not many storage backends support ReadWriteMany AccessModes. You could try exposing port 22 between the CouchDB pods and using SCP. You're likely better off using CouchDB internal replication to create additional shard replicas instead of moving files/indexes directly, but it's a slower process.

Adding a node would require something like the following (a rough sketch of the corresponding API calls follows below):

  • Join the new node to the cluster
  • Put the new node in maintenance mode.
  • For each database in the cluster:
    • figure out the new shard map to ensure even spread of shards and replicas across machines/availability zones. For your new node, add the desired shard ranges/replicas to each shard map. This temporarily increases the number of replicas of each shard. For large numbers of databases/shards, you might need to serialize this process.
      • This should result in CouchDB internal replication creating and populating the database shards on the new node. When design documents are replicated, indexing jobs will trigger on the new node to create the required indexes. For large shards, this may take some time (hours/days).
    • Monitor the internal replication backlog and indexing process to observe when the replication and indexing process is complete. I'm not sure if metrics for these are exposed by _stats or whether you'd need to parse the log output (e.g. look for [warning] <date> <existing node> <pid> -------- mem3_sync shards/<shard range>/<db name>.<epoch> <new node> {pending_changes,<value>}) to determine the internal replication backlog. Indexing backlog can be queried using the _active_tasks endpoint.
    • If/when internal replication and indexing are up to date, take the node out of maintenance mode.
    • Update the shard map to remove the original replicas that are now redundant.

Removing a node from the cluster would be a similar process, updating the shard map to ensure enough shard replicas exist on the remaining nodes before removing it.
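For concreteness, here is a minimal, untested sketch of the CouchDB HTTP calls behind those steps, assuming CouchDB 3.x, an admin credential, and placeholder host/node names that you would replace with your own; treat it as an outline rather than a turnkey script:

```python
import requests

# Placeholder values - adjust to your release, namespace and credentials.
COUCH = "http://admin:password@couchdb-couchdb-0.couchdb-couchdb:5984"
NEW_NODE = "couchdb@couchdb-couchdb-3.couchdb-couchdb.default.svc.cluster.local"

# 1. Join the new node to the cluster.
requests.put(f"{COUCH}/_node/_local/_nodes/{NEW_NODE}", json={}).raise_for_status()

# 2. Put the new node in maintenance mode so it doesn't serve reads
#    before its shards are replicated and indexed.
requests.put(f"{COUCH}/_node/{NEW_NODE}/_config/couchdb/maintenance_mode",
             data='"true"').raise_for_status()

# 3. For each database, add the new node to the shard map. The map lives in
#    the node-local _dbs database; adding entries temporarily increases the
#    replica count and triggers internal replication to the new node.
db = "mydb"
shard_map = requests.get(f"{COUCH}/_node/_local/_dbs/{db}").json()
# ... edit shard_map["by_node"] / shard_map["by_range"] and append matching
#     ["add", "<range>", NEW_NODE] entries to shard_map["changelog"] here ...
requests.put(f"{COUCH}/_node/_local/_dbs/{db}", json=shard_map).raise_for_status()

# 4. Watch the indexing backlog on the new node via _active_tasks
#    (the internal replication backlog may still need log parsing, as above).
tasks = requests.get(f"{COUCH}/_active_tasks").json()
print([t for t in tasks if t.get("node") == NEW_NODE])

# 5. Once everything has caught up, leave maintenance mode, then remove the
#    now-redundant replicas from each shard map.
requests.put(f"{COUCH}/_node/{NEW_NODE}/_config/couchdb/maintenance_mode",
             data='"false"').raise_for_status()
```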


tudordumitriu commented on June 14, 2024

Thank you @willholley! Truly appreciate it

Sorry for not being 100% within scope, but since the final goal is to deploy it within a cluster it made some sense to address it here (and honestly I didn't know where else to go).
So, bottom line: because of the complexity of the process this job cannot be automated, and we should try to estimate the loads and anticipate the timings as best we can.

When the time comes (loose terms warning):

  1. We add a new node to our k8s cluster
  2. Update the StatefulSet replica count (the new pod WON'T be joined to the CouchDB cluster) - see the sketch after this list
  3. We switch the new CouchDB node to maintenance mode (with appropriate settings - not 100% sure how the process can be serialized, would appreciate a hint)
  4. Wait for the sync jobs to finish (which might take a while), because, as you said, copying data files directly doesn't make sense and might be error-prone
  5. Take the node out of maintenance mode
  6. Add the node to the cluster
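As a minimal sketch of step 2 only, assuming the official kubernetes Python client and the chart's default StatefulSet name (adjust the name, namespace and replica count to your release):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Grow the CouchDB StatefulSet by one pod; the new pod runs CouchDB but is
# not a member of the CouchDB cluster until it is explicitly joined.
apps.patch_namespaced_stateful_set_scale(
    name="couchdb-couchdb",      # assumed release name
    namespace="default",
    body={"spec": {"replicas": 4}},
)
```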

I still have some questions (some maybe out of scope as well):

  1. Should we have an odd number of nodes within a cluster (I've noticed occasional strange write errors due to quorum not being met)? That would mean we have to do the above for 2 extra nodes.
  2. Is the k8s service type LoadBalancer enough to handle the load management for the CouchDB cluster deployed as a StatefulSet?
  3. I've noticed serious performance differences between running a single-node cluster and a 3-node cluster (on the same k8s node, though): the single node performs way better (I guess that's because the nodes need to keep in sync and lots of replication calls are being made), so basically there should be one k8s node for each CouchDB node (considering that the CouchDB containers are quite heavy CPU consumers in load tests)

Thanks again!


willholley commented on June 14, 2024

You need to add the node to the cluster (step 6) before putting it in maintenance mode and moving shards around.

Regarding the questions:

  1. Should we have an odd number of nodes within a cluster (I've noticed occasional strange write errors due to quorum not being met)? That would mean we have to do the above for 2 extra nodes.

Multiples of 3 are usually simplest because shard distribution is then even, but it's not required.

  2. Is the k8s service type LoadBalancer enough to handle the load management for the CouchDB cluster deployed as a StatefulSet?

Yes - no problems with the k8s Service LoadBalancer (it's just iptables/IPVS). If you expose CouchDB to the outside using an Ingress, performance will depend on which Ingress implementation you use, etc.

  3. I've noticed serious performance differences between running a single-node cluster and a 3-node cluster (on the same k8s node, though): the single node performs way better (I guess that's because the nodes need to keep in sync and lots of replication calls are being made), so basically there should be one k8s node for each CouchDB node (considering that the CouchDB containers are quite heavy CPU consumers in load tests)

Yes - there's not really any benefit in having more than one CouchDB node per worker. The only exception I can think of is that you could "oversize" the cluster initially and then spread out CouchDB nodes amongst machines as you grow without needing to go through the cluster expansion steps described above, assuming you use remote storage.


tudordumitriu commented on June 14, 2024

Thanks again!


gpothier commented on June 14, 2024

Hi, mostly thinking out loud here, but would the following be a valid scaling strategy?

  1. Initially deploy the CouchDB cluster with a large number of CouchDB nodes (i.e. k8s pods), configuring their resource allocations quite low so that many pods can run on a single k8s node.
  2. When usage increases (as measured by global CPU utilization), increase the resource allocations of the pods so that they get spread out to more (and possibly more powerful) k8s nodes.
  3. Conversely, when usage decreases, reduce the resource allocations so that pods can regroup onto fewer k8s nodes.

This way, there is no need for resharding. However (and please note I am a k8s beginner), I don't think this "migration" of pods to other nodes when their resource allocations change would be automatic, so it would probably require killing the pods to force them to be recreated elsewhere.

EDIT: just realized that changing the resource requests of pods according to actual usage and migrating them to other k8s nodes is the Vertical Pod Autoscaler's job, so it seems scaling could be achieved by implementing point 1 above and properly configuring a Vertical Pod Autoscaler (and a Cluster Autoscaler).
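For what it's worth, a minimal sketch of such a VPA created through the kubernetes Python client, assuming the VPA CRDs are installed and the chart's default StatefulSet name (both assumptions; adjust to your cluster):

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical VPA manifest targeting the CouchDB StatefulSet; "Auto" mode
# evicts pods and recreates them with updated resource requests, which is
# what lets the scheduler spread them onto other k8s nodes.
vpa = {
    "apiVersion": "autoscaling.k8s.io/v1",
    "kind": "VerticalPodAutoscaler",
    "metadata": {"name": "couchdb-vpa"},
    "spec": {
        "targetRef": {
            "apiVersion": "apps/v1",
            "kind": "StatefulSet",
            "name": "couchdb-couchdb",  # assumed release name
        },
        "updatePolicy": {"updateMode": "Auto"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="default",
    plural="verticalpodautoscalers",
    body=vpa,
)
```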

