Comments (3)
To quote the code comment:

# we tried all osds to place this pg,
# so the shardsize is just too big
# if pg_size_choice is auto, we try to avoid this PG anyway,
# but if we still end up here, it means the choices for moves are really
# becoming tight.
unsuccessful_pools.add(pg_pool)

So we assume the shard size is the problem, and that the PG couldn't be placed because of it. We only reach that statement when not a single target OSD was able to accept the PG, so other PGs of the same pool should likewise have no chance of finding a valid destination. Given that, I wonder why disabling the check allows finding a destination for another PG of the same pool :)
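The logic discussed above can be sketched as follows. This is a minimal illustration, not the balancer's actual implementation; the names `balance_step`, `pool_of`, and `fits` are hypothetical stand-ins for the real code paths.

```python
def balance_step(pgs, osds, pool_of, fits):
    """Try to find a move target for each PG.

    Once one PG of a pool cannot be placed on any OSD, skip the pool's
    remaining PGs: by the reasoning above, they should fail the same way.
    """
    unsuccessful_pools = set()
    moves = []
    for pg in pgs:
        pool = pool_of[pg]
        if pool in unsuccessful_pools:
            # the skip under discussion: don't retry a pool that already failed
            continue
        # pick the first OSD that can accept this PG's shard, if any
        target = next((osd for osd in osds if fits(pg, osd)), None)
        if target is None:
            # we tried all OSDs: assume the shard size is just too big,
            # so give up on the whole pool
            unsuccessful_pools.add(pool)
        else:
            moves.append((pg, target))
    return moves, unsuccessful_pools
```

The question in the comment is precisely whether this skip is too aggressive, i.e. whether a later PG of a "failed" pool could still have found a target.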
Please update to the latest git version and try again.
If it still fails, can you send me another state file so I can try to reproduce locally?
from ceph-balancer.
Still interested in an analysis? If so, please send a state dump :)
from ceph-balancer.
I'll close this for now - please notify me if you get another chance to reproduce this with a state file.
from ceph-balancer.