gluster / glusterfs
Gluster Filesystem : Build your distributed storage in minutes
Home Page: https://www.gluster.org
License: GNU General Public License v2.0
Rsync/Tar+ssh retries are done for all the GFIDs that were present in the queue. This is very inefficient; enhance Geo-replication to parse the error output of Rsync/Tar, capture the failed GFIDs, and retry only those GFIDs.
Existing:
- Try sync using rsync or Tar+ssh
- If the sync fails, stat all items on the Master to find already-unlinked GFIDs; if unlinked, remove them from the queue
- Retry the entire queue
Proposed:
- Try sync using rsync or Tar+ssh
- If Rsync returns error code 23 or tar+ssh returns failure, parse the error output and identify already-unlinked GFIDs and error GFIDs
- Retry only Error GFIDs
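
A minimal sketch of the proposed parsing, assuming rsync's usual "link_stat ... failed: No such file or directory" stderr lines and the .gfid/<gfid> aux-mount paths geo-replication syncs from; the real gsyncd logic may differ:

# Hedged sketch of the proposed retry logic, not the actual gsyncd code.
import re
import subprocess

GFID_RE = re.compile(r'\.gfid/([0-9a-f\-]{36})')
UNLINKED_RE = re.compile(r'link_stat .* No such file or directory')

def sync_with_retry(gfids, build_rsync_cmd, attempts=3):
    proc = subprocess.Popen(build_rsync_cmd(gfids),
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    _, err = proc.communicate()
    if proc.returncode == 0:
        return
    unlinked, errored = set(), set()
    for line in err.splitlines():
        m = GFID_RE.search(line)
        if not m:
            continue
        # Already unlinked on Master: drop from the queue, do not retry
        (unlinked if UNLINKED_RE.search(line) else errored).add(m.group(1))
    if proc.returncode == 23 and errored and attempts > 1:
        sync_with_retry(sorted(errored), build_rsync_cmd, attempts - 1)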
The maintainer or person assigned to the issue is requested to test the component and provide feedback on its health before closing the issue.
NOTE: As a maintainer, please change the assignment as needed, in case someone else is working on testing the component.
Other folks who are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.
Check the release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md
This issue is tracked against release-3.10 but will not appear in the project dashboard for it, as this is a release activity and does not define scope for the release.
Happy testing!
The plan for tiering in 3.10 is to support add-brick and remove-brick on tiered volumes. Along with add/remove brick, the rebalance processes that follow add/remove brick will also be included. All of this will be in an experimental state. Before the code for volume expansion goes in, we will push the code to run tier under the service framework, separating attach-tier from add-brick, detach-tier from remove-brick, and the tier daemon from rebalance.
https://docs.google.com/document/d/1lj7f0aF5TS3N2I9inUBn-5p2ZpJSV4uptPnIN3KYKEo
https://docs.google.com/document/d/18jDyOIkJuifufqR5afIkZGXiTD77Gs22alK7J94SN1g
Discussion on http://review.gluster.org/#/c/16134/3
As mentioned there, we need to move virtual xattrs into their own namespace so that they can be recognized as such easily (no long lists please). This is more work than should be done in that patch, but it does need to be done some day.
Hi,
I am trying to set up a simple Gluster volume but cannot get read or write access to the FUSE mount point without root privileges.
I am using Gluster 3.8.5 from the SIG repo on CentOS 7.2 on the Cloudwatt provider.
uname -r
3.10.0-327.28.3.el7.x86_64
Here are the steps I follow to instantiate my volume (on 2 nodes):
sudo mkfs.xfs -i size=512 /dev/vdc1 -f
sudo mkdir -p /srv/gluster/data/small
sudo mount -t xfs /dev/vdc1 /srv/gluster/data
sudo yum install glusterfs-server glusterfs-fuse
sudo service glusterd start
sudo gluster peer probe 172.52.0.3
sudo gluster volume create small replica 2 transport tcp 172.52.0.4:/srv/gluster/data/small 172.52.0.3:/srv/gluster/data/small
sudo gluster volume set small nfs.disable off
sudo gluster volume set small nfs.acl off
sudo gluster volume start small
sudo mkdir -p /srv/data/small
sudo mount -t glusterfs -o dev,acl 172.52.0.4:/small /srv/data/small
sudo chmod -R 777 /srv/data/small
ll /srv/data/small
**ls: cannot access /srv/data/small: Permission denied**
GlusterFS volumes do not support user/group quotas. Supporting user/group quotas can help in multi-tenant or workgroup use cases, where the user may want to control quota based on identity rather than hierarchy.
Tiering currently promotes or demotes a single file at a time; the multithreaded code used in DHT rebalance is not used. This makes tier migration of files very slow.
This project will change the tiering migration code to use a thread pool, so that multiple files are moved at a time. The DHT multithreaded code shall be leveraged.
Draft description of implementation:
https://docs.google.com/document/d/10pvgU9uYINv0pXjGgIUNGATpoh93b4djExmyWG5Atn8/edit
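
For illustration only (the real change is in the tier/DHT C code), the thread-pool pattern being proposed looks roughly like this, with migrate_file standing in for the per-file promote/demote routine:

from concurrent.futures import ThreadPoolExecutor

def migrate_file(path, target_tier):
    # Stand-in for the real per-file promote/demote work
    return (path, target_tier)

def migrate_batch(paths, target_tier, workers=8):
    # Move several files concurrently instead of one at a time
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(migrate_file, p, target_tier) for p in paths]
        return [f.result() for f in futures]  # surfaces per-file errors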
caching-xlators: io-cache, md-cache, open-behind, quick-read, read-ahead, write-behind, readdir-ahead
Since this is generated code, it uses plain malloc/free and is thus not subject to our memory tracking. This has made leaks more difficult to diagnose here than elsewhere in the code. We should probably post-process the code that comes out of rpcgen to use GF_MALLOC/GF_FREE instead.
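
A rough sketch of what such a post-processing step could look like; the substitution rules and the gf_common_mt_char memory type below are placeholder assumptions, and a real script would also handle calloc/realloc, multi-line calls, and pick proper gf_*_mt_* types per allocation:

import re
import sys

def rewrite(src):
    # free(p)  ->  GF_FREE(p)
    src = re.sub(r'\bfree\s*\(', 'GF_FREE(', src)
    # malloc(n) -> GF_MALLOC(n, <type>): GF_MALLOC takes an extra type argument
    src = re.sub(r'\bmalloc\s*\(([^()]*)\)',
                 r'GF_MALLOC(\1, gf_common_mt_char)', src)
    return src

if __name__ == '__main__':
    sys.stdout.write(rewrite(sys.stdin.read()))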
When multiple gsyncd workers connect to the same slave node, it is not possible to tell which gsyncd process was spawned by which master node's worker. Add the master hostname and brick_path in the log file names as well as in the slave gsyncd arguments.
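
A minimal sketch of the idea; the directory and naming scheme below are illustrative, not the final layout:

import os

def slave_log_name(master_host, brick_path,
                   log_dir='/var/log/glusterfs/geo-replication-slaves'):
    # Flatten '/' so the brick path is safe inside a file name
    brick_id = brick_path.strip('/').replace('/', '-')
    return os.path.join(log_dir, '%s.%s.gluster.log' % (master_host, brick_id))

print(slave_log_name('master1.example.com', '/bricks/brick1/b1'))
# .../master1.example.com.bricks-brick1-b1.gluster.log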
From kernel version 3.x onward, creating a file results in a removexattr call on the security.ima xattr. But this xattr is not set on the file unless the IMA feature is active. Enable the removexattr call to return ENODATA if the xattr is not found in the cache.
Patch link https://review.gluster.org/#/c/16460/
Prepare Gluster for supporting SELinux contexts. Currently it is not possible to set SELinux labels on files on a Gluster volume.
Etherpad with notes: https://public.pad.fsfe.org/p/selinux
Related bug in bugzilla: #1318100
No component should have more than 5 bad tests/known issues against it.
The Tier component needs to fix at least 2 tests to get under the threshold.
libvirtd leaks a lot of memory when using the glusterfs driver.
Here is some output from valgrind:
==27470== 2,704,894 (272 direct, 2,704,622 indirect) bytes in 1 blocks are definitely lost in loss record 1,926 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB25508: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B5FA904: ???
==27470== by 0x2B60B639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)
==27470== by 0x1CDB7D3E: rpc_clnt_notify (in /usr/lib64/libgfrpc.so.0.0.1)
==27470== 639,120 bytes in 2 blocks are possibly lost in loss record 1,917 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB364E6: mem_pool_new_fn (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB2555A: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B3B3904: ???
==27470== by 0x2B3C4639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)
Version information:
CentOS Linux release 7.2.1511
glusterfs-client-xlators-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-fuse-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-api-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-libs-3.7.1-16.0.1.el7.centos.x86_64
libvirt-1.2.17-13.el7_2.5.x86_64
qemu-kvm-1.5.3-105.el7_2.7.x86_64
This feature improves create performance.
Feature page: https://review.gluster.org/#/c/16436/
Hi
running on Ubuntu 12.04.1 x64. Scripts are in /etc/init as expected, but mounting fails. Here are the things I found in the logs:
[697470.126842] init: wait-for-state (mounting-glusterfsglusterfs-server) main process (295) terminated with status 100
[697470.129226] init: mounting-glusterfs main process (290) terminated with status 1
Share is not mounted at boot. Mounting manually is ok.
Ideas?
The original intention with xdata was that it could be used to add enhancements without forcing a new protocol version, but that any such enhancements would be accommodated directly in the next protocol version. Besides being cleaner and easier to document, using RPC fields would require less code and be more efficient. For 4.0, which will require a new protocol version anyway, we should clean up all of the xdata we've accumulated and turn it into RPC fields.
If the primary slave node goes down, Geo-rep workers will fail to connect to any other slave node on restart, since slave node info is fetched from the Slave Volume info (using the primary slave node IP).
Cache the list of slave nodes and use it when the primary node is down.
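
A minimal sketch of the fallback, assuming a hypothetical cache file path and fetch helper; the point is only the ordering (refresh the cache while the primary is reachable, read it back when it is not):

import json
import os

CACHE = '/var/lib/glusterd/geo-replication/slave-nodes.json'  # hypothetical path

def get_slave_nodes(fetch_from_primary):
    try:
        nodes = fetch_from_primary()   # e.g. parse slave volume info output
        with open(CACHE, 'w') as f:
            json.dump(nodes, f)        # refresh the cache on every success
        return nodes
    except Exception:                  # primary unreachable
        if os.path.exists(CACHE):      # fall back to the cached list
            with open(CACHE) as f:
                return json.load(f)
        raise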
As per the current design, the trash directory, namely .trashcan, is created at the root when the bricks associated with a volume come online, and there is a restriction against deleting this directory from the volume even when the trash feature is disabled.
This proposal targets having the creation of, and subsequent enforcement on, the trash directory happen only when the feature is enabled for that volume.
Improve directory enumeration performance by implementing parallel readdirp at the dht layer.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1401812
Design doc: http://review.gluster.org/#/c/16090/
When I issue the command
gluster espresso mocha size grande milk breve --nofoam
it serves it with a chocolate covered espresso bean on the lid. This melts and makes a mess. Please add the option to remove the bean.
As we know, Gluster block storage creation and maintenance are not simple today, as they involve many manual steps.
To make these basic operations simple, we should integrate the block story with the Gluster CLI.
As part of it, we need the following basic commands:
$ gluster block create
$ gluster block modify
$ gluster block list
$ gluster block delete
Gluster should have some provision to take a statedump of gfapi applications.
Essentially, a snapshot of a volume that was taken when the volume had the nfs.ganesha option enabled has an export.conf file for it. This file will be restored to the Ganesha export directory, which has been moved to the shared storage. Currently we fail the snapshot restore in a scenario where the snapshot has the said conf file but the shared storage is not available. This behaviour is expected. However, there is no option for the user to proceed with the snapshot restore at this point in time.
We will introduce a force option for snapshot restore, which will enable the user to restore this particular snapshot in the above explained scenario, thereby abandoning the export.conf. The reason for introducing the force option is to make the user explicitly ask for the saved export.conf to be abandoned in such a scenario.
Example of the proposed option in the restore CLI:
gluster snapshot restore <snapname> [force]
Standard boolean types have been part of C since C99. Having our own custom type is awkward for people new to our code, and sometimes has worse effects (e.g. because it's an enum, each boolean takes up four bytes for one bit of information).
Most code uses the fop type in the call_stack. The fop-latency code uses a type in the call_frame instead. We should eliminate this duplication by making the fop-latency code use the call_stack value instead (and fix its type, which is currently incorrect).
Currently the gluster volume status <VOLNAME|all> clients command gives us the following information on clients:
Information regarding the maximum op-version that each client supports should be added to the volume status output, so that users can get the op-versions supported by each client in one command.
Corresponding bug: https://bugzilla.redhat.com/show_bug.cgi?id=1409078
There is no way at present to determine when a rebalance operation will complete. This requires admins to keep monitoring the rebalance operation.
The proposed approach will calculate the estimated time every time the rebalance status command is issued. The value will be displayed along with the rebalance status output.
This approach will provide a rough estimate of the time required, updated every time the status command is issued.
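
As a rough illustration (the counter names are hypothetical, not rebalance's actual fields), the estimate can be derived by extrapolating the processing rate observed so far:

import time

def estimate_seconds_left(total_files, processed_files, start_time):
    elapsed = time.time() - start_time
    if processed_files <= 0 or elapsed <= 0:
        return None                          # too early to estimate
    rate = processed_files / float(elapsed)  # files per second so far
    return (total_files - processed_files) / rate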
Hi,
I have set up georeplication on Ubuntu 16.04, GlusterFS 3.9.1.
Here's the log...
[2017-01-26 09:47:05.985043] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/ad1d09bc-3cb9-4c54-9c2b-033e8a98f7d3
[2017-01-26 09:47:05.985896] E [syncdutils(/data/glusterfs/backup1):296:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 326, in twrap
tf(*aa)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1649, in syncjob
po = self.sync_engine(pb, self.log_err)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1730, in rsync
log_err=log_err)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 56, in sup
sys._getframe(1).f_code.co_name)(*a, **kw)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1041, in rsync
"log_rsync_performance")):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 252, in get_realtime
return self.get(opt, printValue=False)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 357, in get
self.update_to(d, allow_unresolved=True)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 347, in update_to
update_from_sect(sect, MultiDict(dct, mad, *self.auxdicts))
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 327, in update_from_sect
for k, v in self.config._sections[sect].items():
File "/usr/lib/python2.7/collections.py", line 127, in items
return [(key, self[key]) for key in self]
KeyError: 'state_socket_unencoded'
[2017-01-26 09:47:05.986107] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/0817467a-d011-44e7-9019-fa8d3c865bf9
[2017-01-26 09:47:05.987433] E [syncdutils(/data/glusterfs/backup1):296:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 326, in twrap
tf(*aa)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1649, in syncjob
po = self.sync_engine(pb, self.log_err)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1730, in rsync
log_err=log_err)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 56, in sup
sys._getframe(1).f_code.co_name)(*a, **kw)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1041, in rsync
"log_rsync_performance")):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 368, in boolify
lstr = s.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
[2017-01-26 09:47:06.5925] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/af424cc5-e287-4983-a86b-e376229be559
[2017-01-26 09:47:06.6483] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/3a161390-4c5a-4aec-9ca0-83489392ba4c
[2017-01-26 09:47:06.6648] I [syncdutils(/data/glusterfs/backup1):237:finalize] <top>: exiting.
[2017-01-26 09:47:06.15986] I [repce(/data/glusterfs/backup1):92:service_loop] RepceServer: terminating on reaching EOF.
[2017-01-26 09:47:06.16272] I [syncdutils(/data/glusterfs/backup1):237:finalize] <top>: exiting.
[2017-01-26 09:47:06.483865] I [monitor(monitor):349:monitor] Monitor: worker(/data/glusterfs/backup1) died in startup phase
[2017-01-26 09:47:06.486694] I [gsyncdstatus(monitor):233:set_worker_status] GeorepStatus: Worker Status: Faulty
Could you please help me?
Thanks!
HTIME is the index file generated by the Changelog translator to store the list of changelogs. Currently the complete changelog path is saved, including the brick path.
Do not save the full path of the changelog file in the HTIME file; save only the timestamp suffix and an extra byte to say whether or not it is an empty changelog (this can be saved in binary format).
With this change, an upgrade script is necessary to migrate the HTIME files of existing volumes if changelog is enabled.
Existing:
<CHANGELOG_PATH>\x00<CHANGELOG_PATH>..
The changelog file name is saved in lower case to identify an empty changelog, else in upper case.
Proposed:
<STATE><TS1>\x00<STATE><TS2>..
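
One possible fixed-width binary encoding of the proposed <STATE><TS> records, purely as a sketch (the actual on-disk layout is not finalized here; 'E' = empty changelog, 'C' = changelog with data):

import struct

RECORD = struct.Struct('<cI')   # 1 state byte + uint32 timestamp suffix

def pack_entry(empty, ts):
    return RECORD.pack(b'E' if empty else b'C', ts)

def unpack_entries(blob):
    for off in range(0, len(blob), RECORD.size):
        state, ts = RECORD.unpack_from(blob, off)
        yield state == b'E', ts

blob = pack_entry(False, 1485424025) + pack_entry(True, 1485424085)
print(list(unpack_entries(blob)))   # [(False, 1485424025), (True, 1485424085)]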
Use the unbundled/decoupled storhaug component for common-ha, to include Samba.
This entails 1) removing the .../extras/ganesha/... bits, 2) removing the ganesha CLIs in gluster and glusterd, 3) updating storhaug to include the additions and enhancements that have been made, e.g. port{block,unblock}, 4) updating documentation, and 5) adding tests in Glusto and/or CentOS CI.
Let multiple bricks live together in one process to conserve memory and ports, reduce context switches, and allow more bricks per server.
https://github.com/gluster/glusterfs-specs/blob/master/under_review/multiplexing.md
https://bugzilla.redhat.com/show_bug.cgi?id=1385758
http://review.gluster.org/#/c/14763/
Sub-commands are not used to distinguish the different roles of Geo-replication. For example, --monitor is used to make gsyncd act as the monitor, but since it is an optional argument, all the necessary arguments have to be validated separately. The existing gsyncd uses the deprecated optparse library; this should be changed to use the argparse library with sub-commands (like gsyncd.py monitor [args]).
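
A minimal sketch of what the argparse-based sub-commands could look like; the sub-command and argument names are illustrative, not the final interface:

import argparse

parser = argparse.ArgumentParser(prog='gsyncd.py')
sub = parser.add_subparsers(dest='role')

monitor = sub.add_parser('monitor', help='run as the monitor process')
monitor.add_argument('master')
monitor.add_argument('slave')

worker = sub.add_parser('worker', help='run as a per-brick worker')
worker.add_argument('master')
worker.add_argument('slave')
worker.add_argument('--local-path', required=True)

# Each role now declares and validates only its own arguments:
args = parser.parse_args(['monitor', 'vol1', 'ssh://node1::vol1'])
print(args.role, args.master, args.slave)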
Default configurations are generated every time glusterd restarts. Due to this, permissions are reset on the template file, which causes non-root Geo-replication to fail. Default configurations should be packaged and installed as part of the installation.
Session-specific configurations are maintained as a copy of the template configuration, with custom configurations then overwritten into that file. This introduces a lot of issues during upgrades. Only custom configurations should be maintained in the session config file.
A config help command is required to see the list of possible configurations. Some of the config names are very specific to a mode/method; general names are required. For example, use_tarssh is very confusing: what will happen if use_tarssh is disabled? How will this option look if a new sync engine is introduced?
Existing:
use-tarssh = True|False
use-metavolume = True|False
Proposed:
sync-engine = rsync|tarssh
active-passive-mode = manual|node-id|meta-volume
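
As a tiny sketch of how the old boolean could be mapped onto the proposed names during a one-off config migration (option names as proposed above):

def migrate(conf):
    new = dict(conf)
    if 'use-tarssh' in new:
        val = str(new.pop('use-tarssh')).lower()
        new['sync-engine'] = 'tarssh' if val == 'true' else 'rsync'
    return new

print(migrate({'use-tarssh': 'True'}))   # {'sync-engine': 'tarssh'}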
./configure works fine
make produces the error below.
all -fno-strict-aliasing -DGF_DARWIN_HOST_OS -I../../../../libglusterfs/src -I../../../../xlators/lib/src -I../../../../argp-standalone -D__DARWIN_64_BIT_INO_T -bundle -undefined suppress -flat_namespace -D_XOPEN_SOURCE -O0 -nostartfiles -g -O2 -MT marker-common.lo -MD -MP -MF .deps/marker-common.Tpo -c marker-common.c -fno-common -DPIC -o .libs/marker-common.o
In file included from ../../../../libglusterfs/src/stack.h:46,
from ../../../../libglusterfs/src/xlator.h:71,
from ../../../../libglusterfs/src/inode.h:42,
from marker-common.h:27,
from marker-common.c:24:
../../../../libglusterfs/src/globals.h:26: warning: unknown option after '#pragma GCC diagnostic' kind
glibtool: link: cc -Wl,-undefined -Wl,dynamic_lookup -o .libs/marker.0.so -bundle .libs/marker.o .libs/marker-quota.o .libs/marker-quota-helper.o .libs/marker-common.o ../../../../libglusterfs/src/.libs/libglusterfs.dylib -ll -lpthread -O0 -O2
duplicate symbol _k in:
.libs/marker.o
.libs/marker-quota.o
duplicate symbol _k in:
.libs/marker.o
.libs/marker-quota-helper.o
duplicate symbol _k in:
.libs/marker.o
.libs/marker-common.o
ld: 3 duplicate symbols for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[5]: *** [marker.la] Error 1
make[4]: *** [all-recursive] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
gluster volume get <volname> cluster.op-version can provide the current op-version the cluster is running with. However, there is no way for users to know the maximum value to which they can bump up the op-version. Given that an automatic op-version update doesn't happen on an upgrade, users might end up running the cluster with an older op-version. The focus area covered by this feature is user experience.
Related bug in bugzilla: #1365822
During a graph switch we do not perform much cleanup on the old graph, leading to memory leaks.
glusterfs version: 3.7.13
system OS: Linux Ubuntu 14.04
nodes: gfs01, gfs02
description: GlusterFS node data size inconsistency after node failure recovery
gfs01 :
/dev/vdb1 380G 326G 55G 86% /data/brick1
gfs02:
/dev/vdb1 380G 330G 50G 87% /data/brick1
root@gfs02:~# gluster volume heal gv0 info
Brick 192.168.0.31:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick 192.168.0.32:/data/brick1/gv0
Status: Connected
Number of entries: 0