gluster / glusterfs

Gluster Filesystem : Build your distributed storage in minutes

Home Page: https://www.gluster.org

License: GNU General Public License v2.0

Shell 10.03% Python 3.86% C 84.10% Makefile 0.42% Perl 0.78% Emacs Lisp 0.02% Scheme 0.01% Lex 0.01% Yacc 0.08% Ruby 0.02% M4 0.41% RPC 0.21% Roff 0.01% Vim Script 0.04%
gluster storage c distributed-systems glusterfs libgfapi k8s-sig-storage filesystem fuse-filesystem high-availability

glusterfs's Introduction

Gluster is a free and open-source scalable network filesystem.







Gluster

Gluster is a software-defined distributed storage system that can scale to several petabytes. It provides interfaces for object, block, and file storage.

Development

The development workflow is documented in the Contributors guide.

Documentation

The Gluster documentation can be found at Gluster Docs.

Deployment

Quick instructions to build and install can be found in the INSTALL file.

Testing

The GlusterFS source contains functional tests under the tests/ directory. All of these tests are run against every patch submitted for review. If you want your patch to be tested, please add a .t test file as part of your patch submission. You can also submit a patch that only adds a .t file for a test case you are aware of.

To run these tests, just run ./run-tests.sh on your test machine. Don't run this on a machine where a 'production' GlusterFS is running, as it blindly kills all gluster processes in each run.

If you are sending a patch and want to validate one or a few specific tests, run a single test with the command below.

  bash# /bin/bash ${path_to_gluster}/tests/basic/rpc-coverage.t

You can also use the prove tool, if it is available on your machine, as follows.

  bash# prove -vmfe '/bin/bash' ${path_to_gluster}/tests/basic/rpc-coverage.t

Maintainers

The list of Gluster maintainers is available in MAINTAINERS file.

License

Gluster is dual-licensed under GPLv2 and LGPLv3+.

Please visit the Gluster Home Page to find out more about Gluster.

glusterfs's People

Contributors

amarts, anoopcs9, aravindavk, aspandey, avati, avra, csabahenk, dmantipov, itisravi, kalebskeithley, karthik-us, kotreshhr, kritikadhananjay, kshlm, manu0401, mchangir, mdjunaid, mohit84, nixpanic, rafikc30, raghavendra-talur, raghavendrabhat, sanjurakonde, shishirng, soumyakoduri, sunnyku, thotz, vbellur, vshankar, xhernandez


glusterfs's Issues

Add memory tracking for XDR code

Since this is generated code, it uses plain malloc/free and is thus not subject to our memory tracking. This has made leaks here more difficult to diagnose than elsewhere in the code. We should probably post-process the code that comes out of rpcgen to use GF_MALLOC/GF_FREE instead.

GlusterFS node data size inconsistency after node failure recovery

glusterfs version: 3.7.13
system OS: Linux Ubuntu 14.04
nodes: gfs01, gfs02
description: GlusterFS node data size inconsistency after node failure recovery
gfs01:
/dev/vdb1 380G 326G 55G 86% /data/brick1
gfs02:
/dev/vdb1 380G 330G 50G 87% /data/brick1
root@gfs02:~# gluster volume heal gv0 info
Brick 192.168.0.31:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick 192.168.0.32:/data/brick1/gv0
Status: Connected
Number of entries: 0

Request testing feedback for component/feature gfapi in release-3.10

This issue tracks testing feedback on component/feature gfapi for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

glusterfs permission denied

Hi,
I am trying to set a simple gluster volume but cannot get read or write access to the fuse mount point without root privileges.
I am using gluster 3.8.5 from SIG repo on Centos 7.2 in cloudwatt provider

uname -r
3.10.0-327.28.3.el7.x86_64

Here is the steps I follow to instantiate my volume (on 2 nodes):

sudo mkfs.xfs -i size=512 /dev/vdc1 -f
sudo mkdir -p /srv/gluster/data/small
sudo mount -t xfs /dev/vdc1 /srv/gluster/data
sudo yum install glusterfs-server glusterfs-fuse
sudo service glusterd start
sudo gluster peer probe 172.52.0.3
sudo gluster volume create small replica 2 transport tcp 172.52.0.4:/srv/gluster/data/small 172.52.0.3:/srv/gluster/data/small
sudo gluster volume set small nfs.disable off
sudo gluster volume set small nfs.acl off
sudo gluster volume start small
sudo mkdir -p /srv/data/small
sudo mount -t glusterfs -o dev,acl 172.52.0.4:/small /srv/data/small
sudo chmod -R 777 /srv/data/small
ll /srv/data/small 
**ls: cannot access /srv/data/small: Permission denied**

Introduce force option for Snapshot Restore

Essentially, a snapshot of a volume that was taken while the volume had the nfs.ganesha option enabled has an export.conf file for it. This file will be restored to the Ganesha export directory, which has been moved to the shared storage. Currently we fail the snapshot restore in a scenario where the snapshot has the said conf file but the shared storage is not available. This behaviour is expected. However, there is no option for the user to proceed with the snapshot restore at this point.

We will introduce a force option for snapshot restore, which will enable the user to restore this particular snapshot in the scenario explained above, thereby abandoning the export.conf. The reason for introducing the force option is to make the user explicitly ask for the saved export.conf to be abandoned in such a scenario.

Example of proposed option in restore cli:
gluster snapshot restore [force]

Get rid of custom boolean type

Standard boolean types have been part of C since C99. Having our own custom type is awkward for people new to our code, and sometimes has worse effects (e.g., because it's an enum, each boolean takes up four bytes for one bit of information).

Reduce the Changelog HTIME file size by using optimal format

HTIME is the index file generated by the changelog translator to store the list of changelogs. Currently the complete changelog path is saved, including the brick path.

Do not save the full path of the changelog file in the HTIME file. Save only the timestamp suffix and an extra byte indicating whether the changelog is empty. (This can be saved in binary format.)

This change will:

  • Improve the performance of changelog time-range searches
  • Eliminate the snapshot-restore issue (snapshot restore changes the brick path)
  • Use less disk space for the index file

With this change, an upgrade script is necessary to migrate the HTIME files of existing volumes if changelog is enabled.

Existing:

<CHANGELOG_PATH>\x00<CHANGELOG_PATH>..

The changelog file name is saved in lower case to identify an empty changelog, otherwise in upper case.

Proposed:

<STATE><TS1>\x00<STATE><TS2>..

Request testing feedback for component/feature bitrot in release-3.10

This issue tracks testing feedback on component/feature bitrot for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

switch to storhaug for HA for ganesha and samba

use the unbundled/decoupled storhaug component for common-ha to include Samba.

This entails 1) removing the .../extras/ganesha/... bits, 2) removing the ganesha CLIs in gluster and glusterd, 3) updating storhaug to include the additions and enhancements that have been made, e.g. port{block,unblock}, 4) updating documentation, and 5) adding tests in Glusto and/or CentOS CI.

Request testing feedback for component/feature fuse in release-3.10

This issue tracks testing feedback on component/feature fuse for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

Eliminate redundancy of fop type in call_frame vs. call_stack

Most code uses the fop type in the call_stack. The fop-latency code uses a type in the call_frame instead. We should eliminate this duplication by making the fop-latency code use the call_stack value instead (and fix its type, which is currently incorrect).

Request testing feedback for component/feature quota in release-3.10

This issue tracks testing feedback on component/feature quota for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

Mac: Clang linking error while making

./configure works fine
make produces the error below.

all -fno-strict-aliasing -DGF_DARWIN_HOST_OS -I../../../../libglusterfs/src -I../../../../xlators/lib/src -I../../../../argp-standalone -D__DARWIN_64_BIT_INO_T -bundle -undefined suppress -flat_namespace -D_XOPEN_SOURCE -O0 -nostartfiles -g -O2 -MT marker-common.lo -MD -MP -MF .deps/marker-common.Tpo -c marker-common.c -fno-common -DPIC -o .libs/marker-common.o
In file included from ../../../../libglusterfs/src/stack.h:46,
from ../../../../libglusterfs/src/xlator.h:71,
from ../../../../libglusterfs/src/inode.h:42,
from marker-common.h:27,
from marker-common.c:24:
../../../../libglusterfs/src/globals.h:26: warning: unknown option after '#pragma GCC diagnostic' kind
glibtool: link: cc -Wl,-undefined -Wl,dynamic_lookup -o .libs/marker.0.so -bundle .libs/marker.o .libs/marker-quota.o .libs/marker-quota-helper.o .libs/marker-common.o ../../../../libglusterfs/src/.libs/libglusterfs.dylib -ll -lpthread -O0 -O2
duplicate symbol _k in:
.libs/marker.o
.libs/marker-quota.o
duplicate symbol _k in:
.libs/marker.o
.libs/marker-quota-helper.o
duplicate symbol _k in:
.libs/marker.o
.libs/marker-common.o
ld: 3 duplicate symbols for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[5]: *** [marker.la] Error 1
make[4]: *** [all-recursive] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

Clean up virtual xattrs

Discussion on http://review.gluster.org/#/c/16134/3

As mentioned there, we need to move virtual xattrs into their own namespace so that they can be recognized as such easily (no long lists, please). This is more work than should be done in that patch, but it does need to be done some day.

Add information on op-version for clients to volume status output

Currently the gluster volume status <VOLNAME|all> clients command gives us the following information on clients:

  1. Brick name
  2. Client count for each brick
  3. hostname:port for each client
  4. Bytes read and written for each client

Information regarding the maximum op-version that each client supports should be added to the volume status command so that users can get the op-versions supported by each client in one command.

Corresponding bug: https://bugzilla.redhat.com/show_bug.cgi?id=1409078

Request testing feedback for component/feature disperse in release-3.10

This issue tracks testing feedback on component/feature disperse for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

3.3.0qa43 does it wrong...

When I issue the command

gluster espresso mocha size grande milk breve --nofoam

it serves it with a chocolate covered espresso bean on the lid. This melts and makes a mess. Please add the option to remove the bean.

Memory leaks

libvirtd leaks a lot of memory when using the glusterfs driver.
Here is some output from valgrind:

==27470== 2,704,894 (272 direct, 2,704,622 indirect) bytes in 1 blocks are definitely lost in loss record 1,926 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB25508: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B5FA904: ???
==27470== by 0x2B60B639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)
==27470== by 0x1CDB7D3E: rpc_clnt_notify (in /usr/lib64/libgfrpc.so.0.0.1)

==27470== 639,120 bytes in 2 blocks are possibly lost in loss record 1,917 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB364E6: mem_pool_new_fn (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB2555A: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B3B3904: ???
==27470== by 0x2B3C4639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)

Version information:
CentOS Linux release 7.2.1511

glusterfs-client-xlators-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-fuse-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-api-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-libs-3.7.1-16.0.1.el7.centos.x86_64
libvirt-1.2.17-13.el7_2.5.x86_64
qemu-kvm-1.5.3-105.el7_2.7.x86_64

Improve Rsync/Tar+SSH Error Handling and retries

Rsync/tar+ssh retries are done for all the GFIDs present in the queue. This is very inefficient; enhance Geo-replication to parse the rsync/tar error output, capture the failed GFIDs, and retry only those.

Existing:
- Try to sync using rsync or tar+ssh
- If the sync failed, stat all items on the master to find already-unlinked GFIDs and remove them from the queue
- Retry the entire queue

Proposed:
- Try to sync using rsync or tar+ssh
- If rsync returns error code 23 or tar+ssh returns failure, parse the error output and identify already-unlinked GFIDs and error GFIDs
- Retry only the error GFIDs

Request testing feedback for component/feature glusterd in release-3.10

This issue tracks testing feedback on component/feature glusterd for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

Request testing feedback for component/feature eventing in release-3.10

This issue tracks testing feedback on component/feature eventing for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.
NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

In gfapi fix memory leak during graph switch

During a graph switch we do not perform much cleanup on the old graph, leading to memory leaks.

  • The inode table of the old graph needs cleanup:
    Fix inode leaks
    Fix forget() of each xlator to free the inode ctx properly
  • The xlator objects themselves (xlator_t)
  • The mem_accnt structure in every xlator object:
    Fix all the leaks so that the ref count of the mem_accnt structure drops to 0
  • Implement fini() in every xlator
  • Some of the above items cannot be fully completed, e.g. fini() in every xlator, because it needs effort from all the xlator owners.

volume expansion/contraction for tiered volumes

Tiering is to support add-brick and remove-brick on tiered volumes. Along with add/remove brick, the rebalance processes that follow them will also be included. All of this will be in an experimental state. Before the volume-expansion code gets in, we will push the code to run tier under the service framework, separating attach-tier from add-brick, detach-tier from remove-brick, and the tier daemon from rebalance.

https://docs.google.com/document/d/1lj7f0aF5TS3N2I9inUBn-5p2ZpJSV4uptPnIN3KYKEo

https://docs.google.com/document/d/18jDyOIkJuifufqR5afIkZGXiTD77Gs22alK7J94SN1g

Introducing block CLI commands

As we know, gluster block storage creation and maintenance is not simple today, as it involves manual steps such as:

  1. Creating the file in the gluster volume.
  2. Mapping the target file from the volume.
  3. Creating the LUN.
  4. Setting the appropriate ACLs.
  5. Setting the user ID and password for authentication.
  6. Creating multipathed targets for HA, which involves repeating the above steps on each node.

To make these basic operations simple, we should integrate the block story with the gluster CLI.

As part of it, we need the following basic commands

$ gluster block create
$ gluster block modify
$ gluster block list
$ gluster block delete

GlusterFS volumes must support user/group quotas

GlusterFS volumes do not support user/group quotas. Supporting user/group quotas can help in multi-tenant or workgroup use cases, where the user may want to control quota based on identity rather than hierarchy.

Request testing feedback for component/feature afr in release-3.10

This issue tracks testing feedback on component/feature afr for release-3.10

Request the maintainer or person assigned to the issue to test and provide feedback on the health of the component before closing the issue.

NOTE: As a maintainer please change the assignment as needed, in case someone else is working on testing the component.

Other folks that are testing the release are also welcome to add the tests they performed to these issues, as that helps gauge the health of the component/feature.

Major points to look out for:

  • We have introduced brick multiplexing, so tests with and without multiplexing enabled will help in understanding if there are any lingering issues
  • readdir-ahead as a sub xlator to DHT is added with this release, this is an experimental feature, but testing with this enabled can help weed out any issues that other xlators may face when this feature is enabled
  • If any upgrade or mixed mode cluster testing is performed, then testing out the glusterd changes to report op-version would be useful to perform

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against the release-3.10 and will not appear in the project dashboard for the same, as this is a release activity and does not define scope for the release.

Happy testing!

Geo-Replication fail at startup

Hi,

I have set up geo-replication on Ubuntu 16.04 with GlusterFS 3.9.1.

Here's the log...

[2017-01-26 09:47:05.985043] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/ad1d09bc-3cb9-4c54-9c2b-033e8a98f7d3
[2017-01-26 09:47:05.985896] E [syncdutils(/data/glusterfs/backup1):296:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 326, in twrap
    tf(*aa)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1649, in syncjob
    po = self.sync_engine(pb, self.log_err)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1730, in rsync
    log_err=log_err)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 56, in sup
    sys._getframe(1).f_code.co_name)(*a, **kw)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1041, in rsync
    "log_rsync_performance")):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 252, in get_realtime
    return self.get(opt, printValue=False)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 357, in get
    self.update_to(d, allow_unresolved=True)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 347, in update_to
    update_from_sect(sect, MultiDict(dct, mad, *self.auxdicts))
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/configinterface.py", line 327, in update_from_sect
    for k, v in self.config._sections[sect].items():
  File "/usr/lib/python2.7/collections.py", line 127, in items
    return [(key, self[key]) for key in self]
KeyError: 'state_socket_unencoded'
[2017-01-26 09:47:05.986107] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/0817467a-d011-44e7-9019-fa8d3c865bf9
[2017-01-26 09:47:05.987433] E [syncdutils(/data/glusterfs/backup1):296:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 326, in twrap
    tf(*aa)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1649, in syncjob
    po = self.sync_engine(pb, self.log_err)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1730, in rsync
    log_err=log_err)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 56, in sup
    sys._getframe(1).f_code.co_name)(*a, **kw)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1041, in rsync
    "log_rsync_performance")):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 368, in boolify
    lstr = s.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
[2017-01-26 09:47:06.5925] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/af424cc5-e287-4983-a86b-e376229be559
[2017-01-26 09:47:06.6483] D [master(/data/glusterfs/backup1):302:a_syncdata] _GMaster: candidate for syncing .gfid/3a161390-4c5a-4aec-9ca0-83489392ba4c
[2017-01-26 09:47:06.6648] I [syncdutils(/data/glusterfs/backup1):237:finalize] <top>: exiting.
[2017-01-26 09:47:06.15986] I [repce(/data/glusterfs/backup1):92:service_loop] RepceServer: terminating on reaching EOF.
[2017-01-26 09:47:06.16272] I [syncdutils(/data/glusterfs/backup1):237:finalize] <top>: exiting.
[2017-01-26 09:47:06.483865] I [monitor(monitor):349:monitor] Monitor: worker(/data/glusterfs/backup1) died in startup phase
[2017-01-26 09:47:06.486694] I [gsyncdstatus(monitor):233:set_worker_status] GeorepStatus: Worker Status: Faulty

Could you please help me?

Thanks!
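For reference, the AttributeError in the log above comes from boolify() calling .lower() on a config value that is None (the lookup had already failed with the earlier KeyError). A defensive sketch of such a helper — an illustration, not the stock gsyncd code — might look like:

```python
def boolify(s, default=False):
    """Convert a config string to bool, tolerating None.

    Unset (None) values fall back to `default` instead of raising
    AttributeError the way the traceback above shows.
    """
    if s is None:
        return default
    if isinstance(s, bool):
        return s
    lstr = s.lower()
    if lstr in ("true", "yes", "on", "1"):
        return True
    if lstr in ("false", "no", "off", "0"):
        return False
    raise ValueError("unknown boolean string: %r" % s)
```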

Redo tier daemon as a service

The plan for tiering in 3.10 is to support add-brick and remove-brick on tiered volumes, including the rebalance processes that follow add/remove brick. All of this will be in an experimental state. Before the volume-expansion code goes in, we will push the code to run tier under the service framework, separating attach-tier from add-brick, detach-tier from remove-brick, and the tier daemon from rebalance.

https://docs.google.com/document/d/1lj7f0aF5TS3N2I9inUBn-5p2ZpJSV4uptPnIN3KYKEo

https://docs.google.com/document/d/18jDyOIkJuifufqR5afIkZGXiTD77Gs22alK7J94SN1g

Move fields from xdata into protocol definition

The original intention with xdata was that it could be used to add enhancements without forcing a new protocol version, but that any such enhancements would be accommodated directly in the next protocol version. Besides being cleaner and easier to document, using RPC fields would require less code and be more efficient. For 4.0, which will require a new protocol version anyway, we should clean up all of the xdata we've accumulated and turn it into RPC fields.
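As an illustration in XDR notation (field names are hypothetical, not the actual GlusterFS protocol definition in the rpc/xdr sources), the cleanup would promote an ad-hoc xdata dict entry to a typed, documented RPC field:

```c
/* Hypothetical sketch of a 4.0 request struct. */
struct gfs4_example_req {
        opaque       gfid[16];
        unsigned int flags;
        /* a value that 3.x smuggled through the serialized xdata
         * dict, promoted to a first-class RPC field in 4.0: */
        unsigned hyper ctime_sec;
        opaque       xdata<>;   /* kept only for genuinely new extensions */
};
```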

Request testing feedback for component/feature caching xlators in release-3.10

This issue tracks testing feedback on component/feature caching xlators for release-3.10

caching-xlators: io-cache, md-cache, open-behind, quick-read, read-ahead, write-behind, readdir-ahead

Request the maintainer or person assigned to this issue to test the component and provide feedback on its health before closing the issue.
NOTE: As a maintainer, please reassign the issue if someone else is testing the component.

Other folks testing the release are also welcome to add the tests they performed to these issues, as that helps in gauging the health of the component/feature.

Major points to look out for:

  • Brick multiplexing is introduced in this release, so tests with and without multiplexing enabled will help in understanding whether there are any lingering issues
  • readdir-ahead as a sub-xlator of DHT is added with this release; it is an experimental feature, but testing with it enabled can help weed out issues that other xlators may face when it is enabled
  • If any upgrade or mixed-mode cluster testing is performed, also test the glusterd changes that report op-version

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against release-3.10 but will not appear in the project dashboard, as it is a release activity and does not define the scope of the release.

Happy testing!

Primary slave failover handling

If the primary slave node goes down, Geo-rep workers will fail to connect to any other slave node on restart, since the slave node list is fetched from the slave volume info (via the primary slave node's IP).
Cache the list of slave nodes and use it when the primary node is down.
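A minimal sketch of the proposed caching, assuming a hypothetical cache-file path and a caller-supplied fetch helper (the real logic lives in the gsyncd codebase):

```python
import json
import os

# Hypothetical location for the cached slave node list.
CACHE_FILE = "/var/lib/glusterd/geo-replication/slave_nodes.cache"

def get_slave_nodes(fetch_from_primary, cache_file=CACHE_FILE):
    """Return the slave node list, falling back to a cached copy.

    `fetch_from_primary` is a callable that queries the primary slave
    node for the slave volume info and raises OSError on connection
    failure.
    """
    try:
        nodes = fetch_from_primary()
        # Refresh the cache on every successful fetch.
        with open(cache_file, "w") as f:
            json.dump(nodes, f)
        return nodes
    except OSError:
        # Primary slave node is down: reuse the last known node list.
        if os.path.exists(cache_file):
            with open(cache_file) as f:
                return json.load(f)
        raise
```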

Memory leaks: libvirtd memory usage grows day by day when using the glusterfs driver

libvirtd's memory usage grows day by day when using the glusterfs driver.
Here is some output from valgrind:

==27470== 2,704,894 (272 direct, 2,704,622 indirect) bytes in 1 blocks are definitely lost in loss record 1,926 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB25508: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B5FA904: ???
==27470== by 0x2B60B639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)
==27470== by 0x1CDB7D3E: rpc_clnt_notify (in /usr/lib64/libgfrpc.so.0.0.1)

==27470== 639,120 bytes in 2 blocks are possibly lost in loss record 1,917 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB364E6: mem_pool_new_fn (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB2555A: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B3B3904: ???
==27470== by 0x2B3C4639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)

Version information:
CentOS Linux release 7.2.1511

glusterfs-client-xlators-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-fuse-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-api-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-libs-3.7.1-16.0.1.el7.centos.x86_64
libvirt-1.2.17-13.el7_2.5.x86_64
qemu-kvm-1.5.3-105.el7_2.7.x86_64

Request testing feedback for component/feature distribute in release-3.10

This issue tracks testing feedback on component/feature distribute for release-3.10

Request the maintainer or person assigned to this issue to test the component and provide feedback on its health before closing the issue.
NOTE: As a maintainer, please reassign the issue if someone else is testing the component.

Other folks testing the release are also welcome to add the tests they performed to these issues, as that helps in gauging the health of the component/feature.

Major points to look out for:

  • Brick multiplexing is introduced in this release, so tests with and without multiplexing enabled will help in understanding whether there are any lingering issues
  • readdir-ahead as a sub-xlator of DHT is added with this release; it is an experimental feature, but testing with it enabled can help weed out issues that other xlators may face when it is enabled
  • If any upgrade or mixed-mode cluster testing is performed, also test the glusterd changes that report op-version

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against release-3.10 but will not appear in the project dashboard, as it is a release activity and does not define the scope of the release.

Happy testing!

Request testing feedback for component/feature geo-replication in release-3.10

This issue tracks testing feedback on component/feature geo-replication for release-3.10

Request the maintainer or person assigned to this issue to test the component and provide feedback on its health before closing the issue.
NOTE: As a maintainer, please reassign the issue if someone else is testing the component.

Other folks testing the release are also welcome to add the tests they performed to these issues, as that helps in gauging the health of the component/feature.

Major points to look out for:

  • Brick multiplexing is introduced in this release, so tests with and without multiplexing enabled will help in understanding whether there are any lingering issues
  • readdir-ahead as a sub-xlator of DHT is added with this release; it is an experimental feature, but testing with it enabled can help weed out issues that other xlators may face when it is enabled
  • If any upgrade or mixed-mode cluster testing is performed, also test the glusterd changes that report op-version

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against release-3.10 but will not appear in the project dashboard, as it is a release activity and does not define the scope of the release.

Happy testing!

Not mounting on Ubuntu 12.04

Hi
Running on Ubuntu 12.04.1 x64. Scripts are in /etc/init as expected, but mounting fails. Here is what I found in the logs:

[697470.126842] init: wait-for-state (mounting-glusterfsglusterfs-server) main process (295) terminated with status 100
[697470.129226] init: mounting-glusterfs main process (290) terminated with status 1

The share is not mounted at boot; mounting manually works fine.
Any ideas?

Disable creation of trash directory by default

As per the current design, the trash directory, namely .trashcan, is created at the root of a volume when its bricks come online, and this directory cannot be deleted from the volume even when the trash feature is disabled.

This proposal changes that so the creation of, and subsequent enforcement on, the trash directory happen only when the feature is enabled for that volume.

Tracker BZ
Patch under review

Request testing feedback for component/feature change-time-recorder and tier in release-3.10

This issue tracks testing feedback on component/feature change-time-recorder and tier for release-3.10

Request the maintainer or person assigned to this issue to test the component and provide feedback on its health before closing the issue.
NOTE: As a maintainer, please reassign the issue if someone else is testing the component.

Other folks testing the release are also welcome to add the tests they performed to these issues, as that helps in gauging the health of the component/feature.

Major points to look out for:

  • Brick multiplexing is introduced in this release, so tests with and without multiplexing enabled will help in understanding whether there are any lingering issues
  • readdir-ahead as a sub-xlator of DHT is added with this release; it is an experimental feature, but testing with it enabled can help weed out issues that other xlators may face when it is enabled
  • If any upgrade or mixed-mode cluster testing is performed, also test the glusterd changes that report op-version

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against release-3.10 but will not appear in the project dashboard, as it is a release activity and does not define the scope of the release.

Happy testing!

Estimate how long it will take for a rebalance operation to complete

There is currently no way to determine when a rebalance operation will complete, which forces admins to keep monitoring it.

The proposed approach recalculates the estimated time every time the rebalance status command is issued, and displays the value along with the rebalance status output.

  1. Determine the number of files on a brick using statfs
  2. Calculate the rate at which files have been processed so far
  3. Calculate the time required to complete the operation based on 1 and 2.

This approach provides a rough estimate of the time required, refreshed every time the status command is issued.
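The three steps above can be sketched as follows (simplified; in the real feature the total file count would come from statfs/f_files on each brick):

```python
import time

def rebalance_eta(total_files, files_processed, start_time, now=None):
    """Rough ETA for a rebalance, per the three steps above.

    total_files:     file count on the brick (step 1, e.g. statfs f_files)
    files_processed: files the rebalance has handled so far
    start_time:      epoch seconds when the rebalance started
    Returns estimated seconds remaining, or None if no progress yet.
    """
    now = time.time() if now is None else now
    elapsed = now - start_time
    if files_processed <= 0 or elapsed <= 0:
        return None                           # no data yet; cannot estimate
    rate = files_processed / elapsed          # step 2: files per second
    remaining = total_files - files_processed
    return remaining / rate                   # step 3: time left
```

Because the rate is recomputed on every status call, the estimate self-corrects as the processing speed changes.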

Improve gsyncd configuration and arguments handling

Improvements in Gsyncd Roles

Subcommands are not used to distinguish the different roles of Geo-replication. For example, --monitor makes gsyncd act as a monitor, but since it is an optional argument, all the other required arguments have to be validated separately. The existing gsyncd uses the deprecated optparse library; it should be changed to use argparse with subcommands (like gsyncd.py monitor [args]).

  • Migration from optparse to argparse
  • Localized imports of libraries for each role
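A minimal sketch of the argparse-with-subcommands shape proposed above (the role names and arguments are illustrative, not the final gsyncd interface):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="gsyncd.py")
    sub = parser.add_subparsers(dest="role", required=True)

    # Each role gets its own subcommand, so its required arguments
    # are validated by argparse itself instead of by hand.
    monitor = sub.add_parser("monitor", help="run as the monitor process")
    monitor.add_argument("master")
    monitor.add_argument("slave")

    worker = sub.add_parser("worker", help="run as a sync worker")
    worker.add_argument("master")
    worker.add_argument("slave")
    worker.add_argument("--local-path", required=True)

    return parser
```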

Configuration Improvements

Default configurations are generated every time glusterd restarts. Due to this, permissions on the template file get reset, which causes non-root Geo-replication to fail. Default configurations should instead be packaged and installed as part of installation.

Session-specific configurations are maintained as a copy of the template configuration, with custom configurations overwritten into that file. This introduces a lot of issues during upgrades. Only custom configurations should be kept in the session config file.

Discoverable configurations and standardize config names

A config help command is required to list the possible configurations. Some of the config names are very specific to a mode/method; more general names are needed. For example, use_tarssh is confusing: what happens if use_tarssh is disabled? What would this option look like if a new sync engine were introduced?

Existing:

use-tarssh = True|False
use-metavolume = True|False

Proposed:

sync-engine = rsync|tarssh
active-passive-mode = manual|node-id|meta-volume

Identifiers for Slave gsyncd processes and Slave log files

When multiple gsyncd workers connect to the same slave node, it is not possible to tell which gsyncd process was spawned by which master-node worker. Add the master hostname and brick path to the log file names as well as to the slave gsyncd arguments.

Support to get maximum op-version supported in a heterogeneous cluster

gluster volume get <volname> cluster.op-version provides the current op-version the cluster is running with. However, there is no way for users to know the maximum value to which they can bump the op-version. Given that the op-version is not updated automatically on an upgrade, users might end up running the cluster with an older op-version than necessary. The focus area covered by this feature is user experience.

Related bug in bugzilla: #1365822
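The cluster-wide maximum is simply the minimum of the per-peer maxima, since every peer must support the chosen op-version. A sketch of that computation (the peer data shape is hypothetical; glusterd would gather these values from its peers):

```python
def cluster_max_op_version(peer_max_op_versions):
    """Highest op-version the whole cluster can be bumped to.

    Each peer reports the maximum op-version its installed binaries
    support; the cluster can only run an op-version that every peer
    supports, i.e. the minimum of those maxima.
    """
    if not peer_max_op_versions:
        raise ValueError("no peers in cluster")
    return min(peer_max_op_versions)
```

For example, a cluster with peers supporting op-versions 31000, 30712, and 31000 can be bumped at most to 30712.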

Request testing feedback for component/feature arbiter in release-3.10

This issue tracks testing feedback on component/feature arbiter for release-3.10

Request the maintainer or person assigned to this issue to test the component and provide feedback on its health before closing the issue.
NOTE: As a maintainer, please reassign the issue if someone else is testing the component.

Other folks testing the release are also welcome to add the tests they performed to these issues, as that helps in gauging the health of the component/feature.

Major points to look out for:

  • Brick multiplexing is introduced in this release, so tests with and without multiplexing enabled will help in understanding whether there are any lingering issues
  • readdir-ahead as a sub-xlator of DHT is added with this release; it is an experimental feature, but testing with it enabled can help weed out issues that other xlators may face when it is enabled
  • If any upgrade or mixed-mode cluster testing is performed, also test the glusterd changes that report op-version

Check release notes to understand how to enable the above features: https://github.com/gluster/glusterfs/blob/v3.10.0rc0/doc/release-notes/3.10.0.md

This issue is tracked against release-3.10 but will not appear in the project dashboard, as it is a release activity and does not define the scope of the release.

Happy testing!
