path: root/filesystems/glusterfs/patches/patch-10963
author    manu <manu>    2015-06-02 03:44:16 +0000
committer manu <manu>    2015-06-02 03:44:16 +0000
commit    fdd6a30a948c49087da9945e62d331a56792a29c (patch)
tree      f5cf1c15a66fef8668f98ee6398439d7a5e290de /filesystems/glusterfs/patches/patch-10963
parent    0758b47179fb4df2f49c23576479a306284699de (diff)
download  pkgsrc-fdd6a30a948c49087da9945e62d331a56792a29c.tar.gz
* Bitrot Detection
  Bitrot detection is a technique used to identify an "insidious" type of disk
  error, where data is silently corrupted with no indication from the disk to
  the storage software layer that an error has occurred. When bitrot detection
  is enabled on a volume, gluster performs signing of all files/objects in the
  volume and scrubs data periodically for signature verification. All anomalies
  observed are noted in log files.

* Multi threaded epoll for performance improvements
  Gluster 3.7 introduces multiple threads to dequeue and process more requests
  from epoll queues. This improves performance by processing more I/O requests.
  Workloads that involve read/write operations on a lot of small files can
  benefit from this enhancement.

* Volume Tiering [Experimental]
  Policy based tiering for placement of files. This feature will serve as a
  foundational piece for building support for data classification. Volume
  Tiering is marked as an experimental feature for this release. It is expected
  to be fully supported in a 3.7.x minor release.

* Trashcan
  This feature enables administrators to temporarily store deleted files from
  Gluster volumes for a specified time period.

* Efficient Object Count and Inode Quota Support
  This improvement provides an easy mechanism to retrieve the number of objects
  per directory or volume. The count of objects/files within a directory
  hierarchy is stored as an extended attribute of the directory, which can be
  queried to retrieve the count (a small query sketch follows these notes).
  This feature has been used to add support for inode quotas.

* Pro-active Self healing for Erasure Coding
  Gluster 3.7 adds pro-active self healing support for erasure coded volumes.

* Exports and Netgroups Authentication for NFS
  This feature adds Linux-style exports & netgroups authentication to the
  native NFS server. This enables administrators to restrict access to
  specific clients & netgroups for volume/sub-directory NFSv3 exports.

* GlusterFind
  GlusterFind is a new tool that provides a mechanism to monitor data events
  within a volume. Detection of events like modified files is made easier
  without having to traverse the entire volume.

* Rebalance Performance Improvements
  Rebalance and remove-brick operations in Gluster get a performance boost by
  speeding up identification of files needing movement and by using a
  multi-threaded mechanism to move all such files.

* NFSv4 and pNFS support
  Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and pNFS.
  This support is enabled via NFS Ganesha. Infrastructure changes done in
  Gluster 3.7 to support this feature include:
  - Addition of upcall infrastructure for cache invalidation.
  - Support for lease locks and delegations.
  - Support for enabling Ganesha through the Gluster CLI.
  - Corosync and pacemaker based implementation providing resource monitoring
    and failover to accomplish NFS HA.
  pNFS support for Gluster volumes and NFSv4 delegations are in beta for this
  release. Infrastructure changes to support lease locks and NFSv4 delegations
  are targeted for a 3.7.x minor release.

* Snapshot Scheduling
  With this enhancement, administrators can schedule volume snapshots.

* Snapshot Cloning
  Volume snapshots can now be cloned to create a new writable volume.

* Sharding [Experimental]
  Sharding addresses the problem of fragmentation of space within a volume.
  This feature adds support for files that are larger than the size of an
  individual brick. Sharding works by chunking files into blobs of a
  configurable size. Sharding is an experimental feature for this release.
  It is expected to be fully supported in a 3.7.x minor release.

* RCU in glusterd
  Thread synchronization and critical section access have been improved by
  introducing userspace RCU in glusterd.

* Arbiter Volumes
  Arbiter volumes are 3-way replicated volumes where the 3rd brick of the
  replica is automatically configured as an arbiter. The 3rd brick contains
  only metadata, which provides network partition tolerance and prevents
  split-brains from happening.

Update to GlusterFS 3.7.1

* Better split-brain resolution
  Split-brain resolution can now also be driven by users, without
  administrative intervention.

* Geo-replication improvements
  There have been several improvements in geo-replication for stability and
  performance.

* Minor Improvements
  - Message ID based logging has been added for several translators.
  - Quorum support for reads.
  - Snapshot names contain timestamps by default. Subsequent access to the
    snapshots should be done by the name listed in "gluster snapshot list".
  - Support for "gluster volume get <volname>" has been added.
  - libgfapi has added handle based functions to get/set POSIX ACLs based on
    common libacl structures.
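As an illustration of the object-count mechanism mentioned above, the sketch
below reads an extended attribute from a directory using Linux getxattr(2).
The attribute name used here is a placeholder, not the actual key GlusterFS
stores its per-directory counts under; consult the GlusterFS quota
documentation for the real attribute name.

/* xattr_count.c: minimal sketch of reading a directory extended attribute. */
#include <stdio.h>
#include <sys/xattr.h>

int
main (int argc, char **argv)
{
        /* Placeholder key; the real GlusterFS object-count xattr differs. */
        const char *key = "user.example.object-count";
        char        value[64];
        ssize_t     len;

        if (argc != 2) {
                fprintf (stderr, "usage: %s <directory>\n", argv[0]);
                return 1;
        }

        len = getxattr (argv[1], key, value, sizeof (value) - 1);
        if (len < 0) {
                perror ("getxattr");
                return 1;
        }
        value[len] = '\0';
        printf ("%s = %s\n", key, value);
        return 0;
}

Build with "cc -o xattr_count xattr_count.c" on a Linux system and point it at
a directory carrying the attribute.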
Diffstat (limited to 'filesystems/glusterfs/patches/patch-10963')
-rw-r--r--  filesystems/glusterfs/patches/patch-10963  110
1 file changed, 110 insertions, 0 deletions
diff --git a/filesystems/glusterfs/patches/patch-10963 b/filesystems/glusterfs/patches/patch-10963
new file mode 100644
index 00000000000..f247de6f511
--- /dev/null
+++ b/filesystems/glusterfs/patches/patch-10963
@@ -0,0 +1,110 @@
+$NetBSD: patch-10963,v 1.1 2015/06/02 03:44:16 manu Exp $
+
+From upstream http://review.gluster.org/10963
+
+From 5c359a79bd3c978d0f636082871c289c717d354e Mon Sep 17 00:00:00 2001
+From: Krishnan Parthasarathi <kparthas@redhat.com>
+Date: Tue, 19 May 2015 14:48:01 +0530
+Subject: [PATCH] glusterd: fix repeated connection to nfssvc failed msgs
+
+... and disable reconnect timer on rpc_clnt_disconnect.
+
+Root Cause
+----------
+
+gluster-NFS service wouldn't be started if there are no
+started volumes that have nfs service enabled for them.
+Before this fix we would initiate a connect even when
+the gluster-NFS service wasn't (re)started. Compounding
+that glusterd_conn_disconnect doesn't disable reconnect
+timer. So, it is possible that the reconnect timer was
+in execution when the timer event was attempted to be
+removed.
+
+Change-Id: Iadcb5cff9eafefa95eaf3a1a9413eeb682d3aaac
+BUG: 1222065
+Signed-off-by: Krishnan Parthasarathi <kparthas@redhat.com>
+Reviewed-on: http://review.gluster.org/10830
+Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
+Reviewed-by: Gaurav Kumar Garg <ggarg@redhat.com>
+Reviewed-by: Kaushal M <kaushal@redhat.com>
+---
+
+diff --git rpc/rpc-lib/src/rpc-clnt.c rpc/rpc-lib/src/rpc-clnt.c
+index 264a312..db99484 100644
+--- rpc/rpc-lib/src/rpc-clnt.c
++++ rpc/rpc-lib/src/rpc-clnt.c
+@@ -1108,6 +1108,11 @@
+
+         conn = &rpc->conn;
+
++        pthread_mutex_lock (&conn->lock);
++        {
++                rpc->disabled = 0;
++        }
++        pthread_mutex_unlock (&conn->lock);
+         rpc_clnt_reconnect (conn);
+
+         return 0;
+@@ -1758,6 +1763,7 @@
+
+         pthread_mutex_lock (&conn->lock);
+         {
++                rpc->disabled = 1;
+                 if (conn->timer) {
+                         gf_timer_call_cancel (rpc->ctx, conn->timer);
+                         conn->timer = NULL;
+diff --git xlators/mgmt/glusterd/src/glusterd-conn-mgmt.c xlators/mgmt/glusterd/src/glusterd-conn-mgmt.c
+index da8c909..fca9323 100644
+--- xlators/mgmt/glusterd/src/glusterd-conn-mgmt.c
++++ xlators/mgmt/glusterd/src/glusterd-conn-mgmt.c
+@@ -80,7 +80,6 @@
+ int
+ glusterd_conn_term (glusterd_conn_t *conn)
+ {
+-        rpc_clnt_disable (conn->rpc);
+         rpc_clnt_unref (conn->rpc);
+         return 0;
+ }
+diff --git a/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
+index 49b1b56..cb08a20 100644
+--- xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
++++ xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
+@@ -164,18 +164,15 @@
+ {
+         int ret = -1;
+
+-        if (glusterd_are_all_volumes_stopped ()) {
+-                ret = svc->stop (svc, SIGKILL);
++        ret = svc->stop (svc, SIGKILL);
++        if (ret)
++                goto out;
+
+-        } else {
+-                ret = glusterd_nfssvc_create_volfile ();
+-                if (ret)
+-                        goto out;
++        ret = glusterd_nfssvc_create_volfile ();
++        if (ret)
++                goto out;
+
+-                ret = svc->stop (svc, SIGKILL);
+-                if (ret)
+-                        goto out;
+-
++        if (glusterd_nfssvc_need_start ()) {
+                 ret = svc->start (svc, flags);
+                 if (ret)
+                         goto out;
+@@ -192,10 +189,9 @@
+
+ int
+ glusterd_nfssvc_start (glusterd_svc_t *svc, int flags)
+ {
+-        if (glusterd_nfssvc_need_start ())
+-                return glusterd_svc_start (svc, flags, NULL);
++        return glusterd_svc_start (svc, flags, NULL);
+
+         return 0;
+ }
+
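The sketch below restates, outside of GlusterFS, the pattern the patch applies
in rpc-clnt.c: a "disabled" flag is toggled under the connection lock so that
disabling a connection both cancels any pending reconnect timer and keeps the
reconnect path from re-arming it. Structure and names are illustrative only,
not the actual rpc_clnt code.

/* Illustrative stand-in for rpc_clnt's connection state; not GlusterFS code. */
#include <pthread.h>
#include <stdio.h>

struct conn {
        pthread_mutex_t lock;
        int             disabled;   /* set while the connection is disabled */
        int             timer_set;  /* stands in for conn->timer */
};

static void
conn_reconnect (struct conn *c)
{
        pthread_mutex_lock (&c->lock);
        {
                if (!c->disabled) {
                        /* Re-arm the timer only while enabled. */
                        c->timer_set = 1;
                        printf ("reconnect timer armed\n");
                }
        }
        pthread_mutex_unlock (&c->lock);
}

static void
conn_disable (struct conn *c)
{
        pthread_mutex_lock (&c->lock);
        {
                c->disabled = 1;
                if (c->timer_set) {
                        /* Plays the role of gf_timer_call_cancel(). */
                        c->timer_set = 0;
                        printf ("reconnect timer cancelled\n");
                }
        }
        pthread_mutex_unlock (&c->lock);
}

static void
conn_enable (struct conn *c)
{
        pthread_mutex_lock (&c->lock);
        {
                c->disabled = 0;
        }
        pthread_mutex_unlock (&c->lock);
        conn_reconnect (c);
}

int
main (void)
{
        struct conn c = { .disabled = 0, .timer_set = 0 };

        pthread_mutex_init (&c.lock, NULL);
        conn_enable (&c);    /* arms the reconnect timer */
        conn_disable (&c);   /* cancels it and blocks re-arming */
        conn_reconnect (&c); /* no-op while disabled */
        pthread_mutex_destroy (&c.lock);
        return 0;
}

Once the flag is set under the same lock that guards the timer, a reconnect
path that races with the disable can no longer re-arm the timer after it has
been cancelled, which is one part of how the patch avoids the repeated
connection-failure messages described in its subject.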