That is correct. The volume was 9 servers w/ 3x replication, and I wanted
to move all data off of one of the sets of 3 servers; those are the ones I
removed w/ remove-brick start and commit. Per the RH documentation, this
should not be an issue as long as the remove-brick process completes before
it is committed.
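For context, before the removal the volume layout was roughly the following
(the hostnames and brick paths here are placeholders, not the real ones):

  # gluster volume info store1
  Volume Name: store1
  Type: Distributed-Replicate
  Number of Bricks: 3 x 3 = 9
  Bricks:
  Brick1: gfs01:/bricks/store1
  ...
  Brick9: gfs09:/bricks/store1

After removing one replica set of 3 bricks it should report 2 x 3 = 6.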
*Stephen Repetski*
On Thu, Jul 23, 2015 at 5:17 PM, Raz Tamir <ratamir(a)redhat.com> wrote:
As far as I can see from the logs, you removed 3 bricks. Can you
confirm?
Thanks in advance,
Raz Tamir
ratamir(a)redhat.com
RedHat Israel
RHEV-M QE Storage team
------------------------------
*From: *"Stephen Repetski" <srepetsk(a)srepetsk.net>
*To: *"Raz Tamir" <ratamir(a)redhat.com>
*Cc: *"users" <users(a)ovirt.org>
*Sent: *Friday, July 24, 2015 12:01:16 AM
*Subject: *Re: [ovirt-users] oVirt not starting primary storage domain
Hi Raz:
I'm using vdsm-4.16.14-0.el6.x86_64 with glusterfs-3.6.2-1.el6.x86_64 on
oVirt 3.5.2.
I removed the bricks with: gluster remove-brick store1 replica 3 $1 $2 $3
start; gluster remove-brick store1 replica 3 $1 $2 $3 commit. Between the
two commands I used the 'status' option to verify that all nodes were
marked as 'completed' before running 'commit'.
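Spelled out with placeholder brick paths in place of $1 $2 $3 (these
hostnames are made up, not the real ones), the sequence was along these
lines:

  gluster volume remove-brick store1 replica 3 gfs07:/bricks/store1 gfs08:/bricks/store1 gfs09:/bricks/store1 start
  # re-run until every node shows 'completed' with 0 failures
  gluster volume remove-brick store1 replica 3 gfs07:/bricks/store1 gfs08:/bricks/store1 gfs09:/bricks/store1 status
  gluster volume remove-brick store1 replica 3 gfs07:/bricks/store1 gfs08:/bricks/store1 gfs09:/bricks/store1 commit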
Also, the two log files you requested are available here:
http://srepetsk.net/files/engine.log.20150723 &&
http://srepetsk.net/files/etc-glusterfs-glusterd.vol.log.20150723
The gluster log file is from one of the servers backing a different brick in
the primary (aka "store1") datacenter/gluster volume, so that server was,
and still is, part of the volume.
Thanks,
Stephen
*Stephen Repetski*
Rochester Institute of Technology '13 |
http://srepetsk.net
On Thu, Jul 23, 2015 at 4:28 PM, Raz Tamir <ratamir(a)redhat.com> wrote:
> Hi Stephen,
> 1) Can you please provide the vdsm and gluster versions?
> 2) How did you remove the brick?
> 3) Can you please attach the glusterfs log located under /var/log ?
>
> * Just for info - there is no support for gluster if the volume is not a
> 3-way replica
>
>
>
>
> Thanks in advance,
> Raz Tamir
> ratamir(a)redhat.com
> RedHat Israel
> RHEV-M QE Storage team
>
> ------------------------------
> *From: *"Stephen Repetski" <srepetsk(a)srepetsk.net>
> *To: *"users" <users(a)ovirt.org>
> *Sent: *Thursday, July 23, 2015 11:08:57 PM
> *Subject: *[ovirt-users] oVirt not starting primary storage domain
>
>
> Hi all,
>
> I recently made a change to the gluster volume backing my primary
> storage domain (removed 1 of the 3 replica sets, each w/ 3x replication), and now
> oVirt fails to activate the primary storage domain. After attempting to
> start the domain, the engine goes through its various communications
> with VDSM, but then fails out with a "Sanlock resource read failure" -
>
https://gist.githubusercontent.com/srepetsk/83ef13ddcf1e690a398e/raw/ada3...
>
> Is there a way to find out more about what this SpmStatusVDS error is and
> what might be causing it?
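> The only extra digging I could think of would be something along these lines
> on the SPM candidate host (the mount path below is a guess at the usual oVirt
> layout for a gluster storage domain, with placeholder names):
>
>   sanlock client status
>   sanlock direct dump /rhev/data-center/mnt/glusterSD/gfs01:_store1/<domain-uuid>/dom_md/ids
>   tail -n 50 /var/log/sanlock.log
>
> but I'm not sure how to interpret what sanlock reports there.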
>
> Thanks,
> Stephen
>
> *Stephen Repetski*
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
>
http://lists.ovirt.org/mailman/listinfo/users
>
>