[ovirt-users] 3.6: GlusterFS data domain failover?

Wee Sritippho wee.s at forest.go.th
Mon Nov 23 08:53:09 UTC 2015


Hi Nir,

Thank you for spending your time answering all of the questions. I 
really appreciate it.

This morning I was somehow able to edit the existing GlusterFS data
domain and add a new GlusterFS data domain. So I created a
replica-3 volume and added it as a new data domain, then removed the
replica-2 volume as you suggested.
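
For reference, the replica-3 volume was created along these lines (a
sketch reconstructed from the bricks shown in 'gluster-status.txt'
below; 'force' is needed because two of the bricks sit on the same
host):

```shell
# Create a replica-3 volume from the three bricks listed in the
# status output below. 'force' is required since engine2 carries two
# of the bricks, which gluster would otherwise reject as reducing
# redundancy.
gluster volume create gv1 replica 3 \
    engine1.ovirt.forest.go.th:/home/brick1/gv1 \
    engine2.ovirt.forest.go.th:/home/brick1/gv1 \
    engine2.ovirt.forest.go.th:/home/brick2/gv1 \
    force
gluster volume start gv1
```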

I thought the problem in the 3rd question was gone and that I wouldn't
be able to reproduce it again. However, after some time, I was able to
reproduce it once more:

On 20/11/2558 21:34, Nir Soffer wrote:
>> 3. Why can't I add another GlusterFS data domain? When I choose
>> 'GlusterFS' as my 'Storage Type' every text field become grayed-out.
> Maybe the selected host is down? Can you create any other storage
> domain?
>
> Nir
This time, I can confirm that both the host and GlusterFS were up, as 
recorded in 'gluster-status.txt'. I'm able to create any other type of 
storage domain except GlusterFS, as recorded in 
'ovirt-cant-add-glusterfs-domain.mp4'.

Wee


-------------- next part --------------
[root@engine2 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick engine1.ovirt.forest.go.th:/home/bric
k1/gv0                                      49152     0          Y       7199
Brick engine2.ovirt.forest.go.th:/home/bric
k1/gv0                                      49152     0          Y       26109
NFS Server on localhost                     2049      0          Y       20906
Self-heal Daemon on localhost               N/A       N/A        Y       20918
NFS Server on 172.16.2.62                   2049      0          Y       7187
Self-heal Daemon on 172.16.2.62             N/A       N/A        Y       7194

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick engine1.ovirt.forest.go.th:/home/bric
k1/gv1                                      49153     0          Y       7204
Brick engine2.ovirt.forest.go.th:/home/bric
k1/gv1                                      49153     0          Y       32538
Brick engine2.ovirt.forest.go.th:/home/bric
k2/gv1                                      49154     0          Y       32556
NFS Server on localhost                     2049      0          Y       20906
Self-heal Daemon on localhost               N/A       N/A        Y       20918
NFS Server on 172.16.2.62                   2049      0          Y       7187
Self-heal Daemon on 172.16.2.62             N/A       N/A        Y       7194

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks

[root@engine2 ~]# gluster peer status
Number of Peers: 1

Hostname: 172.16.2.62
Uuid: 4c5bb4c3-251f-4724-9add-88124d33b9ad
State: Peer in Cluster (Connected)
Other names:
engine1.ovirt.forest.go.th
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-cant-add-glusterfs-domain.mp4
Type: video/mp4
Size: 444147 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20151123/0c1ddf01/attachment-0001.mp4>

