
Also, I removed /var/lib/glusterd/* (except glusterd.info) on node1; that was the root cause of the whole event in oVirt.
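For context, the cleanup on node1 was roughly the sequence from the peer-rejected article (from memory, so treat it as a sketch; the probe target is simply whichever peer was still healthy):

    service glusterd stop
    cd /var/lib/glusterd
    rm -rf $(ls | grep -v glusterd.info)   # keep only glusterd.info
    service glusterd start
    gluster peer probe 172.16.0.12         # re-probe a healthy peer
    service glusterd restart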
Hi,
1. r710cluster contains node0, node1 and node2.
2. I'm sorry, supervdsm.log wasn't attached. Which node's log do you need?
3. It was from node0.
[root@node1 ~]# gluster peer status
Number of Peers: 1

Hostname: 172.16.0.12
Uuid: 0ccdaa63-758a-43a4-93f2-25214a3cfa12
State: Accepted peer request (Connected)

[root@node1 ~]# gluster volume info
No volumes present
:( After this I ran the same commands on node2:
[root@node2 ~]# gluster peer status
Number of Peers: 2

Hostname: 172.16.0.10
Uuid: 664273d9-a77f-476d-ad09-3e126c4b19a7
State: Peer in Cluster (Connected)

Hostname: 172.16.0.11
Uuid: dc7a1611-f653-4d64-a426-b5ef589de289
State: Accepted peer request (Connected)

[root@node2 ~]# gluster volume info

Volume Name: g2sata
Type: Replicate
Volume ID: 49d76fc8-853e-4c7d-82a5-b12ec98dadd8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/data/sata/brick2
Brick2: 172.16.0.12:/data/sata/brick2
Options Reconfigured:
nfs.disable: on
user.cifs: disable
auth.allow: 172.16.*
storage.owner-uid: 36
storage.owner-gid: 36

Volume Name: g4sata
Type: Replicate
Volume ID: f26ed231-c951-431f-8a2f-e8818b58cfb4
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/data/sata/iso
Brick2: 172.16.0.12:/data/sata/iso
Options Reconfigured:
nfs.disable: off
user.cifs: disable
auth.allow: 172.16.0.*
storage.owner-uid: 36
storage.owner-gid: 36
Many thanks,
Tibor
----- On 17 Aug 2015, at 16:03, Sahina Bose sabose@redhat.com wrote:
From the engine.log, the gluster volumes are queried on host "node1", which returns no volumes.
1. Your cluster "r710cluster1" - which nodes are added to it? node1 alone or node0 and node2 as well?
2. Was the attached supervdsm.log from node1?
3. Which node was the below "gluster volume info" output from? What is the output of "gluster peer status" and "gluster volume info" on node1?
On 08/17/2015 12:49 PM, Demeter Tibor wrote:
Dear Sahina,
Thank you for your reply.
Volume Name: g2sata
Type: Replicate
Volume ID: 49d76fc8-853e-4c7d-82a5-b12ec98dadd8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/data/sata/brick2
Brick2: 172.16.0.12:/data/sata/brick2
Options Reconfigured:
nfs.disable: on
user.cifs: disable
auth.allow: 172.16.*
storage.owner-uid: 36
storage.owner-gid: 36

Volume Name: g4sata
Type: Replicate
Volume ID: f26ed231-c951-431f-8a2f-e8818b58cfb4
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/data/sata/iso
Brick2: 172.16.0.12:/data/sata/iso
Options Reconfigured:
nfs.disable: off
user.cifs: disable
auth.allow: 172.16.0.*
storage.owner-uid: 36
storage.owner-gid: 36
Also, I have attached the logs.
Thanks in advance,
Tibor
----- On 17 Aug 2015, at 8:40, Sahina Bose sabose@redhat.com wrote:
Please provide the output of the "gluster volume info" command, along with vdsm.log & engine.log.
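In case it helps, the default locations are (paths may differ on your setup):

    /var/log/vdsm/vdsm.log               (on the node)
    /var/log/ovirt-engine/engine.log     (on the engine machine)
    gluster volume info                  (run on the node and paste the output)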
There could be a mismatch between the node information in the engine database and gluster. One possible reason is that the gluster server UUID changed on the node, and we will need to see why.
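To check this yourself, compare the UUID glusterd reports on the node with what the engine has stored for that host. Something along these lines - the engine table name here is from memory, so please verify it against your schema:

    # on the node: glusterd's own identity (the UUID= line)
    cat /var/lib/glusterd/glusterd.info

    # on the engine machine: the UUID recorded for each gluster host
    su - postgres -c 'psql engine -c "select server_id, gluster_server_uuid from gluster_server;"'

If the two UUIDs no longer match for the same host, the engine cannot associate the volumes reported by that node.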
On 08/17/2015 12:35 AM, Demeter Tibor wrote:
Hi All,
I have to upgrade oVirt 3.5.0 to 3.5.3. We have a 3-node system, with a gluster replica between 2 of the 3 servers. I had gluster volumes between node0 and node2, but I wanted to create a new volume between node1 and node2. It didn't work; it completely killed my node1, because glusterd would not start. I kept getting the error "gluster peer rejected" (related to node1). I followed this article http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Reje... and it helped: my gluster service works again, but oVirt logged these errors:
Detected deletion of volume g2sata on cluster r710cluster1, and deleted it from engine DB.
Detected deletion of volume g4sata on cluster r710cluster1, and deleted it from engine DB.
And oVirt does not see my gluster volumes anymore.
I've checked with "gluster volume status" and "gluster volume heal g2sata info"; everything seems to be working and my VMs are OK.
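For reference, the commands were simply these, per volume (shown here for g2sata):

    gluster volume status g2sata
    gluster volume heal g2sata info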
How can I re-import my lost volumes into oVirt?
Thanks in advance,
Tibor