
----- Original Message -----
From: "Joop" <jvdwege@xs4all.nl>
To: "Balamurugan Arumugam" <barumuga@redhat.com>
Sent: Tuesday, November 13, 2012 8:21:43 PM
Subject: Re: [Users] oVirt nightly-11-11 and gluster
Hi,
----- Original Message -----
From: "Joop" <jvdwege@xs4all.nl>
To: users@ovirt.org
Sent: Monday, November 12, 2012 5:29:32 PM
Subject: [Users] oVirt nightly-11-11 and gluster
Came across the following error in engine.log after creating a new gluster volume:

2012-11-12 12:39:35,264 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-94) START, GlusterVolumesListVDSCommand(HostName = st01, HostId = 402f987e-2804-11e2-aa60-78e7d1f4ada5), log id: 33d22dbf
2012-11-12 12:39:35,376 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-94) Failed in GlusterVolumesListVDS method, for vds: st01; host: st01.nieuwland.nl
2012-11-12 12:39:35,377 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-94) Command GlusterVolumesListVDS execution failed. Exception: NumberFormatException: For input string: "1 x 2 = 2"
In the case of REPLICATE and STRIPE volume types, we are supposed to get an integer value for the brick count, but I am seeing '1 x 2 = 2'. This could be due to a recent change in the 'gluster volume info' output. Can you send me the details below?

1. output of 'rpm -qa | grep glusterfs'
glusterfs-rdma-3.3.1-2.fc17.x86_64
glusterfs-3.3.1-2.fc17.x86_64
glusterfs-server-3.3.1-2.fc17.x86_64
glusterfs-geo-replication-3.3.1-2.fc17.x86_64
glusterfs-fuse-3.3.1-2.fc17.x86_64
2. output of the 'gluster volume info' command

[root@st02 vdsm]# gluster volume info
Volume Name: testvol
Type: Distribute
Volume ID: eb0eaea0-6507-4cdb-9080-0c42d23fbeca
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: st01.nieuwland.nl:/gluster-test
Brick2: st02.nieuwland.nl:/gluster-test
Volume Name: Data
Type: Replicate
Volume ID: e5f289ab-3580-4974-88a6-42323bdc1674
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: st01.nieuwland.nl:/gluster-data
Brick2: st02.nieuwland.nl:/gluster-data
Options Reconfigured:
auth.allow: *
nfs.disable: off
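The output above shows the root cause: for a Distribute volume the "Number of Bricks" value is a plain integer ("2"), while for a Replicate volume it is an expression ("1 x 2 = 2"), which blows up a naive integer parse. A minimal sketch of a tolerant parser (this is NOT the actual vdsm fix; the function name and error handling are illustrative assumptions) could look like:

```python
import re

def parse_brick_count(line):
    """Parse the brick count from a 'Number of Bricks' line of 'gluster volume info'.

    Handles both the plain form ("Number of Bricks: 2") and the
    replicate/stripe form ("Number of Bricks: 1 x 2 = 2") by taking
    only the final total, avoiding the NumberFormatException seen above.
    """
    value = line.split(":", 1)[1].strip()
    if "=" in value:
        # Replicate/stripe form: keep only the number after '='.
        value = value.split("=", 1)[1].strip()
    if not re.fullmatch(r"\d+", value):
        raise ValueError("unexpected brick count: %r" % value)
    return int(value)

print(parse_brick_count("Number of Bricks: 2"))          # -> 2
print(parse_brick_count("Number of Bricks: 1 x 2 = 2"))  # -> 2
```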
Nice. You'll notice I have two gluster volumes: testvol was made by hand from the management server, and Data was made by oVirt. testvol is a distributed volume and Data a replicated volume, which might explain the wording of the "Number of Bricks" line.
Let me know if I can do more testing!
The fix http://gerrit.ovirt.org/#/c/7951/ was merged into vdsm upstream master yesterday. You could try the latest nightly build.

Thanks,
Regards,
Bala