Hi Vijay,
Gluster was 3.2.7, and I have now upgraded to
glusterfs-server-3.3.1-8.fc17.x86_64.
Now it works fine, but at the moment I have only one host. I wanted to try
with two hosts, since I was facing the "SPM contending continuously" issue
with the dreyou packages on CentOS 6.3; that is why I wanted to try
Fedora 17 with oVirt 3.1.
Once I get one more server, I will continue adding hosts to my cluster.
Thanks,
Jithin
On Tue, Jan 22, 2013 at 9:20 AM, Vijay Bellur <vbellur(a)redhat.com> wrote:
On 01/21/2013 01:50 PM, Kanagaraj Mayilsamy wrote:
> Hi Jithin,
>
> By looking at the logs, it seems you already had a volume named
> 'vol1' in gluster and tried to create another volume with the same
> name from the UI. That is why you were able to see the volume 'vol1'
> even after the creation failed.
>
> I am not sure which version of ovirt-engine you are using. The recent
> releases (3.2) and the current upstream code support reflecting
> pre-existing volumes in the UI, whether they were created via the UI
> or directly from the CLI. With this change, vol1 should have appeared
> in the UI even before your creation attempt.
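> You can confirm which volumes already exist from the CLI on any of the
> hosts before creating one from the UI, for example:
>
>   gluster volume info vol1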
>
> So it looks like there are no issues with the creation of the volume.
> I am not familiar with the mount issues; someone else will help you out.
>
Can you please provide the glusterfs version installed on the host from
which you are trying to mount?
Note that glusterfs 3.3 and 3.4 are not compatible with glusterfs 3.2,
and hence you cannot mix these versions within the cluster or between
clients and servers.
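For example, on each client and server the installed packages and the
running version can be checked with:

  rpm -qa | grep glusterfs
  glusterfs --version

All servers and clients should report the same major version (e.g. 3.3.x
everywhere).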
Thanks,
Vijay
> Thanks,
> Kanagaraj
>
> ----- Original Message -----
>
>> From: "Jithin Raju" <rajujith(a)gmail.com>
>> To: "Kanagaraj Mayilsamy" <kmayilsa(a)redhat.com>, users(a)ovirt.org
>> Sent: Monday, January 21, 2013 1:33:56 PM
>> Subject: Re: [Users] gluster volume creation error
>>
>>
>> Hi Kanagaraj,
>>
>>
>> PFA,
>>
>>
>> gluster version info:
>> glusterfs-geo-replication-3.2.7-2.fc17.x86_64
>> glusterfs-3.2.7-2.fc17.x86_64
>> glusterfs-fuse-3.2.7-2.fc17.x86_64
>> glusterfs-rdma-3.2.7-2.fc17.x86_64
>> vdsm-gluster-4.10.0-10.fc17.noarch
>> glusterfs-server-3.2.7-2.fc17.x86_64
>>
>>
>> Thanks,
>> Jithin
>>
>>
>>
>> On Mon, Jan 21, 2013 at 1:15 PM, Kanagaraj Mayilsamy
>> <kmayilsa(a)redhat.com> wrote:
>>
>>
>>
>>
>>
>> ----- Original Message -----
>>
>>> From: "Jithin Raju" < rajujith(a)gmail.com >
>>> To: users(a)ovirt.org
>>> Sent: Monday, January 21, 2013 1:10:15 PM
>>> Subject: [Users] gluster volume creation error
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>> Volume creation is failing in a posixfs data center.
>>>
>>>
>>> While trying to create a distribute volume, the web UI exits with the
>>> error "creation of volume failed", and the volume is not listed in the
>>> web UI.
>>>
>> Can you please provide the engine.log and vdsm.log (from all the hosts
>> in the cluster)?
>>
>>
>>
>>> From the backend I can see that the volume got created:
>>>
>>>
>>> gluster volume info
>>>
>>>
>>>
>>> Volume Name: vol1
>>> Type: Distribute
>>> Status: Created
>>> Number of Bricks: 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: x.250.76.71:/data
>>> Brick2: x.250.76.70:/data
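>>> Note that the status above is still "Created". As far as I understand,
>>> a gluster volume normally has to be started before it can be mounted:
>>>
>>>   gluster volume start vol1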
>>>
>>>
>>> When I try to mount the volume manually to /mnt, it does not print
>>> any message and the exit status is zero.
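>>> The mount was invoked roughly like this (hostname and volume as in
>>> the listing below):
>>>
>>>   mount -t glusterfs fig:/vol1 /mnt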
>>>
>>>
>>> The resulting mount entry is listed below:
>>>
>>>
>>> fig:/vol1 on /mnt type fuse.glusterfs
>>> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>>
>>>
>>>
>>> When I run df, it gives me the following:
>>> "df: `/mnt': Transport endpoint is not connected"
>>>
>>>
>>>
>>> So I tailed
>>> "/var/log/glusterfs/etc-glusterfs-glusterd.vol.log":
>>>
>>>
>>>
>>> [2013-01-21 11:30:07.828518] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:1009)
>>> [2013-01-21 11:30:10.839882] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:1007)
>>> [2013-01-21 11:30:13.852374] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:1005)
>>> [2013-01-21 11:30:16.864634] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:1003)
>>> [2013-01-21 11:30:19.875986] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:1001)
>>> [2013-01-21 11:30:22.886854] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:999)
>>> [2013-01-21 11:30:25.898840] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:997)
>>> [2013-01-21 11:30:28.910000] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:995)
>>> [2013-01-21 11:30:31.922336] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:993)
>>> [2013-01-21 11:30:34.934772] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:991)
>>> [2013-01-21 11:30:37.946215] W
>>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>>> reading from socket failed. Error (Transport endpoint is not
>>> connected), peer (135.250.76.70:989)
>>>
>>>
>>>
>>>
>>> I just wanted to know what I am doing wrong here.
>>>
>>>
>>> package details:
>>>
>>>
>>>
>>> vdsm-python-4.10.0-10.fc17.x86_64
>>> vdsm-cli-4.10.0-10.fc17.noarch
>>> vdsm-xmlrpc-4.10.0-10.fc17.noarch
>>> vdsm-4.10.0-10.fc17.x86_64
>>> vdsm-gluster-4.10.0-10.fc17.noarch
>>>
>>>
>>> SELinux is permissive, and I have flushed iptables.
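>>> For reference, both can be confirmed with:
>>>
>>>   getenforce
>>>   iptables -L -n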
>>>
>>>
>>> Thanks,
>>> Jithin