[Users] [node-devel] GlusterFS on oVirt node
Saša Friedrich
sasa.friedrich at bitlab.si
Fri Oct 25 06:42:02 UTC 2013
Thanks, I'll try that (and report). And I think the data in that path
doesn't need to be persistent, because when I reboot the node and make
the whole fs rw again, the host gets attached to oVirt with no errors.
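
(For reference, after a reboot I make the fs writable again with
something like:

# mount -o remount,rw /

which of course only lasts until the next reboot.)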
One more thing: the messages log file keeps filling with errors (every 2 seconds):
Oct 25 08:32:50 localhost python[462]: service-status:
ServiceNotExistError: Tried all alternatives but failed:
Oct 25 08:32:50 localhost python[462]: ServiceNotExistError:
gluster-swift-object is not a SysV service
Oct 25 08:32:50 localhost python[462]: ServiceNotExistError:
gluster-swift-object is not native systemctl service
...
Oct 25 08:33:18 localhost python[462]: service-is-managed:
ServiceNotExistError: Tried all alternatives but failed:
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: smb is not
native systemctl service
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: samba is
not native systemctl service
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: smb is not
a SysV service
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: samba is
not a SysV service
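
To check by hand, you can query systemd and SysV directly, e.g.:

# systemctl status gluster-swift-object.service
# service smb status

and, just as the log says, none of these services exist on this host.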
So I somewhat lost hope in Node and tried making a host from fc19
(minimal installation). I installed it and attached the host in oVirt
Engine, but... the errors are the same!
Any clue?
tnx
On 25. 10. 2013 at 08:27, Fabian Deutsch wrote:
> On Thursday, 24.10.2013, at 19:59 +0200, Saša Friedrich wrote:
>> I reinstalled the node and remounted / rw, then I checked the fs before
>> activating the host (in oVirt Engine) and after (to see which files had
>> been changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is
>> there any way I can change the node so this directory is mounted rw? And
>> to persist this setting after a reboot.
> Hey,
>
> do you know if the data in /var/lib/glusterd needs to survive reboots?
> If not, then this patch http://gerrit.ovirt.org/20540 will probably fix
> the problem. The patch just adds that path to /etc/rwtab.d/ovirt - this
> will tell the read-only root to make that path writable at boot.
> Due to the nature of Node you will need to build your own image or wait
> until the patch lands in an image. Editing the rwtab file by hand at
> runtime won't have an effect.
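>
> For reference, an rwtab entry is just a type plus a path, one per line,
> so the added line looks roughly like:
>
>   files /var/lib/glusterd
>
> (the exact type used in the patch may differ).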
>
> Greetings
> fabian
>
>> tnx
>>
>>
>>
>> On 24. 10. 2013 at 15:28, Fabian Deutsch wrote:
>>> On Thursday, 24.10.2013, at 15:12 +0200, Saša Friedrich wrote:
>>>> Progress report:
>>>>
>>>> I remounted the fs on the oVirt nodes rw and started glusterd with no
>>>> errors. Then I activated the hosts in oVirt Engine. Also no errors! Yei!
>>> Yey! :)
>>> Yes, mount -oremount,rw makes the FS temporarily writeable. But you
>>> will have issues as soon as you reboot.
>>> We'll need to investigate which paths need to be persisted (so the data
>>> written to them survives a reboot) and which only need to be writable,
>>> e.g. for temporary data.
>>>
>>> Would you mind opening a bug for this?
>>>
>>> Greetings
>>> fabian
>>>
>>>> Then I created a volume (replicated), added two bricks (the oVirt
>>>> nodes), and started the volume. Seems fine. I checked on node1:
>>>>
>>>> # gluster volume info
>>>>
>>>> Volume Name: data_vol
>>>> Type: Replicate
>>>> Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269
>>>> Status: Started
>>>> Number of Bricks: 1 x 2 = 2
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: 192.168.254.124:/data/gluster
>>>> Brick2: 192.168.254.141:/data/gluster
>>>> Options Reconfigured:
>>>> storage.owner-gid: 36
>>>> storage.owner-uid: 36
>>>> auth.allow: *
>>>> user.cifs: on
>>>> nfs.disable: off
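>>>>
>>>> For reference, the equivalent gluster CLI for a volume like this would
>>>> be roughly:
>>>>
>>>> # gluster volume create data_vol replica 2 \
>>>>     192.168.254.124:/data/gluster 192.168.254.141:/data/gluster
>>>> # gluster volume start data_vol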
>>>>
>>>>
>>>>
>>>> WORKING!
>>>>
>>>>
>>>> BUT... now I cannot create a storage domain. When I hit the OK button
>>>> in the "New Storage Domain" dialog, the process runs for a very long
>>>> time. Eventually it stops and returns "Error while executing action Add
>>>> Storage Connection: Network error during communication with the Host".
>>>>
>>>> I'm stuck again :-( and in need of HELP!
>>> Could you please provide the logfiles mentioned here:
>>> http://www.ovirt.org/Node_Troubleshooting#Log_Files
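>>>
>>> On the host, vdsm's log right after reproducing the error is usually
>>> the most telling, e.g.:
>>>
>>> # tail -n 200 /var/log/vdsm/vdsm.log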
>>>
>>> Greetings
>>> fabian
>>>
>>>> tnx
>>>>
>>>>
>>>>
>>>>
>>>> On 24. 10. 2013 at 13:02, Mike Burns wrote:
>>>>
>>>>> Adding to node-devel list and users list.
>>>>>
>>>>> -- Mike
>>>>>
>>>>> Apologies for top posting and typos. This was sent from a mobile device.
>>>>>
>>>>> Saša Friedrich <sasa.friedrich at bitlab.si> wrote:
>>>>>
>>>>> Hello!
>>>>>
>>>>> According to http://www.ovirt.org/Node_Glusterfs_Support, glusterfs on
>>>>> an oVirt node should be supported. But I am having some difficulties
>>>>> implementing it.
>>>>>
>>>>>
>>>>> I installed oVirt (nested kvm - home testing) following "Up and Running
>>>>> with oVirt 3.3", using Fedora 19.
>>>>> The install went well. Everything is working fine.
>>>>>
>>>>> Then I created two hosts (nested kvm - oVirt Node fc19 - just for
>>>>> testing) and added them in oVirt.
>>>>> Super fine - working!
>>>>>
>>>>> Now I'd like to use these hosts as glusterfs nodes too. According to
>>>>> Google (I've been googling for two days now) it's possible, but I
>>>>> cannot find any usable how-to.
>>>>>
>>>>> 1. I removed these two hosts from the default data center
>>>>> 2. I created a new data center (type: GlusterFS)
>>>>> 3. I created a new cluster ("Enable Gluster Service" checked)
>>>>> 4. I added a host
>>>>> 5. Now I get an error message in Events: "Could not find gluster uuid
>>>>> of server host1 on Cluster Cluster1."
>>>>>
>>>>>
>>>>> If I ssh to my host (fc19 node), glusterd.service is not running. If I
>>>>> try to start it, it returns an error.
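>>>>>
>>>>> I'm starting it the usual way, i.e. something like:
>>>>>
>>>>> # systemctl start glusterd.service
>>>>>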
>>>>>
>>>>> here is the log:
>>>>> [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main]
>>>>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0
>>>>> (/usr/sbin/glusterd -p /run/glusterd.pid)
>>>>> [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using
>>>>> /var/lib/glusterd as working directory
>>>>> [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init]
>>>>> 0-socket.management: SSL support is NOT enabled
>>>>> [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init]
>>>>> 0-socket.management: using system polling thread
>>>>> [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create]
>>>>> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
>>>>> [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management:
>>>>> Failed to initialize IB Device
>>>>> [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load]
>>>>> 0-rpc-transport: 'rdma' initialization failed
>>>>> [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create]
>>>>> 0-rpc-service: cannot create listener, initing the transport failed
>>>>> [2013-10-24 09:52:25.979890] I
>>>>> [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd:
>>>>> geo-replication module not installed in the system
>>>>> [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve]
>>>>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info,
>>>>> error: No such file or directory
>>>>> [2013-10-24 09:52:25.980026] E
>>>>> [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get
>>>>> store handle!
>>>>> [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve]
>>>>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info,
>>>>> error: No such file or directory
>>>>> [2013-10-24 09:52:25.980060] E
>>>>> [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store
>>>>> handle!
>>>>> [2013-10-24 09:52:25.980074] I
>>>>> [glusterd-store.c:1348:glusterd_restore_op_version] 0-management:
>>>>> Detected new install. Setting op-version to maximum : 2
>>>>> [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-:
>>>>> Failed to open file: /var/lib/glusterd/options, error: Read-only file system
>>>>>
>>>>>
>>>>> According to the log, /var/lib/glusterd/glusterd.info is missing and
>>>>> cannot be created because the fs is mounted "ro".
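>>>>>
>>>>> This is easy to confirm from a shell on the node, e.g.:
>>>>>
>>>>> # mount | grep ' / '
>>>>> # touch /var/lib/glusterd/test
>>>>>
>>>>> where the touch fails with "Read-only file system".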
>>>>>
>>>>>
>>>>> Now I'm stuck!
>>>>> What am I missing?
>>>>>
>>>>>
>>>>> tnx for help!
>>>> _______________________________________________
>>>> node-devel mailing list
>>>> node-devel at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/node-devel
>