From mburns at redhat.com Tue Oct 8 14:04:44 2013 From: mburns at redhat.com (Mike Burns) Date: Tue, 8 Oct 2013 10:04:44 -0400 (EDT) Subject: [node-devel] oVirt Node weekly meeting Message-ID: <325702494.3272266.1381241084794.JavaMail.root@redhat.com> The following meeting has been modified: Subject: oVirt Node weekly meeting Organizer: "Mike Burns" Location: #ovirt on irc.oftc.net Time: 10:00:00 AM - 10:30:00 AM GMT -05:00 US/Canada Eastern [MODIFIED] Recurrence : Every Tuesday End by Oct 7, 2013 Effective Nov 22, 2011 Invitees: aliguori at linux.vnet.ibm.com; anthony at codemonkey.ws; mvanhorssen at vluchtelingenwerk.nl; node-devel at ovirt.org; otavio.ferranti at eldorado.org.br; swonderl at redhat.com; whenry at redhat.com; leiwang at redhat.com *~*~*~*~*~*~*~*~*~* Fabian will be sending a new version of the meeting soon. Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 1950 bytes Desc: not available URL: From fabiand at redhat.com Tue Oct 8 14:31:38 2013 From: fabiand at redhat.com (Fabian Deutsch) Date: Tue, 08 Oct 2013 16:31:38 +0200 Subject: [node-devel] oVirt Node Weekly Meeting Minutes -- 2013-10-08 Message-ID: <1381242698.2771.2.camel@fdeutsch-laptop.local> ================================= #ovirt: oVirt Node Weekly Meeting ================================= Meeting started by fabiand at 14:00:44 UTC. The full logs are available at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-10-08-14.00.log.html . 
Meeting summary --------------- * agenda (fabiand, 14:02:29) * 3.0.x updates (fabiand, 14:02:39) * 3.1.0 planning (fabiand, 14:02:56) * other topics (fabiand, 14:03:04) * 3.0.x updates (fabiand, 14:03:13) * no urgent update needed for 3.3.0.1 (fabiand, 14:06:21) * 3.3.1 vdsm builds are scheduled for mid-next week (fabiand, 14:06:50) * oVirt 3.3.1 release tentatively set for end of October (mburns, 14:07:45) * A couple of node specific patches should go into the next 3.0 based build - most of them are already submitted (fabiand, 14:08:54) * 3.1.0 planning (fabiand, 14:09:38) * Release planning page for 3.1: http://www.ovirt.org/Node_3.1_release-management (fabiand, 14:10:47) * PackageRefactoring owned by mburns - page created (fabiand, 14:10:59) * BuildtoolMigration owned by jboggs - page done (fabiand, 14:12:34) * StorageAndInstallerModuleRewrite owned by fabiand - page created (fabiand, 14:13:42) * OpenVSwitchSupport owned by .. rbarry? - page missing (fabiand, 14:14:44) * LINK: http://www.ovirt.org/Node_Openvswitch_Integration (rbarry, 14:16:26) * ACTION: rbarry to fix the openvswitch link on the planning page (fabiand, 14:16:48) * PluginLiveInstall owned by jboggs - page created (fabiand, 14:17:03) * FeaturePublishing owned by fabiand - page created (fabiand, 14:18:27) * i18n - patches from haibo. Can be seen as a feature, but no page around yet (fabiand, 14:20:39) * ACTION: fabiand to create page for i18n feature (fabiand, 14:21:09) * other topics (fabiand, 14:21:26) * mburns going to be taking a step back from the oVirt Node project (mburns, 14:21:55) * fabiand will take over driving oVirt Node (mburns, 14:22:08) Meeting ended at 14:24:30 UTC. 
Action Items
------------
* rbarry to fix the openvswitch link on the planning page
* fabiand to create page for i18n feature

Action Items, by person
-----------------------
* fabiand
  * fabiand to create page for i18n feature
* rbarry
  * rbarry to fix the openvswitch link on the planning page
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* fabiand (66)
* mburns (17)
* rbarry (8)
* jboggs (6)
* ovirtbot (5)
* sbonazzo (1)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

From fdeutsch at redhat.com Thu Oct 17 15:24:16 2013
From: fdeutsch at redhat.com (Fabian Deutsch)
Date: Thu, 17 Oct 2013 11:24:16 -0400 (EDT)
Subject: [node-devel] oVirt Node weekly meeting
Message-ID: <1072328763.19997137.1382023456169.JavaMail.root@redhat.com>

The following is a new meeting request:

Subject: oVirt Node weekly meeting
Organiser: "Fabian Deutsch"
Location: irc://irc.oftc.net#ovirt
Time: 4:00:00 PM - 4:30:00 PM GMT +01:00 Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
Recurrence : Every Tuesday No end date Effective 15 Oct, 2013
Invitees: node-devel at ovirt.org
*~*~*~*~*~*~*~*~*~*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: meeting.ics
Type: text/calendar
Size: 1511 bytes
Desc: not available
URL:

From fabiand at redhat.com Mon Oct 21 17:45:41 2013
From: fabiand at redhat.com (Fabian Deutsch)
Date: Mon, 21 Oct 2013 19:45:41 +0200
Subject: [node-devel] Needed: Node and Engine cooperation
Message-ID: <1382377541.2828.45.camel@fdeutsch-laptop.local>

Hey,

with the extraction of the oVirt Engine / VDSM specific bits from Node in its 3.0 release, oVirt Node became unaware of when it is being managed. Pre-3.0 Node (its TUI) had specific knowledge about what configuration files existed when it was registered to Engine. This is not the case in Node 3.0 anymore. And this leads to problems, e.g.
a user removing Engine's network layout. A new way is needed to pass information between the management instance and Node's core. This information is needed e.g. to prevent the user from accidentally destroying Engine's network layout on a Node.

I've opened a bug [0] to suggest a way of sharing this kind of information. The idea is that Node and the management instance - Engine - share a set of common configuration keys in /etc/default/ovirt to pass the relevant bits to Node. For now I have thought about these three keys:

OVIRT_MANAGED_BY=
This key is used to (a) signal that the Node is being managed and (b) signal who is managing this node.

OVIRT_MANAGED_IFNAMES=[<ifname>,<ifname>,...]
This key is used to specify a number (comma-separated list) of ifnames which are managed and for which the TUI shall display some information (IP, ...). This can also be used by the TUI to decide not to offer NIC configuration to the user.

OVIRT_MANAGED_LOCKED_PAGES=[<page>,<page>,...]
(Future) A list of pages which shall be locked, e.g. because the management instance is configuring that aspect (e.g. networking or logging).

The third one (OVIRT_MANAGED_LOCKED_PAGES) needs tighter integration and might be relevant in the future, but the first two should really be implemented quickly for the reasons given above.

It is quite late in the development process, but it is probably worth thinking about getting this into 3.3.1, to prevent all sorts of (accidental) user-driven collisions between Node and Engine.

Thoughts?

Greetings
fabian

---
[0] https://bugzilla.redhat.com/show_bug.cgi?id=1021647

From mburns at redhat.com Thu Oct 24 11:02:29 2013
From: mburns at redhat.com (Mike Burns)
Date: Thu, 24 Oct 2013 07:02:29 -0400 (EDT)
Subject: [node-devel] GlusterFS on oVirt node
In-Reply-To: <5268FB3E.3080903@bitlab.si>
References: <5268FB3E.3080903@bitlab.si>
Message-ID:

Adding to node-devel list and users list.

-- Mike

Apologies for top posting and typos. This was sent from a mobile device.

Saša Friedrich wrote:

Hello!
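Going back to the keys Fabian proposed above: a minimal sketch of how they could be written and then consumed by the TUI side. Everything here is illustrative - the values, the engine hostname, and the use of /tmp in place of /etc/default/ovirt are assumptions, not part of the proposal.

```shell
# Sketch: the management instance (Engine) drops its keys into
# /etc/default/ovirt; /tmp stands in for that path here, and all
# values are made up for illustration.
cat > /tmp/ovirt.defaults <<'EOF'
OVIRT_MANAGED_BY="oVirt Engine (engine.example.com)"
OVIRT_MANAGED_IFNAMES="em1,ovirtmgmt"
EOF

# A TUI-side consumer could source the file and adjust its behaviour:
. /tmp/ovirt.defaults

if [ -n "$OVIRT_MANAGED_BY" ]; then
    echo "This node is managed by: $OVIRT_MANAGED_BY"
    # Walk the comma-separated list of managed interfaces, e.g. to show
    # their IPs and to stop offering NIC configuration for them.
    IFS=','
    for nic in $OVIRT_MANAGED_IFNAMES; do
        echo "managed interface: $nic"
    done
    unset IFS
fi
```

Keeping the file in plain KEY=value shell syntax would let both shell tooling and the TUI read it without inventing a new format.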
According to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on oVirt Node should be supported. But I have some difficulties implementing it.

I installed oVirt (nested kvm - home testing) following "Up and Running with oVirt 3.3" using Fedora 19.
Install went well. Everything is working fine.

Now I created two hosts (nested kvm - oVirt Node fc19 - just for testing) and added them in oVirt.
Super fine - working!

Now I'd like to use these hosts as glusterfs nodes too. According to google (I'm googling for two days now) it's possible, but I cannot find any usable how-to.

1. I removed these two hosts from the default data center
2. I created a new data center (type: GlusterFS)
3. I created a new cluster (Enable Gluster Service checked)
4. I added a host
5. Now I get an error message in events: "Could not find gluster uuid of server host1 on Cluster Cluster1."

If I ssh to my host (fc19 node), glusterd.service is not running. If I try to run it, it returns an error.

Here is the log:
[2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 (/usr/sbin/glusterd -p /run/glusterd.pid)
[2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using /var/lib/glusterd as working directory
[2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] 0-socket.management: SSL support is NOT enabled
[2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] 0-socket.management: using system polling thread
[2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: Failed to initialize IB Device
[2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2013-10-24 09:52:25.979890] I
[glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, error: No such file or directory
[2013-10-24 09:52:25.980026] E [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get store handle!
[2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, error: No such file or directory
[2013-10-24 09:52:25.980060] E [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store handle!
[2013-10-24 09:52:25.980074] I [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 2
[2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-: Failed to open file: /var/lib/glusterd/options, error: Read-only file system

According to the log, /var/lib/glusterd/glusterd.info is missing and cannot be created because the fs is mounted "ro".

Now I'm stuck!
What am I missing?

tnx for help!

From sasa.friedrich at bitlab.si Thu Oct 24 13:12:31 2013
From: sasa.friedrich at bitlab.si (Saša Friedrich)
Date: Thu, 24 Oct 2013 15:12:31 +0200
Subject: [node-devel] GlusterFS on oVirt node
In-Reply-To:
References: <5268FB3E.3080903@bitlab.si>
Message-ID: <52691CBF.8050709@bitlab.si>

Progress report:

I remounted the fs on the oVirt nodes rw and started glusterd with no errors. Then I activated the hosts in oVirt Engine. Also no errors! Yei!

Then I created a volume (replication), added two bricks (the oVirt nodes), and started the volume. Seems fine.
I checked on node1:

# gluster volume info

Volume Name: data_vol
Type: Replicate
Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.254.124:/data/gluster
Brick2: 192.168.254.141:/data/gluster
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: *
user.cifs: on
nfs.disable: off

WORKING!

BUT... Now I cannot create a storage domain. When I hit the OK button on the "New storage domain" dialog, the process runs for a very long time. Eventually it stops and returns "Error while executing action Add Storage Connection: Network error during communication with the Host".

I'm stuck again :-( in need for HELP!

tnx

On 24. 10. 2013 at 13:02, Mike Burns wrote:
> Adding to node-devel list and users list.
>
> -- Mike
>
> Apologies for top posting and typos. This was sent from a mobile device.
>
> Saša Friedrich wrote:
>
> Hello!
>
> According to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on
> oVirt Node should be supported. But I have some difficulties
> implementing it.
>
>
> I installed oVirt (nested kvm - home testing) following "Up and Running
> with oVirt 3.3" using Fedora 19.
> Install went well. Everything is working fine.
>
> Now I created two hosts (nested kvm - oVirt Node fc19 - just for
> testing) and added them in oVirt.
> Super fine - working!
>
> Now I'd like to use these hosts as glusterfs nodes too. According to google
> (I'm googling for two days now) it's possible, but I cannot find any
> usable how-to.
>
> 1. I removed these two hosts from the default data center
> 2. I created a new data center (type: GlusterFS)
> 3. I created a new cluster (Enable Gluster Service checked)
> 4. I added a host
> 5. Now I get an error message in events: "Could not find gluster uuid of
> server host1 on Cluster Cluster1."
>
>
> If I ssh to my host (fc19 node), glusterd.service is not running.
If I > try to run it It returns error > > here is the log: > [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] > 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 > (/usr/sbin/glusterd -p /run/glusterd.pid) > [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using > /var/lib/glusterd as working directory > [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] > 0-socket.management: SSL support is NOT enabled > [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] > 0-socket.management: using system polling thread > [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] > 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) > [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: > Failed to initialize IB Device > [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] > 0-rpc-transport: 'rdma' initialization failed > [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] > 0-rpc-service: cannot create listener, initing the transport failed > [2013-10-24 09:52:25.979890] I > [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: > geo-replication module not installed in the system > [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] > 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > error: No such file or directory > [2013-10-24 09:52:25.980026] E > [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get > store handle! > [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] > 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > error: No such file or directory > [2013-10-24 09:52:25.980060] E > [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store > handle! > [2013-10-24 09:52:25.980074] I > [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: > Detected new install. 
Setting op-version to maximum : 2 > [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-: > Failed to open file: /var/lib/glusterd/options, error: Read-only file system > > > Acording to log /var/lib/glusterd/glusterd.info is missing and can not > be created because fs is mounted "ro". > > > Now I'm stuck! > What am I missing? > > > tnx for help! -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabiand at redhat.com Thu Oct 24 13:23:58 2013 From: fabiand at redhat.com (Fabian Deutsch) Date: Thu, 24 Oct 2013 15:23:58 +0200 Subject: [node-devel] GlusterFS on oVirt node In-Reply-To: References: <5268FB3E.3080903@bitlab.si> Message-ID: <1382621038.2828.7.camel@fdeutsch-laptop.local> Am Donnerstag, den 24.10.2013, 07:02 -0400 schrieb Mike Burns: > Adding to node-devel list and users list. > > -- Mike > > Apologies for top posting and typos. This was sent from a mobile device. > > Sa?a Friedrich wrote: > > Hello! > > Acording to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on > ovirt node should be supported. But I have some difficulties to > implement it. > > > I installed ovirt (nested kvm - home testing) following "Up and Running > with oVirt 3.3) using Fedora19 > Install went well. Everything is working fine. > > Now I created two hosts (nested kvm - ovirt node fc19 - just for > testing) and added them in oVirt. > Super fine - working! > > Now I'd like to use this hosts as glustefs nodes too. Acording to google > (I'm googling for two days now) I'ts possible, but I can not find any > usable how-to > > 1. I removed these two hosts from default data center > 2. I created new data center (type: GlusterFS) > 3. I created new cluster (Enable Gluster Service checked) > 4. I added host > 5. Now I get error message in events: "Could not find gluster uuid of > server host1 on Cluster Cluster1." > > > If I ssh to my host (fc19 node) glusterd.service is not running. 
If I > try to run it It returns error > > here is the log: > [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] > 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 > (/usr/sbin/glusterd -p /run/glusterd.pid) > [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using > /var/lib/glusterd as working directory > [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] > 0-socket.management: SSL support is NOT enabled > [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] > 0-socket.management: using system polling thread > [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] > 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) > [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: > Failed to initialize IB Device > [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] > 0-rpc-transport: 'rdma' initialization failed > [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] > 0-rpc-service: cannot create listener, initing the transport failed > [2013-10-24 09:52:25.979890] I > [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: > geo-replication module not installed in the system > [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] > 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > error: No such file or directory > [2013-10-24 09:52:25.980026] E > [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get > store handle! > [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] > 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > error: No such file or directory > [2013-10-24 09:52:25.980060] E > [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store > handle! > [2013-10-24 09:52:25.980074] I > [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: > Detected new install. 
Setting op-version to maximum : 2
> [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-:
> Failed to open file: /var/lib/glusterd/options, error: Read-only file system

Hey Saša,

it looks like we are missing a couple of paths which need either to be writable or, beyond that, persisted.

Greetings
fabian

>
> Acording to log /var/lib/glusterd/glusterd.info is missing and can not
> be created because fs is mounted "ro".
>
>
> Now I'm stuck!
> What am I missing?
>
>
> tnx for help!
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel

From fabiand at redhat.com Thu Oct 24 13:28:17 2013
From: fabiand at redhat.com (Fabian Deutsch)
Date: Thu, 24 Oct 2013 15:28:17 +0200
Subject: [node-devel] GlusterFS on oVirt node
In-Reply-To: <52691CBF.8050709@bitlab.si>
References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si>
Message-ID: <1382621297.2828.11.camel@fdeutsch-laptop.local>

On Thursday, 24.10.2013 at 15:12 +0200, Saša Friedrich wrote:
> Progress report:
>
> I remounted fs on oVirt nodes rw, started glusterd with no errors.
> Then I activated hosts in oVirt Engine. Also no errors! Yei!

Yey! :)
Yes, mount -o remount,rw makes the FS temporarily writable. But you will have issues as soon as you reboot. We'll need to investigate which paths need to be persisted (so the data written to them survives a reboot) and which only need to be writable, e.g. for temporary data.

Would you mind opening a bug for this?

Greetings
fabian

> Then I created volume (replication), added two bricks (oVirt nodes),
> and started volume. Seems fine.
I checkd on node1: > > # gluster volume info > > Volume Name: data_vol > Type: Replicate > Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269 > Status: Started > Number of Bricks: 1 x 2 = 2 > Transport-type: tcp > Bricks: > Brick1: 192.168.254.124:/data/gluster > Brick2: 192.168.254.141:/data/gluster > Options Reconfigured: > storage.owner-gid: 36 > storage.owner-uid: 36 > auth.allow: * > user.cifs: on > nfs.disable: off > > > > WORKING! > > > BUT... Now i can not create storage domain. When I hit OK button on > "New storage domain dialog", process is running very long. Eventually > this process stops and returns " Error while executing action Add > Storage Connection: Network error during communication with the > Host". > > I'm stuck again :-( in need for HELP! Could you please provide the logfiles mentioned here: http://www.ovirt.org/Node_Troubleshooting#Log_Files Greetings fabian > > tnx > > > > > Dne 24. 10. 2013 13:02, pi?e Mike Burns: > > > Adding to node-devel list and users list. > > > > -- Mike > > > > Apologies for top posting and typos. This was sent from a mobile device. > > > > Sa?a Friedrich wrote: > > > > Hello! > > > > Acording to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on > > ovirt node should be supported. But I have some difficulties to > > implement it. > > > > > > I installed ovirt (nested kvm - home testing) following "Up and Running > > with oVirt 3.3) using Fedora19 > > Install went well. Everything is working fine. > > > > Now I created two hosts (nested kvm - ovirt node fc19 - just for > > testing) and added them in oVirt. > > Super fine - working! > > > > Now I'd like to use this hosts as glustefs nodes too. Acording to google > > (I'm googling for two days now) I'ts possible, but I can not find any > > usable how-to > > > > 1. I removed these two hosts from default data center > > 2. I created new data center (type: GlusterFS) > > 3. I created new cluster (Enable Gluster Service checked) > > 4. I added host > > 5. 
Now I get error message in events: "Could not find gluster uuid of > > server host1 on Cluster Cluster1." > > > > > > If I ssh to my host (fc19 node) glusterd.service is not running. If I > > try to run it It returns error > > > > here is the log: > > [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] > > 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 > > (/usr/sbin/glusterd -p /run/glusterd.pid) > > [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using > > /var/lib/glusterd as working directory > > [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] > > 0-socket.management: SSL support is NOT enabled > > [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] > > 0-socket.management: using system polling thread > > [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] > > 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) > > [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: > > Failed to initialize IB Device > > [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] > > 0-rpc-transport: 'rdma' initialization failed > > [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] > > 0-rpc-service: cannot create listener, initing the transport failed > > [2013-10-24 09:52:25.979890] I > > [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: > > geo-replication module not installed in the system > > [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] > > 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > > error: No such file or directory > > [2013-10-24 09:52:25.980026] E > > [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get > > store handle! 
> > [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] > > 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > > error: No such file or directory > > [2013-10-24 09:52:25.980060] E > > [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store > > handle! > > [2013-10-24 09:52:25.980074] I > > [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: > > Detected new install. Setting op-version to maximum : 2 > > [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-: > > Failed to open file: /var/lib/glusterd/options, error: Read-only file system > > > > > > Acording to log /var/lib/glusterd/glusterd.info is missing and can not > > be created because fs is mounted "ro". > > > > > > Now I'm stuck! > > What am I missing? > > > > > > tnx for help! > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel From fabiand at redhat.com Fri Oct 25 06:27:03 2013 From: fabiand at redhat.com (Fabian Deutsch) Date: Fri, 25 Oct 2013 08:27:03 +0200 Subject: [node-devel] GlusterFS on oVirt node In-Reply-To: <52695FE7.2060207@bitlab.si> References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si> <1382621297.2828.11.camel@fdeutsch-laptop.local> <52695FE7.2060207@bitlab.si> Message-ID: <1382682423.2855.2.camel@fdeutsch-laptop.local> Am Donnerstag, den 24.10.2013, 19:59 +0200 schrieb Sa?a Friedrich: > I reinstalled node and remounter / rw then I checked fs before > activating host (in oVirt Engine) and after (which files have been > changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is there > any way I can change node so this directory would be mounted rw? And to > persist this setting after reboot. Hey, do you know if the data in /var/lib/glusterd needs to survive reboots? If not, then this patch http://gerrit.ovirt.org/20540 will probably fix the problem. 
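For reference, the read-only-root support on Fedora-based systems reads entries from /etc/rwtab and /etc/rwtab.d/*, one directive and path per line. A sketch of what such an entry could look like - illustrative only, the actual change is the gerrit patch mentioned above:

```text
# /etc/rwtab.d/ovirt (illustrative sketch; see http://gerrit.ovirt.org/20540)
# "files" - copy the path's persisted contents into a writable tmpfs copy at boot
# "empty" - mount an empty writable tmpfs over the path instead
# "dirs"  - recreate only the directory tree, writable but without files
files /var/lib/glusterd
```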
The patch just adds that path to /etc/rwtab.d/ovirt - this will tell the read-only root to make that path writable at boot. Due to the nature of Node you will need to build your own image or wait until the patch lands in an image. Editing the rwtab file by hand at runtime won't have an effect. Greetings fabian > > tnx > > > > Dne 24. 10. 2013 15:28, pi?e Fabian Deutsch: > > Am Donnerstag, den 24.10.2013, 15:12 +0200 schrieb Sa?a Friedrich: > >> Progress report: > >> > >> I remounted fs on oVirt nodes rw, started glusterd with no errors. > >> Then I activated hosts in oVirt Engine. Also no errors! Yei! > > Yey! :) > > Yes, mount -oremount,rw make's the FS temporarily writeable. But you > > will have issues as soon as you reboot. > > We'll need to investigate which paths need to be persisted (so the data > > written to them survives a reboot) and which only need to be write-able > > e.g. for temporary data. > > > > Would you mind opening a bug for this? > > > > Greetings > > fabian > > > >> Then I created volume (replication), added two bricks (oVirt nodes), > >> and started volume. Seems fine. I checkd on node1: > >> > >> # gluster volume info > >> > >> Volume Name: data_vol > >> Type: Replicate > >> Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269 > >> Status: Started > >> Number of Bricks: 1 x 2 = 2 > >> Transport-type: tcp > >> Bricks: > >> Brick1: 192.168.254.124:/data/gluster > >> Brick2: 192.168.254.141:/data/gluster > >> Options Reconfigured: > >> storage.owner-gid: 36 > >> storage.owner-uid: 36 > >> auth.allow: * > >> user.cifs: on > >> nfs.disable: off > >> > >> > >> > >> WORKING! > >> > >> > >> BUT... Now i can not create storage domain. When I hit OK button on > >> "New storage domain dialog", process is running very long. Eventually > >> this process stops and returns " Error while executing action Add > >> Storage Connection: Network error during communication with the > >> Host". > >> > >> I'm stuck again :-( in need for HELP! 
> > Could you please provide the logfiles mentioned here: > > http://www.ovirt.org/Node_Troubleshooting#Log_Files > > > > Greetings > > fabian > > > >> tnx > >> > >> > >> > >> > >> Dne 24. 10. 2013 13:02, pi?e Mike Burns: > >> > >>> Adding to node-devel list and users list. > >>> > >>> -- Mike > >>> > >>> Apologies for top posting and typos. This was sent from a mobile device. > >>> > >>> Sa?a Friedrich wrote: > >>> > >>> Hello! > >>> > >>> Acording to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on > >>> ovirt node should be supported. But I have some difficulties to > >>> implement it. > >>> > >>> > >>> I installed ovirt (nested kvm - home testing) following "Up and Running > >>> with oVirt 3.3) using Fedora19 > >>> Install went well. Everything is working fine. > >>> > >>> Now I created two hosts (nested kvm - ovirt node fc19 - just for > >>> testing) and added them in oVirt. > >>> Super fine - working! > >>> > >>> Now I'd like to use this hosts as glustefs nodes too. Acording to google > >>> (I'm googling for two days now) I'ts possible, but I can not find any > >>> usable how-to > >>> > >>> 1. I removed these two hosts from default data center > >>> 2. I created new data center (type: GlusterFS) > >>> 3. I created new cluster (Enable Gluster Service checked) > >>> 4. I added host > >>> 5. Now I get error message in events: "Could not find gluster uuid of > >>> server host1 on Cluster Cluster1." > >>> > >>> > >>> If I ssh to my host (fc19 node) glusterd.service is not running. 
If I > >>> try to run it It returns error > >>> > >>> here is the log: > >>> [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] > >>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 > >>> (/usr/sbin/glusterd -p /run/glusterd.pid) > >>> [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using > >>> /var/lib/glusterd as working directory > >>> [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] > >>> 0-socket.management: SSL support is NOT enabled > >>> [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] > >>> 0-socket.management: using system polling thread > >>> [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] > >>> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) > >>> [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: > >>> Failed to initialize IB Device > >>> [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] > >>> 0-rpc-transport: 'rdma' initialization failed > >>> [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] > >>> 0-rpc-service: cannot create listener, initing the transport failed > >>> [2013-10-24 09:52:25.979890] I > >>> [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: > >>> geo-replication module not installed in the system > >>> [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] > >>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > >>> error: No such file or directory > >>> [2013-10-24 09:52:25.980026] E > >>> [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get > >>> store handle! > >>> [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] > >>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, > >>> error: No such file or directory > >>> [2013-10-24 09:52:25.980060] E > >>> [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store > >>> handle! 
> >>> [2013-10-24 09:52:25.980074] I
> >>> [glusterd-store.c:1348:glusterd_restore_op_version] 0-management:
> >>> Detected new install. Setting op-version to maximum : 2
> >>> [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-:
> >>> Failed to open file: /var/lib/glusterd/options, error: Read-only file system
> >>>
> >>>
> >>> Acording to log /var/lib/glusterd/glusterd.info is missing and can not
> >>> be created because fs is mounted "ro".
> >>>
> >>>
> >>> Now I'm stuck!
> >>> What am I missing?
> >>>
> >>>
> >>> tnx for help!
> >> _______________________________________________
> >> node-devel mailing list
> >> node-devel at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/node-devel
> >
>

From sasa.friedrich at bitlab.si Thu Oct 24 17:59:03 2013
From: sasa.friedrich at bitlab.si (Saša Friedrich)
Date: Thu, 24 Oct 2013 19:59:03 +0200
Subject: [node-devel] GlusterFS on oVirt node
In-Reply-To: <1382621297.2828.11.camel@fdeutsch-laptop.local>
References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si> <1382621297.2828.11.camel@fdeutsch-laptop.local>
Message-ID: <52695FE7.2060207@bitlab.si>

I reinstalled the node and remounted / rw, then I checked the fs before activating the host (in oVirt Engine) and after (which files have been changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is there any way I can change the node so this directory would be mounted rw? And to persist this setting after a reboot.

tnx

On 24. 10. 2013 at 15:28, Fabian Deutsch wrote:
> On Thursday, 24.10.2013 at 15:12 +0200, Saša Friedrich wrote:
>> Progress report:
>>
>> I remounted fs on oVirt nodes rw, started glusterd with no errors.
>> Then I activated hosts in oVirt Engine. Also no errors! Yei!
> Yey! :)
> Yes, mount -o remount,rw makes the FS temporarily writable. But you
> will have issues as soon as you reboot.
> We'll need to investigate which paths need to be persisted (so the data > written to them survives a reboot) and which only need to be write-able > e.g. for temporary data. > > Would you mind opening a bug for this? > > Greetings > fabian > >> Then I created volume (replication), added two bricks (oVirt nodes), >> and started volume. Seems fine. I checkd on node1: >> >> # gluster volume info >> >> Volume Name: data_vol >> Type: Replicate >> Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269 >> Status: Started >> Number of Bricks: 1 x 2 = 2 >> Transport-type: tcp >> Bricks: >> Brick1: 192.168.254.124:/data/gluster >> Brick2: 192.168.254.141:/data/gluster >> Options Reconfigured: >> storage.owner-gid: 36 >> storage.owner-uid: 36 >> auth.allow: * >> user.cifs: on >> nfs.disable: off >> >> >> >> WORKING! >> >> >> BUT... Now i can not create storage domain. When I hit OK button on >> "New storage domain dialog", process is running very long. Eventually >> this process stops and returns " Error while executing action Add >> Storage Connection: Network error during communication with the >> Host". >> >> I'm stuck again :-( in need for HELP! > Could you please provide the logfiles mentioned here: > http://www.ovirt.org/Node_Troubleshooting#Log_Files > > Greetings > fabian > >> tnx >> >> >> >> >> Dne 24. 10. 2013 13:02, pi?e Mike Burns: >> >>> Adding to node-devel list and users list. >>> >>> -- Mike >>> >>> Apologies for top posting and typos. This was sent from a mobile device. >>> >>> Sa?a Friedrich wrote: >>> >>> Hello! >>> >>> Acording to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on >>> ovirt node should be supported. But I have some difficulties to >>> implement it. >>> >>> >>> I installed ovirt (nested kvm - home testing) following "Up and Running >>> with oVirt 3.3) using Fedora19 >>> Install went well. Everything is working fine. >>> >>> Now I created two hosts (nested kvm - ovirt node fc19 - just for >>> testing) and added them in oVirt. 
>>> Super fine - working! >>> >>> Now I'd like to use this hosts as glustefs nodes too. Acording to google >>> (I'm googling for two days now) I'ts possible, but I can not find any >>> usable how-to >>> >>> 1. I removed these two hosts from default data center >>> 2. I created new data center (type: GlusterFS) >>> 3. I created new cluster (Enable Gluster Service checked) >>> 4. I added host >>> 5. Now I get error message in events: "Could not find gluster uuid of >>> server host1 on Cluster Cluster1." >>> >>> >>> If I ssh to my host (fc19 node) glusterd.service is not running. If I >>> try to run it It returns error >>> >>> here is the log: >>> [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] >>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 >>> (/usr/sbin/glusterd -p /run/glusterd.pid) >>> [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using >>> /var/lib/glusterd as working directory >>> [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] >>> 0-socket.management: SSL support is NOT enabled >>> [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] >>> 0-socket.management: using system polling thread >>> [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] >>> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) >>> [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: >>> Failed to initialize IB Device >>> [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] >>> 0-rpc-transport: 'rdma' initialization failed >>> [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] >>> 0-rpc-service: cannot create listener, initing the transport failed >>> [2013-10-24 09:52:25.979890] I >>> [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: >>> geo-replication module not installed in the system >>> [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] >>> 0-: Unable to retrieve store handle 
/var/lib/glusterd/glusterd.info, >>> error: No such file or directory >>> [2013-10-24 09:52:25.980026] E >>> [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get >>> store handle! >>> [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] >>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, >>> error: No such file or directory >>> [2013-10-24 09:52:25.980060] E >>> [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store >>> handle! >>> [2013-10-24 09:52:25.980074] I >>> [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: >>> Detected new install. Setting op-version to maximum : 2 >>> [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-: >>> Failed to open file: /var/lib/glusterd/options, error: Read-only file system >>> >>> >>> Acording to log /var/lib/glusterd/glusterd.info is missing and can not >>> be created because fs is mounted "ro". >>> >>> >>> Now I'm stuck! >>> What am I missing? >>> >>> >>> tnx for help! >> _______________________________________________ >> node-devel mailing list >> node-devel at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/node-devel > From sasa.friedrich at bitlab.si Fri Oct 25 06:42:02 2013 From: sasa.friedrich at bitlab.si (=?UTF-8?B?U2HFoWEgRnJpZWRyaWNo?=) Date: Fri, 25 Oct 2013 08:42:02 +0200 Subject: [node-devel] GlusterFS on oVirt node In-Reply-To: <1382682423.2855.2.camel@fdeutsch-laptop.local> References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si> <1382621297.2828.11.camel@fdeutsch-laptop.local> <52695FE7.2060207@bitlab.si> <1382682423.2855.2.camel@fdeutsch-laptop.local> Message-ID: <526A12BA.8050009@bitlab.si> Thanks, I'll try that (and report). And I think data in that path don't need to be persistant, because when I reboot node, and make whole fs rw again, host gets attached to oVirt with no errors. One more thing. 
The messages log file keeps showing errors (every 2 sec):

Oct 25 08:32:50 localhost python[462]: service-status: ServiceNotExistError: Tried all alternatives but failed:
Oct 25 08:32:50 localhost python[462]: ServiceNotExistError: gluster-swift-object is not a SysV service
Oct 25 08:32:50 localhost python[462]: ServiceNotExistError: gluster-swift-object is not native systemctl service
. . . . .
Oct 25 08:33:18 localhost python[462]: service-is-managed: ServiceNotExistError: Tried all alternatives but failed:
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: smb is not native systemctl service
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: samba is not native systemctl service
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: smb is not a SysV service
Oct 25 08:33:18 localhost python[462]: ServiceNotExistError: samba is not a SysV service

So I somehow lost hope for node and tried making a host from fc19 (minimal installation). Installed, attached the host in oVirt engine, but... the errors are the same!

Any clue?

tnx

Dne 25. 10. 2013 08:27, piše Fabian Deutsch:
> Am Donnerstag, den 24.10.2013, 19:59 +0200 schrieb Saša Friedrich:
>> I reinstalled node and remounted / rw, then I checked the fs before
>> activating the host (in oVirt Engine) and after (which files have been
>> changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is there
>> any way I can change node so this directory would be mounted rw? And to
>> persist this setting after reboot.
> Hey,
>
> do you know if the data in /var/lib/glusterd needs to survive reboots?
> If not, then this patch http://gerrit.ovirt.org/20540 will probably fix
> the problem. The patch just adds that path to /etc/rwtab.d/ovirt - this
> will tell the read-only root to make that path writable at boot.
> Due to the nature of Node you will need to build your own image or wait
> until the patch lands in an image. Editing the rwtab file by hand at
> runtime won't have an effect.
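For reference, Fedora's read-only-root support reads one entry per line from /etc/rwtab and /etc/rwtab.d/*, each naming a path and how to make it writable at boot. A patch like the one Fabian mentions would presumably add an entry of roughly this shape; the exact line is an assumption here, not the actual content of gerrit change 20540:

```
# /etc/rwtab.d/ovirt (sketch; hypothetical entry, not the real patch)
# "empty" mounts a fresh tmpfs over the path at boot: writes then
# succeed, but nothing written there survives a reboot.
empty	/var/lib/glusterd
```

Entries using `files` instead of `empty` pre-populate the tmpfs with a copy of the on-image contents, but either way the data is volatile, which is why the question of whether glusterd's state must survive reboots matters.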
> > Greetings > fabian > >> tnx >> >> >> >> Dne 24. 10. 2013 15:28, pi?e Fabian Deutsch: >>> Am Donnerstag, den 24.10.2013, 15:12 +0200 schrieb Sa?a Friedrich: >>>> Progress report: >>>> >>>> I remounted fs on oVirt nodes rw, started glusterd with no errors. >>>> Then I activated hosts in oVirt Engine. Also no errors! Yei! >>> Yey! :) >>> Yes, mount -oremount,rw make's the FS temporarily writeable. But you >>> will have issues as soon as you reboot. >>> We'll need to investigate which paths need to be persisted (so the data >>> written to them survives a reboot) and which only need to be write-able >>> e.g. for temporary data. >>> >>> Would you mind opening a bug for this? >>> >>> Greetings >>> fabian >>> >>>> Then I created volume (replication), added two bricks (oVirt nodes), >>>> and started volume. Seems fine. I checkd on node1: >>>> >>>> # gluster volume info >>>> >>>> Volume Name: data_vol >>>> Type: Replicate >>>> Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269 >>>> Status: Started >>>> Number of Bricks: 1 x 2 = 2 >>>> Transport-type: tcp >>>> Bricks: >>>> Brick1: 192.168.254.124:/data/gluster >>>> Brick2: 192.168.254.141:/data/gluster >>>> Options Reconfigured: >>>> storage.owner-gid: 36 >>>> storage.owner-uid: 36 >>>> auth.allow: * >>>> user.cifs: on >>>> nfs.disable: off >>>> >>>> >>>> >>>> WORKING! >>>> >>>> >>>> BUT... Now i can not create storage domain. When I hit OK button on >>>> "New storage domain dialog", process is running very long. Eventually >>>> this process stops and returns " Error while executing action Add >>>> Storage Connection: Network error during communication with the >>>> Host". >>>> >>>> I'm stuck again :-( in need for HELP! >>> Could you please provide the logfiles mentioned here: >>> http://www.ovirt.org/Node_Troubleshooting#Log_Files >>> >>> Greetings >>> fabian >>> >>>> tnx >>>> >>>> >>>> >>>> >>>> Dne 24. 10. 2013 13:02, pi?e Mike Burns: >>>> >>>>> Adding to node-devel list and users list. 
>>>>> >>>>> -- Mike >>>>> >>>>> Apologies for top posting and typos. This was sent from a mobile device. >>>>> >>>>> Sa?a Friedrich wrote: >>>>> >>>>> Hello! >>>>> >>>>> Acording to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on >>>>> ovirt node should be supported. But I have some difficulties to >>>>> implement it. >>>>> >>>>> >>>>> I installed ovirt (nested kvm - home testing) following "Up and Running >>>>> with oVirt 3.3) using Fedora19 >>>>> Install went well. Everything is working fine. >>>>> >>>>> Now I created two hosts (nested kvm - ovirt node fc19 - just for >>>>> testing) and added them in oVirt. >>>>> Super fine - working! >>>>> >>>>> Now I'd like to use this hosts as glustefs nodes too. Acording to google >>>>> (I'm googling for two days now) I'ts possible, but I can not find any >>>>> usable how-to >>>>> >>>>> 1. I removed these two hosts from default data center >>>>> 2. I created new data center (type: GlusterFS) >>>>> 3. I created new cluster (Enable Gluster Service checked) >>>>> 4. I added host >>>>> 5. Now I get error message in events: "Could not find gluster uuid of >>>>> server host1 on Cluster Cluster1." >>>>> >>>>> >>>>> If I ssh to my host (fc19 node) glusterd.service is not running. 
If I >>>>> try to run it It returns error >>>>> >>>>> here is the log: >>>>> [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] >>>>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 >>>>> (/usr/sbin/glusterd -p /run/glusterd.pid) >>>>> [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using >>>>> /var/lib/glusterd as working directory >>>>> [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] >>>>> 0-socket.management: SSL support is NOT enabled >>>>> [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] >>>>> 0-socket.management: using system polling thread >>>>> [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] >>>>> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) >>>>> [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: >>>>> Failed to initialize IB Device >>>>> [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] >>>>> 0-rpc-transport: 'rdma' initialization failed >>>>> [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] >>>>> 0-rpc-service: cannot create listener, initing the transport failed >>>>> [2013-10-24 09:52:25.979890] I >>>>> [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: >>>>> geo-replication module not installed in the system >>>>> [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] >>>>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, >>>>> error: No such file or directory >>>>> [2013-10-24 09:52:25.980026] E >>>>> [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get >>>>> store handle! >>>>> [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] >>>>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, >>>>> error: No such file or directory >>>>> [2013-10-24 09:52:25.980060] E >>>>> [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store >>>>> handle! 
>>>>> [2013-10-24 09:52:25.980074] I >>>>> [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: >>>>> Detected new install. Setting op-version to maximum : 2 >>>>> [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-: >>>>> Failed to open file: /var/lib/glusterd/options, error: Read-only file system >>>>> >>>>> >>>>> Acording to log /var/lib/glusterd/glusterd.info is missing and can not >>>>> be created because fs is mounted "ro". >>>>> >>>>> >>>>> Now I'm stuck! >>>>> What am I missing? >>>>> >>>>> >>>>> tnx for help! >>>> _______________________________________________ >>>> node-devel mailing list >>>> node-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/node-devel > From mburns at redhat.com Fri Oct 25 11:41:12 2013 From: mburns at redhat.com (Mike Burns) Date: Fri, 25 Oct 2013 07:41:12 -0400 Subject: [node-devel] GlusterFS on oVirt node In-Reply-To: <1382682423.2855.2.camel@fdeutsch-laptop.local> References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si> <1382621297.2828.11.camel@fdeutsch-laptop.local> <52695FE7.2060207@bitlab.si> <1382682423.2855.2.camel@fdeutsch-laptop.local> Message-ID: <526A58D8.50502@redhat.com> On 10/25/2013 02:27 AM, Fabian Deutsch wrote: > Am Donnerstag, den 24.10.2013, 19:59 +0200 schrieb Sa?a Friedrich: >> I reinstalled node and remounter / rw then I checked fs before >> activating host (in oVirt Engine) and after (which files have been >> changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is there >> any way I can change node so this directory would be mounted rw? And to >> persist this setting after reboot. > > Hey, > > do you know if the data in /var/lib/glusterd needs to survive reboots? > If not, then this patch http://gerrit.ovirt.org/20540 will probably fix I'll ack this once you fix the typo in the summary (s/bar/var/) Mike > the problem. 
The patch just adds that path to /etc/rwtab.d/ovirt - this > will tell the read-only root to make that path writable at boot. > Due to the nature of Node you will need to build your own image or wait > until the patch lands in an image. Editing the rwtab file by hand at > runtime won't have an effect. > > Greetings > fabian > >> >> tnx >> >> >> >> Dne 24. 10. 2013 15:28, pi?e Fabian Deutsch: >>> Am Donnerstag, den 24.10.2013, 15:12 +0200 schrieb Sa?a Friedrich: >>>> Progress report: >>>> >>>> I remounted fs on oVirt nodes rw, started glusterd with no errors. >>>> Then I activated hosts in oVirt Engine. Also no errors! Yei! >>> Yey! :) >>> Yes, mount -oremount,rw make's the FS temporarily writeable. But you >>> will have issues as soon as you reboot. >>> We'll need to investigate which paths need to be persisted (so the data >>> written to them survives a reboot) and which only need to be write-able >>> e.g. for temporary data. >>> >>> Would you mind opening a bug for this? >>> >>> Greetings >>> fabian >>> >>>> Then I created volume (replication), added two bricks (oVirt nodes), >>>> and started volume. Seems fine. I checkd on node1: >>>> >>>> # gluster volume info >>>> >>>> Volume Name: data_vol >>>> Type: Replicate >>>> Volume ID: a1cdc762-2198-47e2-9b4a-58fd0571b269 >>>> Status: Started >>>> Number of Bricks: 1 x 2 = 2 >>>> Transport-type: tcp >>>> Bricks: >>>> Brick1: 192.168.254.124:/data/gluster >>>> Brick2: 192.168.254.141:/data/gluster >>>> Options Reconfigured: >>>> storage.owner-gid: 36 >>>> storage.owner-uid: 36 >>>> auth.allow: * >>>> user.cifs: on >>>> nfs.disable: off >>>> >>>> >>>> >>>> WORKING! >>>> >>>> >>>> BUT... Now i can not create storage domain. When I hit OK button on >>>> "New storage domain dialog", process is running very long. Eventually >>>> this process stops and returns " Error while executing action Add >>>> Storage Connection: Network error during communication with the >>>> Host". >>>> >>>> I'm stuck again :-( in need for HELP! 
>>> Could you please provide the logfiles mentioned here: >>> http://www.ovirt.org/Node_Troubleshooting#Log_Files >>> >>> Greetings >>> fabian >>> >>>> tnx >>>> >>>> >>>> >>>> >>>> Dne 24. 10. 2013 13:02, pi?e Mike Burns: >>>> >>>>> Adding to node-devel list and users list. >>>>> >>>>> -- Mike >>>>> >>>>> Apologies for top posting and typos. This was sent from a mobile device. >>>>> >>>>> Sa?a Friedrich wrote: >>>>> >>>>> Hello! >>>>> >>>>> Acording to http://www.ovirt.org/Node_Glusterfs_Support glusterfs on >>>>> ovirt node should be supported. But I have some difficulties to >>>>> implement it. >>>>> >>>>> >>>>> I installed ovirt (nested kvm - home testing) following "Up and Running >>>>> with oVirt 3.3) using Fedora19 >>>>> Install went well. Everything is working fine. >>>>> >>>>> Now I created two hosts (nested kvm - ovirt node fc19 - just for >>>>> testing) and added them in oVirt. >>>>> Super fine - working! >>>>> >>>>> Now I'd like to use this hosts as glustefs nodes too. Acording to google >>>>> (I'm googling for two days now) I'ts possible, but I can not find any >>>>> usable how-to >>>>> >>>>> 1. I removed these two hosts from default data center >>>>> 2. I created new data center (type: GlusterFS) >>>>> 3. I created new cluster (Enable Gluster Service checked) >>>>> 4. I added host >>>>> 5. Now I get error message in events: "Could not find gluster uuid of >>>>> server host1 on Cluster Cluster1." >>>>> >>>>> >>>>> If I ssh to my host (fc19 node) glusterd.service is not running. 
If I >>>>> try to run it It returns error >>>>> >>>>> here is the log: >>>>> [2013-10-24 09:52:25.969899] I [glusterfsd.c:1910:main] >>>>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0 >>>>> (/usr/sbin/glusterd -p /run/glusterd.pid) >>>>> [2013-10-24 09:52:25.974480] I [glusterd.c:962:init] 0-management: Using >>>>> /var/lib/glusterd as working directory >>>>> [2013-10-24 09:52:25.977648] I [socket.c:3480:socket_init] >>>>> 0-socket.management: SSL support is NOT enabled >>>>> [2013-10-24 09:52:25.977694] I [socket.c:3495:socket_init] >>>>> 0-socket.management: using system polling thread >>>>> [2013-10-24 09:52:25.978611] W [rdma.c:4197:__gf_rdma_ctx_create] >>>>> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) >>>>> [2013-10-24 09:52:25.978651] E [rdma.c:4485:init] 0-rdma.management: >>>>> Failed to initialize IB Device >>>>> [2013-10-24 09:52:25.978667] E [rpc-transport.c:320:rpc_transport_load] >>>>> 0-rpc-transport: 'rdma' initialization failed >>>>> [2013-10-24 09:52:25.978747] W [rpcsvc.c:1387:rpcsvc_transport_create] >>>>> 0-rpc-service: cannot create listener, initing the transport failed >>>>> [2013-10-24 09:52:25.979890] I >>>>> [glusterd.c:354:glusterd_check_gsync_present] 0-glusterd: >>>>> geo-replication module not installed in the system >>>>> [2013-10-24 09:52:25.980000] E [store.c:394:gf_store_handle_retrieve] >>>>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, >>>>> error: No such file or directory >>>>> [2013-10-24 09:52:25.980026] E >>>>> [glusterd-store.c:1277:glusterd_retrieve_op_version] 0-: Unable to get >>>>> store handle! >>>>> [2013-10-24 09:52:25.980048] E [store.c:394:gf_store_handle_retrieve] >>>>> 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, >>>>> error: No such file or directory >>>>> [2013-10-24 09:52:25.980060] E >>>>> [glusterd-store.c:1378:glusterd_retrieve_uuid] 0-: Unable to get store >>>>> handle! 
>>>>> [2013-10-24 09:52:25.980074] I >>>>> [glusterd-store.c:1348:glusterd_restore_op_version] 0-management: >>>>> Detected new install. Setting op-version to maximum : 2 >>>>> [2013-10-24 09:52:25.980309] E [store.c:360:gf_store_handle_new] 0-: >>>>> Failed to open file: /var/lib/glusterd/options, error: Read-only file system >>>>> >>>>> >>>>> Acording to log /var/lib/glusterd/glusterd.info is missing and can not >>>>> be created because fs is mounted "ro". >>>>> >>>>> >>>>> Now I'm stuck! >>>>> What am I missing? >>>>> >>>>> >>>>> tnx for help! >>>> _______________________________________________ >>>> node-devel mailing list >>>> node-devel at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/node-devel >>> >> > > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel > From vbellur at redhat.com Sat Oct 26 16:16:09 2013 From: vbellur at redhat.com (Vijay Bellur) Date: Sat, 26 Oct 2013 21:46:09 +0530 Subject: [node-devel] [Users] GlusterFS on oVirt node In-Reply-To: <1382682423.2855.2.camel@fdeutsch-laptop.local> References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si> <1382621297.2828.11.camel@fdeutsch-laptop.local> <52695FE7.2060207@bitlab.si> <1382682423.2855.2.camel@fdeutsch-laptop.local> Message-ID: <526BEAC9.1030902@redhat.com> On 10/25/2013 11:57 AM, Fabian Deutsch wrote: > Am Donnerstag, den 24.10.2013, 19:59 +0200 schrieb Sa?a Friedrich: >> I reinstalled node and remounter / rw then I checked fs before >> activating host (in oVirt Engine) and after (which files have been >> changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is there >> any way I can change node so this directory would be mounted rw? And to >> persist this setting after reboot. > > Hey, > > do you know if the data in /var/lib/glusterd needs to survive reboots? /var/lib/glusterd does need to survive reboots. 
-Vijay From masayag at redhat.com Sun Oct 27 10:58:06 2013 From: masayag at redhat.com (Moti Asayag) Date: Sun, 27 Oct 2013 06:58:06 -0400 (EDT) Subject: [node-devel] Needed: Node and Engine cooperation In-Reply-To: <1382377541.2828.45.camel@fdeutsch-laptop.local> References: <1382377541.2828.45.camel@fdeutsch-laptop.local> Message-ID: <1511837625.14165477.1382871486005.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Fabian Deutsch" > To: "arch" , "node-devel" > Sent: Monday, October 21, 2013 8:45:41 PM > Subject: [node-devel] Needed: Node and Engine cooperation > > Hey, > > with the extraction of the oVirt Engine / VDSM specific bits from Node > in it's 3.0 release, oVirt Node became unaware of when it is being > managed. > Pre-3.0 Node (it's TUI) had specific knowledge about what configuration > files existed when it was registered to Engine. This is not the case in > Node 3.0 anymore. And this leads to problems. E.g. a user removing > Engines network layout. > > A new way is needed to pass informations between the management instance > and Node's core. This informations are needed e.g. to prevent the user > from accidentally destroying Engines network layout on a Node. How is it different from an admin connecting to non ovirt-node host and manually dis-configure its network ? I'm not sure we need to prevent from the administrator to perform any manual changes on the host. Perhaps the TUI could reflect the networks name by querying vdsm/libvirt in the same sense as the engine does so the user will be aware which interfaces carry logical networks. > > I've opened a bug [0] to suggest a way of sharing this kind of > informations. > > The idea is that Node and the management instance - Engine - share a set > of common configuration keys in /etc/default/ovirt to pass the relevant > bit's to Node. 
> For now I thought about these three keys:
>
>
> OVIRT_MANAGED_BY=
> This key is used to (a) signal that the Node is being managed and (b)
> indicate who is managing this node.
>
> OVIRT_MANAGED_IFNAMES=[,,...]
> This key is used to specify a number (comma-separated list) of ifnames
> which are managed and for which the TUI shall display some information
> (IP, ...).
> This can also be used by the TUI to decide not to offer NIC
> configuration to the user.
>
> OVIRT_MANAGED_LOCKED_PAGES=[,,...]
> (Future) A list of pages which shall be locked, e.g. because the
> management instance is configuring that aspect (e.g. networking or
> logging).
>
>
> The third one (OVIRT_MANAGED_LOCKED_PAGES) needs tighter integration
> and might be relevant in the future, but the first two should really be
> implemented quickly for the reasons given above.
>
> It is quite late in the development process, but probably worth thinking
> about getting this into 3.3.1, to prevent all sorts of (accidental)
> user-driven collisions between Node and Engine.
>
> Thoughts?
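The proposal above implies that a consumer such as the TUI would read these keys back out of /etc/default/ovirt. A minimal sketch of that, assuming the file is plain shell-style KEY=value lines; the parsing helper and the sample content are illustrative, not existing oVirt Node code:

```python
# Sketch: extract the proposed management keys from /etc/default/ovirt-style
# text. Function name and sample values are hypothetical.

def parse_managed_keys(text):
    """Return OVIRT_MANAGED_BY (str or None) and OVIRT_MANAGED_IFNAMES (list)."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and non-assignments
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    ifnames = values.get("OVIRT_MANAGED_IFNAMES", "")
    return {
        "managed_by": values.get("OVIRT_MANAGED_BY") or None,
        "ifnames": [n for n in ifnames.split(",") if n],
    }

sample = '''
# /etc/default/ovirt (illustrative content)
OVIRT_MANAGED_BY="engine.example.com"
OVIRT_MANAGED_IFNAMES="ovirtmgmt,eth0"
'''
cfg = parse_managed_keys(sample)
print(cfg["managed_by"])  # engine.example.com
print(cfg["ifnames"])     # ['ovirtmgmt', 'eth0']
```

With keys parsed this way, an unset or empty OVIRT_MANAGED_BY would mean "not managed", so the TUI could fall back to offering full local configuration.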
> > Greetings > fabian > > --- > [0] https://bugzilla.redhat.com/show_bug.cgi?id=1021647 > > _______________________________________________ > node-devel mailing list > node-devel at ovirt.org > http://lists.ovirt.org/mailman/listinfo/node-devel > From fabiand at redhat.com Sun Oct 27 11:04:44 2013 From: fabiand at redhat.com (Fabian Deutsch) Date: Sun, 27 Oct 2013 12:04:44 +0100 Subject: [node-devel] [Users] GlusterFS on oVirt node In-Reply-To: <526BEAC9.1030902@redhat.com> References: <5268FB3E.3080903@bitlab.si> <52691CBF.8050709@bitlab.si> <1382621297.2828.11.camel@fdeutsch-laptop.local> <52695FE7.2060207@bitlab.si> <1382682423.2855.2.camel@fdeutsch-laptop.local> <526BEAC9.1030902@redhat.com> Message-ID: <1382871884.3143.2.camel@fdeutsch-laptop.local> Am Samstag, den 26.10.2013, 21:46 +0530 schrieb Vijay Bellur: > On 10/25/2013 11:57 AM, Fabian Deutsch wrote: > > Am Donnerstag, den 24.10.2013, 19:59 +0200 schrieb Sa?a Friedrich: > >> I reinstalled node and remounter / rw then I checked fs before > >> activating host (in oVirt Engine) and after (which files have been > >> changed)... The "ro" problem seems to be in /var/lib/glusterd/. Is there > >> any way I can change node so this directory would be mounted rw? And to > >> persist this setting after reboot. > > > > Hey, > > > > do you know if the data in /var/lib/glusterd needs to survive reboots? > > /var/lib/glusterd does need to survive reboots. Thanks for this info! Then we'll probably need another patch to make this happen. 
Greetings fabian From fabiand at redhat.com Sun Oct 27 12:06:06 2013 From: fabiand at redhat.com (Fabian Deutsch) Date: Sun, 27 Oct 2013 13:06:06 +0100 Subject: [node-devel] Needed: Node and Engine cooperation In-Reply-To: <1511837625.14165477.1382871486005.JavaMail.root@redhat.com> References: <1382377541.2828.45.camel@fdeutsch-laptop.local> <1511837625.14165477.1382871486005.JavaMail.root@redhat.com> Message-ID: <1382875566.3143.15.camel@fdeutsch-laptop.local> Am Sonntag, den 27.10.2013, 06:58 -0400 schrieb Moti Asayag: > > ----- Original Message ----- > > From: "Fabian Deutsch" > > To: "arch" , "node-devel" > > Sent: Monday, October 21, 2013 8:45:41 PM > > Subject: [node-devel] Needed: Node and Engine cooperation > > > > Hey, > > > > with the extraction of the oVirt Engine / VDSM specific bits from Node > > in it's 3.0 release, oVirt Node became unaware of when it is being > > managed. > > Pre-3.0 Node (it's TUI) had specific knowledge about what configuration > > files existed when it was registered to Engine. This is not the case in > > Node 3.0 anymore. And this leads to problems. E.g. a user removing > > Engines network layout. > > > > A new way is needed to pass informations between the management instance > > and Node's core. This informations are needed e.g. to prevent the user > > from accidentally destroying Engines network layout on a Node. > > How is it different from an admin connecting to non ovirt-node host and manually > dis-configure its network ? You are right that there is not really a difference between those both scenarios. If vdsm can cope with this then this shouldn't be a problem. My assumption was that vdsm had problems when the network configuration got changed on a different way than through vdsm. If vdsm is fine with this - the network configuration changed by the user - then this is fine and we don't have a problem. > I'm not sure we need to prevent from the administrator to perform any manual > changes on the host. 
Perhaps the TUI could reflect the networks name by querying > vdsm/libvirt in the same sense as the engine does so the user will be aware which > interfaces carry logical networks. The problem here is that the TUI is not aware of vdsm. That's why I suggest that VDSM is publishing these informations through e.g. the mechanism which is mentioned in [0] or also maybe through http://wiki.ovirt.org/Features/Node/FeaturePublishing Greetings fabian > > > > I've opened a bug [0] to suggest a way of sharing this kind of > > informations. > > > > The idea is that Node and the management instance - Engine - share a set > > of common configuration keys in /etc/default/ovirt to pass the relevant > > bit's to Node. > > For now I thought about this three keys: > > > > > > OVIRT_MANAGED_BY= > > This key is used to (a) signal the Node is being managed and (b) > > signaling who is managing this node. > > > > OVIRT_MANAGED_IFNAMES=[,,...] > > This key is used to specify a number (comma separated list) of ifnames > > which are managed and for which the TUI shall display some information > > (IP, ...). > > This can also be used by the TUI to decide to not offer NIC > > configuration to the user. > > > > OVIRT_MANAGED_LOCKED_PAGES=[,,...] > > (Future) A list of pages which shall be locked e.g. because the > > management instance is configuring the aspect (e.g. networking or > > logging). > > > > > > The third one (OVIRT_MANAGED_LOCKED_PAGES) needs a tighter integration > > and might be relevant in the future, but the first two should really be > > implemented quickly for the reasons given above. > > > > It is quit elate in the development process but probably worth to think > > about getting this into 3.3.1, to prevent all sorts of (accidentally) > > user-driven collisions between Node and Engine. > > > > Thoughts? 
> > > > Greetings > > fabian > > > > --- > > [0] https://bugzilla.redhat.com/show_bug.cgi?id=1021647 > > > > _______________________________________________ > > node-devel mailing list > > node-devel at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/node-devel > > From danken at redhat.com Mon Oct 28 00:39:48 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Mon, 28 Oct 2013 00:39:48 +0000 Subject: [node-devel] Needed: Node and Engine cooperation In-Reply-To: <1382875566.3143.15.camel@fdeutsch-laptop.local> References: <1382377541.2828.45.camel@fdeutsch-laptop.local> <1511837625.14165477.1382871486005.JavaMail.root@redhat.com> <1382875566.3143.15.camel@fdeutsch-laptop.local> Message-ID: <20131028003843.GA12280@redhat.com> On Sun, Oct 27, 2013 at 01:06:06PM +0100, Fabian Deutsch wrote: > Am Sonntag, den 27.10.2013, 06:58 -0400 schrieb Moti Asayag: > > > > ----- Original Message ----- > > > From: "Fabian Deutsch" > > > To: "arch" , "node-devel" > > > Sent: Monday, October 21, 2013 8:45:41 PM > > > Subject: [node-devel] Needed: Node and Engine cooperation > > > > > > Hey, > > > > > > with the extraction of the oVirt Engine / VDSM specific bits from Node > > > in it's 3.0 release, oVirt Node became unaware of when it is being > > > managed. > > > Pre-3.0 Node (it's TUI) had specific knowledge about what configuration > > > files existed when it was registered to Engine. This is not the case in > > > Node 3.0 anymore. And this leads to problems. E.g. a user removing > > > Engines network layout. > > > > > > A new way is needed to pass informations between the management instance > > > and Node's core. This informations are needed e.g. to prevent the user > > > from accidentally destroying Engines network layout on a Node. > > > > How is it different from an admin connecting to non ovirt-node host and manually > > dis-configure its network ? > > You are right that there is not really a difference between those both > scenarios. 
> If vdsm can cope with this then this shouldn't be a problem.
> My assumption was that vdsm had problems when the network configuration
> got changed in a different way than through vdsm.
> If vdsm is fine with this - the network configuration changed by the
> user - then this is fine and we don't have a problem.

Vdsm is not "fine" with arbitrary changes to network configuration done
under its feet. If you're configuring an oVirt node, we strongly recommend
doing it via Engine. Anything else is likely to break something or to be
overridden by Engine, let alone trigger evil races within initscripts or
Vdsm.

For plain (non-ovirt-node) hosts, we trust admins to know what they are
doing. The premise of ovirt-node is a bit different: it's all about being
hard to tweak and break. As much as I personally hate it when my admin
hands are tied by an application, I think it is sensible for the TUI to
report which Engine controls it, and to lock the network configuration
page when the node is remote-controlled. However, the TUI should allow
explicit unlocking of the "remote-controlled" state.

> > I'm not sure we need to prevent the administrator from performing any manual
> > changes on the host. Perhaps the TUI could reflect the network names by querying
> > vdsm/libvirt in the same sense as the engine does, so the user will be aware which
> > interfaces carry logical networks.
>
> The problem here is that the TUI is not aware of vdsm. That's why I
> suggest that VDSM publishes this information through e.g. the
> mechanism mentioned in [0] or maybe through
> http://wiki.ovirt.org/Features/Node/FeaturePublishing
>
> Greetings
> fabian
>
> > > I've opened a bug [0] to suggest a way of sharing this kind of
> > > information.
> > >
> > > The idea is that Node and the management instance - Engine - share a set
> > > of common configuration keys in /etc/default/ovirt to pass the relevant
> > > bits to Node.
> > > For now I thought about these three keys:
> > >
> > > OVIRT_MANAGED_BY=
> > > This key is used to (a) signal the Node is being managed and (b)
> > > signal who is managing this node.

"vendor" is less interesting than the managing app, and the location of
its access point.

> > > OVIRT_MANAGED_IFNAMES=[,,...]
> > > This key is used to specify a number (comma-separated list) of ifnames
> > > which are managed and for which the TUI shall display some information
> > > (IP, ...).
> > > This can also be used by the TUI to decide not to offer NIC
> > > configuration to the user.

I do not see the benefit of this. All (non-wifi) nics of a host are
reported by Vdsm to Engine and are thus manageable by the latter.

> > > OVIRT_MANAGED_LOCKED_PAGES=[,,...]
> > > (Future) A list of pages which shall be locked, e.g. because the
> > > management instance is configuring that aspect (e.g. networking or
> > > logging).
> > >
> > > The third one (OVIRT_MANAGED_LOCKED_PAGES) needs tighter integration
> > > and might be relevant in the future, but the first two should really be
> > > implemented quickly for the reasons given above.

.. but that's the only thing we need...

> > > It is quite late in the development process but probably worth thinking
> > > about getting this into 3.3.1, to prevent all sorts of (accidental)
> > > user-driven collisions between Node and Engine.

Please do not delay the 3.3.1 beta for this. I prefer a release note: "do
not attempt to configure node networking when registered to Engine, unless
you really know what you are doing."
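[Editor's note: the keys debated above are plain shell key=value entries in
/etc/default/ovirt, so a short sketch may help. Key names are taken from the
proposal in this thread; the engine hostname, ifnames, and page names below
are invented for illustration, and the consumer side is only a guess at how
the TUI could read the file.]

```shell
# Sketch only: example /etc/default/ovirt-style content (values are made up)
# plus a minimal consumer, as the TUI side might use it.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
OVIRT_MANAGED_BY="engine.example.com"
OVIRT_MANAGED_IFNAMES="em1,em2"
OVIRT_MANAGED_LOCKED_PAGES="network,logging"
EOF

# The file is plain shell key=value syntax, so it can simply be sourced.
. "$cfg"

# A non-empty OVIRT_MANAGED_BY means "managed": report the manager and
# treat the managed NICs as read-only.
if [ -n "$OVIRT_MANAGED_BY" ]; then
    echo "managed by: $OVIRT_MANAGED_BY"
fi

# Comma-separated list -> one ifname per line, e.g. to mark NICs read-only.
echo "$OVIRT_MANAGED_IFNAMES" | tr ',' '\n'

rm -f "$cfg"
```

An empty or absent OVIRT_MANAGED_BY would then mean the node is unmanaged
and all TUI pages stay editable, which matches the "signal the Node is
being managed" intent of the first key.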
From fdeutsch at redhat.com Mon Oct 28 12:31:38 2013
From: fdeutsch at redhat.com (Fabian Deutsch)
Date: Mon, 28 Oct 2013 08:31:38 -0400 (EDT)
Subject: [node-devel] oVirt Node weekly meeting
Message-ID: <1537750536.33000891.1382963497651.JavaMail.root@redhat.com>

The following meeting has been modified:

Subject: oVirt Node weekly meeting
Organiser: "Fabian Deutsch"
Location: irc://irc.oftc.net#ovirt
Time: 3:00:00 PM - 3:30:00 PM GMT +01:00 Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna [MODIFIED]
Recurrence: Every Tuesday, no end date, effective 15 Oct, 2013
Invitees: node-devel at ovirt.org

*~*~*~*~*~*~*~*~*~*