<html><body><div style="font-family: georgia,serif; font-size: 12pt; color: #000000"><div>It works perfectly fine for me with a VPN towards the environment.</div><div>I suggest you use VPN-level security for your connectivity to your data center resources.</div><div><br></div><div><span name="x"></span><br>Thanks in advance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<br>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Israel<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: +972 9 7692043<br>Mobile: +972 52 7342734<br>Email: nsednev@redhat.com<br>IRC: nsednev<span name="x"></span><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>users-request@ovirt.org<br><b>To: </b>users@ovirt.org<br><b>Sent: </b>Monday, June 1, 2015 2:39:29 AM<br><b>Subject: </b>Users Digest, Vol 44, Issue 127<br><div><br></div>Send Users mailing list submissions to<br> users@ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wide Web, visit<br> http://lists.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with subject or body 'help' to<br> users-request@ovirt.org<br><div><br></div>You can reach the person managing the list at<br> users-owner@ovirt.org<br><div><br></div>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of Users digest..."<br><div><br></div><br>Today's Topics:<br><div><br></div> 1. Re: gluster config in 4 node cluster (Yuriy Poltoratskiy)<br> 2. SPICE Through a Router? Squid? (alexmcwhirter@triadic.us)<br> 3. Bug in Snapshot Removing (Soeren Malchow)<br> 4. Re: gluster config in 4 node cluster (Soeren Malchow)<br> 5. Re: Bug in Snapshot Removing (Soeren Malchow)<br> 6. 
Re: Bug in Snapshot Removing (Soeren Malchow)<br><div><br></div><br>----------------------------------------------------------------------<br><div><br></div>Message: 1<br>Date: Sun, 31 May 2015 19:32:39 +0300<br>From: Yuriy Poltoratskiy <y.poltoratskiy@gmail.com><br>To: users@ovirt.org<br>Subject: Re: [ovirt-users] gluster config in 4 node cluster<br>Message-ID: <556B37A7.1000508@gmail.com><br>Content-Type: text/plain; charset="utf-8"; Format="flowed"<br><div><br></div><br>Hi,<br><div><br></div>As for me, I would build one cluster with the gluster service only, based on <br>two nodes (replica 2), and another with the virt service only, based on <br>the other two nodes. I think this variant is more scalable in the future.<br><div><br></div>PS. I am new to oVirt, so do not rule out that I am wrong.<br><div><br></div><br>28.05.2015 23:11, paf1@email.cz wrote:<br>> Hello,<br>> How do I optimally configure a 4-node cluster so that any one node can go to <br>> maintenance without stopping VMs?<br>><br>> a) replica 4 - but it takes a lot of space<br>> b) disperse 3+1 (raid 5) - but bad performance and not visible to <br>> oVirt 3.7.2<br>> c) stripe 2 + replica 2 - but VMs get paused<br>><br>> any other idea?<br>> regs.<br>> Pa.<br>><br>><br>> _______________________________________________<br>> Users mailing list<br>> Users@ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20150531/a3db0889/attachment-0001.html><br><div><br></div>------------------------------<br><div><br></div>Message: 2<br>Date: Sun, 31 May 2015 18:54:56 -0400<br>From: alexmcwhirter@triadic.us<br>To: users@ovirt.org<br>Subject: [ovirt-users] SPICE Through a Router? Squid?<br>Message-ID: <8bea728a4fb4d5cd804c6aa50cb5e6c8@triadic.us><br>Content-Type: text/plain; charset=US-ASCII; format=flowed<br><div><br></div>I have a dual host setup working right now. 
Host 1 runs the engine and <br>is also a node. Host 2 does DB storage and NFS storage. The WebSockets <br>proxy is running on Host 1.<br><div><br></div>My question is: how do I run this behind a router? Am I correct in <br>understanding that the WebSockets proxy acts as the SPICE access point <br>for all of the nodes in the cluster/datacenter? Or does each node host <br>need a direct connection for SPICE?<br><div><br></div>The .vv file I receive from the management console specifies the <br>engine's private IP address, which works fine inside the oVirt <br>management LAN, but obviously it won't route from the WAN.<br><div><br></div>So essentially I guess I need Squid to rewrite the served .vv file to the <br>public IP and somehow make the ports work correctly, which is difficult <br>considering that every VM that is created also adds its own SPICE port, <br>correct?<br><div><br></div><br>------------------------------<br><div><br></div>Message: 3<br>Date: Sun, 31 May 2015 22:56:40 +0000<br>From: Soeren Malchow <soeren.malchow@mcon.net><br>To: "libvirt-users@redhat.com" <libvirt-users@redhat.com>, users<br> <users@ovirt.org><br>Subject: [ovirt-users] Bug in Snapshot Removing<br>Message-ID: <D1915E46.D966%soeren.malchow@mcon.net><br>Content-Type: text/plain; charset="us-ascii"<br><div><br></div>Dear all,<br><div><br></div>I am not sure if my mail just did not get any attention among all the other mails, so this time it is also going to the libvirt mailing list.<br><div><br></div>I am experiencing a problem with VMs becoming unresponsive when removing snapshots (live merge), and I think it is a serious one.<br><div><br></div>Here are the previous mails:<br><div><br></div>http://lists.ovirt.org/pipermail/users/2015-May/033083.html<br><div><br></div>The problem occurs on a system with everything on the latest version: CentOS 7.1 and oVirt 3.5.2.1, all upgrades applied.<br><div><br></div>This problem did NOT exist before upgrading to CentOS 7.1, in an environment running 
ovirt 3.5.0 and 3.5.1 on Fedora 20 with the libvirt-preview repo activated.<br><div><br></div>I think this is a bug in libvirt, not oVirt itself, but I am not sure. The actual file throwing the exception is in VDSM (/usr/share/vdsm/virt/vm.py, line 697).<br><div><br></div>We are very willing to help, test and supply log files in any way we can.<br><div><br></div>Regards<br>Soeren<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20150531/589d7894/attachment-0001.html><br><div><br></div>------------------------------<br><div><br></div>Message: 4<br>Date: Sun, 31 May 2015 23:32:53 +0000<br>From: Soeren Malchow <soeren.malchow@mcon.net><br>To: Yuriy Poltoratskiy <y.poltoratskiy@gmail.com>, "users@ovirt.org"<br> <users@ovirt.org><br>Subject: Re: [ovirt-users] gluster config in 4 node cluster<br>Message-ID: <D19165E2.D96E%soeren.malchow@mcon.net><br>Content-Type: text/plain; charset="windows-1251"<br><div><br></div>Hi,<br><div><br></div>For a production environment I would not build a 2-node gluster; I would build at least 3 nodes to make quorum handling much easier.<br><div><br></div>Taking into account that you can use commodity hardware, I would also suggest splitting the services, but I would go for at least 3 gluster nodes, which adds up to at least 5 nodes for an HA system.<br><div><br></div>If you want 4 nodes in any case, then my suggestion would be to go for 4 replicas: each node has everything it needs to run, and you can basically use NFS to localhost for the storage, which makes the nodes always access the local storage for the VMs; availability-wise it makes no difference.<br><div><br></div>Cheers<br>Soeren<br><div><br></div>From: Yuriy Poltoratskiy 
<y.poltoratskiy@gmail.com<mailto:y.poltoratskiy@gmail.com>><br>Date: Sunday 31 May 2015 18:32<br>To: "users@ovirt.org<mailto:users@ovirt.org>" <users@ovirt.org<mailto:users@ovirt.org>><br>Subject: Re: [ovirt-users] gluster config in 4 node cluster<br><div><br></div><br>Hi,<br><div><br></div>As for me, I would build one cluster with the gluster service only, based on two nodes (replica 2), and another with the virt service only, based on the other two nodes. I think this variant is more scalable in the future.<br><div><br></div>PS. I am new to oVirt, so do not rule out that I am wrong.<br><div><br></div><br>28.05.2015 23:11, paf1@email.cz<mailto:paf1@email.cz> wrote:<br>Hello,<br>How do I optimally configure a 4-node cluster so that any one node can go to maintenance without stopping VMs?<br><div><br></div>a) replica 4 - but it takes a lot of space<br>b) disperse 3+1 (raid 5) - but bad performance and not visible to oVirt 3.7.2<br>c) stripe 2 + replica 2 - but VMs get paused<br><div><br></div>any other idea?<br>regs.<br>Pa.<br><div><br></div><br><div><br></div>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<mailto:Users@ovirt.org><br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20150531/75d42960/attachment-0001.html><br><div><br></div>------------------------------<br><div><br></div>Message: 5<br>Date: Sun, 31 May 2015 23:35:36 +0000<br>From: Soeren Malchow <soeren.malchow@mcon.net><br>To: Soeren Malchow <soeren.malchow@mcon.net>,<br> "libvirt-users@redhat.com" <libvirt-users@redhat.com>, users<br> <users@ovirt.org><br>Subject: Re: [ovirt-users] Bug in Snapshot Removing<br>Message-ID: <D1916735.D978%soeren.malchow@mcon.net><br>Content-Type: text/plain; charset="windows-1252"<br><div><br></div>Small addition again:<br><div><br></div>This error shows up in the log while removing snapshots WITHOUT rendering 
the VMs unresponsive:<br><div><br></div>Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1657]: Timed out during operation: cannot acquire state change lock<br>Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[6839]: vdsm vm.Vm ERROR vmId=`56848f4a-cd73-4eda-bf79-7eb80ae569a9`::Error getting block job info<br> Traceback (most recent call last):<br> File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs<br><div><br></div><br><div><br></div>From: Soeren Malchow <soeren.malchow@mcon.net<mailto:soeren.malchow@mcon.net>><br>Date: Monday 1 June 2015 00:56<br>To: "libvirt-users@redhat.com<mailto:libvirt-users@redhat.com>" <libvirt-users@redhat.com<mailto:libvirt-users@redhat.com>>, users <users@ovirt.org<mailto:users@ovirt.org>><br>Subject: [ovirt-users] Bug in Snapshot Removing<br><div><br></div>Dear all,<br><div><br></div>I am not sure if my mail just did not get any attention among all the other mails, so this time it is also going to the libvirt mailing list.<br><div><br></div>I am experiencing a problem with VMs becoming unresponsive when removing snapshots (live merge), and I think it is a serious one.<br><div><br></div>Here are the previous mails:<br><div><br></div>http://lists.ovirt.org/pipermail/users/2015-May/033083.html<br><div><br></div>The problem occurs on a system with everything on the latest version: CentOS 7.1 and oVirt 3.5.2.1, all upgrades applied.<br><div><br></div>This problem did NOT exist before upgrading to CentOS 7.1, in an environment running oVirt 3.5.0 and 3.5.1 on Fedora 20 with the libvirt-preview repo activated.<br><div><br></div>I think this is a bug in libvirt, not oVirt itself, but I am not sure. 
The actual file throwing the exception is in VDSM (/usr/share/vdsm/virt/vm.py, line 697).<br><div><br></div>We are very willing to help, test and supply log files in any way we can.<br><div><br></div>Regards<br>Soeren<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20150531/c25fc497/attachment-0001.html><br><div><br></div>------------------------------<br><div><br></div>Message: 6<br>Date: Sun, 31 May 2015 23:39:24 +0000<br>From: Soeren Malchow <soeren.malchow@mcon.net><br>To: Soeren Malchow <soeren.malchow@mcon.net>,<br> "libvirt-users@redhat.com" <libvirt-users@redhat.com>, users<br> <users@ovirt.org><br>Subject: Re: [ovirt-users] Bug in Snapshot Removing<br>Message-ID: <D1916815.D97C%soeren.malchow@mcon.net><br>Content-Type: text/plain; charset="windows-1252"<br><div><br></div>And sorry, another update: it does partly kill the VM. It was still pingable when I wrote the last mail, but no SSH and no SPICE console were possible.<br><div><br></div>From: Soeren Malchow <soeren.malchow@mcon.net<mailto:soeren.malchow@mcon.net>><br>Date: Monday 1 June 2015 01:35<br>To: Soeren Malchow <soeren.malchow@mcon.net<mailto:soeren.malchow@mcon.net>>, "libvirt-users@redhat.com<mailto:libvirt-users@redhat.com>" <libvirt-users@redhat.com<mailto:libvirt-users@redhat.com>>, users <users@ovirt.org<mailto:users@ovirt.org>><br>Subject: Re: [ovirt-users] Bug in Snapshot Removing<br><div><br></div>Small addition again:<br><div><br></div>This error shows up in the log while removing snapshots WITHOUT rendering the VMs unresponsive:<br><div><br></div>Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1657]: Timed out during operation: cannot acquire state change lock<br>Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[6839]: vdsm vm.Vm ERROR vmId=`56848f4a-cd73-4eda-bf79-7eb80ae569a9`::Error getting block job info<br> Traceback (most recent call last):<br> File 
"/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs<br><div><br></div><br><div><br></div>From: Soeren Malchow <soeren.malchow@mcon.net<mailto:soeren.malchow@mcon.net>><br>Date: Monday 1 June 2015 00:56<br>To: "libvirt-users@redhat.com<mailto:libvirt-users@redhat.com>" <libvirt-users@redhat.com<mailto:libvirt-users@redhat.com>>, users <users@ovirt.org<mailto:users@ovirt.org>><br>Subject: [ovirt-users] Bug in Snapshot Removing<br><div><br></div>Dear all,<br><div><br></div>I am not sure if my mail just did not get any attention among all the other mails, so this time it is also going to the libvirt mailing list.<br><div><br></div>I am experiencing a problem with VMs becoming unresponsive when removing snapshots (live merge), and I think it is a serious one.<br><div><br></div>Here are the previous mails:<br><div><br></div>http://lists.ovirt.org/pipermail/users/2015-May/033083.html<br><div><br></div>The problem occurs on a system with everything on the latest version: CentOS 7.1 and oVirt 3.5.2.1, all upgrades applied.<br><div><br></div>This problem did NOT exist before upgrading to CentOS 7.1, in an environment running oVirt 3.5.0 and 3.5.1 on Fedora 20 with the libvirt-preview repo activated.<br><div><br></div>I think this is a bug in libvirt, not oVirt itself, but I am not sure. 
The actual file throwing the exception is in VDSM (/usr/share/vdsm/virt/vm.py, line 697).<br><div><br></div>We are very willing to help, test and supply log files in any way we can.<br><div><br></div>Regards<br>Soeren<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20150531/b035f648/attachment.html><br><div><br></div>------------------------------<br><div><br></div>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>End of Users Digest, Vol 44, Issue 127<br>**************************************<br></div><div><br></div></div></body></html>