Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg <karli@inparadise.se> wrote:
Den 23 juni 2017 21:08 skrev Vinícius Ferrão <ferrao@if.ufrj.br>:
Hello oVirt folks.
I’m a traitor to the Xen movement and was looking for good alternatives to XenServer hypervisors. I was aware of KVM for a long time, but I was missing a more professional, appliance-like feel in the product, and oVirt appears to deliver exactly what I’m looking for.
Don’t get me wrong, I’m not saying that Xen is not good; I’m looking for equal or better alternatives, but I’m starting to get frustrated with oVirt.
Firstly I tried to install oVirt Node on a VM in VMware Fusion on my notebook; it was a no go. For reasons I don’t know, vdsmd.service and libvirtd failed to start. I made sure that I was running with EPT support enabled to achieve nested virtualization, but as I said: it was a no go.
So I’ve decommissioned a XenServer machine that was in production just to try oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506 with 48GB of system RAM. Still, I can’t get the hosted engine to work; it always insults my hardware: --- Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy.
It’s definitely a problem in the storage subsystem, and the error is just random; at this moment I’ve got:
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
But on other tries it came up with something like this:
No response for JSON-RPC Volume.getSize request.
I was thinking that the problem was on the NFSv3 server on our FreeNAS box, so I’ve changed to an iSCSI backend, but the problem continues.
Can't say anything about your issues without the logs but there's nothing wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's NFS since oVirt 3.2 so...
Could you share your relevant exports configuration to make sure he's using something that you know works?
"You're holding it wrong":) Sorry, I know you're frustrated but that's what I can add to the conversation.
/K
This happens at the very end of the ovirt-hosted-engine-setup command, which leads me to believe that’s an oVirt issue. The OVA was already copied and deployed to the storage:
[ INFO ] Starting vdsmd
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
At this point I really don’t know what I should try. And the log file is too verborragic (hoping this word exists) to look for errors.
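To pull the relevant failures out of those verbose logs, filtering the setup log named above and the vdsm log usually narrows things down quickly; a minimal sketch, assuming the default vdsm log location:

  grep -iE 'error|traceback' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
  grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -n 50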
Any guidance?
Thanks, V.
-- Adam Litke


On Sat, Jun 24, 2017 at 12:23 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Hello Adam and Karli,
I will remap uid and gid of NFS to 36 and try again with NFS sharing.
But this does not make much sense, because on iSCSI this should not happen. There are no permissions involved and when oVirt runs the hosted-engine setup it creates the ext3 filesystem on the iSCSI share without any issue. Here’s a photo of the network bandwidth during the OVF deployment: http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg
So it appears to be working. Something happens after the deployment that breaks the connections and kills vdsm.
Indeed - these may be two different issues. Let us know how NFS works first, then let's try with iSCSI. Y.
Thanks, V.
On 23 Jun 2017, at 17:47, Adam Litke <alitke@redhat.com> wrote:
On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg <karli@inparadise.se> wrote:
Den 23 juni 2017 21:08 skrev Vinícius Ferrão <ferrao@if.ufrj.br>:
Hello oVirt folks.
I’m a traitor to the Xen movement and was looking for good alternatives to XenServer hypervisors. I was aware of KVM for a long time, but I was missing a more professional, appliance-like feel in the product, and oVirt appears to deliver exactly what I’m looking for.
Don’t get me wrong, I’m not saying that Xen is not good; I’m looking for equal or better alternatives, but I’m starting to get frustrated with oVirt.
Firstly I tried to install oVirt Node on a VM in VMware Fusion on my notebook; it was a no go. For reasons I don’t know, vdsmd.service and libvirtd failed to start. I made sure that I was running with EPT support enabled to achieve nested virtualization, but as I said: it was a no go.
So I’ve decommissioned a XenServer machine that was in production just to try oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506 with 48GB of system RAM. Still, I can’t get the hosted engine to work; it always insults my hardware: --- Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy.
It’s definitely a problem in the storage subsystem, and the error is just random; at this moment I’ve got:
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
But on other tries it came up with something like this:
No response for JSON-RPC Volume.getSize request.
I was thinking that the problem was on the NFSv3 server on our FreeNAS box, so I’ve changed to an iSCSI backend, but the problem continues.
Can't say anything about your issues without the logs but there's nothing wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's NFS since oVirt 3.2 so...
Could you share your relevant exports configuration to make sure he's using something that you know works?
"You're holding it wrong":) Sorry, I know you're frustrated but that's what I can add to the conversation.
/K
This happens at the very end of the ovirt-hosted-engine-setup command, which leads me to believe that’s an oVirt issue. The OVA was already copied and deployed to the storage:
[ INFO ] Starting vdsmd
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
At this point I really don’t know what I should try. And the log file is too verborragic (hoping this word exists) to look for errors.
Any guidance?
Thanks, V.
-- Adam Litke


On Mon, Jun 26, 2017 at 1:16 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Hello again,
Setting the folder permissions to vdsm:kvm (36:36) did the trick to make NFS work. I wasn't expecting this to work, since it doesn't make sense to me when drawing a parallel with the iSCSI problem.
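For reference, a minimal sketch of the FreeNAS/FreeBSD side of that fix, with a hypothetical dataset path and network (adjust both, and check exports(5) for the exact mapping syntax):

  # give the exported dataset to uid/gid 36 (vdsm:kvm on the oVirt host); numeric ids shown
  chown -R 36:36 /mnt/tank/ovirt-storage
  # /etc/exports entry that maps all remote users to uid/gid 36
  /mnt/tank/ovirt-storage -mapall=36:36 -network 192.168.1.0/24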
I'm starting to believe that something is just bad here. Perhaps the storage system is running with something broken, or the host machine is just unstable.
I will consider this solved, since later, inside the oVirt Engine panel, I was able to mount and use the same iSCSI share.
But let me ask for sincere answers here:
1. Is oVirt feature complete compared to RHEV?
Yes. In fact it may have additional features unavailable in RHV.
2. Should I migrate from XenServer to oVirt? This is biased, I know, but I would like to hear opinions. The folks with @redhat.com email addresses will know how to advocate in favor of oVirt.
I don't know, and I don't think (having a @redhat.com email myself) that I should advocate. If someone has made a comparison (even if it's just for their own specific use case), it would be great if it could be shared. If you do - please share your thoughts.
3. Some "journalists" says that oVirt is like Fedora in comparison to RHEL, is this really true? Or it's more aligned with a CentOS-like release? Because Fedora isn't really an Enterprise OS, and I was looking for an Enterprise Hypervisor. I'm aware that oVirt is the upstream from RHEV.
I'd cautiously say 'in between'. We strive to ensure oVirt is stable, and I believe we make good progress in every release. We also make an effort to quickly release fixes (minor releases). That being said, RHV has a longer life cycle, and for example, when oVirt stopped releasing oVirt 3.6.x, Red Hat continued to release minor versions of it. We have hundreds of oVirt users running it in production, many in large scale, with mission critical workloads. Lastly, oVirt enables features in upstream before they are delivered in RHV. (That being said, I've had a very good experience with Fedora 24 which was rock solid for me, then I've had some misfortune with Fedora 25, and now I'm assessing if I should upgrade to F26 beta...)
4. Is there any good SPICE client for macOS? Or should I just use the HTML5 version instead?
I'm afraid not. Y.
Thanks, V.
Sent from my iPhone
On 23 Jun 2017, at 18:50, Yaniv Kaul <ykaul@redhat.com> wrote:
On Sat, Jun 24, 2017 at 12:23 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Hello Adam and Karli,
I will remap uid and gid of NFS to 36 and try again with NFS sharing.
But this does not make much sense, because on iSCSI this should not happen. There are no permissions involved and when oVirt runs the hosted-engine setup it creates the ext3 filesystem on the iSCSI share without any issue. Here’s a photo of the network bandwidth during the OVF deployment: http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg
So it appears to be working. Something happens after the deployment that breaks the connections and kills vdsm.
Indeed - these may be two different issues. Let us know how NFS works first, then let's try with iSCSI. Y.
Thanks, V.
On 23 Jun 2017, at 17:47, Adam Litke <alitke@redhat.com> wrote:
On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg <karli@inparadise.se> wrote:
Den 23 juni 2017 21:08 skrev Vinícius Ferrão <ferrao@if.ufrj.br>:
Hello oVirt folks.
I’m a traitor to the Xen movement and was looking for good alternatives to XenServer hypervisors. I was aware of KVM for a long time, but I was missing a more professional, appliance-like feel in the product, and oVirt appears to deliver exactly what I’m looking for.
Don’t get me wrong, I’m not saying that Xen is not good; I’m looking for equal or better alternatives, but I’m starting to get frustrated with oVirt.
Firstly I tried to install oVirt Node on a VM in VMware Fusion on my notebook; it was a no go. For reasons I don’t know, vdsmd.service and libvirtd failed to start. I made sure that I was running with EPT support enabled to achieve nested virtualization, but as I said: it was a no go.
So I’ve decommissioned a XenServer machine that was in production just to try oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506 with 48GB of system RAM. Still, I can’t get the hosted engine to work; it always insults my hardware: --- Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy.
It’s definitely a problem in the storage subsystem, and the error is just random; at this moment I’ve got:
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
But on other tries it came up with something like this:
No response for JSON-RPC Volume.getSize request.
I was thinking that the problem was on the NFSv3 server on our FreeNAS box, so I’ve changed to an iSCSI backend, but the problem continues.
Can't say anything about your issues without the logs but there's nothing wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's NFS since oVirt 3.2 so...
Could you share your relevant exports configuration to make sure he's using something that you know works?
"You're holding it wrong":) Sorry, I know you're frustrated but that's what I can add to the conversation.
/K
This happens at the very end of the ovirt-hosted-engine-setup command, which leads me to believe that’s an oVirt issue. The OVA was already copied and deployed to the storage:
[ INFO ] Starting vdsmd
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
At this point I really don’t know what I should try. And the log file is too verborragic (hoping this word exists) to look for errors.
Any guidance?
Thanks, V.
-- Adam Litke


Is there some way to make vnc/web the default for each VM? Oh, I have searched and not found. :/
On Mon, 26 Jun 2017 at 8:35 am, Fabrice Bacchella <fabrice.bacchella@orange.fr> wrote:
4. Is there any good SPICE client for macOS? Or should I just use the
HTML5 version instead?
I'm afraid not. Y.
There is one SPICE client, Remote Viewer, but I would not call it good; it's very slow. So I tend to use the embedded VNC viewer.
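If SPICE is still wanted on macOS, the console.vv descriptor that the web UI offers for download can be opened with Remote Viewer directly; a usage sketch, with the download path assumed:

  remote-viewer ~/Downloads/console.vv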

On Mon, Jun 26, 2017 at 4:04 AM, Ben De Luca <bdeluca@gmail.com> wrote:
Is there some way to make vnc/web the default for each VM? Oh, I have searched and not found. :/
I feel like there used to be a way to do this at the cluster level but I also could not find it. Your best bet would be to configure VM templates and set the console to vnc there.
On Mon, 26 Jun 2017 at 8:35 am, Fabrice Bacchella <fabrice.bacchella@orange.fr> wrote:
4. Is there any good SPICE client for macOS? Or should I just use the
HTML5 version instead?
I'm afraid not. Y.
There is one SPICE client, Remote Viewer, but I would not call it good; it's very slow. So I tend to use the embedded VNC viewer.
-- Adam Litke

On 26 Jun 2017, at 15:17, Adam Litke <alitke@redhat.com> wrote:
On Mon, Jun 26, 2017 at 4:04 AM, Ben De Luca <bdeluca@gmail.com> wrote:
Is there some way to make VNC/web the default for each VM? Oh, I have searched and not found. :/
I feel like there used to be a way to do this at the cluster level but I also could not find it. Your best bet would be to configure VM templates and set the console to vnc there.
The default is an OS type setting; you can change/override that in osinfo [1]. Another alternative is to just use a template with it (including a system-wide change via the Blank template).
You can also use the combined display of SPICE+VNC, which allows you to connect with either protocol. For the client it's a user preference setting (to use remote-viewer or novnc/spice-html5).
As for the original question, none of the existing macOS SPICE clients are enterprise quality (that's subjective, of course :), but configuring both gives you an easy way to switch to VNC for a particular user or in case of problems.
Thanks, michal
[1] http://www.ovirt.org/develop/release-management/features/virt/os-info/

On Mon, Jun 26, 2017 at 9:15 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
On Mon, Jun 26, 2017 at 1:16 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Hello again,
Setting the folder permissions to vdsm:kvm (36:36) did the trick to make NFS work. I wasn't expecting this to work, since it does not make sense to me when drawing a parallel with the iSCSI problem.
I'm starting to believe that something else is just bad here. Perhaps the storage system is running with something broken, or the host machine is just unstable.
I will consider this solved, since later, inside the oVirt Engine panel, I was able to mount and use the same iSCSI share.
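(For anyone hitting the same NFS permission issue: oVirt expects the exported directory to be owned by vdsm:kvm, i.e. uid/gid 36. On a plain NFS server that is roughly the following; the export path is just an example, and on FreeNAS the equivalent is setting the maproot/mapall user and group of the share to that uid/gid.)

chown 36:36 /export/ovirt-storage
chmod 0755 /export/ovirt-storage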
But let me ask for sincere answers here:
1. Is oVirt feature complete compared to RHEV?
Yes. In fact it may have additional features unavailable in RHV.
2. Should I migrate from XenServer to oVirt? This is biased, I know, but I would like to hear opinions. The folks with @redhat.com email addresses will know how to advocate in favor of oVirt.
I don't know, and I don't think (having a @redhat.com email myself) I should advocate. If someone made a comparison (even if it's just for their own specific use case), it could be great if it could be shared. If you do - please share your thoughts.
3. Some "journalists" say that oVirt is like Fedora in comparison to RHEL; is this really true? Or is it more aligned with a CentOS-like release? Because Fedora isn't really an Enterprise OS, and I was looking for an Enterprise Hypervisor. I'm aware that oVirt is the upstream of RHEV.
I'd cautiously say 'in between'. We strive to ensure oVirt is stable, and I believe we make good progress in every release. We also make an effort to quickly release fixes (minor releases). That being said, RHV has a longer life cycle, and for example, when oVirt stopped releasing oVirt 3.6.x, Red Hat continued to release minor versions of it.
We have hundreds of oVirt users running it in production, many in large scale, with mission critical workloads.
And probably not surprisingly, oVirt is running oVirt in production: https://engine-phx.ovirt.org/ovirt-engine/ for all the oVirt CI infrastructure (13 hosts, 200 VMs).
Lastly, oVirt enables features in upstream before they are delivered in RHV.
(That being said, I've had a very good experience with Fedora 24 which was rock solid for me, then I've had some misfortune with Fedora 25, and now I'm assessing if I should upgrade to F26 beta...)
For what it's worth, I've been on Fedora 26 since April 14th and it works like a charm for me.
4. Is there any good SPICE client for macOS? Or should I just use the HTML5 version instead?
I'm afraid not. Y.
Thanks, V.
Sent from my iPhone
On 23 Jun 2017, at 18:50, Yaniv Kaul <ykaul@redhat.com> wrote:
On Sat, Jun 24, 2017 at 12:23 AM, Vinícius Ferrão <ferrao@if.ufrj.br> wrote:
Hello Adam and Karli,
I will remap uid and gid of NFS to 36 and try again with NFS sharing.
But this does not make much sense, because on iSCSI this should not happen. There are no permissions involved and when oVirt runs the hosted-engine setup it creates the ext3 filesystem on the iSCSI share without any issue. Here’s a photo of the network bandwidth during the OVF deployment: http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg
So it appears to be working. Something happens after the deployment that breaks the connections and kills vdsm.
Indeed - may be two different issues. Let us know how the NFS works first, then let's try with iSCSI. Y.
Thanks, V.
-- Sandro Bonazzola
Associate Manager, Software Engineering, EMEA ENG Virtualization R&D
Red Hat EMEA

On 28.06.2017 12:33, Sandro Bonazzola wrote:
2. Should I migrate from XenServer to oVirt? This is biased, I know, but I would like to hear opinions. The folks with @redhat.com email addresses will know how to advocate in favor of oVirt.
I don't know, and I don't think (having a @redhat.com email myself) I should advocate. If someone made a comparison (even if it's just for their own specific use case), it could be great if it could be shared. If you do - please share your thoughts.
Hi, for what it's worth, here is a matrix-like comparison (you can configure the products, and should select RHEV for oVirt-like features): https://www.whatmatrix.com/comparison/Virtualization
Some features of Xen are not available in oVirt/RHEV, e.g. live migration with local storage.
HTH, Sven

2. Should I migrate from XenServer to oVirt? This is biased, I know, but I would like to hear opinions. The folks with @redhat.com email addresses will know how to advocate in favor of oVirt.
In terms of reliability, better stay with XenServer.

On Mon, Jun 26, 2017 at 10:51:52AM +0200, InterNetX - Juergen Gotteswinter wrote:
2. Should I migrate from XenServer to oVirt? This is biased, I know, but I would like to hear opinions. The folks with @redhat.com email addresses will know how to advocate in favor of oVirt.
In terms of reliability, better stay with XenServer.
Seriously, you should have provided some more insights to support your statement. What reliability issues did you encounter in oVirt that are not present in XenServer? I have deployed *several* oVirt setups since 2012 and haven't found a single reliability issue since then. Of course, there have been some bugs, but the oVirt project has made *tremendous* progress since 2012.
Regards - Frank

I really would like to hear it too, because oVirt got my attention with how active the community is and the tremendous progress that Frank mentioned. V.

Are we running the same code? I applaud the amount of effort, but I can't imagine there is a depth of testing. Oh, that's right, we are the testers for RHEL.

On Fri, Jun 30, 2017 at 2:30 AM, Ben De Luca <bdeluca@gmail.com> wrote:
Are we running the same code?
I applaud the amount of effort, but I can't imagine there is a depth of testing. Oh, that's right, we are the testers for RHEL.
Hi Ben,
It's always great to get constructive feedback. I've tried to look for bugs you have reported, for RHEL, oVirt or Fedora, but did not find any [1]. Perhaps I was searching with the wrong username.
Respectfully [2], Y.
[1] https://bugzilla.redhat.com/buglist.cgi?chfield=%5BBug%20creation%5D&chfieldto=Now&email1=bdeluca%40gmail.com&emailreporter1=1&emailtype1=substring&f3=OP&j3=OR&list_id=7540196&query_format=advanced
[2] http://www.ovirt.org/community/about/community-guidelines/#be-respectful
participants (11)
- Adam Litke
- Ben De Luca
- Fabrice Bacchella
- Frank Wall
- InterNetX - Juergen Gotteswinter
- Karli Sjöberg
- Michal Skrivanek
- Sandro Bonazzola
- Sven Kieske
- Vinícius Ferrão
- Yaniv Kaul