
My current understanding is that oVirt no longer supports any single-server configuration since the All-In-One install was removed in 3.6. While the hosted-engine install was supposed to replace it, it requires either networked storage (NFS, iSCSI) or GlusterFS. To my knowledge NFS/iSCSI exported to localhost is not supported, so I would need at least 2 machines. Furthermore, Gluster requires at least 3 sources of storage for quorum (it would be great if there was an option to acknowledge the risks and continue), meaning a single machine is not practical.

I understand and acknowledge that oVirt is not targeted towards homelab setups, or at least small homelab setups. However, I believe that having a solid configuration for such use cases would benefit the project as a whole. It would make oVirt much more visible in the homelab community and more accessible for testing, which in turn yields more people with oVirt experience. As it stands, most other virtualization products allow usage (not just a live CD) in a single-server environment, although of course not all features can be used. vSphere, XenServer, Proxmox, FiFo, and Nutanix all allow an installation on a single server. oVirt/RHV appears to be the odd one out, and it honestly shows when you look at what people talk about online: there is a huge gap between even Proxmox and oVirt when it comes to mindshare in the tech community, and it does not favor oVirt.

On Sun, Sep 4, 2016 at 11:45 PM, zero four <zfnoctis@gmail.com> wrote:
> My current understanding is that oVirt no longer supports any single-server configuration since the All-In-One install was removed in 3.6. While the hosted-engine install was supposed to replace it, it requires either networked storage (NFS, iSCSI) or GlusterFS. To my knowledge NFS/iSCSI exported to localhost is not supported,
NFS exported to localhost may be fragile. An iSCSI server on your single host should work. The best option for a single host is local storage, but I don't know if hosted-engine supports it.
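As a hedged sketch of the "iSCSI server on your single host" idea: on a modern Fedora/EL host the LIO target can expose a local disk image over iSCSI, which the same machine can then consume as storage. The paths, IQNs, and sizes below are placeholders, not anything from the thread.

```shell
# Sketch (run as root, targetcli/LIO installed): back a LUN with a local
# file and export it over iSCSI so the single host can log in to itself.
# All names/IQNs below are made up for illustration.
targetcli backstores/fileio create name=ovirt_data \
    file_or_dev=/var/lib/ovirt-iscsi/data.img size=200G
targetcli iscsi/ create iqn.2016-09.local.lab:ovirt-data
targetcli iscsi/iqn.2016-09.local.lab:ovirt-data/tpg1/luns \
    create /backstores/fileio/ovirt_data
# Allow the host's own initiator IQN (see /etc/iscsi/initiatorname.iscsi):
targetcli iscsi/iqn.2016-09.local.lab:ovirt-data/tpg1/acls \
    create iqn.1994-05.com.redhat:myhost
targetcli saveconfig
```

Unlike NFS-to-localhost, this path goes through the block layer rather than a local NFS client writing back to a local NFS server, which is why it avoids the memory-reclaim deadlock discussed later in the thread.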
> so I would need at least 2 machines. Furthermore, Gluster requires at least 3 sources of storage for quorum (it would be great if there was an option to acknowledge the risks and continue), meaning a single machine is not practical.
You can use a single GlusterFS brick; I think it should work with the hosted-engine setup.
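A minimal sketch of such a single-brick volume, assuming glusterd is running on the host; the hostname, brick path, and volume name are placeholders. With one brick there is no replication and no quorum, so the data is only as safe as the disk underneath.

```shell
# Sketch: one-brick Gluster volume on a single host (no replica, no quorum).
# 'force' is needed when the brick lives on the root filesystem.
gluster volume create engine myhost.lab:/gluster/engine/brick force
# oVirt expects the volume owned by vdsm:kvm (uid/gid 36):
gluster volume set engine storage.owner-uid 36
gluster volume set engine storage.owner-gid 36
gluster volume start engine
gluster volume info engine
```

Adding bricks later (and converting to a replica volume) is possible, which makes this a reasonable starting point for a lab that might grow.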
> I understand and acknowledge that oVirt is not targeted towards homelab setups, or at least small homelab setups. However I believe that having a solid configuration for such use cases would be a benefit to the project as a whole. It allows oVirt to be much more visible in the homelab community, and more accessible to testing which in turn yields more people who have experience with oVirt. As it stands most other virtualization products allow for usage (not just a livecd) in a single server environment, although not all features can be used of course. vSphere, Xenserver, Proxmox, FIFO, and Nutanix all allow an installation on a single server. It appears that oVirt/RHV is the odd-one out - and it honestly shows when you look at what people talk about online - there is a huge gap between even Proxmox and oVirt when it comes to mindshare in the tech community, and it does not favor oVirt.
I agree that it would be nice if the all-in-one option was still available, but someone has to maintain this setup.

For a single host, better to use virt-manager. You can import the VMs into oVirt later when you want to scale your lab.

If you want to experiment with oVirt, you can use virt-manager to create several VMs; if you enable nested KVM, you can use the VMs as your hosts. This is the standard setup we use for development.

Nir
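Nir's nested-KVM setup can be sketched as follows; the module names assume an Intel CPU (on AMD it is `kvm_amd` with the same `nested=1` option). This is a generic procedure, not something specified in the thread.

```shell
# Sketch: enable nested KVM so virt-manager guests can themselves run KVM
# and act as oVirt hosts. Intel shown; AMD uses kvm_amd instead.
cat /sys/module/kvm_intel/parameters/nested      # N/0 means disabled
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
# Reload the module (all VMs must be shut down first):
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested      # should now show Y or 1
```

Once enabled, the guest "hosts" need a CPU mode that passes the virtualization extensions through, e.g. `host-passthrough` or "Copy host CPU configuration" in virt-manager.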

I'm running 3.6 with local NFS for the hosted engine. I have more than one host but they are all isolated and export their storage via local NFS. Setup has been running for 1 year now.

Maybe you can give it a try?

Cheers,
Chris

From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of zero four
Sent: Sunday 4 September 2016 22:45
To: users@ovirt.org
Subject: [ovirt-users] oVirt on a single server

I'm running single node hosted engine 4.0.x with local NFS and it runs just fine. Thanks

Regards,
Philip Lo

On 5 Sep 2016, at 5:45 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
> I'm running 3.6 with local NFS for the hosted engine. I have more than one host but they are all isolated and export their storage via local NFS. Setup has been running for 1 year now.

On Mon, Sep 5, 2016 at 8:46 AM, Philip Lo <lokoiyin@yahoo.com> wrote:
> I'm running single node hosted engine 4.0.x with local NFS and it runs just fine. Thanks
>
> Regards,
> Philip Lo
> On 5 Sep 2016, at 5:45 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
>> I'm running 3.6 with local NFS for the hosted engine. I have more than one host but they are all isolated and export their storage via local NFS. Setup has been running for 1 year now.
It runs fine, but it may deadlock :-)

See:
- https://bugzilla.redhat.com/489889
- https://access.redhat.com/solutions/22231

Such a setup is OK for testing or development.

Nir

Oh wow. Well then I guess we are in a bad situation now. Don't really have the infra to move to shared storage...

Isn't this the same issue then with NFS over gluster?

Best,

Sent from my iPhone

On 05 Sep 2016, at 08:37, Nir Soffer <nsoffer@redhat.com> wrote: ...

On Mon, Sep 5, 2016 at 9:37 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Mon, Sep 5, 2016 at 8:46 AM, Philip Lo <lokoiyin@yahoo.com> wrote:
I'm running single node hosted engine 4.0.x with local NFS and it runs just fine. Thanks
Regards, Philip Lo
On 5 Sep 2016, at 5:45 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
I’m running 3.6 with local NFS for the hosted engine. I have more than one host, but they are all isolated and export their storage via local NFS. The setup has been running for a year now.
It runs fine, but it may deadlock :-)
See - https://bugzilla.redhat.com/489889 - https://access.redhat.com/solutions/22231
Indeed, see also: https://lwn.net/Articles/595652/
Such setup is ok for testing or development.
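For anyone trying this anyway, here is a minimal sketch of what a localhost-only NFS export might look like. The path and the anonuid/anongid mapping (36 being vdsm:kvm on oVirt hosts) are assumptions to verify against your own installation:

```shell
# Hypothetical /etc/exports entry for a storage domain served back to the same host.
# The path and the uid/gid mapping are illustrative, not a tested recipe.
echo '/srv/ovirt/storage localhost(rw,sync,no_subtree_check,anonuid=36,anongid=36)' > exports.example

# After editing the real /etc/exports you would run: exportfs -ra
# Sanity-check that the export is restricted to localhost only:
grep -c 'localhost(' exports.example
```

On a real host you would also make sure the directory is owned by vdsm:kvm before pointing hosted-engine setup at it.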
Another option is iSCSI, which AFAIU does not suffer from this problem.

And yet another option is using nested-kvm to run multiple virtual "hosts" on a single physical one, and run hosted-engine on them, with another VM serving nfs or iSCSI storage. This obviously provides lower performance, but higher flexibility, and is probably ideal for learning oVirt, testing, etc. Obviously you can't create/maintain these hosts using oVirt itself, but have to use e.g. virsh or virt-manager.

There is a project called lago [1] doing just that, and some of the CI tests of oVirt already use it to do a full hosted-engine setup.

[1] http://lago.readthedocs.io/en/stable/

-- Didi
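To sketch the nested-kvm part: on the physical host you would enable nesting for the kvm module before defining the virtual "hosts". The module name shown is for Intel CPUs (AMD uses kvm_amd), and the drop-in file name is my own choice:

```shell
# Illustrative modprobe drop-in enabling nested virtualization (Intel CPUs).
# On AMD the module is kvm_amd. The file name is arbitrary.
echo 'options kvm_intel nested=1' > kvm-nested.conf.example

# On a real host you would place this in /etc/modprobe.d/, reload kvm_intel,
# and confirm with: cat /sys/module/kvm_intel/parameters/nested
grep -o 'nested=1' kvm-nested.conf.example
```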

On 4 September 2016 at 23:45, zero four <zfnoctis@gmail.com> wrote: ...
I understand and acknowledge that oVirt is not targeted towards homelab setups, or at least small homelab setups. However I believe that having a solid configuration for such use cases would be a benefit to the project as a whole.
As others have already mentioned, using the full oVirt with engine in a single-host scenario can work, but is not currently actively maintained or tested.

There are other options originating from the oVirt community, however.

One notable option is the Cockpit-oVirt plugin [1], which can use VDSM to manage VMs on a single host.

Another option is the Kimchi project [2], for which a discussion about making it an oVirt project took place in the past [3]. It seems that some work for inclusion in oVirt Node was also planned at some point [4].

[1]: http://www.ovirt.org/develop/release-management/features/cockpit/ [2]: https://github.com/kimchi-project/kimchi [3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html [4]: http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/

-- Barak Korren bkorren@redhat.com RHEV-CI Team

Adding Kimchi to oVirt Node is perhaps the easiest option. It can be pretty useful for many situations and doesn't need things like mounting NFS on localhost.

It is not nice to no longer have a stable All-in-One solution, as that can help with adoption for later growth.

oVirt-Cockpit looks nice and interesting.

Fernando

On 05/09/2016 05:18, Barak Korren wrote: ...

So basically we need at least 2 nodes to enter the realm of testing and maintained? If we’re talking pure oVirt here.

--
Dr Christophe Trefois, Dipl.-Ing. Technical Specialist / Post-Doc
UNIVERSITÉ DU LUXEMBOURG
LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE Campus Belval | House of Biomedicine 6, avenue du Swing L-4367 Belvaux T: +352 46 66 44 6124 F: +352 46 66 44 6949 http://www.uni.lu/lcsb
---- This message is confidential and may contain privileged information. It is intended for the named recipient only. If you receive it in error please notify me and permanently delete the original message and any copies. ----
On 05 Sep 2016, at 16:31, Fernando Frediani <fernando.frediani@upx.com.br> wrote: ...

_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
So basically we need at least 2 nodes to enter the realm of testing and maintained?
I think some people occasionally use hosted-engine with local iSCSI storage on a single machine. AFAIK it's not tested by CI or often, but patches are welcome - e.g. using lago and ovirt-system-tests.

Can you please explain your intentions/requirements?

Even if it works, oVirt is not designed for single-machine _production_ use. For that, I think that most people agree that virt-manager is more suitable. oVirt on a single machine is usually for testing/demonstration/learning/etc.
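To illustrate the local-iSCSI variant: a sketch of targetcli commands that would carve a local logical volume into a LUN for hosted-engine. The device path, IQN and backstore name are all hypothetical, and this is not a tested recipe:

```shell
# Hypothetical targetcli session exposing a local logical volume as an iSCSI LUN.
# Device path, IQN and backstore name are placeholders to adapt.
cat > iscsi-sketch.example <<'EOF'
targetcli /backstores/block create name=he_lun dev=/dev/vg0/he_lv
targetcli /iscsi create iqn.2016-09.local.myhost:he-target
targetcli /iscsi/iqn.2016-09.local.myhost:he-target/tpg1/luns create /backstores/block/he_lun
EOF

# hosted-engine --deploy would then be pointed at 127.0.0.1 as the iSCSI portal.
wc -l < iscsi-sketch.example
```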
-- Didi


On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
Personally, my use case is that I have 4 machines with different specs and storage sizing. So I set up four DCs with one host each. Then I have the hosted engine on one of the hosts. Storage is local, shared via NFS, so that I can move VMs around.
Not sure I fully understand. You use each of the 4 machines for both storage and running VMs? And export nfs on each to all the others? So that if a VM needs more CPU/memory than disk IO, you can move it to another machine and hopefully get better performance even though the storage is not local?

I admit that it sounds very reasonable, and agree that doing this with nfs is easier than with iSCSI. If you don't mind the risk of local-nfs-mount locks, fine. As others noted, it seems to be quite a low risk.
At this point we are not interested necessarily in HA.
Maybe for you that's the definition of a Dev environment as production has other attributes than just the type of storage?
Dev or Prod is for you to define :-) How much time/money do you lose if a machine dies? Or if a machine locks up until someone notices and handles it?
Would be nice to hear your thoughts about this.
As I wrote above, it sounds reasonable if you understand the risks and can live with them. Looking at the future, you might want to check HC: https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-...
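For reference, the hyperconverged (HC) setup mentioned above is built around a replica-3 Gluster volume, roughly like the following. Hostnames and brick paths are made up, and a real deployment also sets a number of volume options recommended for oVirt:

```shell
# Illustrative gluster commands for the engine volume in a hyperconverged setup.
# host1..host3 and the brick paths are placeholders.
cat > gluster-sketch.example <<'EOF'
gluster volume create engine replica 3 host1:/gluster_bricks/engine host2:/gluster_bricks/engine host3:/gluster_bricks/engine
gluster volume start engine
EOF

# Confirm the sketch requests three-way replication, which is what
# provides quorum across the three hosts:
grep -c 'replica 3' gluster-sketch.example
```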
Kind regards, Christophe
-- Didi

Hi,

No, I move VMs around with an Export Storage domain. All NFS is exported only to the local machine. Nothing is “shared” between hosts. But because I want to export VMs, we use “shared” storage in oVirt instead of “local”.

Best,
Christophe
On 06 Sep 2016, at 10:06, Yedidyah Bar David <didi@redhat.com> wrote: ...

On Tue, Sep 6, 2016 at 12:17 PM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
Hi,
No, I move VMs around with an Export Storage domain.
OK.
All NFS is exported only to the local machine.
Nothing is “shared” between hosts. But because I want to export VMs, we use “shared” storage in oVirt instead of “local”.
I think you can use nfs export storage domains also in a local-storage DC.
Best,
--
Dr Christophe Trefois, Dipl.-Ing. Technical Specialist / Post-Doc
UNIVERSITÉ DU LUXEMBOURG
LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE Campus Belval | House of Biomedicine 6, avenue du Swing L-4367 Belvaux T: +352 46 66 44 6124 F: +352 46 66 44 6949 http://www.uni.lu/lcsb
---- This message is confidential and may contain privileged information. It is intended for the named recipient only. If you receive it in error please notify me and permanently delete the original message and any copies. ----
On 06 Sep 2016, at 10:06, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
Personally my use case is that I have 4 machines with different specs and storage sizing. So I setup four DC with 1 host each. Then I have hosted engine on one of the hosts. Storage is local shared via NFS so that I can move VMs around.
Not sure I fully understand.
You use each of the 4 machines for both storage and running VMs? And export nfs on each to all the others?
So that if a VM needs more CPU/memory then disk IO, you can move it to another machine and hopefully get better performance even though the storage is not local?
I admit that it sounds very reasonable, and agree that doing this with nfs is easier than with iSCSI. If you don't mind the risk of local-nfs-mount locks, fine. As others noted, seems like it's quite a low risk.
At this point we are not interested necessarily in HA.
Maybe for you that's the definition of a Dev environment as production has other attributes than just the type of storage?
Dev or Prod is for you to define :-)
How much time/money do you loose if a machine dies? If a machine locks up until someone notices and handles?
Would be nice to hear your thoughts about this.
As wrote above, sounds reasonable if you understand the risks and can live with them.
Looking at the future you might want to check HC:
https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-...
Kind regards, Christophe
Sent from my iPhone
On 06 Sep 2016, at 08:45, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
So basically we need at least 2 nodes to enter the realm of testing and maintained?
I think some people occasionally use hosted-engine with local iSCSI storage on a single machine. AFAIK it's not tested by CI or often, but patches are welcome - e.g. using lago and ovirt-system-tests.
Can you please explain your intentions/requirements?
Even if it works, oVirt is not designed for single-machine _production_ use. For that, I think that most people agree that virt-manager is more suitable. oVirt on a single machine is usually for testing/demonstration/learning/etc.
If we’re talking pure oVirt here.
--
Dr Christophe Trefois, Dipl.-Ing. Technical Specialist / Post-Doc
UNIVERSITÉ DU LUXEMBOURG
LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE Campus Belval | House of Biomedicine 6, avenue du Swing L-4367 Belvaux T: +352 46 66 44 6124 F: +352 46 66 44 6949 http://www.uni.lu/lcsb
---- This message is confidential and may contain privileged information. It is intended for the named recipient only. If you receive it in error please notify me and permanently delete the original message and any copies. ----
On 05 Sep 2016, at 16:31, Fernando Frediani <fernando.frediani@upx.com.br> wrote:
Adding Kimchi to oVirt node perhaps may be the easiest option. It can be pretty useful for many situations and doesn't need such thing like mounting NFS in localhost.
It is not nice to not have an All-in-One stable solution anymore, as this can help with its adoption for later growth.
oVirt-Cockpit looks nice and interesting.
Fernando
> On 05/09/2016 05:18, Barak Korren wrote:
>> On 4 September 2016 at 23:45, zero four <zfnoctis@gmail.com> wrote:
>> ...
>> I understand and acknowledge that oVirt is not targeted towards homelab
>> setups, or at least small homelab setups. However I believe that having a
>> solid configuration for such use cases would be a benefit to the project as
>> a whole.
> As others have already mentioned, using the full oVirt with engine in
> a single host scenario can work, but is not currently actively
> maintained or tested.
>
> There are other options originating from the oVirt community however.
>
> One notable option is to use the Cockpit-oVirt plugin [1] which can
> use VDSM to manage VMs on a single host.
>
> Another option is to use the Kimchi project [2] for which discussion
> for making it an oVirt project had taken part in the past [3]. It
> seems that also some work for inclusion in oVirt node was also planned
> at some point [4].
>
> [1]: http://www.ovirt.org/develop/release-management/features/cockpit/
> [2]: https://github.com/kimchi-project/kimchi
> [3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html
> [4]: http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Didi

On Tue, Sep 6, 2016 at 12:17 PM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
Hi,
No, I move VMs around with an Export Storage domain.
If you have enough disk and bandwidth, perhaps it makes more sense to set up Gluster as a shared storage? And then just pin VMs to specific hosts, instead of separate DCs, etc.? Y.
All NFS is exported only to the local machine.
Nothing is “shared” between hosts. But because I want to export VMs, we use “shared” storage in oVirt instead of “local”.
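[Editor's note] A localhost-only export like the one Christophe describes could look roughly like this in /etc/exports. Paths and options are illustrative; anonuid/anongid 36 map to vdsm:kvm, the ownership oVirt storage domains expect:

```
# /etc/exports -- hypothetical single-host layout
/storage/ovirt-data   myhost.example.com(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

# Reload exports after editing:
#   exportfs -ra
```

The export is restricted to the host's own name, so nothing else on the network can mount it, which matches the "nothing is shared between hosts" setup above.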
Best,
--
Dr Christophe Trefois, Dipl.-Ing. Technical Specialist / Post-Doc
On 06 Sep 2016, at 10:06, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
Personally my use case is that I have 4 machines with different specs and storage sizing, so I set up four DCs with one host each. Then I have the hosted engine on one of the hosts. Storage is local, shared via NFS so that I can move VMs around.
Not sure I fully understand.
You use each of the 4 machines for both storage and running VMs? And export nfs on each to all the others?
So that if a VM needs more CPU/memory than disk IO, you can move it to another machine and hopefully get better performance even though the storage is not local?
I admit that it sounds very reasonable, and agree that doing this with nfs is easier than with iSCSI. If you don't mind the risk of local-nfs-mount locks, fine. As others noted, seems like it's quite a low risk.
At this point we are not interested necessarily in HA.
Maybe for you that's the definition of a Dev environment, as production has other attributes than just the type of storage?
Dev or Prod is for you to define :-)
How much time/money do you lose if a machine dies? If a machine locks up until someone notices and handles it?
Would be nice to hear your thoughts about this.
As I wrote above, sounds reasonable if you understand the risks and can live with them.
Looking at the future you might want to check HC:
https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-hyperconvergence/
Kind regards, Christophe
Sent from my iPhone
On 06 Sep 2016, at 08:45, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
So basically we need at least 2 nodes to enter the realm of testing and maintained?
I think some people occasionally use hosted-engine with local iSCSI storage on a single machine. AFAIK it's not tested by CI or often, but patches are welcome - e.g. using lago and ovirt-system-tests.
Can you please explain your intentions/requirements?
Even if it works, oVirt is not designed for single-machine _production_ use. For that, I think that most people agree that virt-manager is more suitable. oVirt on a single machine is usually for testing/demonstration/learning/etc.
If we’re talking pure oVirt here.
--
Dr Christophe Trefois, Dipl.-Ing. Technical Specialist / Post-Doc
On 05 Sep 2016, at 16:31, Fernando Frediani <fernando.frediani@upx.com.br> wrote:
Adding Kimchi to oVirt node is perhaps the easiest option. It can be pretty useful for many situations and doesn't need anything like mounting NFS on localhost.
It is not nice to not have an All-in-One stable solution anymore, as this can help with its adoption for later growth.
oVirt-Cockpit looks nice and interesting.
Fernando
> On 05/09/2016 05:18, Barak Korren wrote:
>> On 4 September 2016 at 23:45, zero four <zfnoctis@gmail.com> wrote:
>> ...
>> I understand and acknowledge that oVirt is not targeted towards homelab
>> setups, or at least small homelab setups. However I believe that having a
>> solid configuration for such use cases would be a benefit to the project as
>> a whole.
> As others have already mentioned, using the full oVirt with engine in
> a single host scenario can work, but is not currently actively
> maintained or tested.
>
> There are other options originating from the oVirt community however.
>
> One notable option is to use the Cockpit-oVirt plugin [1] which can
> use VDSM to manage VMs on a single host.
>
> Another option is to use the Kimchi project [2] for which discussion
> for making it an oVirt project had taken part in the past [3]. It
> seems that also some work for inclusion in oVirt node was also planned
> at some point [4].
>
> [1]: http://www.ovirt.org/develop/release-management/features/cockpit/
> [2]: https://github.com/kimchi-project/kimchi
> [3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html
> [4]: http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/

I would suggest a local single-brick Gluster volume. This is probably the simplest option, and it leaves you a scale-out path to replica 3 later.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306
Email: ydary@redhat.com
IRC: ydary

On Tue, Sep 6, 2016 at 12:54 PM, Yaniv Kaul <ykaul@redhat.com> wrote:
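[Editor's note] Yaniv Dary's single-brick suggestion above might be sketched as follows. The volume name, hostnames and brick paths are made up; `force` may be needed if the brick sits on the root filesystem, which Gluster normally refuses:

```shell
# Create and start a single-brick (replica 1) volume on the lone host
gluster volume create data myhost:/gluster/data/brick force
gluster volume start data

# Later, scale out to replica 3 by adding bricks from two new hosts:
gluster volume add-brick data replica 3 \
    host2:/gluster/data/brick host3:/gluster/data/brick
```

This is the scale-out path he refers to: the same volume grows from one brick to a replica-3 set without rebuilding the storage domain.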

On 6 September 2016 at 00:34, Christophe TREFOIS <christophe.trefois@uni.lu> wrote:
So basically we need at least 2 nodes to enter the realm of testing and maintained?
If we’re talking pure oVirt here.
The short answer is yes. The longer answer is more complex, but first a disclaimer: I'm going to describe the situation as I am aware of it, from my point of view as a Red Hat employee and a member of the oVirt infra team. I'm probably not knowledgeable about everything that goes on; for example, there is a fairly large Chinese oVirt community that commits various efforts of which I know very little.

When I'm talking about testing and maintenance, I think we can agree that for something to be maintained it needs to meet the following criteria:
1. It needs to be tested at least once for every oVirt release.
2. Results of that testing need to make their way to the hands of developers. Malfunctions should end up as bugs tracked in Bugzilla.

Probably the largest group that does regular testing for oVirt is the quality engineering group in Red Hat. Red Hat puts a great deal of resources into oVirt, but those resources are not infinite, and when the time comes to schedule resources, the needs of paying Red Hat customers typically come first. Those customers are probably more likely to be running large data centers.

Another set of regular testing is done automatically by the oVirt CI systems. Those tests [1] use Lago [2] to run test suites that simulate various situations for oVirt to run in. The smallest configuration currently tested that way is a 2-node hosted-engine configuration.

As all those tests have been written by Red Hat employees, they tend to focus on what ends up going into RHEV. It is important to note that not every oVirt feature ends up in RHEV, but that does not mean that such a feature never gets tested. There are several oVirt features that are very useful for building oVirt-based testing systems for oVirt itself and as a result get regular testing as well. Notable examples are nested virtualization and the Glance support.
The above being said, there is nothing preventing anyone in the community from creating a test suite for single-host use that will get run regularly by the oVirt CI system. That kind of effort will require some degree of commitment to make it work, fix it when it inevitably breaks, and report what it finds to the developers. There are already existing tools in the oVirt repos that make building such a test suite quite straightforward. I will be happy to guide anyone interested in taking on such an effort.

[1]: https://gerrit.ovirt.org/#/admin/projects/ovirt-system-tests
[2]: http://lago.readthedocs.io/en/stable/

--
Barak Korren
bkorren@redhat.com
RHEV-CI Team
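[Editor's note] For anyone considering such a suite: environments in ovirt-system-tests are described by a LagoInitFile. A minimal single-host sketch might look roughly like this; the field names are recalled from the Lago documentation and may differ between versions, and the template name is a placeholder:

```yaml
# LagoInitFile -- hypothetical minimal single-host environment
domains:
  host-0:
    memory: 4096
    nics:
      - net: lago
    disks:
      - template: el7-base   # placeholder template name
        type: template
        name: root
        dev: vda
        format: qcow2
nets:
  lago:
    type: nat
    dhcp:
      start: 100
      end: 254
```

A test suite would then point `lago init` at this file and run its scenario scripts against the single resulting host.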
participants (9)
- Barak Korren
- Christophe TREFOIS
- Fernando Frediani
- Nir Soffer
- Philip Lo
- Yaniv Dary
- Yaniv Kaul
- Yedidyah Bar David
- zero four