
I'm reading through all of the documentation at https://ovirt.org/documentation/, and am a bit overwhelmed with all of the different options for installing oVirt.

My particular use case is that I'm looking for a way to manage VMs on multiple physical servers from 1 interface, and be able to deploy new VMs (or delete VMs) as necessary. Ideally, it would be great if I could move a VM from 1 host to a different host as well, particularly in the event that 1 host becomes degraded (bad HDD, bad processor, etc...)

I'm trying to figure out what the difference is between an oVirt Node and the oVirt Engine, and how the engine differs from the Manager. I get the feeling that `Engine` = `Manager`. Same thing. I further think I understand the Engine to be essentially synonymous with a vCenter VM for ESXi hosts. Is this correct? If so, then what's the difference between the `self-hosted` vs the `stand-alone` engines?

oVirt Engine requirements look to be a minimum of 4GB RAM and 2 CPUs. oVirt Nodes, on the other hand, require only 2GB RAM. Is this a requirement just for the physical host, or is that how much RAM each oVirt node process requires? In other words, if I have a physical host with 12GB of physical RAM, will I only be able to allocate 10GB of that to guest VMs? How much of that should I dedicate to the oVirt node processes?

Can you install the oVirt Engine as a VM onto an existing oVirt Node? And then connect that same node to the Engine, once the Engine is installed?

Reading through the documentation, it also sounds like oVirt Engine and oVirt Node require different versions of RHEL or CentOS. I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, whereas each Node requires 7.x (although I'll plan to just use the oVirt Node ISO).

I'm also wondering about storage. I don't really like the idea of using local storage, but a single NFS server would also be a single point of failure, and Gluster would be too expensive to deploy, so at this point, I'm leaning towards using local storage.

Any advice or clarity would be greatly appreciated.

Thanks, David

Sent with ProtonMail Secure Email.

On 21 June 2020 at 23:26:32 GMT+03:00, David White via Users <users@ovirt.org> wrote:
> I'm reading through all of the documentation at https://ovirt.org/documentation/, and am a bit overwhelmed with all of the different options for installing oVirt.
> My particular use case is that I'm looking for a way to manage VMs on multiple physical servers from 1 interface, and be able to deploy new VMs (or delete VMs) as necessary. Ideally, it would be great if I could move a VM from 1 host to a different host as well, particularly in the event that 1 host becomes degraded (bad HDD, bad processor, etc...)
> I'm trying to figure out what the difference is between an oVirt Node and the oVirt Engine, and how the engine differs from the Manager.
> I get the feeling that `Engine` = `Manager`. Same thing. I further think I understand the Engine to be essentially synonymous with a vCenter VM for ESXi hosts. Is this correct?

Generally speaking they are interchangeable, but strictly the engine is the daemon that runs inside the manager. And correct, just like in VMware - you can have your vCenter in a VM on an ESXi host, or you can host it on a separate physical server.
> If so, then what's the difference between the `self-hosted` vs the `stand-alone` engines?

Self-hosted -> the manager is managing the very host that is hosting it, while standalone runs in a non-managed location - a standalone KVM VM, a VMware VM, or a physical server.

> oVirt Engine requirements look to be a minimum of 4GB RAM and 2 CPUs. oVirt Nodes, on the other hand, require only 2GB RAM. Is this a requirement just for the physical host, or is that how much RAM each oVirt node process requires? In other words, if I have a physical host with 12GB of physical RAM, will I only be able to allocate 10GB of that to guest VMs? How much of that should I dedicate to the oVirt node processes?
oVirt Node -> a kind of ready-to-go appliance that has only one purpose - hosting VMs. It has the advantage that you can easily roll back updates. Drawback - it is hard to customize (for example, custom drivers). It is nice to have as much memory as possible, but it depends on the amount and type of VMs you plan to host on it.
> Can you install the oVirt Engine as a VM onto an existing oVirt Node? And then connect that same node to the Engine, once the Engine is installed?

That's exactly what the Hosted-Engine deployment does for you. The easiest way is to use the Cockpit method.
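As a rough sketch of what that deployment looks like from the command line (the CLI alternative to the Cockpit wizard; assuming oVirt 4.4 on an EL8-based host - the package is already present on oVirt Node):

```shell
# On a freshly installed oVirt Node, either open Cockpit (https://<node>:9090)
# and use the Hosted Engine wizard, or run the interactive CLI installer.
# On a plain EL8 host, install the setup package first:
dnf install -y ovirt-hosted-engine-setup
hosted-engine --deploy       # interactive: asks about storage, network, engine VM sizing
hosted-engine --vm-status    # afterwards: engine VM health and HA score per host
```

These are command sketches, not something to paste blindly - the deploy step needs its storage domain (NFS/Gluster/iSCSI) reachable before it will finish.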
> Reading through the documentation, it also sounds like oVirt Engine and oVirt Node require different versions of RHEL or CentOS. I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, whereas each Node requires 7.x (although I'll plan to just use the oVirt Node ISO).

oVirt 4.3 (node, engine) is based on CentOS/EL 7.x; oVirt 4.4 (node, engine) is based on CentOS/EL 8. oVirt 4.4 still needs some polishing, and keep in mind that migration from 4.3 to 4.4 requires a redeploy (a real reinstall).
> I'm also wondering about storage. I don't really like the idea of using local storage, but a single NFS server would also be a single point of failure, and Gluster would be too expensive to deploy, so at this point, I'm leaning towards using local storage.
It's up to you.
> Any advice or clarity would be greatly appreciated.
> Thanks, David
> Sent with ProtonMail Secure Email.

While oVirt can do what you would like concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and virt-manager for the interface.

Those memory/cpu requirements you listed are really tiny and I wouldn't recommend even trying oVirt on such challenged systems. I would specify at least 3 hosts for a gluster hyperconverged system, and a spare available that can take over if one of the hosts dies.

I think a hosted-engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs, CentOS 8 x86_64 recommends at least 2GB for a host. CentOS 7 was OK with 1GB, CentOS 6 maybe 512MB. The tendency is always increasing with updated OS versions.

My minimum oVirt systems were mostly 48GB 16-core, but most are now 128GB 24-core or more.

oVirt Node NG is a prepackaged installer for an oVirt hypervisor/gluster host; with its Cockpit interface you can create and install the hosted-engine VM for the user and admin web interface. It's very good on enterprise server hardware with lots of RAM, CPU, and disks.
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGHCN356DXJEDR...

Thank you and Strahil for your responses. They were both very helpful.
> I think a hosted-engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs, CentOS 8 x86_64 recommends at least 2GB for a host. CentOS 7 was OK with 1GB, CentOS 6 maybe 512MB. The tendency is always increasing with updated OS versions.
Ok, so to clarify my question a little bit, I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node). I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice? And then as you noted, I would need to plan to give the engine 16GB.
> My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
But this is the total amount of physical RAM in your systems, correct? Not the amount that you've reserved for your host OS?

I've spec'd out some hardware, and am probably looking at purchasing two PowerEdge R820's to start, each with 64GB RAM and 32 cores.
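To make that budgeting concrete, here is a back-of-the-envelope split for one 64GB host. All of the reservation figures are my own assumptions for illustration, not oVirt-mandated numbers:

```shell
# Rough RAM budget for a single 64 GB hyperconverged host (figures assumed)
total_mib=65536      # physical RAM: 64 GB
host_os_mib=4096     # reserve for the EL8 host OS plus VDSM/gluster services
engine_mib=16384     # HostedEngine VM (only while it runs on this host)
guest_mib=$(( total_mib - host_os_mib - engine_mib ))
echo "left for guest VMs: ${guest_mib} MiB"   # → left for guest VMs: 45056 MiB
```

So on a 64GB box, roughly 44GB remains for guests while the engine VM is running there; on hosts not carrying the engine, the whole 60GB-ish remainder is available.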
> While ovirt can do what you would like it to do concerning a single user interface, but with what you listed, you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
Can you migrate VMs from 1 host to another with virt-manager, and can you take snapshots? If those two features aren't supported by virt-manager, then that would almost certainly be a deal breaker.

Come to think of it, if I decided to use local storage on each of the physical hosts, would I be able to migrate VMs? Or do I *have* to use a Gluster or NFS store for that?

On 22 June 2020 at 11:06:16 GMT+03:00, David White via Users <users@ovirt.org> wrote:
> Thank you and Strahil for your responses. They were both very helpful.

> > I think a hosted engine installation VM wants 16GB RAM configured though I've built older versions with 8GB RAM. For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1, CentOS6 maybe 512K. The tendency is always increasing with updated OS versions.

> Ok, so to clarify my question a little bit, I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node). I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice? And then as you noted, I would need to plan to give the engine 16GB.
I run my engine on 4 GB of RAM, but I have no more than 20 VMs; the larger the setup, the more RAM the engine needs.
> > My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.

> But this is the total amount of physical RAM in your systems, correct? Not the amount that you've reserved for your host OS? I've spec'd out some hardware, and am probably looking at purchasing two PowerEdge R820's to start, each with 64GB RAM and 32 cores.

> > While ovirt can do what you would like it to do concerning a single user interface, but with what you listed, you're probably better off with just plain KVM/qemu and using virt-manager for the interface.

> Can you migrate VMs from 1 host to another with virt-manager, and can you take snapshots? If those two features aren't supported by virt-manager, then that would almost certainly be a deal breaker.
The engine is just a management layer. KVM/qemu has had that ability for a long time, but it takes some manual work to do it.
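That "manual work" on plain KVM/qemu boils down to a virsh call. A sketch, assuming shared storage mounted at the same path on both hosts, and a made-up destination hostname:

```shell
# Live-migrate the running guest "myvm" to another plain KVM host over SSH.
# --persistent keeps the domain defined on the destination;
# --undefinesource removes the definition from the source after the move.
virsh -c qemu:///system migrate --live --persistent --undefinesource \
  --verbose myvm qemu+ssh://dst.example.com/system
```

oVirt automates exactly this (plus scheduling, fencing, and rollback), which is a large part of what the engine buys you.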
> Come to think of it, if I decided to use local storage on each of the physical hosts, would I be able to migrate VMs? Or do I *have* to use a Gluster or NFS store for that?
For migration between hosts you need shared storage. SAN, Gluster, CEPH, NFS, and iSCSI are among the ones already supported (CEPH is a little bit experimental).
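If you do start with a single NFS server, note that an oVirt storage domain needs specific ownership on the export. A minimal sketch (the path is an assumption; UID/GID 36 is oVirt's vdsm:kvm user):

```shell
# On the NFS server: VDSM accesses the storage domain as vdsm:kvm (36:36)
mkdir -p /exports/data
chown 36:36 /exports/data
chmod 0755 /exports/data
cat >> /etc/exports <<'EOF'
/exports/data *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
EOF
exportfs -ra
```

With wrong ownership the domain attach fails with permission errors, so this is worth checking before pointing the engine at the export.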

> For migration between hosts you need shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).
Sounds like I'll be using NFS or Gluster after all. Thank you.
> The engine is just a management layer. KVM/qemu has that option a long time ago, yet it's some manual work to do it.

Yeah, this environment that I'm building is expected to grow over time (although that growth could go slowly), so I'm trying to architect things properly now to make future growth easier to deal with. I'm also trying to balance availability concerns with budget constraints starting out.
Given that NFS would also be a single point of failure, I'll probably go with Gluster, as long as I can fit the storage requirements into the overall budget.

Sent with ProtonMail Secure Email.
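For reference, the Gluster side of that plan is not much command-line work either. A minimal replica-3 data volume suitable for an oVirt storage domain might look like this (hostnames and brick paths are placeholders; the `virt` group applies Gluster's tuning profile for VM images):

```shell
# Run from host1, after installing glusterfs-server on all three hosts
gluster peer probe host2.example.com
gluster peer probe host3.example.com
gluster volume create data replica 3 \
  host1.example.com:/gluster_bricks/data/brick \
  host2.example.com:/gluster_bricks/data/brick \
  host3.example.com:/gluster_bricks/data/brick
gluster volume set data group virt   # virt-store option group (sharding, o-direct, etc.)
gluster volume start data
```

The hyperconverged Cockpit wizard on oVirt Node does essentially this for you across the three hosts.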

Hey David,

Keep in mind that you need some big NICs. I started my oVirt lab with a 1 Gbit NIC and later added 4 dual-port 1 Gbit NICs, and I had to create multiple gluster volumes and multiple storage domains. Yet Windows VMs cannot use software RAID for boot devices, so it's a pain in the @$$.

I think the optimum is to have several 10 Gbit NICs (at least 1 for gluster and 1 for oVirt live migration). Also, NVMes can be used as LVM cache for spinning disks.

Best Regards,
Strahil Nikolov
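The NVMe-as-LVM-cache idea can be sketched along the lines of lvmcache(7). Device names and sizes below are made up; adapt to your own layout:

```shell
# Spinning disk /dev/sdb holds the brick LV; NVMe /dev/nvme0n1 becomes its cache
vgcreate gluster_vg /dev/sdb
lvcreate -L 1T -n brick1 gluster_vg /dev/sdb
vgextend gluster_vg /dev/nvme0n1
# Single-step form: create a cache pool on the NVMe PV and attach it to brick1
lvcreate --type cache -L 100G -n brick1_cache gluster_vg/brick1 /dev/nvme0n1
```

The default writethrough cache mode is the safer choice for gluster bricks; writeback is faster but risks data loss if the cache device dies.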
For migration between hosts you need a shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).
Sounds like I'll be using NFS or Gluster after all. Thank you.
The engine is just a management layer. KVM/qemu has that option a long time ago, yet it's some manual work to do it. Yeah, this environment that I'm building is expected to grow over time (although that growth could go slowly), so I'm trying to architect things properly now to make future growth easier to deal with. I'm also trying to balance availability concerns with budget constraints starting out.
Given that NFS would also be a single point of failure, I'll probably go with Gluster, as long as I can fit the storage requirements into the overall budget.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users <users@ovirt.org> wrote:
На 22 юни 2020 г. 11:06:16 GMT+03:00, David White via Usersusers@ovirt.org написа:
Thank you and Strahil for your responses. They were both very helpful.
I think a hosted engine installation VM wants 16GB RAM configured though I've built older versions with 8GB RAM. For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1, CentOS6 maybe 512K. The tendency is always increasing with updated OS versions.
Ok, so to clarify my question a little bit, I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node). I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice? And then as you noted, I would need to plan to give the engine 16GB.
I run my engine on 4Gb or RAM, but i have no more than 20 VMs, the larger the setup - the more ram for the engine is needed.
My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
But this is the total amount of physical RAM in your systems, correct? Not the amount that you've reserved for your host OS?I've spec'd out some hardware, and am probably looking at purchasing two PowerEdge R820's to start, each with 64GB RAM and 32 cores.
While ovirt can do what you would like it to do concerning a single user interface, but with what you listed, you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
Can you migrate VMs from 1 host to another with virt-manager, and can you take snapshots? If those two features aren't supported by virt-manager, then that would almost certainly be a deal breaker.
The engine is just a management layer. KVM/qemu has that option a long time ago, yet it's some manual work to do it.
Come to think of it, if I decided to use local storage on each of the physical hosts, would I be able to migrate VMs? Or do I have to use a Gluster or NFS store for that?
For migration between hosts you need a shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, June 21, 2020 5:58 PM, Edward Berger edwberger@gmail.com wrote:
While ovirt can do what you would like it to do concerning a single user interface, but with what you listed, you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
Those memory/cpu requirements you listed are really tiny and I wouldn't recommend even trying ovirt on such challenged systems. I would specify at least 3 hosts for a gluster hyperconverged system, and a spare available that can take over if one of the hosts dies.
I think a hosted engine installation VM wants 16GB RAM configured though I've built older versions with 8GB RAM. For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1, CentOS6 maybe 512K. The tendency is always increasing with updated OS versions.
My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
oVirt Node NG is a prepackaged installer for an oVirt hypervisor/Gluster host; with its Cockpit interface you can create and install the hosted-engine VM for the user and admin web interface. It's very good on enterprise server hardware with lots of RAM, CPU, and disks.
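That workflow can be sketched as follows. This is only a sketch: the Cockpit port and the `hosted-engine --deploy` command are per the oVirt docs, but everything it prompts for (FQDN, storage) depends on your environment:

```shell
# From a freshly installed oVirt Node, the Cockpit UI is reachable at
# https://<node-address>:9090 and includes a hosted-engine wizard.
# The same deployment can be started from a shell on the node:
hosted-engine --deploy
# The installer then prompts for the engine VM's FQDN, RAM/CPU sizing,
# and the shared storage (NFS/Gluster/iSCSI) that will hold the engine VM.
```

Once the deployment finishes, the node that ran it is already registered to the engine, which answers the "can the Engine run as a VM on a Node" question below.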
On Sun, Jun 21, 2020 at 4:34 PM David White via Users users@ovirt.org wrote:
I'm reading through all of the documentation at https://ovirt.org/documentation/, and am a bit overwhelmed with all of the different options for installing oVirt.
My particular use case is that I'm looking for a way to manage VMs on multiple physical servers from 1 interface, and be able to deploy new VMs (or delete VMs) as necessary. Ideally, it would be great if I could move a VM from 1 host to a different host as well, particularly in the event that 1 host becomes degraded (bad HDD, bad processor, etc...)
I'm trying to figure out what the difference is between an oVirt Node and the oVirt Engine, and how the engine differs from the Manager.
I get the feeling that `Engine` = `Manager`. Same thing. I further think I understand the Engine to be essentially synonymous with a vCenter VM for ESXi hosts. Is this correct?
If so, then what's the difference between the `self-hosted` vs the `stand-alone` engines?
oVirt Engine requirements look to be a minimum of 4GB RAM and 2CPUs.
oVirt Nodes, on the other hand, require only 2GB RAM. Is this a requirement just for the physical host, or is that how much RAM each oVirt Node process requires? In other words, if I have a physical host with 12GB of physical RAM, will I only be able to allocate 10GB of that to guest VMs? How much of that should I dedicate to the oVirt Node processes?
Can you install the oVirt Engine as a VM onto an existing oVirt Node? And then connect that same node to the Engine, once the Engine is installed?
Reading through the documentation, it also sounds like oVirt Engine and oVirt Node require different versions of RHEL or CentOS.
I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, whereas each Node requires 7.x (although I'll plan to just use the oVirt Node ISO).
I'm also wondering about storage. I don't really like the idea of using local storage, but a single NFS server would also be a single point of failure, and Gluster would be too expensive to deploy, so at this point, I'm leaning towards using local storage.
Any advice or clarity would be greatly appreciated.
Thanks, David
Sent with ProtonMail Secure Email.
Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGHCN356DXJEDR...

Thanks. I've only been considering SSD drives for storage, as that is what I currently have in the cloud.

I think I've seen some things in the documents about oVirt and Gluster hyperconverged. Is it possible to run oVirt and Gluster together on the same hardware? So 3 physical hosts would run CentOS or something, and I would install oVirt Node + Gluster onto the same base host OS? If so, then I could probably make that fit into my budget.

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users <users@ovirt.org> wrote:
Hey David,
keep in mind that you need some big NICs. I started my oVirt lab with a single 1Gbit NIC and later added 4 dual-port 1Gbit NICs, and I had to create multiple Gluster volumes and multiple storage domains. Also, Windows VMs cannot use software RAID for boot devices, so that's a pain in the @$$. I think the optimum is to have several 10Gbit NICs (at least 1 for Gluster and 1 for oVirt live migration). Also, NVMe drives can be used as LVM cache for spinning disks.
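The NVMe-as-LVM-cache idea can be sketched like this. The device name, VG/LV names, and cache size here are all made-up placeholders, not from this thread; the commands are standard LVM:

```shell
# Assumes an NVMe at /dev/nvme0n1 and an existing volume group
# "gluster_vg" containing the slow-disk LV "brick1" (all hypothetical).
pvcreate /dev/nvme0n1
vgextend gluster_vg /dev/nvme0n1

# Carve a cache pool out of the NVMe...
lvcreate --type cache-pool -L 100G -n brick1_cache gluster_vg /dev/nvme0n1

# ...and attach it in front of the slow LV.
lvconvert --type cache --cachepool gluster_vg/brick1_cache gluster_vg/brick1
```

After this, reads/writes to `gluster_vg/brick1` (e.g. a Gluster brick filesystem) go through the NVMe cache transparently.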
Best Regards, Strahil Nikolov
On 22 June 2020 at 18:50:01 GMT+03:00, David White dmwhite823@protonmail.com wrote:
For migration between hosts you need shared storage. SAN, Gluster, CEPH, NFS, and iSCSI are among the ones already supported (CEPH is still a little bit experimental).
Sounds like I'll be using NFS or Gluster after all. Thank you.
The engine is just a management layer. KVM/qemu has had that capability for a long time, though it takes some manual work.

Yeah, this environment that I'm building is expected to grow over time (although that growth could go slowly), so I'm trying to architect things properly now to make future growth easier to deal with. I'm also trying to balance availability concerns with budget constraints starting out.
Given that NFS would also be a single point of failure, I'll probably go with Gluster, as long as I can fit the storage requirements into the overall budget.

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users users@ovirt.org wrote:
On 22 June 2020 at 11:06:16 GMT+03:00, David White via Users users@ovirt.org wrote:
Thank you and Strahil for your responses. They were both very helpful.
I think a hosted-engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs, CentOS 8 x86_64 recommends at least 2GB for a host; CentOS 7 was OK with 1GB, and CentOS 6 with maybe 512MB. The tendency always increases with updated OS versions.
Ok, so to clarify my question a little bit: I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node). I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice? And then, as you noted, I would need to plan to give the engine 16GB.
I run my engine on 4GB of RAM, but I have no more than 20 VMs; the larger the setup, the more RAM the engine needs.
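Putting the figures from this exchange together, a rough per-host RAM budget might look like the sketch below. All numbers are assumptions taken from the thread (2GB host OS reservation, 16GB hosted-engine VM); adjust for your own hardware, and note the engine VM only consumes RAM on whichever host is currently running it:

```shell
# Back-of-envelope RAM budget for one hypervisor host.
TOTAL_GB=64          # e.g. one PowerEdge R820 as spec'd above
HOST_OS_GB=2         # CentOS/RHEL 8 minimum for the host / oVirt Node
ENGINE_VM_GB=16      # hosted-engine VM (only on the host running it)

GUEST_GB=$(( TOTAL_GB - HOST_OS_GB - ENGINE_VM_GB ))
echo "Left for guest VMs: ${GUEST_GB} GB"   # prints "Left for guest VMs: 46 GB"
```

In practice you'd also leave headroom for vdsm, glusterd, and page cache rather than allocating every remaining gigabyte to guests.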

Yes, this is the point of hyperconverged. You only need three hosts to set up a proper HCI cluster. I would recommend SSDs for Gluster storage. You could get away with non-RAID to save money, since you can do replica three with Gluster, meaning your data is fully replicated across all three hosts.

On Tue, Jun 23, 2020 at 5:17 PM David White via Users <users@ovirt.org> wrote:
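A "replica 3" volume of the kind described above can be sketched like this (hostnames, the volume name `vmstore`, and brick paths are placeholders; the hyperconverged Cockpit wizard normally does this for you):

```shell
# One brick per host; with "replica 3", all three bricks hold a full
# copy of the data, so any single host can fail without data loss.
gluster volume create vmstore replica 3 \
  host1:/gluster/brick1/vmstore \
  host2:/gluster/brick1/vmstore \
  host3:/gluster/brick1/vmstore

gluster volume start vmstore
```

The resulting volume is then added to oVirt as a GlusterFS storage domain, which is what makes live migration between the hosts possible.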

Thanks again for explaining all of this to me. Much appreciated.

Regarding the hyperconverged environment, reviewing https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hy..., it appears to state that you need exactly 3 physical servers. Is it possible to run a hyperconverged environment with more than 3 physical servers? Because of the way the Gluster triple redundancy works, I knew that I would need to size all 3 physical servers' SSD drives to store 100% of the data, but there's a possibility that 1 particular (future) customer is going to need about 10TB of disk space.

For that reason, I'm thinking about what it would look like to have 4 or even 5 physical servers in order to increase the total amount of disk space made available to oVirt as a whole. And then from there, I would of course set up a number of virtual disks that I would attach back to that customer's VM.

So to recap: if I were to have a 5-node Gluster hyperconverged environment, I'm hoping that the data would still only be required to replicate across 3 nodes. Does this make sense? Is this how data replication works? Almost like a RAID -- add more drives, and the RAID gets expanded?

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Tuesday, June 23, 2020 4:41 PM, Jayme <jaymef@gmail.com> wrote:
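As a back-of-envelope check on the sizing question (the per-host capacity is an assumed number, not from the thread): with pure replica 3, every byte is stored three times, so usable space is roughly total raw space divided by 3, no matter how many hosts the bricks are spread across:

```shell
# Hypothetical sizing: 5 hosts, 12 TB of raw SSD each, replica 3.
RAW_TB_PER_HOST=12
HOSTS=5

USABLE_TB=$(( RAW_TB_PER_HOST * HOSTS / 3 ))
echo "Usable (replica 3): ${USABLE_TB} TB"   # prints "Usable (replica 3): 20 TB"
```

So adding a 4th and 5th host does grow usable capacity, provided the bricks are arranged into additional replica sets rather than widening a single one.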

Hi David,

it's a little bit different. oVirt supports 'replica 3' (3 bricks host the same content) or 'replica 3 arbiter 1' (2 bricks host the same data; the third brick contains only metadata, to prevent split-brain situations) volumes.

If you have 'replica 3' it is smart to keep the data on separate hosts, although you can keep it on the same host (but then you should use no replica and oVirt's single-node setup).

When you extend, you need to add bricks (a fancy name for a directory on a host) in multiples of 3.

If you want to use 5 nodes, you can go with a 'replica 3 arbiter 1' volume, where ServerA & ServerB host data and ServerC hosts only metadata (the arbiter). Then you can extend, and for example ServerC can again host metadata while ServerD & ServerE host data for the second replica set.

You can even use only 3 servers for Gluster, while many more systems serve as oVirt nodes (CPU & RAM) to host VMs. In a 4-node setup, 3 hosts hold the Gluster data and the 4th is not part of Gluster at all, just hosting VMs.

Best Regards, Strahil Nikolov

On 19 July 2020 15:25:10 GMT+03:00, David White via Users <users@ovirt.org> wrote:
Thanks again for explaining all of this to me. Much appreciated.
Regarding the hyperconverged environment, reviewing https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hy..., it appears to state that you need exactly 3 physical servers.
Is it possible to run a hyperconverged environment with more than 3 physical servers? Because of the way that the gluster triple-redundancy works, I knew that I would need to size all 3 physical servers' SSD drives to store 100% of the data, but there's a possibility that 1 particular (future) customer is going to need about 10TB of disk space.
For that reason, I'm thinking about what it would look like to have 4 or even 5 physical servers in order to increase the total amount of disk space made available to oVirt as a whole. And then from there, I would of course setup a number of virtual disks that I would attach back to that customer's VM.
So to recap, if I were to have a 5-node Gluster Hyperconverged environment, I'm hoping that the data would still only be required to replicate across 3 nodes. Does this make sense? Is this how data replication works? Almost like a RAID -- add more drives, and the RAID gets expanded?
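A 5-node expansion along these lines might look roughly like this at the Gluster CLI (a sketch only, not something to run as-is: the volume name, hostnames, and brick paths are invented for illustration, and a real deployment should follow the oVirt/Gluster documentation):

```shell
# Hypothetical sketch: a 'replica 3 arbiter 1' volume where serverA/serverB
# hold full data copies and serverC holds only arbiter metadata.
gluster volume create vmstore replica 3 arbiter 1 \
  serverA:/gluster_bricks/vmstore/brick1 \
  serverB:/gluster_bricks/vmstore/brick1 \
  serverC:/gluster_bricks/vmstore/arbiter1

# Growing the volume means adding bricks one full replica set at a time
# (multiples of 3): serverD/serverE hold data, serverC arbiters again.
gluster volume add-brick vmstore replica 3 arbiter 1 \
  serverD:/gluster_bricks/vmstore/brick2 \
  serverE:/gluster_bricks/vmstore/brick2 \
  serverC:/gluster_bricks/vmstore/arbiter2
```

Each replica set only replicates within itself, so data is still stored at most 3 ways no matter how many sets you add, much like adding another mirror set to a striped RAID.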
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Tuesday, June 23, 2020 4:41 PM, Jayme <jaymef@gmail.com> wrote:
Yes, this is the point of hyperconverged. You only need three hosts to set up a proper HCI cluster. I would recommend SSDs for Gluster storage. You could get away with non-RAID to save money, since you can do replica 3 with Gluster, meaning your data is fully replicated across all three hosts.
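As a back-of-the-envelope check on what "fully replicated across all three hosts" means for sizing (plain replica 3, no arbiter; the 10TB figure is just an example taken from the thread):

```shell
# With 'replica 3', every host in the replica set stores a full copy,
# so per-host brick capacity = usable capacity, and raw capacity = 3x that.
usable_tb=10                      # space the customer actually needs
raw_tb=$((usable_tb * 3))         # total disk purchased across the 3 hosts
per_host_tb=$usable_tb            # each of the 3 hosts needs the full amount
echo "usable=${usable_tb}TB raw=${raw_tb}TB per-host=${per_host_tb}TB"
```

With 'replica 3 arbiter 1' the third brick stores only metadata, so roughly two full data copies (plus a small arbiter brick) are needed instead of three.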
On Tue, Jun 23, 2020 at 5:17 PM David White via Users <users@ovirt.org> wrote:
Thanks. I've only been considering SSD drives for storage, as that is what I currently have in the cloud.
I think I've seen some things in the documents about oVirt and gluster hyperconverged. Is it possible to run oVirt and Gluster together on the same hardware? So 3 physical hosts would run CentOS or something, and I would install oVirt Node + Gluster onto the same base host OS? If so, then I could probably make that fit into my budget.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users <users@ovirt.org> wrote:
Hey David,
keep in mind that you need some big NICs. I started my oVirt lab with a 1 Gbit NIC and later added 4 dual-port 1 Gbit NICs, and I had to create multiple Gluster volumes and multiple storage domains.

Yet Windows VMs cannot use software RAID for boot devices, thus it's a pain in the @$$.

I think the optimum is to have several 10 Gbit NICs (at least 1 for Gluster and 1 for oVirt live migration).

Also, NVMe drives can be used as an LVM cache for spinning disks.
Best Regards, Strahil Nikolov
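The LVM-cache idea could be sketched like this (hypothetical device names and sizes; it assumes a volume group already containing both the HDD and an NVMe device, and you should check lvmcache(7) for your LVM version before trying it):

```shell
# Hypothetical sketch: NVMe-backed cache in front of an HDD-backed logical volume.
# vg_gluster is assumed to contain /dev/sdb (HDD) and /dev/nvme0n1 (NVMe).
lvcreate -L 500G -n brick1 vg_gluster /dev/sdb           # slow data LV on the HDD
lvcreate -L 50G -n brick1_cache vg_gluster /dev/nvme0n1  # fast LV on the NVMe
lvconvert --type cache --cachevol brick1_cache vg_gluster/brick1
```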
On 22 June 2020 18:50:01 GMT+03:00, David White <dmwhite823@protonmail.com> wrote:
> For migration between hosts you need a shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).

Sounds like I'll be using NFS or Gluster after all. Thank you.

> The engine is just a management layer. KVM/qemu has had that option for a long time, yet it's some manual work to do it.

Yeah, this environment that I'm building is expected to grow over time (although that growth could go slowly), so I'm trying to architect things properly now to make future growth easier to deal with. I'm also trying to balance availability concerns with budget constraints starting out.

Given that NFS would also be a single point of failure, I'll probably go with Gluster, as long as I can fit the storage requirements into the overall budget.

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users <users@ovirt.org> wrote:
> On 22 June 2020 11:06:16 GMT+03:00, David White via Users <users@ovirt.org> wrote:
> Thank you and Strahil for your responses. They were both very helpful.
>
> > I think a hosted engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs, CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1GB, CentOS6 maybe 512M. The tendency is always increasing with updated OS versions.
>
> Ok, so to clarify my question a little bit, I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node). I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice? And then as you noted, I would need to plan to give the engine 16GB.

I run my engine on 4GB of RAM, but I have no more than 20 VMs; the larger the setup, the more RAM the engine needs.

> > My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
>
> But this is the total amount of physical RAM in your systems, correct? Not the amount that you've reserved for your host OS? I've spec'd out some hardware, and am probably looking at purchasing two PowerEdge R820's to start, each with 64GB RAM and 32 cores.
>
> > While ovirt can do what you would like it to do concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
>
> Can you migrate VMs from 1 host to another with virt-manager, and can you take snapshots? If those two features aren't supported by virt-manager, then that would almost certainly be a deal breaker.

The engine is just a management layer. KVM/qemu has had that option for a long time, yet it's some manual work to do it.

> Come to think of it, if I decided to use local storage on each of the physical hosts, would I be able to migrate VMs? Or do I have to use a Gluster or NFS store for that?

For migration between hosts you need a shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).

> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, June 21, 2020 5:58 PM, Edward Berger <edwberger@gmail.com> wrote:
> > While ovirt can do what you would like it to do concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
> >
> > Those memory/cpu requirements you listed are really tiny, and I wouldn't recommend even trying ovirt on such challenged systems. I would specify at least 3 hosts for a gluster hyperconverged system, and a spare available that can take over if one of the hosts dies.
> >
> > I think a hosted engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs, CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1GB, CentOS6 maybe 512M. The tendency is always increasing with updated OS versions.
> >
> > My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
> >
> > ovirt node ng is a prepackaged installer for an oVirt hypervisor/gluster host; with its cockpit interface you can create and install the hosted-engine VM for the user and admin web interface. It's very good on enterprise server hardware with lots of RAM, CPU, and DISKS.
> > > Users mailing list -- users@ovirt.org > > > To unsubscribe send an email to users-leave@ovirt.org > > > Privacy Statement: https://www.ovirt.org/privacy-policy.html > > > oVirt Code of Conduct: > > > https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGHCN356DXJEDR...
Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TF63JRHWDBCJWD...
Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GKCVKWJJ56ITAC...
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYFSHSV4IZBELA...

Thank you. So to make sure I understand what you're saying, it sounds like if I need 4 nodes (or more), I should NOT do a "hyperconverged" installation, but should instead prepare Gluster separately from the oVirt Manager installation. Do I understand this correctly?

If that is the case, can I still use some of the servers for dual purposes (Gluster + oVirt Manager)? I'm most likely going to need more servers for the storage than I will need for the RAM & CPU, which is a little bit the opposite of what you wrote (using 3 servers for Gluster and adding additional nodes for RAM & CPU).

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, July 19, 2020 9:57 AM, Strahil Nikolov via Users <users@ovirt.org> wrote:

You would set up three servers first in hyperconverged mode, using either replica 3 or replica 3 arbiter 1, then add your fourth host afterward as a compute-only host that can run VMs but does not participate in GlusterFS storage.

On Sun, Jul 19, 2020 at 3:12 PM David White via Users <users@ovirt.org> wrote:
Thank you. So to make sure I understand what you're saying, it sounds like if I need 4 nodes (or more), I should NOT do a "hyperconverged" installation, but should instead prepare Gluster separately from the oVirt Manager installation. Do I understand this correctly?
If that is the case, can I still use some of the servers for dual purposes (Gluster + oVirt Manager)? I'm most likely going to need more servers for the storage than I will need for the RAM & CPU, which is a little bit opposite of what you wrote (using 3 servers for Gluster and adding additional nodes for RAM & CPU).
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Sunday, July 19, 2020 9:57 AM, Strahil Nikolov via Users < users@ovirt.org> wrote:
Hi David,
it's a little bit different.
Ovirt supports 'replica 3' (3 directories hsot the same content) or 'replica 3 arbiter 1' (2 directories host same data, third directory contains metadata to prevent split brain situations) volumes.
If you have 'replica 3' it is smart to keep the data on separate hosts, although you can keep it on the same host (but then you should use no replica and oVirt's Single node setup).
When you extend , yoou need to add bricks (fancy name for a directory) in the x3 count.
If you wish that you want to use 5 nodes, you can go with 'replica 3 arbiter 1' volume, where ServerA & ServerB host data and ServerC host only metadata (arbiter). Then you can extend and for example ServerC can host again metadata while ServerD & ServerE host data for the second replica set.
You can even use only 3 servers for Gluster , while much more systems as ovirt nodes (CPU & RAM) to host VMs. In case of a 4 node setup - 3 hosts have the gluster data and the 4th - is not part of ths gluster, just hosting VMs.
Best Regards, Strahil Nikolov
На 19 юли 2020 г. 15:25:10 GMT+03:00, David White via Users users@ovirt.org написа:
Thanks again for explaining all of this to me. Much appreciated. Regarding the hyperconverged environment, reviewing https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hy... , it appears to state that you need, exactly, 3 physical servers. Is it possible to run a hyperconverged environment with more than 3 physical servers? Because of the way that the gluster triple-redundancy works, I knew that I would need to size all 3 physical servers' SSD drives to store 100% of the data, but there's a possibility that 1 particular (future) customer is going to need about 10TB of disk space. For that reason, I'm thinking about what it would look like to have 4 or even 5 physical servers in order to increase the total amount of disk space made available to oVirt as a whole. And then from there, I would of course setup a number of virtual disks that I would attach back to that customer's VM. So to recap, if I were to have a 5-node Gluster Hyperconverged environment, I'm hoping that the data would still only be required to replicate across 3 nodes. Does this make sense? Is this how data replication works? Almost like a RAID -- add more drives, and the RAID gets expanded? Sent with ProtonMail Secure Email. ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Tuesday, June 23, 2020 4:41 PM, Jayme jaymef@gmail.com wrote:
Yes this is the point of hyperconverged. You only need three hosts to setup a proper hci cluster. I would recommend ssds for gluster storage. You could get away with non raid to save money since you can do replica three with gluster meaning your data is fully replicated across all three hosts.
On Tue, Jun 23, 2020 at 5:17 PM David White via Users users@ovirt.org wrote:
Thanks. I've only been considering SSD drives for storage, as that is what I currently have in the cloud.
I think I've seen some things in the documents about oVirt and gluster hyperconverged.
Is it possible to run oVirt and Gluster together on the same hardware? So 3 physical hosts would run CentOS or something, and I would install oVirt Node + Gluster onto the same base host OS? If so, then I could probably make that fit into my budget.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users users@ovirt.org wrote:
Hey David,
keep in mind that you need some big NICs. I started my oVirt lab with 1 Gbit NIC and later added 4 dual-port 1 Gbit NICs and I had to create multiple gluster volumes and multiple storage domains.
Yet, windows VMs cannot use software raid for boot devices, thus it's a pain in the @$$.
I think that optimal is to have several 10Gbit NICs (at least 1 for gluster and 1 for oVirt live migration).
Also, NVMEs can be used as lvm cache for spinning disks.
Best Regards, Strahil Nikolov
На 22 юни 2020 г. 18:50:01 GMT+03:00, David White dmwhite823@protonmail.com написа:
> > For migration between hosts you need a shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).
>
> Sounds like I'll be using NFS or Gluster after all. Thank you.
>
> > The engine is just a management layer. KVM/qemu has had that option for a long time, yet it's some manual work to do it.
>
> Yeah, this environment that I'm building is expected to grow over time (although that growth could go slowly), so I'm trying to architect things properly now to make future growth easier to deal with. I'm also trying to balance availability concerns with budget constraints starting out.
>
> Given that NFS would also be a single point of failure, I'll probably go with Gluster, as long as I can fit the storage requirements into the overall budget.
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users users@ovirt.org wrote:
>
> > On 22 June 2020 11:06:16 GMT+03:00, David White via Users users@ovirt.org wrote:
> >
> > > Thank you and Strahil for your responses. They were both very helpful.
> > >
> > > > I think a hosted engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1, CentOS6 maybe 512MB. The tendency is always increasing with updated OS versions.
> > >
> > > Ok, so to clarify my question a little bit, I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node). I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice? And then as you noted, I would need to plan to give the engine 16GB.
> >
> > I run my engine on 4GB of RAM, but I have no more than 20 VMs; the larger the setup, the more RAM the engine needs.
> >
> > > > My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
> > >
> > > But this is the total amount of physical RAM in your systems, correct? Not the amount that you've reserved for your host OS? I've spec'd out some hardware, and am probably looking at purchasing two PowerEdge R820's to start, each with 64GB RAM and 32 cores.
> > >
> > > > While ovirt can do what you would like it to do concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
> > >
> > > Can you migrate VMs from 1 host to another with virt-manager, and can you take snapshots? If those two features aren't supported by virt-manager, then that would almost certainly be a deal breaker.
> >
> > The engine is just a management layer. KVM/qemu has had that option for a long time, yet it's some manual work to do it.
> >
> > > Come to think of it, if I decided to use local storage on each of the physical hosts, would I be able to migrate VMs? Or do I have to use a Gluster or NFS store for that?
> >
> > For migration between hosts you need a shared storage. SAN, Gluster, CEPH, NFS, iSCSI are among the ones already supported (CEPH is a little bit experimental).
> >
> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > > On Sunday, June 21, 2020 5:58 PM, Edward Berger edwberger@gmail.com wrote:
> > >
> > > > While ovirt can do what you would like it to do concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
> > > >
> > > > Those memory/cpu requirements you listed are really tiny and I wouldn't recommend even trying ovirt on such challenged systems.
> > > >
> > > > I would specify at least 3 hosts for a gluster hyperconverged system, and a spare available that can take over if one of the hosts dies.
> > > >
> > > > I think a hosted engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM. For modern VMs CentOS8 x86_64 recommends at least 2GB for a host. CentOS7 was OK with 1, CentOS6 maybe 512MB. The tendency is always increasing with updated OS versions.
> > > >
> > > > My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB 24core or more.
> > > >
> > > > ovirt node ng is a prepackaged installer for an oVirt hypervisor/gluster host; with its cockpit interface you can create and install the hosted-engine VM for the user and admin web interface. It's very good on enterprise server hardware with lots of RAM, CPU, and DISKS.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/HPFH7QJA6WSCBT...

On 19 July 2020 21:09:14 GMT+03:00, David White via Users <users@ovirt.org> wrote:

> Thank you. So to make sure I understand what you're saying, it sounds like if I need 4 nodes (or more), I should NOT do a "hyperconverged" installation, but should instead prepare Gluster separately from the oVirt Manager installation. Do I understand this correctly?

Not exactly. When you use oVirt to configure Gluster, you will provide only the 3 nodes for Gluster, and when the engine is up and running you will add the 4th node. You need to know what you are doing in order to set up Gluster yourself.

> If that is the case, can I still use some of the servers for dual purposes (Gluster + oVirt Manager)? I'm most likely going to need more servers for the storage than I will need for the RAM & CPU, which is a little bit opposite of what you wrote (using 3 servers for Gluster and adding additional nodes for RAM & CPU).

It would be a waste of resources to keep Gluster separate from oVirt. You can add oVirt nodes in counts of 3, so you can extend both Gluster and the oVirt cluster at the same time. If you have compute-only nodes (servers that are in oVirt but not part of Gluster), you can allow the engine to power Hosts off and on on demand, so you can conserve power and cooling while keeping the count of oVirt nodes in the healthy zone.

Best Regards,
Strahil Nikolov
participants (4): David White, Edward Berger, Jayme, Strahil Nikolov