Thanks again for explaining all of this to me.
Much appreciated.
Regarding the hyperconverged environment,
reviewing https://www.ovirt.org/documentation/gluster-hyperconverged/chap...,
it appears to state that you need exactly 3 physical servers.
Is it possible to run a hyperconverged environment with more than 3 physical servers?
Because of the way that Gluster's triple redundancy (replica 3) works, I know that I would
need to size all 3 physical servers' SSD drives to store 100% of the data, but there's a
possibility that 1 particular (future) customer is going to need about 10TB of disk
space.
For that reason, I'm thinking about what it would look like to have 4 or even 5
physical servers in order to increase the total amount of disk space made available to
oVirt as a whole. From there, I would of course set up a number of virtual disks
to attach to that customer's VM.
So to recap, if I were to have a 5-node Gluster Hyperconverged environment, I'm hoping
that the data would still only be required to replicate across 3 nodes. Does this make
sense? Is this how data replication works? Almost like RAID: add more drives, and the
array gets expanded?
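
From what I've read, that layout would be what Gluster calls distributed-replicate: bricks are added in multiples of the replica count, and each file is replicated to exactly one set of 3 bricks rather than to every node. A hypothetical, untested sketch of what I'm imagining (hostnames and brick paths are made up):

    # 6 bricks at replica 3 = a 2 x 3 distributed-replicate volume;
    # each file lands on exactly one replica set of 3 bricks
    gluster volume create data replica 3 \
        host1:/gluster/brick1/data \
        host2:/gluster/brick1/data \
        host3:/gluster/brick1/data \
        host4:/gluster/brick1/data \
        host5:/gluster/brick1/data \
        host6:/gluster/brick1/data

With 5 physical servers, I assume that would mean putting two bricks on one of the nodes, arranged so that no replica set has 2 bricks on the same host.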
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, June 23, 2020 4:41 PM, Jayme <jaymef(a)gmail.com> wrote:
Yes, this is the point of hyperconverged. You only need three hosts to
set up a proper HCI cluster. I would recommend SSDs for Gluster storage. You could get away
with non-RAID to save money, since you can do replica 3 with Gluster, meaning your data
is fully replicated across all three hosts.
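
As a rough illustration (hostnames and brick paths are placeholders), a plain replica 3 volume is something like:

    gluster volume create vmstore replica 3 \
        host1:/gluster/brick1/vmstore \
        host2:/gluster/brick1/vmstore \
        host3:/gluster/brick1/vmstore

Every brick holds a full copy, so usable space is the size of one brick, i.e. a 3:1 raw-to-usable ratio (10TB usable would mean roughly 10TB of SSD per host).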
On Tue, Jun 23, 2020 at 5:17 PM David White via Users
<users(a)ovirt.org> wrote:
> Thanks.
> I've only been considering SSD drives for storage, as that is what I currently have in the cloud.
>
> I think I've seen some things in the documentation about oVirt and Gluster hyperconverged.
> Is it possible to run oVirt and Gluster together on the same hardware? So 3 physical hosts would run CentOS or something, and I would install oVirt Node + Gluster onto the same base host OS? If so, then I could probably make that fit into my budget.
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Monday, June 22, 2020 1:02 PM, Strahil Nikolov via Users <users(a)ovirt.org> wrote:
>
> > Hey David,
> >
>
> > keep in mind that you need some big NICs.
> > I started my oVirt lab with a 1 Gbit NIC and later added 4 dual-port 1 Gbit NICs, and I had to create multiple gluster volumes and multiple storage domains.
> > Note that Windows VMs cannot use software RAID for boot devices, so it's a pain in the @$$.
> > I think the optimum is to have several 10 Gbit NICs (at least 1 for Gluster and 1 for oVirt live migration).
> > Also, NVMes can be used as LVM cache for spinning disks.
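> >
> > A rough, untested sketch of the LVM cache part (device, VG, and LV names are just examples):
> >
> >     # add the NVMe as a physical volume in the VG holding the slow brick LV
> >     pvcreate /dev/nvme0n1
> >     vgextend gluster_vg /dev/nvme0n1
> >     # attach a cache LV carved from the NVMe to the brick LV
> >     lvcreate --type cache -L 100G -n brick_cache gluster_vg/brick_lv /dev/nvme0n1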
> >
>
> > Best Regards,
> > Strahil Nikolov
> >
>
> > On 22 June 2020 at 18:50:01 GMT+03:00, David White dmwhite823(a)protonmail.com wrote:
> >
>
> > > > For migration between hosts you need shared storage. SAN, Gluster, CEPH, NFS, and iSCSI are among the ones already supported (CEPH is a little bit experimental).
> > >
>
> > > Sounds like I'll be using NFS or Gluster after all.
> > > Thank you.
> > >
>
> > > > The engine is just a management layer. KVM/qemu has had that option for a long time, yet it's some manual work to do it.
> > >
> > > Yeah, this environment that I'm building is expected to grow over time (although that growth could go slowly), so I'm trying to architect things properly now to make future growth easier to deal with. I'm also trying to balance availability concerns with budget constraints starting out.
> > >
>
> > > Given that NFS would also be a single point of failure, I'll probably go with Gluster, as long as I can fit the storage requirements into the overall budget.
> > > Sent with ProtonMail Secure Email.
> > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > > On Monday, June 22, 2020 6:31 AM, Strahil Nikolov via Users
> > > users(a)ovirt.org wrote:
> > >
>
> > > > On 22 June 2020 at 11:06:16 GMT+03:00, David White via Users users(a)ovirt.org wrote:
> > >
>
> > > > > Thank you and Strahil for your responses.
> > > > > They were both very helpful.
> > >
>
> > > > > > I think a hosted engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM.
> > > > > > For modern VMs, CentOS 8 x86_64 recommends at least 2GB for a host.
> > > > > > CentOS 7 was OK with 1GB, CentOS 6 with maybe 512MB.
> > > > > > The tendency is always increasing with updated OS versions.
> > >
>
> > > > > OK, so to clarify my question a little bit: I'm trying to figure out how much RAM I would need to reserve for the host OS (or oVirt Node).
> > > >
>
> > > > > I do recall that CentOS / RHEL 8 wants a minimum of 2GB, so perhaps that would suffice?
> > > > > And then as you noted, I would need to plan to give the engine 16GB.
> > >
>
> > > > I run my engine on 4GB of RAM, but I have no more than 20 VMs; the larger the setup, the more RAM the engine needs.
> > >
>
> > > > > > My minimum ovirt systems were mostly 48GB 16-core, but most are now 128GB 24-core or more.
> > >
>
> > > > > But this is the total amount of physical RAM in your systems, correct? Not the amount that you've reserved for your host OS?
> > > > >
> > > > > I've spec'd out some hardware, and am probably looking at purchasing two PowerEdge R820s to start, each with 64GB RAM and 32 cores.
> > >
>
> > > > > > While ovirt can do what you would like concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
> > >
>
> > > > > Can you migrate VMs from 1 host to another with virt-manager, and can you take snapshots?
> > > > > If those two features aren't supported by virt-manager, then that would almost certainly be a deal breaker.
> > >
>
> > > > The engine is just a management layer. KVM/qemu has had that option for a long time, yet it's some manual work to do it.
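> > > >
> > > > A hedged sketch of that manual work using virsh (VM and host names are placeholders):
> > > >
> > > >     # live migration over SSH; with shared storage only the RAM state moves
> > > >     virsh migrate --live --verbose myvm qemu+ssh://desthost/system
> > > >     # without shared storage the disk images must be copied too
> > > >     virsh migrate --live --copy-storage-all myvm qemu+ssh://desthost/system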
> > >
>
> > > > > Come to think of it, if I decided to use local storage on each of the physical hosts, would I be able to migrate VMs?
> > > > > Or do I have to use a Gluster or NFS store for that?
> > >
>
> > > > For migration between hosts you need shared storage. SAN, Gluster, CEPH, NFS, and iSCSI are among the ones already supported (CEPH is a little bit experimental).
> > >
>
> > > > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > > > > On Sunday, June 21, 2020 5:58 PM, Edward Berger edwberger(a)gmail.com wrote:
> > >
>
> > > > > > While ovirt can do what you would like concerning a single user interface, with what you listed you're probably better off with just plain KVM/qemu and using virt-manager for the interface.
> > >
>
> > > > > > Those memory/CPU requirements you listed are really tiny, and I wouldn't recommend even trying ovirt on such challenged systems.
> > > > > > I would specify at least 3 hosts for a gluster hyperconverged system, and a spare available that can take over if one of the hosts dies.
> > > > > >
> > > > > > I think a hosted engine installation VM wants 16GB RAM configured, though I've built older versions with 8GB RAM.
> > > > > > For modern VMs, CentOS 8 x86_64 recommends at least 2GB for a host.
> > > > > > CentOS 7 was OK with 1GB, CentOS 6 with maybe 512MB.
> > > > > > The tendency is always increasing with updated OS versions.
> > >
>
> > > > > > My minimum ovirt systems were mostly 48GB 16-core, but most are now 128GB 24-core or more.
> > >
>
> > > > > > > oVirt Node NG is a prepackaged installer for an oVirt hypervisor/gluster host; with its cockpit interface you can create and install the hosted-engine VM for the user and admin web interface. It's very good on enterprise server hardware with lots of RAM, CPU, and disks.
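> > > > > > >
> > > > > > > If you'd rather not click through cockpit, my understanding is the same flow can be driven from a shell on the node, roughly:
> > > > > > >
> > > > > > >     # run on the first node once the gluster storage is ready
> > > > > > >     hosted-engine --deploy
> > > > > > >
> > > > > > > which interactively walks through creating the engine VM.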
> > >
>
> > > > > > > On Sun, Jun 21, 2020 at 4:34 PM David White via Users
> > > > > > > users(a)ovirt.org wrote:
> > >
>
> > > > > > > > I'm reading through all of the documentation at https://ovirt.org/documentation/, and am a bit overwhelmed with all of the different options for installing oVirt.
> > > > > > > >
> > > > > > > > My particular use case is that I'm looking for a way to manage VMs on multiple physical servers from 1 interface, and be able to deploy new VMs (or delete VMs) as necessary. Ideally, it would be great if I could move a VM from 1 host to a different host as well, particularly in the event that 1 host becomes degraded (bad HDD, bad processor, etc.).
> > > > > > > >
> > > > > > > > I'm trying to figure out what the difference is between an oVirt Node and the oVirt Engine, and how the Engine differs from the Manager.
> > > > > > > >
> > > > > > > > I get the feeling that `Engine` = `Manager`. Same thing. I further think I understand the Engine to be essentially synonymous with a vCenter VM for ESXi hosts. Is this correct?
> > > > > > > >
> > > > > > > > If so, then what's the difference between the `self-hosted` vs. the `stand-alone` engines?
> > > > > > > >
> > > > > > > > oVirt Engine requirements look to be a minimum of 4GB RAM and 2 CPUs.
> > > > > > > > oVirt Nodes, on the other hand, require only 2GB RAM.
> > > > > > > > Is this a requirement just for the physical host, or is that how much RAM each oVirt Node process requires? In other words, if I have a physical host with 12GB of physical RAM, will I only be able to allocate 10GB of that to guest VMs? How much of that should I dedicate to the oVirt Node processes?
> > > > > > > >
> > > > > > > > Can you install the oVirt Engine as a VM onto an existing oVirt Node? And then connect that same node to the Engine, once the Engine is installed?
> > > > > > > >
> > > > > > > > Reading through the documentation, it also sounds like oVirt Engine and oVirt Node require different versions of RHEL or CentOS.
> > > > > > > > I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, whereas each Node requires 7.x (although I'll plan to just use the oVirt Node ISO).
> > > > > > > >
> > > > > > > > I'm also wondering about storage.
> > > > > > > > I don't really like the idea of using local storage, but a single NFS server would also be a single point of failure, and Gluster would be too expensive to deploy, so at this point I'm leaning towards using local storage.
> > > > > > > >
> > > > > > > > Any advice or clarity would be greatly appreciated.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > David
> > > > > > > >
> > > > > > > > Sent with ProtonMail Secure Email.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYFSHSV4IZB...