Hello,
I've been running my oVirt cluster (version 4.0.5.5-1.el7.centos) for a while
now, and I'm revisiting some aspects of it to make sure I have good
reliability.
It's a 3-node cluster, with Gluster running on each node.
After running the cluster for a bit, I'm realizing I didn't do a very good job
of allocating my disk space to the different Gluster mount points. Fortunately,
they were created with LVM, so I'm hoping I can resize them without much
trouble.
I have a domain for ISOs, a domain for export, and a domain for storage, all
thin provisioned, plus a domain for the engine that is not thin provisioned.
I'd like to expand the storage domain, and possibly shrink the engine domain
and make that space available to the main storage domain as well. Is it as
simple as expanding the LVM volumes, or are there more steps involved? Do I
need to take the node offline?
Second, I've noticed that the first two nodes seem to have a full copy of the
data (their disks are in use), but the third node doesn't appear to be using
any of its storage space. It is participating in the Gluster cluster, though.
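I'm wondering if that third brick was set up as an arbiter rather than a full
replica. I assume I'd check with something like the following on one of the
nodes (the volume name is a guess), but I'm not sure what to look for:

    # show the volume layout; an arbiter brick should be marked as such
    gluster volume info data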
Third, Gluster currently shares the same network as the VM networks. I'd like
to put it on its own network. I'm not sure how to do this; when I tried to set
it up that way at install time, I never got the cluster to come online, and I
had to make them share the same network to get things working.
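My guess is that each node would need a hostname that resolves over the new
storage network, since Gluster addresses its peers and bricks by hostname.
Something like this in /etc/hosts on each node (addresses and names made up):

    # dedicated storage network entries (hypothetical)
    10.10.10.1  node1-gluster
    10.10.10.2  node2-gluster
    10.10.10.3  node3-gluster

Is that the right direction, or does oVirt handle this differently?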
oVirt questions:
I've noticed that recently I don't appear to be getting software updates
anymore. I used to get update-available notifications for my nodes every few
days, but I haven't seen one for a couple of weeks now. Is something wrong?
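I haven't tried checking by hand yet; I assume on each node that would just be
something like:

    # check for pending updates directly on a node
    yum check-update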
I have a Windows 10 x64 VM. I get a warning that my VM type does not match the
installed OS. Everything works fine, and I've quadruple-checked that it does
match. Is this a known bug?
I have a UPS that all three nodes and the networking gear are on. It is a USB
UPS. How should I best integrate UPS monitoring? I could set up a Raspberry Pi
and run NUT or similar on it, but is there a "better" way with oVirt?
Thanks!
--Jim