Ovirt gluster arbiter within hosted VM
by Alex K
Hi all,
I have a two node hyper-converged setup which are causing me split-brains
when network issues are encountered. Since I cannot add a third hardware
node, I was thinking to add a dedicated guest VM hosted in same
hyper-converged cluster which would do the arbiter for the volumes.
What do you think about this setup in regards to stability and performance?
I am running ovirt 4.2.
Thanx,
Alex
5 years, 8 months
Ovirt-engine self hostd
by Dennis Perfors
Hi,
We have build Ovirt-engine self hosted. We have created 2 vlans 1 services and 1 student. This works good.
Services vlan 192.168.60.0/24 gateway ip 192.168.60.1 on this card
Student vlan 192.168.10.0/24 gateway ip 192.168.10.1 on this network card
Everything seems to be oke, but the probem is on the vlan.
On the node 192.168.60.3, there is no internet connection when we do yum update -y
But there is no internet connection on the host. I cant do yum update -y or do anything else.
Here more information:
^Chttp://mirror.cj2.nl/centos/7.6.1810/extras/x86_64/repodata/repomd.xml: [Errno 14] curl#56 - "Callback aborted"
Trying other mirror.
^Chttp://mirror.sitbv.nl/centos/7.6.1810/extras/x86_64/repodata/repomd.xml: [Errno 14] curl#56 - "Callback aborted"
Trying other mirror.
http://mirror.serverius.net/centos/7.6.1810/extras/x86_64/repodata/repomd...: [Errno 14] curl#7 - "Failed to connect to 2a03:3f40:1::15: Network is unreachable"
[root@bastion ~]# ping nu.nl
PING nu.nl (52.85.140.48) 56(84) bytes of data.
64 bytes from server-52-85-140-48.man50.r.cloudfront.net (52.85.140.48): icmp_seq=1 ttl=241 time=12.8 ms
^C
[root@bastion ~]# ping alibabia.com
PING alibabia.com (137.175.26.243) 56(84) bytes of data.
64 bytes from 137.175.26.243 (137.175.26.243): icmp_seq=1 ttl=115 time=144 ms
64 bytes from 137.175.26.243 (137.175.26.243): icmp_seq=2 ttl=115 time=144 ms
^C
ipv4.method: manual
ipv4.dns: 1.1.1.1
ipv4.dns-search: --
ipv4.dns-options: ""
ipv4.dns-priority: 0
ipv4.addresses: 192.168.60.3/24
ipv4.gateway: 192.168.60.1
ipv4.routes: --
ipv4.route-metric: -1
ipv4.route-table: 0 (unspec)
ipv4.ignore-auto-routes: no
ipv4.ignore-auto-dns: no
public (active)
target: default
icmp-block-inversion: no
interfaces: ovirtmgmt
sources:
services: ssh dhcpv6-client ovirt-postgres ovirt-https ovn-central-firewall-service ovirt-fence-kdump-listener ovirt-imageio-proxy ovirt-websocket-proxy ovirt-http ovirt-vmconsole-proxy ovirt-provider-ovn https http cockpit libvirt-tls snmp vdsm ovirt-imageio ovirt-vmconsole
ports: 9986/tcp 22/tcp 6081/udp 6100/tcp
protocols:
masquerade: yes
forward-ports: port=1337:proto=tcp:toport=22:toaddr=192.168.60.3
port=80:proto=tcp:toport=80:toaddr=192.168.60.5
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="0.0.0.0/24" accept
[root@athena ~]#
Portfowarding also works to the host 192.168.60.3. so that is no poiint. But only traffic from the insidehost to outside is not working.
Thanks
5 years, 8 months
Rebrandind Problems
by siovelrm@gmail.com
hello I made a rebranding of my ovirt 4.3.2, but something went wrong and I saved it from the original incorrectly without realizing it. Please I need the original "ovirt.brand" and "ovirt" directories when ovirt 4.3.2 is installed. Where can I get them to be able to restore them?
Greetings
5 years, 8 months
Re: Expand existing gluster storage in ovirt 4.2/4.3
by Strahil
Recently , it was discussed in the mailing lists and dev mentioned that distributed replicated volumes are not officially supported, but some users use them.
Even if not supported, it still should work without issues.If you think not to go this way, you can create a new 3 node cluster which will be fully suppported.
Otherwise, if you go towards replicated distributed volumes , you just need to provide another set of 3 bricks and once added you can rebalance in order to distribute the files accross the sets.
Here is an old thread that describes it for replica 2 volume types:
https://lists.gluster.org/pipermail/gluster-users/2011-February/006599.html
I guess I have confused you with my last e-mail, but that was not intentionable.
Best Regards,
Strahil NikolovOn Apr 17, 2019 17:13, adrianquintero(a)gmail.com wrote:
>
> Hi Strahil,
> I had a 3 node Hyperconverged setup and added 3 new nodes to the cluster for a total of 6 servers. I am now taking advantage of more compute power, however the gluster storage part is what gets me.
>
> Current Hyperconverged setup:
> - host1.mydomain.com
> Bricks:
> engine
> data1
> vmstore1
> - host2.mydomain.com
> Bricks:
> engine
> data1
> vmstore1
> - host3.mydomain.com
> Bricks:
> engine
> data1
> vmstore1
>
> - host4.mydomain.com
> Bricks:
>
> - host5.mydomain.com
> Bricks:
>
> - host6.mydomain.com
> Bricks:
>
>
> As you can see from the above, the original first 3 servers are the only ones that contain the gluster storage bricks, so storage redundancy is not set across all 6 nodes. I think it is a lack of understanding from my end on how ovirt and gluster integrate with one another so have a few questions:
>
> How would I go about achieving storage redundancy across all nodes?
> Do I need to configure gluster volumes manually through the OS CLI?
> If I configure the fail storage scenario manually will oVirt know about it?
>
> Again I know that the bricks must be added in sets of 3 and per the first 3 nodes my gluster setup looks like this (all done by hyperconverged seup in ovirt):
> engine volume: host1:brick1, host2:brick1, host3:brick1
> data1 volume: host1:brick2, host2:brick2, host3:brick2
> vmstore1 volume: host1:brick3, host2:brick3, host3:brick3
>
> So after adding the 3 new servers I dont know if I need to do something similar to the example in https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-..., if I do a similar change will oVirt know about it? will it be able to handle it as hyperconverged?
>
> As I mentioned before I normally see 3 node hyperconverged setup examples with gluster but have not found one for 6, 9 or 12 node cluster.
>
> Thanks again.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5T7TCSP4HF...
5 years, 8 months
Ovirt Host Replacement/Rebuild
by Jim Kusznir
Hi all:
I had an unplanned power outage (generator failed to start, power failure
lasted 3 min longer than UPS batteries). One node didn't survive the
unplanned power outage.
By that, I mean it kernel panic's on boot, and I haven't been able to
capture the KP or the first part of it (just the end), and so I don't
truely know what the root cause is. I have validated the hardware is just
fine, so its got to be an OS corruption.
Based on this, I was thinking that perhaps the easiest way to recover would
simply be to delete the host from the cluster, reformat and reinstall this
host, and then add it back to the cluster as a new host. Is this in fact a
good idea? Are there any references to how to do this (the detailed steps
so I don't mess it up)?
My cluster is (was) a 3 node hyperconverged cluster with gluster used for
the management node. I also have a gluster share for VMs, but I use an NFS
share from a NAS for that (which I will ask about in another post).
Thanks for the help!
--Jim
5 years, 8 months
Importing existing GlusterFS
by Zryty ADHD
Hi,
I have a questiion about that. I Install Ovirt 4.3.3 on Rhel 7.6 and want to import my existing GlusterFs cluster to it but i can't find option to do that. Can anyone explain me how to do that or its not possible in this version ?
5 years, 8 months
oVirt and NetApp NFS storage
by klaasdemter@gmail.com
Hi,
I got a question regarding oVirt and the support of NetApp NFS storage.
We have a MetroCluster for our virtual machine disks but a HA-Failover
of that (active IP gets assigned to another node) seems to produce
outages too long for sanlock to handle - that affects all VMs that have
storage leases. NetApp says a "worst case" takeover time is 120 seconds.
That would mean sanlock has already killed all VMs. Is anyone familiar
with how we could setup oVirt to allow such storage outages? Do I need
to use another type of storage for my oVirt VMs because that NFS
implementation is unsuitable for oVirt?
Greetings
Klaas
5 years, 8 months
what is the best solution for gluster?
by Edoardo Mazza
Hi all,
I have 4 nodes with ovirt and gluster and I must create a new gluster
volume, but I would know which is the best solution to have a high
avaibility, best performance without wasting much space in the disk.
Thanks for suggestions
Edoardo
5 years, 8 months
Changing from thin provisioned to preallocated?
by Wesley Stewart
I am currently running a ZFS server (Running RaidZ2) and I have been
experimenting with NFS and shares to host my guests. I am currently
running oVirt 4.2.8 and using a RaidZ2 NFS mount for the guests.
ZFS definitely is definitely pretty awful (At least in my experience so
far) for hosting VMs. I believe this is due to the synchronous writes
being performed. However, I think running an iSCSI target with
Synchronization disabled over a 10Gb connection might do the trick. (I have
a couple of mirroed SSD drives for performance if I need it, but the RaidZ2
crawls for disk speed).
When I tried to migrate a thin provisioned guest to iSCSI, I keep getting
an "Out of disk space error" which I am pretty sure is due to the block
style storage on the iSCSI target. Is there a way to switch from Thin to
Preallocated? Or is my best bet to try and take a snapshot and clone this
into a pre-allocated disk?
5 years, 8 months