Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
I guess I didn't make myself clear.
I asked if you can shut down the VM, set only this VM's 'Custom compatibility mode' to 4.2, and power it up to test.
Yet, if the cluster was at 4.2 when you imported, then I am afraid the 'Custom Compatibility mode' won't make a difference.
I'd recommend opening a bug on bugzilla.redhat.com and sharing the link with the mailing list. Many of the devs take a look there.
Best Regards,
Strahil Nikolov
On Jan 12, 2020 15:02, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Yes! It is.
>
>
>
> After the upgrade from 4.2 to 4.3, the datacenter and cluster compatibility mode was 4.2 when I discovered the issue. Then I set the cluster mode to 4.3 – no luck. It is not so easy to return it to 4.2.
>
>
>
> Thank you!
>
> Best,
>
> Latcho
>
>
>
>
>
> From:
Low disk space on Storage
by suporte@logicworks.pt
Hi,
I'm running oVirt version 4.3.4.3-1.el7.
My filesystem disk has 30 GB free space.
I cannot start a VM due to a storage I/O error.
When trying to move the disk to another storage domain I get this error:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain DATA4.
The sum of the pre-allocated disks equals the total size of the storage domain.
Any idea what I can do to move a disk to another storage domain?
Many thanks
--
Jose Ferradeira
http://www.logicworks.pt
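As a starting point for issues like this, the free space and disk allocation can be sanity-checked from a host shell. A minimal sketch, assuming the usual oVirt mount root (`/rhev/data-center/mnt` is an assumption; adjust to your environment), with the image-specific `qemu-img` line left commented because the UUIDs are environment-specific:

```shell
# Assumed default mount root for oVirt storage domains; override via env.
SD_MOUNT="${SD_MOUNT:-/rhev/data-center/mnt}"

# Free space on the storage domain's filesystem (falls back to / so the
# sketch still runs on machines without that path):
df -h "$SD_MOUNT" 2>/dev/null || df -h /

# A preallocated (raw) disk reserves its full virtual size up front, which
# is why the sum of preallocated disks can consume the whole domain:
# qemu-img info "$SD_MOUNT/<sd-uuid>/images/<img-uuid>/<vol-uuid>"
```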
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
This seems quite odd.
What happens when you set custom compatibility mode to 4.2 ?
I think it was in Edit -> system -> Advanced options -> Custom compatibility mode -> 4.2
Best Regards,
Strahil Nikolov
On Jan 12, 2020 13:24, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Hi Strahil,
>
>
>
> Unfortunately virtio drivers cannot be used. The guest OS is CentOS 5.2, kernel 2.6.18-92.1.17.el5.centos.plusPAE
>
>
>
> The VM's network connectivity is very short-lived (please see below). By the time I connect to the VM (via SPICE), network connectivity is already gone.
>
> tcpdump -i eth0 -v
>
> tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
>
>
>
> 0 packets captured
>
> 0 packets received by filter
>
> 0 packets dropped by kernel
>
>
>
> From 172.17.17.7 icmp_seq=60 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=61 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=62 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=63 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=64 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=65 Destination Host Unreachable
>
> 64 bytes from 172.17.16.16: icmp_seq=66 ttl=64 time=2.19 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=67 ttl=64 time=0.567 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=68 ttl=64 time=0.339 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=69 ttl=64 time=0.337 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=70 ttl=64 time=0.288 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=71 ttl=64 time=0.305 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=72 ttl=64 time=0.293 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=73 ttl=64 time=0.346 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=74 ttl=64 time=0.359 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=75 ttl=64 time=0.479 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=76 ttl=64 time=0.391 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=77 ttl=64 time=0.302 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=78 ttl=64 time=0.327 ms
>
> 64 bytes from 172.17.16.16: icmp_seq=79 ttl=64 time=0.321 ms
>
> From 172.17.17.7 icmp_seq=125 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=126 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=127 Destination Host Unreachable
>
> From 172.17.17.7 icmp_seq=128 Destination Host Unreachable
>
>
>
> Thank you!
>
> Best,
>
> Latcho
>
>
>
>
>
> From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> Sent: Sunday, January 12, 2020 12:52 PM
> To: users <users(a)ovirt.org>; Latchezar Filtchev <Latcho(a)aubg.bg>
> Subject: Re: [ovirt-users] Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
>
>
>
> Hi Latcho,
>
>
>
> Can you run the virtio drivers ?
>
>
>
> They are the most tested in KVM .
>
>
>
> Also, can you run a tcpdump (with e1000) and check what is going on when the network disappears?
>
>
>
> Best Regards,
>
> Strahil Nikolov
>
>
>
> On Sunday, January 12, 2020, 09:22:52 AM GMT+2, Latchezar Filtchev <latcho(a)aubg.bg> wrote:
>
>
>
>
>
> Dear Strahil,
>
>
>
> I tried rtl8139. The behavior is the same.
>
>
>
> Best,
>
> Latcho
>
>
>
>
>
> From: Strahil <hunter86_bg(a)yahoo.com>
> Sent: Sunday, January 12, 2020 1:19 AM
> To: Latchezar Filtchev <Latcho(a)aubg.bg>; users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
>
>
>
> Hi Latcho,
>
> Most probably it's a bug.
> Have you tried virtio and/or rtl-based NIC ?
>
> As far as I know, CentOS 5 supports Virtio after Kernel >= 2.6.25 .
>
> Best Regards,
> Strahil Nikolov
>
> On Jan 11, 2020 19:54, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>>
>> Hello Everyone,
>>
>>
>>
>> I am aware my guest OS is out of support but I experienced the following:
>>
>>
>>
>> oVirt – 4.2.8 (self-hosted engine)
>>
> VM – CentOS 5.2; two vNICs (e1000 driver) connected to different VLANs – no issues with network connectivity
>>
>>
>>
>> After upgrade to oVirt 4.3.7
>>
>>
>>
> The same VM starts normally. Network is available for several seconds (10–20 pings) and then it disappears. The machine works, but there is no ping to/from the VM. When I return the same machine (via the export domain) to the oVirt 4.2.8 environment, it works as expected.
>>
>>
>>
>> Can someone advise on this? Any help will be greatly appreciated.
>>
>>
>>
>> Thank you.
>>
>> Best,
>>
>> Latcho
>>
>>
>>
>>
>>
>>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAPMFAA3XZO...
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
Hi Latcho,
Most probably it's a bug.
Have you tried virtio and/or rtl-based NIC ?
As far as I know, CentOS 5 supports Virtio after Kernel >= 2.6.25 .
Best Regards,
Strahil Nikolov
On Jan 11, 2020 19:54, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Hello Everyone,
>
>
>
> I am aware my guest OS is out of support but I experienced the following:
>
>
>
> oVirt – 4.2.8 (self-hosted engine)
>
> VM – CentOS 5.2; two vNICs (e1000 driver) connected to different VLANs – no issues with network connectivity
>
>
>
> After upgrade to oVirt 4.3.7
>
>
>
> The same VM starts normally. Network is available for several seconds (10–20 pings) and then it disappears. The machine works, but there is no ping to/from the VM. When I return the same machine (via the export domain) to the oVirt 4.2.8 environment, it works as expected.
>
>
>
> Can someone advise on this? Any help will be greatly appreciated.
>
>
>
> Thank you.
>
> Best,
>
> Latcho
>
>
>
>
>
>
Upload ISO to data domain - checksum mismatch on the brick
by Strahil Nikolov
Hello Community,
I'm currently migrating all my ISOs from the ISO domain to the DATA domain, but it seems that the checksum (on brick level) mismatches. The odd thing is that I have managed to upload several ISOs (one of them 4.9 GB) without issues.
Restarting the ovirt-imageio-proxy.service and later the HostedEngine VM doesn't help.
The logs from the imageio proxy indicate no issues.
I thought that gluster could be the problem, but this is not the case – I tested by copying (on ovirt1) between the 2 mountpoints, and the sha512sums match.
Trying to upload via ovirt2 also doesn't help.
Any ideas are appreciated.
PS: I'm running latest oVirt.
Best Regards,
Strahil Nikolov
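One way to narrow a mismatch like this down is to compare the uploaded image's checksum on every replica brick directly. A sketch with hypothetical hostnames (ovirt1-3) and brick paths; the commands are echoed as a dry run, so drop the `echo` to actually run them over ssh:

```shell
# Hypothetical brick path and image path -- adjust to your deployment.
BRICK_PATH="/gluster_bricks/data/brick"
IMG_PATH="images/<img-uuid>/<vol-uuid>"

for host in ovirt1 ovirt2 ovirt3; do
    # Dry run: prints the command instead of executing it over ssh.
    echo ssh "$host" sha512sum "$BRICK_PATH/$IMG_PATH"
done

# All three sums should match; if one brick differs, check pending heals:
echo gluster volume heal data info
```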
HCI Disaster Recovery
by Christian Reiss
Hey folks,
- theoretical question, no live data in jeopardy -
Let's say a 3-way HCI cluster is up and running, with engine running,
all is well. The setup was done via gui, including gluster.
Now I would kill a host, poweroff & disk wipe. Simulating a full node
failure.
The remaining nodes should keep running (3-copy sync, no arbiter);
VMs keep running or will be restarted. I would reinstall using the oVirt
node installer on the "failed" node.
This would net me with a completely empty, no-gluster setup. What is the
oVirt way to recover from this point onward?
Thanks for your continued support! <3
-Christian.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Latchezar Filtchev
Hello Everyone,
I am aware my guest OS is out of support but I experienced the following:
oVirt - 4.2.8 (self-hosted engine)
VM - CentOS 5.2; two vNICs (e1000 driver) connected to different VLANs - no issues with network connectivity
After upgrade to oVirt 4.3.7
The same VM starts normally. Network is available for several seconds (10-20 pings) and then it disappears. The machine works, but there is no ping to/from the VM. When I return the same machine (via the export domain) to the oVirt 4.2.8 environment, it works as expected.
Can someone advise on this? Any help will be greatly appreciated.
Thank you.
Best,
Latcho
Re: HCI Disaster Recovery
by Strahil
It's actually not so easy.
The fastest way to recover is just restore from backup.
Otherwise the flow should be:
1. Install the new node (new hostname will be easier).
2. Use gluster's replace-brick to change the dead brick with new one.
3. Once oVirt's integration with gluster detects the change, you will be able to forcefully remove the dead node.
4. Add the newly installed node to the relevant cluster (with or without hosted-engine deployment)
5. Test to move a low-priority VM to the new host.
6. Power up a test VM on the new host to test the functionality.
7. You can make the new node SPM and test snapshots and new VM creation.
As you can see , in order to remove a missing node - it should not be part of the Gluster Cluster.
Best Regards,
Strahil Nikolov
On Jan 10, 2020 20:10, Christian Reiss wrote:
>
> Hey,
>
> is there really no ovirt native way to restore a single host and bring
> it back into the cluster?
>
> -Chris.
>
> On 07.01.2020 09:54, Christian Reiss wrote:
> > Hey folks,
> >
> > - theoretical question, no live data in jeopardy -
> >
> > Let's say a 3-way HCI cluster is up and running, with engine running,
> > all is well. The setup was done via gui, including gluster.
> >
> > Now I would kill a host, poweroff & disk wipe. Simulating a full node
> > failure.
> >
> > The remaining nodes should keep running on (3 copy sync, no arbiter),
> > vms keep running or will be restarted. I would reinstall using the ovirt
> > node installer on the "failed" node.
> >
> > This would net me with a completely empty, no-gluster setup. What is the
> > oVirt way to recover from this point onward?
> >
> > Thanks for your continued support! <3
> > -Christian.
>
> --
> Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
> christian(a)reiss.nrw \ / Campaign
> X against HTML
> XMPP chris(a)alpha-labs.net / \ in eMails
> WEB christian-reiss.de, reiss.nrw
>
> GPG Retrieval http://gpg.christian-reiss.de
> GPG ID ABCD43C5, 0x44E29126ABCD43C5
> GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
>
> "It's better to reign in hell than to serve in heaven.",
> John Milton, Paradise lost.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GQECSH72L4S...
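The replace-brick flow described above can be sketched roughly as follows. The volume name, hostnames, and brick paths are hypothetical, and the commands are echoed as a dry run (remove the `echo` to run them for real on the gluster cluster):

```shell
# Hypothetical names -- substitute your own volume and hosts.
VOL="data"
DEAD_BRICK="dead-node:/gluster_bricks/data/brick"
NEW_BRICK="new-node:/gluster_bricks/data/brick"

# Steps 1-2: probe the freshly installed node, then swap the dead brick.
echo gluster peer probe new-node
echo gluster volume replace-brick "$VOL" "$DEAD_BRICK" "$NEW_BRICK" commit force

# Let self-heal copy the data onto the new brick before moving VMs there:
echo gluster volume heal "$VOL" info
```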
Hyperconverged setup - storage architecture - scaling
by Leo David
Hello Everyone,
Reading through the document:
"Red Hat Hyperconverged Infrastructure for Virtualization 1.5
Automating RHHI for Virtualization deployment"
Regarding storage scaling, i see the following statements:
2.7. SCALING
Red Hat Hyperconverged Infrastructure for Virtualization is supported for one node, and for clusters of 3, 6, 9, and 12 nodes. The initial deployment is either 1 or 3 nodes.
There are two supported methods of horizontally scaling Red Hat Hyperconverged Infrastructure for Virtualization:
1. Add new hyperconverged nodes to the cluster, in sets of three, up to the maximum of 12 hyperconverged nodes.
2. Create new Gluster volumes using new disks on existing hyperconverged nodes. You cannot create a volume that spans more than 3 nodes, or expand an existing volume so that it spans across more than 3 nodes at a time.
2.9.1. Prerequisites for geo-replication
Be aware of the following requirements and limitations when configuring geo-replication:
One geo-replicated volume only
Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization) supports only one geo-replicated volume. Red Hat recommends backing up the volume that stores the data of your virtual machines, as this usually contains the most valuable data.
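For reference, setting up that single geo-replicated volume comes down to a few gluster commands. A dry-run sketch with hypothetical names (master volume "data", remote host "drsite", slave volume "data_dr"); remove the `echo`es to execute:

```shell
# Hypothetical session names -- adjust to your sites.
MASTER_VOL="data"
SLAVE="drsite::data_dr"

# Distribute ssh keys, create the geo-rep session, then start it.
echo gluster system:: execute gsec_create
echo gluster volume geo-replication "$MASTER_VOL" "$SLAVE" create push-pem
echo gluster volume geo-replication "$MASTER_VOL" "$SLAVE" start
echo gluster volume geo-replication "$MASTER_VOL" "$SLAVE" status
```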
------
Also, in the oVirt Engine UI, when I add a brick to an existing volume I get the following warning:
"Expanding gluster volume in a hyper-converged setup is not recommended as it could lead to degraded performance. To expand storage for cluster, it is advised to add additional gluster volumes."
These things raise a couple of questions that may be easy for some of you guys to answer, but for me they create a bit of confusion...
I am also referring to the Red Hat product documentation, because I treat oVirt as being as production-ready as RHHI is.
1. Is there any reason for not going to distributed-replicated volumes (i.e. spreading one volume across 6, 9, or 12 nodes)?
- i.e. it is recommended that in a 9-node scenario I should have 3 separate volumes, but then how should I deal with the following question:
2. If only one geo-replicated volume can be configured, how should I deal with replication of the 2nd and 3rd volumes for disaster recovery?
3. If the limit of hosts per datacenter is 250, then (in theory) the recommended way of reaching this threshold would be to create 20 separate oVirt logical clusters with 12 nodes each (and the datacenter managed from one HA engine)?
4. At present, I have one 9-node cluster, with all hosts contributing 2 disks each to a single replica-3 distributed-replicated volume. They were added to the volume in the following order:
node1 - disk1
node2 - disk1
......
node9 - disk1
node1 - disk2
node2 - disk2
......
node9 - disk2
At the moment, the volume is arbitrated, but I intend to go for a full distributed replica 3.
Is this a bad setup? Why?
It obviously breaks the Red Hat recommended rules...
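For what it's worth, with 'replica 3' gluster groups bricks into replica sets in the exact order they appear on the command line, so with the ordering above (disk1 of nodes 1-9 first, then disk2) nodes 1-3, 4-6, and 7-9 mirror each other. A dry-run sketch with hypothetical names, truncated to the first six bricks:

```shell
# Hypothetical volume and brick names for illustration.
VOL="vmstore"

# Every consecutive group of 3 bricks forms one replica set:
# (node1,node2,node3), (node4,node5,node6), ...
echo gluster volume create "$VOL" replica 3 \
    node1:/bricks/disk1 node2:/bricks/disk1 node3:/bricks/disk1 \
    node4:/bricks/disk1 node5:/bricks/disk1 node6:/bricks/disk1

# Inspect the resulting layout:
echo gluster volume info "$VOL"
```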
Would anyone be so kind as to discuss these things?
Thank you very much!
Leo
--
Best regards, Leo David