Bug 1666795 - Related? - VMs don't start after shutdown on FCP

Hi there. I wonder if this issue is related to our problem and if there is a way around it. We upgraded from 4.2.8 to 4.3.2. Now some of the VMs fail to start. You need to detach the disks, create a new VM, reattach the disks to the new VM, and then the new VM starts. Thanks, Nar

nardusg@gmail.com writes:
I wonder if this issue is related to our problem and if there is a way around it. We upgraded from 4.2.8 to 4.3.2. Now some of the VMs fail to start. You need to detach the disks, create a new VM, reattach the disks to the new VM, and then the new VM starts.
Hi, were those VMs previously migrated from a 4.2.8 host to a 4.3.2 host, or to a 4.3.0/4.3.1 host (which have the given bug)? Would it be possible to provide Vdsm logs from some of the failed starts and from a successful start (with the new VM) on the same storage, and also from the destination host of the preceding migration of the VM to a 4.3 host (if the VM was migrated)? Thanks, Milan

Attached is the engine.log.

Nardus Geldenhuys <nardusg@gmail.com> writes:
attached is the engine.log
Can't find any logs containing the VM name on the host where it was supposed to start. It seems that it does not even get to the host and that it fails in the oVirt engine.
Thank you for the info. The problem looks completely unrelated to the cited bug. The VM already fails to start in Engine, due to a NullPointerException when putting network interfaces into the VM domain XML, so it's probably unrelated to storage as well. Something regarding the network interfaces attached to the VM probably broke during the upgrade. Is there anything special about your network interfaces, or is there anything suspicious about them in Engine when the VM fails to start?
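Since the failure Milan describes happens in Engine while building the domain XML, one way to confirm it is to look for the NullPointerException near the failed VM start in engine.log. Below is a minimal sketch, assuming the default log location /var/log/ovirt-engine/engine.log; the VM name is a placeholder.

```python
# Minimal sketch: find NullPointerException entries in engine.log that occur
# close to lines mentioning a given VM name. The log path is the default
# engine location and the VM name is a placeholder.
LOG_PATH = "/var/log/ovirt-engine/engine.log"
VM_NAME = "my-vm"  # hypothetical VM name

def npe_windows(path, vm_name, window=20):
    """Yield chunks of log lines around NullPointerException entries that
    mention the VM name somewhere in the surrounding context."""
    with open(path, errors="replace") as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if "NullPointerException" in line:
            context = lines[max(0, i - window): i + window]
            if any(vm_name in c for c in context):
                yield "".join(context)

for chunk in npe_windows(LOG_PATH, VM_NAME):
    print(chunk)
    print("-" * 60)
```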

Hi Milan, nothing special. We did the upgrade on two clusters; one is fine and this one is broken. Is there a way to rescan the cluster with all its VMs to pull the information? I also noticed that there is no NIC showing under the VM's network. When you try to add one, it complains that it already exists, but it is not showing. Thanks, Nardus
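Given the symptom that the VM shows no NIC but adding one reports that it already exists, it may help to ask Engine over its REST API which NICs it actually has on record for the VM. A sketch using the oVirt Python SDK (ovirtsdk4) follows; the engine URL, credentials, CA path, and VM name are placeholders.

```python
# Sketch: list the NICs Engine has on record for a VM, to compare with what
# the UI shows. Uses the oVirt Python SDK (ovirtsdk4); the URL, credentials,
# CA file path and VM name are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder engine URL
    username="admin@internal",
    password="password",                                # placeholder
    ca_file="/etc/pki/ovirt-engine/ca.pem",             # engine CA certificate
)
try:
    vms_service = connection.system_service().vms_service()
    vms = vms_service.list(search="name=my-vm")         # hypothetical VM name
    if not vms:
        print("VM not found in Engine")
    else:
        vm_service = vms_service.vm_service(vms[0].id)
        for nic in vm_service.nics_service().list():
            print(nic.name, nic.id, nic.plugged, nic.linked)
finally:
    connection.close()
```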

Hello, On Wed, Apr 10, 2019 at 2:26 PM Nardus Geldenhuys <nardusg@gmail.com> wrote:
Hi Milan
Nothing special. We did the upgrade on two clusters; one is fine and this one is broken. Is there a way to rescan the cluster with all its VMs to pull the information?
I also noticed that there is no NIC showing under the VM's network. When you try to add one, it complains that it already exists, but it is not showing.
That would mean that there is probably some corrupted information in the database about the VM interface, which might be the reason for the NPE while building the libvirt XML. To which network was the VM interface connected? Did something happen to the related network during the upgrade?
Thanks, Nardus
-- Ales Musil, Associate Software Engineer - RHV network, Red Hat EMEA <https://www.redhat.com/>, amusil@redhat.com
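If, as Ales suggests, the interface information in the engine database is corrupted, a read-only look at the relevant rows can show whether they match what the UI displays. Below is a sketch, assuming a local PostgreSQL database named engine with vm_static and vm_interface tables keyed by vm_guid; table and column names should be verified against your engine schema version, and the credentials (normally found in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf) are placeholders.

```python
# Read-only sketch: show what the engine database stores for a VM's interfaces.
# The database/table/column names are assumptions based on the engine schema
# and should be verified for your version; credentials are placeholders.
import psycopg2

conn = psycopg2.connect(dbname="engine", user="engine",
                        host="localhost", password="engine-db-password")
try:
    with conn, conn.cursor() as cur:
        cur.execute("SELECT vm_guid FROM vm_static WHERE vm_name = %s",
                    ("my-vm",))                      # hypothetical VM name
        row = cur.fetchone()
        if row is None:
            print("VM not found in vm_static")
        else:
            cur.execute("SELECT * FROM vm_interface WHERE vm_guid = %s",
                        (row[0],))
            for iface in cur.fetchall():
                print(iface)
finally:
    conn.close()
```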

Can't find any logs containing the VM name on the host where it was supposed to start. It seems that it does not even get to the host and that it fails in the oVirt engine.

It seems that ovirt-engine thinks that the storage is attached to a running VM, but it is not. Is there a way to refresh these stats?

This is fixed. A table in the database had been truncated; we fixed it by restoring a backup.
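For reference, backups like the one used for this fix are normally taken and restored with the engine-backup tool. The sketch below just wraps it from Python; the flags shown reflect common engine-backup usage and should be checked against engine-backup --help for your oVirt version, and the file paths are placeholders.

```python
# Sketch: drive the engine-backup tool from Python. The flags reflect common
# engine-backup usage and should be verified for your oVirt version; the
# backup and log file paths are placeholders.
import subprocess

BACKUP_FILE = "/var/tmp/engine-backup.tar.gz"  # placeholder path
LOG_FILE = "/var/tmp/engine-backup.log"        # placeholder path

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Take a backup of the engine configuration and database.
run(["engine-backup", "--mode=backup",
     f"--file={BACKUP_FILE}", f"--log={LOG_FILE}"])

# Restoring is done with the ovirt-engine service stopped; uncomment to use.
# run(["engine-backup", "--mode=restore",
#      f"--file={BACKUP_FILE}", f"--log={LOG_FILE}",
#      "--provision-db", "--restore-permissions"])
```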
participants (4)
- Ales Musil
- Milan Zamazal
- Nardus Geldenhuys
- nardusg@gmail.com