Unable to start VM after upgrade from 4.1.9 to 4.2.1 - NPE

Hello,

we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9) and afterwards all nodes too. The cluster compatibility level has been set to 4.2.

Now we can't start a VM after it has been powered off. The only hint we found in engine.log is:

2018-03-07 14:51:52,504+01 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@491983e9'}), log id: 7d49849e
2018-03-07 14:51:52,509+01 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
2018-03-07 14:51:52,531+01 INFO  [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
2018-03-07 14:51:52,533+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
2018-03-07 14:51:52,546+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
    at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]
[...]

But this doesn't lead us to the root cause. I haven't found any matching bug tickets in the release notes for the upcoming 4.2.1. Can anyone help here?

Kind regards

Jan Siml

On Wed, Mar 7, 2018 at 4:11 PM, Jan Siml <jsiml@plusline.net> wrote:
[...]
What's the MAC address of that VM? You can find it in the UI or with:

select mac_addr from vm_interface where vm_guid in (select vm_guid from vm_static where vm_name='<vm_name>');
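(A note for readers following along: one way to run these queries is directly against the engine database on the engine host. A minimal sketch, assuming the default database name 'engine' and substituting this thread's VM name; paths and authentication may differ per setup:)

# run as root on the engine host; assumes local peer authentication for the postgres user
su - postgres -c "psql -d engine -c \"select mac_addr from vm_interface where vm_guid in (select vm_guid from vm_static where vm_name='prod-hub-201');\""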

On Wed, Mar 7, 2018 at 5:20 PM, Arik Hadas <ahadas@redhat.com> wrote:
[...]
Actually, a different question - does this VM have an unplugged network interface?

Hello Arik,
[...]
Actually, a different question - does this VM have an unplugged network interface?
The VM has two NICs. Both are plugged. The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 00:1a:4a:36:01:67 for nic2.

Regards

Jan

On Wed, Mar 7, 2018 at 5:32 PM, Jan Siml <jsiml@plusline.net> wrote:
[...]
The VM has two NICs. Both are plugged.
The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 00:1a:4a:36:01:67 for nic2.
OK, those seem like two valid MAC addresses, so maybe something is wrong with the VM devices. Could you please provide the output of:

select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='<vm_name>');

Hello Arik,
[...]
OK, those seem like two valid MAC addresses, so maybe something is wrong with the VM devices. Could you please provide the output of:
select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='<vm_name>');
Sure:

engine=# select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201');
    type    |    device     |                           address                            | is_managed | is_plugged |     alias
------------+---------------+--------------------------------------------------------------+------------+------------+----------------
 video      | qxl           |                                                              | t          | t          |
 controller | virtio-scsi   |                                                              | t          | t          |
 balloon    | memballoon    |                                                              | t          | f          | balloon0
 graphics   | spice         |                                                              | t          | t          |
 controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x0000, type=pci, function=0x0} | t          | t          | virtio-serial0
 disk       | disk          | {slot=0x07, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | virtio-disk0
 memballoon | memballoon    | {slot=0x08, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | balloon0
 interface  | bridge        | {slot=0x03, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | net0
 interface  | bridge        | {slot=0x09, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | net1
 controller | scsi          | {slot=0x05, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | scsi0
 controller | ide           | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1} | f          | t          | ide
 controller | usb           | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x2} | t          | t          | usb
 channel    | unix          | {bus=0, controller=0, type=virtio-serial, port=1}            | f          | t          | channel0
 channel    | unix          | {bus=0, controller=0, type=virtio-serial, port=2}            | f          | t          | channel1
 channel    | spicevmc      | {bus=0, controller=0, type=virtio-serial, port=3}            | f          | t          | channel2
 interface  | bridge        |                                                              | t          | t          | net1
 interface  | bridge        |                                                              | t          | t          | net0
 disk       | cdrom         |                                                              | t          | f          | ide0-1-0
 disk       | cdrom         | {bus=1, controller=0, type=drive, target=0, unit=0}          | f          | t          | ide0-1-0
 disk       | disk          |                                                              | t          | t          | virtio-disk0
(20 rows)

Kind regards

Jan
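(An aside for readers: the anomaly in the output above is that several devices appear twice, once managed and once unmanaged - net0, net1, virtio-disk0, ide0-1-0 and balloon0. A hedged SQL sketch, not from the thread, that surfaces such duplicates:)

select alias, count(*) as copies,
       bool_or(is_managed) as has_managed,
       bool_or(not is_managed) as has_unmanaged
  from vm_device
 where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201')
   and alias <> ''
 group by alias
having count(*) > 1;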

On 07.03.2018 at 16:49, Arik Hadas <ahadas@redhat.com> wrote:
[...]

Hi,

Enable network and disks on your VM, then do:
Run -> Once, OK, ignore errors, OK
Run

Cheers

Oliver

Hello,
Enable network and disks on your VM, then do: Run -> Once, OK, ignore errors, OK, Run. Cheers
WTF! That worked.
Do you know why this works and what happens in the background? Is there a Bugzilla bug ID for this issue?
BTW, here is the list of devices before:

engine=# select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201');
    type    |    device     |                           address                            | is_managed | is_plugged |     alias
------------+---------------+--------------------------------------------------------------+------------+------------+----------------
 video      | qxl           |                                                              | t          | t          |
 controller | virtio-scsi   |                                                              | t          | t          |
 balloon    | memballoon    |                                                              | t          | f          | balloon0
 graphics   | spice         |                                                              | t          | t          |
 controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x0000, type=pci, function=0x0} | t          | t          | virtio-serial0
 disk       | disk          | {slot=0x07, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | virtio-disk0
 memballoon | memballoon    | {slot=0x08, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | balloon0
 interface  | bridge        | {slot=0x03, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | net0
 interface  | bridge        | {slot=0x09, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | net1
 controller | scsi          | {slot=0x05, bus=0x00, domain=0x0000, type=pci, function=0x0} | f          | t          | scsi0
 controller | ide           | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1} | f          | t          | ide
 controller | usb           | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x2} | t          | t          | usb
 channel    | unix          | {bus=0, controller=0, type=virtio-serial, port=1}            | f          | t          | channel0
 channel    | unix          | {bus=0, controller=0, type=virtio-serial, port=2}            | f          | t          | channel1
 channel    | spicevmc      | {bus=0, controller=0, type=virtio-serial, port=3}            | f          | t          | channel2
 interface  | bridge        |                                                              | t          | t          | net1
 interface  | bridge        |                                                              | t          | t          | net0
 disk       | cdrom         |                                                              | t          | f          | ide0-1-0
 disk       | cdrom         | {bus=1, controller=0, type=drive, target=0, unit=0}          | f          | t          | ide0-1-0
 disk       | disk          |                                                              | t          | t          | virtio-disk0
(20 rows)

and afterwards:

engine=# select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201');
    type    |    device     |                           address                            | is_managed | is_plugged |     alias
------------+---------------+--------------------------------------------------------------+------------+------------+----------------
 channel    | spicevmc      | {type=virtio-serial, bus=0, controller=0, port=3}            | f          | t          | channel2
 channel    | unix          | {type=virtio-serial, bus=0, controller=0, port=1}            | f          | t          | channel0
 interface  | bridge        | {type=pci, slot=0x04, bus=0x00, domain=0x0000, function=0x0} | t          | t          | net1
 controller | usb           | {type=pci, slot=0x01, bus=0x00, domain=0x0000, function=0x2} | t          | t          | usb
 controller | virtio-serial | {type=pci, slot=0x06, bus=0x00, domain=0x0000, function=0x0} | t          | t          | virtio-serial0
 interface  | bridge        | {type=pci, slot=0x03, bus=0x00, domain=0x0000, function=0x0} | t          | t          | net0
 controller | virtio-scsi   | {type=pci, slot=0x05, bus=0x00, domain=0x0000, function=0x0} | t          | t          | scsi0
 video      | qxl           | {type=pci, slot=0x02, bus=0x00, domain=0x0000, function=0x0} | t          | t          | video0
 channel    | unix          | {type=virtio-serial, bus=0, controller=0, port=2}            | f          | t          | channel1
 balloon    | memballoon    |                                                              | t          | t          | balloon0
 graphics   | spice         |                                                              | t          | t          |
 disk       | cdrom         |                                                              | t          | f          | ide0-1-0
 disk       | disk          | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0} | t          | t          | virtio-disk0
(13 rows)

Regards

Jan

On Wed, Mar 7, 2018 at 6:15 PM, Jan Siml <jsiml@plusline.net> wrote:
[...]
Thanks. The problem was that unmanaged interfaces and disks were created (and thus, you previously had 4 interface devices, 2 disk devices and 2 CD devices). That is most probably the result of a bug we had when migrating a VM that was started in a cluster < 4.2 to a 4.2 host. The fix for this bug will be available in 4.2.2. You could, alternatively, remove the unmanaged (disk and interface) devices and plug the managed ones.
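(For readers who prefer the database route: a hedged sketch of that alternative, not an official procedure. It assumes this thread's VM name, the VM powered off, ovirt-engine stopped, and a fresh backup of the engine database:)

-- drop the unmanaged duplicate interface and disk devices
delete from vm_device
 where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201')
   and is_managed = false
   and type in ('interface', 'disk');

-- re-plug the managed interfaces and the managed virtio disk (the managed CD-ROM stays unplugged)
update vm_device
   set is_plugged = true
 where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201')
   and is_managed = true
   and (type = 'interface' or (type = 'disk' and device = 'disk'));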

On 07.03.2018 17:22, Arik Hadas wrote:
[...]
Do you know why this works and what happens in the background? Is there a Bugzilla bug ID for this issue?
I figured it out by attempting to change the VM CPU family of the old VMs, as a last try to get them working again. After a live upgrade to 4.2 with running VMs, they were all dead with disabled network and disks once they had been shut down. Deleting and recreating them all would have been no delight; the other way works. Have a nice day.
--
Kind regards

Oliver Riesener
--
Hochschule Bremen
Elektrotechnik und Informatik
Oliver Riesener
Neustadtswall 30
D-28199 Bremen
Tel: 0421 5905-2405, Fax: -2400
e-mail: oliver.riesener@hs-bremen.de
participants (3)
- Arik Hadas
- Jan Siml
- Oliver Riesener