On Tue, Oct 8, 2013 at 12:27 PM, Omer Frenkel wrote:
> > >
> > > so now I'm able to start the VM without having to select run once and
> > > attach a cd iso
> > > (note that this is only valid for newly created VMs though)
> >
> > Yes. Old VMs are trashed with the bogus address reported by the buggy
> > Vdsm. Can someone from engine supply a script to clear all device
> > addresses from the VM database table?
> >
> you can use this line (assuming 'engine' as both the DB name and the DB user);
> make sure that only the 'bad' VMs are returned:
>
> psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device
>   where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"
>
> if so, you can run this to clear the address field for those devices, so the
> VMs can run again:
>
> psql -U engine -d engine -c "update vm_device set address='' where
>   device='cdrom' and address ilike '%pci%';"
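(Side note: the update can also be run inside a transaction and verified
before committing; a minimal sketch at the psql prompt, assuming the same
'engine' DB name and user as above:

engine=# BEGIN;
engine=# update vm_device set address='' where device='cdrom' and address ilike '%pci%';
engine=# select count(*) from vm_device where device='cdrom' and address ilike '%pci%';
engine=# COMMIT;

If the count is not 0, issue ROLLBACK instead of COMMIT and investigate.)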
I wanted to test this, but for some reason the problem seems to have solved
itself. I first ran the select query, and it already returned no rows:
engine=# select distinct vm_name from vm_static, vm_device where
vm_guid=vm_id and device='cdrom' and address ilike '%pci%';
vm_name
---------
(0 rows)
(overall, my VMs are:
engine=# select distinct vm_name from vm_static;
vm_name
-----------
Blank
c6s
c8again32
(3 rows)
)
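To see what is actually stored per device, the address column itself can be
listed with the same join as above (a sketch, assuming the same 'engine' DB
and user); a cleared row shows an empty address, a 'bad' one contains pci:

engine=# select vm_name, device, address from vm_static, vm_device
where vm_guid=vm_id and device='cdrom';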
Then I tried to start the powered-off VM that gave problems, and it started OK.
I ran shutdown inside it and powered it on again, and that worked too.
As soon as I had updated the bugzilla with my comment, I restarted vdsmd
on both nodes, but at that point I wasn't able to run the VM...
Strange... is there any other thing that could have resolved this by itself?
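One way to check whether anything touched the VM in the meantime is the
engine audit log; a sketch, assuming the standard audit_log table in the
engine DB:

engine=# select log_time, message from audit_log order by log_time desc limit 20;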
Things done in sequence related to that VM were:
- modify vm.py on both nodes and restart vdsmd while the VM was powered off
  (see the restart sketch below)
- verify that power on gave the error
- verify that run once worked as before
- shutdown and power off the VM
- after two days (no activity at all on my side) I wanted to try the DB
  workaround to fix this VM's status, but apparently the column was already
  empty and the VM was able to start normally....
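For reference, the restart in the first step was done along these lines on
each node (a sketch assuming EL6-style init scripts; on a systemd host it
would be 'systemctl restart vdsmd' instead):

# after editing the vdsm vm.py module on the node
service vdsmd restart
service vdsmd status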
Gianluca