On Fri, Sep 06, 2013 at 04:05:13PM +0200, Joop wrote:
Alessandro Bianchi wrote:
>>On 6-9-2013 12:34, Alessandro Bianchi wrote:
>>>Hi all
>>>
>>>I'm running 3.2 on several Fedora 18 nodes
>>>
>>>One of them has a local storage running 4 VMs
>>>
>>>Today the UPS crashed and the host was rebooted after the UPS was replaced
>>>
>>>None of the VMs could be started
>>>
>>>I tried putting the host into maintenance and reinstalling it, but that
>>>didn't help
>>>
>>>Digging into the logs I discovered the following error:
>>>
>>>The first was of this kind (on every VM)
>>>
>>>  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2630, in createXML
>>>    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
>>>libvirtError: internal error process exited while connecting to monitor: ((null):5034): Spice-Warning **: reds.c:3247:reds_init_ssl: Could not use private key file
>>>qemu-kvm: failed to initialize spice server
>>>
>>>Thread-564::DEBUG::2013-09-06 11:31:32,814::vm::1065::vm.Vm::(setDownStatus) vmId=`49d84915-490b-497d-a3f8-c7dac7485281`::Changed state to Down: internal error process exited while connecting to monitor: ((null):5034): Spice-Warning **: reds.c:3247:reds_init_ssl: Could not use private key file
>>>qemu-kvm: failed to initialize spice server
>>>
>>>The private key had mode 440 and was owned by the vdsm user and the
>>>kvm group
>>>
>>>I had to change it to 444 to allow everyone to read it
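>>>A minimal sketch of that workaround, demonstrated on a scratch file
>>>(the real Spice private key path is not quoted in full in this thread,
>>>so it is not hard-coded here):
>>>
```shell
#!/bin/sh
# Sketch of the key-permission workaround on a scratch file; substitute
# the actual Spice private key path on a real host.
key=$(mktemp)
chmod 440 "$key"
printf 'before: %s\n' "$(stat -c '%a' "$key")"   # 440: owner+group read only
chmod 444 "$key"                                 # workaround: world-readable
printf 'after: %s\n' "$(stat -c '%a' "$key")"    # 444
rm -f "$key"
```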
>>>
>>>After that I had for every VM the following error:
>>>
>>>could not open disk image
>>>/rhev/data-center/3935800a-abe4-406d-84a1-4c3c0b915cce/6818de31-5cda-41d0-a41a-681230a409ba/images/54144c03-5057-462e-8275-6ab386ae8c5a/01298998-32d5-44c2-b5d1-91be1316ed19:
>>>
>>>Permission denied
>>>
>>>Disks were owned by vdsm:kvm with 660 permission
>>>
>>>I had to relax this to 666 to enable the VMs to start
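>>>For reference, a sketch of why relaxing the mode helps: with 660 only
>>>the owner (vdsm) and members of the kvm group can open the image, so a
>>>qemu process running with the wrong supplementary groups gets EACCES;
>>>666 lets any process read and write it. Demonstrated on a scratch file
>>>rather than the real /rhev/... image path:
>>>
```shell
#!/bin/sh
# Demonstrate the mode change applied to the disk images in the thread.
img=$(mktemp)
chmod 660 "$img"
stat -c 'mode=%a owner=%U:%G' "$img"   # 660: owner+group only
chmod 666 "$img"                       # workaround: world read/write
stat -c 'mode=%a owner=%U:%G' "$img"
rm -f "$img"
```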
>>>
>>>Has anyone faced this kind of problem before?
>>>
>>Yes, me.
>>>Any hint about what may have caused this odd problem?
>>>
>>yum update.
>>
>>I updated one of my hosts, and after that the host couldn't start VMs
>>anymore, with exactly the same errors. See the thread 'Starting VM error'
>>by Shaun Glass. I tried a couple of things, but not making those files
>>world-readable. I will probably restore a backup and try that.
>>I added the virt-preview repo for F18 and updated qemu/libvirt which
>>also solved the problem.
>>The differences between the updated and non-updated hosts were really
>>minimal. See the thread for logs.
>>
>>Regards,
>>
>>Joop
>Thank you for your very quick answer
>
>I suspected the same thing !
>
>I'll update libvirt and revert the permission changes
>
That will give you a much newer libvirt/qemu than you probably
want. I would keep the permission changes and hope that one of the
upcoming updates to either libvirt or qemu fixes this problem.
Joop, I'm sorry that I have many requests and few answers, but if the
problem is indeed related to a specific libvirt/qemu version, would you
try to reproduce it outside oVirt?
I mean, in your working/non-working hosts, could you create a vdsm:kvm-
owned image, and try to run it from virsh (using vdsm@ovirt user and the
ever-so-secret password listed in vdsm/libvirt_password)?
What happens if you chown your image to vdsm:qemu? (keeping mode as 660)
What's `groups qemu` on your hosts?
Could you attach gdb to the short-lived qemu process, and call
getgroups(2) in it?
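(As a cheaper alternative to attaching gdb, the kernel already exposes a
process's supplementary groups on the "Groups:" line of
/proc/<pid>/status, which you could compare against `groups qemu`. A
sketch, demonstrated on the current shell since the qemu process is
short-lived; substitute the qemu-kvm pid:)

```shell
#!/bin/sh
# Read the supplementary groups of a running process from /proc.
# Using $$ (this shell) as a stand-in for the qemu-kvm pid.
pid=$$
grep '^Groups:' "/proc/$pid/status"
```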
Dan.