Maybe it's a libvirt problem, since my nodes run oVirt Node Hypervisor 2.2.2-2.2.fc16.
engine:
libvirt-0.9.11.4-3.fc17.x86_64
node:
libvirt-0.9.6-4.fc16.x86_64
storage:
No local fs. I have two storage domains: one uses plain NFS, the other is
GlusterFS mounted via NFS. Both show the problem.
[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff
Process 1209 attached with 11 threads - interrupt to quit
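(To confirm which daemon the traced pid actually belongs to, a quick check; that pid 1209 is one of the two daemons here is only my assumption:)

# print the command name behind the traced pid
ps -o comm= -p 1209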
After starting the VM:
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) ---
[pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) ---
[pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
After stopping the VM:
[pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
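In the meantime, resetting the ownership by hand should let the VM start again. A stopgap sketch (36:36 is vdsm:kvm on my nodes; the placeholder path stands for the volume from the trace above):

# restore vdsm:kvm ownership (uid/gid 36:36) on the affected volume
chown 36:36 /rhev/data-center/<dc-uuid>/<sd-uuid>/images/<img-uuid>/<vol-uuid>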
On 26 Jul, 2012, at 3:48 AM, Dan Kenigsberg wrote:
On Wed, Jul 25, 2012 at 04:58:48PM +0200, Martin Kletzander wrote:
> Thanks, I just wanted to make sure it's not libvirt that does this.
Pardon me, I still suspect libvirt... Which version thereof do you have
installed?
Which storage is used for the vm image - local fs, right?
Would you run
strace -p `<libvirtpid>` -e chown -ff
and start another VM, just to prove me wrong?
You could do the same with <vdsmpid> to find the culprit.
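Put together, the check could look like this (the pgrep patterns for the daemon process names are assumptions about the node):

# find the daemon pids (process names assumed)
libvirtpid=$(pgrep -o libvirtd)
vdsmpid=$(pgrep -o -f vdsm)
# trace chown calls, following forked children (run each in its own terminal)
strace -p "$libvirtpid" -e chown -ff
strace -p "$vdsmpid" -e chown -ff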
>
> On 07/25/2012 04:44 PM, T-Sinjon wrote:
>> dynamic_ownership is 0 on both the engine and the node.
>>
>> On 25 Jul, 2012, at 5:56 PM, Martin Kletzander wrote:
>>
>>> On 07/25/2012 10:36 AM, T-Sinjon wrote:
>>>>
>>>> Dear everyone:
>>>>
>>>> Description
>>>> When I create a VM, its owner is vdsm:kvm (36:36).
>>>>
>>>> When I start the VM, the owner changes to qemu:qemu (107:107):
>>>> -rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df
>>>> -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
>>>>
>>>> Then I stop the VM, and the owner changes to root:root:
>>>> -rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df
>>>> -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
>>>>
>>>> After that I cannot start the VM; the web admin events log shows:
>>>>
>>>
>>> Just out of curiosity (it probably won't be the cause of the
>>> problem), do you have dynamic_ownership=0 in /etc/libvirt/qemu.conf?
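For reference, a minimal excerpt of that setting in /etc/libvirt/qemu.conf (the comments are my reading of its behavior, not quoted from the file):

# /etc/libvirt/qemu.conf
# 0 = libvirt never changes image file ownership;
# 1 = libvirt chowns images to the qemu user/group on VM start (and back on stop)
dynamic_ownership = 0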