Sorry for my carelessness; they are all libvirtd threads:
[root@ovirt-node-sun-1 ~]# top -b -n 2 -H -p 1209
top - 15:25:08 up 3 days, 9:11, 3 users, load average: 0.06, 0.49, 0.39
Tasks: 11 total, 0 running, 11 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7%us, 1.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st
Mem: 16436060k total, 7349120k used, 9086940k free, 69100k buffers
Swap: 0k total, 0k used, 0k free, 2239792k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1209 root 20 0 909m 17m 7164 S 0.0 0.1 1:33.10 libvirtd
1515 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.56 libvirtd
1516 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.81 libvirtd
1517 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.78 libvirtd
1518 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.55 libvirtd
1519 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.46 libvirtd
1520 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.35 libvirtd
1521 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.36 libvirtd
1522 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.37 libvirtd
1523 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.34 libvirtd
1524 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.30 libvirtd
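To double-check that these are all threads of a single libvirtd process (and not
separate daemons), something like the following should work; 1209 and 1518 are
the PIDs taken from the top output above:

[root@ovirt-node-sun-1 ~]# ps -L -o pid,lwp,comm -p 1209
[root@ovirt-node-sun-1 ~]# grep Tgid /proc/1518/status

ps -L prints one row per thread (LWP) of the daemon, and the Tgid line in
/proc/1518/status should read 1209 if 1518 is just one of its worker threads.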
On 26 Jul, 2012, at 8:51 PM, Martin Kletzander wrote:
On 07/26/2012 02:30 PM, Dan Kenigsberg wrote:
> On Thu, Jul 26, 2012 at 11:05:21AM +0800, T-Sinjon wrote:
>> Maybe it's a libvirt problem, since my nodes are running oVirt Node Hypervisor 2.2.2-2.2.fc16
>>
>> engine:
>> libvirt-0.9.11.4-3.fc17.x86_64
> This one is unused.
>
>>
>> node:
>> libvirt-0.9.6-4.fc16.x86_64
>>
>> storage:
>> No local fs. I have two Domains: one uses an NFS filesystem, the other is GlusterFS mounted via NFS.
>> Both have the problem
>>
>> [root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff
>> Process 1209 attached with 11 threads - interrupt to quit
>>
>> After start vm:
>> [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
>> [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
>> [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
>> [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
>> [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) ---
>> [pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) ---
>> [pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
>>
>> After stop vm:
>> [pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
>
>
> Why are you teasing us? ;-) Who was pid 1209, vdsm or libvirtd?
>
=)
Unfortunately, you might be right, Dan. I think it may be libvirt hitting a
bug, but the bug I know about does this only with dynamic_ownership=1
(that's why I asked in the first place).
To be sure, let's wait until we know who pid 1518 was. Until then I'll keep
investigating ;)
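For reference, that knob lives in /etc/libvirt/qemu.conf on the node; a minimal
sketch of the relevant settings (values here are illustrative, check the local
file):

# /etc/libvirt/qemu.conf
user = "qemu"              # uid 107 on Fedora, matching the chown(..., 107, 107) above
group = "qemu"             # gid 107
dynamic_ownership = 1      # libvirtd chowns disk images to user:group on VM start
                           # and back to root:root (0:0) on shutdown

If I read the strace output right, the chown to 0:0 on stop is exactly the
dynamic_ownership behaviour.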
Martin