[Users] [User] Why does the VM image owner change to root after stopping the VM?

Dear everyone:

Description: when I create a VM, the image owner is vdsm:kvm (36:36).

When I start the VM, the owner changes to qemu:qemu (107:107):
-rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df
-rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta

When I then stop the VM, the owner changes to root:root:
-rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df
-rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta

After that I cannot start the VM. The web events log shows:
2012-Jul-25, 16:27:29 VM Git-Server is down. Exit message: 'truesize'.
2012-Jul-25, 16:27:28 VM Git-Server was restarted on Host ovirt-node-sun-1.local
2012-Jul-25, 16:27:28 Failed to run VM Git-Server on Host ovirt-node-sun-4.local.

and the ovirt-engine.log:
2012-07-25 16:27:24,359 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (ajp--0.0.0.0-8009-5) START, IsValidVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 7d8f1a84 2012-07-25 16:27:24,364 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (ajp--0.0.0.0-8009-5) FINISH, IsValidVDSCommand, return: true, log id: 7d8f1a84 2012-07-25 16:27:24,441 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--0.0.0.0-8009-5) START, IsVmDuringInitiatingVDSCommand(vmId = 4f03fc62-a71e-4560-b807-5388526f6968), log id: 6c699650 2012-07-25 16:27:24,443 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--0.0.0.0-8009-5) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 6c699650 2012-07-25 16:27:24,491 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-48) [36951f95] Lock Acquired to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM , sharedLocks= ] 2012-07-25 16:27:24,515 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-48) [36951f95] Running command: RunVmCommand internal: false. 
Entities affected : ID: 4f03fc62-a71e-4560-b807-5388526f6968 Type: VM 2012-07-25 16:27:24,691 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-48) [36951f95] START, IsoPrefixVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 74d1cdf9 2012-07-25 16:27:24,695 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-48) [36951f95] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111, log id: 74d1cdf9 2012-07-25 16:27:24,699 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-48) [36951f95] START, CreateVmVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@4fb10fb7), log id: 7a29d259 2012-07-25 16:27:24,708 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-48) [36951f95] START, CreateVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@4fb10fb7), log id: 29395031 2012-07-25 16:27:24,789 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-48) [36951f95] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,nicModel=pv,pv,keyboardLayout=en-us,nice=0,timeOffset=-2,transparentHugePages=true,vmId=4f03fc62-a71e-4560-b807-5388526f6968,drives=[Ljava.util.Map;@1869fe1c,acpiEnable=true,custom={},spiceSslCipherSuite=DEFAULT,memSize=4096,boot=cd,smp=2,vmType=kvm,emulatedMachine=pc-0.14,display=vnc,tabletEnable=true,spiceSecureChannels=smain,sinputs,smpCoresPerSocket=1,spiceMonitors=1,cdrom=/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-LiveDVD.iso,macAddr=00:1a:4a:1e:01:10,00:1a:4a:1e:01:11,bridge=network_10,ovirtmgmt,vmName=Git-Server,cpuType=Conroe 2012-07-25 16:27:24,799 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-48) [36951f95] FINISH, CreateVDSCommand, log id: 29395031 2012-07-25 16:27:24,807 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-48) [36951f95] IncreasePendingVms::CreateVmIncreasing vds ovirt-node-sun-3.local pending vcpu count, now 2. 
Vm: Git-Server 2012-07-25 16:27:24,849 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-48) [36951f95] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 7a29d259 2012-07-25 16:27:24,858 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-48) [36951f95] Lock freed to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM , sharedLocks= ] 2012-07-25 16:27:26,096 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-37) [686128dd] START, DestroyVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vmId=4f03fc62-a71e-4560-b807-5388526f6968, force=false, secondsToWait=0, gracefully=false), log id: 168e7377 2012-07-25 16:27:26,181 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-37) [686128dd] FINISH, DestroyVDSCommand, log id: 168e7377 2012-07-25 16:27:26,207 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-37) [686128dd] Running on vds during rerun failed vm: null 2012-07-25 16:27:26,211 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-37) [686128dd] vm Git-Server running in db and not running in vds - add to rerun treatment. vds ovirt-node-sun-3.local 2012-07-25 16:27:26,232 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-37) [686128dd] Rerun vm 4f03fc62-a71e-4560-b807-5388526f6968. Called from vds ovirt-node-sun-3.local 2012-07-25 16:27:26,238 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-50) [686128dd] START, UpdateVdsDynamicDataVDSCommand(vdsId = ebf0aae2-d4a6-11e1-8bef-0f498706821d, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@8ce6c23d), log id: 1d991881 2012-07-25 16:27:26,249 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-50) [686128dd] FINISH, UpdateVdsDynamicDataVDSCommand, log id: 1d991881 2012-07-25 16:27:26,274 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-50) [686128dd] Lock Acquired to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM , sharedLocks= ] 2012-07-25 16:27:26,374 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-50) [686128dd] START, IsValidVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 412b8a26 2012-07-25 16:27:26,379 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-50) [686128dd] FINISH, IsValidVDSCommand, return: true, log id: 412b8a26 2012-07-25 16:27:26,464 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-50) [686128dd] START, IsVmDuringInitiatingVDSCommand(vmId = 4f03fc62-a71e-4560-b807-5388526f6968), log id: 557baa43 2012-07-25 16:27:26,467 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-50) [686128dd] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 557baa43 2012-07-25 16:27:26,516 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-50) [686128dd] Running command: RunVmCommand internal: false. 
Entities affected : ID: 4f03fc62-a71e-4560-b807-5388526f6968 Type: VM 2012-07-25 16:27:26,636 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-50) [686128dd] START, IsoPrefixVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 4343f447 2012-07-25 16:27:26,639 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-50) [686128dd] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111, log id: 4343f447 2012-07-25 16:27:26,643 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-50) [686128dd] START, CreateVmVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@10bca4bb), log id: 225a83d1 2012-07-25 16:27:26,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-50) [686128dd] START, CreateVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@10bca4bb), log id: 78491589 2012-07-25 16:27:26,735 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-50) [686128dd] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,nicModel=pv,pv,keyboardLayout=en-us,nice=0,timeOffset=-2,transparentHugePages=true,vmId=4f03fc62-a71e-4560-b807-5388526f6968,drives=[Ljava.util.Map;@3d83b1e9,acpiEnable=true,custom={},spiceSslCipherSuite=DEFAULT,memSize=4096,boot=cd,smp=2,vmType=kvm,emulatedMachine=pc-0.14,display=vnc,tabletEnable=true,spiceSecureChannels=smain,sinputs,smpCoresPerSocket=1,spiceMonitors=1,cdrom=/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-LiveDVD.iso,macAddr=00:1a:4a:1e:01:10,00:1a:4a:1e:01:11,bridge=network_10,ovirtmgmt,vmName=Git-Server,cpuType=Conroe 2012-07-25 16:27:26,745 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-50) [686128dd] FINISH, CreateVDSCommand, log id: 78491589 2012-07-25 16:27:26,750 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-50) [686128dd] IncreasePendingVms::CreateVmIncreasing vds ovirt-node-sun-4.local pending vcpu count, now 2. 
Vm: Git-Server 2012-07-25 16:27:26,780 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-50) [686128dd] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 225a83d1 2012-07-25 16:27:26,804 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-50) [686128dd] Lock freed to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM , sharedLocks= ] 2012-07-25 16:27:28,023 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-6) [5be91432] START, DestroyVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vmId=4f03fc62-a71e-4560-b807-5388526f6968, force=false, secondsToWait=0, gracefully=false), log id: 4b8dcf4e 2012-07-25 16:27:28,108 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-6) [5be91432] FINISH, DestroyVDSCommand, log id: 4b8dcf4e 2012-07-25 16:27:28,147 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-6) [5be91432] Running on vds during rerun failed vm: null 2012-07-25 16:27:28,151 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-6) [5be91432] vm Git-Server running in db and not running in vds - add to rerun treatment. vds ovirt-node-sun-4.local 2012-07-25 16:27:28,174 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-6) [5be91432] Rerun vm 4f03fc62-a71e-4560-b807-5388526f6968. Called from vds ovirt-node-sun-4.local 2012-07-25 16:27:28,180 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) [5be91432] START, UpdateVdsDynamicDataVDSCommand(vdsId = 07d47b32-d4aa-11e1-b06e-b3ff4a3f9e20, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@54ec3b0a), log id: 60cb5c98 2012-07-25 16:27:28,191 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) [5be91432] FINISH, UpdateVdsDynamicDataVDSCommand, log id: 60cb5c98 2012-07-25 16:27:28,216 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-43) [5be91432] Lock Acquired to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM , sharedLocks= ] 2012-07-25 16:27:28,301 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-43) [5be91432] START, IsValidVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 78370bf8 2012-07-25 16:27:28,306 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand] (pool-3-thread-43) [5be91432] FINISH, IsValidVDSCommand, return: true, log id: 78370bf8 2012-07-25 16:27:28,383 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-43) [5be91432] START, IsVmDuringInitiatingVDSCommand(vmId = 4f03fc62-a71e-4560-b807-5388526f6968), log id: 289dae44 2012-07-25 16:27:28,386 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-3-thread-43) [5be91432] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 289dae44 2012-07-25 16:27:28,416 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-43) [5be91432] Running command: RunVmCommand internal: false. 
Entities affected : ID: 4f03fc62-a71e-4560-b807-5388526f6968 Type: VM 2012-07-25 16:27:28,542 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-43) [5be91432] START, IsoPrefixVDSCommand(storagePoolId = 3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d, ignoreFailoverLimit = false, compatabilityVersion = null), log id: 15fd0eb1 2012-07-25 16:27:28,545 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-3-thread-43) [5be91432] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111, log id: 15fd0eb1 2012-07-25 16:27:28,548 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-43) [5be91432] START, CreateVmVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@1e76519d), log id: 18cf91be 2012-07-25 16:27:28,557 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-43) [5be91432] START, CreateVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vmId=4f03fc62-a71e-4560-b807-5388526f6968, vm=org.ovirt.engine.core.common.businessentities.VM@1e76519d), log id: 7b9e926c 2012-07-25 16:27:28,646 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-43) [5be91432] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,nicModel=pv,pv,keyboardLayout=en-us,nice=0,timeOffset=-2,transparentHugePages=true,vmId=4f03fc62-a71e-4560-b807-5388526f6968,drives=[Ljava.util.Map;@1db78df0,acpiEnable=true,custom={},spiceSslCipherSuite=DEFAULT,memSize=4096,boot=cd,smp=2,vmType=kvm,emulatedMachine=pc-0.14,display=vnc,tabletEnable=true,spiceSecureChannels=smain,sinputs,smpCoresPerSocket=1,spiceMonitors=1,cdrom=/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/bb0c61bf-18d1-41da-8e92-b3da6e6abffb/images/11111111-1111-1111-1111-111111111111/CentOS-6.2-x86_64-LiveDVD.iso,macAddr=00:1a:4a:1e:01:10,00:1a:4a:1e:01:11,bridge=network_10,ovirtmgmt,vmName=Git-Server,cpuType=Conroe 2012-07-25 16:27:28,655 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-3-thread-43) [5be91432] FINISH, CreateVDSCommand, log id: 7b9e926c 2012-07-25 16:27:28,661 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-43) [5be91432] IncreasePendingVms::CreateVmIncreasing vds ovirt-node-sun-1.local pending vcpu count, now 2. 
Vm: Git-Server 2012-07-25 16:27:28,683 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-3-thread-43) [5be91432] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 18cf91be 2012-07-25 16:27:28,691 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-3-thread-43) [5be91432] Lock freed to object EngineLock [exclusiveLocks= key: 4f03fc62-a71e-4560-b807-5388526f6968 value: VM , sharedLocks= ] 2012-07-25 16:27:29,550 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-54) START, DestroyVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vmId=4f03fc62-a71e-4560-b807-5388526f6968, force=false, secondsToWait=0, gracefully=false), log id: 2b6ad571 2012-07-25 16:27:29,644 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-54) FINISH, DestroyVDSCommand, log id: 2b6ad571 2012-07-25 16:27:29,713 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-54) Running on vds during rerun failed vm: null 2012-07-25 16:27:29,716 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-54) vm Git-Server running in db and not running in vds - add to rerun treatment. vds ovirt-node-sun-1.local 2012-07-25 16:27:29,741 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-54) Rerun vm 4f03fc62-a71e-4560-b807-5388526f6968. Called from vds ovirt-node-sun-1.local 2012-07-25 16:27:29,747 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) START, UpdateVdsDynamicDataVDSCommand(vdsId = cbebdd0a-d4a3-11e1-a014-ef58dce093d8, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@53255766), log id: 6a9130c4 2012-07-25 16:27:29,758 INFO [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-43) FINISH, UpdateVdsDynamicDataVDSCommand, log id: 6a9130c4

My environment:

engine:
ovirt-engine-sdk-3.1.0.2-gita89f4e.fc17.noarch
ovirt-engine-cli-3.1.0.6-1.fc17.noarch
ovirt-engine-tools-common-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-genericapi-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-notification-service-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-restapi-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-dbscripts-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-userportal-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-setup-3.1.0-1.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-config-3.1.0-0.2.20120704git1df1ba.fc17.noarch
ovirt-engine-backend-3.1.0-0.2.20120704git1df1ba.fc17.noarch

node:
oVirt Node Hypervisor 2.2.2-2.2.fc16

On 07/25/2012 10:36 AM, T-Sinjon wrote:
Dear everyone:
Description When i create a vm , the vm owner is vdsm:kvm(36:36)
when i start a vm , the vm owner change to qemu:qemu(107:107) -rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then i stop the vm , the vm owner change to root:root -rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then , i cannot start the vm , on the web logs event:
Just out of curiosity (it probably won't be the cause of the problem), do you have dynamic_ownership=0 in /etc/libvirt/qemu.conf ?

Martin
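A quick way to verify this on a node is sketched below; the config path is the one Martin names, and the "# by vdsm" suffix is how the vdsm-written setting appears later in this thread, so treat the exact output only as indicative.

# Sketch: check what libvirt's qemu driver is told to do about image ownership.
grep -n 'dynamic_ownership' /etc/libvirt/qemu.conf
# Expected when vdsm has written the setting:
#   dynamic_ownership=0 # by vdsm
# Note: the running libvirtd honours this only if it was (re)started after
# the line was written.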

Both engine and node have dynamic_ownership=0.

On 25 Jul, 2012, at 5:56 PM, Martin Kletzander wrote:
On 07/25/2012 10:36 AM, T-Sinjon wrote:
Dear everyone:
Description When i create a vm , the vm owner is vdsm:kvm(36:36)
when i start a vm , the vm owner change to qemu:qemu(107:107) -rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then i stop the vm , the vm owner change to root:root -rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then , i cannot start the vm , on the web logs event:
Just out of curiosity (it won't probably won't be the cause of the problem), do you have dynamic_ownership=0 in /etc/libvirt/qemu.conf ?
Martin

Thanks, I just wanted to make sure it's not libvirt that does this.

On 07/25/2012 04:44 PM, T-Sinjon wrote:
both engine and node dynamic_ownership are 0
On 25 Jul, 2012, at 5:56 PM, Martin Kletzander wrote:
On 07/25/2012 10:36 AM, T-Sinjon wrote:
Dear everyone:
Description When i create a vm , the vm owner is vdsm:kvm(36:36)
when i start a vm , the vm owner change to qemu:qemu(107:107) -rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then i stop the vm , the vm owner change to root:root -rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then , i cannot start the vm , on the web logs event:
Just out of curiosity (it won't probably won't be the cause of the problem), do you have dynamic_ownership=0 in /etc/libvirt/qemu.conf ?
Martin

On Wed, Jul 25, 2012 at 04:58:48PM +0200, Martin Kletzander wrote:
Thanks, I just wanted to make sure it's not libvirt that does this.
Pardon me, I still suspect libvirt... Which version of it do you have installed? Which storage is used for the VM image - local fs, right? Would you run strace -p `<libvirtpid>` -e chown -ff and start another VM, just to prove me wrong? You could do the same with <vdsmpid> to find the culprit.
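A concrete version of the trace Dan asks for might look like the sketch below; the pgrep patterns and output file names are only illustrative, and <libvirtpid>/<vdsmpid> stand for the real daemon PIDs on the node.

# Sketch: trace chown() from both daemons while starting/stopping a VM,
# to see which one changes the image owner.
libvirt_pid=$(pgrep -o libvirtd)   # oldest matching PID, i.e. the main libvirtd process
vdsm_pid=$(pgrep -o -f vdsm)       # main vdsm process (pattern may vary per version)
strace -ff -e trace=chown -p "$libvirt_pid" -o /tmp/libvirtd.chown &
strace -ff -e trace=chown -p "$vdsm_pid" -o /tmp/vdsm.chown &
# start and stop the VM, then inspect /tmp/libvirtd.chown.* and /tmp/vdsm.chown.*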
On 07/25/2012 04:44 PM, T-Sinjon wrote:
both engine and node dynamic_ownership are 0
On 25 Jul, 2012, at 5:56 PM, Martin Kletzander wrote:
On 07/25/2012 10:36 AM, T-Sinjon wrote:
Dear everyone:
Description When i create a vm , the vm owner is vdsm:kvm(36:36)
when i start a vm , the vm owner change to qemu:qemu(107:107) -rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then i stop the vm , the vm owner change to root:root -rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then , i cannot start the vm , on the web logs event:
Just out of curiosity (it won't probably won't be the cause of the problem), do you have dynamic_ownership=0 in /etc/libvirt/qemu.conf ?

Maybe it's a libvirt problem, since my nodes use oVirt Node Hypervisor 2.2.2-2.2.fc16.

engine: libvirt-0.9.11.4-3.fc17.x86_64
node: libvirt-0.9.6-4.fc16.x86_64
storage: no local fs. I have two domains, one using NFS, the other GlusterFS mounted via NFS. Both have the problem.

[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff
Process 1209 attached with 11 threads - interrupt to quit

After starting the VM:
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) ---
[pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) ---
[pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) ---
[pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0

After stopping the VM:
[pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0

On 26 Jul, 2012, at 3:48 AM, Dan Kenigsberg wrote:
On Wed, Jul 25, 2012 at 04:58:48PM +0200, Martin Kletzander wrote:
Thanks, I just wanted to make sure it's not libvirt that does this.
Pardon me, I still suspect libvirt... Which version thereof do you have installed?
Which storage is used for the vm image - local fs, right?
would you strace -p `<libvirtpid>` -e chown -ff
and start another VM just to prove me wrong? You could do the same with <vdsmpid> to find the culprit.
On 07/25/2012 04:44 PM, T-Sinjon wrote:
both engine and node dynamic_ownership are 0
On 25 Jul, 2012, at 5:56 PM, Martin Kletzander wrote:
On 07/25/2012 10:36 AM, T-Sinjon wrote:
Dear everyone:
Description When i create a vm , the vm owner is vdsm:kvm(36:36)
when i start a vm , the vm owner change to qemu:qemu(107:107) -rw-rw----. 1 qemu qemu 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then i stop the vm , the vm owner change to root:root -rw-rw----. 1 root root 107374182400 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df -rw-r--r--. 1 vdsm kvm 269 Jul 25 2012 d1e6b671-6b48-4964-9c56-22847e9b83df.meta
then , i cannot start the vm , on the web logs event:
Just out of curiosity (it won't probably won't be the cause of the problem), do you have dynamic_ownership=0 in /etc/libvirt/qemu.conf ?

On Thu, Jul 26, 2012 at 11:05:21AM +0800, T-Sinjon wrote:
maybe it's a libvirt problem , since my nodes have used oVirt Node Hypervisor 2.2.2-2.2.fc16
engine: libvirt-0.9.11.4-3.fc17.x86_64
This one is unused.
node: libvirt-0.9.6-4.fc16.x86_64
storage: No local fs, I have two Domain , one is using NFS fs, the other is GlusterFS mount by NFS. Both have the problem
[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff Process 1209 attached with 11 threads - interrupt to quit
After start vm: [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) --- [pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) --- [pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
After stop vm: [pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
Why are you teasing us? ;-) Who was pid 1209, vdsm or libvirtd?
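For reference, a couple of quick ways to answer that on the node (only a sketch; 1209 is the PID from the strace above):

# Sketch: identify which daemon owns PID 1209
ps -p 1209 -o pid,comm,args
cat /proc/1209/comm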

On 07/26/2012 02:30 PM, Dan Kenigsberg wrote:
On Thu, Jul 26, 2012 at 11:05:21AM +0800, T-Sinjon wrote:
maybe it's a libvirt problem , since my nodes have used oVirt Node Hypervisor 2.2.2-2.2.fc16
engine: libvirt-0.9.11.4-3.fc17.x86_64 This one is unused.
node: libvirt-0.9.6-4.fc16.x86_64
storage: No local fs, I have two Domain , one is using NFS fs, the other is GlusterFS mount by NFS. Both have the problem
[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff Process 1209 attached with 11 threads - interrupt to quit
After start vm: [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) --- [pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) --- [pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
After stop vm: [pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
Why are you are teasing us? ;-) who was pid 1209, vdsm or libvirtd?
=) Unfortunately, you might be right, Dan. I think maybe it is libvirt and it is hitting a bug, but the bug I know about does this only with dynamic_ownership=1 (that's why I asked at first). To be sure, let's wait until we know who 1518 was. Until then I'll try to investigate ;)

Martin

Sorry for my carelessness; they are all libvirtd.

[root@ovirt-node-sun-1 ~]# top -b -n 2 -H -p 1209
top - 15:25:08 up 3 days, 9:11, 3 users, load average: 0.06, 0.49, 0.39
Tasks: 11 total, 0 running, 11 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7%us, 1.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st
Mem: 16436060k total, 7349120k used, 9086940k free, 69100k buffers
Swap: 0k total, 0k used, 0k free, 2239792k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1209 root 20 0 909m 17m 7164 S 0.0 0.1 1:33.10 libvirtd
1515 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.56 libvirtd
1516 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.81 libvirtd
1517 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.78 libvirtd
1518 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.55 libvirtd
1519 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.46 libvirtd
1520 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.35 libvirtd
1521 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.36 libvirtd
1522 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.37 libvirtd
1523 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.34 libvirtd
1524 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.30 libvirtd

On 26 Jul, 2012, at 8:51 PM, Martin Kletzander wrote:
On 07/26/2012 02:30 PM, Dan Kenigsberg wrote:
On Thu, Jul 26, 2012 at 11:05:21AM +0800, T-Sinjon wrote:
maybe it's a libvirt problem , since my nodes have used oVirt Node Hypervisor 2.2.2-2.2.fc16
engine: libvirt-0.9.11.4-3.fc17.x86_64 This one is unused.
node: libvirt-0.9.6-4.fc16.x86_64
storage: No local fs, I have two Domain , one is using NFS fs, the other is GlusterFS mount by NFS. Both have the problem
[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff Process 1209 attached with 11 threads - interrupt to quit
After start vm: [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) --- [pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) --- [pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
After stop vm: [pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
Why are you are teasing us? ;-) who was pid 1209, vdsm or libvirtd?
=)
Unfortunately, you might be right, Dan. I think maybe it is libvirt and it is hitting a bug, but the bug I know about does this only with dynamic_ownership=1 (that's why I asked at first).
To be sure, let's wait till we know who was 1518. Until then I'll try to investigate ;)
Martin

OK, so it is done by libvirt. However, I was trying to reproduce this, and from looking at the code it seems your config file settings are not reflected in libvirt (does the ownership change also happen after a libvirt restart?). No chown is called when dynamic ownership is turned off. The only thing I haven't tried is checking older versions of libvirt, but this code hasn't changed that much.

On 07/26/2012 05:28 PM, T-Sinjon wrote:
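A minimal way to test the restart question, assuming a systemd-based node (Fedora 16/17) - a sketch, not an exact procedure:

# Sketch: let libvirtd re-read qemu.conf, then repeat the start/stop test
grep dynamic_ownership /etc/libvirt/qemu.conf   # confirm the setting is on disk
systemctl restart libvirtd.service              # qemu.conf is only read at daemon start
# now start and stop a VM again and re-check the image owner with ls -l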
sorry for my careless , they all libvirtd
[root@ovirt-node-sun-1 ~]# top -b -n 2 -H -p 1209 top - 15:25:08 up 3 days, 9:11, 3 users, load average: 0.06, 0.49, 0.39 Tasks: 11 total, 0 running, 11 sleeping, 0 stopped, 0 zombie Cpu(s): 0.7%us, 1.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 16436060k total, 7349120k used, 9086940k free, 69100k buffers Swap: 0k total, 0k used, 0k free, 2239792k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1209 root 20 0 909m 17m 7164 S 0.0 0.1 1:33.10 libvirtd 1515 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.56 libvirtd 1516 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.81 libvirtd 1517 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.78 libvirtd 1518 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.55 libvirtd 1519 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.46 libvirtd 1520 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.35 libvirtd 1521 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.36 libvirtd 1522 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.37 libvirtd 1523 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.34 libvirtd 1524 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.30 libvirtd
On 26 Jul, 2012, at 8:51 PM, Martin Kletzander wrote:
On 07/26/2012 02:30 PM, Dan Kenigsberg wrote:
On Thu, Jul 26, 2012 at 11:05:21AM +0800, T-Sinjon wrote:
maybe it's a libvirt problem , since my nodes have used oVirt Node Hypervisor 2.2.2-2.2.fc16
engine: libvirt-0.9.11.4-3.fc17.x86_64 This one is unused.
node: libvirt-0.9.6-4.fc16.x86_64
storage: No local fs, I have two Domain , one is using NFS fs, the other is GlusterFS mount by NFS. Both have the problem
[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff Process 1209 attached with 11 threads - interrupt to quit
After start vm: [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) --- [pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) --- [pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
After stop vm: [pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
Why are you are teasing us? ;-) who was pid 1209, vdsm or libvirtd?
=)
Unfortunately, you might be right, Dan. I think maybe it is libvirt and it is hitting a bug, but the bug I know about does this only with dynamic_ownership=1 (that's why I asked at first).
To be sure, let's wait till we know who was 1518. Until then I'll try to investigate ;)
Martin

Yeah, everything went OK after restarting libvirtd.

I did a test: I installed a new node, and the problem is still there. (Is it possible that "dynamic_ownership=0 # by vdsm" is written after libvirtd has already started??)

On 30 Jul, 2012, at 10:46 PM, Martin Kletzander wrote:
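One way to check that suspicion on a freshly installed node is to compare the config's modification time with libvirtd's start time; a sketch, assuming GNU stat and procps are available:

# Sketch: was dynamic_ownership=0 written after libvirtd started?
stat -c 'qemu.conf modified: %y' /etc/libvirt/qemu.conf
ps -o lstart= -p "$(pgrep -o libvirtd)" | sed 's/^/libvirtd started:   /'
# If the config is newer than the daemon, the running libvirtd never saw
# dynamic_ownership=0 and will keep chown()ing images back to root on shutdown.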
OK, so it is done by libvirt. However, I was trying to reproduce this and I was looking in the code and it looks like your config file settings are not reflected in libvirt (does the ownership change also after libvirt restart?). There is no chown called when dynamic ownership is turned off. The only thing I haven't tried is checking older versions of libvirt, but this code haven't changed that much.
On 07/26/2012 05:28 PM, T-Sinjon wrote:
sorry for my careless , they all libvirtd
[root@ovirt-node-sun-1 ~]# top -b -n 2 -H -p 1209 top - 15:25:08 up 3 days, 9:11, 3 users, load average: 0.06, 0.49, 0.39 Tasks: 11 total, 0 running, 11 sleeping, 0 stopped, 0 zombie Cpu(s): 0.7%us, 1.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st Mem: 16436060k total, 7349120k used, 9086940k free, 69100k buffers Swap: 0k total, 0k used, 0k free, 2239792k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1209 root 20 0 909m 17m 7164 S 0.0 0.1 1:33.10 libvirtd 1515 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.56 libvirtd 1516 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.81 libvirtd 1517 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.78 libvirtd 1518 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.55 libvirtd 1519 root 20 0 909m 17m 7164 S 0.0 0.1 0:07.46 libvirtd 1520 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.35 libvirtd 1521 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.36 libvirtd 1522 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.37 libvirtd 1523 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.34 libvirtd 1524 root 20 0 909m 17m 7164 S 0.0 0.1 0:01.30 libvirtd
On 26 Jul, 2012, at 8:51 PM, Martin Kletzander wrote:
On 07/26/2012 02:30 PM, Dan Kenigsberg wrote:
On Thu, Jul 26, 2012 at 11:05:21AM +0800, T-Sinjon wrote:
maybe it's a libvirt problem , since my nodes have used oVirt Node Hypervisor 2.2.2-2.2.fc16
engine: libvirt-0.9.11.4-3.fc17.x86_64 This one is unused.
node: libvirt-0.9.6-4.fc16.x86_64
storage: No local fs, I have two Domain , one is using NFS fs, the other is GlusterFS mount by NFS. Both have the problem
[root@ovirt-node-sun-1 ~]# strace -p 1209 -e chown -ff Process 1209 attached with 11 threads - interrupt to quit
After start vm: [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19068, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19069, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19071, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19072, si_status=0, si_utime=1, si_stime=1} (Child exited) --- [pid 1518] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19074, si_status=0, si_utime=1, si_stime=0} (Child exited) --- [pid 1209] --- {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19080, si_status=0, si_utime=0, si_stime=0} (Child exited) --- [pid 1518] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 107, 107) = 0
After stop vm: [pid 1209] chown("/rhev/data-center/3bdc6f14-bb92-4b0e-8db2-d0ba4c34f61d/b5078b10-a044-42c5-b270-8b81cd51ce35/images/979c2849-2587-4015-bad5-53159a11b6ed/38648b73-b0d4-4f2a-9f46-5b20613abb7a", 0, 0) = 0
Why are you are teasing us? ;-) who was pid 1209, vdsm or libvirtd?
=)
Unfortunately, you might be right, Dan. I think maybe it is libvirt and it is hitting a bug, but the bug I know about does this only with dynamic_ownership=1 (that's why I asked at first).
To be sure, let's wait till we know who was 1518. Until then I'll try to investigate ;)
Martin

It looks like this could be exactly the problem. I don't know how the node installation works, but when you install a normal system it shouldn't start any services until you tell it to. Of course it's another story when installing a single-purpose system, so I can't tell you exactly, but vdsm should definitely make sure libvirtd loads its config (i.e. restarts *after* the config is modified).

I'm glad it works for you now.

Martin

On 07/31/2012 01:42 PM, T-Sinjon wrote:
yeah, everything went ok after restart libvirtd
I have do a test : Install a new node, the problem always there (any possible that "dynamic_ownership=0 # by vdsm" has done after libvirtd starting??)

On Tue, Jul 31, 2012 at 02:15:12PM +0200, Martin Kletzander wrote:
It looks like this exactly could be the problem. I don't know how the node installation works, but when you install normal system it shouldn't start any services until you specify that. Of course this is another story when installing one-purpose system, so I won't be able to tell you exactly, but vdsm should definitely make sure libvirtd loads its config (re/starts *after* the config is modified)
I'm glad it works for you now.
Martin
On 07/31/2012 01:42 PM, T-Sinjon wrote:
yeah, everything went ok after restart libvirtd
I have do a test : Install a new node, the problem always there (any possible that "dynamic_ownership=0 # by vdsm" has done after libvirtd starting??)
T-Sinjon,

How did you install your host? Using "add new host" from Engine, or is it an ovirt-node host?

We may be missing a libvirtd restart: since http://gerrit.ovirt.org/3269 is in, we start libvirtd explicitly during bootstrap. If this is reproducible, please open a detailed bug in bugzilla.

Regards, Dan.
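Until the bootstrap ordering is sorted out, a manual workaround on an affected node is roughly the following (service names as on Fedora-based nodes; only a sketch, not the official fix):

# Sketch: make libvirtd pick up vdsm's qemu.conf changes, then restart vdsm
systemctl restart libvirtd.service
systemctl restart vdsmd.service
# afterwards the image should no longer be chown'ed to root:root when the VM stops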

I have 6 nodes, all installed from "ovirt-node-image-2.2.2-2.2.fc16.iso". Some of them were added from the engine web UI, the others via the admin@node UI; I confirm that both have the problem. Since it's reproducible, I'll open a case later.

On 1 Aug, 2012, at 7:00 PM, Dan Kenigsberg wrote:
On Tue, Jul 31, 2012 at 02:15:12PM +0200, Martin Kletzander wrote:
It looks like this exactly could be the problem. I don't know how the node installation works, but when you install normal system it shouldn't start any services until you specify that. Of course this is another story when installing one-purpose system, so I won't be able to tell you exactly, but vdsm should definitely make sure libvirtd loads its config (re/starts *after* the config is modified)
I'm glad it works for you now.
Martin
On 07/31/2012 01:42 PM, T-Sinjon wrote:
yeah, everything went ok after restart libvirtd
I have do a test : Install a new node, the problem always there (any possible that "dynamic_ownership=0 # by vdsm" has done after libvirtd starting??)
T-Sinjon, How did you install your host? Using the "add new host" from Engine, or is it an ovirt-node host?
We may be missing a libvirtd restart, as since http://gerrit.ovirt.org/3269 is in, we start libvirtd explicitly during bootstrap. If this is reproducible, please open a detailed bug in bugzilla.
Regards, Dan.
Participants (3):
- Dan Kenigsberg
- Martin Kletzander
- T-Sinjon