
Hi,

I have a problem when I try to snapshot a VM. oVirt Engine self-hosted 3.4, two nodes (host01 and host02).

My engine.log:

2014-04-04 12:30:03,013 INFO [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Ending command successfully: org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
2014-04-04 12:30:03,028 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
2014-04-04 12:30:03,076 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]]
2014-04-04 12:30:03,077 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) HostName = host01
2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
2014-04-04 12:30:03,080 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
2014-04-04 12:30:03,083 WARN [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
2014-04-04 12:30:03,097 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)

My /var/log/messages:

Apr 4 12:30:04 host01 vdsm vm.Vm ERROR vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}

My /var/log/libvirt/libvirt.log:

2014-04-04 10:40:13.886+0000: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 buf={"execute":"query-blockstats","id":"libvirt-20842"} len=53 ret=53 errno=11
2014-04-04 10:40:13.888+0000: 8234: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={"return": [{"device": "drive-ide0-1-0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 11929902, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 135520, "rd_operations": 46}}, {"device": "drive-virtio-disk0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 22184332800, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 34786515034, "wr_highest_offset": 22184332800, "wr_total_time_ns": 5131205369094, "wr_bytes": 5122065408, "rd_total_time_ns": 12987633373, "flush_operations": 285398, "wr_operations": 401232, "rd_bytes": 392342016, "rd_operations": 15069}}], "id": "libvirt-20842"} len=1021
2014-04-04 10:40:13.888+0000: 8263: debug : qemuMonitorGetBlockStatsInfo:1478 : mon=0x7f77ec0ccce0 dev=ide0-1-0
2014-04-04 10:40:13.889+0000: 8263: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0 msg={"execute":"query-blockstats","id":"libvirt-20843"}

My /var/log/vdsm/vdsm.log:

Thread-4732::DEBUG::2014-04-04 12:43:34,439::BindingXMLRPC::1067::vds::(wrapper) client [192.168.99.104]::call vmSnapshot with ('cb038ccf-6c6f-475c-872f-ea812ff795a1', [{'baseVolumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 'f5fc4fed-4acd-46e8-9980-90a9c3985840', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}], '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f,00000002-0002-0002-0002-000000000076,4fb31c32-8467-4d4a-b817-977643a462e3,ceb881f3-9a46-4ebc-b82e-c4c91035f807,2c06b4da-2743-4422-ba94-74da2c709188,02804da9-34f8-438f-9e8a-9689bc94790c') {}
Thread-4732::ERROR::2014-04-04 12:43:34,440::vm::3910::vm.Vm::(snapshot) vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
Thread-4732::DEBUG::2014-04-04 12:43:34,440::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}}
Thread-299::DEBUG::2014-04-04 12:43:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/host01.ovirt.lan:_home_export/ff98d346-4515-4349-8437-fb2f5e9eaadf/dom_md/metadata bs=4096 count=1' (cwd None)

Thx ;)
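For reference, on a file-based storage domain (the dd line in vdsm.log shows this setup mounts under /rhev/data-center/mnt) vdsm looks the volume up at a predictable path, so you can check by hand whether the base volume the engine asked for really exists. A minimal sketch — the directory layout is the usual oVirt file-domain convention, and the mount name is taken from the dd command above, so both are assumptions for any other setup:

```python
# Build the path vdsm would check for a file-based disk volume, using the
# conventional layout:
#   <mount>/<domainID>/images/<imageID>/<volumeID>
import os

# Mount point taken from the getReadDelay dd line in vdsm.log (assumption).
MNT = "/rhev/data-center/mnt/host01.ovirt.lan:_home_export"

def volume_path(disk):
    """Compose the expected on-disk location of a volume from the dict
    vdsm logs in 'The base volume doesn't exist' errors."""
    return os.path.join(MNT, disk["domainID"], "images",
                        disk["imageID"], disk["volumeID"])

# The exact dict from the vdsm.log ERROR above:
missing = {
    "device": "disk",
    "domainID": "5ae613a4-44e4-42cb-89fc-7b5d34c1f30f",
    "volumeID": "b62232fc-4e02-41ce-ae10-5dff9e2f7bbe",
    "imageID": "646df162-5c6d-44b1-bc47-b63c3fdab0e2",
}

path = volume_path(missing)
print(path)
# If os.path.exists(path) is False on the host, the engine's view of the
# image chain is out of sync with what is actually on storage.
```

Running os.path.exists() on that path on host01 would tell you whether the baseVolumeID the engine sent (b62232fc-…) is genuinely absent or just not where vdsm expects it.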

On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a problem when I try to snapshot a VM.
are you running the right qemu/libvirt from virt-preview repo?
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

is this a live snapshot (while the vm is running)? can you please make sure your vdsm log is in debug and attach the full log?

Thanks, Dafna

On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a problem when I try to snapshot a VM.
are you running the right qemu/libvirt from virt-preview repo?
-- Dafna Ron

Yes, it's a live snapshot. A normal snapshot works.

How do I enable debug in vdsm?

mom.conf:
log: /var/log/vdsm/mom.log
verbosity: info

vdsm.conf:
[root@host02 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321
[vars]
ssl = true

2014-04-04 15:27 GMT+02:00 Dafna Ron <dron@redhat.com>:
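(For what it's worth, vdsm's verbosity is not set in vdsm.conf or mom.conf but in its Python logging config, normally /etc/vdsm/logger.conf. A sketch of the change Dafna is asking for — the section names are the stock ones shipped with vdsm, so verify them against your own file, and leave the other keys in each section unchanged:)

```ini
# /etc/vdsm/logger.conf -- raise the loggers from INFO to DEBUG,
# then restart vdsm: service vdsmd restart
[logger_root]
level=DEBUG

[logger_vds]
level=DEBUG
```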
is this a live snapshot (while the vm is running)? can you please make sure your vdsm log is in debug and attach the full log?
Thanks, Dafna
On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a problem when I try to snapshot a VM.
are you running the right qemu/libvirt from virt-preview repo?

Hi,

On 04/04/2014 10:04 AM, Kevin Tibi wrote:
Yes, it's a live snapshot. A normal snapshot works.
Question: are these EL6 hosts? If yes, are you using qemu-kvm from jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ ?

Thanks!
How i make debug in vdsm ?
mom.conf :
log: /var/log/vdsm/mom.log
verbosity: info
vdsm.conf :
[root@host02 ~]# cat /etc/vdsm/vdsm.conf [addresses] management_port = 54321
[vars] ssl = true
2014-04-04 15:27 GMT+02:00 Dafna Ron <dron@redhat.com <mailto:dron@redhat.com>>:
is this a live snapshots (wile vm is running)? can you please make sure your vdsm log is in debug and attach the full log?
Thanks, Dafna
On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a pb when i try to snapshot a VM.
are you running the right qemu/libvirt from virt-preview repo?
Ovirt engine self hosted 3.4. Two node (host01 and host02).
my engine.log :
2014-04-04 12:30:03,013 INFO [org.ovirt.engine.core.bll.__CreateAllSnapshotsFromVmComman__d] (org.ovirt.thread.pool-6-__thread-24) Ending command successfully: org.ovirt.engine.core.bll.__CreateAllSnapshotsFromVmComman__d 2014-04-04 12:30:03,028 INFO [org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand] (org.ovirt.thread.pool-6-__thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-__dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-__ea812ff795a1), log id: 36463977 2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand] (org.ovirt.thread.pool-6-__thread-24) Failed in SnapshotVDS method 2014-04-04 12:30:03,076 INFO [org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand] (org.ovirt.thread.pool-6-__thread-24) Command org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand return value StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]] 2014-04-04 12:30:03,077 INFO [org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand] (org.ovirt.thread.pool-6-__thread-24) HostName = host01 2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand] (org.ovirt.thread.pool-6-__thread-24) Command SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-__dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-__ea812ff795a1) execution failed. 
Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 2014-04-04 12:30:03,080 INFO [org.ovirt.engine.core.__vdsbroker.vdsbroker.__SnapshotVDSCommand] (org.ovirt.thread.pool-6-__thread-24) FINISH, SnapshotVDSCommand, log id: 36463977 2014-04-04 12:30:03,083 WARN [org.ovirt.engine.core.bll.__CreateAllSnapshotsFromVmComman__d] (org.ovirt.thread.pool-6-__thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.__vdsbroker.vdsbroker.__VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot 2014-04-04 12:30:03,097 INFO [org.ovirt.engine.core.dal.__dbbroker.auditloghandling.__AuditLogDirector] (org.ovirt.thread.pool-6-__thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-__1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.__errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.__vdsbroker.vdsbroker.__VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)
My /var/log/messages
Apr 4 12:30:04 host01 vdsm vm.Vm ERROR vmId=`cb038ccf-6c6f-475c-872f-__ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-__7b5d34c1f30f', 'volumeID': '3b6cbb5d-beed-428d-ac66-__9db3dd002e2f', 'imageID': '646df162-5c6d-44b1-bc47-__b63c3fdab0e2'}
My /var/log/libvirt/libvirt.log
2014-04-04 10:40:13.886+0000: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 buf={"execute":"query-__blockstats","id":"libvirt-__20842"} len=53 ret=53 errno=11 2014-04-04 10:40:13.888+0000: 8234: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={"return": [{"device": "drive-ide0-1-0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 11929902, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 135520, "rd_operations": 46}}, {"device": "drive-virtio-disk0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 22184332800, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 34786515034, "wr_highest_offset": 22184332800, "wr_total_time_ns": 5131205369094, "wr_bytes": 5122065408, "rd_tot
a
l_time_ns": 12987633373, "flush_operations": 285398, "wr_operations": 401232, "rd_bytes": 392342016, "rd_operations": 15069}}], "id": "libvirt-20842"}
len=1021 2014-04-04 10:40:13.888+0000: 8263: debug : qemuMonitorGetBlockStatsInfo:__1478 : mon=0x7f77ec0ccce0 dev=ide0-1-0 2014-04-04 10:40:13.889+0000: 8263: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0 msg={"execute":"query-__blockstats","id":"libvirt-__20843"}
/var/log/vdsm/vdsm.log Thread-4732::DEBUG::2014-04-04 12:43:34,439::BindingXMLRPC::__1067::vds::(wrapper) client [192.168.99.104]::call vmSnapshot with ('cb038ccf-6c6f-475c-872f-__ea812ff795a1', [{'baseVolumeID': 'b62232fc-4e02-41ce-ae10-__5dff9e2f7bbe', 'domainID': '5ae613a4-44e4-42cb-89fc-__7b5d34c1f30f', 'volumeID': 'f5fc4fed-4acd-46e8-9980-__90a9c3985840', 'imageID': '646df162-5c6d-44b1-bc47-__b63c3fdab0e2'}], '5ae613a4-44e4-42cb-89fc-__7b5d34c1f30f,00000002-0002-__0002-0002-000000000076,__4fb31c32-8467-4d4a-b817-__977643a462e3,ceb881f3-9a46-__4ebc-b82e-c4c91035f807,__2c06b4da-2743-4422-ba94-__74da2c709188,02804da9-34f8-__438f-9e8a-9689bc94790c') {} Thread-4732::ERROR::2014-04-04 12:43:34,440::vm::3910::vm.Vm:__:(snapshot) vmId=`cb038ccf-6c6f-475c-872f-__ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-__7b5d34c1f30f', 'volumeID': 'b62232fc-4e02-41ce-ae10-__5dff9e2f7bbe', 'imageID': '646df162-5c6d-44b1-bc47-__b63c3fdab0e2'} Thread-4732::DEBUG::2014-04-04 12:43:34,440::BindingXMLRPC::__1074::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}} Thread-299::DEBUG::2014-04-04 12:43:35,423::fileSD::225::__Storage.Misc.excCmd::(__getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/__host01.ovirt.lan:_home_export/__ff98d346-4515-4349-8437-__fb2f5e9eaadf/dom_md/metadata bs=4096 count=1' (cwd None)
Thx;)
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Cheers Douglas

It's CentOS 6.5. Do I need to change my repos? I only have the EPEL and oVirt repos. 2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf <dougsland@redhat.com>:
Hi,
On 04/04/2014 10:04 AM, Kevin Tibi wrote:
Yes, it's a live snapshot. Normal (offline) snapshots work.
Question: are these EL6 hosts? If yes, are you using qemu-kvm from jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ ?
Thanks!
How do I enable debug logging in vdsm?
mom.conf :
log: /var/log/vdsm/mom.log
verbosity: info
vdsm.conf :
[root@host02 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321
[vars]
ssl = true
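[Editor's note: vdsm's log verbosity is controlled by /etc/vdsm/logger.conf (Python logging fileConfig format), not by vdsm.conf or mom.conf. A sketch of the relevant stanzas is below; section and handler names can differ between vdsm versions, so treat this as an assumption to check against your own logger.conf rather than a drop-in file:]

```ini
; /etc/vdsm/logger.conf (excerpt, illustrative)
[logger_root]
level=DEBUG
handlers=syslog,logfile
propagate=0

[logger_vds]
level=DEBUG
handlers=syslog,logfile
qualname=vds
propagate=0
```

After raising the levels, restart vdsmd (e.g. `service vdsmd restart` on EL6) so the new level takes effect, then reproduce the snapshot failure and attach the full log.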
2014-04-04 15:27 GMT+02:00 Dafna Ron <dron@redhat.com>:
is this a live snapshot (while the VM is running)? Can you please make sure your vdsm log is in debug and attach the full log?
Thanks, Dafna
On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a problem when I try to snapshot a VM.
are you running the right qemu/libvirt from virt-preview repo?

Installed Packages
qemu-kvm.x86_64  2:0.12.1.2-2.415.el6_5.6  @updates
Available Packages
qemu-kvm.x86_64  2:0.12.1.2-2.415.el6_5.7  updates

On 04/04/2014 06:11 PM, Kevin Tibi wrote:
Installed Packages qemu-kvm.x86_64 2:0.12.1.2-2.415.el6_5.6 @updates Available Packages qemu-kvm.x86_64 2:0.12.1.2-2.415.el6_5.7 updates
Until we resolve this with CentOS, you need qemu-kvm-rhev. We are currently providing it here: http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create_rpms_el6/lastSucc...
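[Editor's note: the root cause here is that the stock EL6 qemu-kvm build ships with live-snapshot support disabled, while the qemu-kvm-rhev rebuild enables it. The check against an `rpm -qa`-style listing can be expressed trivially; the helper below is purely illustrative, not an oVirt API:]

```python
def live_snapshot_capable(installed_packages):
    """True if a qemu-kvm-rhev build (live-snapshot capable on EL6)
    appears in an `rpm -qa`-style package listing. Illustrative only."""
    return any(p.startswith("qemu-kvm-rhev") for p in installed_packages)

# The listing from this thread: only the stock EL6 build is present,
# so live snapshots will keep failing until qemu-kvm-rhev is installed.
print(live_snapshot_capable(["qemu-kvm-0.12.1.2-2.415.el6_5.6.x86_64"]))  # → False
```

Offline snapshots work either way, which matches what Kevin reported earlier in the thread.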

On 04/04/2014 11:06 AM, Kevin Tibi wrote:
It's CentOS 6.5. Do I need to change my repos? I only have the EPEL and oVirt repos.
Unfortunately we don't have a repo for this yet; only the packages recompiled with this feature enabled are available to download at the URL I provided. We are working with the CentOS folks to get an official repo for it.
2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf <dougsland@redhat.com <mailto:dougsland@redhat.com>>:
Hi,
On 04/04/2014 10:04 AM, Kevin Tibi wrote:
Yes it's a live snapshots. Normal snapshot works.
Question: Is it a EL6 hosts? If yes, are you using qemu-kvm from: jenkins.ovirt.org/view/__Packaging/job/qemu-kvm-rhev___create_rpms_el6/ <http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/> ?
Thanks!
How i make debug in vdsm ?
mom.conf :
log: /var/log/vdsm/mom.log
verbosity: info
vdsm.conf :
[root@host02 ~]# cat /etc/vdsm/vdsm.conf [addresses] management_port = 54321
[vars] ssl = true
2014-04-04 15:27 GMT+02:00 Dafna Ron <dron@redhat.com <mailto:dron@redhat.com> <mailto:dron@redhat.com <mailto:dron@redhat.com>>>:
is this a live snapshots (wile vm is running)? can you please make sure your vdsm log is in debug and attach the full log?
Thanks, Dafna
On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a pb when i try to snapshot a VM.
are you running the right qemu/libvirt from virt-preview repo?
Ovirt engine self hosted 3.4. Two node (host01 and host02).
my engine.log :
2014-04-04 12:30:03,013 INFO
[org.ovirt.engine.core.bll.____CreateAllSnapshotsFromVmComman____d] (org.ovirt.thread.pool-6-____thread-24) Ending command successfully: org.ovirt.engine.core.bll.____CreateAllSnapshotsFromVmComman____d
2014-04-04 12:30:03,028 INFO
[org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand] (org.ovirt.thread.pool-6-____thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-____dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-____ea812ff795a1), log id: 36463977
2014-04-04 12:30:03,075 ERROR
[org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand] (org.ovirt.thread.pool-6-____thread-24) Failed in SnapshotVDS
method 2014-04-04 12:30:03,076 INFO
[org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand] (org.ovirt.thread.pool-6-____thread-24) Command org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand return value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]] 2014-04-04 12:30:03,077 INFO
[org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand] (org.ovirt.thread.pool-6-____thread-24) HostName = host01
2014-04-04 12:30:03,078 ERROR
[org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand] (org.ovirt.thread.pool-6-____thread-24) Command
SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-____dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-____ea812ff795a1) execution
failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 2014-04-04 12:30:03,080 INFO
[org.ovirt.engine.core.____vdsbroker.vdsbroker.____SnapshotVDSCommand] (org.ovirt.thread.pool-6-____thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
2014-04-04 12:30:03,083 WARN
[org.ovirt.engine.core.bll.____CreateAllSnapshotsFromVmComman____d] (org.ovirt.thread.pool-6-____thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.____vdsbroker.vdsbroker.____VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
2014-04-04 12:30:03,097 INFO
[org.ovirt.engine.core.dal.____dbbroker.auditloghandling.____AuditLogDirector] (org.ovirt.thread.pool-6-____thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-____1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.____errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.____vdsbroker.vdsbroker.____VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)
My /var/log/messages
Apr 4 12:30:04 host01 vdsm vm.Vm ERROR
vmId=`cb038ccf-6c6f-475c-872f-____ea812ff795a1`::The base
volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-____7b5d34c1f30f', 'volumeID': '3b6cbb5d-beed-428d-ac66-____9db3dd002e2f', 'imageID': '646df162-5c6d-44b1-bc47-____b63c3fdab0e2'}
My /var/log/libvirt/libvirt.log:
2014-04-04 10:40:13.886+0000: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 buf={"execute":"query-blockstats","id":"libvirt-20842"} len=53 ret=53 errno=11
2014-04-04 10:40:13.888+0000: 8234: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={"return": [{"device": "drive-ide0-1-0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 11929902, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 135520, "rd_operations": 46}}, {"device": "drive-virtio-disk0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 22184332800, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 34786515034, "wr_highest_offset": 22184332800, "wr_total_time_ns": 5131205369094, "wr_bytes": 5122065408, "rd_total_time_ns": 12987633373, "flush_operations": 285398, "wr_operations": 401232, "rd_bytes": 392342016, "rd_operations": 15069}}], "id": "libvirt-20842"} len=1021
2014-04-04 10:40:13.888+0000: 8263: debug : qemuMonitorGetBlockStatsInfo:1478 : mon=0x7f77ec0ccce0 dev=ide0-1-0
2014-04-04 10:40:13.889+0000: 8263: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0 msg={"execute":"query-blockstats","id":"libvirt-20843"}
My /var/log/vdsm/vdsm.log:
Thread-4732::DEBUG::2014-04-04 12:43:34,439::BindingXMLRPC::1067::vds::(wrapper) client [192.168.99.104]::call vmSnapshot with ('cb038ccf-6c6f-475c-872f-ea812ff795a1', [{'baseVolumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 'f5fc4fed-4acd-46e8-9980-90a9c3985840', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}], '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f,00000002-0002-0002-0002-000000000076,4fb31c32-8467-4d4a-b817-977643a462e3,ceb881f3-9a46-4ebc-b82e-c4c91035f807,2c06b4da-2743-4422-ba94-74da2c709188,02804da9-34f8-438f-9e8a-9689bc94790c') {}
Thread-4732::ERROR::2014-04-04 12:43:34,440::vm::3910::vm.Vm::(snapshot) vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
Thread-4732::DEBUG::2014-04-04 12:43:34,440::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}}
Thread-299::DEBUG::2014-04-04 12:43:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/host01.ovirt.lan:_home_export/ff98d346-4515-4349-8437-fb2f5e9eaadf/dom_md/metadata bs=4096 count=1' (cwd None)
Thx;)
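[Editor's note] The vdsm error above says the snapshot's base volume cannot be found on the storage domain. For a file-based domain, the directory layout that vdsm expects can be checked by hand. A minimal sketch, assuming the usual /rhev/data-center file-domain layout (`<mount>/<domainID>/images/<imageID>/<volumeID>`, as suggested by the dd command in the vdsm log); the helper names are hypothetical, not vdsm API:

```python
import os


def volume_path(mount_root, domain_id, image_id, volume_id):
    """Build the path where a file-based storage domain is assumed to
    keep a volume: <mount_root>/<domainID>/images/<imageID>/<volumeID>."""
    return os.path.join(mount_root, domain_id, "images", image_id, volume_id)


def base_volume_exists(mount_root, disk):
    """Check whether the 'base volume' vdsm complained about is on disk.

    `disk` mirrors the dict from the error message, e.g.
    {'domainID': ..., 'imageID': ..., 'volumeID': ...}.
    """
    path = volume_path(mount_root, disk["domainID"],
                       disk["imageID"], disk["volumeID"])
    return os.path.exists(path)
```

Running this against the mount shown in the log (e.g. `/rhev/data-center/mnt/host01.ovirt.lan:_home_export`) with the IDs from the error would show whether the volume file is really missing or the IDs are stale.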
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-- Dafna Ron
-- Cheers Douglas

Ok, it works with this version of qemu-kvm. Thanks for your help :)

2014-04-04 17:23 GMT+02:00 Douglas Schilling Landgraf <dougsland@redhat.com>:
On 04/04/2014 11:06 AM, Kevin Tibi wrote:
It's CentOS 6.5. Do I need to change my repos? I just have the EPEL and oVirt repos.
Unfortunately we don't have any repo for this yet; only the packages recompiled with that feature enabled are available for download at the URL I provided. We are working with the CentOS guys to have an official repo for it.
2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf <dougsland@redhat.com>:
Hi,
On 04/04/2014 10:04 AM, Kevin Tibi wrote:
Yes, it's a live snapshot. Normal snapshots work.
Question: is it an EL6 host? If yes, are you using qemu-kvm from http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ ?
Thanks!
How do I enable debug in vdsm?
mom.conf:
log: /var/log/vdsm/mom.log
verbosity: info

vdsm.conf:
[root@host02 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321

[vars]
ssl = true
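[Editor's note] On the debug question above: vdsm's log level is normally controlled by its Python logging configuration rather than by vdsm.conf. A sketch of the kind of change that raises it to debug, assuming the usual `/etc/vdsm/logger.conf` location on EL6 (section and handler names may differ between vdsm versions; restart vdsmd afterwards):

```ini
; /etc/vdsm/logger.conf (excerpt) -- assumed layout, Python fileConfig format
[logger_root]
level=DEBUG
handlers=syslog,logfile

[logger_vds]
level=DEBUG
handlers=syslog,logfile
qualname=vds
```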
2014-04-04 15:27 GMT+02:00 Dafna Ron <dron@redhat.com>:
Is this a live snapshot (while the VM is running)? Can you please make sure your vdsm log is in debug and attach the full log?
Thanks, Dafna
On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
Hi,
I have a problem when I try to snapshot a VM.
Are you running the right qemu/libvirt from the virt-preview repo?
oVirt Engine, self-hosted, 3.4. Two nodes (host01 and host02).
my engine.log :
[quoted engine.log, /var/log/messages, libvirt.log and vdsm.log trimmed; identical to the logs earlier in the thread]
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
participants (5)
- Dafna Ron
- Douglas Schilling Landgraf
- Itamar Heim
- Kevin Tibi
- Michal Skrivanek