[Users] Fail snapshot

Kevin Tibi kevintibi at hotmail.com
Sat Apr 5 15:09:05 UTC 2014


OK, it works with this version of qemu-kvm.

Thx for your help :)


2014-04-04 17:23 GMT+02:00 Douglas Schilling Landgraf <dougsland at redhat.com>:

> On 04/04/2014 11:06 AM, Kevin Tibi wrote:
>
>> It's CentOS 6.5. Do I need to change my repo? I only have the EPEL and
>> oVirt repos.
>>
>>
> Unfortunately we don't have a repo for this yet; only the packages
> recompiled with that feature enabled are available for download at the
> URL I provided. We are working with the CentOS folks to get an official
> repo for it.
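>
> If it helps, here is a rough sketch of installing those rebuilt packages
> by hand on an EL6 host; the exact RPM names are an assumption based on
> the Jenkins job name, so check what the job actually produces:
>
>     # after downloading the rebuilt RPMs from the Jenkins job page:
>     yum localinstall qemu-kvm-rhev-*.rpm qemu-img-rhev-*.rpm
>     # restart libvirtd; already-running VMs keep using the old binary
>     # until they are powered off and started again
>     service libvirtd restart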
>
>
>
>
>> 2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf <dougsland at redhat.com>:
>>
>>
>>     Hi,
>>
>>
>>     On 04/04/2014 10:04 AM, Kevin Tibi wrote:
>>
>>         Yes, it's a live snapshot. A normal (offline) snapshot works.
>>
>>
>>     Question:
>>     Is it an EL6 host? If so, are you using qemu-kvm from
>>     http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ ?
>>
>>
>>     Thanks!
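>>
>>     A quick way to check which build a host is actually running; this is
>>     just a sketch, and the qemu-kvm-rhev package name is an assumption
>>     based on the Jenkins job name:
>>
>>         rpm -qa | grep -E 'qemu-kvm|qemu-img'
>>         /usr/libexec/qemu-kvm --version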
>>
>>
>>         How do I enable debug logging in vdsm?
>>
>>         mom.conf:
>>
>>         log: /var/log/vdsm/mom.log
>>
>>         verbosity: info
>>
>>         vdsm.conf:
>>
>>         [root at host02 ~]# cat /etc/vdsm/vdsm.conf
>>         [addresses]
>>         management_port = 54321
>>
>>         [vars]
>>         ssl = true
>>
>>
>>
>>         2014-04-04 15:27 GMT+02:00 Dafna Ron <dron at redhat.com>:
>>
>>
>>
>>              is this a live snapshot (i.e. taken while the VM is running)?
>>              can you please make sure your vdsm log is set to debug and
>>              attach the full log?
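>>
>>              A minimal sketch of turning the level up, assuming the stock
>>              Python-logging layout of /etc/vdsm/logger.conf (the section
>>              names below are assumptions; adjust them to the loggers your
>>              file actually defines):
>>
>>                  # /etc/vdsm/logger.conf
>>                  [logger_root]
>>                  level=DEBUG
>>                  handlers=syslog,logfile
>>
>>                  [logger_vds]
>>                  level=DEBUG
>>                  handlers=syslog,logfile
>>                  qualname=vds
>>                  propagate=0
>>
>>                  # then restart vdsm so the new level takes effect (EL6):
>>                  # service vdsmd restart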
>>
>>              Thanks,
>>              Dafna
>>
>>
>>
>>              On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
>>
>>                  On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
>>
>>                      Hi,
>>
>>                      I have a problem when I try to snapshot a VM.
>>
>>                  are you running the right qemu/libvirt from the
>>                  virt-preview repo?
>>
>>                      oVirt Engine 3.4, self-hosted. Two nodes (host01 and
>>                      host02).
>>
>>                      My engine.log:
>>
>>                      2014-04-04 12:30:03,013 INFO
>>                      [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) Ending command successfully:
>>                      org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
>>
>>                      2014-04-04 12:30:03,028 INFO
>>                      [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) START,
>>                      SnapshotVDSCommand(HostName = host01, HostId =
>>                      fcb9a5cf-2064-42a5-99fe-dc56ea39ed81,
>>                      vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
>>
>>                      2014-04-04 12:30:03,075 ERROR
>>                      [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
>>
>>                      2014-04-04 12:30:03,076 INFO
>>                      [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) Command
>>                      org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand
>>                      return value
>>                      StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc
>>                      [mCode=48, mMessage=Snapshot failed]]
>>
>>                      2014-04-04 12:30:03,077 INFO
>>                      [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) HostName = host01
>>
>>                      2014-04-04 12:30:03,078 ERROR
>>                      [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) Command
>>                      SnapshotVDSCommand(HostName = host01, HostId =
>>                      fcb9a5cf-2064-42a5-99fe-dc56ea39ed81,
>>                      vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution
>>                      failed. Exception: VDSErrorException: VDSGenericException:
>>                      VDSErrorException: Failed to SnapshotVDS, error = Snapshot
>>                      failed, code = 48
>>
>>                      2014-04-04 12:30:03,080 INFO
>>                      [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) FINISH,
>>                      SnapshotVDSCommand, log id: 36463977
>>
>>                      2014-04-04 12:30:03,083 WARN
>>                      [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
>>                      (org.ovirt.thread.pool-6-thread-24) Wasnt able to live
>>                      snapshot due to error: VdcBLLException: VdcBLLException:
>>                      org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>>                      VDSGenericException: VDSErrorException: Failed to SnapshotVDS,
>>                      error = Snapshot failed, code = 48 (Failed with error
>>                      SNAPSHOT_FAILED and code 48). VM will still be configured to the
>>                      new created snapshot
>>
>>                      2014-04-04 12:30:03,097 INFO
>>                      [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>                      (org.ovirt.thread.pool-6-thread-24) Correlation ID:
>>                      5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call
>>                      Stack: org.ovirt.engine.core.common.errors.VdcBLLException:
>>                      VdcBLLException:
>>                      org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>>                      VDSGenericException: VDSErrorException: Failed to SnapshotVDS,
>>                      error = Snapshot failed, code = 48 (Failed with error
>>                      SNAPSHOT_FAILED and code 48)
>>
>>
>>
>>                      My /var/log/messages:
>>
>>                      Apr  4 12:30:04 host01 vdsm vm.Vm ERROR
>>                      vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base
>>                      volume doesn't exist: {'device': 'disk', 'domainID':
>>                      '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID':
>>                      '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID':
>>                      '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
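>>
>>                      One way to check whether that base volume actually
>>                      exists on the file storage domain (a sketch; the
>>                      <server:_export> part is a placeholder for the data
>>                      domain's mount point, and the other path pieces are
>>                      the domainID and imageID from the error above):
>>
>>                          ls -l /rhev/data-center/mnt/<server:_export>/\
>>                          5ae613a4-44e4-42cb-89fc-7b5d34c1f30f/images/\
>>                          646df162-5c6d-44b1-bc47-b63c3fdab0e2/
>>
>>                      The volumeID named in the error should show up as a
>>                      file in that directory.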
>>
>>
>>                      My /var/log/libvirt/libvirt.log:
>>
>>                      2014-04-04 10:40:13.886+0000: 8234: debug :
>>                      qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE:
>>                      mon=0x7f77ec0ccce0
>>                      buf={"execute":"query-blockstats","id":"libvirt-20842"}
>>                      len=53 ret=53 errno=11
>>                      2014-04-04 10:40:13.888+0000: 8234: debug :
>>                      qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS:
>>                      mon=0x7f77ec0ccce0 buf={"return": [{"device":
>>                      "drive-ide0-1-0", "parent": {"stats":
>>                      {"flush_total_time_ns": 0, "wr_highest_offset": 0,
>>                      "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0,
>>                      "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0,
>>                      "rd_operations": 0}}, "stats": {"flush_total_time_ns": 0,
>>                      "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes":
>>                      0, "rd_total_time_ns": 11929902, "flush_operations": 0,
>>                      "wr_operations": 0, "rd_bytes": 135520, "rd_operations":
>>                      46}}, {"device": "drive-virtio-disk0", "parent": {"stats":
>>                      {"flush_total_time_ns": 0, "wr_highest_offset": 22184332800,
>>                      "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0,
>>                      "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0,
>>                      "rd_operations": 0}}, "stats": {"flush_total_time_ns":
>>                      34786515034, "wr_highest_offset": 22184332800,
>>                      "wr_total_time_ns": 5131205369094, "wr_bytes": 5122065408,
>>                      "rd_total_time_ns": 12987633373, "flush_operations": 285398,
>>                      "wr_operations": 401232, "rd_bytes": 392342016,
>>                      "rd_operations": 15069}}], "id": "libvirt-20842"}
>>                      len=1021
>>                      2014-04-04 10:40:13.888+0000: 8263: debug :
>>                      qemuMonitorGetBlockStatsInfo:1478 : mon=0x7f77ec0ccce0
>>                      dev=ide0-1-0
>>                      2014-04-04 10:40:13.889+0000: 8263: debug :
>>                      qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG:
>>                      mon=0x7f77ec0ccce0
>>                      msg={"execute":"query-blockstats","id":"libvirt-20843"}
>>
>>                      /var/log/vdsm/vdsm.log:
>>
>>                      Thread-4732::DEBUG::2014-04-04
>>                      12:43:34,439::BindingXMLRPC::1067::vds::(wrapper) client
>>                      [192.168.99.104]::call vmSnapshot with
>>                      ('cb038ccf-6c6f-475c-872f-ea812ff795a1', [{'baseVolumeID':
>>                      'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'domainID':
>>                      '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID':
>>                      'f5fc4fed-4acd-46e8-9980-90a9c3985840', 'imageID':
>>                      '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}],
>>                      '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f,00000002-0002-0002-0002-000000000076,4fb31c32-8467-4d4a-b817-977643a462e3,ceb881f3-9a46-4ebc-b82e-c4c91035f807,2c06b4da-2743-4422-ba94-74da2c709188,02804da9-34f8-438f-9e8a-9689bc94790c')
>>                      {}
>>                      Thread-4732::ERROR::2014-04-04
>>                      12:43:34,440::vm::3910::vm.Vm::(snapshot)
>>                      vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base
>>                      volume doesn't exist: {'device': 'disk', 'domainID':
>>                      '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID':
>>                      'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'imageID':
>>                      '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
>>                      Thread-4732::DEBUG::2014-04-04
>>                      12:43:34,440::BindingXMLRPC::1074::vds::(wrapper) return
>>                      vmSnapshot with {'status': {'message': 'Snapshot failed',
>>                      'code': 48}}
>>                      Thread-299::DEBUG::2014-04-04
>>                      12:43:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
>>                      '/bin/dd iflag=direct
>>                      if=/rhev/data-center/mnt/host01.ovirt.lan:_home_export/ff98d346-4515-4349-8437-fb2f5e9eaadf/dom_md/metadata
>>                      bs=4096 count=1' (cwd None)
>>
>>
>>                      Thx ;)
>>
>>
>>
>>
>>
>>              --
>>              Dafna Ron
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>     --
>>     Cheers
>>     Douglas
>>
>>
>>
>>
>>
>
> --
> Cheers
> Douglas
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

