[Users] Fail snapshot

Itamar Heim iheim at redhat.com
Fri Apr 4 15:19:39 UTC 2014


On 04/04/2014 06:11 PM, Kevin Tibi wrote:
> Installed Packages
> qemu-kvm.x86_64
>   2:0.12.1.2-2.415.el6_5.6                                     @updates
> Available Packages
> qemu-kvm.x86_64
>   2:0.12.1.2-2.415.el6_5.7                                     updates

Until we resolve this with CentOS, you need qemu-kvm-rhev.
We are currently providing it here:
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create_rpms_el6/lastSuccessfulBuild/artifact/rpms/
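
Rough install steps (a sketch only; the exact RPM file names depend on
the build, so use whatever the job's artifact page actually lists):

   # download the qemu-kvm-rhev RPMs from the Jenkins URL above, then:
   yum localinstall qemu-kvm-rhev-*.rpm qemu-img-rhev-*.rpm
   # qemu-kvm-rhev is built to replace the stock qemu-kvm; verify with:
   rpm -qa | grep qemu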

>
>
> 2014-04-04 17:06 GMT+02:00 Kevin Tibi <kevintibi at hotmail.com>:
>
>     It's CentOS 6.5. Do I need to change my repos? I have just the EPEL
>     and oVirt repos.
>
>
>     2014-04-04 16:23 GMT+02:00 Douglas Schilling Landgraf <dougsland at redhat.com>:
>
>         Hi,
>
>
>         On 04/04/2014 10:04 AM, Kevin Tibi wrote:
>
>             Yes, it's a live snapshot. Normal snapshots work.
>
>
>         Question:
>         Is it an EL6 host? If yes, are you using qemu-kvm from
>         http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ ?
>
>
>         Thanks!
>
>
>             How do I enable debug in vdsm?
>
>             mom.conf :
>
>             log: /var/log/vdsm/mom.log
>
>             verbosity: info
>
>             vdsm.conf :
>
>             [root@host02 ~]# cat /etc/vdsm/vdsm.conf
>             [addresses]
>             management_port = 54321
>
>             [vars]
>             ssl = true
>
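
(On the vdsm debug question: on EL6 the vdsm log level lives in
/etc/vdsm/logger.conf, not vdsm.conf; mom.conf only controls mom.log.
A minimal sketch, assuming the stock logger.conf where the loggers are
at level=INFO:

   # bump all level=INFO entries in vdsm's logging config to DEBUG
   sed -i 's/level=INFO/level=DEBUG/g' /etc/vdsm/logger.conf
   # restart vdsm to pick the change up; the host briefly disconnects
   # from the engine, so do this when the host is quiet
   service vdsmd restart
)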
>
>
>             2014-04-04 15:27 GMT+02:00 Dafna Ron <dron at redhat.com>:
>
>
>                  Is this a live snapshot (while the VM is running)?
>                  Can you please make sure your vdsm log is in debug and
>                  attach the full log?
>
>                  Thanks,
>                  Dafna
>
>
>
>                  On 04/04/2014 02:23 PM, Michal Skrivanek wrote:
>
>                      On 4 Apr 2014, at 12:45, Kevin Tibi wrote:
>
>                          Hi,
>
>                          I have a problem when I try to snapshot a VM.
>
>                      Are you running the right qemu/libvirt from the virt-preview repo?
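
(To check which qemu/libvirt is actually installed, something like:

   rpm -q qemu-kvm qemu-kvm-rhev libvirt

The stock EL6 qemu-kvm, 0.12.1.2-2.415.el6, ships with live snapshot
support disabled, which is why the -rhev build is needed for live
snapshots.)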
>
>             oVirt Engine 3.4, self-hosted. Two nodes (host01 and host02).
>
>             my engine.log:
>
>             2014-04-04 12:30:03,013 INFO  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Ending command successfully: org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
>             2014-04-04 12:30:03,028 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) START, SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1), log id: 36463977
>             2014-04-04 12:30:03,075 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Failed in SnapshotVDS method
>             2014-04-04 12:30:03,076 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
>             StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]]
>             2014-04-04 12:30:03,077 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) HostName = host01
>             2014-04-04 12:30:03,078 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) Command SnapshotVDSCommand(HostName = host01, HostId = fcb9a5cf-2064-42a5-99fe-dc56ea39ed81, vmId=cb038ccf-6c6f-475c-872f-ea812ff795a1) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
>             2014-04-04 12:30:03,080 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-24) FINISH, SnapshotVDSCommand, log id: 36463977
>             2014-04-04 12:30:03,083 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-24) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
>             2014-04-04 12:30:03,097 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-24) Correlation ID: 5650b99f, Job ID: c1b2d861-2a52-49f1-9eaa-1b63aa8b4fba, Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)
>
>
>
>             My /var/log/messages:
>
>             Apr  4 12:30:04 host01 vdsm vm.Vm ERROR vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': '3b6cbb5d-beed-428d-ac66-9db3dd002e2f', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
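
(To confirm whether that base volume really exists on the storage
domain, you can look for it directly on the file domain, or ask vdsm.
A sketch using the IDs from the error above; <spUUID> is a placeholder
for your storage pool UUID:

   # file domains keep volumes under images/<imageID>/
   ls -l /rhev/data-center/mnt/*/5ae613a4-44e4-42cb-89fc-7b5d34c1f30f/images/646df162-5c6d-44b1-bc47-b63c3fdab0e2/
   # or query vdsm for the volume the engine asked about
   vdsClient -s 0 getVolumeInfo 5ae613a4-44e4-42cb-89fc-7b5d34c1f30f <spUUID> 646df162-5c6d-44b1-bc47-b63c3fdab0e2 3b6cbb5d-beed-428d-ac66-9db3dd002e2f
)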
>
>
>             My /var/log/libvirt/libvirt.log:
>
>             2014-04-04 10:40:13.886+0000: 8234: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f77ec0ccce0 buf={"execute":"query-blockstats","id":"libvirt-20842"} len=53 ret=53 errno=11
>             2014-04-04 10:40:13.888+0000: 8234: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f77ec0ccce0 buf={"return": [{"device": "drive-ide0-1-0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 11929902, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 135520, "rd_operations": 46}}, {"device": "drive-virtio-disk0", "parent": {"stats": {"flush_total_time_ns": 0, "wr_highest_offset": 22184332800, "wr_total_time_ns": 0, "wr_bytes": 0, "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_bytes": 0, "rd_operations": 0}}, "stats": {"flush_total_time_ns": 34786515034, "wr_highest_offset": 22184332800, "wr_total_time_ns": 5131205369094, "wr_bytes": 5122065408, "rd_total_time_ns": 12987633373, "flush_operations": 285398, "wr_operations": 401232, "rd_bytes": 392342016, "rd_operations": 15069}}], "id": "libvirt-20842"} len=1021
>             2014-04-04 10:40:13.888+0000: 8263: debug : qemuMonitorGetBlockStatsInfo:1478 : mon=0x7f77ec0ccce0 dev=ide0-1-0
>             2014-04-04 10:40:13.889+0000: 8263: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f77ec0ccce0 msg={"execute":"query-blockstats","id":"libvirt-20843"}
>
>             /var/log/vdsm/vdsm.log:
>
>             Thread-4732::DEBUG::2014-04-04 12:43:34,439::BindingXMLRPC::1067::vds::(wrapper) client [192.168.99.104]::call vmSnapshot with ('cb038ccf-6c6f-475c-872f-ea812ff795a1', [{'baseVolumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 'f5fc4fed-4acd-46e8-9980-90a9c3985840', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}], '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f,00000002-0002-0002-0002-000000000076,4fb31c32-8467-4d4a-b817-977643a462e3,ceb881f3-9a46-4ebc-b82e-c4c91035f807,2c06b4da-2743-4422-ba94-74da2c709188,02804da9-34f8-438f-9e8a-9689bc94790c') {}
>             Thread-4732::ERROR::2014-04-04 12:43:34,440::vm::3910::vm.Vm::(snapshot) vmId=`cb038ccf-6c6f-475c-872f-ea812ff795a1`::The base volume doesn't exist: {'device': 'disk', 'domainID': '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f', 'volumeID': 'b62232fc-4e02-41ce-ae10-5dff9e2f7bbe', 'imageID': '646df162-5c6d-44b1-bc47-b63c3fdab0e2'}
>             Thread-4732::DEBUG::2014-04-04 12:43:34,440::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with {'status': {'message': 'Snapshot failed', 'code': 48}}
>             Thread-299::DEBUG::2014-04-04 12:43:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/host01.ovirt.lan:_home_export/ff98d346-4515-4349-8437-fb2f5e9eaadf/dom_md/metadata bs=4096 count=1' (cwd None)
>
>
>                          Thx;)
>
>
>
>
>
>                  --
>                  Dafna Ron
>
>
>
>
>
>
>
>
>
>         --
>         Cheers
>         Douglas
>
>
>
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



