On 02/03/2014 05:34 PM, Maor Lipchuk wrote:
> On 02/03/2014 07:18 PM, Dafna Ron wrote:
>> Maor,
>>
>> If snapshotVDSCommand is for live snapshot, what is the offline create
>> snapshot command?
> It is the CreateSnapshotVdsCommand which calls createVolume in VDSM
But we need to be able to know that a live snapshot was sent and not an
offline snapshot.
Yes, in the logs we can see the whole process:
First a request to create a snapshot (new volume) sent to VDSM:
2014-02-02 09:41:09,557 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(pool-6-thread-49) [67ea047a] START, CreateSnapshotVDSCommand(
storagePoolId = fcb89071-6cdb-4972-94d1-c9324cebf814,
ignoreFailoverLimit = false, storageDomainId =
a52938f7-2cf4-4771-acb2-0c78d14999e5, imageGroupId =
c1cb6b66-655e-48c3-8568-4975295eb037, imageSizeInBytes = 21474836480,
volumeFormat = COW, newImageId = 6d8c80a4-328f-4a53-86a2-a4080a2662ce,
newImageDescription = , imageId = 5085422e-6592-415a-9da3-9e43dac9374b,
sourceImageGroupId = c1cb6b66-655e-48c3-8568-4975295eb037), log id: 7875f3f5
After the snapshot is created:
2014-02-02 09:41:20,553 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(pool-6-thread-49) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
Then the engine calls the live snapshot command (see also [1]):
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
Elad, somewhere in this flow we need to know that the snapshot was taken
on a running VM :) This seems like a bug to me.
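Since both command names appear in the excerpts above, one way to double-check a given engine.log is to look for the vdsbroker `SnapshotVDSCommand` marker, which (per this thread) only the live phase emits. A minimal sketch; `snapshot_kind` is a hypothetical helper, not part of the engine, and the sample text is trimmed from the log lines above:

```python
import re

# The engine issues CreateSnapshotVDSCommand (new volume) for every
# snapshot, and SnapshotVDSCommand only for the live-snapshot phase on a
# running VM -- so the presence of the second marker distinguishes them.
OFFLINE_MARKER = "CreateSnapshotVDSCommand"
LIVE_MARKER = re.compile(
    r"\[org\.ovirt\.engine\.core\.vdsbroker\.vdsbroker\.SnapshotVDSCommand\]"
)

def snapshot_kind(engine_log: str) -> str:
    """Classify a snapshot flow from engine.log text (hypothetical helper)."""
    if LIVE_MARKER.search(engine_log):
        return "live"
    if OFFLINE_MARKER in engine_log:
        return "offline"
    return "none"

# Trimmed lines from the log excerpts quoted in this thread:
log = """
2014-02-02 09:41:09,557 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (pool-6-thread-49) START, CreateSnapshotVDSCommand(...)
2014-02-02 09:41:30,234 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
"""
print(snapshot_kind(log))
```

If the second marker never shows up for a snapshot of a running VM, that by itself would support the "engine never asked for a live snapshot" theory.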
>> we did not say that live snapshot did not succeed :) we said that the
>> vm is paused and restarted - which is something that should not happen
>> for live snapshot (or at least never did before).
> It's not certain that the restart is related to the live snapshot, but
> that should be observable in the libvirt/vdsm logs.
Yes, I am sure, because the user is reporting it and the logs show it...
>> as I wrote before, we know that vdsm is reporting the vm as paused, that
>> is because libvirt is reporting the vm as paused, and I think that it's
>> happening because libvirt is not doing a live snapshot and so pauses the
>> vm while taking the snapshot.
> That sounds logical to me; it needs to be checked with libvirt whether
> that kind of behaviour could happen.
Elad, can you please try to reproduce and open a bug against libvirt?
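For the reproduction, one way to exercise libvirt directly (outside oVirt) is an external disk-only snapshot via virsh; a sketch, assuming a guest named snapshot-test and a default libvirtd log location (watch whether the domain transitions to paused while the snapshot runs):

```shell
# In one terminal, poll the domain state while the snapshot is taken:
watch -n 0.2 virsh domstate snapshot-test

# In another terminal, take an external disk-only snapshot, which should
# not pause the guest when qemu supports live snapshots:
virsh snapshot-create-as snapshot-test test-snap --disk-only --atomic

# Afterwards, check libvirt's own record of any pause/resume transition
# (log path is an assumption; depends on libvirtd logging configuration):
grep -E "qemuProcessHandleStop|qemuProcessHandleResume" \
    /var/log/libvirt/libvirtd.log
```

If `domstate` flips to "paused" during the virsh call even with qemu-kvm-rhev installed, that would point at libvirt/qemu rather than the engine.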
>> Dafna
>>
>>
>> On 02/03/2014 05:08 PM, Maor Lipchuk wrote:
>>> From the engine logs it seems that indeed live snapshot is called
>>> (the command is SnapshotVDSCommand, see [1]).
>>> This is done right after the snapshot has been created in the VM and it
>>> signals the qemu process to start using the new volume created.
>>>
>>> When live snapshot does not succeed we should see in the log something
>>> like "Wasn't able to live snapshot due to error:...", but it does
>>> not appear, so it seems that this worked out fine.
>>>
>>> At some point I can see in the logs that VDSM reports to the engine
>>> that the VM is paused.
>>>
>>>
>>> [1]
>>> 2014-02-02 09:41:20,564 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>> (pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002,
>>> HostId
>>> = 3080fb61-2d03-4008-b47f-9b66276a4257,
>>> vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
>>> 2014-02-02 09:41:21,119 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (DefaultQuartzScheduler_Worker-93) VM snapshot-test
>>> e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
>>> 2014-02-02 09:41:30,234 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
>>> (pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
>>> 2014-02-02 09:41:30,238 INFO
>>> [org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
>>> [67ea047a] Ending command successfully:
>>> org.ovirt.engine.core.bll.CreateSnapshotCommand
>>> ...
>>>
>>> Regards,
>>> Maor
>>>
>>> On 02/03/2014 06:24 PM, Dafna Ron wrote:
>>>> Thanks Steve.
>>>>
>>>> from the logs I can see that the create snapshot succeeds and that the
>>>> vm is resumed.
>>>> The VM moves to paused as part of libvirt's flow:
>>>>
>>>> 2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 :
>>>> Transitioned guest snapshot-test to paused state
>>>> 2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 :
>>>> Transitioned guest snapshot-test out of paused into resumed state
>>>>
>>>> There are bugs here, but I am not sure yet whether this is a libvirt
>>>> regression or an engine one.
>>>>
>>>> I'm adding Elad and Maor, since in the engine logs I can't see anything
>>>> calling for live snapshot (only for snapshot) - Maor, shouldn't the live
>>>> snapshot command be logged somewhere in the logs?
>>>> Is it possible that the engine is calling create snapshot and not
>>>> create live snapshot, which is why the VM pauses?
>>>>
>>>> Elad, if the engine is not logging live snapshot anywhere, I would
>>>> open a bug for the engine (to print that in the logs).
>>>> Also, there is a bug in vdsm log for sdc where the below is logged as
>>>> ERROR and not INFO:
>>>>
>>>> Thread-23::ERROR::2014-02-02
>>>> 09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
>>>> looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
>>>> Thread-23::ERROR::2014-02-02
>>>> 09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>> looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5
>>>>
>>>> If the engine was sending live snapshot, or if there is no difference
>>>> between the two commands on the engine side, then I would open a bug
>>>> for libvirt for pausing the VM during live snapshot.
>>>>
>>>> Dafna
>>>>
>>>> On 02/03/2014 02:41 PM, Steve Dainard wrote:
>>>>> [root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
>>>>> a52938f7-2cf4-4771-acb2-0c78d14999e5
>>>>> uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
>>>>> pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
>>>>> lver = 5
>>>>> version = 3
>>>>> role = Master
>>>>> remotePath = gluster-store-vip:/rep1
>>>>> spm_id = 2
>>>>> type = NFS
>>>>> class = Data
>>>>> master_ver = 1
>>>>> name = gluster-store-rep1
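For anyone scripting around this, the `key = value` output above parses trivially; a small sketch (`parse_domain_info` is my own helper name, and the sample text is copied from the output above):

```python
def parse_domain_info(text: str) -> dict:
    """Parse `vdsClient getStorageDomainInfo` key = value lines."""
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key.strip()] = value.strip()
    return info

# Subset of the getStorageDomainInfo output quoted above:
output = """\
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
type = NFS
class = Data
name = gluster-store-rep1
"""
info = parse_domain_info(output)
print(info["type"], info["version"])
```

Note that the domain reports `type = NFS` even though the backing store is gluster (mounted via its NFS interface), which is presumably what the request for this output was meant to confirm.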
>>>>>
>>>>>
>>>>> *Steve Dainard *
>>>>> IT Infrastructure Manager
>>>>> Miovision <http://miovision.com/> | /Rethink Traffic/
>>>>> 519-513-2407 ex.250
>>>>> 877-646-8476 (toll-free)
>>>>>
>>>>> *Blog <http://miovision.com/blog> | **LinkedIn
>>>>> <https://www.linkedin.com/company/miovision-technologies> | Twitter
>>>>> <https://twitter.com/miovision> | Facebook
>>>>> <https://www.facebook.com/miovision>*
>>>>>
>>>>> ------------------------------------------------------------------------
>>>>>
>>>>> Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener,
>>>>> ON, Canada | N2C 1L3
>>>>> This e-mail may contain information that is privileged or
>>>>> confidential. If you are not the intended recipient, please delete the
>>>>> e-mail and any attachments and notify us immediately.
>>>>>
>>>>>
>>>>> On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron <dron@redhat.com> wrote:
>>>>>
>>>>> please run vdsClient -s 0 getStorageDomainInfo
>>>>> a52938f7-2cf4-4771-acb2-0c78d14999e5
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Dafna
>>>>>
>>>>>
>>>>>
>>>>> On 02/02/2014 03:02 PM, Steve Dainard wrote:
>>>>>
>>>>> Logs attached with VM running on qemu-kvm-rhev packages
>>>>> installed.
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Feb 2, 2014 at 5:05 AM, Dafna Ron <dron@redhat.com> wrote:
>>>>>
>>>>> Can you please upload full engine, vdsm, libvirt and the VM's
>>>>> qemu logs?
>>>>>
>>>>>
>>>>> On 02/02/2014 02:08 AM, Steve Dainard wrote:
>>>>>
>>>>> I have two CentOS 6.5 oVirt hosts (ovirt001, ovirt002).
>>>>>
>>>>> I've installed the applicable qemu-kvm-rhev packages from this
>>>>> site: http://www.dreyou.org/ovirt/vdsm32/Packages/ on ovirt002.
>>>>>
>>>>> On ovirt001 if I take a live snapshot:
>>>>>
>>>>> Snapshot 'test qemu-kvm' creation for VM 'snapshot-test' was
>>>>> initiated by admin@internal.
>>>>> The VM is paused
>>>>> Failed to create live snapshot 'test qemu-kvm' for VM
>>>>> 'snapshot-test'. VM restart is recommended.
>>>>> Failed to complete snapshot 'test qemu-kvm' creation for VM
>>>>> 'snapshot-test'.
>>>>>
>>>>> The VM is then started, and the status for the snapshot
>>>>> changes to OK.
>>>>>
>>>>> On ovirt002 (with the packages from dreyou) I don't get any
>>>>> messages about a snapshot failing, but my VM is still paused
>>>>> to complete the snapshot. Is there something else other than
>>>>> the qemu-kvm-rhev packages that would enable this
>>>>> functionality?
>>>>>
>>>>> I've looked for some information on when the packages would be
>>>>> built as required in the CentOS repos, but I don't see
>>>>> anything definitive.
>>>>>
>>>>>
>>>>>
>>>>> http://lists.ovirt.org/pipermail/users/2013-December/019126.html
>>>>> Looks like one of the maintainers is waiting for someone to
>>>>> tell him what flags need to be set.
>>>>>
>>>>> Also, another thread here:
>>>>> http://comments.gmane.org/gmane.comp.emulators.ovirt.arch/1618
>>>>> same maintainer, mentioning that he hasn't seen anything in
>>>>> the bug tracker.
>>>>>
>>>>> There is a bug here:
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1009100 that seems
>>>>> to have ended in finding a way for qemu to expose whether it
>>>>> supports live snapshots, rather than figuring out how to get
>>>>> the CentOS team the info they need to build the packages with
>>>>> the proper flags set.
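On qemu exposing live-snapshot support: libvirt probes a running qemu's capabilities over QMP with `query-commands`, and builds without the feature simply do not advertise the snapshot commands. A hedged sketch of checking such a reply (the JSON here is a hand-written stand-in, not captured from a real monitor, and `supports_live_snapshot` is my own helper):

```python
import json

# qemu builds with live-snapshot support advertise these QMP commands;
# builds with the feature disabled will not list them.
REQUIRED = {"blockdev-snapshot-sync", "transaction"}

def supports_live_snapshot(qmp_reply: str) -> bool:
    """Check a QMP `query-commands` reply for the live-snapshot commands."""
    names = {cmd["name"] for cmd in json.loads(qmp_reply)["return"]}
    return REQUIRED <= names

# Hand-written stand-in for a `query-commands` reply:
reply = json.dumps({"return": [
    {"name": "query-commands"},
    {"name": "transaction"},
    {"name": "blockdev-snapshot-sync"},
]})
print(supports_live_snapshot(reply))
```

A capability check like this is the kind of thing the bug above ended up pushing for, instead of callers guessing from package names.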
>>>>>
>>>>> I have bcc'd both dreyou (who packaged the qemu-kvm-rhev
>>>>> packages listed above) and Russ (the CentOS maintainer
>>>>> mentioned in the other threads) in case they wish to chime in
>>>>> and perhaps collaborate on which flags, if any, should be set
>>>>> for the qemu-kvm builds, so we can get a CentOS bug report
>>>>> going and hammer this out.
>>>>>
>>>>> Thanks everyone.
>>>>>
>>>>> **crosses fingers and hopes for live snapshots soon**
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Jan 31, 2014 at 1:26 PM, Steve Dainard
>>>>> <sdainard@miovision.com> wrote:
>>>>>
>>>>>
>>>>> How would you developers, speaking for the oVirt community,
>>>>> propose to solve this for CentOS _now_?
>>>>>
>>>>> I would imagine that the easiest way is that you build and
>>>>> host this one package (qemu-kvm-rhev), since you've basically
>>>>> already got the source and recipe (you're already providing
>>>>> it for RHEV anyway). Then, once that's in place, it's more a
>>>>> question of where to host the packages, in what repository.
>>>>> Be it your own, or some other repo set up for the SIG.
>>>>>
>>>>> This is my view, how I as a user view this issue.
>>>>>
>>>>>
>>>>>
>>>>> I think this is a pretty valid view.
>>>>>
>>>>> What would it take to get the correct qemu package hosted in
>>>>> the ovirt repo?
>>>>>
>>>>> --
>>>>> Med Vänliga Hälsningar (Best regards)
>>>>>
>>>>> Karli Sjöberg
>>>>> Swedish University of Agricultural Sciences
>>>>> Box 7079 (Visiting Address Kronåsvägen 8)
>>>>> S-750 07 Uppsala, Sweden
>>>>> Phone: +46-(0)18-67 15 66
>>>>> karli.sjoberg@slu.se
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> -- Dafna Ron
>>>>>