When I trigger the live snapshot while pinging the guest, I get high latency but no packet loss:

...
64 bytes from 10.0.6.228: icmp_seq=32 ttl=63 time=0.267 ms
64 bytes from 10.0.6.228: icmp_seq=33 ttl=63 time=0.319 ms
64 bytes from 10.0.6.228: icmp_seq=34 ttl=63 time=0.231 ms
64 bytes from 10.0.6.228: icmp_seq=35 ttl=63 time=0.294 ms
64 bytes from 10.0.6.228: icmp_seq=36 ttl=63 time=0.357 ms
64 bytes from 10.0.6.228: icmp_seq=37 ttl=63 time=10375 ms
64 bytes from 10.0.6.228: icmp_seq=38 ttl=63 time=9375 ms
64 bytes from 10.0.6.228: icmp_seq=39 ttl=63 time=8375 ms
64 bytes from 10.0.6.228: icmp_seq=40 ttl=63 time=7375 ms
64 bytes from 10.0.6.228: icmp_seq=41 ttl=63 time=6375 ms
64 bytes from 10.0.6.228: icmp_seq=42 ttl=63 time=5375 ms
64 bytes from 10.0.6.228: icmp_seq=43 ttl=63 time=4375 ms
64 bytes from 10.0.6.228: icmp_seq=44 ttl=63 time=3375 ms
64 bytes from 10.0.6.228: icmp_seq=45 ttl=63 time=2375 ms
64 bytes from 10.0.6.228: icmp_seq=46 ttl=63 time=1375 ms
64 bytes from 10.0.6.228: icmp_seq=47 ttl=63 time=375 ms
64 bytes from 10.0.6.228: icmp_seq=48 ttl=63 time=0.324 ms
64 bytes from 10.0.6.228: icmp_seq=49 ttl=63 time=0.232 ms
64 bytes from 10.0.6.228: icmp_seq=50 ttl=63 time=0.318 ms
64 bytes from 10.0.6.228: icmp_seq=51 ttl=63 time=0.297 ms
64 bytes from 10.0.6.228: icmp_seq=52 ttl=63 time=0.343 ms
64 bytes from 10.0.6.228: icmp_seq=53 ttl=63 time=0.293 ms
64 bytes from 10.0.6.228: icmp_seq=54 ttl=63 time=0.286 ms
64 bytes from 10.0.6.228: icmp_seq=55 ttl=63 time=0.302 ms
64 bytes from 10.0.6.228: icmp_seq=56 ttl=63 time=0.304 ms
64 bytes from 10.0.6.228: icmp_seq=57 ttl=63 time=0.305 ms
^C
--- 10.0.6.228 ping statistics ---
57 packets transmitted, 57 received, 0% packet loss, time 56000ms
rtt min/avg/max/mdev = 0.228/1037.547/10375.035/2535.522 ms, pipe 11

So the guest is in some sort of paused mode, but it's interesting that the
pings seem to be queued rather than dropped: the RTTs count down in roughly
one-second steps from ~10.4 s back to normal, as if the ICMP requests were
buffered during the pause and then all answered at once when the guest resumed.
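
For reference, a rough way to watch the pause from the host side while the
snapshot runs (just a sketch; it assumes the libvirt domain is named
'snapshot-test' and that virsh is usable on the host):

# print the libvirt guest state once a second while taking the snapshot
while sleep 1; do echo -n "$(date +%T) "; virsh domstate snapshot-test; done

During the snapshot this should flip from "running" to "paused" and back,
which would line up with the ~10 second window where the ICMP replies pile up.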

Steve Dainard 
IT Infrastructure Manager
Miovision | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)

Blog  |  LinkedIn  |  Twitter  |  Facebook

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you are not the intended recipient, please delete the e-mail and any attachments and notify us immediately.


On Mon, Feb 3, 2014 at 12:08 PM, Maor Lipchuk <mlipchuk@redhat.com> wrote:
From the engine logs it seems that live snapshot is indeed called (the
command is SnapshotVDSCommand, see [1]).
This is done right after the snapshot has been created for the VM, and
it signals the qemu process to start using the newly created volume.

When live snapshot does not succeed we should see something like
"Wasn't able to live snapshot due to error:..." in the log, but that
message does not appear, so it seems this part worked out fine.
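
If you want to double-check on your side, a quick grep might be enough
(a sketch; it assumes a default engine install with the log under
/var/log/ovirt-engine):

grep -E "SnapshotVDSCommand|Wasn't able to live snapshot" /var/log/ovirt-engine/engine.log

Seeing the START/FINISH pair for SnapshotVDSCommand with no "Wasn't able
to live snapshot" line in between is what I would expect for the
successful case.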

At some point I can see in the logs that VDSM reports to the engine that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002, HostId = 3080fb61-2d03-4008-b47f-9b66276a4257, vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-93) VM snapshot-test e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
2014-02-02 09:41:30,234 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO [org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49) [67ea047a] Ending command successfully: org.ovirt.engine.core.bll.CreateSnapshotCommand
...

Regards,
Maor

On 02/03/2014 06:24 PM, Dafna Ron wrote:
> Thanks Steve.
>
> From the logs I can see that the create snapshot succeeds and that the
> VM is resumed.
> The VM moves to paused as part of the libvirt flow:
>
> 2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
> 2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state
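>
> (A sketch for timing the pause window from the libvirt side; it assumes
> libvirt debug logging goes to the usual /var/log/libvirt/libvirtd.log:)
>
> grep -E 'qemuProcessHandleStop|qemuProcessHandleResume' /var/log/libvirt/libvirtd.log
>
> The gap between those two events (14:41:20.872 -> 14:41:30.031, roughly
> 9 seconds) lines up with the latency spike in the ping output.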
>
> There are bugs here, but I am not sure yet whether this is a libvirt
> regression or an engine issue.
>
> I'm adding Elad and Maor since in the engine logs I can't see anything
> calling for a live snapshot (only for a snapshot). Maor, shouldn't the
> live snapshot command be logged somewhere in the logs?
> Is it possible that the engine is calling create snapshot rather than
> create live snapshot, which is why the VM pauses?
>
> Elad, if the engine is not logging the live snapshot anywhere, I would
> open a bug against the engine (to print that in the logs).
> Also, there is a bug in the vdsm log for sdc, where the lines below are
> logged as ERROR rather than INFO:
>
> Thread-23::ERROR::2014-02-02 09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
> Thread-23::ERROR::2014-02-02 09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5
>
> If the engine was sending a live snapshot, or if there is no difference
> between the two commands on the engine side, then I would open a bug
> against libvirt for pausing the VM during a live snapshot.
>
> Dafna
>
> On 02/03/2014 02:41 PM, Steve Dainard wrote:
>> [root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
>> a52938f7-2cf4-4771-acb2-0c78d14999e5
>> uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
>> pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
>> lver = 5
>> version = 3
>> role = Master
>> remotePath = gluster-store-vip:/rep1
>> spm_id = 2
>> type = NFS
>> class = Data
>> master_ver = 1
>> name = gluster-store-rep1
>>
>>
>> On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron <dron@redhat.com> wrote:
>>
>>     please run vdsClient -s 0 getStorageDomainInfo a52938f7-2cf4-4771-acb2-0c78d14999e5
>>
>>     Thanks,
>>
>>     Dafna
>>
>>
>>
>>     On 02/02/2014 03:02 PM, Steve Dainard wrote:
>>
>>         Logs attached with VM running on qemu-kvm-rhev packages installed.
>>
>>
>>
>>         On Sun, Feb 2, 2014 at 5:05 AM, Dafna Ron <dron@redhat.com> wrote:
>>
>>             can you please upload full engine, vdsm, libvirt and vm's qemu logs?
>>
>>
>>             On 02/02/2014 02:08 AM, Steve Dainard wrote:
>>
>>                 I have two CentOS 6.5 oVirt hosts (ovirt001, ovirt002).
>>
>>                 I've installed the applicable qemu-kvm-rhev packages from this
>>                 site: http://www.dreyou.org/ovirt/vdsm32/Packages/ on ovirt002.
>>
>>                 On ovirt001 if I take a live snapshot:
>>
>>                 Snapshot 'test qemu-kvm' creation for VM 'snapshot-test' was initiated by admin@internal.
>>                 The VM is paused
>>                 Failed to create live snapshot 'test qemu-kvm' for VM 'snapshot-test'. VM restart is recommended.
>>                 Failed to complete snapshot 'test qemu-kvm' creation for VM 'snapshot-test'.
>>                 The VM is then started, and the status for the snapshot changes to OK.
>>
>>                 On ovirt002 (with the packages from dreyou) I don't get any
>>                 messages about a snapshot failing, but my VM is still paused
>>                 to complete the snapshot. Is there something else, other than
>>                 the qemu-kvm-rhev packages, that would enable this functionality?
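>>
>>                 (One rough way to check whether the running qemu actually
>>                 exposes the live snapshot primitives; just a sketch, assuming
>>                 virsh can talk to the VM's monitor on that host and the guest
>>                 is the 'snapshot-test' domain:)
>>
>>                 rpm -qa 'qemu*'
>>                 virsh qemu-monitor-command snapshot-test '{"execute":"query-commands"}' \
>>                     | grep -oE 'blockdev-snapshot-sync|transaction'
>>
>>                 (If those QMP commands are missing from the list, the build
>>                 presumably can't do live snapshots, which would explain the pause.)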
>>
>>                 I've looked for some information on when the packages would be
>>                 built as required in the CentOS repos, but I don't see anything
>>                 definitive.
>>
>>                 http://lists.ovirt.org/pipermail/users/2013-December/019126.html
>>                 Looks like one of the maintainers is waiting for someone to tell
>>                 him what flags need to be set.
>>
>>                 Also, another thread here:
>>                 http://comments.gmane.org/gmane.comp.emulators.ovirt.arch/1618
>>                 same maintainer, mentioning that he hasn't seen anything in the
>>                 bug tracker.
>>
>>                 There is a bug here:
>>                 https://bugzilla.redhat.com/show_bug.cgi?id=1009100
>>                 that seems to have ended in finding a way for qemu to expose
>>                 whether it supports live snapshots, rather than figuring out how
>>                 to get the CentOS team the info they need to build the packages
>>                 with the proper flags set.
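>>
>>                 (I think that capability is the sort of thing a host-side check
>>                 could confirm; a sketch, assuming a vdsm build new enough to
>>                 report it in its capabilities:)
>>
>>                 vdsClient -s 0 getVdsCaps | grep -i livesnapshot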
>>
>>                 I have bcc'd both dreyou (who packaged the qemu-kvm-rhev packages
>>                 listed above) and Russ (the CentOS maintainer mentioned in the
>>                 other threads) in case they wish to chime in and perhaps
>>                 collaborate on which flags, if any, should be set for the
>>                 qemu-kvm builds, so we can get a CentOS bug report going and
>>                 hammer this out.
>>
>>                 Thanks everyone.
>>
>>                 **crosses fingers and hopes for live snapshots soon**
>>
>>
>>
>>
>>
>>                 On Fri, Jan 31, 2014 at 1:26 PM, Steve Dainard
>>                 <sdainard@miovision.com> wrote:
>>
>>
>>                         How would you developers, speaking for the
>>                         oVirt-community, propose to solve this for CentOS _now_?
>>
>>                         I would imagine that the easiest way is for you to
>>                         build and host this one package (qemu-kvm-rhev), since
>>                         you basically already have the source and recipe (you're
>>                         already providing it for RHEV anyway). Then, once that's
>>                         in place, it's more a question of where to host the
>>                         packages, in what repository. Be it your own, or some
>>                         other repo set up for the SIG.
>>
>>                         This is my view, how I as a user see this issue.
>>
>>
>>
>>                     I think this is a pretty valid view.
>>
>>                     What would it take to get the correct qemu package hosted
>>                     in the ovirt repo?
>>
>>                         --
>>                         Med Vänliga Hälsningar (Best regards)
>>
>>                         Karli Sjöberg
>>                         Swedish University of Agricultural Sciences
>>                         Box 7079 (Visiting Address: Kronåsvägen 8)
>>                         S-750 07 Uppsala, Sweden
>>                         Phone: +46-(0)18-67 15 66
>>                         karli.sjoberg@slu.se
>>
>>
>>
>>
>>
>>             --     Dafna Ron
>>
>>
>>
>>
>>     --     Dafna Ron
>>
>>
>
>