Re: [ovirt-users] providing hosts with foreman
by Oved Ourfali
At this step, after the host installation was successful, which means, IIRC, that all Puppet classes were applied correctly, the plugin on the Foreman side handles the bootstrapping of the new host.
Attaching the production.log file from Foreman and the engine.log from the oVirt engine is needed to understand what went wrong.
A workaround is to remove the host and add it back. IIRC we haven't allowed reinstall for the installing-OS status.
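For reference, the bootstrapping the plugin performs is essentially a REST-API call against the engine that adds (or approves) the host. A rough manual equivalent might look like the sketch below; the engine URL, credentials and host details are placeholders that have to be adapted to your setup:
curl -k -u admin@internal:PASSWORD \
     -H "Content-Type: application/xml" \
     -d '<host><name>node01</name><address>node01.example.com</address><root_password>SECRET</root_password></host>' \
     https://engine.example.com/api/hosts
The logs mentioned above are normally found at /var/log/foreman/production.log on the Foreman server and /var/log/ovirt-engine/engine.log on the engine.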
Oved
On May 22, 2015 7:57 PM, Nathanaël Blanchet <blanchet(a)abes.fr> wrote:
>
> Hello Yaniv,
>
> Okay for the DNS proxy, everything works now as expected.
> I have a new question now about the install host workflow.
> The host is in "installing OS" state, and on the host side it has been
> successfully installed. But now how do I complete the host vdsm
> installation and registration?
> I found this on your wiki:
> "For other OS - at first step won't do the registration by themselves,
> but foreman will do that using a plugin (plugin will send REST-API call
> to add or approve the host) "
> After investigation, I found that this plugin was this one :
> ruby193-rubygem-ovirt_provision_plugin (and with foreman 1.8 we can
> activate it with foreman-installer now)
>
> But nothing happens once the OS is installed and the state is stuck on
> "installing OS". Communication between Foreman and the engine is okay and
> without firewall issues... I found nothing significant in the Foreman
> logs or anywhere else...
> What is it supposed to happen at this step?
>
>
> Le 17/05/2015 17:35, ybronhei a écrit :
> > Hey Nathenael,
> >
> > On 05/13/2015 06:28 PM, Nathanaël Blanchet wrote:
> >> Hi all,
> >>
> >> I've set up a Foreman server, but when adding a new host via "discovered
> >> hosts", I can't modify the address field, which is filled by default with a
> >> generated "mac-DNS" name.
> >
> > Not exactly, it sets the address field to the name you chose for
> > the host, a dot (.), and the domain related to the picked host-group.
> >
> >> In the oVirt setup, I want to identify my future hosts by their IP and not
> >> by their unknown DNS name, as described here:
> >> http://www.ovirt.org/Features/ForemanIntegration.
> >
> > IP addresses can and should be dynamic based on your DHCP server
> > configuration, but the DNS name should stay the same. Adding the host that
> > way to the engine uses Satellite to configure its DNS entry and other
> > network configuration. That's why we lock the address field and fill
> > it with the future FQDN.
> >
> >> How can I setup foreman to do such a thing? Is the setup of the DNS
> >> proxy related?
> >
> > Yes, the DNS setup is related to it. We depend on it. Using an IP address
> > might break the integration between the engine and Satellite when the DHCP
> > service is configured with an address TTL option and can give the host a
> > different IP address on the next boot. So currently we don't support
> > address modification for Discovered/Provisioned hosts.
> >
> >>
> >
> > If that answer is not clear, feel free to ping me on IRC (ybronhei
> > @freenode #ovirt) or reply here.
> >
> > Regards,
> >
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14
> blanchet(a)abes.fr
>
providing hosts with foreman
by Nathanaël Blanchet
Hi all,
I've set up a Foreman server, but when adding a new host via "discovered
hosts", I can't modify the address field, which is filled by default with a
generated "mac-DNS" name.
In the oVirt setup, I want to identify my future hosts by their IP and not
by their unknown DNS name, as described here:
http://www.ovirt.org/Features/ForemanIntegration.
How can I setup foreman to do such a thing? Is the setup of the DNS
proxy related?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Re: [ovirt-users] vdsm storage problem - maybe cache problem?
by ml@ohnewald.net
Hello List,
I really need some help here... could someone please give it a try? I
already tried to get help on the IRC channel, but it seems that my
problem is too complicated, or maybe I am not providing useful information?
DB & vdsClient: http://fpaste.org/223483/14320588/ (I think this part is
very interesting)
engine.log: http://paste.fedoraproject.org/223349/04494414
Node02 vdsm Log: http://paste.fedoraproject.org/223350/43204496
Node01 vdsm Log: http://paste.fedoraproject.org/223347/20448951
Why does my vdsm look for StorageDomain
036b5575-51fa-4f14-8b05-890d7807894c? => This was an NFS export which I
deleted from the GUI yesterday (!!!).
From the Database Log/Dump:
=============================
USER_FORCE_REMOVE_STORAGE_DOMAIN 981 0 Storage Domain
EXPORT2 was forcibly removed by admin@internal f
b384b3da-02a6-44f3-a3f6-56751ce8c26d HP_Proliant_DL180G6
036b5575-51fa-4f14-8b05-890d7807894c EXPORT2
00000000-0000-0000-0000-000000000000 c1754ff
807321f6-2043-4a26-928c-0ce6b423c381
I already put one node into maintenance, rebooted it and activated it. Still
the same problem.
Some Screen Shots:
--------------------
http://postimg.org/image/8zo4ujgjb/
http://postimg.org/image/le918grdr/
http://postimg.org/image/wnawwhrgh/
My GlusterFS works fine and is not the problem here, I guess:
==============================================================
2015-05-19 04:09:06,292 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(DefaultQuartzScheduler_Worker-43) [3df9132b] START,
HSMClearTaskVDSCommand(HostName = ovirt-node02.stuttgart.imos.net,
HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3,
taskId=2aeec039-5b95-40f0-8410-da62b44a28e8), log id: 19b18840
2015-05-19 04:09:06,337 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(DefaultQuartzScheduler_Worker-43) [3df9132b] FINISH,
HSMClearTaskVDSCommand, log id: 19b18840
I already had a chat with Maor Lipchuk and he told me to add another
host to the datacenter and then re-initialize it. I will migrate back to
ESXi for now to get those two nodes free. Then we can mess with it if
anyone is interested in helping me. Otherwise I will have to stay with ESXi :-(
Does anyone else have an idea until then? Why is my vdsm host all messed up
with that zombie StorageDomain?
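For anyone hitting something similar, a sketch of commands that show what vdsm itself believes about its storage domains (run on the host; -s assumes the default SSL setup, and the UUID is the zombie domain from above):
vdsClient -s 0 getConnectedStoragePoolsList
vdsClient -s 0 getStorageDomainsList
vdsClient -s 0 getStorageDomainInfo 036b5575-51fa-4f14-8b05-890d7807894c
Comparing that output with the storage domains the engine still lists for the pool usually shows which side is holding on to the stale domain.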
Thanks for your time. I really, really appreciate it.
Mario
On 19.05.15 at 14:57, ml(a)ohnewald.net wrote:
> Hello List,
>
> okay, I really need some help now. I stopped vdsmd for a little bit too
> long, fencing stepped in and rebooted node01.
>
> I now can NOT start any vm because the storage is marked "Unknown".
>
>
> Since node01 rebooted, I am wondering why the hell it is still looking for
> this StorageDomain:
>
> StorageDomainCache::(_findDomain) domain
> 036b5575-51fa-4f14-8b05-890d7807894c not found
>
> Can anyone please tell me where exactly this ID comes from?
>
>
> Thanks,
> Mario
>
>
>
>
>
> On 18.05.15 at 14:21, Maor Lipchuk wrote:
>> Hi Mario,
>>
>> Can you try to mount this directly from the host?
>> Can you please attach the VDSM and engine logs?
>>
>> Thanks,
>> Maor
>>
>>
>> ----- Original Message -----
>>> From: ml(a)ohnewald.net
>>> To: "Maor Lipchuk" <mlipchuk(a)redhat.com>
>>> Cc: users(a)ovirt.org
>>> Sent: Monday, May 18, 2015 2:36:38 PM
>>> Subject: Re: [ovirt-users] vdsm storage problem - maybe cache problem?
>>>
>>> Hi Maor,
>>>
>>> thanks for the quick reply.
>>>
>>> On 18.05.15 at 13:25, Maor Lipchuk wrote:
>>>
>>>>> Now my question: Why does the vdsm node not know that I deleted the
>>>>> storage? Has vdsm cached this mount information? Why does it still
>>>>> try to access 036b5575-51fa-4f14-8b05-890d7807894c?
>>>>
>>>>
>>>> Yes, vdsm uses a cache for storage domains; you can try to restart
>>>> the vdsmd service instead of rebooting the host.
>>>>
>>>
>>> I am still getting the same error.
>>>
>>>
>>> [root@ovirt-node01 ~]# /etc/init.d/vdsmd stop
>>> Shutting down vdsm daemon:
>>> vdsm watchdog stop [ OK ]
>>> vdsm: Running run_final_hooks [ OK ]
>>> vdsm stop [ OK ]
>>> [root@ovirt-node01 ~]#
>>> [root@ovirt-node01 ~]#
>>> [root@ovirt-node01 ~]#
>>> [root@ovirt-node01 ~]# ps aux | grep vdsmd
>>> root 3198 0.0 0.0 11304 740 ? S< May07 0:00
>>> /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon
>>> --masterpid /var/run/vdsm/supervdsm_respawn.pid
>>> /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock
>>> --pidfile /var/run/vdsm/supervdsmd.pid
>>> root 3205 0.0 0.0 922368 26724 ? S<l May07 12:10
>>> /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile
>>> /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid
>>> root 15842 0.0 0.0 103248 900 pts/0 S+ 13:35 0:00 grep
>>> vdsmd
>>>
>>>
>>> [root@ovirt-node01 ~]# /etc/init.d/vdsmd start
>>> initctl: Job is already running: libvirtd
>>> vdsm: Running mkdirs
>>> vdsm: Running configure_coredump
>>> vdsm: Running configure_vdsm_logs
>>> vdsm: Running run_init_hooks
>>> vdsm: Running gencerts
>>> vdsm: Running check_is_configured
>>> libvirt is already configured for vdsm
>>> sanlock service is already configured
>>> vdsm: Running validate_configuration
>>> SUCCESS: ssl configured to true. No conflicts
>>> vdsm: Running prepare_transient_repository
>>> vdsm: Running syslog_available
>>> vdsm: Running nwfilter
>>> vdsm: Running dummybr
>>> vdsm: Running load_needed_modules
>>> vdsm: Running tune_system
>>> vdsm: Running test_space
>>> vdsm: Running test_lo
>>> vdsm: Running restore_nets
>>> vdsm: Running unified_network_persistence_upgrade
>>> vdsm: Running upgrade_300_nets
>>> Starting up vdsm daemon:
>>> vdsm start [ OK ]
>>> [root@ovirt-node01 ~]#
>>>
>>> [root@ovirt-node01 ~]# grep ERROR /var/log/vdsm/vdsm.log | tail -n 20
>>> Thread-13::ERROR::2015-05-18
>>> 13:35:03,631::sdc::137::Storage.StorageDomainCache::(_findDomain)
>>> looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
>>> Thread-13::ERROR::2015-05-18
>>> 13:35:03,632::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>
>>> looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:11,607::sdc::137::Storage.StorageDomainCache::(_findDomain)
>>> looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:11,621::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>
>>> looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:11,960::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
>>> 036b5575-51fa-4f14-8b05-890d7807894c not found
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:11,960::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
>>>
>>> Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> monitoring information
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:21,962::sdc::137::Storage.StorageDomainCache::(_findDomain)
>>> looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:21,965::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>
>>> looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:22,068::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
>>> 036b5575-51fa-4f14-8b05-890d7807894c not found
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:22,072::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
>>>
>>> Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> monitoring information
>>> Thread-15::ERROR::2015-05-18
>>> 13:35:33,821::task::866::TaskManager.Task::(_setError)
>>> Task=`54bdfc77-f63a-493b-b24e-e5a3bc4977bb`::Unexpected error
>>> Thread-15::ERROR::2015-05-18
>>> 13:35:33,864::dispatcher::65::Storage.Dispatcher.Protect::(run)
>>> {'status': {'message': "Unknown pool id, pool not connected:
>>> ('b384b3da-02a6-44f3-a3f6-56751ce8c26d',)", 'code': 309}}
>>> Thread-13::ERROR::2015-05-18
>>> 13:35:33,930::sdc::137::Storage.StorageDomainCache::(_findDomain)
>>> looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
>>> Thread-15::ERROR::2015-05-18
>>> 13:35:33,928::task::866::TaskManager.Task::(_setError)
>>> Task=`fe9bb0fa-cf1e-4b21-af00-0698c6d1718f`::Unexpected error
>>> Thread-13::ERROR::2015-05-18
>>> 13:35:33,932::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>
>>> looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
>>> Thread-15::ERROR::2015-05-18
>>> 13:35:33,978::dispatcher::65::Storage.Dispatcher.Protect::(run)
>>> {'status': {'message': 'Not SPM: ()', 'code': 654}}
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:41,117::sdc::137::Storage.StorageDomainCache::(_findDomain)
>>> looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:41,131::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>
>>> looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:41,452::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
>>> 036b5575-51fa-4f14-8b05-890d7807894c not found
>>> Thread-36::ERROR::2015-05-18
>>> 13:35:41,453::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
>>>
>>> Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c
>>> monitoring information
>>>
>>>
>>> Thanks,
>>> Mario
>>>
>>>
>
vdsmd fails to start on boot
by Chris Jones - BookIt.com Systems Administrator
Running oVirt Node - 3.5 - 0.999.201504280931.el7.centos. On first boot,
vdsmd fails to load with "Dependency failed for Virtual Desktop Server
Manager."
When I run "systemctl start vdsmd" manually, it starts fine. This happens on every
reboot. It looks like there is an old bug for this from 3.4.
https://bugzilla.redhat.com/show_bug.cgi?id=1055153
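A sketch of how one might narrow down which dependency fails at boot, using standard systemd tooling (unit names other than vdsmd may differ between versions):
journalctl -b -u vdsmd
systemctl list-dependencies vdsmd
systemctl --failed
The failed unit listed there is usually the real culprit rather than vdsmd itself.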
how to pause a VM from rest-api?
by Ernest Beinrohr
I'd like to pause a VM from the REST API, but the docs do not mention
pause, only suspend. This works: URL/api/vms/ID/suspend, but pause
doesn't. Pausing from vdsClient works, however; I'd like to use the REST API:
vdsClient -s host1 pause ID
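For reference, a sketch of the suspend action as exposed by the 3.5 REST API; the engine address, VM ID and credentials are placeholders:
curl -k -u admin@internal:PASSWORD \
     -H "Content-Type: application/xml" \
     -d '<action/>' \
     https://engine.example.com/api/vms/VMID/suspend
As far as I can tell there is no pause action in that API, so a true pause only works through vdsClient (or virsh suspend, which pauses the qemu process) directly on the host.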
--
Ernest Beinrohr, AXON PRO
Ing <http://www.beinrohr.sk/ing.php>, RHCE
<http://www.beinrohr.sk/rhce.php>, RHCVA
<http://www.beinrohr.sk/rhce.php>, LPIC
<http://www.beinrohr.sk/lpic.php>, VCA <http://www.beinrohr.sk/vca.php>,
+421-2-62410360 +421-903-482603
[ANN] oVirt 3.6.0 First Alpha Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the first Alpha release of oVirt 3.6 for testing, as of May 20, 2015.
oVirt is an open source alternative to VMware vSphere, and provides an excellent KVM management interface for multi-node virtualization.
The Alpha release is available now for Fedora 20,
Red Hat Enterprise Linux 6.6, CentOS 6.6 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS 7.1 (or similar).
The Alpha release can also be deployed on Hypervisor Hosts running Fedora 21 and Fedora 22.
This Alpha release of oVirt includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
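For a quick test installation, enabling the pre-release repository typically looks like the sketch below; the exact release package URL should be taken from the release notes:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum install ovirt-engine
engine-setup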
A new oVirt Live ISO and Node ISO will be available soon [2].
Please note that mirrors [3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
disk format hang
by paf1@email.cz
Hello,
during VM installation I needed to add a new 300 GB disk and format it as
ext3. An easy task, but ...
I used a striped mirror volume for that, created by:
# gluster volume create 12KVM12SC4 replica 2 stripe 2 1kvm1-SAN:/p4/GFS1
1kvm2-SAN:/p4/GFS1 2kvm1-SAN:/p4/GFS1 2kvm2-SAN:/p4/GFS1 force
- added the disk through the oVirt GUI
- VM# partprobe
- VM# fdisk - make one partition, type Linux
- VM# mkfs.ext3 /dev/vdb1
It hangs after about 30% of the format.
oVirt alert:
*VM has been paused due to a storage I/O error*
So I tried to copy some data (about 80 GB) directly from the hypervisor to
the default gluster-oVirt mountpoint
(/rhev/data-center/mnt/glusterSD/localhost:_12KVM12SC4) with no hangs.
The same whole operation completed successfully on a distributed replica volume.
Can anybody help me with it?
Maybe the continual errors occurring in /var/log/messages will give the right
view of it:
May 20 15:47:00 1kvm2 virsh: All-whitespace username.
May 20 15:47:00 1kvm2 journal: End of file while reading data:
Input/output error
May 20 15:47:00 1kvm2 virsh: All-whitespace username.
May 20 15:47:00 1kvm2 journal: End of file while reading data:
Input/output error
May 20 15:48:00 1kvm2 virsh: All-whitespace username.
May 20 15:48:00 1kvm2 journal: End of file while reading data:
Input/output error
May 20 15:48:00 1kvm2 virsh: All-whitespace username.
May 20 15:48:00 1kvm2 journal: End of file while reading data:
Input/output error
Regards to all
!! URGENT !!
Pavel
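A sketch of checks that usually help with this kind of storage I/O pause (the volume name is taken from the message above; the client log path follows the usual GlusterFS naming and may differ):
gluster volume info 12KVM12SC4
gluster volume status 12KVM12SC4
less /var/log/glusterfs/rhev-data-center-mnt-glusterSD-localhost:_12KVM12SC4.log
Note that striped volumes are generally discouraged for VM images, so the brick layout itself may be worth reconsidering.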
Re: [ovirt-users] Linux VMs get stuck after Live Migration
by Soeren Malchow
Dear all,

some additions:

  *   switching off memory ballooning does not solve the problem
  *   Once we migrated the machines we can basically not use anything anymore that needs some kind of timing - e.g. "ping 1.1.1.1 -i 2" or "while true; do echo "lala"; sleep; done" both do the first thing (1 ping and one "lala" output) but then stall

Cheers
Soeren

From: <users-bounces(a)ovirt.org> on behalf of Soeren Malchow
Date: Wednesday 20 May 2015 13:42
To: "users(a)ovirt.org"
Subject: [ovirt-users] Linux VMs get stuck after Live Migration

Dear all,

We are experiencing a problem with Linux VMs, specifically the VMs we tested are CentOS 7.1 (so far it looks as if Windows VMs are not a problem). After the live migration the machines do not show the IP and hostname in the GUI, the same way as if the guest tools are not installed, but the machines are still running.

Then we have 2 different scenarios:

Reboot/shutdown:
When I reboot/shutdown one of these VMs from inside the VM, the OS seems to shut down; however, the qemu process is still running and for oVirt the machine is still up. A "shutdown" in the frontend does not do anything, a "power off" shuts down the machine.

Just wait:
After a while the machines just seem to shut down, and then basically the same happens as with reboot/shutdown.

The environment is:

Hosted Engine on CentOS 6.6 with oVirt 3.5.2.1
Compute hosts on Fedora 20 with vdsm 4.16.14 and libvirt 1.2.9.1 from the libvirt-preview repo (for live merge)
Storage -> CentOS 7.1 with gluster 3.6.3

Can anyone point us towards the right direction?

Regards
Soeren
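Since everything time-based stalls after the migration, one debugging sketch is to check which clocksource the guest uses and whether the destination host still sees the domain as healthy (these are generic checks, not a confirmed cause):
# inside the guest
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
dmesg | grep -i clocksource
# on the destination host
virsh -r list --all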
NFS export domain still remain in "Preparing for maintenance"
by NUNIN Roberto
Hi all
We are using oVirt engine 3.5.1-0.0 on CentOS 6.6.
We have two DCs: one with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
the other with vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6.
No hosted engine; it runs on a dedicated VM, outside oVirt.
Behavior: when we try to put the NFS export domain that is currently active and
attached to the 6.6 cluster (used to move VMs from one DC to the other) into
maintenance, it remains indefinitely in the "Preparing for maintenance" phase.
No DNS resolution issues are in place; all parties involved resolve correctly,
both directly and via reverse lookup.
I've read about the el7 IPv6 issue, but here we have the problem on CentOS 6.6 hosts.
Any idea/suggestion/further investigation?
Can we reinitialize the NFS export in some way? Only by erasing its content?
Thanks in advance for any suggestion.
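A sketch of the usual first checks from one of the hosts when an export domain hangs like this (server name and export path are placeholders):
showmount -e nfs-server.example.com
mkdir -p /mnt/exporttest
mount -t nfs -o vers=3 nfs-server.example.com:/export/path /mnt/exporttest
ls /mnt/exporttest
If the manual mount works, the vdsm.log on the SPM host and the engine.log usually show why the deactivation task is stuck. Reinitializing the export domain by wiping its content is possible, but detaching or force-removing it from the engine first is generally the safer route.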
Roberto Nunin
Italy
________________________________
This message is for the designated recipient only and may contain privileged, proprietary, or otherwise private information. If you have received it in error, please notify the sender immediately, deleting the original and all copies and destroying any hard copies. Any other use is strictly prohibited and may be unlawful.
Linux VMs get stuck after Live Migration
by Soeren Malchow
Dear all,

We are experiencing a problem with Linux VMs, specifically the VMs we tested are CentOS 7.1 (so far it looks as if Windows VMs are not a problem). After the live migration the machines do not show the IP and hostname in the GUI, the same way as if the guest tools are not installed, but the machines are still running.

Then we have 2 different scenarios:

Reboot/shutdown:
When I reboot/shutdown one of these VMs from inside the VM, the OS seems to shut down; however, the qemu process is still running and for oVirt the machine is still up. A "shutdown" in the frontend does not do anything, a "power off" shuts down the machine.

Just wait:
After a while the machines just seem to shut down, and then basically the same happens as with reboot/shutdown.

The environment is:

Hosted Engine on CentOS 6.6 with oVirt 3.5.2.1
Compute hosts on Fedora 20 with vdsm 4.16.14 and libvirt 1.2.9.1 from the libvirt-preview repo (for live merge)
Storage -> CentOS 7.1 with gluster 3.6.3

Can anyone point us towards the right direction?

Regards
Soeren
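Since the missing IP/hostname in the GUI points at the guest agent channel, a quick check inside the guest and on the destination host may help (a debugging sketch only; the VM name is a placeholder):
# inside the guest
systemctl status ovirt-guest-agent
# on the destination host
virsh -r dumpxml VMNAME | grep -i channel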