[ovirt-users] Switch from Fedora 20 to CentOS 7.1

Soeren Malchow soeren.malchow at mcon.net
Fri May 22 20:50:26 UTC 2015


Dear Nir,

Thanks for the answer.

The problem is not related to oVirt, vdsm or libvirt; it was in Gluster. The secondary oVirt cluster actually had the Gluster volume mounted correctly and could see everything except the files in “dom_md”. We updated all Gluster packages to 3.7.0 and all was good.

If someone else runs into this: check for that first.
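
Roughly what I mean by “check for this”: a small Python sketch (illustration only; the Gluster mount point is a placeholder for your own path, the domain UUID is the one from the logs below) that lists dom_md through the mount and verifies that the usual metadata files are actually visible:

import os

# Illustration only: adjust the mount point and storage domain UUID to your setup.
MOUNT = "/rhev/data-center/mnt/glusterSD/gluster1:_vmstore"   # placeholder mount
SD_UUID = "276e9ba7-e19a-49c5-8ad7-26711934d5e4"              # domain UUID from the logs

dom_md = os.path.join(MOUNT, SD_UUID, "dom_md")
expected = {"metadata", "ids", "inbox", "outbox", "leases"}   # usual files in dom_md

try:
    visible = set(os.listdir(dom_md))
except OSError as e:
    raise SystemExit("cannot read %s: %s" % (dom_md, e))

print("visible in dom_md: %s" % sorted(visible))
missing = expected - visible
if missing:
    print("not visible through the mount (our symptom): %s" % sorted(missing))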

The switch from Fedora 20 to CentOS 7.1 works just fine as long as all Gluster hosts are on 3.7.0 and oVirt is on 3.5.2.1.
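
It is also worth verifying that every host really carries matching packages before starting VMs again; a trivial sketch that just shells out to rpm on each host (package names may differ on your distribution):

import subprocess

# Run this on every hypervisor; sketch only, adjust the package list as needed.
for pkg in ("glusterfs", "glusterfs-fuse", "vdsm"):
    try:
        print(subprocess.check_output(["rpm", "-q", pkg]).decode().strip())
    except subprocess.CalledProcessError:
        print("%s is not installed" % pkg)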

Cheers
Soeren 






On 22/05/15 20:59, "Nir Soffer" <nsoffer at redhat.com> wrote:

>----- Original Message -----
>> From: "Soeren Malchow" <soeren.malchow at mcon.net>
>> To: "Jurriën Bloemen" <Jurrien.Bloemen at dmc.amcnetworks.com>, users at ovirt.org
>> Sent: Thursday, May 21, 2015 7:35:02 PM
>> Subject: Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
>> 
>> Hi,
>> 
>> We have now created the new cluster based on CentOS 7.1, which went fine. We
>> then migrated 2 machines with no problem; we have Live Migration (back), Live
>> Merge and so on, all good.
>> 
>> But some additional machines have problems starting on the new cluster, and
>> this is what happens:
>> 
>> 
>> Grep for the thread in vdsm.log:
>> <— snip —>
>> 
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:21,999::vm::2264::vm.Vm::(_startUnderlyingVm)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::Start
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,003::vm::2268::vm.Vm::(_startUnderlyingVm)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::_ongoingCreations acquired
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,008::vm::3261::vm.Vm::(_run)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::VM wrapper has started
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,021::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::moving from state init -> state
>> preparing
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,028::logUtils::44::dispatcher::(wrapper) Run and protect:
>> getVolumeSize(sdUUID=u'276e9ba7-e19a-49c5-8ad7-26711934d5e4',
>> spUUID=u'0f954891-b1cd-4f09-99ae-75d404d95f9d',
>> imgUUID=u'eae65249-e5e8-49e7-90a0-c7385e80e6ca',
>> volUUID=u'8791f6ec-a6ef-484d-bd5a-730b22b19250', options=None)
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,069::logUtils::47::dispatcher::(wrapper) Run and protect:
>> getVolumeSize, Return response: {'truesize': '2696552448', 'apparentsize':
>> '2696609792'}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,069::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::finished: {'truesize':
>> '2696552448', 'apparentsize': '2696609792'}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,069::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::moving from state preparing ->
>> state finished
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,070::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,070::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,070::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`2bc7fe9c-204a-4ab7-a116-f7fbba32bd34`::ref 0 aborting False
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,071::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::moving from state init -> state
>> preparing
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,071::logUtils::44::dispatcher::(wrapper) Run and protect:
>> getVolumeSize(sdUUID=u'276e9ba7-e19a-49c5-8ad7-26711934d5e4',
>> spUUID=u'0f954891-b1cd-4f09-99ae-75d404d95f9d',
>> imgUUID=u'967d966c-3653-4ff6-9299-2fb5b4197c37',
>> volUUID=u'99b085e6-6662-43ef-8ab4-40bc00e82460', options=None)
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,086::logUtils::47::dispatcher::(wrapper) Run and protect:
>> getVolumeSize, Return response: {'truesize': '1110773760', 'apparentsize':
>> '1110835200'}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,087::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::finished: {'truesize':
>> '1110773760', 'apparentsize': '1110835200'}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,087::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::moving from state preparing ->
>> state finished
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,087::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,088::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,088::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`c508cf8f-9f02-43a6-a45d-2b3f1d7e66be`::ref 0 aborting False
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,088::clientIF::335::vds::(prepareVolumePath) prepared volume path:
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,089::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::moving from state init -> state
>> preparing
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:22,089::logUtils::44::dispatcher::(wrapper) Run and protect:
>> prepareImage(sdUUID=u'276e9ba7-e19a-49c5-8ad7-26711934d5e4',
>> spUUID=u'0f954891-b1cd-4f09-99ae-75d404d95f9d',
>> imgUUID=u'eae65249-e5e8-49e7-90a0-c7385e80e6ca',
>> leafUUID=u'8791f6ec-a6ef-484d-bd5a-730b22b19250')
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,090::resourceManager::198::Storage.ResourceManager.Request::(__init__)
>> ResName=`Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4`ReqID=`2ba5bd10-3b98-44fa-9c90-8a2ade3261dc`::Request
>> was made in '/usr/share/vdsm/storage/hsm.py' line '3226' at 'prepareImage'
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,090::resourceManager::542::Storage.ResourceManager::(registerResource)
>> Trying to register resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4'
>> for lock type 'shared'
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,091::resourceManager::601::Storage.ResourceManager::(registerResource)
>> Resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4' is free. Now locking
>> as 'shared' (1 active user)
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,091::resourceManager::238::Storage.ResourceManager.Request::(grant)
>> ResName=`Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4`ReqID=`2ba5bd10-3b98-44fa-9c90-8a2ade3261dc`::Granted
>> request
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,092::task::827::Storage.TaskManager.Task::(resourceAcquired)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::_resourcesAcquired:
>> Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4 (shared)
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:22,092::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::ref 1 aborting False
>> vdsm/vdsm.log:Thread-5475::ERROR::2015-05-21
>> 18:27:24,107::task::866::Storage.TaskManager.Task::(_setError)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::Unexpected error
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,108::task::885::Storage.TaskManager.Task::(_run)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::Task._run:
>> 7ca2f743-09b1-4499-b9e4-5f640002a2bc
>> (u'276e9ba7-e19a-49c5-8ad7-26711934d5e4',
>> u'0f954891-b1cd-4f09-99ae-75d404d95f9d',
>> u'eae65249-e5e8-49e7-90a0-c7385e80e6ca',
>> u'8791f6ec-a6ef-484d-bd5a-730b22b19250') {} failed - stopping task
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,108::task::1217::Storage.TaskManager.Task::(stop)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::stopping in state preparing
>> (force False)
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,108::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::ref 1 aborting True
>> vdsm/vdsm.log:Thread-5475::INFO::2015-05-21
>> 18:27:24,109::task::1171::Storage.TaskManager.Task::(prepare)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::aborting: Task is aborted:
>> 'Volume does not exist' - code 201
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,109::task::1176::Storage.TaskManager.Task::(prepare)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::Prepare: aborted: Volume does
>> not exist
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,109::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::ref 0 aborting True
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,110::task::928::Storage.TaskManager.Task::(_doAbort)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::Task._doAbort: force False
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,110::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,110::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::moving from state preparing ->
>> state aborting
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,110::task::550::Storage.TaskManager.Task::(__state_aborting)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::_aborting: recover policy none
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,111::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::moving from state aborting ->
>> state failed
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,111::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources
>> {u'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4': < ResourceRef
>> 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4', isValid: 'True' obj:
>> 'None'>}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,111::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,112::resourceManager::616::Storage.ResourceManager::(releaseResource)
>> Trying to release resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4'
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,112::resourceManager::635::Storage.ResourceManager::(releaseResource)
>> Released resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4' (0 active
>> users)
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,112::resourceManager::641::Storage.ResourceManager::(releaseResource)
>> Resource 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4' is free, finding out
>> if anyone is waiting for it.
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,113::resourceManager::649::Storage.ResourceManager::(releaseResource)
>> No one is waiting for resource
>> 'Storage.276e9ba7-e19a-49c5-8ad7-26711934d5e4', Clearing records.
>> vdsm/vdsm.log:Thread-5475::ERROR::2015-05-21
>> 18:27:24,113::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
>> {'message': "Volume does not exist:
>> (u'8791f6ec-a6ef-484d-bd5a-730b22b19250',)", 'code': 201}}
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,114::vm::2294::vm.Vm::(_startUnderlyingVm)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::_ongoingCreations released
>> vdsm/vdsm.log:Thread-5475::ERROR::2015-05-21
>> 18:27:24,114::vm::2331::vm.Vm::(_startUnderlyingVm)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::The vm start process failed
>> vdsm/vdsm.log:Thread-5475::DEBUG::2015-05-21
>> 18:27:24,117::vm::2786::vm.Vm::(setDownStatus)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::Changed state to Down: Bad
>> volume specification {u'index': 0, u'iface': u'virtio', u'type': u'disk',
>> u'format': u'cow', u'bootOrder': u'1', u'address': {u'slot': u'0x06',
>> u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function':
>> u'0x0'}, u'volumeID': u'8791f6ec-a6ef-484d-bd5a-730b22b19250',
>> 'apparentsize': '2696609792', u'imageID':
>> u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', u'specParams': {}, u'readonly':
>> u'false', u'domainID': u'276e9ba7-e19a-49c5-8ad7-26711934d5e4', 'reqsize':
>> '0', u'deviceId': u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', 'truesize':
>> '2696552448', u'poolID': u'0f954891-b1cd-4f09-99ae-75d404d95f9d', u'device':
>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'optional':
>> u'false'} (code=1)
>> 
>> <— snip —>
>> 
>> 
>> Additionally, I can find this:
>> 
>>>> Thread-5475::ERROR::2015-05-21
>> 18:27:24,107::task::866::Storage.TaskManager.Task::(_setError)
>> Task=`7ca2f743-09b1-4499-b9e4-5f640002a2bc`::Unexpected error
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>>     return fn(*args, **kargs)
>>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
>>     res = f(*args, **kwargs)
>>   File "/usr/share/vdsm/storage/hsm.py", line 3235, in prepareImage
>>     raise se.VolumeDoesNotExist(leafUUID)
>> VolumeDoesNotExist: Volume does not exist:
>> (u'8791f6ec-a6ef-484d-bd5a-730b22b19250',)
>> 
>>>> 
>> 
>>>> Thread-5475::ERROR::2015-05-21
>> 18:27:24,114::vm::2331::vm.Vm::(_startUnderlyingVm)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 2271, in _startUnderlyingVm
>>     self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 3266, in _run
>>     self.preparePaths(devices[DISK_DEVICES])
>>   File "/usr/share/vdsm/virt/vm.py", line 2353, in preparePaths
>>     drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>>   File "/usr/share/vdsm/clientIF.py", line 277, in prepareVolumePath
>>     raise vm.VolumeError(drive)
>> VolumeError: Bad volume specification {u'index': 0, u'iface': u'virtio',
>> u'type': u'disk', u'format': u'cow', u'bootOrder': u'1', u'address':
>> {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci',
>> u'function': u'0x0'}, u'volumeID': u'8791f6ec-a6ef-484d-bd5a-730b22b19250',
>> 'apparentsize': '2696609792', u'imageID':
>> u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', u'specParams': {}, u'readonly':
>> u'false', u'domainID': u'276e9ba7-e19a-49c5-8ad7-26711934d5e4', 'reqsize':
>> '0', u'deviceId': u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', 'truesize':
>> '2696552448', u'poolID': u'0f954891-b1cd-4f09-99ae-75d404d95f9d', u'device':
>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'optional':
>> u'false'}
>> Thread-5475::DEBUG::2015-05-21 18:27:24,117::vm::2786::vm.Vm::(setDownStatus)
>> vmId=`24bd5074-64fc-4aa0-87cd-5de3dd7b50d1`::Changed state to Down: Bad
>> volume specification {u'index': 0, u'iface': u'virtio', u'type': u'disk',
>> u'format': u'cow', u'bootOrder': u'1', u'address': {u'slot': u'0x06',
>> u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function':
>> u'0x0'}, u'volumeID': u'8791f6ec-a6ef-484d-bd5a-730b22b19250',
>> 'apparentsize': '2696609792', u'imageID':
>> u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', u'specParams': {}, u'readonly':
>> u'false', u'domainID': u'276e9ba7-e19a-49c5-8ad7-26711934d5e4', 'reqsize':
>> '0', u'deviceId': u'eae65249-e5e8-49e7-90a0-c7385e80e6ca', 'truesize':
>> '2696552448', u'poolID': u'0f954891-b1cd-4f09-99ae-75d404d95f9d', u'device':
>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'optional':
>> u'false'} (code=1)
>>>> 
>> The thing is, if I move that respective VM back to the old cluster running
>> Fedora 20 with the libvirt from the libvirt-preview repo, then the VM starts
>> with no problem.
>> 
>> According to vdsm on the new cluster, that volume ‘8791f6ec-a6ef-484d-bd5a-730b22b19250’ does not exist.
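>> 
>> A quick way to double-check on the CentOS host whether the file is really
>> missing; just a sketch, assuming the usual file/Gluster domain layout
>> <mount>/<sdUUID>/images/<imgUUID>/<volUUID>, with the mount path as a
>> placeholder and the UUIDs taken from the error above:
>> 
>> import os
>> 
>> MOUNT = "/rhev/data-center/mnt/glusterSD/<server>:_<volume>"  # placeholder
>> SD = "276e9ba7-e19a-49c5-8ad7-26711934d5e4"   # domainID
>> IMG = "eae65249-e5e8-49e7-90a0-c7385e80e6ca"  # imageID
>> VOL = "8791f6ec-a6ef-484d-bd5a-730b22b19250"  # volumeID vdsm reports missing
>> 
>> path = os.path.join(MOUNT, SD, "images", IMG, VOL)
>> print("%s -> %s" % (path, "exists" if os.path.exists(path) else "MISSING"))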
>> 
>> I have experienced that problem with several VMs now; the guest OS does not
>> matter. I also checked for snapshots (there are none) and tried cloning the
>> VM and then moving it over, but no luck either.
>> 
>> Any ideas where to look?
>
>Let's open a bug for this.
>
>It would be useful to get an engine database dump along with the engine and vdsm logs.
>
>Nir
>
>> 
>> Regards
>> Soeren
>> 
>> 
>> 
>> 
>> From: <users-bounces at ovirt.org> on behalf of Soeren Malchow
>> Date: Wednesday 20 May 2015 15:42
>> To: "Bloemen, Jurriën", users at ovirt.org
>> Subject: Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
>> 
>> Great, thanks, that is the plan then
>> 
>> From: <users-bounces at ovirt.org> on behalf of "Bloemen, Jurriën"
>> Date: Wednesday 20 May 2015 15:27
>> To: users at ovirt.org
>> Subject: Re: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
>> 
>> Hi Soeren,
>> 
>> Yes! That works perfectly. Did it myself several times.
>> 
>> Regards,
>> 
>> Jurriën
>> 
>> On 20-05-15 14:19, Soeren Malchow wrote:
>> 
>> 
>> 
>> Hi Vered,
>> 
>> Thanks for the quick answer; OK, understood.
>> 
>> Then I could create a new cluster in the same datacenter with newly installed
>> hosts and migrate the machines by shutting them down in the old cluster and
>> starting them in the new cluster; the only thing I lose is live migration.
>> 
>> Regards
>> Soeren
>> 
>> 
>> 
>> On 20/05/15 14:04, "Vered Volansky" <vered at redhat.com> wrote:
>> 
>> 
>> 
>> Hi Soeren,
>> 
>> oVirt clusters support a single host distribution (all hosts must be of the
>> same distribution).
>> If the cluster is empty at some point, you can add a host of a different
>> distribution than the one the cluster held before.
>> But there can't be two types of distribution in one cluster at the same time.
>> 
>> Regards,
>> Vered
>> 
>> ----- Original Message -----
>> 
>> 
>> 
>> From: "Soeren Malchow" <soeren.malchow at mcon.net>
>> To: users at ovirt.org
>> Sent: Wednesday, May 20, 2015 2:58:11 PM
>> Subject: [ovirt-users] Switch from Fedora 20 to CentOS 7.1
>> 
>> Dear all,
>> 
>> Would it be possible to switch from Fedora 20 to CentOS 7.1 (as far as I
>> understood, it has live merge support now) within one cluster, meaning:
>> 
>> 
>>     * Take out one compute host
>>     * Reinstall that compute host with CentOS 7.1
>>     * Do a hosted-engine --deploy
>>     * Migrate VMs to the CentOS 7.1 host
>>     * Take the next Fedora host and reinstall
>> 
>> Any experiences, recommendations or remarks on that?
>> 
>> Regards
>> Soeren
>> 
>> 
>> 
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>> 

