oVirt Survey Autumn 2020
by Sandro Bonazzola
As we continue to develop oVirt 4.4, the Development and Integration teams
at Red Hat would value insights on how you are deploying the oVirt
environment.
Please help us to hit the mark by completing this short survey.
The survey will close on October 18th, 2020. If you're managing multiple
oVirt deployments with very different use cases or configurations, please
consider answering this survey once per deployment.
*Please note the answers to this survey will be publicly accessible*.
This survey is governed by the oVirt Privacy Policy, available at
https://www.ovirt.org/site/privacy-policy.html.
The survey is available at https://forms.gle/bPvEAdRyUcyCbgEc7
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work-life balance. Therefore there is no need to
answer this email outside of your office hours.*
oVirt Node 4.4.2 is now generally available
by Sandro Bonazzola
oVirt Node 4.4.2 is now generally available
The oVirt project is pleased to announce the general availability of oVirt
Node 4.4.2, as of September 25th, 2020.
This release completes the oVirt 4.4.2 release published on September 17th.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts, you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (see the users' mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...).
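As a hedged illustration, the DUD approach generally means pointing the
Anaconda installer at a driver update disk image via the inst.dd= boot
option; the URL below is purely a placeholder:
inst.dd=http://yourserver.example.com/dd-megaraid_sas.iso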
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864>
("Host enter emergency mode after upgrading to latest build"), if your root
file system is on a multipath device on your hosts, be aware that after
upgrading from 4.4.1 to 4.4.2 your host may enter emergency mode.
To prevent this, be sure to upgrade oVirt Engine first, then on your hosts
(a command-level sketch follows the list):
1. Remove the current LVM filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.2 (redeploy the host if it is already on 4.4.2).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild the initramfs with the correct filter configuration.
6. Reboot.
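A minimal command-level sketch of the sequence above, assuming an
Enterprise Linux 8 host managed with dnf (adjust to your environment):
# 1. while still on 4.4.1 (or in emergency mode), drop the old "filter = [ ... ]" line
vi /etc/lvm/lvm.conf
# 2. reboot
reboot
# 3. upgrade the host to 4.4.2 (or redeploy it from the engine if it is already on 4.4.2)
dnf upgrade
# 4. confirm a new LVM filter is in place
vdsm-tool config-lvm-filter
# 5. only if not using oVirt Node: rebuild the initramfs with the correct filter configuration
dracut --force --add multipath
# 6. reboot again
reboot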
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
What’s new in oVirt Node 4.4.2 Release?
oVirt Node has been updated, including:
- oVirt 4.4.2: http://www.ovirt.org/release/4.4.2/
- Ansible 2.9.13:
  https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- GlusterFS 7.7: https://docs.gluster.org/en/latest/release-notes/7.7/
- Advanced Virtualization 8.2.1
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.4.2 release highlights:
  http://www.ovirt.org/release/4.4.2/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog:
  http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.2/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work-life balance. Therefore there is no need to
answer this email outside of your office hours.*
Re: OVN Geneve tunnels not been established
by Konstantinos Betsis
I restarted the ovn-controller; this is the output of
ovn-controller.log:
2020-09-11T10:54:07.566Z|00001|vlog|INFO|opened log file
/var/log/openvswitch/ovn-controller.log
2020-09-11T10:54:07.568Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
connecting...
2020-09-11T10:54:07.568Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
connected
2020-09-11T10:54:07.570Z|00004|main|INFO|OVS IDL reconnected, force
recompute.
2020-09-11T10:54:07.571Z|00005|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connecting...
2020-09-11T10:54:07.571Z|00006|main|INFO|OVNSB IDL reconnected, force
recompute.
2020-09-11T10:54:07.685Z|00007|stream_ssl|WARN|SSL_connect: unexpected SSL
connection close
2020-09-11T10:54:07.685Z|00008|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connection attempt failed (Protocol error)
2020-09-11T10:54:08.685Z|00009|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connecting...
2020-09-11T10:54:08.800Z|00010|stream_ssl|WARN|SSL_connect: unexpected SSL
connection close
2020-09-11T10:54:08.800Z|00011|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connection attempt failed (Protocol error)
2020-09-11T10:54:08.800Z|00012|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
waiting 2 seconds before reconnect
2020-09-11T10:54:10.802Z|00013|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connecting...
2020-09-11T10:54:10.917Z|00014|stream_ssl|WARN|SSL_connect: unexpected SSL
connection close
2020-09-11T10:54:10.917Z|00015|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connection attempt failed (Protocol error)
2020-09-11T10:54:10.917Z|00016|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
waiting 4 seconds before reconnect
2020-09-11T10:54:14.921Z|00017|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connecting...
2020-09-11T10:54:15.036Z|00018|stream_ssl|WARN|SSL_connect: unexpected SSL
connection close
2020-09-11T10:54:15.036Z|00019|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
connection attempt failed (Protocol error)
2020-09-11T10:54:15.036Z|00020|reconnect|INFO|ssl:OVIRT_ENGINE_IP:6642:
continuing to reconnect in the background but suppressing further logging
I have also run vdsm-tool ovn-config OVIRT_ENGINE_IP
OVIRTMGMT_NETWORK_DC.
This is how the OVIRT_ENGINE_IP is provided to the ovn-controller; I can
redo it if you want.
After the restart of the ovn-controller, the oVirt Engine still shows only
two Geneve connections: one with DC01-host02 and one with DC02-host01.
Chassis "c4b23834-aec7-4bf8-8be7-aa94a50a6144"
hostname: "dc02-host01"
Encap geneve
ip: "DC02-host01_IP"
options: {csum="true"}
Chassis "be3abcc9-7358-4040-a37b-8d8a782f239c"
hostname: "DC01-host02"
Encap geneve
ip: "DC01-host02"
options: {csum="true"}
I've re-run the vdsm-tool command and nothing changed, again, with
the same errors as after the systemctl restart ovn-controller.
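For reference, a quick way to look at the TLS side of those failures from a
host (a sketch only; substitute your real engine IP) is:
# show which key/certificate/CA the local Open vSwitch database is configured with
ovs-vsctl get-ssl
# probe the southbound DB port directly to see whether the handshake completes
openssl s_client -connect OVIRT_ENGINE_IP:6642 </dev/null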
On Fri, Sep 11, 2020 at 1:49 PM Dominik Holler <dholler(a)redhat.com> wrote:
> Please include ovirt-users list in your reply, to share the knowledge and
> experience with the community!
>
> On Fri, Sep 11, 2020 at 12:12 PM Konstantinos Betsis <k.betsis(a)gmail.com>
> wrote:
>
>> Ok below the output per node and DC
>> DC01
>> node01
>>
>> [root@dc01-node01 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-remote
>> "ssl:*OVIRT_ENGINE_IP*:6642"
>> [root@ dc01-node01 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-encap-type
>> geneve
>> [root@ dc01-node01 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-encap-ip
>>
>> "*OVIRTMGMT_IP_DC01-NODE01*"
>>
>> node02
>>
>> [root@dc01-node02 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-remote
>> "ssl:*OVIRT_ENGINE_IP*:6642"
>> [root@ dc01-node02 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-encap-type
>> geneve
>> [root@ dc01-node02 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-encap-ip
>>
>> "*OVIRTMGMT_IP_DC01-NODE02*"
>>
>> DC02
>> node01
>>
>> [root@dc02-node01 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-remote
>> "ssl:*OVIRT_ENGINE_IP*:6642"
>> [root@ dc02-node01 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-encap-type
>> geneve
>> [root@ dc02-node01 ~]# ovs-vsctl --no-wait get open .
>> external-ids:ovn-encap-ip
>>
>> "*OVIRTMGMT_IP_DC02-NODE01*"
>>
>>
> Looks good.
>
>
>> DC01 node01 and node02 share the same VM networks and VMs deployed on top
>> of them cannot talk to VM on the other hypervisor.
>>
>
> Maybe there is a hint on ovn-controller.log on dc01-node02 ? Maybe
> restarting ovn-controller creates more helpful log messages?
>
> You can also try restarting the ovn configuration on all hosts by executing
> vdsm-tool ovn-config OVIRT_ENGINE_IP LOCAL_OVIRTMGMT_IP
> on each host, this would trigger
>
> https://github.com/oVirt/ovirt-provider-ovn/blob/master/driver/scripts/se...
> internally.
>
>
>> So I would expect to see the same output for node01 to have a geneve
>> tunnel to node02 and vice versa.
>>
>>
> Me too.
>
>
>> On Fri, Sep 11, 2020 at 12:14 PM Dominik Holler <dholler(a)redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Fri, Sep 11, 2020 at 10:53 AM Konstantinos Betsis <k.betsis(a)gmail.com>
>>> wrote:
>>>
>>>> Hi Dominik
>>>>
>>>> OVN is selected as the default network provider on the clusters and the
>>>> hosts.
>>>>
>>>>
>>> sounds good.
>>> This configuration is required already when the host is added to oVirt
>>> Engine, because OVN is configured during that step.
>>>
>>>
>>>> The "ovn-sbctl show" works on the ovirt engine and shows only two
>>>> hosts, 1 per DC.
>>>>
>>>> Chassis "c4b23834-aec7-4bf8-8be7-aa94a50a6144"
>>>> hostname: "dc01-node02"
>>>> Encap geneve
>>>> ip: "X.X.X.X"
>>>> options: {csum="true"}
>>>> Chassis "be3abcc9-7358-4040-a37b-8d8a782f239c"
>>>> hostname: "dc02-node1"
>>>> Encap geneve
>>>> ip: "A.A.A.A"
>>>> options: {csum="true"}
>>>>
>>>>
>>>> The new node is not listed (dc01-node1).
>>>>
>>>> When executed on the nodes, the same command (ovn-sbctl show) times out
>>>> on all of them.
>>>>
>>>> The output of /var/log/openvswitch/ovn-controller.log on all hosts
>>>> shows:
>>>>
>>>> 2020-09-11T08:46:55.197Z|07361|stream_ssl|WARN|SSL_connect: unexpected
>>>> SSL connection close
>>>>
>>>>
>>>>
>>> Can you please compare the output of
>>>
>>> ovs-vsctl --no-wait get open . external-ids:ovn-remote
>>> ovs-vsctl --no-wait get open . external-ids:ovn-encap-type
>>> ovs-vsctl --no-wait get open . external-ids:ovn-encap-ip
>>>
>>> of the working hosts, e.g. dc01-node02, and the failing host dc01-node1?
>>> This should point us the relevant difference in the configuration.
>>>
>>> Please include ovirt-users list in your reply, to share the knowledge
>>> and experience with the community.
>>>
>>>
>>>
>>>> Thank you
>>>> Best regards
>>>> Konstantinos Betsis
>>>>
>>>>
>>>> On Fri, Sep 11, 2020 at 11:01 AM Dominik Holler <dholler(a)redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Sep 10, 2020 at 6:26 PM Konstantinos B <k.betsis(a)gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi all
>>>>>>
>>>>>> We have a small installation based on OVIRT 4.3.
>>>>>> One cluster is based on CentOS 7 and the other on the oVirt Node NG image.
>>>>>>
>>>>>> The environment was stable till an upgrade took place a couple of
>>>>>> months ago.
>>>>>> As such we had to re-install one of the CentOS 7 nodes and start from
>>>>>> scratch.
>>>>>>
>>>>>
>>>>> To trigger the automatic configuration of the host, it is required to
>>>>> configure ovirt-provider-ovn as the default network provider for the
>>>>> cluster before adding the host to oVirt.
>>>>>
>>>>>
>>>>>> Even though the installation completed successfully and VMs are
>>>>>> created, the following are not working as expected:
>>>>>> 1. OVN Geneve tunnels are not established with the other CentOS 7
>>>>>> node in the cluster.
>>>>>> 2. The CentOS 7 node is configured by oVirt Engine; however, no Geneve
>>>>>> tunnel is established when "ovn-sbctl show" is issued on the engine.
>>>>>>
>>>>>
>>>>> Does "ovn-sbctl show" list the hosts?
>>>>>
>>>>>
>>>>>> 3. no flows are shown on the engine on port 6642 for the ovs db.
>>>>>>
>>>>>> Does anyone have any experience on how to troubleshoot OVN on ovirt?
>>>>>>
>>>>>>
>>>>> /var/log/openvswitch/ovn-controller.log on the host should contain a
>>>>> helpful hint.
>>>>>
>>>>>
>>>>>
>>>>>> Thank you
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LBVGLQJBWJF...
>>>>>>
>>>>>
Is it possible to change scheduler optimization settings of cluster using ansible or some other automation way
by Kushagra Agarwal
I was hoping I could get some help with the below oVirt scenario:
*Problem Statement*: Is it possible to change the scheduler optimization
settings of a cluster using Ansible or some other automated way?
*Description*: Do we have any Ansible module or other CLI-based approach
that can help us change the 'scheduler optimization' settings of a cluster
in oVirt? The scheduler optimization settings of a cluster can be found
under the Scheduling Policy tab (Compute -> Clusters -> select the cluster
-> click Edit, then navigate to Scheduling Policy).
Any help with this will be highly appreciated.
Thanks,
Kushagra
VM AutoStart
by Jeremey Wise
When I have to shut down the cluster (UPS runs out, etc.), I need a small
set of VMs to "autostart" in sequence.
Normally I just use a DNS FQDN to connect to the oVirt Engine, but as two
of my VMs are a DNS HA cluster (as well as NTP / SMTP / DHCP, etc.), I need
those two infrastructure VMs to boot automatically.
I looked at the HA settings for those VMs, but they seem to watch for
pause/resume; they do not imply or state auto-start on a clean first boot.
Options?
--
penguinpages <jeremey.wise(a)gmail.com>
java.lang.reflect.UndeclaredThrowableException - oVirt engine UI
by Jeremey Wise
I tried to post on the website but it did not seem to work, so sorry if
this is a double post.
oVirt login this AM accepted my username and password but returned a Java
error. I restarted the oVirt engine:
##
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status
#make sure that the status is shutdown before restarting
hosted-engine --vm-start
hosted-engine --vm-status
#make sure the status is health before leaving maintenance mode
hosted-engine --set-maintenance --mode=none
##
[root@thor ~]# hosted-engine --vm-status
--== Host thor.penguinpages.local (id: 1) status ==--
Host ID : 1
Host timestamp : 65342
Score : 3400
Engine status : {"vm": "down", "health": "bad",
"detail": "unknown", "reason": "vm not running on this host"}
Hostname : thor.penguinpages.local
Local maintenance : False
stopped : False
crc32 : 824c29fd
conf_on_shared_storage : True
local_conf_timestamp : 65342
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=65342 (Wed Sep 30 08:11:45 2020)
host-id=1
score=3400
vm_conf_refresh_time=65342 (Wed Sep 30 08:11:45 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host medusa.penguinpages.local (id: 3) status ==--
Host ID : 3
Host timestamp : 87556
Score : 3400
Engine status : {"vm": "up", "health": "good",
"detail": "Up"}
Hostname : medusa.penguinpages.local
Local maintenance : False
stopped : False
crc32 : 63296a70
conf_on_shared_storage : True
local_conf_timestamp : 87556
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=87556 (Wed Sep 30 08:11:39 2020)
host-id=3
score=3400
vm_conf_refresh_time=87556 (Wed Sep 30 08:11:39 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
[root@thor ~]# yum update -y
Last metadata expiration check: 0:31:17 ago on Wed 30 Sep 2020 09:17:03 AM
EDT.
Dependencies resolved.
Nothing to do.
Complete!
[root@thor ~]#
Googled around and just found this thread:
##
https://bugzilla.redhat.com/show_bug.cgi?id=1378045
# pgadmin: connect to ovirte01.penguinpages.com as user engine, to the engine DB
select mac_addr from vm_interface
"00:16:3e:57:0d:47"
"56:6f:86:41:00:01"
"56:6f:86:41:00:00"
"56:6f:86:41:00:02"
"56:6f:86:41:00:03"
"56:6f:86:41:00:04"
"56:6f:86:41:00:05"
"56:6f:86:41:00:15"
"56:6f:86:41:00:16"
"56:6f:86:41:00:17"
"56:6f:86:41:00:18"
"56:6f:86:41:00:19"
# Note one field is "null"
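For reference, a minimal way to pull the full row(s) behind that null value
(a sketch; run on the engine VM, and it assumes the default local postgres
access to the engine database):
# list the vm_interface rows whose MAC address is missing
sudo -u postgres psql engine -c "select * from vm_interface where mac_addr is null;"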
Questions:
1) Is this bad?
2) How do I fix it?
3) Any idea on the root cause?
--
penguinpages <jeremey.wise(a)gmail.com>
Gluster Volumes - Correct Peer Connection
by Jeremey Wise
I just noticed that when the HCI setup built the gluster engine / data /
vmstore volumes, it correctly used the definition of the 10Gb "back end"
interfaces / hosts.
But oVirt Engine is NOT referencing this: it lists the bricks on the 1Gb
"management / host" interfaces. Is this a GUI issue? I doubt it, so how do
I correct it?
### Data Volume Example
Name: data
Volume ID: 0ae7b487-8b87-4192-bd30-621d445902fe
Volume Type: Replicate
Replica Count: 3
Number of Bricks: 3
Transport Types: TCP
Maximum no of snapshots: 256
Capacity: 999.51 GiB total, 269.02 GiB used, 730.49 GiB free, 297.91 GiB
Guaranteed free, 78 Deduplication/Compression savings (%)
medusa.penguinpages.local  medusa.penguinpages.local:/gluster_bricks/data/data  25%  OK
odin.penguinpages.local    odin.penguinpages.local:/gluster_bricks/data/data    25%  OK
thor.penguinpages.local    thor.penguinpages.local:/gluster_bricks/data/data    25%  OK
# I have a storage back end on 172.16.101.x, which is 10Gb dedicated to
replication. The peers reflect this:
[root@odin c4918f28-00ce-49f9-91c8-224796a158b9]# gluster peer status
Number of Peers: 2
Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)
Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin c4918f28-00ce-49f9-91c8-224796a158b9]#
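For comparison, a quick way to see which hostnames the bricks themselves
were registered under (a sketch; volume name as used above):
# brick definitions as gluster itself stores them
gluster volume info data | grep -i brick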
--
penguinpages <jeremey.wise(a)gmail.com>
Version 4.4.2.6-1.el8 -Console Error: java.lang.reflect.UndeclaredThrowableException
by penguin pages
Got this message this AM when I tried to log in to the oVirt Engine, which up till now has been working fine.
I can supply my username and password and get the portal to choose "Administration Portal" or "VM Portal".
I have tested both; both give the same response about java.lang.reflect.UndeclaredThrowableException.
I restarted the engine
#
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status
#make sure that the status is shutdown before restarting
hosted-engine --vm-start
hosted-engine --vm-status
#make sure the status is health before leaving maintenance mode
hosted-engine --set-maintenance --mode=none
#
--== Host thor.penguinpages.local (id: 1) status ==--
Host ID : 1
Host timestamp : 70359
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : thor.penguinpages.local
Local maintenance : False
stopped : False
crc32 : 25adf6d0
conf_on_shared_storage : True
local_conf_timestamp : 70359
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=70359 (Wed Sep 30 09:35:22 2020)
host-id=1
score=3400
vm_conf_refresh_time=70359 (Wed Sep 30 09:35:22 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host medusa.penguinpages.local (id: 3) status ==--
Host ID : 3
Host timestamp : 92582
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : medusa.penguinpages.local
Local maintenance : False
stopped : False
crc32 : 623359d2
conf_on_shared_storage : True
local_conf_timestamp : 92582
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=92582 (Wed Sep 30 09:35:25 2020)
host-id=3
score=3400
vm_conf_refresh_time=92582 (Wed Sep 30 09:35:25 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
##
I downloaded and installed the key from the portal, thinking that may have been the issue; it was not.
I googled around / searched the forum and nothing jumped out (the only hit I found in the forum was https://lists.ovirt.org/pipermail/users/2015-June/033421.html, but with no note about a fix).
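For reference, the server-side stack trace behind that UI exception usually lands in the stock engine logs on the engine VM (default paths shown; a sketch only):
# on the engine VM, check the most recent UI and engine errors
tail -n 200 /var/log/ovirt-engine/ui.log
grep -i error /var/log/ovirt-engine/engine.log | tail -n 50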
update to 4.4 fails with "Domain format is different from master storage domain format" (v4.3 cluster with V4 NFS storage domains)
by Sergey Kulikov
Hello, I'm trying to update our hosted-engine oVirt from 4.3.10 to version 4.4, and everything goes fine until
hosted-engine --deploy tries to add the new hosted_storage domain. We have NFS storage domains, and it
fails with this error:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Domain format is different from master storage domain format]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Domain format is different from master storage domain format]\". HTTP response code is 400."}
It looks like the storage domains in the data center should have been upgraded to V5 when the DC and cluster
compatibility version was updated to 4.3, but that was apparently only implemented in oVirt 4.3.3, and this
setup was updated from 4.2 to 4.3 before 4.3.3 was released, so I ended up with 4.3 DCs and clusters
with V4 format storage domains.
Is there any way to convert V4 to V5 (there are running VMs on them) to be able to upgrade to 4.4?
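For reference, a hedged way to check the on-disk format of an existing NFS domain from a host that has it
mounted (the path follows the usual /rhev layout; the server, export and UUID parts are placeholders):
# V4 domains report VERSION=4, V5 domains report VERSION=5
grep VERSION /rhev/data-center/mnt/<nfs_server>:_<export_path>/<storage_domain_uuid>/dom_md/metadata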
--