Import VM from OVA not working
by kiv@intercom.pro
Hi all,
I created the directory /var/lib/exports/import, put the OVA file there and ran chown 36:36 /var/lib/exports/import/VM.ova.
Then, in the oVirt admin portal, I chose Import VM with OVA as the source and entered /var/lib/exports/import/VM.ova as the file path.
The import fails with this error:
Failed to load VM configuration from OVA file: /var/lib/exports/import/VM.ova
The ovirt-query-ova-ansible log file shows:
2019-01-18 10:26:25,018 p=8066 u=ovirt | Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1547789184.32-243441920380227/query_ova.py", line 59, in <module>
ovf = get_ovf_from_dir(ova_path, sys.argv[2])
File "/root/.ansible/tmp/ansible-tmp-1547789184.32-243441920380227/query_ova.py", line 29, in get_ovf_from_dir
files = os.listdir(ova_path)
OSError: [Errno 2] No such file or directory: '/var/lib/exports/import/VM.ova'
Yet the file is there:
ls -la /var/lib/exports/import/
-rw-r--r--. 1 vdsm kvm 4165205504 Jan 17 22:51 VM.ova
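The query script in the traceback simply does os.listdir() on the given path, so presumably that directory has to exist on whichever host the engine picks to run the Ansible query, not only on the machine where I ran chown (an assumption; a quick check on that host would be something like):

# run on the host selected in the Import dialog
ls -ld /var/lib/exports/import
ls -l /var/lib/exports/import/VM.ova
sudo -u vdsm test -r /var/lib/exports/import/VM.ova && echo "readable by vdsm"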
ovirt node ng 4.3.0 rc1 upgrade fails
by Jorick Astrego
Hi,
Trying to update oVirt Node 4.2.7.1 to 4.3.0, but it fails with the following dependency errors:
"Loaded plugins: enabled_repos_upload, fastestmirror,
imgbased-persist,\n : package_upload, product-id,
search-disabled-repos, subscription-\n : manager,
vdsmupgrade\nThis system is not registered with an entitlement
server. You can use subscription-manager to register.\nLoading
mirror speeds from cached hostfile\n * ovirt-4.3-epel:
ftp.nluug.nl\nResolving Dependencies\n--> Running transaction
check\n---> Package ovirt-host.x86_64 0:4.2.3-1.el7 will be
updated\n---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an
update\n--> Processing Dependency: ovirt-host-dependencies =
4.3.0-1.el7 for package: ovirt-host-4.3.0-1.el7.x86_64\n-->
Processing Dependency: aide for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: openscap
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: openscap-utils for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: pam_pkcs11
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: scap-security-guide for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Running transaction check\n--->
Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update\n-->
Processing Dependency: aide for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: openscap
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: openscap-utils for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: pam_pkcs11
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: scap-security-guide for package:
ovirt-host-4.3.0-1.el7.x86_64\n---> Package
ovirt-host-dependencies.x86_64 0:4.2.3-1.el7 will be updated\n--->
Package ovirt-host-dependencies.x86_64 0:4.3.0-1.el7 will be an
update\n--> Processing Dependency: vdsm >= 4.30.5 for package:
ovirt-host-dependencies-4.3.0-1.el7.x86_64\n--> Processing
Dependency: vdsm-client >= 4.30.5 for package:
ovirt-host-dependencies-4.3.0-1.el7.x86_64\n--> Running transaction
check\n---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an
update\n--> Processing Dependency: aide for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: openscap
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: openscap-utils for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: pam_pkcs11
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: scap-security-guide for package:
ovirt-host-4.3.0-1.el7.x86_64\n---> Package vdsm.x86_64
0:4.20.43-1.el7 will be updated\n--> Processing Dependency: vdsm =
4.20.43-1.el7 for package:
vdsm-hook-ethtool-options-4.20.43-1.el7.noarch\n--> Processing
Dependency: vdsm = 4.20.43-1.el7 for package:
vdsm-gluster-4.20.43-1.el7.x86_64\n--> Processing Dependency: vdsm =
4.20.43-1.el7 for package:
vdsm-hook-vmfex-dev-4.20.43-1.el7.noarch\n--> Processing Dependency:
vdsm = 4.20.43-1.el7 for package:
vdsm-hook-fcoe-4.20.43-1.el7.noarch\n---> Package vdsm.x86_64
0:4.30.5-1.el7 will be an update\n--> Processing Dependency:
vdsm-http = 4.30.5-1.el7 for package: vdsm-4.30.5-1.el7.x86_64\n-->
Processing Dependency: vdsm-jsonrpc = 4.30.5-1.el7 for package:
vdsm-4.30.5-1.el7.x86_64\n--> Processing Dependency: vdsm-python =
4.30.5-1.el7 for package: vdsm-4.30.5-1.el7.x86_64\n--> Processing
Dependency: qemu-kvm-rhev >= 10:2.12.0-18.el7_6.1 for package:
vdsm-4.30.5-1.el7.x86_64\n---> Package vdsm-client.noarch
0:4.20.43-1.el7 will be updated\n---> Package vdsm-client.noarch
0:4.30.5-1.el7 will be an update\n--> Processing Dependency:
vdsm-api = 4.30.5-1.el7 for package:
vdsm-client-4.30.5-1.el7.noarch\n--> Processing Dependency:
vdsm-yajsonrpc = 4.30.5-1.el7 for package:
vdsm-client-4.30.5-1.el7.noarch\n--> Running transaction check\n--->
Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an update\n-->
Processing Dependency: aide for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: openscap
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: openscap-utils for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: pam_pkcs11
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: scap-security-guide for package:
ovirt-host-4.3.0-1.el7.x86_64\n---> Package qemu-kvm-ev.x86_64
10:2.10.0-21.el7_5.7.1 will be updated\n---> Package
qemu-kvm-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update\n-->
Processing Dependency: qemu-kvm-common-ev = 10:2.12.0-18.el7_6.1.1
for package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n-->
Processing Dependency: qemu-img-ev = 10:2.12.0-18.el7_6.1.1 for
package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n--> Processing
Dependency: libibumad.so.3()(64bit) for package:
10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n--> Processing
Dependency: libgbm.so.1()(64bit) for package:
10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n--> Processing
Dependency: libepoxy.so.0()(64bit) for package:
10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n---> Package
vdsm-api.noarch 0:4.20.43-1.el7 will be updated\n---> Package
vdsm-api.noarch 0:4.30.5-1.el7 will be an update\n---> Package
vdsm-gluster.x86_64 0:4.20.43-1.el7 will be updated\n---> Package
vdsm-gluster.x86_64 0:4.30.5-1.el7 will be an update\n---> Package
vdsm-hook-ethtool-options.noarch 0:4.20.43-1.el7 will be
updated\n---> Package vdsm-hook-ethtool-options.noarch
0:4.30.5-1.el7 will be an update\n---> Package vdsm-hook-fcoe.noarch
0:4.20.43-1.el7 will be updated\n---> Package vdsm-hook-fcoe.noarch
0:4.30.5-1.el7 will be an update\n---> Package
vdsm-hook-vmfex-dev.noarch 0:4.20.43-1.el7 will be updated\n--->
Package vdsm-hook-vmfex-dev.noarch 0:4.30.5-1.el7 will be an
update\n---> Package vdsm-http.noarch 0:4.20.43-1.el7 will be
updated\n---> Package vdsm-http.noarch 0:4.30.5-1.el7 will be an
update\n---> Package vdsm-jsonrpc.noarch 0:4.20.43-1.el7 will be
updated\n---> Package vdsm-jsonrpc.noarch 0:4.30.5-1.el7 will be an
update\n---> Package vdsm-python.noarch 0:4.20.43-1.el7 will be
updated\n---> Package vdsm-python.noarch 0:4.30.5-1.el7 will be an
update\n--> Processing Dependency: vdsm-common = 4.30.5-1.el7 for
package: vdsm-python-4.30.5-1.el7.noarch\n--> Processing Dependency:
vdsm-network = 4.30.5-1.el7 for package:
vdsm-python-4.30.5-1.el7.noarch\n---> Package vdsm-yajsonrpc.noarch
0:4.20.43-1.el7 will be updated\n---> Package vdsm-yajsonrpc.noarch
0:4.30.5-1.el7 will be an update\n--> Running transaction
check\n---> Package ovirt-host.x86_64 0:4.3.0-1.el7 will be an
update\n--> Processing Dependency: aide for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: openscap
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: openscap-utils for package:
ovirt-host-4.3.0-1.el7.x86_64\n--> Processing Dependency: pam_pkcs11
for package: ovirt-host-4.3.0-1.el7.x86_64\n--> Processing
Dependency: scap-security-guide for package:
ovirt-host-4.3.0-1.el7.x86_64\n---> Package qemu-img-ev.x86_64
10:2.10.0-21.el7_5.7.1 will be updated\n---> Package
qemu-img-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update\n--->
Package qemu-kvm-common-ev.x86_64 10:2.10.0-21.el7_5.7.1 will be
updated\n---> Package qemu-kvm-common-ev.x86_64
10:2.12.0-18.el7_6.1.1 will be an update\n---> Package
qemu-kvm-ev.x86_64 10:2.12.0-18.el7_6.1.1 will be an update\n-->
Processing Dependency: libibumad.so.3()(64bit) for package:
10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n--> Processing
Dependency: libgbm.so.1()(64bit) for package:
10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n--> Processing
Dependency: libepoxy.so.0()(64bit) for package:
10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64\n---> Package
vdsm-common.noarch 0:4.20.43-1.el7 will be updated\n---> Package
vdsm-common.noarch 0:4.30.5-1.el7 will be an update\n---> Package
vdsm-network.x86_64 0:4.20.43-1.el7 will be updated\n---> Package
vdsm-network.x86_64 0:4.30.5-1.el7 will be an update\n--> Finished
Dependency Resolution\n You could try using --skip-broken to work
around the problem\n You could try running: rpm -Va --nofiles
--nodigest\nUploading Enabled Repositories Report\nLoaded plugins:
fastestmirror, product-id, subscription-manager\nThis system is not
registered with an entitlement server. You can use
subscription-manager to register.\n"
]
}
MSG:
2019-01-11 14:25:52,185 [INFO] yum:52830:MainThread
@connection.py:871 - Connection built:
host=subscription.rhsm.redhat.com port=443 handler=/subscription
auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False
2019-01-11 14:25:52,186 [INFO] yum:52830:MainThread @repolib.py:494
- repos updated: Repo updates
Total repo updates: 0
Updated
<NONE>
Added (new)
<NONE>
Deleted
<NONE>
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64 (ovirt-4.3-centos-qemu-ev)
           Requires: libibumad.so.3()(64bit)
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64 (ovirt-4.3-centos-qemu-ev)
           Requires: libgbm.so.1()(64bit)
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
           Requires: openscap-utils
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
           Requires: openscap
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
           Requires: pam_pkcs11
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
           Requires: aide
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.1.1.x86_64 (ovirt-4.3-centos-qemu-ev)
           Requires: libepoxy.so.0()(64bit)
Error: Package: ovirt-host-4.3.0-1.el7.x86_64 (ovirt-4.3-pre)
           Requires: scap-security-guide
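All of the unresolved requirements above look like ordinary CentOS 7 packages and libraries rather than oVirt ones, so it seems the node cannot resolve them from the CentOS base repos. A rough way to check (a sketch, assuming the stock repo setup; on oVirt Node the enabled set may well differ):

yum repolist enabled
# these should normally come from the CentOS base/updates repos
yum provides openscap openscap-utils aide pam_pkcs11 scap-security-guide
# and the missing sonames from libibumad, mesa-libgbm and libepoxy
yum provides 'libibumad.so.3()(64bit)' 'libgbm.so.1()(64bit)' 'libepoxy.so.0()(64bit)'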
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
ovirtmgmt always out of sync on ovirt node
by Jorick Astrego
Hi,
We're switching from CentOS 7 hosts to oVirt Node NG (tried 4.2.7, 4.3rc1 and 4.3rc2), and after adding them to oVirt (currently on 4.3rc2) the ovirtmgmt interface is always out of sync.
I tried synchronizing the network and refreshing capabilities. I also tried removing ovirtmgmt from the interface and adding it to a bond, but then I get this error:
Cannot setup Networks. The following Network definitions on the
Network Interface are different than those on the Logical Network.
Please synchronize the Network Interface before editing network
ovirtmgmt. The non-synchronized values are: ${OUTAVERAGELINKSHARE}
${HOST_OUT_OF_SYNC} - null, ${DC_OUT_OF_SYNC} - 50
I can set up the bond at install time so that ovirtmgmt uses it and the host is usable, but I'm hesitant to do this in production as I cannot change the interface any more.
Regards,
Jorick Astrego
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
Converting Thin/Dependent to Thick/Independent
by Luca 'remix_tj' Lorenzetto
Hello,
I'm looking for a way to speed up the deployment of new VMs. I've seen that if, under "Resource Allocation", I set "Storage Allocation" to Thin, VM creation is very quick.
But this makes the machine dependent on the template, and if for any reason the template gets corrupted or becomes inaccessible, the VM will not boot.
Is there any way to convert a VM from dependent to independent while it is online?
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
oVirt 4.3 RC2 and libgfapi
by Gianluca Cecchi
I just installed a single-host HCI setup with Gluster, with only the engine VM running.
Is the situation below expected?
# virsh -r list
Id Name State
----------------------------------------------------
2 HostedEngine running
and
# virsh -r dumpxml 2
. . .
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source
file='/var/run/vdsm/storage/e4eb6832-e0f6-40ee-902f-f301e5a3a643/fc34d770-9318-4539-9233-bfb1c5d68d14/b151557e-f1a2-45cb-b5c9-12c1f470467e'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>fc34d770-9318-4539-9233-bfb1c5d68d14</serial>
<alias name='ua-fc34d770-9318-4539-9233-bfb1c5d68d14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</disk>
. . .
where
# ll /var/run/vdsm/storage/e4eb6832-e0f6-40ee-902f-f301e5a3a643/
total 24
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
39df7b45-4932-4bfe-b69e-4fb2f8872f4f ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/39df7b45-4932-4bfe-b69e-4fb2f8872f4f
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
5ba6cd9e-b78d-4de4-9b7f-9688365128bf ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/5ba6cd9e-b78d-4de4-9b7f-9688365128bf
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 15:56
8b8e41e0-a875-4204-8ab1-c10214a49f5c ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/8b8e41e0-a875-4204-8ab1-c10214a49f5c
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 15:56
c21a62ba-73d2-4914-940f-cee6a67a1b08 ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/c21a62ba-73d2-4914-940f-cee6a67a1b08
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
fc34d770-9318-4539-9233-bfb1c5d68d14 ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fc34d770-9318-4539-9233-bfb1c5d68d14
lrwxrwxrwx. 1 vdsm kvm 133 Jan 16 14:44
fd73354d-699b-478e-893c-e2a0bd1e6cbb ->
/rhev/data-center/mnt/glusterSD/10.10.10.216:
_engine/e4eb6832-e0f6-40ee-902f-f301e5a3a643/images/fd73354d-699b-478e-893c-e2a0bd1e6cbb
So the hosted engine is not using libgfapi?
Also, on the hosted engine:
[root@hciengine ~]# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 4.1
LibgfApiSupported: false version: 4.2
LibgfApiSupported: false version: 4.3
[root@hciengine ~]#
Consequently, if I import a CentOS 7 Atomic host image from the Glance repo as a template and create a new VM from it, when running that VM I get:
# virsh -r dumpxml 3
. . .
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source file='/rhev/data-center/mnt/glusterSD/10.10.10.216:
_data/601d725a-1622-4dc8-a24d-2dba72ddf6ae/images/e4f92226-0f56-4822-a622-d1ebff41df9f/c6b2e076-1519-433e-9b37-2005c9ce6d2e'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>e4f92226-0f56-4822-a622-d1ebff41df9f</serial>
<boot order='1'/>
<alias name='ua-e4f92226-0f56-4822-a622-d1ebff41df9f'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</disk>
. . .
I remember there was an "old" bug open that caused this default of not enabling libgfapi.
Does this mean it has not been solved yet?
If I remember correctly, the Bugzilla entry was this one, related to HA:
https://bugzilla.redhat.com/show_bug.cgi?id=1484227
which is still in NEW status... after almost 2 years.
Is this the only one open?
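For reference, the same tool used above can also set the value; if one wants to experiment with libgfapi despite the HA / live-snapshot limitation tracked in that bug, something along these lines on the engine VM should flip the default (a sketch, not a recommendation):

engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine
# only VMs (re)started afterwards would pick up a network/gluster disk source in their libvirt XML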
Thanks,
Gianluca
ovirt 4.3 / Adding NFS storage issue
by Devin Acosta
I installed the latest 4.3 release candidate and tried to add an NFS mount to the Data Center, but it errors out in the GUI with “Error while executing action New NFS Storage Domain: Invalid parameter”, and in vdsm.log I see it is passing “block_size=None”. This happens regardless of whether I use NFS v3 or v4.
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] START
createStorageDomain(storageType=1,
sdUUID=u'b30c64c4-4b1f-4ebf-828b-e54c330ae84c', domainName=u'nfsdata',
typeSpecificArg=u'192.168.19.155:/data/data', domClass=1, domVersion=u'4',
block_size=None, max_hosts=2000, options=None)
from=::ffff:192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:48)
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
from=::ffff:192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)
2019-01-12 20:37:58,241-0700 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2583, in createStorageDomain
    alignment = clusterlock.alignment(block_size, max_hosts)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 661, in alignment
    raise se.InvalidParameterException('block_size', block_size)
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') aborting: Task is aborted:
u"Invalid parameter: 'block_size=None'" - code 100 (task:1181)
2019-01-12 20:37:58,242-0700 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
(dispatcher:81)
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call StorageDomain.create failed (error 1000) in 0.00 seconds (__init__:312)
2019-01-12 20:37:58,541-0700 INFO (jsonrpc/1) [vdsm.api] START
disconnectStorageServer(domType=1,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'tpgt': u'1',
u'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350', u'connection':
u'192.168.19.155:/data/data', u'iqn': u'', u'user': u'', u'ipv6_enabled':
u'false', u'protocol_version': u'auto', u'password': '********', u'port':
u''}], options=None) from=::ffff:192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:48)
2019-01-12 20:37:58,542-0700 INFO (jsonrpc/1) [storage.Mount] unmounting
/rhev/data-center/mnt/192.168.19.155:_data_data (mount:212)
2019-01-12 20:37:59,087-0700 INFO (jsonrpc/1) [vdsm.api] FINISH
disconnectStorageServer return={'statuslist': [{'status': 0, 'id':
u'db7d16c8-7497-42db-8a75-81cb7f9d3350'}]}
from=::ffff:192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:54)
2019-01-12 20:37:59,089-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call StoragePool.disconnectStorageServer succeeded in 0.55 seconds
(__init__:312)
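Judging from the traceback, the failure is simply the engine passing block_size=None down to clusterlock.alignment(), so one sanity check (a sketch, assuming an engine/host version mismatch rather than a genuine RC bug, and assuming the schema file sits at its usual location) is to confirm which vdsm the host actually runs and whether its API schema already describes the block_size argument:

rpm -q vdsm vdsm-api vdsm-client
grep -n -A 10 'StorageDomain.create\|createStorageDomain' /usr/lib/python2.7/site-packages/vdsm/api/vdsm-api.yml | head -60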
ovirt node ng 4.3.0 rc1 and HCI single host problems
by Gianluca Cecchi
Let's start a new thread more focused on the subject.
I'm testing deployment of single-host HCI using the oVirt Node NG CentOS 7 ISO.
I was able to complete the gluster setup via cockpit with these
modifications:
1) I wanted to check things via ssh and found that the *key files under /etc/ssh/ had permissions that were too open, so the ssh daemon didn't start after installing the node from the ISO; changing them to 600 and restarting the service fixed it (short fix below).
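Concretely, the fix was just along these lines:

chmod 600 /etc/ssh/ssh_host_*_key
systemctl restart sshd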
2) I used a single disk configured as JBOD, so I chose that option instead of the default proposed RAID6.
But the playbook failed with:
. . .
PLAY [gluster_servers]
*********************************************************
TASK [Create LVs with specified size for the VGs]
******************************
changed: [192.168.124.211] => (item={u'lv': u'gluster_thinpool_sdb',
u'size': u'50GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'})
PLAY RECAP
*********************************************************************
192.168.124.211 : ok=1 changed=1 unreachable=0
failed=0
Ignoring errors...
Error: Section diskcount not found in the configuration file
Reading inside the playbooks involved here:
/usr/share/gdeploy/playbooks/auto_lvcreate_for_gluster.yml
/usr/share/gdeploy/playbooks/vgcreate.yml
and the snippet
- name: Convert the logical volume
  lv: action=convert thinpool={{ item.vg }}/{{item.pool }}
      poolmetadata={{ item.vg }}/'metadata' poolmetadataspare=n
      vgname={{ item.vg }} disktype="{{disktype}}"
      diskcount="{{ diskcount }}"
      stripesize="{{stripesize}}"
      chunksize="{{ chunksize | default('') }}"
      snapshot_reserve="{{ snapshot_reserve }}"
  with_items: "{{ lvpools }}"
  ignore_errors: yes
I simply edited gdeploy.conf via the GUI button, adding this section under the [disktype] one:
[diskcount]
1
Then I cleaned up the LV/VG/PV and the gdeploy step completed successfully.
3) At the first stage of the Ansible deploy there is this failed command; it doesn't seem to prevent completion, but I haven't understood it:
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
failed: [192.168.124.211] (item=vdsm-tool configure --force) => {"changed":
true, "cmd": "vdsm-tool configure --force", "delta": "0:00:01.475528",
"end": "2019-01-11 10:59:55.147601", "item": "vdsm-tool configure --force",
"msg": "non-zero return code", "rc": 1, "start": "2019-01-11
10:59:53.672073", "stderr": "Traceback (most recent call last):\n File
\"/usr/bin/vdsm-tool\", line 220, in main\n return
tool_command[cmd][\"command\"](*args)\n File
\"/usr/lib/python2.7/site-packages/vdsm/tool/__init__.py\", line 40, in
wrapper\n func(*args, **kwargs)\n File
\"/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py\", line 143,
in configure\n _configure(c)\n File
\"/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py\", line 90, in
_configure\n getattr(module, 'configure', lambda: None)()\n File
\"/usr/lib/python2.7/site-packages/vdsm/tool/configurators/bond_defaults.py\",
line 39, in configure\n sysfs_options_mapper.dump_bonding_options()\n
File
\"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py\",
line 48, in dump_bonding_options\n with
open(sysfs_options.BONDING_DEFAULTS, 'w') as f:\nIOError: [Errno 2] No such
file or directory: '/var/run/vdsm/bonding-defaults.json'", "stderr_lines":
["Traceback (most recent call last):", " File \"/usr/bin/vdsm-tool\", line
220, in main", " return tool_command[cmd][\"command\"](*args)", " File
\"/usr/lib/python2.7/site-packages/vdsm/tool/__init__.py\", line 40, in
wrapper", " func(*args, **kwargs)", " File
\"/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py\", line 143,
in configure", " _configure(c)", " File
\"/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py\", line 90, in
_configure", " getattr(module, 'configure', lambda: None)()", " File
\"/usr/lib/python2.7/site-packages/vdsm/tool/configurators/bond_defaults.py\",
line 39, in configure", " sysfs_options_mapper.dump_bonding_options()",
" File
\"/usr/lib/python2.7/site-packages/vdsm/network/link/bond/sysfs_options_mapper.py\",
line 48, in dump_bonding_options", " with
open(sysfs_options.BONDING_DEFAULTS, 'w') as f:", "IOError: [Errno 2] No
such file or directory: '/var/run/vdsm/bonding-defaults.json'"], "stdout":
"\nChecking configuration status...\n\nabrt is already configured for
vdsm\nlvm is configured for vdsm\nlibvirt is already configured for
vdsm\nSUCCESS: ssl configured to true. No conflicts\nManual override for
multipath.conf detected - preserving current configuration\nThis manual
override for multipath.conf was based on downrevved template. You are
strongly advised to contact your support representatives\n\nRunning
configure...\nReconfiguration of abrt is done.\nReconfiguration of passwd
is done.\nReconfiguration of libvirt is done.", "stdout_lines": ["",
"Checking configuration status...", "", "abrt is already configured for
vdsm", "lvm is configured for vdsm", "libvirt is already configured for
vdsm", "SUCCESS: ssl configured to true. No conflicts", "Manual override
for multipath.conf detected - preserving current configuration", "This
manual override for multipath.conf was based on downrevved template. You
are strongly advised to contact your support representatives", "", "Running
configure...", "Reconfiguration of abrt is done.", "Reconfiguration of
passwd is done.", "Reconfiguration of libvirt is done."]}
to retry, use: --limit @/tmp/tmpQXe2el/shell_cmd.retry
PLAY RECAP
*********************************************************************
192.168.124.211 : ok=0 changed=0 unreachable=0
failed=1
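Since the traceback only complains about /var/run/vdsm/bonding-defaults.json not being there (i.e. the runtime directory is missing at that point), re-running the same step by hand after creating it may be enough; a guess based on the IOError above:

mkdir -p /var/run/vdsm
vdsm-tool configure --force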
Would it be possible to save the Ansible playbook log in some way even when it completes OK, without going directly to the "successful" page? Or is it stored anyway somewhere on the host's disk?
I then proceeded with the Hosted Engine install/setup, and
4) it fails at the final stages of the local engine VM setup, during host activation:
[ INFO ] TASK [oVirt.hosted-engine-setup : Set Engine public key as
authorized key without validating the TLS/SSL certificates]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Obtain SSO token using
username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Ensure that the target
datacenter is present]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Ensure that the target cluster
is present in the target datacenter]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Enable GlusterFS at cluster
level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Set VLAN ID at datacenter level]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Force host-deploy in offline
mode]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Add host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Wait for the host to be up]
then after several minutes:
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
[{"address": "ov4301.localdomain.local", "affinity_labels": [],
"auto_numa_status": "unknown", "certificate": {"organization":
"localdomain.local", "subject":
"O=localdomain.local,CN=ov4301.localdomain.local"}, "cluster": {"href":
"/ovirt-engine/api/clusters/5e8fea14-158b-11e9-b2f0-00163e29b9f2", "id":
"5e8fea14-158b-11e9-b2f0-00163e29b9f2"}, "comment": "", "cpu": {"speed":
0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices":
[], "external_network_provider_configurations": [], "external_status":
"ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [],
"href": "/ovirt-engine/api/hosts/4202de75-75d3-4dcb-b128-2c4a1d257a15",
"id": "4202de75-75d3-4dcb-b128-2c4a1d257a15", "katello_errata": [],
"kdump_status": "unknown", "ksm": {"enabled": false},
"max_scheduling_memory": 0, "memory": 0, "name":
"ov4301.localdomain.local", "network_attachments": [], "nics": [],
"numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline":
""}, "permissions": [], "port": 54321, "power_management":
{"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true,
"pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority":
5, "status": "none"}, "ssh": {"fingerprint":
"SHA256:iqeQjdWCm15+xe74xEnswrgRJF7JBAWrvsjO/RaW8q8", "port": 22},
"statistics": [], "status": "install_failed",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
"transparent_huge_pages": {"enabled": false}, "type": "ovirt_node",
"unmanaged_networks": [], "update_available": false}]}, "attempts": 120,
"changed": false}
[ INFO ] TASK [oVirt.hosted-engine-setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Give the vm time to flush dirty
buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Copy engine logs]
[ INFO ] TASK [oVirt.hosted-engine-setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Remove temporary entry in
/etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
Looking at the log
/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190111113227-ov4301.localdomain.local-5d387e0d.log
it seems the error is about ovirt-imageio-daemon:
2019-01-11 11:32:26,893+0100 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:863 execute-result: ('/usr/bin/systemctl', 'start',
'ovirt-imageio-daemon.service'), rc=1
2019-01-11 11:32:26,894+0100 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'start',
'ovirt-imageio-daemon.service') stdout:
2019-01-11 11:32:26,895+0100 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start',
'ovirt-imageio-daemon.service') stderr:
Job for ovirt-imageio-daemon.service failed because the control process
exited with error code. See "systemctl status ovirt-imageio-daemon.service"
and "journalctl -xe" for details.
2019-01-11 11:32:26,896+0100 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-PBFI2dyoDO/pythonlib/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/tmp/ovirt-PBFI2dyoDO/otopi-plugins/ovirt-host-deploy/vdsm/packages.py", line 175, in _start
    self.services.state('ovirt-imageio-daemon', True)
  File "/tmp/ovirt-PBFI2dyoDO/otopi-plugins/otopi/services/systemd.py", line 141, in state
    service=name,
RuntimeError: Failed to start service 'ovirt-imageio-daemon'
2019-01-11 11:32:26,898+0100 ERROR otopi.context context._executeMethod:152
Failed to execute stage 'Closing up': Failed to start service
'ovirt-imageio-daemon'
2019-01-11 11:32:26,899+0100 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE closeup METHOD
otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._start
(odeploycons.packages.vdsm.started)
The reason:
[root@ov4301 ~]# systemctl status ovirt-imageio-daemon -l
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service;
disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2019-01-11 11:32:29 CET;
27min ago
Process: 11625 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited,
status=1/FAILURE)
Main PID: 11625 (code=exited, status=1/FAILURE)
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service: main process exited, code=exited,
status=1/FAILURE
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Failed to start oVirt
ImageIO Daemon.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Unit
ovirt-imageio-daemon.service entered failed state.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service failed.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service holdoff time over, scheduling restart.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Stopped oVirt ImageIO
Daemon.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: start request repeated
too quickly for ovirt-imageio-daemon.service
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Failed to start oVirt
ImageIO Daemon.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]: Unit
ovirt-imageio-daemon.service entered failed state.
Jan 11 11:32:29 ov4301.localdomain.local systemd[1]:
ovirt-imageio-daemon.service failed.
[root@ov4301 ~]#
The file /var/log/ovirt-imageio-daemon/daemon.log contains
2019-01-11 10:28:30,191 INFO (MainThread) [server] Starting (pid=3702,
version=1.4.6)
2019-01-11 10:28:30,229 ERROR (MainThread) [server] Service failed
(remote_service=<ovirt_imageio_daemon.server.RemoteService object at
0x7fea9dc88050>, local_service=<ovirt_imageio_daemon.server.LocalService
object at 0x7fea9ca24850>, control_service=None, running=True)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 58, in main
    start(config)
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 99, in start
    control_service = ControlService(config)
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 206, in __init__
    config.tickets.socket, uhttp.UnixWSGIRequestHandler)
  File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
    self.server_bind()
  File "/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/uhttp.py", line 79, in server_bind
    self.socket.bind(self.server_address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
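The bind() failure is again a missing runtime path, this time for the daemon's control socket, so something along these lines might get it running (the socket location is an assumption, to be checked against the daemon's configuration):

grep -r -i sock /etc/ovirt-imageio-daemon/ 2>/dev/null
mkdir -p /var/run/vdsm /var/run/ovirt-imageio-daemon
systemctl start ovirt-imageio-daemon
systemctl status ovirt-imageio-daemon -l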
One potential problem I noticed: on this host I set up eth0 with 192.168.122.x (for ovirtmgmt) and eth1 with 192.168.124.y (for Gluster, even if there is only one host for now, aiming to add the other 2 hosts in a second step), and the libvirt network temporarily created for the local engine VM is also on the 192.168.124.0 network:
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:b8:6b:3c brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:b8:6b:3c brd ff:ff:ff:ff:ff:ff
I can change the Gluster network of this environment and re-test, but would it be possible to make the libvirt network configurable? It seems risky to have a fixed one...
Can I continue from this failed hosted-engine deploy once the reason for the ovirt-imageio-daemon failure is understood, or am I forced to start from scratch?
Supposing I power this host down and then on again, how can I retry without starting from scratch?
Gianluca
Problems uploading ISO
by jerry@tom.com
I'm using the engine-iso-uploader command to upload an ISO, but it fails and gives me the following error:
[root@ha ~]# engine-iso-uploader --verbose -i iso upload delta1.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
DEBUG: API Vendor(None) API Version(4.2.0)
DEBUG: id=029786a6-4686-44a1-a667-b948df945712 address=storage1 path=/iso
Uploading, please wait...
INFO: Start uploading delta1.iso
ERROR: glfs_init failed: Success
[root@ha ~]# ^C
I've been looking for a possible solution but nothing has worked yet. Any guidance on this matter would be appreciated.
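In the meantime I'm thinking of bypassing the uploader and placing the ISO in the domain by hand; a rough sketch, where the mount type and the fixed 11111111-... images directory of the ISO domain are assumptions to verify:

mount -t glusterfs storage1:/iso /mnt
cp delta1.iso /mnt/*/images/11111111-1111-1111-1111-111111111111/
chown 36:36 /mnt/*/images/11111111-1111-1111-1111-111111111111/delta1.iso
umount /mnt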
m1988m(a)tom.com
oVirt 4.3 hosted engine migration
by Maton, Brett
In the updated UI it doesn't seem possible to migrate the hosted engine from
Compute -> Host -> Virtual Machines
anymore, although there does appear to be a 'new' Cancel Migration button.
It's handy to be able to migrate the hosted engine from this view; I normally migrate the hosted engine manually to another host before upgrading, and it's nice to be able to do it all in the same area rather than having to switch to the (all) Virtual Machines view and then back to Hosts.
Any chance this feature could be re-enabled?
Regards,
Brett
oVirt 4.3 RC2 test on single host HCI --> OK
by Gianluca Cecchi
I just tested what's in the subject on a VM (so it was a nested environment) configured with 2 disks: 1x60 GB for the OS and 1x100 GB for Gluster.
I used Node NG from ovirt-node-ng-installer-4.3.0-2019011608.el7.iso, and almost all the problems I had with RC1 here:
https://www.mail-archive.com/users@ovirt.org/msg52870.html
have been solved.
The hosted engine deploy completed successfully and I'm able to access the engine web admin, see the VM and its storage domain, and connect to the console.
The only persisting problem is that I chose JBOD in my disk config (because I have only one disk) and the deploy gives an error about the missing diskcount parameter in the Ansible playbook, so I have to edit the gdeploy conf file from the GUI and add the section
[diskcount]
1
after which all goes well.
Good job guys!
BTW: is it correct that for data and vmstore I only have the volumes configured, while I have to manually create the storage domains?
If I want to extend this initial configuration to 3 nodes, what would be the installation path to follow?
Thanks,
Gianluca