Re: oVirt deploy new HE Host problem
by Marko Vrgotic
Hi Yedidyah and Strahil,
Just double-checking whether you received the issue request and the log files.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com<http://www.activevideo.com>
ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ Hilversum, The Netherlands. The information contained in this message may be legally privileged and confidential. It is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or destroy any copy of this message.
From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
Date: Thursday, 6 May 2021 at 11:43
To: Yedidyah Bar David <didi(a)redhat.com>, Strahil Nikolov <hunter86_bg(a)yahoo.com>, users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: oVirt deploy new HE Host problem
It might come in handy; here is the complete hosted-engine.conf file:
[root@ovirt-sj-03 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
fqdn=ovirt-engine.ictv.com
vm_disk_id=b019c5fa-8fb5-4bfc-8339-f5b7f590a051
sdUUID=054c43fc-1924-4106-9f80-0f2ac62b9886
console=vnc
vmid=66b6d489-ceb8-486a-951a-355e21f13627
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
iqn=
conf_image_UUID=910f445e-31c0-4441-9c82-720901f7f19b
port=
network_test=dns
vm_disk_vol_id=f1ce8ba6-2d3b-4309-bca0-e6a00ce74c75
storage=10.210.13.64:/hosted_engine
gateway=10.210.11.254
ca_subject="C=EN, L=Test, O=Test, CN=Test"
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
nfs_version=auto
bridge=ovirtmgmt
metadata_image_UUID=16b3e5ac-e70b-46e3-bf81-322954fe0b44
mnt_options=
domainType=nfs
password=
vdsm_use_ssl=true
tcp_t_port=
user=
host_id=3
metadata_volume_UUID=b6326e48-a7d2-4cba-af91-441db9f353c2
spUUID=00000000-0000-0000-0000-000000000000
conf_volume_UUID=c518f937-60fe-4fed-a54c-db11328bb507
portal=
lockspace_image_UUID=e08188be-f733-4d5c-9222-a4b4e2228955
lockspace_volume_UUID=081f81c5-b2b2-46d5-9f82-9d9041ccc108
tcp_t_address=
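The file above is plain key=value text; a minimal parser sketch (illustrative only, not part of oVirt) that also flags duplicated keys — note that ca_cert appears twice in the dump above:

```python
def parse_he_conf(text):
    """Parse hosted-engine.conf-style key=value lines; later keys win."""
    conf, dupes = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if key in conf:
            dupes.append(key)  # e.g. ca_cert in the dump above
        conf[key] = value
    return conf, dupes

sample = ("ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem\n"
          "fqdn=ovirt-engine.ictv.com\n"
          "ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem\n")
conf, dupes = parse_he_conf(sample)
print(dupes)  # ['ca_cert']
```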
From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
Date: Thursday, 6 May 2021 at 11:20
To: Yedidyah Bar David <didi(a)redhat.com>, Strahil Nikolov <hunter86_bg(a)yahoo.com>, users(a)ovirt.org <users(a)ovirt.org>
Subject: oVirt deploy new HE Host problem
Hi Strahil and Yedidyah,
As agreed, a short summary: deploying a new HE host fails.
Pre-deploy state:
* Host1 and Host3 form the current HE HA pool
* Host1 and Host3 are unaware of Host2 (see below)
* I am trying to add Host2 to the HE HA pool
* Host2 is fully reinstalled – clean OS
* Host2 was added to oVirt as a regular host
* Host2 is currently in Maintenance mode (waiting for Reinstall with HE Deploy)
[root@ovirt-sj-03 ~]# hosted-engine --vm-status
--== Host ovirt-sj-01.ictv.com (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-sj-01.ictv.com
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : d15bb877
local_conf_timestamp : 3103909
Host timestamp : 3103909
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3103909 (Thu May 6 01:42:06 2021)
host-id=1
score=3400
vm_conf_refresh_time=3103909 (Thu May 6 01:42:06 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host ovirt-sj-03.ictv.com (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt-sj-03.ictv.com
Host ID : 3
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 15801717
local_conf_timestamp : 3106395
Host timestamp : 3106395
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3106395 (Thu May 6 01:42:13 2021)
host-id=3
score=3400
vm_conf_refresh_time=3106395 (Thu May 6 01:42:13 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
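The "Engine status" field in the output above is JSON; a small sketch (names are illustrative, not part of hosted-engine) to pull the engine health out of such a line — "good" only on the host actually running the engine VM:

```python
import json

status_line = '{"health": "good", "vm": "up", "detail": "Up"}'

def engine_health(status_json):
    # `hosted-engine --vm-status` prints one such JSON blob per host.
    return json.loads(status_json).get("health")

print(engine_health(status_line))  # good
```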
Deployment:
I have attached logs from Host2 and Engine – if anything is missing, please let me know.
Kindly awaiting your reply.
You might notice a slight time offset in the Host2 logs – after re-provisioning I did not set the correct timezone.
Importing VM fails with "No space left on device"
by j.velasco@outlook.com
Hello List,
I am facing the following issue when I try to import a VM from a KVM host to my oVirt (4.4.5.11-1.el8).
The import was done through the GUI using the KVM provider option.
-- Log1:
# cat /var/log/vdsm/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T124121.log
[ 0.0] preparing for copy
[ 0.0] Copying disk 1/1 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f
Traceback (most recent call last):
File "/usr/libexec/vdsm/kvm2ovirt", line 23, in <module>
kvm2ovirt.main()
File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 277, in main
handle_volume(con, diskno, src, dst, options)
File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 228, in handle_volume
download_disk(sr, estimated_size, None, dst, options.bufsize)
File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 169, in download_disk
op.run()
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 57, in run
res = self._run()
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 163, in _run
self._write_chunk(count)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 188, in _write_chunk
n = self._dst.write(v)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/file.py", line 88, in write
return util.uninterruptible(self._fio.write, buf)
File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/util.py", line 20, in uninterruptible
return func(*args)
OSError: [Errno 28] No space left on device
-- Log2:
# cat /var/log/vdsm/vdsm.log
2021-05-07 10:29:49,813-0500 DEBUG (v2v/57f84423) [root] START thread <Thread(v2v/57f84423, started daemon 140273162123008)> (func=<bound method ImportVm._run of <vdsm.v2v.ImportVm object at 0x7f946051c5c0>>, args=(), kwargs={}) (concurrent:258)
2021-05-07 10:29:49,813-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' starting import (v2v:880)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') moving from state preparing -> state preparing (task:624)
2021-05-07 10:29:49,814-0500 INFO (v2v/57f84423) [vdsm.api] START prepareImage(sdUUID='cc9fae8e-b714-44cf-9dac-3a83a15b0455', spUUID='24d9d2fa-98f9-11eb-aea7-00163e09cc71', imgUUID='226cc137-1992-4246-9484-80a1bfb5e9f7', leafUUID='847bc460-1b54-4756-8ced-4b969c399900', allowIllegal=False) from=internal, task_id=58a7bdc0-0f7e-4307-92ba-040f1a272721 (api:48)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to register resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' for lock type 'shared' (resourceManager:474)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free. Now locking as 'shared' (1 active user) (resourceManager:531)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Request] (ResName='00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', ReqID='b2ebef3c-8b3d-4429-b9da-b6f3af2c9ac4') Granted request (resourceManager:221)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') _resourcesAcquired: 00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455 (shared) (task:856)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') ref 1 aborting False (task:1008)
2021-05-07 10:29:49,815-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:29:49,900-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:49,902-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:29:49,904-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:49,916-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000228452 s, 2.2 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:49,916-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000228452 s, 2.2 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:49,917-0500 INFO (v2v/57f84423) [storage.LVM] Activating lvs: vg=cc9fae8e-b714-44cf-9dac-3a83a15b0455 lvs=['847bc460-1b54-4756-8ced-4b969c399900'] (lvm:1738)
2021-05-07 10:29:49,917-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --autobackup n --available y cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:153)
2021-05-07 10:29:50,034-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:50,035-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating image run directory '/run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7' (blockSD:1362)
2021-05-07 10:29:50,035-0500 INFO (v2v/57f84423) [storage.fileUtils] Creating directory: /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 mode: None (fileUtils:201)
2021-05-07 10:29:50,036-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating symlink from /dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 to /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 (blockSD:1367)
2021-05-07 10:29:50,037-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:29:50,119-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:50,121-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:29:50,122-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:50,135-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000212608 s, 2.4 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:50,135-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000212608 s, 2.4 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:50,136-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000248951 s, 2.1 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000248951 s, 2.1 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/qemu-img info --output json -U /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:211)
2021-05-07 10:29:50,161-0500 DEBUG (v2v/57f84423) [root] SUCCESS: <err> = b''; <rc> = 0 (commands:224)
2021-05-07 10:29:50,162-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating symlink from /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7 (blockSD:1332)
2021-05-07 10:29:50,162-0500 DEBUG (v2v/57f84423) [storage.StorageDomain] path to image directory already exists: /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7 (blockSD:1338)
2021-05-07 10:29:50,163-0500 INFO (v2v/57f84423) [vdsm.api] FINISH prepareImage return={'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900'}, 'imgVolumesInfo': [{'domainID': 'cc9fae8e-b714-44cf-9dac-3a83a15b0455', 'imageID': '226cc137-1992-4246-9484-80a1bfb5e9f7', 'volumeID': '847bc460-1b54-4756-8ced-4b969c399900', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'leasePath': '/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/leases', 'leaseOffset': 108003328}]} from=internal, task_id=58a7bdc0-0f7e-4307-92ba-040f1a272721 (api:54)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') finished: {'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900'}, 'imgVolumesInfo': [{'domainID': 'cc9fae8e-b714-44cf-9dac-3a83a15b0455', 'imageID': '226cc137-1992-4246-9484-80a1bfb5e9f7', 'volumeID': '847bc460-1b54-4756-8ced-4b969c399900', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'leasePath': '/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/leases', 'leaseOffset': 108003328}]} (task:1210)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') moving from state finished -> state finished (task:624)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Owner] Owner.releaseAll resources %s (resourceManager:742)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to release resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (resourceManager:546)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Released resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (0 active users) (resourceManager:564)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free, finding out if anyone is waiting for it. (resourceManager:570)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] No one is waiting for resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', Clearing records. (resourceManager:578)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') ref 0 aborting False (task:1008)
2021-05-07 10:29:50,164-0500 INFO (v2v/57f84423) [root] Storing import log at: '/var/log/vdsm/import/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T102950.log' (v2v:436)
2021-05-07 10:29:50,170-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/libexec/vdsm/kvm2ovirt --uri qemu+tcp://root@172.16.0.61/system --bufsize 1048576 --source /var/lib/libvirt/images/vm_powervp-si.qcow2 --dest /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 --storage-type volume --vm-name vm_powervp-si --allocation sparse (cwd None) (v2v:1511)
2021-05-07 10:29:50,175-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 tee /var/log/vdsm/import/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T102950.log (cwd None) (v2v:1511)
2021-05-07 10:29:50,274-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copying disk 1/1 (v2v:912)
2021-05-07 10:29:50,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:29:51,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:29:52,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:30:14,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:15,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:16,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:39,288-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 20/100 (v2v:921)
2021-05-07 10:30:40,288-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 20/100 (v2v:921)
2021-05-07 10:30:46,281-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 100/100 (v2v:921)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') moving from state preparing -> state preparing (task:624)
2021-05-07 10:30:46,407-0500 INFO (v2v/57f84423) [vdsm.api] START teardownImage(sdUUID='cc9fae8e-b714-44cf-9dac-3a83a15b0455', spUUID='24d9d2fa-98f9-11eb-aea7-00163e09cc71', imgUUID='226cc137-1992-4246-9484-80a1bfb5e9f7', volUUID=None) from=internal, task_id=ecbcd983-2f33-45a4-b962-7fc4b9342822 (api:48)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to register resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' for lock type 'shared' (resourceManager:474)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free. Now locking as 'shared' (1 active user) (resourceManager:531)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Request] (ResName='00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', ReqID='1744a1a5-d543-4528-be79-c752bce08263') Granted request (resourceManager:221)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') _resourcesAcquired: 00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455 (shared) (task:856)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') ref 1 aborting False (task:1008)
2021-05-07 10:30:46,408-0500 INFO (v2v/57f84423) [storage.StorageDomain] Removing image run directory '/run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7' (blockSD:1386)
2021-05-07 10:30:46,408-0500 INFO (v2v/57f84423) [storage.fileUtils] Removing directory: /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 (fileUtils:182)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:30:46,510-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:30:46,511-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:30:46,512-0500 INFO (v2v/57f84423) [storage.LVM] Deactivating lvs: vg=cc9fae8e-b714-44cf-9dac-3a83a15b0455 lvs=['847bc460-1b54-4756-8ced-4b969c399900'] (lvm:1746)
2021-05-07 10:30:46,512-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --autobackup n --available n cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:153)
2021-05-07 10:30:46,629-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:30:46,630-0500 INFO (v2v/57f84423) [vdsm.api] FINISH teardownImage return=None from=internal, task_id=ecbcd983-2f33-45a4-b962-7fc4b9342822 (api:54)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') finished: None (task:1210)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') moving from state finished -> state finished (task:624)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Owner] Owner.releaseAll resources %s (resourceManager:742)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to release resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (resourceManager:546)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Released resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (0 active users) (resourceManager:564)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free, finding out if anyone is waiting for it. (resourceManager:570)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] No one is waiting for resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', Clearing records. (resourceManager:578)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') ref 0 aborting False (task:1008)
2021-05-07 10:30:46,631-0500 ERROR (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' failed (v2v:869)
2021-05-07 10:30:46,635-0500 DEBUG (v2v/57f84423) [root] FINISH thread <Thread(v2v/57f84423, stopped daemon 140273162123008)> (concurrent:261)
-- Details of the environment:
# df -Ph
Filesystem Size Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 4.0K 32G 1% /dev/shm
tmpfs 32G 26M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0+1 584G 11G 573G 2% /
/dev/mapper/onn-home 1014M 40M 975M 4% /home
/dev/mapper/onn-tmp 1014M 40M 975M 4% /tmp
/dev/sda2 1014M 479M 536M 48% /boot
/dev/mapper/onn-var 30G 3.2G 27G 11% /var
/dev/sda1 599M 6.9M 592M 2% /boot/efi
/dev/mapper/onn-var_log 8.0G 498M 7.6G 7% /var/log
/dev/mapper/onn-var_crash 10G 105M 9.9G 2% /var/crash
/dev/mapper/onn-var_log_audit 2.0G 84M 2.0G 5% /var/log/audit
tmpfs 6.3G 0 6.3G 0% /run/user/0
/dev/mapper/da3e3aff--0bfc--42cd--944f--f6145c50134a-master 976M 1.3M 924M 1% /rhev/data-center/mnt/blockSD/da3e3aff-0bfc-42cd-944f-f6145c50134a/master
/dev/mapper/onn-lv_iso 12G 11G 1.6G 88% /rhev/data-center/mnt/_dev_mapper_onn-lv__iso
172.19.1.80:/exportdomain 584G 11G 573G 2% /rhev/data-center/mnt/172.19.1.80:_exportdomain
* Inodes available = 99%.
# qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2
image: /var/lib/libvirt/images/vm_powervp-si.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 4.2G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: true
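The failed copy targeted a block storage domain, where the destination volume may need to hold the disk's full 20 GB virtual size rather than the 4.2 GB currently on disk in the sparse qcow2. A sketch of the sanity check, using JSON shaped like `qemu-img info --output json` with the figures from the report above:

```python
import json

# Sample shaped like `qemu-img info --output json` for the disk above.
info = json.loads(
    '{"virtual-size": 21474836480, "actual-size": 4509715456, "format": "qcow2"}'
)

def full_copy_gib(info):
    # A copy onto a block storage domain may be allocated up to the full
    # virtual size, not just the qcow2's current sparse on-disk size.
    return info["virtual-size"] / 1024 ** 3

print(full_copy_gib(info))  # 20.0
```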
hosted-engine /var full
by Pascal D
I have an issue with my 4.3 hosted-engine: /var is full. The directory taking up 20 GB is /var/opt/rh/rh-postgresql10/lib/pgsql/data/base/data/16398
Some of those files are over 1 GB in size. Can they be safely removed?
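Files under base/ are PostgreSQL relation data, so deleting them by hand would corrupt the engine database. Under the standard PostgreSQL data layout (data/base/<oid>/, which the reported path roughly matches), the directory name is the owning database's OID; a minimal sketch to extract it, after which `SELECT datname FROM pg_database WHERE oid = 16398;` in psql identifies the database to clean up properly (e.g. via VACUUM):

```python
def database_oid(path):
    # Standard PostgreSQL layout: data/base/<oid>/ holds one database's
    # relation files; the directory name is the database OID.
    parts = path.rstrip("/").split("/")
    return int(parts[parts.index("base") + 1])

print(database_oid("/var/lib/pgsql/data/base/16398"))  # 16398
```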
TIA
No space left on /var/tmp
by Pietro Pesce
Hello
From the documentation I see: "Appliance requires at least 5GB of free space for /var/tmp as mentioned in the documentation."
So /var has 20 GB free; why did /var fill up during the appliance-extraction task, causing the installation to fail?
# rpm -qa|grep ovirt-engine-appliance
ovirt-engine-appliance-4.3-20200603.1.0.2.el7.x86_64
# rpm -qa|grep ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-2.3.12-1.0.1.el7.noarch
thanks a lot
PP
Fail if MAC address structure is incorrect
by Pietro Pesce
Hello
I tried to deploy the self-hosted engine, but received this error:
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail if MAC address structure is incorrect]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'not he_vm_mac_addr | regex_search( \"^[a-fA-F0-9][02468aAcCeE](:[a-fA-F0-9]{2}){5}$\" )' failed. The error was: Unexpected templating type error occurred on ({% if not he_vm_mac_addr | regex_search( \"^[a-fA-F0-9][02468aAcCeE](:[a-fA-F0-9]{2}){5}$\" ) %} True {% else %} False {% endif %}): expected string or buffer\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_mac_address.yml': line 14, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n register: he_vm_mac_addr\n - name: Fail if MAC address structure is incorrect\n ^ here\n"}
In the MAC field there is 00:16:3e:62:be:0e.
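The regex in the failing task accepts only unicast MACs (second hex digit even, i.e. multicast bit clear), and 00:16:3e:62:be:0e does match it; the "expected string or buffer" part of the error suggests the value handed to regex_search was not a string at all. A standalone check of the same pattern (plain Python, not the Ansible code itself):

```python
import re

# The same pattern the role uses: unicast MAC, six colon-separated octets.
MAC_RE = re.compile(r"^[a-fA-F0-9][02468aAcCeE](:[a-fA-F0-9]{2}){5}$")

def is_valid_unicast_mac(mac):
    # Guard against non-string values, which is what triggers the
    # "expected string or buffer" templating error in the task above.
    return isinstance(mac, str) and MAC_RE.match(mac) is not None

print(is_valid_unicast_mac("00:16:3e:62:be:0e"))  # True
print(is_valid_unicast_mac("01:16:3e:62:be:0e"))  # False (multicast bit set)
```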
# rpm -qa|grep ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-2.3.13-1.0.1.el7.noarch
# rpm -qa|grep ovirt-engine-appliance
ovirt-engine-appliance-4.3-20200603.1.0.2.el7.x86_64
# rpm -qa|grep -i oracle-ovirt-release-el7
oracle-ovirt-release-el7-1.0-3.el7.x86_64
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)
Best Regards
Glusterfs and vm's
by eevans@digitaldatatechs.com
I have researched and applied several tweaks to Gluster to improve performance, but on the VM side, depending on the distribution, you can "tweak" the VM for better performance on Gluster as well:
yum install tuned.noarch tuned-utils-systemtap.noarch tuned-utils.noarch tuned-gtk.noarch -y
(GTK is for a gui if you have one.)
tuned-adm profile virtual-guest
Also, you can increase or decrease cache or swappiness on the host to fit your VMs, whether workstations or servers. My cache is set to 2048 MB and swappiness to 10.
Thanks for the help you have given me.
Eric
This helps RHEL and CentOS machines utilize glusterfs and actually speeds the VM up.
I hope this will help someone. If you want the URL for the article, just ask.
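The host-side knobs mentioned above can be captured in a sysctl fragment; mapping the "2048 MB cache" figure to vm.dirty_bytes is an assumption on my part (the poster does not name the exact setting):

```python
def sysctl_fragment(swappiness=10, cache_mb=2048):
    # Renders an /etc/sysctl.d/-style fragment for the values quoted above.
    return (f"vm.swappiness = {swappiness}\n"
            f"vm.dirty_bytes = {cache_mb * 1024 * 1024}\n")

print(sysctl_fragment(), end="")
```

On a host you would write this to something like /etc/sysctl.d/90-gluster-vm.conf (a hypothetical filename) and apply it with `sysctl --system`.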
oVirt virtual machines migration 3.6.3 to 4.4.5
by jean-marie.perron@viseo.com
Hello,
We have a project to migrate from an old oVirt infrastructure (3.6.3) to a new one (4.4.5).
The new infrastructure is already in place and we want to migrate the virtual machines.
We tried to import the virtual machines via an NFS-mounted domain from the old oVirt (import pre-configured domain).
With this domain we only see VM disks, not full virtual machines, available for import.
Is there a way to transfer the virtual machines from the old oVirt to the new one via an NFS mount and then import them into oVirt Engine, or to "copy/paste" the virtual machine files and then import them?
Thanks a lot,
JMP
oVirt 2021 Spring survey questions
by Sandro Bonazzola
Hi,
It's about that time of year when we ask the community to provide feedback via a survey.
Are there any questions you'd like us to ask?
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*