Re: Q: Node Upgrade Path
by Matthew.Stier@fujitsu.com
Some of my systems have megaraid sas2 raid controllers, which Red Hat has stopped supporting under EL8, and all the variants have followed suit.
-----Original Message-----
From: Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com>
Sent: Thursday, July 8, 2021 12:42 PM
To: Andrei Verovski <andreil1(a)starlett.lv>; users(a)ovirt.org
Subject: [ovirt-users] Re: Q: Node Upgrade Path
If the hardware is old, and won't support EL8, the best option is to update to oVirt 4.3.10 and update CentOS to 7u9.
Some of my systems have megaraid sas2 raid controllers, which Red Hat has stopped supporting, and all the variants have followed suit.
I have tried the ELRepo DUDs (driver update disks) to work around the problem, but my results (in the 4.4.1/4.4.2 timeframe) were not satisfactory.
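For anyone taking the same route, a rough sketch of pinning such hosts to the
4.3 line (this assumes a plain EL7 host rather than an oVirt Node image, and
the usual release RPM URL for 4.3; verify it before use):

# install the oVirt 4.3 release package
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
# bring oVirt to 4.3.10 and CentOS to 7.9 (7u9)
yum update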
-----Original Message-----
From: Andrei Verovski <andreil1(a)starlett.lv>
Sent: Thursday, July 8, 2021 6:42 AM
To: users(a)ovirt.org
Subject: [ovirt-users] Q: Node Upgrade Path
Hi,
I have 2 oVirt nodes still running on CentOS 7.6.
rpm -qa | grep ovirt reports version 4.2.
Does it make sense to upgrade CentOS 7.6 to Stream and then upgrade the oVirt Node software, or should I just leave it for a couple of years until the hardware is retired?
I always keep the oVirt Engine up to date, so my only concern is whether at some point it will stop supporting the old Node software.
Thanks in advance for any suggestion.
Andrei
Q: Node Upgrade Path
by Andrei Verovski
Hi,
I have 2 oVirt nodes still running on CentOS 7.6.
rpm -qa | grep ovirt reports version 4.2.
Does it make sense to upgrade CentOS 7.6 to Stream and then upgrade the oVirt Node software, or should I just leave it for a couple of years until the hardware is retired?
I always keep the oVirt Engine up to date, so my only concern is whether at some point it will stop supporting the old Node software.
Thanks in advance for any suggestion.
Andrei
Updates failing
by Gary Pedretty
I'm getting errors when trying to run dnf/yum update, due to a vdsm issue.
yum update
Last metadata expiration check: 0:17:33 ago on Tue 06 Jul 2021 11:17:05 AM AKDT.
Error: Running QEMU processes found, cannot upgrade Vdsm.
Current running version of vdsm: vdsm-4.40.60.7-1.el8
CentOS Stream: RHEL 8.5-3.el8
Kernel: 4.18.0-310.el8.x86_64
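The error means vdsm refuses to upgrade while VMs are running on the host. A
sketch of the usual way out, assuming the host is managed by an engine
(commands illustrative):

# put the host into maintenance from the engine (Compute > Hosts >
# Management > Maintenance) so running VMs are migrated away, then
# verify no QEMU processes remain on the host:
virsh -r list --all
pgrep -l qemu-kvm
# with no VMs left, retry the upgrade:
yum update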
_______________________________
Gary Pedretty
IT Manager
Ravn Alaska
Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedretty(a)ravnalaska.com
Cloud-Init - What am I doing wrong?
by jeremy_tourville@hotmail.com
I am running into an issue using Cloud-Init on CentOS 7. I have imported OpenStack Glance generic cloud images for CentOS 7 & 8, and I chose to save them as templates at the time of import.
In the template, under Initial Config I defined the following values:
- cloud-init: checked
- configure time zone: checked
- chose the correct timezone from the dropdown
- VM Hostname, Authentication & Networks are all blank
- Custom script:
#cloud-config
users:
  - name: ansuser
    passwd: $6$TcX2D/LcPtq/a$m6AKlcIbZXL9ZJGNSgeXjnKGsz98Yw.v7rt5m18zunittqjMTygf7KLliwxXwzFrvj8rrFpOMGZmUYoX.mWib.
In theory this should create my user and set the password for that user. When I try to log in to the system, the login for my new user fails. What am I doing wrong?
If I leave the Custom Script blank and put the user in the Authentication section, the login works as expected. This tells me the image itself is working properly and accepting Cloud-Init input. I would prefer to add the user under the Custom Script section so I can further extend it with things like sudo: ALL=(ALL) NOPASSWD:ALL and SSH keys, so that an initial Ansible config could be run easily.
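In case it helps: cloud-init creates users with lock_passwd defaulting to
true, which might explain exactly this symptom. A sketch of the custom script
I would try (the hash and key below are placeholders; ssh_pwauth is only
needed for password logins over ssh):

#cloud-config
users:
  - name: ansuser
    passwd: $6$...your-sha512-hash...
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA...placeholder... ansible@control
ssh_pwauth: true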
I am going to test CentOS 8 in the same manner and see what happens. I sort of expect the same results, though. Thanks in advance for your input!
upgrading gluster brick os from CentOS7 to 8
by Jiří Sléžka
Hello,
I'm in the process of moving my oVirt HCI cluster to production. It was a
bit of a complicated process because I had two new servers to install oVirt
on from scratch in the lab, and three old servers with standalone kvm
hypervisors in production. One of them (installed with CentOS 7) was good
enough to join the HCI cluster. All three old kvm servers had production
vms running on them.
The new cluster was installed with CentOS 8 and oVirt 4.4. I started with
a single node, then expanded it to two hosts plus one more which acts as
arbiter (just for stability in the lab environment).
After some testing and tuning I moved this two-node cluster (without the
arbiter node) to my server housing. Then I prepared a gluster brick on
the one old server I want to reuse and joined it to the gluster storage, so
now it is replica 3 - two nodes are CentOS 8 based and act as oVirt
hosts, one is CentOS 7 and also acts as a standalone kvm hypervisor.
Then I migrated all vms from the standalone kvm hypervisors to oVirt, then
switched off the two oldest hosts.
Now I would like to reinstall the CentOS 7 node as an oVirt host. My plan is
to keep the gluster brick fs, back up the gluster configuration, and restore
it after the reinstall to CentOS 8 as described in
https://mjanja.ch/2018/08/migrate-glusterfs-to-a-new-operating-system/.
I want to speed up gluster healing. Does that make sense?
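For reference, the gist of that article as I read it, sketched with the
default gluster paths (adjust the backup location and volume name):

# on the CentOS 7 node, before the reinstall:
systemctl stop glusterd
tar czf /backup/glusterd-config.tar.gz /var/lib/glusterd /etc/glusterfs
# reinstall to CentOS 8, keeping the brick filesystem untouched, then
# restore the config and restart:
tar xzf /backup/glusterd-config.tar.gz -C /
systemctl start glusterd
# only files changed in the meantime should need healing:
gluster volume heal <volname> info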
Now the question: the brick fs was created on CentOS 7 and doesn't have the
newer xfs features FINOBT, SPARSE_INODES, REFLINK. Could that be a problem?
Would it be better to recreate the fs on CentOS 8 and then do a full heal?
I am afraid that would take a long time and put a big load on the
production hosts.
TL;DR: Does gluster/oVirt 4.4 make use of the FINOBT, SPARSE_INODES,
REFLINK xfs features?
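(In case it's useful, the features the existing brick was formatted with can
be checked directly; the mount point below is illustrative:)

xfs_info /gluster/brick1 | grep -oE '(finobt|sparse|reflink)=[01]'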
btw. oVirt is a great product and HCI looks really functional and usable
(with a 10GbE network and SSDs, of course). Big thanks to the developers
and community!
Cheers,
Jiri
Re: block size and 512e and 4k
by Nir Soffer
On Wed, Jul 7, 2021 at 6:54 PM Anatoliy Radchenko
<radchenko.anatoliy(a)gmail.com> wrote:
>
Adding back users(a)ovirt.org. We want to keep the discussion public
and searchable.
> But, thinking about it:
> in case you want to create a posix compliant data domain with a 4k block, you cannot do this because you still have a 512 block on the engine, and it gives an error about the incompatibility of 512 and 4k.
A posix data domain is just an NFS data domain under the hood, with
modified mount options, so it shares the same limits as an NFS data
domain.
I'm not sure how you get the compatibility issue - do you mean provisioning
the engine on a host with a 4k disk (using libvirt and local storage) and then
copying to a data domain created on the same local storage (512-byte sector size)?
> What should be done in this case?
I think this should work as is, since the bootstrap vm created using libvirt
is using a 512-byte sector size regardless of the underlying storage sector
size (qemu hides the actual sector size from the vm). So the vm starts
with a 512-byte sector size and then moves to another storage which again
has a 512-byte sector size.
This also works when the other storage uses a 4k sector size (e.g.
hyperconverged using gluster and VDO), since again qemu hides the underlying
storage sector size.
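A quick way to see this from a shell (device names illustrative): the host
disk reports its real logical/physical sector sizes, while the disk qemu
exposes to the guest keeps reporting 512:

# on the host: a 512e drive reports 512 logical / 4096 physical,
# a 4Kn drive reports 4096 / 4096
blockdev --getss --getpbsz /dev/sdb
# inside the guest, the virtual disk still shows 512-byte sectors
blockdev --getss /dev/vda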
> On Wed, Jul 7, 2021 at 17:45 Anatoliy Radchenko <radchenko.anatoliy(a)gmail.com> wrote:
>>
>> Hi Nir,
>> by transferring the engine I mean moving it after installation to the hosted engine storage domain, but it's not important.
>> In any case thank you for your response, everything is just as I thought.
>> Regards
>>
>> On Wed, Jul 7, 2021 at 17:32 Nir Soffer <nsoffer(a)redhat.com> wrote:
>>>
>>> On Wed, Jul 7, 2021 at 5:58 PM <radchenko.anatoliy(a)gmail.com> wrote:
>>> > With some oVirt installations configured with an ssd for the boot OS and an hdd (4k) for the data domain, the engine is installed on the ssd with a block size of 512e and then transferred to the 4k hdd.
>>>
>>> What do we mean by transferring engine to another disk?
>>>
>>> Engine is installed either on a host, or in a vm. In a vm, it is always
>>> using a sector size of 512 bytes, since we don't support vms with a
>>> sector size of 4k. If engine is installed on a host, there is no such
>>> thing as "transfer" to another host. You can reinstall engine on another
>>> host and restore the engine database using a backup. Sector size does
>>> not matter in this flow.
>>>
>>> Sector size for data domain is relevant only for gluster storage or
>>> local storage.
>>> These are the only storage domain types that can use 4k storage.
>>>
>>> > and if I wanted to leave the engine on the ssd (in a separate partition which will also have a block size of 512e), I encountered an incompatibility error between block sizes 512 and 4k (for example glusterfs, or when the data domain was configured as a posix compliant fs). An NFS domain passed.
>>> > As I understand it, NFS uses 512e, as do the drivers of the guest machines,
>>>
>>> Yes, NFS is always using a sector size of 512 bytes since we don't
>>> have any way to
>>> detect the underlying device sector size.
>>>
>>> > and a VM created on a domain with a 4k block will still have 512e. As a result, we have a block size of 512 in any case.
>>>
>>> Yes, vms always see a sector size of 512 bytes. qemu knows the underlying
>>> storage sector size and aligns I/O requests to it.
>>>
>>> > Does it make sense to use devices with a 4k block at the present time? Is there any configuration to take advantage of 4K?
>>>
>>> Yes, the most attractive use case is VDO, which is optimized for 4k sector size.
>>>
>>> Another use case is having disks with a 4k sector size, which may be cheaper
>>> or easier to get compared with disks supporting 512e.
>>>
>>> For block storage oVirt does not support devices with 4k sector size yet.
>>>
>>> Nir
>>>
>>
>>
>> --
>> _____________________________________
>>
>> Radchenko Anatolii
>> via Manoppello, 83 - 00132 Roma
>> tel. 06 96044328
>> cel. 329 6030076
>>
block size and 512e and 4k
by radchenko.anatoliy@gmail.com
Good day,
Some controllers do not allow setting a 4k block size on a boot device.
With some oVirt installations configured with an ssd for the boot OS and an hdd (4k) for the data domain, the engine is installed on the ssd with a block size of 512e and then transferred to the 4k hdd. And if I wanted to leave the engine on the ssd (in a separate partition which will also have a block size of 512e), I encountered an incompatibility error between block sizes 512 and 4k (for example glusterfs, or when the data domain was configured as a posix compliant fs). An NFS domain passed.
As I understand it, NFS uses 512e, as do the drivers of the guest machines, and a VM created on a domain with a 4k block will still have 512e. As a result, we have a block size of 512 in any case.
Does it make sense to use devices with a 4k block at the present time? Is there any configuration to take advantage of 4K?
Thanks in advance.
Best regards.
Hosted Engine Deployment
by Harry O
Why does the HE deployment not create a hosted engine DC and cluster whose version fits the host?
My Hosted Engine Deployment now fails again because of "Host hej1.5ervers.lan is compatible with versions (4.2,4.3,4.4,4.5) and cannot join Cluster Default which is set to version 4.6."
I think the deployment should create a DC and cluster that fit the host used for the deployment. Otherwise it's doomed to fail.
Is there a process for fixing this? I can't change the version from the HE UI as I'm instructed to; there are no options on the datacenter other than 4.6:
[ INFO ] You can now connect to https://hej1.5ervers.lan:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.z_g6jh7h_he_setup_lock is removed, delete it once ready to proceed]
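(That pause task is the escape hatch here: the playbook waits until the lock
file is removed, so in principle you can remediate the host (e.g. update its
oVirt packages so it actually supports cluster level 4.6), check that it
shows as 'up' in the temporary engine, and then resume the run by hand. The
file name is taken from the log above:)

# on the deployment host, once the host is listed as 'up':
rm /tmp/ansible.z_g6jh7h_he_setup_lock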
oVirt node kernel
by Giulio Casella
Hello everybody,
with oVirt Node, many of the drivers needed to access storage are no
longer in the kernel.
For example I have a bunch of (old) hypervisors (50+), with infiniband
nics, used to access storage. For all those hypervisors I abandoned
oVirt Node in favor of a base CentOS Stream distro, with the oVirt repo
and the kernel-plus repo (containing all the needed stuff).
Obviously this is not a step I liked; I definitely prefer oVirt Node.
Now my proposal: why not include kernel-plus in ovirt-node, instead of
the traditional kernel?
I know this would be a big change, and maybe @Sandro and all the RH guys
(they have a more general perspective) can see some drawbacks I don't,
but... hey, I just tried! :-)
Thank you in advance,
Giulio
Re: Failing to migrate hosted engine from 4.4.6 host to 4.4.7 host
by Sandro Bonazzola
On Wed, Jul 7, 2021 at 06:20 Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
> Already tried it on one 4.4.7 host, and it solves the issue.
>
> Maybe this issue should be marked as critical, because the host is
> completely unusable if upgraded to 4.4.7.
>
It is: https://bugzilla.redhat.com/show_bug.cgi?id=1979624
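For completeness, a sketch of how to verify the fix took effect (the path is
from the vdsm error, and the expected label is copied from the working 4.4.7
node quoted further down in this thread):

# after: semodule -B; touch /.autorelabel; reboot
ls -lZ /run/libvirt/common/system.token
# expect: system_u:object_r:virt_var_run_t:s0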
> 😊
>
>
>
> Regards,
>
> Nur Imam Febrianto
>
>
>
> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
>
>
>
> *From: *Nur Imam Febrianto <nur_imam(a)outlook.com>
> *Sent: *07 July 2021 9:02
> *To: *Klaas Demter <klaasdemter(a)gmail.com>; users(a)ovirt.org
> *Subject: *[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6
> host to 4.4.7 host
>
>
>
> Where should I do this?
>
> At the host? Or at the HE?
>
>
>
> Thanks.
>
>
>
> Regards,
>
> Nur Imam Febrianto
>
>
>
> *From: *Klaas Demter <klaasdemter(a)gmail.com>
> *Sent: *07 July 2021 3:31
> *To: *users(a)ovirt.org
> *Subject: *[ovirt-users] Re: Failing to migrate hosted engine from 4.4.6
> host to 4.4.7 host
>
>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1979624
>
> run: semodule -B; touch /.autorelabel; reboot
>
> report back if it fixes everything
>
>
>
> On 7/6/21 5:40 PM, Nur Imam Febrianto wrote:
>
> I'm having a similar problem. 15 hosts, 7 of them already upgraded
> to 4.4.7, and I can't migrate any VM or the HE from a 4.4.6 host to 4.4.7.
>
>
>
> Regards,
>
> Nur Imam Febrianto
>
>
>
> *From: *Sandro Bonazzola <sbonazzo(a)redhat.com>
> *Sent: *06 July 2021 19:37
> *To: *oVirt Users <users(a)ovirt.org>; Arik Hadas <ahadas(a)redhat.com>
> *Subject: *[ovirt-users] Failing to migrate hosted engine from 4.4.6 host
> to 4.4.7 host
>
>
>
> Hi,
>
> I updated the hosted engine to 4.4.7 and one of the 2 nodes where the
> engine is running.
>
> Current status is:
>
> - Hosted engine at 4.4.7 running on Node 0
>
> - Node 0 at 4.4.6
>
> - Node 1 at 4.4.7
>
>
>
> Now, moving Node 0 to maintenance successfully moved the SPM from Node 0
> to Node 1, but while trying to migrate the hosted engine I get this on
> Node 0 in vdsm.log:
>
> 2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] START repoStats(domains=()) from=::ffff:10.46.8.133,35048, task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:48)
>
> 2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] FINISH repoStats return={'1996dc3b-d33f-49cb-b32a-8f7b1d50af5e': {'code': 0, 'lastCheck': '3.0', 'delay': '0.00114065', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}} from=::ffff:10.46.8.133,35048, task_id=f12d7694-d2b5-4658-9e0d-3f0dc54aca93 (api:54)
>
> 2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] START multipath_health() from=::ffff:10.46.8.133,35048, task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:48)
>
> 2021-07-06 12:25:07,882+0000 INFO (jsonrpc/5) [vdsm.api] FINISH multipath_health return={} from=::ffff:10.46.8.133,35048, task_id=6515fac9-830a-4b6a-904e-cc1262e87f01 (api:54)
>
> 2021-07-06 12:25:07,883+0000 ERROR (migsrc/b2072331) [virt.vm] (vmId='b2072331-1558-4186-86b4-fa83af8eba95') can't connect to virtlogd: Unable to open system token /run/libvirt/common/system.token: Permission denied (migration:294)
>
> 2021-07-06 12:25:07,888+0000 INFO (jsonrpc/5) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:10.46.8.133,35048 (api:54)
>
> 2021-07-06 12:25:08,166+0000 ERROR (migsrc/b2072331) [virt.vm] (vmId='b2072331-1558-4186-86b4-fa83af8eba95') Failed to migrate (migration:467)
>
> Traceback (most recent call last):
>
> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 441, in _regular_run
>
> time.time(), machineParams
>
> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 537, in _startUnderlyingMigration
>
> self._perform_with_conv_schedule(duri, muri)
>
> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 626, in _perform_with_conv_schedule
>
> self._perform_migration(duri, muri)
>
> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 555, in _perform_migration
>
> self._migration_flags)
>
> File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in call
>
> return getattr(self._vm._dom, name)(*a, **kw)
>
> File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
>
> ret = attr(*args, **kwargs)
>
> File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
>
> ret = f(*args, **kwargs)
>
> File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
>
> return func(inst, *args, **kwargs)
>
> File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2119, in migrateToURI3
>
> raise libvirtError('virDomainMigrateToURI3() failed')
>
> libvirt.libvirtError: can't connect to virtlogd: Unable to open system token /run/libvirt/common/system.token: Permission denied
>
> 2021-07-06 12:25:08,197+0000 INFO (jsonrpc/6) [api.virt] START getMigrationStatus() from=::ffff:10.46.8.133,35048, flow_id=4e86b85d, vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:48)
>
> 2021-07-06 12:25:08,197+0000 INFO (jsonrpc/6) [api.virt] FINISH getMigrationStatus return={'status': {'code': 0, 'message': 'Done'}, 'migrationStats': {'status': {'code': 12, 'message': 'Fatal error during migration'}, 'progress': 0}} from=::ffff:10.46.8.133,35048, flow_id=4e86b85d, vmId=b2072331-1558-4186-86b4-fa83af8eba95 (api:54)
>
> On node 0:
>
> # ls -lZ /run/libvirt/common/system.token
> ls: cannot access '/run/libvirt/common/system.token': No such file or
> directory
>
>
>
> On node 1:
>
> # ls -lZ /run/libvirt/common/system.token
> -rw-------. 1 root root system_u:object_r:virt_var_run_t:s0 32 Jul 6
> 09:29 /run/libvirt/common/system.token
>
>
>
> any clue?
>
> --
>
> *Sandro Bonazzola*
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
>
>
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>
>
>
>
>
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>*