Removing Direct Mapped LUNs
by Ryan Chewning
Hi List,
We need to add and remove directly mapped LUNs on multiple VMs in our
non-production environment. The environment is backed by an iSCSI SAN. In
testing, removing a directly mapped LUN does not remove the
underlying multipath maps and SCSI devices on the hosts. Several questions:
1) Is this the expected behavior?
2) Are we supposed to go to each KVM host and manually remove the
underlying multipath devices?
3) Is there a technical reason that oVirt doesn't do this as part of the
steps to removing the storage?
This is something that was handled by the manager in the previous
virtualization platform we used, Oracle's Xen-based Oracle VM.
Thanks!
Ryan
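Until this is automated, the cleanup on each host looks roughly like the sketch below. The WWID is a made-up placeholder, and this is an untested outline of the standard Linux procedure, not an oVirt-documented one:

```shell
# On each KVM host, after removing the direct LUN in oVirt.
# The WWID below is a placeholder -- substitute your LUN's WWID,
# which you can find with `multipath -ll`.
WWID=3600a0b80005ad1d7000012345abcdef0

# Record the SCSI path devices (sdX) backing the map before flushing it
PATHS=$(multipath -ll "$WWID" | grep -oE 'sd[a-z]+')

# Flush the multipath map for the removed LUN
multipath -f "$WWID"

# Delete the underlying SCSI path devices so they don't linger
for dev in $PATHS; do
    echo 1 > "/sys/block/${dev}/device/delete"
done
```

Run it only after confirming no VM still uses the LUN.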
3 years, 5 months
upgrading gluster brick os from CentOS7 to 8
by Jiří Sléžka
Hello,
I'm in the process of moving my oVirt HCI cluster to production. It was a
bit of a complicated process because I had two new servers to install oVirt
on from scratch in the lab and three old servers with standalone kvm
hypervisors in production. One of them (installed with CentOS 7) was good
enough to join the HCI cluster. All three old kvm servers had production vms
running on them.
The new cluster was installed with CentOS 8 and oVirt 4.4. I started with a
single node, then expanded it to two hosts plus a third that acts
as an arbiter (just for stability in the lab environment).
After some testing and tuning I moved this two-node cluster (without the
arbiter node) to my server housing. Then I prepared a gluster brick on
the one old server I want to reuse and joined it to the gluster storage, so
it is now replica 3 - two nodes are CentOS 8 based and act as oVirt
hosts, one is CentOS 7 and also acts as a standalone kvm hypervisor.
Then I migrated all vms from the standalone kvm hypervisors to oVirt and
switched off the two oldest hosts.
Now I would like to reinstall the CentOS 7 node as an oVirt host. My plan is
to keep the gluster brick fs, back up its configuration, and restore it after
reinstalling with CentOS 8, as described in
https://mjanja.ch/2018/08/migrate-glusterfs-to-a-new-operating-system/.
I want to speed up gluster healing. Does that make sense?
Now the question: the brick fs was created on CentOS 7 and doesn't have the
new xfs features FINOBT, SPARSE_INODES, REFLINK. Could that be a problem?
Would it be better to recreate the fs on CentOS 8 and then do a full heal? I
am afraid that would take a long time and put a big load on the production
hosts.
TL;DR: Does gluster/oVirt 4.4 make use of the FINOBT, SPARSE_INODES, REFLINK
xfs features?
btw. oVirt is a great product and HCI looks really functional and usable
(with a 10GE network and SSDs, of course). Big thanks to the developers and
community!
Cheers,
Jiri
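You can see which of those features an existing brick was created with before deciding. The mount point below is a placeholder for your brick path:

```shell
# Inspect the brick filesystem's creation-time features
# (/gluster_bricks/data2 is a placeholder -- use your brick mount point).
xfs_info /gluster_bricks/data2 | grep -oE 'finobt=[01]|sparse=[01]|reflink=[01]'

# mkfs.xfs on CentOS 8 enables sparse inodes and reflink by default;
# a brick created on CentOS 7 will typically show sparse=0 reflink=0.
# These flags can only be set at mkfs time, not on an existing fs.
```

Note that xfs feature flags cannot be added later, so keeping the CentOS 7 brick means keeping its old feature set.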
3 years, 5 months
Re: Strange Issue with imageio
by Gianluca Cecchi
On Sat, Apr 17, 2021 at 6:27 AM Nur Imam Febrianto <nur_imam(a)outlook.com>
wrote:
> Hi,
>
>
>
> Already submit *Bug 1950593*
> <https://bugzilla.redhat.com/show_bug.cgi?id=1950593> for this issue.
>
> Thanks before.
>
>
>
> Regards,
>
> Nur Imam Febrianto
>
>
>
>
It seems I have the same problem with my 4.4.5.
Any info on whether it is fixed in the latest 4.4.6? There seems to be no
update on the bug page.
Gianluca
3 years, 5 months
Re: How to Upgrade Node with Local Storage ?
by Vojtech Juranek
On Wednesday, 21 April 2021 16:40:29 CEST Nur Imam Febrianto wrote:
> Set global maintenance and then turn off all vm, do yum update but it
> completed with failed. Am I missing something ?
Can you share the details? What failed, what was the error?
> From: Adam Xu<mailto:adam_xu@adagene.com.cn>
> Sent: 20 April 2021 7:36
> To: users(a)ovirt.org<mailto:users@ovirt.org>
> Subject: [ovirt-users] Re: How to Upgrade Node with Local Storage ?
>
>
> For an oVirt Node using local storage, I think you should shut down all
> your vms before you upgrade the node. On 2021/4/19 22:09, Nur Imam Febrianto
> wrote:
> Hi,
>
> How can we upgrade an oVirt Node that uses local storage? I can't seem to
> find any good documentation about this. Planning to upgrade one 4.4.4 node
> with local storage to 4.4.5.
> Thanks before.
>
> Regards,
> Nur Imam Febrianto
>
>
>
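Since VMs on local storage cannot be live-migrated away, the sequence is roughly the one below. This is an untested outline pieced together from the thread, not an official procedure:

```shell
# Rough local-storage node upgrade sketch (verify against the oVirt
# upgrade guide for your version before relying on it).

# 1. Shut down every VM running on this node (from the engine UI,
#    or cleanly from inside each guest).
# 2. Put the host into Maintenance from the engine UI, then on the host:
dnf update -y 'ovirt-node-ng-image-update*'   # pulls the new node image
reboot

# 3. After the node boots into the new image, activate it in the
#    engine and start the VMs again.
```

Using the engine UI's built-in host upgrade (Installation > Upgrade) should accomplish the same dnf step with version checks included.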
3 years, 5 months
Libgfapi considerations
by Jayme
Are there currently any known issues with using libgfapi in the latest
stable version of oVirt in HCI deployments? I recently enabled it and have
noticed a significant (over 4x) increase in I/O performance on my VMs.
I'm concerned, however, since it does not seem to be an oVirt default
setting. Is libgfapi considered safe and stable to use in oVirt 4.3 HCI?
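For reference, enabling it is a per-cluster-level engine setting; a minimal sketch (the `--cver` value assumes a 4.3 cluster level):

```shell
# On the engine machine: enable libgfapi for cluster level 4.3
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine

# Verify the current value
engine-config -g LibgfApiSupported
```

Running VMs keep using their FUSE-mounted disks until they are powered off and started again, so the setting takes effect gradually.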
3 years, 5 months
Import Geo-Replicated Storage Domain fails
by simon@justconnect.ie
Hi All,
I have 2 independent Hyperconverged Sites/Data Centers.
Site A has a GlusterFS Replica 3 + Arbiter Volume that is Storage Domain data2
This Volume is Geo-Replicated to a Replica 3 + Arbiter Volume at Site B called data2_bdt
I have simulated a DR event and now want to import the Geo-Replicated volume data2_bdt as a Storage Domain on Site B. Once imported I need to import the VMs on this volume to run in Site B.
The Geo-Replication now works perfectly (thanks Strahil) but I haven't been able to import the Storage Domain.
Please can someone point me in the right direction or documentation on how this can be achieved.
Kind Regards
Shimme...
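One common stumbling block is that a geo-replication slave volume is read-only, which makes the domain import fail. A rough sketch of the failover steps, using the volume names from this thread (`siteb-host` is a placeholder for a Site B gluster node; verify each step against the Gluster DR documentation):

```shell
# On a Site A node (if still reachable), stop geo-replication so the
# slave stops receiving changes:
gluster volume geo-replication data2 siteb-host::data2_bdt stop

# On a Site B node, make the slave volume writable:
gluster volume set data2_bdt features.read-only off
```

After that, in the Site B engine use Storage > Domains > Import Domain with type GlusterFS and path `siteb-host:/data2_bdt`, then import the VMs from the domain's "VM Import" tab.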
3 years, 6 months
Grafana oVirt 4.4.4 Node install - Grafana Monitoring Portal not available
by simon@justconnect.ie
Having installed an oVirt 4.4.4 HCI 3-node cluster, the Grafana 'Monitoring Portal' was not visible on the Default page.
It appears that Grafana is installed but not configured.
I have checked a 4.4.3 environment where it is installed and running, so is this a bug in 4.4.4.7-1.el8?
Any help would be appreciated.
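A possible way to finish the configuration (an assumption based on Grafana being set up by engine-setup, not a confirmed fix for this bug):

```shell
# On the engine VM: rerun setup and answer yes when asked about
# Grafana integration
engine-setup --reconfigure-optional-components

# Check whether Grafana is now running
systemctl status grafana-server
```

If it comes up, the portal is normally reachable at `https://<engine-fqdn>/ovirt-engine-grafana/`.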
3 years, 6 months
fresh ovirt node 4.4.6 fail on firewalld both host and engine deployment
by Charles Kozler
Hello -
I deployed a fresh ovirt node 4.4.6, and the only thing I did to the system
was configure the NIC with nmtui.
During the gluster install, the deployment errored out with:
gluster-deployment-1620832547044.log:failed: [n2] (item=5900/tcp) =>
{"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg":
"ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception:
ALREADY_ENABLED: '5900:tcp' already in 'public' Permanent and
Non-Permanent(immediate) operation"}
The fix here was easy - I just deleted the port it was complaining about
with firewall-cmd and restarted the installation, and it was all fine.
During the hosted engine deployment, when the VM is being deployed, it dies
here:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR:
Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED:
'6900:tcp' already in 'public' Non-permanent operation"}
Now the issue here is that I do not have access to the engine VM, as it is
in a bit of a transient state: when the deployment fails, the image that is
currently open is discarded once the ansible playbook is kicked off again.
I cannot find any BZ on this and google is turning up nothing. I don't
think firewalld failing because a firewall rule already exists should be
a reason to exit the installation.
The interesting part is that this only fails on certain ports, i.e. when I
reran the gluster wizard after 5900 failed, the other ports were presumably
still added to the firewall, and the installation completed.
Suggestions?
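A possible workaround (an assumption based on the error, not a verified fix) is to clear the conflicting ports from both the runtime and permanent firewalld configuration before rerunning the deployment, so ALREADY_ENABLED cannot trigger:

```shell
# Remove the ports the deployments complained about; `|| true` keeps
# the loop going if a port is not present in one of the configurations
for port in 5900/tcp 6900/tcp; do
    firewall-cmd --remove-port="$port" || true
    firewall-cmd --permanent --remove-port="$port" || true
done
firewall-cmd --reload
```

This only works around the symptom; the installer arguably should treat ALREADY_ENABLED as success rather than a fatal error.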
3 years, 6 months
I am installing ovirt engine 4.3.10
by ken@everheartpartners.com
I am getting this error message when installing it on CentOS 7.9, while running the hosted-engine setup:
[ INFO ] TASK [ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exists]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The selected network interface is not valid"}
I have two interfaces
enp6s0
enp11s0
enp11s0 is the public network;
enp6s0 is the storage network to the NetApp.
Any idea how to resolve this?
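A few things worth checking before rerunning the deployment (a troubleshooting sketch, not an official checklist; the setup generally only accepts a NIC that is up with an IPv4 address):

```shell
# Confirm both NICs are present and UP
ip -br link show

# Confirm enp11s0 has an IPv4 address and carries the default route
ip -br addr show enp11s0
ip route show default

# Confirm NetworkManager manages the intended interface
nmcli -g GENERAL.STATE dev show enp11s0
```

If enp11s0 checks out, rerun `hosted-engine --deploy` and pick it explicitly when asked which NIC to use for the management bridge.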
3 years, 6 months