LUN ID Change on Storage Domain
by Alan G
Hi,
I made an error when provisioning a new FC storage domain and gave it LUN ID 0 by mistake. I now need to move it to another ID.
Can I do this:
1. Shut down all VMs on the domain.
2. Put the domain in maintenance.
3. Change the LUN ID.
4. Force SCSI rescan on every host.
5. Bring the domain out of maintenance.
Will oVirt still recognise the domain on a different LUN ID?
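For reference, steps 2 and 5 can be scripted against the engine with the Python SDK (ovirtsdk4). Everything below (engine URL, credentials, domain and data center names) is a placeholder, and the host-side SCSI rescan in step 4 still happens on each host, e.g. with sg3_utils' rescan-scsi-bus.sh. A minimal sketch:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine URL, credentials, and the domain/DC names.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
system = conn.system_service()

# Find the FC domain and the data center it is attached to.
sd = system.storage_domains_service().list(search='name=fc_domain')[0]
dc = system.data_centers_service().list(search='name=Default')[0]
attached_sd = (system.data_centers_service()
               .data_center_service(dc.id)
               .storage_domains_service()
               .storage_domain_service(sd.id))

attached_sd.deactivate()  # step 2: put the domain into maintenance
# ... change the LUN ID on the array, then rescan every host (steps 3-4) ...
attached_sd.activate()    # step 5: bring the domain back up

conn.close()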
Thanks,
Alan
Re: Infiniband Usage / Support
by Strahil
When you create the new network, can't you set the MTU as well?
I know the UI allows changing the MTU, but I have never checked whether it can be set during definition.
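For what it's worth, the API does expose an mtu attribute at network-creation time, so it should be settable in one step. An untested sketch with the Python SDK (placeholder names):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL/credentials and names.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)

# Create the logical network with the MTU in the same call.
conn.system_service().networks_service().add(
    types.Network(
        name='ipoib_net',                            # placeholder name
        data_center=types.DataCenter(name='Default'),
        mtu=65520,                                   # e.g. IPoIB connected-mode MTU
    )
)
conn.close()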
Best Regards,
Strahil Nikolov
Infiniband Usage / Support
by Andrey Rusakov
Hi,
I have a setup with IB storage (IPoIB).
I would like to extend IB usage to the OVN network, but in trying to do this I have run into some problems...
Problem #1 (IPoIB Storage Network)
I configured IPoIB before adding the host to the oVirt cluster (I did it using Cockpit):
create the bond, assign an IP, change the MTU.
Up to that point everything is fine...
But I am not able to assign the IP/MTU from oVirt itself,
and I can't make any changes through Cockpit either, as the IB interfaces show up as "Unmanaged Interfaces".
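For what it's worth, the documented way to push an IP to a host NIC from the engine is the SetupNetworks API. I do not know whether VDSM accepts it for an IPoIB bond, but the call would look roughly like this (Python SDK, placeholder names; note the MTU itself belongs to the logical network, not to this call):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL/credentials and names.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
hosts_service = conn.system_service().hosts_service()
host = hosts_service.list(search='name=host1')[0]
host_service = hosts_service.host_service(host.id)

# Attach the logical network to the IPoIB bond with a static IP.
host_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ipoib_net'),  # placeholder
            host_nic=types.HostNic(name='bond1'),     # the IPoIB bond
            ip_address_assignments=[
                types.IpAddressAssignment(
                    assignment_method=types.BootProtocol.STATIC,
                    ip=types.Ip(address='10.10.10.11',
                                netmask='255.255.255.0'),
                ),
            ],
        ),
    ],
)
host_service.commit_net_config()  # persist the configuration on the host
conn.close()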
Problem #1.1
It would be great to stop users from adding IPoIB networks to a bridge with brctl, as it will not work (IPoIB frames have no Ethernet headers, so the interfaces cannot be enslaved to a Linux bridge)...
Problem #2
What is the correct way to move OVN to the IPoIB network?
Info about firewall type and 4.3
by Gianluca Cecchi
Hello,
I have updated a 4.2.8 environment to 4.3.1.
So far so good; I have also raised the cluster and data center compatibility levels from 4.2 to 4.3.
I notice the "Firewall type" field in my cluster, and it is currently set to
"iptables".
My 3 hosts are plain CentOS 7.6 servers.
My external engine is CentOS 7.6 and already uses firewalld.
I seem to remember that in the long run only firewalld will be supported on hosts as well.
Is this correct, and if so, is there an ETA/target version?
And what would be the steps to move my current hosts to firewalld?
Currently I see:
iptables enabled and running
ip6tables disabled
ebtables disabled
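From what I have read, the firewall type is a per-cluster setting in 4.2+, and hosts pick it up on reinstall (maintenance mode, then Installation > Reinstall). If that is right, a sketch of flipping it with the Python SDK (placeholder names) would be:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL/credentials and cluster name.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
clusters_service = conn.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]

# Switch the cluster's firewall type; each host then needs maintenance
# mode plus a reinstall before it actually starts using firewalld.
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(firewall_type=types.FirewallType.FIREWALLD)
)
conn.close()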
Thanks in advance,
Gianluca
ipxe-roms-qemu question
by Ladislav Humenik
Hi all,
in the past we used a customized iPXE build (to allow network boot with
10G cards); now we have finally updated our hypervisors to the latest
ipxe-roms-qemu.
Of course the ROM checksum and size now differ, and during live
migration libvirtd throws this error:
Mar 4 11:37:14 hypevisor-01 libvirtd: 2019-03-04 10:37:14.084+0000:
15862: error : qemuMigrationJobCheckStatus:1313 : operation failed:
migration out job: unexpectedly failed
Mar 4 11:37:15 hypevisor-01 libvirtd: 2019-03-04T10:37:13.941040Z
qemu-kvm: Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 0x20000 in
!= 0x40000: Invalid argument
Mar 4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941090Z
qemu-kvm: error while loading state for instance 0x0 of device 'ram'
Mar 4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941530Z
qemu-kvm: load of migration failed: Invalid argument
Is there an easy command we can use to identify which guests are still
using the old .rom and must be power-cycled?
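The best I have come up with so far is to compare the ROM size each running guest reports via the QEMU monitor ("info roms") against the file now on disk. A rough sketch; the ROM path and the "info roms" output format are assumptions that may need adjusting per QEMU version:

#!/usr/bin/env python3
# Rough sketch: flag running guests whose loaded option ROM size no longer
# matches the virtio-net ROM installed on disk.
import os
import re
import subprocess

ROM = '/usr/share/ipxe/1af41000.rom'  # assumed virtio-net ROM path on CentOS/RHEL
new_size = os.path.getsize(ROM)

domains = subprocess.check_output(
    ['virsh', 'list', '--name'], universal_newlines=True).split()

for dom in domains:
    out = subprocess.check_output(
        ['virsh', 'qemu-monitor-command', '--hmp', dom, 'info roms'],
        universal_newlines=True)
    for line in out.splitlines():
        if '.rom' not in line and ':rom' not in line:
            continue  # skip non option-ROM entries such as BIOS blobs
        m = re.search(r'size=0x([0-9a-fA-F]+)', line)
        if m and int(m.group(1), 16) != new_size:
            print('{0}: ROM size mismatch, needs a power cycle: {1}'.format(
                dom, line.strip()))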
Thank you in advance
virt-viewer centos7.6
by p.staniforth@leedsbeckett.ac.uk
Hello, is there a newer version of virt-viewer than 5 for CentOS?
Thanks,
Paul S.
Re: Two Hosts with Self Hosted Engine - HA / Failover & NFS
by Strahil
I don't think you can achieve that with only 2 nodes, as you can't protect yourself from split brain.
oVirt supports only GlusterFS with replica 3 arbiter 1. If you build your own GlusterFS setup, you can use glusterd2 with a "remote arbiter" in another location. That gives you protection from split brain, and since the arbiter is remote, the latency won't kill your write speed.
I'd recommend getting a VM in your environment (not hosted on either of the 2 hosts), or a small machine with an SSD, and using that as a pure arbiter.
Using one of the 2 nodes as the arbiter brick is not going to help: when that node fails, the cluster on the remaining node will stop working.
It's the same with hosting NFS on one of the machines.
As far as I know DRBD is not yet fully integrated, but you can give it a try. Still, using only 2 nodes gives no protection from split brain.
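If you do build it yourself, a minimal sketch of creating such a volume (placeholder hostnames and brick paths, wrapped in Python for scripting):

import subprocess

# Placeholder hostnames and brick paths. The arbiter brick stores only
# file metadata, so a small SSD on the third machine is enough.
subprocess.run([
    'gluster', 'volume', 'create', 'data',
    'replica', '3', 'arbiter', '1',
    'host1:/gluster/bricks/data',
    'host2:/gluster/bricks/data',
    'arbiter1:/gluster/bricks/data',
], check=True)
subprocess.run(['gluster', 'volume', 'start', 'data'], check=True)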
Best Regards,
Strahil Nikolov

On Mar 7, 2019 01:28, shanep(a)lifestylepanel.com wrote:
>
> Is it possible to have only two physical hosts with NFS and be able to do VM HA / Failover between these hosts?
>
> Both hosts are identical with RAID Drive Arrays of 8TB.
>
> If so, can anybody point me to any docs or examples on exactly how the Storage setup is done so that NFS will replicate across the hosts?
>
> If not what file system should I use to achieve this?
>
> Thanks
> Shane
Please don't remove instance type
by Baptiste Agasse
Hi all,
We have been happy oVirt users for some years now (we started with 3.6 and are now on 4.2) and we manage most of our virtualization stack with it. To provision and manage our machines we use Foreman (for bare metal and virtual machines) on top of it. I have made some small contributions to Foreman and other underlying pieces to get deeper integration with oVirt, such as being able to select an instance type directly from the Foreman interface/API, and we rely on that.

We use instance types to standardize our VMs by defining system resources (memory, CPU and CPU topology), console type and boot options. On top of that we use templates to apply the OS (CentOS 7 and CentOS 6 currently). Keeping resource definitions separate from OS installation helps us keep the instance type and template lists small and spares users some of the underlying technical details.

As we are interested in automating oVirt maintenance tasks and configuration with Ansible, I asked at the FOSDEM oVirt booth whether there is an Ansible module to manage instance types in oVirt, as I didn't find one in the oVirt Ansible infra repo. The person I asked said that you are planning to remove instance types from oVirt, and this makes me sad :(.

So here I am to ask why you plan to remove instance types from oVirt. As far as I know, it is fairly common to have "instance types" / "flavors" / "sizes" on one side and templates (bare OS, preinstalled appliances...) on the other, and to pick one of each to create an instance. If this first part is missing in future versions of oVirt, it will be a pain point for us. So my question is: do you really plan to remove instance types definitively?
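As an illustration of that split, this is roughly how a VM is created from both objects through the Python SDK (placeholder names):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL/credentials and object names.
conn = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)

# Sizing comes from the instance type, the OS from the template.
conn.system_service().vms_service().add(
    types.Vm(
        name='web01',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='centos7'),          # OS layer
        instance_type=types.InstanceType(name='medium'),  # sizing layer
    )
)
conn.close()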
Cheers.
--
Baptiste