oVirt on a Single Server
by webmattr@hotmail.com
Hello,
I can't seem to install the self-hosted engine onto local storage. It gives me glusterfs, iscsi, fc, and nfs as the available options. I'm using this in a home-lab scenario, and don't have the budget (etc.) for building out a dedicated NAS for it, or setting up multiple nodes. I like the look of oVirt, and wanted to try it with a couple of disposable VMs (Plex, and a docker instance I break often). My current best thought for how to make it work is to set up NFS on the server itself, and then point the self-hosted engine at the (local) NFS share. Is there a better way to do this that I might be overlooking?*
*Factoring in that I don't have the funds to build out a proper storage environment yet.
(And if anyone asks, I did search for a solution to this, but didn't find anything super helpful. Mostly I found 5+ year-old articles on a similar but different scenario.)
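For what it's worth, the usual workaround for a single-box home lab is exactly what is described above: export a local directory over NFS and give that export to hosted-engine --deploy as the storage. A minimal sketch, assuming a directory of /storage/hosted_engine and the default vdsm:kvm IDs of 36:36 (both are just examples, adjust to taste):

mkdir -p /storage/hosted_engine
chown 36:36 /storage/hosted_engine
echo '/storage/hosted_engine *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav
# then give hosted-engine --deploy the storage path <host-fqdn>:/storage/hosted_engine when it asks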
4 years, 11 months
VM migrations stalling over migration-only network
by Ben
Hi, I'm pretty stuck at the moment so I hope someone can help me.
I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
segregate migration traffic from the standard ovirtmgmt network, where
the VM traffic and all other traffic resides.
I set up the VLAN on my router and switch, and created LACP bonds on both
hosts, tagging them with the VLAN ID. I confirmed the routes work fine, and
traffic speeds are as expected. MTU is set to 9000.
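One thing a plain speed test will not catch is a switch port that drops or fragments jumbo frames, so it may be worth confirming that the full 9000-byte MTU actually survives the path between the hosts' migration addresses; a quick check from one host (the peer address below is a placeholder):

ping -M do -s 8972 -c 4 <peer-migration-ip>   # 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000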
After configuring the migration network in the cluster and dragging and
dropping it onto the bonds on each host, VMs fail to migrate.
oVirt is not reporting any issues with the network interfaces or sync with
the hosts. However, when I attempt to live-migrate a VM, progress gets to
1% and stalls. The transfer rate is 0Mbps, and the operation eventually
fails.
I have not been able to identify anything useful in the VDSM logs on the
source or destination hosts, or in the engine logs. It repeats the below
WARNING and INFO logs for the duration of the process, then logs the last
entries when it fails. I can provide more logs if it would help. I'm not
even sure where to start -- since I am, at best, a novice at networking, my
suspicion the entire time has been that something is misconfigured in my
network. However, the routes are good, speed tests are fine, and I can't
find anything else wrong with the connections. It's not impacting any other
traffic over the bond interfaces.
Are there other requirements that must be met for VMs to migrate over a
separate interface/network?
2020-01-12 03:18:28,245-0500 WARN (migmon/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling: remaining
(4191MiB) > lowmark (4191MiB). (migration:854)
2020-01-12 03:18:28,245-0500 INFO (migmon/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
seconds elapsed, 1% of data processed, total data: 4192MB, processed data:
0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
2020-01-12 03:18:31,386-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') operation failed: migration
out job: unexpectedly failed (migration:282)
2020-01-12 03:18:32,695-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Failed to migrate
(migration:450)
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 431,
in _regular_run
time.time(), migrationParams, machineParams
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 505,
in _startUnderlyingMigration
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 591,
in _perform_with_conv_schedule
self._perform_migration(duri, muri)
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 525,
in _perform_migration
self._migration_flags)
libvirtError: operation failed: migration out job: unexpectedly failed
2020-01-12 03:18:40,880-0500 INFO (jsonrpc/6) [api.virt] FINISH
getMigrationStatus return={'status': {'message': 'Done', 'code': 0},
'migrationStats': {'status': {'message': 'Fatal error during migration',
'code': 12}, 'progress': 1L}} from=::ffff:10.0.0.20,41462,
vmId=a24fd7e3-161c-451e-8880-b3e7e1f7d86f (api:54)
4 years, 11 months
Ovirt backup
by Nazan CENGİZ
Hi all,
I want to back up oVirt for free. Is there a script, project, or tool that you can recommend for this?
Is there a project that you have tested, where both the backup and restore processes work properly?
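For the engine itself, oVirt ships a built-in tool, engine-backup, that handles both backup and restore of the engine configuration and database (it does not touch the VM disks). A minimal sketch, with example paths:

# on the engine machine
engine-backup --mode=backup --scope=all --file=/backup/engine-backup.tar.gz --log=/backup/engine-backup.log
# restore onto a freshly installed engine, before running engine-setup
engine-backup --mode=restore --file=/backup/engine-backup.tar.gz --log=/backup/engine-restore.log --provision-db --restore-permissions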
Best Regards,
Nazan.
4 years, 11 months
oVirt failing to make a template of Centos 7 VM using seal
by damien.altman@gmail.com
Hi there,
My oVirt engine is failing to create a template from a CentOS 7 VM; the
/var/log/vdsm/vdsm.log is as follows:
2020-01-20 19:39:07,332+1100 ERROR (virt/875f8036) [root] Job
u'875f8036-e28a-4741-b3a3-046cc711d252' failed (jobs:221)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 157, in run
self._run()
File "/usr/lib/python2.7/site-packages/vdsm/virt/jobs/seal.py", line 74,
in _run
virtsysprep.sysprep(vol_paths)
File "/usr/lib/python2.7/site-packages/vdsm/virtsysprep.py", line 39, in
sysprep
commands.run(cmd)
File "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 110,
in run
raise cmdutils.Error(args, p.returncode, out, err)
Error: Command ['/usr/bin/virt-sysprep', '-a',
u'/rhev/data-center/mnt/xxxxx.xxxxx.net:_storage_host__storage/eb8ba7f9-27d5
-44c0-a744-9027be39a756/images/3ac44e69-ae82-4d79-8b58-0f3ef4cf60db/f4631212
-4b1e-4c65-b19b-47215e9aca55'] failed with rc=1 out='[ 0.0] Examining the
guest ...\n' err="libvirt: XML-RPC error : Cannot create user runtime
directory '//.cache/libvirt': Permission denied\nvirt-sysprep: error:
libguestfs error: could not connect to libvirt (URI = \nqemu:///session):
Cannot create user runtime directory '//.cache/libvirt': \nPermission denied
[code=38 int1=13]\n\nIf reporting bugs, run virt-sysprep with debugging
enabled and include the \ncomplete output:\n\n virt-sysprep -v -x [...]\n"
The template saves fine if I do not have 'Seal Template' selected.
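The error itself says libguestfs could not reach libvirt's qemu:///session because there is no usable HOME for the user running virt-sysprep (hence the '//.cache/libvirt' path). One way to reproduce it outside of VDSM, following the log's own suggestion of running with -v -x, might be (the HOME and backend overrides are assumptions, meant only to take the session libvirt out of the picture):

sudo env HOME=/root LIBGUESTFS_BACKEND=direct virt-sysprep -v -x -a '<disk path from the log above>'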
4 years, 11 months
VM Failover Scenario - automatically shutting down Test VM when failover for Prod VM is required
by Martin Decker
Hello List,
I have a simple setup with 2 Compute Nodes to host 2 VMs: a PROD VM and a
TEST VM. The requirement is that if the compute node with the Prod VM
fails, the Prod VM should automatically migrate to the second compute node.
In order to maximize performance for the Prod VM, there should be no other
VMs running on the same compute node. This means that the Test VM should be
stopped before the Prod VM is started on the failover compute node.
I want to use 80-90% of the physical memory of the compute node for the Prod
VM; therefore, the Prod VM cannot be started, due to lack of memory, if the
Test VM is not stopped up front.
Can this be implemented?
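As far as I know there is no built-in "stop this other VM first" policy, so whatever does this would have to live outside oVirt, triggered by monitoring when the Prod VM's host fails. A rough sketch using the ovirt_vm Ansible module (the VM names are placeholders, and it assumes a prior ovirt_auth login provides the ovirt_auth variable):

- name: make room for the prod VM, then start it
  hosts: localhost
  tasks:
    - name: stop the test VM to free its memory
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: test-vm
        state: stopped

    - name: start the prod VM on the surviving node
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: prod-vm
        state: running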
Best regards,
Martin
4 years, 11 months
ISO Upload
by Christian Reiss
Hey folks,
I have a cluster set up with the following storage domains (among others):
/vms for VM storage, attached as a data domain.
/isos for, well, ISOs, attached as an ISO domain.
When I try to upload a(ny) ISO via the oVirt engine, I can only select
the vms storage domain, not the isos storage domain. Both are green and
active across the board.
Is my understanding correct that I can only upload an ISO to a data
domain, and then (once uploaded) move the ISO file to the ISO domain via
the web UI?
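For context: in recent oVirt releases the upload dialog targets data domains only, and ISO domains are deprecated. If the older command-line uploader is still installed on the engine machine, it can push an image straight into an ISO domain instead; a hedged sketch (the domain name is an example, and the exact flags are worth confirming against --help):

engine-iso-uploader --iso-domain=isos upload /path/to/image.iso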
-Chris.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
4 years, 11 months
Re: [moVirt] Expanding oVirt
by Michal Skrivanek
[fwding to the right list]
> On 15 Jan 2020, at 22:57, mweber(a)maetrix.tech wrote:
>
> I am planning a basic 3-host hyperconverged oVirt cluster and am very happy with tests I have conducted regarding the deployment.
>
> I have a question regarding expanding the cluster and can't seem to find a direct answer. My hosts have a limited number of HDD slots, and I am curious about expanding the gluster volumes. Is this a simple process of adding another host or hosts as I go, adding the gluster bricks to the volume, and rebalancing? I also recall seeing a hard limit of nine hosts in a cluster. Is this correct?
> Thank you,
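In broad strokes that is the process for a gluster-backed cluster: add the new hosts, add bricks in complete replica sets, then rebalance. A sketch, assuming a replica-3 volume named vmstore and bricks prepared on three new hosts in the usual hyperconverged layout (names and paths are examples):

gluster volume add-brick vmstore host4:/gluster_bricks/vmstore/vmstore host5:/gluster_bricks/vmstore/vmstore host6:/gluster_bricks/vmstore/vmstore
gluster volume rebalance vmstore start
gluster volume rebalance vmstore status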
4 years, 11 months
Re: Ovirt OVN help needed
by Strahil
Hi Miguel,
It seems the Cluster's switch is of type 'Linux Bridge'.
Best Regards,
Strahil Nikolov

On Jan 10, 2020 12:37, Miguel Duarte de Mora Barroso <mdbarroso(a)redhat.com> wrote:
>
> On Mon, Jan 6, 2020 at 9:21 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> >
> > Hi Miguel,
> >
> > I had read some blogs about OVN and I tried to collect some data that might hint where the issue is.
> >
> > I still struggle to "decode" that , but it may be easier for you or anyone on the list.
> >
> > I am eager to receive your reply.
> > Thanks in advance and Happy New Year !
>
> Hi,
>
> Sorry for not noticing your email before. Hope late is better than never ..
>
> >
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > On Wednesday, December 18, 2019, 21:10:31 GMT+2, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> >
> >
> > That's a good question.
> > ovirtmgmt is using linux bridge, but I'm not so sure about the br-int.
> > 'brctl show' is not understanding what type is br-int , so I guess openvswitch.
> >
> > This is still a guess, so you can give me the command to verify that :)
>
> You can use the GUI for that; access "Compute > clusters" , choose the
> cluster in question, hit 'edit', then look for the 'Switch type'
> entry.
>
>
> >
> > As the system was first build on 4.2.7 , most probably it never used anything except openvswitch.
> >
> > Thanks in advance for your help. I really appreciate that.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> > On Wednesday, December 18, 2019, 17:53:31 GMT+2, Miguel Duarte de Mora Barroso <mdbarroso(a)redhat.com> wrote:
> >
> >
> > On Wed, Dec 18, 2019 at 6:35 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> > >
> > > Hi Dominik,
> > >
> > > sadly reinstall of all hosts is not helping.
> > >
> > > @ Miguel,
> > >
> > > I have 2 clusters
> > > 1. Default (amd-based one) -> ovirt1 (192.168.1.90) & ovirt2 (192.168.1.64)
> > > 2. Intel (intel-based one and a gluster arbiter) -> ovirt3 (192.168.1.41)
> >
> > But what are the switch types used on the clusters: openvswitch *or*
> > legacy / linux bridges ?
> >
> >
> >
> > >
> > > The output of the 2 commands (after I run reinstall on all hosts ):
> > >
> > > [root@engine ~]# ovn-sbctl list encap
> > > _uuid : d4d98c65-11da-4dc8-9da3-780e7738176f
> > > chassis_name : "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > > ip : "192.168.1.90"
> > > options : {csum="true"}
> > > type : geneve
> > >
> > > _uuid : ed8744a5-a302-493b-8c3b-19a4d2e170de
> > > chassis_name : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > > ip : "192.168.1.64"
> > > options : {csum="true"}
> > > type : geneve
> > >
> > > _uuid : b72ff0ab-92fc-450c-a6eb-ab2869dee217
> > > chassis_name : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > > ip : "192.168.1.41"
> > > options : {csum="true"}
> > > type : geneve
> > >
> > >
> > > [root@engine ~]# ovn-sbctl list chassis
> > > _uuid : b1da5110-f477-4c60-9963-b464ab96c644
> > > encaps : [ed8744a5-a302-493b-8c3b-19a4d2e170de]
> > > external_ids : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
> > > hostname : "ovirt2.localdomain"
> > > name : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > > nb_cfg : 0
> > > transport_zones : []
> > > vtep_logical_switches: []
> > >
> > > _uuid : dcc94e1c-bf44-46a3-b9d1-45360c307b26
> > > encaps : [b72ff0ab-92fc-450c-a6eb-ab2869dee217]
> > > external_ids : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
> > > hostname : "ovirt3.localdomain"
> > > name : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > > nb_cfg : 0
> > > transport_zones : []
> > > vtep_logical_switches: []
> > >
> > > _uuid : 897b34c5-d1d1-41a7-b2fd-5f1fa203c1da
> > > encaps : [d4d98c65-11da-4dc8-9da3-780e7738176f]
> > > external_ids : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", ovn-bridge-mappings=""}
> > > hostname : "ovirt1.localdomain"
> > > name : "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > > nb_cfg : 0
> > > transport_zones : []
> > > vtep_logical_switches: []
> > >
> > >
> > > If you know an easy method to reach default settings will be best, as I'm currently not using OVN in production (just for tests and to learn more about how it works) and I can afford any kind of downtime.
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > On Tuesday, December 17, 2019, 11:28:25 GMT+2, Miguel Duarte
4 years, 11 months
ovirt_disk Ansible module creating disks of wrong size but why?
by m.skrzetuski@gmail.com
Hello everyone,
I just saw the list of modules at https://github.com/ansible/ansible/tree/663f8464ee7ab2cb086857f04393e6434.... Does this mean you (the oVirt devs) have not updated the oVirt Ansible modules in 2 years?
Anyhow, the following tasks create a VM with an 8GB disk instead of a 500GB one (confirmed with lsblk and fdisk). Why is that, and how do I get around it?
- name: download centos image
  get_url:
    url: https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
    dest: /data/isos/CentOS-7-x86_64-GenericCloud.qcow2

- name: create disk for vm
  ovirt_disk:
    name: centos
    upload_image_path: /data/isos/CentOS-7-x86_64-GenericCloud.qcow2
    storage_domain: vmdata2
    size: 500GiB
    wait: true
    bootable: true
    format: cow
  when: register_centos_disks.ovirt_disks|length == 0
- name: create vm
  ovirt_vm:
    name: dumbo
    type: server
    state: "running"
    cluster: dumbo
    high_availability: yes
    disks:
      - name: centos
    graphical_console:
      protocol: vnc
    operating_system: Linux
    nics:
      - name: nic1
        profile_name: ovirtmgmt
    memory: 8GiB
    cpu_sockets: 1
    cpu_cores: 1
    cpu_threads: 2
If I use sparse: false with the ovirt_disk module, I get the following errors.
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Error: Fault reason is "Operation Failed". Fault detail is "[Cannot add Virtual Disk. Disk configuration (COW Preallocated) is incompatible with the storage domain type.]". HTTP response code is 409.
fatal: [dumbo]: FAILED! => changed=false
msg: Fault reason is "Operation Failed". Fault detail is "[Cannot add Virtual Disk. Disk configuration (COW Preallocated) is incompatible with the storage domain type.]". HTTP response code is 409.
The storage domain type is data. So what is wrong with that?
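For what it's worth, 8GB matches the virtual size of the CentOS 7 GenericCloud qcow2, which suggests the uploaded disk is being sized from the image rather than from size:. One hedged workaround sketch, assuming the ovirt_disk module in use can grow an existing disk by name and that a prior ovirt_auth login provides the ovirt_auth variable, is to upload thin (cow + sparse, which a file-based data domain accepts) and then extend in a second task:

- name: upload the cloud image as a thin disk
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: centos
    upload_image_path: /data/isos/CentOS-7-x86_64-GenericCloud.qcow2
    storage_domain: vmdata2
    format: cow
    sparse: true
    bootable: true
    wait: true

- name: grow the uploaded disk to its final size
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: centos
    storage_domain: vmdata2
    size: 500GiB
    wait: true

The guest still has to grow its partition and filesystem afterwards; cloud-init's growpart normally handles that for the root disk on the GenericCloud images.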
Frustrated regards
Skrzetuski
4 years, 11 months