Disconnecting drive from VM
by anthonywest@alfatron.com.au
Hi!
For a Virtual Machine that is already shut down, does anyone know if I can detach a disk from the VM, export the remaining parts of the VM, then reattach the disk without causing any problems with the software installed in the VM?
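If it matters, the detach and re-attach I have in mind would go through the v4 REST API, roughly like this (untested sketch; the engine URL, credentials and IDs are placeholders, and the interface should match what the disk used before):

# detach the disk (deleting the attachment does not delete the disk itself):
curl -k -u admin@internal:PASSWORD -X DELETE \
  "https://engine.example.com/ovirt-engine/api/vms/VM_ID/diskattachments/ATTACHMENT_ID"

# ...export the VM, then re-attach the same disk:
curl -k -u admin@internal:PASSWORD -X POST -H "Content-Type: application/xml" \
  -d '<disk_attachment><disk id="DISK_ID"/><interface>virtio_scsi</interface><active>true</active></disk_attachment>' \
  "https://engine.example.com/ovirt-engine/api/vms/VM_ID/diskattachments"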
Thanks,
Anthony
4 years, 2 months
Memory problem
by Stefan Wolf
Hi to all,
I've a memory problem.
I got this error:
Used memory of host kvm380.durchhalten.intern in cluster Default [96%] exceeded defined threshold [95%].
After reviewing the server with the top command, I found ovn-controller with heavy memory usage:
45055 root 10 -10 46,5g 45,4g 2400 S 51,3 72,4 627:58.60 ovn-controller
After restarting ovn-controller, everything is fine again.
After some days it uses that much memory again. I've also tried waiting a day or two; after that it seems to restart itself.
Is there a solution so that ovn-controller does not use so much memory, or at least restarts automatically?
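A workaround I am considering, in case there is no real fix: cap the service with a systemd drop-in so systemd itself restarts it when it balloons. Untested, and the limit is a guess:

# /etc/systemd/system/ovn-controller.service.d/limit.conf
[Service]
MemoryLimit=8G    # cgroup memory cap; the service is OOM-killed when it grows past this
Restart=always    # systemd then starts it again automatically
RestartSec=10

# apply it:
systemctl daemon-reload && systemctl restart ovn-controller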
thx
shb
4 years, 2 months
Re: possible to export a running VM to OVA?
by Jürgen Walch
Shutting down the VM during the export will give you a fully consistent state of the VM, whereas snapshots without further help from the VM itself will only provide "crash consistency", that is, a disk image equivalent to one where the power cord was pulled out of the running machine, which is usually, but not always, consistent enough :)
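(The "further help from the VM" would be a filesystem freeze through the qemu guest agent around the snapshot. On a plain libvirt host that looks roughly like the sketch below; as far as I know oVirt attempts the same thing automatically when the guest agent is installed.)

virsh domfsfreeze myvm    # flush and freeze guest filesystems (needs qemu-guest-agent inside the VM)
# ...take the snapshot here...
virsh domfsthaw myvm      # unfreeze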
BTW: As far as I understand, the OVA export run from the engine's web GUI *is* also doing a snapshot.
You can see the snapshot if you have a look at the VM's disks while the export is running.
--
juergen
4 years, 2 months
Re: oVirt on a Single Server
by Staniforth, Paul
Hello Matt,
probably the easiest way to test/evaluate oVirt is to use ORB
https://www.ovirt.org/documentation/ovirt-orb/
From version 4.2 you can set up a single-node hyperconverged install, but it requires more skills and resources.
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Introduct...
You could also use nested virtualization on KVM and virt-manager: use your host to provide NFS, iSCSI or Gluster storage, and then provide nested VMs for the engine and hosts.
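For the nested setup, enabling nested virtualization on an Intel host is roughly the following (kvm_amd takes the same option on AMD; reload the module only with no VMs running):

cat /sys/module/kvm_intel/parameters/nested              # Y or 1 means already enabled
echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel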
Regards,
Paul S.
________________________________
From: Matt R <webmattr(a)hotmail.com>
Sent: 21 January 2020 08:05
To: Tony Brian Albers <tba(a)kb.dk>
Cc: users(a)ovirt.org <users(a)ovirt.org>
Subject: [ovirt-users] Re: oVirt on a Single Server
That was my original configuration, but I found that it wouldn't let me add the local machine as a host, and so I thought perhaps I needed to use the self-hosted deployment methodology instead.
Would a regular engine be better for my deployment type? If so, I can investigate why that isn't working, and start over.
Sent from my iPad
> On Jan 20, 2020, at 11:46 PM, Tony Brian Albers <tba(a)kb.dk> wrote:
>
>> On Tue, 2020-01-21 at 07:35 +0000, webmattr(a)hotmail.com wrote:
>> Hello,
>>
>> I can't seem to install the self-hosted engine onto local storage. It
>> gives me glustefs, iscsi, fc, and nfs as the available options. I'm
>> using this in a home-lab scenario, and don't have budget/etc. for
>> building out a dedicated NAS for it, or setting up multiple nodes. I
>> like the look of oVirt, and wanted to try it with a couple disposable
>> vm's (plex, and a docker instance I break often). My current best-
>> thought for how to make it work is to setup NFS on the server, and
>> then point the self-hosted engine at the (local) NFS share. Is there
>> a better way to do this that I might be overlooking?*
>>
>> *Factoring that I don't have the funds to build out a proper storage
>> environment, yet.
>>
>> (and if anyone asks, I did search for a solution to this, but didn't
>> find anything super helpful. Mostly I found 5+ year old articles on a
>> similar but different scenario).
>>
>
> Well, if you can live with a regular engine(not self-hosted), this
> works:
>
> https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
>
>
> HTH
>
> /tony
4 years, 2 months
OVA export to NFS share slow
by Jürgen Walch
Hello,
we are using oVirt on a production system with a three-node hyperconverged cluster based on GlusterFS and a 10Gbit storage backbone network.
Everything runs smooth except OVA exports.
Each node has an NFS mount on
/data/ova
with the custom mount option "soft".
The NFS server used is a plain vanilla CentOS7 host with /etc/exports containing a line
/data/ova *(rw,all_squash,anonuid=36,anongid=36)
When exporting VMs as OVA using the engine web GUI, the export is terribly slow (~4MiB/s); it succeeds for small disks (up to 20GB), but exporting larger disks fails with a timeout.
The network link between oVirt-nodes and NFS server is 1Gbit.
I have done a little testing and looked at the code in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py.
It seems the export is done by setting up a loop device /dev/loopX on the exporting node, linked to a freshly generated sparse file /data/ova/{vmname}.tmp on the NFS share, and then exporting the disk using qemu-img with target /dev/loopX.
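In shell terms my reading of it is roughly the following (simplified; the real script also writes the OVF metadata and tar framing into the same file):

truncate -s 20G /data/ova/myvm.tmp                    # sparse temp file on the NFS share
loopdev=$(losetup --find --show /data/ova/myvm.tmp)   # loop device backed by the NFS file
qemu-img convert -O qcow2 /path/to/src_disk "$loopdev"
losetup -d "$loopdev"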
Using iotop on the node doing the export, I can see write rates ranging from 2-5 MiB/s on the /dev/loopX device.
When copying to the NFS share /data/ova using dd or qemu-img *directly* (that is, using /data/ova/test.img instead of the loop device as target), I am getting write rates of ~100MiB/s, which is the expected performance of the NFS server's underlying harddisk system and the network connection. It seems that the loop device is the bottleneck.
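The direct test was roughly:

dd if=/dev/zero of=/data/ova/test.img bs=1M count=4096 oflag=direct   # writes at ~100MiB/s here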
So far I have been playing with NFS mount options and the options passed to qemu-img in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py without any success.
Any ideas, or anyone with similar problems? 😊
--
juergen walch
4 years, 2 months
Gluster: a lot of entries in heal pending
by Stefan Wolf
Hello to all,
I've a problem with Gluster.
[root@kvm10 ~]# gluster volume heal data info summary
Brick kvm10:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 867
Number of entries in heal pending: 867
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@kvm10 ~]# gluster volume heal data info split-brain
Brick kvm10:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
As I understand it, there is no split-brain, but 868 files are in the heal pending state.
I've restarted every node.
I've also tried:
[root@kvm10 ~]# gluster volume heal data full
Launching heal operation to perform full self heal on volume data has been successful
Use heal info commands to check status.
but even after a week there is no real change (I started with 912 entries in heal pending).
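Here is how I am watching the entries, in case I am checking the wrong thing:

gluster volume heal data info                   # lists each pending gfid/path per brick
gluster volume heal data                        # triggers an index heal (lighter than 'full')
tail -f /var/log/glusterfs/glustershd.log       # self-heal daemon log, to look for errors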
Can somebody tell me what exactly the problem is and how I can solve it?
thank you very much
4 years, 3 months
Re: oVirt on a Single Server
by Tony Brian Albers
In a small environment I think the easiest way would be to use the
local machine as a host and then run the engine as a vm on a laptop or
an older PC. As long as you have memory enough it runs on pretty much
anything.
Be careful when using local storage, it's got some special
requirements:
https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
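In particular the path has to be owned by vdsm:kvm (uid/gid 36), so something like this (the path is just an example):

mkdir -p /data/images
chown 36:36 /data /data/images
chmod 0755 /data /data/images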
/tony
On Tue, 2020-01-21 at 08:05 +0000, Matt R wrote:
> That was my original configuration, but I found that it wouldn't let
> me add the local machine as a host, and so I thought perhaps I needed
> to use the self-hosted deployment methodology instead.
>
> Would a regular engine be better for my deployment type? If so, I can
> investigate why that isn't working, and start over.
>
> Sent from my iPad
>
> > On Jan 20, 2020, at 11:46 PM, Tony Brian Albers <tba(a)kb.dk> wrote:
> >
> > > On Tue, 2020-01-21 at 07:35 +0000, webmattr(a)hotmail.com wrote:
> > > Hello,
> > >
> > > I can't seem to install the self-hosted engine onto local
> > > storage. It
> > > gives me glustefs, iscsi, fc, and nfs as the available options.
> > > I'm
> > > using this in a home-lab scenario, and don't have budget/etc. for
> > > building out a dedicated NAS for it, or setting up multiple
> > > nodes. I
> > > like the look of oVirt, and wanted to try it with a couple
> > > disposable
> > > vm's (plex, and a docker instance I break often). My current
> > > best-
> > > thought for how to make it work is to setup NFS on the server,
> > > and
> > > then point the self-hosted engine at the (local) NFS share. Is
> > > there
> > > a better way to do this that I might be overlooking?*
> > >
> > > *Factoring that I don't have the funds to build out a proper
> > > storage
> > > environment, yet.
> > >
> > > (and if anyone asks, I did search for a solution to this, but
> > > didn't
> > > find anything super helpful. Mostly I found 5+ year old articles
> > > on a
> > > similar but different scenario).
> > >
> >
> > Well, if you can live with a regular engine(not self-hosted), this
> > works:
> >
> > https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
> >
> >
> > HTH
> >
> > /tony
4 years, 3 months
oVirt on a Single Server
by webmattr@hotmail.com
Hello,
I can't seem to install the self-hosted engine onto local storage. It gives me glusterfs, iscsi, fc, and nfs as the available options. I'm using this in a home-lab scenario, and don't have the budget etc. for building out a dedicated NAS for it, or setting up multiple nodes. I like the look of oVirt, and wanted to try it with a couple of disposable VMs (Plex, and a docker instance I break often). My current best thought for how to make it work is to set up NFS on the server, and then point the self-hosted engine at the (local) NFS share. Is there a better way to do this that I might be overlooking?*
*Factoring that I don't have the funds to build out a proper storage environment, yet.
(and if anyone asks, I did search for a solution to this, but didn't find anything super helpful. Mostly I found 5+ year old articles on a similar but different scenario).
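The export I have in mind would be something like this, guessing at the options from other setups (uid/gid 36 should be vdsm/kvm; the path is made up):

# /etc/exports
/srv/ovirt-storage  *(rw,all_squash,anonuid=36,anongid=36)

# then reload the exports:
exportfs -ra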
4 years, 3 months
VM migrations stalling over migration-only network
by Ben
Hi, I'm pretty stuck at the moment so I hope someone can help me.
I have an oVirt 4.3 data center with two hosts. Recently, I attempted to
segregate migration traffic from the standard ovirtmgmt network, where
the VM traffic and all other traffic resides.
I set up the VLAN on my router and switch, and created LACP bonds on both
hosts, tagging them with the VLAN ID. I confirmed the routes work fine, and
traffic speeds are as expected. MTU is set to 9000.
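For what it's worth, this is roughly how I verified the path between the hosts (8972 bytes of ICMP payload plus 28 bytes of headers = 9000 MTU):

ping -M do -s 8972 <peer-migration-ip>    # must succeed without fragmentation
ip route get <peer-migration-ip>          # should show the migration bond/VLAN as the egress device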
After configuring the migration network in the cluster and dragging and
dropping it onto the bonds on each host, VMs fail to migrate.
oVirt is not reporting any issues with the network interfaces or sync with
the hosts. However, when I attempt to live-migrate a VM, progress gets to
1% and stalls. The transfer rate is 0Mbps, and the operation eventually
fails.
I have not been able to identify anything useful in the VDSM logs on the
source or destination hosts, or in the engine logs. It repeats the below
WARNING and INFO logs for the duration of the process, then logs the last
entries when it fails. I can provide more logs if it would help. I'm not
even sure where to start: since I am a novice at networking at best, my
suspicion the entire time was that something is misconfigured in my
network. However, the routes are good, speed tests are fine, and I can't
find anything else wrong with the connections. It's not impacting any other
traffic over the bond interfaces.
Are there other requirements that must be met for VMs to migrate over a
separate interface/network?
2020-01-12 03:18:28,245-0500 WARN (migmon/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration stalling: remaining
(4191MiB) > lowmark (4191MiB). (migration:854)
2020-01-12 03:18:28,245-0500 INFO (migmon/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Migration Progress: 930.341
seconds elapsed, 1% of data processed, total data: 4192MB, processed data:
0MB, remaining data: 4191MB, transfer speed 0MBps, zero pages: 149MB,
compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:881)
2020-01-12 03:18:31,386-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') operation failed: migration
out job: unexpectedly failed (migration:282)
2020-01-12 03:18:32,695-0500 ERROR (migsrc/a24fd7e3) [virt.vm]
(vmId='a24fd7e3-161c-451e-8880-b3e7e1f7d86f') Failed to migrate
(migration:450)
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 431,
in _regular_run
time.time(), migrationParams, machineParams
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 505,
in _startUnderlyingMigration
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 591,
in _perform_with_conv_schedule
self._perform_migration(duri, muri)
File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 525,
in _perform_migration
self._migration_flags)
libvirtError: operation failed: migration out job: unexpectedly failed
2020-01-12 03:18:40,880-0500 INFO (jsonrpc/6) [api.virt] FINISH
getMigrationStatus return={'status': {'message': 'Done', 'code': 0},
'migrationStats': {'status': {'message': 'Fatal error during migration',
'code': 12}, 'progress': 1L}} from=::ffff:10.0.0.20,41462,
vmId=a24fd7e3-161c-451e-8880-b3e7e1f7d86f (api:54)
4 years, 3 months