KVM (libvirt) VM Import
by jorgevisentini@gmail.com
Hi all!
I have a Suse KVM and I would like to import all VMs to oVirt. There is a easy mode to make it?
I am try to import with Compute -> Virtual Machine -> Import.
I can visualize the VMs but its show me the error when I try import:
"libvirtError: Storage volume not found: no storage vol with matching path '/VMs01/VM/disk.qcow2"
Anyone can help-me please?
4 years, 11 months
Gluster storage options
by Shareef Jalloq
Hi there,
I'm wanting to build a 3 node Gluster hyperconverged setup but am
struggling to find documentation and examples of the storage setup.
There seems to be a dead link to an old blog post on the Gluster section of
the documentation:
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-glus...
Is the flow to install the oVirt Node image on a boot drive and then add
disks for Gluster? Or is Gluster setup first with ovirt installed on top?
Thanks.
4 years, 11 months
Disconnecting drive from VM
by anthonywest@alfatron.com.au
Hi!
For a Virtual Machine that is already shutdown, does anyone know if I can detach a disk from the VM, export the remaining parts of the VM, then reattach the disk without causing any problems with the software installed in the VM?
Thanks,
Anthony
4 years, 11 months
Memory problem
by Stefan Wolf
Hi to all,
I ve a memory problem
I got this error:
Used memory of host kvm380.durchhalten.intern in cluster Default [96%] exceeded defined threshold [95%].
after reviewing the server with top command, I found ovn-controller with heavy memory usage:
45055 root 10 -10 46,5g 45,4g 2400 S 51,3 72,4 627:58.60 ovn-controller
after restarting ovn-controller, every thing is fine again.
after some days, it uses so much memory again. I ve also tried to wait a day or two, after that it seems to restart it self.
is there a solution that ovn-controller does not use so much memory or restart automatic?
thx
shb
4 years, 11 months
Re: possible to export a running VM to OVA?
by Jürgen Walch
Shutting down the VM during the export will give you a fully consistent state of the VM whereas snapshots without further help from the VM itself will only provide "crash consistency", that is provide you with a disk image equivalent to one where the power cord was plugged out of the running machine, which is usually, but not always consistent enough :)
BTW: As far as I understand the OVA export run from the engines web-gui *is* also doing a snapshot.
You can see the snapshot if you have a look at the VM's disks while the export is running.
--
juergen
4 years, 11 months
Re: oVirt on a Single Server
by Staniforth, Paul
Hello Matt,
probably the easiest setup to test/evaluate oVirt it to use ORB
https://www.ovirt.org/documentation/ovirt-orb/
oVirt Orb | oVirt<https://www.ovirt.org/documentation/ovirt-orb/>
oVirt is a free open-source virtualization solution for your entire enterprise
www.ovirt.org
From version 4.2 you can setup a single node hyper-converged install but it requires more skills and resources.
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Introduct...
Introduction | oVirt<https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Introduct...>
Chapter: Introduction. Hyperconvergence is a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking and virtualization resources and other technologies from scratch in a commodity hardware box supported by a single vendor[1].
www.ovirt.org
You could also use nested virtualization on KVM and virt-manager, use your host to provide NFS, iscsi or gluster and then provide a nested VM for the engine and hosts.
Regards,
Paul S.
________________________________
From: Matt R <webmattr(a)hotmail.com>
Sent: 21 January 2020 08:05
To: Tony Brian Albers <tba(a)kb.dk>
Cc: users(a)ovirt.org <users(a)ovirt.org>
Subject: [ovirt-users] Re: oVirt on a Single Server
Caution External Mail: Do not click any links or open any attachments unless you trust the sender and know that the content is safe.
That was my original configuration, but I found that it wouldn't let me add the local machine as a host, and so I thought perhaps I needed to use the self-hosted deployment methodology instead.
Would a regular engine be better for my deployment type? If so, I can investigate why that isn't working, and start over.
Sent from my iPad
> On Jan 20, 2020, at 11:46 PM, Tony Brian Albers <tba(a)kb.dk> wrote:
>
>> On Tue, 2020-01-21 at 07:35 +0000, webmattr(a)hotmail.com wrote:
>> Hello,
>>
>> I can't seem to install the self-hosted engine onto local storage. It
>> gives me glustefs, iscsi, fc, and nfs as the available options. I'm
>> using this in a home-lab scenario, and don't have budget/etc. for
>> building out a dedicated NAS for it, or setting up multiple nodes. I
>> like the look of oVirt, and wanted to try it with a couple disposable
>> vm's (plex, and a docker instance I break often). My current best-
>> thought for how to make it work is to setup NFS on the server, and
>> then point the self-hosted engine at the (local) NFS share. Is there
>> a better way to do this that I might be overlooking?*
>>
>> *Factoring that I don't have the funds to build out a proper storage
>> environment, yet.
>>
>> (and if anyone asks, I did search for a solution to this, but didn't
>> find anything super helpful. Mostly I found 5+ year old articles on a
>> similar but different scenario).
>>
>
> Well, if you can live with a regular engine(not self-hosted), this
> works:
>
> https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
>
>
> HTH
>
> /tony
>
>
>
>
>
>
>
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
oVirt Code of Conduct: https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
List Archives: https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.o...
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
4 years, 11 months
OVA export to NFS share slow
by Jürgen Walch
Hello,
we are using oVirt on a production system with a three node hyperconverged-cluster based on GlusterFS with a 10Gbit storage backbone network.
Everything runs smooth except OVA exports.
Each node has a NFS mount mounted on
/data/ova
with custom mount option "soft".
The NFS server used is a plain vanilla CentOS7 host with /etc/exports containing a line
/data/ova *(rw,all_squash,anonuid=36,anongid=36)
When exporting VM's as OVA using the engine web gui, the export is terribly slow (~4MiB/s), it succeeds for small disks (up to 20GB), exporting larger disks fails with a timeout.
The network link between oVirt-nodes and NFS server is 1Gbit.
I have done a little testing and looked at the code in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py.
It seems, the export is done by setting up a loop device /dev/loopX on the exporting node linked to a freshly generated sparse file /data/ova/{vmname}.tmp on the NFS share and then exporting the disk using qemu-img with target /dev/loopX.
Using iotop on the node doing the export I can see write rates ranging from 2-5 Mib/s on the /dev/loopX device.
When copying to the NFS share /data/ova using dd or qemu-img *directly* (that is using /data/ova/test.img instead of the loop device as target) I am getting write rates of ~100MiB/s which is the expected performance of the NFS servers underlying harddisk-system and the network connection. It seems that the loop device is the bottleneck.
So far I have been playing with NFS mount options and the options passed to qemu-img in /usr/share/ovirt-engine/playbooks/roles/ovirt-ova-pack/files/pack_ova.py without any success.
Any ideas or anyone with similar problems ? 😊
--
juergen walch
4 years, 11 months
Gluster: a lof of Number of ntries in heal pending
by Stefan Wolf
Hello to all,
I ve a problem with gluster
[root@kvm10 ~]# gluster volume heal data info summary
Brick kvm10:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 867
Number of entries in heal pending: 867
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 868
Number of entries in heal pending: 868
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@kvm10 ~]# gluster volume heal data info split-brain
Brick kvm10:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
Brick kvm320.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Number of entries in split-brain: 0
As I understand there is no split-brain but 868 files ar in state heal pending.
I ve restarted every node.
I ve also tried:
[root@kvm10 ~]# gluster volume heal data full
Launching heal operation to perform full self heal on volume data has been successful
Use heal info commands to check status.
but even after a week there is no really change ( I started with 912 Number of entries in heal pending)
can somebody tell what exactly is the problem and how can I solve it.
thank you very much
4 years, 11 months
Re: oVirt on a Single Server
by Tony Brian Albers
In a small environment I think the easiest way would be to use the
local machine as a host and then run the engine as a vm on a laptop or
an older PC. As long as you have memory enough it runs on pretty much
anything.
Be careful when using local storage, it's got some special
requirements:
https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
/tony
On Tue, 2020-01-21 at 08:05 +0000, Matt R wrote:
> That was my original configuration, but I found that it wouldn't let
> me add the local machine as a host, and so I thought perhaps I needed
> to use the self-hosted deployment methodology instead.
>
> Would a regular engine be better for my deployment type? If so, I can
> investigate why that isn't working, and start over.
>
> Sent from my iPad
>
> > On Jan 20, 2020, at 11:46 PM, Tony Brian Albers <tba(a)kb.dk> wrote:
> >
> > > On Tue, 2020-01-21 at 07:35 +0000, webmattr(a)hotmail.com wrote:
> > > Hello,
> > >
> > > I can't seem to install the self-hosted engine onto local
> > > storage. It
> > > gives me glustefs, iscsi, fc, and nfs as the available options.
> > > I'm
> > > using this in a home-lab scenario, and don't have budget/etc. for
> > > building out a dedicated NAS for it, or setting up multiple
> > > nodes. I
> > > like the look of oVirt, and wanted to try it with a couple
> > > disposable
> > > vm's (plex, and a docker instance I break often). My current
> > > best-
> > > thought for how to make it work is to setup NFS on the server,
> > > and
> > > then point the self-hosted engine at the (local) NFS share. Is
> > > there
> > > a better way to do this that I might be overlooking?*
> > >
> > > *Factoring that I don't have the funds to build out a proper
> > > storage
> > > environment, yet.
> > >
> > > (and if anyone asks, I did search for a solution to this, but
> > > didn't
> > > find anything super helpful. Mostly I found 5+ year old articles
> > > on a
> > > similar but different scenario).
> > >
> >
> > Well, if you can live with a regular engine(not self-hosted), this
> > works:
> >
> > https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
> >
> >
> > HTH
> >
> > /tony
> >
> >
> >
> >
> >
> >
> >
4 years, 11 months