Managed Block Storage and Templates
by Shantur Rathore
Hi all,
Anyone tried using Templates with Managed Block Storage?
I created a VM on MBS and then took a snapshot.
This worked, but when I created a template from the snapshot, the
template was created with no disk attached.
Has anyone seen something similar?
Thanks
2 years, 7 months
Managed Block Storage issues
by Shantur Rathore
Hi all,
I am trying to set up Managed Block Storage and have run into the following issues.
My setup:
Latest oVirt Node NG : 4.4.8
Latest oVirt Engine : 4.4.8
1. Unable to copy to iSCSI based block storage
I created an MBS domain with a Synology UC3200 as the backend (supported by
Cinderlib). It was created fine, but when I try to copy disks to it,
it fails.
Looking at the logs on the SPM, I found that "qemu-img" failed with a
permission error: it cannot open "/dev/mapper/xxxxxxxxxx".
Digging through the code, I saw that:
a. Sometimes the /dev/mapper/xxxx symlink isn't created (log attached)
b. The ownership of /dev/mapper/xxxxxx and /dev/dm-xx for the new
device always stays root:root
I added a udev rule
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
OWNER="vdsm", MODE="0660"
and the disk copied correctly when /dev/mapper/xxxxx got created.
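If the ownership workaround holds up, one way to persist it would be a udev rules file; a minimal sketch, where the file name and `99-` priority are my assumptions (the rule itself is the one quoted above), written to /tmp here so the sketch runs without root:

```shell
# Persist the udev rule from the workaround above as a rules file.
rulefile=/tmp/99-vdsm-mpath-perms.rules
cat > "$rulefile" <<'EOF'
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu", OWNER="vdsm", MODE="0660"
EOF
# On a real host (as root) you would then install and activate it:
#   cp "$rulefile" /etc/udev/rules.d/
#   udevadm control --reload-rules
#   udevadm trigger --subsystem-match=block
grep -c 'OWNER="vdsm"' "$rulefile"
```

This only papers over the symptom, of course; the missing-symlink case (a.) would still need a real fix in vdsm/Cinderlib.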
2. Copy progress finishes in the UI much earlier than the actual qemu-img process.
The UI shows the copy completed successfully while the image is
actually still being copied.
This happens for both Ceph- and iSCSI-based MBS.
Is there any known workaround to get iSCSI MBS working?
Kind regards,
Shantur
2 years, 7 months
oVirt / Hyperconverged
by topoigerm@gmail.com
I have 4 servers with identical hardware. The documentation says "you need 3", not "you need 3 or more"; is it possible to run hyperconverged with 4 servers? Currently all 4 nodes have crashed after the 4th node tried to join the 3-node hyperconverged cluster. Kindly advise.
FYI, I am currently reinstalling the OS on all nodes because of the incident mentioned above.
/BR
Faizal
2 years, 7 months
about the Live Storage Migration
by Tommy Sway
From the document:
Overview of Live Storage Migration
Virtual disks can be migrated from one storage domain to another while the
virtual machine to which they are attached is running. This is referred to
as live storage migration. When a disk attached to a running virtual machine
is migrated, a snapshot of that disk's image chain is created in the source
storage domain, and the entire image chain is replicated in the destination
storage domain. As such, ensure that you have sufficient storage space in
both the source storage domain and the destination storage domain to host
both the disk image chain and the snapshot. A new snapshot is created on
each live storage migration attempt, even when the migration fails.
Consider the following when using live storage migration:
You can live migrate multiple disks at one time.
Multiple disks for the same virtual machine can reside across more than one
storage domain, but the image chain for each disk must reside on a single
storage domain.
You can live migrate disks between any two storage domains in the same data
center.
You cannot live migrate direct LUN hard disk images or disks marked as
shareable.
But where do users perform live storage migrations?
There seems to be no interface for it.
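For what it's worth, a disk move can also be requested outside the UI, through the REST API's move action on a disk. A hedged sketch, in which the engine FQDN, the credentials, and both the disk and storage-domain IDs are placeholders, not real values:

```shell
# Build the request body for POST /ovirt-engine/api/disks/{id}/move.
# TARGET_SD_ID is a placeholder for the destination storage domain's UUID.
cat > /tmp/move-disk.xml <<'EOF'
<action>
  <storage_domain id="TARGET_SD_ID"/>
</action>
EOF
# The actual call (placeholders throughout; -k only for self-signed certs):
#   curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
#        -d @/tmp/move-disk.xml \
#        'https://engine.example.com/ovirt-engine/api/disks/DISK_ID/move'
grep -c '<storage_domain' /tmp/move-disk.xml
```

If the disk is attached to a running VM, the engine should turn this into a live storage migration as described in the documentation quoted above.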
2 years, 7 months
Re: About the vm memory limit
by Tommy Sway
In fact, I am very interested in the part you mentioned, because my environment runs relational databases, which usually require a large amount of memory,
and some systems explicitly need huge page memory configured (such as Oracle).
Could you elaborate on some technical details about the management of huge page memory, and the difference between 4.3 and 4.4 in this respect?
Thank you very much!
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Saturday, September 25, 2021 5:32 PM
To: Tommy Sway <sz_cuitao(a)163.com>
Subject: Re: [ovirt-users] About the vm memory limit
It depends on the NUMA configuration of the host.
If you have 256 GB per CPU, it's best to stay within that range.
Also, consider disabling transparent huge pages on the host and in the VM.
Since 4.4, regular huge pages (not to be confused with THP) can be used on the hypervisors; on 4.3 there were some issues, but I can't provide any details.
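To illustrate, reserving regular 2 MiB huge pages on a hypervisor and checking the result could look like the sketch below; the 1024-page count is only an illustration, not a recommendation, and the mutating step needs root so it is shown commented:

```shell
# Reserve 1024 x 2 MiB = 2 GiB of hugepages (root only, shown commented):
#   echo 1024 > /proc/sys/vm/nr_hugepages
# Reading the current state works unprivileged on any Linux host:
grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo
```

The pages then have to be handed to the guest as well; in oVirt this is typically done per VM (I believe via a `hugepages` custom property giving the page size in KiB, but check the documentation for your version).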
Best Regards,
Strahil Nikolov
On Fri, Sep 24, 2021 at 6:40, Tommy Sway <sz_cuitao(a)163.com> wrote:
I would like to ask whether there is any limit on the memory size of virtual machines, or a performance curve or something like that.
As long as there is memory on the physical machine, is more always better?
In our usage scenario, there are many virtual machines running databases, and their memory needs vary greatly.
For some virtual machines, 4 GB of memory is enough, while others need 64 GB.
I want to know the optimal memory configuration for a virtual machine. Since a virtual machine is just a QEMU process on the physical machine, I worry that it does not use memory as effectively as a physical machine would. Understanding this would let us develop guidelines for optimal memory usage for virtual machines.
Thank you!
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y6XDOIMKCP4...
2 years, 7 months
Hypervisor broken
by Hans Scheffers
Hi all,
I am new here. I have been searching the mailing list on a regular
basis whenever I encountered problems, and until now I was always able
to keep the system up and running. As I said... until now...
I run oVirt at home with about 7 VMs on it. Lately I have had some
trouble with my electricity, which results in complete power outages
at irregular intervals. Yesterday a power outage left the system
unbootable with the following errors:
error:
../../grub-core/loader/i386/pc/linux.c:170:invalid magic number.
error: ../../grub-core/loader/i386/pc/linux.c:1418:you need to load
the kernel first.
Press any key to continue
Normally this can be solved by following
https://access.redhat.com/solutions/5829141, but this time the files
in /boot also had a size of 0 bytes, so basically I no longer had a
working kernel on the system.
https://www.thegeekdiary.com/centos-rhel-7-how-to-install-kernel-from-res...
works for CentOS 8 as well, but the oVirt 4.8 ISO does not have the
same directory structure. Using CentOS 8 installs a kernel, but not a
bootable system.
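As a quick diagnostic for the zero-byte /boot symptom, something like this could confirm which kernel files were truncated. The demo directory and file names are stand-ins so the sketch is self-contained; on the broken host you would set `bootdir=/boot`:

```shell
# List zero-byte files in a boot directory (stand-in path for the demo).
bootdir=/tmp/boot-demo
mkdir -p "$bootdir"
: > "$bootdir/vmlinuz-0.0.0-demo"           # simulate a truncated kernel image
printf 'ok' > "$bootdir/initramfs-demo.img" # a healthy file for contrast
find "$bootdir" -maxdepth 1 -type f -size 0 -printf '%f\n'
```

Once the damaged files are identified, the usual route is to boot the installer in rescue mode, chroot into the installed system, and reinstall the kernel packages from there, though I can't say whether that works with the oVirt Node image layout.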
When booting the oVirt kernel, I get a rescue prompt with the
following error: "Warning:
/dev/onn/ovirt-node-ng-4.4.6.3-0.20210518.0+1 does not exist"
Is there a way I can start the installer in troubleshooting mode so I
can reinstall while keeping the current configuration, or can I just
reinstall and import the current data center?
2 years, 7 months
Hosted Engine cluster version compatib.
by Andrea Chierici
Dear all,
I have just updated my oVirt installation, with self-hosted engine, from
4.4.5 to 4.4.8.5-1.
Everything went smoothly and in a few minutes the system was back up and
running.
One little issue is still puzzling me.
I am asked to update the cluster and data center compatibility level
from 4.5 to 4.6. When I try to issue the command from the cluster
configuration, I get this error:
> Error while executing action: Cannot update cluster because the update
> triggered update of the VMs/Templates and it failed for the following:
> HostedEngine. To fix the issue, please go to each of them, edit,
> change the Custom Compatibility Version (or other fields changed
> previously in the cluster dialog) and press OK. If the save does not
> pass, fix the dialog validation. After successful cluster update, you
> can revert your Custom Compatibility Version change (or other
> changes). If the problem still persists, you may refer to the
> engine.log file for further details.
It's very strange, because the configuration of the hosted engine is "plain" and
there are no constraints on the compatibility version, as you can see in this
picture:
In any case, if I try to force compatibility with 4.6 I get this error:
> Error while executing action:
>
> HostedEngine:
>
> There was an attempt to change Hosted Engine VM values that are
> locked.
So I am stuck. It's not a big deal at the moment, but sooner or later I will
have to do this upgrade, and I don't know where I am going wrong.
Can anybody give me a clue?
Thanks in advance,
Andrea
--
Andrea Chierici - INFN-CNAF
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463
SkypeID ataruz
--
2 years, 7 months
Ooops! in last step of Hyperconverged deployment
by Harry O
Hi,
During the second engine deployment run of the hyperconverged deployment, I get a red "Ooops!" in Cockpit.
I think it fails on some networking setup.
The first oVirt node says "Hosted Engine is up!" but the other nodes have not been added to the Hosted Engine yet.
There is no network connectivity to the engine outside node1; I can ssh to the engine from node1 at the correct IP address.
Please tell me which logs I should pull.
2 years, 7 months