oVirt 4.3 upload of image fails
by Mark Morgan
Hi, I am trying to upload an image to an oVirt 4.3 instance, but it keeps
failing.
After a few seconds it says "paused by system".
The test connection in the upload image window is successful, so the
certificate is installed properly.
Following an older thread
(https://www.mail-archive.com/users@ovirt.org/msg50954.html) I also
checked whether it has something to do with Wi-Fi, but I am not even
using a Wi-Fi connection.
Here is a small part of the log, where you can see the transfer failing.
2021-09-29 11:44:43,011+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96804) [d370a18b-bb12-4992-9fc8-7ce6607358f8] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:43,055+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96804) [1cbc3b4f-b1d4-428a-965a-b9745fd0e108] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:43,056+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(default task-96804) [1cbc3b4f-b1d4-428a-965a-b9745fd0e108] Updating
image transfer 0681f799-f44f-4b1e-8369-4d1033bd81e6 (image
ce221b1f-46aa-4eb4-b159-0e0adb762102) phase to Resuming (message: 'Sent
0MB')
2021-09-29 11:44:47,096+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96801) [50849f1b-ef18-41ab-9380-e2c7980a1f73] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:48,878+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Resuming transfer for Upload disk
'CentOS-8.4.2105-x86_64-boot.iso' (disk id:
'ce221b1f-46aa-4eb4-b159-0e0adb762102', image id:
'45896ce1-a602-49f5-9774-4dc17d960589')
2021-09-29 11:44:48,896+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] EVENT_ID:
TRANSFER_IMAGE_RESUMED_BY_USER(1,074), Image transfer was resumed by
user (admin@internal-authz).
2021-09-29 11:44:48,902+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Renewing transfer ticket for
Upload disk 'CentOS-8.4.2105-x86_64-boot.iso' (disk id:
'ce221b1f-46aa-4eb4-b159-0e0adb762102', image id:
'45896ce1-a602-49f5-9774-4dc17d960589')
2021-09-29 11:44:48,903+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] START,
ExtendImageTicketVDSCommand(HostName = virthost01,
ExtendImageTicketVDSCommandParameters:{hostId='15d10fdf-4dc1-4a4c-a12f-cab50c492974',
ticketId='8d09cf8c-baf9-4497-8b52-ea53a97b4a19', timeout='300'}), log
id: 197aba7
2021-09-29 11:44:48,908+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] FINISH,
ExtendImageTicketVDSCommand, return: StatusOnlyReturn [status=Status
[code=0, message=Done]], log id: 197aba7
2021-09-29 11:44:48,908+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Transfer session with ticket id
8d09cf8c-baf9-4497-8b52-ea53a97b4a19 extended, timeout 300 seconds
2021-09-29 11:44:48,920+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(EE-ManagedThreadFactory-engineScheduled-Thread-80)
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Updating image transfer
0681f799-f44f-4b1e-8369-4d1033bd81e6 (image
ce221b1f-46aa-4eb4-b159-0e0adb762102) phase to Transferring (message:
'Sent 0MB')
2021-09-29 11:44:51,379+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96801) [e2247750-524d-40e4-bffb-1176ff13f1f5] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
2021-09-29 11:44:55,376+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
(default task-96801) [f9b3dec1-9aac-4695-ba39-43e5e66bdccd] Running
command: TransferImageStatusCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_DISK with role type USER
Am I doing something wrong?
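Not an answer by itself, but when chasing a transfer that keeps flipping to "paused by system", it helps to pull the phase transitions out of engine.log rather than eyeballing it. A small stdlib sketch tailored to the ImageTransferUpdater lines above (the helper name is mine, not part of oVirt):

```python
import re

# Matches ImageTransferUpdater lines such as:
#   Updating image transfer <uuid> (image <uuid>) phase to Resuming (message: 'Sent 0MB')
PHASE_RE = re.compile(
    r"Updating image transfer (?P<transfer>[0-9a-f-]{36}) "
    r"\(image (?P<image>[0-9a-f-]{36})\) phase to (?P<phase>\w+)"
)

def phase_transitions(log_text):
    """Return (transfer_id, phase) tuples in the order they appear.

    Mail/terminal line wrapping is flattened first so wrapped log
    lines like the excerpt above still match.
    """
    flat = re.sub(r"\s+", " ", log_text)
    return [(m.group("transfer"), m.group("phase"))
            for m in PHASE_RE.finditer(flat)]
```

Feeding it the excerpt above yields the Resuming → Transferring sequence for the transfer, which at least narrows down at which phase the upload stalls.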
3 years, 2 months
Failed to update OVF disks / Failed to update VMs/Templates OVF data for Storage Domain
by nicolas@devels.es
Hi,
We upgraded from oVirt 4.3.8 to 4.4.8 and sometimes we're finding events
like these in the event log (3-4 times/day):
Failed to update OVF disks 77818843-f72e-4d40-9354-4e1231da341f, OVF
data isn't updated on those OVF stores (Data Center KVMRojo, Storage
Domain pv04-003).
Failed to update VMs/Templates OVF data for Storage Domain pv02-002
in Data Center KVMRojo.
I found [1]; however, it doesn't seem to solve the issue. I restarted all
the hosts and we're still getting the messages.
We haven't been able to upgrade the hosts to 4.4 yet, FWIW. Could that be
the cause?
If someone could shed some light on this, I'd be grateful.
Thanks.
[1]: https://access.redhat.com/solutions/3353011
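A quick way to see whether the 3-4 daily failures cluster on particular domains is to parse the audit messages themselves. A stdlib sketch with regexes tailored to the two message shapes quoted above (the function name is mine):

```python
import re

# Shape 1: "Failed to update OVF disks <id>, ... (Data Center <dc>, Storage Domain <sd>)."
DISKS_RE = re.compile(
    r"Failed to update OVF disks (?P<disk>[0-9a-f-]{36}).*?"
    r"\(Data Center (?P<dc>\S+), Storage Domain (?P<sd>[^)]+)\)"
)
# Shape 2: "Failed to update VMs/Templates OVF data for Storage Domain <sd> in Data Center <dc>."
DATA_RE = re.compile(
    r"Failed to update VMs/Templates OVF data for Storage Domain "
    r"(?P<sd>\S+) in Data Center (?P<dc>\S+?)\.?$"
)

def ovf_failure_target(msg):
    """Return (storage_domain, data_center) for either OVF-failure event,
    or None if the message doesn't match. Wrapped lines are flattened."""
    flat = re.sub(r"\s+", " ", msg).strip()
    m = DISKS_RE.search(flat) or DATA_RE.search(flat)
    return (m.group("sd"), m.group("dc")) if m else None
```

Counting the tuples over a day of events shows whether it is always the same storage domains (which would point at those domains' OVF_STORE disks) or spread across the data center.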
Managed Block Storage and Templates
by Shantur Rathore
Hi all,
Anyone tried using Templates with Managed Block Storage?
I created a VM on MBS and then took a snapshot.
This worked, but as soon as I created a template from the snapshot, the
template was created with no disk attached to it.
Is anyone seeing something similar?
Thanks
Managed Block Storage issues
by Shantur Rathore
Hi all,
I am trying to set up Managed block storage and have the following issues.
My setup:
Latest oVirt Node NG : 4.4.8
Latest oVirt Engine : 4.4.8
1. Unable to copy to iSCSI-based block storage
I created an MBS domain with a Synology UC3200 as the backend (supported
by Cinderlib). It was created fine, but when I try to copy disks to it,
the copy fails.
Looking at the logs on the SPM, I found that "qemu-img" failed with an
error that it cannot open "/dev/mapper/xxxxxxxxxx": Permission Error.
Digging through the code, I saw that:
a. Sometimes the /dev/mapper/xxxx symlink isn't created (log attached)
b. The ownership of /dev/mapper/xxxxxx and /dev/dm-xx for the new
device always stays root:root
I added a udev rule
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
OWNER="vdsm", MODE="0660"
and the disk copied correctly when /dev/mapper/xxxxx got created.
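For anyone adapting that rule: udev's `==` comparison against `ENV{DM_UUID}` is a shell-glob match, so `mpath-*` catches every multipath device and nothing else. A stdlib approximation of the match using `fnmatch` (an analogy only; udev has its own matcher):

```python
from fnmatch import fnmatchcase

def rule_matches(dm_uuid, pattern="mpath-*"):
    """Approximate the udev glob test ENV{DM_UUID}=="mpath-*"."""
    return fnmatchcase(dm_uuid, pattern)
```

So a multipath device (DM_UUID beginning with `mpath-`) is matched and gets vdsm:qemu with mode 0660, while e.g. LVM devices (DM_UUID beginning with `LVM-`) keep their default ownership and are unaffected by the rule.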
2. Copy progress in the UI finishes much earlier than the actual
qemu-img process.
The UI shows the copy process completed successfully while the image is
actually still being copied.
This happens for both Ceph- and iSCSI-based MBS.
Is there any known workaround to get iSCSI MBS working?
Kind regards,
Shantur
oVirt / Hyperconverged
by topoigerm@gmail.com
I have 4 servers with identical hardware. The documentation says "you need 3", not "you need 3 or more"; is it possible to run hyperconverged with 4 servers? Currently all 4 nodes have crashed after the 4th node tried to join the 3-node hyperconverged cluster. Kindly advise.
FYI, I am currently reinstalling the OS on all nodes because of the incident mentioned above.
/BR
Faizal
about the Live Storage Migration
by Tommy Sway
From the document:
Overview of Live Storage Migration
Virtual disks can be migrated from one storage domain to another while the
virtual machine to which they are attached is running. This is referred to
as live storage migration. When a disk attached to a running virtual machine
is migrated, a snapshot of that disk's image chain is created in the source
storage domain, and the entire image chain is replicated in the destination
storage domain. As such, ensure that you have sufficient storage space in
both the source storage domain and the destination storage domain to host
both the disk image chain and the snapshot. A new snapshot is created on
each live storage migration attempt, even when the migration fails.
Consider the following when using live storage migration:
You can live migrate multiple disks at one time.
Multiple disks for the same virtual machine can reside across more than one
storage domain, but the image chain for each disk must reside on a single
storage domain.
You can live migrate disks between any two storage domains in the same data
center.
You cannot live migrate direct LUN hard disk images or disks marked as
shareable.
But where do users actually perform live storage migrations?
There seems to be no interface for it.
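On the sizing note in the quoted passage: a back-of-the-envelope helper for the free space the documentation says you need (the snapshot-overhead figure is an assumption you have to estimate yourself; it grows with write activity during the migration):

```python
def lsm_space_needed(chain_bytes, snapshot_overhead_bytes):
    """Rough free-space requirements for one live storage migration.

    Per the documentation above: the source domain gains an
    auto-created snapshot on top of the existing chain, and the
    destination domain receives a full replica of the chain
    (including that snapshot).
    """
    return {
        "source_extra": snapshot_overhead_bytes,
        "destination": chain_bytes + snapshot_overhead_bytes,
    }
```

For example, a 100 GiB image chain with a 5 GiB snapshot estimate needs roughly 5 GiB free on the source and 105 GiB on the destination. And note the warning above: a new snapshot is created on every attempt, even failed ones, so repeated retries keep consuming source space.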
Re: About the vm memory limit
by Tommy Sway
In fact, I am very interested in the part you mentioned, because my environment runs relational databases, which usually require a large amount of memory,
and some systems clearly need huge-page memory configured (such as Oracle).
Could you elaborate on some technical details of huge-page memory management, and the difference between 4.3 and 4.4 in this respect?
Thank you very much!
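While waiting for the details: the arithmetic for backing a VM entirely with huge pages is simple to sketch (page sizes follow the usual Linux values of 2 MiB and 1 GiB expressed in KiB; exactly how your oVirt version exposes the setting for a VM is something to verify for your setup):

```python
def hugepages_needed(vm_mem_mib, page_size_kib=2048):
    """Huge pages the host must reserve to back the whole VM memory.

    page_size_kib: 2048 for 2 MiB pages, 1048576 for 1 GiB pages.
    Rounds up, since a partial page still consumes a whole one.
    """
    total_kib = vm_mem_mib * 1024
    return -(-total_kib // page_size_kib)  # ceiling division
```

A 64 GiB database VM on 2 MiB pages would need 32768 pages reserved on its host; on 1 GiB pages, 64. Reserved huge pages are not usable by ordinary processes, so this reservation should be sized per host deliberately.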
From: Strahil Nikolov <hunter86_bg@yahoo.com>
Sent: Saturday, September 25, 2021 5:32 PM
To: Tommy Sway <sz_cuitao@163.com>
Subject: Re: [ovirt-users] About the vm memory limit
It depends on the NUMA configuration of the host.
If you have 256G per CPU, it's best to stay within that range.
Also, consider disabling transparent huge pages on the host & VM.
Since 4.4, regular huge pages (do not confuse them with THP) can be used on the hypervisors, while on 4.3 there were some issues, but I can't provide any details.
Best Regards,
Strahil Nikolov
On Fri, Sep 24, 2021 at 6:40, Tommy Sway <sz_cuitao@163.com> wrote:
I would like to ask whether there is any limit on the memory size of virtual machines, or a performance curve or something like that.
As long as there is memory on the physical machine, is it simply the more the better?
In our usage scenario, there are many virtual machines running databases, and their memory requirements vary greatly.
For some virtual machines 4G of memory is enough, while others need 64GB.
I want to know the optimal memory sizing for a virtual machine: since a virtual machine is just a QEMU process on a physical machine, I worry that it does not use memory as efficiently as a physical machine does. Understanding this would let us develop guidelines for optimal memory usage scenarios for virtual machines.
Thank you!
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y6XDOIMKCP4...
Hypervisor broken
by Hans Scheffers
Hi all,
I am new here; I've been searching the mailing list regularly whenever
I encountered problems, and until now I was always able to keep the
system up & running. As said... until now...
I run oVirt at home with about 7 VMs. Lately I have had some trouble
with my electricity, which results in complete power outages on an
irregular basis. Yesterday a power outage left the system unbootable
with the following errors:
error:
../../grub-core/loader/i386/pc/linux.c:170:invalid magic number.
error: ../../grub-core/loader/i386/pc/linux.c:1418:you need to load
the kernel first.
Press any key to continue
Normally this can be solved following
https://access.redhat.com/solutions/5829141, but this time the files in
/boot also had a size of 0 bytes, so basically I no longer had a working
kernel on the system.
https://www.thegeekdiary.com/centos-rhel-7-how-to-install-kernel-from-res...
works for CentOS 8 as well, but the ovirt 4.8 iso does not have the
same directory structure. Using CentOS 8 installs a kernel, but not a
bootable system.
When booting the ovirt kernel, I get a rescue prompt with the
following error: "Warning:
/dev/onn/ovirt-node-ng-4.4.6.3-0.20210518.0+1 does not exist"
Is there a way I can start the installer in troubleshooting mode so I
can do a reinstall and keep the current configuration, or can I just
reinstall and import the current data center?