How to modify the MAC address of a bonding interface after changing the NIC
by wodel youchi
Hi,
oVirt 4.4 on CentOS 8.
We have changed the network card of a hypervisor that uses a bonding
interface.
The bonding interface still uses the MAC address of the former slave
interface.
We tried to modify it via the CLI, but it didn't work: as soon as the node
(hypervisor) is activated, the old MAC is re-established.
It seems to be persisted somewhere.
Should we reinstall the host?
Regards.
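A bond normally inherits the MAC of its first slave, and a stale MACADDR entry in persisted network config can keep re-applying the old one on activation. The sketch below simulates that fix on a throwaway copy of an ifcfg file; the interface name (bond0) and both MAC addresses are hypothetical, and on oVirt 4.4 the authoritative copy may be held by NetworkManager/vdsm rather than a plain ifcfg file, so check vdsm's persisted network config before editing anything by hand.

```shell
# Sketch only - bond0 and both MACs are made-up values.
# Simulate an ifcfg file carrying a stale MACADDR from the old NIC:
mkdir -p /tmp/netcfg
cat > /tmp/netcfg/ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
MACADDR=00:11:22:33:44:55
EOF

# Point the bond at the new slave's MAC instead:
sed -i 's/^MACADDR=.*/MACADDR=66:77:88:99:aa:bb/' /tmp/netcfg/ifcfg-bond0
grep '^MACADDR=' /tmp/netcfg/ifcfg-bond0
```

On a live host the file would sit under /etc/sysconfig/network-scripts/. A runtime-only change is `ip link set dev bond0 address <new-mac>`, but anything vdsm has persisted gets re-applied when the host is activated, which matches the behaviour described above.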
2 weeks, 1 day
Re: Unable to get console of self hosted engine via oracle kvm
by Dushyant Khobragade
For me it is not going any further.
On Fri, Mar 14, 2025 at 5:54 PM Geoff O'Callaghan <geoffocallaghan(a)gmail.com>
wrote:
> I didn't have to perform any extra steps, but I was using oVirt Node
> (current), not OLVM itself - I don't have an OLVM environment to test
> against, but might be able to build one over the next few days.
>
> I merely ssh'd into the host where the self-hosted engine was running and
> ran:
>
> *hosted-engine --console*
>
> That's it
>
> ------------------------------
> *From:* dushyantk.sun(a)gmail.com <dushyantk.sun(a)gmail.com>
> *Sent:* Thursday, March 13, 2025 7:02 AM
> *To:* users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* [ovirt-users] Re: Unable to get console of self hosted engine
> via oracle kvm
>
> Yes, I did, but it just gets stuck and doesn't give any prompt.
>
> Do we need to set a terminal on the KVM side?
>
> Also do you have any steps to perform prior to taking console?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILVEWHD3J4D...
>
2 weeks, 3 days
Unable to get console of self hosted engine via oracle kvm
by dushyantk.sun@gmail.com
Hi,
I am trying to get a console to the hosted engine from the KVM host where the SHE runs.
I get the output below but never see a login prompt. I have set the console password as per the Oracle doc.
Use case: the SHE UI is down, so I want to log in to the SHE itself to see what is going on.
# echo $TERM
xterm-256color
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
I don't get any prompt after this.
Has anyone come across such a scenario?
2 weeks, 6 days
Re: Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-ceph-pacific
by Geoff O'Callaghan
Correcting my email - you can remove the following on an EL9 or related system. I haven't checked what happens on RHEL9 itself, but will later.
**remove**
```bash
cat >/etc/yum.repos.d/CentOS-Stream-Extras-common.repo <<'EOF'
[c9s-extras-common]
name=CentOS Stream $releasever - Extras packages
metalink=https://mirrors.centos.org/metalink?repo=centos-extras-sig-extra...
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Extras
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
EOF
echo "9-stream" > /etc/yum/vars/stream
dnf distro-sync --nobest
```
**remove**
Summary:
## Installing on RHEL 9.0 derivatives
Once a minimal RHEL 9.0 derivative system has been built, you can install the pre-production master snapshots (oVirt Release 4.5.6 + Engine 4.5.7):
```bash
dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-9
dnf install -y ovirt-release-master
```
At this point you can choose to install:
* Hosted Engine, `dnf install -y ovirt-hosted-engine-setup`
* A minimal host, `dnf install -y ovirt-host`
NOTE: This is using the latest builds. There hasn't been a release for some time, so your best chance of success is to use these snapshots. When there is a release I'll update the document to include the details.
________________________________
From: Geoff O'Callaghan
Sent: Wednesday, March 05, 2025 5:15 PM
To: users(a)ovirt.org; fiorletta(a)ssolo.eu
Subject: Re: [ovirt-users] Re: Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c...
Let me try to break this down into a sequence of steps, and let's see where it breaks down for you. Let's start with the simplest scenario: a CentOS 9 Stream system (6 cores, 32 GB memory, 250 GB disk, single 1 GbE NIC - note I'm doing this in a virt-enabled VM, but bare metal should be similar). This is the sort of information I need to debug your problem.
* Build a vanilla CentOS 9 Stream system (non-oVirt-node image, but we can do that if you like).
* My CentOS 9 system is called ovnode01.<fqdn> 192.168.1.111, and the manager I'm going to install is emanager.<fqdn> 192.168.1.251; both FQDNs resolve in my DNS (important).
* As per https://ovirt.org/download/install_on_rhel.html:
```bash
cat >/etc/yum.repos.d/CentOS-Stream-Extras-common.repo <<'EOF'
[c9s-extras-common]
name=CentOS Stream $releasever - Extras packages
metalink=https://mirrors.centos.org/metalink?repo=centos-extras-sig-extra...
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Extras
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
EOF
echo "9-stream" > /etc/yum/vars/stream
dnf distro-sync --nobest
```
```bash
dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-9
dnf install -y ovirt-release-master
```
* You should get something like this:
```
Running transaction
Preparing : 1/1
Installing : tar-2:1.34-7.el9.x86_64 1/2
Installing : ovirt-release-master-4.5.6-0.0.master.20240719070214.gitf7ad2c3.el9.noarch 2/2
Running scriptlet: ovirt-release-master-4.5.6-0.0.master.20240719070214.gitf7ad2c3.el9.noarch 2/2
Verifying : ovirt-release-master-4.5.6-0.0.master.20240719070214.gitf7ad2c3.el9.noarch 1/2
Verifying : tar-2:1.34-7.el9.x86_64
```
* `dnf install ovirt-hosted-engine-setup`
* Apart from the message 'Repository copr:copr.fedorainfracloud.org:ovirt:ovirt-master-snapshot is listed more than once in the configuration', you should have no other repository issues at this time.
* 663 packages later..... (my initial builds are minimal installs)
* Remembering the above comment about DNS resolution of this node and the about-to-be-built manager VM - make sure DNS resolves correctly to the systems you're about to build.
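That DNS point can be checked up front. A tiny sketch follows (the hostnames are placeholders - substitute your node and engine FQDNs); `getent` goes through the same resolver stack (/etc/hosts plus DNS) the installer will use:

```shell
# Report whether a name resolves the way the deploy will see it.
check_resolves() {
  if getent hosts "$1" >/dev/null; then
    echo "OK: $1 resolves"
  else
    echo "FAIL: $1 does not resolve"
  fi
}

check_resolves localhost           # sanity check of the resolver itself
check_resolves emanager.invalid    # placeholder engine FQDN; .invalid is reserved and never resolves
```

Run it for both the node FQDN and the engine FQDN before starting the deploy; a FAIL here is far cheaper to fix than a failed hosted-engine run.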
* `hosted-engine --deploy --4`
* Pretty much hit Enter for most things, except for the following:
* Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]: ping
* Engine VM FQDN: emanager.<fqdn>
* Enter root password that will be used for the engine appliance: <your password>
* How should the engine VM network be configured? (DHCP, Static) [DHCP]: Static
* Please enter the IP address to be used for the engine VM []: 192.168.1.251 # the IP address emanager.<fqdn> resolves to
* Enter engine admin password: <your password>
* Please provide the hostname of this host on the management network [ovnode01]: ovnode01.<fqdn> # fully qualified name that we know resolves to the correct IP address
* ....... Lots of stuff happens
* [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Install oVirt Hosted Engine packages]
  [ INFO ] ok: [localhost]
  ...
* [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Install ovirt-engine-appliance rpm] (sits here for quite a while, as the rpm contains the appliance VM; it has to install the rpm, unpack it, copy bits and pieces around, etc.)
* Eventually you'll get to:
* Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs) [nfs]: nfs
* Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2) [auto]:
* Please specify the full shared storage connection path to use (example: host:/path): 192.168.1.250:/ovirtx
* If needed, specify additional mount options for the connection to the hosted-engine storage domain (example: rsize=32768,wsize=32768) []:
...
Please specify the size of the VM disk in GiB: [51]:
Building the engine VM and shoving it out on NFS (in my case)
...
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Shutdown local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for local VM shutdown]
...
TASK [ovirt.ovirt.hosted_engine_setup : Copy local VM disk to shared storage]
...
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Hosted Engine successfully deployed
And hey presto, the manager should be at https://emanager.<fqdn>
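For repeat runs, the interactive answers above don't have to be retyped: hosted-engine records what it collected in /etc/ovirt-hosted-engine/answers.conf, and a saved copy can be replayed with `hosted-engine --deploy --4 --config-append=/root/answers.conf`. A trimmed illustration follows - the key names here are from memory and may differ between versions, so copy them from the file your own run generates rather than from this sketch:

```
[environment:default]
OVEHOSTED_NETWORK/fqdn=str:emanager.<fqdn>
OVEHOSTED_VM/cloudinitVMStaticCIDR=str:192.168.1.251/24
OVEHOSTED_STORAGE/storageDomainConnection=str:192.168.1.250:/ovirtx
```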
________________________________
From: Marco Fiorletta
Sent: Monday, March 03, 2025 7:00 PM
To: Geoff O'Callaghan; users(a)ovirt.org
Subject: Re: [ovirt-users] Re: Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c...
Hey there, I've been running into a really frustrating issue while trying to set up a self-hosted engine. I've tried multiple times, with different distributions like CentOS Stream 9, AlmaLinux 9.5, and Rocky Linux 9, but I keep hitting the same wall.
Essentially, during the engine configuration, I get an error saying that https://mirroros.centos.org is unreachable.
I've tried all the repository workarounds I've seen online, changing the pointers and everything, but nothing seems to help.
The weird thing is, if I install the engine on a dedicated server, it works fine. It's only when I try to deploy a self-hosted engine that I get this mirror error.
Then, just to see what would happen, I gave Oracle Linux a shot. And guess what?
The self-hosted engine configured perfectly, without even trying to touch those CentOS repositories.
So, that's definitely pointing to some kind of distribution-specific issue.
On top of that, I've also run into another problem with the 4.5 distribution related to VLANs.
I'm working in a data center with a bunch of VLAN-certified networks.
I wanted to set up oVirt 4.5, using a self-hosted engine, and assign one of those VLAN networks to ovirtmgmt.
But the installation just flat-out fails because it only seems to accept physical network devices.
The only way I could get it to work was to put a switch in place to mask the VLAN, which is a bit of a clunky workaround.
My older 4.4-based virtualizer didn't have any of these problems, so I'm wondering if something's changed significantly in 4.5. Any ideas?
Best regards
On Sun, 2025-03-02 at 20:56 +0000, Geoff O'Callaghan wrote:
Hi
I'm not sure where things are going wrong for you, so I built a new Centos 9 Stream system and was able to install a self-hosted appliance on it - ie. It got turned into a node and the engine appliance was deployed OK. I just wanted to make sure the current process worked.
Can you provide more information on what/how you're doing the installation? It might be that the documentation is wrong, but I will fix it if you can tell me where you're going wrong.
Thanks
Geoff
________________________________
From: fiorletta--- via Users
Sent: Tuesday, February 25, 2025 3:30 AM
To: users(a)ovirt.org
Subject: [ovirt-users] Re: Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c...
I have the same problem.
CentOS Stream 9, oVirt 4.5; the error is present only with the self-hosted engine.
3 weeks, 4 days
Re: [External] : Really weird ! Image IO / browser upload only working on 1 cluster (out of many) - anyone got any advice ? (network access, certs seem fine.)
by morgan cox
Hi,
Sorry for the delay in responding.
I still have the issue.
It is not cert-related, as I can use Image IO for one cluster/DC (the first one), and that wouldn't work if the cert were the issue.
i.e. we have:
cluster1: app1+app2 - app1 sd (this works)
cluster2: db1+db2 - db1 sd
cluster3: dmz1+dmz2 - dmz1 sd
and so on (we have 8 clusters, each in its own DC); each cluster has access to its own section of the SAN (storage domain).
The issue is that if I go to Storage - Domains, enter any storage domain and choose 'Upload', the host drop-down on the upload page shows the same hosts ('app1' and 'app2') for ALL clusters/storage domains.
So I can upload only to the storage domain app1+app2 use, and only via the hosts app1 and app2 (cluster1). Every other storage domain also lists only app1 and app2 in the upload section; the domains used by db1+db2 should list those hosts, but the drop-down shows only app1 + app2. Really odd.
'Test connection' always works (because I do have access to app1/app2), but app1+app2 only have access to their own SD.
Here are the logs from attempting to use imageio (through the browser).
From the engine - engine.log:
-----------------------------------------------------------------------------------------------------------------------------------------
2025-03-06 17:16:48,356Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2025-03-06 17:16:48,505Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Running command: TransferDiskImageCommand internal: false. Entities affected : ID: 5fda4199-06c3-4d36-a6f2-be9791972c3b Type: StorageAction group CREATE_DISK with role type USER
2025-03-06 17:16:48,505Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Creating ImageTransfer entity for command '0e3822a0-68e8-4c51-ac86-b6a433649183', proxyEnabled: true
2025-03-06 17:16:48,535Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Starting image transfer: ImageTransfer:{id='0e3822a0-68e8-4c51-ac86-b6a433649183', phase='Initializing', type='Upload', active='false', lastUpdated='Thu Mar 06 17:16:48 UTC 2025', message='null', vdsId='null', diskId='null', imagedTicketId='null', proxyUri='null', bytesSent='null', bytesTotal='113', clientInactivityTimeout='60', timeoutPolicy='legacy', imageFormat='RAW', transferClientType='Transfer via browser', shallow='false'}
2025-03-06 17:16:48,535Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Creating disk image
2025-03-06 17:16:48,579Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Running command: AddDiskCommand internal: true. Entities affected : ID: 5fda4199-06c3-4d36-a6f2-be9791972c3b Type: StorageAction group CREATE_DISK with role type USER
2025-03-06 17:16:48,646Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 5fda4199-06c3-4d36-a6f2-be9791972c3b Type: Storage
2025-03-06 17:16:48,733Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', ignoreFailoverLimit='false', storageDomainId='5fda4199-06c3-4d36-a6f2-be9791972c3b', imageGroupId='22372bb7-9d9b-4db2-927d-1820917fd6c5', imageSizeInBytes='113', volumeFormat='COW', newImageId='79424cad-773d-4f29-ab88-b14ce0cfaa72', imageType='Sparse', newImageDescription='{"DiskAlias":"test","DiskDescription":""}', imageInitialSizeInBytes='113', imageId='00000000-0000-0000-0000-000000000000', sourceImageGroupId='00000000-0000-0000-0000-000000000000', shouldAddBitmaps='false', legal='true', sequenceNumber='1', bitmap='null'}), log id: 590d036d
2025-03-06 17:16:48,841Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, CreateVolumeVDSCommand, return: 79424cad-773d-4f29-ab88-b14ce0cfaa72, log id: 590d036d
2025-03-06 17:16:48,841Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '3e07f385-e2b9-46ee-822f-a466e2215347'
2025-03-06 17:16:48,841Z INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] CommandMultiAsyncTasks::attachTask: Attaching task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27' to command '3e07f385-e2b9-46ee-822f-a466e2215347'.
2025-03-06 17:16:49,139Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] Adding task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2025-03-06 17:16:49,172Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] BaseAsyncTask::startPollingTask: Starting to poll task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27'.
2025-03-06 17:16:49,256Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'test' was initiated by the system.
2025-03-06 17:16:49,371Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-385) [70460fc8-7d40-4153-a387-f18a06641f6c] EVENT_ID: TRANSFER_IMAGE_INITIATED(1,031), Image Upload with disk test was initiated by mcox@internal-authz.
2025-03-06 17:16:50,682Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [70460fc8-7d40-4153-a387-f18a06641f6c] Waiting for disk to be added for image transfer '0e3822a0-68e8-4c51-ac86-b6a433649183'
2025-03-06 17:16:50,682Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [70460fc8-7d40-4153-a387-f18a06641f6c] Command 'AddDisk' (id: '53dfac83-aa6f-4b6e-8941-af27a4dc0e0b') waiting on child command id: '3e07f385-e2b9-46ee-822f-a466e2215347' type:'AddImageFromScratch' to complete
2025-03-06 17:16:52,584Z INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-4) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2025-03-06 17:16:52,591Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-4) [] SPMAsyncTask::PollTask: Polling task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2025-03-06 17:16:52,591Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-4) [] BaseAsyncTask::onTaskEndSuccess: Task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2025-03-06 17:16:52,591Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-4) [] CommandAsyncTask::endActionIfNecessary: All tasks of command '3e07f385-e2b9-46ee-822f-a466e2215347' has ended -> executing 'endAction'
2025-03-06 17:16:52,591Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-4) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: '3e07f385-e2b9-46ee-822f-a466e2215347'): calling endAction '.
2025-03-06 17:16:52,592Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-65196) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2025-03-06 17:16:52,648Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] Command [id=3e07f385-e2b9-46ee-822f-a466e2215347]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2025-03-06 17:16:52,648Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2025-03-06 17:16:52,648Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2025-03-06 17:16:52,648Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27'
2025-03-06 17:16:52,649Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', ignoreFailoverLimit='false', taskId='10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27'}), log id: 3adf882e
2025-03-06 17:16:52,649Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] START, HSMClearTaskVDSCommand(HostName = ng2-olvm-stbdb2, HSMTaskGuidBaseVDSCommandParameters:{hostId='52ad3b6b-8bd8-4cb4-a6dc-432ca3c15f26', taskId='10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27'}), log id: 6432d6de
2025-03-06 17:16:52,666Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, HSMClearTaskVDSCommand, return: , log id: 6432d6de
2025-03-06 17:16:52,666Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, SPMClearTaskVDSCommand, return: , log id: 3adf882e
2025-03-06 17:16:52,673Z INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] BaseAsyncTask::removeTaskFromDB: Removed task '10c19ce5-cdac-4a0c-b1d2-75c34c5e2c27' from DataBase
2025-03-06 17:16:52,673Z INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-65196) [70460fc8-7d40-4153-a387-f18a06641f6c] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '3e07f385-e2b9-46ee-822f-a466e2215347'
2025-03-06 17:16:53,553Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-385) [268debf2-769a-4562-be29-65fa4b11e4c2] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 5fda4199-06c3-4d36-a6f2-be9791972c3b Type: SystemAction group CREATE_DISK with role type USER
2025-03-06 17:16:54,683Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84) [70460fc8-7d40-4153-a387-f18a06641f6c] Waiting for disk to be added for image transfer '0e3822a0-68e8-4c51-ac86-b6a433649183'
2025-03-06 17:16:54,684Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84) [70460fc8-7d40-4153-a387-f18a06641f6c] Getting volume info for image '22372bb7-9d9b-4db2-927d-1820917fd6c5/79424cad-773d-4f29-ab88-b14ce0cfaa72'
2025-03-06 17:16:54,686Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84) [70460fc8-7d40-4153-a387-f18a06641f6c] START, GetVolumeInfoVDSCommand(HostName = ng2-olvm-stbdb2, GetVolumeInfoVDSCommandParameters:{hostId='52ad3b6b-8bd8-4cb4-a6dc-432ca3c15f26', storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', storageDomainId='5fda4199-06c3-4d36-a6f2-be9791972c3b', imageGroupId='22372bb7-9d9b-4db2-927d-1820917fd6c5', imageId='79424cad-773d-4f29-ab88-b14ce0cfaa72'}), log id: 4c62db44
2025-03-06 17:16:54,861Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@7f011b06, log id: 4c62db44
2025-03-06 17:16:54,861Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84) [70460fc8-7d40-4153-a387-f18a06641f6c] Updating size from '113' to '4096'
2025-03-06 17:16:54,874Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84) [70460fc8-7d40-4153-a387-f18a06641f6c] Command 'AddDisk' id: '53dfac83-aa6f-4b6e-8941-af27a4dc0e0b' child commands '[3e07f385-e2b9-46ee-822f-a466e2215347]' executions were completed, status 'SUCCEEDED'
2025-03-06 17:16:55,883Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2025-03-06 17:16:55,964Z INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2025-03-06 17:16:55,970Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', ignoreFailoverLimit='false', storageDomainId='5fda4199-06c3-4d36-a6f2-be9791972c3b', imageGroupId='22372bb7-9d9b-4db2-927d-1820917fd6c5', imageId='79424cad-773d-4f29-ab88-b14ce0cfaa72'}), log id: 4125fc8d
2025-03-06 17:16:55,970Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] START, GetVolumeInfoVDSCommand(HostName = ng2-olvm-stbdb2, GetVolumeInfoVDSCommandParameters:{hostId='52ad3b6b-8bd8-4cb4-a6dc-432ca3c15f26', storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', storageDomainId='5fda4199-06c3-4d36-a6f2-be9791972c3b', imageGroupId='22372bb7-9d9b-4db2-927d-1820917fd6c5', imageId='79424cad-773d-4f29-ab88-b14ce0cfaa72'}), log id: 6a92bb72
2025-03-06 17:16:56,015Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@7f011b06, log id: 6a92bb72
2025-03-06 17:16:56,015Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@7f011b06, log id: 4125fc8d
2025-03-06 17:16:56,057Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] START, PrepareImageVDSCommand(HostName = ng2-olvm-stbdb1, PrepareImageVDSCommandParameters:{hostId='681f0da5-9d7c-4a62-b7b8-df833e43ff29'}), log id: 1f67388c
2025-03-06 17:16:56,889Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 1f67388c
2025-03-06 17:16:56,891Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] START, GetQemuImageInfoVDSCommand(HostName = ng2-olvm-stbdb1, GetVolumeInfoVDSCommandParameters:{hostId='681f0da5-9d7c-4a62-b7b8-df833e43ff29', storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', storageDomainId='5fda4199-06c3-4d36-a6f2-be9791972c3b', imageGroupId='22372bb7-9d9b-4db2-927d-1820917fd6c5', imageId='79424cad-773d-4f29-ab88-b14ce0cfaa72'}), log id: 1a568567
2025-03-06 17:16:56,935Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, GetQemuImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.QemuImageInfo@6ab4f0e9, log id: 1a568567
2025-03-06 17:16:56,936Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] START, TeardownImageVDSCommand(HostName = ng2-olvm-stbdb1, ImageActionsVDSCommandParameters:{hostId='681f0da5-9d7c-4a62-b7b8-df833e43ff29'}), log id: 15dea1c0
2025-03-06 17:16:57,253Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 15dea1c0
2025-03-06 17:16:57,268Z WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] VM is null - no unlocking
2025-03-06 17:16:57,301Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test' was successfully added.
2025-03-06 17:16:57,552Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-385) [c31320c9-f078-4aa8-b17f-802907bf5124] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: 5fda4199-06c3-4d36-a6f2-be9791972c3b Type: SystemAction group CREATE_DISK with role type USER
2025-03-06 17:16:58,376Z INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [70460fc8-7d40-4153-a387-f18a06641f6c] Successfully added Upload disk 'test' (disk id: '22372bb7-9d9b-4db2-927d-1820917fd6c5', image id: '79424cad-773d-4f29-ab88-b14ce0cfaa72') for image transfer '0e3822a0-68e8-4c51-ac86-b6a433649183'
2025-03-06 17:16:58,398Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [70460fc8-7d40-4153-a387-f18a06641f6c] START, PrepareImageVDSCommand(HostName = ng2-olvm-app1, PrepareImageVDSCommandParameters:{hostId='a2500912-390f-4216-88b0-34f4e62c4dff'}), log id: 570232b4
2025-03-06 17:16:58,402Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [70460fc8-7d40-4153-a387-f18a06641f6c] Failed in 'PrepareImageVDS' method, for vds: 'ng2-olvm-app1'; host: '10.168.72.201': null
2025-03-06 17:16:58,402Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [70460fc8-7d40-4153-a387-f18a06641f6c] Command 'PrepareImageVDSCommand(HostName = ng2-olvm-app1, PrepareImageVDSCommandParameters:{hostId='a2500912-390f-4216-88b0-34f4e62c4dff'})' execution failed: null
2025-03-06 17:16:58,402Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [70460fc8-7d40-4153-a387-f18a06641f6c] FINISH, PrepareImageVDSCommand, return: , log id: 570232b4
2025-03-06 17:16:58,402Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-34) [70460fc8-7d40-4153-a387-f18a06641f6c] Failed to prepare image for image transfer '0e3822a0-68e8-4c51-ac86-b6a433649183': {}: org.ovirt.engine.core.common.errors.EngineException: EngineException: java.lang.NullPointerException (Failed with error ENGINE and code 5001)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2121)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.prepareImage(TransferDiskImageCommand.java:187)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.startImageTransferSession(TransferDiskImageCommand.java:1063)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleImageIsReadyForTransfer(TransferDiskImageCommand.java:680)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.handleInitializing(TransferDiskImageCommand.java:653)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.executeStateHandler(TransferDiskImageCommand.java:586)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand.proceedCommandExecution(TransferDiskImageCommand.java:573)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.storage.disk.image.TransferImageCommandCallback.doPolling(TransferImageCommandCallback.java:21)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:360)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:511)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
Caused by: java.lang.NullPointerException
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageReturn.<init>(PrepareImageReturn.java:15)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.prepareImage(JsonRpcVdsServer.java:1947)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand.executeImageActionVdsBrokerCommand(PrepareImageVDSCommand.java:18)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand.executeImageActionVdsBrokerCommand(PrepareImageVDSCommand.java:5)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:14)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:410)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source)
at jdk.internal.reflect.GeneratedMethodAccessor70.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:51)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:78)
at org.ovirt.engine.core.common//org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12)
at jdk.internal.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79)
at org.jboss.weld.core@3.1.7.SP1//org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68)
at deployment.engine.ear//org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source)
... 19 more
2025-03-06 17:16:59,427Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-90) [70460fc8-7d40-4153-a387-f18a06641f6c] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' with failure.
2025-03-06 17:16:59,427Z ERROR [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-90) [70460fc8-7d40-4153-a387-f18a06641f6c] Failed to transfer disk '00000000-0000-0000-0000-000000000000' for image transfer '0e3822a0-68e8-4c51-ac86-b6a433649183'
2025-03-06 17:16:59,459Z INFO [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-90) [45a61a6a] Running command: RemoveDiskCommand internal: true. Entities affected : ID: 22372bb7-9d9b-4db2-927d-1820917fd6c5 Type: DiskAction group DELETE_DISK with role type USER
2025-03-06 17:16:59,505Z INFO [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-90) [45a61a6a] Running command: RemoveImageCommand internal: true. Entities affected : ID: 5fda4199-06c3-4d36-a6f2-be9791972c3b Type: Storage
2025-03-06 17:16:59,586Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-90) [45a61a6a] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='5c316ba3-d868-4314-917c-da0cc53b577e', ignoreFailoverLimit='false', storageDomainId='5fda4199-06c3-4d36-a6f2-be9791972c3b', imageGroupId='22372bb7-9d9b-4db2-927d-1820917fd6c5', postZeros='false', discard='false', forceDelete='false'}), log id: 5e1bb690
2025-03-06 17:16:59,958Z INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-T
-----------------------------------------------------------------------------------------------------------------------------------------
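Before digging further, it's worth isolating just the failing flow from the noise. A quick sketch, using the correlation ID from the log above (shown here against a two-line sample; on the engine VM the real file is /var/log/ovirt-engine/engine.log):

```shell
# Isolate one flow in engine.log by its correlation ID, keeping only the
# ERROR/WARN lines. A two-line sample stands in for the real log here.
cat > /tmp/engine_sample.log <<'EOF'
2025-03-06 17:16:58,398Z INFO [PrepareImageVDSCommand] [70460fc8-7d40-4153-a387-f18a06641f6c] START
2025-03-06 17:16:58,402Z ERROR [PrepareImageVDSCommand] [70460fc8-7d40-4153-a387-f18a06641f6c] execution failed: null
EOF
grep '70460fc8-7d40-4153-a387-f18a06641f6c' /tmp/engine_sample.log | grep -E ' (ERROR|WARN) '
```

Against the real file this prints just the PrepareImageVDS failures and the transfer teardown, which makes the NullPointerException chain much easier to follow.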
And from image io log
2025-03-06 17:14:03,483 INFO (Thread-2) [http] OPEN connection=2 client=::ffff:10.88.1.105
2025-03-06 17:15:03,511 WARNING (Thread-2) [http] Timeout reading or writing to socket: The read operation timed out
2025-03-06 17:15:03,511 INFO (Thread-2) [http] CLOSE connection=2 client=::ffff:10.88.1.105 [connection 1 ops, 60.028099 s] [dispatch 1 ops, 0.000126 s]
2025-03-06 17:16:48,303 INFO (Thread-3) [http] OPEN connection=3 client=::ffff:10.88.1.105
2025-03-06 17:17:48,306 WARNING (Thread-3) [http] Timeout reading or writing to socket: The read operation timed out
2025-03-06 17:17:48,307 INFO (Thread-3) [http] CLOSE connection=3 client=::ffff:10.88.1.105 [connection 1 ops, 60.003327 s] [dispatch 1 ops, 0.000124 s]
-----------------------------------------------------------------------------------------------------------------
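Both imageio connections above were held open for almost exactly 60 seconds with a single op before being closed, which looks like a client that connected but never sent data before the daemon's read timeout fired. A small sketch to pull the duration out of a CLOSE line (the line format is copied from the log above; whether 60 s is the configured default on your host is an assumption worth checking):

```shell
# Extract op count and connection lifetime from an imageio CLOSE log line.
# The sample line is taken verbatim from the log excerpt above.
line='2025-03-06 17:15:03,511 INFO (Thread-2) [http] CLOSE connection=2 client=::ffff:10.88.1.105 [connection 1 ops, 60.028099 s] [dispatch 1 ops, 0.000126 s]'
echo "$line" | sed -n 's/.*\[connection \([0-9]*\) ops, \([0-9.]*\) s\].*/ops=\1 seconds=\2/p'
```

A lifetime of ~60 s against a dispatch time of microseconds suggests the daemon was waiting on the client the whole time, not doing work.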
Has anyone got any advice? I'm pretty sure this used to work on all storage domains.
3 weeks, 5 days
Re: Couldn't resolve host name for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-ceph-pacific
by Marco Fiorletta
Hey there, I've been running into a really frustrating issue while
trying to set up a self-hosted engine. I've tried multiple times, with
different distributions like CentOS Stream 9, AlmaLinux 9.5, and Rocky
Linux 9, but I keep hitting the same wall.
Essentially, during the engine configuration, I get an error saying
that http://mirrorlist.centos.org is unreachable.
I've tried all the repository workarounds I've seen online, changing
the pointers and everything, but nothing seems to help.
The weird thing is, if I install the engine on a dedicated server, it
works fine. It's only when I try to deploy a self-hosted engine that I
get this mirror error.
Then, just to see what would happen, I gave Oracle Linux a shot. And
guess what?
The self-hosted engine configured perfectly, without even trying to
touch those CentOS repositories.
So, that's definitely pointing to some kind of distribution-specific
issue.
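For what it's worth, mirrorlist.centos.org was retired along with CentOS 8, so any repo file still pointing at it will fail to resolve. One possible workaround is to disable the stale Ceph repo before deploying; the repo ID below is a guess, so check `dnf repolist --all` for the exact name on your host:

```shell
# Find the repo that still points at mirrorlist.centos.org
dnf repolist --all

# Disable it; "centos-ceph-pacific" is a guess at the repo ID -- substitute
# whatever dnf repolist actually shows on your system
sudo dnf config-manager --set-disabled centos-ceph-pacific
```

If the self-hosted deploy enables that repo itself inside the appliance, this may only paper over the problem, but it's a cheap thing to try first.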
On top of that, I've also run into another problem with the 4.5
distribution related to VLANs.
I'm working in a data center with a number of VLAN-tagged networks.
I wanted to set up oVirt 4.5, using a self-hosted engine, and assign
one of those VLAN networks to ovirtmgmt.
But the installation just flat-out fails because it only seems to
accept physical network devices.
The only way I could get it to work was to put a switch in front that
strips the VLAN tag, which is a bit of a clunky workaround.
My older 4.4-based virtualizer didn't have any of these problems, so
I'm wondering if something's changed significantly in 4.5. Any ideas?
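On the VLAN point, one approach worth trying is to create the tagged interface in NetworkManager first, so the deploy sees an ordinary device to attach ovirtmgmt to. A rough sketch (the NIC name and VLAN ID are placeholders; I haven't verified this against 4.5's device filtering):

```shell
# Create a VLAN sub-interface on top of the physical NIC (eno1 and VLAN 100
# are placeholders for your environment) and bring it up before deploying
nmcli con add type vlan con-name vlan100 ifname eno1.100 dev eno1 id 100
nmcli con up vlan100

# Then select eno1.100 when hosted-engine --deploy asks for the management
# network interface
```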
Best regards
On Sun, 2025-03-02 at 20:56 +0000, Geoff O'Callaghan wrote:
>
> Hi
>
>
> I'm not sure where things are going wrong for you, so I built a new
> Centos 9 Stream system and was able to install a self-hosted
> appliance on it - ie. It got turned into a node and the engine
> appliance was deployed OK. I just wanted to make sure the current
> process worked.
>
>
> Can you provide more information on what/how you're doing the
> installation? It might be that the documentation is wrong, but I
> will fix it if you can tell me where you're going wrong.
>
>
>
> Thanks
> Geoff
>
>
> From: fiorletta--- via Users
> Sent: Tuesday, February 25, 2025 3:30 AM
> To: users@ovirt.org
> Subject: [ovirt-users] Re: Couldn't resolve host name for
> http://mirrorlist.centos.org/?release=8-
> stream&arch=x86_64&repo=storage-ceph-pacific
>
>
> I have the same problem.
> Centos Stream 9, oVirt 4.5: the error is present only on the self-hosted
> engine.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5TXJXB5FCV3...
--
4 weeks