Live ("any") disk migration between storages KILLS a virtual machine with Windows (UEFI) !!!
by Patrick Lomakin
The weekend went horribly. I created a second storage domain in order to move all the virtual machines to a higher-performance RAID array (from RAID 6 to RAID 10). My mistake was trying to move two Windows domain controllers from one storage domain to another. The problem is that when I did a "live" move of a virtual machine disk between storage domains, my Windows Server 2019 virtual machines worked fine until the next reboot. After a reboot, the machine simply won't start and throws an error - Status: 0xc0000428 - The digital signature for this file couldn't be verified. Moving the disk back to the "old" domain doesn't help.
None of the recovery procedures worked (re-creating the boot area, turning on the kernel flag with bcdedit.exe /set nointegritychecks on). I am ready to help and am interested in the development of oVirt. I can also provide more information to the developers if needed.
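For anyone hitting the same 0xc0000428 error: the recovery attempts mentioned above are typically run from the Windows Recovery Environment command prompt. A sketch of the standard commands (the author's exact steps are not given, so treat this as illustrative rather than a confirmed fix):

rem Rebuild the boot configuration data store:
bootrec /rebuildbcd
rem Disable driver signature enforcement on the default boot entry:
bcdedit /set {default} nointegritychecks on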
3 years, 7 months
Re: Parent checkpoint ID does not match the actual leaf checkpoint
by Nir Soffer
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński <l.kolacinski(a)storware.eu> wrote:
> Hello,
> Thanks to previous answers, I was able to make backups. Unfortunately, we
> had some infrastructure issues, and after the host rebooted new problems
> appeared. I am not able to do any backup using the commands that worked
> yesterday. I looked through the logs and there is something like this:
>
> 2020-07-17 15:06:30,644+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
> [944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM backup
> operation 'StartVmBackup': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to StartVmBackupVDS, error =
> Checkpoint Error: {'parent_checkpoint_id': None, 'leaf_checkpoint_id':
> 'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
> '116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent checkpoint ID
> does not match the actual leaf checkpoint*'}, code = 1610 (Failed with
> error unexpected and code 16)
>
>
It looks like the engine sent:
parent_checkpoint_id: None
This issue was fixed in the engine a few weeks ago.
Which engine and vdsm versions are you testing?
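The versions can be checked with standard rpm queries; nothing oVirt-specific is assumed here:

rpm -q ovirt-engine   # on the engine machine
rpm -q vdsm           # on the host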
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
>
>
> And the last error is:
>
> 2020-07-17 15:13:45,835+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
> [f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM backup
> operation 'GetVmBackupInfo': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to GetVmBackupInfoVDS, error
> = No such backup Error: {'vm_id': '116aa6eb-31a1-43db-9b1e-ad6e32fb9260',
> 'backup_id': 'bf1c26f7-c3e5-437c-bb5a-255b8c1b3b73', 'reason': '*VM
> backup not exists: Domain backup job id not found: no domain backup job
> present'*}, code = 1601 (Failed with error unexpected and code 16)
>
>
This is likely a result of the first error. If starting the backup failed, the backup entity is deleted.
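For reference, a fresh full backup is started without any parent checkpoint; a minimal sketch against the REST API, using the VM id from the log above and a placeholder disk id:

POST /ovirt-engine/api/vms/116aa6eb-31a1-43db-9b1e-ad6e32fb9260/backups

<backup>
    <disks>
        <disk id="DISK-UUID"/>
    </disks>
</backup>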
> (these errors are from full backup)
>
> Like I said, this is very strange because everything was working correctly.
>
>
> Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacinski(a)storware.eu
3 years, 7 months
Re: CPU Compatibility Problem after Upgrading Centos 8 Stream Host
by k.gunasekhar@non.keysight.com
I also ran into the same problem today. How did you roll back with yum? I see many yum updates in the yum history.
Here is what the error says.
The host CPU does not match the Cluster CPU Type and is running in a degraded mode. It is missing the following CPU flags: model_IvyBridge, spec_ctrl. Please update the host CPU microcode or change the Cluster CPU Type.
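For the rollback question: yum keeps a transaction history, so the usual approach is something like the following (the transaction id is a placeholder):

yum history list          # find the id of the offending update transaction
yum history undo <ID>     # roll that transaction back

Whether the running kernel exposes the missing flags can be checked with:

grep -c spec_ctrl /proc/cpuinfo   # non-zero means the flag is visible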
3 years, 7 months
Adding a 4th compute host to a HCI environment
by David White
Hello,
Is there documentation anywhere for adding a 4th compute-only host to an existing HCI cluster?
I did the following earlier today:
- Installed RHEL 8.4 onto the new (4th) host
- Set up an NFS share on the host
- Attached the NFS share to oVirt as a new storage domain
- I then turned the NFS share into a "backup domain"
- Once that was done, I installed ovirt-release44.rpm, followed by some ovirt packages:
- yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- yum install cockpit-ovirt-dashboard vdsm-gluster ovirt-host
- I then logged into the oVirt Manager Web UI, navigated to Hosts, and clicked on New
- I successfully added the new host to the cluster
Once this was all done, load average on all 3 of my original HCI nodes started to go through the roof.
I've seen servers' load average go through the roof before when NFS shares are down, so I suspect that something happened to the NFS share when I went to add the host as a compute node.
In an effort to get things stabilized, I wound up yanking the new host out of oVirt, as well as the NFS share, at which point things did stabilize again.
My goal here:
- Set up the host as a compute-only host (not participating in the Gluster cluster)
- Set up the NFS share for backup purposes
Writing this email and reviewing my bash history, I think one of the places I went wrong was installing vdsm-gluster. I don't think I need that, do I?
Is what I'm trying to do possible?
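For what it's worth, oVirt expects NFS exports to be owned by vdsm:kvm (uid/gid 36:36). A minimal sketch of an export intended for a backup domain (the path and export options are illustrative, not taken from the post above):

mkdir -p /exports/backup
chown 36:36 /exports/backup
echo '/exports/backup *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
exportfs -r   # re-export without restarting the NFS server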
Sent with ProtonMail Secure Email.
3 years, 7 months
Guest CPU compatibility issues on upgraded host - RHEL 8
by David White
I have oVirt 4.4.6 running on my Engine VM.
I also have oVirt 4.4.6 on one of my hosts.
The other two hosts are still on oVirt 4.4.5.
My issue seems slightly different from the issue(s) other people have described.
I'm on RHEL 8 hosts.
My Engine VM is running fine on the upgraded host, as is one of my Ubuntu guests.
However, it seems like I'm not able to migrate *ANY* other guests over to the new host. The oVirt UI just indicates that, for whatever reason, the upgraded host isn't available/compatible for a VM to be migrated to.
I'm not sure how or why this one Ubuntu guest VM was able to be migrated, but it looks like it's working just fine.
I did try:
yum downgrade edk2-ovmf
based on other people's comments, but I'm not sure how long to wait, or what else to try, to get my upgraded host operational again.
Sent with ProtonMail Secure Email.
3 years, 7 months
issue creating disk with rest api using json format
by Pascal D
I am unable to create a disk using JSON; however, the same query in XML works great. In JSON I get the following message back:
{
    "detail": "For correct usage, see: https://ov1.butterflyit.com/ovirt-engine/apidoc#services/disks/methods/add",
    "reason": "Request syntactically incorrect."
}
Both use POST /ovirt-engine/api/disks, and the Content-Type is either application/json or application/xml.
Here is the request in JSON:
{
    "id": "866770c3-acf9-4f67-b72c-05ed241908e4",
    "name": "mydisk",
    "description": "test disk",
    "bootable": false,
    "shareable": true,
    "provisioned_size": 10240000000,
    "interface": "virtio",
    "format": "cow",
    "storage_domains": {
        "storage_domain": {
            "name": "VMS"
        }
    }
}
And here it is in XML:
<disk id="866770c3-acf9-4f67-b72c-05ed241908e4">
    <bootable>false</bootable>
    <name>mydisk</name>
    <description>test Drive</description>
    <interface>virtio</interface>
    <provisioned_size>10240000000</provisioned_size>
    <format>cow</format>
    <storage_domains>
        <storage_domain>
            <name>VMS</name>
        </storage_domain>
    </storage_domains>
</disk>
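For comparison, a minimal curl sketch of how the failing JSON request would be sent (credentials are placeholders; the hostname is taken from the error message above):

curl -k -X POST \
     -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/json' \
     -H 'Accept: application/json' \
     -d @disk.json \
     https://ov1.butterflyit.com/ovirt-engine/api/disks

If the same payload succeeds when sent as XML with Content-Type: application/xml, the problem is narrowed down to how the engine parses the JSON body.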
3 years, 7 months
Ovirt 4.4.6.8 High Performance VM quirkiness
by Don Dupuis
I created a high performance VM and everything is fine. I stop the virtual machine, make edits to it, and again it runs correctly. I then shut down the VM, click the edit tab to look at what I changed, and it is now back to the defaults and my edits are gone. Is this now the intended result? I was using the same settings in 4.3 and it never exhibited this behavior. What I have observed so far is that I disabled Headless, disabled CPU pinning, and changed the NUMA config. When I shut down the VM, they revert to the defaults: Headless enabled, CPU pinning set, and the NUMA config changed back. This is not what I expect to happen. Has anyone else noticed this behavior?
Thanks
Don
3 years, 7 months
Centos 8 to Centos Stream
by nikkognt@gmail.com
Hi,
in preparation for upgrading my standalone engine from CentOS 8 to CentOS Stream, I would like to know whether there is a procedure to follow for this operation, or whether I should simply follow the instructions on the official CentOS site. [1]
Best regards
Nikkognt
[1] https://www.centos.org/centos-stream/
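For reference, the conversion documented on the CentOS site at the time was roughly the following (a sketch; check the oVirt release notes before running this on an engine machine):

dnf swap centos-linux-repos centos-stream-repos   # switch the repo definitions
dnf distro-sync                                   # sync installed packages to Stream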
3 years, 7 months
CPU Compatibility Problem after Upgrading Centos 8 Stream Host
by Nur Imam Febrianto
Hi,
This morning I had a problem where my host (using CentOS 8 Stream, not oVirt Node) suddenly showed a CPU compatibility error. It says my CPU doesn't have the Haswell-noTSX and spec_ctrl flags. After downgrading the kernel from 4.18.0-305 back to 4.18.0-301.1 and rolling back the yum update, it is working again. Is anybody else having the same issue?
I'm using a Xeon E5-2695v4 with cluster compatibility Secure Intel Haswell, on oVirt 4.4.6 with a CentOS Stream host.
Thanks in advance.
Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10
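A sketch of the kind of rollback described above (grubby ships with CentOS 8; the exact kernel file name is an assumption based on the versions in the post):

grubby --info=ALL                                              # list installed kernels and boot entries
grubby --set-default /boot/vmlinuz-4.18.0-301.1.el8.x86_64     # boot the older kernel by default
yum history undo <transaction-id>                              # roll back the offending update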
3 years, 7 months