Snapshots - Preview button greyed out
by Leo David
Hi Everyone,
I am encountering an issue on my oVirt 4.2 self-hosted engine installation.
Suddenly the "Preview" button in the Snapshots menu became inactive; I just
cannot preview or revert to any previous snapshots.
This feature worked fine over the last month, but now it seems to be just
inactive.
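In case it helps with debugging, this is how I checked the snapshot states
from the shell (a rough sketch; the engine address, VM id and password are
placeholders, and if I recall correctly Preview is also unavailable while the
VM is running or while any snapshot is still locked):

# list each snapshot's description and status for the affected VM;
# a snapshot stuck in "locked" or "in_preview" would explain a greyed-out button
curl -s -k -u admin@internal:PASSWORD \
  -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots' \
  | grep -oE '<description>[^<]*</description>|<snapshot_status>[^<]*</snapshot_status>'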
Any thoughts on this?
Thank you very much !
Leo
--
Best regards, Leo David
6 years, 3 months
Patches for L1TF?
by Eduardo Mayoral
Hi,
For mitigation of the recently announced L1TF vulnerability, is it
sufficient to update the compute nodes to the updated kernel? Are any
other updates to KVM / vdsm / ovirt-engine required?
Also, regarding the concurrent variant: should we disable hyperthreading
altogether? Is there any remediation (even if expensive from a performance
point of view) that can be enabled?
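For what it's worth, on an updated kernel I would expect the mitigation state
to be visible in sysfs, something along these lines (the exact paths are an
assumption based on the patched EL7 kernels):

# reported L1TF mitigation status
cat /sys/devices/system/cpu/vulnerabilities/l1tf
# L1D flush behaviour of KVM, if the kvm_intel module is loaded
cat /sys/module/kvm_intel/parameters/vmentry_l1d_flush
# whether hyperthreading (SMT) is currently active, and the runtime switch
cat /sys/devices/system/cpu/smt/active
echo off > /sys/devices/system/cpu/smt/control   # or boot with nosmt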
Thanks for your help!
--
Eduardo Mayoral.
6 years, 3 months
storage domain sync problem and fail to run vm after change LUN mapping
by dvotrak
Environment:
oVirt version: 4.2.5
CentOS 7.5
During a planned maintenance window, the multipath configuration was changed according to the storage vendor's guidelines.
The new configuration changes user_friendly_names from no to yes, so the multipath -ll output changed from "360060160a6213400fa2d31acbbfce511 dm-8 DGC" to "mpathh (360060160a6213400fa2d31acbbfce511) dm-8 DGC".
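For reference, the vendor change boils down to this stanza in /etc/multipath.conf (a sketch; if I remember right, the file also needs a "# VDSM PRIVATE" line at the top so VDSM does not overwrite it):

defaults {
    user_friendly_names yes
}

The WWID behind each friendly name can still be cross-checked on the hypervisor with something like:

multipathd show maps format "%n %w"
cat /etc/multipath/bindings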
After this change, the following errors are displayed:
1) when attempting to start a VM with direct LUN: Failed to run VM <VMNAME> on Host <HYPERVISOR> VM <VMNAME> is down. Exit message: Bad volume specification <LUN_ID>
Inspecting the VM disk, the LUN ID remains the previous one (360060160a6213400fa2d31acbbfce511) instead of the new one (mpathh).
Removing/detaching and re-adding/re-attaching the disk to the VM doesn't help.
2) randomly: Storage domains with IDs [<ID>] could not be synchronized. To synchronize them, please move them to maintenance and then activate.
Moving it to maintenance and then activating it doesn't help.
Inspecting the storage domain LUNs, almost all changed their LUN ID to the new style (mpathXX), but this one remains in the old style (maybe because lun_id is a primary key and this one was already taken?).
Any idea on how to resolve these problems?
Thanks
Sent with [ProtonMail](https://protonmail.com) Secure Email.
6 years, 3 months
Re: Issue with NFS and Storage domain setup
by Douglas Duckworth
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshootin...
Try the script outlined in section "nfs-check-program."
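If you cannot run the script directly, a rough manual equivalent from the node
looks like this (hostname and export path taken from your mail; adjust as
needed):

mkdir -p /tmp/nfscheck
mount -t nfs ENGINE01:/var/lib/exports/data /tmp/nfscheck
sudo -u vdsm touch /tmp/nfscheck/write_test    # must succeed as uid 36 (vdsm:kvm)
sudo -u vdsm rm -f /tmp/nfscheck/write_test
umount /tmp/nfscheck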
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Mon, Aug 13, 2018 at 11:21 PM, Inquirer Guy <inquirerguy(a)gmail.com>
wrote:
> Adding to the issue below: my NODE01 can see the NFS share I created from
> ENGINE01, though I don't know how that got through, because when I add a
> storage domain from the oVirt engine I still get the error.
>
> On 14 August 2018 at 10:22, Inquirer Guy <inquirerguy(a)gmail.com> wrote:
>
>> Hi Ovirt,
>>
>> I successfully installed both ovirt-engine (ENGINE01) and ovirt node
>> (NODE01) on separate machines. I also created a FreeNAS server (NAS01) with
>> an NFS share and have already connected it to my NODE01. Although I haven't
>> set up a DNS server, I manually added the hostnames on every machine and I
>> can look up and ping all of them without a problem. I was also able to add
>> NODE01 to my ENGINE01.
>>
>> My issue arose when I tried creating a storage domain on my ENGINE01. I had
>> done the steps below before running engine-setup, while also following the
>> guide at:
>> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>>
>> #touch /etc/exports
>> #systemctl start rpcbind nfs-server
>> #systemctl enable rpcbind nfs-server
>> #engine-setup
>> #mkdir /var/lib/exports/data
>> #chown vdsm:kvm /var/lib/exports/data
>>
>> I added both export lines just in case; I have also tried each one alone,
>> but all attempts fail:
>> #vi /etc/exports
>> /var/lib/exports/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>> /var/lib/exports/data 0.0.0.0/0.0.0.0(rw)
>>
>> #systemctl restart rpc-statd nfs-server
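>>
>> To double-check that the export is actually being served, I also ran these
>> (showmount comes from nfs-utils, so this assumes it is installed):
>>
>> #exportfs -v
>> #showmount -e ENGINE01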
>>
>>
>> Once I started to add my storage domain I got the error below.
>>
>> Attached is the engine log for your reference.
>>
>> Hope you guys can help me with this; I'm really interested in this
>> great product. Thanks!
>>
>
>
6 years, 3 months
"ISCSI multipathing" tab isn't appear in datacenter settings
by navigator80@tut.by
Hello.
We have 6 servers in our cluster, which use 2 storage systems through iSCSI connections. Each storage system has 2 nodes. Each node has 2 IP addresses in two different VLANs. Each host has 2 networks in these VLANs, so the iSCSI traffic is separated from other types of traffic.
I want to turn on iSCSI multipathing between the hosts and the storage, but the "ISCSI multipathing" tab doesn't appear in the data center settings. When I add an additional storage domain, the "ISCSI multipathing" tab is displayed, but if I then detach this additional storage domain, the "ISCSI multipathing" tab disappears at once.
Why is this happening?
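For reference, I would have expected to be able to see (and create) the bonds through the REST API as well; something like this, assuming the 4.2 API really exposes them under the data center as "iscsibonds" (engine address, data center id and password are placeholders):

curl -s -k -u admin@internal:PASSWORD \
  -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/iscsibonds'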
6 years, 3 months
ovirt-node-ng update to 4.2.5 failed
by p.staniforth@leedsbeckett.ac.uk
Hello,
I tried to update an ovirt-node-ng node from 4.2.4 via the engine and it failed; I also tried using "yum update ovirt-node-ng-image-update".
What is the correct way to update a node, and how do I delete old layers?
Thanks,
Paul S.
The output from "nodectl info" is
layers:
ovirt-node-ng-4.2.4-0.20180626.0:
ovirt-node-ng-4.2.4-0.20180626.0+1
ovirt-node-ng-4.1.9-0.20180124.0:
ovirt-node-ng-4.1.9-0.20180124.0+1
ovirt-node-ng-4.2.5.1-0.20180731.0:
ovirt-node-ng-4.2.5.1-0.20180731.0+1
bootloader:
default: ovirt-node-ng-4.2.4-0.20180626.0+1
entries:
ovirt-node-ng-4.2.4-0.20180626.0+1:
index: 0
title: ovirt-node-ng-4.2.4-0.20180626.0
kernel: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-3.10.0-862.3.3.el7.x86_64
args: "ro crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.2.4-0.20180626.0+1 rd.lvm.lv=onn/swap rhgb quiet LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
initrd: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-3.10.0-862.3.3.el7.x86_64.img
root: /dev/onn/ovirt-node-ng-4.2.4-0.20180626.0+1
ovirt-node-ng-4.1.9-0.20180124.0+1:
index: 1
title: ovirt-node-ng-4.1.9-0.20180124.0
kernel: /boot/ovirt-node-ng-4.1.9-0.20180124.0+1/vmlinuz-3.10.0-693.11.6.el7.x86_64
args: "ro crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.1.9-0.20180124.0+1 rd.lvm.lv=onn/swap rhgb quiet LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.1.9-0.20180124.0+1"
initrd: /boot/ovirt-node-ng-4.1.9-0.20180124.0+1/initramfs-3.10.0-693.11.6.el7.x86_64.img
root: /dev/onn/ovirt-node-ng-4.1.9-0.20180124.0+1
current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1
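For the cleanup part, I was thinking of something along these lines, assuming the imgbase tool from imgbased supports removing an old base (please correct me if that is wrong):

# show bases and layers as imgbased sees them
imgbase layout
# drop the old 4.1.9 base and its layer once nothing boots from it any more
imgbase base --remove ovirt-node-ng-4.1.9-0.20180124.0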
6 years, 3 months
oVirt 4.2.6.1 - 4.2.6.2 upgrade fails
by Maton, Brett
Just tried to update my test cluster to 4.2.6.2:
[ INFO ] Stage: Misc configuration
[ INFO ] Running vacuum full on the engine schema
[ INFO ] Running vacuum full elapsed 0:00:04.523561
[ INFO ] Upgrading CA
[ INFO ] Backing up database localhost:ovirt_engine_history to
'/var/lib/ovirt-engine-dwh/backups/dwh-20180814071815.xVSlda.dump'.
[ INFO ] Creating/refreshing DWH database schema
[ INFO ] Configuring Image I/O Proxy
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Backing up database localhost:engine to
'/var/lib/ovirt-engine/backups/engine-20180814071825.af3Hq2.dump'.
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_02_1220_default_all_search_engine_string_fields_to_not_null.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
[ INFO ] Yum Performing yum transaction rollback
May or may not be relevant in this case, but /tmp and /var/tmp are mounted
noexec.
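To dig further I was going to pull the full SQL error out of the setup log and
look at the offending script (log path assumed from a default engine install):

grep -i -B 2 -A 10 'FATAL' /var/log/ovirt-engine/setup/ovirt-engine-setup-*.log | tail -50
less /usr/share/ovirt-engine/dbscripts/upgrade/04_02_1220_default_all_search_engine_string_fields_to_not_null.sql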
Let me know if you need any more logs.
Regards,
Brett
6 years, 3 months