upload image and novnc do not work after power failure
by hkicsi@gmail.com
Hi,
In my lab I use the Oracle version of oVirt. Suddenly my power went out, and after that the ISO upload and noVNC stopped cooperating. I have updated the browser certificate and also checked the fingerprints. The issue must be somewhere deeper.
I would like to understand how the certificate structure should work and how to debug it further.
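For reference, a minimal sketch of checks that usually narrow this kind of thing down, assuming the default ports and using a placeholder engine FQDN (adjust both if your setup was customized):

ENGINE=engine.example.com   # placeholder FQDN

# Fetch the engine CA that the browser (and the services) must trust:
curl -k "https://${ENGINE}/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA" -o ovirt-ca.pem

# See which certificate each service presents and whether it verifies against that CA
# (443 = engine UI/API, 6100 = websocket proxy for noVNC, 54323 = imageio used by uploads):
for port in 443 6100 54323; do
    echo "== port ${port} =="
    echo | openssl s_client -connect "${ENGINE}:${port}" -CAfile ovirt-ca.pem 2>/dev/null \
        | grep -E 'subject=|issuer=|Verify return code'
done

If one of the non-443 services presents an expired certificate, or one that does not verify against the engine CA, that service's certificate is the likely place to dig further.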
Thanks
hkicsi
3 months, 4 weeks
oVirt CLI tool for automation tasks
by munnadawood@gmail.com
We recently migrated from VMware to oVirt. I am looking for a CLI tool well suited to automation tasks like creating, cloning, and migrating hundreds of virtual machines in an oVirt cluster.
With VMware I was using govc (a vSphere CLI built on top of govmomi). Another option I have read about is PowerCLI, but I am unsure whether it works with oVirt.
Any suggestions would be highly appreciated.
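For reference, the options most often mentioned are the ovirt.ovirt Ansible collection and the Python SDK (ovirt-engine-sdk-python); both drive the engine's REST API, which can also be scripted directly with curl. A rough sketch, with the engine URL, credentials, cluster and VM names as placeholders:

ENGINE=https://engine.example.com/ovirt-engine/api   # placeholder
AUTH='admin@internal:secret'                          # placeholder

# Create a VM from the Blank template in cluster "Default":
curl -s -k -u "$AUTH" -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    -d '<vm><name>test01</name><cluster><name>Default</name></cluster><template><name>Blank</name></template></vm>' \
    "$ENGINE/vms"

# List VMs; the returned ids are what clone/migrate/start actions are issued against:
curl -s -k -u "$AUTH" -H 'Accept: application/xml' "$ENGINE/vms"

For hundreds of VMs, the Ansible collection (ovirt_vm, ovirt_disk and friends) or the Python SDK is usually a more maintainable route than raw curl.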
Thanks!
3 months, 4 weeks
PowerShell module for oVirt
by itsavant@gmail.com
Question to all:
If there were a PowerShell module for oVirt, similar to PowerCLI for VMware, how many would find that useful?
3 months, 4 weeks
hci, glusterfs, glusterfs mount shows old size after expanding volume (adding brick)
by Jiří Sléžka
Hello,
I have a 3-node HCI cluster (Rocky Linux 8,
4.5.7-0.master.20240415165511.git7238a3766d.el8). I had 2 SSDs in each
node, each as a separate brick. Recently I added a third SSD and expanded
the volume to a 3 x 3 topology. Despite this, the free space on the volume
did not change.
gluster volume info vms
Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Brick7: 10.0.4.11:/gluster_bricks/vms3/vms3
Brick8: 10.0.4.12:/gluster_bricks/vms3/vms3
Brick9: 10.0.4.13:/gluster_bricks/vms3/vms3
Options Reconfigured:
cluster.shd-max-threads: 1
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
performance.stat-prefetch: off
cluster.granular-entry-heal: enable
storage.health-check-interval: 0
df -h on all nodes looks the same
...
10.0.4.11:/engine 100G 23G 78G 23%
/rhev/data-center/mnt/glusterSD/10.0.4.11:_engine
10.0.4.11:/vms 1.7T 773G 952G 45%
/rhev/data-center/mnt/glusterSD/10.0.4.11:_vms
...
/dev/mapper/gluster_vg_sdb-gluster_lv_engine 100G 22G 79G 22%
/gluster_bricks/engine
/dev/mapper/gluster_vg_sdb-gluster_lv_vms 794G 476G 319G 60%
/gluster_bricks/vms
/dev/mapper/gluster_vg_sdd-gluster_lv_vms2 930G 553G 378G 60%
/gluster_bricks/vms2
/dev/mapper/gluster_vg_vms3-gluster_lv_vms3 932G 6.6G 925G 1%
/gluster_bricks/vms3
...
The size of the mounted vms volume is reported as 1.7T, which is the old
value (the sum of two bricks, 794G + 930G). The correct size should be the
sum of all bricks, around 2.6T.
What step am I missing?
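For reference, a few checks that might narrow it down (a hedged sketch, not a definitive fix):

gluster volume status vms detail                 # per-brick capacity as gluster sees it
gluster volume rebalance vms fix-layout start    # commonly recommended after add-brick
gluster volume rebalance vms status

# Compare what a fresh mount reports, in case the running fuse client is still
# using its old view of the volume:
mount -t glusterfs 10.0.4.11:/vms /mnt/vms-test && df -h /mnt/vms-test

If a fresh mount shows the expected ~2.6T while the existing /rhev mount does not, remounting the storage domain (host to maintenance and back) should pick up the new size.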
Cheers,
Jiri
4 months
Ceph-only storage for self-hosted engine
by Michael Thomas
I found this thread from last year that indicates it should be possible
to use ceph rbd for the self-hosted engine:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/4VG4AWPQ2FBY...
In my new hosted-engine deployment, the hosted-engine --deploy menu only
shows me options for:
Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs)[nfs]:
Should I also see options for using ceph rbd, or do I need to start with
nfs, and migrate the self-hosted engine to rbd after the initial setup
is complete?
--Mike
4 months
Latest oVirt images
by Michael Thomas
tl;dr Where can I find a compatible set of host packages and
hosted-engine image? What is the recommended combination to use for new
installs?
First, some background:
I've been trying to get a new oVirt 4.5.5 install on Rocky 9 hosts using
a hosted engine. My first few attempts failed because the engine image
(ovirt-engine-appliance-4.5-20231201120201.1.el9) was still based on
CentOS Stream 8. Using
--ansible-extra-vars=he_pause_before_engine_setup=true I was able to
redirect the repos to vault.centos.org. This helped, but the deployment
still failed when the engine tried to access the host:
2024-11-06 17:05:53,600-0600 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 {'msg': 'Host is not up, please check
logs, perhaps also on the engine machine', '_ansible_no_log': False,
'changed': False}
2024-11-06 17:05:53,700-0600 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:113 fatal: [localhost]: FAILED! =>
{"changed": false, "msg": "Host is not up, please check logs, perhaps
also on the engine machine"}
...and the logs on the engine throw a NetworkNotFoundError while trying
to set up OVN:
"stdout" : "fatal: [hv1-mgmt.cds.ligo-la.caltech.edu]: FAILED! =>
{\"changed\": true, \"cmd\": [\"vdsm-tool\", \"ovn-config\",
\"10.110.115.21\", \"hv1-mgmt.cds.ligo-la.caltech.edu\"], \"delta\":
\"0:00:02.413890\", \"end\": \"2024-11-08 14:29:54.215138\", \"msg\":
\"non-zero return code\", \"rc\": 1, \"start\": \"2024-11-08
14:29:51.801248\", \"stderr\": \"Traceback (most recent call last):\\n
File \\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\",
line 117, in get_network\\n return networks[net_name]\\nKeyError:
'hv1-mgmt.cds.ligo-la.caltech.edu'\\n\\nDuring handling of the above
exception, another exception o
ccurred:\\n\\nTraceback (most recent call last):\\n File
\\\"/usr/bin/vdsm-tool\\\", line 195, in main\\n return
tool_command[cmd][\\\"command\\\"](*args)\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line
63, in ovn_config\\n ip_address =
get_ip_addr(get_network(network_caps(), net_name))\\n File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line
119, in get_network\\n raise
NetworkNotFoundError(net_name)\\nvdsm.tool.ovn_config.NetworkNotFoundError:
hv1-mgmt.cds.ligo-la.caltech.edu\", \"stderr_lines\": [\"Traceback (most
recent call last):\", \" File \\\"/usr/lib/python3.
9/site-packages/vdsm/tool/ovn_config.py\\\", line 117, in get_network\",
\" r
eturn networks[net_name]\", \"KeyError:
'hv1-mgmt.cds.ligo-la.caltech.edu'\", \"\", \"During handling of the
above exception, another exception occurred:\", \"\", \"Traceback (most
recent call last):\", \" File \\\"/usr/bin/vdsm-tool\\\", line 195, in
main\", \" return tool_command[cmd][\\\"command\\\"](*args)\", \"
File \\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\",
line 63, in ovn_config\", \" ip_address =
get_ip_addr(get_network(network_caps(), net_name))\", \" File
\\\"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\\\", line
119, in get_network\", \" raise NetworkNotFoundError(net_name)\",
\"vdsm.tool.ovn_config.NetworkNotFoundError:
hv1-mgmt.cds.ligo-la.caltech.edu\"], \"stdout\": \"\", \"stdout_lines\":
[]}",
Ok, so then I think to myself that I should be using a newer engine
image. I installed
ovirt-engine-appliance-4.5-20240817071039.1.el9.x86_64.rpm and tried
again. But of course that failed because the host and engine now have
incompatible versions:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The host has
been set in non_operational status, deployment errors:
code 154: Host hv1-mgmt.cds.ligo-la.caltech.edu is compatible with versions
(4.2,4.3,4.4,4.5,4.6,4.7) and cannot join Cluster CDS which is set to version 4.8.,
code 1110: Host hv1-mgmt.cds.ligo-la.caltech.edu's following network(s) are not
synchronized with their Logical Network configuration: ovirtmgmt.,
code 9000: Failed to verify Power Management configuration for Host
hv1-mgmt.cds.ligo-la.caltech.edu.,
fix accordingly and re-deploy."}
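For reference, a hedged sketch of how the failing OVN step might be reproduced by hand on the host (the IP and FQDN are taken from the logs above; whether the ovirtmgmt network exists on the host at that point of the deploy is an assumption):

# What vdsm thinks the host networks are called:
vdsm-client Host getCapabilities | grep -o '"ovirtmgmt"'

# The deploy ran:
#   vdsm-tool ovn-config 10.110.115.21 hv1-mgmt.cds.ligo-la.caltech.edu
# i.e. it passed the host FQDN where ovn-config expects a vdsm network name or a
# tunnel IP, and that lookup is what raises NetworkNotFoundError. Re-running with
# the management network name shows whether anything beyond the lookup is broken:
vdsm-tool ovn-config 10.110.115.21 ovirtmgmt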
--Mike
4 months
Re: [External] : oVirt slow backups from ISCSI storage domain to nfs target
by Don Dupuis
Marcos
Yes, I understand what you are saying, and that is great for a 1Gbps network,
but I have a 200Gbps network and I am only getting 140MiB/sec. There is just
something different about how the data is being read out of this storage
system.
Don
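For reference, a hedged baseline that would separate raw read/write throughput from qemu-nbd overhead (the paths are placeholders for the actual NFS mount and the LV backing the preallocated image):

# NFS write path:
dd if=/dev/zero of=/mnt/nfs-backup/ddtest bs=1M count=20480 oflag=direct

# iSCSI read path (the raw LV behind the preallocated disk):
dd if=/dev/VG_NAME/IMAGE_LV of=/dev/null bs=1M count=20480 iflag=direct

# The same image pushed through qemu-img, which is closer to what the backup does:
qemu-img convert -p -f raw -O qcow2 /dev/VG_NAME/IMAGE_LV /mnt/nfs-backup/test.qcow2

If the dd numbers are high but qemu-img (or the backup itself) stays around 140MiB/s, the bottleneck is in the qemu image-handling layer rather than in the network or the arrays.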
On Mon, Nov 11, 2024 at 9:40 AM Marcos Sungaila <marcos.sungaila(a)oracle.com>
wrote:
> Don,
>
>
>
> Sorry, I wasn’t clear in my previous e-mail. My comment about small disks
> actually is about how much disk space you have currently in use, not the
> virtual disk size.
>
> For example, I have a 200GB virtual disk VM with only 4.6 GB in use. It
> takes about 40 sec to transfer the disk to an NFS share in my lab, reaching
> 128MiB/sec on a 1Gbps network connection.
>
>
>
> Marcos
>
>
>
> *From:* Don Dupuis <dondster(a)gmail.com>
> *Sent:* Monday, November 11, 2024 11:32 AM
> *To:* Marcos Sungaila <marcos.sungaila(a)oracle.com>
> *Cc:* users <users(a)ovirt.org>
> *Subject:* Re: [External] : [ovirt-users] oVirt slow backups from ISCSI
> storage domain to nfs target
>
>
>
> Marcos,
>
> These are not small files; the files I am talking about are about 400GB or
> bigger. If I do the same thing on another DE6000 with SAS disks, I don't
> see this issue. Thin-provisioned disks are OK on both storage systems, but
> the 6600F has NVMe-interfaced SSDs and runs this slow on preallocated
> disks. For the short term, we are just going to go back to thin-provisioned
> as we can't get the backups to finish in the allocated time windows.
>
>
>
> Don
>
>
>
> On Mon, Nov 11, 2024 at 5:23 AM Marcos Sungaila <
> marcos.sungaila(a)oracle.com> wrote:
>
> Hi Don,
>
>
>
> When transferring small files to an NFS share, many times, they can fit
> the in-memory cache, leading to high transfer rates. For bigger files, the
> remote NFS storage will need to commit data to the disk once the cache is
> filled.
>
> In general, it is common to see the initial data transfer at high numbers,
> slowing down as the transfer continues. This behavior is not only with NFS
> shares; you may face the same behavior using scp.
>
> Anyway, it is recommended that a transfer test be run using other methods
> to confirm this is the cause.
>
>
>
> Marcos
>
>
>
> *From:* Don Dupuis <dondster(a)gmail.com>
> *Sent:* Friday, November 8, 2024 5:06 PM
> *To:* users <users(a)ovirt.org>
> *Subject:* [External] : [ovirt-users] oVirt slow backups from ISCSI
> storage domain to nfs target
>
>
>
> Hi
>
> I am using oVirt 4.4.10.7 with VMs on an iSCSI Lenovo DE-6600F
> (NVMe-interfaced SSDs). Backup performance for thin-provisioned VMs is fine,
> but with preallocated disks, backups to NFS storage over a 200Gb interface
> are only transferring the qcow2 image to NFS at 140MiB per sec. Why is
> qemu-nbd so slow with preallocated images? Does anyone have any issues
> related to this? Any help would be appreciated. I have made iscsid.conf
> changes and Linux kernel boot changes, and still get the same performance.
>
>
>
> Thanks
>
> Don
>
>
4 months
Re: [External] : oVirt slow backups from ISCSI storage domain to nfs target
by Don Dupuis
Marcos,
These are not small files; the files I am talking about are about 400GB or
bigger. If I do the same thing on another DE6000 with SAS disks, I don't
see this issue. Thin-provisioned disks are OK on both storage systems, but
the 6600F has NVMe-interfaced SSDs and runs this slow on preallocated
disks. For the short term, we are just going to go back to thin-provisioned
as we can't get the backups to finish in the allocated time windows.
Don
On Mon, Nov 11, 2024 at 5:23 AM Marcos Sungaila <marcos.sungaila(a)oracle.com>
wrote:
> Hi Don,
>
>
>
> When transferring small files to an NFS share, many times, they can fit
> the in-memory cache, leading to high transfer rates. For bigger files, the
> remote NFS storage will need to commit data to the disk once the cache is
> filled.
>
> In general, it is common to see the initial data transfer at high numbers,
> slowing down as the transfer continues. This behavior is not only with NFS
> shares; you may face the same behavior using scp.
>
> Anyway, it is recommended that a transfer test be run using other methods
> to confirm this is the cause.
>
>
>
> Marcos
>
>
>
> *From:* Don Dupuis <dondster(a)gmail.com>
> *Sent:* Friday, November 8, 2024 5:06 PM
> *To:* users <users(a)ovirt.org>
> *Subject:* [External] : [ovirt-users] oVirt slow backups from ISCSI
> storage domain to nfs target
>
>
>
> Hi
>
> I am using oVirt 4.4.10.7 with VMs on an iSCSI Lenovo DE-6600F
> (NVMe-interfaced SSDs). Backup performance for thin-provisioned VMs is fine,
> but with preallocated disks, backups to NFS storage over a 200Gb interface
> are only transferring the qcow2 image to NFS at 140MiB per sec. Why is
> qemu-nbd so slow with preallocated images? Does anyone have any issues
> related to this? Any help would be appreciated. I have made iscsid.conf
> changes and Linux kernel boot changes, and still get the same performance.
>
>
>
> Thanks
>
> Don
>
4 months
Re: [External] : open v-switch woes
by Marcos Sungaila
Rephrasing my last e-mail:
Working with Open vSwitch at the oVirt infrastructure level is NOT an easy task.
Marcos
From: Marcos Sungaila via Users <users(a)ovirt.org>
Sent: Monday, November 11, 2024 9:57 AM
To: Tim Walsh <mr_tim_walsh(a)hotmail.com>; users(a)ovirt.org
Subject: [ovirt-users] Re: [External] : open v-switch woes
Tim,
Working with open vswitch at the ovirt infrastructure level is an easy task.
Here you have some steps to start with it.
Deploying an OVN-enabled system has the following requirements:
* Stand-alone Engine installed on bare metal or on a VM outside the oVirt cluster. Running SHE is not possible, since the default cluster will use Linux bridges, not Open vSwitch.
* No extra packages are required.
* After the Engine is deployed, create a new cluster, set the switch type to OVS and the default network provider to ovirt-provider-ovn.
* Add a host to the new cluster; note that migrating a host from a Linux bridge cluster to OVS may fail and may require you to clean up all network configuration before joining the host to the right cluster.
* In the network menu, you will need to create an OVN network for each tagged VLAN, as well as for any untagged network you may have.
* If you enable network port security, you will need to create security groups and security rules before any communication can happen between instances, and to/from external IPs.
Marcos
From: Tim Walsh <mr_tim_walsh(a)hotmail.com>
Sent: Friday, October 25, 2024 6:00 PM
To: Marcos Sungaila <marcos.sungaila(a)oracle.com>; users(a)ovirt.org
Subject: Re: [External] : [ovirt-users] open v-switch woes
Yes, not only a desire to replicate the functionality of vSwitches on VMware, but also to have an isolated test environment.
Create a virtual firewall (using OPNsense or pfSense or something) and create VMs behind the firewall that can all talk to each other, while the firewall only passes out the application traffic (for example a web app on Nginx, or Remote Desktop to the private environment).
Microsoft Hyper-V has "private" and "internal" switches that can be set up (and that's the case on either a standalone host or a cluster), so I figured oVirt must have something similar if not the same. These may or may not necessarily be tagged to a VLAN on a physical switch.
Thanks,
Tim
________________________________
From: Marcos Sungaila <marcos.sungaila(a)oracle.com>
Sent: Thursday, October 24, 2024 4:22 PM
To: Tim Walsh <mr_tim_walsh(a)hotmail.com>; users(a)ovirt.org
Subject: RE: [External] : [ovirt-users] open v-switch woes
Hey Tim,
Deploying an OVN-enabled cluster is not that trivial.
There are many caveats to making it run.
Is there any special use case you need to address to use OVS/OVN?
Marcos
From: Tim Walsh <mr_tim_walsh(a)hotmail.com>
Sent: Thursday, October 24, 2024 12:28 AM
To: users(a)ovirt.org
Subject: [External] : [ovirt-users] open v-switch woes
Hey community,
I'm trying to get Open vSwitch set up to work like it does in VMware. I am running Rocky Linux 8.9 and oVirt 4.5.5 (el8).
I got the repos, but online feedback recommends installing openvswitch, ovn-northd, ovn-central and ovn-host.
I got openvswitch installed, but the other three (ovn-northd, ovn-central and ovn-host) seem to be elusive even after adding the CentOS-Advanced-Virtualization.repo and updating all "CentOS-" repos to point to "vault" instead of "mirrorlist".
Can someone help me with what I'm missing? I've tried Bing Copilot and ChatGPT LOL but they say add that repo and install those packages.
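For reference, a hedged sketch of how to hunt for the OVN bits on EL8 with the oVirt 4.5 repos enabled (the ovirt-openvswitch-* wrapper names below are what I would expect to find, not a guaranteed list):

dnf repolist enabled
dnf provides '*/ovn-northd' '*/ovn-controller'
dnf search ovn openvswitch

# If the oVirt 4.5 wrapper packages show up in the search, they pull in the
# matching OVS/OVN builds:
dnf install ovirt-openvswitch-ovn-central   # on the machine running the OVN databases
dnf install ovirt-openvswitch-ovn-host      # on each hypervisor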
Thanks,
Tim
4 months
Re: [External] : open v-switch woes
by Marcos Sungaila
Tim,
Working with open vswitch at the ovirt infrastructure level is an easy task.
Here you have some steps to start with it.
Deploying an OVN-enabled system has the following requirements:
* Stand-alone Engine installed on bare metal or on a VM outside the oVirt cluster. Running SHE is not possible, since the default cluster will use Linux bridges, not Open vSwitch.
* No extra packages are required.
* After the Engine is deployed, create a new cluster, set the switch type to OVS and the default network provider to ovirt-provider-ovn.
* Add a host to the new cluster; note that migrating a host from a Linux bridge cluster to OVS may fail and may require you to clean up all network configuration before joining the host to the right cluster (a quick host-side check is sketched after this list).
* In the network menu, you will need to create an OVN network for each tagged VLAN, as well as for any untagged network you may have.
* If you enable network port security, you will need to create security groups and security rules before any communication can happen between instances, and to/from external IPs.
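For reference, a rough way to verify the host side once it has joined the OVS cluster (a hedged sketch using the standard OVS/OVN tools, nothing oVirt-specific):

# On the host:
ovs-vsctl show                                        # expect br-int plus geneve tunnel ports
ovs-vsctl get Open_vSwitch . external_ids:ovn-remote  # should point at the engine's OVN southbound DB

# On the machine running OVN central (normally the engine):
ovn-sbctl show                                        # each joined host should appear as a chassis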
Marcos
From: Tim Walsh <mr_tim_walsh(a)hotmail.com>
Sent: Friday, October 25, 2024 6:00 PM
To: Marcos Sungaila <marcos.sungaila(a)oracle.com>; users(a)ovirt.org
Subject: Re: [External] : [ovirt-users] open v-switch woes
Yes, not only a desire to replicate the functionality of vSwitches on VMware, but also to have an isolated test environment.
Create a virtual firewall (using OPNsense or pfSense or something) and create VMs behind the firewall that can all talk to each other, while the firewall only passes out the application traffic (for example a web app on Nginx, or Remote Desktop to the private environment).
Microsoft Hyper-V has "private" and "internal" switches that can be set up (and that's the case on either a standalone host or a cluster), so I figured oVirt must have something similar if not the same. These may or may not necessarily be tagged to a VLAN on a physical switch.
Thanks,
Tim
________________________________
From: Marcos Sungaila <marcos.sungaila(a)oracle.com>
Sent: Thursday, October 24, 2024 4:22 PM
To: Tim Walsh <mr_tim_walsh(a)hotmail.com>; users(a)ovirt.org
Subject: RE: [External] : [ovirt-users] open v-switch woes
Hey Tim,
Deploying an OVN-enabled cluster is not that trivial.
There are many caveats to making it run.
Is there any special use case you need to address to use OVS/OVN?
Marcos
From: Tim Walsh <mr_tim_walsh(a)hotmail.com>
Sent: Thursday, October 24, 2024 12:28 AM
To: users(a)ovirt.org
Subject: [External] : [ovirt-users] open v-switch woes
Hey community,
I'm trying to get Open vSwitch set up to work like it does in VMware. I am running Rocky Linux 8.9 and oVirt 4.5.5 (el8).
I got the repos, but online feedback recommends installing openvswitch, ovn-northd, ovn-central and ovn-host.
I got openvswitch installed, but the other three (ovn-northd, ovn-central and ovn-host) seem to be elusive even after adding the CentOS-Advanced-Virtualization.repo and updating all "CentOS-" repos to point to "vault" instead of "mirrorlist".
Can someone help me with what I'm missing? I've tried Bing Copilot and ChatGPT LOL but they say add that repo and install those packages.
Thanks,
Tim
4 months