Max running ansible-playbook
by Tommaso - Shellrent
Hi all.
On our engine we always see a maximum of two ansible-playbook processes
running at the same time.
How can we increase this value?
Regards.
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 | Fax 04441492177
4 years, 4 months
HCI Cluster Configuration
by Clint Boggio
Ovirt 4.3.9
HCI Gluster 3 Dell 730XD
Managed Engine
When I finished the "hosted engine deploy", I failed to add the Gluster
storage according to the documentation, and I'd like to get that back on
track.
1. I ticked the "enable Gluster service" check box in the cluster
configuration; all of my current Gluster storage domains went "red
triangle" on me, and the hosts also went "red triangle", so I quickly
changed it back and everything returned to normal.
Do I need to check that box to make the "Use Managed Gluster Volume"
check box and drop-down menu show the HCI Gluster volumes that exist on
the three nodes?
Do I need to put the storage domain(s) in maintenance mode to change them
from unmanaged to managed?
Will the system honor the VMs that live in my "engine" and "vmstore"
Gluster volumes, which are currently in use, if/when I change them from
unmanaged to managed, or will the system wipe them and hose my managed
engine and VMs?
Thank you for your help!
Re: Help remove snapshot
by Shani Leviim
Hi Magnus,
Sounds like you triggered this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1555116.
The error message you've got was meant to avoid removing a snapshot from a
broken chain.
On the Active VM snapshot -> Disks, is there any disk with Illegal status?
Deleting the whole snapshot can be done from the DB:
delete from snapshots where snapshot_id = '.....';
Although it's recommended to understand the root cause for that one.
Can you please share your engine.log?
Regards,
Shani Leviim
On Mon, Aug 17, 2020 at 1:37 PM Magnus Isaksson <magnus(a)vmar.se> wrote:
> Hi!
>
> I would need some help removing a snapshot that looks like this.
> How do I remove it manually or fix the snapshot definition?
> -----------------
> Error while executing action:
>
> TUN_SALDC01:
>
> - The requested snapshot has an invalid parent, either fix the
> snapshot definition or remove it manually to complete the process.
>
> -----------------
> And when I look at the disk on that snapshot it shows as this:
>
>
> And general info:
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QEBPZPUHDG...
>
Re-creating ISO Storage Domain after Manager Rebuild
by bob.franzke@mdaemon.com
Hi All,
Late last year I had a disk issue with my oVirt Manager server that required me to rebuild it and restore from backups. While this mostly worked, one thing that did not get put back correctly was the physical storage location of an ISO domain on the manager server. I believe this was previously set up as an NFS share hosted on the manager server itself. The physical disk path and filesystem were not re-created on the manager server's local disk, so after the restore the ISO domain shows up in the oVirt Admin Portal but shows as down and inactive. If I go into the domain, the 'Manage Domain' and 'Remove' buttons are not available; the only option I have is 'Destroy'. Right now I have no ability to do things like boot a VM from a CD, and I assume that's because the ISO domain the images would be stored on is not available. How can I recreate this ISO domain so that I can upload ISO images that VMs can boot from? The original path on the manager server was /gluster:
sdc 8:32 0 278.9G 0 disk
├─sdc1 8:33 0 1G 0 part /boot
└─sdc2 8:34 0 277.9G 0 part
├─centos_mydesktop-root 253:0 0 50G 0 lvm /
├─centos_mydesktop-swap 253:1 0 23.6G 0 lvm [SWAP]
├─centos_mydesktop-home 253:2 0 25G 0 lvm /home
└─centos_mydesktop-gluster 253:3 0 179.3G 0 lvm /gluster
The current layout is now:
└─sda3 8:3 0 277.7G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 4G 0 lvm [SWAP]
└─centos-home 253:2 0 223.7G 0 lvm /home
Can I just create a new filesystem here, destroy the old ISO domain, and add a new one using the new path? I am a total oVirt noob, so I would appreciate any help here. Thanks.
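For reference, re-creating the export that the engine would mount might look roughly like this. This is a sketch, not a confirmed fix: the /gluster/iso path and the export options are assumptions, though uid/gid 36:36 is the vdsm:kvm ownership oVirt expects on NFS storage.

```shell
# Sketch: rebuild a local NFS export for a new ISO domain (run as root).
mkdir -p /gluster/iso
chown 36:36 /gluster/iso             # vdsm:kvm, required by oVirt on NFS domains
echo '/gluster/iso *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
exportfs -ra                         # re-export everything in /etc/exports
systemctl enable --now nfs-server    # make sure the NFS server is running
```

After that, destroying the old (unrecoverable) ISO domain and adding a new NFS storage domain pointing at the new export would be the remaining steps in the Admin Portal.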
Bob
Problem with "ceph-common" pkg for oVirt Node 4.4.1
by m.andreev@2500baden.eu
Hi!
I have a problem installing the ceph-common package (needed for cinderlib Managed Block Storage) on oVirt Node 4.4.1. The oVirt documentation says "$ yum install -y ceph-common", but there is no repo with ceph-common 12.2.7 for CentOS 8: official CentOS has only ceph-common-10.2.5-4.el7.x86_64.rpm, and Ceph has only ceph-common 14.2 for EL8.
How can I install ceph-common 12.2.7?
BR
Mike
Upgrade Memory of oVirt Nodes
by souvaliotimaria@mail.com
Hello everyone,
I have an oVirt 4.3.2.5 hyperconverged 3 node production environment and we want to add some RAM to it.
Can I upgrade the RAM without my users noticing any disruptions and keep the VMs running?
The way I thought I should do it was to migrate any running VMs to the other nodes, then put one node in maintenance mode, shut it down, install the new memory, bring it back up, remove it from maintenance mode, see how the installation reacts, and repeat for the other two nodes. Is this correct, or should I follow another way?
Will there be a problem while the nodes are not identical in their resources?
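The steps described above can be sanity-checked before each shutdown. A sketch of the usual pre-maintenance checks on a hyperconverged node; the volume names "engine" and "vmstore" are assumptions based on a default HCI deployment:

```shell
# Pre-maintenance sanity checks, run on the node about to go down (sketch):
hosted-engine --vm-status            # engine HA state as seen by all three hosts
gluster volume heal engine info      # pending heal entries should be zero...
gluster volume heal vmstore info     # ...on every volume, before the next node
```

Waiting for heals to finish between nodes avoids the window where two replicas are out of sync.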
Thank you for your time,
Souvalioti Maria
OVirt hosted engine setup dnf config
by Raphael Höser
Hi all,
I'm currently installing oVirt 4.4 on CentOS 8 in a hosted-engine setup, and it fails during ovirt-hosted-engine-setup.
Because of our corporate proxy, some changes to dnf.conf are needed (namely proxy_auth_method=basic), and the config works fine when running dnf commands the normal way.
Proxy configuration is provided via http_proxy, https_proxy and ftp_proxy env vars.
The setup fails during a DNF call with status code 407 (proxy auth required).
Is there another way to provide that config, or any other way to resolve this? My googling hasn't helped me yet.
On the same machine I previously had oVirt 4.3 on CentOS 7 running as a test, and now I wanted to upgrade (after a complete wipe).
Thanks for your help,
Raphael
content of /etc/dnf/dnf.conf:
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
proxy_auth_method=basic
timeout=120
minrate=90
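One thing worth checking (an assumption on my side, not a confirmed fix): the deployment may not inherit the proxy environment variables, so spelling the proxy out in dnf.conf itself could help, since dnf.conf supports proxy, proxy_username and proxy_password directly. The host and credentials below are placeholders:

```ini
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
proxy=http://proxy.example.com:3128
proxy_username=youruser
proxy_password=yourpass
proxy_auth_method=basic
timeout=120
minrate=90
```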
Log of ovirt-hosted-engine-setup:
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
During customization use CTRL-D to abort.
Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
The locally running engine will be used to configure a new storage domain and create a VM there.
At the end the disk of the local VM will be moved to the shared storage.
Are you sure you want to continue? (Yes, No)[Yes]:
Configuration files:
Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200818091958-tozsht.log
Version: otopi-1.9.2 (otopi-1.9.2-1.el8)
[ INFO ] DNF Downloading 1 files, 0.00KB
[ INFO ] DNF Downloaded CentOS-8 - AppStream
[ INFO ] DNF Errors during downloading metadata for repository 'AppStream':
- Status code: 407 for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... (IP: 10.201.210.68)
- Status code: 407 for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... (IP: 10.201.210.67)
[ ERROR ] DNF Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Status code: 407 for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... (IP: 10.201.210.67)
[ ERROR ] Failed to execute stage 'Environment setup': Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Status code: 407 for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... (IP: 10.201.210.67)
[ INFO ] Stage: Clean up
[Some more cleanup]
Re: Help remove snapshot
by Shani Leviim
Hi Magnus,
You can find the VM's snapshots by sending a REST API request:
http://localhost:8080/ovirt-engine/api/vms/<vm-id>/snapshots/
You can also find the VM's snapshots (and view their parameters) by using
SQL:
select * from snapshots where vm_id = '<vm-id>';
You can also delete the relevant snapshot by specifying the snapshot's
description:
delete from snapshots where description = '.....';
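Putting the two statements together, a cautious sketch: run the SELECT first, delete by id inside a transaction, and roll back unless exactly the intended row was matched. The status column name is an assumption about the engine schema; the angle-bracket values are placeholders.

```sql
-- Sketch only: verify the row first, then delete inside a transaction.
BEGIN;

-- 1. Find the candidate snapshot for the VM.
SELECT snapshot_id, description, status
  FROM snapshots
 WHERE vm_id = '<vm-id>';

-- 2. Delete by id (safer than matching on description alone).
DELETE FROM snapshots
 WHERE snapshot_id = '<snapshot-id-from-step-1>';

-- 3. COMMIT only if the delete touched the intended row; otherwise:
ROLLBACK;
```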
Regards,
Shani Leviim
On Tue, Aug 18, 2020 at 7:50 PM Magnus Isaksson <magnus(a)vmar.se> wrote:
> Hi Shani
>
> Thanks for the help.
>
> From what I can see, it does not display any Illegal statuses.
>
> I am a bit unsure how to delete it from the DB; how do I find the
> snapshot_id?
>
> Cheers
> Magnus
HA Storage options
by David White
Hi,
I started an email thread a couple of months ago and felt like I got some great feedback and suggestions on how to best set up an oVirt cluster. Thanks for your responses thus far.
My goal is to take a total of 3-4 servers that I can use for both storage and virtualization, and I want both to be highly available.
You guys told me about oVirt Hyperconverged with Gluster, and that seemed like a great option. However, I'm concerned that this may not actually be the best approach. I've spoken with multiple people at Red Hat who I have a relationship with (outside of the context of the project I'm working on here), and all of them have indicated to me that Gluster is being deprecated, and that most of the engineering focus these days is on Ceph. I was also told by a Solutions Architect who has extensive experience with RHV that the hyperconverged clusters he used to build would always give him problems.
Does oVirt support DRBD or Ceph storage? From what I can find, I think that the answer to both of those is, sadly, no.
So now I'm thinking about switching gears, and going with iSCSI instead.
But I'm still trying to think about the best way to replicate the storage, and possibly use multipathing so that it will be HA for the VMs that rely on it.
Has anyone else experienced problems with the Gluster hyperconverged solution?
Am I overthinking this whole thing, and am I being too paranoid?
Is it possible to setup some sort of software-RAID with multiple iSCSI targets?
As an aside, I now have a machine that I was planning to use for some initial testing and practice.
Prior to my conversations with the folks at Red Hat, the plan was to do some initial testing and configuration with this server before purchasing another 2-3 servers to build the hyperconverged cluster.
Sent with ProtonMail Secure Email.
Cannot access Engine VM
by Steven Bach
Hello All,
I installed oVirt Node 4.4.1.5 from the oVirt ISO, IP 192.168.1.53/24.
I set up the self-hosted engine by logging into the web interface,
configured it with the 192.168.1.55/24 IP on the default generated bridge
interface, and set up an FQDN that is resolvable via DNS.
I set up NFS storage, and all seems good.
The engine VM is running, but I cannot access it at all from outside the node.
SSH'd into the node, I can ping the engine; anything else on that same
subnet cannot SSH or even ping it (again, DNS resolves correctly to the
IP address). I am able to SSH into the engine from the node.
While SSH'd into the engine from the node, I can only ping the node and
nothing beyond that; I'm not able to install the oVirt repo, nothing.
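The symptom (engine reachable only from its own node) often points at routing rather than DNS. From inside the engine VM, the addressing and default route can be inspected with standard iproute2 commands; this is a generic diagnostic sketch, nothing oVirt-specific assumed:

```shell
# Inside the engine VM: check where its traffic actually goes (sketch).
ip addr show          # the engine IP should sit on the NIC bridged to ovirtmgmt
ip route show default # the gateway should be the LAN router, not the node
```

If `ip route show default` prints nothing, the engine has no default gateway, which would explain being able to reach only the node it runs on.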
Any ideas are greatly appreciated, thank you,
--
Steven Bach
sbach(a)bachnetworks.com