Cluster CPU type update
by Morris, Roy
Hello,
I'm running a 4.3 cluster with 56 VMs that all use the cluster default, and the cluster itself is set to Intel Westmere Family. I am getting a new server and retiring 3 old ones, which brings me to my question.
When adding the new host to the cluster and retiring the older hosts, I want to change the cluster default CPU type setting to Broadwell, which is what my new 3-host cluster will be. To make this change, I assume the process would be to power off all VMs, set the cluster CPU type to Broadwell, and then power the VMs back on to get the new CPU flags?
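If I end up scripting it, I assume a rough sketch with the ovirt_cluster Ansible module would look something like the following (assuming an ovirt_auth login task has already run; "Default" below just stands in for my real cluster and data center names):

- name: Switch the cluster CPU type to Broadwell
  ovirt_cluster:
    auth: "{{ ovirt_auth }}"
    name: Default            # placeholder cluster name
    data_center: Default     # placeholder data center name
    cpu_type: Intel Broadwell Family

The VMs would still have to be powered off before the change and powered back on afterwards to pick up the new flags.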
Just wanted to reach out and see if someone else has had to go through this before and has any notes/tips.
Best regards,
Roy
Can't add host with "Certificate enrollment failed". Any ideas?
by m.skrzetuski@gmail.com
Hello everyone,
I am trying to add a new host with the "ovirt.infra" role and use the following Ansible snippet for it.
hosts:
  - name: delirium
    address: delirium.home
    cluster: Matrix
    public_key: true
    power_management_enabled: true
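For reference, I believe the role ends up driving the ovirt_host module underneath; a hand-written equivalent (a rough sketch, assuming an ovirt_auth login task has already run) would look roughly like this:

- name: Add host delirium
  ovirt_host:
    auth: "{{ ovirt_auth }}"
    name: delirium
    address: delirium.home
    cluster: Matrix
    public_key: true                 # enroll via the engine's SSH key instead of a root password
    power_management_enabled: true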
I am getting the following error.
msg: 'Error message: [''Host delirium installation failed. Certificate enrollment failed.'', ''An error has occurred during installation of Host delirium: Certificate enrollment failed.'']'
Any ideas? The error does not say what exactly is wrong.
The following are my SSL certs.
[skrzetuski@delirium ~]$ sudo ls -lR /etc/pki/ovirt-engine/
/etc/pki/ovirt-engine/:
total 68
lrwxrwxrwx. 1 root root 28 Jan 2 23:51 apache-ca.pem -> /etc/pki/ovirt-engine/ca.pem
-rw-r--r--. 1 root root 384 Jan 2 23:51 cacert.conf
-rw-r--r--. 1 root root 384 Jan 2 23:51 cacert.template
-rw-r--r--. 1 root root 384 Nov 12 14:16 cacert.template.in
-rw-r--r--. 1 root root 1647 Jan 3 01:12 ca.pem
-rw-r--r--. 1 root root 1444 Jan 2 23:51 cert.conf
drwxr-xr-x. 2 ovirt ovirt 4096 Jan 2 23:52 certs
-rw-r--r--. 1 root root 1444 Jan 2 23:51 cert.template
-rw-r--r--. 1 root root 1156 Nov 12 14:16 cert.template.in
-rw-r--r--. 1 ovirt ovirt 800 Jan 2 23:52 database.txt
-rw-r--r--. 1 ovirt ovirt 20 Jan 2 23:52 database.txt.attr
-rw-r--r--. 1 ovirt ovirt 20 Jan 2 23:52 database.txt.attr.old
-rw-r--r--. 1 ovirt ovirt 733 Jan 2 23:52 database.txt.old
drwxr-xr-x. 2 root root 4096 Jan 2 23:52 keys
-rw-r--r--. 1 root root 550 Nov 12 14:16 openssl.conf
drwxr-x---. 2 ovirt ovirt 20 Jan 2 23:51 private
drwxr-xr-x. 2 ovirt ovirt 4096 Jan 3 12:22 requests
-rw-r--r--. 1 ovirt ovirt 5 Jan 2 23:52 serial.txt
-rw-r--r--. 1 ovirt ovirt 5 Jan 2 23:52 serial.txt.old
/etc/pki/ovirt-engine/certs:
total 244
-rw-r--r--. 1 root root 1310 Jan 2 23:51 1000.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1001.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1002.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1003.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1004.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1005.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1006.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1007.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1008.pem
-rw-r--r--. 1 root root 5109 Jan 2 23:51 1009.pem
-rw-r--r--. 1 root root 4925 Jan 2 23:52 100A.pem
-rw-r--r--. 1 root root 5011 Jan 2 23:52 100B.pem
-rw-r--r--. 1 root root 5011 Jan 2 23:52 100C.pem
-rw-r--r--. 1 root root 1968 Jan 3 01:12 apache.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 apache.cer.20200102235148
-rw-r--r--. 1 root root 1310 Jan 2 23:51 ca.der
-rw-r--r--. 1 root root 5109 Jan 2 23:51 engine.cer
-rw-r--r--. 1 root root 1727 Jan 2 23:51 imageio-proxy.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 imageio-proxy.cer.20200102235148
-rw-r--r--. 1 root root 5109 Jan 2 23:51 jboss.cer
-rw-r--r--. 1 root root 1727 Jan 2 23:51 ovirt-provider-ovn.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 ovirt-provider-ovn.cer.20200102235149
-rw-r--r--. 1 root root 1727 Jan 2 23:51 ovn-ndb.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 ovn-ndb.cer.20200102235148
-rw-r--r--. 1 root root 1727 Jan 2 23:51 ovn-sdb.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 ovn-sdb.cer.20200102235149
-rw-r--r--. 1 root root 1727 Jan 2 23:51 reports.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 reports.cer.20200102235148
-rw-r--r--. 1 root root 4925 Jan 2 23:52 vmconsole-proxy-helper.cer
-rw-r--r--. 1 root root 5011 Jan 2 23:52 vmconsole-proxy-host.cer
-rw-r--r--. 1 root root 1391 Jan 2 23:52 vmconsole-proxy-host-cert.pub
-rw-r--r--. 1 root root 381 Jan 2 23:52 vmconsole-proxy-host.pub
-rw-r--r--. 1 root root 5011 Jan 2 23:52 vmconsole-proxy-user.cer
-rw-r--r--. 1 root root 1423 Jan 2 23:52 vmconsole-proxy-user-cert.pub
-rw-r--r--. 1 root root 381 Jan 2 23:52 vmconsole-proxy-user.pub
-rw-r--r--. 1 root root 1727 Jan 2 23:51 websocket-proxy.cer
-rw-r--r--. 1 root root 5109 Jan 2 23:51 websocket-proxy.cer.20200102235148
/etc/pki/ovirt-engine/keys:
total 84
-rw-r-----. 1 root ovirt 1675 Jan 3 01:12 apache.key.nopass
-rw-------. 1 root root 2709 Jan 2 23:51 apache.p12
-rw-------. 1 ovirt ovirt 1828 Jan 2 23:51 engine_id_rsa
-rw-------. 1 ovirt root 2709 Jan 2 23:51 engine.p12
-rw-------. 1 root root 1828 Jan 2 23:51 imageio-proxy.key.nopass
-rw-------. 1 root root 2709 Jan 2 23:51 imageio-proxy.p12
-rw-------. 1 ovirt root 2709 Jan 2 23:51 jboss.p12
-rw-------. 1 root root 1828 Jan 2 23:51 ovirt-provider-ovn.key.nopass
-rw-------. 1 root root 2709 Jan 2 23:51 ovirt-provider-ovn.p12
-rw-------. 1 root root 1828 Jan 2 23:51 ovn-ndb.key.nopass
-rw-------. 1 root root 2709 Jan 2 23:51 ovn-ndb.p12
-rw-------. 1 root root 1832 Jan 2 23:51 ovn-sdb.key.nopass
-rw-------. 1 root root 2709 Jan 2 23:51 ovn-sdb.p12
-rw-------. 1 root root 1828 Jan 2 23:51 reports.key.nopass
-rw-------. 1 root root 2709 Jan 2 23:51 reports.p12
-rw-------. 1 ovirt-vmconsole ovirt-vmconsole 1832 Jan 2 23:52 vmconsole-proxy-helper.key.nopass
-rw-------. 1 root root 2677 Jan 2 23:52 vmconsole-proxy-helper.p12
-rw-------. 1 root root 2693 Jan 2 23:52 vmconsole-proxy-host.p12
-rw-------. 1 root root 2693 Jan 2 23:52 vmconsole-proxy-user.p12
-rw-------. 1 ovirt ovirt 1828 Jan 2 23:51 websocket-proxy.key.nopass
-rw-------. 1 ovirt root 2709 Jan 2 23:51 websocket-proxy.p12
/etc/pki/ovirt-engine/private:
total 4
-rw-r-----. 1 ovirt ovirt 1675 Jan 2 23:51 ca.pem
/etc/pki/ovirt-engine/requests:
total 64
-rw-r--r--. 1 ovirt ovirt 862 Jan 3 12:04 192.168.1.5.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 apache.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 ca.csr
-rw-r--r--. 1 root root 863 Jan 2 23:51 engine.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 imageio-proxy.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 jboss.req
-rw-r--r--. 1 ovirt ovirt 862 Jan 3 12:11 delirium.home.req
-rw-r--r--. 1 ovirt ovirt 862 Jan 3 12:22 delirium.mysecretdomain.ch.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 ovirt-provider-ovn.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 ovn-ndb.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 ovn-sdb.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 reports.req
-rw-r--r--. 1 root root 863 Jan 2 23:52 vmconsole-proxy-helper.req
-rw-r--r--. 1 root root 863 Jan 2 23:52 vmconsole-proxy-host.req
-rw-r--r--. 1 root root 863 Jan 2 23:52 vmconsole-proxy-user.req
-rw-r--r--. 1 root root 863 Jan 2 23:51 websocket-proxy.req
Kind regards
skrzetuski
2 node redundant cluster without hyperconverged
by gokerdalar@gmail.com
Hello,
I would like to get some ideas on this topic.
I have two servers with the same capabilities and 8 identical physical disks per node. I want to set up a cluster using redundant disks. I don't have another server for Gluster hyperconverged. How should I build this structure?
Thanks in advance,
Göker
Re: Confusion about storage domain ISO. How do I provision VMs?
by Strahil
On Jan 2, 2020 23:34, m.skrzetuski(a)gmail.com wrote:
>
> Hello there,
>
> I'm a bit confused about provisioning VMs (with Ansible).
>
> At first I wanted to setup a VM manually over the web UI, just to see how it works.
> I was not able to upload ISO files with cloud images to a data domain but I was able to upload to ISO domain. However I read that ISO domains are deprecated (see https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...).
You should be able to import any ISO to data domains... What is the error you get?
> So what is the workflow/process of provisioning VMs from cloud images with cloud init? Where do I store my ISOs?
In Storage -> Storage Domains -> ovirt-image-repository there are a lot of cloud images that can be imported and later used (I have never done that so far).
So far, I'm using the ISO domain, as it was still in active use when I set up my lab.
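For what it's worth, roughly sketched with the oVirt Ansible modules (untested on my side; the image path, disk/VM names and sizes are just placeholders, and an ovirt_auth login task is assumed), the data-domain flow could look like:

- name: Upload a cloud image into a data domain as a disk
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: centos8-cloud                                   # placeholder disk name
    upload_image_path: /tmp/CentOS-8-GenericCloud.qcow2   # placeholder local path
    storage_domain: data                                  # placeholder data domain name
    size: 10GiB                                           # must cover the image's virtual size
    format: cow
    bootable: true

- name: Create and start a VM from that disk, passing cloud-init
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: test-vm
    cluster: Default
    memory: 2GiB
    disks:
      - name: centos8-cloud
        bootable: true
    cloud_init:
      host_name: test-vm.home
      user_name: root
      root_password: changeme
      authorized_ssh_keys: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    state: running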
> Kind regards
> skrzetuski
Best Regards,
Strahil Nikolov
Re: 4.2.8 to 4.3.7 > Management slow
by Strahil
"None" is what you get when multiple queues (multiqueue) are in use, which seems to be enabled on my HostedEngine VM by default.
In both cases, we shouldn't reorder I/O requests in the VM only to do the same again on the host.
That's why I left queueing to happen only on the host.
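If you want to push the same rule out to several machines, a quick Ansible sketch (the host group name is just an example) would be:

- name: Pin the "none" I/O scheduler on whole disks
  hosts: ovirt_hosts
  become: true
  tasks:
    - name: Install the udev rule
      copy:
        dest: /etc/udev/rules.d/90-default-io-scheduler.rules
        content: |
          ACTION=="add|change", KERNEL=="sd*[!0-9]", ATTR{queue/scheduler}="none"
          ACTION=="add|change", KERNEL=="vd*[!0-9]", ATTR{queue/scheduler}="none"

    - name: Re-trigger udev so the rule applies without a reboot
      command: udevadm trigger --subsystem-match=block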
Best Regards,
Strahil Nikolov
On Jan 2, 2020 07:54, Guillaume Pavese <guillaume.pavese(a)interactiv-group.com> wrote:
>
> Hi,
>
> I use something similar
> However, I think that the correct scheduler in the case of virtio-scsi devices (sd*) should be "noop" instead of "none".
>
> Best,
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Tue, Dec 31, 2019 at 9:27 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> You can manually change the I/O scheduler of the disks and if that works better for you, put a rule in udev.
>>
>> Here is mine:
>>
>> [root@engine rules.d]# cat /etc/udev/rules.d/90-default-io-scheduler.rules
>> ACTION=="add|change", KERNEL=="sd*[!0-9]", ATTR{queue/scheduler}="none"
>> ACTION=="add|change", KERNEL=="vd*[!0-9]", ATTR{queue/scheduler}="none"
>>
>> [root@engine rules.d]# cat /sys/block/vda/queue/scheduler
>> [none] mq-deadline kyber
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Dec 31, 2019 12:30, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>
>>> Dear Users,
>>>
>>> I've successfully upgraded my 4 node hyperconverged system from 4.2.8 to 4.3.7.
>>> After the upgrade everything seems to be working fine, but the whole management system is very slow.
>>> It takes many seconds when I click on "virtual machines" or want to edit a virtual machine.
>>> The speed of the VMs and the IO is fine.
>>>
>>> It is running on a glusterfs volume (distributed replicate, on 3 nodes, 9 bricks). There are no errors, everything is fine, but it is terribly slow :(
>>> The engine vm has 0.2-0.3 load.
>>>
>>> What can I do?
>>>
>>> Thanks in advance and I wish Happy New Year!
>>>
>>> Regards,
>>> Tibor
>>>
>>>
I am interested in using oVirt for my homelab but is it dead?
by m.skrzetuski@gmail.com
Hi there,
I'm a huge Ansible fan and work a lot with OpenStack and OpenShift. At home I purchased an Intel NUC to host some services for my home(-lab). I was reading about oVirt and it sounded promising, and I saw all the Ansible roles. I installed oVirt on CentOS 8 (and used the Ansible roles from Galaxy) and hit a few strange walls, like the standard SSL cert not being accepted by Google Chrome on macOS with the error "revoked". Additionally, the community seems very small compared to others (like Proxmox).
So I guess what I wanted to ask is: Is it dead? What's the future of oVirt? Is Red Hat investing in the project?
Kind regards
skrzetuski
ovirt 4.3.7 + Gluster in hyperconverged (production design)
by adrianquintero@gmail.com
Hi,
After playing a bit with oVirt and Gluster in our pre-production environment for the last year, we have decided to move forward with our production design using oVirt 4.3.7 + Gluster in a hyperconverged setup.
For this we are looking to get answers to a few questions that will help with our design and eventually lead to our production deployment phase:
Current HW specs (total servers = 18):
1.- Server type: DL380 GEN 9
2.- Memory: 256GB
3.-Disk QTY per hypervisor:
- 2x600GB SAS (RAID 0) for the OS
- 9x1.2TB SSD (RAID 0, RAID 6, or ...?) for Gluster.
4.-Network:
- Bond1: 2x10Gbps
- Bond2: 2x10Gbps (for migration and gluster)
Our plan is to build two 9-node clusters; however, the following questions come up:
1.-Should we build 2 separate environments, each with its own engine? Or should we have 1 engine that manages both clusters?
2.-What would be the best gluster volume layout for #1 above with regards to RAID configuration:
- JBOD or RAID6 or…?.
- What is the benefit or downside of using JBOD vs RAID 6 for this particular scenario?
3.-Would you recommend an Ansible-based deployment (if supported)? If yes, where would I find the documentation for it? Or should we just deploy using the UI?
- I have reviewed the following and in Chapter 5 it only mentions Web UI https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infr...
- Also looked at https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansib... but could not get it to work properly.
4.-What is the recommended maximum server quantity in a hyperconverged setup with Gluster: 12, 15, 18...?
Thanks,
Adrian
Re: Issue deploying self hosted engine on new install
by Sang Un Ahn
Hi,
I have figured out that the root cause of the deployment failure is a timeout while the hosted engine was trying to connect to the host via SSH, as shown in engine.log (located in /var/log/ovirt-hosted-engine-setup/engine-logs-2019-12-31T06:34:38Z/ovirt-engine):
2019-12-31 15:43:06,082+09 ERROR [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-1) [f48796e7-a4c5-4c09-a70d-956f0c4249b4] Failed to establish session with host 'alice-ovirt-01.sdfarm.kr': SSH connection timed out connecting to 'root(a)alice-ovirt-01.sdfarm.kr'
2019-12-31 15:43:06,085+09 WARN [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-1) [f48796e7-a4c5-4c09-a70d-956f0c4249b4] Validation of action 'AddVds' failed for user admin@internal-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__HOST,$server alice-ovirt-01.sdfarm.kr,VDS_CANNOT_CONNECT_TO_SERVER
2019-12-31 15:43:06,129+09 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-1) [] Operation Failed: [Cannot add Host. Connecting to host via SSH has failed, verify that the host is reachable (IP address, routable address etc.) You may refer to the engine.log file for further details.]
The FQDN of the hosted engine (alice-ovirt-engine.sdfarm.kr) resolves, as does that of the host (alice-ovirt-01.sdfarm.kr), and SSH is one of the services allowed by firewalld. I believe the firewalld rules are configured automatically during deployment to work with the hosted engine and the host. Also, root access is configured to be allowed in the first stage of the deployment.
I was just wondering how I can verify that the hosted engine can reach the host at this stage? Once it fails to deploy, the deployment script rolls everything back (I believe it cleans everything up) and the vm-status of hosted-engine is un-deployed.
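What I had in mind is something like this quick check, run from inside the engine VM while it is still up during the deployment (just a rough sketch; the host FQDN is the one from the log above):

- name: Check that the host's SSH port is reachable from the engine VM
  hosts: localhost
  connection: local
  tasks:
    - name: Wait for port 22 on the host
      wait_for:
        host: alice-ovirt-01.sdfarm.kr
        port: 22
        timeout: 10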
Thank you in advance,
Best regards,
Sang-Un
Re: oVirt and Containers
by Jan Zmeskal
Hi Robert and happy new year! :)
> If I understand correctly, with OKD, I just need to build full VMs in the
> oVirt environment which will become hosts, compute nodes, for containers
> that get deployed.
>
You don't even need to build the VMs. If you follow the steps outlined here
<https://docs.okd.io/3.11/install_config/configuring_for_rhv.html>, the
installation playbooks will take care of creating the VMs for you.
Otherwise, you're right. Those VMs will become the master, infra and compute
nodes of the OKD cluster.
> OKD is basically just container management and it does not care whether or
> not the compute nodes are VMs or bare metal.
>
That's correct - at least in theory. In practice the difference is mostly
in the installation process. But you're right - once the OKD cluster is up and
running, it does not care whether its nodes are VMs or bare metal machines.
Best regards
Jan
On Thu, Dec 26, 2019 at 6:22 PM Robert Webb <rwebb(a)ropeguru.com> wrote:
> Jan,
>
> Thanks for taking the time for a great reply.
>
> For what I am trying to accomplish here in my home lab, it seems that OKD
> is the safest path. If I understand correctly, with OKD, I just need to
> build full vm's in the oVirt environment which will become hosts, compute
> nodes, for containers that get deployed. OKD is basically just container
> management and it does not care whether or not the compute nodes are VM's
> or bare metal.
>
> Again, thanks for taking the time to educate me.
>
> Robert
>
> ________________________________________
> From: Jan Zmeskal <jzmeskal(a)redhat.com>
> Sent: Tuesday, December 24, 2019 4:43 PM
> To: Robert Webb
> Cc: users
> Subject: Re: [ovirt-users] oVirt and Containers
>
> Okay, so this topic is quite vast, but I believe I can at the very least
> give you a few pointers and maybe others might chime in as well.
>
> Firstly, there's the Kubevirt project. It enables you to manage both
> application containers and virtual machines workloads (that cannot be
> easily containerized) in a shared environment. Another benefit is getting
> advantages of the powerful Kubernetes scheduler. I myself am not too
> familiar with Kubevirt, so I can only offer this high-level overview. More
> info here: https://kubevirt.io/
>
> Then there is another approach which I am more familiar with. You might
> want to use oVirt as an infrastructure layer on top of which you run
> containerized workloads. This is achieved by deploying either OpenShift
> <https://www.openshift.com/> or the upstream project OKD <https://www.okd.io/>
> in the oVirt virtual machines. In that scenario,
> oVirt VMs are considered by OpenShift as compute resources and are used for
> scheduling containers. There are some advantages to this setup and two come
> to mind. Firstly, you can scale such an OpenShift cluster up or down by
> adding/removing oVirt VMs according to your needs. Secondly, you don't need
> to set up all of this yourself.
> For OpenShift 3, Red Hat provides detailed guide on how to go about this.
> Part of that guide are Ansible playbooks that automate the deployment for
> you as long as you provide required variables. More info here:
> https://docs.openshift.com/container-platform/3.11/install_config/configu...
> When it comes to OpenShift 4, there are two types of deployment. There's
> UPI - user provisioned infrastructure. In that scenario, you prepare all
> the resources for OpenShift 4 beforehand and deploy it in that existing
> environment. And there's also IPI - installer provisioned infrastructure.
> This means that you just give the installer access to your environment
> (e.g. AWS public cloud) and the installer provisions resources for itself
> based on recommendations and best practices. At this point, neither UPI nor
> IPI is supported for oVirt. However, there is a GitHub repository
> <https://github.com/sa-ne/openshift4-rhv-upi> that can guide you through
> UPI installation on oVirt and also provides automation playbooks for that.
> I have personally followed the steps from the repository and deployed
> OpenShift 4.2 on top of oVirt without any major issues. As far as I
> remember, I might have needed occasional variable here and there but the
> process worked.
>
> Hope this helps!
> Jan
>
> On Tue, Dec 24, 2019 at 8:21 PM Robert Webb <rwebb(a)ropeguru.com> wrote:
> Hi Jan,
>
> Honestly, I didn't have anything specific in mind, just what is being used
> out there today and what may be more prevalent.
>
> Just getting my oVirt set up and want to know what might be recommended.
> Would probably be mostly deploying images like Homeassistent, piHole, etc..
> for now.
>
> I guess if there is good oVirt direct integration, it would be nice to
> keep it all in a single interface.
>
> Thanks..
>
> ________________________________________
> From: Jan Zmeskal <jzmeskal(a)redhat.com>
> Sent: Tuesday, December 24, 2019 1:54 PM
> To: Robert Webb
> Cc: users
> Subject: Re: [ovirt-users] oVirt and Containers
>
> Hi Robert,
>
> there are different answers based on what you mean by integrating oVirt
> and containers. Do you mean:
>
> - Installing container management (Kubernetes or OpenShift) on top of
> oVirt and using oVirt as infrastructure?
> - Managing containers from oVirt interface?
> - Running VM workloads inside containers?
> - Something different?
>
> I can elaborate more based on your specific needs
>
> Best regards
> Jan
>
> On Tue, Dec 24, 2019 at 3:52 PM Robert Webb <rwebb(a)ropeguru.com> wrote:
> I was searching around to try and figure out the best way to integrate
> oVirt and containers.
>
> I have found some sites that discuss it but all of them are like 2017 and
> older.
>
> Any recommendations?
>
> Just build VM’s to host containers or is there some direct integration?
>
> Here are a couple of the old sites
>
> https://fromanirh.github.io/containers-in-ovirt.html
>
>
> https://kubernetes.io/docs/setup/production-environment/on-premises-vm/ov...
>
>
> https://www.ovirt.org/develop/release-management/features/integration/con...
>
>
> --
>
> Jan Zmeskal
>
> Quality Engineer, RHV Core System
>
> Red Hat <https://www.redhat.com>
>
>
>
>
> --
>
> Jan Zmeskal
>
> Quality Engineer, RHV Core System
>
> Red Hat <https://www.redhat.com>
>
>
>
--
Jan Zmeskal
Quality Engineer, RHV Core System
Red Hat <https://www.redhat.com>
Re: 4.2.8 to 4.3.7 > Management slow
by Strahil
You can manually change the I/O scheduler of the disks and if that works better for you, put a rule in udev.
Here is mine:
[root@engine rules.d]# cat /etc/udev/rules.d/90-default-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd*[!0-9]", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="vd*[!0-9]", ATTR{queue/scheduler}="none"
[root@engine rules.d]# cat /sys/block/vda/queue/scheduler
[none] mq-deadline kyber
Best Regards,
Strahil Nikolov
On Dec 31, 2019 12:30, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>
> Dear Users,
>
> I've successfully upgraded my 4 node hyperconverged system from 4.2.8 to 4.3.7.
> After the upgrade everything seems to be working fine, but the whole management system is very slow.
> It takes many seconds when I click on "virtual machines" or want to edit a virtual machine.
> The speed of the VMs and the IO is fine.
>
> It is running on a glusterfs volume (distributed replicate, on 3 nodes, 9 bricks). There are no errors, everything is fine, but it is terribly slow :(
> The engine vm has 0.2-0.3 load.
>
> What can I do?
>
> Thanks in advance and I wish Happy New Year!
>
> Regards,
> Tibor
>
>