Fwd: [Gluster-users] Announcing Gluster release 5.5
by Darrell Budic
This release of Gluster 5.5 appears to fix the Gluster 3.12 -> 5.3 migration problems many oVirt users have encountered.
I’ll try and test it out this weekend and report back. If anyone else gets a chance to check it out, let us know how it goes!
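For anyone else testing, a quick sanity check after the rolling upgrade could look something like this (a rough sketch, not verified against 5.5 specifically - adjust to your own setup):

# on each upgraded node
gluster --version
# once every node is on 5.5, check what the cluster is running at and what it supports
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version
# then bump cluster.op-version to the reported maximum with
# 'gluster volume set all cluster.op-version <value>' if you want the new features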
-Darrell
> Begin forwarded message:
>
> From: Shyam Ranganathan <srangana(a)redhat.com>
> Subject: [Gluster-users] Announcing Gluster release 5.5
> Date: March 21, 2019 at 6:06:33 AM CDT
> To: announce(a)gluster.org, gluster-users Discussion List <gluster-users(a)gluster.org>
> Cc: GlusterFS Maintainers <maintainers(a)gluster.org>
>
> The Gluster community is pleased to announce the release of Gluster
> 5.5 (packages available at [1]).
>
> Release notes for the release can be found at [2].
>
> Major changes, features and limitations addressed in this release:
>
> - Release 5.4 introduced an incompatible change that prevented rolling
> upgrades, and hence was never announced to the lists. As a result, we are
> jumping a release version and going from 5.3 straight to 5.5, which does
> not have the problem.
>
> Thanks,
> Gluster community
>
> [1] Packages for 5.5:
> https://download.gluster.org/pub/gluster/glusterfs/5/5.5/
>
> [2] Release notes for 5.5:
> https://docs.gluster.org/en/latest/release-notes/5.5/
Re: Dell OMSA on oVirt node
by Strahil
Have you checked this one: https://www.centos.org/forums/viewtopic.php?f=49&t=48532
It might help you along. Note that the thread is for CentOS, so it won't help if you are running a Debian port.
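If that doesn't pan out, it may also be worth checking whether the missing providers simply live in a repo that the node keeps disabled - a rough sketch (repo names will differ on your system):

# see which repo, if any, carries the missing dependencies
yum provides sblim-sfcb sblim-sfcc openwsman-server openwsman-client libcmpiCppImpl0
# list all configured repos, including disabled ones
yum repolist all
# if they show up in a disabled repo, a one-off attempt could be:
# yum --enablerepo=<that-repo> install srvadmin-all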
Best Regards,
Strahil Nikolov

On Mar 21, 2019 19:26, Leo David <leoalex(a)gmail.com> wrote:
>
> Hello everyone,
> I would really like to install Dell OMSA on my Dell nodes so I can benefit from its administration features. Has anyone managed to get it installed?
> I am running oVirt 4.2.8, and after adding the Dell yum repos and running "yum install srvadmin-all" I get the following errors:
>
> Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64 (dell-system-update_dependent)
> Requires: sblim-sfcc >= 2.2.1
> Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64 (dell-system-update_dependent)
> Requires: sblim-sfcb >= 1.3.7
> Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64 (dell-system-update_dependent)
> Requires: libcmpiCppImpl0 >= 2.0.0
> Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64 (dell-system-update_dependent)
> Requires: libcmpiCppImpl.so.0()(64bit)
> Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64 (dell-system-update_dependent)
> Requires: openwsman-server >= 2.2.3
> Error: Package: srvadmin-tomcat-9.2.0-3142.13664.el7.x86_64 (dell-system-update_dependent)
> Requires: openwsman-client >= 2.1.5
> You could try using --skip-broken to work around the problem
> You could try running: rpm -Va --nofiles --nodigest
> Obviously there are dependency conflicts, but since the server is in production I would rather not mess around with the packages trying to solve them.
>
> Any thoughts ?
> Thank you very much,
>
> Leo
>
>
> --
> Best regards, Leo David
unable to put node 4.3.0 host into maintenance for upgrade, Image transfer in progress
by Edward Berger
When trying to put a 4.3.0 node into maintenance, I get the following error:
--
Error while executing action: Cannot switch Host winterfell.psc.edu to
Maintenance mode. Image transfer is in progress for the following (2) disks:
8d130846-bd84-46b0-9a45-b6a2ecf66865,
35fd6f8f-65f5-49e7-ae5a-9b10c5c0a38f
Please wait for the operations to complete and try again.
--
Attached is an image of the disks UI for those, showing their status.
From the engine UI there seems to be no way to clean out "completed" or
"failed" uploads/downloads.
I tried running taskcleaner.sh and unlock_entity.sh on the disk UUIDs, but I
still get the error.
Not sure what to try next...
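One avenue that might still be worth a look: the transfers the engine complains about are tracked in the engine database, so something along these lines could show what it still considers in flight (a sketch only - the image_transfers table and column names are from memory, so verify them against your schema and back up the database before changing anything):

# on the engine machine
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "SELECT command_id, disk_id, phase FROM image_transfers;"
# if engine-psql.sh is not shipped on your version, use psql with the credentials
# from /etc/ovirt-engine/engine.conf.d/10-setup-database.conf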
Re: Hosts not coming back into oVirt
by Strahil
Have you tried selecting the host and clicking 'Activate'?
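If Activate doesn't bring them back, a quick check on one of the stuck hosts might look like this (just a sketch; 54321 is the default vdsm port):

# on the stuck host
systemctl status vdsmd
vdsm-client Host getCapabilities | head
# from the engine, confirm the vdsm port is reachable
nc -zv <host-fqdn> 54321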
Best Regards,
Strahil Nikolov

On Mar 21, 2019 15:46, Arif Ali <mail(a)arif-ali.co.uk> wrote:
>
> Hi all,
>
> Recently deployed oVirt version 4.3.1
>
> It's in a self-hosted engine environment
>
> Used the steps via cockpit to install the engine, and was able to add
> the rest of the oVirt nodes without any specific problems
>
> We tested the HA of the hosted engine without a problem, and then at one
> point turned off the machine that was hosting the engine to mimic a
> failure and see how it goes; the VM was able to move over successfully,
> but some of the oVirt hosts started to go into Unassigned. Out of a total
> of 6 oVirt hosts, I have 4 of them in this state.
>
> Clicking on a host, I see the following message in its events. I can
> reach the hosts via the engine and ping the machines, so I am not sure
> why they are no longer working:
>
> VDSM <snip> command Get Host Capabilities failed: Message timeout which
> can be caused by communication issues
>
> Mind you, I have been trying to resolve this issue since Monday, and
> have tried various things, like rebooting and re-installing the oVirt
> hosts, without having much luck
>
> So any assistance on this would be appreciated; maybe I've missed something
> really simple and I am overlooking it.
>
> --
> regards,
>
> Arif Ali
Re: Hosted Engine I/O scheduler
by Strahil
I have opened https://bugzilla.redhat.com/show_bug.cgi?id=1691405
Best Regards,
Strahil Nikolov

On Mar 21, 2019 09:43, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>
>
>
> On Thu, Mar 21, 2019 at 6:14 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> > Unfortunately, the ideal scheduler really depends on storage configuration. Gluster, ZFS, iSCSI, FC, and NFS don't align on a single "best" configuration (to say nothing of direct LUNs on guests), then there's workload considerations.
>> >
>> > The scale team is aiming for a balanced "default" policy rather than one which is best for a specific environment.
>> >
>> > That said, I'm optimistic that the results will let us give better recommendations if your workload/storage benefits from a different scheduler
>>
>> I completely disagree!
>> If you use anything other than noop/none (depending on whether multiqueue is on), the scheduler inside the VM will reorder and delay your I/O.
>> Then the I/O is received by the host and the whole process repeats.
>> I can point to SUSE and Red Hat knowledge base articles where both vendors highly recommend noop/none as the scheduler for VMs.
>> It has nothing to do with the backend - that is under the control of the host's I/O scheduler.
>>
>> Can someone tell me under which section I should open a bug? Bugzilla is not newbie-friendly, and I have to admit that opening bugs for RHEL/CentOS is far easier.
>>
>> The best section is probably something ovirt-appliance related, as this only applies to VMs and not to a bare-metal engine.
>
>
> Yes, it seems the most appropriate choice; start from here: https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-appliance
> thanks
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
SR-IOV and linux bridges
by opiekelly@hotmail.com
Hello, I believe I have set up my environment to support SR-IOV. I am not sure why the system creates a Linux bridge when you set up and bind a VM network to a virtual function on the NIC.
I have 2 VLANs, 102 and 112; however, there are now Linux bridges set up for them.
I would have assumed that with SR-IOV the VF would be connected only to the VM.
[root@rhv1 ~]# brctl show
bridge name bridge id STP enabled interfaces
;vdsmdummy; 8000.000000000000 no
ovirtmgmt 8000.64122536772a no enp2s0f0
vnet0
vlan-102 8000.020000000001 no enp130s16f4.102
vlan-112 8000.163d99a43b4a no enp130s16f2.112
[root@rhv1 ~]# ip a | grep enp130s16f
28: enp130s16f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
29: enp130s16f4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
30: enp130s16f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
35: enp130s16f2.112@enp130s16f2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan-112 state UP group default qlen 1000
45: enp130s16f4.102@enp130s16f4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan-102 state UP group default qlen 1000
[root@rhv1 ~]# virsh dumpxml nsg-v-west
Please enter your authentication name: vuser
Please enter your password:
<domain type='kvm' id='6'>
<name>nsg-v-west</name>
<uuid>27e8e6b0-62a9-4acd-8d88-0c777adb6dc1</uuid>
<metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ns0:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:clusterVersion>4.2</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
<ovirt-vm:startTime type="float">1553092201.47</ovirt-vm:startTime>
<ovirt-vm:device mac_address="56:6f:10:b5:00:00">
<ovirt-vm:network>;vdsmdummy;</ovirt-vm:network>
<ovirt-vm:specParams/>
<ovirt-vm:vm_custom/>
</ovirt-vm:device>
<ovirt-vm:device mac_address="56:6f:10:b5:00:02">
<ovirt-vm:network>;vdsmdummy;</ovirt-vm:network>
<ovirt-vm:specParams/>
<ovirt-vm:vm_custom/>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="hda">
<ovirt-vm:domainID>200ee819-d377-4069-bcfa-e6e168ca7adf</ovirt-vm:domainID>
<ovirt-vm:imageID>e7ff0aff-4f25-4279-92d8-1f4928dcabb7</ovirt-vm:imageID>
<ovirt-vm:poolID>ec42e166-4a9e-11e9-b2f6-00163e2d699c</ovirt-vm:poolID>
<ovirt-vm:volumeID>91620241-bcb5-4177-ace1-918050841d0c</ovirt-vm:volumeID>
<ovirt-vm:specParams/>
<ovirt-vm:vm_custom/>
<ovirt-vm:volumeChain>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>200ee819-d377-4069-bcfa-e6e168ca7adf</ovirt-vm:domainID>
<ovirt-vm:imageID>e7ff0aff-4f25-4279-92d8-1f4928dcabb7</ovirt-vm:imageID>
<ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.0.15:_volume1_rhv/200ee819-d377-4069-bcfa-e6e168ca7adf/images/e7ff0aff-4f25-4279-92d8-1f4928dcabb7/91620241-bcb5-4177-ace1-918050841d0c.lease</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/192.168.0.15:_volume1_rhv/200ee819-d377-4069-bcfa-e6e168ca7adf/images/e7ff0aff-4f25-4279-92d8-1f4928dcabb7/91620241-bcb5-4177-ace1-918050841d0c</ovirt-vm:path>
<ovirt-vm:volumeID>91620241-bcb5-4177-ace1-918050841d0c</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
</ovirt-vm:volumeChain>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="hdc">
<ovirt-vm:specParams/>
<ovirt-vm:vm_custom/>
</ovirt-vm:device>
<ovirt-vm:custom>
<ovirt-vm:hugepages>1048576</ovirt-vm:hugepages>
</ovirt-vm:custom>
</ovirt-vm:vm>
</metadata>
<maxMemory slots='16' unit='KiB'>16777216</maxMemory>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static' current='2'>32</vcpu>
<iothreads>1</iothreads>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Red Hat</entry>
<entry name='product'>RHEV Hypervisor</entry>
<entry name='version'>7.6-4.el7</entry>
<entry name='serial'>20ba27d8-a49a-2a45-b993-0cd2851eea03</entry>
<entry name='uuid'>27e8e6b0-62a9-4acd-8d88-0c777adb6dc1</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.4.0'>hvm</type>
<boot dev='hd'/>
<bios useserial='yes'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>SandyBridge</model>
<topology sockets='16' cores='2' threads='1'/>
<feature policy='require' name='pcid'/>
<feature policy='require' name='spec-ctrl'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='vme'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='xsaveopt'/>
<numa>
<cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
</numa>
</cpu>
<clock offset='variable' adjustment='0' basis='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/192.168.0.15:_volume1_rhv/200ee819-d377-4069-bcfa-e6e168ca7adf/images/e7ff0aff-4f25-4279-92d8-1f4928dcabb7/91620241-bcb5-4177-ace1-918050841d0c'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<serial>e7ff0aff-4f25-4279-92d8-1f4928dcabb7</serial>
<alias name='ua-e7ff0aff-4f25-4279-92d8-1f4928dcabb7'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' error_policy='report'/>
<source startupPolicy='optional'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ua-85e4095c-5366-4099-816e-5258e80c6da2'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0' ports='16'>
<alias name='ua-b84d3eaf-525a-4c87-a852-a2386700db62'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
<driver iothread='1'/>
<alias name='ua-ff5a8da2-a1af-4510-a5b7-68f7eb19edcd'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='hostdev'>
<mac address='56:6f:10:b5:00:00'/>
<driver name='vfio'/>
<source>
<address type='pci' domain='0x0000' bus='0x82' slot='0x10' function='0x0'/>
</source>
<vlan>
<tag id='112'/>
</vlan>
<alias name='ua-a690778c-1927-465a-840d-15c730320703'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>
<interface type='hostdev'>
<mac address='56:6f:10:b5:00:02'/>
<driver name='vfio'/>
<source>
<address type='pci' domain='0x0000' bus='0x82' slot='0x11' function='0x0'/>
</source>
<vlan>
<tag id='102'/>
</vlan>
<alias name='ua-24c9018b-ba51-49e0-8d3b-af920842919e'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<serial type='unix'>
<source mode='bind' path='/var/run/ovirt-vmconsole-console/27e8e6b0-62a9-4acd-8d88-0c777adb6dc1.sock'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='unix'>
<source mode='bind' path='/var/run/ovirt-vmconsole-console/27e8e6b0-62a9-4acd-8d88-0c777adb6dc1.sock'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channels/27e8e6b0-62a9-4acd-8d88-0c777adb6dc1.ovirt-guest-agent.0'/>
<target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channels/27e8e6b0-62a9-4acd-8d88-0c777adb6dc1.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
<alias name='channel1'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<memballoon model='virtio'>
<stats period='5'/>
<alias name='ua-6bd7b1bd-73c9-49d2-8639-71323f8b3033'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
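For what it's worth, the two <interface type='hostdev'> entries above do show the VFs at 82:10.0 and 82:11.0 passed straight through to the guest; here is how I would double-check from the host (a sketch - enp130s16f0 is a guess at the PF name, substitute the actual parent device):

# VF list with MAC/VLAN as seen by the PF driver
ip link show enp130s16f0
# confirm the passed-through VFs are bound to vfio-pci rather than a host netdev driver
lspci -k -s 82:10.0
lspci -k -s 82:11.0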
engine-backup ETL errors on new server
by Staniforth, Paul
Hello,
I have deployed a new server for our engine, using engine-backup to back up and restore to a clean system. We now get these errors:
ETL service sampling has encountered an error. Please consult the service log for more details.
ETL service aggregation to hourly tables has encountered an error. Please consult the service log for more details.
and in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
errors such as
java.lang.OutOfMemoryError: Java heap space
Exception in component tJDBCInput_9
org.postgresql.util.PSQLException: Ran out of memory retrieving query results.
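It looks like the dwhd JVM heap may simply be too small for the restored history database; something like this is what I am considering (a sketch - the exact variable name is a guess, to be taken from whatever the grep below reports):

# find the shipped heap defaults for the dwh service
grep -ri heap /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/
# override it in a drop-in rather than editing the shipped file,
# assuming the variable turns out to be DWH_HEAP_MAX:
echo 'DWH_HEAP_MAX="2g"' > /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/99-heap.conf
systemctl restart ovirt-engine-dwhd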
Regards,
Paul S.
Re: Can't connect "https://FQDN/ovirt-engine" after reinstall ovirt-engine
by Александр Егоров
On the new system, I restored oVirt Engine 4.1 from the backup. It restored without errors:
engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
I also restored /etc/pki from the backup. I did not know what permissions the folders should have; they are now:
drwxr-xr-x. 7 root root 213 мар 21 06:17 CA
drwxr-xr-x. 4 root root 73 мар 20 23:53 ca-trust
drwxr-xr-x. 2 root root 123 мар 21 06:17 fwupd
drwxr-xr-x. 2 root root 92 мар 21 06:17 fwupd-metadata
drwxr-xr-x. 2 root root 21 мар 20 23:53 java
drwxr-xr-x. 2 root root 103 мар 20 23:53 nssdb
drwxr-xr-x. 2 root root 30 мар 20 23:53 nss-legacy
drwxr-xr-x. 6 ovirt ovirt 4,0K мар 21 05:49 ovirt-engine
drwxr-xr-x. 2 ovirt-vmconsole ovirt-vmconsole 142 окт 14 2016 ovirt-vmconsole
drwxr-xr-x. 2 root root 243 мар 21 06:17 rpm-gpg
drwxr-xr-x. 2 root root 6 окт 31 08:08 rsyslog
drwxr-xr-x. 5 root root 104 мар 21 06:17 tls
In the ovirt-engine directory all files are owned by ovirt, and in the ovirt-vmconsole directory all files are owned by ovirt-vmconsole.
I can connect to the web interface at "https://FQDN/ovirt-engine" and log in with my login/password, but the Data Center is not active. Here is what appears in engine.log:
2019-03-21 16:03:02,029+09 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to ovirt-n1.vs.jak/192.168.0.6
2019-03-21 16:03:04,386+09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler9) [5ecc17d5] Failure to refresh host 'ovirt-n1.vs.jak' runtime info: org/apache/commons/lang/StringUtils
2019-03-21 16:03:04,387+09 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler9) [5ecc17d5] Failed to invoke scheduled method onTimer: null
2019-03-21 16:03:04,419+09 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to ovirt-n2.vs.jak/192.168.0.7
2019-03-21 16:03:05,487+09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler10) [8bfc0e5] Failure to refresh host 'ovirt-n2.vs.jak' runtime info: org/apache/commons/lang/StringUtils
2019-03-21 16:03:05,488+09 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler10) [8bfc0e5] Failed to invoke scheduled method onTimer: null
2019-03-21 16:03:07,436+09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler9) [5e8dcb07] Failure to refresh host 'ovirt-n1.vs.jak' runtime info: org/apache/commons/lang/StringUtils
2019-03-21 16:03:07,437+09 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler9) [5e8dcb07] Failed to invoke scheduled method onTimer: null
2019-03-21 16:03:08,521+09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler10) [486f9b4] Failure to refresh host 'ovirt-n2.vs.jak' runtime info: org/apache/commons/lang/StringUtils
2019-03-21 16:03:08,521+09 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler10) [486f9b4] Failed to invoke scheduled method onTimer: null
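One thing still to try here: the documented restore flow finishes with running engine-setup after engine-backup --mode=restore, which lays down the matching JBoss modules and configuration, and a missing class like org/apache/commons/lang/StringUtils can point at that step having been skipped. A minimal sketch (--offline only skips the package-update check):

engine-setup --offline
systemctl restart ovirt-engine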
Re: Hosted Engine I/O scheduler
by Strahil
> Unfortunately, the ideal scheduler really depends on storage configuration. Gluster, ZFS, iSCSI, FC, and NFS don't align on a single "best" configuration (to say nothing of direct LUNs on guests), then there's workload considerations.
>
> The scale team is aiming for a balanced "default" policy rather than one which is best for a specific environment.
>
> That said, I'm optimistic that the results will let us give better recommendations if your workload/storage benefits from a different scheduler
I completely disagree!
If you use anything other than noop/none (depending on whether multiqueue is on), the scheduler inside the VM will reorder and delay your I/O.
Then the I/O is received by the host and the whole process repeats.
I can point to SUSE and Red Hat knowledge base articles where both vendors highly recommend noop/none as the scheduler for VMs.
It has nothing to do with the backend - that is under the control of the host's I/O scheduler.
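For reference, a minimal way to pin this inside a guest is a udev rule (a sketch assuming blk-mq virtio disks named vda, vdb, etc.; on older non-multiqueue kernels the value would be 'noop' instead of 'none'):

# on the guest
cat > /etc/udev/rules.d/60-io-scheduler.rules <<'EOF'
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"
EOF
udevadm control --reload
udevadm trigger --subsystem-match=block
cat /sys/block/vda/queue/scheduler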
Can someone tell me under which section I should open a bug? Bugzilla is not newbie-friendly, and I have to admit that opening bugs for RHEL/CentOS is far easier.
The best section is probably something ovirt-appliance related, as this only applies to VMs and not to a bare-metal engine.
Best Regards,
Strahil Nikolov
Re: Hosted Engine I/O scheduler
by Strahil
Hi Darrel,
Still, based on my experience, we shouldn't queue our I/O in the VM only to do the same again on the host.
I'm still considering whether I should keep deadline on my hosts or switch to 'cfq'.
After all, I'm using hyper-converged oVirt and this needs testing.
What I/O scheduler are you using on the host?
Best Regards,
Strahil Nikolov

On Mar 18, 2019 19:15, Darrell Budic <budic(a)onholyground.com> wrote:
>
> Checked this on mine, see the same thing. Switching the engine to noop definitely feels more responsive.
>
> I checked on some VMs as well; it looks like virtio drives (vda, vdb, …) get mq-deadline by default, but virtio-scsi gets noop. I used to think the tuned virtual-guest profile would set noop, but apparently not…
>
> -Darrell
>
>> On Mar 18, 2019, at 1:58 AM, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi All,
>>
>> I have changed my I/O scheduler to none and here are the results so far:
>>
>> Before (mq-deadline):
>> Adding a disk to VM (initial creation) START: 2019-03-17 16:34:46.709
>> Adding a disk to VM (initial creation) COMPLETED: 2019-03-17 16:45:17.996
>>
>> After (none):
>> Adding a disk to VM (initial creation) START: 2019-03-18 08:52:02.xxx
>> Adding a disk to VM (initial creation) COMPLETED: 2019-03-18 08:52:20.xxx
>>
>> Of course the results are inconclusive, as I have tested only once - but the engine does feel more responsive.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Sunday, 17 March 2019, 18:30:23 GMT+2, Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>>
>> Dear All,
>>
>> I have just noticed that my Hosted Engine has a strange I/O scheduler:
>>
>> Last login: Sun Mar 17 18:14:26 2019 from 192.168.1.43
>> [root@engine ~]# cat /sys/block/vda/queue/scheduler
>> [mq-deadline] kyber none
>> [root@engine ~]#
>>
>> Based on my experience, anything other than noop/none is useless and degrades performance for a VM.
>>
>> Is there any reason we have this scheduler?
>> It is quite pointless to process (and delay) the I/O in the VM and then process (and delay it again) at the host level.
>>
>> If there is no reason to keep the deadline, I will open a bug about it.
>>
>> Best Regards,
>> Strahil Nikolov