oVirt on Fedora 28 status update
by Nir Soffer
Last time we discussed this here, we had only the sanlock issue:
https://bugzilla.redhat.com/1593853
The bug was fixed upstream about two months ago, but the Fedora package was
not available.
The package is not available yet in Fedora, but we have a build here:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1182539
You can install sanlock from this build using:
dnf upgrade \
    https://kojipkgs.fedoraproject.org//packages/sanlock/3.6.0/4.fc28/x86_64/... \
    https://kojipkgs.fedoraproject.org//packages/sanlock/3.6.0/4.fc28/x86_64/... \
    https://kojipkgs.fedoraproject.org//packages/sanlock/3.6.0/4.fc28/x86_64/...
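To verify the installed version (a quick check, not part of the original
instructions; expect the 3.6.0-4.fc28 build):

rpm -q sanlock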
Hopefully the package will be pushed soon to the updates-testing repo.
With this you can enable SELinux as god intended.
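If you switched SELinux to permissive as a workaround, a minimal sketch for
going back to enforcing (standard Fedora commands, nothing oVirt-specific):

getenforce          # show the current mode
sudo setenforce 1   # enforce until the next reboot
# for a persistent change, set SELINUX=enforcing in /etc/selinux/config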
But if you update your host to kernel 4.20.4-100, multipath is broken. No
multipath devices are available, and your hosts will probably become
non-operational, since they will report problems with the iSCSI/FC storage
domains.
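You can confirm the symptom with the standard multipath tools; on an
affected host the map listing comes back empty:

multipath -ll   # lists active multipath maps; empty on a broken host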
The issue was reported here:
https://lkml.org/lkml/2018/11/5/398
And we have this Fedora 29 bug:
https://bugzilla.redhat.com/1669235
Ben explains:
The kernel is switching over to use block multiqueue instead of the old
request queue. Part of doing this is removing support for the old request
queue from device-mapper. Another part is to remove support for the old
request queue from the scsi layer. For some reason, the first part got
into this fedora kernel, but the second part didn't. It seems to me that
since the fedora kernel has removed support for non-blk-mq based devices,
it should have been compiled with CONFIG_SCSI_MQ_DEFAULT=y.
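You can check which mode the scsi layer is actually using via sysfs (a
standard module parameter; on an affected kernel this reads N):

cat /sys/module/scsi_mod/parameters/use_blk_mq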
To fix this issue you need to add the scsi_mod.use_blk_mq=Y option to the
kernel command line:
grubby --args=scsi_mod.use_blk_mq=Y --update-kernel /boot/vmlinuz-4.20.4-100.fc28.x86_64
After reboot, your multipath devices will appear again.
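To confirm the change took effect (a quick check, assuming you booted the
kernel updated above):

grep -o scsi_mod.use_blk_mq=Y /proc/cmdline   # option is on the running command line
multipath -ll                                 # maps should be listed again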
Cheers,
Nir
[Ovirt] [CQ weekly status] [01-02-2019]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
the status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: GREEN (#2)
We have no current failures.
The last failed job was for ovirt-ansible-hosted-engine-setup, due to a
build-artifacts job failure on Jan 25. The issue is being worked on by
Simone Tiraboschi.
*CQ-Master:* GREEN (#1)
There are no current issues in master CQ.
During the week we had 3 regressions for ovirt-engine:
1. The domain version 5 feature was merged but was not yet supported on the
vdsm side: https://gerrit.ovirt.org/#/c/96178/ - storage: Added V5 domain
format definition.
Date of failure in CQ: 24-01-2019
Test failing: 002_bootstrap.add_master_storage_domain
Fix: https://gerrit.ovirt.org/#/c/97262/ - storage: Make default domain
version V4 for 4.3
2. CQ was still failing on creating a V5 domain after the fix for regression
#1 was merged: https://gerrit.ovirt.org/#/c/96178/ - storage: Added V5 domain
format definition.
Date of failure in CQ: 26-01-2019
Test failing: 002_bootstrap.add_secondary_storage_domains
Fix: https://gerrit.ovirt.org/#/c/97408/ - storage: V5 is not a latest
domain version yet
** The first patch fixed domain creation in the manual flow, but the version
was still available via the API, which is why we were still failing in CQ.
3. Callbacks added for MBS disks caused regular disks to use callbacks
unnecessarily:
https://gerrit.ovirt.org/#/c/96815/ - core: set VmDevices properly when
creating and attaching
We saw this regression only once regressions #1 and #2 were fixed.
Date first failure detected: 30-01-2019
Fix: https://gerrit.ovirt.org/#/c/97428/ - core: partial revert of 4d439006
ovirt-engine passed CQ again on 31-01-2019.
Current running jobs for 4.2 [1] and master [2] can be found here:
[1]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-...
[2]
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures suggest a healthy project, as we expect a number of
failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)