[ANN] oVirt 4.2.6 is now generally available

The oVirt Project is pleased to announce the general availability of oVirt 4.2.6, as of September 3rd, 2018.

This update is the sixth in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not be used in production.

This release is available now for:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.5 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.5 or later

See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.

Notes:
- oVirt Appliance is available
- oVirt Node is available [2]
- oVirt Windows Guest Tools is available [2]

Additional Resources:
* Read more about the oVirt 4.2.6 release highlights: http://www.ovirt.org/release/4.2.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.6/
[2] http://resources.ovirt.org/pub/ovirt-4.2/iso/

--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com

In the release notes, I see:

• BZ 1622700 [downstream clone - 4.2.6] [RFE][Dalton] - Blacklist all local disk in multipath on RHEL / RHEV Host (RHEL 7.5)
Feature: Blacklist local devices in multipath.
Reason: multipath repeatedly logs irrelevant errors for local devices.
Result: Local devices are blacklisted, and no irrelevant errors are logged anymore.

What defines a local disk? I'm using a SAN on SAS. For many people, SAS is only for local disks, but that's not the case. Will oVirt 4.2.6 detect that?

BZ 1622700 is private, I can't check it.

2018-09-03 15:57 GMT+02:00 Fabrice Bacchella <fabrice.bacchella@orange.fr>:
What defines a local disk? I'm using a SAN on SAS. For many people, SAS is only for local disks, but that's not the case. Will oVirt 4.2.6 detect that?
BZ 1622700 is private, I can't check it.
I don't know why a bug marked as private landed in the release notes; I'll investigate. For your questions, adding Sahina.

On Mon, Sep 3, 2018 at 5:07 PM Fabrice Bacchella <fabrice.bacchella@orange.fr> wrote:
What defines a local disk? I'm using a SAN on SAS. For many people, SAS is only for local disks, but that's not the case. Will oVirt 4.2.6 detect that?
We don't have any support for SAS. If your SAS drives are attached to the host using FC or iSCSI, you are fine. If your drives are connected in another way, you probably need to edit /etc/multipath.conf. The current setting is:

blacklist_exceptions {
    protocol "(scsi:fcp|scsi:iscsi)"
}

You may need to change this to get multipath to grab these disks.

Nir
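For readers checking their own hosts, one quick way to see which transport the kernel reports for each disk, assuming the lsscsi package is installed, is:

lsscsi --transport
[0:2:1:16]   disk    sas:0x5000c50012345678    /dev/sdc
[1:0:0:0]    disk    fc:0x21000024ff454a32     /dev/sdd

The output above is illustrative, not from this host. The point is that paths reported as sas will not match the (scsi:fcp|scsi:iscsi) exception shown in the current setting.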

On 3 Sep 2018, at 18:31, Nir Soffer <nsoffer@redhat.com> wrote:
We don't have any support for SAS.
If your SAS drives are attached to the host using FC or iSCSI, you are fine.
Nope, they are attached using SAS. In /dev/disk, they show as:

ls -l /dev/disk/by-*/*:16
lrwxrwxrwx 1 root root  9 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:1:16 -> ../../sdc
lrwxrwxrwx 1 root root  9 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:2:16 -> ../../sds
lrwxrwxrwx 1 root root 10 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:3:16 -> ../../sdai
lrwxrwxrwx 1 root root 10 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:4:16 -> ../../sdaz
lrwxrwxrwx 1 root root 10 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:5:16 -> ../../sdbq
lrwxrwxrwx 1 root root 10 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:6:16 -> ../../sdar
lrwxrwxrwx 1 root root 10 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:7:16 -> ../../sdch
lrwxrwxrwx 1 root root 10 Sep  3 18:01 /dev/disk/by-path/pci-0000:87:00.0-scsi-0:2:8:16 -> ../../sdcv

ls -l /dev/disk/by-* | grep sdcv
lrwxrwxrwx 1 root root 10 Sep  3 18:01 scsi-3600c0ff0002631c42168f15601000000 -> ../../sdcv
lrwxrwxrwx 1 root root 10 Sep  3 18:01 wwn-0x600c0ff0002631c42168f15601000000 -> ../../sdcv
lrwxrwxrwx 1 root root 10 Sep  3 18:01 pci-0000:87:00.0-scsi-0:2:8:16 -> ../../sdcv
If your drives are connected in another way, you probably need to edit /etc/multipath.conf.
The current setting is:
blacklist_exceptions {
    protocol "(scsi:fcp|scsi:iscsi)"
}
Where do I find the protocol multipath thinks the drives are using?

On Mon, Sep 3, 2018 at 8:01 PM Fabrice Bacchella <fabrice.bacchella@orange.fr> wrote:
Nope, they are attached using SAS.
I guess oVirt sees them as FCP devices?

Are these disks connected to multiple hosts?

Please share the output of:

vdsm-client Host getDeviceList

... Where do I find the protocol multipath thinks the drives are using?

multipath.conf(5) says:

    The protocol strings that multipath recognizes are scsi:fcp, scsi:spi, scsi:ssa, scsi:sbp, scsi:srp, scsi:iscsi, scsi:sas, scsi:adt, scsi:ata, scsi:unspec, ccw, cciss, nvme, and undef. The protocol that a path is using can be viewed by running multipathd show paths format "%d %P"

So this should work:

blacklist_exceptions {
    protocol "(scsi:fcp|scsi:iscsi|scsi:sas)"
}

The best way to make this change is to create a drop-in conf file, and not touch /etc/multipath.conf, so vdsm will be able to update it later.

$ cat /etc/multipath/conf.d/local.conf
blacklist_exceptions {
    protocol "(scsi:fcp|scsi:iscsi|scsi:sas)"
}

I hope it works for overriding the vdsm configuration; if not, you will need to change /etc/multipath.conf and mark it as VDSM PRIVATE, like this:

$ head -3 /etc/multipath.conf
#
# VDSM REVISION 1.6
# VDSM PRIVATE

Once it works, I suggest filing a bug to support SAS disks by default.

Nir
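After creating the drop-in file, the running daemon needs to pick it up. A minimal sketch, assuming a standard device-mapper-multipath install:

# reload the configuration without restarting the daemon
multipathd reconfigure
# confirm the exception list now in effect
multipathd show config | grep -A 2 blacklist_exceptions
# list the maps multipath has grabbed
multipath -ll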

On 3 Sep 2018, at 19:15, Nir Soffer <nsoffer@redhat.com> wrote:
Thank you for your help, but I'm still not out of trouble.
I guess oVirt sees them as FCP devices?
Yes, in the oVirt UI I've configured my storage to be on FCP, and everything has worked well since 3.6.
Are these disks connected to multiple hosts?
Yes, that's a real SAN, multi-attached to HPE's blades
Please share the output of:
vdsm-client Host getDeviceList
Things are strange:

{
    "status": "used",
    "vendorID": "HP iLO",
    "GUID": "HP_iLO_LUN_01_Media_0_000002660A01-0:1",
    "capacity": "1073741824",
    "fwrev": "2.10",
    "discard_zeroes_data": 0,
    "vgUUID": "",
    "pathlist": [],
    "pvsize": "",
    "discard_max_bytes": 0,
    "pathstatus": [
        { "capacity": "1073741824", "physdev": "sddj", "type": "FCP", "state": "active", "lun": "1" }
    ],
    "devtype": "FCP",
    "physicalblocksize": "512",
    "pvUUID": "",
    "serial": "",
    "logicalblocksize": "512",
    "productID": "LUN 01 Media 0"
},
...
{
    "status": "used",
    "vendorID": "HP",
    "GUID": "3600c0ff0002631c42168f15601000000",
    "capacity": "1198996324352",
    "fwrev": "G22x",
    "discard_zeroes_data": 0,
    "vgUUID": "xGCmpC-DhHe-3v6v-6LJw-iS24-ExCE-0Hv48U",
    "pathlist": [],
    "pvsize": "1198698528768",
    "discard_max_bytes": 0,
    "pathstatus": [
        { "capacity": "1198996324352", "physdev": "sdc", "type": "FCP", "state": "active", "lun": "16" },
        { "capacity": "1198996324352", "physdev": "sds", "type": "FCP", "state": "active", "lun": "16" },
...

The first one is an embedded flash drive:

lrwxrwxrwx 1 root root 10 Jul 12 17:11 /dev/disk/by-id/usb-HP_iLO_LUN_01_Media_0_000002660A01-0:1 -> ../../sddj
lrwxrwxrwx 1 root root 10 Jul 12 17:11 /dev/disk/by-path/pci-0000:00:14.0-usb-0:3.1:1.0-scsi-0:0:0:1 -> ../../sddj

So why "type": "FCP"?

The second is indeed a SAS drive behind a SAS SAN (an MSA 2040 SAS from HPE).
multipath.conf(5) says:
The protocol strings that multipath recognizes are scsi:fcp, scsi:spi, scsi:ssa, scsi:sbp, scsi:srp, scsi:iscsi, scsi:sas, scsi:adt, scsi:ata, scsi:unspec, ccw, cciss, nvme, and undef. The protocol that a path is using can be viewed by running multipathd show paths format "%d %P"
I have a CentOS 7.5:

lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.5.1804 (Core)
Release:        7.5.1804
Codename:       Core

and I don't have this in multipath.conf(5). But blacklist_exceptions exists. The given command doesn't work:

multipathd show paths format "%d %P"
dev
sddi
sddj
sda
...
The best way to make this change is to create a drop-in conf file, and not touch /etc/multipath.conf, so vdsm will be able to update it later.
$ cat /etc/multipath/conf.d/local.conf
blacklist_exceptions {
    protocol "(scsi:fcp|scsi:iscsi|scsi:sas)"
}
The header in /etc/multipath.conf says:

# The recommended way to add configuration for your storage is to add a
# drop-in configuration file in "/etc/multipath/conf.d/<mydevice>.conf".

Does <mydevice> have a meaning, or is it just an arbitrary string that can be used as a reminder?

On Tue, Sep 4, 2018 at 11:30 AM Fabrice Bacchella <fabrice.bacchella@orange.fr> wrote:
So why "type": "FCP", ?
"FCP" actually means "not iSCSI". This why your sas storage works while oVirt does know anything about sas. This is why the blacklist by protocol feature was introduced in 7.5, to multipath can grab only shared storage, and avoid grabbing local devices like your SSD. See https://bugzilla.redhat.com/show_bug.cgi?id=1593459 According to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1607749 The fix is available in: device-mapper-multipath-0.4.9-119.el7_5.1.x86_64 Which device-mapper-multipath package are you using?
The given command doesn't work:

multipathd show paths format "%d %P"
dev
sddi
sddj
sda
...
It looks like your system does not have the fix.
Does <mydevice> have a meaning, or is it just an arbitrary string that can be used as a reminder?
mydevice is not a good name; it's just an arbitrary name that is useful to you, multipath does not care about it. I'll update this to "my.conf" to make it clearer.

Nir

On Tue, Sep 4, 2018 at 9:51 PM Nir Soffer <nsoffer@redhat.com> wrote:
For reference, I just installed multipath on CentOS:

# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.9-119.el7_5.1.x86_64

# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0   50G  0 disk
├─sda1                    8:1    0    1G  0 part /boot
└─sda2                    8:2    0   49G  0 part
  ├─centos_voodoo4-root 253:0    0 45.1G  0 lvm  /
  └─centos_voodoo4-swap 253:1    0  3.9G  0 lvm  [SWAP]
sr0                      11:0    1 1024M  0 rom

# multipathd show paths format "%d %P"
dev protocol
sda scsi:unspec

# man multipath.conf
...
blacklist section
...
protocol
    Regular expression of the protocol to be excluded. See below for a list of recognized protocols
...
    The protocol strings that multipath recognizes are scsi:fcp, scsi:spi, scsi:ssa, scsi:sbp, scsi:srp, scsi:iscsi, scsi:sas, scsi:adt, scsi:ata, scsi:unspec, ccw, cciss, nvme, and undef. The protocol that a path is using can be viewed by running multipathd show paths format "%d %P"

Nir

On 4 Sep 2018, at 21:56, Nir Soffer <nsoffer@redhat.com> wrote:
My version: device-mapper-multipath-0.4.9-119.el7.x86_64; yours: device-mapper-multipath-0.4.9-119.el7_5.1.x86_64. So this is quite new.

After a yum update it's much 'better':

$ sudo multipathd show paths format "%d %P"
dev protocol
sddi scsi:unspec
sddj scsi:unspec
sda scsi:unspec
sdc scsi:unspec
sdd scsi:unspec

But as scsi:unspec is not in the blacklist_exceptions, that's what I will need to add.

On 3 Sep 2018, at 18:31, Nir Soffer <nsoffer@redhat.com> wrote:
We don't have any support for SAS.
What you call SAS is any block device we might want to attach directly and let oVirt manage. I was doing the same thing on old HPE hardware, using old Smart Array controllers: I gave the raw device to oVirt. After an upgrade, it failed because it was blacklisted. I needed to add it to the blacklist exceptions:

cat /etc/multipath/conf.d/enable-sas.conf
blacklist_exceptions {
    protocol "cciss"
}

I think your default rule is too strict, and can break many existing setups:

multipathd show blacklist
...
protocol rules:
- blacklist:
      (config file rule) .*
- exceptions:
      (config file rule) (scsi:fcp|scsi:iscsi)
      (config file rule) cciss        <-- mine

On Wed, Sep 5, 2018 at 4:06 PM Fabrice Bacchella <fabrice.bacchella@orange.fr> wrote:
I think your default rule is too strict, and can break many existing setups.
Thanks for this info. Yes, our current default is too naive. The next 4.2.6 build will remove this blacklist or replace it with a better one that will not break existing setups. See:
- https://gerrit.ovirt.org/c/94190/
- https://gerrit.ovirt.org/c/94168/

It would be helpful if you could test the next build before we release, since we don't have your particular storage in our lab.

Nir
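Once the new build is installed, a quick sanity check, reusing the commands shown earlier in the thread (the exact header text may differ between versions), would be:

grep "^# VDSM" /etc/multipath.conf
multipathd show blacklist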

On Mon, Sep 3, 2018 at 2:00 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.2.6, as of September 3rd, 2018.
In the release notes I see this:

cockpit-ovirt
...
BZ 1608660 Support single node deployment from cockpit

Keep in mind that, as confirmed inside the bugzilla, it is actually expected to be resolved in the first release (perhaps already in the first candidate) of 4.2.7. It is not gone in 4.2.6.

Also, I see some bugzillas in VERIFIED state. I think the ones inside the release notes should be only those in CLOSED state; otherwise it could be misleading.

Gianluca

2018-09-03 16:51 GMT+02:00 Gianluca Cecchi <gianluca.cecchi@gmail.com>:
Keep in mind that, as confirmed inside the bugzilla, it is actually expected to be resolved in the first release (perhaps already in the first candidate) of 4.2.7. It is not gone in 4.2.6.
Thanks for the heads-up, I've notified the bug assignee of the issue and reopened the bug for 4.2.7 inclusion.
Also, I see some bugzillas in verified state. I think that the ones inside release notes should be only those in closed state.
Otherwise it could be misleading.
Bugs are still moving from VERIFIED to CLOSED; it looks like bugzilla is slow today. For those not in CLOSED state I'm cross-checking status with the assignee and QA contacts, and I'll refresh the release notes right after. Note that the release notes were generated this time in exactly the same way we have used till now. I'm open to improving the process and will discuss suggestions from this feedback with the team leads.

On Mon, 3 Sep 2018 at 07:59, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.2.6, as of September 3rd, 2018.
This update is the sixth in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not be used in production.
I am curious about this statement that this is pre-release software. When you announce General Availability it is usually considered "released." Is this a simple error, or is there another implication here?

2018-09-06 18:55 GMT+02:00 Alastair Neil <ajneil.tech@gmail.com>:
I am curious about this statement that this is pre-release software. When you announce General Availability it is usually considered "released." Is this a simple error, or is there another implication here?
Sadly, that is an error in the automated generation of the announcement email. Apologies for that.
participants (5):
- Alastair Neil
- Fabrice Bacchella
- Gianluca Cecchi
- Nir Soffer
- Sandro Bonazzola