oVirt 4.2: MountError on Posix Compliant FS

Hi all,

I've deployed oVirt version 4.2.4.5-1.el7 on a small cluster and I'm trying to use a configured Spectrum Scale (GPFS) distributed filesystem as a data storage domain, to test it. I've completed the configuration of the storage, and the filesystems defined are correctly mounted on the client hosts, as we can see:

    gpfs_kvm   gpfs  233T  288M  233T  1%  /gpfs/kvm
    gpfs_fast  gpfs  8.8T  5.2G  8.8T  1%  /gpfs/fast

The output of the mount command is:

    gpfs_kvm on /gpfs/kvm type gpfs (rw,relatime)
    gpfs_fast on /gpfs/fast type gpfs (rw,relatime)

I can write to and read from the mounted filesystems correctly, but when I try to add the locally mounted filesystem I encounter errors related to the mounting process of the storage. The parameters passed to add the new storage domain are:

    Data Center: Default (V4)
    Name: gpfs_kvm
    Description: VM data on GPFS
    Domain Function: Data
    Storage Type: Posix Compliant FS
    Host to Use: kvm1c01
    Path: /gpfs/kvm
    VFS Type: gpfs
    Mount Options: rw,relatime

The oVirt error that I get is: "Error while executing action Add Storage Connection: Problem while trying to mount target". In /var/log/vdsm/vdsm.log I can see:

    2018-07-05 09:07:43,400+0200 INFO (jsonrpc/0) [vdsm.api] START connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'rw,relatime', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/gpfs/kvm', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'gpfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.2.1.254,43908, flow_id=498668a0-a240-469f-ac33-f8c7bdeb481f, task_id=857460ed-5f0e-4bc6-ba54-e8f2c72e9ac2 (api:46)
    2018-07-05 09:07:43,403+0200 INFO (jsonrpc/0) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
    2018-07-05 09:07:43,404+0200 INFO (jsonrpc/0) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/_gpfs_kvm mode: None (fileUtils:197)
    2018-07-05 09:07:43,404+0200 INFO (jsonrpc/0) [storage.Mount] mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
    MountError: (32, ';mount: wrong fs type, bad option, bad superblock on /gpfs/kvm,\n missing codepage or helper program, or other error\n\n In some cases useful info is found in syslog - try\n dmesg | tail or so.\n')

I know that in recent GPFS versions they removed the device under /dev, due to incompatibilities with systemd. I don't know if this change affects the oVirt mounting process.

Can you help me add this filesystem to the oVirt environment? Are the parameters I used correct, or do I need to make some modification? Is it possible that the process fails because there is no device for the GPFS filesystems under /dev? Can we apply some kind of workaround to mount the filesystem manually in the oVirt environment, e.g. create the directory /rhev/data-center/mnt/_gpfs_kvm by hand and then mount /gpfs/kvm over it? Is it possible to modify the code to bypass some checks?

Reading the documentation available on the Internet, I found that oVirt was compatible with this filesystem (GPFS) because it is POSIX compliant; this is the main reason for testing it in our cluster. Is it still compatible in the current versions, or are there changes that broke this integration?

Many thanks for everything in advance!

Kind regards!
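PS: for reference, as far as I can tell the failing step is roughly equivalent to running the following by hand (a sketch; the paths are taken from the vdsm.log excerpt above):

    # what vdsm appears to do, according to the log
    mkdir -p /rhev/data-center/mnt/_gpfs_kvm
    mount -t gpfs -o rw,relatime /gpfs/kvm /rhev/data-center/mnt/_gpfs_kvm

I'd expect this to fail the same way when run manually; I suspect the problem is that /gpfs/kvm is an already-mounted directory rather than a GPFS device name.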

2018-07-06 11:26 GMT+02:00 <jtorres@bsc.es>:
I've deployed oVirt version 4.2.4.5-1.el7 on a small cluster and I'm trying to use a configured Spectrum Scale (GPFS) distributed filesystem as a data storage domain, to test it.
Hi, welcome to oVirt community!
Adding Nir and Tal for all your questions. In the meanwhile, can you please provide a sos report from the host? Did dmesg provide any useful information?
--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

On Fri, Jul 6, 2018 at 2:48 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
2018-07-05 09:07:43,404+0200 INFO (jsonrpc/0) [storage.Mount] mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
You can find the vdsm mount command in supervdsmd.log.
MountError: (32, ';mount: wrong fs type, bad option, bad superblock on /gpfs/kvm,\n missing codepage or helper program, or other error\n\n In some cases useful info is found in syslog - try\n dmesg | tail or so.\n')
Did you look in /var/log/messages or dmesg, as the error message suggests?
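For example, something like (standard CentOS/RHEL log locations assumed):

    dmesg | tail -n 50
    grep -i gpfs /var/log/messages | tail -n 50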
I know that in recent GPFS versions they removed the device under /dev, due to incompatibilities with systemd. I don't know if this change affects the oVirt mounting process.
We support POSIX compliant file systems. If GPFS is POSIX compliant, it should work, but I don't think we test it. Elad, do we test GPFS?

You should start with mounting the file system manually. When you have a working mount command, you may need to add some mount options to the storage domain's additional mount options.

Nir
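For example, something along these lines (a sketch; /tmp/gpfs-test is just a stand-in test directory, and the -o options are the starting point to adjust until the mount succeeds):

    mkdir -p /tmp/gpfs-test
    # try what vdsm would do, adjusting the options until it works
    mount -t gpfs -o rw,relatime /gpfs/kvm /tmp/gpfs-test
    # verify the mount is visible, then clean up
    grep gpfs-test /proc/mounts
    umount /tmp/gpfs-test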

Hi Nir, thanks for your help.
You can find the vdsm mount command in supervdsmd.log.
I checked the file at /var/log/vdsm/supervdsm.log, but it contains no relevant information about mount or gpfs, nor about other words like mnt, posix, domain or file. If you consider it useful, we can add it too; in fact, the file is empty:

    -rw-r--r-- 1 root root 0 Jul 4 11:58 supervdsm.log
Did you look in /var/log/messages or dmesg, as the error message suggests?
As I commented before, I can't find any relevant messages in the dmesg output from when I tried to add the new storage domain, or later. The only errors I could see were in vdsm.log (this attempt used the mount options loop,bind):

    2018-07-06 09:00:54,861+0200 INFO (jsonrpc/3) [vdsm.api] START connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'loop,bind', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/gpfs/kvm', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'gpfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.2.1.254,34696, flow_id=6bd725cf-a982-4d2a-a3f5-edbcc5f39d64, task_id=570791fc-46f7-4900-a1bc-834670988259 (api:46)
    2018-07-06 09:00:54,864+0200 INFO (jsonrpc/3) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
    2018-07-06 09:00:54,864+0200 INFO (jsonrpc/3) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/_gpfs_kvm mode: None (fileUtils:197)
    2018-07-06 09:00:54,864+0200 INFO (jsonrpc/3) [storage.Mount] mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
    OSError: [Errno 2] Mount of `/gpfs/kvm` at `/rhev/data-center/mnt/_gpfs_kvm` does not exist
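If it helps, the OSError above seems to mean that vdsm could not find the new mount in the mount table afterwards; that can be checked by hand with something like:

    grep /rhev/data-center/mnt /proc/mounts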
We support POSIX compliant file systems. If GPFS is POSIX compliant, it should work, but I don't think we test it. Elad, do we test GPFS?
You should start with mounting the file system manually. When you have a working mount command, you may need to add some mount options to the storage domain additional mount options.
Apparently, some documentation I read states that GPFS is fully supported in oVirt and RHEV environments, so I consider GPFS a POSIX compliant filesystem. In addition, when I configured the filesystem, I set the POSIX type for two parameters during creation:

    -D posix    File locking semantics in effect
    -k posix    ACL semantics in effect

In the oVirt Add Storage Domain form, under Custom Connection Parameters, I put the flags that I found in the mount entries for the two GPFS filesystems: rw,relatime. I'm looking for a workaround to add this filesystem to the environment. If you consider that I can collaborate by checking other configurations or files, I'm ready to do that. Kind regards and thanks in advance!
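For reference, those settings correspond to mmcrfs flags along these lines (the device and stanza file names here are only examples, not our exact command):

    # -D posix: POSIX file locking semantics; -k posix: POSIX ACL semantics
    mmcrfs gpfs_kvm -F nsd_stanzas.txt -D posix -k posix -T /gpfs/kvm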

Hi, welcome to oVirt community!
Hi Sandro, Nir and Tal, thanks in advance for your help,
Adding Nir and Tal for all your questions. In the meanwhile, can you please provide a sos report from the host? Did dmesg provide any useful information?
I searched the dmesg output without results; I can't find relevant messages from the moment I try to add the new storage domain, or later. As you suggested, I uploaded a sosreport to this page (due to size limits on the mail server); maybe it's useful to find the cause of the problem. If you need more logs, configs, etc., I can provide them without problems, because we are on a testing environment.

SOSREPORT - https://jirafeau.net/f.php?h=3MT_XlnI&d=1

Kind regards!

--
Jose E Torres
Operations - Systems Administrator
Barcelona Supercomputing Center - Centro Nacional de Supercomputación
jose.torres@bsc.es
www.bsc.es
http://bsc.es/disclaimer



On Thu, Jul 19, 2018 at 12:06 PM <jtorres@bsc.es> wrote:
Hi Sandro,
I finally mounted the GPFS filesystem as a data domain using these parameters in the New Domain form:
    Domain Function: Data
    Name: name_of_the_domain
    Storage Type: Posix Compliant FS
    Path: gpfs_kvm (the name of the "device"/"filesystem" created, not an absolute path)
    VFS Type: gpfs
    Mount Options: rw,relatime,dev=gpfs_kvm (the dev parameter is the key to mounting the filesystem)
Due to a change implemented in recent Spectrum Scale / GPFS versions, which removed the block device under /dev, you must specify this last parameter for the filesystem to mount correctly. Another point is to check the privileges and ownership of the directory, to allow oVirt to mount it correctly; it is recommended to first mount the filesystem manually with mmmount, and change the owner and privileges on the mount point.
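Putting it together, the preparation looks roughly like this (a sketch: UID/GID 36 is the usual vdsm:kvm owner on oVirt hosts, and the last line is only the manual equivalent of the mount the storage domain performs, with an example target directory):

    mmmount gpfs_kvm                 # mount with the GPFS tools first
    chown 36:36 /gpfs/kvm            # vdsm:kvm
    chmod 755 /gpfs/kvm
    mmumount gpfs_kvm
    # manual equivalent of the domain mount, with the crucial dev= option:
    mount -t gpfs -o rw,relatime,dev=gpfs_kvm gpfs_kvm /mnt/test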
Thanks for your help.
I consider that this thread is closed.
Thanks for the info! Can you file an oVirt bug to document this procedure?
participants (4)
- Jose E Torres
- jtorres@bsc.es
- Nir Soffer
- Sandro Bonazzola