On Fri, Jul 6, 2018 at 2:48 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
2018-07-06 11:26 GMT+02:00 <jtorres@bsc.es>:
Hi all,

I've deployed oVirt version 4.2.4.5-1.el7 on a small cluster and I'm trying to use a configured Spectrum Scale (GPFS) distributed filesystem as a data storage domain, in order to test it.

Hi, welcome to the oVirt community!


 

I've completed the configuration of the storage, and the filesystems defined are correctly mounted on the client hosts, as we can see:

gpfs_kvm                        gpfs      233T  288M  233T   1% /gpfs/kvm
gpfs_fast                       gpfs      8.8T  5.2G  8.8T   1% /gpfs/fast

The output of the mount command is:

gpfs_kvm on /gpfs/kvm type gpfs (rw,relatime)
gpfs_fast on /gpfs/fast type gpfs (rw,relatime)

I can read from and write to the mounted filesystem correctly, but when I try to add the locally mounted filesystem I run into some errors related to the storage mounting process.

The parameters passed to add the new storage domain are:

Data Center: Default (V4)
Name: gpfs_kvm
Description: VM data on GPFS
Domain Function: Data
Storage Type: Posix Compliant FS
Host to Use: kvm1c01
Path: /gpfs/kvm
VFS Type: gpfs
Mount Options: rw, relatime
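
As far as I understand, these fields should translate into a mount call roughly like the one below (just my reading of the settings, not necessarily the exact command vdsm builds; the target directory name is taken from the vdsm log further down):

    mount -t gpfs -o rw,relatime /gpfs/kvm /rhev/data-center/mnt/_gpfs_kvm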

The oVirt error that I obtained is:
    Error while executing action Add Storage Connection: Problem while trying to mount target

In /var/log/vdsm/vdsm.log I can see:
    2018-07-05 09:07:43,400+0200 INFO  (jsonrpc/0) [vdsm.api] START connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'rw,relatime', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/gpfs/kvm', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'gpfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.2.1.254,43908, flow_id=498668a0-a240-469f-ac33-f8c7bdeb481f, task_id=857460ed-5f0e-4bc6-ba54-e8f2c72e9ac2 (api:46)
    2018-07-05 09:07:43,403+0200 INFO  (jsonrpc/0) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
    2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/_gpfs_kvm mode: None (fileUtils:197)
    2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.Mount] mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)

You can find the vdsm mount command in supervdsmd.log.
 
    MountError: (32, ';mount: wrong fs type, bad option, bad superblock on /gpfs/kvm,\n       missing codepage or helper program, or other error\n\n       In some cases useful info is found in syslog - try\n       dmesg | tail or so.\n')

Did you look in /var/log/messages or dmesg, as the error message suggests?
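For example (assuming GPFS reports its errors through the kernel log and syslog):

    dmesg | tail
    grep -i gpfs /var/log/messages | tail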
 

I know that, starting with some previous version of the GPFS implementation, they removed the device under /dev due to incompatibilities with systemd. I don't know whether this change affects the oVirt mounting process.

We support POSIX compliant file systems. If GPFS is POSIX compliant, it should
work, but I don't think we test it.
Elad, do we test GPFS?

You should start by mounting the file system manually. Once you have a working
mount command, you may need to add some of its mount options to the storage domain's
additional mount options.
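
For example, something like this, reproducing what vdsm attempts with the path, vfs type, and options from your report (if GPFS expects its device name rather than the mount point as the source, try gpfs_kvm instead of /gpfs/kvm):

    mkdir -p /tmp/gpfs_test
    mount -t gpfs -o rw,relatime /gpfs/kvm /tmp/gpfs_test

If this works manually, the same vfs type and options should work in the storage domain dialog.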

Nir
 

Can you help me add this filesystem to the oVirt environment?

Adding Nir and Tal for all your questions.
In the meantime, can you please provide an sos report from the host? Did dmesg provide any useful information?

 
Are the parameters that I used above OK, or do I need to make some modification?
Is it possible that the process fails because I don't have a device related to the GPFS filesystems in /dev?
Can we apply some kind of workaround to mount the filesystem manually for the oVirt environment? E.g., create the directory /rhev/data-center/mnt/_gpfs_kvm manually and then mount /gpfs/kvm over it?
Is it possible to modify the code to bypass some checks or something?

Reading the documentation available on the Internet, I found that oVirt was compatible with this filesystem (GPFS) because it is POSIX compliant; this is the main reason for testing it in our cluster.

Is it still compatible with the current versions, or are there changes that broke this integration?

Many thanks in advance for everything!

Kind regards!
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLXLSI4ZQFU32PV7AXLYD5LJBS23HBKO/



--

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbonazzo@redhat.com