Roy,

 

It came down to manually mounting the oVirt storage domains and executing the chown command.

 

Still, I took your advice and ran NFS3-only and NFS4-only tests.

 

Here are the results:

 

Test 1: Protocol NFS3 / ExportPolicy NFS3 with Default (allow all)

 

[ INFO  ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]

[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[General Exception]". HTTP response code is 400.

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[General Exception]\". HTTP response code is 400."}

 

172.17.28.5:/ovirt_hosted_engine on /rhev/data-center/mnt/172.17.28.5:_ovirt__hosted__engine type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.17.28.5,mountvers=3,mountport=635,mountproto=udp,local_lock=all,addr=172.17.28.5)

 

[root@ovirt-hv-01 mnt]# ls -la

total 4

drwxr-xr-x. 3 vdsm kvm    48 May 31 07:55 .

drwxr-xr-x. 3 vdsm kvm    17 May 31 07:50 ..

drwxrwxr-x. 2 root root 4096 May 29 10:12 172.17.28.5:_ovirt__hosted__engine
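As a side note, the mount directory name under /rhev/data-center/mnt/ looks like the connection path with existing underscores doubled and slashes replaced by underscores. This is only inferred from the outputs in this thread, not taken from the vdsm code; the helper name escape_path is mine:

```shell
# Sketch of the apparent mount-dir naming rule (inferred, not vdsm's actual code):
# 172.17.28.5:/ovirt_hosted_engine -> 172.17.28.5:_ovirt__hosted__engine
escape_path() {
  # double existing underscores first, then turn '/' into '_'
  printf '%s' "$1" | sed -e 's/_/__/g' -e 's|/|_|g'
}
escape_path '172.17.28.5:/ovirt_hosted_engine'
```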

 

 

Reran the deployment script and the deployment completed successfully.

 

 

Test 2: Protocol NFS4 / ExportPolicy NFS with Default (allow all)

 

Deployment went through without a single issue.

 

It seems that even though the vdsm user and kvm group, both with ID 36, are created and added to the NetApp volume, they are not applied.

It is still required to mount manually, execute “chown -R vdsm:kvm <mounted_location>”, umount, and rerun the deployment script, or rather re-enter the storage information, for the deployment to proceed.

Adding the next storage domain, for example for all other test VMs, will again fail from the UI unless the manual mount and chown are executed first.

 

Then I tried to just add the second storage domain, and it failed, reporting a permission issue (which it was). After executing the manual mount and chown steps, adding the domain from the UI worked flawlessly.

 

Output of mount:

172.17.28.5:/ovirt_production on /rhev/data-center/mnt/172.17.28.5:_ovirt__production type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.12,local_lock=none,addr=172.17.28.5)

172.17.28.5:/ovirt_hosted_engine on /rhev/data-center/mnt/172.17.28.5:_ovirt__hosted__engine type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.12,local_lock=none,addr=172.17.28.5)

 

 

Test 3: Protocol NFS4 / ExportPolicy NFS with limited IP group access

In progress, but I have high hopes now.

 

Will keep you posted.

 

BTW, I was not able to find the location on the NetApp volume where squashing is defined, so I cannot answer that one yet.
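In case it helps anyone searching later: on ONTAP, root squashing is not set on the volume itself but per export-policy rule, via the -superuser and -anon fields. Something like the following, run on the cluster shell, should show it (the vserver and policy names are placeholders for your environment):

```
vserver export-policy rule show -vserver <svm_name> -policyname <policy_name> -fields superuser,anon,rorule,rwrule
```

With superuser allowing the rule’s auth method (e.g. sys), root is not squashed; otherwise root is mapped to the anon UID (65534 by default).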

 

Thank you.

 

Kind regards,

 

Marko Vrgotic

ActiveVideo

 

From: "Morris, Roy" <roy.morris@ventura.org>
Date: Thursday, 30 May 2019 at 21:29
To: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>, "users@ovirt.org" <users@ovirt.org>
Cc: "Stojchev, Darko" <D.Stojchev@activevideo.com>
Subject: RE: Install fresh 4.3 fails with mounting shared storage

 

Marko,

 

No problem, here are some other things to check as well.

 

NetApp is weird about allowing changes to the root directory of a share. I would recommend creating a folder on the NetApp share, like “rhevstor” or something, so that you can chown that folder and mount it for the storage domain. I never had much luck mounting and using the root level of the NetApp NFS share. I also have in my notes that I set “sec=sys” as a property of my NetApp data domain, which wouldn’t allow me to mount it until I input it into the RHEV manager. However, you aren’t at the point of having the RHEV manager up and running, so I’m not sure how much use this would be at the moment.

 

#mount -o sec=sys 172.17.28.5:/rhevstor /mnt/temp

 

The NFS share will fail if it isn’t accessible from all hosts, so make sure to go into each host and run

 

#showmount -e 172.17.28.5

 

The ownership of the NFS share needs to be owned by vdsm:kvm. To do this, you have to manually mount the NFS share to one of the hosts temporarily then run the following command to get ownership settings setup.

 

#mkdir /mnt/temp

#mount -o sec=sys 172.17.28.5:/rhevstor /mnt/temp

#chown 36:36 /mnt/temp

#umount /mnt/temp

 

Then try and run the install again. If it fails, disable NFSv3 and run again to see if it is related to NFSv4 security settings.

 

Best regards,

Roy Morris

 

From: Vrgotic, Marko <M.Vrgotic@activevideo.com>
Sent: Thursday, May 30, 2019 12:07 PM
To: Morris, Roy <roy.morris@ventura.org>; users@ovirt.org
Cc: Stojchev, Darko <D.Stojchev@activevideo.com>
Subject: [External] Re: Install fresh 4.3 fails with mounting shared storage

 

Hi Roy,

 

I will run all those tests tomorrow morning (Amsterdam time zone) and reply back with results.

 

Regarding the NetApp documentation you mentioned below, I assume it should be enough to just “google” for it.

 

Thank you very much for jumping in, we really appreciate it.

 

Kind regards,

 

Marko Vrgotic

 

From: "Morris, Roy" <roy.morris@ventura.org>
Date: Thursday, 30 May 2019 at 18:46
To: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>, "users@ovirt.org" <users@ovirt.org>
Cc: "users-request@ovirt.org" <users-request@ovirt.org>, "Stojchev, Darko" <D.Stojchev@activevideo.com>
Subject: RE: Install fresh 4.3 fails with mounting shared storage

 

Marko,

 

Can you try disabling NFSv4 on the NetApp side for testing and rerun the installer? I don’t advise leaving it at NFSv3 but just for testing we can try it out.

 

Also, there is some documentation on NetApp support regarding manually mounting the NFS share to change permissions then unmount. It has to be done once but after that the mounting should be fine.

 

Do you have root squash set on NetApp?

 

Best regards,

Roy Morris

GSA Virtualization Systems Analyst

County of Ventura

(805) 654-3625

(805) 603-9403


 

From: Vrgotic, Marko <M.Vrgotic@activevideo.com>
Sent: Thursday, May 30, 2019 1:34 AM
To: Morris, Roy <roy.morris@ventura.org>; users@ovirt.org
Cc: users-request@ovirt.org; Stojchev, Darko <D.Stojchev@activevideo.com>
Subject: [External] Re: Install fresh 4.3 fails with mounting shared storage

 

Hi Roy,

 

Sure, here is the output:

 

Last login: Wed May 29 17:25:30 2019 from ovirt-engine.avinity.tv

[root@ovirt-hv-03 ~]# showmount -e 172.17.28.5

Export list for 172.17.28.5:

/ (everyone)

[root@ovirt-hv-03 ~]# ls -la /rhev/data-center/mnt/

total 0

drwxr-xr-x. 2 vdsm kvm  6 May 29 17:14 .

drwxr-xr-x. 3 vdsm kvm 17 May 29 17:11 ..

[root@ovirt-hv-03 ~]#

 

In addition, if it helps, here is the list of shares/mount points from the NetApp side, behind the 172.17.28.5 IP:

[screenshot: NetApp shares/mount points]

 

Kind regards

Marko Vrgotic

 

From: "Morris, Roy" <roy.morris@ventura.org>
Date: Thursday, 30 May 2019 at 00:57
To: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>, "users@ovirt.org" <users@ovirt.org>
Cc: "users-request@ovirt.org" <users-request@ovirt.org>
Subject: RE: Install fresh 4.3 fails with mounting shared storage

 

Marko,

 

Can you run the following commands and let us know the results?

 

showmount -e 172.17.28.5

ls -la /rhev/data-center/mnt/

 

Best regards,

Roy Morris

 

From: Vrgotic, Marko <M.Vrgotic@activevideo.com>
Sent: Wednesday, May 29, 2019 4:07 AM
To: users@ovirt.org
Cc: users-request@ovirt.org
Subject: [External] [ovirt-users] Install fresh 4.3 fails with mounting shared storage

 



 

Dear oVirt,

 

We are trying to deploy a new setup with Hosted-Engine, oVirt version 4.3.

 

The volume is on the NetApp, protocol NFS v4.

Upon populating the shared storage information and path:

 

          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: nfs

          Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]: auto

          Please specify the full shared storage connection path to use (example: host:/path): 172.17.28.5:/ovirt_hosted_engine

 

Following is displayed on the screen:

 

[ INFO  ] Creating Storage Domain

[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Check local VM dir stat]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Enforce local VM dir existence]

[ INFO  ] skipping: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch host facts]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch cluster ID]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch cluster facts]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter facts]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter ID]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]

[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[General Exception]". HTTP response code is 400.

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[General Exception]\". HTTP response code is 400."}

 

Even with this error, the storage gets mounted on the host:

 

172.17.28.5:/ovirt_hosted_engine on /rhev/data-center/mnt/172.17.28.5:_ovirt__hosted__engine type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5)

 

But the playbook execution fails and we cannot proceed with the install.

 

Please advise.

 

Kindly awaiting your reply.

 

Marko Vrgotic