issue connecting 4.3.8 node to nfs domain

Hi,

Something weird is going on with our ovirt node 4.3.8 install mounting an nfs share.

We have a NFS domain for a couple of backup disks and we have a couple of 4.2 nodes connected to it. Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount doesn't work. (Annoyingly, you cannot copy the text from the events view.)

The domain is up and working:

ID: f5d2f7c6-093f-46d6-a844-224d92db5ef9
Size: 10238 GiB
Available: 2491 GiB
Used: 7747 GiB
Allocated: 3302 GiB
Over Allocation Ratio: 37%
Images: 7
Path: *.*.*.*:/data/ovirt
NFS Version: AUTO
Warning Low Space Indicator: 10% (1023 GiB)
Critical Space Action Blocker: 5 GiB

But somehow the node appears to think it's an LVM volume? It tries to find the volume group but fails... which is not so strange as it is an NFS volume:

2020-02-05 14:17:54,190+0000 WARN (monitor/f5d2f7c) [storage.LVM] Reloading VGs failed (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5 out=[] err=[' Volume group "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not found', ' Cannot process volume group f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
2020-02-05 14:17:54,201+0000 ERROR (monitor/f5d2f7c) [storage.Monitor] Setting up monitor for f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed (monitor:330)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 327, in _setupLoop
    self._setupMonitor()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 349, in _setupMonitor
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 159, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 367, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in produce
    domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in _findDomain
    return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)

The volume is actually mounted fine on the node.

On the NFS server:

Feb 5 15:47:09 back1en rpc.mountd[4899]: authenticated mount request from *.*.*.*:673 for /data/ovirt (/data/ovirt)

On the host:

mount | grep nfs
*.*.*.*:/data/ovirt on /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)

And I can see the files:

ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
total 4
drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016 1ed0a635-67ee-4255-aad9-b70822350706
-rwxr-xr-x. 1 vdsm kvm 0 Feb 5 14:37 __DIRECT_IO_TEST__
drwxrwxrwx. 3 root root 86 Feb 5 14:37 .
drwxr-xr-x. 5 vdsm kvm 4096 Feb 5 14:37 ..

Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts
Tel: 053 20 30 270   Fax: 053 20 30 271   info@netbulae.eu   www.netbulae.eu
Staalsteden 4-3A, 7547 TA Enschede   KvK 08198180   BTW NL821234584B01

On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
Something weird is going on with our ovirt node 4.3.8 install mounting a nfs share.
We have a NFS domain for a couple of backup disks and we have a couple of 4.2 nodes connected to it.
Now I'm adding a fresh cluster of 4.3.8 nodes and the backupnfs mount doesn't work.
(annoying you cannot copy the text from the events view)
The domain is up and working
ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9 Size: 10238 GiB Available:2491 GiB Used:7747 GiB Allocated: 3302 GiB Over Allocation Ratio:37% Images:7 Path:*.*.*.*:/data/ovirt NFS Version: AUTO Warning Low Space Indicator:10% (1023 GiB) Critical Space Action Blocker:5 GiB
But somehow the node appears to think it's an LVM volume? It tries to find the volume group but fails... which is not so strange as it is an NFS volume:
Could you provide full vdsm.log file with this flow?
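If it helps to narrow things down first, something along these lines should pull the monitor errors for that domain out of the log on the node (standard vdsm log location assumed; the full file is still more useful):

# last 50 lines mentioning the problematic storage domain
grep 'f5d2f7c6-093f-46d6-a844-224d92db5ef9' /var/log/vdsm/vdsm.log | tail -n 50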
[snip]
Met vriendelijke groet, With kind regards,
Jorick Astrego

On Thu, Feb 6, 2020 at 10:07 AM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
[snip]
(annoying you cannot copy the text from the events view)
I don't know if I understood your concern correctly, but in my case if I go to Hosts --> Events and double-click on a line I get a pop-up window named "Event Details" where I can copy and paste the "ID", "Time" and "Message" fields' values. Not so friendly, but it works.

I don't know if from the REST API you can also filter the kind of events to display, but at least you can do something like:

curl -X GET -H "Accept: application/xml" -u $(cat ident_file) --cacert /etc/pki/ovirt-engine/ca.pem https://engine_fqdn:443/ovirt-engine/api/events | grep description

HIH,
Gianluca
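If the events collection accepts the usual search and max parameters (I think recent engine versions do, but treat the query string below as an assumption), you can also pre-filter on the server side instead of grepping, e.g. only error events:

# only error-severity events, capped at 50 entries
curl -s -X GET -H "Accept: application/xml" -u $(cat ident_file) --cacert /etc/pki/ovirt-engine/ca.pem "https://engine_fqdn:443/ovirt-engine/api/events?search=severity%3Derror&max=50"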

On 2/6/20 12:08 PM, Gianluca Cecchi wrote:
On Thu, Feb 6, 2020 at 10:07 AM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
[snip]
(annoying you cannot copy the text from the events view)
I don't know if I understood correctly your concern, but in my case if I go in Hosts --> Events and double click on a line I have a pop-up window named "Event Details" where I can copy and paste "ID" "Time" and "Message" fields' values. Not so friendly but it works
I don't know if from the Rest API you can also filter the kind of events to display, but at least you can do something like:
curl -X GET -H "Accept: application/xml" -u $(cat ident_file ) --cacert /etc/pki/ovirt-engine/ca.pem https://engine_fqdn:443/ovirt-engine/api/events | grep description
HIH, Gianluca
Hi Gianluca,

Thanks, it's a bit counterintuitive but that works!

Regards,

Jorick

On Thu, Feb 6, 2020 at 1:19 PM Jorick Astrego <jorick@netbulae.eu> wrote:
On 2/6/20 12:08 PM, Gianluca Cecchi wrote:
On Thu, Feb 6, 2020 at 10:07 AM Jorick Astrego <jorick@netbulae.eu> wrote:
Hi,
[snip]
(annoying you cannot copy the text from the events view)
I don't know if I understood correctly your concern, but in my case if I go in Hosts --> Events and double click on a line I have a pop-up window named "Event Details" where I can copy and paste "ID" "Time" and "Message" fields' values. Not so friendly but it works
[snip]
Hi Gianluca,
Thanks, it's a bit counter intuitive but that works!
Regards,
Jorick
In fact, I remember I casually bumped into this... "feature" ;-)

On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego <jorick@netbulae.eu> wrote:
[snip]
And I can see the files:
ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
total 4
drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016 1ed0a635-67ee-4255-aad9-b70822350706
What is ls -lart showing for 1ed0a635-67ee-4255-aad9-b70822350706?
-rwxr-xr-x. 1 vdsm kvm 0 Feb 5 14:37 __DIRECT_IO_TEST__
drwxrwxrwx. 3 root root 86 Feb 5 14:37 .
drwxr-xr-x. 5 vdsm kvm 4096 Feb 5 14:37 ..

On 2/9/20 10:27 AM, Amit Bawer wrote:
[snip]
And I can see the files:
ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
total 4
drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016 1ed0a635-67ee-4255-aad9-b70822350706
What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/
total 4
drwxr-xr-x. 2 vdsm kvm 93 Oct 26 2016 dom_md
drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016 .
drwxr-xr-x. 4 vdsm kvm 40 Oct 26 2016 master
drwxr-xr-x. 5 vdsm kvm 4096 Oct 26 2016 images
drwxrwxrwx. 3 root root 86 Feb 5 14:37 ..

Regards,

Jorick Astrego
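By the way, the dom_md/metadata file of a file-based domain normally records which domain the tree belongs to, so something like this (path as mounted on your node) should show the SDUUID and description behind that 1ed0a635 directory:

# identify the storage domain that owns this directory tree
grep -E 'SDUUID|DESCRIPTION|ROLE|CLASS' /rhev/data-center/mnt/*.*.*.*:_data_ovirt/1ed0a635-67ee-4255-aad9-b70822350706/dom_md/metadata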

Compared it with a host having a working nfs domain, see below.
On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego <jorick@netbulae.eu> wrote:
On 2/9/20 10:27 AM, Amit Bawer wrote:
On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego <jorick@netbulae.eu> wrote:
[snip]
And I can see the files:
ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
total 4
drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016 1ed0a635-67ee-4255-aad9-b70822350706
What ls -lart for 1ed0a635-67ee-4255-aad9-b70822350706 is showing?
ls -arlt 1ed0a635-67ee-4255-aad9-b70822350706/ total 4 drwxr-xr-x. 2 vdsm kvm 93 Oct 26 2016 dom_md drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016 . drwxr-xr-x. 4 vdsm kvm 40 Oct 26 2016 master drwxr-xr-x. 5 vdsm kvm 4096 Oct 26 2016 images drwxrwxrwx. 3 root root 86 Feb 5 14:37 ..
On a working nfs domain host we have the following storage hierarchy; feece142-9e8d-42dc-9873-d154f60d0aac is the nfs domain in my case:

/rhev/data-center/
├── edefe626-3ada-11ea-9877-525400b37767
│   ...
│   ├── feece142-9e8d-42dc-9873-d154f60d0aac -> /rhev/data-center/mnt/10.35.18.45:_exports_data/feece142-9e8d-42dc-9873-d154f60d0aac
│   └── mastersd -> /rhev/data-center/mnt/blockSD/a6a14714-6eaa-4054-9503-0ea3fcc38531
└── mnt
    ├── 10.35.18.45:_exports_data
    │   └── feece142-9e8d-42dc-9873-d154f60d0aac
    │       ├── dom_md
    │       │   ├── ids
    │       │   ├── inbox
    │       │   ├── leases
    │       │   ├── metadata
    │       │   ├── outbox
    │       │   └── xleases
    │       └── images
    │           ├── 915e6f45-ea13-428c-aab2-fb27798668e5
    │           │   ├── b83843d7-4c5a-4872-87a4-d0fe27a2c3d2
    │           │   ├── b83843d7-4c5a-4872-87a4-d0fe27a2c3d2.lease
    │           │   └── b83843d7-4c5a-4872-87a4-d0fe27a2c3d2.meta
    │           ├── b3be4748-6e18-43c2-84fb-a2909d8ee2d6
    │           │   ├── ac46e91d-6a50-4893-92c8-2693c192fbc8
    │           │   ├── ac46e91d-6a50-4893-92c8-2693c192fbc8.lease
    │           │   └── ac46e91d-6a50-4893-92c8-2693c192fbc8.meta
    │           ├── b9edd81a-06b0-421c-85a3-f6618c05b25a
    │           │   ├── 9b9e1d3d-fc89-4c08-87b6-557b17a4b5dd
    │           │   ├── 9b9e1d3d-fc89-4c08-87b6-557b17a4b5dd.lease
    │           │   └── 9b9e1d3d-fc89-4c08-87b6-557b17a4b5dd.meta
    │           ├── f88a6f36-fcb2-413c-8fd6-c2b090321542
    │           │   ├── d8f8b2d7-7232-4feb-bce4-dbf0d37dba9b
    │           │   ├── d8f8b2d7-7232-4feb-bce4-dbf0d37dba9b.lease
    │           │   └── d8f8b2d7-7232-4feb-bce4-dbf0d37dba9b.meta
    │           └── fe59753e-f3b5-4840-8d1d-31c49c2448f0
    │               ├── ad0107bc-46d2-4977-b6c3-082adbf3083d
    │               ├── ad0107bc-46d2-4977-b6c3-082adbf3083d.lease
    │               └── ad0107bc-46d2-4977-b6c3-082adbf3083d.meta

Maybe I got confused by your ls command output, but I was looking to see what the dir tree for your nfs domain looks like, which should be rooted under /rhev/data-center/mnt/<nfs server>:<exported path>.

In your output, only 1ed0a635-67ee-4255-aad9-b70822350706 is there, which is not the nfs domain f5d2f7c6-093f-46d6-a844-224d92db5ef9 in question.

So to begin with, there is a need to figure out why in your case the f5d2f7c6-093f-46d6-a844-224d92db5ef9 folder is not to be found on the nfs storage mounted on that node; it should be there as far as I understood, since this is the same nfs mount path and server shared between all hosts which are connected to this SD.

Maybe compare the mount options between the working nodes and the non-working node, and check the export options on the nfs server itself; maybe it has some specific client ip exports settings?
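For the comparison, something like this should be enough (run the first command on a working node and on the new node, the rest on the nfs server):

# on each node: effective mount options for the backup domain
grep _data_ovirt /proc/mounts

# on the nfs server: what is exported, with which options and to which clients
exportfs -v
showmount -e localhost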

On 2/10/20 11:09 AM, Amit Bawer wrote:
[snip]
In your output, only 1ed0a635-67ee-4255-aad9-b70822350706 is there which is not the nfs domain f5d2f7c6-093f-46d6-a844-224d92db5ef9 at question.
So to begin with, there is a need to figure why in your case the f5d2f7c6-093f-46d6-a844-224d92db5ef9 folder is not to be found on the nfs storage mounted on that node, which should be there as far as I understood since this is the same nfs mount path and server shared between all hosts which are connected to this SD.
Maybe compare the mount options between the working nodes to the non-working node and check the export options on the nfs server itself, maybe it has some specific client ip exports settings?
Hmm, I didn't notice that.

I did a check on the NFS server and I found the "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path (/data/exportdom). This was an old NFS export domain that has been deleted for a while now. I remember finding an issue somewhere about old domains still being active after removal, but I cannot find it now.

I unexported the directory on the nfs server and now I have the correct mount and it activates fine.

Thanks!

Met vriendelijke groet, With kind regards,

Jorick Astrego
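For the archives, the server-side cleanup is roughly this (paths as mentioned above; adjust to your own /etc/exports):

# remove the /data/exportdom line from /etc/exports, then re-sync the export table
exportfs -ra
# or unexport it directly without editing the file
exportfs -u '*:/data/exportdom'
# verify only /data/ovirt is still exported
exportfs -v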

On Mon, Feb 10, 2020 at 2:27 PM Jorick Astrego <jorick@netbulae.eu> wrote:
[snip]
Hmm, I didn't notice that.
It's okay, I almost missed that as well and went looking for format version problems. Glad to hear you have managed to sort this out.
[snip]

On 2/10/20 1:27 PM, Jorick Astrego wrote:
Hmm, I didn't notice that.
I did a check on the NFS server and I found the "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path (/data/exportdom).
This was an old NFS export domain that has been deleted for a while now. I remember finding somewhere an issue with old domains still being active after removal but I cannot find it now.
I unexported the directory on the nfs server and now I have to correct mount and it activates fine.
Thanks!
Still weird that it picks up another nfs mount path, one that was removed from engine months ago. It's not listed in the database on engine:

engine=# select * from storage_domain_static ;
id | storage | storage_name | storage_domain_type | storage_type | storage_domain_format_type | _create_date | _update_date | recoverable | last_time_used_as_master | storage_description | storage_comment | wipe_after_delete | warning_low_space_indicator | critical_space_action_blocker | first_metadata_device | vg_metadata_device | discard_after_delete | backup | warning_low_confirmed_space_indicator | block_size
782a61af-a520-44c4-8845-74bf92888552 | 640ab34d-aa5d-478b-97be-e3f810558628 | ISO_DOMAIN | 2 | 1 | 0 | 2017-11-16 09:49:49.225478+01 | | t | 0 | ISO_DOMAIN | | f | | | | | f | f | | 512
072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository | 4 | 8 | 0 | 2016-10-14 20:40:44.700381+02 | 2018-04-06 14:03:31.201898+02 | t | 0 | Public Glance repository for oVirt | | f | | | | | f | f | | 512
b30bab9d-9a66-44ce-ad17-2eb4ee858d8f | 40d191b0-b7f8-48f9-bf6f-327275f51fef | ssd-6 | 1 | 7 | 4 | 2017-06-25 12:45:24.52974+02 | 2019-01-24 15:35:57.013832+01 | t | 1498461838176 | | | f | 10 | 5 | | | f | f | | 512
95b4e5d2-2974-4d5f-91e4-351f75a15435 | f11fed97-513a-4a10-b85c-2afe68f42608 | ssd-3 | 1 | 7 | 4 | 2019-01-10 12:15:55.20347+01 | 2019-01-24 15:35:57.013832+01 | t | 0 | | | f | 10 | 5 | | | f | f | 10 | 512
f5d2f7c6-093f-46d6-a844-224d92db5ef9 | b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa | backupnfs | 1 | 1 | 4 | 2018-01-19 13:31:25.899738+01 | 2019-02-14 14:36:22.3171+01 | t | 1530772724454 | | | f | 10 | 5 | | | f | f | 0 | 512
33f1ba00-6a16-4e58-b4c5-94426f1c4482 | 6b6b7899-c82b-4417-b453-0b3b0ac11deb | ssd-4 | 1 | 7 | 4 | 2017-06-25 12:43:49.339884+02 | 2019-02-27 21:30:23.358231+01 | t | 1550151382205 | | | f | 10 | 5 | | | f | f | 0 | 512
09959920-a31b-42c2-a547-e50b73602c96 | b036005a-d44d-4689-a8c3-13e1bbf55af7 | ssd-5 | 0 | 7 | 4 | 2017-06-25 12:44:36.215841+02 | 2019-04-02 12:11:45.468731+02 | t | 1554199905458 | | | f | 10 | 5 | | | f | f | 0 | 512
515166a4-4b2f-402d-bf37-95f3c59635cb | 420c6356-1645-49ed-8d1e-56e45ba60d4a | ssd8 | 1 | 7 | 4 | 2020-01-31 09:49:12.484345+01 | 2020-01-31 09:49:12.933808+01 | t | 0 | | | f | 10 | 5 | | | f | f | 10 | 512
(8 rows)

So there still is something weird going on there.

Met vriendelijke groet, With kind regards,

Jorick Astrego
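For readability, a narrower select is easier to eyeball than select * on that wide table; the columns below all appear in the output above, and the psql invocation assumes local access to the engine database:

# compact view of the storage domains the engine still knows about
sudo -u postgres psql engine -c "select id, storage_name, storage_type, storage_domain_type from storage_domain_static order by storage_name;"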

On Mon, Feb 10, 2020 at 4:13 PM Jorick Astrego <jorick@netbulae.eu> wrote:
On 2/10/20 1:27 PM, Jorick Astrego wrote:
Hmm, I didn't notice that.
I did a check on the NFS server and I found the "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path (/data/exportdom).
This was an old NFS export domain that has been deleted for a while now. I remember finding somewhere an issue with old domains still being active after removal but I cannot find it now.
I unexported the directory on the nfs server and now I have to correct mount and it activates fine.
Thanks!
Still weird that it picks another nfs mount path to mount that has been removed months ago from engine.
This is because vdsm scans for domains by storage type, i.e. looking under /rhev/data-center/mnt/* in the case of nfs domains [1]
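Roughly speaking, anything that looks like a domain UUID one level below a mountpoint gets picked up by that scan, so you can preview what a node will discover with a plain glob (illustration only; the real lookup is in [1]):

# list candidate storage domain directories under every mounted file domain
ls -d /rhev/data-center/mnt/*/????????-????-????-????-????????????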
It's not listed in the database on engine:
The table lists the valid domains known to engine; removals/additions of storage domains update this table. If you removed the old nfs domain while the nfs storage was not available at the time (i.e. not mounted), then the storage format could fail silently [2] and yet this table would still be updated for the SD removal [3]. I haven't tested this out, and you may need to unmount at a very specific moment to achieve this in [2], but looking around with the kind assistance of +Benny Zlotnik <bzlotnik@redhat.com> on the engine side makes this assumption seem possible.

[1] https://github.com/oVirt/vdsm/blob/821afbbc238ba379c12666922fc1ac80482ee383/...
[2] https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/fileSD.py#L628
[3] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bl...

engine=# select * from storage_domain_static ;
[snip]
So there still is something weird going on there.
participants (3)
- Amit Bawer
- Gianluca Cecchi
- Jorick Astrego