Hello,
oVirt Engine Version: 3.5.0.1-1.el6
We recently removed the Data (Master) storage domain from our oVirt cluster
and replaced it with another. All is working great. When looking at the old
storage device, I noticed that one of our nodes still has an NFS connection
to it.
Looking at the output of 'mount' on that node, I see two mounts to the old
storage server (192.168.64.15):
192.168.64.15:/nfs-share/ovirt-store/hosted-engine on
/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.15,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.64.15)
192.168.64.11:/export/testovirt on
/rhev/data-center/mnt/192.168.64.11:_export_testovirt
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.11,mountvers=3,mountport=46034,mountproto=udp,local_lock=none,addr=192.168.64.11)
192.168.64.163:/export/storage on
/rhev/data-center/mnt/192.168.64.163:_export_storage
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.163,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.163)
192.168.64.55:/export/storage on
/rhev/data-center/mnt/192.168.64.55:_export_storage
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.55,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.55)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
192.168.64.15:/nfs-share/ovirt-store/hosted-engine on
/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.15,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.64.15)
10.1.90.64:/ifs/telvue/infrastructure/iso on
/rhev/data-center/mnt/10.1.90.64:_ifs_telvue_infrastructure_iso type nfs
(rw,relatime,vers=3,rsize=131072,wsize=524288,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.1.90.64,mountvers=3,mountport=300,mountproto=udp,local_lock=none,addr=10.1.90.64)
192.168.64.163:/export/storage/iso-store on
/rhev/data-center/mnt/192.168.64.163:_export_storage_iso-store type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.163,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.163)
/etc/fstab has no entry for these, so I assume they are left over from when
the storage domain existed.
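
For reference, this is roughly how I checked on the node (the fstab grep
comes back empty; the mounts only appear in the live mount table):

  # no fstab entry for the old storage server
  grep 192.168.64.15 /etc/fstab

  # yet the stale mounts still show up in the kernel's mount table
  grep 192.168.64.15 /proc/mounts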
Is it safe to 'umount' these mounts, or is there a hook I may not be aware
of? Is there another way of removing them from the node via the OVM?
None of the other nodes in the cluster have this mount. This node is not
the SPM.
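
If manually unmounting is the right approach, my rough plan was something
along these lines (I have not run it yet, and it assumes nothing on the node
still has files open under the old mount point):

  MNT=/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine

  # confirm nothing is still using the old mount point
  fuser -vm "$MNT"

  # if that comes back empty, unmount it (lazy unmount as a fallback
  # in case the old NFS server is unresponsive)
  umount "$MNT" || umount -l "$MNT"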
Thank you for your time and consideration.
Best regards,
Mark Steele
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue