It seems the kernel changed the path that we use to mount, and then we cannot validate that the mount exists. Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 in the UI? It should be better anyway, as the mapping could change after reboot.

On Tue, Mar 21, 2017 at 2:20 PM, carl langlois <crl.langlois@gmail.com> wrote:

Here is the /proc/mounts:

rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
tmpfs /tmp tmpfs rw 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
/dev/mapper/cl_ovhost1-home /home xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/42 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700,uid=42,gid=42 0 0
gvfsd-fuse /run/user/42/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=42,group_id=42 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
ovhost2:/home/exports/defaultdata /rhev/data-center/mnt/ovhost2:_home_exports_defaultdata nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
ovhost2:/home/exports/ISO /rhev/data-center/mnt/ovhost2:_home_exports_ISO nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
ovhost2:/home/exports/data /rhev/data-center/mnt/ovhost2:_home_exports_data nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700 0 0
/dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 /rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0

Thank you for your help.
Carl

On Tue, Mar 21, 2017 at 6:31 AM, Fred Rolland <frolland@redhat.com> wrote:

Can you provide the content of /proc/mounts after it has been mounted by VDSM?

On Tue, Mar 21, 2017 at 12:28 PM, carl langlois <crl.langlois@gmail.com> wrote:

Here is the vdsm.log:

jsonrpc.Executor/0::ERROR::2017-03-18 08:23:48,317::hsm::2403::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 260, in getRecord
    (self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3` does not exist

Thanks

On Fri, Mar 17, 2017 at 3:06 PM, Fred Rolland <frolland@redhat.com> wrote:

Please send Vdsm log.
Thanks

On Fri, Mar 17, 2017 at 8:46 PM, carl langlois <crl.langlois@gmail.com> wrote:

Hi,
The link that you sent is for NFS storage, but I am trying to add a POSIX-compliant FS. When I press OK it mounts the disk to:

[root@ovhost4 ~]# ls -al /rhev/data-center/mnt/_dev_dm-4/
total 28
drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .
drwxr-xr-x. 6 vdsm kvm  4096 Mar 17 13:35 ..
drwxr-xr-x. 2 vdsm kvm 16384 Mar 16 11:42 lost+found
drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .Trash-0

and doing a touch with the vdsm user works:

[root@ovhost4 ~]# sudo -u vdsm touch /rhev/data-center/mnt/_dev_dm-4/test
[root@ovhost4 ~]# ls -al /rhev/data-center/mnt/_dev_dm-4/
total 28
drwxr-xr-x. 4 vdsm kvm  4096 Mar 17 13:44 .
drwxr-xr-x. 6 vdsm kvm  4096 Mar 17 13:35 ..
drwxr-xr-x. 2 vdsm kvm 16384 Mar 16 11:42 lost+found
-rw-r--r--. 1 vdsm kvm     0 Mar 17 13:44 test
drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .Trash-0

But it fails with a general exception error and the storage does not exist in oVirt.
Any help would be appreciated. Which log do you need to see?
Thanks

On Thu, Mar 16, 2017 at 5:02 PM, Fred Rolland <frolland@redhat.com> wrote:

Can you share more of the log?
Check [1] for more details.

Hi,
Can you check if the folder permissions are OK?

On Thu, Mar 16, 2017 at 7:49 PM, carl langlois <crl.langlois@gmail.com> wrote:

Hi Guys,
I am trying to add a POSIX FS on one of my hosts. oVirt actually mounts it but fails with "Error while executing action Add Storage Connection: General Exception".
If I look in the vdsm.log I can see:

jsonrpc.Executor/7::DEBUG::2017-03-16 12:39:28,248::fileUtils::209::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/_dev_dm-3 mode: None
jsonrpc.Executor/7::DEBUG::2017-03-16 12:39:28,248::fileUtils::218::Storage.fileUtils::(createdir) Using existing directory: /rhev/data-center/mnt/_dev_dm-3
jsonrpc.Executor/7::INFO::2017-03-16 12:39:28,248::mount::226::storage.Mount::(mount) mounting /dev/dm-3 at /rhev/data-center/mnt/_dev_dm-3
jsonrpc.Executor/7::DEBUG::2017-03-16 12:39:28,270::utils::871::storage.Mount::(stopwatch) /rhev/data-center/mnt/_dev_dm-3 mounted: 0.02 seconds
jsonrpc.Executor/7::ERROR::2017-03-16 12:39:28,271::hsm::2403::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 260, in getRecord
    (self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3` does not exist

Any help would be appreciated.
Thanks
CL
_______________________________________________
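The failure above can be reproduced in miniature. The following is a minimal Python sketch, not the actual VDSM code: it mimics looking up a (fs_spec, fs_file) pair in /proc/mounts with a literal comparison, using two mount lines abbreviated from the output quoted in the thread. Because the kernel records the device under its /dev/mapper alias, searching for the literal string `/dev/dm-3` finds nothing, while the /dev/mapper name matches.

```python
# Abbreviated sample of the /proc/mounts content shown earlier in the thread.
PROC_MOUNTS = """\
/dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 /rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0
"""

def find_mount(fs_spec, fs_file, mounts=PROC_MOUNTS):
    # Return the matching mounts line, or None if the exact pair is absent.
    for line in mounts.splitlines():
        spec, mnt, _rest = line.split(None, 2)
        if (spec, mnt) == (fs_spec, fs_file):
            return line
    return None

# Literal comparison with the name VDSM mounted fails, mirroring the OSError:
print(find_mount("/dev/dm-3", "/rhev/data-center/mnt/_dev_dm-3"))  # None
# The persistent /dev/mapper name, as suggested for the UI, matches:
print(find_mount("/dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1",
                 "/rhev/data-center/mnt/_dev_dm-3") is not None)  # True
```

This is why entering the /dev/mapper path in the UI works: the string VDSM stores then agrees with what the kernel reports.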
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
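To find the stable /dev/mapper name for a given dm-N node in the first place, canonicalizing symlinks is usually enough: on typical hosts the /dev/mapper entries are symlinks to (or nodes for) the same device as /dev/dm-N. The sketch below is illustrative only; it uses a throwaway directory standing in for /dev, since the device names in this thread are specific to that host.

```python
import os
import tempfile

def find_alias(mapper_dir, node):
    # Return the entry in mapper_dir that resolves to the same file as node.
    target = os.path.realpath(node)
    for name in os.listdir(mapper_dir):
        path = os.path.join(mapper_dir, name)
        if os.path.realpath(path) == target:
            return path
    return None

# Demo layout mimicking /dev/dm-3 and its /dev/mapper symlink.
dev = tempfile.mkdtemp()
os.mkdir(os.path.join(dev, "mapper"))
node = os.path.join(dev, "dm-3")
open(node, "w").close()
os.symlink(node, os.path.join(dev, "mapper",
                              "KINGSTON_SV300S37A240G_50026B726804F13B1"))

alias = find_alias(os.path.join(dev, "mapper"), node)
print(alias)  # ends with mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
```

On a real host the same lookup is `ls -l /dev/mapper` followed by comparing symlink targets against the dm-N node.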