If you are using version 4.1, you can add a shared storage domain to the DC
with the local storage, then move the VM's disks from the local domain to
the shared one.
Then you can detach the shared domain, reattach it to the other DC, and
import the VM.
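The detach/reattach part can also be scripted against the REST API — a rough
sketch only; the engine host, the credentials, and the SRC_DC_ID / DST_DC_ID /
SD_ID placeholders are illustrative, not values from this thread:

    # put the shared domain into maintenance in the source DC
    curl -k -u 'admin@internal:PASSWORD' -X POST -H 'Content-Type: application/xml' \
         -d '<action/>' \
         https://engine.example.com/ovirt-engine/api/datacenters/SRC_DC_ID/storagedomains/SD_ID/deactivate
    # detach it from the source DC (wait until it is actually in maintenance first)
    curl -k -u 'admin@internal:PASSWORD' -X DELETE \
         https://engine.example.com/ovirt-engine/api/datacenters/SRC_DC_ID/storagedomains/SD_ID
    # attach it to the target DC; the VMs can then be imported from the
    # domain's "VM Import" tab in the UI
    curl -k -u 'admin@internal:PASSWORD' -X POST -H 'Content-Type: application/xml' \
         -d '<storage_domain id="SD_ID"/>' \
         https://engine.example.com/ovirt-engine/api/datacenters/DST_DC_ID/storagedomains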
On Wed, Mar 22, 2017 at 12:43 PM, carl langlois <crl.langlois(a)gmail.com>
wrote:
Thanks for your help. That is what I understood after seeing all the VMs
being migrated to that host :-). So I have created a new data center with
local storage. My question on this: is it possible to migrate VMs between
the two data centers? Also, is it possible to share the ISO/EXPORT domain
between the two data centers?
Thanks
On Wed, Mar 22, 2017 at 3:40 AM, Fred Rolland <frolland(a)redhat.com> wrote:
> Is it a local disk? If you want to use a local disk, this is not the way
> to do it.
> A POSIX storage domain should be accessible from all the hosts in the
> cluster.
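> A quick way to see why a local disk fails that requirement (a sketch only;
> ovhost2 and ovhost4 are just host names that appear later in this thread,
> and /dev/sdb1 is the disk discussed below):
>
>     for h in ovhost2 ovhost4; do
>         # the backing device must exist on every host in the cluster;
>         # a local disk is only visible on the host it is plugged into
>         ssh "$h" ls -l /dev/sdb1
>     done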
>
> On Tue, Mar 21, 2017 at 9:36 PM, carl langlois <crl.langlois(a)gmail.com>
> wrote:
>
>> Okay, I have managed to get the POSIX compliant FS working.
>>
>> The first thing I did was to remove any multipath stuff from the disk and
>> use a standard partition table, i.e. /dev/sdb1 (but I do not think that
>> really helped).
>> Then I changed the block device's (/dev/sdb1) group and owner to vdsm:kvm
>> (that did not do the trick either, I still got permission denied).
>> Finally I created a directory /rhev/data-center/mnt/_dev_sdb1 and set its
>> owner and group to vdsm:kvm (this did the trick); see the sketch below.
>>
>> So why did I have to create that last directory by hand to make it work?
>> Am I missing something?
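>> A minimal sketch of the manual preparation described above (assuming the
>> local disk really is /dev/sdb1; adjust the device name for your host):
>>
>>     # hand the block device itself to the vdsm user
>>     chown vdsm:kvm /dev/sdb1
>>     # pre-create the mount point that VDSM derives from the device path
>>     mkdir -p /rhev/data-center/mnt/_dev_sdb1
>>     chown vdsm:kvm /rhev/data-center/mnt/_dev_sdb1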
>>
>> Thanks
>> Carl
>>
>>
>>
>>
>> On Tue, Mar 21, 2017 at 10:50 AM, Fred Rolland <frolland(a)redhat.com>
>> wrote:
>>
>>> Can you try :
>>>
>>> chown -R vdsm:kvm /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>>
>>> On Tue, Mar 21, 2017 at 4:32 PM, carl langlois <crl.langlois(a)gmail.com>
>>> wrote:
>>>
>>>>
>>>> jsonrpc.Executor/7::WARNING::2017-03-21 09:27:40,099::outOfProcess::193::Storage.oop::(validateAccess) Permission denied for directory: /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1 with permissions:7
>>>> jsonrpc.Executor/7::INFO::2017-03-21 09:27:40,099::mount::233::storage.Mount::(umount) unmounting /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
>>>> jsonrpc.Executor/7::DEBUG::2017-03-21 09:27:40,104::utils::871::storage.Mount::(stopwatch) /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1 unmounted: 0.00 seconds
>>>> jsonrpc.Executor/7::ERROR::2017-03-21 09:27:40,104::hsm::2403::Storage.HSM::(connectStorageServer) Could not connect to storageServer
>>>> Traceback (most recent call last):
>>>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
>>>>     conObj.connect()
>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 249, in connect
>>>>     six.reraise(t, v, tb)
>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
>>>>     self.getMountObj().getRecord().fs_file)
>>>>   File "/usr/share/vdsm/storage/fileSD.py", line 81, in validateDirAccess
>>>>     raise se.StorageServerAccessPermissionError(dirPath)
>>>> StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1'
>>>> jsonrpc.Executor/7::DEBUG::201
>>>>
>>>> Thanks again.
>>>>
>>>>
>>>> On Tue, Mar 21, 2017 at 10:14 AM, Fred Rolland <frolland(a)redhat.com>
>>>> wrote:
>>>>
>>>>> Can you share the VDSM log again ?
>>>>>
>>>>> On Tue, Mar 21, 2017 at 4:08 PM, carl langlois <
>>>>> crl.langlois(a)gmail.com> wrote:
>>>>>
>>>>>> Interesting, when I'm using /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1,
>>>>>> the UI now gives an error on the permission settings.
>>>>>>
>>>>>> [root@ovhost4 ~]# ls -al /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>>>>> lrwxrwxrwx 1 root root 7 Mar 18 08:28 /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 -> ../dm-3
>>>>>>
>>>>>> and the permissions on the dm-3 target:
>>>>>>
>>>>>> [root@ovhost4 ~]# ls -al /dev/dm-3
>>>>>> brw-rw---- 1 vdsm kvm 253, 3 Mar 18 08:28 /dev/dm-3
>>>>>>
>>>>>>
>>>>>> How do I change the permissions on the symlink?
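>>>>>> (A side note, as a sketch: the symlink's own permission bits do not
>>>>>> matter for access to the device, and chown dereferences symlinks by
>>>>>> default, so chowning the /dev/mapper name changes the dm-3 target:)
>>>>>>
>>>>>>     # affects /dev/dm-3, the target of the link
>>>>>>     chown vdsm:kvm /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>>>>>     # chown -h would change the link itself, which is not needed here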
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 21, 2017 at 10:00 AM, Fred Rolland <frolland(a)redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>>>>>> in the UI?
>>>>>>> It seems the kernel changes the path that we used to mount, and then
>>>>>>> we cannot validate that the mount exists.
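>>>>>>> Roughly what happens (an illustrative check, not VDSM's actual code):
>>>>>>> /proc/mounts records the canonical /dev/mapper name, so a lookup of the
>>>>>>> mount record by the original /dev/dm-3 spec finds nothing:
>>>>>>>
>>>>>>>     grep ' /rhev/data-center/mnt/_dev_dm-3 ' /proc/mounts
>>>>>>>     # prints: /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 /rhev/data-center/mnt/_dev_dm-3 ext4 ...
>>>>>>>     # the first field is the canonical device name, not /dev/dm-3,
>>>>>>>     # which is why getRecord() raises
>>>>>>>     # "Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3` does not exist"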
>>>>>>>
>>>>>>> Using the /dev/mapper path should in any case be better, as the dm-N
>>>>>>> mapping could change after a reboot.
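>>>>>>> And if the vdsm:kvm ownership of the device node also needs to survive
>>>>>>> reboots, one common approach (an untested sketch; the rule file name is
>>>>>>> arbitrary) is a udev rule keyed on the device-mapper name:
>>>>>>>
>>>>>>>     # /etc/udev/rules.d/99-local-posix-sd.rules
>>>>>>>     ACTION=="add|change", ENV{DM_NAME}=="KINGSTON_SV300S37A240G_50026B726804F13B1", OWNER="vdsm", GROUP="kvm", MODE="0660"
>>>>>>>
>>>>>>> then reload with 'udevadm control --reload-rules && udevadm trigger'.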
>>>>>>>
>>>>>>> On Tue, Mar 21, 2017 at 2:20 PM, carl langlois <
>>>>>>> crl.langlois(a)gmail.com> wrote:
>>>>>>>
>>>>>>>> Here is the /proc/mounts
>>>>>>>>
>>>>>>>> rootfs / rootfs rw 0 0
>>>>>>>> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
>>>>>>>> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
>>>>>>>> devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755 0 0
>>>>>>>> securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
>>>>>>>> tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
>>>>>>>> devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
>>>>>>>> tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
>>>>>>>> tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
>>>>>>>> cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
>>>>>>>> pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
>>>>>>>> cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
>>>>>>>> cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
>>>>>>>> cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
>>>>>>>> cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
>>>>>>>> cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
>>>>>>>> cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
>>>>>>>> cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
>>>>>>>> cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
>>>>>>>> cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
>>>>>>>> cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
>>>>>>>> configfs /sys/kernel/config configfs rw,relatime 0 0
>>>>>>>> /dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
>>>>>>>> systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
>>>>>>>> mqueue /dev/mqueue mqueue rw,relatime 0 0
>>>>>>>> debugfs /sys/kernel/debug debugfs rw,relatime 0 0
>>>>>>>> hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
>>>>>>>> tmpfs /tmp tmpfs rw 0 0
>>>>>>>> nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
>>>>>>>> /dev/mapper/cl_ovhost1-home /home xfs rw,relatime,attr2,inode64,noquota 0 0
>>>>>>>> /dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
>>>>>>>> sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
>>>>>>>> tmpfs /run/user/42 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700,uid=42,gid=42 0 0
>>>>>>>> gvfsd-fuse /run/user/42/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=42,group_id=42 0 0
>>>>>>>> fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
>>>>>>>> ovhost2:/home/exports/defaultdata /rhev/data-center/mnt/ovhost2:_home_exports_defaultdata nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
>>>>>>>> ovhost2:/home/exports/ISO /rhev/data-center/mnt/ovhost2:_home_exports_ISO nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
>>>>>>>> ovhost2:/home/exports/data /rhev/data-center/mnt/ovhost2:_home_exports_data nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
>>>>>>>> tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700 0 0
>>>>>>>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 /rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0
>>>>>>>>
>>>>>>>> Thank you for your help.
>>>>>>>>
>>>>>>>> Carl
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Mar 21, 2017 at 6:31 AM, Fred Rolland <frolland(a)redhat.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Can you provide the content of /proc/mounts after it has been
>>>>>>>>> mounted by VDSM?
>>>>>>>>>
>>>>>>>>> On Tue, Mar 21, 2017 at 12:28 PM, carl langlois <
>>>>>>>>> crl.langlois(a)gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Here is the vdsm.log
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> jsonrpc.Executor/0::ERROR::2017-03-18 08:23:48,317::hsm::2403::Storage.HSM::(connectStorageServer) Could not connect to storageServer
>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
>>>>>>>>>>     conObj.connect()
>>>>>>>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
>>>>>>>>>>     self.getMountObj().getRecord().fs_file)
>>>>>>>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 260, in getRecord
>>>>>>>>>>     (self.fs_spec, self.fs_file))
>>>>>>>>>> OSError: [Errno 2] Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3` does not exist
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> thanks
>>>>>>>>>>
>>>>>>>>>> On Fri, Mar 17, 2017 at 3:06 PM, Fred Rolland <frolland(a)redhat.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Please send the VDSM log.
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Mar 17, 2017 at 8:46 PM, carl langlois <crl.langlois(a)gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> The link that you sent is for NFS storage, but I am trying to
>>>>>>>>>>>> add a POSIX compliant one.
>>>>>>>>>>>>
>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> When I press OK, it mounts the disk at:
>>>>>>>>>>>>
>>>>>>>>>>>> [root@ovhost4 ~]# ls -al /rhev/data-center/mnt/_dev_dm-4/
>>>>>>>>>>>> total 28
>>>>>>>>>>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .
>>>>>>>>>>>> drwxr-xr-x. 6 vdsm kvm  4096 Mar 17 13:35 ..
>>>>>>>>>>>> drwxr-xr-x. 2 vdsm kvm 16384 Mar 16 11:42 lost+found
>>>>>>>>>>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .Trash-0
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> and doing a touch as the vdsm user works:
>>>>>>>>>>>>
>>>>>>>>>>>> [root@ovhost4 ~]# sudo -u vdsm touch /rhev/data-center/mnt/_dev_dm-4/test
>>>>>>>>>>>> [root@ovhost4 ~]# ls -al /rhev/data-center/mnt/_dev_dm-4/
>>>>>>>>>>>> total 28
>>>>>>>>>>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 17 13:44 .
>>>>>>>>>>>> drwxr-xr-x. 6 vdsm kvm  4096 Mar 17 13:35 ..
>>>>>>>>>>>> drwxr-xr-x. 2 vdsm kvm 16384 Mar 16 11:42 lost+found
>>>>>>>>>>>> -rw-r--r--. 1 vdsm kvm     0 Mar 17 13:44 test
>>>>>>>>>>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .Trash-0
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> But it fails with a general exception error, and the storage
>>>>>>>>>>>> does not show up in oVirt.
>>>>>>>>>>>>
>>>>>>>>>>>> Any help would be appreciated.
>>>>>>>>>>>>
>>>>>>>>>>>> Which log do you need to see?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Mar 16, 2017 at 17:02, Fred Rolland <frolland(a)redhat.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you check if the folder
permissions are OK ?
>>>>>>>>>>>>> Check [1] for more details.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you share more of the log ?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> [1] https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Mar 16, 2017 at 7:49 PM, carl langlois <crl.langlois(a)gmail.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Guys,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am trying to add a POSIX FS on one of my hosts. oVirt is
>>>>>>>>>>>>> actually mounting it, but it fails with "Error while executing
>>>>>>>>>>>>> action Add Storage Connection: General Exception".
>>>>>>>>>>>>>
>>>>>>>>>>>>> If I look in the vdsm.log, I can see:
>>>>>>>>>>>>> jsonrpc.Executor/7::DEBUG::2017-03-16 12:39:28,248::fileUtils::209::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/_dev_dm-3 mode: None
>>>>>>>>>>>>> jsonrpc.Executor/7::DEBUG::2017-03-16 12:39:28,248::fileUtils::218::Storage.fileUtils::(createdir) Using existing directory: /rhev/data-center/mnt/_dev_dm-3
>>>>>>>>>>>>> jsonrpc.Executor/7::INFO::2017-03-16 12:39:28,248::mount::226::storage.Mount::(mount) mounting /dev/dm-3 at /rhev/data-center/mnt/_dev_dm-3
>>>>>>>>>>>>> jsonrpc.Executor/7::DEBUG::2017-03-16 12:39:28,270::utils::871::storage.Mount::(stopwatch) /rhev/data-center/mnt/_dev_dm-3 mounted: 0.02 seconds
>>>>>>>>>>>>> jsonrpc.Executor/7::ERROR::2017-03-16 12:39:28,271::hsm::2403::Storage.HSM::(connectStorageServer) Could not connect to storageServer
>>>>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>>>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
>>>>>>>>>>>>>     conObj.connect()
>>>>>>>>>>>>>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
>>>>>>>>>>>>>     self.getMountObj().getRecord().fs_file)
>>>>>>>>>>>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 260, in getRecord
>>>>>>>>>>>>>     (self.fs_spec, self.fs_file))
>>>>>>>>>>>>> OSError: [Errno 2] Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3` does not exist
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any help would be appreciated.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>
>>>>>>>>>>>>> CL
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> Users mailing list
>>>>>>>>>>>>> Users(a)ovirt.org
>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>>>>>>