Hosted engine deployment fails with storage domain error

Hi,

Hope you are well on your end. I'm still trying to set up a brand new hosted engine, but I'm hitting a "Hosted engine deployment fails with storage domain error" issue, as shown in the screenshot. I can't figure out what's going on; I have checked the NFS server and everything seems to be working fine, as well as the NFS mount point on the client side (the RHV host). Can anyone here help me move fast with this troubleshooting? Any idea?

Regards,
Eugène NG

--
LesCDN <http://lescdn.com>
engontang@lescdn.com
Men need a leader, and the leader needs men! Clothes don't make the man, but people judge you by what they see!

Check the recommended log and do a simple test:

sudo -u vdsm touch /rhev/...path/to/storage/NEWFILE

Usually users set anonuid/anongid to 36 in the NFS export and force 'all_squash'.

Best Regards,
Strahil Nikolov
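For reference, an NFS export configured along those lines could look roughly like the following on the server side (the export path below is only an illustration, not the one used in this thread). The line in /etc/exports would be:

/exports/ovirt-storage *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)

followed on the NFS server by:

exportfs -ra    # apply the change
exportfs -v     # verify the active export options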

Hi @Strahil Nikolov <hunter86_bg@yahoo.com>,

I don't really understand what to test with this command:

sudo -u vdsm touch /rhev/...path/to/storage/NEWFILE

Can you give me more detail please?

Best regards,
Eugène NG

The storage is usually mounted under the '/rhev' path.

sudo -u vdsm dd if=/dev/zero of=/rhev/.../.../.../somefile bs=4M count=1

This will write a file inside the mountpoint of your storage, as the vdsm user (just like oVirt does).

Best Regards,
Strahil Nikolov
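If that dd test succeeds, it is also worth confirming that the file ends up owned by vdsm:kvm (uid/gid 36) before removing it. A rough sketch, with MNT standing in for the real mountpoint under /rhev/data-center/mnt/ (placeholder only):

MNT=/rhev/data-center/mnt/SERVER:_EXPORT_PATH   # placeholder, substitute your real mountpoint
ls -ln "$MNT"/somefile                          # numeric owner and group should both be 36
sudo -u vdsm rm -f "$MNT"/somefile              # clean up the test file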

Ok, got it... In my case the storage is mounted on ~/exports. I'll test it that way and let you know. Thanks.

Eugène NG

~ is your home folder, and SELinux can also get in the way on that path.

Have you checked whether the storage is not already mounted under /rhev/... ?

Best Regards,
Strahil Nikolov
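A quick, generic way to answer that on the host (nothing here is specific to this setup):

mount | grep /rhev     # anything already mounted under /rhev?
findmnt -t nfs4        # list all NFSv4 mounts with source and target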

No @Strahil Nikolov <hunter86_bg@yahoo.com>, it's not, because I ran the mount command myself from my home directory before running the hosted engine deployment:

mount 172.31.81.195:/ ./export

Regards,
Eugène NG

This screenshot shows the output of the `mount -l` command.

I can see the NFS export is mounted twice. Do you think I should remove this, and avoid manually mounting the network storage file system as I did?

172.31.81.195:/ on /home/ec2-user/export type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.31.21.171,local_lock=none,addr=172.31.81.195)

Regards,
Eugène NG
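If you do decide to drop the manual mount and let the deployment manage the storage itself, removing it is just the following (assuming nothing on the host is still using that directory):

umount /home/ec2-user/export    # run as root on the RHV host
mount | grep 172.31.81.195      # only the deployment-managed mount should remain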

I unmounted my home export folder, but I still have the same error:

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up

These are my current mount points (the iso directory is missing):

[root@ip-172-31-21-171 ec2-user]# mount -l
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=197765916k,nr_inodes=49441479,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,rdma)
none on /sys/kernel/tracing type tracefs (rw,relatime,seclabel)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/nvme0n1p2 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=84184)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
debugfs on /sys/kernel/debug type debugfs (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=2M)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=39560148k,mode=700,uid=1000,gid=1000)
hugetlbfs on /dev/hugepages1G type hugetlbfs (rw,relatime,seclabel,pagesize=1024M)
172.31.81.195:/home/ec2-user/export on /rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.31.21.171,local_lock=none,addr=172.31.81.195)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=39560148k,mode=700)
[root@ip-172-31-21-171 ec2-user]#

Eugène NG
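When the 'Activate storage domain' task fails like this, the HTTP 400 fault by itself says very little; the underlying reason usually only shows up in the setup and vdsm logs. A generic way to dig, assuming the default log locations:

grep -iE 'error|fault' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log | tail -n 50
grep -i 'storagedomain' /var/log/vdsm/vdsm.log | tail -n 50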

I can write a file inside the mount point as the vdsm user:

[ec2-user@ip-172-31-21-171 ~]$ sudo -u vdsm dd if=/dev/zero of=/rhev/data-center/mnt/172.31.81.195\:_home_ec2-user_export/test_storage_file
[ec2-user@ip-172-31-21-171 ~]$ ll /rhev/data-center/mnt/172.31.81.195\:_home_ec2-user_export
total 23190428
drwxr-xr-x. 6 vdsm kvm          64 Feb  9 19:00 38421e83-a4cd-4e74-bad9-e454187219c7
-rw-r--r--. 1 vdsm kvm 23746968064 Mar 15 13:54 test_storage_file
[ec2-user@ip-172-31-21-171 ~]$

So I'll clean up the deployment and run it again. Let me know in the meantime if you have any other idea.

Eugène NG

Same issue.

Looking into the log file, it looks like the local_vm_ip fact is missing, but I don't understand why, and I don't know if that's the real cause.
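One generic thing to try when chasing that fact is to grep the deployment log for it (default log location; adjust if yours differs):

grep -n 'local_vm_ip' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log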

Hi,

The storage domain that I was trying to use for the Self-Hosted Engine (SHE) deployment was being used by another setup. I then deleted that setup's data from it (172.31.81.195:/home/ec2-user/export):

# cd /rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export
# rm ./*

After that I cleaned up the hosted engine and deployed it again, and it worked.

But the problem now is that the other cluster that was using the storage domain is complaining "No active Storage Domain found. Check the Storage Domains and Hosts status", as you can see in the attached screenshot. I'd like to get that cluster running again without deleting/redeploying its hosted engine. How can I overcome this new issue? How can I recover from the disaster I made by deleting the storage data?

Best regards,
Eugène NG

On the old cluster you have to go to the UI -> Storage, select the storage domain -> Data Center -> Maintenance, and then Detach.

Each storage domain can be used by a single engine at a time. In order to use it on a second one, you have to detach it (don't mark the option to clean it up) and then attach it on the new cluster.

If you are using NFS, you can create 2 separate directories and export them.

Best Regards,
Strahil Nikolov
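For the NFS suggestion, a minimal sketch of two separate exports, one per storage domain (directory names and options are examples only, not taken from this setup):

mkdir -p /exports/she-domain /exports/data-domain
chown 36:36 /exports/she-domain /exports/data-domain
cat >> /etc/exports <<'EOF'
/exports/she-domain  *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
/exports/data-domain *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
EOF
exportfs -ra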

Hi @Strahil Nikolov <hunter86_bg@yahoo.com>,

Yes, that's exactly what I did.

Regards,
Eugène NG
Participants (2):
- Eugène Ngontang
- Strahil Nikolov