Thank you,
I am trying to migrate a VM that has its disks on CephFS (as a POSIX domain, mounted on all hosts), and it does not work. I'm not sure if this is expected, given that the VM disks are on this type of storage. The errors in the engine log are:

2019-03-02 13:35:03,483Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-0) [] Migration of VM 'centos7-test' to host 'node1.internal' failed: VM destroyed during the startup.
2019-03-02 13:35:03,505Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-14) [] Rerun VM 'bec1cd40-9d62-4f6d-a9df-d97a79584441'. Called from VDS 'node2.internal'
2019-03-02 13:35:03,566Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-42967) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: centos7-test, Source: node2.internal, Destination: node1.internal).
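
The engine log doesn't say much beyond "VM destroyed during the startup", so I will also dig through vdsm.log on the destination host. I assume something like this would surface the relevant lines (default vdsm log path, the filter is just a guess):

  grep -iE 'migration|destroy' /var/log/vdsm/vdsm.log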

Any thoughts?

Thanks,

Leo


On Sat, Mar 2, 2019 at 11:59 AM Strahil <hunter86_bg@yahoo.com> wrote:

If you mean storage migration - a failure could be possible.
If it is live migration between hosts - it shouldn't happen.
Anything in the logs?

Best Regards,
Strahil Nikolov

On Mar 2, 2019 09:23, Leo David <leoalex@gmail.com> wrote:
Thank you Strahil, yes, I thought about that too; I'll give it a try.
Now (to go a bit off-topic), it seems that I can't live migrate the VM, even though the CephFS mountpoint exists on all the hosts.
Could it be that live migration is not possible because the storage type is "posix"?

Thank you !

On Sat, Mar 2, 2019, 04:05 Strahil <hunter86_bg@yahoo.com> wrote:

Can you try to set the credentials in a file (I don't recall where that goes for Ceph), so you can mount without specifying a user/pass?
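
For example, something along these lines should do it (untested; the file path is arbitrary, and the file should contain only the base64 key):

  mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secretfile=/etc/ceph/admin.secret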

Best Regards,
Strahil Nikolov

On Mar 1, 2019 13:46, Leo David <leoalex@gmail.com> wrote:
Hi Everyone,
I am trying to mount CephFS as a POSIX storage domain and I am getting an error in vdsm.log, although the equivalent command run directly on the node works fine:

  mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==

I have configured:
Storage type: POSIX compliant FS
Path: 10.10.6.1:/sata/ovirt-data
VFS Type: ceph
Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
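
If I understand the POSIX domain flow correctly, vdsm should end up issuing something equivalent to this (my reconstruction from the log below, not the literal vdsm command line):

  mount -t ceph -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== 10.10.6.1:/sata/ovirt-data /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data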


2019-03-01 11:35:33,457+0000 INFO  (jsonrpc/4) [storage.Mount] mounting 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
2019-03-01 11:35:33,464+0000 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2019-03-01 11:35:33,471+0000 ERROR (jsonrpc/4) [storage.HSM] Could not connect to storageServer (hsm:2414)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, in connectStorageServer
    conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 180, in connect
    six.reraise(t, v, tb)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
    cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in mount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
MountError: (1, ';mount: unsupported option format:  name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
Any thoughts on this? What could be wrong with the options field?
Using oVirt 4.3.1
Thank you very much and have a great day!

Leo

--
Best regards, Leo David

