Hi Leo,
Check libvirt's logs on the destination host.
Maybe they can provide more information.
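For example (assuming the default libvirt layout), on the destination host:

tail -n 100 /var/log/libvirt/qemu/centos7-test.log

The qemu log usually says why the vm process was destroyed during startup.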
Best Regards,
Strahil Nikolov

On Mar 2, 2019 15:40, Leo David <leoalex(a)gmail.com> wrote:
Thank you,
I am trying to migrate a vm that has its disks on cephfs (a posix domain, mounted on
all hosts), and it does not work. Not sure if this is expected, given that the vm
disks are on this type of storage. The errors in the engine log are:
2019-03-02 13:35:03,483Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-0) [] Migration of VM 'centos7-test' to host
'node1.internal' failed: VM destroyed during the startup.
2019-03-02 13:35:03,505Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-14) [] Rerun VM 'bec1cd40-9d62-4f6d-a9df-d97a79584441'.
Called from VDS 'node2.internal'
2019-03-02 13:35:03,566Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-42967) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: centos7-test, Source:
node2.internal, Destination: node1.internal).
Any thoughts ?
Thanks,
Leo
On Sat, Mar 2, 2019 at 11:59 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>
> If you mean storage migration - could be possible.
> If it is about live migration between hosts - shouldn't happen.
> Anything in the logs ?
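> For example (assuming the default oVirt paths), check
> /var/log/ovirt-engine/engine.log on the engine and /var/log/vdsm/vdsm.log on
> both hosts, e.g.:
>
> grep -i migration /var/log/vdsm/vdsm.log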
>
> Best Regards,
> Strahil Nikolov
>
> On Mar 2, 2019 09:23, Leo David <leoalex(a)gmail.com> wrote:
>>
>> Thank you Strahil, yes, I thought about that too, I'll give it a try.
>> Now (to be a bit offtopic), it seems that I can't live migrate the vm, even
>> though the cephfs mountpoint exists on all the hosts.
>> Could it be the fact that the storage type is "posix" that makes live
>> migration impossible ?
>>
>> Thank you !
>>
>> On Sat, Mar 2, 2019, 04:05 Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>
>>> Can you try to set the credentials in a file (I don't recall where that was
>>> for ceph), so you can mount without specifying user/pass ?
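>>>
>>> For example (from memory, so please double-check the ceph docs), put the key
>>> in a file and use the secretfile option instead of secret, something like:
>>>
>>> echo 'AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==' > /etc/ceph/admin.secret
>>> mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secretfile=/etc/ceph/admin.secret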
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Mar 1, 2019 13:46, Leo David <leoalex(a)gmail.com> wrote:
>>>>
>>>> Hi Everyone,
>>>> I am trying to mount cephfs as a posix storage domain and getting an error
>>>> in vdsm.log, although the direct command run on the node
>>>> "mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ=="
>>>> works fine. I have configured:
>>>> Storage type: POSIX compliant FS
>>>> Path: 10.10.6.1:/sata/ovirt-data
>>>> VFS Type: ceph
>>>> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
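>>>> ( As far as I understand, vdsm should then run more or less the equivalent of:
>>>> mount -t ceph -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== 10.10.6.1:/sata/ovirt-data /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
>>>> judging by the [storage.Mount] line in the log below. )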
>>>>
>>>>
>>>> 2019-03-01 11:35:33,457+0000 INFO (jsonrpc/4) [storage.Mount] mounting
>>>> 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
>>>> (mount:204)
>>>> 2019-03-01 11:35:33,464+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer]
>>>> RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
>>>> 2019-03-01 11:35:33,471+0000 ERROR (jsonrpc/4) [storage.HSM] Could not
>>>> connect to storageServer (hsm:2414)