OVN command line issue
by Akshita Jain
Dear all
I am running our project against the oVirt API and I am facing an issue with an OVN network.
I created a logical switch with "ovn-nbctl ls-add SW" for the oVirt network, but when I plug two VMs into that switch, ping between them does not work.
If I create the OVN switch from the oVirt GUI it works fine, but my requirement is to create the OVN network from my code. Can anybody please help? I am stuck on this.
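For reference, a hedged sketch of what a hand-built switch usually needs before ping can work (not from the thread; port names and MAC addresses below are placeholders): when the switch and its ports are created directly with ovn-nbctl, each logical switch port has to carry the MAC address of the vNIC plugged into it, otherwise OVN has nowhere to deliver the unicast traffic.
# create the switch and one port per vNIC, with the vNIC's MAC address
ovn-nbctl ls-add SW
ovn-nbctl lsp-add SW vm1_nic1
ovn-nbctl lsp-set-addresses vm1_nic1 "00:1a:4a:16:01:51"
ovn-nbctl lsp-add SW vm2_nic1
ovn-nbctl lsp-set-addresses vm2_nic1 "00:1a:4a:16:01:52"
# list the switch's ports and their addresses to verify
ovn-nbctl show SW
An alternative, also only a sketch, is to create the network through the engine API (a network resource with an external_provider element pointing at ovirt-provider-ovn, or the equivalent SDK call) so that the provider creates and wires the ports when the vNICs are plugged, as the GUI does.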
5 years, 8 months
oVirt VM UserDefinedProperties Option
by Akshita Jain
Dear All,
I'm unable to use the tablet device even after defining the property with: engine-config -s UserDefinedVMProperties='usbtablet=^(true|false)$'.
Can someone please help me with this issue?
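For reference, a hedged sketch of the usual sequence (the compatibility version and the hook requirement are assumptions, not from the post): UserDefinedVMProperties is stored per cluster compatibility version, the engine must be restarted after changing it, and a custom property only has an effect if something on the host side (a VDSM hook) consumes it.
# define the property for the compatibility version in use, then restart the engine
engine-config -s "UserDefinedVMProperties=usbtablet=^(true|false)$" --cver=4.2
engine-config -g UserDefinedVMProperties   # verify the value was stored
systemctl restart ovirt-engine
# then set usbtablet=true on the VM (Edit VM -> Custom Properties, or via the API)
# and make sure a hook that actually reads the property exists on the hosts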
5 years, 8 months
Cannot start VM: Bad volume specification
by Alan G
Hi,
I performed the following: -
1. Shutdown VM.
2. Take a snapshot
3. Create a clone from snapshot.
4. Start the clone. Clone starts fine.
5. Attempt to delete snapshot from original VM, fails.
6. Attempt to start original VM, fails with "Bad volume specification".
This was logged in VDSM during the snapshot deletion attempt.
2019-02-26 13:27:10,907+0000 ERROR (tasks/3) [storage.TaskManager.Task] (Task='67577e64-f29d-4c47-a38f-e54b905cae03') Unexpected error (task:872)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 879, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/storage/task.py", line 333, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 1892, in finalizeMerge
merge.finalize(subchainInfo)
File "/usr/share/vdsm/storage/merge.py", line 271, in finalize
optimal_size = subchain.base_vol.optimal_size()
File "/usr/share/vdsm/storage/blockVolume.py", line 440, in optimal_size
check = qemuimg.check(self.getVolumePath(), qemuimg.FORMAT.QCOW2)
File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 157, in check
out = _run_cmd(cmd)
File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 426, in _run_cmd
raise QImgError(cmd, rc, out, err)
QImgError: cmd=['/usr/bin/qemu-img', 'check', '--output', 'json', '-f', 'qcow2', '/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f'], ecode=3, stdout={
"image-end-offset": 52210892800,
"total-clusters": 1638400,
"check-errors": 0,
"leaks": 323,
"leaks-fixed": 0,
"allocated-clusters": 795890,
"filename": "/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f",
"format": "qcow2",
"fragmented-clusters": 692941
}
, stderr=Leaked cluster 81919 refcount=1 reference=0
Leaked cluster 81920 refcount=1 reference=0
Leaked cluster 81921 refcount=1 reference=0
etc..
Is there any way to fix these leaked clusters?
Running oVirt 4.1.9 with FC block storage.
Thanks,
Alan
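For reference, not from the thread: qemu-img itself can repair leaked clusters. A hedged sketch, assuming the VM stays down, the volume has a backup, and (on block storage) the LV — named after the volume UUID inside the VG named after the storage domain UUID — is activated first:
lvchange -ay 024109d5-ea84-47ed-87e5-1c8681fdd177/ac540314-989d-42c2-9e7e-3907eedbe27f
qemu-img check -r leaks -f qcow2 /rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f
lvchange -an 024109d5-ea84-47ed-87e5-1c8681fdd177/ac540314-989d-42c2-9e7e-3907eedbe27f
# "-r leaks" only reclaims leaked clusters; "-r all" would also try to repair
# other inconsistencies, which is riskier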
5 years, 8 months
Re: low power, low cost glusterfs storage
by Strahil
The problem is that anything in that budget doesn't have a decent network plus enough storage slots.
Maybe a home-built workstation with an AMD Ryzen could do the trick, but that is way over budget compared to Raspberry Pis.
Best Regards,
Strahil Nikolov
On Mar 3, 2019 12:22, Jonathan Baecker <jonbae77(a)gmail.com> wrote:
>
> Hello everybody!
>
> Does anyone here have experience with a cheap, energy-saving GlusterFS
> storage solution? I'm thinking of something that has more power than a
> Raspberry Pi, with 3 x 2 TB (SSD) of storage, but doesn't cost much more and
> doesn't consume much more power.
>
> Would that be possible? I know the "Red Hat Gluster Storage"
> requirements, but are they generally so high? Only a few VM images would
> have to be on it...
>
> Greetings
>
> Jonathan
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWOA24SHB2C...
5 years, 8 months
Re: Advice around ovirt 4.3 / gluster 5.x
by Strahil
I think that there are updates available via 4.3.1.
Have you checked for gluster updates?
Best Regards,
Strahil Nikolov
On Mar 2, 2019 23:16, Endre Karlson <endre.karlson(a)gmail.com> wrote:
>
> Hi, should we downgrade / reinstall our cluster? We have a 4-node cluster that's breaking apart daily due to the issues with GlusterFS after upgrading from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a stable version at all?? **FRUSTRATION**
>
> Endre
5 years, 8 months
Cannot assign Network to Mellanox Interface
by cbop.mail@gmail.com
Hello,
I want to use my InfiniBand network for storage access, so I created a new network and tried to assign it to a host. When I do this, I get a success message, but the network is actually not assigned to my InfiniBand NIC.
I'm running the latest CentOS with the latest Mellanox OFED drivers, and multiple subnet managers are running on the network. The NIC is a Mellanox ConnectX-2 VPI, set to InfiniBand mode.
I didn't know which logs to send, so if you need to look into a log, just tell me which one you need.
I hope you can help me out.
Roman Meusch
5 years, 8 months
Re: Mounting CephFS
by Strahil
Hi Leo,
Check libvirt's logs on the destination host.
Maybe they can provide more information.
Best Regards,
Strahil Nikolov
On Mar 2, 2019 15:40, Leo David <leoalex(a)gmail.com> wrote:
>
> Thank you,
> I am trying to migrate a VM that has its disks on CephFS (as a POSIX domain, mounted on all hosts), and it does not work. I am not sure if this is normal, considering the VM's disks are on this type of storage. The errors in the engine log are:
>
> 2019-03-02 13:35:03,483Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-0) [] Migration of VM 'centos7-test' to host 'node1.internal' failed: VM destroyed during the startup.
> 2019-03-02 13:35:03,505Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-14) [] Rerun VM 'bec1cd40-9d62-4f6d-a9df-d97a79584441'. Called from VDS 'node2.internal'
> 2019-03-02 13:35:03,566Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-42967) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: centos7-test, Source: node2.internal, Destination: node1.internal).
>
> Any thoughts ?
>
> Thanks,
>
> Leo
>
>
> On Sat, Mar 2, 2019 at 11:59 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> If you mean storage migration - could be possible.
>> If it is about live migration between hosts - shouldn't happen.
>> Anything in the logs ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mar 2, 2019 09:23, Leo David <leoalex(a)gmail.com> wrote:
>>>
>>> Thank you Strahil, yes, I thought about that too, I'll give it a try.
>>> Now (to be a bit off-topic), it seems that I can't live migrate the VM, even though the CephFS mountpoint exists on all the hosts.
>>> Could it be that live migration is not possible because the storage type is "posix"?
>>>
>>> Thank you !
>>>
>>> On Sat, Mar 2, 2019, 04:05 Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>>
>>>> Can you try to set the credentials in a file (don't recall where that was for ceph) , so you can mount without specifying user/pass ?
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Mar 1, 2019 13:46, Leo David <leoalex(a)gmail.com> wrote:
>>>>>
>>>>> Hi Everyone,
>>>>> I am trying to mount CephFS as a POSIX storage domain and I am getting an error in vdsm.log, although the direct command run on the node, "mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==", works fine. I have configured:
>>>>> Storage type: POSIX compliant FS
>>>>> Path: 10.10.6.1:/sata/ovirt-data
>>>>> VFS Type: ceph
>>>>> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>>>>>
>>>>>
>>>>> 2019-03-01 11:35:33,457+0000 INFO (jsonrpc/4) [storage.Mount] mounting 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
>>>>> 2019-03-01 11:35:33,464+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
>>>>> 2019-03-01 11:35:33,471+0000 ERROR (jsonrpc/4) [storage.HSM] Could not connect to storageServer (hsm:2414)
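Following up on the suggestion above to check libvirt's logs on the destination host, a hedged sketch of where to look (log paths and the service name assume stock EL7 defaults):
less /var/log/libvirt/qemu/centos7-test.log          # per-VM qemu/libvirt log on the destination host
grep -iE 'error|destroyed' /var/log/vdsm/vdsm.log    # vdsm's view of the failed incoming migration
journalctl -u libvirtd --since '2019-03-02 13:30'    # libvirtd messages around the failure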
5 years, 8 months
oVirt 4.3 and dual-socket hosts: numad?
by Guillaume Pavese
Is it recommended to enable the numad service for automatic NUMA memory
optimization?
I could not find any recommendation on that front for oVirt; all the
benchmarks and documentation I could find are from around 2015.
I see that kernel automatic NUMA balancing is on:
cat /proc/sys/kernel/numa_balancing
1
Not sure which is better as of now.
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
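For reference, a hedged sketch of the two options (package and service names assume EL7 defaults; this is not an oVirt recommendation): numad and the kernel's automatic NUMA balancing are not meant to manage placement at the same time, so pick one or the other.
# option 1: keep kernel autonuma (the current setup) - nothing to do,
# /proc/sys/kernel/numa_balancing stays at 1 and numad is not installed
# option 2: try numad instead, and turn autonuma off
yum install -y numad
systemctl enable --now numad
echo 0 > /proc/sys/kernel/numa_balancing   # persist via a sysctl.d drop-in if it helps
# Explicit per-VM NUMA pinning in oVirt (Edit VM -> NUMA) is configured
# separately from either of these.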
5 years, 8 months
Re: Cannot Increase Hosted Engine VM Memory
by Douglas Duckworth
OK, sounds good.
I will upgrade and then hope this goes away.
On Thu, Jan 31, 2019, 12:09 PM Simone Tiraboschi <stirabos(a)redhat.com> wrote:
On Thu, Jan 31, 2019 at 4:20 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Hi Simone
Thanks again for your help!
Do you have some ideas on what I can try to resolve this issue?
Honestly, I'm not able to reproduce this issue.
I can only suggest upgrading to 4.2.8 if you are not there yet, and if it still does not work, opening a bug on Bugzilla with engine.log attached.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Fri, Jan 25, 2019 at 3:15 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Yes, I do. Gold crown indeed.
It's the "HostedEngine" as seen attached!
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Wed, Jan 23, 2019 at 12:02 PM Simone Tiraboschi <stirabos(a)redhat.com> wrote:
On Wed, Jan 23, 2019 at 5:51 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Hi Simone
Can I get help with this issue? Still cannot increase memory for Hosted Engine.
From the logs it seems that the engine is trying to hot plug memory into the engine VM, which is something that should not happen.
The engine should simply update the engine VM configuration in the OVF_STORE and require a reboot of the engine VM.
Quick question, in the VM panel do you see a gold crown symbol on the Engine VM?
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Thu, Jan 17, 2019 at 8:08 AM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Sure, they're attached. In "first attempt" the error seems to be:
2019-01-17 07:49:24,795-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-29) [680f82b3-7612-4d91-afdc-43937aa298a2] EVENT_ID: FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE(2,048), Failed to hot plug memory to VM HostedEngine. Amount of added memory (4000MiB) is not dividable by 256MiB.
Followed by:
2019-01-17 07:49:24,814-05 WARN [org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-29) [26f5f3ed] Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:49:24,815-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-29) [26f5f3ed] Updating RNG device of VM HostedEngine (adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}. New RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}.
In "second attempt" I used values that are dividable by 256 MiB so that's no longer present. Though same error:
2019-01-17 07:56:59,795-05 INFO [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) [7059a48f] START, SetAmountOfMemoryVDSCommand(HostName = ovirt-hv1.med.cornell.edu<http://ovirt-hv1.med.cornell.edu>, Params:{hostId='cdd5ffda-95c7-4ffa-ae40-be66f1d15c30', vmId='adf14389-1563-4b1a-9af6-4b40370a825b', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='7f7d97cc-c273-4033-af53-bc9033ea3abe', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='memory', type='MEMORY', specParams='[node=0, size=2048]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='6144'}), log id: 50873daa
2019-01-17 07:56:59,855-05 INFO [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) [7059a48f] FINISH, SetAmountOfMemoryVDSCommand, log id: 50873daa
2019-01-17 07:56:59,862-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [7059a48f] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the amount of memory on VM HostedEngine from 4096 to 4096
2019-01-17 07:56:59,881-05 WARN [org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-22) [28fd4c82] Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:56:59,882-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-22) [28fd4c82] Updating RNG device of VM HostedEngine (adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}. New RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}.
This message repeats throughout engine.log:
2019-01-17 07:55:43,270-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-89) [] EVENT_ID: VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host ovirt-hv1.med.cornell.edu<http://ovirt-hv1.med.cornell.edu> was guaranteed 8192 MB but currently has 4224 MB
As you can see in the attachment, the host has plenty of memory.
Thank you Simone!
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Thu, Jan 17, 2019 at 5:09 AM Simone Tiraboschi <stirabos(a)redhat.com> wrote:
On Wed, Jan 16, 2019 at 8:22 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Sorry for the accidental send.
Anyway, I am trying to increase the physical memory, but it won't go above 4096 MB. The hypervisor has 64 GB.
Do I need to modify this value with the hosted engine offline?
No, it's not required.
Can you please attach your engine.log for the relevant time frame?
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Wed, Jan 16, 2019 at 1:58 PM Douglas Duckworth <dod2014(a)med.cornell.edu> wrote:
Hello
I am trying to increase the hosted engine's physical memory above 4 GB.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu
O: 212-746-6305
F: 212-746-8690
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WGSXQVVPJJ2...
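For reference, a hedged sketch of the aligned-value approach described above (the URL, credentials and VM id are placeholders, and whether ?next_run=true behaves exactly like this should be checked against your API version): the memory value is given in bytes and has to be a multiple of 256 MiB, e.g. 16384 MiB = 17179869184 bytes.
curl -k -u admin@internal:password \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -X PUT -d '<vm><memory>17179869184</memory></vm>' \
     'https://engine.example.com/ovirt-engine/api/vms/<hosted-engine-vm-id>?next_run=true'
# The new value is written to the engine VM configuration (OVF_STORE) and is
# picked up after the engine VM is restarted, e.g. with
# hosted-engine --vm-shutdown followed by hosted-engine --vm-start.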
5 years, 8 months
Re: Mounting CephFS
by Strahil
If you mean storage migration - could be possible.
If it is about live migration between hosts - shouldn't happen.
Anything in the logs ?
Best Regards,
Strahil Nikolov
On Mar 2, 2019 09:23, Leo David <leoalex(a)gmail.com> wrote:
>
> Thank you Strahil, yes, I thought about that too, I'll give it a try.
> Now (to be a bit off-topic), it seems that I can't live migrate the VM, even though the CephFS mountpoint exists on all the hosts.
> Could it be that live migration is not possible because the storage type is "posix"?
>
> Thank you !
>
> On Sat, Mar 2, 2019, 04:05 Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> Can you try to set the credentials in a file (don't recall where that was for ceph) , so you can mount without specifying user/pass ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mar 1, 2019 13:46, Leo David <leoalex(a)gmail.com> wrote:
>>>
>>> Hi Everyone,
>>> I am trying to mount CephFS as a POSIX storage domain and I am getting an error in vdsm.log, although the direct command run on the node, "mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==", works fine. I have configured:
>>> Storage type: POSIX compliant FS
>>> Path: 10.10.6.1:/sata/ovirt-data
>>> VFS Type: ceph
>>> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>>>
>>>
>>> 2019-03-01 11:35:33,457+0000 INFO (jsonrpc/4) [storage.Mount] mounting 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
>>> 2019-03-01 11:35:33,464+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
>>> 2019-03-01 11:35:33,471+0000 ERROR (jsonrpc/4) [storage.HSM] Could not connect to storageServer (hsm:2414)
>>> Traceback (most recent call last):
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, in connectStorageServer
>>> conObj.connect()
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 180, in connect
>>> six.reraise(t, v, tb)
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect
>>> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
>>> cgroup=cgroup)
>>> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
>>> return callMethod()
>>> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
>>> **kwargs)
>>> File "<string>", line 2, in mount
>>> File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>>> raise convert_to_error(kind, result)
>>> MountError: (1, ';mount: unsupported option format: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
>>> Any thoughts on this? What could be wrong with the options field?
>>> Using oVirt 4.3.1
>>> Thank you very much and have a great day !
>>>
>>> Leo
>>>
>>> --
>>> Best regards, Leo David
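Picking up Strahil's suggestion to put the credentials in a file, a hedged sketch of the secretfile approach (the file path is a placeholder; the key is the one quoted above and the file would need to exist on every host):
echo 'AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
# manual test on a host, same as the working command but without the inline secret
mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secretfile=/etc/ceph/admin.secret
# and in the storage domain's Mount Options field:
#   name=admin,secretfile=/etc/ceph/admin.secret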
5 years, 8 months