Re: Mounting CephFS
by Strahil
Can you try to set the credentials in a file (I don't recall where that was for Ceph), so you can mount without specifying user/pass?
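If memory serves, the kernel client has a "secretfile=" option for exactly this, so the key never has to appear on the command line. Roughly something like the following (the file path is only an example):

  echo 'AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==' > /etc/ceph/admin.secret
  chmod 600 /etc/ceph/admin.secret
  mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secretfile=/etc/ceph/admin.secret

If that works by hand, the same "name=admin,secretfile=/etc/ceph/admin.secret" string could then be tried in the storage domain's Mount Options field.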
Best Regards,
Strahil Nikolov
On Mar 1, 2019 13:46, Leo David <leoalex(a)gmail.com> wrote:
>
> Hi Everyone,
> I am trying to mount cephfs as a posix storage domain and getting an error in vdsm.log, although the direct command run on the node " mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I have configured:
> Storage type: POSIX compliant FS
> Path: 10.10.6.1:/sata/ovirt-data
> VFS Type: ceph
> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>
>
> 2019-03-01 11:35:33,457+0000 INFO (jsonrpc/4) [storage.Mount] mounting 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
> 2019-03-01 11:35:33,464+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2019-03-01 11:35:33,471+0000 ERROR (jsonrpc/4) [storage.HSM] Could not connect to storageServer (hsm:2414)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, in connectStorageServer
> conObj.connect()
> File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 180, in connect
> six.reraise(t, v, tb)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
> cgroup=cgroup)
> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
> return callMethod()
> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
> **kwargs)
> File "<string>", line 2, in mount
> File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
> raise convert_to_error(kind, result)
> MountError: (1, ';mount: unsupported option format: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
> Any thoughts on this? What could be wrong with the options field?
> Using oVirt 4.3.1
> Thank you very much and have a great day !
>
> Leo
>
> --
> Best regards, Leo David
impact of --emulate512 setting for VDO volumes
by Guillaume Pavese
Hello,
We are planning to deploy VDO with oVirt 4.3 on CentOS 7.6 (on SSD devices).
As oVirt does not support 4K devices yet, VDO volumes are created with the
parameter "--emulate512=enabled".
What are the implications of this setting? Does it impact performance? If
so, is it IOPS or throughput that is impacted? What about reliability (is
that mode as well tested as the standard mode)?
As I saw on RH Bugzilla, support for 4K devices in oVirt will need to wait
at least for CentOS 7.7.
Once that is supported, would it be possible to transition/upgrade an
--emulate512 VDO volume to a standard one?
Thanks,
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
Re: oVirt Performance (Horrific)
by Strahil
Another option is to try nfs-ganesha - in my case I can reach 80MB/s sequential writes in the VM.
Best Regards,
Strahil Nikolov
On Mar 1, 2019 00:15, Jayme <jaymef(a)gmail.com> wrote:
>
> Also one more thing, did you make sure to set up the 10Gb gluster network in oVirt and set migration and VM traffic to use the gluster network?
>
> On Thu, Feb 28, 2019 at 6:11 PM Jayme <jaymef(a)gmail.com> wrote:
>>
>> Check the volumes in the oVirt admin UI and make sure the "optimize volume for VM storage" option is selected.
>>
>> I have a three-node oVirt HCI with SSD-backed Gluster storage and a 10Gb storage network, and I write at around 50-60 megabytes per second from within VMs. Before I applied the optimize-for-VM-storage setting it was about 10x less. IIRC the optimize setting is supposed to be set by the installer by default, but it wasn't set in my case when I used Cockpit to deploy (this was a little while ago on oVirt 4.2). Worth checking anyway.
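>> If it wasn't applied, my understanding is that the option simply applies Gluster's "virt" option group to the volume, so a rough CLI equivalent would be something like the following (volume name is just an example):
>>
>>   gluster volume set data group virt
>>   gluster volume get data all | grep -E 'shard|remote-dio|eager-lock'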
>>
>> On Thu, Feb 28, 2019 at 4:51 PM Drew R <drew.rash(a)gmail.com> wrote:
>>>
>>> I expect VMs to be able to write at 200MB/s or 500MB/s to the HDD/SSD when using an NFS mount or Gluster from inside a VM.
>>>
>>> But heck, it's not even 10% (20MB/s or 50MB/s); it's closer to 2.5%-5% of the speed of the drives.
>>>
>>> I expect Windows services not to time out due to slow hard drives while booting.
>>> I expect a Windows VM to boot in less than 6 minutes.
>>> I expect to open Chrome in less than 1 minute.
>>>
>>> :) I was hoping for at least half speed or quarter speed. I'd be OK with 50MB/s or 100MB/s writes.
>>>
>>> I think what I was expecting was that it would perform out of the box. But there appears to be something we've done wrong, or a tweak we aren't applying.
>>>
>>> But we can't even run one Windows VM at a level anyone would remotely call "performs very well".
>>>
>>> Please guide me. Do you have any commands you want me to run? Or tests to do? Do you want a printout, or is there a specific configuration you would like to try?
Mounting CephFS
by Leo David
Hi Everyone,
I am trying to mount cephfs as a posix storage domain and getting an error
in vdsm.log, although the direct command run on the node " mount -t ceph
10.10.6.1:/sata/ovirt-data /cephfs-sata/ -o
name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I
have configured:
Storage type: POSIX compliant FS
Path: 10.10.6.1:/sata/ovirt-data
VFS Type: ceph
Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
2019-03-01 11:35:33,457+0000 INFO (jsonrpc/4) [storage.Mount] mounting 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
2019-03-01 11:35:33,464+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2019-03-01 11:35:33,471+0000 ERROR (jsonrpc/4) [storage.HSM] Could not connect to storageServer (hsm:2414)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, in connectStorageServer
    conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 180, in connect
    six.reraise(t, v, tb)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
    cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in mount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
MountError: (1, ';mount: unsupported option format: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
Any thoughts on this? What could be wrong with the options field?
Using oVirt 4.3.1
Thank you very much and have a great day !
Leo
--
Best regards, Leo David
Gluster messages after upgrade to 4.3.1
by Stefano Danzi
Hello,
I've just upgraded to version 4.3.1 and I can see this message in the
gluster log on all my hosts (running oVirt Node):
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to
dispatch handler" repeated 59 times between [2019-03-01 10:21:42.099983]
and [2019-03-01 10:23:38.340971
Another strange thing:
A VM was running. I shut it down by mistake. After that I was no longer able
to run this VM; the error was "Bad volume specification".
After a little investigation I noticed that the disk image was no longer
owned by vdsm:kvm but by root:root. I changed it back to the correct value
and the VM started fine.
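For anyone hitting the same thing, the fix was just a chown on the volume file; the path below is only a placeholder for the actual image path (and on oVirt Node vdsm:kvm maps to uid/gid 36:36, so chown 36:36 is equivalent):

  chown vdsm:kvm /rhev/data-center/mnt/<storage_domain>/<sd_uuid>/images/<img_uuid>/<vol_uuid>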
Ovirt 4.3.1 Install failed
by kiv@intercom.pro
Hi all.
Testing a fresh install of 4.3.1.
Installing Node 4.3 - OK.
Installing the Engine from Cockpit - not working.
Installing the Engine from the CLI - not working; we get this error:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The resolved address doesn't resolve on the selected interface\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
VLANs are not used.
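In case it helps, that error usually means the FQDN used by setup does not resolve to an IP address actually configured on the interface selected during deployment. A quick sanity check on the host could look like this (the interface name is only an example):

  hostname -f
  getent hosts $(hostname -f)
  ip -4 addr show dev eth0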
[oVirt 4.3.1-RC2 Test Day] Hyperconverged HE Deployment
by Guillaume Pavese
Hi, I tried again today to deploy HE on Gluster with oVirt 4.3.1 RC2 on a
clean nested environment (no previous deploy attempts to clean up first).
Gluster was deployed without problems from Cockpit.
I then snapshotted my VMs before trying to deploy HE, first from Cockpit,
then from the command line.
Both attempts failed at the same spot:
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of
steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using
username/password credentials]
[ ERROR ] ConnectionError: Error while sending HTTP request: (7, 'Failed
connect to vs-inf-int-ovt-fr-301-210.hostics.fr:443; No route to host')
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false,
"msg": "Error while sending HTTP request: (7, 'Failed connect to
vs-inf-int-ovt-fr-301-210.hostics.fr:443; No route to host')"}
Please specify the storage you would like to use (glusterfs, iscsi, fc,
nfs)[nfs]:
[root@vs-inf-int-kvm-fr-301-210 ~]# traceroute vs-inf-int-ovt-fr-301-210.hostics.fr
traceroute to vs-inf-int-ovt-fr-301-210.hostics.fr (192.168.122.147), 30 hops max, 60 byte packets
 1  vs-inf-int-kvm-fr-301-210.hostics.fr (192.168.122.1)  3006.344 ms !H  3006.290 ms !H  3006.275 ms !H
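Since the bootstrap VM sits on what looks like libvirt's default 192.168.122.0/24 NAT network, a few host-side checks that might narrow this down (commands purely illustrative):

  virsh net-list --all
  ping -c1 192.168.122.147
  firewall-cmd --list-all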
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group