"Upgrade" from oVirt to RHV
by Vinícius Ferrão
Hello,
I would like to know if there’s a supported path to move from oVirt to RHV.
oVirt is running on version 4.3.0.4-1.el7.
RHV would be version 4.3.3.7-0.1.el7.
I was thinking of reinstalling one host with RHV 4.3 and adding it to the oVirt hosted engine, moving all the VMs to the RHV host, and then repeating the process for the remaining hosts. After that, install a new RHV-M and retire the oVirt Engine. I'm not sure whether this would work or not.
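Whatever the path ends up being, I would take a full engine backup first (standard engine-backup invocation; the file names below are just placeholders):

engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup-$(date +%F).tar.gz \
    --log=/root/engine-backup-$(date +%F).log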
Thanks,
Unable to deploy Hyperconverged Engine Node - v4.3.3
by anonmix@gmail.com
Hi everyone,
I am attempting a Gluster hyperconvergence deployment; the Gluster part has completed successfully. All hosts are CentOS 7.6.1810 (fresh installs): two HP DL20 G9 (for VMs) and one HP 120 G7 (which hosts the Gluster arbiter volumes). Unfortunately I am unable to deploy the Engine; both the CLI and GUI approaches fail with the error below. At first sight it looks similar to https://lists.ovirt.org/pipermail/users/2018-March/087802.html, but I have configured a static IP (same subnet as the host), no DHCP. I also tried to force IPv4 with "/usr/sbin/ovirt-hosted-engine-setup --4", but the very same error was thrown in every case when trying to deploy the engine:
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "sub.sub.domain.tld", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "sub.domain.tld", "subject": "O=sub.domain.tld,CN=sub.sub.domain.tld"}, "cluster": {"href": "/ovirt-engine/api/clusters/f083f056-74fd-11e9-bba9-00163e522076", "id": "f083f056-74fd-11e9-bba9-00163e522076"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/dc4f5c15-4989-4454-ba46-3bd600796b69", "id": "dc4f5c15-4989-4454-ba46-3bd600796b69", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "sub.sub.domain.tld", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:L8YyAMcxLFJEng+CoDympwkpMwoagcBafI4fpLP4Kk0", "port": 22}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false, "vgpu_placement": "consolidated"}]}, "attempts": 120, "changed": false}
Unfortunately I don't really have an idea where to look, given the error message. The engine VM to be deployed is listed as a KVM VM, is accessible through the bridge, and appears to have started up completely; I can even access the Engine web interface (engine01.sub.domain.tld/ovirt-engine).
In /var/log/messages the following can be found ...
"May 13 12:40:55 host ansible-async_wrapper.py: 15505 still running (86015)
May 13 12:40:57 host python: ansible-ovirt_host_facts Invoked with all_content=False pattern=name=sub.sub.domain.tld fetch_nested=False nested_attributes=[] auth={'timeout': 0, 'url': 'https://engine01.sub.domain.tld/ovirt-engine/api', 'insecure': True, 'kerberos': False, 'compress': True, 'headers': None, 'token': '8s-vELzQqNTR6l7-KRuqnYLE3sVwVWU5NxiNWzc-s2CllaQG_5YZ32fCFkVsAgwEyLWjPIOxvyS-_4js-VYFFQ', 'ca_file': None}"
... and after 120 attempts Ansible stops and fails with a deployment error. When retrying after removing the VM and running ovirt-hosted-engine-cleanup, the very same error is thrown.
What is a bit weird is this entry in /var/log/ovirt-hosted-engine-setup/
./engine-logs-2019-05-13T12:26:20Z/ovirt-engine/engine.log:2019-05-13 12:34:40,369Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog] (EE-ManagedThreadFactory-engine-Thread-1) [12746235] SSH error running command root(a)sub.sub.domain.tld:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine DIALOG/customization=bool:True': RuntimeException: Unexpected error during execution: bash: /tmp/ovirt-pTVEEzlb8b/ovirt-host-deploy: Permission denied
./engine-logs-2019-05-13T12:26:20Z/ovirt-engine/engine.log:2019-05-13 12:34:40,406Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [12746235] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host sub.sub.domain.tld: Unexpected error during execution: bash: /tmp/ovirt-pTVEEzlb8b/ovirt-host-deploy: Permission denied
Could that be the cause and how can I fix it? What else do you guys need?
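One thing I still want to rule out on my side (an assumption on my part, not something the logs confirm): /tmp on the host being mounted with noexec, which would explain the "Permission denied" when ovirt-host-deploy is executed from /tmp. I would check it like this:

findmnt -no OPTIONS /tmp        # look for 'noexec' in the mount options
mount -o remount,exec /tmp      # temporarily allow execution for the deployment run (only relevant if /tmp is a separate mount)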
Thanks in advance, Martin
Engine restore errors out on "Wait for OVF_STORE disk content"
by Andreas Elvers
Hello,
when trying to deploy the engine using "hosted-engine --deploy --restore-from-file=myenginebackup", the Ansible playbook errors out at:
[ INFO ] TASK [ovirt.hosted_engine_setup : Trigger hosted engine OVF update and enable the serial console]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait until OVF update finishes]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Parse OVF_STORE disk list]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check OVF_STORE volume status]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]
[ ERROR ] {u'_ansible_parsed': True, u'stderr_lines': [u'20+0 records in', u'20+0 records out', u'10240 bytes (10 kB) copied, 0.000141645 s, 72.3 MB/s', u'tar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive', u'tar: Exiting with failure status due to previous errors'], u'changed': True, u'end': u'2019-05-08 15:21:47.595195', u'_ansible_item_label': {u'image_id': u'65fd6c57-033c-4c95-87c1-b16c26e4bc98', u'name': u'OVF_STORE', u'id': u'9ff8b389-5e24-4166-9842-f1d6104b662b'}, u'stdout': u'', u'failed': True, u'_ansible_item_result': True, u'msg': u'non-zero return code', u'rc': 2, u'start': u'2019-05-08 15:21:46.906877', u'attempts': 12, u'cmd': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=9ff8b389-5e24-4166-9842-f1d6104b662b volumeID=65fd6c57-033c-4c95-87c1-b16c26e4bc98 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf", u'item': {u'image_id': u'65fd6c57-033c-4c95-87c1-b16c26e4bc98', u'name': u'OVF_STORE', u'id': u'9ff8b389-5e24-4166-9842-f1d6104b662b'}, u'delta': u'0:00:00.688318', u'invocation': {u'module_args': {u'warn': False, u'executable': None, u'_uses_shell': True, u'_raw_params': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=9ff8b389-5e24-4166-9842-f1d6104b662b volumeID=65fd6c57-033c-4c95-87c1-b16c26e4bc98 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout_lines': [], u'stderr': u'20+0 records in\n20+0 records out\n10240 bytes (10 kB) copied, 0.000141645 s, 72.3 MB/s\ntar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors', u'_ansible_no_log': False}
[ ERROR ] {u'_ansible_parsed': True, u'stderr_lines': [u'20+0 records in', u'20+0 records out', u'10240 bytes (10 kB) copied, 0.000140541 s, 72.9 MB/s', u'tar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive', u'tar: Exiting with failure status due to previous errors'], u'changed': True, u'end': u'2019-05-08 15:24:01.387469', u'_ansible_item_label': {u'image_id': u'dacf9ad8-77b9-4205-8ca2-d6877627ad4a', u'name': u'OVF_STORE', u'id': u'8691076a-8e45-4429-a18a-5faebef866cc'}, u'stdout': u'', u'failed': True, u'_ansible_item_result': True, u'msg': u'non-zero return code', u'rc': 2, u'start': u'2019-05-08 15:24:00.660309', u'attempts': 12, u'cmd': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=8691076a-8e45-4429-a18a-5faebef866cc volumeID=dacf9ad8-77b9-4205-8ca2-d6877627ad4a | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf", u'item': {u'image_id': u'dacf9ad8-77b9-4205-8ca2-d6877627ad4a', u'name': u'OVF_STORE', u'id': u'8691076a-8e45-4429-a18a-5faebef866cc'}, u'delta': u'0:00:00.727160', u'invocation': {u'module_args': {u'warn': False, u'executable': None, u'_uses_shell': True, u'_raw_params': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=8691076a-8e45-4429-a18a-5faebef866cc volumeID=dacf9ad8-77b9-4205-8ca2-d6877627ad4a | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout_lines': [], u'stderr': u'20+0 records in\n20+0 records out\n10240 bytes (10 kB) copied, 0.000140541 s, 72.9 MB/s\ntar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors', u'_ansible_no_log': False}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
I tried twice. Same result. Should I retry?
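Before retrying, one check I can do myself, derived from the cmd shown in the errors above (UUIDs copied from the first error, and with the specific .ovf name dropped so tar lists every member), is to see which OVF files the OVF_STORE tar actually contains:

vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=9ff8b389-5e24-4166-9842-f1d6104b662b volumeID=65fd6c57-033c-4c95-87c1-b16c26e4bc98 \
  | grep path | awk '{ print $2 }' \
  | xargs -I{} sudo -u vdsm dd if={} | tar -tvf -

If ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf is genuinely missing from both OVF_STORE disks, that would at least confirm the playbook is failing for a real reason rather than a timing one.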
Is it safe to use the local hosted engine for starting/stopping VMs? I'm kind of headless for some days :-)
Best regards.
ovirt 4.3.3.7 cannot create a gluster storage domain
by Strahil Nikolov
Hey guys,
I recently (yesterday) updated my platform to the latest available version (v4.3.3.7) and upgraded to Gluster v6.1. The setup is a hyperconverged 3-node cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is for gluster communication), while ovirt3 is the arbiter.
Today I tried to add new storage domains, but they fail with the following:
2019-05-16 10:15:21,296+0300 INFO (jsonrpc/2) [vdsm.api] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" from=::ffff:192.168.1.2,43864, flow_id=4a54578a, task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in createStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in createStorageDomain
storageType, domVersion, block_size, alignment)
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in create
block_size)
File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in _prepareMetadata
cls.format_external_leases(sdUUID, xleases_path)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in format_external_leases
xlease.format_index(lockspace, backend)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in format_index
index.dump(file)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in dump
file.pwrite(INDEX_BASE, self._buf)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in pwrite
self._run(args, data=buf[:])
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1093, in _run
raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,296+0300 INFO (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') aborting: Task is aborted: u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', u\'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\', \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', \'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' err="/usr/bin/dd: error writing \'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\': Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2019-05-16 10:15:21,297+0300 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" (dispatcher:87)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
return m(self, *a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,297+0300 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 351) in 0.45 seconds (__init__:312)
2019-05-16 10:15:22,068+0300 INFO (jsonrpc/1) [vdsm.api] START disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=gluster2:ovirt3', u'id': u'7442e9ab-dc54-4b9a-95d9-5d98a1e81b05', u'connection': u'gluster1:/data_fast2', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.1.2,43864, flow_id=33ced9b2-cdd5-4147-a223-d0eb398a2daf, task_id=a9a8f90a-1603-40c6-a959-3cbff29d1d7b (api:48)
2019-05-16 10:15:22,068+0300 INFO (jsonrpc/1) [storage.Mount] unmounting /rhev/data-center/mnt/glusterSD/gluster1:_data__fast2 (mount:212)
I tested by mounting the volume manually and trying again:
[root@ovirt1 logs]# mount -t glusterfs -o backupvolfile-server=gluster2:ovirt3 gluster1:/data_fast2 /mnt
[root@ovirt1 logs]# cd /mnt/
[root@ovirt1 mnt]# ll
total 0
[root@ovirt1 mnt]# dd if=/dev/zero of=file bs=4M status=progress count=250
939524096 bytes (940 MB) copied, 8.145447 s, 115 MB/s
250+0 records in
250+0 records out
1048576000 bytes (1.0 GB) copied, 9.08347 s, 115 MB/s
[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync status=progress
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 46.5877 s, 0.0 kB/s
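One more data point I can gather (my assumption is that the "Invalid argument" comes from O_DIRECT alignment restrictions on this mount, e.g. a 4K-sector device such as VDO underneath): compare an aligned and an unaligned direct write on the same mount:

dd if=/dev/zero of=/mnt/align_test bs=4096 count=1 oflag=direct      # 4096-byte aligned write
dd if=/dev/zero of=/mnt/align_test bs=256512 count=1 oflag=direct    # the block size vdsm uses in the failing command

If the first succeeds and the second fails with "Invalid argument", the problem is write size/alignment rather than permissions. (I also realize my manual dd above had no if=, so it was just waiting on stdin - hence the 0 bytes until I pressed Ctrl-C.)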
Can someone give a hint? Maybe it's related to Gluster v6?
Can someone test with an older version of Gluster?
Best Regards,Strahil Nikolov
Error while executing action Change CD: Drive image file could not be found
by racevedo@lenovo.com
I see that this was apparently a bug in a previous version, but I'm currently using oVirt Node 4.3.3.1. This was working fine before updating from oVirt Node 4.2. The error occurs when trying to change the CD on a running VM.
Log shows:
2019-05-03 12:09:24,731-04 ERROR [org.ovirt.engine.core.bll.storage.disk.ChangeDiskCommand] (default task-52) [309a3a99-c754-45d5-add7-c45e094d2e69] Command 'org.ovirt.engine.core.bll.storage.disk.ChangeDiskCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to ChangeDiskVDS, error = Drive image file could not be found, code = 13 (Failed with error imageErr and code 13)
Any advice on how to solve this?
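One thing I can check myself on the host the VM runs on (my assumption being that "Drive image file could not be found" means the host simply can't see the ISO path anymore after the upgrade): whether the ISO file is actually visible under the storage domain mount points:

find /rhev/data-center/mnt -iname '*.iso' -ls

If nothing shows up there, the domain holding the image probably isn't mounted or active on that host.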
Current configuration:
OS Version : RHEL - 7 - 6.1810.2.el7.centos
OS Description : oVirt Node 4.3.3.1
Kernel Version : 3.10.0 - 957.10.1.el7.x86_64
KVM Version : 2.12.0 - 18.el7_6.3.1
LIBVIRT Version : libvirt-4.5.0-10.el7_6.6
VDSM Version : vdsm-4.30.13-1.el7
[ANN] oVirt 4.3.4 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.4 Second Release Candidate, as of May 22nd, 2019.
This update is a release candidate of the fourth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used
in production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.4 release highlights:
http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://redhat.com/summit>
vGPU becomes very inefficient when I use it in ovirt instead of qemu-kvm
by weizhengya16@gmail.com
We assign vGPUs to virtual machines. I find that the same type of vGPU performs very differently under oVirt and under plain qemu-kvm.
When I assign a vGPU to an oVirt virtual machine, its performance becomes very strange: some games run efficiently and reach 200+ FPS, while other games can't even reach 10 FPS.
Under plain qemu-kvm, the vGPU always performs efficiently and runs all of these games at 200+ FPS.
I've been stuck on this for weeks and I need it to run efficiently all the time. Can someone help me? Thank you very, very much.
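If it helps, I can also attach the libvirt domain XML from both setups so the differences (CPU pinning, hugepages, how the vGPU mdev is attached, and so on) are visible. I would dump it like this (the VM names are placeholders):

virsh -r dumpxml <ovirt-vm-name> > ovirt-vm.xml        # on the oVirt host, read-only connection
virsh dumpxml <plain-kvm-vm-name> > plain-kvm-vm.xml   # on the plain qemu-kvm host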
Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Strahil
Dear Krutika,
Yes, I did, but I use 6 ports (1 Gbit/s each), and this is the reason reads get slower.
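For reference, flipping the option back and forth for a quick comparison is just (volume name taken from my volume info further down):

gluster volume set data_fast cluster.choose-local on    # or off
gluster volume get data_fast cluster.choose-local       # confirm the current value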
Do you know a way to force gluster to open more connections (client to server & server to server)?
Thanks for the detailed explanation.
Best Regards,
Strahil Nikolov
On May 21, 2019 08:36, Krutika Dhananjay <kdhananj(a)redhat.com> wrote:
>
> So in our internal tests (with nvme ssd drives, 10g n/w), we found read performance to be better with choose-local
> disabled in hyperconverged setup. See https://bugzilla.redhat.com/show_bug.cgi?id=1566386 for more information.
>
> With choose-local off, the read replica is chosen randomly (based on hash value of the gfid of that shard).
> And when it is enabled, the reads always go to the local replica.
> We attributed better performance with the option disabled to bottlenecks in gluster's rpc/socket layer. Imagine all read
> requests lined up to be sent over the same mount-to-brick connection as opposed to (nearly) randomly getting distributed
> over three (because replica count = 3) such connections.
>
> Did you run any tests that indicate "choose-local=on" is giving better read perf as opposed to when it's disabled?
>
> -Krutika
>
> On Sun, May 19, 2019 at 5:11 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Ok,
>>
>> so it seems that Darell's case and mine are different as I use vdo.
>>
>> Now I have destroyed Storage Domains, gluster volumes and vdo and recreated again (4 gluster volumes on a single vdo).
>> This time vdo has '--emulate512=true' and no issues have been observed.
>>
>> Gluster volume options before 'Optimize for virt':
>>
>> Volume Name: data_fast
>> Type: Replicate
>> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
>> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
>> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.enable-shared-storage: enable
>>
>> Gluster volume after 'Optimize for virt':
>>
>> Volume Name: data_fast
>> Type: Replicate
>> Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
>> Status: Stopped
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/gluster_bricks/data_fast/data_fast
>> Brick2: gluster2:/gluster_bricks/data_fast/data_fast
>> Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
>> Options Reconfigured:
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: on
>> cluster.enable-shared-storage: enable
>>
>> After that adding the volumes as storage domains (via UI) worked without any issues.
>>
>> Can someone clarify why we now have 'cluster.choose-local: off' when in oVirt 4.2.7 (gluster v3.12.15) we didn't have that?
>> I'm using storage that is faster than network and reading from local brick gives very high read speed.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>> On Sunday, May 19, 2019, 9:47:27 AM G
Supporting comments on ovirt-site Blog section
by Roy Golan
It would be very useful to have a comment section on our ovirt site.
It is quite standard to have that on every blog out there, and for a reason - you get feedback and conversation around the topic without going somewhere else (users list, IRC, etc...).
What do we need to do to help the middle man with that?
Regards,
Roy
Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster
by Strahil
I have edited my multipath.conf to exclude local disks, but you need to set '#VDSM private' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would on any Linux system.
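A minimal sketch of the relevant part of my file (keep the private marker mentioned above near the top so vdsm doesn't regenerate the file; the wwid is of course specific to each local disk and can be read with '/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb' or taken from 'multipath -ll'):

blacklist {
    wwid "replace-with-the-wwid-of-the-local-disk"
}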
Best Regards,
Strahil Nikolov
On Apr 23, 2019 01:07, adrianquintero(a)gmail.com wrote:
>
> Thanks Alex, that makes more sense now. While trying to follow the instructions provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and marked as "multipath_member", hence not letting me create new bricks. In the logs I see:
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
> Same thing for sdc, sdd
>
> Should I manually edit the filters inside the OS, and what would be the impact?
>
> thanks again.