Weird problem starting VMs in oVirt-4.4
by Joop
Hi All,
Just had a rather new experience: starting a VM worked, but it dropped into
the grub2 rescue console because something was wrong with its virtio-scsi
disk. The message is:
Booting from Hard Disk ...
error: ../../grub-core/kern/dl.c:266: invalid arch-independent ELF magic.
entering rescue mode...
Doing a Ctrl-Alt-Del through the SPICE console lets the VM boot correctly.
Shutting it down and repeating the procedure, I get the disk problem every
time. The weird thing is that if I activate the boot menu and then start the
VM straight away, all is OK.
I don't see any ERROR messages in either vdsm.log or engine.log.
If I had to guess, it looks like the disk image isn't connected yet when the
VM boots, but that's weird, isn't it?
Regards,
Joop
4 years, 5 months
Recreate OVF_STORE
by Николай Чаплинский
After some trouble with the Gluster storage I found that the OVF_STORE file
was missing. I recreated the file with dd, but now I get errors in agent.log.
What is the right way to recreate the OVF_STORE file?
oVirt 4.3.
agent.log:
MainThread::ERROR::2020-07-10
11:34:19,866::agent::145::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Trying to restart agent
MainThread::INFO::2020-07-10
11:34:19,866::agent::89::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
Agent shutting down
MainThread::INFO::2020-07-10
11:34:30,243::agent::67::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
ovirt-hosted-engine-ha agent 2.3.6 started
MainThread::INFO::2020-07-10
11:34:30,305::hosted_engine::234::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
Found certificate common name: sponode1.vdispo.ru
MainThread::INFO::2020-07-10
11:34:30,421::hosted_engine::543::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Initializing ha-broker connection
MainThread::INFO::2020-07-10
11:34:30,423::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor network, options {
'tcp_t_address': '', 'network_test': 'dns', 'tcp_t_port': '', 'addr':
'10.1.99.1'}
MainThread::INFO::2020-07-10
11:34:30,427::brokerlink::92::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id network
MainThread::INFO::2020-07-10
11:34:30,427::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name': 'ovirtmgmt', 'address': '0'}
MainThread::INFO::2020-07-10
11:34:30,432::brokerlink::92::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id mgmt-bridge
MainThread::INFO::2020-07-10
11:34:30,432::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor mem-free, options
{'use_ssl': 'true', 'address': '0'}
MainThread::INFO::2020-07-10
11:34:30,434::brokerlink::92::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id mem-free
MainThread::INFO::2020-07-10
11:34:30,434::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor cpu-load-no-engine, options {'use_ssl': 'true', 'vm_uuid': '141397e0-050a-4e6a-9fe7-9042184ca2a8', 'address': '0'}
MainThread::INFO::2020-07-10
11:34:30,437::brokerlink::92::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id cpu-load-no-engine
MainThread::INFO::2020-07-10
11:34:30,437::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': '141397e0-050a-4e6a-9fe7-9042184ca2a8', 'address': '0'}
MainThread::INFO::2020-07-10
11:34:30,439::brokerlink::92::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id engine-health
MainThread::INFO::2020-07-10
11:34:30,440::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor storage-domain, options {'sd_uuid': '927801cc-f7fc-40bb-9bb9-92b2b19a5087'}
MainThread::INFO::2020-07-10
11:34:30,441::brokerlink::92::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id storage-domain
MainThread::INFO::2020-07-10
11:34:30,441::hosted_engine::565::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Broker initialized,
all submonitors started
MainThread::INFO::2020-07-10
11:34:30,700::upgrade::979::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade_35_36)
Host configuration is already up-to-date
MainThread::INFO::2020-07-10
11:34:30,799::ovf_store::120::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:7a6cb900-03f2-4f3f-915c-633482b2f28c, volUUID:1beb6ac5-b55c-44dc-8d71-7ea9e85c2909
MainThread::INFO::2020-07-10
11:34:31,353::ovf_store::120::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:7ae16997-0b97-4c1f-a06b-1a57dcec7445, volUUID:ad3561e2-ae47-41e9-9e5b-c903f2391e67
MainThread::INFO::2020-07-10
11:34:31,844::ovf_store::151::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path: /var/run/vdsm/storage/927801cc-f7fc-40bb-9bb9-92b2b19a5087/7ae16997-0b97-4c1f-a06b-1a57dcec7445/ad3561e2-ae47-41e9-9e5b-c903f2391e67
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: CAP=1073741824
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: CTIME=1590138019
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: DESCRIPTION={"DiskAlias":"he_metadata","DiskDescription":"Hosted-Engine metadata disk"}
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: DISKTYPE=HEMD
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: DOMAIN=927801cc-f7fc-40bb-9bb9-92b2b19a5087
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: FORMAT=RAW
MainThread::ERROR::2020-07-10
11:34:31,861::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: GEN=0
MainThread::ERROR::2020-07-10
11:34:31,862::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: IMAGE=cdd8e110-ccae-4ff2-9215-18e04227e839
MainThread::ERROR::2020-07-10
11:34:31,862::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: LEGALITY=LEGAL
MainThread::ERROR::2020-07-10
11:34:31,862::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: PUUID=00000000-0000-0000-0000-000000000000
MainThread::ERROR::2020-07-10
11:34:31,862::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: TYPE=PREALLOCATED
MainThread::ERROR::2020-07-10
11:34:31,862::metadata::67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(parse_global_metadata_to_dict)
Invalid global metadata key: VOLTYPE=LEAF
MainThread::ERROR::2020-07-10
11:34:31,862::hosted_engine::452::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Unhandled monitoring loop exception
Traceback (most recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 449, in start_monitoring
self._monitoring_loop()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 468, in _monitoring_loop
for old_state, state, delay in self.fsm:
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/fsm/machine.py",
line 127, in next
new_data = self.refresh(self._state.data)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/state_machine.py",
line 81, in refresh
stats.update(self.hosted_engine.collect_stats())
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 760, in collect_stats
data["cluster"] = self.process_global_metadata(all_stats.pop(0))
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 830, in process_global_metadata
md = metadata.parse_global_metadata_to_dict(self._log, data)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/metadata.py",
line 63, in parse_global_metadata_to_dict
k, v = token.split('=')
ValueError: need more than 1 value to unpack
MainThread::ERROR::2020-07-10
11:34:31,864::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Traceback (most recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 131, in _run_agent
return action(he)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 55, in action_proper
return he.start_monitoring()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 456, in start_monitoring
self.publish(stopped)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 355, in publish
blocks = self._generate_local_blocks(state)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 652, in _generate_local_blocks
lm = state.data.stats.local
AttributeError: 'NoneType' object has no attribute 'local'
MainThread::ERROR::2020-07-10
11:34:31,864::agent::145::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Trying to restart agent
MainThread::INFO::2020-07-10
11:34:31,864::agent::89::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
Agent shutting down
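The traceback points at the global metadata parsing step: the agent reads the
hosted-engine metadata area and splits every token on '='. Below is a minimal
sketch of that step, written only to illustrate the failure; it is not the
actual ovirt_hosted_engine_ha code, and the sample tokens are taken from the
log above.

# Minimal illustration (Python) of the parse step from the traceback; the real
# code is in ovirt_hosted_engine_ha/lib/metadata.py and is more involved.
def parse_global_metadata_to_dict(data):
    result = {}
    for token in data.splitlines():
        if not token:
            continue
        # Every token is expected to look like KEY=value; a token without any
        # '=' makes the unpacking fail, which is the ValueError in the traceback.
        key, value = token.split('=', 1)
        result[key] = value
    return result

# Tokens like the ones logged as "Invalid global metadata key" still parse;
# they are just volume metadata (CAP, CTIME, ...) rather than the keys the
# agent expects to find in its own metadata area:
print(parse_global_metadata_to_dict("CAP=1073741824\nCTIME=1590138019"))

# A region overwritten by dd contains tokens with no '=' at all, and the
# parser fails exactly as in the log:
try:
    parse_global_metadata_to_dict("arbitrary bytes written by dd")
except ValueError as exc:
    print("parse failed:", exc)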
OS Version:
RHEL - 7 - 7.1908.0.el7.centos
OS Description:
CentOS Linux 7 (Core)
Kernel Version:
3.10.0 - 1127.8.2.el7.x86_64
KVM Version:
2.12.0 - 44.1.el7_8.1
LIBVIRT Version:
libvirt-4.5.0-33.el7_8.1
VDSM Version:
vdsm-4.30.46-1.el7
SPICE Version:
0.14.0 - 9.el7
GlusterFS Version:
glusterfs-7.5-1.el7
CEPH Version:
librbd1-10.2.5-4.el7
Open vSwitch Version:
openvswitch-2.11.0-4.el7
Kernel Features:
PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption:
Disabled
4 years, 5 months
Problem with backing up oVirt 4.4 with the SDK
by Łukasz Kołaciński
Hello,
I am trying to do a full backup on oVirt 4.4 using the SDK. I followed the
steps from this YouTube video: https://www.youtube.com/watch?v=E2VWUVcycj4
and got an error after running backup_vm.py. I can see that the SDK imported
the disks and created the backup entity, and then I got an sdk.NotFoundError
exception. I also tried to do a full backup with the API; after finalizing,
the backup disappeared (I think) and I couldn't try an incremental one.
[ 0.0 ] Starting full backup for VM '51708c8e-6671-480b-b2d8-199a1af9cbdc'
Password:
[ 4.2 ] Waiting until backup 0458bf7f-868c-4859-9fa7-767b3ec62b52 is ready
Traceback (most recent call last):
File "./backup_vm.py", line 343, in start_backup
backup = backup_service.get()
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/services.py", line 32333, in get
return self._internal_get(headers, query, wait)
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 211, in _internal_get
return future.wait() if wait else future
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 55, in wait
return self._code(response)
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 208, in callback
self._check_fault(response)
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 130, in _check_fault
body = self._internal_read_body(response)
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 312, in _internal_read_body
self._raise_error(response)
File "/usr/lib64/python3.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
raise error
ovirtsdk4.NotFoundError: HTTP response code is 404.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./backup_vm.py", line 476, in <module>
main()
File "./backup_vm.py", line 173, in main
args.command(args)
File "./backup_vm.py", line 230, in cmd_start
backup = start_backup(connection, args)
File "./backup_vm.py", line 345, in start_backup
raise RuntimeError("Backup {} failed".format(backup.id))
RuntimeError: Backup 0458bf7f-868c-4859-9fa7-767b3ec62b52 failed
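For reference, below is a minimal standalone sketch of what the script does up
to the point where it fails, so the failing call is easier to see. This is not
the exact backup_vm.py code; the engine URL, credentials and CA file are
placeholders, and the API names assume ovirt-engine-sdk-python 4.4.

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust to your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

system_service = connection.system_service()
vm_service = system_service.vms_service().vm_service(
    '51708c8e-6671-480b-b2d8-199a1af9cbdc')

# Collect the VM disks and ask the engine to start a backup of them.
attachments = vm_service.disk_attachments_service().list()
disks = [connection.follow_link(att.disk) for att in attachments]

backups_service = vm_service.backups_service()
backup = backups_service.add(
    types.Backup(disks=[types.Disk(id=disk.id) for disk in disks])
)

# Poll the backup until it is ready; this is the step where the run above
# got HTTP 404 instead of a backup entity in the READY phase.
backup_service = backups_service.backup_service(backup.id)
while True:
    backup = backup_service.get()
    if backup.phase == types.BackupPhase.READY:
        break
    time.sleep(1)

print('Backup', backup.id, 'ready, to_checkpoint_id:', backup.to_checkpoint_id)
connection.close()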
P.S
Thank you for previous answers :)
Regards,
Łukasz Kołaciński
Junior Java Developer
e-mail: l.kolacinski(a)storware.eu
ul. Leszno 8/44
01-192 Warszawa
www.storware.eu
4 years, 5 months
oVirt 4.3 -> 4.4 Upgrade Path Questions
by Andrei Verovski
Hi !
I have a 2-node oVirt 4.3 installation, with the engine running as a KVM guest on a SuSE file server (not a hosted engine).
The nodes are manually installed on CentOS 7.x (further referred to as old node #1 and #2).
I'm going to add one additional node and migrate the system to CentOS 8.2 / oVirt 4.4.
Is this the correct roadmap, or can something be done in a better way? Here is my plan.
1) Upgrade the oVirt engine running under KVM on the SuSE file server to CentOS 8.2 / oVirt 4.4.
Or is it better to install a new CentOS 8.2 / oVirt 4.4 engine from scratch and migrate the database?
2) Install CentOS 8.2 and oVirt node 4.4 on the new server (let's call it node #3).
3) Migrate the virtual machines from old node #1 to the new one (#3).
4) Upgrade CentOS on old node #1 to 8.2, and upgrade oVirt from the web interface. Will that work, or will the CentOS 7 -> 8 upgrade render this process unmanageable due to the removal of many components?
5) Migrate the virtual machines back from node #3 to the newly upgraded node #1.
6) Repeat steps 4 and 5 for old node #2.
I have a set of custom shell and Python scripts running on the engine and nodes, so manual installation was the way to go.
Thanks in advance for any suggestion(s)
with best regards
Andrei
4 years, 5 months
[ANN] oVirt 4.4.1 is now generally available
by Lev Veyde
oVirt 4.4.1 is now generally available
The oVirt project is excited to announce the general availability of oVirt
4.4.1, as of July 8th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.4.1 Release?
This update is the first in a series of stabilization updates to the 4.4
series.
This release introduces a new monitoring solution that provides a user
interface to oVirt DWH collected data using Grafana. This allows admins to
track inventory, monitor performance and capacity trends, and easily
identify and troubleshoot resource issues. Grafana is installed and
configured during engine-setup and includes pre-built dashboards that are
based on the data collected by the ovirt_engine_history PostgreSQL Data
Warehouse database (BZ#1777877
<https://bugzilla.redhat.com/show_bug.cgi?id=1777877>).
In oVirt 4.4.1 the maximum memory size for 64-bit x86_64 and ppc64/ppc64le
VMs is now 6TB. For x86_64 this limit also applies to VMs in 4.2 and 4.3
Cluster Levels.
You can now use CentOS Stream as an alternative to CentOS Linux on
non-production systems.
This release is available now on x86_64 architecture for:
- Red Hat Enterprise Linux 8.2
- CentOS Linux (or similar) 8.2
- CentOS Stream (tech preview)

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
- Red Hat Enterprise Linux 8.2
- CentOS Linux (or similar) 8.2
- CentOS Stream (tech preview)
- oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
oVirt Node and Appliance have been updated, including:
- oVirt 4.4.1: http://www.ovirt.org/release/4.4.1/
- CentOS Linux 8.2.2004:
  https://lists.centos.org/pipermail/centos-announce/2020-June/035756.html
- CentOS Virt SIG updates including Advanced Virtualization 8.2 (qemu-kvm 4.2
  <https://www.qemu.org/2019/12/13/qemu-4-2-0/>, libvirt 6.0.0
  <https://libvirt.org/news.html#v6-0-0-2020-01-15>)
- Wildfly 19.1.0: https://wildfly.org/news/2020/05/04/WildFly-1910-Released/
- Ansible 2.9.10:
  https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- Glusterfs 7.6: https://docs.gluster.org/en/latest/release-notes/7.6/
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8

Additional resources:
- Read more about the oVirt 4.4.1 release highlights:
  http://www.ovirt.org/release/4.4.1/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog:
  http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.1/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
lev(a)redhat.com | lveyde(a)redhat.com
4 years, 5 months
Hosted engine can't start after changing cluster settings
by Patrick Lomakin
I changed the BIOS Type setting in the cluster settings section to UEFI, and the hosted engine does not start after rebooting. Before I made the change, I had looked in the engine at the /boot partition, which has the /efi directory. Is there any way to change the engine settings manually, and how can I connect to it (or to its partitions) to perform those actions? I can't imagine how you could reinstall the engine without unplugging the storage to import all the virtual machines later. Losing the data in the virtual machines would be a disaster for me.
4 years, 5 months
oVirt 4.3.10 GlusterFS SSD slow performance over 10GE
by jury cat
Hello all,
I am using oVirt 4.3.10 on CentOS 7.8 with GlusterFS 6.9.
My Gluster setup consists of 3 hosts in replica 3 (2 hosts + 1 arbiter).
All 3 hosts are Dell R720s with a PERC H710 Mini RAID controller (which has a
maximum throughput of 6 Gb/s) and 2 x 1TB Samsung SSDs in RAID 0. The volume
is partitioned using LVM thin provisioning and formatted with XFS.
The hosts have separate 10GE network cards for storage traffic.
The Gluster network is connected to these 10GE network cards and the volume
is mounted using FUSE GlusterFS (NFS is disabled). The migration network is
also active on the same storage network.
The problem is that the 10GE network is not used to its full potential by
Gluster.
If I do live migration of VMs I can see speeds of 7-9 Gbit/s.
The same network tested with iperf3 reports 9.9 Gbit/s, which excludes the
network setup as a bottleneck (I will not paste all the iperf3 tests here
for now).
I did not enable all the volume options from "Optimize for Virt Store",
because of the bug where the volume option cluster.granular-entry-heal cannot
be set to enable (this was fixed in vdsm-4.40, but that only works on CentOS 8
with oVirt 4.4).
I would be happy to know what all these "Optimize for Virt Store" options are,
so I can set them manually.
The write speed to the disk inside the host using dd is between 700 MB/s and
1 GB/s.
[root@host1 ~]# dd if=/dev/zero of=test bs=100M count=40 count=80 status=progress
8074035200 bytes (8.1 GB) copied, 11.059372 s, 730 MB/s
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 11.9928 s, 699 MB/s
The dd write test on the Gluster volume inside the host is poor, only
~120 MB/s.
During the dd test, if I look at Networks -> Gluster network -> Hosts at Tx
and Rx, the network speed barely reaches over 1 Gbit/s (~1073 Mbit/s) out of
a maximum of 10000 Mbit/s.
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/gluster1.domain.local\:_data/test bs=100M count=80 status=progress
8283750400 bytes (8.3 GB) copied, 71.297942 s, 116 MB/s
80+0 records in
80+0 records out
8388608000 bytes (8.4 GB) copied, 71.9545 s, 117 MB/s
I have attached my Gluster volume settings and mount options.
Thanks,
Emy
4 years, 5 months
Running the Self-Hosted Engine on the same iSCSI target as other Storage Domains
by Erez Zarum
We are using Dell SC storage with iSCSI for oVirt. It is impossible to create a new target portal with a specific LUN, so it is impossible to isolate the SE (self-hosted engine) LUN from the other LUNs that are in use by other storage domains.
According to the documentation this is not a best practice; while searching I ran into that specific issue, but it seems to have been solved.
Is it still best practice when using oVirt 4.4 to use a completely different target for the SE? If that is not possible given the current environment restrictions, is the only option for us to attach a few disks to each server and create a small Gluster cluster (volume) just for the SE?
4 years, 5 months