New to oVirt - Cluster questions
by adam.fasnacht@gmail.com
Hello,
I just discovered oVirt, and before installing, I'm already a fan. I have a question regarding the number of nodes needed for a cluster. My main end goal is to have HA, where I can have live migrations. I currently have 2 nodes (2 Dell R630s) that have a ton of local SSD storage on them. Is it possible to set up oVirt with just 2 nodes and have an HA "hyperconverged" style cluster?
I can't seem to find a solid answer for this question. I've read the documentation, as well as consulted Google. I seem to get mixed answers. Is this possible?
Thank you!
5 years, 6 months
Re: New to oVirt - Cluster questions
by Strahil
Hi Adam,
The problem with replica 2 volumes (one copy on each of the two servers) is that you can easily end up in split-brain.
Thus oVirt accepts only replica 3 or replica 2 + arbiter 1.
If you have a small system with an SSD, you can create your arbiter there. My lab uses a Lenovo M-series box as an arbiter.
Gluster (not talking about Glusterd2 here) will soon support another feature called remote arbiter, which will allow you to put an arbiter in the cloud (or any other remote destination) without the latency causing issues.
Keep in mind that due to the nature of Linux bonding/teaming it's better to have a few high-bandwidth NICs rather than many small ones (2 NICs x 10 Gbit/s perform better than 20 NICs x 1 Gbit/s).
So you need a third node, whether local (a regular arbiter or a full replica) or remote (a remote arbiter).
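For illustration, an arbitrated volume for that third node is created along these lines (hosts and brick paths are placeholders; gluster counts the arbiter as the third brick, hence 'replica 3 arbiter 1'):

    gluster volume create engine replica 3 arbiter 1 \
        node1:/gluster_bricks/engine/engine \
        node2:/gluster_bricks/engine/engine \
        arbiter-node:/gluster_bricks/engine/engine
    gluster volume start engine

The arbiter brick holds only file metadata, so a small disk on the third box is enough.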
Of course, for testing you can still set up Gluster manually with replica 2 volumes and use oVirt's ability to consume a POSIX-compliant FS. This type of setup will not be supported and you won't be able to use the UI, but you can still explore the oVirt project.
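A minimal sketch of that unsupported test setup, assuming two hosts and placeholder brick paths (gluster will print a split-brain warning for replica 2 that you have to confirm):

    gluster volume create data replica 2 \
        node1:/gluster_bricks/data/data \
        node2:/gluster_bricks/data/data
    gluster volume start data

The volume can then be consumed through oVirt's POSIX-compliant FS storage domain type (e.g. path node1:/data, VFS type glusterfs) rather than as a managed Gluster storage domain.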
Best Regards,
Strahil Nikolov

On Jun 15, 2019 20:57, adam.fasnacht(a)gmail.com wrote:
>
> Hello,
>
> I just discovered oVirt, and before installing, I'm already a fan. I have a question regarding the number of nodes needed for a cluster. My main end goal is to have HA, where I can have live migrations. I currently have 2 nodes (2 Dell R630s) that have a ton of local SSD storage on them. Is it possible to set up oVirt with just 2 nodes and have an HA "hyperconverged" style cluster?
>
> I can't seem to find a solid answer for this question. I've read the documentation, as well as consulted Google. I seem to get mixed answers. Is this possible?
>
> Thank you!
5 years, 6 months
Re: Info about soft fencing mechanism
by Strahil
On Jun 13, 2019 16:14, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
>
> Hello,
> I would like to know in better detail how soft fencing works in 4.3.
> In particular, by "soft fencing" we "only" mean a vdsmd restart attempt, correct?
> Who is responsible for issuing the command? The Manager or the host itself?
The Manager should take the decision, but the actual command should be executed by another host.
> Because in the case of the Manager, if the host has already lost connection, how would the Manager be able to do it?
Soft fencing is used when SSH is available; in all other cases it doesn't work.
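For context, a minimal sketch of what that soft-fencing step amounts to (the host name is illustrative; the actual command the engine uses is configurable on the engine side, and restarts VDSM over an SSH session):

    ssh root@problem-host 'systemctl restart vdsmd'

If even that SSH session cannot be established, soft fencing cannot help and the regular power-management fencing flow is the next step.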
> Thanks in advance for clarifications and any documentation pointers
The oVirt docs need a lot of updates, but I never found a way to add or edit a page.
Best Regards,
Strahil Nikolov
5 years, 6 months
Metrics store install failed
by roy.morris@ventura.org
I'm struggling to install the metrics store VMs. It appears that the etcd image fails to download or the template fails to build. Thank you ahead of time for your assistance.
2019-05-28 20:07:12,650 p=22689 u=root | Tuesday 28 May 2019 20:07:12 -0400 (0:00:00.097) 0:04:54.774 ***********
2019-05-28 20:07:12,687 p=22689 u=root | fatal: [master0.ent.co.ventura.ca.us]: FAILED! => {"msg": "The conditional check 'etcd_image != l_default_osm_etcd_image' failed. The error was: An unhandled exception occurred while templating '{{ osm_etcd_image }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ l_default_osm_etcd_image }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ etcd_image_dict[openshift_deployment_type] | lib_utils_oo_oreg_image((oreg_url | default('None'))) }}'. Error was a <class 'ansible.errors.AnsibleFilterError'>, original message: oreg_url malformed: registry.redhat.io\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml': line 5, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Warn if osm_etcd_image is redefined\n ^ here\n"}
2019-05-28 20:07:12,688 p=22689 u=root | NO MORE HOSTS LEFT **************************************************************************************************************************************************************************************************************************
2019-05-28 20:07:12,689 p=22689 u=root | PLAY RECAP **********************************************************************************************************************************************************************************************************************************
2019-05-28 20:07:12,689 p=22689 u=root | localhost : ok=35 changed=1 unreachable=0 failed=0
2019-05-28 20:07:12,689 p=22689 u=root | master0.ent.co.ventura.ca.us : ok=204 changed=16 unreachable=0 failed=1
2019-05-28 20:07:12,690 p=22689 u=root | INSTALLER STATUS ****************************************************************************************************************************************************************************************************************************
2019-05-28 20:07:12,693 p=22689 u=root | Initialization : Complete (0:00:10)
2019-05-28 20:07:12,693 p=22689 u=root | Health Check : Complete (0:00:22)
2019-05-28 20:07:12,694 p=22689 u=root | Node Bootstrap Preparation : Complete (0:01:59)
2019-05-28 20:07:12,694 p=22689 u=root | etcd Install : In Progress (0:00:28)
2019-05-28 20:07:12,694 p=22689 u=root | This phase can be restarted by running: playbooks/openshift-etcd/config.yml
2019-05-28 20:07:12,694 p=22689 u=root | Tuesday 28 May 2019 20:07:12 -0400 (0:00:00.044) 0:04:54.819 ***********
2019-05-28 20:07:12,694 p=22689 u=root | ===============================================================================
2019-05-28 20:07:12,697 p=22689 u=root | openshift_node : install needed rpm(s) ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 38.86s
2019-05-28 20:07:12,697 p=22689 u=root | Ensure openshift-ansible installer package deps are installed ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- 36.95s
2019-05-28 20:07:12,697 p=22689 u=root | openshift_node : Install node, clients, and conntrack packages ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- 23.81s
2019-05-28 20:07:12,698 p=22689 u=root | Run health checks (install) - EL ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 22.26s
2019-05-28 20:07:12,698 p=22689 u=root | container_runtime : Install Docker --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 7.10s
2019-05-28 20:07:12,698 p=22689 u=root | openshift_repos : Disable all repositories ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.78s
2019-05-28 20:07:12,698 p=22689 u=root | openshift_repos : Enable RHEL repositories ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.09s
2019-05-28 20:07:12,698 p=22689 u=root | openshift_node : Create credentials for registry auth -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.57s
2019-05-28 20:07:12,698 p=22689 u=root | rhel_subscribe : Install Red Hat Subscription manager -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.22s
2019-05-28 20:07:12,698 p=22689 u=root | openshift_repos : Ensure libselinux-python is installed ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 5.09s
2019-05-28 20:07:12,698 p=22689 u=root | nickhammond.logrotate : nickhammond.logrotate | Install logrotate -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.99s
2019-05-28 20:07:12,699 p=22689 u=root | etcd : Install openssl --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.79s
2019-05-28 20:07:12,699 p=22689 u=root | openshift_node : Install dnsmasq ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.68s
2019-05-28 20:07:12,699 p=22689 u=root | etcd : Install openssl --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.68s
2019-05-28 20:07:12,699 p=22689 u=root | os_firewall : Install firewalld packages --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.63s
2019-05-28 20:07:12,699 p=22689 u=root | install NetworkManager --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.59s
2019-05-28 20:07:12,699 p=22689 u=root | etcd : Install etcd ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 4.57s
2019-05-28 20:07:12,699 p=22689 u=root | container_runtime : Create credentials for oreg_url ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.54s
2019-05-28 20:07:12,700 p=22689 u=root | openshift_node : Install NFS storage plugin dependencies ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.53s
2019-05-28 20:07:12,700 p=22689 u=root | openshift_node : Add firewalld allow rules ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.39s
2019-05-28 20:07:12,700 p=22689 u=root | Failure summary:
1. Hosts: master0.ent.co.ventura.ca.us
Play: Configure etcd
Task: Warn if osm_etcd_image is redefined
Message: The conditional check 'etcd_image != l_default_osm_etcd_image' failed. The error was: An unhandled exception occurred while templating '{{ osm_etcd_image }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ l_default_osm_etcd_image }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ etcd_image_dict[openshift_deployment_type] | lib_utils_oo_oreg_image((oreg_url | default('None'))) }}'. Error was a <class 'ansible.errors.AnsibleFilterError'>, original message: oreg_url malformed: registry.redhat.io

The error appears to have been in '/usr/share/ansible/openshift-ansible/roles/etcd/tasks/static.yml': line 5, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: Warn if osm_etcd_image is redefined
  ^ here
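For what it's worth, the "oreg_url malformed: registry.redhat.io" message suggests the registry variable only carries a hostname, while openshift-ansible's lib_utils_oo_oreg_image filter expects a full image specification. A hedged guess at the kind of override that avoids it, wherever your metrics-store deployment defines the OpenShift inventory variables (the exact file is deployment-specific):

    oreg_url=registry.redhat.io/openshift3/ose-${component}:${version}
    oreg_auth_user=<registry.redhat.io username>
    oreg_auth_password=<registry.redhat.io password>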
5 years, 6 months
4.3.4 caching disk error during hyperconverged deployment
by adrianquintero@gmail.com
While trying to do a hyperconverged setup and use "Configure LV Cache" with /dev/sdf, the deployment fails. If I don't use the LV cache SSD disk, the setup succeeds. Thought you might want to know; for now I retested with 4.3.3 and all worked fine, so I'm reverting to 4.3.3 unless you know of a workaround?
Error:
TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "270G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
PLAY RECAP *********************************************************************
vmm10.mydomain.com : ok=13 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
vmm11.mydomain.com : ok=13 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
vmm12.mydomain.com : ok=13 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
---------------------------------------------------------------------------------------------------------------------
#cat /etc/ansible/hc_wizard_inventory.yml
---------------------------------------------------------------------------------------------------------------------
hc_nodes:
hosts:
vmm10.mydomain.com:
gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
pvname: /dev/sdb
- vgname: gluster_vg_sdc
pvname: /dev/sdc
- vgname: gluster_vg_sdd
pvname: /dev/sdd
- vgname: gluster_vg_sde
pvname: /dev/sde
gluster_infra_mount_devices:
- path: /gluster_bricks/engine
lvname: gluster_lv_engine
vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstore1
lvname: gluster_lv_vmstore1
vgname: gluster_vg_sdc
- path: /gluster_bricks/data1
lvname: gluster_lv_data1
vgname: gluster_vg_sdd
- path: /gluster_bricks/data2
lvname: gluster_lv_data2
vgname: gluster_vg_sde
gluster_infra_cache_vars:
- vgname: gluster_vg_sdb
cachedisk: /dev/sdf
cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
cachethinpoolname: gluster_thinpool_gluster_vg_sdb
cachelvsize: 270G
cachemetalvsize: 30G
cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
cachemode: writethrough
gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
lvname: gluster_lv_engine
size: 100G
gluster_infra_thinpools:
- vgname: gluster_vg_sdc
thinpoolname: gluster_thinpool_gluster_vg_sdc
poolmetadatasize: 14G
- vgname: gluster_vg_sdd
thinpoolname: gluster_thinpool_gluster_vg_sdd
poolmetadatasize: 14G
- vgname: gluster_vg_sde
thinpoolname: gluster_thinpool_gluster_vg_sde
poolmetadatasize: 14G
gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdc
thinpool: gluster_thinpool_gluster_vg_sdc
lvname: gluster_lv_vmstore1
lvsize: 2700G
- vgname: gluster_vg_sdd
thinpool: gluster_thinpool_gluster_vg_sdd
lvname: gluster_lv_data1
lvsize: 2700G
- vgname: gluster_vg_sde
thinpool: gluster_thinpool_gluster_vg_sde
lvname: gluster_lv_data2
lvsize: 2700G
vmm11.mydomain.com:
gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
pvname: /dev/sdb
- vgname: gluster_vg_sdc
pvname: /dev/sdc
- vgname: gluster_vg_sdd
pvname: /dev/sdd
- vgname: gluster_vg_sde
pvname: /dev/sde
gluster_infra_mount_devices:
- path: /gluster_bricks/engine
lvname: gluster_lv_engine
vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstore1
lvname: gluster_lv_vmstore1
vgname: gluster_vg_sdc
- path: /gluster_bricks/data1
lvname: gluster_lv_data1
vgname: gluster_vg_sdd
- path: /gluster_bricks/data2
lvname: gluster_lv_data2
vgname: gluster_vg_sde
gluster_infra_cache_vars:
- vgname: gluster_vg_sdb
cachedisk: /dev/sdf
cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
cachethinpoolname: gluster_thinpool_gluster_vg_sdb
cachelvsize: 0.9G
cachemetalvsize: 0.1G
cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
cachemode: writethrough
gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
lvname: gluster_lv_engine
size: 100G
gluster_infra_thinpools:
- vgname: gluster_vg_sdc
thinpoolname: gluster_thinpool_gluster_vg_sdc
poolmetadatasize: 14G
- vgname: gluster_vg_sdd
thinpoolname: gluster_thinpool_gluster_vg_sdd
poolmetadatasize: 14G
- vgname: gluster_vg_sde
thinpoolname: gluster_thinpool_gluster_vg_sde
poolmetadatasize: 14G
gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdc
thinpool: gluster_thinpool_gluster_vg_sdc
lvname: gluster_lv_vmstore1
lvsize: 2700G
- vgname: gluster_vg_sdd
thinpool: gluster_thinpool_gluster_vg_sdd
lvname: gluster_lv_data1
lvsize: 2700G
- vgname: gluster_vg_sde
thinpool: gluster_thinpool_gluster_vg_sde
lvname: gluster_lv_data2
lvsize: 2700G
vmm12.mydomain.com:
gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
pvname: /dev/sdb
- vgname: gluster_vg_sdc
pvname: /dev/sdc
- vgname: gluster_vg_sdd
pvname: /dev/sdd
- vgname: gluster_vg_sde
pvname: /dev/sde
gluster_infra_mount_devices:
- path: /gluster_bricks/engine
lvname: gluster_lv_engine
vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstore1
lvname: gluster_lv_vmstore1
vgname: gluster_vg_sdc
- path: /gluster_bricks/data1
lvname: gluster_lv_data1
vgname: gluster_vg_sdd
- path: /gluster_bricks/data2
lvname: gluster_lv_data2
vgname: gluster_vg_sde
gluster_infra_cache_vars:
- vgname: gluster_vg_sdb
cachedisk: /dev/sdf
cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
cachethinpoolname: gluster_thinpool_gluster_vg_sdb
cachelvsize: 0.9G
cachemetalvsize: 0.1G
cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
cachemode: writethrough
gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
lvname: gluster_lv_engine
size: 100G
gluster_infra_thinpools:
- vgname: gluster_vg_sdc
thinpoolname: gluster_thinpool_gluster_vg_sdc
poolmetadatasize: 14G
- vgname: gluster_vg_sdd
thinpoolname: gluster_thinpool_gluster_vg_sdd
poolmetadatasize: 14G
- vgname: gluster_vg_sde
thinpoolname: gluster_thinpool_gluster_vg_sde
poolmetadatasize: 14G
gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdc
thinpool: gluster_thinpool_gluster_vg_sdc
lvname: gluster_lv_vmstore1
lvsize: 2700G
- vgname: gluster_vg_sdd
thinpool: gluster_thinpool_gluster_vg_sdd
lvname: gluster_lv_data1
lvsize: 2700G
- vgname: gluster_vg_sde
thinpool: gluster_thinpool_gluster_vg_sde
lvname: gluster_lv_data2
lvsize: 2700G
vars:
gluster_infra_disktype: JBOD
gluster_set_selinux_labels: true
gluster_infra_fw_ports:
- 2049/tcp
- 54321/tcp
- 5900/tcp
- 5900-6923/tcp
- 5666/tcp
- 16514/tcp
gluster_infra_fw_permanent: true
gluster_infra_fw_state: enabled
gluster_infra_fw_zone: public
gluster_infra_fw_services:
- glusterfs
gluster_features_force_varlogsizecheck: false
cluster_nodes:
- vmm10.mydomain.com
- vmm11.mydomain.com
- vmm12.mydomain.com
gluster_features_hci_cluster: '{{ cluster_nodes }}'
gluster_features_hci_volumes:
- volname: engine
brick: /gluster_bricks/engine/engine
arbiter: 0
- volname: vmstore1
brick: /gluster_bricks/vmstore1/vmstore1
arbiter: 0
- volname: data1
brick: /gluster_bricks/data1/data1
arbiter: 0
- volname: data2
brick: /gluster_bricks/data2/data2
arbiter: false
---------------------------------------------------------------------------------------------------------------------
Thanks
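Not a fix for the wizard itself, but for reference this is roughly the lvmcache layout the failing "Extend volume group" task is driving towards, in case someone wants to add the cache by hand after a successful no-cache deployment. Device, sizes and LV names are taken from the vmm10 inventory above; the thin pool name is an assumption and should be adjusted to whatever actually exists in gluster_vg_sdb:

    # add the SSD as a second PV in the existing volume group
    vgextend gluster_vg_sdb /dev/sdf

    # carve the cache data and metadata LVs out of the SSD
    lvcreate -L 270G -n cachelv_gluster_thinpool_gluster_vg_sdb gluster_vg_sdb /dev/sdf
    lvcreate -L 30G  -n cache_gluster_thinpool_gluster_vg_sdb   gluster_vg_sdb /dev/sdf

    # turn them into a cache pool and attach it (writethrough) to the thin pool
    lvconvert --type cache-pool --poolmetadata gluster_vg_sdb/cache_gluster_thinpool_gluster_vg_sdb \
        gluster_vg_sdb/cachelv_gluster_thinpool_gluster_vg_sdb
    lvconvert --type cache --cachemode writethrough \
        --cachepool gluster_vg_sdb/cachelv_gluster_thinpool_gluster_vg_sdb \
        gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb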
5 years, 6 months
Re: New to oVirt - Cluster questions
by ccox@endlessnow.com
It is my opinion that you never want fewer than 3. Somebody else may want to comment on what is actually possible.

------ Original message ------
From: adam.fasnacht(a)gmail.com
Date: Sat, Jun 15, 2019 1:41 PM
To: users(a)ovirt.org
Subject: [ovirt-users] New to oVirt - Cluster questions

Hello,
I just discovered oVirt, and before installing, I'm already a fan. I have a question regarding the number of nodes needed for a cluster. My main end goal is to have HA, where I can have live migrations. I currently have 2 nodes (2 Dell R630s) that have a ton of local SSD storage on them. Is it possible to set up oVirt with just 2 nodes and have an HA "hyperconverged" style cluster?
I can't seem to find a solid answer for this question. I've read the documentation, as well as consulted Google. I seem to get mixed answers. Is this possible?
Thank you!
5 years, 6 months
4.3 live migration creates wrong image permissions.
by Alex McWhirter
After upgrading from 4.2 to 4.3, once a VM live migrates its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permissions back to vdsm:kvm on the images, and the VM will boot
again.
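For anyone hitting the same thing, the manual workaround described above boils down to resetting ownership on the affected image directory; the path is illustrative, and vdsm:kvm map to uid/gid 36 on oVirt hosts:

    chown -R vdsm:kvm /rhev/data-center/mnt/<storage-domain-mount>/<sd-uuid>/images/<disk-uuid>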
5 years, 6 months
Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3
by Strahil
Hi Adrian,
Please keep in mind that when a server dies, the easiest way to recover is to get another freshly installed server with a different IP/FQDN.
Then you will need to use 'replace-brick' and once gluster replaces that node - you should be able to remove the old entry in oVirt.
Once the old entry is gone, you can add the new installation in oVirt via the UI.
Another approach is to have the same IP/FQDN for the fresh install. In this situation, you need to have the same gluster ID (which should be a text file) and the peer IDs. Most probably you can create them on your own, based on data from the other gluster peers.
Once the fresh install is available in 'gluster peer status', you can initiate a 'reset-brick' (don't forget to set up SELinux, the firewall and the repos) and a full heal.
From there you can reinstall the machine from the UI and it should be available for use.
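A rough sketch of the 'same gluster ID' part, assuming the standard glusterd paths; the dead node's UUID has to be recovered from a surviving peer:

    # on a surviving peer: each file under peers/ is named after a peer's UUID
    grep -H hostname1 /var/lib/glusterd/peers/*

    # on the freshly installed node (same FQDN), before it joins the pool
    systemctl stop glusterd
    # set UUID=<dead-node-uuid> in /var/lib/glusterd/glusterd.info (keep the operating-version line)
    systemctl start glusterd

After that, 'gluster peer status' on the other nodes should recognise the rebuilt host again, and the reset-brick plus full heal described above can follow.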
P.S.: I know that the whole procedure is not so easy :)
Best Regards,
Strahil Nikolov

On Jun 12, 2019 19:02, Adrian Quintero <adrianquintero(a)gmail.com> wrote:
>
> Strahil, I don't use the GUI that much; in this case I need to understand how it is all tied together if I want to move to production. As far as Gluster goes, I can do the administration through the CLI; however, my test environment was set up using gdeploy for a hyperconverged setup under oVirt.
> The initial setup was 3 servers with the same set of physical disks: sdb, sdc, sdd, sde (this last one used for caching as it is an SSD)
>
> vmm10.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm10.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm10.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm10.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm11.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm11.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm11.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm11.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm12.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm12.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm12.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm12.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> As you can see from the above, the engine volume is composed of hosts vmm10 (the initiating cluster server, but now a dead server), vmm11 and vmm12, on block device /dev/sdb (100GB LV); the vmstore1 volume is also on /dev/sdb (2600GB LV).
> /dev/mapper/gluster_vg_sdb-gluster_lv_engine xfs 100G 2.0G 98G 2% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sdb-gluster_lv_vmstore1 xfs 2.6T 35M 2.6T 1% /gluster_bricks/vmstore1
> /dev/mapper/gluster_vg_sdc-gluster_lv_data1 xfs 2.7T 4.6G 2.7T 1% /gluster_bricks/data1
> /dev/mapper/gluster_vg_sdd-gluster_lv_data2 xfs 2.7T 9.5G 2.7T 1% /gluster_bricks/data2
> vmm10.mydomain.com:/engine fuse.glusterfs 300G 9.2G 291G 4% /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_engine
> vmm10.mydomain.com:/vmstore1 fuse.glusterfs 5.1T 53G 5.1T 2% /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_vmstore1
> vmm10.mydomain.com:/data1 fuse.glusterfs 8.0T 95G 7.9T 2% /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data1
> vmm10.mydomain.com:/data2 fuse.glusterfs 8.0T 112G 7.8T 2% /rhev/data-center/mnt/glusterSD/vmm10.virt.iad3p:_data2
>
>
>
> Before any issues, I increased the size of the cluster and the Gluster cluster with the following, creating 4 distributed-replicated volumes (engine, vmstore1, data1, data2):
>
> vmm13.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm13.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm13.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm13.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm14.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm14.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm14.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm14.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm15.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm15.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm15.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm15.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm16.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm16.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm16.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm16.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm17.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm17.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm17.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm17.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> vmm18.mydomain.com:/gluster_bricks/brick1(/dev/sdb) engine
> vmm18.mydomain.com:/gluster_bricks/brick2(/dev/sdb) vmstore1
> vmm18.mydomain.com:/gluster_bricks/brick3(/dev/sdc) data1
> vmm18.mydomain.com:/gluster_bricks/brick4(/dev/sdd) data2
>
> With your first suggestion I don't think it is possible to recover, as I will lose the engine if I stop the "engine" volume. It might be doable for vmstore1, data1 and data2, but not the engine.
> A) If you have space on another gluster volume (or volumes) or on NFS-based storage, you can migrate all VMs live . Once you do it, the simple way will be to stop and remove the storage domain (from UI) and gluster volume that correspond to the problematic brick. Once gone, you can remove the entry in oVirt for the old host and add the newly built one. Then you can recreate your volume and migrate the data back.
>
> I tried removing the brick using CLI but get the following error:
> volume remove-brick start: failed: Host node of the brick vmm10.mydomain.com:/gluster_bricks/engine/engine is down
>
> So I used the force command:
> gluster vol remove-brick engine vmm10.mydomain.com:/gluster_bricks/engine/engine vmm11.mydomain.com:/gluster_bricks/engine/engine vmm12.mydomain.com:/gluster_bricks/engine/engine force
> Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
> Do you want to continue? (y/n) y
> volume remove-brick commit force: success
>
> so I lost my engine:
> Please enter your authentication name: vdsm@ovirt
> Please enter your password:
> Id Name State
> ----------------------------------------------------
> 3 HostedEngine paused
>
> hosted-engine --vm-start
> The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
>
> I guess this failure scenario is more complex than I thought; the hosted engine should have survived. As far as Gluster goes, I can get around on the command line; the issue is the engine. Even though it was running on vmm18 and not on any bricks belonging to vmm10, 11, or 12 (the original setup), it still failed...
> virsh list --all
> Please enter your authentication name: vdsm@ovirt
> Please enter your password:
> Id Name State
> ----------------------------------------------------
> - HostedEngine shut off
>
> Now I cant get it to start:
> hosted-engine --vm-start
> The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
> df -hT is still showing mounts from the old host's bricks; could the problem be that this was the initiating host of the hyperconverged setup?
> vmm10.mydomain.com:/engine fuse.glusterfs 200G 6.2G 194G 4% /rhev/data-center/mnt/glusterSD/vmm10.mydomain.com:_engine
>
>
> I will re-create everything from scratch, simulate this again, and see why it is so complex to recover oVirt's engine with Gluster when a server dies completely. Maybe it is my lack of understanding of how oVirt integrates with Gluster, though I have a decent enough understanding of Gluster to work with it...
>
> I will let you know once I have the cluster recreated; I will kill the same server and see if I missed anything from the recommendations you provided.
>
> Thanks,
>
> --
> Adrian.
>
>
>
>
>
>
>
> On Tue, Jun 11, 2019 at 4:13 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Do you have empty space to store the VMs? If yes, you can always script the migration of the disks via the API. Even a bash script and curl can do the trick.
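A minimal sketch of that bash + curl approach, with placeholder engine URL, credentials and UUIDs, assuming the v4 REST API's disk 'move' action:

    curl -k -u 'admin@internal:PASSWORD' \
        -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
        -X POST \
        -d '<action><storage_domain id="TARGET-SD-UUID"/></action>' \
        'https://engine.example.com/ovirt-engine/api/disks/DISK-UUID/move'

Looping that over the disk IDs on the old storage domain gives the scripted migration mentioned above.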
>>
>> About /dev/sdb, I still don't get it. A plain "df -hT" from a node will make it much clearer. I guess '/dev/sdb' is a PV and you have 2 LVs on top of it.
>>
>> Note: I should admit that as an admin - I don't use UI for gluster management.
>>
>> For now do not try to remove the brick. The approach is either to migrate the qemu disks to another storage or to reset-brick/replace-brick in order to restore the replica count.
>> I will check the file and I will try to figure it out.
>>
>> Redeployment never fixes the issue, it just speeds up the recovery. If you can afford the time to spend on fixing the issue - then do not redeploy.
>>
>> I would be able to take a look next week, but keep in mind that I'm not that deep into oVirt - I only started playing with it when I deployed my lab.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> Strahil,
>>
>> Looking at your suggestions I think I need to provide a bit more info on my current setup.
>>
>>
>> I have 9 hosts in total
>>
>> I have 5 storage domains:
>>
>> hosted_storage (Data Master)
>>
>> vmstore1 (Data)
>>
>> data1 (Data)
>>
>> data2 (Data)
>>
>> ISO (NFS) //had to create this one because oVirt 4.3.3.1 would not let me upload disk images to a data domain without an ISO (I think this is due to a bug)
>>
>> Each volume is of the type “Distributed Replicate” and each one is composed of 9 bricks.
>> I started with 3 bricks per volume due to the initial Hyperconverged setup, then I expanded the cluster and the gluster cluster by 3 hosts at a time until I got to a total of 9 hosts.
>>
>> Disks, bricks and sizes used per volume
>> /dev/sdb engine 100GB
>> /dev/sdb vmstore1 2600GB
>> /dev/sdc data1 2600GB
>> /dev/sdd data2 2600GB
>> /dev/sde -------- 400GB SSD used for caching purposes
>>
>> From the above layout a few questions came up:
>>
>> Using the web UI, how can I create a 100GB brick and a 2600GB brick to replace the bad bricks for “engine” and “vmstore1” within the same block device (sdb)?
>>
>> What about /dev/sde (the caching disk)? When I tried creating a new brick through the UI I saw that I could use /dev/sde for caching, but only for 1 brick (i.e. vmstore1), so if I try to create another brick how would I specify that it is the same /dev/sde device to be used for caching?
>>
>>
>> If I want to remove a brick, it being a replica 3, I go to Storage > Volumes > select the volume > Bricks; once in there I can select the 3 servers that compose the replicated bricks and click remove, which gives a pop-up window with the following info:
>>
>> Are you sure you want to remove the following Brick(s)?
>> - vmm11:/gluster_bricks/vmstore1/vmstore1
>> - vmm12.virt.iad3p:/gluster_bricks/vmstore1/vmstore1
>> - 192.168.0.100:/gluster-bricks/vmstore1/vmstore1
>> - Migrate Data from the bricks?
>>
>> If I proceed with this, that means I will have to do it for all 4 volumes, which is just not very efficient; but if that is the only way, then I am hesitant to put this into a real production environment, as there is no way I can take that kind of a hit for 500+ VMs :) and also I won't have that much storage or extra volumes to play with in a real scenario.
>>
>> After modifying /etc/vdsm/vdsm.id yesterday by following (https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids), I was able to add the server back to the cluster using a new FQDN and a new IP, and tested replacing one of the bricks. This is my mistake: as mentioned in #3 above, I used /dev/sdb entirely for 1 brick, because through the UI I could not split the block device to be used for 2 bricks (one for the engine and one for vmstore1). So in “gluster vol info” you might see vmm102.mydomain.com, but in reality it is myhost1.mydomain.com.
>>
>> I am also attaching gluster_peer_status.txt, and in the last 2 entries of that file you will see an entry vmm10.mydomain.com (old/bad entry) and vmm102.mydomain.com (new entry, same server vmm10, but renamed to vmm102). Please also find the gluster_vol_info.txt file.
>>
>> I am ready to redeploy this environment if needed, but I am also ready to test any other suggestion. If I can get a good understanding on how to recover from this I will be ready to move to production.
>>
>> Wondering if you’d be willing to have a look at my setup through a shared screen?
>>
>>
>> Thanks
>>
>>
>> Adrian
>>
>>
>> On Mon, Jun 10, 2019 at 11:41 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>
>>> Hi Adrian,
>>>
>>> You have several options:
>>> A) If you have space on another gluster volume (or volumes) or on NFS-based storage, you can migrate all VMs live. Once you do it, the simple way will be to stop and remove the storage domain (from the UI) and the gluster volume that corresponds to the problematic brick. Once gone, you can remove the entry in oVirt for the old host and add the newly built one. Then you can recreate your volume and migrate the data back.
>>>
>>> B) If you don't have space, you have to use a riskier approach (usually it shouldn't be risky, but I had a bad experience with gluster v3):
>>> - New server has same IP and hostname:
>>> Use command line and run the 'gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit'
>>> Replace VOLNAME with your volume name.
>>> A more practical example would be:
>>> 'gluster volume reset-brick data ovirt3:/gluster_bricks/data/brick ovirt3:/gluster_bricks/data/brick commit'
>>>
>>> If it refuses, then you have to clean up '/gluster_bricks/data' (which should be empty).
>>> Also check if the new peer has been probed via 'gluster peer status'. Check that the firewall is allowing gluster communication (you can compare it to the firewalls on another gluster host).
>>>
>>> The automatic healing will kick in within 10 minutes (if it succeeds) and will stress the other 2 replicas, so pick your time properly.
>>> Note: I'm not recommending you to use the 'force' option in the previous command ... for now :)
>>>
>>> - The new server has a different IP/hostname:
>>> Instead of 'reset-brick' you can use 'replace-brick':
>>> It should be like this:
>>> gluster volume replace-brick data old-server:/path/to/brick new-server:/new/path/to/brick commit force
>>>
>>> In both cases check the status via:
>>> gluster volume info VOLNAME
>>>
>>> If your cluster is in production, I really recommend the first option, as it is less risky and the chance of unplanned downtime will be minimal.
>>>
>>> The 'reset-brick' in your previous e-mail shows that one of the servers is not connected. Check the peer status on all servers; if there are fewer peers than there should be, check for network and/or firewall issues.
>>> On the new node check if glusterd is enabled and running.
>>>
>>> In order to debug - you should provide more info like 'gluster volume info' and the peer status from each node.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Jun 10, 2019 20:10, Adrian Quintero <adrianquintero(a)gmail.com> wrote:
>>>>
>>>> >
>>>> > Can you let me know how to fix the gluster and the missing brick?
>>>> > I tried removing it by going to "Storage > Volumes > vmstore > Bricks" and selecting the brick.
>>>> > However it is showing as an unknown status (which is expected because the server was completely wiped), so if I try to "remove", "replace brick" or "reset brick" it won't work
>>>> > If I do remove brick: Incorrect bricks selected for removal in Distributed Replicate volume. Either all the selected bricks should be from the same sub volume or one brick each for every sub volume!
>>>> > If I try "replace brick" I can't, because I don't have another server with extra bricks/disks
>>>> > And if I try "reset brick": Error while executing action Start Gluster Volume Reset Brick: Volume reset brick commit force failed: rc=-1 out=() err=['Host myhost1_mydomain_com not connected']
>>>> >
>>>> > Are you suggesting to try and fix the gluster using command line?
>>>> >
>>>> > Note that I can't "peer detach" the server, so if I force the removal of the bricks, would I need to force a downgrade to replica 2 instead of 3? What would happen to oVirt, as it only supports replica 3?
>>>> >
>>>> > thanks again.
>>>> >
>>>> > On Mon, Jun 10, 2019 at 12:52 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>>>
>>>>> >>
>>>>> >> Hi Adrian,
>>>>> >> Did you fix the issue with the gluster and the missing brick?
>>>>> >> If yes, try to set the 'old' host in maintenance an
>>
>>
>>
>> --
>> Adrian Quintero
>
>
>
> --
> Adrian Quintero
5 years, 6 months
Re: Memory ballon question
by Strahil
Hi Martin,
Thanks for clarifying that.
Best Regards,
Strahil Nikolov

On Jun 14, 2019 13:02, Martin Sivak <msivak(a)redhat.com> wrote:
>
> Hi,
>
> > 2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 695648 to 660865
> > 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 ending
> >
> > Can someone clarify what exactly does this (from xxxx to yyyy) mean ?
>
> It is the ballooning operation log:
>
> - From - how much memory was left in the VM before the action
> - To - how much after (could be either lower or higher)
>
> I do not remember the units, but I think it was in KiB.
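For instance, assuming the values are in KiB, the 'from 1048576 to 996147' line in the fuller log further down would mean the memory left in the VM went from exactly 1 GiB down to roughly 973 MiB, and the 'from 695648 to 660865' entry quoted at the top from about 679 MiB to about 645 MiB.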
>
> Martin
>
>
> On Thu, Jun 13, 2019 at 9:26 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> >
> > Hi Martin,Darrell,
> >
> > thanks for your feedback.
> >
> > I have checked the /var/log/vdsm/mom.log and it seems that MOM was actually working:
> >
> > 2019-06-13 07:08:47,690 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 starting
> > 2019-06-13 07:09:39,490 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 1048576 to 996147
> > 2019-06-13 07:09:54,658 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 996148 to 946340
> > 2019-06-13 07:10:09,853 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 946340 to 899023
> > 2019-06-13 07:10:25,053 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 899024 to 854072
> > 2019-06-13 07:10:40,233 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 854072 to 811368
> > 2019-06-13 07:10:55,428 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 811368 to 770799
> > 2019-06-13 07:11:10,621 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 770800 to 732260
> > 2019-06-13 07:11:25,827 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 732260 to 695647
> > 2019-06-13 07:11:40,973 - mom.Controllers.Balloon - INFO - Ballooning guest:node1 from 695648 to 660865
> > 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1 ending
> >
> > Can someone clarify what exactly does this (from xxxx to yyyy) mean ?
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > On Thursday, 13 June 2019 at 17:27:01 GMT+3, Martin Sivak <msivak(a)redhat.com> wrote:
> >
> >
> > Hi,
> >
> > iirc the guest agent is not needed anymore as we get almost the same
> > stats from the balloon driver directly.
> >
> > Ballooning has to be enabled on cluster level though. So that is one
> > thing to check. If that is fine then I guess a more detailed
> > description is needed.
> >
> > oVirt generally starts ballooning when the memory load gets over 80%
> > of available memory.
> >
> > The host agent that handles ballooning is called mom and the logs are
> > located in /var/log/vdsm/mom* iirc. It might be a good idea to check
> > whether the virtual machines were declared ready (meaning all data
> > sources we collect provided data).
> >
> > --
> > Martin Sivak
> > used to be maintainer of mom
> >
> > On Thu, Jun 13, 2019 at 12:26 AM Darrell Budic <budic(a)onholyground.com> wrote:
> > >
> > > Do you have the ovirt-guest-agent running on your VMs? It’s required for ballooning to control allocations on the guest side.
> > >
> > > On Jun 12, 2019, at 11:32 AM, Strahil <hunter86_bg(a)yahoo.com> wrote:
> > >
> > > Hello All,
> > >
> > > as a KVM user I know how useful the memory balloon is and how you can both increase and decrease memory live (both Linux & Windows).
> > > I have noticed that I cannot decrease the memory in oVirt.
> > >
> > > Does anyone have a clue why the situation is like that?
> > >
> > > I was expecting that the guaranteed memory is the minimum below which the balloon driver will not go, but when I put my host under pressure the host just started to swap instead of reducing some of the VM memory (and my VMs had plenty of free memory).
> > >
> > > It would be great if oVirt could decrease the memory (if the VM has unallocated memory) when the host is under pressure and the VM cannot be relocated.
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > >
> > >
> >
5 years, 6 months