Re: Hosted-Engine constantly dies
by Strahil
Hi Simone,
> According to gluster administration guide:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Network%20Config...
>
> in the "when to bond" section we can read:
> network throughput limit of client/server << storage throughput limit
>
> 1 GbE (almost always)
> 10-Gbps links or faster -- for writes, replication doubles the load on the network and replicas are usually on different peers to which the client can transmit in parallel.
>
> So if you are using oVirt hyper-converged with replica 3, you have to transmit everything twice over the storage network to sync it with the other peers.
>
> I'm not really into the details, but if https://bugzilla.redhat.com/1673058 is really as described, we even have a 5x overhead with current gluster 5.x.
>
> This means that with a 1000 Mbps NIC we cannot expect more than:
> 1000 Mbps / 2 (other replicas) / 5 (overhead in Gluster 5.x ???) / 8 (bits per byte) = 12.5 MBytes per second, and this is definitely low enough to have sanlock failing, especially because we don't have just the sanlock load, as you can imagine.
>
> I'd strongly advise moving to 10 Gigabit Ethernet (nowadays, for a few hundred dollars you can buy a 4/5-port 10GBASE-T copper switch plus 3 NICs and the cables just for the gluster network) or bonding a few 1 Gigabit Ethernet links.
I didn't know that.
So, with a 1 Gbit network, everyone should use replica 3 arbiter 1 volumes to minimize replication traffic.
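For reference, a minimal sketch of how such a volume could be created; the volume name, hostnames and brick paths are placeholders, not taken from this thread:

# replica 3 arbiter 1: two full data bricks plus one metadata-only arbiter brick (the third one listed)
gluster volume create data replica 3 arbiter 1 \
    node1:/gluster_bricks/data/brick \
    node2:/gluster_bricks/data/brick \
    node3:/gluster_bricks/data/brick
gluster volume start data

Since the arbiter brick stores only metadata, in the hyper-converged case a write sends full file data to one remote brick instead of two, roughly halving the replication traffic compared to a plain replica 3 volume.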
Best Regards,
Strahil Nikolov
5 years, 7 months
VM bandwidth limitations
by John Florian
In my oVirt deployment at home, I'm trying to minimize the amount of
physical HW and its 24/7 power draw. As such I have the NFS server for
my domain virtualized. This is not used for oVirt's SD, but rather the
NFS server's back-end storage comes from oVirt's SD. To maximize the
performance of my NFS server, do I still need to use bonded NICs to
increase bandwidth, as I would with a physical server, or does the
VirtIO-SCSI stuff magically make this unnecessary? In my head I can
argue it both ways, but oddly I have never seen it stated one way or
the other.
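One way to settle it empirically is to measure the VM's network throughput directly; a sketch using iperf3 (assuming it is installed on both ends; the hostname is a placeholder):

# on the NFS server VM
iperf3 -s
# from an NFS client: single stream, then four parallel streams
iperf3 -c nfs-vm.example.org
iperf3 -c nfs-vm.example.org -P 4

If a single virtio-net interface already delivers more than the host's physical uplink, bonding inside the guest is unlikely to help; the limit is the host's NIC (or bond), not the virtual one.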
--
John Florian
5 years, 7 months
Hosted-Engine constantly dies
by Strahil Nikolov
Hi Guys,
As I'm still quite new to oVirt, I have some problems finding the cause of this one. My Hosted Engine (4.3.2) is constantly dying (even when Global Maintenance is enabled). My interpretation of the logs indicates some lease problem, but I don't get the whole picture yet.
I'm attaching the output of 'journalctl -f | grep -Ev "Started Session|session opened|session closed"' after I have tried to power on the hosted engine (hosted-engine --vm-start).
The nodes are fully updated and I don't see anything in the gluster v5.5 logs, but I can double check.
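For reference, a sketch of the lease-related checks that can be run on a host (standard oVirt 4.3 log locations assumed):

hosted-engine --vm-status                        # agent state, score and lease status per host
sanlock client status                            # active lockspaces and resource leases
tail -n 50 /var/log/sanlock.log                  # renewal errors point at slow or unreachable storage
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log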
Any hints are appreciated and thanks in advance.
Best Regards,
Strahil Nikolov
5 years, 7 months
oVirt 4.3.2 Importing VMs from a detached domain not keeping cluster info
by Strahil Nikolov
Hello,
Can someone tell me if this is expected behaviour:
1. I created a data storage domain exported by nfs-ganesha via NFS.
2. Stopped all VMs on the storage domain.
3. Set the storage domain to maintenance and detached it (without wipe).
3.2. All VMs were gone (which was expected).
4. Imported the existing data domain via Gluster.
5. Went to the Gluster domain and imported all templates and VMs.
5.2. Powered on some of the VMs, but some of them failed.
The reason for the failure is that some of the re-imported VMs were automatically assigned to the Default cluster, while they belonged to another one.
Most probably this is not a supported activity, but can someone clarify it?
Thanks in advance.
Best Regards,
Strahil Nikolov
5 years, 7 months
Upgrade 3.5 to 4.3
by Demeter Tibor
Hi All,
I have started a very big project: upgrading a cluster from 3.5 to 4.3...
It was a mistake :(
Since the upgrade, I can't start the host. The UI seems to be working fine.
[root@virt ~]# service vdsm-network start
Redirecting to /bin/systemctl start vdsm-network.service
Job for vdsm-network.service failed because the control process exited with error code. See "systemctl status vdsm-network.service" and "journalctl -xe" for details.
[root@virt ~]# service vdsm-network status
Redirecting to /bin/systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-04-04 15:03:44 CEST; 6s ago
Process: 19325 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited, status=1/FAILURE)
Process: 19313 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append --logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, status=0/SUCCESS)
Main PID: 19325 (code=exited, status=1/FAILURE)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: return tool_command[cmd]["command"](*args)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: File "/usr/lib/python2.7/site-packages/vdsm/tool/restore_nets.py", line 41, in restore_command
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: exec_restore(cmd)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: File "/usr/lib/python2.7/site-packages/vdsm/tool/restore_nets.py", line 54, in exec_restore
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: raise EnvironmentError('Failed to restore the persisted networks')
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: EnvironmentError: Failed to restore the persisted networks
Apr 04 15:03:44 virt.bolax.hu systemd[1]: vdsm-network.service: main process exited, code=exited, status=1/FAILURE
Apr 04 15:03:44 virt.bolax.hu systemd[1]: Failed to start Virtual Desktop Server Manager network restoration.
Apr 04 15:03:44 virt.bolax.hu systemd[1]: Unit vdsm-network.service entered failed state.
Apr 04 15:03:44 virt.bolax.hu systemd[1]: vdsm-network.service failed.
Since then, the host does not start.
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.764+0000: 17705: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.965+0000: 17709: error : virNetSASLSessionListMechanisms:393 : internal error: cannot list SASL ...line 1757)
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.966+0000: 17709: error : remoteDispatchAuthSaslInit:3440 : authentication failed: authentication failed
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.966+0000: 17705: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Apr 04 14:56:01 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:01.167+0000: 17710: error : virNetSASLSessionListMechanisms:393 : internal error: cannot list SASL ...line 1757)
Apr 04 14:56:01 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:01.167+0000: 17710: error : remoteDispatchAuthSaslInit:3440 : authentication failed: authentication failed
[root@virt ~]# service vdsm-network status
Redirecting to /bin/systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-04-04 15:00:39 CEST; 2min 49s ago
Process: 19079 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited, status=1/FAILURE)
Process: 19045 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append --logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, status=0/SUCCESS)
Main PID: 19079 (code=exited, status=1/FAILURE)
Also, I have only upgraded up to 4.0 so far, but it seems the same. I can't reinstall and reactivate the host.
Originally it was an AIO installation.
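A minimal diagnostic sketch for the vdsm-network failure (the persistence path and the configure step are assumptions, not a confirmed fix for this upgrade):

cat /var/log/vdsm/upgrade.log                  # written by the ExecStartPre step shown above
ls /var/lib/vdsm/persistence/netconf/nets/     # persisted network definitions restore-nets tries to apply
vdsm-tool restore-nets                         # run by hand to get the full traceback
vdsm-tool configure --force                    # regenerate vdsm/libvirt configuration
systemctl restart libvirtd vdsm-network vdsmd

The libvirt SASL authentication errors above may also go away after the configure step, since it rewrites the SASL credentials vdsm uses to talk to libvirtd.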
Please help me solve this problem.
Thanks in advance,
R
Tibor
5 years, 7 months
NPE for GetValidHostsForVmsQuery
by nicolas@devels.es
Hi,
We're running oVirt 4.3.2. When we click on the "Migrate" button over a
VM, an error popup shows up and in the ovirt-engine log we see:
2019-04-03 12:37:40,897+01 ERROR
[org.ovirt.engine.core.bll.GetValidHostsForVmsQuery] (default task-6)
[478381f0-18e3-4c96-bcb5-aafd116d7b7a] Query 'GetValidHostsForVmsQuery'
failed: null
I'm attaching the full NPE.
Could someone point out what could be the reason for the NPE?
Thanks.
5 years, 7 months
Ansible hosted-engine deploy still doesn't support manually defined ovirtmgmt?
by Callum Smith
Dear All,
We're trying to deploy our hosted engine remotely using the Ansible hosted-engine playbook, which has been a rocky road, but we're now at the point where it's installing, and failing. We've got a pre-defined bond/VLAN setup for our interface, with the correct bond0, bond0.123 and ovirtmgmt bridge on top, but we're hitting the classic error:
Failed to find a valid interface for the management network of host virthyp04.virt.in.bmrc.ox.ac.uk. If the interface ovirtmgmt is a bridge, it should be torn-down manually.
Does this bug still exist in the latest (4.3) version, and is installing with Ansible using this network configuration impossible?
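A sketch of the manual tear-down the error asks for, assuming the bridge is managed by NetworkManager (an assumption; interface names are taken from this thread):

# remove only the pre-created ovirtmgmt bridge, keeping bond0 and bond0.123 in place
nmcli connection delete ovirtmgmt
# or, for a bridge created outside NetworkManager:
ip link set ovirtmgmt down && ip link delete ovirtmgmt type bridge

The deployment can then be pointed at bond0.123, and it creates the ovirtmgmt bridge on top of it itself.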
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk
5 years, 7 months
Fwd: Re: VDI broker and oVirt
by Jorick Astrego
Forward to the list.
-------- Forwarded Message --------
Subject: Re: [ovirt-users] Re: VDI broker and oVirt
Date: Fri, 05 Apr 2019 04:52:13 -0400
From: alex(a)triadic.us
As far as official software goes, the best you'll find is the user portal.
There is also this...
https://github.com/nkovacne/ovirt-desktop-client
We used that as a code base for our own VDI connector, using smart card
PKCS#12 certs to auth to the oVirt API.
On Apr 5, 2019 4:08 AM, Jorick Astrego <jorick(a)netbulae.eu> wrote:
Hi,
I think you mean to ask about the connection broker to connect to
your VDI infrastructure?
Something like this:
Or
https://www.leostream.com/solution/remote-access-for-virtual-and-physical...
oVirt has the VM user portal (https://github.com/oVirt/ovirt-web-ui),
but I have never used a third-party connection broker myself, so I'm
not aware of any that are compatible with oVirt or RHEV...
On 4/4/19 9:10 PM, oquerejazu(a)gmail.com wrote:
I have oVirt installed, with two hypervisors and the oVirt Engine. I want to build a VDI infrastructure, as cheap as possible but robust and reliable. The question is which broker I can use. Thank you.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLH7Y5HCOMB...
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
5 years, 7 months
Re: Controller recommendation - LSI2008/9265
by Strahil
Adding Gluster users' mail list.
On Apr 5, 2019 06:02, Leo David <leoalex(a)gmail.com> wrote:
>
> Hi Everyone,
> Any thoughts on this ?
>
>
> On Wed, Apr 3, 2019, 17:02 Leo David <leoalex(a)gmail.com> wrote:
>>
>> Hi Everyone,
>> For a hyper-converged setup starting with 3 nodes and growing over time to up to 12 nodes, I have to choose between LSI2008 (JBOD) and LSI9265 (RAID).
>> Perc h710 ( raid ) might be an option too, but on a different chassis.
>> There will not be many disks installed on each node, so the replication will be replica 3 distributed-replicated volumes across the nodes, as:
>> node1/disk1 node2/disk1 node3/disk1
>> node1/disk2 node2/disk2 node3/disk2
>> and so on...
>> As I add nodes to the cluster, I intend to expand the volumes using the same rule.
>> What would be the better way: to use JBOD cards (no cache), or a RAID card and create RAID0 arrays (one per disk) and therefore have a bit of RAID cache (512 MB)?
>> Is RAID caching a benefit underneath oVirt/Gluster, as long as I go for a "JBOD" installation anyway?
>> Thank you very much !
>> --
>> Best regards, Leo David
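A sketch of the brick layout described above as gluster CLI commands (hostnames and brick paths are placeholders):

# 2 x 3 distributed-replicate volume: disk1 and disk2 bricks spread across the first three nodes
gluster volume create data replica 3 \
    node1:/gluster/disk1/brick node2:/gluster/disk1/brick node3:/gluster/disk1/brick \
    node1:/gluster/disk2/brick node2:/gluster/disk2/brick node3:/gluster/disk2/brick
gluster volume start data

# later, extend the distribution with another replica set on newly added nodes
gluster volume add-brick data \
    node4:/gluster/disk1/brick node5:/gluster/disk1/brick node6:/gluster/disk1/brick
gluster volume rebalance data start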
5 years, 7 months