oVirt not assigning network to gluster brick
by Felip Moll
Hello,
I have the latest version of oVirt 4 installed on CentOS 7: 2 hypervisor
nodes (rdkvm[1-2]) and 1 ovirt-engine node (rdhead1).
I receive the following warning in the logs despite having the
gluster network set up. Everything is running fine.
2016-10-10 12:24:02,825 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler7) [5a52dffe] Could not associate brick
'rdkvm1-data:/data/sdb1/gluvol1' of volume
'47e45087-1a07-4790-9d30-77edbefa5f2e' with correct network as no
gluster network found in cluster
'57f387dc-0315-020b-019c-0000000000e6'
2016-10-10 12:24:02,831 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler7) [5a52dffe] Could not associate brick
'rdkvm2-data:/data/sdb1/gluvol1' of volume
'47e45087-1a07-4790-9d30-77edbefa5f2e' with correct network as no
gluster network found in cluster
'57f387dc-0315-020b-019c-0000000000e6'
2016-10-10 12:24:02,838 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler7) [5a52dffe] Could not associate brick
'rdkvm1-data:/data/sdc1/gluvol2' of volume
'3c1e9936-4cce-4a21-83f3-ac8611348484' with correct network as no
gluster network found in cluster
'57f387dc-0315-020b-019c-0000000000e6'
2016-10-10 12:24:02,844 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler7) [5a52dffe] Could not associate brick
'rdkvm2-data:/data/sdc1/gluvol2' of volume
'3c1e9936-4cce-4a21-83f3-ac8611348484' with correct network as no
gluster network found in cluster
'57f387dc-0315-020b-019c-0000000000e6'
My ovirt-engine node is not connected to the gluster network directly.
The glusterNetwork is defined in the cluster and is attached to the bond0
interface of rdkvm1 and rdkvm2.
[RDHEAD1][root@rdhead1 ~]# su - postgres -c "psql -d engine -c \"SELECT id,name,type,addr,subnet,vlan_id,storage_pool_id FROM network\""
                  id                  |      name      | type | addr | subnet | vlan_id |           storage_pool_id
--------------------------------------+----------------+------+------+--------+---------+--------------------------------------
 00000000-0000-0000-0000-000000000009 | ovirtmgmt      |      |      |        |         | 57f387dc-0389-0252-01f4-000000000316
 628ed584-feaf-4952-908c-5b2d654c0731 | glusterNetwork |      |      |        |         | 57f387dc-0389-0252-01f4-000000000316
 965b2aeb-1230-4c24-9ca2-e815187372a9 | BMCNetwork     |      |      |        |         | 57f387dc-0389-0252-01f4-000000000316
(3 rows)
[RDHEAD1][root@rdhead1 ~]# su - postgres -c "psql -d engine -c \"SELECT volume_id,status,network_id FROM gluster_volume_bricks\""
volume_id | status | network_id
--------------------------------------+--------+------------
3c1e9936-4cce-4a21-83f3-ac8611348484 | UP |
3c1e9936-4cce-4a21-83f3-ac8611348484 | UP |
47e45087-1a07-4790-9d30-77edbefa5f2e | UP |
47e45087-1a07-4790-9d30-77edbefa5f2e | UP |
(4 rows)
I tried to manually attach the network_id to the volume bricks, but
after a while the column gets emptied again:
[RDHEAD1][root@rdhead1 squid-in-a-can]# su - postgres -c "psql -d engine -c \"update gluster_volume_bricks set network_id='628ed584-feaf-4952-908c-5b2d654c0731' \""
UPDATE 4
Look:
2016-10-10 12:27:28,595 INFO
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler2) [33756067] Network address for brick
'10.3.10.5:/data/sdc1/gluvol2' detected as 'rdkvm1-data'. Updating
engine DB accordingly.
2016-10-10 12:27:28,607 INFO
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler2) [33756067] Network address for brick
'10.3.10.6:/data/sdc1/gluvol2' detected as 'rdkvm2-data'. Updating
engine DB accordingly.
2016-10-10 12:27:28,619 INFO
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler2) [33756067] Network address for brick
'10.3.10.5:/data/sdb1/gluvol1' detected as 'rdkvm1-data'. Updating
engine DB accordingly.
2016-10-10 12:27:28,623 INFO
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
(DefaultQuartzScheduler2) [33756067] Network address for brick
'10.3.10.6:/data/sdb1/gluvol1' detected as 'rdkvm2-data'. Updating
engine DB accordingly.
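Editorial note: the manual UPDATE is undone because GlusterSyncJob re-derives the brick-to-network mapping on every run, and per the warnings the engine finds no network carrying the gluster role in that cluster at all. So the role assignment is worth checking (in the cluster's Manage Networks dialog, glusterNetwork must have the gluster network role, not just be attached to the hosts) rather than editing the DB. For illustration only, the kind of subnet-membership check involved can be sketched with the stdlib; the 10.3.10.0/24 subnet below is an assumption inferred from the brick IPs in the logs:

```python
import ipaddress
import socket

def brick_on_subnet(brick_host, subnet):
    """Resolve a brick host and test whether its address falls inside a
    subnet -- roughly the mapping the engine performs when it associates
    bricks with the cluster's gluster network."""
    addr = ipaddress.ip_address(socket.gethostbyname(brick_host))
    return addr in ipaddress.ip_network(subnet)

# With the literal brick addresses from the logs (no DNS lookup needed):
for brick_ip in ("10.3.10.5", "10.3.10.6"):
    assert ipaddress.ip_address(brick_ip) in ipaddress.ip_network("10.3.10.0/24")
```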
How can I solve this situation?
Thank you
Felip M
--
Felip Moll Marquès
Computer Science Engineer
E-Mail - lipixx(a)gmail.com
WebPage - http://lipix.ciutadella.es
8 years, 1 month
Gluster service failure
by Koen Vanoppen
Dear all,
One little issue: I have 1 hypervisor in my datacenter whose gluster
status keeps showing as disconnected in the GUI. But if I look on the server,
the service is running. I added the logs from after I clicked "Restart
gluster service".
Kind regards,
Koen
Active Directory domain authorization in oVirt Hosted Engine guest OS
by aleksey.maksimov@it-kb.ru
Hello oVirt guru`s!
I'm sorry if this is off-topic, but I do not know where else to seek help.
I want to set up Active Directory domain authorization in the oVirt Hosted Engine guest OS.
For this I use SSSD as described here:
https://blog.it-kb.ru/2016/10/15/join-debian-gnu-linux-8-6-to-active-dire...
I attached the computer to the domain using the realm utility.
It looks nice.
[root@KOM-OVIRT1 ~]# realm list
ad.holding.com
type: kerberos
realm-name: AD.HOLDING.COM
domain-name: ad.holding.com
configured: kerberos-member
server-software: active-directory
client-software: sssd
required-package: oddjob
required-package: oddjob-mkhomedir
required-package: sssd
required-package: adcli
required-package: samba-common
login-formats: %U(a)ad.holding.com
login-policy: allow-permitted-logins
permitted-logins:
permitted-groups: KOM-SRV-Linux-Admins(a)ad.holding.com
However, getent does not return information about domain accounts:
[root@KOM-OVIRT1 ~]# getent passwd aleksey(a)ad.holding.com
[root@KOM-OVIRT1 ~]#
getent for local accounts works:
[root@KOM-OVIRT1 ~]# getent passwd root
root:x:0:0:root:/root:/bin/bash
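Editorial note: `getent` resolves users through NSS, so the same lookup can be reproduced with Python's stdlib `pwd` module; if it also fails for domain users even though `sss` is listed on the `passwd:` line of /etc/nsswitch.conf, the failure is inside SSSD itself (raising `debug_level` in sssd.conf helps narrow it down). A small sketch, using the local and domain account names from the post:

```python
import pwd

def nss_lookup(username):
    """Perform the same NSS passwd lookup that `getent passwd <user>` does."""
    try:
        return pwd.getpwnam(username)
    except KeyError:
        return None

print(nss_lookup("root"))                    # local account: resolves
print(nss_lookup("aleksey@ad.holding.com"))  # domain account: None when the SSSD lookup fails
```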
Does the oVirt Hosted Engine guest OS have some tricky authorization settings?
Can you help me?
[ANN] oVirt 4.0.5 Third Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of oVirt 4.0.5
third release candidate for testing, as of October 20th, 2016.
This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the third release candidate of the fifth in a series of
stabilization updates to the 4.0 series.
4.0.5 brings 13 enhancements and 81 bugfixes, including 36 high or urgent
severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
* A new oVirt Live ISO is available. [4]
* A new oVirt Next Generation Node will be available soon [4]
* A new oVirt Engine Appliance is available for Red Hat Enterprise Linux
and CentOS Linux (or similar)
* Mirrors[5] might need up to one day to synchronize.
Additional Resources:
* Read more about the oVirt 4.0.5 release highlights:
http://www.ovirt.org/release/4.0.5/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.5/
[4] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Change storage domain description
by nicolas@devels.es
Hi,
How can I change an existing storage domain's description? When 'Manage
Storage' is clicked, only the name can be changed; both the Description
and Comment fields are greyed out.
This is oVirt 4.0.3 BTW.
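Editorial note: as a workaround, the description can typically be edited through the REST API instead of the dialog. A minimal sketch with the v4 Python SDK (ovirtsdk4), where the engine URL, credentials, and domain name are placeholders; depending on the domain's status, the attached-domain sub-service under the data center may be needed instead:

```python
def sd_update_body(description):
    """Plain-data form of the update payload (kept SDK-independent)."""
    return {"description": description}

def update_description(domain_name, description):
    # Imported here so the helper above is usable without the SDK installed.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",  # placeholder
        username="admin@internal",
        password="secret",                                   # placeholder
        ca_file="ca.pem",
    )
    try:
        sds_service = connection.system_service().storage_domains_service()
        # Find the domain by name, then push the new description.
        sd = sds_service.list(search="name=%s" % domain_name)[0]
        sds_service.storage_domain_service(sd.id).update(
            types.StorageDomain(**sd_update_body(description))
        )
    finally:
        connection.close()

# Usage against a live engine (not run here):
#   update_description("data_domain", "new description")
```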
Thanks.
planned jenkins and resources downtime
by Evgheni Dereveanchin
Hi everyone,
As part of our planned maintenance I am going to reboot
the Jenkins master along with our resources file server.
Affected services:
jenkins.ovirt.org
resources.ovirt.org
This means that no new jobs will be started on Jenkins,
and our repositories will be unavailable for a short
period of time.
I will follow up on this email once all services
are back up and running.
Regards,
Evgheni Dereveanchin
Data Center jumping between online and non-responsive
by gregor
Hi,
I installed oVirt on another host machine and there the Data Center jumps between online and non-responsive every 5-10 minutes.
It's nearly impossible to work with the web interface because, during the non-responsive state, all actions I try to start are blocked.
The installation is a hosted-engine setup on a single host.
These lines fill the log:
...
Okt 19 18:45:47 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:47 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:47 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:48 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:48 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:49 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:49 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:50 host01.domain.local ovirt-ha-broker[977]: INFO:mgmt_bridge.MgmtBridge:Found bridge ovirtmgmt with ports
Okt 19 18:45:50 host01.domain.local ovirt-ha-broker[977]: INFO:cpu_load_no_engine.EngineHealth:System load total=0.1130, engine=0.0098, non-engine=0.1031
Okt 19 18:45:51 host01.domain.local ovirt-ha-broker[977]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Okt 19 18:45:51 host01.domain.local ovirt-ha-broker[977]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Okt 19 18:45:51 host01.domain.local ovirt-ha-broker[977]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection established
Okt 19 18:45:51 host01.domain.local ovirt-ha-broker[977]: INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Okt 19 18:45:51 host01.domain.local ovirt-ha-broker[977]: INFO:mem_free.MemFree:memFree: 14127
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:51 host01.domain.local vdsm[3045]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting storage server
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing the storage domain
Okt 19 18:45:51 host01.domain.local kernel: device-mapper: table: 253:3: multipath: error getting device
Okt 19 18:45:51 host01.domain.local kernel: device-mapper: ioctl: error adding target to table
Okt 19 18:45:51 host01.domain.local multipathd[598]: dm-3: remove map (uevent)
Okt 19 18:45:51 host01.domain.local multipathd[598]: dm-3: remove map (uevent)
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: /usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending is deprecated. Use Dispatcher.socket.pending instead.
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: pending = getattr(dispatcher, 'pending', lambda: 0)
Okt 19 18:45:51 host01.domain.local vdsm[3045]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Preparing images
Okt 19 18:45:51 host01.domain.local ovirt-ha-agent[3048]: INFO:ovirt_hosted_engine_ha.lib.image.Image:Preparing images
Automatic vm installation
by knarra
Hi Everyone,
I would like to automate VM installation using ovirt-python-sdk.
What is the preferred / best way to do that?
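Editorial note: with the v4 Python SDK (ovirtsdk4), a VM is typically created from a template via the top-level VMs service. A hedged sketch; the engine URL, credentials, cluster, and template names are placeholders:

```python
def vm_spec(name, cluster, template):
    """Collect the new VM's parameters as plain data (SDK-independent)."""
    return {"name": name, "cluster": cluster, "template": template}

def create_vm(spec):
    # Imported here so vm_spec stays usable without the SDK installed.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",  # placeholder
        username="admin@internal",
        password="secret",                                   # placeholder
        ca_file="ca.pem",
    )
    try:
        vms_service = connection.system_service().vms_service()
        vm = vms_service.add(types.Vm(
            name=spec["name"],
            cluster=types.Cluster(name=spec["cluster"]),
            template=types.Template(name=spec["template"]),
        ))
        return vm.id
    finally:
        connection.close()

# Usage against a live engine (not run here):
#   create_vm(vm_spec("test-vm-01", "Default", "Blank"))
```

For unattended guest OS setup, the VM can then be started with a cloud-init payload (types.Initialization); the SDK's bundled examples cover that pattern.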
Thanks
kasturi
Ping returns: Redirect Host(New nexthop ..
by gregor
Hi,
sometimes a ping to a VM returns "Redirect Host(New nexthop ...".
Lowering the MTU inside the VM gives better results, but the network
throughput is very bad (45 kB/s during a yum update).
Networkconfiguration:
Network name: WAN, MTU 1500 (default)
Network name: LAN, MTU 1500 (default)
LAN and WAN assigned to the host and ipv4 set to "none"
added WAN to eth0 to the VM
added LAN to eth1 to the VM
PING inside the LAN network works perfectly.
PING to WAN sometimes gives a latency of 4000 ms to 8.8.8.8.
PING from WAN to the VM gives the problem described above, with latency
sometimes up to 200 ms (normally 0.7 ms).
When I set the MTU to 1400 in the VM's network configuration and also
on the VM itself, the nexthop problem is gone. A speed test with
speedtest-cli [1] gives me a speed of 0.35/3.45. The values differ extremely
from those measured with the same tool from a laptop inside the WAN network.
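Editorial note: ICMP host redirects combined with MTU-dependent behaviour usually point at fragmentation or an MTU mismatch somewhere on the path. One way to confirm is to ping with the don't-fragment flag (`ping -M do -s <payload>` on Linux) using a payload sized exactly to the suspected MTU; the payload arithmetic (20-byte IPv4 header plus 8-byte ICMP header) is:

```python
def max_icmp_payload(mtu, ip_header=20, icmp_header=8):
    """Largest ICMP echo payload that fits in one frame without fragmentation."""
    return mtu - ip_header - icmp_header

# e.g. `ping -M do -s 1472 8.8.8.8` should pass on a clean 1500-byte path
print(max_icmp_payload(1500))  # 1472
print(max_icmp_payload(1400))  # 1372
```

If 1472 fails with "message too long" while a smaller payload passes, some hop on the path has a lower MTU than 1500.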
Can anybody give me a hint what the problem could be?
[1] https://github.com/sivel/speedtest-cli
cheers
gregor