Requirements to move the cluster to version 4.6
by Gianluca Cecchi
Hello,
I updated all 3 of my oVirt 4.4.5 hosts to 4.4.6.
Or at least, after upgrading them from the GUI, I see they are now on CentOS 8.4
and:
[root@ov200 ~]# rpm -qa| grep 4.4.6
ovirt-release44-4.4.6.3-1.el8.noarch
ovirt-host-4.4.6-1.el8.x86_64
ovirt-host-dependencies-4.4.6-1.el8.x86_64
[root@ov200 ~]#
How can I see, from the webadmin GUI or from the Host Console, that they are
indeed on 4.4.6?
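A quick way to confirm the package level on every host is to query rpm over ssh; a minimal sketch, assuming root ssh access (hostnames taken from the post) and GNU `sort -V` for the version comparison:

```shell
# at_least REQUIRED VERSION -> succeeds when VERSION >= REQUIRED (sort -V compare)
at_least() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }

# Sketch for each host (ov200, ov300, ov301 from the post):
#   v=$(ssh "root@ov200" rpm -q --qf '%{VERSION}' ovirt-host)
#   at_least 4.4.6 "$v" && echo "ov200 is on $v"
at_least 4.4.6 4.4.6 && echo "4.4.6 satisfies the requirement"
```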
If I try to set cluster compatibility version to 4.6 I get:
Error while executing action: Cannot change Cluster Compatibility Version
to higher version when there are active Hosts with lower version.
-Please move Host ov300, ov301, ov200 with lower version to maintenance
first.
I don't remember needing to take downtime to move to a new cluster version...
And what are the further requirements (and benefits...) of upgrading to 4.6?
Thanks,
Gianluca
3 years, 3 months
Expiration date of certificates
by Demeter Tibor
Dear Listmembers,
How can I extract the expiration dates of all the certificates that oVirt uses?
Also, is there any command or tool in oVirt that I can use to create and deploy new certificates on the hosted engine and the hosts?
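For the expiry dates, openssl can read them straight off the certificate files; a sketch, assuming the stock PKI locations (/etc/pki/ovirt-engine on the engine, /etc/pki/vdsm on the hosts):

```shell
# Print the notAfter date of each engine-side certificate that exists.
for cert in /etc/pki/ovirt-engine/ca.pem /etc/pki/ovirt-engine/certs/*.cer; do
  [ -e "$cert" ] || continue      # skip silently on machines without these files
  printf '%s: ' "$cert"
  openssl x509 -in "$cert" -noout -enddate
done
```

For renewal, re-running engine-setup offers to renew the engine PKI, and host certificates can be renewed with the "Enroll Certificate" host action in the webadmin; verify both against the documentation for your version.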
Thanks in advance,
Regards,
Tibor
3 years, 3 months
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Penalizing score by 1600 due to network status
by Christoph Timm
Hi List,
I'm trying to understand why my hosted engine is moved from one node to
another from time to time.
It happens sometimes multiple times a day, but there are also days
without it.
I can see the following in the ovirt-hosted-engine-ha/agent.log:
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Penalizing score by 1600 due to network status
After that the engine is shut down and started on another host.
The oVirt Admin portal is showing the following around the same time:
Invalid status on Data Center Default. Setting status to Non Responsive.
But the whole cluster is working normally during that time.
I believe I somehow have a network issue on my side, but I have no
clue which check is causing the network status to be penalized.
Does anyone have an idea how to investigate this further?
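The score penalty comes from the broker's network monitor; on my installs its probe type is set by network_test (dns, ping or tcp) in /etc/ovirt-hosted-engine/hosted-engine.conf, so a reasonable first check is to reproduce the default dns probe by hand and tail the raw monitor output (a sketch; paths assumed to be the stock ones):

```shell
# Reproduce the default 'dns' probe: can the host resolver answer at all?
if getent hosts localhost >/dev/null; then echo "resolver answered"; fi
# Then watch what the monitor itself reported around each migration:
#   grep -i network /var/log/ovirt-hosted-engine-ha/broker.log | tail -n 20
```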
Best regards
Christoph
3 years, 3 months
Gluster bricks error
by Patrick Lomakin
For a long time I have been seeing the same error, which I cannot correct. After restarting a host that carries one or more bricks of a volume, the volume starts with the status "Online", but the bricks remain "Offline". This means I have to manually restart the volume, restart the ovirt-ha-agent and ovirt-ha-broker services, and run the hosted-engine --connect-storage command. Only after that can I bring the hosted engine back to normal. I have tried this on different server hardware and different host operating systems, but the result is the same. This is a very serious flaw that nullifies high availability in HCI setups using GlusterFS.

Regards!
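For scripting the check after a reboot, the Online column of `gluster volume status` can be parsed directly; a sketch against a captured sample (the awk filter is the point, the two sample lines are illustrative):

```shell
# List bricks whose Online column is 'N' in `gluster volume status <vol>`.
status='Brick host1:/gluster/brick1    49152  0    Y  1234
Brick host2:/gluster/brick1    N/A    N/A  N  N/A'
printf '%s\n' "$status" | awk '$1 == "Brick" && $(NF-1) == "N" { print $2 }'
# prints: host2:/gluster/brick1
```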
3 years, 4 months
Create gluster brick in oVirt engine, RAID-parameters
by Dominique D
Hi,
When creating a gluster brick in oVirt (host -> Storage Devices -> Create Brick), I have to fill in the parameters
of the RAID volume backing the brick: RAID type, number of disks and stripe size.
My setup is a Dell PERC H710 RAID controller with
4 SSD disks in RAID 5.
The default stripe size for the Dell RAID controller is 64KB.
There is no RAID 5 option, and whatever I choose (RAID 6, 4 or 3 disks, 64KB or 128KB stripe), it does not work ("Failed to create brick data_ssd on host xxx of cluster Default"). The only option that works is RAID type = None.
I tried to find information about this, but no luck yet.
I tried oVirt engine 4.4.1.10 and the latest 4.4.7.6.
Can I leave this set to None? Does it affect performance?
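As far as I can tell, the wizard only uses those values to compute the LVM stripe alignment (full stripe = stripe size x number of data disks), so None loses the alignment rather than any functionality; a sketch of the arithmetic (the dataalignment name is the LVM option I assume it feeds, not something the dialog shows):

```shell
# RAID5 over 4 disks: one disk's worth of parity, so 3 data disks.
disks=4; stripe_kb=64
data_disks=$((disks - 1))
echo "full stripe = $((stripe_kb * data_disks))K  (LVM dataalignment)"
# prints: full stripe = 192K  (LVM dataalignment)
```

Misaligned writes on parity RAID cause read-modify-write cycles, so None can cost write performance; if the dialog refuses your combination, the alignment can be set by hand with pvcreate --dataalignment when building the brick manually.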
Thank you
3 years, 4 months
4.4.7 gluster quorum problem
by radchenko.anatoliy@gmail.com
Hi guys,
one strange thing happens that I cannot understand.
Today I installed the latest version, 4.4.7, on CentOS Stream: replica 3, via cockpit, with an internal LAN for sync. All seems OK, and if I reboot all 3 nodes together it is also OK. But if I reboot 1 node (and declare the node rebooted through the web UI), the bricks (engine and data) remain down on that node. The logs are clean, with no explicit indication of the situation except "Server quorum regained for volume data. Starting local bricks" in glusterd.log. After "systemctl restart glusterd", the bricks go down on another node; after "systemctl restart glusterd" on that node, all is OK.
Where should I look?
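Places worth looking first, plus a grep for quorum messages like the one quoted above (a sketch; the inline sample only demonstrates the filter):

```shell
# Pull the quorum-related words out of a glusterd log line:
sample='[glusterd] Server quorum regained for volume data. Starting local bricks'
printf '%s\n' "$sample" | grep -o 'quorum [a-z]*'
# prints: quorum regained
# On the node itself:
#   gluster volume status data
#   gluster volume get data cluster.server-quorum-type
#   grep -i quorum /var/log/glusterfs/glusterd.log | tail
```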
Some log errors that I found:
--------------------------------------------- broker.log:
statusStorageThread::ERROR::2021-07-12 22:17:02,899::storage_broker::223::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(put_stats) Failed to write metadata for ho$
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 215, in put_stats
f = os.open(path, direct_flag | os.O_WRONLY | os.O_SYNC)
FileNotFoundError: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7eb6-d453-439a-8436-d3694d4b5179/de18b2cc-a4e1-4afc-9b5a-6063$
StatusStorageThread::ERROR::2021-07-12 22:17:02,899::status_broker::90::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to update state.
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 215, in put_stats
f = os.open(path, direct_flag | os.O_WRONLY | os.O_SYNC)
FileNotFoundError: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7eb6-d453-439a-8436-d3694d4b5179/de18b2cc-a4e1-4afc-9b5a-6063$
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 86, in run
entry.data
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 225, in put_stats
.format(str(e)))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: failed to write metadata: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7e$
StatusStorageThread::ERROR::2021-07-12 22:17:02,899::storage_broker::223::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(put_stats) Failed to write metadata for ho$
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 215, in put_stats
f = os.open(path, direct_flag | os.O_WRONLY | os.O_SYNC)
FileNotFoundError: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7eb6-d453-439a-8436-d3694d4b5179/de18b2cc-a4e1-4afc-9b5a-6063$
StatusStorageThread::ERROR::2021-07-12 22:17:02,899::status_broker::90::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to update state.
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 215, in put_stats
f = os.open(path, direct_flag | os.O_WRONLY | os.O_SYNC)
FileNotFoundError: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7eb6-d453-439a-8436-d3694d4b5179/de18b2cc-a4e1-4afc-9b5a-6063$
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 86, in run
entry.data
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 225, in put_stats
.format(str(e)))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: failed to write metadata: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7e$
StatusStorageThread::ERROR::2021-07-12 22:17:02,902::status_broker::70::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(trigger_restart) Trying to restart the $
StatusStorageThread::ERROR::2021-07-12 22:17:02,902::storage_broker::173::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(get_raw_stats) Failed to read metadata fro$
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 155, in get_raw_stats
f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
FileNotFoundError: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7eb6-d453-439a-8436-d3694d4b5179/de18b2cc-a4e1-4afc-9b5a-6063$
StatusStorageThread::ERROR::2021-07-12 22:17:02,902::status_broker::98::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to read state.
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 155, in get_raw_stats
f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)
FileNotFoundError: [Errno 2] No such file or directory: '/run/vdsm/storage/53b068c1-beb8-4048-a766-3a4e71ded624/d3df7eb6-d453-439a-8436-d3694d4b5179/de18b2cc-a4e1-4afc-9b5a-6063$
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 94, in run
self._storage_broker.get_raw_stats()
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 175, in get_raw_stats
.format(str(e)))
-------------------------------------------------------supervdsm.log:
ainProcess|jsonrpc/0::DEBUG::2021-07-12 22:22:13,264::commands::211::root::(execCmd) /usr/bin/taskset --cpu-list 0-19 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/b$
MainProcess|jsonrpc/0::DEBUG::2021-07-12 22:22:15,083::commands::224::root::(execCmd) FAILED: <err> = b'Running scope as unit: run-r91d6411af8114090aa28933d562fa473.scope\nMount$
MainProcess|jsonrpc/0::ERROR::2021-07-12 22:22:15,083::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 97, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 135, in mount
cgroup=cgroup)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 280, in _mount
_runcmd(cmd)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 308, in _runcmd
raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/systemd-run', '--scope', '--slice=vdsm-glusterfs', '/usr/bin/mount', '-t', 'glusterfs', '-o', 'backup-volfile-servers=cluster2.$
netlink/events::DEBUG::2021-07-12 22:22:15,131::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 139867781396224)>
MainProcess|jsonrpc/4::DEBUG::2021-07-12 22:22:15,134::supervdsm_server::102::SuperVdsm.ServerCallback::(wrapper) return network_caps with {'networks': {'ovirtmgmt': {'ports': [$
---------------------------------------------------------vdsm.log:
2021-07-12 22:17:08,718+0200 INFO (jsonrpc/7) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::1,34946 (api:54)
2021-07-12 22:17:09,491+0200 ERROR (monitor/53b068c) [storage.Monitor] Error checking domain 53b068c1-beb8-4048-a766-3a4e71ded624 (monitor:451)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/monitor.py", line 432, in _checkDomainStatus
self.domain.selftest()
File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 712, in selftest
self.oop.os.statvfs(self.domaindir)
File "/usr/lib/python3.6/site-packages/vdsm/storage/outOfProcess.py", line 244, in statvfs
return self._iop.statvfs(path)
File "/usr/lib/python3.6/site-packages/ioprocess/__init__.py", line 510, in statvfs
resdict = self._sendCommand("statvfs", {"path": path}, self.timeout)
File "/usr/lib/python3.6/site-packages/ioprocess/__init__.py", line 479, in _sendCommand
raise OSError(errcode, errstr)
FileNotFoundError: [Errno 2] No such file or directory
2021-07-12 22:17:09,619+0200 INFO (jsonrpc/0) [api.virt] START getStats() from=::1,34946, vmId=9167f682-3c82-4237-93bd-53f0ad32ffba (api:48)
2021-07-12 22:17:09,620+0200 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine does not exist: {'vmId': '9167f682-3c82-4237-93bd-53f0ad32ffba'} (api:129)
2021-07-12 22:17:09,620+0200 INFO (jsonrpc/0) [api.virt] FINISH getStats return={'status': {'code': 1, 'message': "Virtual machine does not exist: {'vmId': '9167f682-3c82-4237-$
2021-07-12 22:17:09,620+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2021-07-12 22:17:10,034+0200 INFO (jsonrpc/3) [vdsm.api] START repoStats(domains=['53b068c1-beb8-4048-a766-3a4e71ded624']) from=::1,34946, task_id=4e823c98-f95b-45f7-ad64-90f82$
2021-07-12 22:17:10,034+0200 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={'53b068c1-beb8-4048-a766-3a4e71ded624': {'code': 2001, 'lastCheck': '0.5', 'delay': '0', 'vali$
2021-07-12 22:17:10,403+0200 INFO (health) [health] LVM cache hit ratio: 12.50% (hits: 1 misses: 7) (health:131)
2021-07-12 22:17:10,472+0200 INFO (MainThread) [vds] Received signal 15, shutting down (vdsmd:74)
2021-07-12 22:17:10,472+0200 INFO (MainThread) [root] Stopping DHCP monitor. (dhcp_monitor:106)
2021-07-12 22:17:10,473+0200 INFO (ioprocess/11056) [IOProcessClient] (53b068c1-beb8-4048-a766-3a4e71ded624) Poll error 16 on fd 74 (__init__:176)
2021-07-12 22:17:10,473+0200 INFO (ioprocess/11056) [IOProcessClient] (53b068c1-beb8-4048-a766-3a4e71ded624) ioprocess was terminated by signal 15 (__init__:200)
2021-07-12 22:17:10,476+0200 INFO (ioprocess/19109) [IOProcessClient] (e10cbd59-d32e-4b69-a4c1-d213e7bd8973) Poll error 16 on fd 75 (__init__:176)
2021-07-12 22:17:10,476+0200 INFO (ioprocess/19109) [IOProcessClient] (e10cbd59-d32e-4b69-a4c1-d213e7bd8973) ioprocess was terminated by signal 15 (__init__:200)
2021-07-12 22:17:10,513+0200 INFO (ioprocess/44046) [IOProcess] (e10cbd59-d32e-4b69-a4c1-d213e7bd8973) Starting ioprocess (__init__:465)
2021-07-12 22:17:10,513+0200 INFO (ioprocess/44045) [IOProcess] (53b068c1-beb8-4048-a766-3a4e71ded624) Starting ioprocess (__init__:465)
2021-07-12 22:17:10,519+0200 WARN (periodic/0) [root] Failed to retrieve Hosted Engine HA info: timed out (api:198)
2021-07-12 22:17:10,611+0200 ERROR (check/loop) [storage.Monitor] Error checking path /rhev/data-center/mnt/glusterSD/cluster1.int:_data/e10cbd59-d32e-4b69-a4c1-d213e7bd8973/dom$
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/monitor.py", line 507, in _pathChecked
delay = result.delay()
File "/usr/lib/python3.6/site-packages/vdsm/storage/check.py", line 391, in delay
raise exception.MiscFileReadException(self.path, self.rc, self.err)
vdsm.storage.exception.MiscFileReadException: Internal file read failure: ('/rhev/data-center/mnt/glusterSD/cluster1.int:_data/e10cbd59-d32e-4b69-a4c1-d213e7bd8973/dom_md/metada$
2021-07-12 22:17:10,860+0200 INFO (MainThread) [root] Stopping Bond monitor. (bond_monitor:53)
Thanks in advance
Best regards.
3 years, 4 months
PostgreSQL password for ovirt 4.4.7
by louisb@ameritech.net
I've finally managed to install oVirt 4.4.7 using the default installation. I would like to change the passwords for the following user IDs: postgres, the engine database, and the data warehouse database. Can someone provide me with information regarding the files that contain the passwords for these items? I would also like to know the best way to change the passwords.
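On my 4.4 installs the database credentials live in small conf snippets rather than one file; a sketch of where to look and how the value is stored (paths are the stock ones, verify on your system):

```shell
# Engine DB:  /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
# DWH DB:     /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf
# The password is a plain quoted value; extracting it from a sample line:
line='ENGINE_DB_PASSWORD="oldsecret"'
printf '%s\n' "$line" | sed -n 's/^ENGINE_DB_PASSWORD="\(.*\)"$/\1/p'
# prints: oldsecret
```

To rotate, I would change it in PostgreSQL first (su - postgres -c psql, then ALTER USER engine PASSWORD 'new';), update the matching conf value, and restart ovirt-engine; the same idea applies to the dwh user.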
Thanks
3 years, 4 months
Cannot start hosted-engine
by Valerio Luccio
Hi list,
I have a hosted engine running on CentOS 8. The engine and all the
VMs are stored on a 4-node gluster. I had some issues with the gluster,
and then the hosted-engine stopped working (even though the
virtualization dashboard showed 4 virtual machines running). I tried to
"systemctl restart" the hosted-engine, but it failed. I tried rebooting
the server, and the hosted-engine still will not come up. Note that the
server has no issue mounting the gluster:
$ df
hydra1:/MRIData 390664407040 20530130012 370134277028 6% /rhev/data-center/mnt/glusterSD/hydra1:_MRIData
$ ls -l /rhev/data-center/mnt/glusterSD/hydra1\:_MRIData/6547dc22-b89e-4f14-8958-c9e8d27b29a4/
drwxr-xr-x. 2 vdsm kvm 4.0K Mar 29 12:24 dom_md
drwxr-xr-x. 2 vdsm kvm 4.0K Jul 20 17:47 ha_agent
drwxr-xr-x. 12 vdsm kvm 4.0K Apr 1 16:32 images
drwxr-xr-x. 4 vdsm kvm 4.0K Mar 29 12:24 master
Where "hydra1" is one of my gluster nodes and MRIData is the volume name.
Here is the relevant snippet from /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2021-07-20 17:29:07,584::agent::67::ovirt_hosted_engine_ha.agent.agent.Agent::(run) ovirt-hosted-engine-ha agent 2.4.6 started
MainThread::INFO::2021-07-20 17:29:07,594::hosted_engine::242::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) Certificate common name not found, using hostname to identify host
MainThread::INFO::2021-07-20 17:29:07,635::hosted_engine::548::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Initializing ha-broker connection
MainThread::INFO::2021-07-20 17:29:07,636::brokerlink::82::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor network, options {'addr': '192.168.39.65', 'network_test': 'dns', 'tcp_t_address': '', 'tcp_t_port': ''}
MainThread::ERROR::2021-07-20 17:29:07,636::hosted_engine::564::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Failed to start necessary monitors
MainThread::ERROR::2021-07-20 17:29:07,637::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor
response = self._proxy.start_monitor(type, options)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
return self.__send(self.__name, args)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
verbose=self.__verbose
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
http_conn = self.send_request(host, handler, request_body, verbose)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
self.send_content(connection, request_body)
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
connection.endheaders(request_body)
File "/usr/lib64/python3.6/http/client.py", line 1249, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/lib64/python3.6/http/client.py", line 974, in send
self.connect()
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect
self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
return action(he)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
return he.start_monitoring()
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 437, in start_monitoring
self._initialize_broker()
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 561, in _initialize_broker
m.get('options', {}))
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 91, in start_monitor
).format(t=type, o=options, e=e)
ovirt_hosted_engine_ha.lib.exceptions.RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'addr': '192.168.39.65', 'network_test': 'dns', 'tcp_t_address': '', 'tcp_t_port': ''}]
MainThread::ERROR::2021-07-20 17:29:07,637::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Trying to restart agent
MainThread::INFO::2021-07-20 17:29:07,637::agent::89::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
I'm puzzled by that "Certificate common name not found", which I had not
seen before. The fqdn of the hosted engine resolves fine on the server,
and so does the fqdn of the server itself. The IP address it seems to be
trying to use for the network monitor is that of one of the university's gateways.
Any ideas? Any way to debug this further?
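The traceback actually fails inside unixrpc.connect(), i.e. the agent could not open the broker's unix socket, so ovirt-ha-broker is the first thing to check (systemctl status ovirt-ha-broker, then its log). The socket path travels base16-encoded in the XML-RPC host field; decoding one shows exactly which path the agent tried (the encoded value below is a made-up illustration, substitute the one from your trace):

```shell
encoded=2F72756E2F736F636B6574   # hypothetical hex, not taken from this log
printf '%b\n' "$(printf '%s' "$encoded" | sed 's/../\\x&/g')"
# prints: /run/socket
```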
Thanks in advance,
--
Valerio Luccio (212) 998-8736
Center for Brain Imaging 4 Washington Place, Room 157
New York University New York, NY 10003
"In an open world, who needs windows or gates ?"
3 years, 4 months
LDAP auth error "server_error: Cannot locate principal"
by tbural@gmail.com
I am trying to configure LDAP auth on the engine. After adding a user from LDAP, I cannot log in; I get the error "server_error: Cannot locate principal".
Errors from engine.log:
2021-06-30 17:24:23,830+05 ERROR [org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default task-5) [686f77b] Internal Server Error: Cannot locate principal 'Domain Reader'
2021-06-30 17:24:23,830+05 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-5) [686f77b] Cannot locate principal 'Domain Reader'
2021-06-30 17:24:23,851+05 ERROR [org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-5) [686f77b] server_error: Cannot locate principal 'Domain Reader'
How can I fix this error?
ovirt 4.3.10
Config /etc/ovirt-engine/aaa/openldap_rfc.properties:
include = <rfc2307-openldap.properties>
vars.server = LDAP.testdom.local
vars.user = CN=Domain Reader,OU=AD,OU=SERVICE,DC=testdom,DC=local
vars.password = password
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = single
pool.default.serverset.single.server = ${global:vars.server}
pool.default.ssl.startTLS = true
pool.default.ssl.insecure = true
attrmap.map-principal-record.attr.PrincipalRecord_ID.map = uid
attrmap.map-principal-record.attr.PrincipalRecord_PRINCIPAL.map = cn
#LDAP value changes
sequence.openldap-init-vars.030.var-set.value = entryUUID, uid, cn, givenName, sn, Email
sequence.openldap-init-vars.040.var-set.value = (objectClass=posixAccount)(uid=*)
sequence.openldap-init-vars.050.var-set.value = entryUUID, uid
sequence.openldap-init-vars.060.var-set.value = (objectClass=posixGroup)
sequence.openldap-init-vars.070.var-set.value = memberUid
User attributes:
ovirt-engine-extensions-tool aaa search --extension-name=openldap_rfc-authz --entity=principal --entity-name=domreader
2021-07-21 17:14:33,805+05 INFO ========================================================================
2021-07-21 17:14:33,833+05 INFO ============================ Initialization ============================
2021-07-21 17:14:33,833+05 INFO ========================================================================
2021-07-21 17:14:33,878+05 INFO Loading extension 'internal-authz'
2021-07-21 17:14:33,885+05 INFO Extension 'internal-authz' loaded
------
2021-07-21 17:14:35,885+05 INFO ========================================================================
2021-07-21 17:14:35,886+05 INFO ============================== Execution ===============================
2021-07-21 17:14:35,886+05 INFO ========================================================================
2021-07-21 17:14:35,886+05 INFO Iteration: 0
2021-07-21 17:14:35,891+05 INFO --- Begin QueryFilterRecord ---
2021-07-21 17:14:35,892+05 INFO AAA_AUTHZ_QUERY_FILTER_OPERATOR: 102
2021-07-21 17:14:35,892+05 INFO AAA_AUTHZ_QUERY_ENTITY: AAA_AUTHZ_QUERY_ENTITY_PRINCIPAL[1695cd36-4656-474f-b7bc-4466e12634e4]
2021-07-21 17:14:35,893+05 INFO --- Begin QueryFilterRecord ---
2021-07-21 17:14:35,893+05 INFO AAA_AUTHZ_QUERY_FILTER_OPERATOR: 0
2021-07-21 17:14:35,894+05 INFO AAA_AUTHZ_QUERY_FILTER_KEY: Extkey[name=AAA_AUTHZ_PRINCIPAL_NAME;type=class java.lang.String;uuid=AAA_AUTHZ_PRINCIPAL_NAME[a0df5bcc-6ead-40a2-8565-2f5cc8773bdd];]
2021-07-21 17:14:35,894+05 INFO AAA_AUTHZ_PRINCIPAL_NAME: domreader
2021-07-21 17:14:35,894+05 INFO --- End QueryFilterRecord ---
2021-07-21 17:14:35,895+05 INFO --- End QueryFilterRecord ---
2021-07-21 17:14:35,895+05 INFO API: -->Authz.InvokeCommands.QUERY_OPEN namespace='dc=testdom,dc=local'
2021-07-21 17:14:35,904+05 INFO API: <--Authz.InvokeCommands.QUERY_OPEN
2021-07-21 17:14:35,904+05 INFO API: -->Authz.InvokeCommands.QUERY_EXECUTE
2021-07-21 17:16:04,079+05 INFO API: <--Authz.InvokeCommands.QUERY_EXECUTE count=1
2021-07-21 17:16:04,080+05 INFO --- Begin PrincipalRecord ---
2021-07-21 17:16:04,081+05 INFO AAA_AUTHZ_PRINCIPAL_PRINCIPAL: Domain Reader
2021-07-21 17:16:04,081+05 INFO AAA_AUTHZ_PRINCIPAL_LAST_NAME: Reader
2021-07-21 17:16:04,081+05 INFO AAA_LDAP_UNBOUNDID_DN: cn=Domain Reader,ou=AD,ou=SERVICE,dc=testdom,dc=local
2021-07-21 17:16:04,082+05 INFO AAA_AUTHZ_PRINCIPAL_NAMESPACE: dc=testdom,dc=local
2021-07-21 17:16:04,082+05 INFO AAA_AUTHZ_PRINCIPAL_ID: domreader
2021-07-21 17:16:04,082+05 INFO AAA_AUTHZ_PRINCIPAL_DISPLAY_NAME: Domain Reader
2021-07-21 17:16:04,083+05 INFO AAA_AUTHZ_PRINCIPAL_NAME: domreader
2021-07-21 17:16:04,083+05 INFO AAA_AUTHZ_PRINCIPAL_FIRST_NAME: Domain
2021-07-21 17:16:04,083+05 INFO --- End PrincipalRecord ---
2021-07-21 17:16:04,084+05 INFO API: -->Authz.InvokeCommands.QUERY_EXECUTE
2021-07-21 17:16:04,084+05 INFO API: <--Authz.InvokeCommands.QUERY_EXECUTE count=END
2021-07-21 17:16:04,084+05 INFO API: -->Authz.InvokeCommands.QUERY_CLOSE
2021-07-21 17:16:04,084+05 INFO API: <--Authz.InvokeCommands.QUERY_CLOSE
Trying to authenticate using ovirt-engine-extensions-tool:
ovirt-engine-extensions-tool aaa login-user --profile=openldap_rfc --user-name=domreader
2021-07-21 17:40:47,318+05 INFO ========================================================================
2021-07-21 17:40:47,350+05 INFO ============================ Initialization ============================
2021-07-21 17:40:47,351+05 INFO ========================================================================
2021-07-21 17:40:47,401+05 INFO Loading extension 'internal-authz'
2021-07-21 17:40:47,407+05 INFO Extension 'internal-authz' loaded
2021-07-21 17:40:47,409+05 INFO Loading extension 'internal-authn'
2021-07-21 17:40:47,410+05 INFO Extension 'internal-authn' loaded
2021-07-21 17:40:47,426+05 INFO Loading extension 'test_ldap'
2021-07-21 17:40:47,508+05 INFO Extension 'test_ldap' loaded
2021-07-21 17:40:47,509+05 INFO Loading extension 'test_ldap-authn'
2021-07-21 17:40:47,523+05 INFO Extension 'test_ldap-authn' loaded
2021-07-21 17:40:47,525+05 INFO Loading extension 'openldap_rfc-authz'
2021-07-21 17:40:47,538+05 INFO Extension 'openldap_rfc-authz' loaded
2021-07-21 17:40:47,540+05 INFO Loading extension 'openldap_rfc-authn'
2021-07-21 17:40:47,551+05 INFO Extension 'openldap_rfc-authn' loaded
2021-07-21 17:40:47,552+05 INFO Initializing extension 'internal-authz'
2021-07-21 17:40:47,671+05 INFO Extension 'internal-authz' initialized
2021-07-21 17:40:47,672+05 INFO Initializing extension 'internal-authn'
2021-07-21 17:40:47,685+05 INFO Extension 'internal-authn' initialized
2021-07-21 17:40:47,685+05 INFO Initializing extension 'test_ldap'
2021-07-21 17:40:47,686+05 INFO [ovirt-engine-extension-aaa-ldap.authz::test_ldap] Creating LDAP pool 'authz'
2021-07-21 17:40:47,787+05 INFO [ovirt-engine-extension-aaa-ldap.authz::test_ldap] LDAP pool 'authz' information: vendor='null' version='null'
2021-07-21 17:40:47,788+05 INFO [ovirt-engine-extension-aaa-ldap.authz::test_ldap] Available Namespaces: [dc=field,dc=example,dc=com]
2021-07-21 17:40:47,789+05 INFO Extension 'test_ldap' initialized
2021-07-21 17:40:47,789+05 INFO Initializing extension 'test_ldap-authn'
2021-07-21 17:40:47,790+05 INFO [ovirt-engine-extension-aaa-ldap.authn::test_ldap-authn] Creating LDAP pool 'authz'
2021-07-21 17:40:47,837+05 INFO [ovirt-engine-extension-aaa-ldap.authn::test_ldap-authn] LDAP pool 'authz' information: vendor='null' version='null'
2021-07-21 17:40:47,838+05 INFO [ovirt-engine-extension-aaa-ldap.authn::test_ldap-authn] Creating LDAP pool 'authn'
2021-07-21 17:40:47,849+05 INFO [ovirt-engine-extension-aaa-ldap.authn::test_ldap-authn] LDAP pool 'authn' information: vendor='null' version='null'
2021-07-21 17:40:47,849+05 INFO Extension 'test_ldap-authn' initialized
2021-07-21 17:40:47,850+05 INFO Initializing extension 'openldap_rfc-authz'
2021-07-21 17:40:47,850+05 INFO [ovirt-engine-extension-aaa-ldap.authz::openldap_rfc-authz] Creating LDAP pool 'authz'
2021-07-21 17:40:47,851+05 WARNING [ovirt-engine-extension-aaa-ldap.authz::openldap_rfc-authz] TLS/SSL insecure mode
2021-07-21 17:40:48,575+05 INFO [ovirt-engine-extension-aaa-ldap.authz::openldap_rfc-authz] LDAP pool 'authz' information: vendor='null' version='null'
2021-07-21 17:40:48,576+05 INFO [ovirt-engine-extension-aaa-ldap.authz::openldap_rfc-authz] Available Namespaces: [dc=testdom,dc=local]
2021-07-21 17:40:48,576+05 INFO Extension 'openldap_rfc-authz' initialized
2021-07-21 17:40:48,576+05 INFO Initializing extension 'openldap_rfc-authn'
2021-07-21 17:40:48,577+05 INFO [ovirt-engine-extension-aaa-ldap.authn::openldap_rfc-authn] Creating LDAP pool 'authz'
2021-07-21 17:40:48,577+05 WARNING [ovirt-engine-extension-aaa-ldap.authn::openldap_rfc-authn] TLS/SSL insecure mode
2021-07-21 17:40:49,174+05 INFO [ovirt-engine-extension-aaa-ldap.authn::openldap_rfc-authn] LDAP pool 'authz' information: vendor='null' version='null'
2021-07-21 17:40:49,175+05 INFO [ovirt-engine-extension-aaa-ldap.authn::openldap_rfc-authn] Creating LDAP pool 'authn'
2021-07-21 17:40:49,175+05 WARNING [ovirt-engine-extension-aaa-ldap.authn::openldap_rfc-authn] TLS/SSL insecure mode
2021-07-21 17:40:49,427+05 INFO [ovirt-engine-extension-aaa-ldap.authn::openldap_rfc-authn] LDAP pool 'authn' information: vendor='null' version='null'
2021-07-21 17:40:49,428+05 INFO Extension 'openldap_rfc-authn' initialized
2021-07-21 17:40:49,428+05 INFO Start of enabled extensions list
2021-07-21 17:40:49,429+05 INFO Instance name: 'openldap_rfc-authz', Extension name: 'ovirt-engine-extension-aaa-ldap.authz', Version: '1.3.10', Notes: 'Display name: ovirt-engine-extension-aaa-ldap-1.3.10-1.el7', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/etc/ovirt-engine/extensions.d/openldap_rfc-authz.properties', Initialized: 'true'
2021-07-21 17:40:49,429+05 INFO Instance name: 'test_ldap', Extension name: 'ovirt-engine-extension-aaa-ldap.authz', Version: '1.3.10', Notes: 'Display name: ovirt-engine-extension-aaa-ldap-1.3.10-1.el7', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/etc/ovirt-engine/extensions.d/test_ldap.properties', Initialized: 'true'
2021-07-21 17:40:49,429+05 INFO Instance name: 'internal-authn', Extension name: '"ovirt-engine-extension-aaa-jdbc".authn', Version: '"1.1.10"', Notes: 'Display name: "ovirt-engine-extension-aaa-jdbc"', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/etc/ovirt-engine/extensions.d/internal-authn.properties', Initialized: 'true'
2021-07-21 17:40:49,430+05 INFO Instance name: 'internal-authz', Extension name: '"ovirt-engine-extension-aaa-jdbc".authz', Version: '"1.1.10"', Notes: 'Display name: "ovirt-engine-extension-aaa-jdbc"', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/etc/ovirt-engine/extensions.d/internal-authz.properties', Initialized: 'true'
2021-07-21 17:40:49,430+05 INFO Instance name: 'openldap_rfc-authn', Extension name: 'ovirt-engine-extension-aaa-ldap.authn', Version: '1.3.10', Notes: 'Display name: ovirt-engine-extension-aaa-ldap-1.3.10-1.el7', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/etc/ovirt-engine/extensions.d/openldap_rfc-authn.properties', Initialized: 'true'
2021-07-21 17:40:49,430+05 INFO Instance name: 'test_ldap-authn', Extension name: 'ovirt-engine-extension-aaa-ldap.authn', Version: '1.3.10', Notes: 'Display name: ovirt-engine-extension-aaa-ldap-1.3.10-1.el7', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/etc/ovirt-engine/extensions.d/test_ldap-authn.properties', Initialized: 'true'
2021-07-21 17:40:49,430+05 INFO End of enabled extensions list
2021-07-21 17:40:49,431+05 INFO ========================================================================
2021-07-21 17:40:49,431+05 INFO ============================== Execution ===============================
2021-07-21 17:40:49,431+05 INFO ========================================================================
2021-07-21 17:40:49,432+05 INFO Iteration: 0
2021-07-21 17:40:49,433+05 INFO Profile='openldap_rfc' authn='openldap_rfc-authn' authz='openldap_rfc-authz' mapping='null'
2021-07-21 17:40:49,433+05 INFO API: -->Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS profile='openldap_rfc' user='domreader'
Password:
2021-07-21 17:42:28,572+05 INFO API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS profile='openldap_rfc' result=SUCCESS
2021-07-21 17:42:28,576+05 INFO --- Begin AuthRecord ---
2021-07-21 17:42:28,577+05 INFO AAA_AUTHN_AUTH_RECORD_PRINCIPAL: Domain Reader
2021-07-21 17:42:28,577+05 INFO --- End AuthRecord ---
2021-07-21 17:42:28,578+05 INFO API: -->Authz.InvokeCommands.FETCH_PRINCIPAL_RECORD principal='Domain Reader'
2021-07-21 17:43:28,582+05 SEVERE Cannot locate principal 'Domain Reader'
The LDAP server works as a proxy to AD.
slapd.conf listing:
### Schema includes ###########################################################
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/misc.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/ad.schema
## Module paths ##############################################################
modulepath /usr/lib64/openldap/
moduleload back_ldap
moduleload rwm
### Logging ###################################################################
logfile /var/log/slapd/slapd.log
loglevel 256
# Main settings ###############################################################
pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args
TLSCipherSuite HIGH:!NULL
TLSCACertificateFile /etc/pki/tls/certs/cacert.pem
TLSCertificateFile /etc/pki/tls/certs/slapd.pem
TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem
TLSVerifyClient never
# Disallow non-encrypted binds - this will refuse any connection that isn't
# secured with at least 128-bit encryption
security ssf=128
# Allow v2 binding for legacy clients #########################################
allow bind_v2
### Database definition (Proxy to AD) #########################################
database ldap
readonly yes
protocol-version 3
rebind-as-user yes
uri "ldap://testdom.local:389"
suffix "dc=testdom,dc=local"
idassert-bind bindmethod=simple
mode=none
binddn="CN=Domain Reader,OU=AD,OU=SERVICE,DC=testdom,DC=local"
credentials=eOv5rgrNv3eq
starttls=yes
tls_cacertdir=/etc/pki/tls/certs
tls_reqcert=never
idassert-authzFrom "*"
overlay rwm
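Two observations on the traces above, offered as guesses rather than a confirmed diagnosis: the manual search for domreader took about 90 seconds through the slapd proxy (17:14:35 to 17:16:04), and FETCH_PRINCIPAL_RECORD gave up after exactly 60, so the lookup may simply be timing out against AD; and the PRINCIPAL attribute is mapped to cn, so the fetch filters on cn='Domain Reader' rather than the uid the rest of the mapping uses. Aligning it with uid is one cheap thing to try:

```properties
# Hypothetical tweak in openldap_rfc.properties, not a confirmed fix:
attrmap.map-principal-record.attr.PrincipalRecord_PRINCIPAL.map = uid
```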
3 years, 4 months