Hosted engine Single Sign-On to VM with freeIPA not working
by Paul
Hi,
I am having an issue with getting SSO to work when a standard user (UserRole)
logs in to the UserPortal.
The user has permission to use only this VM, so after login the console is
automatically opened for that VM.
The problem is that it doesn't log in on the VM with the provided
credentials. Manual login at the console works without any issues.
An HBAC rule check on IPA shows access is granted. The client has SELinux in
permissive mode and firewalld disabled.
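For reference, that HBAC check can be reproduced with IPA's built-in
simulator; a sketch, using the user, host and PAM service names that
appear in the logs below:

# on the IPA server (or any enrolled host with admin credentials)
ipa hbactest --user test6 --host test06.DOMAIN.COM --service gdm-ovirtcred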
On the client side I do see some PAM-related errors in the logs (see details
below). Extensive Google searching on error 17, "Failure setting user
credentials", didn't turn up anything helpful :-(
AFAIK this is a pretty standard set-up, built entirely with RH-family
products. I would expect others to encounter this issue as well.
If someone knows a solution or has some pointers for fixing this, it would be
greatly appreciated.
Thanks,
Paul
------------------------------------------------------
System setup: I have 3 systems (hosted engine, FreeIPA server, and a client VM).
The connection between the Engine and IPA is working fine. (I can log in
with IPA users etc.) Connection is made according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html-single/Administration_Guide/index.html#sect-Configuring_an_External_LDAP_Provider
Configuration of the client is done according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Virtual_Machine_Management_Guide/chap-Additional_Configuration.html#sect-Configuring_Single_Sign-On_for_Virtual_Machines
--- Hosted Engine:
[root@engine ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@engine ~]# uname -a
Linux engine.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@engine ~]# rpm -qa | grep ovirt
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-engine-restapi-3.6.2.6-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-3.6.3.4-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.0.5-1.el7.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-setup-1.1.2-1.el7.centos.noarch
ovirt-engine-wildfly-overlay-8.0.4-1.el7.noarch
ovirt-engine-wildfly-8.2.1-1.el7.x86_64
ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
ovirt-engine-tools-3.6.2.6-1.el7.centos.noarch
ovirt-engine-dbscripts-3.6.2.6-1.el7.centos.noarch
ovirt-engine-backend-3.6.2.6-1.el7.centos.noarch
ovirt-engine-3.6.2.6-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-1.1.2-1.el7.centos.noarch
ovirt-engine-setup-base-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.3.4-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.3.4-1.el7.centos.noarch
ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
ovirt-engine-userportal-3.6.2.6-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.6.2.6-1.el7.centos.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
ovirt-release36-003-1.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-lib-3.6.3.4-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.3.4-1.el7.centos.noarch
ovirt-engine-websocket-proxy-3.6.3.4-1.el7.centos.noarch
ovirt-log-collector-3.6.1-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-3.6.3.4-1.el7.centos.noarch
--- FreeIPA:
[root@ipa01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@ipa01 ~]# uname -a
Linux ipa01.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@ipa01 ~]# rpm -qa | grep ipa
ipa-python-4.2.0-15.el7_2.6.x86_64
ipa-client-4.2.0-15.el7_2.6.x86_64
python-libipa_hbac-1.13.0-40.el7_2.1.x86_64
python-iniparse-0.4-9.el7.noarch
libipa_hbac-1.13.0-40.el7_2.1.x86_64
sssd-ipa-1.13.0-40.el7_2.1.x86_64
ipa-admintools-4.2.0-15.el7_2.6.x86_64
ipa-server-4.2.0-15.el7_2.6.x86_64
ipa-server-dns-4.2.0-15.el7_2.6.x86_64
--- Client:
[root@test06 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@test06 ~]# uname -a
Linux test06.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@test06 ~]# rpm -qa | grep ipa
python-libipa_hbac-1.13.0-40.el7_2.1.x86_64
python-iniparse-0.4-9.el7.noarch
sssd-ipa-1.13.0-40.el7_2.1.x86_64
ipa-client-4.2.0-15.0.1.el7.centos.6.x86_64
libipa_hbac-1.13.0-40.el7_2.1.x86_64
ipa-python-4.2.0-15.0.1.el7.centos.6.x86_64
device-mapper-multipath-0.4.9-85.el7.x86_64
device-mapper-multipath-libs-0.4.9-85.el7.x86_64
[root@test06 ~]# rpm -qa | grep guest-agent
qemu-guest-agent-2.3.0-4.el7.x86_64
ovirt-guest-agent-pam-module-1.0.11-1.el7.x86_64
ovirt-guest-agent-gdm-plugin-1.0.11-1.el7.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
---------------------------------------------------
Relevant logs:
--- Engine:
/var/log/ovirt-engine/engine.log
2016-03-17 15:22:10,516 INFO
[org.ovirt.engine.core.bll.aaa.LoginUserCommand] (default task-22) []
Running command: LoginUserCommand internal: false.
2016-03-17 15:22:10,568 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-22) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: User test6@DOMAIN logged in.
2016-03-17 15:22:13,795 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-6)
[7400ae46] The message key 'VmLogon' is missing from
'bundles/ExecutionMessages'
2016-03-17 15:22:13,839 INFO [org.ovirt.engine.core.bll.VmLogonCommand]
(default task-6) [7400ae46] Running command: VmLogonCommand internal: false.
Entities affected : ID: 64a84b40-6050-4a96-a59d-d557a317c38c Type: VMAction
group CONNECT_TO_VM with role type USER
2016-03-17 15:22:13,842 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default
task-6) [7400ae46] START, VmLogonVDSCommand(HostName = host01,
VmLogonVDSCommandParameters:{runAsync='true',
hostId='225157c0-224b-4aa6-9210-db4de7c7fc30',
vmId='64a84b40-6050-4a96-a59d-d557a317c38c', domain='DOMAIN-authz',
password='***', userName='test6@DOMAIN'}), log id: 2015a1e0
2016-03-17 15:22:14,848 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default
task-6) [7400ae46] FINISH, VmLogonVDSCommand, log id: 2015a1e0
2016-03-17 15:22:15,317 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand]
(default task-18) [10dad788] Running command: SetVmTicketCommand internal:
true. Entities affected : ID: 64a84b40-6050-4a96-a59d-d557a317c38c Type:
VMAction group CONNECT_TO_VM with role type USER
2016-03-17 15:22:15,322 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-18) [10dad788] START, SetVmTicketVDSCommand(HostName = host01,
SetVmTicketVDSCommandParameters:{runAsync='true',
hostId='225157c0-224b-4aa6-9210-db4de7c7fc30',
vmId='64a84b40-6050-4a96-a59d-d557a317c38c', protocol='SPICE',
ticket='rd8avqvdBnRl', validTime='120', userName='test6',
userId='10b2da3e-6401-4a09-a330-c0780bc0faef',
disconnectAction='LOCK_SCREEN'}), log id: 72efb73b
2016-03-17 15:22:16,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-18) [10dad788] FINISH, SetVmTicketVDSCommand, log id: 72efb73b
2016-03-17 15:22:16,377 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-18) [10dad788] Correlation ID: 10dad788, Call Stack: null,
Custom Event ID: -1, Message: User test6@DOMAIN initiated console session
for VM test06
2016-03-17 15:22:19,418 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-53) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: User test6@DOMAIN-authz is connected to
VM test06.
--- Client:
/var/log/ovirt-guest-agent/ovirt-guest-agent.log
MainThread::INFO::2016-03-17
15:20:58,145::ovirt-guest-agent::57::root::Starting oVirt guest agent
CredServer::INFO::2016-03-17 15:20:58,214::CredServer::257::root::CredServer
is running...
Dummy-1::INFO::2016-03-17 15:20:58,216::OVirtAgentLogic::294::root::Received
an external command: lock-screen...
Dummy-1::INFO::2016-03-17 15:22:13,104::OVirtAgentLogic::294::root::Received
an external command: login...
Dummy-1::INFO::2016-03-17 15:22:13,104::CredServer::207::root::The following
users are allowed to connect: [0]
Dummy-1::INFO::2016-03-17 15:22:13,104::CredServer::273::root::Opening
credentials channel...
Dummy-1::INFO::2016-03-17 15:22:13,105::CredServer::132::root::Emitting user
authenticated signal (651416).
CredChannel::INFO::2016-03-17 15:22:13,188::CredServer::225::root::Incomming
connection from user: 0 process: 2570
CredChannel::INFO::2016-03-17 15:22:13,188::CredServer::232::root::Sending
user's credential (token: 651416)
Dummy-1::INFO::2016-03-17 15:22:13,189::CredServer::277::root::Credentials
channel was closed.
/var/log/secure
Mar 17 15:21:07 test06 gdm-launch-environment]:
pam_unix(gdm-launch-environment:session): session opened for user gdm by
(uid=0)
Mar 17 15:21:10 test06 polkitd[749]: Registered Authentication Agent for
unix-session:c1 (system bus name :1.34 [gnome-shell --mode=gdm], object path
/org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Mar 17 15:22:13 test06 gdm-ovirtcred]: pam_sss(gdm-ovirtcred:auth):
authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=test6
Mar 17 15:22:13 test06 gdm-ovirtcred]: pam_sss(gdm-ovirtcred:auth): received
for user test6: 17 (Failure setting user credentials)
/var/log/sssd/krb5_child.log (debug-level 10)
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [get_and_save_tgt]
(0x0020): 1234: [-1765328360][Preauthentication failed]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [map_krb5_error]
(0x0020): 1303: [-1765328360][Preauthentication failed]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [k5c_send_data]
(0x0200): Received error code 1432158215
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [pack_response_packet]
(0x2000): response packet size: [4]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [k5c_send_data]
(0x4000): Response sent.
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [main] (0x0400):
krb5_child completed successfully
/var/log/sssd/sssd_DOMAIN.COM.log (debug-level 10)
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler] (0x0100):
Got request with the following data
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
command: PAM_AUTHENTICATE
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
domain: DOMAIN.COM
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
user: test6
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
service: gdm-ovirtcred
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
tty:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
ruser:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
rhost:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
authtok type: 1
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
newauthtok type: 0
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
priv: 1
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
cli_pid: 2570
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
logon name: not set
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_send]
(0x1000): Wait queue of user [test6] is empty, running request
[0x7fe30df03cc0] immediately.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_setup] (0x4000): No
mapping for: test6
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Added
timed event "ltdb_callback": 0x7fe30df07120
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Added
timed event "ltdb_timeout": 0x7fe30df16590
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Running
timer event 0x7fe30df07120 "ltdb_callback"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Destroying
timer event 0x7fe30df16590 "ltdb_timeout"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Ending
timer event 0x7fe30df07120 "ltdb_callback"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [fo_resolve_service_send]
(0x0100): Trying to resolve service 'IPA'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status]
(0x1000): Status of server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_port_status]
(0x1000): Port status of port 389 for server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6
seconds
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [resolve_srv_send]
(0x0200): The status of SRV lookup is resolved
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status]
(0x1000): Status of server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[be_resolve_server_process] (0x1000): Saving the first resolved server
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[be_resolve_server_process] (0x0200): Found address for server
ipa01.DOMAIN.COM: [10.0.1.21] TTL 1200
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ipa_resolve_callback]
(0x0400): Constructed uri 'ldap://ipa01.DOMAIN.COM'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sss_krb5_realm_has_proxy]
(0x0040): profile_get_values failed.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_handler_setup]
(0x2000): Setting up signal handler up for pid [2575]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_handler_setup]
(0x2000): Signal handler set up for pid [2575]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [write_pipe_handler]
(0x0400): All data has been sent!
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler]
(0x1000): Waiting for child [2575].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler]
(0x0100): child [2575] finished successfully.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [read_pipe_handler]
(0x0400): EOF received, client finished
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [check_wait_queue]
(0x1000): Wait queue for user [test6] is empty.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_done]
(0x1000): krb5_auth_queue request [0x7fe30df03cc0] done.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_id_op_connect_step]
(0x4000): reusing cached connection
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_print_server]
(0x2000): Searching 10.0.1.21
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with
[(&(cn=ipaConfig)(objectClass=ipaGuiConfig))][cn=etc,dc=DOMAIN,dc=com].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaMigrationEnabled]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaSELinuxUserMapDefault]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaSELinuxUserMapOrder]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 122
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_op_add] (0x2000):
New operation 122 timeout 60
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_entry]
(0x1000): OriginalDN: [cn=ipaConfig,cn=etc,dc=DOMAIN,dc=com].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaMigrationEnabled]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaSELinuxUserMapDefault]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaSELinuxUserMapOrder]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no
errmsg set
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_op_destructor]
(0x2000): Operation 122 finished
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_id_op_destroy]
(0x4000): releasing operation connection
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[ipa_get_migration_flag_done] (0x0100): Password migration is not enabled.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Backend returned: (0, 17, <NULL>) [Success (Failure setting user
credentials)]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Sending result [17][DOMAIN.COM]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Sent result [17][DOMAIN.COM]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[(nil)],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: ldap_result found nothing!
Changing gateway ping address
by Matteo
Hi all,
I need to change the gateway ping address, the one used by the hosted-engine setup.
Is it OK to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway param with the new IP address, and restart
the agent & broker on each node?
A blind test seems OK, but I need to confirm that this is the right procedure.
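For what it's worth, a minimal sketch of that procedure, assuming the
stock el7 service names (verify that the key in your hosted-engine.conf
really is called gateway before scripting this):

# on each hosted-engine node
sed -i 's/^gateway=.*/gateway=192.0.2.254/' /etc/ovirt-hosted-engine/hosted-engine.conf   # placeholder address
systemctl restart ovirt-ha-broker ovirt-ha-agent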
Thanks,
Matteo
[ovirt-shell] update hostnic/nic ???
by Bloemen, Jurriën
Hi,

First I created a bonding interface:

# add nic --parent-host-name server01 --name bond0 --network-name VLAN602 --bonding-slaves-host_nic host_nic.name=eno1 --bonding-slaves-host_nic host_nic.name=eno2

This works great, but no IP is set on VLAN602.

Then I'm trying to add an IP address to a network with the following command:

# update hostnic --parent-host-name server01 --network-name VLAN602 --boot_protocol static --ip-address 10.10.10.10 --ip-netmask 255.255.255.0

================================== ERROR ==================================
wrong number of arguments, try 'help update' for help.
===========================================================================

Looking at this document
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6-Beta/html/RHEVM_Shell_Guide/nic.html
I need to use "nic" instead of "hostnic", but then I don't have the option to
say this is a --parent-host-name. Only VM-related command options.

So I think the documentation is behind.

Can somebody help me with what the command is to add an IP to a VLAN/network for a host?

--
Kind regards,

Jurriën Bloemen
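That "wrong number of arguments" usually means the update verb is missing its
positional object identifier rather than an option. A guess, untested against
3.6 - the NIC name would go right after "hostnic":

# update hostnic bond0 --parent-host-name server01 --network-name VLAN602 --boot_protocol static --ip-address 10.10.10.10 --ip-netmask 255.255.255.0

As the error itself suggests, 'help update' inside ovirt-shell shows the exact
signature.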
Dedicated NICs for gluster network
by Nicolas Ecarnot
Hello,
[Here : oVirt 3.5.3, 3 x CentOS 7.0 hosts with replica-3 gluster SD on
the hosts].
On the switches, I have created a dedicated VLAN to isolate the GlusterFS
traffic, but I'm not using it yet.
I was thinking of creating a dedicated IP for each node's gluster NIC,
and a DNS record while I'm at it ("my_nodes_name_GL"), but I fear that using
this hostname or this IP in the oVirt GUI host network interface tab would
lead oVirt to think this is a different host.
In case that fear isn't clearly described, let's put it this way:
- On each node, I create a second IP (plus a DNS record in the zone) used by
gluster, plugged into the correct VLAN.
- In the oVirt GUI, in the host network settings tab, the interface will be
seen with its IP, but its reverse DNS will point to a different hostname.
Here, I fear oVirt might check this reverse DNS and declare that this NIC
I would also prefer not to use a reverse record pointing to the name of the
host's management IP, as this is evil and I'm a good guy.
On your side, how do you cope with a dedicated storage network in case
of storage+compute mixed hosts?
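To make the idea concrete, a sketch of the layout I have in mind, with
made-up names and addresses (note that DNS forbids underscores in
hostnames, so something like node1-gl rather than node1_GL):

# /etc/hosts (or zone records) on each node - addresses on the storage VLAN
10.0.60.11  node1-gl
10.0.60.12  node2-gl
10.0.60.13  node3-gl

# peer the gluster nodes over those names so brick traffic stays on that VLAN
gluster peer probe node2-gl
gluster peer probe node3-gl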
--
Nicolas ECARNOT
oVirt-shell command to move a disk
by Nicolas Ecarnot
Hello,
I'm confused because, although I use ovirt-shell to script many actions
every day, even after a great deal of reading and testing I can
not find the correct syntax to move (offline/available) disks between
storage domains.
Could you help me, please?
(oVirt 3.4.4)
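For context, the 3.x REST API exposes a move action on disks, so one path
that should work is to call it directly; a sketch with placeholder
credentials and UUIDs (a GET on /api/disks/DISK_UUID should list a "move"
link if your 3.4.4 engine supports it):

curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d '<action><storage_domain id="TARGET_SD_UUID"/></action>' \
     https://ENGINE_FQDN/api/disks/DISK_UUID/move

In ovirt-shell itself this should map to an 'action' invocation on the
disk; 'help action' lists the exact parameter spelling.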
--
Nicolas Ecarnot
centos 7.1 and up & ixgbe
by Johan Kooijman
Hi all,
Since we upgraded to the latest oVirt node running 7.2, we're seeing
nodes become unavailable after a while. A node runs fine, with a couple of
VMs on it, until it becomes non-responsive. At that moment it doesn't
even respond to ICMP. It will come back by itself after a while, but oVirt
fences the machine before that and restarts its VMs elsewhere.
Engine tells me this message:
VDSM host09 command failed: Message timeout which can be caused by
communication issues
Is anyone else experiencing these issues with ixgbe drivers? I'm running on
Intel X540-AT2 cards.
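Not an answer, but one classic thing to rule out on ixgbe hosts that carry
bridged VM traffic is large receive offload, which does not mix well with
bridging/forwarding; a sketch, with a placeholder interface name:

ethtool -k ens1f0 | grep large-receive-offload   # check the current state
ethtool -K ens1f0 lro off                        # disable as a test, per port

It is also worth setting up netconsole or a serial console, since if the
box stops answering ICMP, whatever the kernel printed at that moment never
makes it to disk.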
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
VM get stuck randomly
by Christophe TREFOIS
Dear all,
I have had a problem for a couple of weeks now where, at random, one VM (not always the same) becomes completely unresponsive.
We find this out because our Icinga server complains that the host is down.
Upon inspection, we find we can't open a console to the VM, nor can we log in.
In the oVirt engine, the VM looks "up". The only weird thing is that RAM usage shows 0% and CPU usage shows 100% or 75%, depending on the number of cores.
The only way to recover is to force the VM off by issuing shutdown twice from the engine.
Could you please help me get started debugging this?
I can provide any logs, but I'm not sure which ones, because I couldn't see anything with ERROR in the vdsm logs on the host.
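A first step that usually narrows this down is comparing what libvirt and
VDSM think of the guest while it is wedged; a sketch (read-only queries,
the VM name is a placeholder):

virsh -r list --all              # is the domain still "running"?
virsh -r domstate --reason myvm  # e.g. "paused (ioerror)" vs plain "running"
vdsClient -s 0 list table        # VDSM's view of the VMs on this host

If the qemu process itself is spinning, 'top -H -p <qemu-pid>' on the host
shows which thread is burning the CPU.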
The host is running
OS Version: RHEL - 7 - 1.1503.el7.centos.2.8
Kernel Version: 3.10.0 - 229.14.1.el7.x86_64
KVM Version: 2.1.2 - 23.el7_1.8.1
LIBVIRT Version: libvirt-1.2.8-16.el7_1.4
VDSM Version: vdsm-4.16.26-0.el7.centos
SPICE Version: 0.12.4 - 9.el7_1.3
GlusterFS Version: glusterfs-3.7.5-1.el7
We use a locally exported gluster volume as the storage domain (i.e., storage is on the same machine, exposed via gluster). No replica.
We run around 50 VMs on that host.
Thank you for your help in this,
—
Christophe
One RHEV Virtual Machine does not Automatically Resume following Compellent SAN Controller Failover
by Duckworth, Douglas C
Hello --
Not sure if y'all can help with this issue we've been seeing with RHEV...
On 11/13/2015, during a code upgrade of the Compellent SAN at our disaster
recovery site, we failed over to the secondary SAN controller. Most virtual
machines in our DR cluster resumed automatically after pausing, except VM
"BADVM" on host "BADHOST".
In Engine.log you can see that BADVM was sent into "VM_PAUSED_EIO" state
at 10:47:57:
"VM BADVM has paused due to storage I/O problem."
On this Red Hat Enterprise Virtualization Hypervisor 6.6
(20150512.0.el6ev) Host, two other VMs paused but then automatically
resumed without System Administrator intervention...
In our DR Cluster, 22 VMs also resumed automatically...
None of these guest VMs are engaged in heavy I/O, as they are DR-site VMs
not currently doing anything.
We sent this information to Dell. Their response:
"The root cause may reside within your virtualization solution, not the
parent OS (RHEV-Hypervisor disc) or Storage (Dell Compellent.)"
We are doing this Failover again on Sunday November 29th so we would
like to know how to mitigate this issue, given we have to manually
resume paused VMs that don't resume automatically.
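For the manual recovery step itself, a sketch that resumes every paused
guest on a host in one go (on RHEV-H, virsh may ask for SASL credentials,
since the read-write socket is restricted):

for dom in $(virsh list --state-paused --name); do
    virsh resume "$dom"
done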
Before we initiated SAN Controller Failover, all iSCSI paths to Targets
were present on Host tulhv2p03.
The VM's logs on the host (/var/log/libvirt/qemu/badhost.log) show that a
storage error was reported:
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
All disks used by this guest VM are provided by a single storage domain,
COM_3TB4_DR, with serial "270". In syslog we do see that all paths for
that storage domain failed:
Nov 13 16:47:40 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 0
Though these recovered later:
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: sdbg -
tur checker reports path is up
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 8
Does anyone have an idea of why the VM would fail to automatically
resume if the iSCSI paths used by its Storage Domain recovered?
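One thing worth capturing during the November 29th window is the exact
timing of path recovery versus the resume attempts; a sketch for watching
the affected multipath map live (WWID taken from the syslog above):

watch -n1 'multipath -ll 36000d310005caf000000000000000270'

If all paths return but the guest stays paused, that points at
qemu/libvirt never being told to resume, rather than at the storage.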
Thanks
Doug
--
Thanks
Douglas Charles Duckworth
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: duckd(a)tulane.edu
O: 504-988-9341
F: 504-988-8505
Can't remove snapshot
by Rik Theys
Hi,
I created a snapshot of a running VM prior to an OS upgrade. The OS
upgrade has now been successful and I would like to remove the snapshot.
I've selected the snapshot in the UI and clicked Delete to start the task.
After a few minutes, the task failed. When I click Delete again on
the same snapshot, the failure message is returned after a few seconds.
From browsing through the engine log (attached), it seems the snapshot
was correctly merged on the first try, but something went wrong in the
finalizing phase. On retries, the log indicates the snapshot/disk image
no longer exists and the removal of the snapshot fails for this reason.
Is there any way to clean up this snapshot?
I can see the snapshot in the "Disk snapshot" tab of the storage. It has
a status of "illegal". Is it OK to (try to) remove this snapshot? Will
this impact the running VM and/or disk image?
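Before removing anything while the image is marked illegal, it may help to
inspect the on-disk volume chain to see whether the merge really completed;
a sketch - the path components are placeholders and the layout differs per
storage type:

# on the SPM host, for the disk in question
qemu-img info --backing-chain /rhev/data-center/<dc-uuid>/<sd-uuid>/images/<disk-uuid>/<volume-uuid>

If the snapshot volume is no longer part of the chain, the data merge
likely finished and only the engine's bookkeeping is stale; if it is still
referenced, deleting it from the storage tab could affect the running VM.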
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>