Official Hyperconverged Gluster oVirt upgrade procedure?
by Hanson
Hi Guys,
Just wondering if we have an updated manual, or what the current
procedure is for upgrading the nodes in a hyperconverged oVirt Gluster pool?
I.e. nodes run oVirt 4.0 as well as GlusterFS, with the hosted engine running
in a Gluster storage domain.
Put node in maintenance mode and disable glusterfs from ovirt gui, run
yum update?
Thanks!
[vdsm] status update: running containers alongside VMs
by Francesco Romani
Hi everyone,
I'm happy to share some progress about the former "convirt"[1] project,
which aims to let Vdsm run containers alongside VMs, on bare metal.
In the last couple of months I kept updating the patch series, which
is approaching readiness to be merged into Vdsm.
Please read through this mail to see what the patchset can do now,
how you could try it *now*, even before it is merged.
Everyone is invited to share thoughts and ideas about how this effort
could evolve.
This will be a long mail; I will amend, enhance and polish the content
and make a blog post (on https://mojaves.github.io) to make it easier
to consume and to have some easy-to-find documentation. Later on the
same content will appear also on the oVirt blog.
Happy hacking!
+++
# How to try the experimental container support for Vdsm
Vdsm is gaining *experimental* support to run containers alongside VMs.
Vdsm has long had the ability to manage VMs which run containers,
and recently gained support for
[atomic guests](http://www.projectatomic.io/blog/2015/01/running-ovirt-guest-agent-as-privileged-container/).
With the new support we are describing, you will be able to manage containers
with the same, proven infrastructure that lets you manage VMs.
This feature is currently being developed and is not yet merged into the
Vdsm codebase, so some extra work is needed if you want to try it out.
We are aiming to merge it in the oVirt 4.1.z cycle.
## What works, aka what to expect
The basic features are expected to work:
1. Run any docker image on the public docker registry
2. Make the container accessible from the outside (aka not just from localhost)
3. Use file-based storage for persistent volumes
## What does not yet work, aka what NOT to expect
A few things are planned and currently under active development:
1. Monitoring. Engine will not get any updates from the container besides the "VM" status (Up, Down...).
One important drawback is that Engine will not report the container's IP address;
you will need to connect to the Vdsm host and discover it using standard docker tools.
2. Proper network integration. Some steps still need manual intervention
3. Stability and recovery - it's pre-alpha software after all! :)
## 1. Introduction and prerequisites
Trying out the container support affects only the host and Vdsm.
Besides adding a few custom properties (totally safe and supported since early
3.z), there are zero changes required to the DB and to Engine.
Nevertheless, we recommend dedicating one oVirt 4.y environment,
or at least one 4.y host, to trying out the container feature.
To get started, the first thing you need is a vanilla oVirt 4.y
installation. We will need to make changes to Vdsm and to the
Vdsm host, so hosted engine and/or oVirt Node may add extra complexity;
better to avoid them at the moment.
The remainder of this tutorial assumes you are using two hosts,
one for Vdsm (will be changed) and one for Engine (will require zero changes);
furthermore, we assume the Vdsm host is running CentOS 7.y.
We require:
- one test host for Vdsm. This host needs to have one NIC dedicated to containers.
We will use the [docker macvlan driver](https://raesene.github.io/blog/2016/07/23/Docker-MacVLAN/),
so this NIC *must not be* part of a bridge.
- docker >= 1.12
- oVirt >= 4.0.5 (Vdsm >= 4.18.15)
- CentOS >= 7.2
Docker >= 1.12 is available for download [here](https://docs.docker.com/engine/installation/linux/centos/)
Caveats:
1. docker from the official rpms conflicts with docker from CentOS, and has a different package name: docker-engine vs docker.
Please note that the kubernetes package from CentOS, for example, requires 'docker', not 'docker-engine'.
2. you may want to replace the default service file
[with this one](https://github.com/mojaves/convirt/blob/master/patches/centos72/syst...
and to use this
[sysconfig file](https://github.com/mojaves/convirt/blob/master/patches/centos72/sys....
Here I'm just adding the storage options docker requires, much like the CentOS docker is configured.
Configuring docker like this can save you some troubleshooting, especially if you had docker from CentOS installed
on the testing box.
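For reference, here is a minimal sketch of the package swap on a test box that previously had the CentOS docker installed (an assumption; adjust to your setup, and take the repository configuration from the installation page linked above):
## remove the CentOS-packaged docker first, if it was installed
# yum remove docker
## install the upstream package from the repository set up per the link above
# yum install docker-engine
## reload systemd after dropping in the replacement service/sysconfig files
# systemctl daemon-reload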
## 2. Patch Vdsm to support containers
You need to patch and rebuild Vdsm.
Fetch [this patch](https://github.com/mojaves/convirt/blob/master/patches/vdsm/4.18.1...
and apply it against Vdsm 4.18.15.1. Vdsm 4.18.15.{1,2,...} are supported as well.
Rebuild Vdsm and reinstall on your box.
[centos 7.2 packages are here](https://github.com/mojaves/convirt/tree/master/rpms/centos72)
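If you prefer to rebuild from source rather than use the pre-built packages, here is a rough, untested sketch of one possible flow (assuming the usual Vdsm autotools build; adjust the checkout tag and the patch file name to what you actually downloaded):
$ git clone https://gerrit.ovirt.org/vdsm && cd vdsm
$ git checkout v4.18.15.1
## apply the container-support patch fetched above
$ git apply /path/to/container-support.patch
$ ./autogen.sh --system && make rpm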
Make sure you install the Vdsm command line client (vdsm-cli).
Restart *both* Vdsm and Supervdsm, and make sure Engine still works flawlessly with the patched Vdsm.
This ensures that no regression is introduced, and that your environment can run VMs just as before.
Now we can proceed adding the container support.
start docker:
# systemctl start docker-engine
(optional)
# systemctl enable docker-engine
Restart Vdsm again
# systemctl restart vdsm
Now we can check if Vdsm detects docker, so you can use it:
still on the same Vdsm host, run
$ vdsClient -s 0 getVdsCaps | grep containers
containers = ['docker', 'fake']
This means this Vdsm can run containers using the 'docker' and 'fake' runtimes.
Ignore the 'fake' runtime; as the name suggests, it is a test driver, kind of like /dev/null.
Now we need to make sure the host network configuration is fine.
### 2.1. Configure the docker network for Vdsm
PLEASE NOTE
that the suggested network configuration assumes that
* you have one network, `ovirtmgmt` (the default one) you use for everything
* you have one Vdsm host with at least two NICs, one bound to the `ovirtmgmt` network, and one spare
_This step is not yet automated by Vdsm_, so manual action is needed; Vdsm will take
care of this automatically in the future.
You can use
[this helper script](https://github.com/mojaves/convirt/blob/master/patches/vdsm/cont-...,
which reuses the Vdsm libraries. Make sure
you have patched Vdsm to support containers before using it.
Let's review what the script needs:
# ./cont-setup-net -h
usage: cont-setup-net [-h] [--name [NAME]] [--bridge [BRIDGE]]
[--interface [INTERFACE]] [--gateway [GATEWAY]]
[--subnet [SUBNET]] [--mask [MASK]]
optional arguments:
-h, --help show this help message and exit
--name [NAME] network name to use
--bridge [BRIDGE] bridge to use
--interface [INTERFACE]
interface to use
--gateway [GATEWAY] address of the gateway
--subnet [SUBNET] subnet to use
--mask [MASK] netmask to use
So we need to feed --name, --interface, --gateway, --subnet and optionally --mask (default, /24, is often fine).
For my case the default mask was indeed fine, so I used the script like this:
# ./cont-setup-net --name ovirtmgmt --interface enp3s0 --gateway 192.168.1.1 --subnet 192.168.1.0
This is the output I got:
DEBUG:virt.containers.runtime:configuring runtime 'docker'
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
Error: No such network: ovirtmgmt
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'inspect', 'ovirtmgmt']
DEBUG:virt.containers.runtime.Docker:config: cannot load 'ovirtmgmt', ignored
DEBUG:virt.containers.command:* calling ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.command:* called ['/bin/docker', 'network', 'create', '-d', 'macvlan', '--subnet=192.168.1.0/24', '--gateway=192.168.1.1', '--ip-range=192.168.1.0/24', '-o', 'parent=enp3s0', 'ovirtmgmt']
DEBUG:virt.containers.runtime:configuring runtime 'fake'
You can clearly see what the script did, and why it needed root privileges. Let's double-check using the docker tools:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
91535f3425a8 bridge bridge local
d42f7e5561b5 host host local
621ab6dd49b1 none null local
f4b88e4a67eb ovirtmgmt macvlan local
# docker network inspect ovirtmgmt
[
{
"Name": "ovirtmgmt",
"Id": "f4b88e4a67ebb7886ec74073333d613b1893272530cae4d407c95ab587c5fea1",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.1.0/24",
"IPRange": "192.168.1.0/24",
"Gateway": "192.168.1.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"parent": "enp3s0"
},
"Labels": {}
}
]
Looks good! The host configuration is complete. Let's move to the Engine side.
## 3. Configure Engine
As mentioned above, we now need to configure Engine. This boils down to
adding a few custom properties for VMs.
In case you were already using custom properties, you need to amend the command
line so it does not overwrite your existing ones.
# engine-config -s UserDefinedVMProperties='volumeMap=^[a-zA-Z_-]+:[a-zA-Z_-]+$;containerImage=^[a-zA-Z]+(://|)[a-zA-Z]+$;containerType=^(docker|rkt)$' --cver=4.0
It is worth stressing that while the variables are container-specific,
VM custom properties are a totally unintrusive and long-standing concept in oVirt, so
this step is totally safe.
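If you want to double-check the stored value before restarting, engine-config can read a single option back:
# engine-config -g UserDefinedVMProperties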
Now restart Engine to let it use the new variables:
# systemctl restart ovirt-engine
The next step is to actually configure one "container VM" and run it.
## 4. Create the container "VM"
To finally run a container, you start by creating a VM much like you always did, with
a few changes:
1. most of the hardware-related configuration isn't relevant for container "VMs",
besides CPU shares and memory limits; this will be better documented in the
future; unneeded configuration will just be ignored
2. You need to set some custom properties for your container "VM". Those are
actually needed to enable the container flow, and they are documented in
the next section. You *need* to set at least `containerType` and `containerImage`.
### 4.2. Custom variables for container support
The container support needs some custom properties to be properly configured:
1. `containerImage` (*needed* to enable the container system).
Just select the target image you want to run. You can use the standard syntax of the
container runtimes.
2. `containerType` (*needed* to enable the container system).
Selects the container runtime you want to use. All the available options are always shown.
Please note that unavailable container options are not yet grayed out.
If you *do not* have rkt support on your host, you can still select it, but it won't work.
3. `volumeMap`, in key:value form. You can map one "VM" disk (key) to one container volume (value)
to get persistent storage. Only file-based storage is supported.
Example configuration:
`containerImage = redis`
`containerType = docker`
`volumeMap = vda:data` (this may not be needed, and the volume label is just for illustrative purposes)
### 4.3. A little bit of extra work: preload the images on the Vdsm host
This step is not needed by the flow, and will be handled by oVirt in the future.
The issue is how container images are handled. They are stored by the container
management system (rkt, docker) on each host, and they are not pre-downloaded.
To shorten the duration of the first boot, you are advised to pre-download
the image(s) you want to run. For example
## on the Vdsm host you want to use with containers
# docker pull redis
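You can then double-check that the image landed in the local cache using the standard docker tools:
# docker images redis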
## 5. Run the container "VM"
You are now all set to run your "VM" using oVirt Engine, just like any existing VM.
Some actions don't make sense for a container "VM", like live migration.
Engine won't stop you from trying those actions, but they will fail gracefully
with the standard errors.
## 6. Next steps
What to expect from this project in the future?
For the integration with Vdsm, we want to fix the existing known issues, most notably:
* add proper monitoring/reporting of the container health
* ensure proper integration of the container image store with oVirt storage management
* streamline the network configuration
What is explicitly excluded for now is any Engine change. This is a Vdsm-only change at the
moment, so fixing the following is currently unplanned:
* First and foremost, Engine will not distinguish between real VMs and container VMs.
Actions unavailable to containers will not be hidden from the UI. The same goes for monitoring
and configuration data, which will be ignored.
* Engine is NOT aware of the volumes one container can use. You must inspect and do the
mapping manually.
* Engine is NOT aware of the available container runtimes. You must select one carefully.
Proper integration with Engine may be added in the future once this feature exits
the experimental/provisional stage.
Thanks for reading, make sure to share your thoughts on the oVirt mailing lists!
+++
[1] we keep calling it that way _only_ internally, because it's a short
name we are used to. After the merge/once we release it, we will use
a different name, like "vdsm-containers" or something like it.
--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
Re: [ovirt-users] iSCSI domain on 4kn drives
by Martijn Grendelman
Op 7-8-2016 om 8:19 schreef Yaniv Kaul:
>
> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman
> <martijn.grendelman(a)isaac.nl <mailto:martijn.grendelman@isaac.nl>> wrote:
>
> Op 4-8-2016 om 18:36 schreef Yaniv Kaul:
>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman
>> <martijn.grendelman(a)isaac.nl
>> <mailto:martijn.grendelman@isaac.nl>> wrote:
>>
>> Hi,
>>
>> Does oVirt support iSCSI storage domains on target LUNs using
>> a block
>> size of 4k?
>>
>>
>> No, we do not - not if it exposes 4K blocks.
>> Y.
>
> Is this on the roadmap?
>
>
> Not in the short term roadmap.
> Of course, patches are welcome. It's mainly in VDSM.
> I wonder if it'll work in NFS.
> Y.
I don't think I ever replied to this, but I can confirm that in RHEV 3.6
it works with NFS.
Best regards,
Martijn.
Re: [ovirt-users] How to import a qcow2 disk into ovirt
by Martín Follonier
Hi,
I've followed all the recommendations in this thread, and I'm still getting the
"Paused by System" message just after the transfer starts.
Honestly, I don't know where else to look, because I can't find any log
entry or packet capture that gives me a hint about what is happening.
I'll appreciate any help! Thank you in advance!
Regards
Martin
On Thu, Sep 1, 2016 at 5:01 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
> You can do both,
> Through the database, the table is "vdc_options". change "option_value"
> where "option_name" = 'ImageProxyAddress' .
>
> On Thu, Sep 1, 2016 at 4:56 PM, Gianluca Cecchi <gianluca.cec...(a)gmail.com
> > wrote:
>
>> On Thu, Sep 1, 2016 at 3:53 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
>>
>>> You can just replace this value in the DB and change it to the right
>>> FQDN, it is a config value named "ImageProxyAddress". replace "localhost"
>>> with the right address (notice that the port is there too).
>>>
>>> If this will keep happen after users will have the latest version, we
>>> will have to open a bug and fix whatever causes the URL to be "localhost".
>>>
>>>
>> Do you mean through "engine-config" or directly into database?
>> In this second case which is the table involved?
>>
>> Gianluca
>>
>
>
[root@ractorshe bin]# systemctl stop ovirt-imageio-proxy
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value | version
-----------+-------------------+-----------------+---------
950 | ImageProxyAddress | localhost:54323 | general
(1 row)
engine=# update vdc_options set option_value='ractorshe.mydomain:54323'
where option_name='ImageProxyAddress';
UPDATE 1
engine=# select * from vdc_options where option_name='ImageProxyAddress';
 option_id |    option_name    |       option_value       | version
-----------+-------------------+--------------------------+---------
       950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
engine=#
engine=# select * from vdc_options where option_name='ImageProxyAddress';
 option_id |    option_name    |       option_value       | version
-----------+-------------------+--------------------------+---------
       950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
systemctl stop ovirt-engine
(otherwise it remained localhost)
systemctl start ovirt-engine
systemctl start ovirt-imageio-proxy
Now transfer is ok.
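For the record, the same option can presumably also be set with engine-config instead of editing the database directly (untested sketch; option name and value as above):
# engine-config -s ImageProxyAddress=ractorshe.mydomain:54323
# systemctl restart ovirt-engine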
I tried a qcow2 disk configured as 40Gb but containing about 1.6Gb of data.
I'm going to connect it to a VM and see if all is ok also from a contents
point of view.
Gianluca
Lowering the bar for wiki contribution?
by Roy Golan
I'm getting the feeling I'm not alone in this: authoring and publishing a
wiki page isn't as easy as it used to be, and hasn't been for a long time.
I want to suggest a bit lighter workflow:
1. Everyone can merge their own page (it's a wiki)
Same as with (public and open) code, no one has the motivation to publish a badly written
wiki page under their name. True, it can have an impact, but not as much as with
broken code.
2. Use a page-status marker
The author first merges the draft. It's now out there and should be updated
as time goes on, and its status is DRAFT. Maintainers will come later and, after
review, change the status to PUBLISH. That could be a header on the page:
---
page status: DRAFT/PUBLISH
---
Simple I think, and should work.
log out event but not log in?
by Gianluca Cecchi
Hello,
In the oVirt 4.1 web admin GUI I see events about users logging out (the users have
been created on the internal domain with the ovirt-aaa-jdbc-tool command), but I
don't see the corresponding log-in events.
The same is true for the default admin@internal user.
Is there any reason or is it a bug?
Thanks,
Gianluca
ovirt-ha-agent cpu usage
by Gianluca Cecchi
Hello,
I have a test machine that is a nuc6 with an i5 and 32G of ram and SSD
disks.
It is configured as a single host environment with Self Hosted Engine VM.
Both host and SHE are CentOS 7.2 and oVirt version is 3.6.6.2-1.el7
I notice that with 3 VMs powered on and doing nothing special (the engine
VM, a CentOS 7 VM and a Fedora 24 VM), the ovirt-ha-agent process on the host
often spikes its CPU usage.
See for example this quick video of the top command running on the host, which
reflects what happens continuously:
https://drive.google.com/file/d/0BwoPbcrMv8mvYUVRMFlLVmxRdXM/view?usp=sha...
Is it normal for ovirt-ha-agent to consume this much CPU?
Going into /var/log/ovirt-hosted-engine-ha/agent.log I see nothing special,
only messages of type "INFO". The same for broker.log
Thanks,
Gianluca
Hosted engine Single Sign-On to VM with freeIPA not working
by Paul
Hi,
I am having an issue with getting SSO to work when a standard user (UserRole)
logs in to the UserPortal.
The user has permission to use only this VM, so after login the console is
automatically opened for that VM.
The problem is that it doesn't log in on the VM system with the provided
credentials. Manual login at the console works without any issues.
HBAC-rule check on IPA shows access is granted. Client has SELINUX in
permissive mode and a disabled firewalld.
On the client side I do see some PAM related errors in the logs (see details
below). Extensive Google search on error 17 "Failure setting user
credentials" didn't show helpful information :-(
AFAIK this is a pretty standard set-up, all working with RH-family
products. I would expect others to encounter this issue as well.
If someone knows any solution or has some directions to fix this it would be
greatly appreciated.
Thanks,
Paul
------------------------------------------------------
System setup: I have 3 systems
The connection between the Engine and IPA is working fine. (I can log in
with IPA users etc.) Connection is made according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat
ion/3.6/html-single/Administration_Guide/index.html#sect-Configuring_an_Exte
rnal_LDAP_Provider
Configuration of the client is done according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat
ion/3.6/html/Virtual_Machine_Management_Guide/chap-Additional_Configuration.
html#sect-Configuring_Single_Sign-On_for_Virtual_Machines
--- Hosted Engine:
[root@engine ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@engine ~]# uname -a
Linux engine.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@engine ~]# rpm -qa | grep ovirt
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-engine-restapi-3.6.2.6-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-3.6.3.4-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.0.5-1.el7.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-setup-1.1.2-1.el7.centos.noarch
ovirt-engine-wildfly-overlay-8.0.4-1.el7.noarch
ovirt-engine-wildfly-8.2.1-1.el7.x86_64
ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
ovirt-engine-tools-3.6.2.6-1.el7.centos.noarch
ovirt-engine-dbscripts-3.6.2.6-1.el7.centos.noarch
ovirt-engine-backend-3.6.2.6-1.el7.centos.noarch
ovirt-engine-3.6.2.6-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-1.1.2-1.el7.centos.noarch
ovirt-engine-setup-base-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.3.4-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.3.4-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.3.4-1.el7.centos.noarch
ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
ovirt-engine-userportal-3.6.2.6-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.6.2.6-1.el7.centos.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
ovirt-release36-003-1.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-lib-3.6.3.4-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.3.4-1.el7.centos.noarch
ovirt-engine-websocket-proxy-3.6.3.4-1.el7.centos.noarch
ovirt-log-collector-3.6.1-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-3.6.3.4-1.el7.centos.noarch
--- FreeIPA:
[root@ipa01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@ipa01 ~]# uname -a
Linux ipa01.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@ipa01 ~]# rpm -qa | grep ipa
ipa-python-4.2.0-15.el7_2.6.x86_64
ipa-client-4.2.0-15.el7_2.6.x86_64
python-libipa_hbac-1.13.0-40.el7_2.1.x86_64
python-iniparse-0.4-9.el7.noarch
libipa_hbac-1.13.0-40.el7_2.1.x86_64
sssd-ipa-1.13.0-40.el7_2.1.x86_64
ipa-admintools-4.2.0-15.el7_2.6.x86_64
ipa-server-4.2.0-15.el7_2.6.x86_64
ipa-server-dns-4.2.0-15.el7_2.6.x86_64
--- Client:
[root@test06 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@test06 ~]# uname -a
Linux test06.DOMAIN.COM 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16
17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@test06 ~]# rpm -qa | grep ipa
python-libipa_hbac-1.13.0-40.el7_2.1.x86_64
python-iniparse-0.4-9.el7.noarch
sssd-ipa-1.13.0-40.el7_2.1.x86_64
ipa-client-4.2.0-15.0.1.el7.centos.6.x86_64
libipa_hbac-1.13.0-40.el7_2.1.x86_64
ipa-python-4.2.0-15.0.1.el7.centos.6.x86_64
device-mapper-multipath-0.4.9-85.el7.x86_64
device-mapper-multipath-libs-0.4.9-85.el7.x86_64
[root@test06 ~]# rpm -qa | grep guest-agent
qemu-guest-agent-2.3.0-4.el7.x86_64
ovirt-guest-agent-pam-module-1.0.11-1.el7.x86_64
ovirt-guest-agent-gdm-plugin-1.0.11-1.el7.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
---------------------------------------------------
Relevant logs:
--- Engine:
//var/log/ovirt-engine/engine
2016-03-17 15:22:10,516 INFO
[org.ovirt.engine.core.bll.aaa.LoginUserCommand] (default task-22) []
Running command: LoginUserCommand internal: false.
2016-03-17 15:22:10,568 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-22) [] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: User test6@DOMAIN logged in.
2016-03-17 15:22:13,795 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-6)
[7400ae46] The message key 'VmLogon' is missing from
'bundles/ExecutionMessages'
2016-03-17 15:22:13,839 INFO [org.ovirt.engine.core.bll.VmLogonCommand]
(default task-6) [7400ae46] Running command: VmLogonCommand internal: false.
Entities affected : ID: 64a84b40-6050-4a96-a59d-d557a317c38c Type: VMAction
group CONNECT_TO_VM with role type USER
2016-03-17 15:22:13,842 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default
task-6) [7400ae46] START, VmLogonVDSCommand(HostName = host01,
VmLogonVDSCommandParameters:{runAsync='true',
hostId='225157c0-224b-4aa6-9210-db4de7c7fc30',
vmId='64a84b40-6050-4a96-a59d-d557a317c38c', domain='DOMAIN-authz',
password='***', userName='test6@DOMAIN'}), log id: 2015a1e0
2016-03-17 15:22:14,848 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default
task-6) [7400ae46] FINISH, VmLogonVDSCommand, log id: 2015a1e0
2016-03-17 15:22:15,317 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand]
(default task-18) [10dad788] Running command: SetVmTicketCommand internal:
true. Entities affected : ID: 64a84b40-6050-4a96-a59d-d557a317c38c Type:
VMAction group CONNECT_TO_VM with role type USER
2016-03-17 15:22:15,322 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-18) [10dad788] START, SetVmTicketVDSCommand(HostName = host01,
SetVmTicketVDSCommandParameters:{runAsync='true',
hostId='225157c0-224b-4aa6-9210-db4de7c7fc30',
vmId='64a84b40-6050-4a96-a59d-d557a317c38c', protocol='SPICE',
ticket='rd8avqvdBnRl', validTime='120', userName='test6',
userId='10b2da3e-6401-4a09-a330-c0780bc0faef',
disconnectAction='LOCK_SCREEN'}), log id: 72efb73b
2016-03-17 15:22:16,340 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-18) [10dad788] FINISH, SetVmTicketVDSCommand, log id: 72efb73b
2016-03-17 15:22:16,377 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-18) [10dad788] Correlation ID: 10dad788, Call Stack: null,
Custom Event ID: -1, Message: User test6@DOMAIN initiated console session
for VM test06
2016-03-17 15:22:19,418 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-53) [] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: User test6@DOMAIN-authz is connected to
VM test06.
--- Client:
/var/log/ovirt-guest-agent/ovirt-guest-agent.log
MainThread::INFO::2016-03-17
15:20:58,145::ovirt-guest-agent::57::root::Starting oVirt guest agent
CredServer::INFO::2016-03-17 15:20:58,214::CredServer::257::root::CredServer
is running...
Dummy-1::INFO::2016-03-17 15:20:58,216::OVirtAgentLogic::294::root::Received
an external command: lock-screen...
Dummy-1::INFO::2016-03-17 15:22:13,104::OVirtAgentLogic::294::root::Received
an external command: login...
Dummy-1::INFO::2016-03-17 15:22:13,104::CredServer::207::root::The following
users are allowed to connect: [0]
Dummy-1::INFO::2016-03-17 15:22:13,104::CredServer::273::root::Opening
credentials channel...
Dummy-1::INFO::2016-03-17 15:22:13,105::CredServer::132::root::Emitting user
authenticated signal (651416).
CredChannel::INFO::2016-03-17 15:22:13,188::CredServer::225::root::Incomming
connection from user: 0 process: 2570
CredChannel::INFO::2016-03-17 15:22:13,188::CredServer::232::root::Sending
user's credential (token: 651416)
Dummy-1::INFO::2016-03-17 15:22:13,189::CredServer::277::root::Credentials
channel was closed.
/var/log/secure
Mar 17 15:21:07 test06 gdm-launch-environment]:
pam_unix(gdm-launch-environment:session): session opened for user gdm by
(uid=0)
Mar 17 15:21:10 test06 polkitd[749]: Registered Authentication Agent for
unix-session:c1 (system bus name :1.34 [gnome-shell --mode=gdm], object path
/org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Mar 17 15:22:13 test06 gdm-ovirtcred]: pam_sss(gdm-ovirtcred:auth):
authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=test6
Mar 17 15:22:13 test06 gdm-ovirtcred]: pam_sss(gdm-ovirtcred:auth): received
for user test6: 17 (Failure setting user credentials)
/var/log/sssd/krb5_child.log (debug-level 10)
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [get_and_save_tgt]
(0x0020): 1234: [-1765328360][Preauthentication failed]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [map_krb5_error]
(0x0020): 1303: [-1765328360][Preauthentication failed]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [k5c_send_data]
(0x0200): Received error code 1432158215
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [pack_response_packet]
(0x2000): response packet size: [4]
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [k5c_send_data]
(0x4000): Response sent.
(Thu Mar 17 15:22:13 2016) [[sssd[krb5_child[2575]]]] [main] (0x0400):
krb5_child completed successfully
/var/log/sssd/sssd_DOMAIN.COM.log (debug-level 10)
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler] (0x0100):
Got request with the following data
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
command: PAM_AUTHENTICATE
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
domain: DOMAIN.COM
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
user: test6
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
service: gdm-ovirtcred
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
tty:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
ruser:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
rhost:
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
authtok type: 1
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
newauthtok type: 0
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
priv: 1
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
cli_pid: 2570
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [pam_print_data] (0x0100):
logon name: not set
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_send]
(0x1000): Wait queue of user [test6] is empty, running request
[0x7fe30df03cc0] immediately.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_setup] (0x4000): No
mapping for: test6
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Added
timed event "ltdb_callback": 0x7fe30df07120
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Added
timed event "ltdb_timeout": 0x7fe30df16590
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Running
timer event 0x7fe30df07120 "ltdb_callback"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Destroying
timer event 0x7fe30df16590 "ltdb_timeout"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ldb] (0x4000): Ending
timer event 0x7fe30df07120 "ltdb_callback"
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [fo_resolve_service_send]
(0x0100): Trying to resolve service 'IPA'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status]
(0x1000): Status of server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_port_status]
(0x1000): Port status of port 389 for server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6
seconds
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [resolve_srv_send]
(0x0200): The status of SRV lookup is resolved
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status]
(0x1000): Status of server 'ipa01.DOMAIN.COM' is 'working'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[be_resolve_server_process] (0x1000): Saving the first resolved server
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[be_resolve_server_process] (0x0200): Found address for server
ipa01.DOMAIN.COM: [10.0.1.21] TTL 1200
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ipa_resolve_callback]
(0x0400): Constructed uri 'ldap://ipa01.DOMAIN.COM'
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sss_krb5_realm_has_proxy]
(0x0040): profile_get_values failed.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_handler_setup]
(0x2000): Setting up signal handler up for pid [2575]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_handler_setup]
(0x2000): Signal handler set up for pid [2575]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [write_pipe_handler]
(0x0400): All data has been sent!
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler]
(0x1000): Waiting for child [2575].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler]
(0x0100): child [2575] finished successfully.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [read_pipe_handler]
(0x0400): EOF received, client finished
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [check_wait_queue]
(0x1000): Wait queue for user [test6] is empty.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_done]
(0x1000): krb5_auth_queue request [0x7fe30df03cc0] done.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_id_op_connect_step]
(0x4000): reusing cached connection
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_print_server]
(0x2000): Searching 10.0.1.21
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with
[(&(cn=ipaConfig)(objectClass=ipaGuiConfig))][cn=etc,dc=DOMAIN,dc=com].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaMigrationEnabled]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaSELinuxUserMapDefault]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x1000): Requesting attrs:
[ipaSELinuxUserMapOrder]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 122
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_op_add] (0x2000):
New operation 122 timeout 60
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_ENTRY]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_entry]
(0x1000): OriginalDN: [cn=ipaConfig,cn=etc,dc=DOMAIN,dc=com].
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaMigrationEnabled]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaSELinuxUserMapDefault]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range]
(0x2000): No sub-attributes for [ipaSELinuxUserMapOrder]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message]
(0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no
errmsg set
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_op_destructor]
(0x2000): Operation 122 finished
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_id_op_destroy]
(0x4000): releasing operation connection
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]]
[ipa_get_migration_flag_done] (0x0100): Password migration is not enabled.
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Backend returned: (0, 17, <NULL>) [Success (Failure setting user
credentials)]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Sending result [17][DOMAIN.COM]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback]
(0x0100): Sent result [17][DOMAIN.COM]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: sh[0x7fe30deef090], connected[1], ops[(nil)],
ldap[0x7fe30def2920]
(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result]
(0x2000): Trace: ldap_result found nothing!
0x7fe30df07120 "ltdb_callback"<o:p></o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [fo_resolve_service_send] =
(0x0100): Trying to resolve service 'IPA'<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[get_server_status] (0x1000): Status of server 'ipa01.DOMAIN.COM' is =
'working'<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [get_port_status] (0x1000): Port status of port =
389 for server 'ipa01.DOMAIN.COM' is 'working'<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6 =
seconds<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [resolve_srv_send] (0x0200): The status of SRV =
lookup is resolved<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [get_server_status] (0x1000): =
Status of server 'ipa01.DOMAIN.COM' is 'working'<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[be_resolve_server_process] (0x1000): Saving the first resolved =
server<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [be_resolve_server_process] (0x0200): Found =
address for server ipa01.DOMAIN.COM: [10.0.1.21] TTL =
1200<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [ipa_resolve_callback] (0x0400): Constructed uri =
'ldap://ipa01.DOMAIN.COM'<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sss_krb5_realm_has_proxy] =
(0x0040): profile_get_values failed.<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[child_handler_setup] (0x2000): Setting up signal handler up for pid =
[2575]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [child_handler_setup] (0x2000): Signal handler =
set up for pid [2575]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [write_pipe_handler] (0x0400): All =
data has been sent!<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler] (0x1000): =
Waiting for child [2575].<o:p></o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [child_sig_handler] (0x0100): =
child [2575] finished successfully.<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[read_pipe_handler] (0x0400): EOF received, client =
finished<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [check_wait_queue] (0x1000): Wait queue for user =
[test6] is empty.<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [krb5_auth_queue_done] (0x1000): =
krb5_auth_queue request [0x7fe30df03cc0] done.<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_id_op_connect_step] (0x4000): reusing cached =
connection<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_print_server] (0x2000): Searching =
10.0.1.21<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_get_generic_ext_step] (0x0400): calling =
ldap_search_ext with =
[(&(cn=3DipaConfig)(objectClass=3DipaGuiConfig))][cn=3Detc,dc=3DDOMAI=
N,dc=3Dcom].<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 =
2016) [sssd[be[DOMAIN.COM]]] [sdap_get_generic_ext_step] (0x1000): =
Requesting attrs: [ipaMigrationEnabled]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_get_generic_ext_step] (0x1000): Requesting attrs: =
[ipaSELinuxUserMapDefault]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar =
17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_get_generic_ext_step] =
(0x1000): Requesting attrs: [ipaSELinuxUserMapOrder]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid =3D =
122<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_op_add] (0x2000): New operation 122 timeout =
60<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): Trace: =
sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0], =
ldap[0x7fe30def2920]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message] (0x4000): =
Message type: [LDAP_RES_SEARCH_ENTRY]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_parse_entry] (0x1000): OriginalDN: =
[cn=3DipaConfig,cn=3Detc,dc=3DDOMAIN,dc=3Dcom].<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_parse_range] (0x2000): No sub-attributes for =
[ipaMigrationEnabled]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_parse_range] (0x2000): No =
sub-attributes for [ipaSELinuxUserMapDefault]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_parse_range] (0x2000): No sub-attributes for =
[ipaSELinuxUserMapOrder]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): =
Trace: sh[0x7fe30deef090], connected[1], ops[0x7fe30df094a0], =
ldap[0x7fe30def2920]<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [sdap_process_message] (0x4000): =
Message type: [LDAP_RES_SEARCH_RESULT]<o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no =
errmsg set<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_op_destructor] (0x2000): Operation 122 =
finished <o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_id_op_destroy] (0x4000): releasing =
operation connection <o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [ipa_get_migration_flag_done] =
(0x0100): Password migration is not enabled. <o:p></o:p></p><p =
class=3DMsoNormal><b><span style=3D'color:red'>(Thu Mar 17 15:22:13 =
2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback] (0x0100): Backend =
returned: (0, 17, <NULL>) [Success (Failure setting user =
credentials)] <o:p></o:p></span></b></p><p class=3DMsoNormal>(Thu Mar 17 =
15:22:13 2016) [sssd[be[DOMAIN.COM]]] [be_pam_handler_callback] =
(0x0100): Sending result [17][DOMAIN.COM] <o:p></o:p></p><p =
class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) [sssd[be[DOMAIN.COM]]] =
[be_pam_handler_callback] (0x0100): Sent result [17][DOMAIN.COM] =
<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): Trace: =
sh[0x7fe30deef090], connected[1], ops[(nil)], ldap[0x7fe30def2920] =
<o:p></o:p></p><p class=3DMsoNormal>(Thu Mar 17 15:22:13 2016) =
[sssd[be[DOMAIN.COM]]] [sdap_process_result] (0x2000): Trace: =
ldap_result found nothing!<o:p></o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNormal> =
<o:p></o:p></p><p class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p><p =
class=3DMsoNormal><o:p> </o:p></p></div></body></html>
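The key failures above are pam_sss returning error 17 ("Failure setting user
credentials") and krb5_child logging "Preauthentication failed", which points
at the Kerberos pre-authentication step rather than at the oVirt credentials
channel itself. A minimal check, only a diagnostic sketch assuming shell
access to the guest test06 and reusing the user/realm names shown in these
logs (DOMAIN.COM is clearly an anonymized placeholder), would be:

# run inside the guest; test6 / DOMAIN.COM are the user and realm from the logs
KRB5_TRACE=/dev/stderr kinit test6@DOMAIN.COM
# a "Preauthentication failed" here as well would mean the password forwarded
# by the SSO agent does not match what the KDC (ipa01.DOMAIN.COM) expects
klist   # on success, lists the TGT that sssd's krb5_child could not obtain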
7 years, 7 months
VM in oVirt 4 sometimes does not have proper CPU guest count
by Christophe TREFOIS
Dear all,

We upgraded the engine to 4.0.6 today and a host to CentOS 7.3, with all
packages updated.

Most VMs start up with the correct number of guest CPUs, except 2 or 3 that
only spawn with 1 guest CPU.
We set for instance a CPU profile of 1:4:1, but only 1 vCPU is available to
the VM.

The VM itself is running Trusty, but other Trusty VMs seem fine.

Do you have any idea what could be the issue and how to perhaps fix this?

Kind regards,
Christophe

--

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb

----
This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the
original message and any copies.
----
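A quick way to see where the vCPU count gets lost is to compare what the
engine asked for, what libvirt actually defined, and what the guest sees.
This is only a rough sketch, assuming shell access to the host and the guest;
<vm-name> is a hypothetical placeholder, not a name from the report:

# on the oVirt host currently running the VM
virsh -r list
virsh -r dumpxml <vm-name> | grep -iE '<vcpu|<topology'
# a <vcpu ... current='1'> attribute would mean only one vCPU was plugged in
# at start, even if the configured maximum is higher

# inside the Trusty guest
lscpu
grep -c ^processor /proc/cpuinfo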
7 years, 7 months
VM has been paused due to storage I/O problem
by Gianluca Cecchi
Hello,
my test environment is composed of 2 old HP BL685c G1 blades (ovmsrv05 and
ovmsrv06), connected in a SAN through FC switches to an old IBM DS4700
storage array.
Apart from being old, they all seem OK from a hardware point of view.
I have configured oVirt 4.0.6 and an FCP storage domain.
The hosts are plain CentOS 7.3 servers, fully updated.
It is not a hosted engine environment: the manager is a VM outside of the
cluster.
I have configured power management on both hosts and it works well.
At the moment I have only one test VM, and it is doing almost nothing.
Starting point: ovmsrv05 has been in maintenance (for about 2 days) and the
VM is running on ovmsrv06.
I updated the qemu-kvm package on ovmsrv05 and then restarted the host from
the web admin GUI:
Power Mgmt --> Restart
Sequence of events in the events pane, including the problem in the subject:
Jan 31, 2017 10:29:43 AM Host ovmsrv05 power management was verified
successfully.
Jan 31, 2017 10:29:43 AM Status of host ovmsrv05 was set to Up.
Jan 31, 2017 10:29:38 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:29:29 AM Activation of host ovmsrv05 initiated by
admin@internal-authz.
Jan 31, 2017 10:28:05 AM VM ol65 has recovered from paused back to up.
Jan 31, 2017 10:27:55 AM VM ol65 has been paused due to storage I/O problem.
Jan 31, 2017 10:27:55 AM VM ol65 has been paused.
Jan 31, 2017 10:25:52 AM Host ovmsrv05 was restarted by
admin@internal-authz.
Jan 31, 2017 10:25:52 AM Host ovmsrv05 was started by admin@internal-authz.
Jan 31, 2017 10:25:52 AM Power management start of Host ovmsrv05 succeeded.
Jan 31, 2017 10:25:50 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:37 AM Executing power management start on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:37 AM Power management start of Host ovmsrv05 initiated.
Jan 31, 2017 10:25:37 AM Auto fence for host ovmsrv05 was started.
Jan 31, 2017 10:25:37 AM All VMs' status on Non Responsive Host ovmsrv05
were changed to 'Down' by admin@internal-authz
Jan 31, 2017 10:25:36 AM Host ovmsrv05 was stopped by admin@internal-authz.
Jan 31, 2017 10:25:36 AM Power management stop of Host ovmsrv05 succeeded.
Jan 31, 2017 10:25:34 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:15 AM Executing power management stop on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:15 AM Power management stop of Host ovmsrv05 initiated.
Jan 31, 2017 10:25:12 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Watching the timestamps, the culprit seems to be the reboot of ovmsrv05,
which detects some LUNs in "owned" state and others in "unowned".
Full messages of both hosts here:
https://drive.google.com/file/d/0BwoPbcrMv8mvekZQT1pjc0NMRlU/view?usp=sha...
and
https://drive.google.com/file/d/0BwoPbcrMv8mvcjBCYVdFZWdXTms/view?usp=sha...
At this time there are 4 LUNs globally seen by the two hosts, but only 1 of
them is currently configured as the only storage domain in the oVirt cluster.
[root@ovmsrv05 ~]# multipath -l | grep ^36
3600a0b8000299aa80000d08b55014119 dm-5 IBM ,1814 FAStT
3600a0b80002999020000cd3c5501458f dm-3 IBM ,1814 FAStT
3600a0b80002999020000ccf855011198 dm-2 IBM ,1814 FAStT
3600a0b8000299aa80000d08955014098 dm-4 IBM ,1814 FAStT
the configured one:
[root@ovmsrv05 ~]# multipath -l 3600a0b8000299aa80000d08b55014119
3600a0b8000299aa80000d08b55014119 dm-5 IBM ,1814 FAStT
size=4.0T features='0' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:1:3 sdl 8:176 active undef running
| `- 2:0:1:3 sdp 8:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:3 sdd 8:48 active undef running
`- 2:0:0:3 sdi 8:128 active undef running
In the messages of the booting node, around the time of the problem
registered by the storage:
[root@ovmsrv05 ~]# grep owned /var/log/messages
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:1: rdac: LUN 1 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:2: rdac: LUN 2 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:3: rdac: LUN 3 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:1: rdac: LUN 1 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:4: rdac: LUN 4 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:2: rdac: LUN 2 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:1: rdac: LUN 1 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:3: rdac: LUN 3 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:4: rdac: LUN 4 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:2: rdac: LUN 2 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:1:1: rdac: LUN 1 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:3: rdac: LUN 3 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:1:2: rdac: LUN 2 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:4: rdac: LUN 4 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:1:3: rdac: LUN 3 (RDAC) (owned)
Jan 31 10:27:39 ovmsrv05 kernel: scsi 2:0:1:4: rdac: LUN 4 (RDAC) (owned)
I don't know exactly the meaning of owned/unowned in the output above.
Possibly it detects the 0:0:1:3 and 2:0:1:3 paths (those of the active
group) as "owned", and this could have created problems for the active node?
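For what it's worth, the owned/unowned flags in those kernel messages come
from the scsi_dh_rdac handler and indicate whether the controller behind that
path currently owns the LUN; the rdac prio callout turns the same information
into path priorities, so the owning controller's paths should end up in the
higher-priority (active) group. A quick cross-check, just a sketch reusing
the WWID already shown above:

# -ll (instead of -l) also queries path priority and state
multipath -ll 3600a0b8000299aa80000d08b55014119
# paths through the controller that owns the LUN should show a higher prio
# value and sit in the status=active path group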
Strangely, on the active node I don't lose all the paths, but the VM has
been paused anyway:
[root@ovmsrv06 log]# grep "remaining active path" /var/log/messages
Jan 31 10:27:48 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 3
Jan 31 10:27:49 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 2
Jan 31 10:27:56 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 3
Jan 31 10:27:56 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 2
Jan 31 10:27:56 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 1
Jan 31 10:27:57 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 2
Jan 31 10:28:01 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 3
Jan 31 10:28:01 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 4
I'm not an expert on this storage array in particular, nor on the rdac
hardware handler in general.
What I see is this multipath.conf on both nodes:
# VDSM REVISION 1.3
defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
}
devices {
    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs                yes
        no_path_retry           fail
    }
}
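One note on the tuning question at the end of this message: with
no_path_retry fail (as configured above), I/O errors out as soon as all paths
in the active group drop, which is typically what pauses the VM during a
transient ownership flap. A possible, untested adjustment is to give this
specific array a few retries via a drop-in file in the config_dir shown
further down (/etc/multipath/conf.d); the file name and the retry value are
only examples, and queueing for too long can instead stall sanlock/VDSM I/O,
which is why VDSM defaults to fail:

# on both hosts; 4 retries x polling_interval (5s) = ~20s of queueing
# before I/O fails, instead of failing immediately
cat > /etc/multipath/conf.d/ibm-ds4700.conf << 'EOF'
devices {
    device {
        vendor        "IBM"
        product       "^1814"
        no_path_retry 4
    }
}
EOF
# then reload the maps, e.g. from the interactive shell shown further down:
#   multipathd -k
#   > reconfigure

If you prefer to edit /etc/multipath.conf itself, the second line should read
"# VDSM PRIVATE", otherwise vdsm-tool may overwrite the file on the next
reconfiguration.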
beginning of /proc/scsi/scsi
[root@ovmsrv06 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 01 Id: 00 Lun: 00
Vendor: HP Model: LOGICAL VOLUME Rev: 1.86
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 1814 FAStT Rev: 0916
Type: Direct-Access ANSI SCSI revision: 05
...
To get the default acquired config for this storage:
multipathd -k
> show config
I can see:
device {
vendor "IBM"
product "^1814"
product_blacklist "Universal Xport"
path_grouping_policy "group_by_prio"
path_checker "rdac"
features "0"
hardware_handler "1 rdac"
prio "rdac"
failback immediate
rr_weight "uniform"
no_path_retry "fail"
}
and
defaults {
verbosity 2
polling_interval 5
max_polling_interval 20
reassign_maps "yes"
multipath_dir "/lib64/multipath"
path_selector "service-time 0"
path_grouping_policy "failover"
uid_attribute "ID_SERIAL"
prio "const"
prio_args ""
features "0"
path_checker "directio"
alias_prefix "mpath"
failback "manual"
rr_min_io 1000
rr_min_io_rq 1
max_fds 4096
rr_weight "uniform"
no_path_retry "fail"
queue_without_daemon "no"
flush_on_last_del "yes"
user_friendly_names "no"
fast_io_fail_tmo 5
dev_loss_tmo 30
bindings_file "/etc/multipath/bindings"
wwids_file /etc/multipath/wwids
log_checker_err always
find_multipaths no
retain_attached_hw_handler no
detect_prio no
hw_str_match no
force_sync no
deferred_remove no
ignore_new_boot_devs no
skip_kpartx no
config_dir "/etc/multipath/conf.d"
delay_watch_checks no
delay_wait_checks no
retrigger_tries 3
retrigger_delay 10
missing_uev_wait_timeout 30
new_bindings_in_boot no
}
Any hint on how to tune multipath.conf so that a powering-on server doesn't
create problems for running VMs?
Thanks in advance,
Gianluca
7 years, 7 months