New problem with hosted-engine during "Configuring the management bridge"

Hi,

I have started a new installation as specified in the 3.4.1 release notes (fresh Fedora 19 install, yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm).

This is failing in the step "Configuring the management bridge".

Based on the vdsm.log, it appears I am hitting: *Bug 988995* <https://bugzilla.redhat.com/show_bug.cgi?id=988995> - vdsm multipath.py restarts multipathd, cutting the branch vdsm sits on

"multipath -F" is returning "invalid keyword: getuid_callout" and it appears that this is causing vdsm-tool to abort (although the command exit status is 0 and the bug report says that those are only harmless warnings).

There is no workaround stated in that bug report.

Help?

-Bob

Maybe this isn't the actual problem after all. I replaced /sbin/multipath with a script that runs the old version but suppresses those errors and returns exit status 0. But "vdsm-tool service-reload multipathd" is still failing and I don't know why.

I have attached my vdsm.log file. Any guidance appreciated. I'll try digging through the Python code for service.py and see if I can catch it when the multipath configuration is in place, to see the exact issue.

-Bob

On 05/13/2014 12:27 PM, Bob Doolittle wrote:
Hi,
I have started a new installation as specified in the 3.4.1 release notes (fresh Fedora 19 install, yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm).
This is failing in the step "Configuring the management bridge".
Based on the vdsm.log, it appears I am hitting: *Bug 988995* <https://bugzilla.redhat.com/show_bug.cgi?id=988995> - vdsm multipath.py restarts multipathd, cutting the branch vdsm sits on
"multipath -F" is returning "invalid keyword: getuid_callout" and it appears that this is causing vdsm-tool to abort (although the command exit status is 0 and the bug report says that those are only harmless warnings).
There is no workaround stated in that bug report.
Help?
-Bob
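A sketch of the /sbin/multipath replacement described above: run the real tool, drop the known-harmless getuid_callout warning, and force a zero exit status. The stand-in function below simulates the real binary (which on the host would be the original, moved aside to something like /sbin/multipath.bak); treat it as an illustration of the technique, not the exact script used.

```shell
#!/bin/bash
# Stand-in for the real multipath binary (on the host: the original binary
# moved aside). It prints normal output, emits the known-harmless warning on
# stderr, and exits non-zero, mimicking what the wrapper has to paper over.
real_multipath() {
    echo "ok: maps flushed"
    echo "multipath.conf line 5, invalid keyword: getuid_callout" >&2
    return 1
}

# The wrapper idea: filter the warning out of the merged output and always
# report success. (Merging stderr into stdout keeps the sketch simple.)
multipath_wrapper() {
    real_multipath "$@" 2>&1 | grep -v 'invalid keyword: getuid_callout'
    return 0
}

multipath_wrapper -F
```

Installed over /sbin/multipath, a wrapper like this makes callers such as vdsm see clean output and a zero exit status regardless of the warnings.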

I could really use some help on this one. My efforts to debug VDSM via instrumenting the Python code are not working - the compiled code must be cached somehow.

Something is wrong with the way the multipathd service is being restarted. It doesn't look to me like systemctl is even being called for it.

Thanks, Bob

On May 13, 2014 1:12 PM, "Bob Doolittle" <bob@doolittle.us.com> wrote:
Maybe this isn't the actual problem after all.
I replaced /sbin/multipath with a script that runs the old version but suppresses those errors and returns exit status 0. But "vdsm-tool service-reload multipathd" is still failing and I don't know why.
I have attached my vdsm.log file.
Any guidance appreciated. I'll try digging through the python code for service.py and see if I can catch it when the multipath configuration is in place to see the exact issue.
-Bob
On 05/13/2014 12:27 PM, Bob Doolittle wrote:
Hi,
I have started a new installation as specified in the 3.4.1 release notes (fresh Fedora 19 install, yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm).
This is failing in the step "Configuring the management bridge".
Based on the vdsm.log, it appears I am hitting: *Bug 988995* <https://bugzilla.redhat.com/show_bug.cgi?id=988995> - vdsm multipath.py restarts multipathd, cutting the branch vdsm sits on
"multipath -F" is returning "invalid keyword: getuid_callout" and it appears that this is causing vdsm-tool to abort (although the command exit status is 0 and the bug report says that those are only harmless warnings).
There is no workaround stated in that bug report.
Help?
-Bob

Bob, I remember something like this with an all-in-one install a while back where that error showed up, but it was kind of a red herring with multipath, because the real problem was that the ovirtmgmt bridge didn't get created. And that was a known problem, I think, with F19; see http://www.ovirt.org/OVirt_3.4_TestDay "Important Note: Known Fedora 19 bug: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge."

If you can get to webadmin, could you try to set up ovirtmgmt manually on the host?

-John

On Wed, May 14, 2014 at 6:58 AM, Bob Doolittle <bob@doolittle.us.com> wrote:
I could really use some help on this one. My efforts to debug VDSM via instrumenting the python code are not working - the compiled code must be cached somehow.
Something is wrong with the way the multipathd service is being restarted. It doesn't look to me like systemctl is even being called for it.
Thanks, Bob
On May 13, 2014 1:12 PM, "Bob Doolittle" <bob@doolittle.us.com> wrote:
Maybe this isn't the actual problem after all.
I replaced /sbin/multipath with a script that runs the old version but suppresses those errors and returns exit status 0. But "vdsm-tool service-reload multipathd" is still failing and I don't know why.
I have attached my vdsm.log file.
Any guidance appreciated. I'll try digging through the python code for service.py and see if I can catch it when the multipath configuration is in place to see the exact issue.
-Bob
On 05/13/2014 12:27 PM, Bob Doolittle wrote:
Hi,
I have started a new installation as specified in the 3.4.1 release notes (fresh Fedora 19 install, yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm).
This is failing in the step "Configuring the management bridge".
Based on the vdsm.log, it appears I am hitting: Bug 988995 - vdsm multipath.py restarts multipathd, cutting the branch vdsm sits on
"multipath -F" is returning "invalid keyword: getuid_callout" and it appears that this is causing vdsm-tool to abort (although the command exit status is 0 and the bug report says that those are only harmless warnings).
There is no workaround stated in that bug report.
Help?
-Bob
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Thanks John. When hosted-engine aborts, it uninstalls everything. So there is no webadmin available.

I've tried modifying the VDSM Python code (e.g. /usr/share/vdsm/storage/multipath.py and /usr/lib64/python2.7/site-packages/vdsm/tool/service.py) to see/work around what's going wrong, but oddly VDSM does not seem to be affected. I suspect the original code (or bytecode) is cached somewhere. Restarting the vdsmd service has no effect. I'd really appreciate some insight there so I can work around it.

Does oVirt 3.4.1 work more smoothly with F20? I chose F19 because it's more stable at this point, thinking that things would be more likely to work smoothly. It's not turning out that way... I'm willing to start over with F20 if there's no path forward.

-Bob

On 05/14/2014 07:25 PM, John Taylor wrote:
Bob, I remember something like this with an all-in-one install a while back where that error showed up, but it was kind of a red herring with multipath, because the real problem was that the ovirtmgmt bridge didn't get created. And that was a known problem, I think, with F19; see http://www.ovirt.org/OVirt_3.4_TestDay "Important Note: Known Fedora 19 bug: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge."
If you can get to webadmin, could you try to set up ovirtmgmt manually on the host?
-John
On Wed, May 14, 2014 at 6:58 AM, Bob Doolittle <bob@doolittle.us.com> wrote:
I could really use some help on this one. My efforts to debug VDSM via instrumenting the python code are not working - the compiled code must be cached somehow.
Something is wrong with the way the multipathd service is being restarted. It doesn't look to me like systemctl is even being called for it.
Thanks, Bob
On May 13, 2014 1:12 PM, "Bob Doolittle" <bob@doolittle.us.com> wrote:
Maybe this isn't the actual problem after all.
I replaced /sbin/multipath with a script that runs the old version but suppresses those errors and returns exit status 0. But "vdsm-tool service-reload multipathd" is still failing and I don't know why.
I have attached my vdsm.log file.
Any guidance appreciated. I'll try digging through the python code for service.py and see if I can catch it when the multipath configuration is in place to see the exact issue.
-Bob
On 05/13/2014 12:27 PM, Bob Doolittle wrote:
Hi,
I have started a new installation as specified in the 3.4.1 release notes (fresh Fedora 19 install, yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm).
This is failing in the step "Configuring the management bridge".
Based on the vdsm.log, it appears I am hitting: Bug 988995 - vdsm multipath.py restarts multipathd, cutting the branch vdsm sits on
"multipath -F" is returning "invalid keyword: getuid_callout" and it appears that this is causing vdsm-tool to abort (although the command exit status is 0 and the bug report says that those are only harmless warnings).
There is no workaround stated in that bug report.
Help?
-Bob

On Wed, 2014-05-14 at 20:06 -0400, Bob Doolittle wrote:
Thanks John.
When hosted-engine aborts, it uninstalls everything. So there is no webadmin available.
I've tried modifying the VDSM python code (e.g. /usr/share/vdsm/storage/multipath.py and /usr/lib64/python2.7/site-packages/vdsm/tool/service.py) to see/work around what's going wrong, but oddly VDSM does not seem to be affected. I suspect the original code (or bytecodes) are cached somewhere. Restarting vdsmd service has no effect. I'd really appreciate some insight there so I can work around it.
Does oVirt 3.4.1 work more smoothly with F20? I chose F19 because it's more stable at this point, thinking that things would be more likely to work smoothly. It's not turning out that way... I'm willing to start over with F20 if there's no path forward.
If stable's what you want, why not start over with CentOS instead? /K
-Bob
On 05/14/2014 07:25 PM, John Taylor wrote:
Bob, I remember something like this with an all-in-one install a while back where that error showed up, but it was kind of a red herring with multipath, because the real problem was that the ovirtmgmt bridge didn't get created. And that was a known problem, I think, with F19; see http://www.ovirt.org/OVirt_3.4_TestDay "Important Note: Known Fedora 19 bug: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge."
If you can get to webadmin, could you try to set up ovirtmgmt manually on the host?
-John
On Wed, May 14, 2014 at 6:58 AM, Bob Doolittle <bob@doolittle.us.com> wrote:
I could really use some help on this one. My efforts to debug VDSM via instrumenting the python code are not working - the compiled code must be cached somehow.
Something is wrong with the way the multipathd service is being restarted. It doesn't look to me like systemctl is even being called for it.
Thanks, Bob
On May 13, 2014 1:12 PM, "Bob Doolittle" <bob@doolittle.us.com> wrote:
Maybe this isn't the actual problem after all.
I replaced /sbin/multipath with a script that runs the old version but suppresses those errors and returns exit status 0. But "vdsm-tool service-reload multipathd" is still failing and I don't know why.
I have attached my vdsm.log file.
Any guidance appreciated. I'll try digging through the python code for service.py and see if I can catch it when the multipath configuration is in place to see the exact issue.
-Bob
On 05/13/2014 12:27 PM, Bob Doolittle wrote:
Hi,
I have started a new installation as specified in the 3.4.1 release notes (fresh Fedora 19 install, yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm).
This is failing in the step "Configuring the management bridge".
Based on the vdsm.log, it appears I am hitting: Bug 988995 - vdsm multipath.py restarts multipathd, cutting the branch vdsm sits on
"multipath -F" is returning "invalid keyword: getuid_callout" and it appears that this is causing vdsm-tool to abort (although the command exit status is 0 and the bug report says that those are only harmless warnings).
There is no workaround stated in that bug report.
Help?
-Bob
--
Med Vänliga Hälsningar
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se

On Wed, May 14, 2014 at 08:06:00PM -0400, Bob Doolittle wrote:
Thanks John.
When hosted-engine aborts, it uninstalls everything. So there is no webadmin available.
I've tried modifying the VDSM python code (e.g. /usr/share/vdsm/storage/multipath.py and /usr/lib64/python2.7/site-packages/vdsm/tool/service.py) to see/work around what's going wrong, but oddly VDSM does not seem to be affected. I suspect the original code (or bytecodes) are cached somewhere.
I do not think that this is the issue, but you can remove all trace of *.pyc/*.pyo to make sure this is not the case.
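One way to follow that suggestion, sketched here against a throwaway directory. On the host the paths would be the vdsm trees mentioned above (/usr/share/vdsm and the python2.7 site-packages tree); the demo directory and file names below are assumptions for illustration.

```shell
#!/bin/bash
# Demo of clearing stale Python bytecode so edited .py sources take effect:
# set up a directory holding a source file plus its compiled copies.
dir=$(mktemp -d)
touch "$dir/service.py" "$dir/service.pyc" "$dir/service.pyo"

# Delete *.pyc and *.pyo; the interpreter will recompile from the .py files
# the next time they are imported.
find "$dir" -name '*.py[co]' -delete

ls "$dir"    # only service.py remains
```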
Restarting vdsmd service has no effect. I'd really appreciate some insight there so I can work around it.
What have you changed exactly? Where? If you add a plain syntax error to the script, does it still run?
Does oVirt 3.4.1 work more smoothly with F20? I chose F19 because it's more stable at this point, thinking that things would be more likely to work smoothly. It's not turning out that way... I'm willing to start over with F20 if there's no path forward.
Vdsm works fine on 3.4.1, but I do not know about hosted-engine. Engine itself is known not to work on F20.

On 05/15/2014 05:03 AM, Dan Kenigsberg wrote:
On Wed, May 14, 2014 at 08:06:00PM -0400, Bob Doolittle wrote:
Thanks John.
When hosted-engine aborts, it uninstalls everything. So there is no webadmin available.
I've tried modifying the VDSM python code (e.g. /usr/share/vdsm/storage/multipath.py and /usr/lib64/python2.7/site-packages/vdsm/tool/service.py) to see/work around what's going wrong, but oddly VDSM does not seem to be affected. I suspect the original code (or bytecode) is cached somewhere.

I do not think that this is the issue, but you can remove all trace of *.pyc/*.pyo to make sure this is not the case.
In fact I see new pyc being produced, so that's mysterious as well.
Restarting the vdsmd service has no effect. I'd really appreciate some insight there so I can work around it.

What have you changed exactly? Where? If you add a plain syntax error to the script, does it still run?
Very simple changes, to try to get a copy of the multipath.conf file that was presumably causing the error (since the hosted-engine setup cleans up when it aborts). I've attached them (full filenames ^^). The file never appears.

I also attached a replacement I installed for multipath, which runs the *real* multipath (moved to multipath.bak) and filters the output to remove the known problematic warnings, and then exits with 0 status.

But the weirdest thing is that I instrumented systemctl (replaced it with a script that logged its args and then executed the real one), and systemctl is *never* being invoked to start multipathd. Here's what it logged:

show-environment
show-environment
status vdsmd.service
show -p LoadState firewalld.service
show -p LoadState sshd.service
show -p LoadState firewalld.service
show -p Id firewalld.service
disable firewalld.service
stop firewalld.service
stop libvirtd.service
start libvirtd.service
status sshd.service
show -p Id vdsmd.service
enable vdsmd.service
stop vdsmd.service
start vdsmd.service

If I run "vdsm-tool service-reload multipathd" by hand then I see the log I'd expect: reload multipathd.service
Does oVirt 3.4.1 work more smoothly with F20? I chose F19 because it's more stable at this point, thinking that things would be more likely to work smoothly. It's not turning out that way... I'm willing to start over with F20 if there's no path forward.

Vdsm works fine on 3.4.1, but I do not know about hosted-engine. Engine itself is known not to work on F20.
What OS is hosted-engine known to work on? -Bob
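The systemctl instrumentation described above - a shim that logs its arguments and then hands off to the real binary - can be sketched like this. The shim function stands in for a script installed over the real systemctl (with the original moved aside); the log path and function names are illustrative, not from the thread.

```shell
#!/bin/bash
# Log file for recorded invocations (on the host this might live somewhere
# persistent like /var/log).
log=$(mktemp)

# Shim: record the arguments, then delegate. Here `true` stands in for
# exec'ing the real systemctl binary, so the demo runs anywhere.
systemctl_shim() {
    echo "$*" >> "$log"
    true "$@"
}

# A couple of calls like the ones observed in the thread:
systemctl_shim status vdsmd.service
systemctl_shim reload multipathd.service

cat "$log"
```

Grepping the log afterwards shows which unit operations were (or, as in this case, were not) ever issued.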

Il 15/05/2014 15:55, Bob Doolittle ha scritto:
On 05/15/2014 05:03 AM, Dan Kenigsberg wrote:
On Wed, May 14, 2014 at 08:06:00PM -0400, Bob Doolittle wrote:
Thanks John.
When hosted-engine aborts, it uninstalls everything. So there is no webadmin available.
I've tried modifying the VDSM python code (e.g. /usr/share/vdsm/storage/multipath.py and /usr/lib64/python2.7/site-packages/vdsm/tool/service.py) to see/work around what's going wrong, but oddly VDSM does not seem to be affected. I suspect the original code (or bytecode) is cached somewhere.

I do not think that this is the issue, but you can remove all trace of *.pyc/*.pyo to make sure this is not the case.
In fact I see new pyc being produced, so that's mysterious as well.
Restarting the vdsmd service has no effect. I'd really appreciate some insight there so I can work around it.

What have you changed exactly? Where? If you add a plain syntax error to the script, does it still run?
Very simple changes, to try to get a copy of the multipath.conf file that was presumably causing the error (since the hosted-engine setup cleans up when it aborts). I've attached them (full filenames ^^). The file never appears.
I also attached a replacement I installed for multipath, which runs the *real* multipath (moved to multipath.bak) and filters the output to remove the known problematic warnings, and then exits with 0 status.
But the weirdest thing is that I instrumented systemctl (replaced it with a script that logged its args and then executed the real one), and systemctl is *never* being invoked to start multipathd. Here's what it logged:
show-environment
show-environment
status vdsmd.service
show -p LoadState firewalld.service
show -p LoadState sshd.service
show -p LoadState firewalld.service
show -p Id firewalld.service
disable firewalld.service
stop firewalld.service
stop libvirtd.service
start libvirtd.service
status sshd.service
show -p Id vdsmd.service
enable vdsmd.service
stop vdsmd.service
start vdsmd.service
If I run "vdsm-tool service-reload multipathd" by hand then I see the log I'd expect: reload multipathd.service
If this command works, hosted-engine setup should not fail on it. The setup doesn't touch anything related to multipathd.
Does oVirt 3.4.1 work more smoothly with F20? I chose F19 because it's more stable at this point, thinking that things would be more likely to work smoothly. It's not turning out that way... I'm willing to start over with F20 if there's no path forward.

Vdsm works fine on 3.4.1, but I do not know about hosted-engine. Engine itself is known not to work on F20.
What OS is hosted-engine known to work on?
I use it on F19 with fedora virt-preview repo and on RHEL 6.5. I've seen mails about people using it on CentOS. I've not tested it on F20 yet but since vdsm should work fine on F20, it should work there too.
-Bob
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

On 05/15/2014 09:59 AM, Sandro Bonazzola wrote:
Il 15/05/2014 15:55, Bob Doolittle ha scritto:
If I run "vdsm-tool service-reload multipathd" by hand then I see the log I'd expect: reload multipathd.service

If this command works, hosted-engine setup should not fail on it. The setup doesn't touch anything related to multipathd.
Sorry, but that is not strictly correct. You can see the code in multipath.py (setupMultipath()) that writes a new multipath.conf file just before it tries to restart the service. It looks like it's trying to modify the scsi_id_path.

So when I run it by hand, I am using the original multipath.conf file. When it is run during hosted-engine setup, it is with a modified version. My efforts in modifying the Python have been to try to capture that modified version, so that I can exercise it by hand and see what's wrong.

-Bob
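One hedged way to capture the transient multipath.conf that setupMultipath() writes before cleanup-on-abort removes it: snapshot the file whenever its content changes. The demo below uses a temp file in place of /etc/multipath.conf, and the helper names are hypothetical, not from the thread.

```shell
#!/bin/bash
# conf stands in for /etc/multipath.conf; snapdir holds the captured copies.
conf=$(mktemp)
snapdir=$(mktemp -d)

snapshot_if_changed() {
    # Copy the file to a timestamped name whenever its content is new,
    # keeping the most recent copy under a fixed "latest" name too.
    local latest="$snapdir/latest"
    if ! cmp -s "$conf" "$latest" 2>/dev/null; then
        cp "$conf" "$latest"
        cp "$conf" "$snapdir/conf.$(date +%s%N)"
    fi
}

echo "defaults { }" > "$conf"                           # original config
snapshot_if_changed
echo "defaults { user_friendly_names yes }" > "$conf"   # "setup" rewrites it
snapshot_if_changed

ls "$snapdir"
```

Called from (say) an instrumented systemctl shim, a helper like this would preserve the generated config even after the setup rolls everything back.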

Il 15/05/2014 16:09, Bob Doolittle ha scritto:
On 05/15/2014 09:59 AM, Sandro Bonazzola wrote:
Il 15/05/2014 15:55, Bob Doolittle ha scritto:
If I run "vdsm-tool service-reload multipathd" by hand then I see the log I'd expect: reload multipathd.service

If this command works, hosted-engine setup should not fail on it. The setup doesn't touch anything related to multipathd.
Sorry but that is not strictly correct. You can see the code in multipath.py (setupMultipath()) that writes a new multipath.conf file just before it tries to restart the service. It looks like it's trying to modify the scsi_id_path.
So when I run it by hand, I am using the original multipath.conf file. When it is run during hosted-engine setup, it is with a modified version. My efforts in modifying the python have been to try to capture that modified version, so that I can exercise it by hand and see what's wrong.
If something is configuring it, it must be vdsm-tool: In hosted-engine setup code there's a call to "vdsm-tool configure --force".
-Bob
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

On Thu, May 15, 2014 at 04:20:09PM +0200, Sandro Bonazzola wrote:
Il 15/05/2014 16:09, Bob Doolittle ha scritto:
On 05/15/2014 09:59 AM, Sandro Bonazzola wrote:
Il 15/05/2014 15:55, Bob Doolittle ha scritto:
If I run "vdsm-tool service-reload multipathd" by hand then I see the log I'd expect: reload multipathd.service

If this command works, hosted-engine setup should not fail on it. The setup doesn't touch anything related to multipathd.
Sorry but that is not strictly correct. You can see the code in multipath.py (setupMultipath()) that writes a new multipath.conf file just before it tries to restart the service. It looks like it's trying to modify the scsi_id_path.
So when I run it by hand, I am using the original multipath.conf file. When it is run during hosted-engine setup, it is with a modified version. My efforts in modifying the python have been to try to capture that modified version, so that I can exercise it by hand and see what's wrong.
If something is configuring it, it must be vdsm-tool:
As I've noted earlier in this thread, that's not correct, unfortunately: Bug 1076531 - vdsm overwrites multipath.conf at every startup
In hosted-engine setup code there's a call to "vdsm-tool configure --force".
However, it's possible to copy the vdsm-sanctioned multipath.conf from one host to another, and try `systemctl reload multipathd.service`. Please look into journalctl for hints on why it has failed.

I have moved on. I honestly can't say what fixed this problem, but I'm not seeing it any more. Then I ran into a problem with NIC naming that apuimedo (and jvandewege) helped me through in IRC, but hosted-engine completes successfully now. I can't configure my VM engine network for some reason, but I'll look into that more deeply later.

What's the recommended procedure for self-hosted using an F20 engine when it comes to the network configuration wizard? Leave it alone? Set it for static config and configure with the FQDN and an address on the main host network?

Thanks, Bob

On 05/15/2014 11:56 AM, Dan Kenigsberg wrote:
On Thu, May 15, 2014 at 04:20:09PM +0200, Sandro Bonazzola wrote:
Il 15/05/2014 16:09, Bob Doolittle ha scritto:
On 05/15/2014 09:59 AM, Sandro Bonazzola wrote:
Il 15/05/2014 15:55, Bob Doolittle ha scritto:
If I run "vdsm-tool service-reload multipathd" by hand then I see the log I'd expect: reload multipathd.service

If this command works, hosted-engine setup should not fail on it. The setup doesn't touch anything related to multipathd.

Sorry, but that is not strictly correct. You can see the code in multipath.py (setupMultipath()) that writes a new multipath.conf file just before it tries to restart the service. It looks like it's trying to modify the scsi_id_path.
So when I run it by hand, I am using the original multipath.conf file. When it is run during hosted-engine setup, it is with a modified version. My efforts in modifying the Python have been to try to capture that modified version, so that I can exercise it by hand and see what's wrong.

If something is configuring it, it must be vdsm-tool:

As I've noted earlier in this thread, that's not correct, unfortunately: Bug 1076531 - vdsm overwrites multipath.conf at every startup
In hosted-engine setup code there's a call to "vdsm-tool configure --force".

However, it's possible to copy the vdsm-sanctioned multipath.conf from one host to another, and try `systemctl reload multipathd.service`.
Please look into journalctl for hints on why it has failed.

On Tue, May 13, 2014 at 01:11:52PM -0400, Bob Doolittle wrote:
Maybe this isn't the actual problem after all.
I replaced /sbin/multipath with a script that runs the old version but suppresses those errors and returns exit status 0. But "vdsm-tool service-reload multipathd" is still failing and I don't know why.
Does it work when you call it from the command line? Your traceback has: "Job for multipathd.service failed. See 'systemctl status multipathd.service' and 'journalctl -xn' for details." - so maybe there are clues there.
I have attached my vdsm.log file.
Any guidance appreciated. I'll try digging through the python code for service.py and see if I can catch it when the multipath configuration is in place to see the exact issue.
BTW, the fact that multipath is reloaded on vdsm startup is a known pain in the neck: Bug 1076531 - vdsm overwrites multipath.conf at every startup

Dan.
participants (5)

- Bob Doolittle
- Dan Kenigsberg
- John Taylor
- Karli Sjöberg
- Sandro Bonazzola