
Hi,

Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. The VM engine is also FC22.

Problem:
- No VM engine on the webui

Test Environment
Just two machines:

1 - Machine 1, used as storage:
- iscsi target with a raw file for the VM engine storage
- NFS4 for other data domains

2 - Machine 2, used as hypervisor

The installation went without problems, but as always, the VM engine is not present on the webui.

PS:
1- I gave the VM engine just 2 GB of memory, since I don't have much RAM on the hypervisor; could that be the cause of the problem?
2- This version of the ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm package was causing ovirt-ha-agent to crash. It was replaced with another one, which I still have: ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm. But that one is not present in the repository, so I had to update the package manually at the end of the ovirt-hosted-engine-setup installation.

Regards
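For context, the storage side of such a setup can be reproduced roughly as follows. This is only a sketch: the file paths, sizes and IQN are made-up examples, ACLs/auth are omitted, and targetcli syntax can vary slightly between versions.

    # raw file exported as an iscsi LUN for the VM engine storage
    truncate -s 40G /srv/iscsi/he-engine.raw
    targetcli backstores/fileio create he-engine /srv/iscsi/he-engine.raw
    targetcli iscsi/ create iqn.2015-09.local.storage:he
    targetcli iscsi/iqn.2015-09.local.storage:he/tpg1/luns create /backstores/fileio/he-engine

    # NFS4 export for the other data domains (uid/gid 36 is vdsm:kvm)
    echo '/srv/nfs/data *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
    exportfs -r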

On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup); it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
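A note on that last point: keeping the relevant packages current before deploying is a one-liner. A minimal sketch, assuming the oVirt repo is already enabled on the host:

    dnf update ovirt-hosted-engine-setup ovirt-hosted-engine-ha
    hosted-engine --deploy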

On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up:
https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
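A quick way to verify that the datacenter really went up is the engine's REST API. A sketch, where the engine host name and the admin password are placeholders (on some versions the API base is /api instead of /ovirt-engine/api):

    # datacenter status: look for <state>up</state> in the output
    curl -k -u 'admin@internal:password' https://engine.example.com/ovirt-engine/api/datacenters

    # storage domains the engine knows about
    curl -k -u 'admin@internal:password' https://engine.example.com/ovirt-engine/api/storagedomains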
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi

Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.

2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up: https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi

On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started, see the bug :
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced, I downloaded it manually a few days ago, this is it : ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre? ovirt-3.6-snapshot has a newer version.
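If you switch, something along these lines should do it. The snapshot baseurl below is an assumption patterned on the -pre URL above, so double-check it against resources.ovirt.org:

    cat > /etc/yum.repos.d/ovirt-3.6-snapshot.repo <<'EOF'
    [ovirt-3.6-snapshot]
    name=oVirt 3.6 nightly snapshot
    baseurl=http://plain.resources.ovirt.org/pub/ovirt-3.6-snapshot/rpm/fc22/
    enabled=1
    gpgcheck=0
    EOF
    dnf update ovirt-hosted-engine-ha

    # or, from a manually downloaded build:
    dnf update ./ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm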
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Sorry :-(
https://bugzilla.redhat.com/show_bug.cgi?id=1160094
I see that all patches there are merged, but bug is in POST.
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up: https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
-- Didi

Ok, I will test with the 3.6 snapshot repository ...
Thanks

2015-09-01 11:40 GMT+01:00 Yedidyah Bar David <didi@redhat.com>:
On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre?
ovirt-3.6-snapshot has a newer version.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Sorry :-(
https://bugzilla.redhat.com/show_bug.cgi?id=1160094
I see that all patches there are merged, but bug is in POST.
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up:
https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
-- Didi

On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre?
ovirt-3.6-snapshot has a newer version.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Sorry :-(
This is just about editing the VM from the web GUI, but in order to be able to edit the engine VM you should at least be able to find the engine VM in the engine, as it was also the case in 3.5. I'll try to reproduce it while verifying another patch.
I see that all patches there are merged, but bug is in POST.
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up:
https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
-- Didi

Hi again,
I tried with the snapshot repository, but I am having this error while executing engine-setup:

[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
[ INFO ] DNF Performing DNF transaction rollback
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

and in the deployment log I have these errors:

Saving custom users permissions on database objects...
upgrade script detected a change in Config, View or Stored Procedure...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql'...
2015-09-01 15:39:35 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', '-c', 'apply'] stderr:
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 1: /bin : is a directory
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 2: DATABASE : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 4: This : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 5: The : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 6: Add : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: syntax error near unexpected token « ( »
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: ` Update section (w/o overriding current value)'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 1: /bin : is a directory
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 2: Currently : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 3: This : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 4: This : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 5: So, : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 6: Since : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 7: bin/ : is a directory
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 9: update : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 10: syntax error near unexpected token « ( »
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 10: `and exists(select 1 from schema_version where version = '03010250' and current = true);'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 1: -- : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 2: syntax error near unexpected token « ( »
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 1: -- : command not found
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 2: syntax error near unexpected token « ( »
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 2: `select fn_db_add_column('materialized_views', 'min_refresh_rate_in_sec', 'int default 0');'
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: syntax error near unexpected token « ( »
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE varchar(63);'
2015-09-01 15:39:35 DEBUG otopi.context context._executeMethod:156 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
    oenginecons.EngineDBEnv.PGPASS_FILE
  File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in execute
    command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2015-09-01 15:39:35 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2015-09-01 15:39:35 DEBUG otopi.transaction transaction.abort:134 aborting 'DNF Transaction'
2015-09-01 15:39:35 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF Closing transaction with rollback
2015-09-01 15:39:35 INFO otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 DNF Performing DNF transaction rollback

Regards

2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre?
ovirt-3.6-snapshot has a newer version.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Sorry :-(
This is just about editing the VM from the web GUI, but in order to be able to edit the engine VM you should at least be able to find the engine VM in the engine, as it was also the case in 3.5.
I'll try to reproduce it while verifying another patch.
I see that all patches there are merged, but bug is in POST.
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up:
https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
-- Didi

On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I tried with the snapshot repository, but I am having this error while executing engine-setup
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
[ INFO ] DNF Performing DNF transaction rollback
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
and in the deployment log I have these errors:
Saving custom users permissions on database objects...
upgrade script detected a change in Config, View or Stored Procedure...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql'...
It was an issue with package building: all the SQL files were executable, hence the issue. We fixed it, and tomorrow's build should be OK. If you prefer to continue right now, simply recursively remove the x attribute from each SQL file under /usr/share/ovirt-engine/dbscripts.
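Something along these lines should do it, using the path from the message above:

    find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x {} +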
2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre?
ovirt-3.6-snapshot has a newer version.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1].
? :-)
Sorry :-(
This is just about editing the VM from the web GUI, but in order to be able to edit the engine VM you should at least be able to find the engine VM in the engine, as it was also the case in 3.5.
I'll try to reproduce it while verifying another patch.
I see that all patches there are merged, but bug is in POST.
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up:
https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment
Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
-- Didi

Hi,
After removing the -x from the sql files, the installation terminated successfully, but ...
I had a problem with vdsm, an error about permission denied with the KVM module, so I restarted my machine. After the reboot the ovirt-ha-agent stops, complaining about the vm.conf file not being present in /var/run/ovirt-hosted-engine-ha
And the mount command doesn't show any iscsi mount; the disk is detected via fdisk -l, and the lvs command returns all the logical volumes created.
I think it's a mount problem, but since there are many LVs, I don't know how to mount them manually.

LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----   1,00g
c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----  40,00g
feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
ids                                  5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
inbox                                5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
leases                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----   2,00g
master                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a-----   1,00g
metadata                             5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 512,00m
outbox                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m

2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I tried with the snapshot repository, but I am having this error while executing engine-setup
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
[ INFO ] DNF Performing DNF transaction rollback
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
and in the deployment log I have these errors:
Saving custom users permissions on database objects...
upgrade script detected a change in Config, View or Stored Procedure...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql'...
It was an issue with package building: all the SQL files were executable, hence the issue. We fixed it, and tomorrow's build should be OK. If you prefer to continue right now, simply recursively remove the x attribute from each SQL file under /usr/share/ovirt-engine/dbscripts.
2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre?
ovirt-3.6-snapshot has a newer version.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
2015-09-01 11:15 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
Another test of ovirt hosted-engine on FC22 using ovirt 3.6 Beta3. VM engine is also a FC22
Problem: - No VM engine on webui
This is still not supported, see/follow [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889
? :-)
Sorry :-(
This is just about editing the VM from the web GUI, but in order to be able to edit the engine VM you should at least be able to find the engine VM in the engine, as it was also the case in 3.5.
I'll try to reproduce it while verifying another patch.
I see that all patches there are merged, but bug is in POST.
Did you also try adding an additional storage domain for regular VMs? engine-VM will be shown on the engine only when you add at least one additional storage domain for regular VMs and the whole datacenter goes up:
https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1
Test Environment
Just two machines:
1 - Machine 1 used as storage : - iscsi target with a raw file for the VM engine storage - NFS4 for other data domains
2 - Machine 2 used as hypervisor
The installation went without problem, but as always, the VM engine is not present on the webui.
PS: 1- I gave the VM engine just 2Gb of memory since I don't have too much RAM on hypervisor, could that be the cause of the problem?
Shouldn't be related
2- This version of
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm
package is causing the ovirt-ha-agent to crash, it was replaced with another which I still have
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm,
but it's not present on the repository, I had to update the package manually at the end of ovirt-hosted-engine-setup installation.
Not sure I follow.
What exact repo was used?
hosted-engine --deploy does not update/install packages for you (as does engine-setup), it's up to you to make sure what you want/need is installed prior to running it.
Best, -- Didi
-- Didi

On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
After removing the -x from the sql files, the installation terminated successfully, but ...
I had a problem with vdsm, an error about permission denied with the KVM module, so I restarted my machine. After the reboot the ovirt-ha-agent stops, complaining about the vm.conf file not being present in /var/run/ovirt-hosted-engine-ha
And the mount command doesn't show any iscsi mount; the disk is detected via fdisk -l, and the lvs command returns all the logical volumes created.
I think it's a mount problem, but since there are many LVs, I don't know how to mount them manually.
Do you have something mounted under /rhev/data-center/mnt? If not, you probably hit this bug: https://bugzilla.redhat.com/1258465
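A few quick checks that could narrow it down (nothing here is specific to this setup):

    mount | grep rhev     # anything vdsm-mounted under /rhev/data-center/mnt?
    iscsiadm -m session   # is the iscsi session still logged in?
    lvs                   # which of the storage domain LVs are active ('a' in the Attr column)?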
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g
feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
ids 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
inbox 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
leases 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 2,00g
master 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
metadata 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 512,00m
outbox 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I tried with the snapshot repository, but I am having this error while executing engine-setup
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
[ INFO ] DNF Performing DNF transaction rollback
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
and in the deployment log I have these errors:
Saving custom users permissions on database objects...
upgrade script detected a change in Config, View or Stored Procedure...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql'...
It was an issue with package building: all the SQL files were executable, hence the issue. We fixed it, and tomorrow's build should be OK. If you prefer to continue right now, simply recursively remove the x attribute from each SQL file under /usr/share/ovirt-engine/dbscripts.
2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
I am using the repo of the 3.6 version (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
I installed the ovirt-hosted-engine-setup with its dependencies, and the ovirt-hosted-engine-ha package is one of them.
Correction: The problem with this version, ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, is that after the installation is done, the service ovirt-ha-agent crashes after being started; see the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1254745
A new version was produced; I downloaded it manually a few days ago. This is it:
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
This one did correct the problem, but it's not present anymore on the repository.
Was this on ovirt-3.6-pre?
ovirt-3.6-snapshot has a newer version.
For Simone: yes, I did add an NFS4 data domain, but no success so far; no VM engine present.
Regards.
Sorry :-(
https://bugzilla.redhat.com/show_bug.cgi?id=1160094
That bug is just about editing the VM from the web GUI, but in order to be able to edit the engine VM you should at least be able to find the engine VM in the engine, as was the case in 3.5.
I'll try to reproduce it while verifying another patch.
I see that all the patches there are merged, but the bug is still in POST.
-- Didi
Something mounted on /rhev/data-center/mnt? I'm not sure.
There were directories, and under these directories there were other directories (dom_md, ha_agent, images), and under them there were symbolic links to devices under /dev (ids, inbox, leases, etc...); the devices pointed to the LVM volumes created by the setup.
But the mount command didn't show anything, unlike NFS: when I used NFS, the mount and df commands did show the engine VM's mount point.
2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
After removing the -x from the sql files, the installation terminated successfully, but ...
I had a problem with vdsm, an error about permission denied with the KVM module, so I restarted my machine. After the reboot the ovirt-ha-agent stops, complaining that the vm.conf file is not present in /var/run/ovirt-hosted-engine-ha
And the mount command doesn't show any iSCSI mount; the disk is detected via fdisk -l, and the lvs command returns all the logical volumes created.
I think it's a mount problem, but since there are many LVs, I don't know how to mount them manually.
Do you have something mounted under /rhev/data-center/mnt? If not, you probably hit this bug: https://bugzilla.redhat.com/1258465
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g
feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
ids 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
inbox 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
leases 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 2,00g
master 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
metadata 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 512,00m
outbox 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
I could finally terminate the installation, but there is still no VM engine on the webui.
I added a data domain, the default DC is up, but no engine VM.
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I could finally terminate the installation, but still no vm engine on webui
I added a data domain, the default DC is up, but no engine VM.
Good, now you need to import the HostedEngine storage domain. Try to go to Storage -> Import Domain and put the path to the domain which you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (a patch is currently being written).
I will try this afternoon to do this, but just to clarify something: the hosted-engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain, am I correct?
If yes, where will I import the hosted-engine storage domain, into the default DC?
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I will try this afternoon to do this, but just to clarify something.
the hosted-engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain, am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster, but you could have chosen a different one in ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected. In setup there is a question like this:

  Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
  Please enter local datacenter name

It asks about the 'Local storage datacenter', which is basically the description we were using for the storage pool.
If yes, where will I import the hosted-engine storage domain; into the default DC?
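(For reference, that prompt shows up during hosted-engine --deploy roughly as below; the question text is taken from the setup wording quoted above, while the section banner and the default value in brackets are assumptions from memory, not verified against this exact build:)

    --== STORAGE CONFIGURATION ==--

    Local storage datacenter name is an internal name and
    currently will not be shown in engine's admin UI.
    Please enter local datacenter name [hosted_datacenter]: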
2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I could finally complete the installation, but there is still no engine VM on the webui.
I added a data domain, the default DC is up, but no engine VM.
Good, now you need to import the HostedEngine storage domain. Go to
Storage -> Import Domain and enter the path to the domain you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (a patch is currently being written).
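(Once the import succeeds, both the domain and the engine VM should become visible; a quick way to check from a shell, as a sketch against the engine's REST API. The admin@internal credentials, the engine FQDN placeholder and the VM name HostedEngine are assumptions:)

    # list the storage domains the engine knows about
    curl -k -u 'admin@internal:PASSWORD' https://ENGINE_FQDN/ovirt-engine/api/storagedomains
    # look for the imported engine VM
    curl -k -u 'admin@internal:PASSWORD' 'https://ENGINE_FQDN/ovirt-engine/api/vms?search=name%3DHostedEngine'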
2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Something was mounted on /rhev/data-center/mnt, but I'm not sure.
There were directories, and under these directories other directories (dom_md, ha_agent, images), and under them symbolic links to devices under /dev (ids, inbox, leases, etc.); the devices pointed to the LVM volumes created by the setup.
But the mount command didn't show anything, unlike NFS, where the mount and df commands did show the engine VM's mount point.
2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi,
After removing the x attribute from the sql files, the installation terminated successfully, but ...
I had a problem with vdsm, an error about permission denied with the KVM module, so I restarted my machine. After the reboot the ovirt-ha-agent crashes, complaining about the vm.conf file not being present in /var/run/ovirt-hosted-engine-ha.
Also, the mount command doesn't show any iSCSI mount; the disk is detected via fdisk -l, and the lvs command returns all the logical volumes created.
I think it's a mount problem, but since there are many LVs, I don't know how to mount them manually.
Do you have something mounted under /rhev/data-center/mnt? If not, you probably hit this bug: https://bugzilla.redhat.com/1258465
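(A quick way to check, as a minimal sketch; note that a block/iSCSI domain is not a real mount, so it shows up as a symlink tree rather than in mount output:)

    ls /rhev/data-center/mnt/            # NFS domains appear here as <server>:<path> directories
    ls /rhev/data-center/mnt/blockSD/    # block domains appear here as <domain-UUID> directories
    mount | grep /rhev/data-center       # only file-based (e.g. NFS) domains will be listed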
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g
feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
ids 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
inbox 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
leases 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 2,00g
master 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
metadata 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 512,00m
outbox 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
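(For inspection, an individual LV from the listing above can be activated with lvchange; a minimal sketch using the VG name shown there. Note that vdsm normally handles activation itself, and most of these LVs hold raw volume data rather than mountable filesystems:)

    lvchange -ay 5445bbee-bb3a-4e6d-9614-a0c9378fe078/master    # activate a single LV
    lvs 5445bbee-bb3a-4e6d-9614-a0c9378fe078                    # an 'a' in the Attr field means active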
2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi <wodel.youchi@gmail.com> wrote:

> Hi again,
>
> I tried with the snapshot repository, but I am having this error while executing engine-setup:
>
> [ INFO  ] Creating/refreshing Engine database schema
> [ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
> [ INFO  ] DNF Performing DNF transaction rollback
> [ INFO  ] Rolling back database schema
> [ INFO  ] Clearing Engine database engine
> [ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
> [ INFO  ] Stage: Clean up
>           Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
> [ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> and in the deployment log I have these errors (trimmed):
>
> Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 1: /bin : is a directory
> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 2: DATABASE : command not found
> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: syntax error near unexpected token '('
> [...]
> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: 'ALTER TABLE schema_version ALTER COLUMN installed_by TYPE varchar(63);'
> [...]
> RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
> 2015-09-01 15:39:35 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
> 2015-09-01 15:39:35 INFO otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 DNF Performing DNF transaction rollback

It was an issue with package building: all the sql files were executable, hence the failure. We fixed it, and tomorrow's build should be OK.
If you prefer to continue right now, simply recursively remove the x attribute from each sql file under /usr/share/ovirt-engine/dbscripts.
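(For example, one way to find the offending files and clear the execute bit, as a sketch assuming GNU findutils:)

    # list sql files that (wrongly) carry an execute bit
    find /usr/share/ovirt-engine/dbscripts -name '*.sql' -perm /111
    # remove the execute bit from all of them
    find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x {} +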

Thanks, but before that I'm stuck again: the storage of the VM engine is not detected after reboot. /rhev is populated, but ovirt-ha-agent crashes with:

MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down

I restarted vdsm, the ha-agent and the ha-broker, without success.

When I executed:

[root@noveria ~]# hosted-engine --vm-status
You must run deploy first

I got this:

[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b

and this:

[root@noveria rhev]# tree .
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf

Here I found some dangling symbolic links (target not present), like this one:

hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1

The lvscan command showed that the LV concerned is inactive; is this correct?

[root@noveria ~]# lvscan
File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation.
Parent PID 2935: bash
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive          '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE            '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive          '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive          '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit

and this:

[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}

and

[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
        uuid = 8b25f3be-7574-4f7a-8851-363129704e52
        vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage

[root@noveria ~]# lvs
File descriptor 9 (/dev/dri/card0) leaked on lvs invocation.
Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m

The VDSM logs don't show me anything:

MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)>
MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)>
MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64)
MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500
MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None
MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists
MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None)
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex What should I do to bring the VM engine back? 2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I will try to do this this afternoon, but just to clarify something first.
The hosted-engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain; am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster. You could have chosen a different one from ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
During setup there is a question like this: 'Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI. Please enter local datacenter name'. It asks about the 'local storage datacenter', which is basically the description we were using for the storage pool.
If yes, where should I import the hosted-engine storage domain, into the default DC?
2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I could finally complete the installation, but there is still no engine VM on the webui.
I added a data domain and the default DC is up, but there is still no engine VM.
Good. Now you need to import the HostedEngine storage domain. Try to go to
Storage -> Import Domain and put the path to the domain which you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (a patch is currently being written).
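As a quick way to confirm the import worked, the engine VM should eventually show up when listing VMs from the engine side. A minimal sketch using ovirt-shell (from the ovirt-engine-cli package); the engine URL and user below are placeholders, adjust them to your setup:

  # connect to the engine API; replace the FQDN with your engine VM's
  ovirt-shell -c -l "https://engine.example.com/api" -u admin@internal

  # inside the shell, check the imported domain and the auto-imported VM
  list storagedomains
  list vms

If the HostedEngine VM does not appear in 'list vms' after the domain import completes, the automatic import has probably not happened yet.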
2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Whether something is mounted on /rhev/data-center/mnt, I'm not sure.
There were directories, and under them other directories (dom_md, ha_agent, images), and under those there were symbolic links to devices under /dev (ids, inbox, leases, etc.); the devices pointed to the LVM logical volumes created by the setup.
But the mount command didn't show anything, unlike NFS: when I used NFS, the mount and df commands did show the engine VM's mount point.
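For what it's worth, with an iSCSI/block domain nothing will ever show up in mount or df: the domain is an LVM volume group on top of the multipath device, not a filesystem mount. A few read-only commands to verify the block storage is actually connected (a sketch; nothing here changes state):

  # active iSCSI sessions and the LUNs they expose
  iscsiadm -m session -P 3

  # multipath devices built on those LUNs
  multipath -ll

  # the storage domain VG (named after the SD uuid) and its LVs
  vgs
  lvs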
2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
> Hi,
>
> After removing the -x from the sql files, the installation terminated
> successfully, but ...
>
> I had a problem with vdsm, an error about permission denied with the
> KVM module, so I restarted my machine.
> After the reboot the ovirt-ha-agent stops, complaining about the
> vm.conf file not being present in /var/run/ovirt-hosted-engine-ha
>
> And the mount command doesn't show any iscsi mount; the disk is
> detected via fdisk -l and the lvs command returns all logical volumes
> created.
>
> I think it's a mount problem, but since there are many LVs, I don't
> know how to mount them manually.
>
Do you have something mounted under /rhev/data-center/mnt ? If not you probably hit this bug: https://bugzilla.redhat.com/1258465
> LV VG > Attr LSize Pool Origin Data% > Meta% Move > Log Cpy%Sync Convert > > 3b894e23-429d-43bf-b6cd-6427a387799a > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m > > > > be78c0fd-52bf-445a-9555-64061029c2d9 > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g > > > > c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g > > > > feede664-5754-4ca2-aeb3-af7aff32ed42 > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m > > > > ids > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 > -wi-ao---- 128,00m > > > inbox > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 > -wi-a----- 128,00m > > > leases > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 > -wi-a----- 2,00g > > > master > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 > -wi-a----- 1,00g > > > metadata > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- > 512,00m > > > outbox > 5445bbee-bb3a-4e6d-9614-a0c9378fe078 > -wi-a----- 128,00m > > > > > 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>: > >> >> >> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >> wodel.youchi@gmail.com> wrote: >> >>> Hi again, >>> >>> I tried with the snapshot repository, but I am having this error >>> while executing engine-setup >>> >>> [ INFO ] Creating/refreshing Engine database schema >>> [ ERROR ] Failed to execute stage 'Misc configuration': Command >>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>> te >>> [ INFO ] DNF Performing DNF transaction rollback >>> [ INFO ] Rolling back database schema >>> [ INFO ] Clearing Engine database engine >>> [ ERROR ] Engine database rollback failed: must be owner of >>> schema pg_catalog >>> [ INFO ] Stage: Clean up >>> Log file is located at >>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>> [ INFO ] Generating answer file >>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>> [ INFO ] Stage: Pre-termination >>> [ INFO ] Stage: Termination >>> [ ERROR ] Execution of setup failed >>> >>> >>> >>> and in the deployement log I have these errors >>> >>> Saving custom users permissions on database objects... >>> upgrade script detected a change in Config, View or Stored >>> Procedure... >>> Running upgrade shell script >>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>> Running upgrade shell script >>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>> Running upgrade shell script >>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... >>> Running upgrade shell script >>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>> Running upgrade shell script >>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... 
>>> >>> >>> 2015-09-01 15:39:35 DEBUG >>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>> '-c', 'apply'] >>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 1: /bin : is a >>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 2: DATABASE : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 4: This : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 5: The : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 6: Add : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 7: syntax error near the symbole unexpected « ( >>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>> ligne 7: ` Update section (w/o overriding current >>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 1: /bin : is a >>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 2: Currently : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 3: This : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 4: This : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 5: So, : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 6: Since : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 7: bin/ : is a >>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 9: update : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 10: * >>> >>> >>> *syntax error near the symbole unexpected « ( >>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>> ligne 10: `and exists(select 1 from schema_version where version = >>> '03010250' and current = >>> true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>> ligne 1: -- : command not >>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>> ligne 2: **syntax error near the symbole unexpected « ( »* >>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>> ligne 1: -- : commande introuvable >>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>> ligne 2: erreur de syntaxe près du symbole inattendu « ( » >>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>> ligne 2: `select fn_db_add_column('materialized_views', >>> 'min_refresh_rate_in_sec', 'int default 0');' >>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: 
>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>> varchar(63);' >>> 2015-09-01 15:39:35 DEBUG otopi.context context._executeMethod:156 >>> method exception >>> Traceback (most recent call last): >>> File "/usr/lib/python2.7/site-packages/otopi/context.py", line >>> 146, in _executeMethod >>> method['method']() >>> File >>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>> line 291, in _misc >>> oenginecons.EngineDBEnv.PGPASS_FILE >>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line >>> 946, in execute >>> command=args[0], >>> RuntimeError: Command >>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>> 2015-09-01 15:39:35 ERROR otopi.context context._executeMethod:165 >>> Failed to execute stage 'Misc configuration': Command >>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>> 2015-09-01 15:39:35 DEBUG otopi.transaction transaction.abort:134 >>> aborting 'DNF Transaction' >>> 2015-09-01 15:39:35 DEBUG >>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>> Closing transaction with rollback >>> 2015-09-01 15:39:35 INFO otopi.plugins.otopi.packagers.dnfpackager >>> dnfpackager.info:94 DNF Performing DNF transaction rollback >>> >> >> >> It was an issue with package building: all the sql files where >> executable and so the issue. >> We fixed it and tomorrow build should be OK. If you prefer to >> continue right now simply recursively remove the x attribute on each sql >> file under /usr/share/ovirt-engine/dbscripts >> >> >> >>> >>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com> >>> : >>> >>>> >>>> >>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>> didi@redhat.com> wrote: >>>> >>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>> wodel.youchi@gmail.com> wrote: >>>>> > Hi, >>>>> > >>>>> > I am using the repo of the 3.6 version >>>>> > (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/) >>>>> > >>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>> dependencies,and the >>>>> > ovirt-hosted-engine-ha package is one of them. >>>>> > >>>>> > Correction: The problem with this version >>>>> > >>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>> > is that after the installation is done, the service >>>>> ovirt-ha-agent crashes >>>>> > after being started, see the bug : >>>>> > >>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>> > >>>>> > A new version was produced, I downloaded it manually a few >>>>> days ago, this is >>>>> > it : >>>>> > >>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>> > >>>>> > This one did correct the problem, but it's not present anymore >>>>> on the >>>>> > repository. >>>>> >>>>> Was this on ovirt-3.6-pre? >>>>> >>>>> ovirt-3.6-snapshot has a newer version. >>>>> >>>>> > >>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>> success so far, no >>>>> > VM engine present. >>>>> > >>>>> > Regards. 
>>>>> > >>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>> stirabos@redhat.com>: >>>>> >> >>>>> >> >>>>> >> >>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>> didi@redhat.com> >>>>> >> wrote: >>>>> >>> >>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>> wodel.youchi@gmail.com> >>>>> >>> wrote: >>>>> >>> > Hi, >>>>> >>> > >>>>> >>> > Another test of ovirt hosted-engine on FC22 using ovirt >>>>> 3.6 Beta3. >>>>> >>> > VM engine is also a FC22 >>>>> >>> > >>>>> >>> > Problem: >>>>> >>> > - No VM engine on webui >>>>> >>> >>>>> >>> This is still not supported, see/follow [1]. >>>>> >>> >>>>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>> >> >>>>> >> >>>>> >> ? :-) >>>>> >>>>> Sorry :-( >>>>> >>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>> >>>> >>>> This is just about editing the VM from the web GUI but in order >>>> to be able to edit the engine VM you should be able at least to find the >>>> engine VM in the engine as it was also in 3.5 >>>> >>>> I'll try to reproduce verifying another patch >>>> >>>> >>>>> I see that all patches there are merged, but bug is in POST. >>>>> >>>>> >> >>>>> >> Did you also try adding an additional storage domain for >>>>> regular VMs? >>>>> >> engine-VM will be shown on the engine only when you add at >>>>> least one >>>>> >> additional storage domain for regulars VM and the whole >>>>> datacenter goes up: >>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>> >> >>>>> >> >>>>> >>> >>>>> >>> > Test Environment >>>>> >>> > Just two machines: >>>>> >>> > >>>>> >>> > 1 - Machine 1 used as storage : >>>>> >>> > - iscsi target with a raw file for the VM engine storage >>>>> >>> > - NFS4 for other data domains >>>>> >>> > >>>>> >>> > 2 - Machine 2 used as hypervisor >>>>> >>> > >>>>> >>> > >>>>> >>> > The installation went without problem, but as always, the >>>>> VM engine is >>>>> >>> > not >>>>> >>> > present on the webui. >>>>> >>> > >>>>> >>> > >>>>> >>> > PS: >>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I don't >>>>> have too much >>>>> >>> > RAM >>>>> >>> > on hypervisor, could that be the cause of the problem? >>>>> >>> >>>>> >>> Shouldn't be related >>>>> >>> >>>>> >>> > 2- This version of >>>>> >>> > >>>>> >>> > >>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>> >>> > package is causing the ovirt-ha-agent to crash, it was >>>>> replaced with >>>>> >>> > another >>>>> >>> > which I still have >>>>> >>> > >>>>> >>> > >>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>> >>> > but it's not present on the repository, I had to update >>>>> the package >>>>> >>> > manually >>>>> >>> > at the end of ovirt-hosted-engine-setup installation. >>>>> >>> >>>>> >>> Not sure I follow. >>>>> >>> >>>>> >>> What exact repo was used? >>>>> >>> >>>>> >>> hosted-engine --deploy does not update/install packages for >>>>> you (as >>>>> >>> does engine-setup), >>>>> >>> it's up to you to make sure what you want/need is installed >>>>> prior to >>>>> >>> running it. >>>>> >>> >>>>> >>> Best, >>>>> >>> -- >>>>> >>> Didi >>>>> >>> _______________________________________________ >>>>> >>> Users mailing list >>>>> >>> Users@ovirt.org >>>>> >>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >> >>>>> >> >>>>> > >>>>> >>>>> >>>>> >>>>> -- >>>>> Didi >>>>> >>>> >>>> >>> >> >

I found this in the vdsm log:

Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}

I really don't understand what is going on.

2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Thanks,
But before that, I am stuck again with the storage of the VM engine not being detected after reboot.
The /rhev tree is populated, but ovirt-ha-agent crashes with:
MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
I restarted vdsm, the ha-agent and the ha-broker without success.
When executed:

[root@noveria ~]# hosted-engine --vm-status
You must run deploy first
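Given the crash loop and the failing --vm-status, a few read-only checks may help narrow it down (a sketch using the standard hosted-engine paths; adjust if your logs live elsewhere):

  # is the vm.conf the agent complains about present at all?
  ls -l /var/run/ovirt-hosted-engine-ha/

  # state of the two HA services
  systemctl status ovirt-ha-broker ovirt-ha-agent

  # which VMs vdsm currently knows about
  vdsClient -s 0 list table

  # the agent log usually carries the full traceback
  tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log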
I got this:

[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
and this:

[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
Here I found some dangling symbolic links (target not present), like this one:

hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
The lvscan command shows that the LVs concerned are inactive; is this correct?

[root@noveria ~]# lvscan
  File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
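That matches the dangling link above: hosted-engine.metadata dangles precisely because its target LV (f05762e5-...) is inactive. For diagnosis only, the inactive LVs can be activated by hand; vdsm normally activates them itself through prepareImage, so treat this as a check rather than a fix:

  # activate the inactive LVs seen in lvscan (VG name is the SD uuid)
  lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
  lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1

  # the ha_agent links should now resolve
  ls -lL /rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/ha_agent/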
and this:

[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
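Since prepareImage is what activates the LVs and creates the links, the same call can in principle be repeated for the other images of the domain; the image/volume UUID pairs below are taken from the tree output above (a sketch only, the HA agent normally issues these itself):

  # hosted-engine.lockspace image
  vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "a44d1302-3165-4632-9d99-3e035dfc3ac7" "0f260ab0-3631-4c71-b332-c6c7f67f7342"

  # hosted-engine.metadata image
  vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "8e49032f-680b-40c2-b422-80d86dc7beda" "f05762e5-e8cd-45e7-ac19-303c1ade79d1"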
and

[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
        uuid = 8b25f3be-7574-4f7a-8851-363129704e52
        vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage
[root@noveria ~]# lvs
  File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi-------   1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a-----  40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a-----   2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a-----   1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
VDSM logs don't show me anything:

MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)>
MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)>
MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64)
MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500
MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None
MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists
MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None)
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
What should I do to bring the VM engine back?
2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I will try this afternoon to do this, but just to clarify something.
The hosted-engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain, am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster. You could choose a different one from ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
In setup there is a question like this: "Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI. Please enter local datacenter name". It asks about the 'Local storage datacenter', which is basically the description we were using for the storage pool.
If yes, where should I import the hosted-engine storage domain, into the default DC?
2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I could finally finish the installation, but there is still no engine VM on the webui.
I added a data domain, the default DC is up, but no engine VM.
Good, now you need to import the HostedEngine storage domain. Try to go to Storage -> Import Domain and put the path to the domain which you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (a patch is currently being written).
2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Something mounted on /rhev/data-center/mnt? I'm not sure.
There were directories, and under those directories other directories (dom_md, ha_agent, images), and under them symbolic links (ids, inbox, leases, etc.) to devices under /dev; the devices pointed to the LVM volumes created by the setup.
But the mount command didn't show anything, unlike NFS, where the mount and df commands did show the engine VM's mount point.
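That part is actually expected with an iSCSI block domain: the storage domain is an LVM volume group carved out of the LUN, not a mounted filesystem, so mount and df have nothing to list. Assuming the standard tools are installed, the session and the backing devices can still be checked with something like:

    [root@noveria ~]# iscsiadm -m session -P 3    # active iSCSI sessions and attached disks
    [root@noveria ~]# multipath -ll               # multipath device on top of the LUN
    [root@noveria ~]# pvs                         # the PV backing the domain's VG

where the multipath device, /dev/mapper/33000000100000001 in the vdsm logs, should show up as the physical volume of the 8b25f3be-... volume group.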
2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
> > > On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com > > wrote: > >> Hi, >> >> After removing the -x from the sql files, the installation >> terminated successfully, but ... >> >> I had a problem with vdsm, and error about permission denied with >> KVM module, so I restarted my machine. >> After the reboot the ovirt-ha-agent stops complaining about the >> vm.conf file not present in /var/rum/ovirt-hosted-engine-ha >> >> And the mount command doesn't show any iscsi mount, the disk is >> detected via fdisk -l >> the lvs command returns all logical volumes created. >> >> I think it's a mount problem, but since there are many lv, I don't >> how to mount them manually. >> > > Do you have something mounted under /rhev/data-center/mnt ? > If not you probably hit this bug: > https://bugzilla.redhat.com/1258465 > > > >> LV VG >> Attr LSize Pool Origin Data% >> Meta% Move >> Log Cpy%Sync Convert >> >> 3b894e23-429d-43bf-b6cd-6427a387799a >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m >> >> >> >> be78c0fd-52bf-445a-9555-64061029c2d9 >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g >> >> >> >> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g >> >> >> >> feede664-5754-4ca2-aeb3-af7aff32ed42 >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m >> >> >> >> ids >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >> -wi-ao---- 128,00m >> >> >> inbox >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >> -wi-a----- 128,00m >> >> >> leases >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >> -wi-a----- 2,00g >> >> >> master >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >> -wi-a----- 1,00g >> >> >> metadata >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- >> 512,00m >> >> >> outbox >> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >> -wi-a----- 128,00m >> >> >> >> >> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>: >> >>> >>> >>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >>> wodel.youchi@gmail.com> wrote: >>> >>>> Hi again, >>>> >>>> I tried with the snapshot repository, but I am having this error >>>> while executing engine-setup >>>> >>>> [ INFO ] Creating/refreshing Engine database schema >>>> [ ERROR ] Failed to execute stage 'Misc configuration': Command >>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>>> te >>>> [ INFO ] DNF Performing DNF transaction rollback >>>> [ INFO ] Rolling back database schema >>>> [ INFO ] Clearing Engine database engine >>>> [ ERROR ] Engine database rollback failed: must be owner of >>>> schema pg_catalog >>>> [ INFO ] Stage: Clean up >>>> Log file is located at >>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>>> [ INFO ] Generating answer file >>>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>>> [ INFO ] Stage: Pre-termination >>>> [ INFO ] Stage: Termination >>>> [ ERROR ] Execution of setup failed >>>> >>>> >>>> >>>> and in the deployement log I have these errors >>>> >>>> Saving custom users permissions on database objects... >>>> upgrade script detected a change in Config, View or Stored >>>> Procedure... >>>> Running upgrade shell script >>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>>> Running upgrade shell script >>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>>> Running upgrade shell script >>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... 
>>>> Running upgrade shell script >>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>>> Running upgrade shell script >>>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... >>>> >>>> >>>> 2015-09-01 15:39:35 DEBUG >>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>>> '-c', 'apply'] >>>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 1: /bin : is a >>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 2: DATABASE : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 4: This : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 5: The : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 6: Add : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 7: syntax error near the symbole unexpected « ( >>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>> ligne 7: ` Update section (w/o overriding current >>>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 1: /bin : is a >>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 2: Currently : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 3: This : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 4: This : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 5: So, : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 6: Since : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 7: bin/ : is a >>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 9: update : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 10: * >>>> >>>> >>>> *syntax error near the symbole unexpected « ( >>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>> ligne 10: `and exists(select 1 from schema_version where version = >>>> '03010250' and current = >>>> true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>> ligne 1: -- : command not >>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>> ligne 2: **syntax error near the symbole unexpected « ( »* >>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>> ligne 1: -- : commande introuvable >>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>> ligne 2: erreur de syntaxe 
près du symbole inattendu « ( » >>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>> ligne 2: `select fn_db_add_column('materialized_views', >>>> 'min_refresh_rate_in_sec', 'int default 0');' >>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>>> varchar(63);' >>>> 2015-09-01 15:39:35 DEBUG otopi.context >>>> context._executeMethod:156 method exception >>>> Traceback (most recent call last): >>>> File "/usr/lib/python2.7/site-packages/otopi/context.py", line >>>> 146, in _executeMethod >>>> method['method']() >>>> File >>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>>> line 291, in _misc >>>> oenginecons.EngineDBEnv.PGPASS_FILE >>>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line >>>> 946, in execute >>>> command=args[0], >>>> RuntimeError: Command >>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>> 2015-09-01 15:39:35 ERROR otopi.context >>>> context._executeMethod:165 Failed to execute stage 'Misc configuration': >>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>> 2015-09-01 15:39:35 DEBUG otopi.transaction transaction.abort:134 >>>> aborting 'DNF Transaction' >>>> 2015-09-01 15:39:35 DEBUG >>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>>> Closing transaction with rollback >>>> 2015-09-01 15:39:35 INFO >>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 >>>> DNF Performing DNF transaction rollback >>>> >>> >>> >>> It was an issue with package building: all the sql files where >>> executable and so the issue. >>> We fixed it and tomorrow build should be OK. If you prefer to >>> continue right now simply recursively remove the x attribute on each sql >>> file under /usr/share/ovirt-engine/dbscripts >>> >>> >>> >>>> >>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com >>>> >: >>>> >>>>> >>>>> >>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>>> didi@redhat.com> wrote: >>>>> >>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>>> wodel.youchi@gmail.com> wrote: >>>>>> > Hi, >>>>>> > >>>>>> > I am using the repo of the 3.6 version >>>>>> > (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/ >>>>>> ) >>>>>> > >>>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>>> dependencies,and the >>>>>> > ovirt-hosted-engine-ha package is one of them. >>>>>> > >>>>>> > Correction: The problem with this version >>>>>> > >>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>>> > is that after the installation is done, the service >>>>>> ovirt-ha-agent crashes >>>>>> > after being started, see the bug : >>>>>> > >>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>>> > >>>>>> > A new version was produced, I downloaded it manually a few >>>>>> days ago, this is >>>>>> > it : >>>>>> > >>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>>> > >>>>>> > This one did correct the problem, but it's not present >>>>>> anymore on the >>>>>> > repository. >>>>>> >>>>>> Was this on ovirt-3.6-pre? >>>>>> >>>>>> ovirt-3.6-snapshot has a newer version. 
>>>>>> >>>>>> > >>>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>>> success so far, no >>>>>> > VM engine present. >>>>>> > >>>>>> > Regards. >>>>>> > >>>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>>> stirabos@redhat.com>: >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>>> didi@redhat.com> >>>>>> >> wrote: >>>>>> >>> >>>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>>> wodel.youchi@gmail.com> >>>>>> >>> wrote: >>>>>> >>> > Hi, >>>>>> >>> > >>>>>> >>> > Another test of ovirt hosted-engine on FC22 using ovirt >>>>>> 3.6 Beta3. >>>>>> >>> > VM engine is also a FC22 >>>>>> >>> > >>>>>> >>> > Problem: >>>>>> >>> > - No VM engine on webui >>>>>> >>> >>>>>> >>> This is still not supported, see/follow [1]. >>>>>> >>> >>>>>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>>> >> >>>>>> >> >>>>>> >> ? :-) >>>>>> >>>>>> Sorry :-( >>>>>> >>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>>> >>>>> >>>>> This is just about editing the VM from the web GUI but in order >>>>> to be able to edit the engine VM you should be able at least to find the >>>>> engine VM in the engine as it was also in 3.5 >>>>> >>>>> I'll try to reproduce verifying another patch >>>>> >>>>> >>>>>> I see that all patches there are merged, but bug is in POST. >>>>>> >>>>>> >> >>>>>> >> Did you also try adding an additional storage domain for >>>>>> regular VMs? >>>>>> >> engine-VM will be shown on the engine only when you add at >>>>>> least one >>>>>> >> additional storage domain for regulars VM and the whole >>>>>> datacenter goes up: >>>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>>> >> >>>>>> >> >>>>>> >>> >>>>>> >>> > Test Environment >>>>>> >>> > Just two machines: >>>>>> >>> > >>>>>> >>> > 1 - Machine 1 used as storage : >>>>>> >>> > - iscsi target with a raw file for the VM engine >>>>>> storage >>>>>> >>> > - NFS4 for other data domains >>>>>> >>> > >>>>>> >>> > 2 - Machine 2 used as hypervisor >>>>>> >>> > >>>>>> >>> > >>>>>> >>> > The installation went without problem, but as always, the >>>>>> VM engine is >>>>>> >>> > not >>>>>> >>> > present on the webui. >>>>>> >>> > >>>>>> >>> > >>>>>> >>> > PS: >>>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I don't >>>>>> have too much >>>>>> >>> > RAM >>>>>> >>> > on hypervisor, could that be the cause of the problem? >>>>>> >>> >>>>>> >>> Shouldn't be related >>>>>> >>> >>>>>> >>> > 2- This version of >>>>>> >>> > >>>>>> >>> > >>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>>> >>> > package is causing the ovirt-ha-agent to crash, it was >>>>>> replaced with >>>>>> >>> > another >>>>>> >>> > which I still have >>>>>> >>> > >>>>>> >>> > >>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>>> >>> > but it's not present on the repository, I had to update >>>>>> the package >>>>>> >>> > manually >>>>>> >>> > at the end of ovirt-hosted-engine-setup installation. >>>>>> >>> >>>>>> >>> Not sure I follow. >>>>>> >>> >>>>>> >>> What exact repo was used? >>>>>> >>> >>>>>> >>> hosted-engine --deploy does not update/install packages for >>>>>> you (as >>>>>> >>> does engine-setup), >>>>>> >>> it's up to you to make sure what you want/need is installed >>>>>> prior to >>>>>> >>> running it. 
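For reference, the workaround Simone describes (the .sql files were apparently shipped with the execute bit set, so they ended up being executed as shell scripts instead of being fed to psql, hence the 'command not found' noise above) can be applied with something like:

    find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x {} +

and then re-running engine-setup.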

Hi again,

I had to restart the installation all over; I used the freshly pushed new packages. I had two problems:

1 - The engine's setup didn't terminate correctly when I chose to use ovirt-vmconsole, with this error:

[ INFO ] Restarting ovirt-vmconsole proxy service
[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

So I executed engine-cleanup, which terminated with this error:

[ INFO ] Clearing Engine database engine
[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of cleanup failed

And then I executed engine-setup again without ovirt-vmconsole; this time the setup completed.

2 - I added an NFS4 storage domain to the default DC (Default), the DC went up, and then I tried to import the hosted-engine storage domain, but without success: click on import, choose iSCSI, connect to the target, scan, log in, but no device shown!!! (iscsi.jpeg)

The only new thing is that the disk of the engine VM is now shown under the Disks tab.

2015-09-02 19:50 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
I found this in the vdsm log:
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
I really don't understand anything
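'Virtual machine does not exist' here just means vdsm has no VM with that UUID defined at the moment. If I recall the vdsClient syntax correctly, the VMs vdsm knows about can be listed with:

    [root@noveria ~]# vdsClient -s 0 list table

If ab1dc1a9-b6e9-4890-8485-1019da2f328f is absent from that list, the engine VM was simply never started after the reboot, which would fit the ha-agent dying before it could start it.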
2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Thanks,
but before that, I'm stuck again with the storage of the VM engine not being detected after reboot.
The /rhev tree is populated, but ovirt-ha-agent crashes with:
MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
I restarted vdsm, the ha-agent and the broker agent, without success.
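The error above says the agent could not find the memSize key in /var/run/ovirt-hosted-engine-ha/vm.conf, so the file is presumably missing or empty. Before restarting the services yet again, it's probably worth checking along these lines:

    [root@noveria ~]# ls -l /var/run/ovirt-hosted-engine-ha/
    [root@noveria ~]# grep memSize /var/run/ovirt-hosted-engine-ha/vm.conf

If the file is missing or empty, the agent failed to extract vm.conf from the shared storage domain, which points back at the storage links problem below.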
When I executed:
[root@noveria ~]# hosted-engine --vm-status
You must run deploy first
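As a side note, hosted-engine --vm-status answers 'You must run deploy first' when it cannot find the host's hosted-engine configuration; assuming the default layout, its presence can be verified with:

    [root@noveria ~]# ls -l /etc/ovirt-hosted-engine/hosted-engine.conf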
And I got this:
[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
and this:
[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
Here I found some symbolic links blinking (broken, target not present), like this one:
hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
The lvscan command showed that the LVs concerned are inactive; is this correct?
[root@noveria ~]# lvscan
  File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
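The inactive LVs explain the broken links: the /dev/<VG>/<LV> targets only exist while an LV is active. As a test (not a fix for the root cause), the two inactive hosted-engine volumes could be activated manually with something like:

    [root@noveria ~]# lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    [root@noveria ~]# lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1

Normally it is vdsm's prepareImage that activates them on demand (and indeed, after the prepareImage call below, cbb10cf0-... shows as active in lvs), so having them inactive right after boot is expected until something asks for them.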
and this:
[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
and:
[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
        uuid = 8b25f3be-7574-4f7a-8851-363129704e52
        vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage
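Note that pool = [] means hosted_storage is not attached to any storage pool, which is consistent with the domain not having been imported into a datacenter yet. If I remember the vdsClient verbs correctly, the pools the host is connected to can be cross-checked with:

    [root@noveria ~]# vdsClient -s 0 getConnectedStoragePoolsList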
[root@noveria ~]# lvs
  File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
VDSM logs doesn't show me anything MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)> MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)> MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64) MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500 MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center' MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: [] MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center' storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher... 
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
What should I do to bring the VM engine back?
2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I will try to do this this afternoon, but first, just to clarify something.
The hosted-engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain; am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster. You could have chosen a different one from ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
In setup there is a question like this: "Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI. Please enter local datacenter name". It asks about the 'Local storage datacenter', which is basically the description we were using for the storage pool.
If yes, where should I import the hosted-engine storage domain? Into the default DC?
2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I could finally complete the installation, but there is still no engine VM on the webui.
I added a data domain, the default DC is up, but no engine VM.
Good, now you need to import the HostedEngine storage domain. Try to go to
Storage -> Import Domain and put the path to the domain which you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (the patch is currently being written).
2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
> Something mounted on /rhev/data-center/mnt I'm not sure. > > there were directories, and under these directories there were other > directories (dom_md, ha_agent, images), and under them there were symbolic > links to devices under /dev > (ids, inbox, leases, etc...) the devices pointed to the lvm > partitions created by the setup. > > but the mount command didn't show anything, unlike nfs, when I used > it the mount and df commands showed up the engine's VM mount point. > > > 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>: > >> >> >> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi < >> wodel.youchi@gmail.com> wrote: >> >>> Hi, >>> >>> After removing the -x from the sql files, the installation >>> terminated successfully, but ... >>> >>> I had a problem with vdsm, and error about permission denied with >>> KVM module, so I restarted my machine. >>> After the reboot the ovirt-ha-agent stops complaining about the >>> vm.conf file not present in /var/rum/ovirt-hosted-engine-ha >>> >>> And the mount command doesn't show any iscsi mount, the disk is >>> detected via fdisk -l >>> the lvs command returns all logical volumes created. >>> >>> I think it's a mount problem, but since there are many lv, I don't >>> how to mount them manually. >>> >> >> Do you have something mounted under /rhev/data-center/mnt ? >> If not you probably hit this bug: >> https://bugzilla.redhat.com/1258465 >> >> >> >>> LV VG >>> Attr LSize Pool Origin Data% >>> Meta% Move >>> Log Cpy%Sync Convert >>> >>> 3b894e23-429d-43bf-b6cd-6427a387799a >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m >>> >>> >>> >>> be78c0fd-52bf-445a-9555-64061029c2d9 >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g >>> >>> >>> >>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g >>> >>> >>> >>> feede664-5754-4ca2-aeb3-af7aff32ed42 >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m >>> >>> >>> >>> ids >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>> -wi-ao---- 128,00m >>> >>> >>> inbox >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>> -wi-a----- 128,00m >>> >>> >>> leases >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>> -wi-a----- 2,00g >>> >>> >>> master >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>> -wi-a----- 1,00g >>> >>> >>> metadata >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- >>> 512,00m >>> >>> >>> outbox >>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>> -wi-a----- 128,00m >>> >>> >>> >>> >>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com> >>> : >>> >>>> >>>> >>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >>>> wodel.youchi@gmail.com> wrote: >>>> >>>>> Hi again, >>>>> >>>>> I tried with the snapshot repository, but I am having this error >>>>> while executing engine-setup >>>>> >>>>> [ INFO ] Creating/refreshing Engine database schema >>>>> [ ERROR ] Failed to execute stage 'Misc configuration': Command >>>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>>>> te >>>>> [ INFO ] DNF Performing DNF transaction rollback >>>>> [ INFO ] Rolling back database schema >>>>> [ INFO ] Clearing Engine database engine >>>>> [ ERROR ] Engine database rollback failed: must be owner of >>>>> schema pg_catalog >>>>> [ INFO ] Stage: Clean up >>>>> Log file is located at >>>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>>>> [ INFO ] Generating answer file >>>>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>>>> [ INFO ] Stage: Pre-termination >>>>> [ INFO ] Stage: Termination >>>>> [ ERROR ] 
Execution of setup failed >>>>> >>>>> >>>>> >>>>> and in the deployement log I have these errors >>>>> >>>>> Saving custom users permissions on database objects... >>>>> upgrade script detected a change in Config, View or Stored >>>>> Procedure... >>>>> Running upgrade shell script >>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>>>> Running upgrade shell script >>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>>>> Running upgrade shell script >>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... >>>>> Running upgrade shell script >>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>>>> Running upgrade shell script >>>>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... >>>>> >>>>> >>>>> 2015-09-01 15:39:35 DEBUG >>>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>>>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>>>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>>>> '-c', 'apply'] >>>>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 1: /bin : is a >>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 2: DATABASE : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 4: This : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 5: The : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 6: Add : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 7: syntax error near the symbole unexpected « ( >>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>> ligne 7: ` Update section (w/o overriding current >>>>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 1: /bin : is a >>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 2: Currently : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 3: This : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 4: This : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 5: So, : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 6: Since : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 7: bin/ : is a >>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 9: update : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 10: * >>>>> >>>>> >>>>> *syntax error near the symbole unexpected « ( >>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>> ligne 10: `and exists(select 1 from schema_version where version = >>>>> '03010250' and current = >>>>> 
true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>> ligne 1: -- : command not >>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>> ligne 2: **syntax error near the symbole unexpected « ( »* >>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>> ligne 1: -- : commande introuvable >>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>> ligne 2: erreur de syntaxe près du symbole inattendu « ( » >>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>> ligne 2: `select fn_db_add_column('materialized_views', >>>>> 'min_refresh_rate_in_sec', 'int default 0');' >>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>>>> varchar(63);' >>>>> 2015-09-01 15:39:35 DEBUG otopi.context >>>>> context._executeMethod:156 method exception >>>>> Traceback (most recent call last): >>>>> File "/usr/lib/python2.7/site-packages/otopi/context.py", line >>>>> 146, in _executeMethod >>>>> method['method']() >>>>> File >>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>>>> line 291, in _misc >>>>> oenginecons.EngineDBEnv.PGPASS_FILE >>>>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line >>>>> 946, in execute >>>>> command=args[0], >>>>> RuntimeError: Command >>>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>> 2015-09-01 15:39:35 ERROR otopi.context >>>>> context._executeMethod:165 Failed to execute stage 'Misc configuration': >>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>> 2015-09-01 15:39:35 DEBUG otopi.transaction >>>>> transaction.abort:134 aborting 'DNF Transaction' >>>>> 2015-09-01 15:39:35 DEBUG >>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>>>> Closing transaction with rollback >>>>> 2015-09-01 15:39:35 INFO >>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 >>>>> DNF Performing DNF transaction rollback >>>>> >>>> >>>> >>>> It was an issue with package building: all the sql files where >>>> executable and so the issue. >>>> We fixed it and tomorrow build should be OK. If you prefer to >>>> continue right now simply recursively remove the x attribute on each sql >>>> file under /usr/share/ovirt-engine/dbscripts >>>> >>>> >>>> >>>>> >>>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi < >>>>> stirabos@redhat.com>: >>>>> >>>>>> >>>>>> >>>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>>>> didi@redhat.com> wrote: >>>>>> >>>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>> > Hi, >>>>>>> > >>>>>>> > I am using the repo of the 3.6 version >>>>>>> > ( >>>>>>> http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/) >>>>>>> > >>>>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>>>> dependencies,and the >>>>>>> > ovirt-hosted-engine-ha package is one of them. 
>>>>>>> > >>>>>>> > Correction: The problem with this version >>>>>>> > >>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>>>> > is that after the installation is done, the service >>>>>>> ovirt-ha-agent crashes >>>>>>> > after being started, see the bug : >>>>>>> > >>>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>>>> > >>>>>>> > A new version was produced, I downloaded it manually a few >>>>>>> days ago, this is >>>>>>> > it : >>>>>>> > >>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>>>> > >>>>>>> > This one did correct the problem, but it's not present >>>>>>> anymore on the >>>>>>> > repository. >>>>>>> >>>>>>> Was this on ovirt-3.6-pre? >>>>>>> >>>>>>> ovirt-3.6-snapshot has a newer version. >>>>>>> >>>>>>> > >>>>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>>>> success so far, no >>>>>>> > VM engine present. >>>>>>> > >>>>>>> > Regards. >>>>>>> > >>>>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>>>> stirabos@redhat.com>: >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>>>> didi@redhat.com> >>>>>>> >> wrote: >>>>>>> >>> >>>>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>>>> wodel.youchi@gmail.com> >>>>>>> >>> wrote: >>>>>>> >>> > Hi, >>>>>>> >>> > >>>>>>> >>> > Another test of ovirt hosted-engine on FC22 using ovirt >>>>>>> 3.6 Beta3. >>>>>>> >>> > VM engine is also a FC22 >>>>>>> >>> > >>>>>>> >>> > Problem: >>>>>>> >>> > - No VM engine on webui >>>>>>> >>> >>>>>>> >>> This is still not supported, see/follow [1]. >>>>>>> >>> >>>>>>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>>>> >> >>>>>>> >> >>>>>>> >> ? :-) >>>>>>> >>>>>>> Sorry :-( >>>>>>> >>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>>>> >>>>>> >>>>>> This is just about editing the VM from the web GUI but in order >>>>>> to be able to edit the engine VM you should be able at least to find the >>>>>> engine VM in the engine as it was also in 3.5 >>>>>> >>>>>> I'll try to reproduce verifying another patch >>>>>> >>>>>> >>>>>>> I see that all patches there are merged, but bug is in POST. >>>>>>> >>>>>>> >> >>>>>>> >> Did you also try adding an additional storage domain for >>>>>>> regular VMs? >>>>>>> >> engine-VM will be shown on the engine only when you add at >>>>>>> least one >>>>>>> >> additional storage domain for regulars VM and the whole >>>>>>> datacenter goes up: >>>>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>>>> >> >>>>>>> >> >>>>>>> >>> >>>>>>> >>> > Test Environment >>>>>>> >>> > Just two machines: >>>>>>> >>> > >>>>>>> >>> > 1 - Machine 1 used as storage : >>>>>>> >>> > - iscsi target with a raw file for the VM engine >>>>>>> storage >>>>>>> >>> > - NFS4 for other data domains >>>>>>> >>> > >>>>>>> >>> > 2 - Machine 2 used as hypervisor >>>>>>> >>> > >>>>>>> >>> > >>>>>>> >>> > The installation went without problem, but as always, >>>>>>> the VM engine is >>>>>>> >>> > not >>>>>>> >>> > present on the webui. >>>>>>> >>> > >>>>>>> >>> > >>>>>>> >>> > PS: >>>>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I don't >>>>>>> have too much >>>>>>> >>> > RAM >>>>>>> >>> > on hypervisor, could that be the cause of the problem? 
>>>>>>> >>> >>>>>>> >>> Shouldn't be related >>>>>>> >>> >>>>>>> >>> > 2- This version of >>>>>>> >>> > >>>>>>> >>> > >>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>>>> >>> > package is causing the ovirt-ha-agent to crash, it was >>>>>>> replaced with >>>>>>> >>> > another >>>>>>> >>> > which I still have >>>>>>> >>> > >>>>>>> >>> > >>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>>>> >>> > but it's not present on the repository, I had to update >>>>>>> the package >>>>>>> >>> > manually >>>>>>> >>> > at the end of ovirt-hosted-engine-setup installation. >>>>>>> >>> >>>>>>> >>> Not sure I follow. >>>>>>> >>> >>>>>>> >>> What exact repo was used? >>>>>>> >>> >>>>>>> >>> hosted-engine --deploy does not update/install packages >>>>>>> for you (as >>>>>>> >>> does engine-setup), >>>>>>> >>> it's up to you to make sure what you want/need is >>>>>>> installed prior to >>>>>>> >>> running it. >>>>>>> >>> >>>>>>> >>> Best, >>>>>>> >>> -- >>>>>>> >>> Didi >>>>>>> >>> _______________________________________________ >>>>>>> >>> Users mailing list >>>>>>> >>> Users@ovirt.org >>>>>>> >>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >> >>>>>>> >> >>>>>>> > >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Didi >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> >
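Following up on Simone's note in the quoted thread above about the .sql files having been shipped executable: a hedged sketch for stripping the bit from the scripts only, without touching directory permissions (path taken from Simone's message):

# remove the executable bit from .sql files only; directories stay traversable
find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x '{}' +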

On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I had to restart the installation from scratch, using the freshly pushed new packages.
I had two problems:

1 - engine-setup did not terminate correctly when I chose to use ovirt-vmconsole; it failed with this error:

[ INFO ] Restarting ovirt-vmconsole proxy service
[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'
[ INFO ] Stage: Clean up
         Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
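A minimal way to see why that service failed to stop, assuming standard systemd units on FC22 (these are generic systemd commands, not oVirt-specific tooling):

# show the unit state and its recent journal entries from this boot
systemctl status ovirt-vmconsole-proxy-sshd
journalctl -u ovirt-vmconsole-proxy-sshd -b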
So I executed engine-cleanup, which terminated with this error:
[ INFO ] Clearing Engine database engine
[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
         Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of cleanup failed
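If engine-cleanup keeps failing on the pg_catalog ownership error, one hedged fallback, assuming the default 'engine' database and role names created by engine-setup, is to drop them manually as the postgres superuser and rerun engine-setup afterwards:

# run as root: drop the engine database and role so setup can recreate them
su - postgres -c "psql -c 'DROP DATABASE IF EXISTS engine;'"
su - postgres -c "psql -c 'DROP ROLE IF EXISTS engine;'"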
And then I executed engine-setup again, without ovirt-vmconsole. This time the setup completed.
2 - I added an NFS4 storage domain to the default DC (Default); the DC went up, and then I tried to import the hosted-engine storage domain, but without success.
I click on Import, choose iSCSI, connect to the target, scan, log in, but no device is shown!!! (iscsi.jpeg)
Can you please attach the relevant VDSM logs from the host you were using to import that storage domain?
The only new thing I have is the disk of the engine VM being shown under the Disks tab.
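As a hedged cross-check of the empty device list on the host side, outside the webadmin UI (target IQN and portal taken from the connectStorageServer log earlier in this thread):

# is the target discoverable, and is a session already logged in?
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m session -P 3
# does multipath already own the LUN that should show up in the UI?
multipath -ll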
2015-09-02 19:50 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
I found this in the VDSM log:
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
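For what it's worth, a hedged way to list the VMs this vdsm instance actually knows about, to compare against the UUID in that vmGetStats call:

# table view of the VMs currently tracked by vdsm
vdsClient -s 0 list table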
I really don't understand anything
2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Thanks,
but before that, I am stuck again with the storage of the engine VM not being detected after reboot.

/rhev is populated, but ovirt-ha-agent crashes with:
MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
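A minimal hedged check of the local vm.conf the agent is complaining about, using the path straight from the error above:

# does the file exist at all, and does it contain the memSize key the agent wants?
ls -l /var/run/ovirt-hosted-engine-ha/vm.conf
grep -i memSize /var/run/ovirt-hosted-engine-ha/vm.conf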
I restarted vdsm, the ha-agent and the ha-broker, without success.
When I executed hosted-engine --vm-status, it returned:

[root@noveria ~]# hosted-engine --vm-status
You must run deploy first
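A hedged sanity check, assuming the deployment wrote its configuration to the usual location; if this file is missing, the hosted-engine CLI behaves as if deploy was never run:

# the hosted-engine CLI reads its configuration from here
ls -l /etc/ovirt-hosted-engine/hosted-engine.conf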
I got this
[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
And this:

[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
Here I found some symbolic links dangling (target not present), like this one:
hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
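A hedged one-liner to enumerate every dangling link under the storage repository and the vdsm run directory:

# -xtype l matches symlinks whose target does not resolve
find /rhev/data-center -xtype l
find /var/run/vdsm/storage -xtype l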
The lvscan command showed that the LV concerned is inactive. Is this correct?

[root@noveria ~]# lvscan
  File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
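A hedged sketch for activating one of those inactive LVs by hand, with the VG/LV names taken from the lvscan output above:

# activate the 128M volume that hosted-engine.metadata points to
lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1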
And this:

[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
And:

[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
	uuid = 8b25f3be-7574-4f7a-8851-363129704e52
	vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
	state = OK
	version = 3
	role = Regular
	type = ISCSI
	class = Data
	pool = []
	name = hosted_storage
[root@noveria ~]# lvs
  File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
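For reference, a hedged reading of the Attr column in that lvs output:

# -wi-ao---- : (w)riteable, (i)nherited allocation, (a)ctive, (o)pen, i.e. in use
# -wi-a----- : active but not currently open
# -wi------- : not active, matching the 'inactive' rows in the lvscan above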
VDSM logs doesn't show me anything MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)> MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)> MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64) MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500 MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center' MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: [] MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center' storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher... 
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
What should I do to bring the VM engine back?
2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I will try that this afternoon, but just to clarify something first.
the hosted_engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain, am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster. You could have chosen a different one from ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
In setup there is a question like this: "Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI. Please enter local datacenter name". It asks about the 'Local storage datacenter', which is basically the description we were using for the storage pool.
If yes, where should I import the hosted-engine storage domain, into the default DC?
2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
> I could finally complete the installation, but there is still no engine VM > on the webui. > > I added a data domain, the default DC is up, but no engine VM. >
Good. Now you need to import the HostedEngine storage domain. Try going to
Storage -> Import Domain and put in the path to the domain which you used in the hosted-engine setup.
After the domain is imported, the engine will be imported automatically.
This whole process will become automatic eventually (the patch is currently being written).
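If it helps, the connection details used at hosted-engine setup time can usually be read back from the local config on the host; a minimal sketch, assuming the default path of a hosted-engine deployment (adjust if yours differs):

# Show the storage connection the setup recorded on the host
grep -iE 'storage|iqn|portal|connection' /etc/ovirt-hosted-engine/hosted-engine.conf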
> > 2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>: > >> Something mounted on /rhev/data-center/mnt I'm not sure. >> >> there were directories, and under these directories there were >> other directories (dom_md, ha_agent, images), and under them there were >> symbolic links to devices under /dev >> (ids, inbox, leases, etc...) the devices pointed to the lvm >> partitions created by the setup. >> >> but the mount command didn't show anything, unlike nfs, when I used >> it the mount and df commands showed up the engine's VM mount point. >> >> >> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>: >> >>> >>> >>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi < >>> wodel.youchi@gmail.com> wrote: >>> >>>> Hi, >>>> >>>> After removing the -x from the sql files, the installation >>>> terminated successfully, but ... >>>> >>>> I had a problem with vdsm, and error about permission denied with >>>> KVM module, so I restarted my machine. >>>> After the reboot the ovirt-ha-agent stops complaining about the >>>> vm.conf file not present in /var/rum/ovirt-hosted-engine-ha >>>> >>>> And the mount command doesn't show any iscsi mount, the disk is >>>> detected via fdisk -l >>>> the lvs command returns all logical volumes created. >>>> >>>> I think it's a mount problem, but since there are many lv, I >>>> don't how to mount them manually. >>>> >>> >>> Do you have something mounted under /rhev/data-center/mnt ? >>> If not you probably hit this bug: >>> https://bugzilla.redhat.com/1258465 >>> >>> >>> >>>> LV VG >>>> Attr LSize Pool Origin Data% >>>> Meta% Move >>>> Log Cpy%Sync Convert >>>> >>>> 3b894e23-429d-43bf-b6cd-6427a387799a >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m >>>> >>>> >>>> >>>> be78c0fd-52bf-445a-9555-64061029c2d9 >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g >>>> >>>> >>>> >>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g >>>> >>>> >>>> >>>> feede664-5754-4ca2-aeb3-af7aff32ed42 >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m >>>> >>>> >>>> >>>> ids >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>> -wi-ao---- 128,00m >>>> >>>> >>>> inbox >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>> -wi-a----- 128,00m >>>> >>>> >>>> leases >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>> -wi-a----- 2,00g >>>> >>>> >>>> master >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>> -wi-a----- 1,00g >>>> >>>> >>>> metadata >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- >>>> 512,00m >>>> >>>> >>>> outbox >>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>> -wi-a----- 128,00m >>>> >>>> >>>> >>>> >>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com >>>> >: >>>> >>>>> >>>>> >>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >>>>> wodel.youchi@gmail.com> wrote: >>>>> >>>>>> Hi again, >>>>>> >>>>>> I tried with the snapshot repository, but I am having this >>>>>> error while executing engine-setup >>>>>> >>>>>> [ INFO ] Creating/refreshing Engine database schema >>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': >>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>>>>> te >>>>>> [ INFO ] DNF Performing DNF transaction rollback >>>>>> [ INFO ] Rolling back database schema >>>>>> [ INFO ] Clearing Engine database engine >>>>>> [ ERROR ] Engine database rollback failed: must be owner of >>>>>> schema pg_catalog >>>>>> [ INFO ] Stage: Clean up >>>>>> Log file is located at >>>>>> 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>>>>> [ INFO ] Generating answer file >>>>>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>>>>> [ INFO ] Stage: Pre-termination >>>>>> [ INFO ] Stage: Termination >>>>>> [ ERROR ] Execution of setup failed >>>>>> >>>>>> >>>>>> >>>>>> and in the deployement log I have these errors >>>>>> >>>>>> Saving custom users permissions on database objects... >>>>>> upgrade script detected a change in Config, View or Stored >>>>>> Procedure... >>>>>> Running upgrade shell script >>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>>>>> Running upgrade shell script >>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>>>>> Running upgrade shell script >>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... >>>>>> Running upgrade shell script >>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>>>>> Running upgrade shell script >>>>>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... >>>>>> >>>>>> >>>>>> 2015-09-01 15:39:35 DEBUG >>>>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>>>>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>>>>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>>>>> '-c', 'apply'] >>>>>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 1: /bin : is a >>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 2: DATABASE : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 4: This : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 5: The : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 6: Add : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 7: syntax error near the symbole unexpected « ( >>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>> ligne 7: ` Update section (w/o overriding current >>>>>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 1: /bin : is a >>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 2: Currently : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 3: This : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 4: This : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 5: So, : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 6: Since : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 7: bin/ : is a >>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 9: update : command not >>>>>> 
found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 10: * >>>>>> >>>>>> >>>>>> *syntax error near the symbole unexpected « ( >>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>> ligne 10: `and exists(select 1 from schema_version where version = >>>>>> '03010250' and current = >>>>>> true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>> ligne 1: -- : command not >>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>> ligne 2: **syntax error near the symbole unexpected « ( »* >>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>> ligne 1: -- : commande introuvable >>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>> ligne 2: erreur de syntaxe près du symbole inattendu « ( » >>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>> ligne 2: `select fn_db_add_column('materialized_views', >>>>>> 'min_refresh_rate_in_sec', 'int default 0');' >>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>>>>> varchar(63);' >>>>>> 2015-09-01 15:39:35 DEBUG otopi.context >>>>>> context._executeMethod:156 method exception >>>>>> Traceback (most recent call last): >>>>>> File "/usr/lib/python2.7/site-packages/otopi/context.py", >>>>>> line 146, in _executeMethod >>>>>> method['method']() >>>>>> File >>>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>>>>> line 291, in _misc >>>>>> oenginecons.EngineDBEnv.PGPASS_FILE >>>>>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line >>>>>> 946, in execute >>>>>> command=args[0], >>>>>> RuntimeError: Command >>>>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>> 2015-09-01 15:39:35 ERROR otopi.context >>>>>> context._executeMethod:165 Failed to execute stage 'Misc configuration': >>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>> 2015-09-01 15:39:35 DEBUG otopi.transaction >>>>>> transaction.abort:134 aborting 'DNF Transaction' >>>>>> 2015-09-01 15:39:35 DEBUG >>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>>>>> Closing transaction with rollback >>>>>> 2015-09-01 15:39:35 INFO >>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 >>>>>> DNF Performing DNF transaction rollback >>>>>> >>>>> >>>>> >>>>> It was an issue with package building: all the sql files where >>>>> executable and so the issue. >>>>> We fixed it and tomorrow build should be OK. 
If you prefer to >>>>> continue right now simply recursively remove the x attribute on each sql >>>>> file under /usr/share/ovirt-engine/dbscripts >>>>> >>>>> >>>>> >>>>>> >>>>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi < >>>>>> stirabos@redhat.com>: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>>>>> didi@redhat.com> wrote: >>>>>>> >>>>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>>> > Hi, >>>>>>>> > >>>>>>>> > I am using the repo of the 3.6 version >>>>>>>> > ( >>>>>>>> http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/) >>>>>>>> > >>>>>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>>>>> dependencies,and the >>>>>>>> > ovirt-hosted-engine-ha package is one of them. >>>>>>>> > >>>>>>>> > Correction: The problem with this version >>>>>>>> > >>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>>>>> > is that after the installation is done, the service >>>>>>>> ovirt-ha-agent crashes >>>>>>>> > after being started, see the bug : >>>>>>>> > >>>>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>>>>> > >>>>>>>> > A new version was produced, I downloaded it manually a few >>>>>>>> days ago, this is >>>>>>>> > it : >>>>>>>> > >>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>>>>> > >>>>>>>> > This one did correct the problem, but it's not present >>>>>>>> anymore on the >>>>>>>> > repository. >>>>>>>> >>>>>>>> Was this on ovirt-3.6-pre? >>>>>>>> >>>>>>>> ovirt-3.6-snapshot has a newer version. >>>>>>>> >>>>>>>> > >>>>>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>>>>> success so far, no >>>>>>>> > VM engine present. >>>>>>>> > >>>>>>>> > Regards. >>>>>>>> > >>>>>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>>>>> stirabos@redhat.com>: >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>>>>> didi@redhat.com> >>>>>>>> >> wrote: >>>>>>>> >>> >>>>>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>>>>> wodel.youchi@gmail.com> >>>>>>>> >>> wrote: >>>>>>>> >>> > Hi, >>>>>>>> >>> > >>>>>>>> >>> > Another test of ovirt hosted-engine on FC22 using ovirt >>>>>>>> 3.6 Beta3. >>>>>>>> >>> > VM engine is also a FC22 >>>>>>>> >>> > >>>>>>>> >>> > Problem: >>>>>>>> >>> > - No VM engine on webui >>>>>>>> >>> >>>>>>>> >>> This is still not supported, see/follow [1]. >>>>>>>> >>> >>>>>>>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> ? :-) >>>>>>>> >>>>>>>> Sorry :-( >>>>>>>> >>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>>>>> >>>>>>> >>>>>>> This is just about editing the VM from the web GUI but in >>>>>>> order to be able to edit the engine VM you should be able at least to find >>>>>>> the engine VM in the engine as it was also in 3.5 >>>>>>> >>>>>>> I'll try to reproduce verifying another patch >>>>>>> >>>>>>> >>>>>>>> I see that all patches there are merged, but bug is in POST. >>>>>>>> >>>>>>>> >> >>>>>>>> >> Did you also try adding an additional storage domain for >>>>>>>> regular VMs? 
>>>>>>>> >> engine-VM will be shown on the engine only when you add at >>>>>>>> least one >>>>>>>> >> additional storage domain for regulars VM and the whole >>>>>>>> datacenter goes up: >>>>>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>>>>> >> >>>>>>>> >> >>>>>>>> >>> >>>>>>>> >>> > Test Environment >>>>>>>> >>> > Just two machines: >>>>>>>> >>> > >>>>>>>> >>> > 1 - Machine 1 used as storage : >>>>>>>> >>> > - iscsi target with a raw file for the VM engine >>>>>>>> storage >>>>>>>> >>> > - NFS4 for other data domains >>>>>>>> >>> > >>>>>>>> >>> > 2 - Machine 2 used as hypervisor >>>>>>>> >>> > >>>>>>>> >>> > >>>>>>>> >>> > The installation went without problem, but as always, >>>>>>>> the VM engine is >>>>>>>> >>> > not >>>>>>>> >>> > present on the webui. >>>>>>>> >>> > >>>>>>>> >>> > >>>>>>>> >>> > PS: >>>>>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I >>>>>>>> don't have too much >>>>>>>> >>> > RAM >>>>>>>> >>> > on hypervisor, could that be the cause of the problem? >>>>>>>> >>> >>>>>>>> >>> Shouldn't be related >>>>>>>> >>> >>>>>>>> >>> > 2- This version of >>>>>>>> >>> > >>>>>>>> >>> > >>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>>>>> >>> > package is causing the ovirt-ha-agent to crash, it was >>>>>>>> replaced with >>>>>>>> >>> > another >>>>>>>> >>> > which I still have >>>>>>>> >>> > >>>>>>>> >>> > >>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>>>>> >>> > but it's not present on the repository, I had to update >>>>>>>> the package >>>>>>>> >>> > manually >>>>>>>> >>> > at the end of ovirt-hosted-engine-setup installation. >>>>>>>> >>> >>>>>>>> >>> Not sure I follow. >>>>>>>> >>> >>>>>>>> >>> What exact repo was used? >>>>>>>> >>> >>>>>>>> >>> hosted-engine --deploy does not update/install packages >>>>>>>> for you (as >>>>>>>> >>> does engine-setup), >>>>>>>> >>> it's up to you to make sure what you want/need is >>>>>>>> installed prior to >>>>>>>> >>> running it. >>>>>>>> >>> >>>>>>>> >>> Best, >>>>>>>> >>> -- >>>>>>>> >>> Didi >>>>>>>> >>> _______________________________________________ >>>>>>>> >>> Users mailing list >>>>>>>> >>> Users@ovirt.org >>>>>>>> >>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>> >> >>>>>>>> >> >>>>>>>> > >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Didi >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> >
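To apply the workaround Simone describes above (clearing the executable bit from the SQL scripts), a minimal sketch, assuming GNU findutils:

# Recursively drop the x attribute from every .sql file under the dbscripts tree
find /usr/share/ovirt-engine/dbscripts -type f -name '*.sql' -exec chmod a-x {} +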

Just to be clear, my test environment is composed of two machines:
1 - one hypervisor (just one host)
2 - a machine used as storage: a raw file presented as an iscsi device for the VM engine storage, and multiple NFS4 shares for the other data domains (data, iso, export).
That's it. Here is the vdsm log.
PS: After rebooting the host, I could not restart the VM engine; the same problem with the ha agent, vm.conf file not found.
2015-09-03 8:14 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I had to restart the installation all over; I used the freshly pushed new packages.
I had two problems:
1 - The engine setup didn't terminate correctly when I chose to use ovirt-vmconsole, with this error:
[ INFO ] Restarting ovirt-vmconsole proxy service
[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
So I executed engine-cleanup, which terminated with this error:
[ INFO ] Clearing Engine database engine
[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of cleanup failed
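The "must be owner of schema pg_catalog" error comes from PostgreSQL itself; one hedged way to see which schemas the engine user actually owns (assuming a local PostgreSQL with the default 'engine' database):

# List schemas and their owners, including system schemas such as pg_catalog
su - postgres -c "psql -d engine -c '\dnS+'"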
And then I executed engine-setup again, without ovirt-vmconsole. This time the setup completed.
2 - I added an NFS4 storage domain to the default DC (Default), the DC went up, and then I tried to import the hosted-engine storage domain, but without success.
I click on import, choose iscsi, connect to the target, scan, log in, but no device is shown!!! (iscsi.jpeg)
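When the import dialog shows no LUN, a quick host-side sanity check can tell whether the target is reachable at all; a sketch assuming the target details used earlier in this thread (192.168.1.50, iqn.2015-08.openSuse.wodel:target00):

iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m session -P 3    # sessions and the SCSI disks attached to them
multipath -ll               # the hosted-engine LUN should be listed here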
Can you please attach the relevant VDSM logs from the host you were using to import that storage domain?
The only new thing I had is the disk of the VM engine being shown under the disks tab.
2015-09-02 19:50 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
I found this in the vdsm log:
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
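For what it's worth, one way to cross-check which VMs this vdsm instance is actually tracking (vdsClient ships with vdsm; -s uses SSL, as elsewhere in this thread):

# Any VM known to vdsm, with its id, name and status
vdsClient -s 0 list table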
I really don't understand anything
2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Thanks,
but before that, I am stuck again with the storage of the VM engine not being detected after reboot.
/rhev is populated, but ovirt-ha-agent crashes with:
MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
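Before restarting anything else, it may be worth checking whether vm.conf was written at all; a quick sketch (path taken from the error above, memSize being the key the agent says it cannot find):

ls -l /var/run/ovirt-hosted-engine-ha/
grep -E '^(vmId|memSize)' /var/run/ovirt-hosted-engine-ha/vm.conf   # assumes the usual key=value layout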
I restarted vdsm, the ha agent and the broker agent, without success.
When I executed:
[root@noveria ~]# hosted-engine --vm-status
You must run deploy first
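"You must run deploy first" usually means the client cannot find the local deployment artifacts; a hedged check that they survived the reboot (default paths assumed):

ls -l /etc/ovirt-hosted-engine/hosted-engine.conf
systemctl status ovirt-ha-agent ovirt-ha-broker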
I got this
[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
and this:
[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
Here I found some symbolic links dangling (target not present), like this one: hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
The lvscan command showed that the LVs concerned are inactive; is this correct?
[root@noveria ~]# lvscan
File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
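As a diagnostic only (vdsm and the HA agent normally activate these LVs themselves), one could activate an inactive LV by hand and see whether the dangling link above comes back:

lvchange -ay /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
ls -l /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/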
and this:
[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
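Since prepareImage returned successfully, the engine disk should now be readable at the returned path; a quick hedged check (qemu-img is part of the qemu packages):

qemu-img info /rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b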
and:
[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
    uuid = 8b25f3be-7574-4f7a-8851-363129704e52
    vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
    state = OK
    version = 3
    role = Regular
    type = ISCSI
    class = Data
    pool = []
    name = hosted_storage
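Side note: pool = [] there just means the domain is not attached to any storage pool yet, which is consistent with it not having been imported. The host-side view can be cross-checked with (verbs assumed from the vdsClient shipped with vdsm 4.17):

vdsClient -s 0 getConnectedStoragePoolsList
vdsClient -s 0 getStorageDomainsList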
[root@noveria ~]# lvs
File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
VDSM logs doesn't show me anything MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)> MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)> MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64) MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500 MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0 MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center' MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: [] MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center' storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher... 
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
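Side note: with logs this verbose it is usually quicker to filter for the interesting records first; the record format shown above (level between double colons) makes that easy. A sketch with plain grep:

# show only the warnings and errors from the vdsm log
grep -nE '::(ERROR|WARNING)::' /var/log/vdsm/vdsm.log | tail -n 50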
What should I do to bring the VM engine back?
2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
I will try to do this this afternoon, but just to clarify something first.
The hosted-engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain, am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the Default cluster. You could have chosen a different one in ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
In setup there is a question like this: "Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI. Please enter local datacenter name". It asks about the 'Local storage datacenter', which is basically the description we were using for the storage pool.
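If in doubt, the values chosen at deploy time are recorded on the host; a sketch (the file paths below are the usual hosted-engine locations, stated here as an assumption):

# the setup answers, including the cluster the host was added to
grep -i cluster /etc/ovirt-hosted-engine/answers.conf
# the resulting hosted-engine configuration
cat /etc/ovirt-hosted-engine/hosted-engine.conf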
If yes, where should I import the hosted-engine storage domain, into the default DC?
2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
> > > On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi < > wodel.youchi@gmail.com> wrote: > >> I could finally terminate the installation, but still no vm engine >> on webui >> >> I added a data domain, the default DC is up, but no engine VM. >> > > > > Good now you need to import the HostedEngine storage domain. Try to > go to > > *Storage -> Import Domain and put the path to the domain which you > used in the hosted-engine setup.* > > > *After the domain is imported, the engine will be imported > automatically. * > > *This whole process will become automatic eventually. (patch is > written currently)* > > >> >> 2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>: >> >>> Something mounted on /rhev/data-center/mnt I'm not sure. >>> >>> there were directories, and under these directories there were >>> other directories (dom_md, ha_agent, images), and under them there were >>> symbolic links to devices under /dev >>> (ids, inbox, leases, etc...) the devices pointed to the lvm >>> partitions created by the setup. >>> >>> but the mount command didn't show anything, unlike nfs, when I >>> used it the mount and df commands showed up the engine's VM mount point. >>> >>> >>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com> >>> : >>> >>>> >>>> >>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi < >>>> wodel.youchi@gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> After removing the -x from the sql files, the installation >>>>> terminated successfully, but ... >>>>> >>>>> I had a problem with vdsm, and error about permission denied >>>>> with KVM module, so I restarted my machine. >>>>> After the reboot the ovirt-ha-agent stops complaining about the >>>>> vm.conf file not present in /var/rum/ovirt-hosted-engine-ha >>>>> >>>>> And the mount command doesn't show any iscsi mount, the disk is >>>>> detected via fdisk -l >>>>> the lvs command returns all logical volumes created. >>>>> >>>>> I think it's a mount problem, but since there are many lv, I >>>>> don't how to mount them manually. >>>>> >>>> >>>> Do you have something mounted under /rhev/data-center/mnt ? 
>>>> If not you probably hit this bug: >>>> https://bugzilla.redhat.com/1258465 >>>> >>>> >>>> >>>>> LV VG >>>>> Attr LSize Pool Origin Data% >>>>> Meta% Move >>>>> Log Cpy%Sync Convert >>>>> >>>>> 3b894e23-429d-43bf-b6cd-6427a387799a >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m >>>>> >>>>> >>>>> >>>>> be78c0fd-52bf-445a-9555-64061029c2d9 >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g >>>>> >>>>> >>>>> >>>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g >>>>> >>>>> >>>>> >>>>> feede664-5754-4ca2-aeb3-af7aff32ed42 >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m >>>>> >>>>> >>>>> >>>>> ids >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>> -wi-ao---- 128,00m >>>>> >>>>> >>>>> inbox >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>> -wi-a----- 128,00m >>>>> >>>>> >>>>> leases >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>> -wi-a----- 2,00g >>>>> >>>>> >>>>> master >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>> -wi-a----- 1,00g >>>>> >>>>> >>>>> metadata >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- >>>>> 512,00m >>>>> >>>>> >>>>> outbox >>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>> -wi-a----- 128,00m >>>>> >>>>> >>>>> >>>>> >>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi < >>>>> stirabos@redhat.com>: >>>>> >>>>>> >>>>>> >>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >>>>>> wodel.youchi@gmail.com> wrote: >>>>>> >>>>>>> Hi again, >>>>>>> >>>>>>> I tried with the snapshot repository, but I am having this >>>>>>> error while executing engine-setup >>>>>>> >>>>>>> [ INFO ] Creating/refreshing Engine database schema >>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': >>>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>>>>>> te >>>>>>> [ INFO ] DNF Performing DNF transaction rollback >>>>>>> [ INFO ] Rolling back database schema >>>>>>> [ INFO ] Clearing Engine database engine >>>>>>> [ ERROR ] Engine database rollback failed: must be owner of >>>>>>> schema pg_catalog >>>>>>> [ INFO ] Stage: Clean up >>>>>>> Log file is located at >>>>>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>>>>>> [ INFO ] Generating answer file >>>>>>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>>>>>> [ INFO ] Stage: Pre-termination >>>>>>> [ INFO ] Stage: Termination >>>>>>> [ ERROR ] Execution of setup failed >>>>>>> >>>>>>> >>>>>>> >>>>>>> and in the deployement log I have these errors >>>>>>> >>>>>>> Saving custom users permissions on database objects... >>>>>>> upgrade script detected a change in Config, View or Stored >>>>>>> Procedure... >>>>>>> Running upgrade shell script >>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>>>>>> Running upgrade shell script >>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>>>>>> Running upgrade shell script >>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... >>>>>>> Running upgrade shell script >>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>>>>>> Running upgrade shell script >>>>>>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... 
>>>>>>> >>>>>>> >>>>>>> 2015-09-01 15:39:35 DEBUG >>>>>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>>>>>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>>>>>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>>>>>> '-c', 'apply'] >>>>>>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 1: /bin : is a >>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 2: DATABASE : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 4: This : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 5: The : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 6: Add : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 7: syntax error near the symbole unexpected « ( >>>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>> ligne 7: ` Update section (w/o overriding current >>>>>>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 1: /bin : is a >>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 2: Currently : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 3: This : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 4: This : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 5: So, : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 6: Since : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 7: bin/ : is a >>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 9: update : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 10: * >>>>>>> >>>>>>> >>>>>>> *syntax error near the symbole unexpected « ( >>>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>> ligne 10: `and exists(select 1 from schema_version where version = >>>>>>> '03010250' and current = >>>>>>> true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>> ligne 1: -- : command not >>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>> ligne 2: **syntax error near the symbole unexpected « ( »* >>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>> ligne 1: -- : commande introuvable >>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>> ligne 2: erreur de syntaxe près du symbole inattendu « ( » >>>>>>> 
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>> ligne 2: `select fn_db_add_column('materialized_views', >>>>>>> 'min_refresh_rate_in_sec', 'int default 0');' >>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>>>>>> varchar(63);' >>>>>>> 2015-09-01 15:39:35 DEBUG otopi.context >>>>>>> context._executeMethod:156 method exception >>>>>>> Traceback (most recent call last): >>>>>>> File "/usr/lib/python2.7/site-packages/otopi/context.py", >>>>>>> line 146, in _executeMethod >>>>>>> method['method']() >>>>>>> File >>>>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>>>>>> line 291, in _misc >>>>>>> oenginecons.EngineDBEnv.PGPASS_FILE >>>>>>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", >>>>>>> line 946, in execute >>>>>>> command=args[0], >>>>>>> RuntimeError: Command >>>>>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>>> 2015-09-01 15:39:35 ERROR otopi.context >>>>>>> context._executeMethod:165 Failed to execute stage 'Misc configuration': >>>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>>> 2015-09-01 15:39:35 DEBUG otopi.transaction >>>>>>> transaction.abort:134 aborting 'DNF Transaction' >>>>>>> 2015-09-01 15:39:35 DEBUG >>>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>>>>>> Closing transaction with rollback >>>>>>> 2015-09-01 15:39:35 INFO >>>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 >>>>>>> DNF Performing DNF transaction rollback >>>>>>> >>>>>> >>>>>> >>>>>> It was an issue with package building: all the sql files where >>>>>> executable and so the issue. >>>>>> We fixed it and tomorrow build should be OK. If you prefer to >>>>>> continue right now simply recursively remove the x attribute on each sql >>>>>> file under /usr/share/ovirt-engine/dbscripts >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi < >>>>>>> stirabos@redhat.com>: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>>>>>> didi@redhat.com> wrote: >>>>>>>> >>>>>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>>>> > Hi, >>>>>>>>> > >>>>>>>>> > I am using the repo of the 3.6 version >>>>>>>>> > ( >>>>>>>>> http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/ >>>>>>>>> ) >>>>>>>>> > >>>>>>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>>>>>> dependencies,and the >>>>>>>>> > ovirt-hosted-engine-ha package is one of them. 
>>>>>>>>> > >>>>>>>>> > Correction: The problem with this version >>>>>>>>> > >>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>>>>>> > is that after the installation is done, the service >>>>>>>>> ovirt-ha-agent crashes >>>>>>>>> > after being started, see the bug : >>>>>>>>> > >>>>>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>>>>>> > >>>>>>>>> > A new version was produced, I downloaded it manually a few >>>>>>>>> days ago, this is >>>>>>>>> > it : >>>>>>>>> > >>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>>>>>> > >>>>>>>>> > This one did correct the problem, but it's not present >>>>>>>>> anymore on the >>>>>>>>> > repository. >>>>>>>>> >>>>>>>>> Was this on ovirt-3.6-pre? >>>>>>>>> >>>>>>>>> ovirt-3.6-snapshot has a newer version. >>>>>>>>> >>>>>>>>> > >>>>>>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>>>>>> success so far, no >>>>>>>>> > VM engine present. >>>>>>>>> > >>>>>>>>> > Regards. >>>>>>>>> > >>>>>>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>>>>>> stirabos@redhat.com>: >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>>>>>> didi@redhat.com> >>>>>>>>> >> wrote: >>>>>>>>> >>> >>>>>>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>>>>>> wodel.youchi@gmail.com> >>>>>>>>> >>> wrote: >>>>>>>>> >>> > Hi, >>>>>>>>> >>> > >>>>>>>>> >>> > Another test of ovirt hosted-engine on FC22 using >>>>>>>>> ovirt 3.6 Beta3. >>>>>>>>> >>> > VM engine is also a FC22 >>>>>>>>> >>> > >>>>>>>>> >>> > Problem: >>>>>>>>> >>> > - No VM engine on webui >>>>>>>>> >>> >>>>>>>>> >>> This is still not supported, see/follow [1]. >>>>>>>>> >>> >>>>>>>>> >>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> >> ? :-) >>>>>>>>> >>>>>>>>> Sorry :-( >>>>>>>>> >>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>>>>>> >>>>>>>> >>>>>>>> This is just about editing the VM from the web GUI but in >>>>>>>> order to be able to edit the engine VM you should be able at least to find >>>>>>>> the engine VM in the engine as it was also in 3.5 >>>>>>>> >>>>>>>> I'll try to reproduce verifying another patch >>>>>>>> >>>>>>>> >>>>>>>>> I see that all patches there are merged, but bug is in POST. >>>>>>>>> >>>>>>>>> >> >>>>>>>>> >> Did you also try adding an additional storage domain for >>>>>>>>> regular VMs? >>>>>>>>> >> engine-VM will be shown on the engine only when you add >>>>>>>>> at least one >>>>>>>>> >> additional storage domain for regulars VM and the whole >>>>>>>>> datacenter goes up: >>>>>>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> >>> >>>>>>>>> >>> > Test Environment >>>>>>>>> >>> > Just two machines: >>>>>>>>> >>> > >>>>>>>>> >>> > 1 - Machine 1 used as storage : >>>>>>>>> >>> > - iscsi target with a raw file for the VM engine >>>>>>>>> storage >>>>>>>>> >>> > - NFS4 for other data domains >>>>>>>>> >>> > >>>>>>>>> >>> > 2 - Machine 2 used as hypervisor >>>>>>>>> >>> > >>>>>>>>> >>> > >>>>>>>>> >>> > The installation went without problem, but as always, >>>>>>>>> the VM engine is >>>>>>>>> >>> > not >>>>>>>>> >>> > present on the webui. 
>>>>>>>>> >>> > >>>>>>>>> >>> > >>>>>>>>> >>> > PS: >>>>>>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I >>>>>>>>> don't have too much >>>>>>>>> >>> > RAM >>>>>>>>> >>> > on hypervisor, could that be the cause of the problem? >>>>>>>>> >>> >>>>>>>>> >>> Shouldn't be related >>>>>>>>> >>> >>>>>>>>> >>> > 2- This version of >>>>>>>>> >>> > >>>>>>>>> >>> > >>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>>>>>> >>> > package is causing the ovirt-ha-agent to crash, it was >>>>>>>>> replaced with >>>>>>>>> >>> > another >>>>>>>>> >>> > which I still have >>>>>>>>> >>> > >>>>>>>>> >>> > >>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>>>>>> >>> > but it's not present on the repository, I had to >>>>>>>>> update the package >>>>>>>>> >>> > manually >>>>>>>>> >>> > at the end of ovirt-hosted-engine-setup installation. >>>>>>>>> >>> >>>>>>>>> >>> Not sure I follow. >>>>>>>>> >>> >>>>>>>>> >>> What exact repo was used? >>>>>>>>> >>> >>>>>>>>> >>> hosted-engine --deploy does not update/install packages >>>>>>>>> for you (as >>>>>>>>> >>> does engine-setup), >>>>>>>>> >>> it's up to you to make sure what you want/need is >>>>>>>>> installed prior to >>>>>>>>> >>> running it. >>>>>>>>> >>> >>>>>>>>> >>> Best, >>>>>>>>> >>> -- >>>>>>>>> >>> Didi >>>>>>>>> >>> _______________________________________________ >>>>>>>>> >>> Users mailing list >>>>>>>>> >>> Users@ovirt.org >>>>>>>>> >>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> > >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Didi >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> >
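For reference, the workaround mentioned above for the executable .sql files could be applied like this; a sketch:

# drop the executable bit from every sql file under the dbscripts tree,
# so schema.sh sources them instead of the shell trying to run them
find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x {} +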

On Thu, Sep 3, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Just to be clear, my test environment is composed of two machines:
1 - one hypervisor (just one host)
2 - a machine used as storage: a raw file presented as an iSCSI device for the engine VM storage, and multiple NFS4 shares for the other data domains (data, iso, export).
That's it.
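As an aside, a file-backed iSCSI target like the one described can be built in a few lines; a sketch with targetcli, where the backing-file path and size are made-up placeholders (portal and ACL details omitted):

# file-backed backstore exposed as a LUN on the target IQN seen in the vdsm log
targetcli /backstores/fileio create engine_disk /srv/iscsi/engine.raw 50G
targetcli /iscsi create iqn.2015-08.openSuse.wodel:target00
targetcli /iscsi/iqn.2015-08.openSuse.wodel:target00/tpg1/luns create /backstores/fileio/engine_disk
# CHAP credentials matching the 'iscsiuser' login from the vdsm log
targetcli /iscsi/iqn.2015-08.openSuse.wodel:target00/tpg1 set auth userid=iscsiuser password=secret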
Here is the vdsm log:
The first real error is this one:

Thread-49::ERROR::2015-09-03 01:19:29,710::monitor::250::Storage.Monitor::(_monitorDomain) Error monitoring domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 246, in _monitorDomain
    self._performDomainSelftest()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 759, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 313, in _performDomainSelftest
    self.domain.selftest()
  File "/usr/share/vdsm/storage/blockSD.py", line 857, in selftest
    lvm.chkVG(self.sdUUID)
  File "/usr/share/vdsm/storage/lvm.py", line 1006, in chkVG
    raise se.StorageDomainAccessError("%s: %s" % (vgName, err))
StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('7bd9cad0-151a-4aa4-a7de-15dd64748f17: [\' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687025664: Input/output error\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687083008: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/33000000100000001 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536805376: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536862720: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147418112: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147475456: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3.
Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073676288: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073733632: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949607424: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949664768: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf was disabled\', \' Volume group "7bd9cad0-151a-4aa4-a7de-15dd64748f17" not found\', \' Cannot process volume group 7bd9cad0-151a-4aa4-a7de-15dd64748f17\']',)
Thread-49::INFO::2015-09-03 01:19:29,741::monitor::273::Storage.Monitor::(_notifyStatusChanges) Domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17 became INVALID
Thread-4067::DEBUG::2015-09-03 01:19:29,741::misc::777::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event

For some reason your iSCSI connection seems to fail and the hosted-engine storage domain becomes invalid. All the other issues are subsequent. Can you please check its configuration and the network status?
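A minimal sketch of how to check that from the host, assuming the portal is 192.168.1.50:3260 as in your logs (a diagnostic, not a fix):

# is the iSCSI session still logged in and the LUN attached?
iscsiadm -m session -P 3
# does multipath still see the device paths as active?
multipath -ll
# basic reachability of the portal
ping -c 3 192.168.1.50

If the session drops, the LVM read errors and the domain monitor failure above are just the consequences.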
PS: After rebooting the host, I could not restart the engine VM; the ha-agent hit the same problem, the vm.conf file was not found.
vm.conf is now stored on the shared storage domain, but you should still be able to access it.
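If it cannot be reloaded after a reboot, a minimal sketch for observing what the agent does (assuming the default service names and log location) is:

systemctl restart ovirt-ha-broker ovirt-ha-agent
tail -f /var/log/ovirt-hosted-engine-ha/agent.log

This only shows the reload attempt; it does not fix a broken storage connection underneath.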
2015-09-03 8:14 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I had to restart the installation all over again, using the freshly pushed packages.
I had two problems:

1 - The engine's setup didn't terminate correctly when I chose to use ovirt-vmconsole, failing with this error:

[ INFO ] Restarting ovirt-vmconsole proxy service
[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'
[ INFO ] Stage: Clean up
         Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
So I executed engine-cleanup, which terminated with this error:
[ INFO ] Clearing Engine database engine
[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
         Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of cleanup failed
And then I executed engine-setup again, without ovirt-vmconsole. This time the setup completed.
2 - I added an NFS4 storage domain to the default DC (Default), the DC went up, and then I tried to import the hosted-engine storage domain, but without success.
I clicked on import, chose iSCSI, connected to the target, scanned and logged in, but no device was shown!!! (iscsi.jpeg)
Can you please attach the relevant VDSM logs from the host you were using to import that storage domain?
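They should be under /var/log/vdsm/ on that host; assuming the default layout, something like

grep -i getDeviceList /var/log/vdsm/vdsm.log | tail -n 50

should show what VDSM answered when the import dialog scanned for LUNs (getDeviceList being the verb behind that scan; a hedged place to look, not the only one).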
The only new thing I have is the disk of the engine VM being shown under the Disks tab.
2015-09-02 19:50 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
I found this in the vdsm log:
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
I really don't understand anything
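Side note: 'Virtual machine does not exist' only means VDSM has no running VM with that UUID. Assuming vdsClient works as in the commands further down, a quick cross-check is:

vdsClient -s 0 list table

An empty list would confirm the engine VM was never started on this host after the reboot.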
2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Thanks,
but before that, I'm stuck again with the engine VM's storage not being detected after reboot.

/rhev is populated, but ovirt-ha-agent crashes with:
MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
I restarted vdsm, ovirt-ha-agent and ovirt-ha-broker without success.
When I executed:

[root@noveria ~]# hosted-engine --vm-status
You must run deploy first
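Side note: hosted-engine answers 'You must run deploy first' when it cannot find its local configuration, so, assuming the default path, it may be worth checking that this file survived the reboot:

ls -l /etc/ovirt-hosted-engine/hosted-engine.conf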
I got this:
[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
and this:

[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
Here I did find some symbolic links blinking (i.e. broken, target not present), like this one:

hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
The lvscan command showed that the LV concerned is inactive; is this correct?

[root@noveria ~]# lvscan
  File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
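Side note: a broken link plus an inactive LV are consistent with each other, since the link target only exists once the LV is activated. A minimal diagnostic, assuming the VG is reachable, is activating one volume by hand:

lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
ls -lL /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/

Normally the HA agent activates these volumes through VDSM by itself, so this only tells you whether the storage is usable; it is not a fix.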
and this:

[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
and:

[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
        uuid = 8b25f3be-7574-4f7a-8851-363129704e52
        vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage
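Side note: pool = [] above means the domain is not attached to any storage pool, which is consistent with it not having been imported into a datacenter yet. Assuming vdsClient as above, the list of domains VDSM sees can be cross-checked with:

vdsClient -s 0 getStorageDomainsList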
[root@noveria ~]# lvs
  File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g
cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g
f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
ids 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
inbox 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
leases 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 2,00g
master 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 1,00g
metadata 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
outbox 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
The VDSM logs don't show me anything:

MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)>
MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)>
MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64)
MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500
MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None
MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists
MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None)
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
What should I do to bring the VM engine back?
2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
> I will try this afternoon to do this, but just to clarify something.
>
> the hosted_engine setup creates its own DC, the hosted_DC, which
> contains the hosted-engine storage domain, am I correct?
No, ovirt-hosted-engine-setup doesn't create a special datacenter. The default is to add the host to the Default datacenter in the default cluster. You could choose a different one in ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
In setup there is a question like this:

  Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
  Please enter local datacenter name

which asks about the 'Local storage datacenter', which is basically the description we were using for the storage pool.
> if yes, where will I import the hostedengine storage domain, into > the default DC? > > 2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>: > >> >> >> On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi < >> wodel.youchi@gmail.com> wrote: >> >>> I could finally terminate the installation, but still no vm engine >>> on webui >>> >>> I added a data domain, the default DC is up, but no engine VM. >>> >> >> >> >> Good now you need to import the HostedEngine storage domain. Try to >> go to >> >> *Storage -> Import Domain and put the path to the domain which you >> used in the hosted-engine setup.* >> >> >> *After the domain is imported, the engine will be imported >> automatically. * >> >> *This whole process will become automatic eventually. (patch is >> written currently)* >> >> >>> >>> 2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>: >>> >>>> Something mounted on /rhev/data-center/mnt I'm not sure. >>>> >>>> there were directories, and under these directories there were >>>> other directories (dom_md, ha_agent, images), and under them there were >>>> symbolic links to devices under /dev >>>> (ids, inbox, leases, etc...) the devices pointed to the lvm >>>> partitions created by the setup. >>>> >>>> but the mount command didn't show anything, unlike nfs, when I >>>> used it the mount and df commands showed up the engine's VM mount point. >>>> >>>> >>>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com >>>> >: >>>> >>>>> >>>>> >>>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi < >>>>> wodel.youchi@gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> After removing the -x from the sql files, the installation >>>>>> terminated successfully, but ... >>>>>> >>>>>> I had a problem with vdsm, and error about permission denied >>>>>> with KVM module, so I restarted my machine. >>>>>> After the reboot the ovirt-ha-agent stops complaining about the >>>>>> vm.conf file not present in /var/rum/ovirt-hosted-engine-ha >>>>>> >>>>>> And the mount command doesn't show any iscsi mount, the disk is >>>>>> detected via fdisk -l >>>>>> the lvs command returns all logical volumes created. >>>>>> >>>>>> I think it's a mount problem, but since there are many lv, I >>>>>> don't how to mount them manually. >>>>>> >>>>> >>>>> Do you have something mounted under /rhev/data-center/mnt ? 
>>>>> If not you probably hit this bug: >>>>> https://bugzilla.redhat.com/1258465 >>>>> >>>>> >>>>> >>>>>> LV VG >>>>>> Attr LSize Pool Origin Data% >>>>>> Meta% Move >>>>>> Log Cpy%Sync Convert >>>>>> >>>>>> 3b894e23-429d-43bf-b6cd-6427a387799a >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m >>>>>> >>>>>> >>>>>> >>>>>> be78c0fd-52bf-445a-9555-64061029c2d9 >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g >>>>>> >>>>>> >>>>>> >>>>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g >>>>>> >>>>>> >>>>>> >>>>>> feede664-5754-4ca2-aeb3-af7aff32ed42 >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m >>>>>> >>>>>> >>>>>> >>>>>> ids >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>> -wi-ao---- 128,00m >>>>>> >>>>>> >>>>>> inbox >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>> -wi-a----- 128,00m >>>>>> >>>>>> >>>>>> leases >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>> -wi-a----- 2,00g >>>>>> >>>>>> >>>>>> master >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>> -wi-a----- 1,00g >>>>>> >>>>>> >>>>>> metadata >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- >>>>>> 512,00m >>>>>> >>>>>> >>>>>> outbox >>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>> -wi-a----- 128,00m >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi < >>>>>> stirabos@redhat.com>: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>> >>>>>>>> Hi again, >>>>>>>> >>>>>>>> I tried with the snapshot repository, but I am having this >>>>>>>> error while executing engine-setup >>>>>>>> >>>>>>>> [ INFO ] Creating/refreshing Engine database schema >>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': >>>>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>>>>>>> te >>>>>>>> [ INFO ] DNF Performing DNF transaction rollback >>>>>>>> [ INFO ] Rolling back database schema >>>>>>>> [ INFO ] Clearing Engine database engine >>>>>>>> [ ERROR ] Engine database rollback failed: must be owner of >>>>>>>> schema pg_catalog >>>>>>>> [ INFO ] Stage: Clean up >>>>>>>> Log file is located at >>>>>>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>>>>>>> [ INFO ] Generating answer file >>>>>>>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>> [ INFO ] Stage: Termination >>>>>>>> [ ERROR ] Execution of setup failed >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> and in the deployement log I have these errors >>>>>>>> >>>>>>>> Saving custom users permissions on database objects... >>>>>>>> upgrade script detected a change in Config, View or Stored >>>>>>>> Procedure... >>>>>>>> Running upgrade shell script >>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>>>>>>> Running upgrade shell script >>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>>>>>>> Running upgrade shell script >>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... >>>>>>>> Running upgrade shell script >>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>>>>>>> Running upgrade shell script >>>>>>>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... 
>>>>>>>> >>>>>>>> >>>>>>>> 2015-09-01 15:39:35 DEBUG >>>>>>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>>>>>>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>>>>>>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>>>>>>> '-c', 'apply'] >>>>>>>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 1: /bin : is a >>>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 2: DATABASE : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 4: This : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 5: The : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 6: Add : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 7: syntax error near the symbole unexpected « ( >>>>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>> ligne 7: ` Update section (w/o overriding current >>>>>>>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 1: /bin : is a >>>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 2: Currently : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 3: This : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 4: This : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 5: So, : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 6: Since : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 7: bin/ : is a >>>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 9: update : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 10: * >>>>>>>> >>>>>>>> >>>>>>>> *syntax error near the symbole unexpected « ( >>>>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>> ligne 10: `and exists(select 1 from schema_version where version = >>>>>>>> '03010250' and current = >>>>>>>> true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>>> ligne 1: -- : command not >>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>>> ligne 2: **syntax error near the symbole unexpected « ( »* >>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>>> ligne 1: -- : commande introuvable >>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>>> 
ligne 2: erreur de syntaxe près du symbole inattendu « ( » >>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>>> ligne 2: `select fn_db_add_column('materialized_views', >>>>>>>> 'min_refresh_rate_in_sec', 'int default 0');' >>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>>>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>>>>>>> varchar(63);' >>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.context >>>>>>>> context._executeMethod:156 method exception >>>>>>>> Traceback (most recent call last): >>>>>>>> File "/usr/lib/python2.7/site-packages/otopi/context.py", >>>>>>>> line 146, in _executeMethod >>>>>>>> method['method']() >>>>>>>> File >>>>>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>>>>>>> line 291, in _misc >>>>>>>> oenginecons.EngineDBEnv.PGPASS_FILE >>>>>>>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", >>>>>>>> line 946, in execute >>>>>>>> command=args[0], >>>>>>>> RuntimeError: Command >>>>>>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>>>> 2015-09-01 15:39:35 ERROR otopi.context >>>>>>>> context._executeMethod:165 Failed to execute stage 'Misc configuration': >>>>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.transaction >>>>>>>> transaction.abort:134 aborting 'DNF Transaction' >>>>>>>> 2015-09-01 15:39:35 DEBUG >>>>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>>>>>>> Closing transaction with rollback >>>>>>>> 2015-09-01 15:39:35 INFO >>>>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 >>>>>>>> DNF Performing DNF transaction rollback >>>>>>>> >>>>>>> >>>>>>> >>>>>>> It was an issue with package building: all the sql files where >>>>>>> executable and so the issue. >>>>>>> We fixed it and tomorrow build should be OK. If you prefer to >>>>>>> continue right now simply recursively remove the x attribute on each sql >>>>>>> file under /usr/share/ovirt-engine/dbscripts >>>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi < >>>>>>>> stirabos@redhat.com>: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>>>>>>> didi@redhat.com> wrote: >>>>>>>>> >>>>>>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>>>>> > Hi, >>>>>>>>>> > >>>>>>>>>> > I am using the repo of the 3.6 version >>>>>>>>>> > ( >>>>>>>>>> http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/ >>>>>>>>>> ) >>>>>>>>>> > >>>>>>>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>>>>>>> dependencies,and the >>>>>>>>>> > ovirt-hosted-engine-ha package is one of them. 
>>> > Correction: the problem with this version,
>>> > ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm,
>>> > is that after the installation is done, the ovirt-ha-agent service crashes after being started; see the bug:
>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745
>>> >
>>> > A new version was produced, which I downloaded manually a few days ago; this is it:
>>> > ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm
>>> >
>>> > This one did correct the problem, but it's not present anymore on the repository.
>>>
>>> Was this on ovirt-3.6-pre?
>>>
>>> ovirt-3.6-snapshot has a newer version.
>>>
>>> > For Simone: yes, I did add an NFS4 data domain, but no success so far, no engine VM present.
>>> >
>>> > Regards.
>>>
>>> Sorry :-(
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094
>>
>> This is just about editing the VM from the web GUI, but in order to be able to edit the engine VM you should at least be able to find the engine VM in the engine, as was the case in 3.5.
>>
>> I'll try to reproduce while verifying another patch.
>>>
>>> I see that all patches there are merged, but the bug is in POST.
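For reference, the workaround Simone describes above (the packaging bug left the shipped SQL files executable, so schema.sh ran them through bash) could look like this; a minimal sketch assuming the default dbscripts location:

  # clear the stray executable bit on every SQL file shipped with the engine
  find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x {} +

After that, re-running engine-setup should pick the files up as plain SQL again.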

Hi,

I redid the target configuration on FC22. I used the same raw file, and I got the same result: no vm.conf file.

I don't think it's a connection problem: iscsiadm shows the session and lvs shows the LVs, so the connection is up.

Yesterday, before the reboot, I had saved vm.conf somewhere, so I copied it back to its location and got the engine VM up. But the problem is still there.

Another thing: this time I tried to add hosted_storage. The iSCSI login showed me the LUN, but when I tried to add it, the engine VM crashed. After restarting the engine VM, I got hosted_storage displayed (click on System -> choose the Storage tab), but not attached. If I try to attach it to the default DC, I get this error (see image):

Storage Domain(s) are already attached to a Data Center. Approving this operation might cause data corruption if both Data Centers are active.
- hosted_storage

PS: attached log files:
vdsm
ha agent
engine

2015-09-03 10:39 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Thu, Sep 3, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Just to be clear, my test environment is composed of two machines:
1 - one hypervisor, just one host

2 - a machine used as storage: a raw file presented as an iSCSI device for the engine VM storage, and multiple NFS4 shares for the other data domains (data, iso, export).
That's it.
Here is the vdsm log
The first real error is this one:
Thread-49::ERROR::2015-09-03 01:19:29,710::monitor::250::Storage.Monitor::(_monitorDomain) Error monitoring domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17 Traceback (most recent call last): File "/usr/share/vdsm/storage/monitor.py", line 246, in _monitorDomain self._performDomainSelftest() File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 759, in wrapper value = meth(self, *a, **kw) File "/usr/share/vdsm/storage/monitor.py", line 313, in _performDomainSelftest self.domain.selftest() File "/usr/share/vdsm/storage/blockSD.py", line 857, in selftest lvm.chkVG(self.sdUUID) File "/usr/share/vdsm/storage/lvm.py", line 1006, in chkVG raise se.StorageDomainAccessError("%s: %s" % (vgName, err)) StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('7bd9cad0-151a-4aa4-a7de-15dd64748f17: [\' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687025664: Input/output error\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687083008: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/33000000100000001 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536805376: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536862720: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147418112: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147475456: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073676288: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073733632: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949607424: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949664768: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf was disabled\', \' Volume group "7bd9cad0-151a-4aa4-a7de-15dd64748f17" not found\', \' Cannot process volume group 7bd9cad0-151a-4aa4-a7de-15dd64748f17\']',) Thread-49::INFO::2015-09-03 01:19:29,741::monitor::273::Storage.Monitor::(_notifyStatusChanges) Domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17 became INVALID Thread-4067::DEBUG::2015-09-03 01:19:29,741::misc::777::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
For some reason your iSCSI connection seems to fail and the hosted-engine storage domain becomes invalid; all the other issues are subsequent. Can you please check its configuration and the network status?
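For reference, the iSCSI-side checks could look like this on the host; a sketch, with the portal address 192.168.1.50 taken from the connectStorageServer entries later in the logs:

  # is the session still logged in, and which SCSI disks are attached to it?
  iscsiadm -m session -P 3
  # are all paths to the hosted-engine LUN still alive?
  multipath -ll
  # basic reachability of the target portal
  ping -c 3 192.168.1.50

If the session drops or multipath shows failed paths, the read failures above are explained.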
PS: after rebooting the host, I could not restart the engine VM; the same problem with the HA agent: vm.conf file not found.
vm.conf is now on the storage domain, but you should at least be able to access it.
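A quick way to verify this could be, as a sketch (the /var/run path is the one the ha-agent complains about; service names as used on this host):

  # the local working copy the agent extracts from the shared domain
  ls -l /var/run/ovirt-hosted-engine-ha/vm.conf
  # restart the HA services so the agent reloads vm.conf from the shared storage domain
  systemctl restart ovirt-ha-broker ovirt-ha-agent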
2015-09-03 8:14 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I had to restart the installation all over, using the freshly pushed new packages.
I had two problems:

1 - the engine's setup didn't terminate correctly when I chose to use ovirt-vmconsole, with this error:

[ INFO ] Restarting ovirt-vmconsole proxy service
[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
So I executed engine-cleanup, which terminated with this error:
[ INFO ] Clearing Engine database engine
[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of cleanup failed
And then I executed engine-setup again without ovirt-vmconsole. This time the setup completed.
2 - I added an NFS4 storage domain to the default DC (Default); the DC went up, and then I tried to import the hosted-engine storage domain, but without success.
Click on import, choose iSCSI, connect to the target, scan, log in... but no device shown!!! (iscsi.jpeg)
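To rule out the UI, one can check from the shell whether the host itself sees the LUN; a sketch reusing the same target and portal as during deploy:

  # discover the target and inspect the session
  iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
  iscsiadm -m session -P 3
  # the LUN should appear as a block device / multipath map
  lsblk
  multipath -ll

If the device shows up here but not in the import dialog, the problem is on the engine side and the VDSM logs are the right place to look.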
Can you please attach the relevant VDSM logs from the host you were using to import that storage domain?
The only new thing I had is that the disk of the engine VM is now shown under the Disks tab.
2015-09-02 19:50 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
I found this in the vdsm log:
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}
I really don't understand anything
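That vmGetStats reply just means VDSM has no VM with that UUID defined on the host, i.e. the engine VM was never (re)started there. A quick check, as a sketch with the same vdsClient used elsewhere in this thread:

  # list the VMs VDSM currently knows about
  vdsClient -s 0 list table

An empty list together with the ha-agent vm.conf errors points back at the agent not being able to start the VM, not at VDSM itself.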
2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Thanks,
but before that, I am stuck again with the storage of the engine VM not being detected after reboot.

/rhev is populated, but ovirt-ha-agent crashes with:
MainThread::INFO::2015-09-02 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0'}
MainThread::INFO::2015-09-02 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 139994237094736
MainThread::INFO::2015-09-02 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 139994236985168
MainThread::INFO::2015-09-02 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started
MainThread::INFO::2015-09-02 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342)
MainThread::INFO::2015-09-02 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Reloading vm.conf from the shared storage domain
MainThread::ERROR::2015-09-02 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Error: ''Configuration value not found: file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to restart agent
MainThread::WARNING::2015-09-02 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Restarting agent, attempt '9'
MainThread::ERROR::2015-09-02 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Too many errors occurred, giving up. Please review the log and consider filing a bug.
MainThread::INFO::2015-09-02 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
I restarted vdsm, ha-agent and ha-broker without success.
When I executed:

[root@noveria ~]# hosted-engine --vm-status
You must run deploy first

I got this:
[root@noveria ~]# tree /var/run/vdsm/storage/
/var/run/vdsm/storage/
└── 8b25f3be-7574-4f7a-8851-363129704e52
    ├── 8e49032f-680b-40c2-b422-80d86dc7beda
    │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
    ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
    │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
    ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
    │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
    └── bf3bdae1-7318-4443-a19b-7371de30b982
        └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
and this:

[root@noveria rhev]# tree
.
└── data-center
    ├── 00000001-0001-0001-0001-000000000221
    └── mnt
        ├── blockSD
        │   └── 8b25f3be-7574-4f7a-8851-363129704e52
        │       ├── dom_md
        │       │   ├── ids -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids
        │       │   ├── inbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox
        │       │   ├── leases -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases
        │       │   ├── master -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/master
        │       │   ├── metadata -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata
        │       │   └── outbox -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox
        │       ├── ha_agent
        │       │   ├── hosted-engine.lockspace -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │       │   └── hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │       └── images
        │           ├── 8e49032f-680b-40c2-b422-80d86dc7beda
        │           │   └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
        │           ├── a44d1302-3165-4632-9d99-3e035dfc3ac7
        │           │   └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342
        │           ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7
        │           │   └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
        │           └── bf3bdae1-7318-4443-a19b-7371de30b982
        │               └── cbb10cf0-9600-465e-aed9-412f7157706b -> /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b
        ├── openSuse.wodel.wd:_nvms
        └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf
Here I did find some symbolic links dangling (target not present), like this one:

hosted-engine.metadata -> /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1
The lvscan command showed that the LV concerned is inactive; is this correct?

[root@noveria ~]# lvscan
  File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. Parent PID 2935: bash
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' [128,00 MiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' [1,00 GiB] inherit
  ACTIVE   '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' [128,00 MiB] inherit
  inactive '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' [40,00 GiB] inherit
and this:

[root@noveria ~]# vdsClient -s 0 prepareImage "00000000-0000-0000-0000-000000000000" "8b25f3be-7574-4f7a-8851-363129704e52" "bf3bdae1-7318-4443-a19b-7371de30b982" "cbb10cf0-9600-465e-aed9-412f7157706b"
{'domainID': '8b25f3be-7574-4f7a-8851-363129704e52',
 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982',
 'leaseOffset': 112197632,
 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases',
 'path': '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb10cf0-9600-465e-aed9-412f7157706b',
 'volType': 'path',
 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'}
and:

[root@noveria ~]# vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
        uuid = 8b25f3be-7574-4f7a-8851-363129704e52
        vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m
        state = OK
        version = 3
        role = Regular
        type = ISCSI
        class = Data
        pool = []
        name = hosted_storage
[root@noveria ~]# lvs
  File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. Parent PID 3105: bash
  LV                                   VG                                   Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  0f260ab0-3631-4c71-b332-c6c7f67f7342 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m
  ae352fab-7477-4376-aa27-04c321b4fbd1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g
  cbb10cf0-9600-465e-aed9-412f7157706b 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g
  f05762e5-e8cd-45e7-ac19-303c1ade79d1 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m
  ids                                  8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  inbox                                8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
  leases                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 2,00g
  master                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 1,00g
  metadata                             8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 512,00m
  outbox                               8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 128,00m
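Side note on the inactive LVs in the lvscan output above: VDSM activates volume LVs on demand (the prepareImage call above did exactly that for the 40G engine disk), and the dangling hosted-engine.metadata link points at one of the inactive ones. As a manual test, one could activate them and see whether the ha_agent links resolve; a sketch:

  # activate the inactive volumes inside the domain VG
  lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1
  lvchange -ay 8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1
  # the ha_agent symlinks should now resolve
  ls -lL /rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/ha_agent/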
The VDSM logs don't show me anything:

MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) <WorkerThread(Thread-4, started daemon 139990108333824)>
MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) <WorkerThread(Thread-3, started daemon 139990116726528)>
MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd (4.1.6-200.fc22.x86_64)
MainThread::DEBUG::2015-09-02 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2015-09-02 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - numThreads: 10, waitTimeout: 3, maxTasks: 500
MainThread::DEBUG::2015-09-02 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt mode: None
MainThread::WARNING::2015-09-02 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt already exists
MainThread::DEBUG::2015-09-02 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None)
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2015-09-02 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: []
MainThread::DEBUG::2015-09-02 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
MainThread::DEBUG::2015-09-02 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
storageRefresh::DEBUG::2015-09-02 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2015-09-02 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
storageRefresh::DEBUG::2015-09-02 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) storageRefresh::DEBUG::2015-09-02 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds storageRefresh::DEBUG::2015-09-02 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) MainThread::DEBUG::2015-09-02 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> state preparing MainThread::INFO::2015-09-02 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback(callbackFunc=<functools.partial object at 0x7fc2f03fa6d8>) MainThread::INFO::2015-09-02 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: registerDomainStateChangeCallback, Return response: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None MainThread::DEBUG::2015-09-02 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> state finished MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} MainThread::DEBUG::2015-09-02 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} MainThread::DEBUG::2015-09-02 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False MainThread::INFO::2015-09-02 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface MainThread::INFO::2015-09-02 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket /var/run/vdsm/mom-vdsm.sock MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) Unregistering all secrests MainThread::DEBUG::2015-09-02 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::INFO::2015-09-02 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout to 30 seconds. VM Channels Listener::DEBUG::2015-09-02 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener thread. 
MainThread::INFO::2015-09-02 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) Listening at 0.0.0.0:54321 MainThread::DEBUG::2015-09-02 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> storageRefresh::DEBUG::2015-09-02 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) storageRefresh::DEBUG::2015-09-02 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method storageRefresh::DEBUG::2015-09-02 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan storageRefresh::DEBUG::2015-09-02 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm MainThread::DEBUG::2015-09-02 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) Adding detector <yajsonrpc.stompreactor.StompDetector instance at 0x7fc2e01332d8> BindingXMLRPC::INFO::2015-09-02 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running MainThread::DEBUG::2015-09-02 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler periodic-sched periodic-sched::DEBUG::2015-09-02 16:07:50,159::schedule::142::Scheduler::(_run) started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::69::Executor::(start) Starting executor MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 periodic/0::DEBUG::2015-09-02 16:07:50,159::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 periodic/1::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 periodic/2::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 periodic/3::DEBUG::2015-09-02 16:07:50,160::executor::171::Executor::(_run) Worker started MainThread::DEBUG::2015-09-02 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt MainThread::DEBUG::2015-09-02 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> MainThread::DEBUG::2015-09-02 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) storageRefresh::DEBUG::2015-09-02 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished storageRefresh::DEBUG::2015-09-02 16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n 
/usr/sbin/multipath (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Reactor thread::INFO::2015-09-02 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56311 Reactor thread::DEBUG::2015-09-02 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56311) BindingXMLRPC::INFO::2015-09-02 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56311 Thread-13::INFO::2015-09-02 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 started Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-13::DEBUG::2015-09-02 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-13::INFO::2015-09-02 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56311 stopped Reactor thread::INFO::2015-09-02 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56312 Reactor thread::DEBUG::2015-09-02 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56312) BindingXMLRPC::INFO::2015-09-02 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56312 Thread-14::INFO::2015-09-02 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 started Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-14::DEBUG::2015-09-02 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Recovering from crash or Initializing', 'code': 99}} Thread-14::INFO::2015-09-02 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56312 stopped storageRefresh::DEBUG::2015-09-02 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 
16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last result storageRefresh::DEBUG::2015-09-02 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 storageRefresh::DEBUG::2015-09-02 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex storageRefresh::DEBUG::2015-09-02 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready Reactor thread::INFO::2015-09-02 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56313 Reactor thread::DEBUG::2015-09-02 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56313) BindingXMLRPC::INFO::2015-09-02 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56313 Thread-15::INFO::2015-09-02 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 started Thread-15::DEBUG::2015-09-02 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {} Thread-15::DEBUG::2015-09-02 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', 'systemManufacturer': 'System manufacturer'}} Thread-15::INFO::2015-09-02 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56313 stopped Reactor thread::INFO::2015-09-02 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56314 Reactor thread::DEBUG::2015-09-02 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56314) BindingXMLRPC::INFO::2015-09-02 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56314 Thread-16::INFO::2015-09-02 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 started Thread-16::DEBUG::2015-09-02 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-16::DEBUG::2015-09-02 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> state preparing Thread-16::INFO::2015-09-02 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) Thread-16::DEBUG::2015-09-02 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T 
iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 --op=new (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.username -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.session.auth.password -v '****' --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m iface -I default (cwd None) Thread-16::DEBUG::2015-09-02 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::INFO::2015-09-02 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping. Thread-16::DEBUG::2015-09-02 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -l (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-16::DEBUG::2015-09-02 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last result 
Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-16::DEBUG::2015-09-02 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-16::DEBUG::2015-09-02 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan Thread-16::DEBUG::2015-09-02 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished Thread-16::DEBUG::2015-09-02 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-16::DEBUG::2015-09-02 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-16::DEBUG::2015-09-02 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-16::DEBUG::2015-09-02 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) Thread-16::DEBUG::2015-09-02 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-16::DEBUG::2015-09-02 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: ('8b25f3be-7574-4f7a-8851-363129704e52',) Thread-16::DEBUG::2015-09-02 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} Thread-16::INFO::2015-09-02 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} Thread-16::DEBUG::2015-09-02 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> state finished Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-16::DEBUG::2015-09-02 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-16::DEBUG::2015-09-02 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False Thread-16::INFO::2015-09-02 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56314 stopped Reactor thread::INFO::2015-09-02 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2015-09-02 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:56316 Reactor thread::DEBUG::2015-09-02 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 56316) BindingXMLRPC::INFO::2015-09-02 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:56316 Thread-17::INFO::2015-09-02 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:56316 started Thread-17::DEBUG::2015-09-02 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] Thread-17::DEBUG::2015-09-02 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> state preparing Thread-17::INFO::2015-09-02 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') Thread-17::DEBUG::2015-09-02 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' Thread-17::DEBUG::2015-09-02 
16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' for lock type 'shared' Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. Now locking as 'shared' (1 active user) Thread-17::DEBUG::2015-09-02 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted request Thread-17::DEBUG::2015-09-02 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) Thread-17::DEBUG::2015-09-02 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds Thread-17::DEBUG::2015-09-02 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) Thread-17::DEBUG::2015-09-02 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method Thread-17::DEBUG::2015-09-02 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan Thread-17::DEBUG::2015-09-02 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished Thread-17::DEBUG::2015-09-02 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation 
mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last result Thread-17::DEBUG::2015-09-02 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with LvMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd iflag=direct skip=0 bs=2048 if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 MB/s'], size: 2048 Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[] Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata Thread-17::DEBUG::2015-09-02 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with VGTagMetadataRW backend Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex Thread-17::DEBUG::2015-09-02 
16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0 Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da'] Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS' Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
What should I do to bring the VM engine back?
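(A minimal sketch of a first check, reusing the target name, portal and domain UUID that appear in the vdsm log above; whether the session and the VG are actually reachable can be verified by hand:)

# is there an active session to the target vdsm logged into?
iscsiadm -m session -P 3

# if not, log in manually with the same parameters vdsm used
iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -p 192.168.1.50:3260,1 -l

# confirm the multipath device and the VG on top of it are visible
multipath -ll
lvm vgs 8b25f3be-7574-4f7a-8851-363129704e52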
2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
> > > On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi < > wodel.youchi@gmail.com> wrote: > >> I will try this afternoon to do this, but just to clarify something. >> >> the hosted_engine setup creates it's own DC the hosted_DC, which >> contains the hosted engine storage domain, I am correct? >> > > No, ovirt-hosted-engine-setup doesn't create a special datacenter. > The default is to add the host to the Default datacenter in the default > cluster. > You could choose a different one from ovirt-hosted-engine-setup, > simply import the hosted-engine storage domain in the datacenter of the > cluster you selected. > > In setup there is a question like this: > Local storage datacenter name is an internal name > and currently will not be shown in engine's admin UI. > Please enter local datacenter name > which ask about 'Local storage datacenter' which is basically the > description we were using for the storage pool. > > > >> if yes, where will I import the hostedengine storage domain, into >> the default DC? >> >> 2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>: >> >>> >>> >>> On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi < >>> wodel.youchi@gmail.com> wrote: >>> >>>> I could finally terminate the installation, but still no vm >>>> engine on webui >>>> >>>> I added a data domain, the default DC is up, but no engine VM. >>>> >>> >>> >>> >>> Good now you need to import the HostedEngine storage domain. Try >>> to go to >>> >>> *Storage -> Import Domain and put the path to the domain which you >>> used in the hosted-engine setup.* >>> >>> >>> *After the domain is imported, the engine will be imported >>> automatically. * >>> >>> *This whole process will become automatic eventually. (patch is >>> written currently)* >>> >>> >>>> >>>> 2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>: >>>> >>>>> Something mounted on /rhev/data-center/mnt I'm not sure. >>>>> >>>>> there were directories, and under these directories there were >>>>> other directories (dom_md, ha_agent, images), and under them there were >>>>> symbolic links to devices under /dev >>>>> (ids, inbox, leases, etc...) the devices pointed to the lvm >>>>> partitions created by the setup. >>>>> >>>>> but the mount command didn't show anything, unlike nfs, when I >>>>> used it the mount and df commands showed up the engine's VM mount point. >>>>> >>>>> >>>>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi < >>>>> stirabos@redhat.com>: >>>>> >>>>>> >>>>>> >>>>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi < >>>>>> wodel.youchi@gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> After removing the -x from the sql files, the installation >>>>>>> terminated successfully, but ... >>>>>>> >>>>>>> I had a problem with vdsm, and error about permission denied >>>>>>> with KVM module, so I restarted my machine. >>>>>>> After the reboot the ovirt-ha-agent stops complaining about >>>>>>> the vm.conf file not present in /var/rum/ovirt-hosted-engine-ha >>>>>>> >>>>>>> And the mount command doesn't show any iscsi mount, the disk >>>>>>> is detected via fdisk -l >>>>>>> the lvs command returns all logical volumes created. >>>>>>> >>>>>>> I think it's a mount problem, but since there are many lv, I >>>>>>> don't how to mount them manually. >>>>>>> >>>>>> >>>>>> Do you have something mounted under /rhev/data-center/mnt ? 
>>>>>> If not you probably hit this bug: >>>>>> https://bugzilla.redhat.com/1258465 >>>>>> >>>>>> >>>>>> >>>>>>> LV VG >>>>>>> Attr LSize Pool Origin Data% >>>>>>> Meta% Move >>>>>>> Log Cpy%Sync Convert >>>>>>> >>>>>>> 3b894e23-429d-43bf-b6cd-6427a387799a >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m >>>>>>> >>>>>>> >>>>>>> >>>>>>> be78c0fd-52bf-445a-9555-64061029c2d9 >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g >>>>>>> >>>>>>> >>>>>>> >>>>>>> c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g >>>>>>> >>>>>>> >>>>>>> >>>>>>> feede664-5754-4ca2-aeb3-af7aff32ed42 >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m >>>>>>> >>>>>>> >>>>>>> >>>>>>> ids >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>>> -wi-ao---- 128,00m >>>>>>> >>>>>>> >>>>>>> inbox >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>>> -wi-a----- 128,00m >>>>>>> >>>>>>> >>>>>>> leases >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>>> -wi-a----- 2,00g >>>>>>> >>>>>>> >>>>>>> master >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>>> -wi-a----- 1,00g >>>>>>> >>>>>>> >>>>>>> metadata >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- >>>>>>> 512,00m >>>>>>> >>>>>>> >>>>>>> outbox >>>>>>> 5445bbee-bb3a-4e6d-9614-a0c9378fe078 >>>>>>> -wi-a----- 128,00m >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi < >>>>>>> stirabos@redhat.com>: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi < >>>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi again, >>>>>>>>> >>>>>>>>> I tried with the snapshot repository, but I am having this >>>>>>>>> error while executing engine-setup >>>>>>>>> >>>>>>>>> [ INFO ] Creating/refreshing Engine database schema >>>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': >>>>>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execu >>>>>>>>> te >>>>>>>>> [ INFO ] DNF Performing DNF transaction rollback >>>>>>>>> [ INFO ] Rolling back database schema >>>>>>>>> [ INFO ] Clearing Engine database engine >>>>>>>>> [ ERROR ] Engine database rollback failed: must be owner of >>>>>>>>> schema pg_catalog >>>>>>>>> [ INFO ] Stage: Clean up >>>>>>>>> Log file is located at >>>>>>>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log >>>>>>>>> [ INFO ] Generating answer file >>>>>>>>> '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf' >>>>>>>>> [ INFO ] Stage: Pre-termination >>>>>>>>> [ INFO ] Stage: Termination >>>>>>>>> [ ERROR ] Execution of setup failed >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> and in the deployement log I have these errors >>>>>>>>> >>>>>>>>> Saving custom users permissions on database objects... >>>>>>>>> upgrade script detected a change in Config, View or Stored >>>>>>>>> Procedure... >>>>>>>>> Running upgrade shell script >>>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'... >>>>>>>>> Running upgrade shell script >>>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'... >>>>>>>>> Running upgrade shell script >>>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'... >>>>>>>>> Running upgrade shell script >>>>>>>>> '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'... >>>>>>>>> Running upgrade shell script >>>>>>>>> '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'... 
>>>>>>>>> >>>>>>>>> >>>>>>>>> 2015-09-01 15:39:35 DEBUG >>>>>>>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 >>>>>>>>> execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', >>>>>>>>> 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> */ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', >>>>>>>>> '-c', 'apply'] >>>>>>>>> stderr:/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 1: /bin : is a >>>>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 2: DATABASE : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 4: This : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 5: The : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 6: Add : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 7: syntax error near the symbole unexpected « ( >>>>>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: >>>>>>>>> ligne 7: ` Update section (w/o overriding current >>>>>>>>> value)'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 1: /bin : is a >>>>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 2: Currently : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 3: This : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 4: This : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 5: So, : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 6: Since : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 7: bin/ : is a >>>>>>>>> directory/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 9: update : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 10: * >>>>>>>>> >>>>>>>>> >>>>>>>>> *syntax error near the symbole unexpected « ( >>>>>>>>> »/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: >>>>>>>>> ligne 10: `and exists(select 1 from schema_version where version = >>>>>>>>> '03010250' and current = >>>>>>>>> true);'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>>>> ligne 1: -- : command not >>>>>>>>> found/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>>>> ligne 2: **syntax error near the symbole unexpected « ( »* >>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: >>>>>>>>> ligne 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()' >>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>>>> ligne 1: -- : commande introuvable >>>>>>>>> 
/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>>>> ligne 2: erreur de syntaxe près du symbole inattendu « ( » >>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: >>>>>>>>> ligne 2: `select fn_db_add_column('materialized_views', >>>>>>>>> 'min_refresh_rate_in_sec', 'int default 0');' >>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>>>>> ligne 1: erreur de syntaxe près du symbole inattendu « ( » >>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: >>>>>>>>> ligne 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE >>>>>>>>> varchar(63);' >>>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.context >>>>>>>>> context._executeMethod:156 method exception >>>>>>>>> Traceback (most recent call last): >>>>>>>>> File "/usr/lib/python2.7/site-packages/otopi/context.py", >>>>>>>>> line 146, in _executeMethod >>>>>>>>> method['method']() >>>>>>>>> File >>>>>>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", >>>>>>>>> line 291, in _misc >>>>>>>>> oenginecons.EngineDBEnv.PGPASS_FILE >>>>>>>>> File "/usr/lib/python2.7/site-packages/otopi/plugin.py", >>>>>>>>> line 946, in execute >>>>>>>>> command=args[0], >>>>>>>>> RuntimeError: Command >>>>>>>>> '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>>>>> 2015-09-01 15:39:35 ERROR otopi.context >>>>>>>>> context._executeMethod:165 Failed to execute stage 'Misc configuration': >>>>>>>>> Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute >>>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.transaction >>>>>>>>> transaction.abort:134 aborting 'DNF Transaction' >>>>>>>>> 2015-09-01 15:39:35 DEBUG >>>>>>>>> otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF >>>>>>>>> Closing transaction with rollback >>>>>>>>> 2015-09-01 15:39:35 INFO >>>>>>>>> otopi.plugins.otopi.packagers.dnfpackager >>>>>>>>> dnfpackager.info:94 DNF Performing DNF transaction rollback >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> It was an issue with package building: all the sql files >>>>>>>> where executable and so the issue. >>>>>>>> We fixed it and tomorrow build should be OK. If you prefer to >>>>>>>> continue right now simply recursively remove the x attribute on each sql >>>>>>>> file under /usr/share/ovirt-engine/dbscripts >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi < >>>>>>>>> stirabos@redhat.com>: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David < >>>>>>>>>> didi@redhat.com> wrote: >>>>>>>>>> >>>>>>>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi < >>>>>>>>>>> wodel.youchi@gmail.com> wrote: >>>>>>>>>>> > Hi, >>>>>>>>>>> > >>>>>>>>>>> > I am using the repo of the 3.6 version >>>>>>>>>>> > ( >>>>>>>>>>> http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/ >>>>>>>>>>> ) >>>>>>>>>>> > >>>>>>>>>>> > I installed the ovirt-hosted-engine-setup with it's >>>>>>>>>>> dependencies,and the >>>>>>>>>>> > ovirt-hosted-engine-ha package is one of them. 
>>>>>>>>>>> > >>>>>>>>>>> > Correction: The problem with this version >>>>>>>>>>> > >>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>>>>>>>> > is that after the installation is done, the service >>>>>>>>>>> ovirt-ha-agent crashes >>>>>>>>>>> > after being started, see the bug : >>>>>>>>>>> > >>>>>>>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>>>>>>>> > >>>>>>>>>>> > A new version was produced, I downloaded it manually a >>>>>>>>>>> few days ago, this is >>>>>>>>>>> > it : >>>>>>>>>>> > >>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>>>>>>>> > >>>>>>>>>>> > This one did correct the problem, but it's not present >>>>>>>>>>> anymore on the >>>>>>>>>>> > repository. >>>>>>>>>>> >>>>>>>>>>> Was this on ovirt-3.6-pre? >>>>>>>>>>> >>>>>>>>>>> ovirt-3.6-snapshot has a newer version. >>>>>>>>>>> >>>>>>>>>>> > >>>>>>>>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>>>>>>>> success so far, no >>>>>>>>>>> > VM engine present. >>>>>>>>>>> > >>>>>>>>>>> > Regards. >>>>>>>>>>> > >>>>>>>>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>>>>>>>> stirabos@redhat.com>: >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>>>>>>>> didi@redhat.com> >>>>>>>>>>> >> wrote: >>>>>>>>>>> >>> >>>>>>>>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>>>>>>>> wodel.youchi@gmail.com> >>>>>>>>>>> >>> wrote: >>>>>>>>>>> >>> > Hi, >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > Another test of ovirt hosted-engine on FC22 using >>>>>>>>>>> ovirt 3.6 Beta3. >>>>>>>>>>> >>> > VM engine is also a FC22 >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > Problem: >>>>>>>>>>> >>> > - No VM engine on webui >>>>>>>>>>> >>> >>>>>>>>>>> >>> This is still not supported, see/follow [1]. >>>>>>>>>>> >>> >>>>>>>>>>> >>> [1] >>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> >> ? :-) >>>>>>>>>>> >>>>>>>>>>> Sorry :-( >>>>>>>>>>> >>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> This is just about editing the VM from the web GUI but in >>>>>>>>>> order to be able to edit the engine VM you should be able at least to find >>>>>>>>>> the engine VM in the engine as it was also in 3.5 >>>>>>>>>> >>>>>>>>>> I'll try to reproduce verifying another patch >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> I see that all patches there are merged, but bug is in >>>>>>>>>>> POST. >>>>>>>>>>> >>>>>>>>>>> >> >>>>>>>>>>> >> Did you also try adding an additional storage domain >>>>>>>>>>> for regular VMs? >>>>>>>>>>> >> engine-VM will be shown on the engine only when you add >>>>>>>>>>> at least one >>>>>>>>>>> >> additional storage domain for regulars VM and the whole >>>>>>>>>>> datacenter goes up: >>>>>>>>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> >>> >>>>>>>>>>> >>> > Test Environment >>>>>>>>>>> >>> > Just two machines: >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > 1 - Machine 1 used as storage : >>>>>>>>>>> >>> > - iscsi target with a raw file for the VM engine >>>>>>>>>>> storage >>>>>>>>>>> >>> > - NFS4 for other data domains >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > 2 - Machine 2 used as hypervisor >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > The installation went without problem, but as >>>>>>>>>>> always, the VM engine is >>>>>>>>>>> >>> > not >>>>>>>>>>> >>> > present on the webui. 
>>>>>>>>>>> >>> > >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > PS: >>>>>>>>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I >>>>>>>>>>> don't have too much >>>>>>>>>>> >>> > RAM >>>>>>>>>>> >>> > on hypervisor, could that be the cause of the >>>>>>>>>>> problem? >>>>>>>>>>> >>> >>>>>>>>>>> >>> Shouldn't be related >>>>>>>>>>> >>> >>>>>>>>>>> >>> > 2- This version of >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > >>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>>>>>>>> >>> > package is causing the ovirt-ha-agent to crash, it >>>>>>>>>>> was replaced with >>>>>>>>>>> >>> > another >>>>>>>>>>> >>> > which I still have >>>>>>>>>>> >>> > >>>>>>>>>>> >>> > >>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>>>>>>>> >>> > but it's not present on the repository, I had to >>>>>>>>>>> update the package >>>>>>>>>>> >>> > manually >>>>>>>>>>> >>> > at the end of ovirt-hosted-engine-setup installation. >>>>>>>>>>> >>> >>>>>>>>>>> >>> Not sure I follow. >>>>>>>>>>> >>> >>>>>>>>>>> >>> What exact repo was used? >>>>>>>>>>> >>> >>>>>>>>>>> >>> hosted-engine --deploy does not update/install >>>>>>>>>>> packages for you (as >>>>>>>>>>> >>> does engine-setup), >>>>>>>>>>> >>> it's up to you to make sure what you want/need is >>>>>>>>>>> installed prior to >>>>>>>>>>> >>> running it. >>>>>>>>>>> >>> >>>>>>>>>>> >>> Best, >>>>>>>>>>> >>> -- >>>>>>>>>>> >>> Didi >>>>>>>>>>> >>> _______________________________________________ >>>>>>>>>>> >>> Users mailing list >>>>>>>>>>> >>> Users@ovirt.org >>>>>>>>>>> >>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> > >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Didi >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> >
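(For reference, the workaround Simone describes above, recursively removing the executable bit so that schema.sh no longer tries to run the .sql files as shell scripts, amounts to something like this sketch:)

# clear the stray executable bit on every sql script shipped by the package
find /usr/share/ovirt-engine/dbscripts -name '*.sql' -exec chmod a-x {} +

# then re-run the failed setup
engine-setup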

Hi,

Yesterday I posted a message, but it was not delivered because of its size (big log + a screenshot), so I don't know if the moderator will accept the message or not. Anyway, this is only the text:

I redid the target configuration on FC22 using the same raw file, and I got the same result: no vm.conf file. I don't think it's a connection problem: iscsiadm shows the session and lvs shows the LVs, so the connection is up.

Yesterday, before the reboot, I saved vm.conf somewhere, so I copied it back to its location and got the VM engine up. But the problem is still there.

Another thing: this time I tried to add hosted_storage. The iSCSI login showed me the LUN, but when I tried to add it, *the VM engine crashed*. After restarting the VM engine, I got the hosted_storage displayed (click on System -> choose the Storage tab), *but not attached.* If I try to attach it to the default DC, I get this error:

Storage Domain(s) are already attached to a Data Center. Approving this operation might cause data corruption if both Data Centers are active. - hosted_storage

and if I approve, the VM engine crashes.

Regards

2015-09-03 21:05 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
Hi,
I redid the target configuration on FC22 using the same raw file, and I got the same result: no vm.conf file.
I don't think it's a connection problem: iscsiadm shows the session and lvs shows the LVs, so the connection is up.
Yesterday, before the reboot, I saved vm.conf somewhere, so I copied it back to its location and got the VM engine up.
But the problem is still there.
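(A sketch of that copy-back workaround, assuming the backup was saved as /root/vm.conf, a hypothetical path, and using the location from the agent's error message later in this thread:)

# restore the saved copy to the path the agent expects
cp /root/vm.conf /var/run/ovirt-hosted-engine-ha/vm.conf
systemctl restart ovirt-ha-broker ovirt-ha-agent

# check that the agent comes up with the restored configuration
hosted-engine --vm-status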
Another thing: this time I tried to add hosted_storage. The iSCSI login showed me the LUN, but when I tried to add it, the VM engine crashed.
After restarting the VM engine, I got the hosted_storage displayed (click on System -> choose the Storage tab), but not attached.
If I try to attach it to the default DC, I get this error (see image):
Storage Domain(s) are already attached to a Data Center. Approving this operation might cause data corruption if both Data Centers are active. - hosted_storage
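(Before approving that dialog it may be worth checking, from the host, which pool the domain metadata actually references; a sketch using the vdsClient call that appears later in this thread, substituting the current domain UUID:)

vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52
# "pool = []" in the output means the domain is not really attached to another active DC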
PS: attached log files: vdsm, ha agent, engine
2015-09-03 10:39 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Thu, Sep 3, 2015 at 11:25 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Just to be clear, my test environment is composed of two machines:
1 - one hypervisor (just one host)
2 - a machine used as storage: a raw file presented as an iSCSI device for the VM engine storage, and multiple NFS4 shares for the other data domains (data, iso, export).
That's it.
Here is the vdsm log
The first real error is this one:
Thread-49::ERROR::2015-09-03 01:19:29,710::monitor::250::Storage.Monitor::(_monitorDomain) Error monitoring domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17 Traceback (most recent call last): File "/usr/share/vdsm/storage/monitor.py", line 246, in _monitorDomain self._performDomainSelftest() File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 759, in wrapper value = meth(self, *a, **kw) File "/usr/share/vdsm/storage/monitor.py", line 313, in _performDomainSelftest self.domain.selftest() File "/usr/share/vdsm/storage/blockSD.py", line 857, in selftest lvm.chkVG(self.sdUUID) File "/usr/share/vdsm/storage/lvm.py", line 1006, in chkVG raise se.StorageDomainAccessError("%s: %s" % (vgName, err)) StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('7bd9cad0-151a-4aa4-a7de-15dd64748f17: [\' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687025664: Input/output error\', \' /dev/mapper/33000000100000001: read failed after 0 of 4096 at 53687083008: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/33000000100000001 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536805376: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata: read failed after 0 of 4096 at 536862720: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-metadata was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-outbox was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147418112: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases: read failed after 0 of 4096 at 2147475456: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-leases was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. 
Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-ids was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-inbox was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073676288: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master: read failed after 0 of 4096 at 1073733632: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-master was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949607424: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900: read failed after 0 of 4096 at 42949664768: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-42b4ec95--1a6e--4274--869a--3c8ad9b85900 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-b6bbf14e--01fa--426b--8177--2250f5cd5406 was disabled\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 0: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134152192: Input/output error\', \' /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf: read failed after 0 of 4096 at 134209536: Input/output error\', \' WARNING: Error counts reached a limit of 3. Device /dev/mapper/7bd9cad0--151a--4aa4--a7de--15dd64748f17-c47d5bde--8134--461b--aacd--e9146ae0bfaf was disabled\', \' Volume group "7bd9cad0-151a-4aa4-a7de-15dd64748f17" not found\', \' Cannot process volume group 7bd9cad0-151a-4aa4-a7de-15dd64748f17\']',) Thread-49::INFO::2015-09-03 01:19:29,741::monitor::273::Storage.Monitor::(_notifyStatusChanges) Domain 7bd9cad0-151a-4aa4-a7de-15dd64748f17 became INVALID Thread-4067::DEBUG::2015-09-03 01:19:29,741::misc::777::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
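(Those read failures can be reproduced outside of LVM with a direct read against the multipath device named in the traceback; a quick sketch:)

# direct read of the first 4 KiB of the LUN; an I/O error here means the
# problem sits below LVM, in the iSCSI/multipath layer
dd if=/dev/mapper/33000000100000001 of=/dev/null bs=4096 count=1 iflag=direct

# per-path state of the multipath device
multipath -ll 33000000100000001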
For some reason your iSCSI connection seems to fail and the hosted-engine storage domain becomes invalid. All the other issues are subsequent. Can you please check its configuration and the network status?
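(A sketch of the basic checks, reusing the portal address from the logs; nc is an assumption here, any port probe will do:)

# is the portal reachable at all?
ping -c 3 192.168.1.50

# is the target port open?
nc -zv 192.168.1.50 3260

# session state and error counters
iscsiadm -m session -P 3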
PS: After rebooting the host, I could not restart the VM engine; the same problem with the ha agent: vm.conf file not found.
vm.conf is now kept on the storage domain, but you should at least be able to access it.
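(To see whether the agent can actually fetch it from the shared domain, restarting the agent and following its log is usually enough; a sketch, assuming the default log location:)

systemctl restart ovirt-ha-agent
tail -f /var/log/ovirt-hosted-engine-ha/agent.log
# look for the "Reloading vm.conf from the shared storage domain" line quoted later in this thread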
2015-09-03 8:14 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
On Thu, Sep 3, 2015 at 2:07 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
Hi again,
I had to restart the installation all over; I used the freshly pushed new packages.
I had two problems:

1 - engine-setup didn't terminate correctly when I chose to use ovirt-vmconsole; it failed with this error:

[ INFO ] Restarting ovirt-vmconsole proxy service
*[ ERROR ] Failed to execute stage 'Closing up': Failed to stop service 'ovirt-vmconsole-proxy-sshd'*
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150903000415-6egi46.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001209-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
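(The failing unit can be inspected directly before retrying; a sketch:)

systemctl status ovirt-vmconsole-proxy-sshd
journalctl -u ovirt-vmconsole-proxy-sshd --no-pager -n 50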
So I executed engine-cleanup, which terminated with this error:
[ INFO ] Clearing Engine database engine
*[ ERROR ] Failed to execute stage 'Misc configuration': must be owner of schema pg_catalog*
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-20150903001440-da1u76.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150903001513-cleanup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of cleanup failed
And then I executed engine-setup again, without ovirt-vmconsole. This time the setup completed.
2 - I added an NFS4 storage domain to the default DC (Default); the DC went up, and then I tried to import the hosted-engine storage domain, but without success.
Click on import, choose iSCSI, connect to the target, scan, log in, but no device shown!!! (iscsi.jpeg)
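(When the login succeeds but no LUN shows up, it can help to confirm from the host that the target actually exports one; a sketch, reusing the portal from the earlier logs; lsscsi may need installing:)

iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# rescan the existing session, as vdsm does
iscsiadm -m session -R

# is the LUN now visible as a SCSI disk?
lsscsi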
Can you please attach the relevant VDSM logs from the host you were using to import that storage domain?
The only new thing I had is the disk of the VM engine being shown under the Disks tab.
2015-09-02 19:50 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
I found this in the vdsm log:
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call vmGetStats with ('ab1dc1a9-b6e9-4890-8485-1019da2f328f',) {}
Thread-432::DEBUG::2015-09-02 19:37:30,854::bindingxmlrpc::1263::vds::(wrapper) return vmGetStats with {'status': {'message': '*Virtual machine does not exist*', 'code': 1}}
I really don't understand anything
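(That vmGetStats error just means vdsm currently has no VM defined with that UUID; this can be checked directly, a sketch:)

# list the VMs vdsm currently knows about and compare with the UUID
# the agent is polling (ab1dc1a9-b6e9-4890-8485-1019da2f328f)
vdsClient -s 0 list table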
2015-09-02 17:01 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
> Thanks, > > but before that I stuck again with the storage of the VM engine not > detected after reboot. > > the /rhev is populated, but ovirt-ha-agent crashes with > > MainThread::INFO::2015-09-02 > 16:12:20,261::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) > Starting monitor engine-health, options {'use_ssl': 'true', > 'vm_uuid': 'ab1dc1a9-b6e9-4890-8485-1019da2f328f', 'address': '0 > '} > MainThread::INFO::2015-09-02 > 16:12:20,283::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) > Success, id 139994237094736 > MainThread::INFO::2015-09-02 > 16:12:20,702::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_do > main) Success, id 139994236985168 > MainThread::INFO::2015-09-02 > 16:12:20,702::hosted_engine::574::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_ini > tialize_broker) Broker initialized, all submonitors started > MainThread::INFO::2015-09-02 > 16:12:20,799::hosted_engine::678::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_ini > tialize_sanlock) Ensuring lease for lockspace hosted-engine, host id > 1 is acquired (file: /var/run/vdsm/storage/8b25f3be-7574 > -4f7a-8851-363129704e52/a44d1302-3165-4632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342) > > MainThread::INFO::2015-09-02 > 16:12:20,800::hosted_engine::401::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(star > t_monitoring) Reloading vm.conf from the shared storage domain > MainThread::ERROR::2015-09-02 > 16:12:20,927::agent::201::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) > > *Error: ''Configuration value not found: > file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying to > restart agent * > MainThread::WARNING::2015-09-02 > 16:12:25,932::agent::204::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) > Restarting a > gent, attempt '9' > MainThread::ERROR::2015-09-02 > 16:12:25,933::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) > Too many error > s occurred, giving up. Please review the log and consider filing a > bug. > MainThread::INFO::2015-09-02 > 16:12:25,933::agent::143::ovirt_hosted_engine_ha.agent.agent.Agent::(run) > Agent shutting down > > > I restared vdsm ha-agent and broker-agent wihtout success > > When executed > [root@noveria ~]# hosted-engine --vm-status > You must run deploy first > > > I got this > > [root@noveria ~]# tree /var/run/vdsm/storage/ > /var/run/vdsm/storage/ > └── 8b25f3be-7574-4f7a-8851-363129704e52 > ├── 8e49032f-680b-40c2-b422-80d86dc7beda > │ └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1 > ade79d1 > ├── a44d1302-3165-4632-9d99-3e035dfc3ac7 > │ └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f > 67f7342 > ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7 > │ └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c32 > 1b4fbd1 > └── bf3bdae1-7318-4443-a19b-7371de30b982 > └── cbb10cf0-9600-465e-aed9-412f7157706b -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7 > 157706b > > and this > [root@noveria rhev]# tree > . 
> └── data-center > ├── 00000001-0001-0001-0001-000000000221 > └── mnt > ├── blockSD > │ └── 8b25f3be-7574-4f7a-8851-363129704e52 > │ ├── dom_md > │ │ ├── ids -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/ids > │ │ ├── inbox -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox > │ │ ├── leases -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/leases > │ │ ├── master -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/master > │ │ ├── metadata -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata > │ │ └── outbox -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox > │ ├── ha_agent > │ │ ├── hosted-engine.lockspace -> > /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/a44d1302-3165-4 > 632-9d99-3e035dfc3ac7/0f260ab0-3631-4c71-b332-c6c7f67f7342 > │ │ └── hosted-engine.metadata -> > /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40 > c2-b422-80d86dc7beda/f05762e5-e8cd-45e7-ac19-303c1ade79d1 > │ └── images > │ ├── 8e49032f-680b-40c2-b422-80d86dc7beda > │ │ └── f05762e5-e8cd-45e7-ac19-303c1ade79d1 -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-4 > 5e7-ac19-303c1ade79d1 > │ ├── a44d1302-3165-4632-9d99-3e035dfc3ac7 > │ │ └── 0f260ab0-3631-4c71-b332-c6c7f67f7342 -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4 > c71-b332-c6c7f67f7342 > │ ├── a5475e57-c6f5-4dc5-a3f2-7fb782d613a7 > │ │ └── ae352fab-7477-4376-aa27-04c321b4fbd1 -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4 > 376-aa27-04c321b4fbd1 > │ └── bf3bdae1-7318-4443-a19b-7371de30b982 > │ └── cbb10cf0-9600-465e-aed9-412f7157706b -> > /dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-4 > 65e-aed9-412f7157706b > ├── openSuse.wodel.wd:_nvms > └── _var_lib_ovirt-hosted-engine-setup_tmp2fNoEf > > > Here I did find some symblic links blinking (not present) like this > one > hosted-engine.metadata -> > /var/run/vdsm/storage/8b25f3be-7574-4f7a-8851-363129704e52/8e49032f-680b-40 > c2-b422-80d86dc7beda/*f05762e5-e8cd-45e7-ac19-303c1ade79d1* > > > the lvscan command showed that the lv concerned is inactive, is this > correct? > [root@noveria ~]# lvscan > File descriptor 9 (/dev/dri/card0) leaked on lvscan invocation. 
> Parent PID 2935: bash > ACTIVE > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata' [512,00 > MiB] inherit > ACTIVE > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/outbox' [128,00 MiB] > inherit > ACTIVE > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases' [2,00 GiB] > inherit > ACTIVE '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ids' > [128,00 MiB] inherit > ACTIVE '/dev/8b25f3be-7574-4f7a-8851-363129704e52/inbox' > [128,00 MiB] inherit > ACTIVE > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/master' [1,00 GiB] > inherit > inactive > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/ae352fab-7477-4376-aa27-04c321b4fbd1' > [1,00 GiB] inherit > ACTIVE > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/0f260ab0-3631-4c71-b332-c6c7f67f7342' > [128,00 MiB] inherit > *inactive > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/f05762e5-e8cd-45e7-ac19-303c1ade79d1' > [128,00 MiB] inherit * > inactive > '/dev/8b25f3be-7574-4f7a-8851-363129704e52/cbb10cf0-9600-465e-aed9-412f7157706b' > [40,00 GiB] inherit > > > > and this > [root@noveria ~]# vdsClient -s 0 prepareImage > "00000000-0000-0000-0000-000000000000" > "8b25f3be-7574-4f7a-8851-363129704e52" " > bf3bdae1-7318-4443-a19b-7371de30b982" > "cbb10cf0-9600-465e-aed9-412f7157706b" > {'domainID': '8b25f3be-7574-4f7a-8851-363129704e52', > 'imageID': 'bf3bdae1-7318-4443-a19b-7371de30b982', > 'leaseOffset': 112197632, > 'leasePath': '/dev/8b25f3be-7574-4f7a-8851-363129704e52/leases', > 'path': > '/rhev/data-center/mnt/blockSD/8b25f3be-7574-4f7a-8851-363129704e52/images/bf3bdae1-7318-4443-a19b-7371de30b982/cbb1 > 0cf0-9600-465e-aed9-412f7157706b', > 'volType': 'path', > 'volumeID': 'cbb10cf0-9600-465e-aed9-412f7157706b'} > > > and > [root@noveria ~]# vdsClient -s 0 getStorageDomainInfo > 8b25f3be-7574-4f7a-8851-363129704e52 > uuid = 8b25f3be-7574-4f7a-8851-363129704e52 > vguuid = tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m > state = OK > version = 3 > role = Regular > type = ISCSI > class = Data > pool = [] > name = hosted_storage > > > [root@noveria ~]# lvs > File descriptor 9 (/dev/dri/card0) leaked on lvs invocation. 
Parent > PID 3105: bash > LV VG > Attr LSize Pool Origin Data% > Meta% Move > Log Cpy%Sync Convert > 0f260ab0-3631-4c71-b332-c6c7f67f7342 > 8b25f3be-7574-4f7a-8851-363129704e52 -wi-ao---- 128,00m > > > ae352fab-7477-4376-aa27-04c321b4fbd1 > 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 1,00g > > > cbb10cf0-9600-465e-aed9-412f7157706b > 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- 40,00g > > > f05762e5-e8cd-45e7-ac19-303c1ade79d1 > 8b25f3be-7574-4f7a-8851-363129704e52 -wi------- 128,00m > > > ids > 8b25f3be-7574-4f7a-8851-363129704e52 > -wi-a----- 128,00m > > inbox > 8b25f3be-7574-4f7a-8851-363129704e52 > -wi-a----- 128,00m > > leases > 8b25f3be-7574-4f7a-8851-363129704e52 > -wi-a----- 2,00g > > master > 8b25f3be-7574-4f7a-8851-363129704e52 > -wi-a----- 1,00g > > metadata > 8b25f3be-7574-4f7a-8851-363129704e52 -wi-a----- > 512,00m > > outbox > 8b25f3be-7574-4f7a-8851-363129704e52 > -wi-a----- 128,00m > > > > VDSM logs doesn't show me anything > MainThread::INFO::2015-09-01 23:34:49,551::vdsm::166::vds::(run) > <WorkerThread(Thread-4, started daemon 139990108333824)> > MainThread::INFO::2015-09-01 23:34:49,552::vdsm::166::vds::(run) > <WorkerThread(Thread-3, started daemon 139990116726528)> > MainThread::INFO::2015-09-02 16:07:49,510::vdsm::156::vds::(run) > (PID: 1554) I am the actual vdsm 4.17.3-12.git7288ef7.fc22 noveria.wodel.wd > (4.1.6-200.fc22.x86_64) > MainThread::DEBUG::2015-09-02 > 16:07:49,524::resourceManager::421::Storage.ResourceManager::(registerNamespace) > Registering namespace 'Storage' > MainThread::DEBUG::2015-09-02 > 16:07:49,524::threadPool::29::Storage.ThreadPool::(__init__) Enter - > numThreads: 10, waitTimeout: 3, maxTasks: 500 > MainThread::DEBUG::2015-09-02 > 16:07:49,526::fileUtils::143::Storage.fileUtils::(createdir) Creating > directory: /rhev/data-center/mnt mode: None > MainThread::WARNING::2015-09-02 > 16:07:49,526::fileUtils::152::Storage.fileUtils::(createdir) Dir > /rhev/data-center/mnt already exists > MainThread::DEBUG::2015-09-02 > 16:07:49,564::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) > /usr/bin/sudo -n /usr/sbin/lvm dumpconfig global/locking_type (cwd None) > MainThread::DEBUG::2015-09-02 > 16:07:49,611::hsm::403::Storage.Misc.excCmd::(__validateLvmLockingType) > SUCCESS: <err> = ''; <rc> = 0 > MainThread::DEBUG::2015-09-02 > 16:07:49,611::hsm::427::Storage.HSM::(__cleanStorageRepository) Started > cleaning storage repository at '/rhev/data-center' > MainThread::DEBUG::2015-09-02 > 16:07:49,614::hsm::459::Storage.HSM::(__cleanStorageRepository) White list: > ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', > '/rhev/data-center/mnt'] > MainThread::DEBUG::2015-09-02 > 16:07:49,614::hsm::460::Storage.HSM::(__cleanStorageRepository) Mount list: > [] > MainThread::DEBUG::2015-09-02 > 16:07:49,614::hsm::462::Storage.HSM::(__cleanStorageRepository) Cleaning > leftovers > MainThread::DEBUG::2015-09-02 > 16:07:49,615::hsm::505::Storage.HSM::(__cleanStorageRepository) Finished > cleaning storage repository at '/rhev/data-center' > storageRefresh::DEBUG::2015-09-02 > 16:07:49,616::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.sdc.refreshStorage) > MainThread::INFO::2015-09-02 > 16:07:49,617::dispatcher::46::Storage.Dispatcher::(__init__) Starting > StorageDispatcher... 
> storageRefresh::DEBUG::2015-09-02 > 16:07:49,620::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > storageRefresh::DEBUG::2015-09-02 > 16:07:49,792::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.iscsi.rescan) > storageRefresh::DEBUG::2015-09-02 > 16:07:49,793::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > storageRefresh::DEBUG::2015-09-02 > 16:07:49,793::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, > this will take up to 30 seconds > storageRefresh::DEBUG::2015-09-02 > 16:07:49,924::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m session -R (cwd None) > MainThread::DEBUG::2015-09-02 > 16:07:49,924::task::595::Storage.TaskManager.Task::(_updateState) > Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state init -> > state preparing > MainThread::INFO::2015-09-02 > 16:07:49,924::logUtils::48::dispatcher::(wrapper) Run and protect: > registerDomainStateChangeCallback(callbackFunc=<functools.partial object at > 0x7fc2f03fa6d8>) > MainThread::INFO::2015-09-02 > 16:07:49,924::logUtils::51::dispatcher::(wrapper) Run and protect: > registerDomainStateChangeCallback, Return response: None > MainThread::DEBUG::2015-09-02 > 16:07:49,927::task::1191::Storage.TaskManager.Task::(prepare) > Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::finished: None > MainThread::DEBUG::2015-09-02 > 16:07:49,927::task::595::Storage.TaskManager.Task::(_updateState) > Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::moving from state preparing -> > state finished > MainThread::DEBUG::2015-09-02 > 16:07:49,927::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > Owner.releaseAll requests {} resources {} > MainThread::DEBUG::2015-09-02 > 16:07:49,927::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > Owner.cancelAll requests {} > MainThread::DEBUG::2015-09-02 > 16:07:49,928::task::993::Storage.TaskManager.Task::(_decref) > Task=`68d01d7d-b426-4465-829e-174e2cb47e9e`::ref 0 aborting False > MainThread::INFO::2015-09-02 > 16:07:49,928::momIF::46::MOM::(__init__) Preparing MOM interface > MainThread::INFO::2015-09-02 > 16:07:49,929::momIF::55::MOM::(__init__) Using named unix socket > /var/run/vdsm/mom-vdsm.sock > MainThread::INFO::2015-09-02 16:07:49,929::secret::90::root::(clear) > Unregistering all secrests > MainThread::DEBUG::2015-09-02 > 16:07:49,929::libvirtconnection::160::root::(get) trying to connect libvirt > MainThread::INFO::2015-09-02 > 16:07:49,933::vmchannels::196::vds::(settimeout) Setting channels' timeout > to 30 seconds. > VM Channels Listener::DEBUG::2015-09-02 > 16:07:49,934::vmchannels::178::vds::(run) Starting VM channels listener > thread. 
> MainThread::INFO::2015-09-02 > 16:07:49,935::protocoldetector::172::vds.MultiProtocolAcceptor::(__init__) > Listening at 0.0.0.0:54321 > MainThread::DEBUG::2015-09-02 > 16:07:50,063::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) > Adding detector <rpc.bindingxmlrpc.XmlDetector instance at 0x7fc2f00dc440> > storageRefresh::DEBUG::2015-09-02 > 16:07:50,063::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > storageRefresh::DEBUG::2015-09-02 > 16:07:50,080::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.hba.rescan) > storageRefresh::DEBUG::2015-09-02 > 16:07:50,081::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > storageRefresh::DEBUG::2015-09-02 > 16:07:50,081::hba::56::Storage.HBA::(rescan) Starting scan > storageRefresh::DEBUG::2015-09-02 > 16:07:50,081::supervdsm::76::SuperVdsmProxy::(_connect) Trying to connect > to Super Vdsm > MainThread::DEBUG::2015-09-02 > 16:07:50,157::protocoldetector::199::vds.MultiProtocolAcceptor::(add_detector) > Adding detector <yajsonrpc.stompreactor.StompDetector instance at > 0x7fc2e01332d8> > BindingXMLRPC::INFO::2015-09-02 > 16:07:50,158::bindingxmlrpc::62::vds::(threaded_start) XMLRPC server running > MainThread::DEBUG::2015-09-02 > 16:07:50,158::schedule::98::Scheduler::(start) Starting scheduler > periodic-sched > periodic-sched::DEBUG::2015-09-02 > 16:07:50,159::schedule::142::Scheduler::(_run) started > MainThread::DEBUG::2015-09-02 > 16:07:50,159::executor::69::Executor::(start) Starting executor > MainThread::DEBUG::2015-09-02 > 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/0 > periodic/0::DEBUG::2015-09-02 > 16:07:50,159::executor::171::Executor::(_run) Worker started > MainThread::DEBUG::2015-09-02 > 16:07:50,159::executor::157::Executor::(__init__) Starting worker periodic/1 > periodic/1::DEBUG::2015-09-02 > 16:07:50,160::executor::171::Executor::(_run) Worker started > MainThread::DEBUG::2015-09-02 > 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/2 > periodic/2::DEBUG::2015-09-02 > 16:07:50,160::executor::171::Executor::(_run) Worker started > MainThread::DEBUG::2015-09-02 > 16:07:50,160::executor::157::Executor::(__init__) Starting worker periodic/3 > periodic/3::DEBUG::2015-09-02 > 16:07:50,160::executor::171::Executor::(_run) Worker started > MainThread::DEBUG::2015-09-02 > 16:07:50,160::libvirtconnection::160::root::(get) trying to connect libvirt > MainThread::DEBUG::2015-09-02 > 16:07:50,163::periodic::157::virt.periodic.Operation::(start) starting > operation VmDispatcher(<class 'virt.periodic.UpdateVolumes'>) > MainThread::DEBUG::2015-09-02 > 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting > operation VmDispatcher(<class 'virt.periodic.NumaInfoMonitor'>) > MainThread::DEBUG::2015-09-02 > 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting > operation VmDispatcher(<class 'virt.periodic.BlockjobMonitor'>) > MainThread::DEBUG::2015-09-02 > 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting > operation <virt.sampling.VMBulkSampler object at 0x7fc2e0151d10> > MainThread::DEBUG::2015-09-02 > 16:07:50,164::periodic::157::virt.periodic.Operation::(start) starting > operation VmDispatcher(<class 'virt.periodic.DriveWatermarkMonitor'>) > storageRefresh::DEBUG::2015-09-02 > 16:07:50,167::hba::62::Storage.HBA::(rescan) Scan finished > storageRefresh::DEBUG::2015-09-02 > 
16:07:50,167::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > storageRefresh::DEBUG::2015-09-02 > 16:07:50,167::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n > /usr/sbin/multipath (cwd None) > storageRefresh::DEBUG::2015-09-02 > 16:07:50,513::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = > ''; <rc> = 0 > storageRefresh::DEBUG::2015-09-02 > 16:07:50,513::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 > (cwd None) > Reactor thread::INFO::2015-09-02 > 16:07:50,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) > Accepting connection from 127.0.0.1:56311 > Reactor thread::DEBUG::2015-09-02 > 16:07:50,596::protocoldetector::82::ProtocolDetector.Detector::(__init__) > Using required_size=11 > Reactor thread::INFO::2015-09-02 > 16:07:50,596::protocoldetector::118::ProtocolDetector.Detector::(handle_read) > Detected protocol xml from 127.0.0.1:56311 > Reactor thread::DEBUG::2015-09-02 > 16:07:50,596::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over > http detected from ('127.0.0.1', 56311) > BindingXMLRPC::INFO::2015-09-02 > 16:07:50,596::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting > request handler for 127.0.0.1:56311 > Thread-13::INFO::2015-09-02 > 16:07:50,597::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56311 started > Thread-13::DEBUG::2015-09-02 > 16:07:50,597::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call > getHardwareInfo with () {} > Thread-13::DEBUG::2015-09-02 > 16:07:50,597::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo > with {'status': {'message': 'Recovering from crash or Initializing', > 'code': 99}} > Thread-13::INFO::2015-09-02 > 16:07:50,599::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56311 stopped > Reactor thread::INFO::2015-09-02 > 16:07:51,607::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) > Accepting connection from 127.0.0.1:56312 > Reactor thread::DEBUG::2015-09-02 > 16:07:51,613::protocoldetector::82::ProtocolDetector.Detector::(__init__) > Using required_size=11 > Reactor thread::INFO::2015-09-02 > 16:07:51,613::protocoldetector::118::ProtocolDetector.Detector::(handle_read) > Detected protocol xml from 127.0.0.1:56312 > Reactor thread::DEBUG::2015-09-02 > 16:07:51,613::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over > http detected from ('127.0.0.1', 56312) > BindingXMLRPC::INFO::2015-09-02 > 16:07:51,613::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting > request handler for 127.0.0.1:56312 > Thread-14::INFO::2015-09-02 > 16:07:51,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56312 started > Thread-14::DEBUG::2015-09-02 > 16:07:51,614::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call > getHardwareInfo with () {} > Thread-14::DEBUG::2015-09-02 > 16:07:51,614::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo > with {'status': {'message': 'Recovering from crash or Initializing', > 'code': 99}} > Thread-14::INFO::2015-09-02 > 16:07:51,615::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56312 stopped > storageRefresh::DEBUG::2015-09-02 > 16:07:51,924::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) > Operation 'lvm invalidate operation' got the operation mutex > storageRefresh::DEBUG::2015-09-02 > 
16:07:51,926::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) > Operation 'lvm invalidate operation' released the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) > Operation 'lvm invalidate operation' got the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) > Operation 'lvm invalidate operation' released the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) > Operation 'lvm invalidate operation' got the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) > Operation 'lvm invalidate operation' released the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > storageRefresh::DEBUG::2015-09-02 > 16:07:51,926::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm > reload operation' got the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:51,927::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n > /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 > obtain_device_list_from_udev=0 filter = [ > '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', > '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = > 0 } ' --noheadings --units b --nosuffix --separator '|' > --ignoreskippedcluster -o > uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size > (cwd None) > storageRefresh::DEBUG::2015-09-02 > 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' > WARNING: lvmetad is running but disabled. Restart lvmetad before enabling > it!\n'; <rc> = 0 > storageRefresh::DEBUG::2015-09-02 > 16:07:52,341::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm > reload operation' released the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:52,341::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' got the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:52,341::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n > /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 > obtain_device_list_from_udev=0 filter = [ > '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', > '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = > 0 } ' --noheadings --units b --nosuffix --separator '|' > --ignoreskippedcluster -o > uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name > (cwd None) > storageRefresh::DEBUG::2015-09-02 > 16:07:52,405::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' > WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling > it!\n'; <rc> = 0 > storageRefresh::DEBUG::2015-09-02 > 16:07:52,405::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' released the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:52,406::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n > /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 > obtain_device_list_from_udev=0 filter = [ > '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', > '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = > 0 } ' --noheadings --units b --nosuffix --separator '|' > --ignoreskippedcluster -o > uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) > storageRefresh::DEBUG::2015-09-02 > 16:07:52,458::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' > WARNING: lvmetad is running but disabled. Restart lvmetad before enabling > it!\n'; <rc> = 0 > storageRefresh::DEBUG::2015-09-02 > 16:07:52,458::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' got the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:52,459::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n > /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 > obtain_device_list_from_udev=0 filter = [ > '\''a|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', > '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = > 0 } ' --noheadings --units b --nosuffix --separator '|' > --ignoreskippedcluster -o > uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name > (cwd None) > storageRefresh::DEBUG::2015-09-02 > 16:07:52,491::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' > WARNING: lvmetad is running but disabled. 
Restart lvmetad before enabling > it!\n'; <rc> = 0 > storageRefresh::DEBUG::2015-09-02 > 16:07:52,491::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' released the operation mutex > storageRefresh::DEBUG::2015-09-02 > 16:07:52,491::hsm::373::Storage.HSM::(storageRefresh) HSM is ready > Reactor thread::INFO::2015-09-02 > 16:07:52,624::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) > Accepting connection from 127.0.0.1:56313 > Reactor thread::DEBUG::2015-09-02 > 16:07:52,629::protocoldetector::82::ProtocolDetector.Detector::(__init__) > Using required_size=11 > Reactor thread::INFO::2015-09-02 > 16:07:52,629::protocoldetector::118::ProtocolDetector.Detector::(handle_read) > Detected protocol xml from 127.0.0.1:56313 > Reactor thread::DEBUG::2015-09-02 > 16:07:52,629::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over > http detected from ('127.0.0.1', 56313) > BindingXMLRPC::INFO::2015-09-02 > 16:07:52,629::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting > request handler for 127.0.0.1:56313 > Thread-15::INFO::2015-09-02 > 16:07:52,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56313 started > Thread-15::DEBUG::2015-09-02 > 16:07:52,630::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call > getHardwareInfo with () {} > Thread-15::DEBUG::2015-09-02 > 16:07:52,719::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo > with {'status': {'message': 'Done', 'code': 0}, 'info': > {'systemProductName': 'System Product Name', 'systemSerialNumber': 'System > Serial Number', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': > 'System Version', 'systemUUID': '267A6B80-D7DA-11DD-81CF-C860009B3CD9', > 'systemManufacturer': 'System manufacturer'}} > Thread-15::INFO::2015-09-02 > 16:07:52,721::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56313 stopped > Reactor thread::INFO::2015-09-02 > 16:07:52,730::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) > Accepting connection from 127.0.0.1:56314 > Reactor thread::DEBUG::2015-09-02 > 16:07:52,735::protocoldetector::82::ProtocolDetector.Detector::(__init__) > Using required_size=11 > Reactor thread::INFO::2015-09-02 > 16:07:52,735::protocoldetector::118::ProtocolDetector.Detector::(handle_read) > Detected protocol xml from 127.0.0.1:56314 > Reactor thread::DEBUG::2015-09-02 > 16:07:52,735::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over > http detected from ('127.0.0.1', 56314) > BindingXMLRPC::INFO::2015-09-02 > 16:07:52,735::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting > request handler for 127.0.0.1:56314 > Thread-16::INFO::2015-09-02 > 16:07:52,735::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56314 started > Thread-16::DEBUG::2015-09-02 > 16:07:52,736::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] > Thread-16::DEBUG::2015-09-02 > 16:07:52,736::task::595::Storage.TaskManager.Task::(_updateState) > Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state init -> > state preparing > Thread-16::INFO::2015-09-02 > 16:07:52,736::logUtils::48::dispatcher::(wrapper) Run and protect: > connectStorageServer(domType=3, > spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': > '57bc98c0-560f-4e61-9d86-df92ad468d3b', 'connection': '192.168.1.50', > 'iqn': 'iqn.2015-08.openSuse.wodel:target00', 'portal': '1', 'user': > 'iscsiuser', 'password': '********', 'port': '3260'}], options=None) > 
Thread-16::DEBUG::2015-09-02 > 16:07:52,737::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p > 192.168.1.50:3260,1 --op=new (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:52,789::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm > -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p > 192.168.1.50:3260,1 -n node.session.auth.authmethod -v '****' > --op=update (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:52,811::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:52,812::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm > -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p > 192.168.1.50:3260,1 -n node.session.auth.username -v '****' > --op=update (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:52,846::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:52,847::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /sbin/iscsiadm > -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p > 192.168.1.50:3260,1 -n node.session.auth.password -v '****' > --op=update (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:52,868::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m iface -I default (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:52,905::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::INFO::2015-09-02 > 16:07:52,905::iscsi::564::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI > iface.net_ifacename not provided. Skipping. 
> Thread-16::DEBUG::2015-09-02 > 16:07:52,906::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p > 192.168.1.50:3260,1 -l (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:53,027::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:53,028::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m node -T iqn.2015-08.openSuse.wodel:target00 -I default -p > 192.168.1.50:3260,1 -n node.startup -v manual --op=update (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:53,088::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:53,088::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 > (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.sdc.refreshStorage) > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.iscsi.rescan) > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, > this will take up to 30 seconds > Thread-16::DEBUG::2015-09-02 > 16:07:53,182::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m session -R (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:53,229::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > Thread-16::DEBUG::2015-09-02 > 16:07:53,229::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.hba.rescan) > Thread-16::DEBUG::2015-09-02 > 16:07:53,229::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > Thread-16::DEBUG::2015-09-02 > 16:07:53,229::hba::56::Storage.HBA::(rescan) Starting scan > Thread-16::DEBUG::2015-09-02 > 16:07:53,300::hba::62::Storage.HBA::(rescan) Scan finished > Thread-16::DEBUG::2015-09-02 > 16:07:53,300::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > Thread-16::DEBUG::2015-09-02 > 16:07:53,300::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n > /usr/sbin/multipath (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:53,435::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = > ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:53,435::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 > (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:53,919::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:53,921::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) > Operation 'lvm invalidate operation' got the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:53,921::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) > Operation 'lvm invalidate operation' released the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:53,922::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) > Operation 'lvm invalidate operation' got the operation mutex > Thread-16::DEBUG::2015-09-02 > 
16:07:53,922::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) > Operation 'lvm invalidate operation' released the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:53,922::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) > Operation 'lvm invalidate operation' got the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:53,922::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) > Operation 'lvm invalidate operation' released the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:53,922::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > Thread-16::DEBUG::2015-09-02 > 16:07:53,922::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' got the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:53,923::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n > /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 > obtain_device_list_from_udev=0 filter = [ > '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', > '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = > 0 } ' --noheadings --units b --nosuffix --separator '|' > --ignoreskippedcluster -o > uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name > (cwd None) > Thread-16::DEBUG::2015-09-02 > 16:07:54,058::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' > WARNING: lvmetad is running but disabled. Restart lvmetad before enabling > it!\n'; <rc> = 0 > Thread-16::DEBUG::2015-09-02 > 16:07:54,059::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' released the operation mutex > Thread-16::DEBUG::2015-09-02 > 16:07:54,059::hsm::2418::Storage.HSM::(__prefetchDomains) Found SD uuids: > ('8b25f3be-7574-4f7a-8851-363129704e52',) > Thread-16::DEBUG::2015-09-02 > 16:07:54,059::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: > {8b25f3be-7574-4f7a-8851-363129704e52: storage.blockSD.findDomain} > Thread-16::INFO::2015-09-02 > 16:07:54,059::logUtils::51::dispatcher::(wrapper) Run and protect: > connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': > '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} > Thread-16::DEBUG::2015-09-02 > 16:07:54,059::task::1191::Storage.TaskManager.Task::(prepare) > Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::finished: {'statuslist': > [{'status': 0, 'id': '57bc98c0-560f-4e61-9d86-df92ad468d3b'}]} > Thread-16::DEBUG::2015-09-02 > 16:07:54,059::task::595::Storage.TaskManager.Task::(_updateState) > Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::moving from state preparing -> > state finished > Thread-16::DEBUG::2015-09-02 > 16:07:54,060::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > Owner.releaseAll requests {} resources {} > Thread-16::DEBUG::2015-09-02 > 16:07:54,060::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > Owner.cancelAll requests {} > Thread-16::DEBUG::2015-09-02 > 16:07:54,060::task::993::Storage.TaskManager.Task::(_decref) > Task=`c4a18001-912b-47dc-9713-7d50e5133b59`::ref 0 aborting False > Thread-16::INFO::2015-09-02 > 16:07:54,062::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56314 stopped > Reactor thread::INFO::2015-09-02 > 16:07:54,070::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) > Accepting 
connection from 127.0.0.1:56316 > Reactor thread::DEBUG::2015-09-02 > 16:07:54,075::protocoldetector::82::ProtocolDetector.Detector::(__init__) > Using required_size=11 > Reactor thread::INFO::2015-09-02 > 16:07:54,076::protocoldetector::118::ProtocolDetector.Detector::(handle_read) > Detected protocol xml from 127.0.0.1:56316 > Reactor thread::DEBUG::2015-09-02 > 16:07:54,076::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over > http detected from ('127.0.0.1', 56316) > BindingXMLRPC::INFO::2015-09-02 > 16:07:54,076::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting > request handler for 127.0.0.1:56316 > Thread-17::INFO::2015-09-02 > 16:07:54,076::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request > handler for 127.0.0.1:56316 started > Thread-17::DEBUG::2015-09-02 > 16:07:54,077::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1] > Thread-17::DEBUG::2015-09-02 > 16:07:54,077::task::595::Storage.TaskManager.Task::(_updateState) > Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::moving from state init -> > state preparing > Thread-17::INFO::2015-09-02 > 16:07:54,077::logUtils::48::dispatcher::(wrapper) Run and protect: > prepareImage(sdUUID='8b25f3be-7574-4f7a-8851-363129704e52', > spUUID='00000000-0000-0000-0000-000000000000', > imgUUID='bf3bdae1-7318-4443-a19b-7371de30b982', > leafUUID='cbb10cf0-9600-465e-aed9-412f7157706b') > Thread-17::DEBUG::2015-09-02 > 16:07:54,077::resourceManager::198::Storage.ResourceManager.Request::(__init__) > ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Request > was made in '/usr/share/vdsm/storage/hsm.py' line '3194' at 'prepareImage' > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::resourceManager::542::Storage.ResourceManager::(registerResource) > Trying to register resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' > for lock type 'shared' > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::resourceManager::601::Storage.ResourceManager::(registerResource) > Resource 'Storage.8b25f3be-7574-4f7a-8851-363129704e52' is free. 
Now > locking as 'shared' (1 active user) > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::resourceManager::238::Storage.ResourceManager.Request::(grant) > ResName=`Storage.8b25f3be-7574-4f7a-8851-363129704e52`ReqID=`fc59b8b4-51c5-4a15-9716-aedbb6de62e6`::Granted > request > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::task::827::Storage.TaskManager.Task::(resourceAcquired) > Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::_resourcesAcquired: > Storage.8b25f3be-7574-4f7a-8851-363129704e52 (shared) > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::task::993::Storage.TaskManager.Task::(_decref) > Task=`7936300e-8a1a-47f5-83c4-16ed19853e36`::ref 1 aborting False > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.sdc.refreshStorage) > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.iscsi.rescan) > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, > this will take up to 30 seconds > Thread-17::DEBUG::2015-09-02 > 16:07:54,078::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n > /sbin/iscsiadm -m session -R (cwd None) > Thread-17::DEBUG::2015-09-02 > 16:07:54,130::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > Thread-17::DEBUG::2015-09-02 > 16:07:54,130::misc::733::Storage.SamplingMethod::(__call__) Trying to enter > sampling method (storage.hba.rescan) > Thread-17::DEBUG::2015-09-02 > 16:07:54,130::misc::736::Storage.SamplingMethod::(__call__) Got in to > sampling method > Thread-17::DEBUG::2015-09-02 > 16:07:54,130::hba::56::Storage.HBA::(rescan) Starting scan > Thread-17::DEBUG::2015-09-02 > 16:07:54,197::hba::62::Storage.HBA::(rescan) Scan finished > Thread-17::DEBUG::2015-09-02 > 16:07:54,197::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > Thread-17::DEBUG::2015-09-02 > 16:07:54,197::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n > /usr/sbin/multipath (cwd None) > Thread-17::DEBUG::2015-09-02 > 16:07:54,298::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = > ''; <rc> = 0 > Thread-17::DEBUG::2015-09-02 > 16:07:54,299::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 > (cwd None) > Thread-17::DEBUG::2015-09-02 > 16:07:54,307::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 > Thread-17::DEBUG::2015-09-02 > 16:07:54,309::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) > Operation 'lvm invalidate operation' got the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,310::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) > Operation 'lvm invalidate operation' released the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,310::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) > Operation 'lvm invalidate operation' got the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,310::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) > Operation 'lvm invalidate operation' released the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,310::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) > Operation 'lvm invalidate operation' got the operation mutex > Thread-17::DEBUG::2015-09-02 > 
16:07:54,310::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) > Operation 'lvm invalidate operation' released the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,310::misc::743::Storage.SamplingMethod::(__call__) Returning last > result > Thread-17::DEBUG::2015-09-02 > 16:07:54,310::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' got the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,312::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n > /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] > ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 > obtain_device_list_from_udev=0 filter = [ > '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', > '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = > 0 } ' --noheadings --units b --nosuffix --separator '|' > --ignoreskippedcluster -o > uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name > 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None) > Thread-17::DEBUG::2015-09-02 > 16:07:54,478::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' > WARNING: lvmetad is running but disabled. Restart lvmetad before enabling > it!\n'; <rc> = 0 > Thread-17::DEBUG::2015-09-02 > 16:07:54,478::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' released the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,479::persistentDict::192::Storage.PersistentDict::(__init__) > Created a persistent dict with LvMetadataRW backend > Thread-17::DEBUG::2015-09-02 > 16:07:54,479::blockSD::337::Storage.Misc.excCmd::(readlines) /usr/bin/dd > iflag=direct skip=0 bs=2048 > if=/dev/8b25f3be-7574-4f7a-8851-363129704e52/metadata count=1 (cwd None) > Thread-17::DEBUG::2015-09-02 > 16:07:54,553::blockSD::337::Storage.Misc.excCmd::(readlines) SUCCESS: <err> > = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.00107202 > s, 1.9 MB/s\n'; <rc> = 0 > Thread-17::DEBUG::2015-09-02 > 16:07:54,553::misc::260::Storage.Misc::(validateDDBytes) err: ['1+0 records > in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.00107202 s, 1.9 > MB/s'], size: 2048 > Thread-17::DEBUG::2015-09-02 > 16:07:54,553::persistentDict::234::Storage.PersistentDict::(refresh) read > lines (LvMetadataRW)=[] > Thread-17::DEBUG::2015-09-02 > 16:07:54,553::persistentDict::252::Storage.PersistentDict::(refresh) Empty > metadata > Thread-17::DEBUG::2015-09-02 > 16:07:54,553::persistentDict::192::Storage.PersistentDict::(__init__) > Created a persistent dict with VGTagMetadataRW backend > Thread-17::DEBUG::2015-09-02 > 16:07:54,554::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation > 'lvm invalidate operation' got the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,554::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation > 'lvm invalidate operation' released the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,554::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation > 'lvm invalidate operation' got the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,554::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation > 'lvm invalidate operation' released the operation mutex > Thread-17::DEBUG::2015-09-02 > 16:07:54,554::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm > reload operation' got the operation 
mutex
> Thread-17::DEBUG::2015-09-02 16:07:54,554::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/33000000100000001|/dev/mapper/Hitachi_HDS721010DLE630_MSK523Y209VK0B|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 8b25f3be-7574-4f7a-8851-363129704e52 (cwd None)
> Thread-17::DEBUG::2015-09-02 16:07:54,685::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
> Thread-17::DEBUG::2015-09-02 16:07:54,686::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
> Thread-17::DEBUG::2015-09-02 16:07:54,686::persistentDict::234::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'LOGBLKSIZE=512', 'PHYBLKSIZE=4096', 'POOL_UUID=', u'PV0=pv:33000000100000001,uuid:kTaQQh-4LCD-OghQ-cP5D-R7MM-aj6e-kTdQf0,pestart:0,pecount:397,mapoffset:0', 'ROLE=Regular', 'SDUUID=8b25f3be-7574-4f7a-8851-363129704e52', 'TYPE=ISCSI', 'VERSION=3', 'VGUUID=tJKiwH-Cn7v-QCxd-YQrg-MUxA-fbdC-kdga8m', '_SHA_CKSUM=4a100ce5195650f43971d849835a8b3d8c0343da']
> Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_imageNS'
> Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_volumeNS'
> Thread-17::DEBUG::2015-09-02 16:07:54,687::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '8b25f3be-7574-4f7a-8851-363129704e52_lvmActivationNS'
> Thread-17::DEBUG::2015-09-02 16:07:54,687::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
>
> What should I do to bring the VM engine back?
>
> 2015-09-02 16:24 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
>>
>> On Wed, Sep 2, 2015 at 10:49 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
>>
>>> I will try this afternoon to do this, but just to clarify something.
>>>
>>> The hosted_engine setup creates its own DC, the hosted_DC, which contains the hosted-engine storage domain, am I correct?
>>
>> No, ovirt-hosted-engine-setup doesn't create a special datacenter.
>> The default is to add the host to the Default datacenter in the default cluster.
>> You could choose a different one from ovirt-hosted-engine-setup; simply import the hosted-engine storage domain into the datacenter of the cluster you selected.
>>
>> In setup there is a question like this:
>> Local storage datacenter name is an internal name
>> and currently will not be shown in engine's admin UI.
>> Please enter local datacenter name
>> which asks about 'Local storage datacenter', which is basically the description we were using for the storage pool.
>>
>>> If yes, where will I import the hosted-engine storage domain, into the default DC?
>>>
>>> 2015-09-02 8:47 GMT+01:00 Roy Golan <rgolan@redhat.com>:
>>>
>>>> On Wed, Sep 2, 2015 at 12:51 AM, wodel youchi <wodel.youchi@gmail.com> wrote:
>>>>
>>>>> I could finally terminate the installation, but still no VM engine on the webui.
>>>>>
>>>>> I added a data domain, the default DC is up, but no engine VM.
>>>>
>>>> Good, now you need to import the HostedEngine storage domain. Try to go to
>>>> Storage -> Import Domain and put the path to the domain which you used in the hosted-engine setup.
>>>>
>>>> After the domain is imported, the engine will be imported automatically.
>>>>
>>>> This whole process will become automatic eventually. (A patch is currently being written.)
>>>>
>>>>> 2015-09-01 21:22 GMT+01:00 wodel youchi <wodel.youchi@gmail.com>:
>>>>>
>>>>>> Something mounted on /rhev/data-center/mnt? I'm not sure.
>>>>>>
>>>>>> There were directories, and under these directories there were other directories (dom_md, ha_agent, images), and under them there were symbolic links to devices under /dev (ids, inbox, leases, etc.); the devices pointed to the LVM partitions created by the setup.
>>>>>>
>>>>>> But the mount command didn't show anything, unlike NFS: when I used it, the mount and df commands showed the engine VM's mount point.
>>>>>>
>>>>>> 2015-09-01 20:16 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
>>>>>>
>>>>>>> On Tue, Sep 1, 2015 at 7:29 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> After removing the -x from the sql files, the installation terminated successfully, but ...
>>>>>>>>
>>>>>>>> I had a problem with vdsm, and an error about permission denied with the KVM module, so I restarted my machine.
>>>>>>>> After the reboot the ovirt-ha-agent stops, complaining that the vm.conf file is not present in /var/run/ovirt-hosted-engine-ha.
>>>>>>>>
>>>>>>>> And the mount command doesn't show any iscsi mount; the disk is detected via fdisk -l, and the lvs command returns all the logical volumes created.
>>>>>>>>
>>>>>>>> I think it's a mount problem, but since there are many LVs, I don't know how to mount them manually.
>>>>>>>
>>>>>>> Do you have something mounted under /rhev/data-center/mnt ?
>>>>>>> If not you probably hit this bug:
>>>>>>> https://bugzilla.redhat.com/1258465
>>>>>>>
>>>>>>>>   LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>>>>>>>>   3b894e23-429d-43bf-b6cd-6427a387799a 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
>>>>>>>>   be78c0fd-52bf-445a-9555-64061029c2d9 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
>>>>>>>>   c9f74ffc-2eba-40a9-9c1c-f3b6d8e12657 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 40,00g
>>>>>>>>   feede664-5754-4ca2-aeb3-af7aff32ed42 5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
>>>>>>>>   ids                                  5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-ao---- 128,00m
>>>>>>>>   inbox                                5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
>>>>>>>>   leases                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 2,00g
>>>>>>>>   master                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 1,00g
>>>>>>>>   metadata                             5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 512,00m
>>>>>>>>   outbox                               5445bbee-bb3a-4e6d-9614-a0c9378fe078 -wi-a----- 128,00m
>>>>>>>>
>>>>>>>> 2015-09-01 16:57 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
>>>>>>>>
>>>>>>>>> On Tue, Sep 1, 2015 at 5:08 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi again,
>>>>>>>>>>
>>>>>>>>>> I tried with the snapshot repository, but I am having this error while executing engine-setup:
>>>>>>>>>>
>>>>>>>>>> [ INFO ] Creating/refreshing Engine database schema
>>>>>>>>>> [ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
>>>>>>>>>> [ INFO ] DNF Performing DNF transaction rollback
>>>>>>>>>> [ INFO ] Rolling back database schema
>>>>>>>>>> [ INFO ] Clearing Engine database engine
>>>>>>>>>> [ ERROR ] Engine database rollback failed: must be owner of schema pg_catalog
>>>>>>>>>> [ INFO ] Stage: Clean up
>>>>>>>>>>          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log
>>>>>>>>>> [ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20150901153939-setup.conf'
>>>>>>>>>> [ INFO ] Stage: Pre-termination
>>>>>>>>>> [ INFO ] Stage: Termination
>>>>>>>>>> [ ERROR ] Execution of setup failed
>>>>>>>>>>
>>>>>>>>>> and in the deployment log I have these errors:
>>>>>>>>>>
>>>>>>>>>> Saving custom users permissions on database objects...
>>>>>>>>>> upgrade script detected a change in Config, View or Stored Procedure...
>>>>>>>>>> Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
>>>>>>>>>> Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
>>>>>>>>>> Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
>>>>>>>>>> Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
>>>>>>>>>> Running upgrade shell script '/usr/share/ovirt-engine/dbscripts/pre_upgrade/0040_extend_installed_by_column.sql'...
>>>>>>>>>>
>>>>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:941 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20150901153202-w0ds25.log', '-c', 'apply']
>>>>>>>>>> stderr:
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 1: /bin : is a directory
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 2: DATABASE : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 4: This : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 5: The : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 6: Add : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: syntax error near unexpected token « ( »
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql: line 7: ` Update section (w/o overriding current value)'
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 1: /bin : is a directory
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 2: Currently : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 3: This : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 4: This : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 5: So, : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 6: Since : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 7: bin/ : is a directory
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 9: update : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 10: syntax error near unexpected token « ( »
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql: line 10: `and exists(select 1 from schema_version where version = '03010250' and current = true);'
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 1: -- : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 2: syntax error near unexpected token « ( »
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql: line 2: `CREATE FUNCTION __temp__0030_add_materialized_views_table()'
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 1: -- : command not found
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 2: syntax error near unexpected token « ( »
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql: line 2: `select fn_db_add_column('materialized_views', 'min_refresh_rate_in_sec', 'int default 0');'
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: syntax error near unexpected token « ( »
>>>>>>>>>> /usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql: line 1: `ALTER TABLE schema_version ALTER COLUMN installed_by TYPE varchar(63);'
>>>>>>>>>>
>>>>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.context context._executeMethod:156 method exception
>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in _executeMethod
>>>>>>>>>>     method['method']()
>>>>>>>>>>   File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 291, in _misc
>>>>>>>>>>     oenginecons.EngineDBEnv.PGPASS_FILE
>>>>>>>>>>   File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 946, in execute
>>>>>>>>>>     command=args[0],
>>>>>>>>>> RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
>>>>>>>>>> 2015-09-01 15:39:35 ERROR otopi.context context._executeMethod:165 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
>>>>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.transaction transaction.abort:134 aborting 'DNF Transaction'
>>>>>>>>>> 2015-09-01 15:39:35 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:90 DNF Closing transaction with rollback
>>>>>>>>>> 2015-09-01 15:39:35 INFO otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:94 DNF Performing DNF transaction rollback
>>>>>>>>>
>>>>>>>>> It was an issue with package building: all the sql files were executable, and that caused the issue.
>>>>>>>>> We fixed it and tomorrow's build should be OK. If you prefer to continue right now, simply recursively remove the x attribute from each sql file under /usr/share/ovirt-engine/dbscripts.
>>>>>>>>>
>>>>>>>>>> 2015-09-01 13:04 GMT+01:00 Simone Tiraboschi <stirabos@redhat.com>:
>>>>>>>>>>
>>>>>>>>>>> On Tue, Sep 1, 2015 at 12:40 PM, Yedidyah Bar David <didi@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Sep 1, 2015 at 1:25 PM, wodel youchi <wodel.youchi@gmail.com> wrote:
>>>>>>>>>>>> > Hi,
>>>>>>>>>>>> >
>>>>>>>>>>>> > I am using the repo of the 3.6 version
>>>>>>>>>>>> > (http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/rpm/fc22/)
>>>>>>>>>>>> >
>>>>>>>>>>>> > I installed the ovirt-hosted-engine-setup with its dependencies, and the
>>>>>>>>>>>> > ovirt-hosted-engine-ha package is one of them.
>>>>>>>>>>>> > >>>>>>>>>>>> > Correction: The problem with this version >>>>>>>>>>>> > >>>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150819082341.20150819082338.git183a4ff.fc22.noarch.rpm, >>>>>>>>>>>> > is that after the installation is done, the service >>>>>>>>>>>> ovirt-ha-agent crashes >>>>>>>>>>>> > after being started, see the bug : >>>>>>>>>>>> > >>>>>>>>>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1254745 >>>>>>>>>>>> > >>>>>>>>>>>> > A new version was produced, I downloaded it manually a >>>>>>>>>>>> few days ago, this is >>>>>>>>>>>> > it : >>>>>>>>>>>> > >>>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm >>>>>>>>>>>> > >>>>>>>>>>>> > This one did correct the problem, but it's not present >>>>>>>>>>>> anymore on the >>>>>>>>>>>> > repository. >>>>>>>>>>>> >>>>>>>>>>>> Was this on ovirt-3.6-pre? >>>>>>>>>>>> >>>>>>>>>>>> ovirt-3.6-snapshot has a newer version. >>>>>>>>>>>> >>>>>>>>>>>> > >>>>>>>>>>>> > For Simone: yes I did added an NFS4 data domain, but no >>>>>>>>>>>> success so far, no >>>>>>>>>>>> > VM engine present. >>>>>>>>>>>> > >>>>>>>>>>>> > Regards. >>>>>>>>>>>> > >>>>>>>>>>>> > 2015-09-01 11:15 GMT+01:00 Simone Tiraboschi < >>>>>>>>>>>> stirabos@redhat.com>: >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> >> On Tue, Sep 1, 2015 at 11:46 AM, Yedidyah Bar David < >>>>>>>>>>>> didi@redhat.com> >>>>>>>>>>>> >> wrote: >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> On Tue, Sep 1, 2015 at 11:25 AM, wodel youchi < >>>>>>>>>>>> wodel.youchi@gmail.com> >>>>>>>>>>>> >>> wrote: >>>>>>>>>>>> >>> > Hi, >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > Another test of ovirt hosted-engine on FC22 using >>>>>>>>>>>> ovirt 3.6 Beta3. >>>>>>>>>>>> >>> > VM engine is also a FC22 >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > Problem: >>>>>>>>>>>> >>> > - No VM engine on webui >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> This is still not supported, see/follow [1]. >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> [1] >>>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1224889 >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> >> ? :-) >>>>>>>>>>>> >>>>>>>>>>>> Sorry :-( >>>>>>>>>>>> >>>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1160094 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> This is just about editing the VM from the web GUI but in >>>>>>>>>>> order to be able to edit the engine VM you should be able at least to find >>>>>>>>>>> the engine VM in the engine as it was also in 3.5 >>>>>>>>>>> >>>>>>>>>>> I'll try to reproduce verifying another patch >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> I see that all patches there are merged, but bug is in >>>>>>>>>>>> POST. >>>>>>>>>>>> >>>>>>>>>>>> >> >>>>>>>>>>>> >> Did you also try adding an additional storage domain >>>>>>>>>>>> for regular VMs? 
>>>>>>>>>>>> >> engine-VM will be shown on the engine only when you >>>>>>>>>>>> add at least one >>>>>>>>>>>> >> additional storage domain for regulars VM and the >>>>>>>>>>>> whole datacenter goes up: >>>>>>>>>>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1222010#c1 >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> > Test Environment >>>>>>>>>>>> >>> > Just two machines: >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > 1 - Machine 1 used as storage : >>>>>>>>>>>> >>> > - iscsi target with a raw file for the VM engine >>>>>>>>>>>> storage >>>>>>>>>>>> >>> > - NFS4 for other data domains >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > 2 - Machine 2 used as hypervisor >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > The installation went without problem, but as >>>>>>>>>>>> always, the VM engine is >>>>>>>>>>>> >>> > not >>>>>>>>>>>> >>> > present on the webui. >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > PS: >>>>>>>>>>>> >>> > 1- I gave the VM engine just 2Gb of memory since I >>>>>>>>>>>> don't have too much >>>>>>>>>>>> >>> > RAM >>>>>>>>>>>> >>> > on hypervisor, could that be the cause of the >>>>>>>>>>>> problem? >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> Shouldn't be related >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> > 2- This version of >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > >>>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113926.20150424113923.git7c14f4c.fc22.noarch.rpm >>>>>>>>>>>> >>> > package is causing the ovirt-ha-agent to crash, it >>>>>>>>>>>> was replaced with >>>>>>>>>>>> >>> > another >>>>>>>>>>>> >>> > which I still have >>>>>>>>>>>> >>> > >>>>>>>>>>>> >>> > >>>>>>>>>>>> ovirt-hosted-engine-ha-1.3.0-0.0.master.20150820064645.20150820064642.git02529e0.fc22.noarch.rpm, >>>>>>>>>>>> >>> > but it's not present on the repository, I had to >>>>>>>>>>>> update the package >>>>>>>>>>>> >>> > manually >>>>>>>>>>>> >>> > at the end of ovirt-hosted-engine-setup >>>>>>>>>>>> installation. >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> Not sure I follow. >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> What exact repo was used? >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> hosted-engine --deploy does not update/install >>>>>>>>>>>> packages for you (as >>>>>>>>>>>> >>> does engine-setup), >>>>>>>>>>>> >>> it's up to you to make sure what you want/need is >>>>>>>>>>>> installed prior to >>>>>>>>>>>> >>> running it. >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> Best, >>>>>>>>>>>> >>> -- >>>>>>>>>>>> >>> Didi >>>>>>>>>>>> >>> _______________________________________________ >>>>>>>>>>>> >>> Users mailing list >>>>>>>>>>>> >>> Users@ovirt.org >>>>>>>>>>>> >>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> > >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> Didi >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> >
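
In short, the two fixes that came out of this thread, as a minimal sketch (assuming the stock oVirt 3.6 paths; the find/chmod invocation below only illustrates Simone's advice and, like hosted-engine --vm-status, is not a command quoted from the thread):

    # Workaround for the packaging bug: strip the execute bit from the packaged
    # SQL scripts only, leaving the directories themselves traversable.
    find /usr/share/ovirt-engine/dbscripts -type f -name '*.sql' -exec chmod a-x {} +

    # After adding a regular data domain and importing hosted_storage via
    # Storage -> Import Domain, check the agent and engine VM state from the host:
    hosted-engine --vm-status
    vdsClient -s 0 getStorageDomainInfo 8b25f3be-7574-4f7a-8851-363129704e52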
participants (4)
- Roy Golan
- Simone Tiraboschi
- wodel youchi
- Yedidyah Bar David