Hosted engine setup shooting dirty pool

Or at least, refusing to mount a dirty pool.

I have 4.1 set up, configured and functional, currently wired up with two VM hosts and three Gluster hosts. It is configured with a (temporary) NFS data storage domain, with the end goal being two data domains on Gluster: one for the hosted engine, one for other VMs.

The issue is that `hosted-engine` sees any Gluster volume offered as dirty. (I have been creating the volumes via the command line right before attempting the hosted-engine migration; there is nothing in them at that stage.) I *think* what is happening is that ovirt-engine notices a newly created volume and has its way with it (visible in the GUI; the volume appears in the list), and the hosted-engine installer becomes upset about that. What I don't know is what to do about it. Relevant log lines are below.

The installer almost sounds like it is asking me to remove the UUID directory and whatnot, but I'm pretty sure that would just leave me with two problems instead of fixing the first one. I've considered attempting to wire this together in the DB, which also seems like a great way to break things. I've even thought of using a Gluster installation that oVirt knows nothing about, mainly as an experiment to see whether it would even work, but decided it doesn't matter: I can't deploy in that state anyway, and it doesn't actually get me any closer to getting this working.

I noticed several seemingly related bugs in the tracker, but the bulk of those were for past versions, and I saw nothing actionable from my end in the others.

So, can anyone spare a clue as to what is going wrong, and what to do about it?
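For reference, the pre-deploy volume creation looks roughly like this sketch; the `brick_args` helper is mine, and the hostnames, volume name, and brick path are placeholders for my environment (the commented `gluster` commands are the standard CLI and need a working trusted pool):

```shell
# Sketch only: brick_args is a hypothetical helper that builds the
# host:/path brick list; g1/g2/g3 and the paths are placeholders.
brick_args() {
  # brick_args BRICK_PATH HOST... -> "host1:BRICK_PATH host2:BRICK_PATH ..."
  _path=$1; shift
  _list=""
  for _h in "$@"; do _list="$_list $_h:$_path"; done
  printf '%s' "${_list# }"
}
# gluster volume create hosted_engine replica 3 $(brick_args /gluster/he/brick g1 g2 g3)
# gluster volume start hosted_engine
```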
-j

- - - - ovirt-hosted-engine-setup.log - - - -

2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:627 createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storagePoolConnection:717 connectStoragePool
2017-04-11 16:15:29 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/storage.py", line 963, in _misc
    self._storagePoolConnection()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/storage.py", line 725, in _storagePoolConnection
    message=status['status']['message'],
RuntimeError: Dirty Storage Domain: Cannot connect pool, already connected to another pool: ('ef5e4496-3095-40a8-89da-6847db67a4b9',) Please clean the storage device and try again
2017-04-11 16:15:29 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Misc configuration': Dirty Storage Domain: Cannot connect pool, already connected to another pool: ('ef5e4496-3095-40a8-89da-6847db67a4b9',) Please clean the storage device and try again
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2017-04-11 16:15:29 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback
Loaded plugins: fastestmirror
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-cockpit.xml''
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2017-04-11 16:15:29 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN
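(Aside: the pool UUID the installer is complaining about can be fished out of the log mechanically. A small sketch; the helper name is mine and the log path is whatever your setup run wrote, not something I found documented:)

```shell
# extract_pool_uuid LOGFILE
# Prints the storage-pool UUID(s) named in the "already connected to
# another pool" error, one per line. Hypothetical helper, not part of
# any oVirt tooling.
extract_pool_uuid() {
  grep -o "another pool: ('[0-9a-f-]\{36\}'" "$1" \
    | grep -o '[0-9a-f-]\{36\}' | sort -u
}
```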

Hi Jamie,

Are you trying to set up the hosted engine using the "hosted-engine --deploy" command, or are you trying to migrate an existing HE VM?

For hosted-engine setup you need to provide a clean storage domain that is not part of your 4.1 setup. This storage domain will be used for the hosted engine and will become visible in the UI once deployment of the hosted engine is complete. If your storage domain already appears in the UI, that means it is already connected to the storage pool and is not "clean".

Thanks,
Jenny

Hi Jenny,

Thanks for the response.

I'm using `hosted-engine --deploy`, yes. (Actually, the last few attempts have been with an answer file, but the responses are the same.)

I think I may have been unclear. I understand that it wants an unmolested SD. There just doesn't seem to be a path to providing one with an oVirt-managed Gluster cluster.

So I guess my question is how to provide that with an oVirt-managed Gluster installation. Or, put another way: how do I make oVirt/VDSM ignore a newly created Gluster SD so that `hosted-engine` can pick it up? I don't see any option to tell the Gluster cluster not to auto-discover volumes or similar, so as soon as I create one, the non-hosted engine picks it up. This happens within seconds; I vainly tried to race it by running the installer immediately.

This is why I mentioned dismissing the idea of using another Gluster installation, unattached to oVirt. That is the only way I could think of to give it a clean pool. (I dismissed it because I can't run this in production with that sort of dependency.)

Do I need to take this Gluster cluster out of oVirt control (delete the Gluster cluster from the oVirt GUI, recreate it outside of oVirt manually), install onto that, and then re-associate it in the GUI, or something similar?

-j

The Gluster cluster being detected in oVirt does not by itself make it a dirty storage domain. It looks like the Gluster volume was previously used as a storage domain and was not cleaned up. You can try mounting the Gluster volume and checking whether it has any content.

I'm a bit confused about the setup, though: do you already have an installation of oVirt engine that you use to manage the Gluster hosts? Are you deploying another engine (HE) that manages the same hosts, or using a Gluster volume from another installation?
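A minimal version of that content check, for anyone following along; the mountpoint, server, and volume names are placeholders, and the mount command (commented out) assumes glusterfs-fuse is installed:

```shell
# dir_state DIR -> "empty" or "dirty", depending on whether DIR has
# any entries (dotfiles included). Names below are placeholders.
dir_state() {
  [ -z "$(ls -A "$1" 2>/dev/null)" ] && echo empty || echo dirty
}
# mount -t glusterfs gluster1:/hosted_engine /mnt/he-check
# dir_state /mnt/he-check
```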
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
participants (3)

- Evgenia Tokar
- Jamie Lawrence
- Sahina Bose