[ovirt-users] obtain spm id (host id) from engine using api

joost at familiealbers.nl
Fri Dec 16 19:15:59 UTC 2016


Hi Nir, this is exactly what I am doing.

connectStoragePool(storagePoolUuid,hostid,'',masterStorageDomainUuid,masterVersion)
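
For reference, this is roughly how I make that call (a minimal sketch,
assuming vdsm's legacy xmlrpc bindings; the uuids and the host id are
placeholders):

    from vdsm import vdscli

    server = vdscli.connect()  # local vdsm

    storagePoolUuid = '144fb47d-b38c-4bb7-867b-373d7ba9f0a9'
    hostid = 2  # must match vds_spm_id in the engine db
    masterStorageDomainUuid = 'eeb8e812-4e69-469a-a07a-272ea3a79105'
    masterVersion = 3

    # the third argument is the unused scsi key, kept for compatibility
    res = server.connectStoragePool(storagePoolUuid, hostid, '',
                                    masterStorageDomainUuid, masterVersion)
    print(res['status'])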

What I want, however, is to be able to start VMs even when the 
engine is not available, and then make sure they keep running when the 
engine comes back online.

If the host ids mismatch, all kinds of errors pop up.
If the host ids I use when calling connectStoragePool match those in the 
engine db, my VMs keep on running.

It's an awkward setup, I know, but it would be really good if I could find 
out the host id the engine has in mind.
That way I can use this particular host id when calling 
connectStoragePool.

Also, another reason for wanting this functionality is that the engine 
might be unreachable from the hosts for a few hours, since my DCs 
sometimes run without network.




Nir Soffer wrote on 2016-12-16 17:48:
> On Fri, Dec 16, 2016 at 6:15 PM,  <joost at familiealbers.nl> wrote:
>> Hi Nir, thanks.
>>
>> I am actually after cases where the storage pool is not mounted and the
>> engine cannot reach vdsm.
>>
>>
>> I can mount it using the vdsm api, however I need to use a host id.
>> I can't find where to get that particular id from.
>>
>> At the moment, when I manually connect the storage pool (using the vdsm
>> api) I end up with the following errors as soon as the connection between
>> engine and vdsm is established.
>>
>> jsonrpc.Executor/6::ERROR::2016-12-16 16:01:57,131::dispatcher::77::Storage.Dispatcher::(wrapper) {'status': {'message': "Cannot perform action while storage pool is connected: ('hostId=1, newHostId=2',)", 'code': 325}}
>
> Not sure what you mean by storage pool not mounted.
>
> To work with storage, you need to:
> - connectStorageServer
> - connectStoragePool
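>
> For example, the first step via the legacy python bindings (a sketch;
> the connection parameters depend on your storage type - this assumes
> NFS, and the pool uuid is a placeholder):
>
>     from vdsm import vdscli
>
>     server = vdscli.connect()  # local vdsm
>
>     # storage type 1 is NFS; the connection dict fields vary per type
>     conn = {'id': '00000000-0000-0000-0000-000000000000',
>             'connection': 'my.nfs.server:/export/data',
>             'user': '', 'password': ''}
>     print(server.connectStorageServer(1, '<pool uuid>', [conn]))
>
> followed by connectStoragePool as in your own example.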
>
> You can see the commands engine sends when activating a host, and
> send the same commands from vdsClient or vdscli.py.
>
> If your host is connected to the storage, and sanlock has joined
> the lockspace, you can get the host id using sanlock apis.
>
> # sanlock client status
> daemon 82322b57-58f9-4b9a-9661-8489ab66bc86.voodoo6.tl
> p -1 helper
> p -1 listener
> p 4271
> p -1 status
> s 16fe0625-be29-4a77-81c5-1bc0e5267eea:1:/dev/16fe0625-be29-4a77-81c5-1bc0e5267eea/ids:0
> s eeb8e812-4e69-469a-a07a-272ea3a79105:1:/dev/eeb8e812-4e69-469a-a07a-272ea3a79105/ids:0
> r eeb8e812-4e69-469a-a07a-272ea3a79105:SDM:/dev/eeb8e812-4e69-469a-a07a-272ea3a79105/leases:1048576:7
> p 4271
>
> In this case I have 2 lockspaces and both use host_id = 1.
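>
> If you want this from a script, the sanlock python bindings should
> expose the same info (a sketch from memory, not verified - check
> help(sanlock) for the exact api on your version):
>
>     import sanlock
>
>     # one entry per lockspace this host has joined; each entry
>     # includes the host_id used in that lockspace
>     for ls in sanlock.get_lockspaces():
>         print("%s host_id=%s" % (ls["lockspace"], ls["host_id"]))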
>
> If you are not connected to storage on this host, you can use any
> available host_id.
>
> You can see which hosts are connected using:
>
> # sanlock client host_status -D
> lockspace 16fe0625-be29-4a77-81c5-1bc0e5267eea
> 1 timestamp 380593
>     last_check=380614
>     last_live=380614
>     last_req=0
>     owner_id=1
>     owner_generation=15
>     timestamp=380593
>     io_timeout=10
>     owner_name=82322b57-58f9-4b9a-9661-8489ab66bc86.voodoo6.tl
> lockspace eeb8e812-4e69-469a-a07a-272ea3a79105
> 1 timestamp 380592
>     last_check=380613
>     last_live=380613
>     last_req=0
>     owner_id=1
>     owner_generation=5
>     timestamp=380592
>     io_timeout=10
>     owner_name=82322b57-58f9-4b9a-9661-8489ab66bc86.voodoo6.tl
>
> In this case I have one host (host_id 1), connected to 2 lockspaces
> (each storage domain has one lockspace using the storage domain uuid).
>
> If you want to connect this host to engine, you should use the host id
> from the engine database.
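>
> For example (assuming the default "engine" database; the uuids are
> placeholders for your pool and host):
>
> engine=# select vds_spm_id from vds_spm_id_map
>          where storage_pool_id = '<pool uuid>' and vds_id = '<host uuid>';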
>
>>
>> Following this there is a cascade of errors: gluster restarts and
>> complains about quorum, etc.
>>
>>
>> If I redo the commands after reboot (still no connection between vdsm
>> and engine) and use host id 2 on this particular host, my problems are
>> resolved.
>>
>>
>>
>> Also, am I right in thinking that the values in the db do not change as
>> long as the hosts remain the same?
>>
>>
>> It would help me immensely to find out the host id before connecting to
>> the storage pool, and without needing to go into the db.
>>
>> thanks.
>>
>> Joost
>>
>>
>>
>>
>>
>>
>>
>>
>> Nir Soffer wrote on 2016-12-16 16:42:
>>
>>> On Fri, Dec 16, 2016 at 5:20 PM,  <joost at familiealbers.nl> wrote:
>>>>
>>>> In ovirt engine, the table vds_spm_id_map holds the ids used in SPM
>>>> election.
>>>>
>>>> engine_20150824095956=# select * from vds_spm_id_map ;
>>>>            storage_pool_id            | vds_spm_id |                vds_id
>>>> --------------------------------------+------------+--------------------------------------
>>>>  144fb47d-b38c-4bb7-867b-373d7ba9f0a9 |          1 | 313ed02c-8029-4fb3-ba1c-5b3c9902ddb1
>>>>  144fb47d-b38c-4bb7-867b-373d7ba9f0a9 |          2 | 7fdebf8a-1503-4b54-9681-0201ee330381
>>>>
>>>> These particular ids seem to be added when a vds is added to the
>>>> engine, or when the storage pool is first set up.
>>>>
>>>> I would like to be able to obtain this number (in my case generally
>>>> 1 or 2, as I have two hosts per dc / cluster) through an api, or even
>>>> better from the hosts themselves.
>>>>
>>>> When testing I connect to the storage pools using the api
>>>
>>>
>>> Do you mean vdsm api?
>>>
>>> You can get the host id using vdsClient:
>>>
>>> # vdsClient -s 0 getStoragePoolInfo fe307b9e-8f6b-4958-955a-0faeeae8b017
>>> name = No Description
>>> isoprefix =
>>> pool_status = connected
>>> lver = 7
>>> spm_id = 1
>>> master_uuid = eeb8e812-4e69-469a-a07a-272ea3a79105
>>> version = 4
>>> domains = 16fe0625-be29-4a77-81c5-1bc0e5267eea:Active,eeb8e812-4e69-469a-a07a-272ea3a79105:Active
>>> type = ISCSI
>>> master_ver = 3
>>> 16fe0625-be29-4a77-81c5-1bc0e5267eea = {'status': 'Active', 'diskfree': '97844723712', 'isoprefix': '', 'alerts': [], 'disktotal': '106568876032', 'version': 4}
>>> eeb8e812-4e69-469a-a07a-272ea3a79105 = {'status': 'Active', 'diskfree': '98918465536', 'isoprefix': '', 'alerts': [], 'disktotal': '106568876032', 'version': 4}
>>>
>>> If you have the vdsm source, you can use the new client:
>>>
>>> # contrib/jsonrpc StoragePool getInfo storagepoolID=fe307b9e-8f6b-4958-955a-0faeeae8b017
>>> {
>>>     "info": {
>>>         "name": "No Description",
>>>         "isoprefix": "",
>>>         "pool_status": "connected",
>>>         "lver": 7,
>>>         "spm_id": 1,
>>>         "master_uuid": "eeb8e812-4e69-469a-a07a-272ea3a79105",
>>>         "version": "4",
>>>         "domains": "16fe0625-be29-4a77-81c5-1bc0e5267eea:Active,eeb8e812-4e69-469a-a07a-272ea3a79105:Active",
>>>         "type": "ISCSI",
>>>         "master_ver": 3
>>>     },
>>>     "dominfo": {
>>>         "16fe0625-be29-4a77-81c5-1bc0e5267eea": {
>>>             "status": "Active",
>>>             "diskfree": "97844723712",
>>>             "isoprefix": "",
>>>             "alerts": [],
>>>             "disktotal": "106568876032",
>>>             "version": 4
>>>         },
>>>         "eeb8e812-4e69-469a-a07a-272ea3a79105": {
>>>             "status": "Active",
>>>             "diskfree": "98918465536",
>>>             "isoprefix": "",
>>>             "alerts": [],
>>>             "disktotal": "106568876032",
>>>             "version": 4
>>>         }
>>>     }
>>> }
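>>>
>>> And from a script, the same via the legacy bindings (a sketch,
>>> assuming this host is already connected to the pool; the uuid is the
>>> pool from the examples above):
>>>
>>>     from vdsm import vdscli
>>>
>>>     server = vdscli.connect()  # local vdsm
>>>     res = server.getStoragePoolInfo(
>>>         'fe307b9e-8f6b-4958-955a-0faeeae8b017')
>>>     # the response layout mirrors the output above
>>>     print(res['info']['spm_id'])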
>>>
>>>> and it's important that the host id used when making this api call is
>>>> correct, or bad things happen.
>>>>
>>>> I have been searching high and low, to no avail. I understand the
>>>> engine is in charge here, but it would really help if these values
>>>> could be obtained without going into the db.
>>>>
>>>> As I am continuously rebuilding dc's / storage pools and hosts, I
>>>> cannot keep track of when which host was installed, hence the need to
>>>> know the spm_id as listed.
>>>>
>>>> It might be that I am all wrong, but when I use the vds_spm_id as
>>>> listed, I can connect the host to the storage pools using the api.
>>
>>



