Native Access on gluster storage domain
by Stefano Danzi
Hello,
I have a test environment with a single host and a self-hosted engine
running oVirt Engine 4.1.5.2-1.el7.centos.
I want to try the "Native Access on gluster storage domain" option, but I
get an error because I have to put the host in maintenance mode. I can't
do that because I have a single host, so the hosted engine can't be migrated.
Is there a way to change this option now and have it applied at the next reboot?
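For reference, taking the only host of a single-host hosted-engine setup out
of action is usually done through global maintenance rather than host
maintenance. The commands below are only a rough sketch (verify them against
the 4.1 documentation), and whether this is enough to let the storage domain
option be changed afterwards is not confirmed here:

# Sketch for a single-host self-hosted-engine setup (oVirt 4.1-era commands).
# Global maintenance keeps the HA agents from restarting the engine VM while
# you work on the host.
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown          # cleanly stop the engine VM
# ... make the host-level change / reboot the host here ...
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none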
Re: [ovirt-users] [ovirt-devel] vdsm vds.dispatcher
by Gary Pedretty
By someone, I assume you mean some other process running on the host, or
possibly the engine?
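A quick way to gauge how hard VDSM is being polled is to count the stats
calls per minute in its log; this is only a rough sketch, and the log path
and exact method names may differ between versions:

# Count per-minute stats requests in the VDSM log (illustrative only; adjust
# the path and the pattern to whatever your vdsm.log actually contains).
awk '/getAllVmStats|getStats/ {print substr($2, 1, 5)}' /var/log/vdsm/vdsm.log \
  | sort | uniq -c | sort -rn | head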
Gary
------------------------------------------------------------------------
Gary Pedretty                                       gary(a)ravnalaska.net
Systems Manager                                     www.flyravn.com
Ravn Alaska                        /\               907-450-7251
5245 Airport Industrial Road      /  \/\            907-450-7238 fax
Fairbanks, Alaska  99709        /\  /   \ \         Second greatest commandment
Serving All of Alaska          /  \/  /\  \ \/\     “Love your neighbor as
Green, green as far as the eyes can see             yourself”  Matt 22:39
------------------------------------------------------------------------
> On Aug 31, 2017, at 6:17 AM, Martin Sivak <msivak(a)redhat.com> wrote:
>
> One more thing:
>
> MOM's getStatistics is actually called by VDSM stats reporting code,
> so my guess here is that someone queries VDSM for stats pretty hard,
> VDSM then asks MOM for details.
>
> Martin
>
> On Thu, Aug 31, 2017 at 4:14 PM, Martin Sivak <msivak(a)redhat.com> wrote:
>> Hi,
>>
>>> 2017-08-27 23:15:41,199 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:41,200 - mom.RPCServer - INFO - getStatistics()
>>> 2017-08-27 23:15:43,946 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:43,947 - mom.RPCServer - INFO - getStatistics()
>>
>> These are logs from mom's RPC server, someone is calling MOM way too
>> often. Well, about 25 times per minute if my math is right.
>>
>> The only client I know about is actually VDSM.
>>
>> Martin
>>
>>
>> On Mon, Aug 28, 2017 at 9:17 AM, Gary Pedretty <gary(a)ravnalaska.net> wrote:
>>> Be glad to provide logs to help diagnose this. I see nothing unusual in the
>>> vdsm.log
>>>
>>> mom.log shows the following almost as frequently as the messages log entries
>>>
>>> 2017-08-27 23:15:41,199 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:41,200 - mom.RPCServer - INFO - getStatistics()
>>> 2017-08-27 23:15:43,946 - mom.RPCServer - INFO - ping()
>>> 2017-08-27 23:15:43,947 - mom.RPCServer - INFO - getStatistics()
>>>
>>>
_______________________________________________
Devel mailing list
Devel(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
Re: [ovirt-users] oVirt engine with different VM id
by Martin Sivak
Hi,
you can remove the hosted engine storage domain from the engine as
well. It should also be re-imported.
We had cases where destroying the domain ended up with a locked SD,
but removing the SD and re-importing is the proper way here.
Best regards
PS: Re-adding the mailing list, we should really set a proper Reply-To header..
Martin Sivak
On Thu, Aug 31, 2017 at 2:07 PM, Misak Khachatryan <kmisak(a)gmail.com> wrote:
> Hi,
>
> I would love to, but:
>
> Error while executing action:
>
> HostedEngine:
>
> Cannot remove VM. The relevant Storage Domain's status is Inactive.
>
> It seems I should somehow fix the storage domain first ...
>
> engine=# update storage_domain_static set id =
> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
> 'c44343af-cc4a-4bb7-a548-0c6f609d60d5';
> ERROR: update or delete on table "storage_domain_static" violates
> foreign key constraint "disk_profiles_storage_domain_id_fkey" on table
> "disk_profiles"
> DETAIL: Key (id)=(c44343af-cc4a-4bb7-a548-0c6f609d60d5) is still
> referenced from table "disk_profiles".
>
> engine=# update disk_profiles set storage_domain_id =
> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c' where id =
> 'a6d71571-a13a-415b-9f97-635f17cbe67d';
> ERROR: insert or update on table "disk_profiles" violates foreign key
> constraint "disk_profiles_storage_domain_id_fkey"
> DETAIL: Key (storage_domain_id)=(2e2820f3-8c3d-487d-9a56-1b8cd278ec6c)
> is not present in table "storage_domain_static".
>
> engine=# select * from storage_domain_static;
>                   id                  |               storage                |      storage_name      | storage_domain_type | storage_type | storage_domain_format_type |         _create_date          |         _update_date
> --------------------------------------+--------------------------------------+------------------------+---------------------+--------------+----------------------------+-------------------------------+-------------------------------
>  072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ceab03af-7220-4d42-8f5c-9b557f5d29af | ovirt-image-repository |                   4 |            8 |                          0 | 2016-11-02 21:27:22.118586+04 |
>  51c903f6-df83-4510-ac69-c164742ca6e7 | 34b72ce0-6ad7-4180-a8a1-2acfd45824d7 | iso                    |                   2 |            7 |                          0 | 2016-11-02 23:26:21.296635+04 |
>  ece1f05c-97c9-4482-a1a5-914397cddd35 | dd38f31f-7bdc-463c-9ae4-fcd4dc8c99fd | export                 |                   3 |            1 |                          0 | 2016-12-14 11:28:15.736746+04 | 2016-12-14 11:33:12.872562+04
>  07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 | d1e9e3c8-aaf3-43de-ae80-101e5bd2574f | data                   |                   0 |            7 |                          4 | 2016-11-02 23:24:43.402629+04 | 2017-02-22 17:20:42.721092+04
>  c44343af-cc4a-4bb7-a548-0c6f609d60d5 | 8b54ce35-3187-4fba-a2c7-6b604d077f5b | hosted_storage         |                   1 |            7 |                          4 | 2016-11-02 23:26:13.165435+04 | 2017-02-22 17:20:42.721092+04
>  004ca4dd-c621-463d-b514-ccfe07ef99d7 | b31a7de9-e789-4ece-9f99-4b150bf581db | virt4-Local            |                   0 |            4 |                          4 | 2017-03-23 09:02:26.37006+04  | 2017-03-23 09:02:31.887534+04
> (6 rows)
> (remaining columns: recoverable = t, wipe_after_delete = f, discard_after_delete = f,
> warning_low_space_indicator = 10 and critical_space_action_blocker = 5 for all but the
> image repository, storage_description = Export only for the export domain, the rest empty)
>
> engine=# select * from storage_domain_dynamic;
>                   id                  | available_disk_size | used_disk_size |         _update_date          | external_status
> --------------------------------------+---------------------+----------------+-------------------------------+-----------------
>  072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 |                     |                |                               |               0
>  07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 |                1102 |            313 | 2017-08-31 14:20:47.444292+04 |               0
>  51c903f6-df83-4510-ac69-c164742ca6e7 |                 499 |              0 | 2017-08-31 14:20:47.45047+04  |               0
>  ece1f05c-97c9-4482-a1a5-914397cddd35 |                9669 |           6005 | 2017-08-31 14:20:47.454629+04 |               0
>  c44343af-cc4a-4bb7-a548-0c6f609d60d5 |                     |                | 2017-08-31 14:18:37.199062+04 |               0
>  004ca4dd-c621-463d-b514-ccfe07ef99d7 |                 348 |              1 | 2017-08-31 14:20:42.671688+04 |               0
> (6 rows)
>
>
> engine=# select * from disk_profiles;
>                   id                  |      name      |          storage_domain_id           | qos_id | description |         _create_date          | _update_date
> --------------------------------------+----------------+--------------------------------------+--------+-------------+-------------------------------+--------------
>  04257bff-e95d-4380-b120-adcbe46ae213 | data           | 07ea2089-a82b-4ca1-9c8b-54e3895b2ed4 |        |             | 2016-11-02 23:24:43.528982+04 |
>  a6d71571-a13a-415b-9f97-635f17cbe67d | hosted_storage | c44343af-cc4a-4bb7-a548-0c6f609d60d5 |        |             | 2016-11-02 23:26:13.178791+04 |
>  0f9ecdb7-4fca-45e7-9b5c-971b50d4c12e | virt4-Local    | 004ca4dd-c621-463d-b514-ccfe07ef99d7 |        |             | 2017-03-23 09:02:26.409574+04 |
> (3 rows)
>
>
> Best regards,
> Misak Khachatryan
>
>
> On Thu, Aug 31, 2017 at 3:33 PM, Martin Sivak <msivak(a)redhat.com> wrote:
>> Hi,
>>
>> I would not touch the database in this case. I would just delete the
>> old hosted engine VM from the webadmin and wait for it to reimport
>> itself.
>>
>> But I haven't played with this mechanism for some time.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Thu, Aug 31, 2017 at 1:17 PM, Misak Khachatryan <kmisak(a)gmail.com> wrote:
>>> Hi,
>>>
>>> Yesterday someone powered off our storage, and all my 3 hosts lost
>>> their disks. After 2 days of recovering I managed to bring back
>>> everything, except the engine VM, which is online but not visible to
>>> itself.
>>>
>>> I did a new deployment of the VM, restored the backup and started engine setup.
>>> After the manual database updates, all my VMs and hosts are OK now, except the
>>> engine. I have the engine VM running with a different VM id than the one in the
>>> database.
>>>
>>> I've tried this with no luck.
>>>
>>> engine=# update vm_static set vm_guid =
>>> '75072b32-6f93-4c38-8f18-825004072c1a' where vm_guid =(select
>>> vm_guid from vm_static where vm_name = 'HostedEngine');
>>> ERROR: update or delete on table "vm_static" violates foreign key
>>> constraint "fk_disk_vm_element_vm_static" on table "disk_vm_element"
>>> DETAIL: Key (vm_guid)=(d81ccb53-2594-49db-b69a-04c73b504c59) is still
>>> referenced from table "disk_vm_element".
>>>
>>>
>>> Right now I've deployed the engine on all 3 hosts, but I see this picture:
>>>
>>> [root@virt3 ~]# hosted-engine --vm-status
>>>
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>>
>>>
>>>
>>> [root@virt3 ~]# vdsClient -s 0 list
>>>
>>> 75072b32-6f93-4c38-8f18-825004072c1a
>>> Status = Up
>>> statusTime = 4397337690
>>> kvmEnable = true
>>> emulatedMachine = pc
>>> afterMigrationStatus =
>>> pid = 5280
>>> devices = [{'device': 'console', 'specParams': {}, 'type':
>>> 'console', 'deviceId': '2b6b0e87-c86a-4144-ad39-40d5bfe25df1',
>>> 'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model':
>>> 'none'}, 'type': 'balloon', 'target': 16777216, 'alias': 'balloon0'},
>>> {'specParams': {'source': 'random'}, 'alias': 'rng0', 'address':
>>> {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
>>> 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type':
>>> 'rng'}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel',
>>> 'addr
>>> ess': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port':
>>> '1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel',
>>> 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial',
>>> 'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'ch
>>> annel', 'address': {'bus': '0', 'controller': '0', 'type':
>>> 'virtio-serial', 'port': '3'}}, {'device': 'scsi', 'alias': 'scsi0',
>>> 'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot':
>>> '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
>>> '0x0'}}
>>> , {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address':
>>> {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
>>> 'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type':
>>> 'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain':
>>> '0x00
>>> 00', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial',
>>> 'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot':
>>> '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
>>> '0x0'}}, {'device': 'vga', 'alias': 'video0', 'type': 'video',
>>> 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type':
>>> 'pci', 'function': '0x0'}}, {'device': 'vnc', 'type': 'graphics',
>>> 'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:01:29:95',
>>> 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'spec
>>> Params': {}, 'deviceId': 'd348a068-063b-4a40-9119-a3d34f6c7db4',
>>> 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type':
>>> 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface',
>>> 'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'al
>>> ias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId':
>>> 'e738b50b-c200-4429-8489-4519325339c7', 'address': {'bus': '1',
>>> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'},
>>> 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
>>> {'poolI
>>> D': '00000000-0000-0000-0000-000000000000', 'volumeInfo': {'path':
>>> 'engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d',
>>> 'protocol': 'gluster', 'hosts': [{'port': '0', 'transport': 'tcp',
>>> 'name': '
>>> virt1'}, {'port': '0', 'transport': 'tcp', 'name': 'virt2'}, {'port':
>>> '0', 'transport': 'tcp', 'name': 'virt3'}]}, 'index': '0', 'iface':
>>> 'virtio', 'apparentsize': '62277025792', 'specParams': {}, 'imageID':
>>> '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'readonly': 'False', 's
>>> hared': 'exclusive', 'truesize': '3255476224', 'type': 'disk',
>>> 'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'reqsize': '0',
>>> 'format': 'raw', 'deviceId': '5deeac2d-18d7-4622-9371-ebf965d2bd6b',
>>> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
>>> 'pci', 'function': '0x0'}, 'device': 'disk', 'path':
>>> '/var/run/vdsm/storage/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d',
>>> 'propagateErrors': 'off', 'optional': 'false', 'name': 'vda',
>>> 'bootOrder': '1', 'v
>>> olumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'alias':
>>> 'virtio-disk0', 'volumeChain': [{'domainID':
>>> '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'leaseOffset': 0, 'volumeID':
>>> '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'leasePath':
>>> '/rhev/data-center/mnt/glusterSD/virt1:_engi
>>> ne/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d.lease',
>>> 'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'path':
>>> '/rhev/data-center/mnt/glusterSD/virt1:_engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c
>>> /images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d'}]}]
>>> guestDiskMapping = {'5deeac2d-18d7-4622-9': {'name':
>>> '/dev/vda'}, 'QEMU_DVD-ROM_QM00003': {'name': '/dev/sr0'}}
>>> vmType = kvm
>>> display = vnc
>>> memSize = 16384
>>> cpuType = Westmere
>>> spiceSecureChannels =
>>> smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>>> smp = 4
>>> vmName = HostedEngine
>>> clientIp =
>>> maxVCpus = 16
>>> [root@virt3 ~]#
>>>
>>> [root@virt3 ~]# hosted-engine --vm-status
>>>
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>>
>>>
>>> --== Host 1 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date : True
>>> Hostname : virt1.management.gnc.am
>>> Host ID : 1
>>> Engine status : {"reason": "vm not running on
>>> this host", "health": "bad", "vm": "down", "detail": "unknown"}
>>> Score : 3400
>>> stopped : False
>>> Local maintenance : False
>>> crc32 : ef49e5b4
>>> local_conf_timestamp : 7515
>>> Host timestamp : 7512
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=7512 (Thu Aug 31 15:14:59 2017)
>>> host-id=1
>>> score=3400
>>> vm_conf_refresh_time=7515 (Thu Aug 31 15:15:01 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> --== Host 3 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date : True
>>> Hostname : virt3
>>> Host ID : 3
>>> Engine status : {"health": "good", "vm": "up",
>>> "detail": "up"}
>>> Score : 3400
>>> stopped : False
>>> Local maintenance : False
>>> crc32 : 4a85111c
>>> local_conf_timestamp : 102896
>>> Host timestamp : 102893
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=102893 (Thu Aug 31 15:14:46 2017)
>>> host-id=3
>>> score=3400
>>> vm_conf_refresh_time=102896 (Thu Aug 31 15:14:49 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=GlobalMaintenance
>>> stopped=False
>>>
>>>
>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>
>>> Also my storage domain for the hosted engine is inactive; I can't activate
>>> it. It gives this error in the web console:
>>>
>>> VDSM command GetImagesListVDS failed: Storage domain does not exist:
>>> (u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)
>>>
>>>
>>> It seems I should fiddle with the database a bit more, but it's a scary thing for me.
>>>
>>> Any help?
>>>
>>>
>>>
>>> Best regards,
>>> Misak Khachatryan
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
oVirt engine with different VM id
by Misak Khachatryan
Hi,
Yesterday someone powered off our storage, and all my 3 hosts lost
their disks. After 2 days of recovering I managed to bring back
everything, except the engine VM, which is online but not visible to
itself.
I did a new deployment of the VM, restored the backup and started engine setup.
After the manual database updates, all my VMs and hosts are OK now, except the
engine. I have the engine VM running with a different VM id than the one in the
database.
I've tried this with no luck.
engine=# update vm_static set vm_guid =
'75072b32-6f93-4c38-8f18-825004072c1a' where vm_guid =(select
vm_guid from vm_static where vm_name = 'HostedEngine');
ERROR: update or delete on table "vm_static" violates foreign key
constraint "fk_disk_vm_element_vm_static" on table "disk_vm_element"
DETAIL: Key (vm_guid)=(d81ccb53-2594-49db-b69a-04c73b504c59) is still
referenced from table "disk_vm_element".
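The foreign-key error is expected: disk_vm_element (and possibly other tables)
still reference the old vm_guid, so the parent row in vm_static cannot be
renamed on its own. If the id really had to be changed in the database
(re-importing from the webadmin, as suggested elsewhere in this thread, is the
safer route), parent and child rows would have to change in one transaction.
A rough sketch only, assuming the engine database is reachable as the postgres
user and a backup has been taken first; the constraint-deferral trick works
only if the constraints are declared DEFERRABLE, which they may not be in the
engine schema:

# Sketch only -- take a backup (engine-backup) before touching the database.
su - postgres -c "psql engine" <<'SQL'
BEGIN;
SET CONSTRAINTS ALL DEFERRED;   -- has effect only on DEFERRABLE constraints
UPDATE vm_static       SET vm_guid = '75072b32-6f93-4c38-8f18-825004072c1a'
  WHERE vm_guid = 'd81ccb53-2594-49db-b69a-04c73b504c59';
UPDATE disk_vm_element SET vm_guid = '75072b32-6f93-4c38-8f18-825004072c1a'
  WHERE vm_guid = 'd81ccb53-2594-49db-b69a-04c73b504c59';
-- every other table referencing vm_static(vm_guid) would need the same update
COMMIT;
SQL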
Right now I've deployed the engine on all 3 hosts, but I see this picture:
[root@virt3 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@virt3 ~]# vdsClient -s 0 list
75072b32-6f93-4c38-8f18-825004072c1a
Status = Up
statusTime = 4397337690
kvmEnable = true
emulatedMachine = pc
afterMigrationStatus =
pid = 5280
devices = [{'device': 'console', 'specParams': {}, 'type':
'console', 'deviceId': '2b6b0e87-c86a-4144-ad39-40d5bfe25df1',
'alias': 'console0'}, {'device': 'memballoon', 'specParams': {'model':
'none'}, 'type': 'balloon', 'target': 16777216, 'alias': 'balloon0'},
{'specParams': {'source': 'random'}, 'alias': 'rng0', 'address':
{'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
'function': '0x0'}, 'device': 'virtio', 'model': 'virtio', 'type':
'rng'}, {'device': 'unix', 'alias': 'channel0', 'type': 'channel',
'addr
ess': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port':
'1'}}, {'device': 'unix', 'alias': 'channel1', 'type': 'channel',
'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial',
'port': '2'}}, {'device': 'unix', 'alias': 'channel2', 'type': 'ch
annel', 'address': {'bus': '0', 'controller': '0', 'type':
'virtio-serial', 'port': '3'}}, {'device': 'scsi', 'alias': 'scsi0',
'model': 'virtio-scsi', 'type': 'controller', 'address': {'slot':
'0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
'0x0'}}
, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address':
{'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type':
'controller', 'address': {'slot': '0x01', 'bus': '0x00', 'domain':
'0x00
00', 'type': 'pci', 'function': '0x1'}}, {'device': 'virtio-serial',
'alias': 'virtio-serial0', 'type': 'controller', 'address': {'slot':
'0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
'0x0'}}, {'device': 'vga', 'alias': 'video0', 'type': 'video',
'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type':
'pci', 'function': '0x0'}}, {'device': 'vnc', 'type': 'graphics',
'port': '5900'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:01:29:95',
'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'spec
Params': {}, 'deviceId': 'd348a068-063b-4a40-9119-a3d34f6c7db4',
'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type':
'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface',
'name': 'vnet0'}, {'index': '2', 'iface': 'ide', 'name': 'hdc', 'al
ias': 'ide0-1-0', 'specParams': {}, 'readonly': 'True', 'deviceId':
'e738b50b-c200-4429-8489-4519325339c7', 'address': {'bus': '1',
'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'},
'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
{'poolI
D': '00000000-0000-0000-0000-000000000000', 'volumeInfo': {'path':
'engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d',
'protocol': 'gluster', 'hosts': [{'port': '0', 'transport': 'tcp',
'name': '
virt1'}, {'port': '0', 'transport': 'tcp', 'name': 'virt2'}, {'port':
'0', 'transport': 'tcp', 'name': 'virt3'}]}, 'index': '0', 'iface':
'virtio', 'apparentsize': '62277025792', 'specParams': {}, 'imageID':
'5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'readonly': 'False', 's
hared': 'exclusive', 'truesize': '3255476224', 'type': 'disk',
'domainID': '2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'reqsize': '0',
'format': 'raw', 'deviceId': '5deeac2d-18d7-4622-9371-ebf965d2bd6b',
'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
'pci', 'function': '0x0'}, 'device': 'disk', 'path':
'/var/run/vdsm/storage/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d',
'propagateErrors': 'off', 'optional': 'false', 'name': 'vda',
'bootOrder': '1', 'v
olumeID': '60aa51b7-32eb-41a9-940d-9489b0375a3d', 'alias':
'virtio-disk0', 'volumeChain': [{'domainID':
'2e2820f3-8c3d-487d-9a56-1b8cd278ec6c', 'leaseOffset': 0, 'volumeID':
'60aa51b7-32eb-41a9-940d-9489b0375a3d', 'leasePath':
'/rhev/data-center/mnt/glusterSD/virt1:_engi
ne/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d.lease',
'imageID': '5deeac2d-18d7-4622-9371-ebf965d2bd6b', 'path':
'/rhev/data-center/mnt/glusterSD/virt1:_engine/2e2820f3-8c3d-487d-9a56-1b8cd278ec6c
/images/5deeac2d-18d7-4622-9371-ebf965d2bd6b/60aa51b7-32eb-41a9-940d-9489b0375a3d'}]}]
guestDiskMapping = {'5deeac2d-18d7-4622-9': {'name':
'/dev/vda'}, 'QEMU_DVD-ROM_QM00003': {'name': '/dev/sr0'}}
vmType = kvm
display = vnc
memSize = 16384
cpuType = Westmere
spiceSecureChannels =
smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp = 4
vmName = HostedEngine
clientIp =
maxVCpus = 16
[root@virt3 ~]#
[root@virt3 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : virt1.management.gnc.am
Host ID : 1
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : ef49e5b4
local_conf_timestamp : 7515
Host timestamp : 7512
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7512 (Thu Aug 31 15:14:59 2017)
host-id=1
score=3400
vm_conf_refresh_time=7515 (Thu Aug 31 15:15:01 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : virt3
Host ID : 3
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 4a85111c
local_conf_timestamp : 102896
Host timestamp : 102893
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=102893 (Thu Aug 31 15:14:46 2017)
host-id=3
score=3400
vm_conf_refresh_time=102896 (Thu Aug 31 15:14:49 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
Also my storage domain for the hosted engine is inactive; I can't activate
it. It gives this error in the web console:
VDSM command GetImagesListVDS failed: Storage domain does not exist:
(u'c44343af-cc4a-4bb7-a548-0c6f609d60d5',)
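One way to see which storage-domain UUIDs the host itself knows about, and to
compare them with the c44343af... id the engine is asking for, is to query VDSM
directly. A sketch, using the same vdsClient tool as above (verb names may
differ between versions; the UUID is the one shown in the volumeInfo output):

vdsClient -s 0 getStorageDomainsList
vdsClient -s 0 getStorageDomainInfo 2e2820f3-8c3d-487d-9a56-1b8cd278ec6c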
It seems I should fiddle with the database a bit more, but it's a scary thing for me.
Any help?
Best regards,
Misak Khachatryan
Question on Datacenters / clusters / data domains
by Eduardo Mayoral
Hi,
First of all, sorry for the naive question, but I have not been able
to find good guidance in the docs.
I come from the VMware environment, and now I am starting to migrate
some workload from VMware to oVirt (v4.1.4, CentOS 7.3 hosts).
In VMware I am used to having one datacenter, several host clusters,
and a bunch of iSCSI datastores, but we do not map every iSCSI
LUN/datastore to every host. Actually, we used to do that, but we hit
limits on the number of iSCSI paths with our infrastructure.
Instead, we have groups of LUNs/datastores mapped to the ESXi hosts
that form a given VMware cluster. Then we have a couple of datastores
mapped to every ESXi host in the VMware datacenter, and we use those to
store ISO images and as storage for migrating VMs between clusters when
needed.
Given the role of the master data domain and the SPM in oVirt, my
understanding is that I cannot replicate this kind of setup in oVirt: a
data domain in an oVirt Data Center must be available to every host in
the Data Center. Am I right?
Our current setup is still small, but I am concerned that as it
grows, if I stay with one Datacenter, several clusters, and a group of
data domains mapped to every host, I may again run into problems with the
number of iSCSI paths (the limit in VMware was around 1024). It is easy
to reach that limit, since the total is roughly (number of hosts) *
(number of LUNs) * (number of paths per LUN), as the quick example below shows.
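For example, with purely illustrative numbers:

# 16 hosts, 16 LUNs presented to each host, 4 paths per LUN
echo $(( 16 * 16 * 4 ))    # 1024 -- already at the old VMware-era limit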
If I split my setup into several datacenters controlled by a single
oVirt engine in order to keep the number of iSCSI paths reasonable, can
I manually migrate VMs between datacenters? I assume that in order to do
that, those datacenters would need to share some data domain. Can this
be done? Maybe with NFS?
Thanks for your help!
--
Eduardo Mayoral Jimeno (emayoral(a)arsys.es)
Systems Administrator. Platforms Department. Arsys internet.
+34 941 620 145 ext. 5153
Want to Contribute
by Yan Naing Myint
Hello,
I am currently the only Ambassador of the Fedora Project in Yangon, Myanmar.
I want to contribute to oVirt by spreading the word about it in my region.
I am also teaching about oVirt here in my region.
How should I go about becoming officially recognized as something like an "oVirt
Myanmar Community"?
Best,
--
Yan Naing Myint
CEO
Server & Network Engineer
Cyber Wings Co., Ltd
http://cyberwings.asia
09799950510
I can't see any host to choose when creating new domain
by Khoi Thinh
Hi everyone,
I have a question related to Ovirt.
Sorry, it's in Japanese. I did a quick translation below (not sure if
it's right).
So as you can see, after choosing options for
* Data center
* Domain function/feature
* Storage Type
I didn't see any option for "Running host / Host in use", even though I
did create some hosts in dhcp48.
Has any of you guys seen this before?
--
*Khoi Thinh*