Ovirt resilience policy / HA
by Guillaume Penin
Hi all,
I'm building a test oVirt (3.5.1) infrastructure, based on 3 oVirt
nodes and 1 oVirt engine.
Everything runs (almost) fine, but I don't exactly understand the
interaction between the resilience policy (cluster) and HA (VM).
=> What I understand, in case of host failure:
- Setting the resilience policy to:
  - Migrate Virtual Machines => all VMs (HA and non-HA) will be started
    on another host.
  - Migrate only Highly Available Virtual Machines => only HA VMs will
    be started on another host.
  - Do Not Migrate Virtual Machines => neither HA nor non-HA VMs will
    be started on another host.
=> In practice:
- Whichever resilience policy I set, only HA VMs are started on another
  host in case of a host failure (the configured values can be
  double-checked as sketched below).
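A minimal sketch for that check via the 3.5 REST API (the engine URL
and credentials are placeholders, and it assumes the cluster's
resilience policy is exposed as error_handling/on_error and VM HA as
high_availability/enabled, which I believe is the case in 3.5):

# Placeholders -- adjust to your environment.
ENGINE="https://engine.example.com"
AUTH="admin@internal:password"

# Cluster resilience policy: expected values are migrate,
# migrate_highly_available or do_not_migrate.
curl -s -k -u "$AUTH" "$ENGINE/api/clusters" | grep -A 2 "<error_handling>"

# Per-VM HA flag: only VMs with <high_availability><enabled>true</enabled>
# are restarted automatically by the engine.
curl -s -k -u "$AUTH" "$ENGINE/api/vms" | grep -A 3 "<high_availability>"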
Is this the expected behaviour? Am I misunderstanding the way it works?
Kind regards,
9 years, 9 months
very long time to detach an export domain
by Nathanaël Blanchet
Hi all,
I see no latency when attaching an existing v1 export domain to any
datacenter. However, detaching the same export domain takes a while
(more than 30 min), staying in "Preparing for maintenance". Is this
regular behaviour? If yes, what is being done at this step that takes
so long?
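One way to see where the time goes while the detach runs (a rough
sketch; the log paths below are the usual 3.5 defaults) is to follow
the engine log and the SPM host's VDSM log:

# On the engine machine: follow the detach / deactivate flow.
tail -f /var/log/ovirt-engine/engine.log | grep -iE 'detach|maintenance'

# On the SPM host: watch what VDSM is doing with the storage domain.
tail -f /var/log/vdsm/vdsm.log | grep -iE 'detach|domain'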
9 years, 9 months
Running in db and not running in VDS
by Punit Dambiwal
Hi,
A VM failed to create and failed to reboot, throwing these errors:
2015-02-23 17:27:11,879 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-64) [1546215a] VM
8325d21f97888adff6bd6b70bfd6c13b (b74d945c-c9f8-4336-a91b-390a11f07650)* is
running in db and not running in VDS compute11*
And when deleting a VM:
2015-02-23 17:21:44,625 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-57) [4534b0a2] Correlation ID: 6bbbb769, Job ID:
76d135de-ef8d-41dc-9cc0-855134ededce, Call Stack: null, Custom Event ID:
-1, Message: VM 1feac62b8cd19fcc2ff296957adc8a4a *has been removed, but the
following disks could not be removed: Disk1. These disks will appear in the
main disks tab in illegal state, please remove manually when possible.*
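In case it helps, a rough sketch for finding and removing such a disk
through the REST API (engine URL, credentials and the disk id are
placeholders; make sure the disk really is the orphaned one before
deleting it):

ENGINE="https://engine.example.com"   # placeholder
AUTH="admin@internal:password"        # placeholder

# Look for disks the engine reports in an illegal state.
curl -s -k -u "$AUTH" "$ENGINE/api/disks" | grep -i -B 5 "illegal"

# Remove a specific disk by id, once you are sure it is the right one.
DISK_ID="00000000-0000-0000-0000-000000000000"   # placeholder
curl -s -k -u "$AUTH" -X DELETE "$ENGINE/api/disks/$DISK_ID"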
Thanks,
Punit
9 years, 9 months
Re: [ovirt-users] [ovirt-devel] [ACTION REQUIRED] oVirt 3.5.2 and 3.5.3 status (building RC2 today!)
by Oved Ourfali
On Mar 18, 2015 9:57 AM, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
> Hi,
> we still have 5 open blockers for 3.5.2[1]:
>
> Bug ID Whiteboard Status Summary
> 1161012 infra POST task cleaning utility should erase commands that have running tasks
Simone, your latest comment implies that it is working now with the latest patches. All of them appear in Bugzilla as merged. Should it be moved to MODIFIED?
> 1187244 network POST [RHEL 7.0 + 7.1] Host configure with DHCP is losing connectivity after some time - dhclient is not running
> 1177220 storage ASSIGNED [BLOCKED] Failed to Delete First snapshot with live merge
> 1196327 virt ASSIGNED [performance] bad getVMList output creates unnecessary calls from Engine
> 1202360 virt POST [performance] bad getVMList output creates unnecessary calls from Engine
>
> And 2 dependencies on libvirt not yet fixed:
> Bug ID Status Summary
> 1199182 POST 2nd active commit after snapshot triggers qemu failure
> 1199036 POST Libvirtd was restarted when do active blockcommit while there is a blockpull job running
>
> ACTION: Assignees to provide an ETA for the blocker bugs.
>
> Despite the blocker bug count, we're going to build RC2 today, 2015-03-18 at 12:00 UTC, to allow verification of fixed bugs and testing on
> CentOS 7.1.
> If you're going to test this release candidate on CentOS, please be sure to have the CR[2] repository enabled and the system fully updated to CentOS 7.1.
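For anyone testing on CentOS, a minimal sketch of what that means in
practice (repo and package names as shipped with CentOS 7, to the best
of my knowledge):

# Enable the CentOS Continuous Release (CR) repository and update to 7.1.
yum -y install yum-utils           # provides yum-config-manager
yum-config-manager --enable cr     # repo defined in /etc/yum.repos.d/CentOS-CR.repo
yum -y update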
>
> We still have 7 bugs in MODIFIED and 31 in ON_QA[3]:
>
> MODIFIED ON_QA Total
> infra 2 10 12
> integration 0 2 2
> network 0 2 2
> node 0 1 1
> sla 2 1 3
> storage 3 11 14
> virt 0 4 4
> Total 7 31 38
>
> ACTION: Testers: you're welcome to verify bugs currently ON_QA.
>
> All remaining bugs not marked as blockers have been moved to 3.5.3.
> A release management entry has been added for tracking the schedule of 3.5.3[4]
> A bug tracker [5] has been created for 3.5.3.
> We have 32 bugs currently targeted to 3.5.3[6]:
>
> Whiteboard NEW ASSIGNED POST Total
> docs 2 0 0 2
> external 1 0 0 1
> gluster 4 0 1 5
> infra 2 2 0 4
> node 2 0 1 3
> ppc 0 0 1 1
> sla 4 0 0 4
> storage 8 0 0 8
> ux 1 0 1 2
> virt 1 0 1 2
> Total 25 2 5 32
>
>
> ACTION: Maintainers / Assignee: to review the bugs targeted to 3.5.3 ensuring they're correctly targeted.
> ACTION: Maintainers: to fill release notes for 3.5.2, the page has been created and updated here [7]
> ACTION: Testers: please add yourself to the test page [8]
>
> 7 patches have been merged for 3.5.3 but not backported to the 3.5.2 branch, according to Change-Id. One way to check a given patch is sketched after the list below.
>
> commit 6b5a8169093357656d3e638c7018ee516d1f44bd
> Author: Maor Lipchuk <mlipchuk(a)redhat.com>
> Date: Thu Feb 19 14:40:23 2015 +0200
> core: Add validation when Storage Domain is blocked.
> Change-Id: I9a7c12609b3780c74396dab6edf26e4deaff490f
>
> commit 7fd4dca0a7fb15d3e9179457f1f2aea6c727d325
> Author: Maor Lipchuk <mlipchuk(a)redhat.com>
> Date: Sun Mar 1 17:17:16 2015 +0200
> restapi: reconfigure values on import data Storage Domain.
> Change-Id: I2ef7baa850bd6da08ae27d41ebe9e4ad525fbe9b
>
> commit 4283f755e6b77995247ecb9ddd904139bc8c322c
> Author: Maor Lipchuk <mlipchuk(a)redhat.com>
> Date: Tue Mar 10 12:05:05 2015 +0200
> restapi: Quering FCP unregistered Storage Domains
> Change-Id: Iafe2f2afcd0e6e68adbbbb2054c857388acc30a7
>
> commit a3d8b687620817b38a64a3917f4440274831bca3
> Author: Maor Lipchuk <mlipchuk(a)redhat.com>
> Date: Wed Feb 25 17:00:47 2015 +0200
> core: Add fk constraint on vm_interface_statistics
> Change-Id: I53cf2737ef91cf967c93990fcb237f6c4e12a8f8
>
> commit c8caaceb6b1678c702961d298b3d6c48183d9390
> Author: emesika <emesika(a)redhat.com>
> Date: Mon Mar 9 18:01:58 2015 +0200
> core: do not use distinct if sort expr have func
> Change-Id: I7c036b2b9ee94266b6e3df54f2c50167e454ed6a
>
> commit 4332194e55ad40eee423e8611eceb95fd59dac7e
> Author: Vered Volansky <vvolansk(a)redhat.com>
> Date: Thu Mar 12 17:38:35 2015 +0200
> webadmin: Fix punctuation in threshold warnings
> Change-Id: If30f094e52f42b78537e215a2699cf74c248bd83
>
> commit 773f2a108ce18e0029f864c8748d7068b71f8ff3
> Author: Maor Lipchuk <mlipchuk(a)redhat.com>
> Date: Sat Feb 28 11:37:26 2015 +0200
> core: Add managed devices to OVF
> Change-Id: Ie0e912c9b2950f1461ae95f4704f18b818b83a3b
>
> ACTION: Authors please verify they're not meant to be targeted to 3.5.2.
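For checking a single patch, a rough sketch (the branch name here is a
guess -- substitute the actual 3.5.2 stable branch used in gerrit):

# In an ovirt-engine clone: an empty result means the Change-Id from the
# list above is not present on the stable branch.
git fetch origin
git log --oneline origin/ovirt-engine-3.5.2 \
    --grep='Change-Id: I9a7c12609b3780c74396dab6edf26e4deaff490f'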
>
>
> [1] https://bugzilla.redhat.com/1186161
> [2] http://mirror.centos.org/centos/7/cr/x86_64/
> [3] http://goo.gl/UEVTCf
> [4] http://www.ovirt.org/OVirt_3.5.z_Release_Management#oVirt_3.5.3
> [5] https://bugzilla.redhat.com/1198142
> [6] https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt%20tar...
> [7] http://www.ovirt.org/OVirt_3.5.2_Release_Notes
> [8] http://www.ovirt.org/Testing/oVirt_3.5.2_Testing
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
9 years, 9 months
Hosted-Engine --vm-status results
by Filipe Guarino
Hello guys,
I installed oVirt using the hosted-engine procedure with six physical
hosts and more than 60 VMs, and until now everything has been OK and my
environment works fine.
I decided to use some of my hosts for other tasks, so I removed four of
my six hosts and took them out of the environment.
After a few days, my second host (hosted_engine_2) started to fail due
to a hardware issue: its 10GbE interface stopped working. I decided to
set up my host "4" as the new hosted_engine_2.
It works fine, but when I run "hosted-engine --vm-status" it still
returns all of the old members of the hosted-engine setup (1 to 6).
How can I fix this so that only the active nodes are left? (One
possible cleanup approach is sketched after the status output below.)
See below the output of my hosted-engine --vm-status:
[root@bmh0001 ~]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : bmh0001.place.brazil
Host ID : 1
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 68830
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=68830 (Sun Mar 8 17:38:05 2015)
host-id=1
score=2400
maintenance=False
state=EngineDown
--== Host 2 status ==--
Status up-to-date : True
Hostname : bmh0004.place.brazil
Host ID : 2
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 2427
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2427 (Sun Mar 8 17:38:09 2015)
host-id=2
score=2400
maintenance=False
state=EngineUp
--== Host 3 status ==--
Status up-to-date : False
Hostname : bmh0003.place.brazil
Host ID : 3
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 331389
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=331389 (Tue Mar 3 14:48:25 2015)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
--== Host 4 status ==--
Status up-to-date : False
Hostname : bmh0004.place.brazil
Host ID : 4
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 364358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=364358 (Tue Mar 3 16:10:36 2015)
host-id=4
score=0
maintenance=True
state=LocalMaintenance
--== Host 5 status ==--
Status up-to-date : False
Hostname : bmh0005.place.brazil
Host ID : 5
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 241930
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=241930 (Fri Mar 6 09:40:31 2015)
host-id=5
score=0
maintenance=True
state=LocalMaintenance
--== Host 6 status ==--
Status up-to-date : False
Hostname : bmh0006.place.brazil
Host ID : 6
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 77376
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=77376 (Wed Mar 4 09:11:17 2015)
host-id=6
score=0
maintenance=True
state=LocalMaintenance
[root@bmh0001 ~]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : bmh0001.place.brazil
Host ID : 1
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score : 2400
Local maintenance : False
Host timestamp : 68122
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=68122 (Sun Mar 8 17:26:16 2015)
host-id=1
score=2400
maintenance=False
state=EngineStarting
--== Host 2 status ==--
Status up-to-date : True
Hostname : bmh0004.place.brazil
Host ID : 2
Engine status : {"reason": "bad vm status", "health":
"bad", "vm": "up", "detail": "powering up"}
Score : 2400
Local maintenance : False
Host timestamp : 1719
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1719 (Sun Mar 8 17:26:21 2015)
host-id=2
score=2400
maintenance=False
state=EngineStarting
--== Host 3 status ==--
Status up-to-date : False
Hostname : bmh0003.place.brazil
Host ID : 3
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 331389
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=331389 (Tue Mar 3 14:48:25 2015)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
--== Host 4 status ==--
Status up-to-date : False
Hostname : bmh0004.place.brazil
Host ID : 4
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 364358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=364358 (Tue Mar 3 16:10:36 2015)
host-id=4
score=0
maintenance=True
state=LocalMaintenance
--== Host 5 status ==--
Status up-to-date : False
Hostname : bmh0005.place.brazil
Host ID : 5
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 241930
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=241930 (Fri Mar 6 09:40:31 2015)
host-id=5
score=0
maintenance=True
state=LocalMaintenance
--== Host 6 status ==--
Status up-to-date : False
Hostname : bmh0006.place.brazil
Host ID : 6
Engine status : unknown stale-data
Score : 0
Local maintenance : True
Host timestamp : 77376
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=77376 (Wed Mar 4 09:11:17 2015)
host-id=6
score=0
maintenance=True
state=LocalMaintenance
[root@bmh0001 ~]#
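One possible cleanup (hedged: it relies on the --clean-metadata option
of the hosted-engine tool, which I believe is present in current 3.5
hosted-engine packages -- check hosted-engine --help first) is to clear
the stale hosts' slots in the shared metadata:

# Verify the option exists in your ovirt-hosted-engine-setup version.
hosted-engine --help | grep -i clean-metadata

# For each host id that shows stale data in the output above (3, 4, 5
# and 6), and whose ovirt-ha-agent is no longer running anywhere, clear
# its slot. --force-clean is needed because the agent for that id is
# not alive to do it itself.
for id in 3 4 5 6; do
    hosted-engine --clean-metadata --host-id=$id --force-clean
done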
Thank you very much.
--
Regards,
Filipe Guarino
9 years, 9 months
[QE] oVirt 3.6.0 status
by Sandro Bonazzola
Hi, here's an update on 3.6 status on integration / rel-eng side
The tracker bug for 3.6.0 [1] currently shows no blockers.
There are 579 bugs [2] targeted to 3.6.0.
NEW ASSIGNED POST Total
docs 11 0 0 11
gluster 35 2 1 38
i18n 2 0 0 2
infra 82 7 8 97
integration 64 5 6 75
network 39 1 9 49
node 27 3 3 33
ppc 0 0 1 1
sla 52 3 2 57
spice 1 0 0 1
storage 72 5 7 84
ux 33 0 10 43
virt 73 5 10 88
Total 491 31 57 579
Feature submission is still open until 2015-04-22 as per the current release schedule.
Maintainers: be sure to have your features tracked in the Google doc[3]
[1] https://bugzilla.redhat.com/1155425
[2] https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_release%3A3.6....
[3] http://goo.gl/9X3G49
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 9 months
Fedora 21 as Ovirt (hypervisor) host
by Markus Stockhausen
Hi,
Back in December there was a discussion about oVirt on Fedora 21. From
my point of view that was about the oVirt engine, so I'm somewhat lost
as to whether Fedora 21 is at least supported as a hypervisor host.
Anyone with deeper knowledge?
The reason I'm asking:
We are currently on FC20 + virt-preview and enjoy the live merge
feature. But with BZ1199182 (snapshot deletion bug) I guess we might
need newer libvirt builds. With virt-preview for FC20 no longer in
maintenance, we would only be on the safe side with FC21.
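In case it helps the assessment, a quick sketch for checking what a
given host actually reports (this assumes vdsClient is installed
alongside VDSM and that live merge support is exposed in getVdsCaps,
which I believe it is on 3.5-era hosts; package names may differ
slightly between Fedora releases):

# Versions that matter for live merge on the hypervisor host.
rpm -q vdsm libvirt-daemon qemu-kvm

# Whether VDSM advertises live merge support to the engine.
vdsClient -s 0 getVdsCaps | grep -i livemerge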
Markus
9 years, 9 months
CloudSpin Downtime
by donny@cloudspin.me
Just as an FYI, CloudSpin will be down for the next couple of months. I
am moving from Colorado to Virginia, and it will be down until I acquire
a new home for CloudSpin.
Sorry for the inconvenience.
Donny D
9 years, 9 months