Re: [ovirt-users] oVirt Node ng upgrade failed
by Ryan Barry
Can you grab imgbased.log?
To retry, "rpm -e ovirt-node-ng-image-update" and remove the new LVs. "yum
install ovirt-node-ng-image-update" from the CLI instead of engine so we
can get full logs would be useful
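Something like this should do it, assuming the leftover layer is the 4.1.7 one shown
in your "imgbase layout" output (please double-check the exact LV names with "lvs"
before removing anything; the names below are only taken from your mail):

# remove the failed update package
rpm -e ovirt-node-ng-image-update
# remove the partially-created 4.1.7 LVs (verify the names with lvs first)
lvremove onn/ovirt-node-ng-4.1.7-0.20171108.0+1
lvremove onn/ovirt-node-ng-4.1.7-0.20171108.0
# reinstall from the CLI so yum/imgbased produce full logs
yum install ovirt-node-ng-image-update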
On Thu, Nov 23, 2017 at 16:01 Lev Veyde <lveyde(a)redhat.com> wrote:
>
> ---------- Forwarded message ----------
> From: Kilian Ries <mail(a)kilian-ries.de>
> Date: Thu, Nov 23, 2017 at 5:16 PM
> Subject: [ovirt-users] oVirt Node ng upgrade failed
> To: "Users(a)ovirt.org" <Users(a)ovirt.org>
>
>
> Hi,
>
>
> just tried to upgrade from
>
>
> ovirt-node-ng-4.1.1.1-0.20170504.0+1
>
>
> to
>
>
> ovirt-node-ng-4.1.7-0.20171108.0+1
>
>
> but it failed:
>
>
> ###
>
>
> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:80 Yum Verify: 1/4: ovirt-node-ng-image-update.noarch
> 0:4.1.7-1.el7.centos - u
>
> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:80 Yum Verify: 2/4:
> ovirt-node-ng-image-update-placeholder.noarch 0:4.1.1.1-1.el7.centos - od
>
> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:80 Yum Verify: 3/4: ovirt-node-ng-image.noarch
> 0:4.1.1.1-1.el7.centos - od
>
> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:80 Yum Verify: 4/4: ovirt-node-ng-image-update.noarch
> 0:4.1.1.1-1.el7.centos - ud
>
> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Transaction processed
>
> 2017-11-23 10:19:21 DEBUG otopi.context context._executeMethod:142 method
> exception
>
> Traceback (most recent call last):
>
> File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/context.py", line 132, in
> _executeMethod
>
> method['method']()
>
> File
> "/tmp/ovirt-3JI9q14aGS/otopi-plugins/otopi/packagers/yumpackager.py", line
> 261, in _packages
>
> self._miniyum.processTransaction()
>
> File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/miniyum.py", line 1049, in
> processTransaction
>
> _('One or more elements within Yum transaction failed')
>
> RuntimeError: One or more elements within Yum transaction failed
>
> 2017-11-23 10:19:21 ERROR otopi.context context._executeMethod:151 Failed
> to execute stage 'Package installation': One or more elements within Yum
> transaction failed
>
> 2017-11-23 10:19:21 DEBUG otopi.transaction transaction.abort:119 aborting
> 'Yum Transaction'
>
> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
> yumpackager.info:80 Yum Performing yum transaction rollback
>
> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> centos-opstools-release/7/x86_64/filelists_db (0%)
>
> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> centos-opstools-release/7/x86_64/filelists_db 374 k(100%)
>
> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> centos-opstools-release/7/x86_64/other_db (0%)
>
> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> centos-opstools-release/7/x86_64/other_db 53 k(100%)
>
> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db (0%)
>
> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 55 k(4%)
>
> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 201 k(17%)
>
> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 648 k(56%)
>
> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 1.1 M(99%)
>
> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 1.1 M(100%)
>
> 2017-11-23 10:19:25 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db (0%)
>
> 2017-11-23 10:19:25 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 45 k(14%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 207 k(66%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 311 k(100%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-centos-gluster38/x86_64/filelists_db (0%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-centos-gluster38/x86_64/filelists_db 18 k(100%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-centos-gluster38/x86_64/other_db (0%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-centos-gluster38/x86_64/other_db 7.6 k(100%)
>
> 2017-11-23 10:19:26 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/filelists_db
> (0%)
>
> 2017-11-23 10:19:27 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/filelists_db
> 7.5 M(76%)
>
> 2017-11-23 10:19:27 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/filelists_db
> 9.9 M(100%)
>
> 2017-11-23 10:19:29 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/other_db (0%)
>
> 2017-11-23 10:19:29 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: ovirt-4.1-epel/x86_64/other_db 2.9
> M(100%)
>
> 2017-11-23 10:19:30 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-patternfly1-noarch-epel/x86_64/filelists_db (0%)
>
> 2017-11-23 10:19:30 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-patternfly1-noarch-epel/x86_64/filelists_db 6.5 k(100%)
>
> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-patternfly1-noarch-epel/x86_64/other_db (0%)
>
> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-4.1-patternfly1-noarch-epel/x86_64/other_db 851 (100%)
>
> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-centos-ovirt41/7/x86_64/filelists_db (0%)
>
> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-centos-ovirt41/7/x86_64/filelists_db 312 k(100%)
>
> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-centos-ovirt41/7/x86_64/other_db (0%)
>
> 2017-11-23 10:19:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> ovirt-centos-ovirt41/7/x86_64/other_db 84 k(100%)
>
> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> rnachimu-gdeploy/x86_64/filelists_db (0%)
>
> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading:
> rnachimu-gdeploy/x86_64/filelists_db 4.5 k(100%)
>
> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: rnachimu-gdeploy/x86_64/other_db
> (0%)
>
> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: rnachimu-gdeploy/x86_64/other_db
> 1.4 k(100%)
>
> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/filelists_db (0%)
>
> 2017-11-23 10:19:32 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/filelists_db 3.9
> k(100%)
>
> 2017-11-23 10:19:33 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/other_db (0%)
>
> 2017-11-23 10:19:33 DEBUG otopi.plugins.otopi.packagers.yumpackager
> yumpackager.verbose:76 Yum Downloading: virtio-win-stable/other_db 4.3
> k(100%)
>
> 2017-11-23 10:19:33 ERROR otopi.plugins.otopi.packagers.yumpackager
> yumpackager.error:85 Yum Transaction close failed: Traceback (most recent
> call last):
>
> File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/miniyum.py", line 761, in
> endTransaction
>
> if self._yb.history_undo(transactionCurrent):
>
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6086, in
> history_undo
>
> if self.install(pkgtup=pkg.pkgtup):
>
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 4910, in
> install
>
> raise Errors.InstallError, _('No package(s) available to install')
>
> InstallError: No package(s) available to install.
>
>
> ###
>
>
>
> Some more information on my system:
>
>
> ###
>
>
> $ mount
>
> ...
>
> /dev/mapper/onn-ovirt--node--ng--4.1.1.1--0.20170504.0+1 on / type ext4
> (rw,relatime,discard,stripe=128,data=ordered)
>
>
>
> $ imgbase layout
>
> ovirt-node-ng-4.1.1.1-0.20170406.0
>
> ovirt-node-ng-4.1.1.1-0.20170504.0
>
> +- ovirt-node-ng-4.1.1.1-0.20170504.0+1
>
> ovirt-node-ng-4.1.7-0.20171108.0
>
> +- ovirt-node-ng-4.1.7-0.20171108.0+1
>
>
>
>
>
> $ rpm -q ovirt-node-ng-image
>
> Package ovirt-node-ng-image is not installed
>
>
>
> $ nodectl check
>
> Status: OK
>
> Bootloader ... OK
>
> Layer boot entries ... OK
>
> Valid boot entries ... OK
>
> Mount points ... OK
>
> Separate /var ... OK
>
> Discard is used ... OK
>
> Basic storage ... OK
>
> Initialized VG ... OK
>
> Initialized Thin Pool ... OK
>
> Initialized LVs ... OK
>
> Thin storage ... OK
>
> Checking available space in thinpool ... OK
>
> Checking thinpool auto-extend ... OK
>
> vdsmd ... OK
>
>
> ###
>
>
> I can restart my Node and the VMs are running, but oVirt Engine tells me no
> update is available. It seems 4.1.7 is installed, but the Node still boots the
> old 4.1.1 image.
>
>
> Can I force-run the upgrade again, or is there another way to fix this?
>
>
> Thanks
>
> Greets
>
> Kilian
>
>
>
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> <https://www.redhat.com>
>
> lev(a)redhat.com | lveyde(a)redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
--
RYAN BARRY
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR
Red Hat NA <https://www.redhat.com/>
rbarry(a)redhat.com M: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
ovirt-node-ng-update
by Nathanaël Blanchet
Hi all,
I didn't find any explicit howto about upgrading ovirt-node, but I may be
mistaken...
However, here is what I observe: after installing a fresh ovirt-node-ng
ISO, the engine's upgrade check finds an available update,
"ovirt-node-ng-image-update".
But the available update is the same version as the current one. If I choose to
install it, the installation succeeds, but after rebooting, ovirt-node-ng-image-update
is no longer part of the installed rpms, so the engine tells me an update of
ovirt-node is still available.
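For reference, this is roughly how I check the state after the reboot (a quick
sketch; I believe "nodectl info" shows the booted layer, but correct me if I'm wrong):

rpm -q ovirt-node-ng-image-update   # the rpm the engine's upgrade check looks for
imgbase layout                      # image layers known to imgbased
nodectl info                        # current/booted layer and bootloader entries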
--
Nathanaël Blanchet
Network supervision
IT Infrastructure Division
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Configuring LDAP backend in non interactive mode
by Luca 'remix_tj' Lorenzetto
Hello,
I'm looking for some examples of how to configure LDAP authn and authz
in non-interactive mode. I'd like to add some authentication sources
immediately after the hosted-engine deployment.
I've seen that ovirt-engine-extension-aaa-ldap-setup has a
config-append option. The help refers to an answer file, but I can't find out
how to generate it. All the servers I'm deploying should use the same
configuration to access 3 different Active Directory servers.
Does anyone have experience with this?
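What I have in mind is roughly the following, assuming the tool accepts the usual
otopi-style --generate-answer option like engine-setup does (I still have to verify
the exact flag with --help):

# run once interactively on a reference deployment and record the answers
ovirt-engine-extension-aaa-ldap-setup --generate-answer=/root/aaa-ldap-answers.conf
# then replay non-interactively on the other deployments
ovirt-engine-extension-aaa-ldap-setup --config-append=/root/aaa-ldap-answers.conf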
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
Migration from Proxmox to oVirt
by Gabriel Stein
Hi again!
well, I'm trying to migrate all my VMs from Proxmox to oVirt. Proxmox
doesn't use libvirt; I can dump a VM using vzdump <vm-id>
<directory>, and the output is a *.vma file, which I think is Proxmox-specific.
I can't even find the disk files directly, since Proxmox creates a Logical
Volume for every VM.
I converted that to *.qcow2 using qemu-img convert; the conversion
worked (at least there were no errors), but I can't import it, either with a
script that I found on the web* or via the export storage domain (oVirt didn't find it).
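For reference, this is roughly what I did on the Proxmox host (the VG/LV name is just
an example from my setup; yours will differ):

lvs                                                   # find the LV backing the VM disk
qemu-img convert -p -f raw -O qcow2 \
    /dev/pve/vm-101-disk-1 /mnt/backup/vm101.qcow2    # convert the raw LV to qcow2
qemu-img info /mnt/backup/vm101.qcow2                 # sanity-check the converted image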
I would like to know if there is a way to do that. I read a lot about it and
found that one could build a conversion server and use virt-v2v to import.
But that would require Red Hat Enterprise Linux, right? I don't have a
subscription and I would like to know if it is possible without one.
And sorry for the off-topic question (maybe a Red Hatter would like to answer
me privately): if I buy a subscription for 1 server and I have NN CentOS
servers, will it be possible to use all the benefits since I have a valid
subscription (of course, with support just for the RHEL server)?
*
https://rwmj.wordpress.com/2015/09/18/importing-kvm-guests-to-ovirt-or-rhev/
Thanks in Advance!
All the best
Gabriel
Gabriel Stein
------------------------------
Gabriel Ferraz Stein
Tel.: +49 (0) 170 2881531
[ANN] oVirt 4.2.0 Second Beta Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Beta Release of oVirt 4.2.0, as of November 15th, 2017
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the second beta release of the 4.2.0 version. This release
brings more than 240 enhancements and more than one thousand bug fixes,
including more than 430 high or urgent severity fixes, on top of oVirt 4.1
series.
What's new in oVirt 4.2.0?
- The Administration Portal has been completely redesigned using Patternfly, a
  widely adopted standard in web application design. It now features a cleaner,
  more intuitive design, for an improved user experience.
- There is an all-new VM Portal for non-admin users.
- A new High Performance virtual machine type has been added to the New VM
  dialog box in the Administration Portal.
- Open Virtual Network (OVN) adds support for Open vSwitch software defined
  networking (SDN).
- oVirt now supports Nvidia vGPU.
- The ovirt-ansible-roles package helps users with common administration tasks.
- Virt-v2v now supports Debian/Ubuntu based VMs.
For more information about these and other features, check out the oVirt
4.2.0 blog post
<https://ovirt.org/blog/2017/11/introducing-ovirt-4.2.0-beta/>.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2 (available for x86_64 only)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available.
- An async release of oVirt Node will follow soon.
Additional Resources:
* Read more about the oVirt 4.2.0 release highlights:
http://www.ovirt.org/release/4.2.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.2.0/
[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
<http://www.teraplan.it/redhat-osd-2017/>
please test UI plugins in 4.2 beta
by Greg Sheremeta
Hi everyone,
[I should have sent this message sooner -- apologies, and thanks to Martin
Sivak for the reminder!]
If you're trying out oVirt 4.2 Beta, please check that any existing UI
plugins [1] you are using 1. still work (they should!), and 2. look good in
the new UI. If something doesn't look quite right with a UI plugin [for
example, if it doesn't quite match the new theme], please contact Alexander
Wels and me and we can assist with getting it updated.
[1] https://www.ovirt.org/develop/release-management/features/ux/uiplugins/
Best wishes,
Greg
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
Update gluster HCI from 4.1.3 to 4.1.7
by Gianluca Cecchi
Hello,
I have an environment with 3 hosts and gluster HCI on 4.1.3.
I'm following this link to take it to 4.1.7
https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-...
The hosts and engine were at 7.3 prior to beginning the update.
All went OK for the engine, which is now on 7.4 (not rebooted yet).
Points 4, 5 and 6 for the first updated host were replaced by rebooting it.
I'm at point:
7. Exit the global maintenance mode: in a few minutes the engine VM should
migrate to the freshly upgraded host because it will get a higher score
One note: exiting from global maintenance doesn't imply that a host
previously put into maintenance also exits from it, correct?
So in my workflow, before point 7, I actually selected the host and
activated it.
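For reference, these are the commands I used for the global maintenance part (host
maintenance/activation was done from the web admin UI instead):

hosted-engine --set-maintenance --mode=global   # before starting the upgrade
hosted-engine --set-maintenance --mode=none     # point 7, exit global maintenance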
Currently the situation is this
- engine running on ovirt02
- update happened on ovirt03
Then, after exiting from global maintenance, I don't see the engine VM
migrating to it.
And in fact (see below) the score of ovirt02 is the same (3400) as that
of ovirt03, so it seems correct that the engine remains where it is...?
What kind of messages should I see in the engine/host logs?
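I guess the place to look on the hosts is the hosted-engine HA agent/broker logs,
plus engine.log on the engine VM, e.g.:

tail -f /var/log/ovirt-hosted-engine-ha/agent.log    # HA agent, scores and state machine
tail -f /var/log/ovirt-hosted-engine-ha/broker.log   # HA broker, monitors
tail -f /var/log/ovirt-engine/engine.log             # on the engine VM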
[root@ovirt01 ~]# rpm -q vdsm
vdsm-4.19.20-1.el7.centos.x86_64
[root@ovirt02 ~]# rpm -q vdsm
vdsm-4.19.20-1.el7.centos.x86_64
[root@ovirt02 ~]#
[root@ovirt03 ~]# rpm -q vdsm
vdsm-4.19.37-1.el7.centos.x86_64
from host ovirt01:
[root@ovirt01 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt01.localdomain.local
Host ID : 1
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3352
stopped : False
Local maintenance : False
crc32 : 256f2128
local_conf_timestamp : 12251210
Host timestamp : 12251178
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12251178 (Tue Nov 28 10:11:20 2017)
host-id=1
score=3352
vm_conf_refresh_time=12251210 (Tue Nov 28 10:11:52 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : 192.168.150.103
Host ID : 2
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 9b8c8a6c
local_conf_timestamp : 12219386
Host timestamp : 12219357
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12219357 (Tue Nov 28 10:11:23 2017)
host-id=2
score=3400
vm_conf_refresh_time=12219386 (Tue Nov 28 10:11:52 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
--== Host 3 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt03.localdomain.local
Host ID : 3
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 9f6399ef
local_conf_timestamp : 2136
Host timestamp : 2136
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2136 (Tue Nov 28 10:11:56 2017)
host-id=3
score=3400
vm_conf_refresh_time=2136 (Tue Nov 28 10:11:56 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
[root@ovirt01 ~]#
Can I manually migrate the engine VM to ovirt03?
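If a manual migration is acceptable, I suppose I can either migrate the HostedEngine
VM from the web admin UI or force it by putting the current host into HA local
maintenance, something like this (to be confirmed):

# on ovirt02, where the engine VM currently runs
hosted-engine --set-maintenance --mode=local
# after the engine VM has moved to ovirt03
hosted-engine --set-maintenance --mode=none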
On ovirt03:
[root@ovirt03 ~]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 6e2bd1d7-9c8e-4c54-9d85-f36e1b871771
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick1/engine
Brick2: ovirt02.localdomain.local:/gluster/brick1/engine
Brick3: ovirt03.localdomain.local:/gluster/brick1/engine (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
transport.address-family: inet
[root@ovirt03 ~]#
[root@ovirt03 ~]# gluster volume heal engine info
Brick ovirt01.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0
Brick ovirt02.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0
Brick ovirt03.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0
[root@ovirt03 ~]#
Thanks,
Gianluca