Re: [ovirt-users] Ovirt node host installation failing in engine
by Shalabh Goel
Hi
I just want to know if there is any way I can stop yum from looking for
updates on the internet on the node.
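For reference, one thing I am considering is disabling the online repos on
the node itself (a sketch, not yet tested; repo file names vary by node
image):

# on the node, as root: disable every configured yum repo
yum-config-manager --disable \*
# or, without yum-utils installed:
sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/*.repo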
Please help me out here.
Thanks
Shalabh Goel
On Tue, Nov 29, 2016 at 5:59 PM, <users-request(a)ovirt.org> wrote:
> Send Users mailing list submissions to
> users(a)ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-request(a)ovirt.org
>
> You can reach the person managing the list at
> users-owner(a)ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
> 1. Re: I wrote an oVirt thing (Yaniv Kaul)
> 2. Ovirt node host installation failing in engine (Shalabh Goel)
> 3. Re: How to migrate Self Hosted Engine (Gianluca Cecchi)
> 4. Re: Need help with dashboard (Денис Мишанин)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 29 Nov 2016 14:06:22 +0200
> From: Yaniv Kaul <ykaul(a)redhat.com>
> To: Konstantin Shalygin <k0ste(a)k0ste.ru>
> Cc: Ovirt Users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] I wrote an oVirt thing
> Message-ID:
> <CAJgorsY_SksEnP2jQxBgdJyKW=4_qZZGZ=DwHQ13MteEN5jG5w@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Tue, Nov 29, 2016 at 3:40 AM, Konstantin Shalygin <k0ste(a)k0ste.ru>
> wrote:
>
> > Will ovirt-shell be deprecated and unsupported entirely, or only some
> > functions of ovirt-shell (or the whole ovirt-engine-cli package)?
> >
> > We use ovirt-shell on client desktops that connect to SPICE consoles for
> > their work (users are provided by LDAP on ovirt-engine), much like an RDP
> > workflow. For this I wrote a very quick-hack patch for ovirt-shell and a
> > GUI for entering the password (https://github.com/k0ste/ovirt-pygtk). It
> > is very simple, but over the Internet people use SPICE without the
> > complaints about packet loss and disconnects they had with RDP.
>
>
> Can you further explain the use case? I assume the user portal is not good
> enough for some reason?
>
>
> >
> >
> >> BTW, ovirt-shell is something we have deprecated. It works on top of
> >> the v3 API, which we plan to remove in 4.2.
> >> So it is better not to use it.
> >>
> >
> >
> > You could start maintaining it. For example, I maintain packages for
> > Arch Linux: ovirt-engine-cli
> > (https://aur.archlinux.org/packages/ovirt-engine-cli) and
> > ovirt-engine-sdk-python
> > (https://aur.archlinux.org/packages/ovirt-engine-sdk-python).
>
>
> Hi,
>
> It somehow looks like a fork of the CLI (due to the added patch[1]).
> I'm not sure how happy I am about it, considering the patch is adding a
> feature with security issues (there is a reason we do not support passing
> the password via the command line - it's somewhat less secure).
> Since you are already checking for the CLI rc file[2], just add the
> password to it and launch with it (in a temp file in the temp directory
> with the right permissions, etc...)
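> A minimal sketch of that idea (hedged: this assumes ovirt-shell reads
> ~/.ovirtshellrc, that the [ovirt-shell] section takes url/username/password
> keys, and that --connect exists; the engine URL is a placeholder):
>
> # create the rc file with owner-only permissions, then connect
> ( umask 077
>   cat > ~/.ovirtshellrc <<'EOF'
> [ovirt-shell]
> url = https://engine.example.com/ovirt-engine/api
> username = admin@internal
> password = secret
> EOF
> )
> ovirt-shell --connect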
>
> BTW, note that the attempt to delete the password from memory[3] may or may
> not work. After all, it's a copy of what you got from entry.get_text() a few
> lines before.
> And Python GC is not really to be relied upon to delete things ASAP anyway.
> There are some lovely discussions on the Internet about it. For example[4].
> Y.
>
> [1]
> https://github.com/k0ste/ovirt-pygtk/blob/master/add_password_option.patch
> [2] https://github.com/k0ste/ovirt-pygtk/blob/master/ovirt-pygtk.py#L81
> [3] https://github.com/k0ste/ovirt-pygtk/blob/master/ovirt-pygtk.py#L71
> [4]
> http://stackoverflow.com/questions/728164/securely-erasing-password-in-memory-python
>
> >
> >
> > My workstation at work is running Ubuntu, and I do not believe that
> >> ovirt-shell is packaged for it.
> >>
> >
> > --
> > Best regards,
> > Konstantin Shalygin
> >
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.ovirt.org/pipermail/users/attachments/20161129/4c62f1cf/attachment-0001.html>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 28 Nov 2016 10:00:31 +0530
> From: Shalabh Goel <shalabhgoel13(a)gmail.com>
> To: users(a)ovirt.org
> Subject: [ovirt-users] Ovirt node host installation failing in engine
> Message-ID:
> <CAEnn9my9QcGYqLsqAV+W9PiOajwQqTpvQdLiYtLOf3sc5TiB_Q(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi
>
> I have installed oVirt Engine (version 4) on CentOS 7.2 on one server
> and one oVirt Node (version 4), using the oVirt Node ISO, on another
> system. But when I add the node to the engine from the web portal, the
> installation fails on the node. There is no Internet connectivity to the
> node. When I checked the logs, I found that it is trying to connect to a
> repo using yum. My questions are:
>
> 1. Is internet connectivity required on the node?
>
> 2. If I cannot provide internet connectivity to the node, please suggest
> a workaround.
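> A workaround that might apply here (a sketch; the repo id and paths are
> examples, and reposync/createrepo come from the yum-utils and createrepo
> packages):
>
> # on a machine that does have internet access, mirror the repo:
> reposync --repoid=ovirt-4.0 --download_path=/var/www/html/mirror
> createrepo /var/www/html/mirror/ovirt-4.0
> # then on the node, add /etc/yum.repos.d/local-ovirt.repo:
> # [local-ovirt]
> # name=Local oVirt mirror
> # baseurl=http://mirror.example.lan/mirror/ovirt-4.0
> # enabled=1
> # gpgcheck=0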
>
>
> Thank You
>
> Shalabh Goel
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.ovirt.org/pipermail/users/attachments/20161128/6ee65704/attachment-0001.html>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 29 Nov 2016 13:27:06 +0100
> From: Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
> To: Yedidyah Bar David <didi(a)redhat.com>
> Cc: users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] How to migrate Self Hosted Engine
> Message-ID:
> <CAG2kNCz=2Gz1CJtLLT2mtT3E+wzH0=vtugmOBS_Fd7e1cXicBQ@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, Nov 21, 2016 at 8:54 AM, Yedidyah Bar David <didi(a)redhat.com>
> wrote:
>
> > On Mon, Nov 21, 2016 at 1:09 AM, Gianluca Cecchi
> >
> > >
> > > Two notes:
> > > 1) it would be nice to pre-filter the drop-down box when you have to
> > > choose the host to which the hosted engine will migrate...
> > > So that if there are no hosts available you get a clear message and no
> > > choice at all, and if only a subset of hosts in the cluster is
> > > eligible, you are offered only those hosts and not all the hosts in
> > > the cluster.
> >
> > Makes sense, please open an RFE to track this. Thanks.
> >
>
> I have been busy these last few days...
> Opened now:
> https://bugzilla.redhat.com/show_bug.cgi?id=1399609
>
>
> >
> > >
> > > 2) If the GUI option becomes the default and preferred way to deploy
> > > hosts in self-hosted-engine environments, I think it should be made
> > > clearer that following the default action leaves the hosted engine VM
> > > without high availability.
> > > Alternatively, change the default action to "Deploy", or show a popup
> > > if the hosted engine VM has only one host configured for it but there
> > > are other hosts in the cluster.
> > > Just my opinion.
> >
> > Makes sense too, but I wonder if people will then be annoyed by
> > forgetting to uncheck it even after having enough HA hosts, which does
> > have its cost (both actual resource use and also reservations, IIUC).
> > Perhaps we should enable by default and/or remind only until you have
> > enough HA hosts, which can be a configurable number (and default e.g.
> > to 3).
> >
> >
> >
> and here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1399613
>
> Cheers,
> Gianluca
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.ovirt.org/pipermail/users/attachments/20161129/3eccc506/attachment-0001.html>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 29 Nov 2016 12:28:55 +0000
> From: Денис Мишанин <mishanindv(a)gmail.com>
> To: Shirly Radco <sradco(a)redhat.com>
> Cc: "Users(a)ovirt.org" <Users(a)ovirt.org>
> Subject: Re: [ovirt-users] Need help with dashboard
> Message-ID:
> <CADYaN2fiQAD9abUJrLbx2R175X1VcgGyJ-aQq9WHZ7g8+7eKtQ@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Thank you, I have created a bug.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1399597
>
>
> On Tue, 29 Nov 2016 at 13:54, Shirly Radco <sradco(a)redhat.com>:
>
> > # su - postgres
> > $ pg_dump --file ovirt_engine_history.dump ovirt_engine_history
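> > (The dump is written to the postgres user's working directory, normally
> > its home; the exact path below is an assumption:)
> >
> > $ ls -l ~postgres/ovirt_engine_history.dump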
> >
> > Best regards,
> >
> > Shirly Radco
> >
> > BI Software Engineer
> > Red Hat Israel Ltd.
> > 34 Jerusalem Road
> > Building A, 4th floor
> > Ra'anana, Israel 4350109
> >
> >
> > On Mon, Nov 28, 2016 at 9:25 AM, Денис Мишанин <mishanindv(a)gmail.com>
> > wrote:
> >
> > Hi, sorry for the delay. I do not know how to create the
> > ovirt_engine_history db dump. How do I do that?
> >
> >
> > On Tue, 22 Nov 2016 at 10:52, Shirly Radco <sradco(a)redhat.com>:
> >
> > Hi,
> >
> > Can you please open a bug?
> > Please attach these files, the screenshots, and the
> > ovirt_engine_history db dump to the bug.
> >
> > Thank you,
> >
> > Shirly Radco
> >
> > BI Software Engineer
> > Red Hat Israel Ltd.
> > 34 Jerusalem Road
> > Building A, 4th floor
> > Ra'anana, Israel 4350109
> >
> >
> > On Mon, Nov 21, 2016 at 4:56 PM, Денис Мишанин <mishanindv(a)gmail.com>
> > wrote:
> >
> > Thank you for your answer. I mean that the data is not displayed at the
> > bottom of the Dashboard. I have attached the log files and the status of
> > the services to this message.
> >
> >
> > On Sun, 20 Nov 2016 at 11:13, Shirly Radco <sradco(a)redhat.com>:
> >
> > Hi,
> >
> > You attached the same screenshot twice...
> > Please attach the screenshot with the missing data.
> >
> > Also, the data for the graphs is collected by the ovirt-engine-dwhd
> > service.
> > Please check whether the service is running, and also check its logs for errors.
> > Please attach the log.
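> > For example (a sketch; the service name and log path are as shipped by
> > recent oVirt releases, verify them on your version):
> >
> > # on the engine machine:
> > systemctl status ovirt-engine-dwhd
> > tail -n 100 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log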
> >
> >
> >
> > Best regards,
> >
> > Shirly Radco
> >
> > BI Software Engineer
> > Red Hat Israel Ltd.
> > 34 Jerusalem Road
> > Building A, 4th floor
> > Ra'anana, Israel 4350109
> >
> >
> > On Sun, Nov 20, 2016 at 9:31 AM, Денис Мишанин <mishanindv(a)gmail.com>
> > wrote:
> >
> >
> >
> > Hi, I need help with the Dashboard: it is displayed incorrectly. The
> > graphs show no data for CPU, memory, or disk volume. I installed 3 hosts
> > with glusterfs. After installation everything worked fine and the data
> > was displayed correctly, but then it ended up as shown below. What could
> > the problem be?
> >
> >
> >
> > oVirt Engine Version: 4.0.5.5-1.el7.centos
> >
> >
> > [image: pasted1]
> >
> > [image: pasted3]
> >
> > [image: pasted2]
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> >
> >
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://lists.ovirt.org/pipermail/users/attachments/20161129/06bfe244/attachment.html>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: pasted2
> Type: image/png
> Size: 108864 bytes
> Desc: not available
> URL: <http://lists.ovirt.org/pipermail/users/attachments/20161129/06bfe244/attachment.png>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: pasted3
> Type: image/png
> Size: 135899 bytes
> Desc: not available
> URL: <http://lists.ovirt.org/pipermail/users/attachments/20161129/06bfe244/attachment-0001.png>
>
> ------------------------------
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> End of Users Digest, Vol 62, Issue 197
> **************************************
>
--
Shalabh Goel
planned jenkins and resources downtime
by Evgheni Dereveanchin
Hi everyone,
As part of a planned maintenance window I am going to
apply OS updates and reboot the Jenkins master along
with our resources file server.
Affected services:
jenkins.ovirt.org
resources.ovirt.org
This means that no new jobs will be started on Jenkins, and our
repositories will be unavailable for a short period of time.
I will follow up on this email once all services
are back up and running.
Regards,
Evgheni Dereveanchin
How to migrate Self Hosted Engine
by Gianluca Cecchi
Hello,
I have a hyperconverged Gluster cluster with a self-hosted engine (SHE) and
3 hosts, born on 4.0.5.
The installation was done starting from ovirt01 (named hosted_engine_1 in
the webadmin GUI) and then deploying two more hosts, ovirt02 and ovirt03,
from the webadmin GUI itself.
All seems OK.
I can migrate a normal VM from one host to another, and it is nice that I
don't lose the console now.
But if I try to migrate the self-hosted engine from the webadmin GUI I get
this message:
https://drive.google.com/file/d/0BwoPbcrMv8mvY3pURVRkX0p4OW8/view?usp=sha...
Is this because the only way to migrate the engine is to put its hosting
host into maintenance, or is something wrong?
I don't understand the message:
The host ovirt02.localdomain.local did not satisfy internal filter HA
because it is not a Hosted Engine host..
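As a quick sanity check (a sketch: hosted-engine must be installed on a
host for it to show up here), the HA view of the cluster can be listed
from any deployed host:

# hosts absent from this output are not hosted-engine hosts and will be
# rejected by the "internal filter HA" above
hosted-engine --vm-status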
some commands executed on hosted_engine_1 (ovirt01):
[root@ovirt01 ~]# vdsClient -s 0 glusterHostsList
{'hosts': [{'hostname': '10.10.100.102/24',
'status': 'CONNECTED',
'uuid': 'e9717281-a356-42aa-a579-a4647a29a0bc'},
{'hostname': 'ovirt03.localdomain.local',
'status': 'CONNECTED',
'uuid': 'ec81a04c-a19c-4d31-9d82-7543cefe79f3'},
{'hostname': 'ovirt02.localdomain.local',
'status': 'CONNECTED',
'uuid': 'b89311fe-257f-4e44-8e15-9bff6245d689'}],
'status': {'code': 0, 'message': 'Done'}}
Done
[root@ovirt01 ~]# vdsClient -s 0 list
87fd6bdb-535d-45b8-81d4-7e3101a6c364
Status = Up
nicModel = rtl8139,pv
statusTime = 4691827920
emulatedMachine = pc
pid = 18217
vmName = HostedEngine
devices = [{'device': 'console', 'specParams': {}, 'type': 'console',
'deviceId': '08628a0d-1c2a-43e9-8820-4c02f14d04e9', 'alias': 'console0'},
{'device': 'memballoon', 'specParams': {'model': 'none'}, 'type':
'balloon', 'alias': 'balloon0'}, {'alias': 'rng0', 'specParams': {'source':
'random'}, 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000',
'type': 'pci', 'function': '0x0'}, 'device': 'virtio', 'model': 'virtio',
'type': 'rng'}, {'device': 'scsi', 'alias': 'scsi0', 'model':
'virtio-scsi', 'type': 'controller', 'address': {'slot': '0x04', 'bus':
'0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}}, {'device':
'vga', 'alias': 'video0', 'type': 'video', 'address': {'slot': '0x02',
'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}},
{'device': 'vnc', 'specParams': {'spiceSecureChannels':
'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv',
'macAddr': '00:16:3e:0a:e7:ba', 'linkActive': True, 'network': 'ovirtmgmt',
'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {},
'deviceId': '79a745a0-e691-4a3d-8d6b-c94306db9113', 'address': {'slot':
'0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
'0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet0'},
{'index': '2', 'iface': 'ide', 'name': 'hdc', 'alias': 'ide0-1-0',
'specParams': {}, 'readonly': 'True', 'deviceId':
'6be25e51-0944-4fc0-93fe-4ecabe32ac6b', 'address': {'bus': '1',
'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device':
'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'poolID':
'00000000-0000-0000-0000-000000000000', 'reqsize': '0', 'index': '0',
'iface': 'virtio', 'apparentsize': '10737418240', 'alias': 'virtio-disk0',
'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'readonly': 'False',
'shared': 'exclusive', 'truesize': '3395743744', 'type': 'disk',
'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volumeInfo':
{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab', 'volType': 'path',
'leaseOffset': 0, 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6',
'leasePath':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'},
'format': 'raw', 'deviceId': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8',
'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
'pci', 'function': '0x0'}, 'device': 'disk', 'path':
'/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6',
'propagateErrors': 'off', 'optional': 'false', 'name': 'vda', 'bootOrder':
'1', 'volumeID': '94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'specParams': {},
'volumeChain': [{'domainID': 'e9e4a478-f391-42e5-9bb8-ed22a33e5cab',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
'94c46bac-0a9f-49e8-9188-627fa0caf2b6', 'leasePath':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6.lease',
'imageID': 'cf8b8f4e-fa01-457e-8a96-c5a27f8408f8', 'path':
'/rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:_engine/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/images/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6'}]},
{'device': 'usb', 'alias': 'usb', 'type': 'controller', 'address': {'slot':
'0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function':
'0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 'controller', 'address':
{'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci',
'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 'virtio-serial0',
'type': 'controller', 'address': {'slot': '0x05', 'bus': '0x00', 'domain':
'0x0000', 'type': 'pci', 'function': '0x0'}}, {'device': 'unix', 'alias':
'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0',
'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias':
'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0',
'type': 'virtio-serial', 'port': '2'}}, {'device': 'unix', 'alias':
'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller': '0',
'type': 'virtio-serial', 'port': '3'}}]
guestDiskMapping = {'cf8b8f4e-fa01-457e-8': {'name': '/dev/vda'},
'QEMU_DVD-ROM_QM00003': {'name': '/dev/sr0'}}
vmType = kvm
clientIp =
displaySecurePort = -1
memSize = 6144
displayPort = 5900
cpuType = Broadwell
spiceSecureChannels =
smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp = 1
displayIp = 0
display = vnc
pauseCode = NOERR
maxVCpus = 2
[root@ovirt01 ~]#
Thanks
Gianluca
Released update for 4.0.5?
by Gianluca Cecchi
Hello,
in the webadmin GUI I see a message on my 4.0.5 hosts about available
updates.
If I run yum update on my plain CentOS 7.2 hosts I get:
1) vdsm packages
vdsm                 x86_64  4.18.15.3-1.el7.centos  ovirt-4.0  688 k
vdsm-api             noarch  4.18.15.3-1.el7.centos  ovirt-4.0   53 k
vdsm-cli             noarch  4.18.15.3-1.el7.centos  ovirt-4.0   67 k
vdsm-gluster         noarch  4.18.15.3-1.el7.centos  ovirt-4.0   53 k
vdsm-hook-vmfex-dev  noarch  4.18.15.3-1.el7.centos  ovirt-4.0  6.6 k
vdsm-infra           noarch  4.18.15.3-1.el7.centos  ovirt-4.0   12 k
vdsm-jsonrpc         noarch  4.18.15.3-1.el7.centos  ovirt-4.0   25 k
vdsm-python          noarch  4.18.15.3-1.el7.centos  ovirt-4.0  602 k
vdsm-xmlrpc          noarch  4.18.15.3-1.el7.centos  ovirt-4.0   25 k
vdsm-yajsonrpc       noarch  4.18.15.3-1.el7.centos  ovirt-4.0   27 k
Are they for anything in particular?
2) gluster packages
glusterfs                  x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  483 k
glusterfs-api              x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37   87 k
glusterfs-cli              x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  180 k
glusterfs-client-xlators   x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  857 k
glusterfs-fuse             x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  130 k
glusterfs-geo-replication  x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  206 k
glusterfs-libs             x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  355 k
glusterfs-server           x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  1.4 M
Currently I have 3.7.16-1.el7.x86_64
How should I manage Gluster updates in a Gluster environment with 3 hosts?
Separately from the vdsm updates, or in the same run?
Any hints/caveats?
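For reference, a rolling per-host sequence might look roughly like this (a
sketch, assuming a replica-3 volume named "engine" as above, and that you
wait for self-heal to finish before moving to the next host):

# 1. put the host into maintenance from the webadmin GUI, then on the host:
yum update
systemctl restart glusterd
# 2. activate the host again and wait until heal info shows 0 entries:
gluster volume heal engine info
# 3. only then repeat on the next host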
Thanks
Gianluca
deleted then rebuilt node still showing up in status
by Charles Kozler
I put the node into maintenance mode, removed all VMs from it, and then
deleted it from the web UI. I then rebuilt it with hosted-engine --deploy
and indicated it is now node 4 (it was previously #3), but in --vm-status
the old entry is still showing up as stale data.
What can I do?
In fact, it looks like oVirt thinks it is still in maintenance mode, I am
presuming because I used the same host name?
--== Host 1 status ==--
Status up-to-date : True
Hostname : node03
Host ID : 1
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 87e08365
Host timestamp : 3270683
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3270683 (Tue Nov 29 08:43:52 2016)
host-id=1
score=3400
maintenance=False
state=EngineDown
stopped=False
--== Host 2 status ==--
Status up-to-date : True
Hostname : node01
Host ID : 2
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 66c7154a
Host timestamp : 3267326
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3267326 (Tue Nov 29 08:43:44 2016)
host-id=2
score=3400
maintenance=False
state=EngineUp
stopped=False
--== Host 3 status ==--
Status up-to-date : False
Hostname : node02
Host ID : 3
Engine status : unknown stale-data
Score : 0
stopped : False
Local maintenance : True
crc32 : 040b4192
Host timestamp : 211984
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=211984 (Mon Nov 28 20:28:06 2016)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
stopped=False
--== Host 4 status ==--
Status up-to-date : True
Hostname : node02
Host ID : 4
Engine status : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : True
crc32 : 6aa7d568
Host timestamp : 43683
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=43683 (Tue Nov 29 08:44:11 2016)
host-id=4
score=0
maintenance=True
state=LocalMaintenance
stopped=False
In the web UI the node (node02 / Host ID #4) is not shown as being in
maintenance mode.
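A possible cleanup for the stale entry (a sketch; I have not verified the
exact options on this version, check hosted-engine --help first):

# run on a healthy hosted-engine host; 3 is the stale host-id shown above
hosted-engine --clean-metadata --host-id=3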
Need help with dashboard
by Денис Мишанин
Hi, I need help with the Dashboard: it is displayed incorrectly. The
graphs show no data for CPU, memory, or disk volume. I installed 3
hosts with glusterfs. After installation everything worked fine and
the data was displayed correctly, but then it ended up as shown below.
What could the problem be?
oVirt Engine Version: 4.0.5.5-1.el7.centos
[image: pasted1]
[image: pasted3]
[image: pasted2]
Failed to delete snapshot <UNKNOWN> for VM
by Peter Michael Calum
Hi
I am getting these events every minute; any ideas?
Failed to delete snapshot '<UNKNOWN>' for VM 'khk9dsw16'
I have tried:
Restarting the VM
Setting the host to maintenance and activating it again
Restarting ovirt-engine on the Manager
SW : oVirt Engine Version: 4.0.1.1-1.el7.centos
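One more thing that could be checked (a sketch; the table name and helper
path are from memory, verify them on your version) is whether a zombie
async task is stuck in the engine database:

# on the engine host:
su - postgres -c "psql engine -c 'SELECT * FROM async_tasks;'"
# oVirt also ships a cleanup helper for zombie tasks:
# /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh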
Thanks,
Kind regards
Peter Calum
can't import vm from KVM host
by Nelson Lameiras
Hello,
I'm trying to import virtual machines from a KVM host (CentOS 7.2) into an oVirt 4.0.2 cluster using the "import" feature in the GUI.
If the original VM uses RAW/QCOW2 files as storage, everything works fine.
But if the original VM uses a block special device as storage (such as an LVM or SAN volume), it is simply not recognized.
The VM does appear in the import list of the KVM host, but its disk count is 0!
Is this a known technical obstacle, or am I doing something wrong?
Below is the storage part of the XML describing the original VM:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/mapper/vg_01-lv_sys'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/sdc'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
We have hundreds of virtual machines in production with this type of configuration... How can we migrate them safely to oVirt?
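A possible interim workaround (a sketch, not a supported path; the
destination file name and location are examples) is to copy each block
device into an image file so the import sees regular disks:

# on the KVM host, with the VM shut down:
qemu-img convert -p -O qcow2 /dev/mapper/vg_01-lv_sys \
    /var/lib/libvirt/images/vm_sys.qcow2
# then point the domain XML at the file (<disk type='file'> with
# <source file='/var/lib/libvirt/images/vm_sys.qcow2'/>) and retry the import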
thanks
Nelson
Failed to delete snapshot '<UNKNOWN>' for VM
by Peter Calum
Hi
I am getting these events 2-3 times every minute; any ideas?
Failed to delete snapshot '<UNKNOWN>' for VM 'khk9dsw16'
I have tried:
Restarting the VM
Setting the host to maintenance and activating it again
Restarting ovirt-engine on the Manager
SW : oVirt Engine Version: 4.0.1.1-1.el7.centos
--
Kind regards
Peter Calum
How to update ovirt on a single-host hosted-engine host
by Derek Atkins
Hi,
I've got ovirt running on a single host (with hosted-engine). I just
upgraded the engine from 4.0.4 to 4.0.5. That was pretty easy; I just
put the system into global maintenance mode, then yum update, then
engine-setup, then took the host out of maintenance mode. Easy peasy.
Now, I want to update the host itself. Since it's a single-host system
I know I'll need to shut down all the VMs (because there's no place to
migrate them). This means I'll need to shut down the engine VM, too.
That would imply that I can't use the "Update" feature from the UI,
right?
So what IS the process to properly update a single-host setup? My guess
is (expressed as commands below):
* shut down all the VMs
* go into global maintenance mode (what IS the difference between global
and local?)
* shut down the engine/engine VM
* yum update on the host
* restart services (or reboot, I guess)
* bring the system out of maintenance mode
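Expressed as commands, my guess would look like this (a sketch; please
correct me if the hosted-engine options below are wrong):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown   # or shut the engine VM down from inside it
yum update
reboot                        # or restart vdsmd/ovirt-ha-agent/ovirt-ha-broker
hosted-engine --set-maintenance --mode=none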
Am I missing a step? Do I need to run hosted-engine --deploy again
similar to how I needed to run "engine-setup" on the engine? Or is
there something else I need to run?
Thanks,
-derek
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant