3.6 : VLAN / non VLAN
by Alexis HAUSER
Hi,
I'd like to know what happens when you create a new network tagged with a VLAN, for example 25, on em2:
- Packets going out of em2.25 are tagged, right?
- Are packets going out of em2 itself tagged or not?
- In other words, packets inside oVirt are tagged, but when they leave oVirt and reach something via em2, are they still tagged?
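I suppose I could check this myself with tcpdump on the parent interface, something like the following (just a sketch, assuming em2.25 is a standard Linux 802.1Q subinterface, which is what oVirt sets up):

    # create the tagged subinterface by hand (this is what em2.25 is)
    ip link add link em2 name em2.25 type vlan id 25
    # capture on the parent interface with link-level headers shown;
    # tagged frames appear with "vlan 25" in the output
    tcpdump -e -n -i em2 vlan 25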
8 years, 3 months
Errors after upgrade
by Fernando Fuentes
Team,
After upgrading to 4.0.2 I am starting to get the following errors:
Failed to check for available updates on host ogias with message
'Command returned failure code 1 during SSH session 'root@IP''.
Do I need to update each host manually?
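I can run the check by hand on the host if that helps; a sketch of what I'd try (assuming the host 'ogias' above is reachable as root over SSH):

    ssh root@ogias
    # yum check-update exits 100 if updates are available, 0 if none;
    # an exit code of 1 usually points at a repository problem
    yum check-update
    echo $?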
Regards,
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
8 years, 3 months
Ovirt-Engine HA
by Sandvik Agustin
Hi Ovirt Users,
Do you have any documentation, tutorial, howto or workaround for setting
up ovirt-engine HA? I've tried to google this but I can't find any.
Let's say I want to have two engines: engine1 is currently managing two
hypervisors, and engine2 is my reserve engine, so I can still manage the
VMs running on my hypervisors if engine1 ever fails.
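From the little I did find, the self-hosted engine looks like the usual approach (the engine runs as an HA VM on the hypervisors themselves, rather than as two separate engines); a rough sketch of the setup, if I understand it right:

    # on the first host
    yum install ovirt-hosted-engine-setup
    hosted-engine --deploy
    # run the same deploy on each additional host and answer that it is
    # an additional host for the existing hosted engine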
TIA :)
Sandvik
8 years, 3 months
HostedEngine NIC setup files
by Hanson
Hi Guys,
I have edited /etc/sysconfig/network-scripts/ifcfg-eth0 and ifcfg-eth1
for the various subnets we needed.
Somewhere along the line, when the hosted-engine boots, eth1 comes up
but eth0 does not. If I log in over the interface that did come up and
run "ifup eth0", it comes up.
It is set to ONBOOT=yes in the config.
I know that with the nodes these files are overwritten on boot. Where
should I be editing for the hosted-engine?
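For reference, the configs are minimal static ones along these lines (placeholder addresses, not our real ones):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10
    PREFIX=24
    GATEWAY=192.0.2.1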
Thanks,
Hanson
8 years, 3 months
qemu-kvm-ev-2.3.0-31.el7_2.21.1 available for testing on x86_64, ppc64le and aarch64
by Sandro Bonazzola
qemu-kvm-ev-2.3.0-31.el7_2.21.1 has been tagged for testing for CentOS Virt
SIG and is already in testing repositories.
It's now available for x86_64, ppc64le and aarch64.
Please help by testing it and providing feedback, thanks.
We plan to move it to the stable repo around Wednesday next week.
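If you want to test it on CentOS 7, something along these lines should work (the testing repo id below is my assumption, so double-check it in /etc/yum.repos.d/ after installing the release package):

    # install the CentOS Virt SIG release package
    yum install centos-release-qemu-ev
    # enable the SIG's testing repo for this transaction and update
    yum --enablerepo='centos-qemu-ev-test' update qemu-kvm-ev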
ChangeLog since previous release:
* Fri Aug 19 2016 Sandro Bonazzola <sbonazzo(a)redhat.com> - ev-2.3.0-31.el7_2.21
- Removing RH branding from package name

* Tue Aug 02 2016 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.3.0-31.el7_2.21
- kvm-block-iscsi-avoid-potential-overflow-of-acb-task-cdb.patch [bz#1358997]
- Resolves: bz#1358997 (CVE-2016-5126 qemu-kvm-rhev: Qemu: block: iscsi: buffer overflow in iscsi_aio_ioctl [rhel-7.2.z])

* Wed Jul 27 2016 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.3.0-31.el7_2.20
- kvm-virtio-error-out-if-guest-exceeds-virtqueue-size.patch [bz#1359731]
- Resolves: bz#1359731 (EMBARGOED CVE-2016-5403 qemu-kvm-rhev: Qemu: virtio: unbounded memory allocation on host via guest leading to DoS [rhel-7.2.z])

* Wed Jul 20 2016 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.3.0-31.el7_2.19
- kvm-qemu-sockets-use-qapi_free_SocketAddress-in-cleanup.patch [bz#1354090]
- kvm-tap-use-an-exit-notifier-to-call-down_script.patch [bz#1354090]
- kvm-slirp-use-exit-notifier-for-slirp_smb_cleanup.patch [bz#1354090]
- kvm-net-do-not-use-atexit-for-cleanup.patch [bz#1354090]
- Resolves: bz#1354090 (Boot guest with vhostuser server mode, QEMU prompt 'Segmentation fault' after executing '(qemu)system_powerdown')

* Fri Jul 08 2016 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.3.0-31.el7_2.18
- kvm-vhost-user-disable-chardev-handlers-on-close.patch [bz#1351892]
- kvm-char-clean-up-remaining-chardevs-when-leaving.patch [bz#1351892]
- kvm-sockets-add-helpers-for-creating-SocketAddress-from-.patch [bz#1351892]
- kvm-socket-unlink-unix-socket-on-remove.patch [bz#1351892]
- kvm-char-do-not-use-atexit-cleanup-handler.patch [bz#1351892]
- Resolves: bz#1351892 (vhost-user: A socket file is not deleted after VM's port is detached.)

* Tue Jun 28 2016 Miroslav Rezanina <mrezanin(a)redhat.com> - rhev-2.3.0-31.el7_2.17
- kvm-vhost-user-set-link-down-when-the-char-device-is-clo.patch [bz#1348593]
- kvm-vhost-user-fix-use-after-free.patch [bz#1348593]
- kvm-vhost-user-test-fix-up-rhel6-build.patch [bz#1348593]
- kvm-vhost-user-test-fix-migration-overlap-test.patch [bz#1348593]
- kvm-vhost-user-test-fix-chardriver-race.patch [bz#1348593]
- kvm-vhost-user-test-use-unix-port-for-migration.patch [bz#1348593]
- kvm-vhost-user-test-fix-crash-with-glib-2.36.patch [bz#1348593]
- kvm-vhost-user-test-use-correct-ROM-to-speed-up-and-avoi.patch [bz#1348593]
- kvm-tests-append-i386-tests.patch [bz#1348593]
- kvm-vhost-user-add-ability-to-know-vhost-user-backend-di.patch [bz#1348593]
- kvm-qemu-char-add-qemu_chr_disconnect-to-close-a-fd-acce.patch [bz#1348593]
- kvm-vhost-user-disconnect-on-start-failure.patch [bz#1348593]
- kvm-vhost-net-do-not-crash-if-backend-is-not-present.patch [bz#1348593]
- kvm-vhost-net-save-restore-vhost-user-acked-features.patch [bz#1348593]
- kvm-vhost-net-save-restore-vring-enable-state.patch [bz#1348593]
- kvm-test-start-vhost-user-reconnect-test.patch [bz#1348593]
- Resolves: bz#1348593 (No recovery after vhost-user process restart)
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
8 years, 3 months
3.5 / 3.6 stuck
by Erik Brakke
Hello,
I changed my cluster and data center compatibility from 3.5 to 3.6 with a
3.5 host. D'oh!
Engine: 3.6 on FC22
Host: 3.5 on FC22 (local storage)
- Can I change the data center and cluster back to 3.5? How?
- I tried to upgrade the host to 3.6 in the GUI. Maintenance -> Reinstall
did not work, and the Upgrade option is greyed out. Is there a way to
upgrade the host to 3.6?
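I assume the manual path would be something like the following (a sketch, assuming the standard oVirt release RPM; the host is FC22, hence dnf), but I wanted to check before touching the host:

    # on the host
    dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
    dnf update 'vdsm*'
    systemctl restart vdsmd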
Thanks
8 years, 3 months
Cannot start hosted engine after last 4.0.1 packages
by Matt .
Hello,
I'm having issues after the last 4.0.1 package, just before 4.0.2 came
out. My engine was running great, the hosts started the VM, and the
hosted-engine command was working fine.
Now I get an xmlrpc 3.6 deprecation warning which I never got before. See
the attachment for this output.
I have checked whether IPv6 was disabled, but it isn't, as it is needed
for this.
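For reference, this is how I checked (a value of 0 means IPv6 is enabled):

    sysctl net.ipv6.conf.all.disable_ipv6
    # net.ipv6.conf.all.disable_ipv6 = 0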
Any clue?
Thanks!
Matt
8 years, 3 months
Re: [ovirt-users] HostedEngine with HA
by Carlos Rodrigues
On Thu, 2016-08-18 at 08:55 +0200, Simone Tiraboschi wrote:
> On Wed, Aug 17, 2016 at 11:06 AM, Carlos Rodrigues <cmar(a)eurotux.com>
> wrote:
> >
> > Anyone can help me to build HA on HostedEngine VM?
> >
> > How can i guarantee that if host with HostedEngine VM goes down,
> > the
> > HostedEngine VM moves to another host?
>
> That is the job of the ovirt-ha-agent running on your hosts.
>
> Can you please post the output of
> hosted-engine --vm-status
> ?
Here is the output from both hosts:
[root@ied-blade11 ~]# hosted-engine --vm-status
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
  import vdsm.vdscli

--== Host 1 status ==--

Status up-to-date   : True
Hostname            : ied-blade13.install.eurotux.local
Host ID             : 1
Engine status       : {"health": "good", "vm": "up", "detail": "up"}
Score               : 3400
stopped             : False
Local maintenance   : False
crc32               : a941698e
Host timestamp      : 153049
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=153049 (Thu Aug 18 09:22:02 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False

--== Host 2 status ==--

Status up-to-date   : True
Hostname            : ied-blade11.install.eurotux.local
Host ID             : 2
Engine status       : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score               : 3400
stopped             : False
Local maintenance   : False
crc32               : 8f79e0e7
Host timestamp      : 143139
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=143139 (Thu Aug 18 09:22:12 2016)
    host-id=2
    score=3400
    maintenance=False
    state=EngineDown
    stopped=False

[root@ied-blade13 ~]# hosted-engine --vm-status
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
  import vdsm.vdscli

--== Host 1 status ==--

Status up-to-date   : True
Hostname            : ied-blade13.install.eurotux.local
Host ID             : 1
Engine status       : {"health": "good", "vm": "up", "detail": "up"}
Score               : 3400
stopped             : False
Local maintenance   : False
crc32               : cdd022a2
Host timestamp      : 153085
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=153085 (Thu Aug 18 09:22:38 2016)
    host-id=1
    score=3400
    maintenance=False
    state=EngineUp
    stopped=False

--== Host 2 status ==--

Status up-to-date   : True
Hostname            : ied-blade11.install.eurotux.local
Host ID             : 2
Engine status       : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score               : 3400
stopped             : False
Local maintenance   : False
crc32               : fbbad961
Host timestamp      : 143175
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=143175 (Thu Aug 18 09:22:48 2016)
    host-id=2
    score=3400
    maintenance=False
    state=EngineDown
    stopped=False
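If more detail helps, I can also follow the agents live on both hosts; the path below should be the default agent log location for ovirt-hosted-engine-ha:

    tail -f /var/log/ovirt-hosted-engine-ha/agent.log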
>
> >
> > Regards,
> > Carlos Rodrigues
> >
> > On Tue, 2016-08-16 at 11:53 +0100, Carlos Rodrigues wrote:
> > >
> > > On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
> > > >
> > > >
> > > >
> > > >
> > > > On 12 August 2016 at 20:23, Carlos Rodrigues <cmar(a)eurotux.com>
> > > > wrote:
> > > > >
> > > > >
> > > > > Hello,
> > > > >
> > > > > I have one cluster with two hosts with power management
> > > > > correctly
> > > > > configured and one virtual machine with HostedEngine over
> > > > > shared
> > > > > storage with FiberChannel.
> > > > >
> > > > > When I shut down the network on the host running the
> > > > > HostedEngine VM, should the HostedEngine VM migrate
> > > > > automatically to another host?
> > > > >
> > > > migrate on which network?
> > > >
> > > > >
> > > > >
> > > > > What is the expected behaviour on this HA scenario?
> > > >
> > > > After a few minutes your VM will be shut down by the High
> > > > Availability agent, as it can't see the network, and started on
> > > > another host.
> > >
> > >
> > > I'm testing this scenario, and after shutting down the network I
> > > would expect the agent to shut the engine VM down and start it on
> > > another host, but after a couple of minutes nothing happens, and on
> > > the host that still has network we are getting the following
> > > messages:
> > >
> > > Aug 16 11:44:08 ied-blade11.install.eurotux.local
> > > ovirt-ha-agent[2779]: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config
> > > ERROR Unable to get vm.conf from OVF_STORE, falling back to initial
> > > vm.conf
> > >
> > > I think the HA agent is trying to get the VM configuration but
> > > somehow it can't get vm.conf to start the VM.
> > >
> > > Regards,
> > > Carlos Rodrigues
> > >
> > >
> > > >
> > > >
> > > > >
> > > > >
> > > > >
> > > > > Regards,
> > > > >
> > > > > --
> > > > > Carlos Rodrigues
> > > > >
> > > > > Engenheiro de Software Sénior
> > > > >
> > > > > Eurotux Informática, S.A. | www.eurotux.com
> > > > > (t) +351 253 680 300 (m) +351 911 926 110
> > > > >
> > > >
> > --
> > Carlos Rodrigues
> >
> > Engenheiro de Software Sénior
> >
> > Eurotux Informática, S.A. | www.eurotux.com
> > (t) +351 253 680 300 (m) +351 911 926 110
> >
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
8 years, 3 months