Hosted Engine Setup error (v4.2.3)
by ovirt@fateknollogee.com
Engine network config error
Following this blog post:
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-glus...
I get an error saying the hosted engine setup is "trying" to use virbr0
(192.168.xxx.x) even though I have the bridge interface set to "eno1".
Regardless of whether "Edit Hosts File" is checked or unchecked, it
overwrites my engine IP entry, changing it from 10.50.235.x to 192.168.xxx.x.
The same thing happens whether I set the engine IP to Static or DHCP (I
don't have DNS; I'm using static entries in /etc/hosts).
Any ideas why it "insists" on using "virbr0" instead of "eno1"?
**also posted this on IRC
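For reference, a quick way to check which network the setup is picking up
(assuming the 192.168.xxx.x range comes from the default libvirt NAT
network, which is normally bridged on virbr0 - that is only my guess):
# list the libvirt networks and see which are active
virsh net-list --all
# dump the "default" network definition to confirm its bridge name and subnet
virsh net-dumpxml default
# confirm the static engine entry and the eno1 address are what you expect
grep 10.50.235 /etc/hosts
ip addr show eno1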
6 years, 7 months
Private VLANs
by Colin Coe
Hi all
We're running RHEV 4.1.10 on HPE blade servers using Virtual Connect, which
talks to Cisco switches.
I want to implement private VLANs. Does the combination of oVirt + Cisco
switches + HPE Virtual Connect work with private VLANs?
To be clear, I want to have a couple of logical networks (i.e. VLANs) where
the nodes in each VLAN cannot talk to each other directly but must go
through the router/firewall.
Thanks
CC
6 years, 7 months
Ovirt host becomes non_operational
by 03ce007@gmail.com
I am setting up a self-hosted oVirt engine (4.2) on CentOS 7.4.
While running the hosted-engine --deploy script, it fails at "Check host status" with a 'host has been set in non_operational status' error.
The logs on the engine VM at /var/log/ovirt-engine/host-deploy show that the Ansible task for "add host" ran successfully, but after that the host still becomes non_operational!
Where can I find more information on this error?
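For reference, the usual places to look for more detail (paths assume a
standard oVirt install, so treat this as a sketch rather than a definitive
list):
# on the engine VM: the engine log normally records why a host went non_operational
less /var/log/ovirt-engine/engine.log
# on the host itself: VDSM logs and the vdsmd service journal
less /var/log/vdsm/vdsm.log
journalctl -u vdsmd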
Thank you.
6 years, 7 months
[ANN] introducing ovirt-openshift-extensions
by Roy Golan
Hi all,
Running OpenShift on oVirt seems more and more attractive and is starting
to get attention.
It is very easy to do so today without needing any special configuration;
however, to tighten the integration and take advantage of the underlying
infra provider (oVirt), we can do better. For example, oVirt can connect
various storage providers and serve disk space to containers [1][2]. Also,
OpenShift can ask oVirt for VMs and deploy them as master or application
nodes for its own use, without the administrator having to do all of that
manually.
This project [1] is the home for the ovirt-flexvolume-driver and the
ovirt-provisioner [3], and merging the ovirt-cloudprovider [4] is a work in
progress.
The code in this repository is work in progress and moving quickly; however,
it has automation (stdci v2 :)), it works, and it does at least what you can
observe in the demo videos. I would highly appreciate it if anyone who tries
it provides feedback, either on the #ovirt channel or the mailing list, or
reports bugs directly on the GitHub project page
<https://github.com/oVirt/ovirt-openshift-extensions>
[1] https://github.com/oVirt/ovirt-openshift-extensions
[2] https://ovirt.org/blog/2018/02/your-container-volumes-served-by-ovirt/
[3] https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
[4] https://github.com/rgolangh/ovirt-k8s-cloudprovider
Thanks,
Roy
6 years, 7 months
Host needs to be reinstalled message
by Gianluca Cecchi
Hello,
on a test environment in 4.1, when selecting a host, I get an exclamation
mark in the lower pane and, beside it, the phrase:
Host needs to be reinstalled as important configuration changes were
applied on it
Where can I get more information about what it thinks has changed?
Could a change in bonding mode generate this kind of message?
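One possible place to look, assuming the reason is recorded in the engine
database's audit_log table and that the database uses the default name
"engine" (both assumptions on my part):
# run on the engine machine; "engine" is the default database name
sudo -u postgres psql engine -c "select log_time, message from audit_log where message ilike '%reinstall%' order by log_time desc limit 10;"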
Thanks,
Gianluca
6 years, 7 months
What is "Active VM" snapshot
by nikita.a.ogurtsov@gmail.com
Hi!
Can anybody explain what the "Active VM" snapshot that is present on each VM is, and whether it contains a snapshot of the VM's memory?
I need this for a PCI assessment and could not find an answer in the documentation or any other public sources.
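For context, one way to list a VM's snapshots and see the "Active VM" entry
is the REST API; the engine hostname, VM id and credentials below are
placeholders:
# list the snapshots of one VM via the oVirt REST API
curl -k -u admin@internal:password -H "Accept: application/xml" "https://engine.example.com/ovirt-engine/api/vms/<vm-id>/snapshots"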
Thanks!
6 years, 7 months
Re: Ovirt Self Hosted engine deploy fails
by Sumit Bharadia
thanks, I get the message below:
*Failed to start ovn-controller.service: Unit not found.*
isn't it included in the appliance we get for the self-hosted oVirt engine?
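For what it's worth, a quick way to check whether the OVN pieces are present
on the host at all (the package name filter below is only a guess):
# list any OVN / Open vSwitch related packages installed on the host
rpm -qa | grep -iE 'ovn|openvswitch'
# check whether the ovn-controller unit file exists at all
systemctl status ovn-controller
systemctl list-unit-files | grep -i ovn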
On 14 May 2018 at 12:53, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>
>
> On Mon, May 14, 2018 at 11:52 AM, Sumit Bharadia <03ce007(a)gmail.com>
> wrote:
>
>> the output of: journalctl -xe -u ovn-controller
>> -- No entries --
>>
>> what is the command to manually restart this service?
>>
>
> systemctl start ovn-controller
>
>
>>
>> Thank you.
>>
>> On 14 May 2018 at 09:12, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>>
>>>
>>>
>>> On Mon, May 14, 2018 at 10:03 AM, Sumit Bharadia <03ce007(a)gmail.com>
>>> wrote:
>>>
>>>> attached from engine VM.
>>>>
>>>
>>>
>>> The issue is here:
>>>
>>> 2018-05-14 08:04:50,596 p=26024 u=ovirt | TASK
>>> [ovirt-provider-ovn-driver : Ensure ovn-controller is started] ************
>>> 2018-05-14 08:04:51,991 p=26024 u=ovirt | fatal: [ovirt]: FAILED! => {
>>> "changed": false
>>> }
>>>
>>> MSG:
>>>
>>> Unable to start service ovn-controller: A dependency job for
>>> ovn-controller.service failed. See 'journalctl -xe' for details.
>>>
>>>
>>> Can you please double check the output of:
>>> journalctl -xe -u ovn-controller
>>>
>>>
>>>
>>>>
>>>> Thank you.
>>>>
>>>> On 14 May 2018 at 08:46, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>>>>
>>>>> Hi,
>>>>> can you please attach host-deploy logs?
>>>>> You can find them under /var/log/ovirt-engine/host-deploy on the
>>>>> engine VM (you can reach it with ssh from the host you tried to deploy).
>>>>>
>>>>> On Mon, May 14, 2018 at 9:27 AM, <03ce007(a)gmail.com> wrote:
>>>>>
>>>>>> I am trying to set up a self-hosted oVirt engine (4.2) on CentOS 7.4,
>>>>>> but the setup fails after the 'Add host' task on 'wait for the host to be up'.
>>>>>> The log doesn't seem to give a clear indication of where the issue might be,
>>>>>> but I got the output below when I sshed manually into the oVirt appliance
>>>>>> where the engine deploy runs, and I see the host status as 'install_failed'.
>>>>>>
>>>>>> Where might the issue be, and where can I find a detailed log of the
>>>>>> failure?
>>>>>>
>>>>>> "ovirt_hosts": [
>>>>>> {
>>>>>> "address": "ovirt",
>>>>>> "affinity_labels": [],
>>>>>> "auto_numa_status": "unknown",
>>>>>> "certificate": {
>>>>>> "organization": "ovirt",
>>>>>> "subject": "O=ovirt,CN=ovirt"
>>>>>> },
>>>>>> "cluster": {
>>>>>> "href": "/ovirt-engine/api/clusters/ba170b8e-5744-11e8-8676-00163e3c9a32",
>>>>>> "id": "ba170b8e-5744-11e8-8676-00163e3c9a32"
>>>>>> },
>>>>>> "comment": "",
>>>>>> "cpu": {
>>>>>> "speed": 0.0,
>>>>>> "topology": {}
>>>>>> },
>>>>>> "device_passthrough": {
>>>>>> "enabled": false
>>>>>> },
>>>>>> "devices": [],
>>>>>> "external_network_provider_configurations": [],
>>>>>> "external_status": "ok",
>>>>>> "hardware_information": {
>>>>>> "supported_rng_sources": []
>>>>>> },
>>>>>> "hooks": [],
>>>>>> "href": "/ovirt-engine/api/hosts/aad0fe84-2a9b-446d-ac02-82a8f6eb2a3c",
>>>>>> "id": "aad0fe84-2a9b-446d-ac02-82a8f6eb2a3c",
>>>>>> "katello_errata": [],
>>>>>> "kdump_status": "unknown",
>>>>>> "ksm": {
>>>>>> "enabled": false
>>>>>> },
>>>>>> "max_scheduling_memory": 0,
>>>>>> "memory": 0,
>>>>>> "name": "ovirt",
>>>>>> "network_attachments": [],
>>>>>> "nics": [],
>>>>>> "numa_nodes": [],
>>>>>> "numa_supported": false,
>>>>>> "os": {
>>>>>> "custom_kernel_cmdline": ""
>>>>>> },
>>>>>> "permissions": [],
>>>>>> "port": 54321,
>>>>>> "power_management": {
>>>>>> "automatic_pm_enabled": true,
>>>>>> "enabled": false,
>>>>>> "kdump_detection": true,
>>>>>> "pm_proxies": []
>>>>>> },
>>>>>> "protocol": "stomp",
>>>>>> "se_linux": {},
>>>>>> "spm": {
>>>>>> "priority": 5,
>>>>>> "status": "none"
>>>>>> },
>>>>>> "ssh": {
>>>>>> "fingerprint": "SHA256:o98ZOygBK0jcfY+l5nfi0EGV9v3A4zjclG9d+C3U0WA",
>>>>>> "port": 22
>>>>>> },
>>>>>> "statistics": [],
>>>>>> "status": "install_failed",
>>>>>> "storage_connection_extensions": [],
>>>>>> "summary": {
>>>>>> "total": 0
>>>>>> },
>>>>>> "tags": [],
>>>>>> "transparent_huge_pages": {
>>>>>> "enabled": false
>>>>>> },
>>>>>> "type": "rhel",
>>>>>> "unmanaged_networks": [],
>>>>>> "update_available": false
>>>>>> }
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
6 years, 7 months
Failed to upgrade from 4.1 to 4.2 - PostgreSQL version required
by Aziz
Hi oVirt users,
I'm trying to upgrade my oVirt from version 4.1 to 4.2, but I'm stuck when
issuing the *engine-setup* command, which returns the following errors:
Upgrading PostgreSQL
*[ ERROR ] Failed to execute stage 'Misc configuration': Command
'/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to execute*
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Rolling back to the previous PostgreSQL instance (postgresql).
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180426122630-rpkrel.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180426122823-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
*[ ERROR ] Execution of setup failed*
Can anyone help resolve this?
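In case it helps, a couple of things worth checking (the setup log path is
the one printed above; the service unit name is an assumption on my part):
# search the engine-setup log for the PostgreSQL failure details
grep -i -A 10 postgresql /var/log/ovirt-engine/setup/ovirt-engine-setup-20180426122630-rpkrel.log
# check the journal of the new PostgreSQL service for upgrade/startup errors
journalctl -u rh-postgresql95-postgresql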
Thank you in advance.
BR.
6 years, 7 months
Export Domain Lock File
by Nicholas Vaughan
Hi,
Is there a lock file on the Export Domain which stops it from being mounted
to a 2nd instance of oVirt? We have a replicated Export Domain in a separate
location that we would like to mount to a backup instance of oVirt for DR
purposes.
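For context, one way to inspect the replicated copy before attaching it,
assuming a file-based export domain with the usual dom_md layout (the layout
and the <sd-uuid> path below are my assumptions/placeholders):
# on the replicated export storage, look at the domain metadata
cat /path/to/export/<sd-uuid>/dom_md/metadata
# the POOL_UUID line should show whether the domain still considers itself attached to a data center
grep POOL_UUID /path/to/export/<sd-uuid>/dom_md/metadata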
Thanks in advance.
Nick
6 years, 7 months
Gluster quorum
by Demeter Tibor
Dear oVirt users,
I've followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3.
I upgraded the first node with yum upgrade, and it seems to be working fine now. But since the upgrade, the Gluster information seems to be displayed incorrectly on the admin panel: the volume is yellow, and there are red bricks from that node.
I've checked on the console, and I think my Gluster is not degraded:
root@n1 ~]# gluster volume list
volume1
volume2
[root@n1 ~]# gluster volume info
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster/brick/brick1
Brick2: 10.104.0.2:/gluster/brick/brick1
Brick3: 10.104.0.3:/gluster/brick/brick1
Brick4: 10.104.0.1:/gluster/brick/brick2
Brick5: 10.104.0.2:/gluster/brick/brick2
Brick6: 10.104.0.3:/gluster/brick/brick2
Brick7: 10.104.0.1:/gluster/brick/brick3
Brick8: 10.104.0.2:/gluster/brick/brick3
Brick9: 10.104.0.3:/gluster/brick/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on
Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster2/brick/brick1
Brick2: 10.104.0.2:/gluster2/brick/brick1
Brick3: 10.104.0.3:/gluster2/brick/brick1
Brick4: 10.104.0.1:/gluster2/brick/brick2
Brick5: 10.104.0.2:/gluster2/brick/brick2
Brick6: 10.104.0.3:/gluster2/brick/brick2
Brick7: 10.104.0.1:/gluster2/brick/brick3
Brick8: 10.104.0.2:/gluster2/brick/brick3
Brick9: 10.104.0.3:/gluster2/brick/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
[root@n1 ~]# gluster volume status
Status of volume: volume1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1 49152 0 Y 3464
Brick 10.104.0.2:/gluster/brick/brick1 49152 0 Y 68937
Brick 10.104.0.3:/gluster/brick/brick1 49161 0 Y 94506
Brick 10.104.0.1:/gluster/brick/brick2 49153 0 Y 3457
Brick 10.104.0.2:/gluster/brick/brick2 49153 0 Y 68943
Brick 10.104.0.3:/gluster/brick/brick2 49162 0 Y 94514
Brick 10.104.0.1:/gluster/brick/brick3 49154 0 Y 3465
Brick 10.104.0.2:/gluster/brick/brick3 49154 0 Y 68949
Brick 10.104.0.3:/gluster/brick/brick3 49163 0 Y 94520
Self-heal Daemon on localhost N/A N/A Y 54356
Self-heal Daemon on 10.104.0.2 N/A N/A Y 962
Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977
Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603
Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: volume2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1 49155 0 Y 3852
Brick 10.104.0.2:/gluster2/brick/brick1 49158 0 Y 68955
Brick 10.104.0.3:/gluster2/brick/brick1 49164 0 Y 94527
Brick 10.104.0.1:/gluster2/brick/brick2 49156 0 Y 3851
Brick 10.104.0.2:/gluster2/brick/brick2 49159 0 Y 68961
Brick 10.104.0.3:/gluster2/brick/brick2 49165 0 Y 94533
Brick 10.104.0.1:/gluster2/brick/brick3 49157 0 Y 3883
Brick 10.104.0.2:/gluster2/brick/brick3 49160 0 Y 68968
Brick 10.104.0.3:/gluster2/brick/brick3 49166 0 Y 94541
Self-heal Daemon on localhost N/A N/A Y 54356
Self-heal Daemon on 10.104.0.2 N/A N/A Y 962
Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977
Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603
Task Status of Volume volume2
------------------------------------------------------------------------------
There are no active volume tasks
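For completeness, a couple of additional checks that can be run from the
same node (standard Gluster CLI commands; volume names taken from the output
above):
# confirm all peers are connected
gluster peer status
# check whether there are pending heals that could explain the warning
gluster volume heal volume1 info
gluster volume heal volume2 info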
I think oVirt can't read valid information about Gluster.
I can't continue upgrading the other hosts while this problem exists.
Please help me :)
Thanks
Regards,
Tibor
6 years, 7 months