oVirt Node 4.3.6 Logical Network Issue - Network Down
by jeremy_tourville@hotmail.com
I have built a new host; my setup is a single hyperconverged node. I followed the directions to create a new logical network, but the engine has marked the network as down (Hosts > Network Interfaces tab > Setup Host Networks).
Here is my network config on the host:
[root@vmh ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp96s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP group default qlen 1000
link/ether 0c:c4:7a:f9:b9:88 brd ff:ff:ff:ff:ff:ff
3: enp96s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 0c:c4:7a:f9:b9:89 brd ff:ff:ff:ff:ff:ff
inet 172.30.51.2/30 brd 172.30.51.3 scope global noprefixroute dynamic enp96s0f1
valid_lft 82411sec preferred_lft 82411sec
inet6 fe80::d899:439c:5ee8:e292/64 scope link noprefixroute
valid_lft forever preferred_lft forever
19: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 96:e4:00:aa:71:f7 brd ff:ff:ff:ff:ff:ff
23: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 36:0f:60:69:e1:2b brd ff:ff:ff:ff:ff:ff
24: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f6:3b:14:e1:15:48 brd ff:ff:ff:ff:ff:ff
25: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:c4:7a:f9:b9:88 brd ff:ff:ff:ff:ff:ff
inet 172.30.50.3/24 brd 172.30.50.255 scope global dynamic ovirtmgmt
valid_lft 84156sec preferred_lft 84156sec
inet6 fe80::ec4:7aff:fef9:b988/64 scope link
valid_lft forever preferred_lft forever
26: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UNKNOWN group default qlen 1000
link/ether fe:16:3e:50:53:cd brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe50:53cd/64 scope link
valid_lft forever preferred_lft forever
VLAN 101 is attached to enp96s0f0, which is my ovirtmgmt interface. (IP range 172.30.50.x)
VLAN 102 is attached to enp96s0f1, which is my storage NIC for gluster. (IP range 172.30.51.x)
VLAN 103 is also attached to enp96s0f1 and is intended for most of my VMs that are not infrastructure related. (IP range 192.168.2.x)
I am pretty confident my router/switch is set up correctly. As a test, I can go to localhost > Networking > Add VLAN, assign enp96s0f1 to VLAN 103, and it does get an IP address in the 192.168.2.x range. The host can also ping the 192.168.2.1 gateway.
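(For reference, the CLI equivalent of that cockpit test is roughly the following - a sketch, using the same NIC and VLAN as above:)
# create a tagged subinterface for VLAN 103 on the storage NIC
ip link add link enp96s0f1 name enp96s0f1.103 type vlan id 103
ip link set enp96s0f1.103 up
# request a DHCP lease on the tagged interface
dhclient -v enp96s0f1.103
# confirm the 192.168.2.x gateway is reachable
ping -c 3 192.168.2.1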
Why doesn't the engine think the VLAN is up? Which logs do I need to review?
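(For anyone digging into this: host network setup goes through VDSM, so the standard places to look are the VDSM logs on the host and engine.log on the engine VM:)
# on the host
tail -f /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log
# on the engine VM
tail -f /var/log/ovirt-engine/engine.log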
4 years, 6 months
Disk encryption in oVirt
by MIMMIK _
Is there a way to get a full disk encryption on virtual disks used by VMs in oVirt?
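(The usual workaround, absent a platform-level feature, is encrypting inside the guest with LUKS - a minimal sketch, where /dev/vdb is a hypothetical data disk attached to the VM:)
# initialize LUKS on the data disk, open the mapping, then put a filesystem on it
cryptsetup luksFormat /dev/vdb
cryptsetup open /dev/vdb cryptdata
mkfs.xfs /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /mnt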
Regards
4 years, 6 months
oVirt Metrics Store Installation - Error during SSO authentication
by Markus Schaufler
Hi,
oVirt 4.3.5
changed certificate to one from an official CA
configured active directory auth
no kerberos / no ldapS
as per: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...
ANSIBLE_JINJA2_EXTENSIONS="jinja2.ext.do" ./configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml --ask-vault-pass -vvvv
During installation of the metrics store, the following error appears:
TASK [oVirt.image-template : Login to oVirt] *********************************************************************************************************
task path: /usr/share/ansible/roles/ovirt.image-template/tasks/qcow2_image.yml:41
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944 `" && echo ansible-tmp-1571121133.7-63276692983944="` echo /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/ovirt/ovirt_auth.py
<localhost> PUT /root/.ansible/tmp/ansible-local-32046wbKds4/tmpyM5Ro2 TO /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944/AnsiballZ_ovirt_auth.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944/ /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944/AnsiballZ_ovirt_auth.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944/AnsiballZ_ovirt_auth.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1571121133.7-63276692983944/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ovirt_auth_payload_2cmBY9/__main__.py", line 276, in main
token = connection.authenticate()
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 382, in authenticate
self._sso_token = self._get_access_token()
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 628, in _get_access_token
sso_error[1]
AuthError: Error during SSO authentication access_denied : Cannot authenticate user 'xxx@xxxx.LOCAL': No valid profile found in credentials..
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"ca_file": "/etc/pki/ovirt-engine/apache-ca.pem",
"compress": true,
"headers": null,
"hostname": "ovirt-poc.xxxx.at",
"insecure": false,
"kerberos": false,
"ovirt_auth": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"state": "present",
"timeout": 0,
"token": null,
"url": "https://ovirt-poc.xxxx.at/ovirt-engine/api",
"username": "xxx(a)xxx.LOCAL"
}
},
"msg": "Error during SSO authentication access_denied : Cannot authenticate user 'xxx(a)xxx.LOCAL': No valid profile found in credentials.."
}
TASK [oVirt.image-template : Remove downloaded image] ************************************************************************************************
task path: /usr/share/ansible/roles/ovirt.image-template/tasks/qcow2_image.yml:210
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}
TASK [oVirt.image-template : Remove vm] **************************************************************************************************************
task path: /usr/share/ansible/roles/ovirt.image-template/tasks/qcow2_image.yml:216
fatal: [localhost]: FAILED! => {
"msg": "The conditional check 'ovirt_templates | length == 0' failed. The error was: error while evaluating conditional (ovirt_templates | length == 0): 'ovirt_templates' is undefined\n\nThe error appears to be in '/usr/share/ansible/roles/ovirt.image-template/tasks/qcow2_image.yml': line 216, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Remove vm\n ^ here\n"
}
###############################################
I double-checked the credentials in "metrics-store-config.yml" and "secure_vars.yaml", and also tried admin@internal, with the same error.
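(One way to reproduce this outside Ansible is to query the engine's SSO token endpoint directly - a sketch, with the hostname and username placeholders from the log above:)
curl -k -H "Accept: application/json" 'https://ovirt-poc.xxxx.at/ovirt-engine/sso/oauth/token?grant_type=password&username=admin@internal&password=PASSWORD&scope=ovirt-app-api'
# a JSON access_token means the credentials and profile resolve fine;
# access_denied here reproduces the failure. "No valid profile" usually means
# the domain suffix in the username matches no aaa profile configured under
# /etc/ovirt-engine/extensions.d/ on the engine.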
Any hints on this?
4 years, 6 months
Help/guidelines/ideas on how to create new hosted-engine on final server with old config
by Lars Bækmark
I'm looking for guidelines/ideas for moving/migrating the hosted-engine from the initial install with local storage to the "final" server.
Initially it was a test to see whether this would work or not, but it has now become production: the initial host, named "ovirt", that hosts the hosted-engine on local storage is "just" doing that. My hosted-engine is called "ovirtmgr", and I created a new host to run virtual machines called "Hermod" - getting better at the naming thing, right? This test was so successful that I converted the old Xen/Oracle based host to CentOS 7, the same as hosts ovirt and Hermod:
Linux hermod 3.10.0-1062.1.1.el7.x86_64 #1 SMP Fri Sep 13 22:55:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Linux fernis 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
oVirt version is: 4.3.5.4-1.el7
I want to move the hosted-engine to Fenris and at the same time use an NFS share for it. I can shut the whole setup down, as long as it is running again by daytime.
I do know that I should have started with the hosted-engine in the correct place, but that's not the case anymore. There are eleven VMs in total.
Can anyone suggest a plan on how to move/create a new hosted-engine on my server Fenris? The history of the old server is not important; I just want the hosted-engine on Fenris using an NFS share.
I did find some suggestions on the web, but they all look quite complicated: https://www.ovirt.org/…/engine/migrate-to-hosted-engine.html
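(For the record, the commonly documented route is engine-backup plus a restore-from-file deployment - a rough sketch, with placeholder file names:)
# 1. on the current engine VM: take a full backup
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log
# 2. copy the backup off the engine VM and shut the old hosted-engine down
# 3. on Fenris: deploy a fresh hosted-engine onto the NFS share,
#    restoring the old configuration in one step
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz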
I just love the product! I'm now running VMs 2-3 times faster by converting from Xen to KVM, but still using the same storage and server! :-D
Appreciate any help! 😀
/Lars
Lars Bækmark
lars@baekmark.dk
+45 2241 0500
Mølleengen 6
3550 Slangerup
Danmark
-------------------------------------
Oversteer is when you hit the wall with the rear of the car.
Horsepower is how fast you hit the wall.
Torque is how far you take the wall with you
4 years, 6 months
Fencing Agent using SuperMicro IPMI Fails
by jeremy_tourville@hotmail.com
I am trying to set up the power management fencing agent on my host.
Address: I put in the IP address of the IPMI web page (which does work via my web browser without issue).
Username: ADMIN
PW: my_password
Type: ipmilan
Options: lanplus=1
I click on the test button and it fails.
Board Manufacturer: Supermicro
Board Product Name: X11DPi-N
Any suggestions for troubleshooting?
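(A first step that takes the engine out of the picture is to test the same credentials from a host shell - a sketch, with a placeholder IPMI address and options mirroring the settings above:)
# raw IPMI over lanplus with the fencing credentials
ipmitool -I lanplus -H 192.168.x.x -U ADMIN -P 'my_password' chassis power status
# the same through the actual fence agent the engine would call
fence_ipmilan --ip=192.168.x.x --username=ADMIN --password='my_password' --lanplus --action=status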
4 years, 6 months
ovirt-ha-broker ERROR Failed to getVdsStats: No 'network' in result
by Strahil Nikolov
Hi All,
After a host reinstall + deploy (UI -> Hosts -> Management -> Reinstall) I see the following error in the ovirt-ha-broker :
ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
Anyone got an idea what this is about? Should I worry about that message?
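(The broker reads those stats from VDSM, so one way to narrow it down is to ask VDSM directly - a sketch using the standard vdsm-client tool on the host:)
# the broker expects a 'network' key in the host stats
vdsm-client Host getStats | grep -A5 network
# broker-side detail is logged here
tail -f /var/log/ovirt-hosted-engine-ha/broker.log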
Best Regards,
Strahil Nikolov
4 years, 6 months
Re: improve file operation speed in HCI
by Strahil
If you seek performance, set the tuned-adm profile in the VM to 'throughput-performance' and the I/O scheduler to either 'noop' or 'none' (depending on whether multi-queue is enabled).
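(In command form, roughly - /dev/vda stands in for the guest's virtual disk:)
# inside the VM: switch the tuned profile
tuned-adm profile throughput-performance
# check, then set, the I/O scheduler for the virtual disk
cat /sys/block/vda/queue/scheduler
echo noop > /sys/block/vda/queue/scheduler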
Usually, if you create the gluster cluster via cockpit and then install the hosted engine via cockpit as well, all options on your gluster volumes are already the most optimal you need.
Best Regards,
Strahil Nikolov
On Oct 18, 2019 15:30, Jayme <jaymef@gmail.com> wrote:
>
> My VMs are using the virtual-guest tuned profile and the oVirt node hosts are using the virtual-host profile. Those seem to be good defaults from what I'm seeing. I will test I/O schedulers to see if that makes any difference and also try out the high performance VM profile (I was staying away from that profile due to the loss of high availability).
>
> On Fri, Oct 18, 2019 at 9:18 AM Jayme <jaymef@gmail.com> wrote:
>>
>> The VMs are basically as stock CentOS 7.x as you can get. There are so many layers to deal with in HCI that it's difficult to know where to begin with tuning. I was focusing mainly on gluster. Is it recommended to do tuning directly on oVirt host nodes as well, such as the I/O scheduler and tuned-adm profiles?
>>
>> On Fri, Oct 18, 2019 at 6:55 AM Strahil <hunter86_bg@yahoo.com> wrote:
>>>
>>> What are your I/O scheduler and tuned-adm profile in the VM?
>>> Red Hat based VMs use 'deadline', which prioritizes reads over writes -> you can use 'noop' or 'none'.
>>>
>>> For profile, you can use high-performance.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Oct 18, 2019 06:45, Jayme <jaymef@gmail.com> wrote:
>>>>
>>>> I'm wondering if anyone has any tips to improve file/directory operations in an HCI replica 3 (no arbiter) configuration with SSDs and a 10GbE storage network.
>>>>
>>>> I am running the stock "optimize for virt store" volume settings currently and am wondering what improvements, if any, I can make for VM write speed, and more specifically anything I can tune to increase the performance of small file operations such as copying, untar, npm installs, etc.
>>>>
>>>> For some context, I'm seeing ~50MB/s write speeds inside the VM with: dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct -- I am not sure how this compares to other HCI setups; I feel like it should be higher with SSD-backed storage. The same command from the gluster mount is over 400MB/s.
>>>>
>>>> I've read some things about metadata caching, read-ahead and other options. There are so many that I'm not sure where to start, and I'm also not sure which could potentially have a negative impact on VM stability/reliability (a sketch of what I mean follows below).
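>>>> For instance (untested on my side; I'd try these on a non-production volume first):
>>>> # enable gluster's stock metadata-cache option group (md-cache + upcall)
>>>> gluster volume set prod_b group metadata-cache
>>>> # parallel readdir is often suggested for directory-heavy workloads
>>>> gluster volume set prod_b performance.parallel-readdir on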
>>>>
>>>> Here are options for one of my volumes:
>>>>
>>>> Volume Name: prod_b
>>>> Type: Replicate
>>>> Volume ID: c3e7447e-8514-4e4a-9ff5-a648fe6aa537
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x 3 = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gluster0.example.com:/gluster_bricks/prod_b/prod_b
>>>> Brick2: gluster1.example.com:/gluster_bricks/prod_b/prod_b
>>>> Brick3: gluster2.example.com:/gluster_bricks/prod_b/prod_b
>>>> Options Reconfigured:
>>>> server.event-threads: 4
>>>> client.event-threads: 4
>>>> performance.client-io-threads: on
>>>> nfs.disable: on
>>>> transport.address-family: inet
>>>> performance.quick-read: off
>>>> performance.read-ahead: off
>>>> performance.io-cache: off
>>>> performance.low-prio-threads: 32
>>>> network.remote-dio: off
>>>> cluster.eager-lock: enable
>>>> cluster.quorum-type: auto
>>>> cluster.server-quorum-type: server
>>>> cluster.data-self-heal-algorithm: full
>>>> cluster.locking-scheme: granular
>>>> cluster.shd-max-threads: 8
>>>> cluster.shd-wait-qlength: 10000
>>>> features.shard: on
>>>> user.cifs: off
>>>> storage.owner-uid: 36
>>>> storage.owner-gid: 36
>>>> network.ping-timeout: 30
>>>> performance.strict-o-direct: on
>>>> cluster.granular-entry-heal: enable
>>>> server.allow-insecure: on
>>>> cluster.choose-local: off
>>>>
>>>>
4 years, 6 months