Failed to add storage domain
by thunderlight1@gmail.com
Hi!
I have installed oVirt using the iso ovirt-node-ng-installer-4.3.2-2019031908.el7. I then ran the Hosted Engine deployment through Cockpit.
I got an error when it tried to create the storage domain. It successfully mounted the NFS share on the host. Below is the error I got:
2019-04-14 10:40:38,967+0200 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u'Check storage domain free space', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-04-14 10:40:38,967+0200 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fb6918ad9d0> kwargs
2019-04-14 10:40:39,516+0200 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Activate storage domain', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-04-14 10:40:39,516+0200 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
2019-04-14 10:40:41,923+0200 DEBUG var changed: host "localhost" var "otopi_storage_domain_details" type "<type 'dict'>" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response
, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."
}"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<type 'list'>" value: "[]"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var "play_hosts" type "<type 'list'>" value: "[]"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<type 'list'>" value: "[]"
2019-04-14 10:40:41,924+0200 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n File "/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py", line 664, in main\\n storage_domains_module.post_create_check(sd_id)\\n File "/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py", line 526', 'task_duration': 2, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-04-14 10:40:41,924+0200 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fb691843190> kwargs ignore_errors:None
2019-04-14 10:40:41,928+0200 INFO ansible stats {
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "00:37 Minutes",
"ansible_result": "type: <type 'dict'>\nstr: {u'localhost': {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 1, 'failures': 1}}",
"ansible_type": "finish",
"status": "FAILED"
}
2019-04-14 10:40:41,928+0200 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:01 ] Force facts gathering
[ 00:01 ] Check local VM dir stat
[ 00:01 ] Obtain SSO token using username/password credentials
[ 00:01 ] Fetch host facts
[ < 1 sec ] Fetch cluster ID
[ 00:01 ] Fetch cluster facts
[ 00:01 ] Fetch Datacenter facts
[ < 1 sec ] Fetch Datacenter ID
[ < 1 sec ] Fetch Datacenter name
[ 00:02 ] Add NFS storage domain
[ 00:01 ] Get storage domain details
[ 00:01 ] Find the appliance OVF
[ 00:01 ] Parse OVF
[ < 1 sec ] Get required size
[ FAILED ] Activate storage domain
2019-04-14 10:40:41,928+0200 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7fb69404eb90> kwargs
Any suggestions on how to fix this?
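The HTTP 400 in the traceback comes from the engine API, so the real reason for the failed activation is usually recorded in the engine's own log rather than in the ansible output. A minimal debugging sketch, assuming the bootstrap engine VM is still reachable over SSH and the default log locations (the FQDN placeholder below is hypothetical):
hosted-engine --vm-status
ssh root@<engine-fqdn>
# on the engine VM, look for the failure around the activation timestamp
grep -iE 'storage domain|operation failed|ERROR' /var/log/ovirt-engine/engine.log | tail -n 50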
Q: Fixing SELinux Permissions on oVirt node
by Andrei Verovski
Hi !
I’m struggling with SELinux blocking an SNMP script from reading a log file (oVirt node manually installed on CentOS 7).
The log file is readable by all (chmod ugo+r).
The scripts work fine when executed from a terminal.
I have not dug deep into CentOS internals; I mostly use Debian and SuSE. As far as I know, SELinux can’t be turned off on an oVirt node.
Thanks in advance for any suggestion(s).
**********************
option in snmpd.conf
extend .1.3.6.1.4.1.2021.7890.5 checkraid /opt/4anvcheckraid_hp.sh
**********************
script 4anvcheckraid_hp.sh
#!/bin/bash
LOGFILE='/var/log/anvraidcheck.log'
if [ ! -f $LOGFILE ]; then
exit 0
fi
# Variant 1 with sed
sed '/^[ \t]*$/d' $LOGFILE | while read line; do
echo "$line"
exit 1
done
# Variant 2 without sed
while read line
do
if [[ "$line" =~ [^[:space:]] ]]; then
echo "$line"
exit 1
fi
done < $LOGFILE
**********************
SELinux audit log:
type=AVC msg=audit(1590673970.198:469304): avc: denied { read } for pid=12142 comm="sed" name="anvraidcheck.log" dev="dm-8" ino=138 scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:object_r:cron_log_t:s0 tclass=file permissive=0
type=AVC msg=audit(1590673970.197:469303): avc: denied { read } for pid=12141 comm="4anvcheckraid_h" name="anvraidcheck.log" dev="dm-8" ino=138 scontext=system_u:system_r:snmpd_t:s0 tcontext=system_u:object_r:cron_log_t:s0 tclass=file permissive=0
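Both denials show the snmpd_t domain being refused read access to a file labelled cron_log_t (the log is presumably written by a cron job, hence the label). A hedged sketch of one common way to handle this, generating a local policy module from the recorded denials; the module name here is made up, and the generated .te file should be reviewed before loading:
grep snmpd_t /var/log/audit/audit.log | audit2allow -M snmpd_raidlog
semodule -i snmpd_raidlog.pp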
How to connect to a guest with vGPU ?
by Josep Manel Andrés Moscardó
Hi,
I got vGPU through mdev working, but I am wondering how I would connect
to the guest and make use of the GPU. So far I have tried to access the
console through SPICE, but at some point in the boot process it switches
to the GPU and I cannot see anything else.
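Once the in-guest driver takes over the vGPU, the emulated display that SPICE shows typically stops updating, so the usual approach is to connect to a remoting service running inside the guest instead of the console. A hedged sketch for an EL-based Linux guest, assuming xrdp is available in the guest's repositories (package name and port are the common defaults):
dnf install -y xrdp
systemctl enable --now xrdp
firewall-cmd --permanent --add-port=3389/tcp && firewall-cmd --reload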
Thanks.
--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394
oVirt 4.3.9 Standalone Engine local DB install documentation
by msantoro@lanl.gov
Hello,
I am new to oVirt and installing our first dev deployment. The 4.4 documentation is RHEL 8.1 specific (e.g. "yum module <module>"), and I cannot seem to find similar install documentation for the Standalone local DB case for 4.3.x. I am using RHEL 7.x for the time being. Can someone point me to the 4.3.x install docs?
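For reference, a minimal sketch of the 4.3 standalone-engine flow on EL7, assuming the standard oVirt 4.3 release package and the local database that engine-setup configures by default; the official 4.3 installation guide covers the same steps in detail:
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum install ovirt-engine
engine-setup   # accepting the defaults gives a locally hosted PostgreSQL engine DB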
Thanks,
Marc
ovirt-websocket-proxy errors when trying noVNC
by Louis Bohm
OS: Oracle Linux 7.8 (unbreakable kernel)
Using Oracle Linux Virtualization Manager: Software Version:4.3.6.6-1.0.9.el7
Since I am running all of it on one physical machine I opted to install the ovirt-engine using the accept defaults option.
When I try to start a noVNC console I see this in the messages file:
May 26 16:49:12 lfg-kvm saslpasswd2: Could not find keytab file: /etc/qemu/krb5.tab: No such file or directory
May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found
May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found
May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found
May 26 16:49:12 lfg-kvm saslpasswd2: error deleting entry from sasldb: BDB0073 DB_NOTFOUND: No matching key/data pair found
May 26 16:49:14 lfg-kvm journal: 2020-05-26 16:49:14,704-0400 ovirt-websocket-proxy: INFO msg:824 handler exception: [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:618)
May 26 16:49:14 lfg-kvm ovirt-websocket-proxy.py: ovirt-websocket-proxy[14582] INFO msg:824 handler exception: [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:618)
I have checked the following:
[root@lfg-kvm ~]# engine-config -g WebSocketProxy
WebSocketProxy: lfg-kvm.corp.lfg.com:6100 version: general
[root@lfg-kvm ~]# engine-config -g SpiceProxyDefault
SpiceProxyDefault: http://lfg-kvm.corp.lfg.com:6100 version: general
This is a brand new install.
I am also unable to get a VNC console up and running. I have tried with an Ubuntu VM running on my Mac where I installed virt-manager. The viewer comes up for a second, says it cannot connect, and then shuts down.
Anyone have any clue?
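The 'sslv3 alert certificate unknown' from ovirt-websocket-proxy usually means the browser does not trust the engine CA, so it aborts the wss:// connection that noVNC opens to port 6100. A hedged sketch, using the engine FQDN from the engine-config output above and the default PKI download endpoint:
curl -k 'https://lfg-kvm.corp.lfg.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o engine-ca.pem
# import engine-ca.pem into the browser's trusted certificate authorities, then retry the noVNC console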
-<<—->>-
Louis Bohm
louisbohm@gmail.com
oVirt4.4 HCI single host mortal combat
by Jiří Sléžka
Hi,
I am still fighting with the oVirt 4.4 installation in an HCI single-host
configuration. It seems to be a hard fighter... ;-)
It looks like there is no 4.4 HCI single-host installation guide, so I am
using a compilation of these sources:
* https://www.ovirt.org/download/
*
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_no...
I did
* clean minimal install of CentOS 8.1
* setup networks (I am using vlans on bond for internal traffic)
dnf update -y
dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
dnf install cockpit cockpit-ovirt-dashboard vdsm-gluster
ovirt-engine-appliance glusterfs-server gluster-ansible-roles
systemctl enable --now cockpit.socket
firewall-cmd --add-service=cockpit
firewall-cmd --add-service=cockpit --permanent
ssh-keygen
ssh-copy-id root@10.0.4.11
ssh root@10.0.4.11
(10.0.4.11 is the local address on the VLAN which I would like to use as the
storage network)
I would like to run gluster-ansible-roles from the command line, but I am not
sure exactly how to do it the right way, so I am going the Cockpit way, and
the Gluster part works as expected.
The next boss is the HE install.
Round 1, Fight!
The hosted engine wizard ends with a missing ca-cert.pem (unfortunately I
closed the window and cannot find that log anymore). But it looks to me
like the problem mentioned in
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4PUIES2JGAF...
I have switched to the command line...
Round 2, Fight!
ovirt-hosted-engine-cleanup
ovirt-hosted-engine-setup
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
host has been set in non_operational status, deployment errors: code
505: Host ovirt-hci01.stud.slu.cz installation failed. Failed to
configure management network on the host., code 9000: Failed to
verify Power Management configuration for Host ovirt-hci01.stud.slu.cz.,
fix accordingly and re-deploy."}
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200526210340-h7waks.log
The interesting thing is that the VM looks like it is running in some way:
ps aux | grep kvm
qemu 26790 58.2 6.6 6927972 3295760 ? Sl 21:28 12:40
/usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
-object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
-machine pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Nehalem-IBRS,vme=on,ss=on,x2apic=on,tsc...
...
but
hosted-engine --vm-status
It seems like a previous attempt to deploy hosted-engine failed or it's
still in progress. Please clean it up before trying again
hosted-engine --check-deployed
The hosted engine has not been deployed
ok...
Round 3, Fight!
ovirt-hosted-engine-cleanup
ovirt-hosted-engine-setup --config-append=/root/answers-20200526214934.conf
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd":
["virt-install", "-n", "HostedEngineLocal", "--os-variant", "rhel8.0",
"--virt-type", "kvm", "--memory", "4096", "--vcpus", "2", "--network",
"network=default,mac=00:16:3e:7a:ce:77,model=virtio", "--disk",
"/var/tmp/localvmc7szuw4y/images/6c7c4d4b-9c11-485d-98e0-466a09888515/c16b87ac-f9d4-491d-a972-7dc333a324a0",
"--import", "--disk",
"path=/var/tmp/localvmc7szuw4y/seed.iso,device=cdrom",
"--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc",
"--video", "vga", "--sound", "none", "--controller", "usb,model=none",
"--memballoon", "none", "--boot", "hd,menu=off", "--clock",
"kvmclock_present=yes"], "delta": "0:00:04.419991", "end": "2020-05-26
22:27:20.730780", "msg": "non-zero return code", "rc": 1, "start":
"2020-05-26 22:27:16.310789", "stderr": "ERROR internal error:
process exited while connecting to monitor: 2020-05-26T20:27:19.254675Z
qemu-kvm: -object
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
Unable to access credentials /etc/pki/vdsm/libvirt-vnc/ca-cert.pem: No
such file or directory\nDomain installation does not appear to have been
successful.\nIf it was, you can restart your domain by running:\n virsh
--connect qemu:///system start HostedEngineLocal\notherwise, please
restart your installation.", "stderr_lines": ["ERROR internal error:
process exited while connecting to monitor: 2020-05-26T20:27:19.254675Z
qemu-kvm: -object
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
Unable to access credentials /etc/pki/vdsm/libvirt-vnc/ca-cert.pem: No
such file or directory", "Domain installation does not appear to have
been successful.", "If it was, you can restart your domain by running:",
" virsh --connect qemu:///system start HostedEngineLocal", "otherwise,
please restart your installation."], "stdout": "\nStarting install...",
"stdout_lines": ["", "Starting install..."]}
...and I get this error every round.
relevant lines from
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200526223648-dvejcr.log
2020-05-26 22:49:15,303+0200 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {'msg': 'non-zero return code', 'cmd':
['virt-install', '-n', 'HostedEngineLocal', '--os-variant', 'rhel8.0',
'--virt-type', 'kvm', '--memory', '4096', '--vcpus', '2', '--network',
'network=default,mac=00:16:3e:7a:ce:77,model=virtio', '--disk',
'/var/tmp/localvm93c7hrj2/images/6c7c4d4b-9c11-485d-98e0-466a09888515/c16b87ac-f9d4-491d-a972-7dc333a324a0',
'--import', '--disk',
'path=/var/tmp/localvm93c7hrj2/seed.iso,device=cdrom',
'--noautoconsole', '--rng', '/dev/random', '--graphics', 'vnc',
'--video', 'vga', '--sound', 'none', '--controller', 'usb,model=none',
'--memballoon', 'none', '--boot', 'hd,menu=off', '--clock',
'kvmclock_present=yes'], 'stdout': '\nStarting install...', 'stderr':
'ERROR internal error: process exited while connecting to monitor:
2020-05-26T20:49:12.368712Z qemu-kvm: -object
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
Unable to access credentials /etc/pki/vdsm/libvirt-vnc/ca-cert.pem: No
such file or directory\nDomain installation does not appear to have been
successful.\nIf it was, you can restart your domain by running:\n virsh
--connect qemu:///system start HostedEngineLocal\notherwise, please
restart your installation.', 'rc': 1, 'start': '2020-05-26
22:49:09.336828', 'end': '2020-05-26 22:49:15.046389', 'delta':
'0:00:05.709561', 'changed': True, 'invocation': {'module_args':
{'_raw_params': 'virt-install -n HostedEngineLocal --os-variant rhel8.0
--virt-type kvm --memory 4096 --vcpus 2 --network
network=default,mac=00:16:3e:7a:ce:77,model=virtio --disk
/var/tmp/localvm93c7hrj2/images/6c7c4d4b-9c11-485d-98e0-466a09888515/c16b87ac-f9d4-491d-a972-7dc333a324a0
--import --disk path=/var/tmp/localvm93c7hrj2/seed.iso,device=cdrom
--noautoconsole --rng /dev/random --graphics vnc --video vga --sound
none --controller usb,model=none --memballoon none --boot hd,menu=off
--clock kvmclock_present=yes', 'warn': True, '_uses_shell': False,
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None,
'chdir': None, 'executable': None, 'creates': None, 'removes': None,
'stdin': None}}, 'stdout_lines': ['', 'Starting install...'],
'stderr_lines': ['ERROR internal error: process exited while
connecting to monitor: 2020-05-26T20:49:12.368712Z qemu-kvm: -object
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
Unable to access credentials /etc/pki/vdsm/libvirt-vnc/ca-cert.pem: No
such file or directory', 'Domain installation does not appear to have
been successful.', 'If it was, you can restart your domain by running:',
' virsh --connect qemu:///system start HostedEngineLocal', 'otherwise,
please restart your installation.'], '_ansible_no_log': False}
2020-05-26 22:49:15,404+0200 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
{"changed": true, "cmd": ["virt-install", "-n", "HostedEngineLocal",
"--os-variant", "rhel8.0", "--virt-type", "kvm", "--memory", "4096",
"--vcpus", "2", "--network",
"network=default,mac=00:16:3e:7a:ce:77,model=virtio", "--disk",
"/var/tmp/localvm93c7hrj2/images/6c7c4d4b-9c11-485d-98e0-466a09888515/c16b87ac-f9d4-491d-a972-7dc333a324a0",
"--import", "--disk",
"path=/var/tmp/localvm93c7hrj2/seed.iso,device=cdrom",
"--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc",
"--video", "vga", "--sound", "none", "--controller", "usb,model=none",
"--memballoon", "none", "--boot", "hd,menu=off", "--clock",
"kvmclock_present=yes"], "delta": "0:00:05.709561", "end": "2020-05-26
22:49:15.046389", "msg": "non-zero return code", "rc": 1, "start":
"2020-05-26 22:49:09.336828", "stderr": "ERROR internal error:
process exited while connecting to monitor: 2020-05-26T20:49:12.368712Z
qemu-kvm: -object
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
Unable to access credentials /etc/pki/vdsm/libvirt-vnc/ca-cert.pem: No
such file or directory\nDomain installation does not appear to have been
successful.\nIf it was, you can restart your domain by running:\n virsh
--connect qemu:///system start HostedEngineLocal\notherwise, please
restart your installation.", "stderr_lines": ["ERROR internal error:
process exited while connecting to monitor: 2020-05-26T20:49:12.368712Z
qemu-kvm: -object
tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no:
Unable to access credentials /etc/pki/vdsm/libvirt-vnc/ca-cert.pem: No
such file or directory", "Domain installation does not appear to have
been successful.", "If it was, you can restart your domain by running:",
" virsh --connect qemu:///system start HostedEngineLocal", "otherwise,
please restart your installation."], "stdout": "\nStarting install...",
"stdout_lines": ["", "Starting install..."]}
ll /etc/pki/vdsm/libvirt-vnc/
total 0
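Since /etc/pki/vdsm/libvirt-vnc/ is empty, virt-install cannot hand libvirt the VNC TLS credentials it asks for, which matches the qemu-kvm error above. A hedged sketch of one thing worth trying before the next round (not verified for this exact failure): have vdsm re-run its configuration so it regenerates its certificate tree, then confirm the directory is populated.
vdsm-tool configure --force
ls -l /etc/pki/vdsm/libvirt-vnc/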
Any help is much appreciated...
Thanks in advance,
Jiri
Engine expands CPU and memory
by xilazz@126.com
Hi, everyone. When I use the oVirt cluster, there are multiple virtual machines in the cluster. The engine controller shows excessive CPU load during administration. I would like to ask if there is a way to extend the engine controller's CPU and memory online; I would be very grateful. Thank you.
oVirt not using local GlusterFS bricks
by Randall Wood
I have a three node oVirt 4.3.7 cluster that is using GlusterFS as the underlying storage (each oVirt node is a GlusterFS node). The nodes are named ovirt1, ovirt2, and ovirt3. This has been working wonderfully until last week when ovirt2 crashed (it is *old* hardware; this was not entirely unexpected).
Now I have this situation: all three oVirt nodes are acting as if the GlusterFS volumes only exist on ovirt2. The bricks on all three nodes appear to be in sync.
I *think* this began happening after I restarted ovirt2 (once hard, and once soft) and then restarted glusterd (and only glusterd) on ovirt1 and ovirt3 after `gluster-eventsapi status` on those two nodes showed inconsistent results (this had been used with success before).
How can I make the oVirt nodes read and write from their local bricks?
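For anyone triaging something similar, a hedged first-pass check to run on each node; the volume name 'data' is hypothetical, so substitute the real one, and the mount check shows which server oVirt actually used for the storage domain mount:
gluster peer status
gluster volume info data
gluster volume status data
gluster volume heal data info
grep glusterfs /proc/mounts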