unable to put ovirt host in maintenance
by slaurencelle@hotmail.com
Hello, I'm trying to put one of my oVirt hosts (which is also the oVirt engine) into maintenance mode, but I receive this error:
Error while executing action: Cannot switch Host OvirtHome to Maintenance mode. Image transfer is in progress for the following (5) disks:
b6338adf-a43b-4051-9bc0-7bb436876b5e,
b8aa8ac8-e4bf-4a31-bd6f-02a33e04f8c6,
f8548d7b-40c4-4d1e-9b21-8d4aa4966fff,
3626520b-ba03-452a-833f-4e4d48215047,
4b3242d0-73f6-46c5-99ac-9bac86b4ab41
Please wait for the operations to complete and try again
I started those transfers from the command line and they failed. How can I delete the references to them from the Linux command line?
My oVirt version is 4.3 (the latest one).
I'm running it on CentOS 7.6 with the latest updates.
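In case it helps, here is roughly what I was planning to try against the engine REST API to clean them up (the imagetransfers endpoint and its cancel action are my reading of the v4 API docs, and PASSWORD / TRANSFER_ID are placeholders, so please correct me if this is not the right way):

# list the current image transfers (run on the engine host)
curl -s -k -u admin@internal:PASSWORD \
  -H "Accept: application/xml" \
  https://localhost/ovirt-engine/api/imagetransfers

# cancel one stuck transfer by its transfer id (the disk ids above are not
# the transfer ids; take the id from the listing)
curl -s -k -u admin@internal:PASSWORD \
  -H "Content-Type: application/xml" \
  -X POST -d "<action/>" \
  https://localhost/ovirt-engine/api/imagetransfers/TRANSFER_ID/cancel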
Hope someone can help me!
Best regards
stephane
Host ends up non-operational after trying to add it to cluster
by Kristian Petersen
Hello all,
I am building a small two-host oVirt cluster and am experiencing problems
adding in the second host. The engine log shows a message saying it is
marking the host as non-operational due to it not meeting minimum CPU
requirements. However, both hosts are using the same i7-4770 model
CPUs. For some reason, oVirt detects them as the same model, but the "CPU
Type" for host1 is listed as "Intel Haswell-noTSX IBRS SSBD Family" while
host2's CPU Type is listed as "Intel Haswell-noTSX Family". The two
computers are supposedly the exact same model from Dell with the same
hardware in them. Anyone have any insight into this issue?
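If it helps narrow this down, this is what I plan to run against both hosts to compare what they report (plain CentOS/libvirt commands, nothing oVirt-specific; host1/host2 are just placeholders for the two machines):

# compare the CPU flags the kernel reports on each host; the IBRS/SSBD
# difference usually tracks microcode and kernel update levels
diff <(ssh host1 "grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort") \
     <(ssh host2 "grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort")

# and the CPU model libvirt itself detects on each host
ssh host1 "virsh -r capabilities | grep -A3 '<model'"
ssh host2 "virsh -r capabilities | grep -A3 '<model'"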
--
Kristian Petersen
System Administrator
BYU Dept. of Chemistry and Biochemistry
Problems with GlusterFS
by Endre Karlson
Hi, we are seeing a high number of errors/failures in the logs and
problems with our oVirt 4.3 cluster. Is there any suggestion for a possible
fix?
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 13 times between [2019-02-26 13:53:40.653905] and
[2019-02-26 13:54:04.684140]
[2019-02-26 13:54:08.684591] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2019-02-26 13:54:08.689021] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
==> /var/log/glusterfs/glustershd.log <==
[2019-02-26 13:54:08.783380] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2019-02-26 13:54:09.427338] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
[2019-02-26 13:54:10.785533] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
[2019-02-26 13:54:12.432411] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
==>
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt4-stor.creator.local:_engine.log
<==
[2019-02-26 13:54:12.579095] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
==>
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt4-stor.creator.local:vmstore.log
<==
[2019-02-26 13:54:12.689449] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
==> /var/log/glusterfs/glustershd.log <==
[2019-02-26 13:54:12.790471] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2019-02-26 13:54:13.437351] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49152 (from 0)
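For reference, this is what I am checking next on the gluster side (standard gluster CLI; engine and vmstore are the volume names taken from the client logs above):

# brick/port status for the two volumes named in the client logs
gluster volume status engine
gluster volume status vmstore

# pending self-heal entries (glustershd.log above is the self-heal daemon)
gluster volume heal engine info
gluster volume heal vmstore info

# peer connectivity between the gluster nodes
gluster peer status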
Local Storage domain to Shared
by Matt Simonsen
Hello all,
I have a few nodes with local storage, and more than a few times I've
considered exporting them via NFS to migrate to shared storage.
I have thought of this post on the ovirt-users list many times:
https://lists.ovirt.org/pipermail/users/2017-December/085521.html
Is this procedure documented & fully supported? Or is it something that
just happens to work?
The instructions provided by Gianluca seem very clear. If this isn't
documented anywhere better, e.g. as a blog post for the site, what should I
include to make it worth publishing there?
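For what it's worth, the NFS side of what I had in mind is just a plain export of the local storage directory, along these lines (the path is an example, and the 36:36 ownership is the vdsm:kvm requirement I've seen mentioned on this list, so treat it as a sketch rather than a verified recipe):

# example export of the former local-storage path
chown -R 36:36 /data/local-storage
echo "/data/local-storage *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)" >> /etc/exports
exportfs -ra
showmount -e localhost   # confirm the export is visible before adding it as a storage domain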
Thanks,
Matt
Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment
by Guillaume Pavese
It fails too:
I made sure PermitTunnel=yes is set in the sshd config, but when I try to
connect to the forwarded port I get the following error in the ssh session
opened on the host:
[gpavese@sheepora-X230 ~]$ ssh -v -L 5900:
vs-inf-int-kvm-fr-301-210.hostics.fr:5900
root(a)vs-inf-int-kvm-fr-301-210.hostics.fr
...
[root@vs-inf-int-kvm-fr-301-210 ~]#
debug1: channel 3: free: direct-tcpip: listening port 5900 for
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from ::1 port 42144
to ::1 port 5900, nchannels 4
debug1: Connection to port 5900 forwarding to
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port
32778 to 127.0.0.1 port 5900, nchannels 4
and in journalctl:
févr. 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr sshd[19595]: error:
connect_to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed.
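Since the domain XML quoted below shows qemu listening on 127.0.0.1:5900 only, I am now wondering whether the forward destination should be the loopback on that host rather than its public name, i.e. something like this (not yet tested on my side):

# forward local port 5900 to the loopback on the remote host, since qemu
# listens on 127.0.0.1:5900 there and not on the public address
ssh -L 5900:127.0.0.1:5900 root@vs-inf-int-kvm-fr-301-210.hostics.fr

# then, from the laptop
remote-viewer vnc://localhost:5900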
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
>
> On Mon, Feb 25, 2019 at 2:35 PM Guillaume Pavese <
> guillaume.pavese(a)interactiv-group.com> wrote:
>
>> I made sure of everything and even stopped firewalld but still can't
>> connect :
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# cat
>> /var/run/libvirt/qemu/HostedEngineLocal.xml
>> <graphics type='vnc' port='*5900*' autoport='yes'
>> *listen='127.0.0.1*'>
>> <listen type='address' address='*127.0.0.1*' fromConfig='1'
>> autoGenerated='no'/>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59
>> tcp 0 0 127.0.0.1:5900 0.0.0.0:*
>> LISTEN 13376/qemu-kvm
>>
>
>
> I suggest trying ssh tunneling; run
> ssh -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900
> root(a)vs-inf-int-kvm-fr-301-210.hostics.fr
>
> and then
> remote-viewer vnc://localhost:5900
>
>
>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# systemctl status firewalld.service
>> ● firewalld.service - firewalld - dynamic firewall daemon
>> Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled;
>> vendor preset: enabled)
>> *Active: inactive (dead)*
>> *févr. 25 14:24:03 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped firewalld
>> - dynamic firewall daemon.*
>>
>> From my laptop :
>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
>> *5900*
>> Trying 10.199.210.11...
>> [*nothing gets through...*]
>> ^C
>>
>> For making sure :
>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
>> *9090*
>> Trying 10.199.210.11...
>> *Connected* to vs-inf-int-kvm-fr-301-210.hostics.fr.
>> Escape character is '^]'.
>>
>>
>>
>>
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Mon, Feb 25, 2019 at 10:24 PM Parth Dhanjal <dparth(a)redhat.com> wrote:
>>
>>> Hey!
>>>
>>> You can check under /var/run/libvirt/qemu/HostedEngine.xml
>>> Search for 'vnc'
>>> From there you can look up the port on which the HE VM is available and
>>> connect to the same.
>>>
>>>
>>> On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
>>> guillaume.pavese(a)interactiv-group.com> wrote:
>>>
>>>> 1) I am running in a Nested env, but under libvirt/kvm on remote Centos
>>>> 7.4 Hosts
>>>>
>>>> Please advise how to connect with VNC to the local HE vm. I see it's
>>>> running, but this is on a remote host, not my local machine :
>>>> qemu 13376 100 3.7 17679424 845216 ? Sl 12:46 85:08
>>>> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
>>>> -object
>>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
>>>> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>>> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
>>>> 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
>>>> -no-user-config -nodefaults -chardev
>>>> socket,id=charmonitor,fd=27,server,nowait -mon
>>>> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>>>> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
>>>> menu=off,strict=on -device
>>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>>> file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
>>>> -device
>>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>>> -drive
>>>> file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
>>>> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
>>>> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3
>>>> -chardev pty,id=charserial0 -device
>>>> isa-serial,chardev=charserial0,id=serial0 -chardev
>>>> socket,id=charchannel0,fd=31,server,nowait -device
>>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
>>>> *-vnc 127.0.0.1:0 -device VGA*,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
>>>> -object rng-random,id=objrng0,filename=/dev/random -device
>>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
>>>> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
>>>> -msg timestamp=on
>>>>
>>>>
>>>> 2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat
>>>> /etc/libvirt/qemu/networks/default.xml
>>>> <!--
>>>> WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
>>>> OVERWRITTEN AND LOST. Changes to this xml configuration should be made
>>>> using:
>>>> virsh net-edit default
>>>> or other application using the libvirt API.
>>>> -->
>>>>
>>>> <network>
>>>> <name>default</name>
>>>> <uuid>ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6</uuid>
>>>> <forward mode='nat'/>
>>>> <bridge name='virbr0' stp='on' delay='0'/>
>>>> <mac address='52:54:00:e5:fe:3b'/>
>>>> <ip address='192.168.122.1' netmask='255.255.255.0'>
>>>> <dhcp>
>>>> <range start='192.168.122.2' end='192.168.122.254'/>
>>>> </dhcp>
>>>> </ip>
>>>> </network>
>>>> You have new mail in /var/spool/mail/root
>>>> [root@vs-inf-int-kvm-fr-301-210 ~]
>>>>
>>>>
>>>>
>>>> Guillaume Pavese
>>>> Ingénieur Système et Réseau
>>>> Interactiv-Group
>>>>
>>>>
>>>> On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi <stirabos(a)redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
>>>>> guillaume.pavese(a)interactiv-group.com> wrote:
>>>>>
>>>>>> He deployment with "hosted-engine --deploy" fails at TASK
>>>>>> [ovirt.hosted_engine_setup : Get local VM IP]
>>>>>>
>>>>>> See following Error :
>>>>>>
>>>>>> 2019-02-25 12:46:50,154+0100 INFO
>>>>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>>>>> ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
>>>>>> local VM IP]
>>>>>> 2019-02-25 12:55:26,823+0100 DEBUG
>>>>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>>>>> ansible_utils._process_output:103 {u'_ansible_parsed': True,
>>>>>> u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
>>>>>> :16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end':
>>>>>> u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'',
>>>>>> u'changed': True, u'invocation': {u'module_args': {u'warn': True,
>>>>>> u'executable':
>>>>>> None, u'_uses_shell': True, u'_raw_params': u"virsh -r
>>>>>> net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' |
>>>>>> cut -f1 -d'/'", u'removes': None, u'argv': None, u'creates': None,
>>>>>> u'chdir': None, u'std
>>>>>> in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts':
>>>>>> 50, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines':
>>>>>> []}
>>>>>> 2019-02-25 12:55:26,924+0100 ERROR
>>>>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>>>>> ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
>>>>>> {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
>>>>>> | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
>>>>>> "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
>>>>>> "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
>>>>>> "", "stdout_lines": []}
>>>>>>
>>>>>
>>>>> Here we are just waiting for the bootstrap engine VM to fetch an IP
>>>>> address from the default libvirt network over DHCP, but in your case it never
>>>>> happened.
>>>>> Possible issues: something went wrong in the bootstrap process for the
>>>>> engine VM or the default libvirt network is not correctly configured.
>>>>>
>>>>> 1. can you try to reach the engine VM via VNC and check what's
>>>>> happening there? (another question, are you running it nested? AFAIK it
>>>>> will not work if nested over ESXi)
>>>>> 2. can you please share the output of
>>>>> cat /etc/libvirt/qemu/networks/default.xml
>>>>>
>>>>>
>>>>>>
>>>>>> Guillaume Pavese
>>>>>> Ingénieur Système et Réseau
>>>>>> Interactiv-Group
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXRMU3SQWTM...
>>>>>>
>>>>> _______________________________________________
>>>> Users mailing list -- users(a)ovirt.org
>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/45UR44ITQTV...
>>>>
>>>
oVirt Node install failed
by kiv@intercom.pro
Hi all!
The following error occurs during installation of oVirt Node 4.2.8:
EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host hostname_ovirt_node2: Yum Cannot queue package dmidecode: Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again
From the oVirt Node shell I typed the command:
yum install dmidecode
Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cannot upload enabled repos report, is this client registered?
Does anyone know how to fix this?
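If it helps anyone looking at this, the repo definition and connectivity can be checked with something like the following (I am assuming the repo is defined in a file under /etc/yum.repos.d/ and points at the standard Fedora metalink service, which I have not verified on Node):

# find which repo file defines ovirt-4.2-epel and what metalink it uses
grep -rl "ovirt-4.2-epel" /etc/yum.repos.d/
grep -A5 "\[ovirt-4.2-epel\]" /etc/yum.repos.d/*.repo

# check whether the node can actually reach the Fedora metalink service
curl -Is "https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64" | head -n1

# clear cached metadata and retry just this repo
yum clean all
yum --disablerepo='*' --enablerepo='ovirt-4.2-epel' makecache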
AIC JBOD RAID Disks automatically going offline under OVirt?
by nikhilbhalwankar@yahoo.co.in
Hi,
We are facing a weird issue. We have an AIC JBOD with oVirt Node installed. This node is added under the oVirt engine. We have faced issues related to RAID disks going offline twice:
1) During an OVA import from VMware ESXi to oVirt
2) When deleting an existing oVirt VM which is in a shutdown state
In both these cases, disks from the RAID array (which we are using as Fibre Channel storage under oVirt) suddenly go offline. Can anybody please advise what can be done in this case?
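If useful, when it happens again we can capture the path state and kernel messages on the node with standard tools, something like this (lsscsi only if it is installed on the node):

# state of all multipath devices and their paths (active/failed)
multipath -ll

# SCSI devices as the kernel currently sees them
lsscsi

# recent kernel messages about paths or disks going offline/failing
dmesg -T | grep -iE 'offline|fail|reset' | tail -n 50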