SPICE proxy behind nginx reverse proxy
by Colin Coe
Hi all
As per $SUBJECT, I have a SPICE proxy behind a reverse proxy which all
external VDI users are forced to use.
We've only started doing this in the last week or so, but I'm now getting
heaps of reports of SPICE sessions "freezing". The testing that I've done
shows that a SPICE session that is left unattended for 10-15 minutes hangs or
freezes. By this I mean that you can't interact with the VM (via SPICE)
using the mouse or keyboard. Restarting the SPICE session fixes the problem.
Is anyone else doing this? If so, have you noticed the SPICE session
freezes?
Thanks in advance
2 years, 10 months
Re: vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
by Strahil Nikolov
When you set a host to maintenance from the oVirt API/UI, one of the tasks is to unmount any shared storage (including the NFS storage you have). Then rebooting should work like a charm.
Why did you reboot without putting the node into maintenance first?
P.S.: Do not confuse rebooting with fencing - the latter kills the node ungracefully in order to safely start HA VMs on another node.
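If you want to script a planned reboot, a rough sketch via the REST API would be something like this (engine URL, host ID and credentials are placeholders):
# Hedged sketch: put the host into maintenance before a planned reboot.
ENGINE=https://engine.example.com/ovirt-engine/api   # placeholder
HOST_ID=00000000-0000-0000-0000-000000000000         # placeholder
curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
     -d '<action/>' "$ENGINE/hosts/$HOST_ID/deactivate"
# Wait until the host reports status "maintenance" before rebooting it:
curl -k -u 'admin@internal:PASSWORD' "$ENGINE/hosts/$HOST_ID" | grep -o '<status>[^<]*</status>'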
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020 at 10:27:01 GMT+2, lifuqiong(a)sunyainfo.com <lifuqiong(a)sunyainfo.com> wrote:
Hi everyone:
Description of problem:
When I exec the "reboot" or "shutdown -h 0" cmd on a vdsm server, the server takes more than 30 minutes to reboot or shut down. The screen shows '[FAILED] Failed unmounting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'.
Other messages that may be useful:
[] watchdog: watchdog0: watchdog did not stop!
[] systemd-shutdown[5594]: Failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
[]systemd-shutdown[5595]: Failed to remount '/' read-only: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
dracut Warning: Killing all remaining processes
dracut Warning: Killing all remaining processes
Version-Release number of selected component (if applicable):
Software Version:4.2.8.2-1.el7
OS: CentOS Linux release 7.5.1804 (Core)
How reproducible:
100%
Steps to Reproduce:
1. My test environment is one oVirt engine (172.17.81.17) with 4 vdsm servers. Exec the "reboot" cmd on one of the vdsm servers (172.17.99.105); the server takes more than 30 minutes to reboot.
ovirt-engine: 172.17.81.17/16
vdsm: 172.17.99.105/16
nfs server: 172.17.81.14/16
Actual results:
As above; the server takes more than 30 minutes to reboot.
Expected results:
the server will reboot in a short time.
What I have done:
I captured packets on the NFS server while the vdsm host was rebooting, and found that the vdsm host keeps sending NFS packets to the NFS server in a loop. Here are some log entries from when I rebooted vdsm 172.17.99.105 at 2020-10-26 22:12:34. Some conclusions:
1. vdsm.log says: 2020-10-26 22:12:34,461+0800 ERROR (check/loop) [storage.Monitor] Error checking path /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/metadata
2. sanlock.log says: 2020-10-26 22:13:05 1454 [3301]: s1 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/ids
3. There are no other messages relevant to this issue. The logs are in the attachment. I would appreciate it if anyone can help me. Thank you.
2 years, 10 months
Migrated disk from NFS to iSCSI - Unable to Boot
by Wesley Stewart
This is a new one.
I migrated from an NFS share to an iSCSI share on a small single-node oVirt
system (currently running 4.3.10).
After migrating a disk (Virtual Machine -> Disk -> Move), I was unable to
boot from it. The console tells me "No bootable device". This is a CentOS 7
guest.
I booted into a CentOS7 ISO and tried a few things...
fdisk -l shows me a 40GB disk (/dev/sda).
fsck -f tells me "bad magic number in superblock"
lvdisplay and pvdisplay show nothing. Even if I can't boot to the drive I
would love to recover a couple of documents from here if possible. Does
anyone have any suggestions? I am running out of ideas.
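In case it helps, here is roughly what I plan to check next from the rescue environment (a sketch only; the device names assume the disk still shows up as /dev/sda, and the "centos/root" VG/LV is just the usual CentOS 7 default):
fdisk -l /dev/sda                     # does the partition table still look sane?
blkid /dev/sda /dev/sda1 /dev/sda2    # look for filesystem / LVM signatures
file -s /dev/sda1 /dev/sda2           # identify what is actually on each partition
vgscan --mknodes && vgchange -ay      # try to activate any LVM VGs before fsck/mount
lvs                                   # check whether the usual centos/root layout reappears
mount /dev/centos/root /mnt           # only if the VG shows up again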
2 years, 10 months
oVirt 4.4 Upgrade issue with pki for libvirt-vnc
by lee.hanel@gmail.com
Greetings,
After reverting the ovirt_disk module to ovirt_disk_28, I'm able to get past that step; however, now I'm running into a new issue.
When it tries to start the VM after moving it from local storage to the hosted storage, I get the following errors:
2020-10-27 21:42:17,334+0000 ERROR (vm/9562a74e) [virt.vm] (vmId='9562a74e-2e6c-433b-ac0a-75a2acc7398d') The vm start process failed (vm:872)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 802, in _startUnderlyingVm
self._run()
File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2615, in _run
dom.createWithFlags(flags)
File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1265, in createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to monitor: 2020-10-27T21:42:16.133517Z qemu-kvm: -object tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no: Cannot load certificate '/etc/pki/vdsm/libvirt-vnc/server-cert.pem' & key '/etc/pki/vdsm/libvirt-vnc/server-key.pem': Error while reading file.
2020-10-27 21:42:17,335+0000 INFO (vm/9562a74e) [virt.vm] (vmId='9562a74e-2e6c-433b-ac0a-75a2acc7398d') Changed state to Down: internal error: process exited while connecting to monitor: 2020-10-27T21:42:16.133517Z qemu-kvm: -object tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vdsm/libvirt-vnc,endpoint=server,verify-peer=no: Cannot load certificate '/etc/pki/vdsm/libvirt-vnc/server-cert.pem' & key '/etc/pki/vdsm/libvirt-vnc/server-key.pem': Error while reading file. (code=1) (vm:1636)
The permissions on the files appear to be correct.
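For reference, this is roughly what I mean by checking the permissions (a sketch; the expectation that the qemu user must be able to read both files is my assumption):
ls -lZ /etc/pki/vdsm/libvirt-vnc/
sudo -u qemu head -c1 /etc/pki/vdsm/libvirt-vnc/server-cert.pem >/dev/null && echo cert readable
sudo -u qemu head -c1 /etc/pki/vdsm/libvirt-vnc/server-key.pem  >/dev/null && echo key readable
openssl x509 -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem -noout -dates -subject
openssl rsa  -in /etc/pki/vdsm/libvirt-vnc/server-key.pem  -noout -check
# If either file is missing or empty, re-enrolling the host certificates from the
# engine (Host -> Installation -> Enroll Certificate) should regenerate this tree.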
https://bugzilla.redhat.com/show_bug.cgi?id=1634742 appears similar, but I took the added precaution of completely removing the vdsm packages, /etc/pki/vdsm and /etc/libvirt.
Anyone have any additional troubleshooting steps?
2 years, 10 months
Gluster Domain Storage full
by suporte@logicworks.pt
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving only one VM.
The VM filled all the Domain storage.
The Linux filesystem shows 4.1G available but 100% used, and the mounted brick shows 0 GB available and 100% used.
I cannot do anything with this disk. For example, if I try to move it to another Gluster storage domain I get this message:
Error while executing action: Cannot move Virtual Disk. Low disk space on Storage Domain
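For context, this is roughly what I have been looking at on the host (a sketch; the volume name "data" and the brick path are placeholders for mine):
gluster volume status data detail       # confirm which brick is actually full
df -h /gluster_bricks/data              # brick filesystem usage
du -sh /gluster_bricks/data/.glusterfs  # gluster's internal metadata also uses space
# Freeing even a few GB on the brick (or temporarily lowering the domain's
# "Critical Space Action Blocker" value under Storage -> Domains -> Manage Domain)
# might be enough to let the disk move start.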
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
2 years, 10 months
oVirt 4.4 Upgrade issue?
by lee.hanel@gmail.com
Greetings,
I'm trying to perform an upgrade from 4.3 to 4.4 using the hosted engine option of https://github.com/ovirt/ovirt-ansible-collection.
Unfortunately, when it goes to create the hosted engine disk images I get the following:
[Cannot move Virtual Disk. The operation is not supported for HOSTED_ENGINE_METADATA disks.]
This appears to be related to https://bugzilla.redhat.com/show_bug.cgi?id=1883817, BUT I've manually applied that patch. Shouldn't it be creating a NEW disk image instead of trying to move the existing one?
Any help would be appreciated.
Thanks,
Lee
2 years, 10 months
vdsm with NFS storage reboot or shutdown more than 15 minutes. with error failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
by lifuqiong@sunyainfo.com
Hi everyone:
I have met the following problem:
Description of problem:
When I exec the "reboot" or "shutdown -h 0" cmd on a vdsm server, the server takes more than 30 minutes to reboot or shut down. The screen shows '[FAILED] Failed unmounting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'.
Other messages that may be useful (shown on the screen):
[] watchdog: watchdog0: watchdog did not stop!
[] systemd-shutdown[5594]: Failed to unmount /rhev/data-center/mnt/172.18.81.14:_home_nfs_data: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
[]systemd-shutdown[5595]: Failed to remount '/' read-only: Device or resource busy
[]systemd-shutdown[1]: Failed to wait for process: Protocol error
dracut Warning: Killing all remaining processes
dracut Warning: Killing all remaining processes
Version-Release number of selected component (if applicable):
Software Version: 4.2.8.2-1.el7
OS: CentOS Linux release 7.5.1804 (Core)
How reproducible:
100%
Steps to Reproduce:
1. My test environment is one oVirt engine (172.17.81.17) with 4 vdsm servers. Exec the "reboot" cmd on one of the vdsm servers (172.17.99.105); the server takes more than 30 minutes to reboot.
ovirt-engine: 172.17.81.17/16
vdsm: 172.17.99.105/16
nfs server: 172.17.81.14/16
Actual results:
As above; the server takes more than 30 minutes to reboot.
Expected results:
the server will reboot in a short time.
What I have done:
I captured packets on the NFS server while the vdsm host was rebooting, and found that the vdsm host keeps sending NFS packets to the NFS server in a loop. Here are some log entries from when I rebooted vdsm 172.17.99.105 at 2020-10-26 22:12:34. Some conclusions:
1. vdsm.log says: 2020-10-26 22:12:34,461+0800 ERROR (check/loop) [storage.Monitor] Error checking path /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/metadata
2. sanlock.log says: 2020-10-26 22:13:05 1454 [3301]: s1 delta_renew read timeout 10 sec offset 0 /rhev/data-center/mnt/172.18.81.14:_home_nfs_data/02c4c6ea-7ca9-40f1-a1d0-f1636bc1824e/dom_md/ids
3. There are no other messages relevant to this issue. The logs are in the attachment. I would appreciate it if anyone can help me. Thank you.
Yours sincerely,
Mark Lee
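P.S. For reference, this is roughly how I plan to check what is holding the mount busy before a reboot (a sketch only; the mount path is taken from the error above):
MNT=/rhev/data-center/mnt/172.18.81.14:_home_nfs_data
fuser -vm "$MNT"     # list processes (vdsm, sanlock, ioprocess, ...) using the mount
lsof +f -- "$MNT"    # alternative view of open files on that filesystem
# Putting the host into maintenance does this cleanup automatically; doing it by
# hand would mean stopping the storage users first, e.g.:
systemctl stop vdsmd supervdsmd sanlock
umount "$MNT"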
2 years, 10 months
update host to 4.4: manual ovn config necessary?
by Gianluca Cecchi
Hello,
I have updated an external engine from 4.3 to 4.4 and the OVN configuration
seems to have been retained:
[root@ovmgr1 ovirt-engine]# ovn-nbctl show
switch fc2fc4e8-ff71-4ec3-ba03-536a870cd483
(ovirt-ovn192-1e252228-ade7-47c8-acda-5209be358fcf)
switch 101d686d-7930-4176-b41a-b306d7c30a1a
(ovirt-ovn17217-4bb1d1a7-020d-4843-9ac7-dc4204b528e5)
port c1ec60a4-b4f3-4cb5-8985-43c086156e83
addresses: ["00:1a:4a:19:01:89 dynamic"]
port 174b69f8-00ed-4e25-96fc-7db11ea8a8b9
addresses: ["00:1a:4a:19:01:59 dynamic"]
port ccbd6188-78eb-437b-9df9-9929e272974b
addresses: ["00:1a:4a:19:01:88 dynamic"]
port 7e96ca70-c9e3-4efe-9ac5-e56c18476437
addresses: ["00:1a:4a:19:01:83 dynamic"]
port d2c2d9f1-8fc3-4f17-9ada-76fe3a168e65
addresses: ["00:1a:4a:19:01:5e dynamic"]
port 4d13d63e-5ff3-41c1-9b6b-feac343b514b
addresses: ["00:1a:4a:19:01:60 dynamic"]
port 66359e79-56c4-47e0-8196-2241706329f6
addresses: ["00:1a:4a:19:01:68 dynamic"]
switch 87012fa6-ffaa-4fb0-bd91-b3eb7c0a2fc1
(ovirt-ovn193-d43a7928-0dc8-49d3-8755-5d766dff821a)
port 2ae7391b-4297-4247-a315-99312f6392e6
addresses: ["00:1a:4a:19:01:51 dynamic"]
switch 9e77163a-c4e4-4abf-a554-0388e6b5e4ce
(ovirt-ovn172-4ac7ba24-aad5-432d-b1d2-672eaeea7d63)
[root@ovmgr1 ovirt-engine]#
Then I updated one of the 3 Linux hosts (not node ng): I removed it from the
web admin GUI, installed CentOS 8.2 from scratch, configured the repos and
then added it back as a new host (with the same name) in the engine, and I was
able to connect to storage (iSCSI) and start VMs on the host in general.
Coming to the OVN part, it seems it has not been configured on the upgraded host.
Is that expected?
E.g. on the engine I only see chassis entries for the 2 hosts still on 4.3:
[root@ovmgr1 ovirt-engine]# ovn-sbctl show
Chassis "b8872ab5-4606-4a79-b77d-9d956a18d349"
hostname: "ov301.mydomain"
Encap geneve
ip: "10.4.192.34"
options: {csum="true"}
Port_Binding "174b69f8-00ed-4e25-96fc-7db11ea8a8b9"
Port_Binding "66359e79-56c4-47e0-8196-2241706329f6"
Chassis "ddecf0da-4708-4f93-958b-6af365a5eeca"
hostname: "ov300.mydomain"
Encap geneve
ip: "10.4.192.33"
options: {csum="true"}
Port_Binding "ccbd6188-78eb-437b-9df9-9929e272974b"
[root@ovmgr1 ovirt-engine]#
What should I do to add the upgraded 4.4 host? Can 4.3 and 4.4 hosts live together as far as OVN
is concerned?
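For what it's worth, this is the kind of re-registration I suspect is missing on the reinstalled host (a hedged sketch on my side; the two arguments to vdsm-tool are the OVN central/engine address and this host's local tunnel IP, both placeholders here):
ENGINE_IP=10.4.192.x   # placeholder: where ovirt-provider-ovn / ovn-central runs
TUNNEL_IP=10.4.192.y   # placeholder: this host's IP used for the Geneve tunnels
vdsm-tool ovn-config "$ENGINE_IP" "$TUNNEL_IP"
ovs-vsctl get Open_vSwitch . external_ids   # ovn-remote / ovn-encap-ip should now be set
systemctl status ovn-controller             # chassis registration is done by ovn-controller
# After this the host should appear in "ovn-sbctl show" on the engine.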
Thanks,
Gianluca
2 years, 10 months
when configuring multi path logical network selection area is empty, hence not able to configure multipathing
by dhanaraj.ramesh@yahoo.com
Hi team,
I have a 4-node cluster. On each node I configured 2 dedicated 10 GbE NICs, each on its own subnet (NIC 1 = 10.10.10.0/24, NIC 2 = 10.10.20.0/24), and on the array side I configured 2 targets on the 10.10.10.0/24 subnet and another 2 targets on the 10.10.20.0/24 subnet. Without any errors I could log in to all four paths and mount the iSCSI LUNs on all 4 nodes. However, when I try to configure multipathing at the Data Center level I can see all the paths but not the logical networks; the selection area stays empty, although I assigned logical network labels to both NICs with the dedicated names ISCSI1 & ISCI2. These logical networks are visible and green at the host network level, with no errors; they carry only L2/IP configuration.
Am I missing something here? What else should I do to enable multipathing?
2 years, 10 months