oVirt storage not reclaimed
by Ranesh Tharanga Perera
I have attached a 7 TB preallocated disk to a VM (RHEL 8). Due to a space issue we detached the disk from the VM and deleted it, but the free space still has not changed and we cannot use that space for other purposes.
What could be the issue?
Thanks,
Ranesh..
3 years, 4 months
Unable to migrate VMs to or from oVirt node 4.4.7
by nroach44@nroach44.id.au
Hi All,
After upgrading some of my hosts to 4.4.7, and after fixing the policy issue, I'm no longer able to migrate VMs to or from 4.4.7 hosts. Starting them works fine regardless of the host version.
HE 4.4.7.6-1.el8, Linux and Windows VMs.
The log on the receiving end (4.4.7 in this case):
VDSM:
2021-07-09 22:02:17,491+0800 INFO (libvirt/events) [vds] Channel state for vm_id=5d11885a-37d3-4f68-a953-72d808f43cdd changed from=UNKNOWN(-1) to=disconnected(2) (qemuguestagent:289)
2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') underlying process disconnected (vm:1134)
2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Release VM resources (vm:5313)
2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection (guestagent:438)
2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection (guestagent:438)
2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [vdsm.api] START inappropriateDevices(thiefId='5d11885a-37d3-4f68-a953-72d808f43cdd') from=internal, task_id=7abe370b-13bc-4c49-bf02-2e40db142250 (api:48)
2021-07-09 22:02:55,544+0800 WARN (vm/5d11885a) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Couldn't destroy incoming VM: Domain not found: no domain with matching uuid '5d11885a-37d3-4f68-a953-72d808f43cdd' (vm:4046)
2021-07-09 22:02:55,544+0800 INFO (vm/5d11885a) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Changed state to Down: VM destroyed during the startup (code=10) (vm:1895)
syslog shows:
Jul 09 22:35:01 HOSTNAME abrt-hook-ccpp[177862]: Process 177022 (qemu-kvm) of user 107 killed by SIGABRT - dumping core
qemu:
qemu-kvm: ../util/yank.c:107: yank_unregister_instance: Assertion `QLIST_EMPTY(&entry->yankfns)' failed.
2021-07-09 14:02:54.521+0000: shutting down, reason=failed
When migrating from 4.4.7 to 4.4.6, syslog shows:
Jul 09 22:36:36 HOSTNAME libvirtd[2775]: unsupported configuration: unknown audio type 'spice'
Any suggestions or further logs I can chase up?
Thanks,
N
Q: Strange ping timeouts/network issues when 1 node is down
by Andrei Verovski
Hi,
I have observed strange ping timeouts/network issues when one of the nodes is down, for example for a RAM upgrade.
Powering the node up and connecting it back solves this.
Each node has 2 networks (connected to different Ethernet interfaces): ovirtmgmt 192.168.0.xxx and DMZ 192.168.1.x.
Additionally, time-consuming operations like VM cloning cause increased ping latency to ALL nodes from local workstations.
VM cloning should load only one node; it should not affect the whole oVirt management network on all nodes.
How can I fix this issue?
Thanks in advance for any help.
Andrei
glusterfs health-check failed, (brick) going down
by Jiří Sléžka
Hello,
I have a 3 node HCI cluster with oVirt 4.4.6 and CentOS 8.
From time to time a (I believe) random brick on a random host goes down
because of the health-check. It looks like this:
[root@ovirt-hci02 ~]# grep "posix_health_check" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 07:13:37.408184] M [MSGID: 113075] [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix: health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 07:13:37.408407] M [MSGID: 113075] [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 16:11:14.518971] M [MSGID: 113075] [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix: health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 16:11:14.519200] M [MSGID: 113075] [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still alive! -> SIGTERM
On the other host:
[root@ovirt-hci01 ~]# grep "posix_health_check" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05 13:15:51.983327] M [MSGID: 113075] [posix-helpers.c:2214:posix_health_check_thread_proc] 0-engine-posix: health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05 13:15:51.983728] M [MSGID: 113075] [posix-helpers.c:2232:posix_health_check_thread_proc] 0-engine-posix: still alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05 01:53:35.769129] M [MSGID: 113075] [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix: health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05 01:53:35.769819] M [MSGID: 113075] [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still alive! -> SIGTERM
I cannot link these errors to any storage/fs issue (in dmesg or
/var/log/messages), and the brick devices look healthy (smartd).
I can force-start a brick with
gluster volume start vms|engine force
and after some healing everything works fine for a few days.
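A small sketch for summarizing how often this happens: the grep output above can be reduced to (logfile, timestamp) pairs. The regex keys on the `M [MSGID: 113075]` marker from the logs above; the helper itself is mine, not part of gluster:

```python
import re

# Matches unwrapped brick log entries of the form:
# /var/log/.../gluster_bricks-vms2-vms2.log:[2021-07-07 07:13:37.408184] M [MSGID: 113075] ...
LOG_RE = re.compile(
    r"^(?P<logfile>[^:]+):\[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]"
    r" M \[MSGID: 113075\]"
)

def health_check_events(grep_lines):
    """Return (logfile, timestamp) pairs for posix health-check messages."""
    events = []
    for line in grep_lines:
        m = LOG_RE.match(line)
        if m:
            events.append((m.group("logfile"), m.group("ts")))
    return events
```

Clustering the resulting timestamps per brick might reveal whether the failures correlate with some periodic load (backups, heals, scrubs).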
Did anybody observe this behavior?
The vms volume has this structure (two bricks per host, each a separate
JBOD SSD disk); the engine volume has one brick on each host...
gluster volume info vms
Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.stat-prefetch: off
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
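For reference, in a 2 x 3 distributed-replicate volume the bricks form replica sets in the order they are listed, so a single failed brick only degrades one replica-3 subvolume. A minimal sketch of the grouping (brick names copied from the volume info above; the helper is mine, not a gluster API):

```python
def replica_subvolumes(bricks, replica_count):
    """Group an ordered gluster brick list into its replica sets."""
    assert len(bricks) % replica_count == 0
    return [bricks[i:i + replica_count] for i in range(0, len(bricks), replica_count)]

bricks = [
    "10.0.4.11:/gluster_bricks/vms/vms",
    "10.0.4.13:/gluster_bricks/vms/vms",
    "10.0.4.12:/gluster_bricks/vms/vms",
    "10.0.4.11:/gluster_bricks/vms2/vms2",
    "10.0.4.13:/gluster_bricks/vms2/vms2",
    "10.0.4.12:/gluster_bricks/vms2/vms2",
]
# Two replica-3 sets: the first three bricks and the last three
subvols = replica_subvolumes(bricks, 3)
```

So a vms2 brick going down on one host still leaves two healthy copies in its subvolume, which matches the observation that everything keeps working until the heal completes.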
Cheers,
Jiri
export VM from oVirt engine 3.5
by fddi@comcast.net
Hello,
I am trying to export a bunch of VMs from oVirt version 3.5.
The problem is that there is no OVA support for exporting them. I also tried to export the VMs to OVA format using ovirtsdk4, which in any case does not work with oVirt 3.5.
I can successfully export to an NFS partition, but for each VM there are several qcow2 files, which I suppose are the VM image and its snapshots.
How do I identify the file that represents the current VM state?
If I delete all the snapshots, will I end up with only one valid qcow2 image that I can use?
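One way to find the current (leaf) image: every snapshot qcow2 records its parent as a backing file, so the file that no other file references is the active one. `qemu-img info --backing-chain` on a candidate file shows the whole chain; as a sketch, the backing-file name can also be read straight from the qcow2 header (byte offsets per the qcow2 format spec; illustrative, not a supported tool):

```python
import struct

def qcow2_backing_file(path):
    """Return the backing file name of a qcow2 image, or None for a base image."""
    with open(path, "rb") as f:
        # qcow2 header: magic, version, backing_file_offset (u64), backing_file_size (u32)
        magic, version, bf_offset, bf_size = struct.unpack(">4sIQI", f.read(20))
        if magic != b"QFI\xfb":
            raise ValueError("not a qcow2 image: %s" % path)
        if bf_offset == 0:
            return None  # base image, no backing file
        f.seek(bf_offset)
        return f.read(bf_size).decode()
```

Scanning all qcow2 files in the export and collecting their backing names leaves the leaf image as the one never referenced by another. And deleting all snapshots in oVirt merges the chain, so you should indeed end up with a single image of the current state.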
passing data between guacamole and xrdp server
by Jason Keltz
Hi.
I have a custom use case where I would like to have two Guacamole RDP
connections that point to the same host, but where the underlying xrdp startup
script initializes one connection slightly differently from the
other. I don't see a simple way to pass anything between
Guacamole and the xrdp server that would allow the script to determine
which connection the user chose in Guacamole. I thought *maybe* "client
name" under "Basic Settings", but it seems to apply to Windows only. It
would be neat to be able to pass environment variables in.
Thanks,
Jason.
Gluster deploy error!
by Patrick Lomakin
Hello! I have tried to deploy a single node with Gluster, but if I select "Compression and deduplication" I get an error:
TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [host01********] (item={'vgname': 'gluster_vg_sda4', 'lvname': 'gluster_lv_engine', 'size': '1970G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sda4\" has insufficient free space (504319 extents): 504320 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "1970G", "vgname": "gluster_vg_sda4"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
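The numbers in the error are self-consistent: with LVM's default 4 MiB physical extent size, 1970G translates to exactly 504320 extents, one more than the 504319 the VG has free. A quick sketch of the arithmetic (the 4 MiB extent size is an assumption based on the LVM default):

```python
# LVM's default physical extent size is 4 MiB (assumption: the VG uses the default)
EXTENT_MIB = 4

requested_gib = 1970
requested_extents = requested_gib * 1024 // EXTENT_MIB   # 504320, as in the error
free_extents = 504319                                     # reported by the error message

shortfall = requested_extents - free_extents              # 1 extent, i.e. 4 MiB short
max_fit_gib = free_extents * EXTENT_MIB / 1024            # largest size that still fits
```

So requesting a slightly smaller LV (e.g. 1969G for gluster_lv_engine), or freeing one extent in the VG, should let the task pass.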
change host IP
by radchenko.anatoliy@gmail.com
Hi guys,
what's the right way to change the IP (or FQDN) of a host in a replica 1 and 3 configuration?
Thanks in advance.
Best regards.
Re: Deployed self-hosted engine cannot be reached from network
by Fabrizio Vettore
[SOLVED]
I installed a nested virtualization environment for testing, but I forgot to enable promiscuous mode on the VMware virtual switch.
Unfortunately my brain was not correctly connected during configuration
Sorry for wasting your time :(
Fabrizio Vettore
ICT Manager
---------------------------------------
Cifarelli Spa
Strada Oriolo 180
27058 Voghera - Italy
Lat. 45.006370 - Long. 9.010843
Deployed self-hosted engine cannot be reached from network
by fabrizio.vettore@cifarelli.it
Hi,
oVirt Node 4.4.7.
I have just deployed the self-hosted engine using the supplied wizard.
The install ended without any error.
Now the engine can reach the oVirt node (and vice versa) but cannot reach, or be reached from, the external network.
During the install the node's ens192.6 network adapter was moved to a newly created bridge, ovirtmgmt.
Bridge routing has been set correctly.
The oVirt node itself can reach external networks (and the internet), so I assume routing is OK for the node.
I can log in to the engine via SSH from the node (but not from outside), and I can ping/reach the node from the engine, but nothing else, even on the same subnet (I have 2 other nodes installed with adjacent addresses).
Routing on the engine is OK (same gateway as the node).
I tried to redeploy, with the same result.
Probably it is a stupid problem, but I cannot figure out how to solve it!
Any suggestion?
Thanks in advance
Fabrizio