RHEL 9 (rocky/centos/..) guest console problem
by marek
Hi,
I know SPICE/QXL is deprecated, so I have a guest VM with its console configured as VGA/VNC (I tried Bochs/VNC too).
virt-viewer on Windows cannot connect to the console (the "Console" button in the oVirt admin portal).
If I use the VNC parameters from "console.vv" I can connect to it with the TigerVNC client (TightVNC does not work).
Any ideas what could be wrong?
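For reference, console.vv is a plain INI file, so it helps to open it in a text editor and to launch it from a shell to see the client-side error; the fields shown below are typical but may differ slightly between versions:

    [virt-viewer]
    type=vnc
    host=<host address>
    port=<vnc port>
    password=<one-time ticket>
    # Password is valid for 120 seconds.

    # run the same file from a command prompt to get the error output:
    remote-viewer --debug console.vv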
BTW, is there an option to extend the password validity in console.vv? There is a note in the file: "# Password is valid for 120 seconds."
Marek
1 year, 5 months
Trying to recover local domain from failed host
by Tim Tuck
Hi all,
I had a catastrophic failure of a host, but the disk with the storage domain on it is fine.
I've put the disk in a new machine, mounted it on /backups, and I can see this structure ...
[root@max backups]# ls -laR *
2f2da2d7-596f-4a53-8a1b-a301f84b3b74:
total 20
drwxr-xr-x. 5 vdsm kvm 4096 Sep 15 2020 .
drwxr-xr-x. 4 vdsm kvm 4096 Jun 10 11:53 ..
drwxr-xr-x. 2 vdsm kvm 4096 Sep 15 2020 dom_md
drwxr-xr-x. 23 vdsm kvm 4096 May 20 11:35 images
drwxr-xr-x. 4 vdsm kvm 4096 Sep 15 2020 master
2f2da2d7-596f-4a53-8a1b-a301f84b3b74/dom_md:
total 32780
drwxr-xr-x. 2 vdsm kvm 4096 Sep 15 2020 .
drwxr-xr-x. 5 vdsm kvm 4096 Sep 15 2020 ..
-rwxr-xr-x. 1 vdsm kvm 0 Sep 15 2020 ids
-rwxr-xr-x. 1 vdsm kvm 16777216 Sep 15 2020 inbox
-rwxr-xr-x. 1 vdsm kvm 0 Sep 15 2020 leases
-rwxr-xr-x. 1 vdsm kvm 486 Sep 15 2020 metadata
-rwxr-xr-x. 1 vdsm kvm 16777216 Sep 15 2020 outbox
2f2da2d7-596f-4a53-8a1b-a301f84b3b74/images:
total 92
drwxr-xr-x. 23 vdsm kvm 4096 May 20 11:35 .
drwxr-xr-x. 5 vdsm kvm 4096 Sep 15 2020 ..
drwxr-xr-x. 2 vdsm kvm 4096 Feb 3 2021 1eb2b682-ef45-409d-9341-abcf61418619
drwxr-xr-x. 2 vdsm kvm 4096 Aug 11 2022 283838fd-43b5-42b9-a332-a7b980613188
drwxr-xr-x. 2 vdsm kvm 4096 Oct 4 2020 36237183-54a2-4232-b0b3-6d1a9457c23a
drwxr-xr-x. 2 vdsm kvm 4096 Jan 5 2022 366e7b29-2123-45c0-9190-dafead431651
drwxr-xr-x. 2 vdsm kvm 4096 May 20 09:29 3ea98df5-3ee8-4d21-8607-94284e7fff37
drwxr-xr-x. 2 vdsm kvm 4096 May 20 09:29 44c3611b-1205-48ca-9a84-007eaae03cbb
drwxr-xr-x. 2 vdsm kvm 4096 May 18 23:49 674b364c-b74a-4fff-a38b-ca1089103241
drwxr-xr-x. 2 vdsm kvm 4096 Oct 15 2021 6949a9e0-4cc0-4eee-be99-e88795cd6447
drwxr-xr-x. 2 vdsm kvm 4096 Jan 5 2022 8b618bad-6dfb-48de-90ad-f325bf50cd45
drwxr-xr-x. 2 vdsm kvm 4096 Feb 6 2021 91f1f606-0f26-4ef0-a2fe-cccde6c78e89
drwxr-xr-x. 2 vdsm kvm 4096 May 20 11:35 94f4bf41-2b2f-42bc-ae21-0e80205ba19a
drwxr-xr-x. 2 vdsm kvm 4096 May 20 11:35 a18f0c48-b3ac-4b89-9e52-fddaddbc8f4a
drwxr-xr-x. 2 vdsm kvm 4096 May 26 15:51 ac4a428d-961e-46ae-a279-f282fd9ecf94
drwxr-xr-x. 2 vdsm kvm 4096 May 26 15:51 ba1a624c-6995-4109-b770-7bbd1efc340e
drwxr-xr-x. 2 vdsm kvm 4096 Sep 15 2020 dec14b5b-69fe-4c51-abb4-d8296c547a0b
drwxr-xr-x. 2 vdsm kvm 4096 Sep 15 2020 e7a17f3b-7567-4c90-b58e-18f34d4686c0
drwxr-xr-x. 2 vdsm kvm 4096 May 20 09:29 ea786bee-b98e-44c0-a44a-a63af63b8c51
drwxr-xr-x. 2 vdsm kvm 4096 Apr 21 2021 ec37f395-3b4b-41df-8f5b-8a22b7e1273b
drwxr-xr-x. 2 vdsm kvm 4096 Feb 25 2022 f3e641c7-a2cc-42af-bcd1-b5ecdf437e03
drwxr-xr-x. 2 vdsm kvm 4096 May 18 23:55 f4c221a6-6ce4-44cc-8d32-d673e40aa915
drwxr-xr-x. 2 vdsm kvm 4096 May 20 11:35 f539c098-3f16-410f-938f-3de6ef37b017
2f2da2d7-596f-4a53-8a1b-a301f84b3b74/images/1eb2b682-ef45-409d-9341-abcf61418619:
total 104857624
drwxr-xr-x. 2 vdsm kvm 4096 Feb 3 2021 .
drwxr-xr-x. 23 vdsm kvm 4096 May 20 11:35 ..
-rwxr-xr-x. 1 vdsm kvm 107374182400 Feb 3 2021 e29b005f-d272-49f8-a162-41f908ca67a4
-rwxr-xr-x. 1 vdsm kvm 300 Feb 3 2021 e29b005f-d272-49f8-a162-41f908ca67a4.meta
2f2da2d7-596f-4a53-8a1b-a301f84b3b74/images/283838fd-43b5-42b9-a332-a7b980613188:
total 23223760
drwxr-xr-x. 2 vdsm kvm 4096 Aug 11 2022 .
drwxr-xr-x. 23 vdsm kvm 4096 May 20 11:35 ..
-rwxr-xr-x. 1 vdsm kvm 23781113856 Aug 11 2022 3273221e-1649-4e4a-816c-2ffec18e51c7
-rwxr-xr-x. 1 vdsm kvm 251 Aug 11 2022 3273221e-1649-4e4a-816c-2ffec18e51c7.meta
.
. etc
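For what it's worth, the domain and volume metadata in that tree are plain text, so the domain can be identified before attempting any import; the paths below follow the listing above, and the exact keys vary with the domain format version:

    # domain-level metadata: domain UUID, pool UUID, role, version, ...
    cat /backups/2f2da2d7-596f-4a53-8a1b-a301f84b3b74/dom_md/metadata
    # per-volume metadata for one of the images
    cat /backups/2f2da2d7-596f-4a53-8a1b-a301f84b3b74/images/1eb2b682-ef45-409d-9341-abcf61418619/e29b005f-d272-49f8-a162-41f908ca67a4.meta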
So... is there a way to get this back?
I tried "Import Domain" with the disk both manually mounted and unmounted; both attempts failed, and I get errors in vdsm.log like this...
2023-06-15 15:57:55,166+1000 INFO (jsonrpc/0)
[storage.StorageServer.MountConnection] Creating directory
'/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74'
(storageServer:167)
2023-06-15 15:57:55,166+1000 INFO (jsonrpc/0) [storage.fileUtils]
Creating directory:
/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74
mode: None (fileUtils:201)
2023-06-15 15:57:55,166+1000 INFO (jsonrpc/0) [storage.Mount] mounting
/backups/2f2da2d7-596f-4a53-8a1b-a301f84b3b74 at
/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74
(mount:207)
2023-06-15 15:57:55,176+1000 ERROR (jsonrpc/0) [storage.HSM] Could not
connect to storageServer (hsm:2374)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line
2371, in connectStorageServer
conObj.connect()
File
"/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line
180, in connect
six.reraise(t, v, tb)
File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File
"/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line
171, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line
210, in mount
cgroup=cgroup)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py",
line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py",
line 54, in <lambda>
**kwargs)
File "<string>", line 2, in mount
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in
_callmethod
raise convert_to_error(kind, result)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'posix',
'/backups/2f2da2d7-596f-4a53-8a1b-a301f84b3b74',
'/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74']
failed with rc=32 out=b'' err=b"mount:
/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74:
unknown filesystem type 'posix'.\n"
2023-06-15 15:57:55,176+1000 INFO (jsonrpc/0)
[storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
2023-06-15 15:57:55,176+1000 INFO (jsonrpc/0) [vdsm.api] FINISH
connectStorageServer return={'statuslist': [{'id':
'00000000-0000-0000-0000-000000000000', 'status': 477}]}
from=::ffff:172.20.1.160,47702,
flow_id=6693c76d-b62b-4e62-9b70-c87f2a199705,
task_id=682abcff-3040-46d7-aaa1-f2445f0a6698 (api:54)
2023-06-15 15:57:55,283+1000 INFO (jsonrpc/2) [vdsm.api] START
disconnectStorageServer(domType=6,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password':
'********', 'vfs_type': 'posix', 'port': '', 'iqn': '', 'connection':
'/backups/2f2da2d7-596f-4a53-8a1b-a301f84b3b74', 'ipv6_enabled':
'false', 'id': '00000000-0000-0000-0000-000000000000', 'user': '',
'tpgt': '1'}]) from=::ffff:172.20.1.160,47702,
flow_id=0d3d4a3e-3523-4436-93b3-e22bff47f082,
task_id=11870123-365d-4b40-b481-be5d44098fc2 (api:48)
2023-06-15 15:57:55,284+1000 INFO (jsonrpc/2) [storage.Mount]
unmounting
/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74
(mount:215)
2023-06-15 15:57:55,290+1000 ERROR (jsonrpc/2) [storage.HSM] Could not
disconnect from storageServer (hsm:2480)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line
2476, in disconnectStorageServer
conObj.disconnect()
File
"/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line
202, in disconnect
self._mount.umount(True, True)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line
217, in umount
umount(self.fs_file, force=force, lazy=lazy, freeloop=freeloop)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py",
line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py",
line 54, in <lambda>
**kwargs)
File "<string>", line 2, in umount
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in
_callmethod
raise convert_to_error(kind, result)
vdsm.storage.mount.MountError: Command ['/usr/bin/umount', '-f', '-l',
'/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74']
failed with rc=32 out=b'' err=b'umount:
/rhev/data-center/mnt/_backups_2f2da2d7-596f-4a53-8a1b-a301f84b3b74: no
mount point specified.\n'
2023-06-15 15:57:55,290+1000 INFO (jsonrpc/2)
[storage.StorageDomainCache] Refreshing storage domain cache
(resize=False) (sdc:80)
2023-06-15 15:57:55,290+1000 INFO (jsonrpc/2) [storage.ISCSI] Scanning
iSCSI devices (iscsi:442)
2023-06-15 15:57:55,328+1000 INFO (jsonrpc/2) [storage.ISCSI] Scanning
iSCSI devices: 0.04 seconds (utils:390)
2023-06-15 15:57:55,328+1000 INFO (jsonrpc/2) [storage.HBA] Scanning FC
devices (hba:60)
2023-06-15 15:57:55,416+1000 INFO (jsonrpc/2) [storage.HBA] Scanning FC
devices: 0.09 seconds (utils:390)
2023-06-15 15:57:55,416+1000 INFO (jsonrpc/2) [storage.Multipath]
Waiting until multipathd is ready (multipath:112)
2023-06-15 15:57:57,458+1000 INFO (jsonrpc/2) [storage.Multipath]
Waited 2.04 seconds for multipathd (tries=2, ready=2) (multipath:139)
2023-06-15 15:57:57,458+1000 INFO (jsonrpc/2)
[storage.StorageDomainCache] Refreshing storage domain cache: 2.17
seconds (utils:390)
2023-06-15 15:57:57,458+1000 INFO (jsonrpc/2) [vdsm.api] FINISH
disconnectStorageServer return={'statuslist': [{'id':
'00000000-0000-0000-0000-000000000000', 'status': 477}]}
from=::ffff:172.20.1.160,47702,
flow_id=0d3d4a3e-3523-4436-93b3-e22bff47f082,
task_id=11870123-365d-4b40-b481-be5d44098fc2 (api:54)
2023-06-15 15:57:57,458+1000 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer]
RPC call StoragePool.disconnectStorageServer took more than 1.00 seconds
to succeed: 2.18 (__init__:316)
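The failing command in the trace is literally /usr/bin/mount -t posix <path> <mountpoint>, and 'posix' is not a filesystem type the kernel knows, so that mount can never succeed. When a domain is imported as "POSIX compliant FS", the VFS Type field is passed straight through to mount -t, so it has to be the real filesystem (xfs, ext4, ...), and the path has to be something mount can use as a source (a block device or a remote export), not a directory that is already mounted by hand. A rough way to check before retrying the import; the device names below are examples only:

    # what filesystem is actually on the recovered disk?
    lsblk -f
    blkid /dev/sdb1
    # dry-run the mount vdsm would perform, with the real type and the device as source
    umount /backups                     # if it was mounted manually
    mount -t xfs /dev/sdb1 /mnt/test    # substitute the real device and filesystem type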
Any help appreciated.
Thanks,
Tim
1 year, 5 months
self-hosted engine with Local Storage
by Jorge Visentini
Hello.
I know it doesn't make much sense, that there is no HA, redundancy and all that... but is there any possibility to deploy the oVirt self-hosted engine using an array of local disks on the host?
*In my scenario I don't need HA.*
*What I need:*
One host with a self-hosted engine on a local storage domain, that's all. On this host I will locally run some old, isolated VMs.
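For what it's worth, a workaround that gets mentioned for this layout is to serve the local array over NFS from the host itself and point the hosted-engine deployment at that export. A rough sketch with made-up paths, assuming the array is mounted on /data:

    mkdir -p /data/hosted_engine /data/vmstore
    chown -R 36:36 /data                  # vdsm:kvm
    echo '/data *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -rv
    hosted-engine --deploy                # choose nfs and <host>:/data/hosted_engine as the storage

The other directory (or a second export) can then be added afterwards as a regular data domain for the old VMs.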
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 5 months
Cinder and Managed Block Storage
by Jorge Visentini
Is it still possible to configure Cinder for use with Managed Block Storage? If yes, is there a particular setting, or can I just stick with Cinder's default settings?
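For anyone else wondering what this looks like in practice: a Managed Block Storage domain is configured through driver options in the Admin Portal, and those options are essentially the Cinder driver options for the chosen backend. A sketch for a Ceph RBD backend; the pool and user names are placeholders:

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool      = ovirt-volumes
    rbd_user      = ovirt
    rbd_ceph_conf = /etc/ceph/ceph.conf

The engine machine also needs the cinderlib package and the hosts need os-brick for attaching volumes; exact package names depend on the version.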
Thanks all.
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 5 months
Had issue with storage and now storage domain won't mount; VMs are in unknown status
by jsmith1299@live.com
Hi All,
We have an odd setup in our environment, but each storage data center has one host and one storage domain.
We had an issue with the storage domain attached to one host. After the reboot I keep seeing vmrecovery messages like these in the vdsm logs, over and over again:
2023-06-09 21:01:30,419+0000 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=40f5b198-cb82-4ba2-8c20-b8cee34a7f47 (api:48)
2023-06-09 21:01:30,420+0000 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=40f5b198-cb82-4ba2-8c20-b8cee34a7f47 (api:54)
2023-06-09 21:01:30,810+0000 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=74b1a1cf-fab1-4918-b0da-b3fd152d9d1a (api:48)
2023-06-09 21:01:30,811+0000 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=74b1a1cf-fab1-4918-b0da-b3fd152d9d1a (api:54)
2023-06-09 21:01:30,811+0000 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:723)
I've also checked the firewall and it is still disabled.
systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Fri 2023-06-09 20:51:11 UTC; 16min ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 4984 (libvirtd)
Tasks: 17 (limit: 32768)
Memory: 39.7M
CGroup: /system.slice/libvirtd.service
└─4984 /usr/sbin/libvirtd --listen
Jun 09 20:51:11 hlkvm01 systemd[1]: Starting Virtualization daemon...
Jun 09 20:51:11 hlkvm01 systemd[1]: Started Virtualization daemon.
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-06-09 20:53:11 UTC; 14min ago
Process: 10496 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 10587 (vdsmd)
Tasks: 39
Memory: 79.5M
CGroup: /system.slice/vdsmd.service
└─10587 /usr/bin/python2 /usr/share/vdsm/vdsmd
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|596001c3-33e7-44a4-bdf9-0b53ab1dd810' args={'596001c3-33e7-44a4-bdf9-0b53ab1dd810': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-2283890943663580625', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '596001c3-33e7-44a4-bdf9-0b53ab1dd810', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893978', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|87155499-1e10-4228-aa69-7c487007746e' args={'87155499-1e10-4228-aa69-7c487007746e': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-5453960159391982695', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '87155499-1e10-4228-aa69-7c487007746e', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893973', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|0ec7a66d-fac2-4a4a-a939-e05fc7b097b7' args={'0ec7a66d-fac2-4a4a-a939-e05fc7b097b7': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1793949836195780752', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '0ec7a66d-fac2-4a4a-a939-e05fc7b097b7', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893976', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|9c8802c3-c7c9-473c-bbfb-abb0bd0f8fdb' args={'9c8802c3-c7c9-473c-bbfb-abb0bd0f8fdb': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1144924804541449415', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '9c8802c3-c7c9-473c-bbfb-abb0bd0f8fdb', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893971', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|f799c326-9969-4892-8d67-3b1229baf0ef' args={'f799c326-9969-4892-8d67-3b1229baf0ef': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '5564598485369155833', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': 'f799c326-9969-4892-8d67-3b1229baf0ef', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893980', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|e9311d9f-d770-458b-b5ad-cdc2eb35f1bd' args={'e9311d9f-d770-458b-b5ad-cdc2eb35f1bd': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-5622951617346770490', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': 'e9311d9f-d770-458b-b5ad-cdc2eb35f1bd', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893972', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|1fc4ddad-203f-4cdf-9cb3-c3d66fb97c87' args={'1fc4ddad-203f-4cdf-9cb3-c3d66fb97c87': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1397731328049024241', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '1fc4ddad-203f-4cdf-9cb3-c3d66fb97c87', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893981', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|321183ed-b0a6-42c7-bbee-2ad46a5f37ae' args={'321183ed-b0a6-42c7-bbee-2ad46a5f37ae': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '4398712824561987912', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '321183ed-b0a6-42c7-bbee-2ad46a5f37ae', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893970', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|731a11e8-62ba-4639-bdee-8c44b5790d82' args={'731a11e8-62ba-4639-bdee-8c44b5790d82': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1278467655696539707', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '731a11e8-62ba-4639-bdee-8c44b5790d82', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893977', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|411a97e6-41c7-473e-819b-04aa10bc2bf0' args={'411a97e6-41c7-473e-819b-04aa10bc2bf0': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-11964682092647781', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '411a97e6-41c7-473e-819b-04aa10bc2bf0', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893975', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
This has been going on for hours. On the management VM I am seeing the following over and over again:
2023-06-09 13:59:25,129-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hlkvm01 command Get Host Capabilities failed: Message timeout which can be caused by communication issues
2023-06-09 13:59:25,129-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
I am able to restart the host from the management VM (web console). When I try to put the host into maintenance mode I get "Error while executing action. Cannot switch Host to Maintenance mode. Host still has running VMs on it and is in Non Responsive state."
If I try to use "Confirm host has been rebooted" I get an error saying that another power management action is already in progress. Can someone please help me out here? Is there a way to mark all of the VMs as down? Anything I can do to get the storage domain back up?
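In case it helps to narrow down whether this is a storage-connection problem or a vdsm problem, these are the host-side checks I would capture first (standard commands; the vdsm-client verb is from memory and may differ between versions):

    # is the storage actually mounted on the host?
    ls /rhev/data-center/mnt/
    mount | grep rhev
    # sanlock holds the storage leases; if it is stuck the pool will not come up
    sanlock client status
    systemctl status sanlock supervdsmd vdsmd
    # what vdsm itself thinks is running
    vdsm-client Host getVMList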
Thanks
1 year, 5 months
how to expand an arbitrated-distributed-replicated volume
by Hans Kuhl
Hello all,
how can I expand an arbitrated distributed-replicated volume without adding additional arbiter bricks?
My current Gluster configuration is:
Volume Name: data
Type: Distributed-Replicate
Volume ID: dcf75cf7-1937-4ce9-a7d0-46a56a3be4b4
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gl1.lan:/gluster_bricks/sdc/sdc
Brick2: gl2.lan:/gluster_bricks/sdc/sdc
Brick3: gl3.lan:/gluster_bricks/sdx/sdx (arbiter)
Brick4: gl2.lan:/gluster_bricks/sdd/sdd
Brick5: gl3.lan:/gluster_bricks/sdc/sdc
Brick6: gl1.lan:/gluster_bricks/sdx/sdx (arbiter)
Brick7: gl3.lan:/gluster_bricks/sdd/sdd
Brick8: gl1.lan:/gluster_bricks/sdd/sdd
Brick9: gl2.lan:/gluster_bricks/sdx/sdx (arbiter)
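As far as I know this is not possible as asked: every replica subvolume in an arbitrated volume carries its own arbiter brick, so growing the volume by another 2+1 set means adding one more arbiter as well. The arbiter only stores metadata, though, so it can be a small brick on an existing disk. A sketch of adding a fourth subvolume; the brick paths are made up, and depending on the Gluster version the replica/arbiter counts may have to be stated explicitly on the command line:

    gluster volume add-brick data \
        gl1.lan:/gluster_bricks/sde/sde \
        gl2.lan:/gluster_bricks/sde/sde \
        gl3.lan:/gluster_bricks/sdy/sdy      # arbiter for the new subvolume
    gluster volume rebalance data start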
1 year, 5 months
Non responsive host (4.3.10)
by Maria Souvalioti
Hello everyone!
Due to a recent major power outage in my area, I now have an unresponsive host in a self-hosted environment of 3 hosts. There's one VM stuck on it, as well as some metadata, I guess, from when the hosted engine was running there (before the power went down).
I'm running oVirt Node 4.3.10 with 3 nodes and GlusterFS (no arbiter), and I'm using it to provide services to our clients (DNS, web sites, wikis, ticketing, etc.), so I cannot shut them down.
The oVirt engine is up and running and I can manage all the other VMs that run on the other hosts through the web GUI.
The unresponsive host replies only to ICMP requests; in every other sense it's dead: no SSH, no Gluster bricks, no console, nothing.
I tried to place the faulty host in maintenance, using the option to stop glusterd, but wasn't able to, as the engine won't let the host go into maintenance mode because it thinks the host has running VMs on it. The host won't go into maintenance even if I choose the "Ignore gluster quorum and self-heal validations" option.
I spent last week creating a backup environment where I copied the VMs, to have somewhere to run them in case something goes terribly wrong with the systems or the gluster in the production system.
I'm thinking of using the global maintenance mode and then shutting down the engine itself with *hosted-engine --vm-shutdown* and rebooting the affected host.
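For reference, the sequence described maps onto these hosted-engine commands, run from one of the healthy hosts:

    hosted-engine --set-maintenance --mode=global   # stop the HA agents from restarting the engine
    hosted-engine --vm-status                       # confirm global maintenance is active
    hosted-engine --vm-shutdown                     # cleanly stop the engine VM
    # ... reboot (or fence) the unresponsive host ...
    hosted-engine --set-maintenance --mode=none     # let HA start the engine again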
Should I remove the host from the cluster and then re-add it or should I do something else?
Thanks for any of your help!
1 year, 5 months
Renew the engine certificate - engine-setup
by Fabrice Soler
Hello,
We have a standalone engine, version 4.3.6, and we are trying to renew the engine certificate.
We have followed the official documentation, but engine-setup never asks us the question "Renew certificate?".
Do you have an idea?
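One note on this behaviour: engine-setup only offers the renewal prompt when it detects that the PKI is close to expiry (or already expired); on a healthy PKI it silently skips the question. Community posts mention forcing the renewal through the otopi environment; the environment key below is taken from those posts and should be verified against your version before use:

    engine-setup --offline --otopi-environment="OVESETUP_PKI/renew=bool:True"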
Sincerely,
--
DSI-DIT
*Fabrice SOLER*
Head of the Technical Infrastructure Department (DIT)
/Rectorat de la Guadeloupe - BP 480/
97183 Les Abymes cedex
*Phone:* 0590 47 8321
+590 690 335 564
*Email*:
www.ac-guadeloupe.fr
1 year, 5 months