Hi,
I have a 3-node cluster using Gluster storage (two nodes on RHEL 8.7 and one on RHEL 9.1). I upgraded one of the nodes to RHEL 9, and now I can't start any VM whose disks are on Gluster volumes on the RHEL 9 node, nor migrate a running VM to it.
However, I am able to start a VM whose storage is on NFS on the RHEL 9 node.
According to Gluster (10.3), all three nodes are working fine.
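For reference, "working fine" is based on the usual health checks; these are just the commands, run on each node, with output omitted here:

    # run on each of the three nodes
    gluster peer status
    gluster volume status
    gluster volume info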
I don't want to render my setup useless, which is why I haven't upgraded the other two nodes yet.
wells = the RHEL 9 node.
In the event log I'm getting the following error:
Failed to sync storage devices from host wells
VDSM wells command GetStorageDeviceListVDS failed: Internal JSON-RPC error
Here is the relevant part of vdsm.log:
2023-04-03 23:25:01,242-0400 INFO (vm/69f6480f) [vds] prepared volume
path: (clientIF:506)
2023-04-03 23:25:01,243-0400 INFO (vm/69f6480f) [vdsm.api] START
prepareImage(sdUUID='a874a247-d8de-4951-86e8-99aaeda1a510',
spUUID='595abf38-2c6d-11eb-9ba9-3ca82afed888',
imgUUID='69de5e2b-dfb7-4e47-93d0-65ed4f978017',
leafUUID='e66df35e-1254-48ef-bb83-ae8957fe8651', allowIllegal=False)
from=internal, task_id=bf3e89e0-5d4e-47a1-8d86-3933c5004d25 (api:31)
2023-04-03 23:25:01,243-0400 INFO (vm/69f6480f) [vdsm.api] FINISH
prepareImage error=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) from=internal,
task_id=bf3e89e0-5d4e-47a1-8d86-3933c5004d25 (api:35)
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [storage.taskmanager.task]
(Task='bf3e89e0-5d4e-47a1-8d86-3933c5004d25') aborting: Task is aborted:
"value=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) abortedcode=309" (task:1165)
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [storage.dispatcher]
FINISH prepareImage error=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) (dispatcher:64)
2023-04-03 23:25:01,244-0400 ERROR (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') The vm start process failed
(vm:1001)
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 928, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2769, in _run
    self._devices = self._make_devices()
  File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2649, in _make_devices
    disk_objs = self._perform_host_local_adjustment()
  File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2687, in _perform_host_local_adjustment
    self._preparePathsForDrives(disk_params)
  File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 1109, in _preparePathsForDrives
    drive['path'] = self.cif.prepareVolumePath(
  File "/usr/lib/python3.9/site-packages/vdsm/clientIF.py", line 418, in prepareVolumePath
    raise vm.VolumeError(drive)
vdsm.virt.vm.VolumeError: Bad volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'file',
'specParams': {}, 'alias': 'ua-69de5e2b-dfb7-4e47-93d0-65ed4f978017',
'address': {'bus': '0', 'controller': '0', 'target': '0', 'type': 'drive', 'unit': '0'},
'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'guestName': '/dev/sdb',
'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'poolID': '595abf38-2c6d-11eb-9ba9-3ca82afed888',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651', 'managed': False,
'volumeChain': [{'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510',
'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'leaseOffset': 0,
'leasePath': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651.lease',
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651'}],
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'discard': False, 'format': 'raw', 'propagateErrors': 'off', 'cache': 'none', 'iface': 'scsi',
'name': 'sda', 'bootOrder': '1', 'serial': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'index': 0,
'reqsize': '0', 'truesize': '5331688448', 'apparentsize': '42949672960'}
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Changed state to Down:
Bad volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'file',
'specParams': {}, 'alias': 'ua-69de5e2b-dfb7-4e47-93d0-65ed4f978017',
'address': {'bus': '0', 'controller': '0', 'target': '0', 'type': 'drive', 'unit': '0'},
'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'guestName': '/dev/sdb',
'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'poolID': '595abf38-2c6d-11eb-9ba9-3ca82afed888',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651', 'managed': False,
'volumeChain': [{'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510',
'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'leaseOffset': 0,
'leasePath': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651.lease',
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651'}],
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'discard': False, 'format': 'raw', 'propagateErrors': 'off', 'cache': 'none', 'iface': 'scsi',
'name': 'sda', 'bootOrder': '1', 'serial': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'index': 0,
'reqsize': '0', 'truesize': '5331688448', 'apparentsize': '42949672960'}
(code=1) (vm:1743)
2023-04-03 23:25:01,245-0400 INFO (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Stopping connection
(guestagent:421)
2023-04-03 23:25:01,252-0400 INFO (jsonrpc/7) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:192.168.2.24,56838,
vmId=69f6480f-5ee0-4210-84b9-58b4dbb4e423 (api:31)
2023-04-03 23:25:01,252-0400 INFO (jsonrpc/7) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Release VM resources (vm:5324)
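One more data point in case it helps: the Gluster domain from the log should be mounted on wells at /rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow, so I can verify the mount and the image path directly with something like:

    # on wells; path taken from the vdsm.log above
    mount | grep glusterSD
    ls -l /rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/

If I read the log right, the "Unknown pool id, pool not connected" error suggests wells never connected to storage pool 595abf38-2c6d-11eb-9ba9-3ca82afed888.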