Problem importing storage domain from oVirt 4.3.10 to 4.5.5
by Sumit Basu
Hi,
I want to import an FC storage domain from an oVirt 4.3.10 setup to a freshly installed oVirt 4.5.5.
For the first FC LUN this worked fine: I moved the domain to maintenance, detached and removed it from 4.3.10, and imported it (with the activate option) into 4.5.5.
When I tried the same for the second FC LUN, the import/activate fails with "Error while executing action Attach Storage Domain: Internal Engine Error", and engine.log shows "Query 'GetUnregisteredDisksQuery' failed: null".
On the standalone engine I also see:
2024-06-26 07:15:07,277+05 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-102) [] An error occurred while fetching unregistered disks from Storage Domain id 'a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429'
2024-06-26 07:15:07,277+05 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-102) [] Command 'org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand' failed: Failed to retrieve unregistered disks
2024-06-26 07:15:07,277+05 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-102) [] Exception: java.lang.RuntimeException: Failed to retrieve unregistered disks
I have checked using "Scan Disk" on the FC lun before exporting from 4.3.10 and there were no floating disks. I have checked if any of the VM's on the domain have any of their disks on any other storage domain - there are none.
On the host used for importing the domain, i get :-
2024-06-26 07:15:04,829+0530 INFO (jsonrpc/5) [storage.hba] Scanning FC devices (hba:42)
2024-06-26 07:15:04,938+0530 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=9f03d412-56a7-4f3f-85b6-c3fd72a68817 (api:31)
2024-06-26 07:15:04,938+0530 INFO (periodic/0) [vdsm.api] FINISH repoStats return={'7c9e748c-442b-48d0-9b35-a769d349c73b': {'code': 0, 'lastCheck': '3.7', 'delay': '0.000974057', 'valid': True, 'version': 0, 'acquired': True, 'actual': True}, 'a589fe14-f40a-4d67-afcd-ec74b07009cf': {'code': 0, 'lastCheck': '3.7', 'delay': '0.000501601', 'valid': True, 'version': 0, 'acquired': True, 'actual': True}, '6b625ecb-1cd1-43c5-b533-32000cd050ec': {'code': 0, 'lastCheck': '3.7', 'delay': '0.000570869', 'valid': True, 'version': 5, 'acquired': True, 'actual': True}} from=internal, task_id=9f03d412-56a7-4f3f-85b6-c3fd72a68817 (api:37)
2024-06-26 07:15:04,968+0530 INFO (jsonrpc/5) [storage.hba] Scanning FC devices: 0.14 seconds (utils:373)
2024-06-26 07:15:04,968+0530 INFO (jsonrpc/5) [storage.multipath] Waiting until multipathd is ready (multipath:95)
2024-06-26 07:15:07,003+0530 INFO (jsonrpc/5) [storage.multipath] Waited 2.04 seconds for multipathd (tries=2, ready=2) (multipath:120)
2024-06-26 07:15:07,003+0530 INFO (jsonrpc/5) [storage.multipath] Resizing multipath devices (multipath:223)
2024-06-26 07:15:07,014+0530 INFO (jsonrpc/5) [storage.multipath] Resizing multipath devices: 0.01 seconds (utils:373)
2024-06-26 07:15:07,014+0530 INFO (jsonrpc/5) [storage.storagedomaincache] Refreshing storage domain cache: 2.60 seconds (utils:373)
2024-06-26 07:15:07,263+0530 INFO (jsonrpc/5) [storage.volumemanifest] Creating image directory '/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4' (blockVolume:106)
2024-06-26 07:15:07,263+0530 ERROR (jsonrpc/5) [storage.volumemanifest] Unexpected error (blockVolume:110)
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/vdsm/storage/blockVolume.py", line 108, in validateImagePath
os.mkdir(imageDir, 0o755)
FileNotFoundError: [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4'
2024-06-26 07:15:07,263+0530 INFO (jsonrpc/5) [vdsm.api] FINISH prepareImage error=Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4',) from=::ffff:10.10.99.100,51116, flow_id=872d7930-e487-4fe4-8e0a-599074105d7b, task_id=523690f8-24b3-4f1f-801e-8391ebb087a2 (api:35)
2024-06-26 07:15:07,263+0530 ERROR (jsonrpc/5) [storage.taskmanager.task] (Task='523690f8-24b3-4f1f-801e-8391ebb087a2') Unexpected error (task:860)
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/vdsm/storage/blockVolume.py", line 108, in validateImagePath
os.mkdir(imageDir, 0o755)
FileNotFoundError: [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/vdsm/storage/task.py", line 867, in _run
return fn(*args, **kargs)
File "<decorator-gen-169>", line 2, in prepareImage
File "/usr/lib/python3.9/site-packages/vdsm/common/api.py", line 33, in method
ret = func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/vdsm/storage/hsm.py", line 2958, in prepareImage
legality = dom.produceVolume(imgUUID, volUUID).getLegality()
File "/usr/lib/python3.9/site-packages/vdsm/storage/sd.py", line 1188, in produceVolume
return self.getVolumeClass()(self.mountpoint, self.sdUUID, imgUUID,
File "/usr/lib/python3.9/site-packages/vdsm/storage/volume.py", line 881, in __init__
self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
File "/usr/lib/python3.9/site-packages/vdsm/storage/blockVolume.py", line 40, in __init__
volume.VolumeManifest.__init__(self, repoPath, sdUUID, imgUUID,
File "/usr/lib/python3.9/site-packages/vdsm/storage/volume.py", line 67, in __init__
self.validate()
File "/usr/lib/python3.9/site-packages/vdsm/storage/blockVolume.py", line 135, in validate
volume.VolumeManifest.validate(self)
File "/usr/lib/python3.9/site-packages/vdsm/storage/volume.py", line 99, in validate
self.validateImagePath()
File "/usr/lib/python3.9/site-packages/vdsm/storage/blockVolume.py", line 111, in validateImagePath
raise se.ImagePathError(imageDir)
vdsm.storage.exception.ImagePathError: Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4',)
2024-06-26 07:15:07,263+0530 INFO (jsonrpc/5) [storage.taskmanager.task] (Task='523690f8-24b3-4f1f-801e-8391ebb087a2') aborting: Task is aborted: "value=Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4',) abortedcode=254" (task:1165)
2024-06-26 07:15:07,264+0530 ERROR (jsonrpc/5) [storage.dispatcher] FINISH prepareImage error=Image path does not exist or cannot be accessed/created: ('/rhev/data-center/mnt/blockSD/a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429/images/f054c394-2b49-4087-9e9d-0490b23e34e4',) (dispatcher:66)
2024-06-26 07:15:07,264+0530 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Image.prepare failed (error 254) in 2.85 seconds (__init__:300)
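In case it helps with debugging: as far as I understand vdsm's block-domain layout, each image on a block storage domain is a set of LVs in a VG named after the storage domain UUID, with the image UUID carried in an IU_ tag. So whether the "missing" image actually exists on this LUN can be checked from the host (UUIDs below are the ones from the logs above; the vdsm-client call is an assumption and needs a reasonably recent vdsm):

# VG = storage domain UUID; LV tags carry IU_<image-uuid> (vdsm block-SD layout)
lvs -o lv_name,tags a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429 | grep f054c394
# On newer vdsm versions the whole domain can also be dumped for inspection:
vdsm-client StorageDomain dump sd_id=a5a72b44-f9bc-4e5b-ab43-ca2b1d97b429

If no LV carries that IU_ tag, the engine is referencing an image that is not present on the LUN, which would explain the failed mkdir under /rhev/data-center/mnt/blockSD/.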
Any help would be useful; we are moving from 4.3.10 on IBM servers and storage to 4.5.5 on Dell servers and storage.
Regards
Sumit Basu
4 months, 2 weeks
4.5.4 with Ceph only storage
by Maurice Burrows
Hey ... A long story short ... I have an existing Red Hat Virt / Gluster hyperconverged solution that I am moving away from.
I have an existing Ceph cluster that I primarily use for OpenStack and a small requirement for S3 via RGW.
I'm planning to build a new oVirt 4.5.4 cluster on RHEL9 using Ceph for all storage requirements. I've read many online articles on oVirt and Ceph, and they all seem to use the Ceph iSCSI gateway, which is now in maintenance, so I'm not really keen to commit to iSCSI.
So my question is: is there any reason I cannot use CephFS for both the hosted-engine and a data storage domain?
I'm currently running Ceph Pacific FWIW.
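For what it's worth, CephFS can be attached as a POSIX-compliant FS storage domain (VFS type "ceph", with the same options you would pass to a manual mount). A quick sanity check from a prospective host, with placeholder monitor addresses, user, and secret file, would look something like:

mkdir -p /mnt/cephfs-test
mount -t ceph 192.168.1.11,192.168.1.12,192.168.1.13:/ovirt /mnt/cephfs-test \
    -o name=ovirt,secretfile=/etc/ceph/ovirt.secret
df -h /mnt/cephfs-test && umount /mnt/cephfs-test

If that mounts cleanly, the same path, VFS type, and mount options go into the new-domain dialog.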
Cheers
4 months, 2 weeks
After restoring the failed host and synchronizing the data, it prompts that there are unsynchronized items
by ziyi Liu
gluster volume info
Volume Name: data
Type: Replicate
Number of Bricks: 1 x (2 + 1) = 3
Bricks:
Brick1: node-gfs1:/gluster_bricks/data/data1
Brick2: node-gfs2:/gluster_bricks/data/data1
Brick3: node-gfs3:/gluster_bricks/data/data1 (arbiter)
gluster volume heal data info
Number of entries: 39
The following appears in /var/log/glusterfs/glustershd.log:
client-rpc-fops_v2.c:785:client4_0_fsync_cbk] 0-data-client-0: remote operation failed. [{errno=5}, {error=Input/output error}]
How should I resolve these unhealed entries?
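A minimal sketch of the usual triage, assuming a standard replica-3 arbiter volume like the one above (the split-brain command and its path argument are only needed if split-brain entries actually show up):

gluster volume heal data info summary       # pending-heal counts per brick
gluster volume heal data info split-brain   # entries needing manual resolution
gluster volume heal data full               # trigger a full heal
# only for a confirmed split-brain entry (path, as seen from the volume root, is a placeholder):
# gluster volume heal data split-brain latest-mtime /path/inside/volume

The errno=5 (Input/output error) on fsync from client-0 also makes it worth checking the brick log and dmesg on node-gfs1 for underlying disk errors.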
4 months, 2 weeks
Error importing OVA on Ovirt-4.5.4 - no value for $TERM is defined
by raulmevi2@hotmail.com
Hello,
I'm facing an issue on a freshly installed Oracle OLVM (oVirt 4.5.4) self-hosted engine with NFS shared storage: an .OVA file cannot be imported.
Steps to reproduce:
- Created a test virtual machine in an NFS storage domain, installed it; all working fine.
- Shut down the test VM.
- Exported the test VM to a purposely created directory /home/vdsm, whose owner is vdsm:kvm with full permissions.
- The .OVA export concludes successfully and the file is created at that location.
Then, on the exact same server, in the same datacenter:
- Renamed the source VM so the name doesn't conflict (same result if I delete the source VM).
- Tried to import the .OVA using the GUI. It successfully finds the file at the specified location and begins the procedure.
- At a given point it fails, with no further indication in the web GUI; there is no more info under "Events" than "Failed to import vm Test to Datacenter".
Digging into the logs on the hosted-engine VM, at /var/log/ovirt-engine/ova, I found in the log named "ovirt-ova-external-data-ansible-DATE-HostFQDN-uuid.log" an error that specifies:
stdout_lines" : [ "tput: No value for $TERM and no -T specified", "tput: No value for $TERM and no -T specified", "tput: No value for $TERM and no -T specified"
I tinkered with the ansible role that calls a python script called "get_ova_data.py" (located at /usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-ova-external-data/).
I tried setting the TERM variable to null or to xterm, but then it fails in another way, still complaining about that variable.
From searching the web, it seems something calls "tput" as if it were running on a console, and then it breaks.
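The tput error itself is easy to reproduce outside oVirt; tput needs either a usable $TERM or an explicit -T (a quick illustration of the message, not a fix for the import):

env -u TERM tput cols   # -> tput: No value for $TERM and no -T specified
TERM=dumb tput cols     # -> 80, since the 'dumb' terminfo entry defines cols#80
tput -T xterm cols      # -T bypasses $TERM entirely

So whatever the ansible role runs ends up in an environment with no usable TERM set.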
Can I have some help with this, please?
Thanks
4 months, 3 weeks
HE deployment on FC (fibre-channel) disk fails at 99% completed at final "hosted-engine --reinitialize-lockspace --force"
by Jeffrey Slapp
This is identical to what many others have encountered, yet nothing definitive has been suggested.
The HE deployment nearly finishes, but shortly after the HE VM is copied to the shared disk, it reaches the "initialize lockspace" step and the following error occurs:
20083 2024-06-19 13:24:01,777-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': True, 'stdout': '', 'rc': 1, 'cmd': ['hosted-engine', '--reinitialize-lockspace', '--force'], 'start': '2024-06-19 13:24:01.438386', 'end': '2024-06-19 13:24:01.618227', 'delta': '0:00:00.179841', 'msg': 'non-zero return code', 'attempts': 5} with stderr:
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.9/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py", line 30, in <module>
    ha_cli.reset_lockspace(force)
  File "/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/client/client.py", line 286, in reset_lockspace
    stats = broker.get_stats_from_storage()
  File "/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 148, in get_stats_from_storage
    result = self._proxy.get_stats()
  File "/usr/lib64/python3.9/xmlrpc/client.py", line 1122, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.9/xmlrpc/client.py", line 1464, in __request
    response = self.__transport.request(
  File "/usr/lib64/python3.9/xmlrpc/client.py", line 1166, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.9/xmlrpc/client.py", line 1178, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.9/xmlrpc/client.py", line 1291, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.9/xmlrpc/client.py", line 1321, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.9/http/client.py", line 1280, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.9/http/client.py", line 1040, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.9/http/client.py", line 980, in send
    self.connect()
  File "/usr/lib/python3.9/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 76, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory
No errors in /var/log/messages or /var/log/sanlock.log.
I have this working with iSCSI on another storage system, but can't seem to get it to work on FC. I have read that sector sizes could possibly cause this. On my iSCSI system [PHY-SEC:LOG-SEC] is [512:512], but on my FC system it is [4096:512].
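For comparing the two systems, a minimal sketch of how the sector sizes and the HA broker (whose unix socket the traceback fails to connect to) can be checked; the multipath device name is a placeholder:

lsblk -o NAME,PHY-SEC,LOG-SEC,SIZE /dev/mapper/mpatha   # sector sizes of the HE LUN
systemctl status ovirt-ha-broker ovirt-ha-agent         # broker up, socket present?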
Hoping that someone can confirm whether this is the issue or not. Interestingly, all previous "initialize lockspace" phases of the install are fine; it only appears to be this final one.
4 months, 3 weeks
New host cannot connect to master domain
by ovirt@kirschke.de
Hi,
I just installed a new host, which cannot connect to the master domain. The domain is an iSCSI device on a NAS, and oVirt is otherwise fine with it; other hosts have no problem. I'm sure I'm just overlooking something.
What I see in the log is:
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagepoolmemorybackend] new storage pool master version 302 and domains map {'6f018fbd-de93-4c56-880d-8ede2aad2674': 'Active', '2c870e06-6c70-45ec-b665-ce29408c8a8e': 'Active', 'a15496dc-c241-4658-af9d-0dfe11783916': 'Active', '41012bfb-b802-4092-b699-7f5284d95c8e': 'Active'} (spbackends:417)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagepool] updating pool 5836aaac-0030-0064-024d-0000000002e4 backend from type NoneType instance 0x7f10d667fb70 to type StoragePoolMemoryBackend instance 0x7f10702d7408 (sp:149)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagepool] Connect host #2 to the storage pool 5836aaac-0030-0064-024d-0000000002e4 with master domain: a15496dc-c241-4658-af9d-0dfe11783916 (ver = 302) (sp:699)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Invalidating storage domain cache (sdc:57)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Clearing storage domain cache (sdc:182)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Refreshing storage domain cache (resize=True) (sdc:63)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.iscsi] Scanning iSCSI devices (iscsi:445)
2024-04-16 11:10:06,456-0400 INFO (jsonrpc/3) [storage.iscsi] Scanning iSCSI devices: 0.14 seconds (utils:373)
2024-04-16 11:10:06,456-0400 INFO (jsonrpc/3) [storage.hba] Scanning FC devices (hba:42)
2024-04-16 11:10:06,481-0400 INFO (jsonrpc/3) [storage.hba] Scanning FC devices: 0.03 seconds (utils:373)
2024-04-16 11:10:06,481-0400 INFO (jsonrpc/3) [storage.multipath] Waiting until multipathd is ready (multipath:95)
2024-04-16 11:10:08,498-0400 INFO (jsonrpc/3) [storage.multipath] Waited 2.02 seconds for multipathd (tries=2, ready=2) (multipath:122)
2024-04-16 11:10:08,498-0400 INFO (jsonrpc/3) [storage.multipath] Resizing multipath devices (multipath:223)
2024-04-16 11:10:08,499-0400 INFO (jsonrpc/3) [storage.multipath] Resizing multipath devices: 0.00 seconds (utils:373)
2024-04-16 11:10:08,499-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Refreshing storage domain cache: 2.18 seconds (utils:373)
2024-04-16 11:10:08,499-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Looking up domain a15496dc-c241-4658-af9d-0dfe11783916 (sdc:154)
2024-04-16 11:10:08,536-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Looking up domain a15496dc-c241-4658-af9d-0dfe11783916: 0.04 seconds (utils:373)
2024-04-16 11:10:08,537-0400 INFO (jsonrpc/3) [vdsm.api] FINISH connectStoragePool error=Cannot find master domain: 'spUUID=5836aaac-0030-0064-024d-0000000002e4, msdUUID=a15496dc-c241-4658-af9d-0dfe11783916' from=::ffff:10.2.0.4,44914, flow_id=44d9e674, task_id=32f0ea2a-044b-4a81-a12e-a269e59a802b (api:35)
2024-04-16 11:10:08,537-0400 ERROR (jsonrpc/3) [storage.taskmanager.task] (Task='32f0ea2a-044b-4a81-a12e-a269e59a802b') Unexpected error (task:860)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 1550, in setMasterDomain
domain = sdCache.produce(msdUUID)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 98, in produce
domain.getRealDomain()
File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 34, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 122, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 139, in _findDomain
return findMethod(sdUUID)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 169, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
vdsm.storage.exception.StorageDomainDoesNotExist: Storage domain does not exist: ('a15496dc-c241-4658-af9d-0dfe11783916',)
During handling of the above exception, another exception occurred:
....
The domain a15496dc-c241-4658-af9d-0dfe11783916 definitely exists and works on other systems.
The new host can access the iSCSI targets, and I am able to log into them.
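A minimal sketch of checks that should show whether the LUN and its VG are visible on the new host at all (the grep pattern is the domain UUID from the logs; the VG being named after the SD UUID is vdsm's block-domain convention):

iscsiadm -m session -P 1                      # sessions really logged in?
multipath -ll                                 # is the NAS LUN a multipath device here?
vgs -o vg_name,pv_name,tags | grep a15496dc   # block SD = VG named after the domain UUID

If the VG does not show up, the missing piece is usually multipath or the LVM filter configuration on the new host rather than oVirt itself.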
Does anyone know which blinder I have to remove so that I can see the actual problem?
Thanks and best regards
Steffen
4 months, 3 weeks
Admin interface broken
by ovirt@kirschke.de
Hi
I tried to migrate my oVirt engine from a CentOS 8 to a CentOS 9 machine. I made a backup, created and configured a new machine, stopped the old one, started the new one with an identical network configuration, restored the backup, and ran engine-setup to set up the new site, expecting it to work. It did not, but I thought: no problem, I still have my old one.
Now, going back to the old machine, when logging into the Admin Portal, I get error messages:
- Error while executing action: A Request to the Server failed: Type 'org.ovirt.engine.core.common.queries.QueryType' was not assignable to 'com.google.gwt.user.client.rpc.IsSerializable' and did not have a custom field serializer. For security purposes, this type will not be deserialized.
- Error while executing query: null
Each appears in a dialog box, which I click away, with the two messages alternating.
After a while, the background of the Admin Portal's dashboard becomes visible, but it remains empty, with the 'loading' spinner.
In the server log I only see the following exception:
Caused by: com.google.gwt.user.client.rpc.SerializationException: Type 'org.ovirt.engine.core.common.queries.QueryType' was not assignable to 'com.google.gwt.user.client.rpc.IsSerializable' and did not have a custom field serializer. For security purposes, this type will not be deserialized.
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.LegacySerializationPolicy.validateDeserialize(LegacySerializationPolicy.java:128)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader.deserialize(ServerSerializationStreamReader.java:676)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader.readObject(ServerSerializationStreamReader.java:592)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.core.java.util.Collection_ServerCustomFieldSerializerBase.deserialize(Collection_ServerCustomFieldSerializerBase.java:38)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.core.java.util.ArrayList_ServerCustomFieldSerializer.deserialize(ArrayList_ServerCustomFieldSerializer.java:40)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.core.java.util.ArrayList_ServerCustomFieldSerializer.deserializeInstance(ArrayList_ServerCustomFieldSerializer.java:54)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.core.java.util.ArrayList_ServerCustomFieldSerializer.deserializeInstance(ArrayList_ServerCustomFieldSerializer.java:33)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader.deserializeImpl(ServerSerializationStreamReader.java:887)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader.deserialize(ServerSerializationStreamReader.java:687)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader.readObject(ServerSerializationStreamReader.java:592)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader$ValueReader$8.readValue(ServerSerializationStreamReader.java:149)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.impl.ServerSerializationStreamReader.deserializeValue(ServerSerializationStreamReader.java:434)
at deployment.engine.ear.webadmin.war//com.google.gwt.user.server.rpc.RPC.decodeRequest(RPC.java:312)
... 64 more
Of course, I do not understand what it is trying to tell me.
The VM Portal is still reachable and seems to work.
Has anybody seen an effect like this, or, even better, does anyone have an idea what I am doing wrong and how I can fix it, or where I can look to find the root cause of this behaviour?
Forgot to mention: desperate as I am, I also tried upgrading to 4.5.7-master, which worked without any problem or warning, but the effect is still the same.
Thanks and best regards
Steffen
4 months, 3 weeks
Ovirt4.5 installation failure on Rocky 8
by 盛家杰
Hello Ovit Devs,
I am new to oVirt and am trying to install oVirt 4.5 on a Rocky 8 machine, and I am stuck at the first step: enabling the oVirt engine repo.
1. I followed the manual's yum repo configuration steps, using the bash code in the "Rocky" part of https://www.ovirt.org/download/install_on_rhel.html:
# On Rocky there's an issue with centos-release-nfv package from extras
dnf install -y dnf-plugins-core
dnf config-manager --set-disabled extras
2. I tried to install oVirt 4.5 with "dnf install -y centos-release-ovirt45", but it fails with "unable to find a match". I found that centos-release-ovirt45 is listed in the extras repo, and the extras repo is disabled. If extras needs to stay disabled, how can I install centos-release-ovirt45?
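A minimal sketch of one way around this, assuming the package really is only available from Rocky's extras repo: re-enable extras just for this one transaction with dnf's --enablerepo switch, leaving it disabled globally:

# 'extras' stays disabled in the config; --enablerepo applies to this transaction only
dnf --enablerepo=extras install -y centos-release-ovirt45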
Any suggestions would be highly appreciated!
Jiajie
4 months, 3 weeks
oVirt CLI tool for automation tasks
by munnadawood@gmail.com
We recently migrated from VMware to oVirt. I am looking for a CLI tool well suited to automation tasks like creating, cloning, and migrating hundreds of virtual machines in an oVirt cluster.
With VMware I was using govc (a vSphere CLI built on top of govmomi). Another option I read about is PowerCLI, though I'm quite unsure whether it works with oVirt.
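For reference, the common automation paths for oVirt are the ovirt.ovirt Ansible collection and the Python SDK (ovirt-engine-sdk-python); both go through the engine's REST API, which is also directly scriptable from a shell. A minimal sketch that creates a VM from the Blank template (engine URL, credentials, and CA path are placeholders):

ENGINE='https://engine.example.com/ovirt-engine/api'
curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
     --user 'admin@internal:password' \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -X POST "$ENGINE/vms" \
     -d '<vm><name>test-vm-01</name><cluster><name>Default</name></cluster><template><name>Blank</name></template></vm>'

The same API has endpoints for cloning (creating from a template with clone=true) and migration (POST .../vms/{id}/migrate), so looping over hundreds of VMs is straightforward.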
Any suggestions would be highly appreciated.
Thanks!
4 months, 3 weeks