Windows 10 on older CPUs
by Chris Adams
I have an oVirt cluster running 4.3.10 on older hardware with a cluster
CPU type of Intel Nehalem. I needed to install a Windows 10 (latest
release) VM, which oVirt rejected because of the CPU model. I remember
there being issues with certain versions of Windows and this CPU model
under oVirt (I don't do Windows very often and had forgotten).
I instead told oVirt I was installing Windows 8 (which it accepted) and went
ahead - the install completed and the VM appears to be working just fine. I
don't know if Microsoft changed whatever used to break or what... I remember
the earlier problem showing up before the install even finished, so it seems
to be okay now.
Any issues I should be worried about?
--
Chris Adams <cma(a)cmadams.net>
3 years, 2 months
ovirt node ng 4.4 crash in anaconda when setting ntp
by Gianluca Cecchi
Hello,
when installing oVirt Node NG 4.4 on a Dell M620 I hit a crash in anaconda
whenever I try to set up NTP.
Using ovirt-node-ng-installer-4.4.4-2020122111.el8.iso
As soon as I select it and try to type the hostname to use, everything freezes
and then anaconda aborts.
Just today I had the same with the latest RHVH 4.4 iso:
RHVH-4.4-20201210.0-RHVH-x86_64-dvd1.iso on an R630.
Quite disappointing, because I also have to fight with iDRAC8 to install the
OS: it is painfully slow. In practice I waste about an hour each time...
Is anyone aware of this, or able to reproduce it on another platform, so that
I can eventually open a bug/case for it?
My config is the default one in anaconda, accepting the defaults and creating
a bonded connection (LACP).
Then, as the last step, I go into Time & Date, click on the NTP button, and I
hit the problem as soon as I try to type inside the text box.
Thanks,
Gianluca
3 years, 2 months
LVM filesystem not mounting VMStore
by marcel@deheureu.se
Hi,
We got the following error message from our 3-server setup this morning.
Two of the servers were switched off by a "user" - a simple hard power-off.
One of the servers started up fine, and the other one reports the following
message.
What should we do first? We can't mount the vmstore manually, and the system
takes many minutes to come up. Does anyone have an idea?
Br
Marcel
3 years, 2 months
[ANN] oVirt 4.4.5 Third Release Candidate is now available for testing
by Sandro Bonazzola
oVirt 4.4.5 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.5
Third Release Candidate for testing, as of January 29th, 2021.
This update is the fifth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts you
should be aware that after upgrading from 4.4.1 to 4.4.5 you may get your
host entering emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current LVM filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.5 (redeploy in case of already being on 4.4.5).
4. Run "vdsm-tool config-lvm-filter" to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild the initramfs with the correct filter configuration.
6. Reboot.
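For reference, a minimal shell sketch of steps 1, 4 and 5 above, to be run on
each host; the sed one-liner for step 1 is only one possible way to drop the
old filter, and the exact filter line in /etc/lvm/lvm.conf differs per host:

  # Step 1 (while still on 4.4.1, or from emergency mode): comment out the
  # existing "filter = [...]" line in /etc/lvm/lvm.conf
  sed -i 's/^\([[:space:]]*filter[[:space:]]*=\)/# \1/' /etc/lvm/lvm.conf

  # Step 4 (after upgrading and rebooting): let vdsm propose/apply a new filter
  vdsm-tool config-lvm-filter
  grep '^[[:space:]]*filter' /etc/lvm/lvm.conf   # confirm the new filter

  # Step 5 (only on non-oVirt-Node hosts): rebuild the initramfs
  dracut --force --add multipath

Then reboot as per step 6.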
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on
  the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the
  oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.5 Node and Appliance on CentOS Linux.
Additional Resources:
* Read more about the oVirt 4.4.5 release highlights:
http://www.ovirt.org/release/4.4.5/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.5/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
3 years, 2 months
How to reset network on all cluster members (controller and host) to defaults for a clean start?
by David Johnson
I have a new oVirt 4.4.5 cluster consisting of a controller and a single
host, both running on CentOS 8. I believe I have messed up my network
configurations by using a mix of oVirt and nmcli functions to try to
resolve problems.
Is there an easy way to reset everything on the controller and host
machines to get a clean start? Or do I need to start from scratch and
re-install CentOS?
Thank you in advance.
David Johnson
3 years, 2 months
Gluster volume slower than raid1 zpool speed
by Harry O
Hi,
Can anyone help me with the performance of my 3-node Gluster on ZFS (it is set up with one arbiter)?
The write performance of the single VM I have on it (with the engine) is 50% worse than a single bare-metal disk.
I have enabled "Optimize for virt store"
I run a 1 Gbps network with 1500 MTU - could this be the write performance killer?
Is this to be expected from a 2x HDD ZFS mirror (RAID 1) on each node, in a 3-node setup with an arbiter?
Maybe I should move to raid 5 or 6?
Maybe I should add SSD cache to raid1 zfs zpools?
What are your thoughts? What should I do to optimize this setup?
I would like to run zfs with gluster and I can deal with a little performance loss, but not that much.
3 years, 2 months
vGPU on ovirt 4.3
by kim.kargaard@noroff.no
Hi,
We are looking at getting some GPUs for our servers and using vGPU passthrough so that our students can do video rendering on the VMs. Does anyone have good experience with the Nvidia Quadro RTX 6000 or RTX 8000 and oVirt 4.3?
Thanks.
Kim
3 years, 2 months
Re: VM templates
by Strahil Nikolov
If your system will have only Gluster -> you can blacklist everything in multipath. But imagine you have Gluster and also SAN or iSCSI -> then you need to blacklist only the local disks that Gluster uses, but not the SAN.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Wed, Jan 27, 2021 at 16:28, Robert Tongue<phunyguy(a)neverserio.us> wrote:
Ahh, OK, I didn't realize that. I appreciate it, and was hoping to get good
feedback like this when I posted what I did. Will make the changes.
What is meant by "gluster only machine" here?
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Wednesday, January 27, 2021 7:11 AM
To: Robert Tongue <phunyguy(a)neverserio.us>; users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: VM templates
You should create a file like mine, because vdsm manages /etc/multipath.conf
# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    devnode "*"
    wwid nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-00000001
    wwid TOSHIBA-TR200_Z7KB600SK46S
    wwid ST500NM0011_Z1M00LM7
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2303189
    wwid WDC_WD15EADS-00P8B0_WD-WMAVU0885453
    wwid WDC_WD5003ABYZ-011FA0_WD-WMAYP0F35PJ4
}
Keep in mind 'devnode "*"' is OK only for a gluster-only machine.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
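To illustrate the point above about mixed setups (this is not from the
original thread; the WWIDs are placeholders and the reload step is an
assumption to adapt per host): on a host that also uses SAN or iSCSI storage,
the blacklist would list only the local Gluster disks and omit the catch-all
devnode line, e.g.:

  # /etc/multipath/conf.d/blacklist.conf on a mixed Gluster + SAN host
  cat > /etc/multipath/conf.d/blacklist.conf <<'EOF'
  blacklist {
      # no 'devnode "*"' here, so SAN/iSCSI LUNs stay under multipath
      wwid LOCAL-GLUSTER-DISK-WWID-1
      wwid LOCAL-GLUSTER-DISK-WWID-2
  }
  EOF
  multipathd reconfigure   # reload the multipath configuration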
3 years, 2 months
New setup - Failing to Activate storage domain on NFS shared storage
by Matt Snow
I installed oVirt Node 4.4.4 as well as 4.4.5-pre and experienced the same problem with both versions. The issue occurs in both the cockpit UI and a tmux'd CLI run of ovirt-hosted-engine-setup. I get past the point where the VM is created and running.
I tried to do some debugging on my own before reaching out to this list. Any help is much appreciated!
oVirt node hardware: NUC-format Jetway w/ Intel N3160 (Braswell, 4 cores/4 threads), 8GB RAM, 64GB SSD. I understand this is underspec'd, but I believe it meets the minimum requirements.
NFS server:
* Ubuntu 19.10 w/ ZFS share w/ 17TB available space.
* NFS share settings are just 'rw=(a)172.16.1.0/24' but have also tried 'rw,sec=sys,anon=0' and '@172.16.1.0/24,insecure'
* The target directory is always empty and chown'd 36:36 with 0755 permissions.
* I have tried using both IP and DNS names. forward and reverse DNS works from ovirt host and other systems on the network.
* The NFS share always gets mounted successfully on the ovirt node system.
* I have tried auto and v3 NFS versions in other various combinations.
* I have also tried setting up an NFS server on a non-ZFS backed storage system that is open to any host and get the same errors as shown below.
* I ran nfs-check.py script without issue against both NFS servers and followed other verification steps listed on https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
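For comparison, a minimal sketch of the export and a quick write test as the
vdsm user - the path, network and option set here are assumptions based on the
description above (standard Linux exportfs syntax; the ZFS sharenfs property
would need the equivalent options), not a confirmed fix for this failure:

  # on the NFS server (/etc/exports): squash all access to vdsm:kvm (36:36)
  /exports/host_storage  172.16.1.0/24(rw,all_squash,anonuid=36,anongid=36)
  # then apply with: exportfs -r

  # on the oVirt node: verify vdsm can actually write to the share
  mount -t nfs stumpy.mydomain.com:/tanker/ovirt/host_storage /mnt
  sudo -u vdsm touch /mnt/test && sudo -u vdsm rm /mnt/test
  umount /mnt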
***Snip from ovirt-hosted-engine-setup***
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: nfs
Please specify the nfs version you would like to use (auto, v3, v4, v4_0, v4_1, v4_2)[auto]:
Please specify the full shared storage connection path to use (example: host:/path): stumpy.mydomain.com:/tanker/ovirt/host_storage
If needed, specify additional mount options for the connection to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []: rw
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add iSCSI storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get storage domain details]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the appliance OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Parse OVF]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
***End snippet***
The relevant section from /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_storage_domain-20210116195220-bnfg1w.log:
***Snip***
2021-01-16 19:53:56,010-0700 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f11a04c8320> kwargs
2021-01-16 19:53:57,219-0700 INFO ansible task start {'status': 'OK', 'ansible_type': 'task', 'ansible_playbook': '/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_task': 'ovirt.ovirt.hosted_engine_setup : Activate storage domain'}
2021-01-16 19:53:57,220-0700 DEBUG ansible on_any args TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
2021-01-16 19:53:57,221-0700 DEBUG ansible on_any args localhost TASK: ovirt.ovirt.hosted_engine_setup : Activate storage domain kwargs
2021-01-16 19:54:00,346-0700 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
2021-01-16 19:54:00,347-0700 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
2021-01-16 19:54:00,348-0700 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
2021-01-16 19:54:00,349-0700 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_f38n25ab/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 783, in main\n File \"/tmp/ansible_ovirt_storage_domain_payload_f38n25ab/ansible_ovirt_storage_domain_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_storage_domain.py\", line 638, in post_create_check\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py\", line 3647, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(respon
se)\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response, body)\n File \"/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\novirtsdk4.Error: Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
"invocation": {
"module_args": {
"backup": null,
"comment": null,
"critical_space_action_blocker": null,
"data_center": "Default",
"description": null,
"destroy": null,
"discard_after_delete": null,
"domain_function": "data",
"fcp": null,
"fetch_nested": false,
"format": null,
"glusterfs": null,
"host": "brick.mydomain.com",
"id": null,
"iscsi": null,
"localfs": null,
"managed_block_storage": null,
"name": "hosted_storage",
"nested_attributes": [],
"nfs": null,
"poll_interval": 3,
"posixfs": null,
"state": "present",
"timeout": 180,
"wait": true,
"warning_low_space": null,
"wipe_after_delete": null
}
},
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."
},
"ansible_task": "Activate storage domain",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 4
}
2021-01-16 19:54:00,350-0700 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f11a05f2d30> kwargs ignore_errors:None
2021-01-16 19:54:00,358-0700 INFO ansible stats {
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "01:35 Minutes",
"ansible_result": "type: <class 'dict'>\nstr: {'localhost': {'ok': 23, 'failures': 1, 'unreachable': 0, 'changed': 0, 'skipped': 7, 'rescued': 0, 'ignored': 0}}",
"ansible_type": "finish",
"status": "FAILED"
}
2021-01-16 19:54:00,359-0700 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:05 ] Force facts gathering
[ 00:03 ] Check local VM dir stat
[ 00:04 ] Obtain SSO token using username/password credentials
[ 00:04 ] Fetch host facts
[ 00:02 ] Fetch cluster ID
[ 00:04 ] Fetch cluster facts
[ 00:04 ] Fetch Datacenter facts
[ 00:02 ] Fetch Datacenter ID
[ 00:02 ] Fetch Datacenter name
[ 00:04 ] Add NFS storage domain
[ 00:04 ] Get storage domain details
[ 00:03 ] Find the appliance OVF
[ 00:03 ] Parse OVF
[ 00:02 ] Get required size
[ FAILED ] Activate storage domain
2021-01-16 19:54:00,359-0700 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7f11a2e3b8d0> kwargs
***End snip***
The relevant section from /var/log/vdsm/vdsm.log:
***begin snip***
2021-01-16 19:53:58,439-0700 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:48)
2021-01-16 19:53:58,439-0700 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=b8b21668-189e-4b68-a7f0-c2d2ebf14546 (api:54)
2021-01-16 19:53:58,440-0700 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:726)
2021-01-16 19:53:58,885-0700 INFO (jsonrpc/3) [vdsm.api] START connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': '3ffd1e3b-168e-4248-a2af-b28fbdf49eef', 'user': '', 'tpgt': '1'}], options=None) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=5bd52fa3-f790-4ed3-826d-c1f51e5f2291 (api:48)
2021-01-16 19:53:58,892-0700 INFO (jsonrpc/3) [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
2021-01-16 19:53:58,892-0700 INFO (jsonrpc/3) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '3ffd1e3b-168e-4248-a2af-b28fbdf49eef', 'status': 0}]} from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=5bd52fa3-f790-4ed3-826d-c1f51e5f2291 (api:54)
2021-01-16 19:53:58,914-0700 INFO (jsonrpc/6) [vdsm.api] START getStorageDomainInfo(sdUUID='54532dd4-3e5b-4885-b88e-599c81efb146', options=None) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=4d5a352d-6096-45b7-a4ee-b6e08ac02f7b (api:48)
2021-01-16 19:53:58,914-0700 INFO (jsonrpc/6) [storage.StorageDomain] sdUUID=54532dd4-3e5b-4885-b88e-599c81efb146 (fileSD:535)
2021-01-16 19:53:58,918-0700 INFO (jsonrpc/6) [vdsm.api] FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=4d5a352d-6096-45b7-a4ee-b6e08ac02f7b (api:52)
2021-01-16 19:53:58,918-0700 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='4d5a352d-6096-45b7-a4ee-b6e08ac02f7b') Unexpected error (task:880)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887, in _run
return fn(*args, **kargs)
File "<decorator-gen-131>", line 2, in getStorageDomainInfo
File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2796, in getStorageDomainInfo
dom = self.validateSdUUID(sdUUID)
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 312, in validateSdUUID
sdDom.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 538, in validate
raise se.StorageDomainAccessError(self.sdUUID)
vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
2021-01-16 19:53:58,918-0700 INFO (jsonrpc/6) [storage.TaskManager.Task] (Task='4d5a352d-6096-45b7-a4ee-b6e08ac02f7b') aborting: Task is aborted: "value=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) abortedcode=379" (task:1190)
2021-01-16 19:53:58,918-0700 ERROR (jsonrpc/6) [storage.Dispatcher] FINISH getStorageDomainInfo error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) (dispatcher:83)
2021-01-16 19:53:58,919-0700 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getInfo failed (error 379) in 0.00 seconds (__init__:312)
2021-01-16 19:53:58,956-0700 INFO (jsonrpc/1) [vdsm.api] START createStoragePool(poolType=None, spUUID='91fb3f22-5795-11eb-ad9f-00163e3f4683', poolName='Default', masterDom='54532dd4-3e5b-4885-b88e-599c81efb146', domList=['54532dd4-3e5b-4885-b88e-599c81efb146'], masterVersion=7, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=a946357f-537a-4c9c-9040-e70a98ee5643 (api:48)
2021-01-16 19:53:58,958-0700 INFO (jsonrpc/1) [storage.StoragePool] updating pool 91fb3f22-5795-11eb-ad9f-00163e3f4683 backend from type NoneType instance 0x7f86e45f89d0 to type StoragePoolDiskBackend instance 0x7f8680738408 (sp:157)
2021-01-16 19:53:58,958-0700 INFO (jsonrpc/1) [storage.StoragePool] spUUID=91fb3f22-5795-11eb-ad9f-00163e3f4683 poolName=Default master_sd=54532dd4-3e5b-4885-b88e-599c81efb146 domList=['54532dd4-3e5b-4885-b88e-599c81efb146'] masterVersion=7 {'LEASERETRIES': 3, 'LEASETIMESEC': 60, 'LOCKRENEWALINTERVALSEC': 5, 'IOOPTIMEOUTSEC': 10} (sp:602)
2021-01-16 19:53:58,958-0700 INFO (jsonrpc/1) [storage.StorageDomain] sdUUID=54532dd4-3e5b-4885-b88e-599c81efb146 (fileSD:535)
2021-01-16 19:53:58,963-0700 ERROR (jsonrpc/1) [storage.StoragePool] Unexpected error (sp:618)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 613, in create
domain.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 538, in validate
raise se.StorageDomainAccessError(self.sdUUID)
vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
2021-01-16 19:53:58,963-0700 INFO (jsonrpc/1) [vdsm.api] FINISH createStoragePool error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) from=::ffff:192.168.222.53,41192, flow_id=592e278f, task_id=a946357f-537a-4c9c-9040-e70a98ee5643 (api:52)
2021-01-16 19:53:58,963-0700 ERROR (jsonrpc/1) [storage.TaskManager.Task] (Task='a946357f-537a-4c9c-9040-e70a98ee5643') Unexpected error (task:880)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 613, in create
domain.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 538, in validate
raise se.StorageDomainAccessError(self.sdUUID)
vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887, in _run
return fn(*args, **kargs)
File "<decorator-gen-31>", line 2, in createStoragePool
File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 1027, in createStoragePool
leaseParams)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 619, in create
raise se.StorageDomainAccessError(sdUUID)
vdsm.storage.exception.StorageDomainAccessError: Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',)
2021-01-16 19:53:58,964-0700 INFO (jsonrpc/1) [storage.TaskManager.Task] (Task='a946357f-537a-4c9c-9040-e70a98ee5643') aborting: Task is aborted: "value=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) abortedcode=379" (task:1190)
2021-01-16 19:53:58,964-0700 ERROR (jsonrpc/1) [storage.Dispatcher] FINISH createStoragePool error=Domain is either partially accessible or entirely inaccessible: ('54532dd4-3e5b-4885-b88e-599c81efb146',) (dispatcher:83)
2021-01-16 19:53:58,965-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.create failed (error 379) in 0.01 seconds (__init__:312)
2021-01-16 19:54:01,923-0700 INFO (jsonrpc/5) [api.host] START getStats() from=::ffff:192.168.222.53,41192 (api:48)
2021-01-16 19:54:01,964-0700 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.222.53,41192 (api:48)
2021-01-16 19:54:01,966-0700 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.222.53,41192 (api:54)
2021-01-16 19:54:01,977-0700 INFO (jsonrpc/5) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.222.53,41192, task_id=7a6c8536-3f1e-41b3-9ab7-bb58a5e75fac (api:48)
2021-01-16 19:54:01,978-0700 INFO (jsonrpc/5) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.222.53,41192, task_id=7a6c8536-3f1e-41b3-9ab7-bb58a5e75fac (api:54)
2021-01-16 19:54:01,980-0700 INFO (jsonrpc/5) [vdsm.api] START multipath_health() from=::ffff:192.168.222.53,41192, task_id=b8af45b5-6488-4940-b7d0-1d11b5f76db4 (api:48)
2021-01-16 19:54:01,981-0700 INFO (jsonrpc/5) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.222.53,41192, task_id=b8af45b5-6488-4940-b7d0-1d11b5f76db4 (api:54)
2021-01-16 19:54:01,996-0700 INFO (jsonrpc/5) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.222.53,41192 (api:54)
***end snip***
3 years, 2 months