4.5.4 with Ceph-only storage
by Maurice Burrows
Hey ... Long story short: I have an existing Red Hat Virtualization / Gluster hyperconverged solution that I am moving away from.
I have an existing Ceph cluster that I primarily use for OpenStack and a small requirement for S3 via RGW.
I'm planning to build a new oVirt 4.5.4 cluster on RHEL 9 using Ceph for all storage requirements. I've read many online articles on oVirt and Ceph, and they all seem to use the Ceph iSCSI gateway, which is now in maintenance mode, so I'm not really keen to commit to iSCSI.
So my question is: is there any reason I cannot use CephFS both for the hosted-engine storage and as a data storage domain?
I'm currently running Ceph Pacific FWIW.
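For context, what I have in mind is adding CephFS as a POSIX-compliant FS storage domain. A rough sketch of the mount I'd test first is below; the monitor addresses, path, user, and secret file are placeholders on my side, not something I've validated:
# manual CephFS mount test from one of the hosts, before pointing oVirt at it
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/volumes/ovirt /mnt/cephfs-test \
  -o name=ovirt,secretfile=/etc/ceph/ovirt.secret
# in the Admin Portal I would then try a domain with Storage Type "POSIX compliant FS",
# Path "mon1:6789,mon2:6789,mon3:6789:/volumes/ovirt", VFS Type "ceph",
# and the same mount options as above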
Cheers
6 months, 3 weeks
I can't access the console with noVNC or a VNC client (console.vv)
by z84614242@163.com
I installed the oVirt 4.5 engine on CentOS Stream 9 and added an oVirt node (oVirt Node 4.5 ISO) to this engine. I am going to run my VMs on this node. I followed the instructions to create the data center, the cluster, and the storage domain, and to upload the image; everything was fine. But after I created a VM with the Ubuntu image attached, I found that I can't open the console. When I use noVNC, it says "Something went wrong, connection is closed"; when I open the console with virt-viewer, it says "Failed to complete handshake: Error in the pull function". I tried changing the console type to the Bochs one and the same thing happens. When I change to QXL mode, the VM can't start any more; the log says "unsupported configuration: domain configuration does not support video model 'qxl'".
So now I can't access my VM in any way. I deployed the engine following the official instructions and kept mostly default options, so why do I still have this issue? And why does noVNC only say "Something went wrong" instead of telling me what is actually wrong?
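Is there something I should be checking on the node itself? I'm guessing at commands like these to see which graphics device the VM actually got; the VM name is just an example:
# read-only libvirt queries on the oVirt node
virsh -r list --all
virsh -r dumpxml ubuntu-test | grep -E -A3 '<graphics|<video'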
8 months
Oracle Virtualization Manager 4.5 anyone?
by Thomas Hoberg
Red Hat's decision to shut down RHV caught Oracle pretty unprepared, I'd guess; they had only just shut down their own vSphere clone in favor of a RHV clone a couple of years earlier.
Oracle is even less vocal about their "Oracle Virtualization" strategy; they don't even seem to have a proper naming convention or branding.
But they have been pushing out OV releases, without a publicly announced EOL, roughly a year behind Red Hat for the last few years.
And after a 4.4 release in September 2022, a 4.5 release was actually made public a few days ago, on December 12th.
I've operated oVirt 4.3 with significant quality issues for some years and failed to make oVirt 4.4 work with any acceptable degree of stability, but Oracle's variant of 4.4 proved to be rather better than 4.3 on CentOS 7, with no noticeable bugs, especially in the hyperconverged setup with GlusterFS that I am using.
I assumed that this was because Oracle based their 4.4 in fact on RHV 4.4 and not oVirt, but since they're not telling, who knows?
One issue with 4.4 was that Oracle pushes their UEK (Unbreakable Enterprise Kernel), which created immediate issues, e.g. VDO modules missing for UEK and other things, but that was solved easily enough by using the RHEL kernel.
With 4.5 Oracle obviously can't use RHV 4.5 as a base, because with RHV declared EOL there is no such thing; according to Oracle, their 4.5 is based on oVirt 4.5.4, which made the quality of that release somewhat questionable. But perhaps they have spent the year that has passed productively killing bugs... only to be caught by surprise again, I presume, by the oVirt 4.5.5 release on December 1st that no one saw coming!
Long story slightly shorter, I've been testing Oracle's 4.5 variant a bit and it's not without issues.
But much worse, Oracle's variant of oVirt seems to be entirely without any community that I could find.
Now oVirt has been a somewhat secret society for years, but compared to what's going on with Oracle, this forum is teeming with life!
So did I just not look around enough? Is there a secret lair where all those OV users are hiding?
Anyhow, here is what I've tested so far and where I'd love to have some feedback:
1. Setting up a three-node HCI cluster from scratch using OL8.9 and OV 4.5
Since I don't have extra physical hardware for a three-node HCI, I'm using VMware Workstation 17.5 on a workstation running Windows 2022, a test platform that has worked for all kinds of virtualization tests, from VMware ESXi via XCP-ng to oVirt.
I created three VMs with OL8.9 minimal and then installed OV 4.5. I used the default UEK kernels and then hit an issue when Ansible tries to create the (local) management engine: the VM simply could not reach the Oracle repo servers to install the packages inside the ME. Since that VM is entirely under the control of Ansible and no console access of any kind is possible in that installation phase, I couldn't do diagnostics.
But I used to have similar issues with 4.4, and there switching back to the Red Hat kernel for the ME (and the hosts) resolved them.
With 4.5, however, it seems that UEK has become a baked-in dependency: the OV team doesn't even seem to do any testing with the Red Hat kernel any more. Or not with the HCI setup, which became deprecated somewhere during oVirt 4.4... Or not with the Cockpit wizard, which might be in a totally untested state, or....
Doing the same install on OL 8.9 with OV 4.4, however, did work just fine and I was even able to update to 4.5 afterwards, which was a nice surprise...
...that I could not repeat on my physical test farm using three Atoms. There, switching to the UEK kernel on the hosts caused issues: hosts became unresponsive and file systems inaccessible, even though they were perfectly fine at the Gluster CLI level, and in the end the ME VM simply would no longer start. Switching back to the Red Hat kernel resolved things there.
In short, switching between the Red Hat kernel and UEK, which should be 100% transparent to everything in userland including hypervisors, doesn't work.
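For anyone wanting to reproduce the kernel switching, this is roughly what I do on OL8 to flip between UEK and the Red Hat compatible kernel; the exact kernel version string is just an example from my setup, adjust it to whatever is installed:
# list the installed kernels grub knows about
grubby --info=ALL | grep -E '^(index|kernel)'
# make the Red Hat compatible kernel the default, then reboot
grubby --set-default /boot/vmlinuz-4.18.0-513.9.1.el8_9.x86_64
reboot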
But my attempts at a clean install of 4.5, on either the Red Hat kernel or UEK, are also running into issues. So far the only things that have worked are a single-node HCI install using UEK and OV 4.5, and upgrading to OV 4.5 on a virtualized three-node OV 4.4 HCI cluster.
Anyone else out there trying these things?
I was mostly determined to move to Proxmox VE, but Oracle's OV 4.5 seemed to be handing oVirt a bit of a lifeline, and the base architecture is just much more powerful (or less manual) than Proxmox, which doesn't have a management engine.
8 months, 2 weeks
Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)
by jloh@squiz.net
We recently upgraded to 4.3.0 and have found that changing disk QoS settings on VMs whilst IO-Threads is enabled causes them to segfault and the VM to reboot. We've been able to replicate this across several VMs. VMs with IO-Threads disabled/turned off do not segfault when changing the QoS.
Mar 1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fffffffffffffff8 ip 0000557649f2bd24 sp 00007f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar 1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar 1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+0000: 13365: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
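If a proper backtrace would help, I think I can pull one from the abrt dump along these lines (assuming the qemu-kvm debuginfo packages are installed; the dump directory name is just an example of the usual ccpp-<date>-<pid> pattern):
abrt-cli list
gdb /usr/libexec/qemu-kvm /var/spool/abrt/ccpp-2019-03-01-11:49:06-30468/coredump \
    -batch -ex 'thread apply all bt'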
Happy to supply some more logs to someone if they'll help but just wondering whether anyone else has experienced this or knows of a current fix other than turning io-threads off.
Cheers.
8 months, 3 weeks
virt-v2v cannot authenticate with oVirt engine API with OAuth2
by malcolm.strydom@pacxa.com
I've been reading through the archives but have not been able to find what I need. Essentially what I'm trying to do is migrate a large number of VMs from our OVM environment to a new OLVM setup. In an effort to avoid a lot of replication and copying of the disk images (export, convert, copy over, import, etc.), I found this article, which shows a pretty slick way to do it in one shot:
https://blogs.oracle.com/scoter/post/how-to-migrate-oracle-vm-to-oracle-l...
The main command that makes it all possible is virt-v2v. It looks something like this:
virt-v2v -i libvirtxml vm-test1.xml -o rhv-upload -oc https://<OLVM-server>/ovirt-engine/api -os <my storage> -op /tmp/ovirt-admin-password -of raw -oo rhv-cluster=Default -oo rhv-cafile=/root/ca.pem
The problem I'm having is that I cannot authenticate with my new OLVM server at the ovirt-engine/api URL. Since user/password is deprecated and you must use OAuth 2.0 with a token, I'm stuck.
I have OLVM 4.5.4-1.0.27.el8, and from what I've read, oVirt 4.5 (not sure in which version it started) uses Keycloak OAuth 2.0, and the older ovirt-aaa-jdbc-tool is now deprecated.
In doing some testing I found I can use curl and authenticate against the ovirt-engine/api and get a token like this:
OVIRT_ENGINE_URL="https://<myolvm1>/ovirt-engine"
USERNAME="admin@ovirt@internalsso"
PASSWORD="<mypassword>"
CLUSTER_NAME="Default"
TOKEN=$(curl -k -X POST -H "Accept: application/json" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&username=$USERNAME&password=$PASSWORD&scope=ovirt-app-api" \
  "$OVIRT_ENGINE_URL/sso/oauth/token" | jq -r '.access_token')
I was then able to query the API to validate that my token works:
curl -k -H "Accept: application/json" -H "Authorization: Bearer $TOKEN" "$OVIRT_ENGINE_URL/api/clusters?search=name=$CLUSTER_NAME"
The problem is that virt-v2v does not support posting any form data or the token to authenticate. As best I can tell, the -oc option is strictly the URL, and if you want a username in there it's in the form https://<name>@<server>. So even if I wrote a script and used curl to authenticate and get a token, I still can't find a way to make virt-v2v use it.
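For completeness, this is the shape of what I've been attempting, with the username embedded in the -oc URL; I have no idea whether the double @ in the Keycloak-style username survives the URL parsing, so treat this as a guess rather than something that works:
virt-v2v -i libvirtxml vm-test1.xml -o rhv-upload \
  -oc 'https://admin@ovirt@internalsso@<OLVM-server>/ovirt-engine/api' \
  -op /tmp/ovirt-admin-password -os <my storage> -of raw \
  -oo rhv-cluster=Default -oo rhv-cafile=/root/ca.pem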
So I'm stuck: how do I get virt-v2v working? Is there a way to re-enable the deprecated user/password method of accessing ovirt-engine/api? Or, as a last resort, a way to get virt-v2v to support the token?
Thanks for any insight
Malcolm
9 months, 4 weeks
Deploying the oVirt Engine fails behind a proxy
by Matteo Bonardi
Hi,
I am trying to deploy the oVirt engine following the self-hosted engine installation procedure in the documentation.
The deployment servers are behind a proxy, and I set it in the environment and in yum.conf before running the deploy.
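Concretely, this is roughly what I set on the host before starting the deploy (the proxy host and port are placeholders); my question is really whether any of this reaches the engine VM:
export http_proxy=http://proxy.mydomain:3128
export https_proxy=http://proxy.mydomain:3128
export no_proxy=localhost,127.0.0.1,.mydomain
# and in /etc/yum.conf on the host
proxy=http://proxy.mydomain:3128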
The deploy fails because the oVirt engine VM cannot resolve the AppStream repository URL:
[ INFO ] TASK [ovirt.engine-setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> ovirt-manager.mydomain]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... [Could not resolve host: mirrorlist.centos.org]", "rc": 1, "results": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Clean local storage pools]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20201109165237.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201109164244-b3e8sd.log
How can I set the proxy for the engine VM?
Ovirt version:
[root@myhost ~]# rpm -qa | grep ovirt-engine-appliance
ovirt-engine-appliance-4.4-20200916125954.1.el8.x86_64
[root@myhost ~]# rpm -qa | grep ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-2.4.6-1.el8.noarch
OS version:
[root@myhost ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)
[root@myhost ~]# uname -a
Linux myhost.mydomain 4.18.0-193.28.1.el8_2.x86_64 #1 SMP Thu Oct 22 00:20:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Thanks for the help.
Regards,
Matteo
9 months, 4 weeks
Ovirt 4.5 HA over NFS fails when a single host goes down
by youssef.khristo@elsewedyintel.com
Greetings,
we have recently installed oVirt as a hosted engine with high availability on six nodes over NFS storage (no Gluster), with power management through an on-board IPMI device, and the setup was successful. All the nodes (from Supermicro) are identical in every aspect, so no hardware differences exist and no modifications to the servers' hardware were performed. The hosted engine was also deployed to a second host, so only two of the six hosts are set up to host the HE VM.
The network interface on each node is a bond of two physical fiber-optic NICs in LACP mode with a VLAN on top, serving as the sole network interface for the server/node. No separate VM or storage networks were needed, as the host OS, the hosted-engine VM, and the storage are required to be on the same network and VLAN.
We started by testing the high availability of the hosted-engine VM (as it was deployed on two of the six nodes) by rebooting or powering off one of those hosts, and the VM would migrate successfully to the second HE node. The main goal of our experiments is to test the robustness of the setup, as the cluster is required to remain functional even when up to two hosts are brought down (whether due to a network or power issue). However, when rebooting or powering off one of the hosts, the HE VM goes down and takes the entire cluster with it, to the point where we can't even access the web portal. Once the host is rebooted, the HE VM and the cluster become functional again. Sometimes the HE VM stays down for a set amount of time (5 to 6 minutes) and then comes back up, and sometimes it stays down until the problematic host is back up. This behavior affects other VMs as well, not just the HE.
We suspected an issue with the NFS storage; however, during oVirt operation it is mounted properly under /rhev/data-center/mnt/<nfs:directory>, while the expected behavior is for the cluster to stay operational and for any other VMs to be migrated to other hosts. During one of the tests we mounted the NFS storage on a different directory and there was no problem: we could run commands such as ls without any issues, and write a text file at the directory's root and modify it normally.
We suspected a couple of things, the first being that the HE is unable to fence the problematic host (the one we took down); however, power management is set up properly.
The other thing we suspected is that the remaining cluster hosts (after taking one down) are unable to acquire the storage lease, which is weird since the host in question is down and non-operational, hence no locks should be in place. The reason behind this suspicion is the following two errors, which we see frequently in the engine's ovirt-engine/engine.log file when one or more hosts go down:
1- "EVENT_ID: VM_DOWN_ERROR(119), VM HostedEngine is down with error. Exit message: resource busy: Failed to acquire lock: Lease is held by another host."
2- "[<id>] Command 'GetVmLeaseInfoVDSCommand( VmLeaseVDSParameters:{expectedEngineErrors='[NoSuchVmLeaseOnDomain]', storagePoolId='<pool-id>', ignoreFailoverLimit='false', leaseId='<lease-id>', storageDomainId='<domain-id>'})' execution failed: IRSGenericException: IRSErrorException: No such lease: 'lease=<lease-id>'"
This is a third warning from the /var/log/vdsm/vdsm.log file
1- "WARN (check/loop) [storage.check] Checker '/rhev/data-center/mnt/<nfs-domain:/directory>/<id>/dom_md/metadata' is blocked for 310.00 seconds (check:265)"
All the tests are done without setting nodes into maintenance mode, as we are simulating an emergency situation. No HE configuration was modified via the engine-config command; the default values are used.
Is this normal behavior? Are we missing something? Do we need to tweak a certain configuration using engine-config to get better behavior (e.g., a shorter down period)?
Best regards
10 months, 1 week
Advice on upgrading oVirt 4.2 CentOS 7.x cluster to oVirt 4.5 Rocky Linux 8.9
by ovirt@the-dawg.net
I have multiple aging clusters on which I now need to upgrade both the host OS and the oVirt release. I have seen the notes about upgrading 4.2 to 4.3 and then being able to upgrade to 4.5, but what's not so clear is how to upgrade the host nodes' OS versions. After thinking about it, I wonder if the following method is sane. Any advice or guidance is greatly appreciated.
1) spin up a new cluster running oVirt 4.5.x on the latest Rocky Linux release.
2) shutdown VMs on the original cluster
3) detach the domain storage
4) snapshot the domain storage (for rollback)
5) import the domain on the new cluster (see the metadata sanity check sketched after this list)
6) reload the OS on the original cluster to the new OS release (no local storage)
7) add those hosts to the new cluster
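The metadata sanity check referenced in step 5, as I currently imagine it; the NFS export and mount point are placeholders, and I'd welcome corrections if this is the wrong thing to look at:
# mount the detached data domain read-only somewhere neutral
mount -o ro -t nfs nfs.example.com:/exports/data /mnt/check
# confirm the storage format and domain identity before importing into 4.5
grep -E '^(VERSION|SDUUID|CLASS|ROLE)=' /mnt/check/*/dom_md/metadata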
10 months, 2 weeks
Snapshots and mess with Storage
by thorsten.bolten@googlemail.com
Hi,
I experienced a weird failure. I hope I can explain it understandably and that it's not too hard to read.
We had issues with our backups: they couldn't finalize.
Here is a thread from someone having the same issue:
https://forums.veeam.com/ovirt-kvm-f62/veeam-rhv-12-1-command-removeimage...
So I did some investigation and realized it seems to be an issue with snapshots.
1. I created a snapshot:
https://paste.fo/c0b8e77a3400
This worked.
2. Then I tried to delete the snapshot, which didn't work:
https://paste.fo/013e5632e0d6
So I logged in to a node and checked several things.
The first strange thing was that I can see the PVs, VGs, etc.; I'm used to oVirt hiding them on the system (on one of those nodes that's still the case), but anyway:
[root@ovnb05 ~]# pvs
WARNING: Couldn't find device with uuid uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb.
WARNING: VG 515bebca-972b-42ac-abff-d76af0071613 is missing PV uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb (last written to /dev/mapper/360002ac000000000000000880002a7fd).
PV VG Fmt Attr PSize PFree
/dev/mapper/360002ac000000000000000870002a7fd 515bebca-972b-42ac-abff-d76af0071613 lvm2 a-- <12.00t 10.77t
/dev/sda3 rl lvm2 a-- 892.16g 0
[unknown] 515bebca-972b-42ac-abff-d76af0071613 lvm2 a-m <12.00t <12.00t
So I did
[root@ovnb05 ~]# pvdisplay
WARNING: Couldn't find device with uuid uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb.
WARNING: VG 515bebca-972b-42ac-abff-d76af0071613 is missing PV uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb (last written to /dev/mapper/360002ac000000000000000880002a7fd).
--- Physical volume ---
PV Name /dev/mapper/360002ac000000000000000870002a7fd
VG Name 515bebca-972b-42ac-abff-d76af0071613
PV Size 12.00 TiB / not usable 384.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 98301
Free PE 88247
Allocated PE 10054
PV UUID D6Typd-s7lA-PyoI-PvrL-rd9N-mS64-VeV1j3
--- Physical volume ---
PV Name [unknown]
VG Name 515bebca-972b-42ac-abff-d76af0071613
PV Size 12.00 TiB / not usable 384.00 MiB
Allocatable yes
PE Size 128.00 MiB
Total PE 98301
Free PE 98301
Allocated PE 0
PV UUID uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb
--- Physical volume ---
PV Name /dev/sda3
VG Name rl
PV Size 892.16 GiB / not usable 2.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 228393
Free PE 0
Allocated PE 228393
PV UUID kRe0EF-736L-QwZA-iGtM-67wp-8cr4-DKKcX2
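Since the PVs are visible on this node at all, I also checked whether the LVM filter that vdsm normally configures is still in place; I believe this is the right command for that, though I'm not certain it behaves identically across versions:
# analyze the host and report whether the recommended LVM filter is configured
vdsm-tool config-lvm-filter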
I have never seen an unknown PV before, so I guess something happened to that LUN.
But the LUN is available, and in /var/log/messages I can't find any issue with the LUN, the paths, etc.
[root@ovnb05 data-center]# blkid |grep uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb
/dev/sdi: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sdg: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sde: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sdn: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sdc: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sdl: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sdr: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
/dev/sdp: UUID="uHJs0a-T7R3-2H8E-15W9-cGqW-ZN1e-xsdCsb" TYPE="LVM2_member"
So I did some nasty stuff, like removing it and trying to add it again; during that process it told me that there is no metadata on the device.
So at the end I am a little bit lost now.
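In case it helps anyone advising me, these are the checks I can still run against the missing PV; I'm not sure they are the right tools, so corrections are welcome:
# look for LVM labels/headers on the multipath device that went "unknown"
pvck --dump headers /dev/mapper/360002ac000000000000000880002a7fd
# list the LVM metadata backups this host still has for the VG
vgcfgrestore --list 515bebca-972b-42ac-abff-d76af0071613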
Maybe the storage was failing for some reason, or maybe Veeam messed something up, but what scares me is that the oVirt engine didn't show me any problem with the cluster or the storage.
I mean, it's obvious that there was/is a huge failure with the storage, and oVirt neither recognized it nor alerted me.
For the moment I have detached the LUN from the storage domain and created a separate one (because I realized that having several LUNs in one storage domain seems to be a bad idea), and one PV is now hidden while the other one is not:
[root@ovnb05 data-center]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/360002ac000000000000000870002a7fd 515bebca-972b-42ac-abff-d76af0071613 lvm2 a-- <12.00t 11.81t
/dev/sda3 rl lvm2 a-- 892.16g 0
[root@ovnb05 data-center]# cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command vgs pid 6123 at Fri Feb 16 18:16:35 2024
VERSION=1.1.7
IDTYPE=mpath_uuid IDNAME=mpath-360002ac000000000000000870002a7fd DEVNAME=/dev/mapper/360002ac000000000000000870002a7fd PVID=D6Typds7lAPyoIPvrLrd9NmS64VeV1j3
IDTYPE=sys_wwid IDNAME=naa.61c721d06b5fb2002c49fbecd02f9d90 DEVNAME=/dev/sda3 PVID=kRe0EF736LQwZAiGtM67wp8cr4DKKcX2 PART=3
[root@ovnb05 data-center]# multipath -ll
360000970000197800382533030303031 dm-5 EMC,SYMMETRIX
size=5.6M features='1 queue_if_no_path' hwhandler='0' wp=ro
`-+- policy='service-time 0' prio=1 status=active
`- 9:0:25:0 sdj 8:144 active ready running
360002ac000000000000000870002a7fd dm-3 3PARdata,VV
size=12T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 9:0:8:1 sdh 8:112 active ready running
|- 9:0:0:1 sdb 8:16 active ready running
|- 9:0:1:1 sdd 8:48 active ready running
|- 9:0:2:1 sdf 8:80 active ready running
|- 10:0:7:1 sdq 65:0 active ready running
|- 10:0:0:1 sdk 8:160 active ready running
|- 10:0:1:1 sdm 8:192 active ready running
`- 10:0:2:1 sdo 8:224 active ready running
360002ac000000000000000880002a7fd dm-4 3PARdata,VV
size=12T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 9:0:8:2 sdi 8:128 active ready running
|- 9:0:0:2 sdc 8:32 active ready running
|- 9:0:1:2 sde 8:64 active ready running
|- 9:0:2:2 sdg 8:96 active ready running
|- 10:0:7:2 sdr 65:16 active ready running
|- 10:0:0:2 sdl 8:176 active ready running
|- 10:0:1:2 sdn 8:208 active ready running
`- 10:0:2:2 sdp 8:240 active ready running
360002ac0000000000000008f0002a7fd dm-26 3PARdata,VV
size=12T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 10:0:1:3 sdt 65:48 active ready running
|- 10:0:0:3 sds 65:32 active ready running
|- 10:0:7:3 sdv 65:80 active ready running
|- 10:0:2:3 sdu 65:64 active ready running
|- 9:0:1:3 sdx 65:112 active ready running
|- 9:0:2:3 sdy 65:128 active ready running
|- 9:0:0:3 sdw 65:96 active ready running
`- 9:0:8:3 sdz 65:144 active ready running
If I forgot something or you need more input, just let me know; maybe you can enlighten me as to what's wrong.
Thanks in advance for any help/suggestions.
10 months, 2 weeks