---------- Forwarded message ----------
From: Leo David <leoalex@gmail.com>
Date: Tue, Jun 12, 2018 at 7:57 PM
Subject: Re: [ovirt-users] Re: Single node single node SelfHosted Hyperconverged
To: femi adegoke <ovirt@fateknollogee.com>


Thank you very much for your response; now it feels like I can barely see the light!
So:
 multipath -ll
3614187705c01820022b002b00c52f72e dm-1 DELL    ,PERC H730P Mini
size=931G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:2:0:0 sda     8:0   active ready running

lsblk

NAME                                                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                      8:0    0   931G  0 disk
├─sda1                                                   8:1    0     1G  0 part
├─sda2                                                   8:2    0   930G  0 part
└─3614187705c01820022b002b00c52f72e                    253:1    0   931G  0 mpath
  ├─3614187705c01820022b002b00c52f72e1                 253:3    0     1G  0 part  /boot
  └─3614187705c01820022b002b00c52f72e2                 253:4    0   930G  0 part
    ├─onn-pool00_tmeta                                 253:6    0     1G  0 lvm
    │ └─onn-pool00-tpool                               253:8    0 825.2G  0 lvm
    │   ├─onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:9    0 798.2G  0 lvm   /
    │   ├─onn-pool00                                   253:12   0 825.2G  0 lvm
    │   ├─onn-var_log_audit                            253:13   0     2G  0 lvm   /var/log/audit
    │   ├─onn-var_log                                  253:14   0     8G  0 lvm   /var/log
    │   ├─onn-var                                      253:15   0    15G  0 lvm   /var
    │   ├─onn-tmp                                      253:16   0     1G  0 lvm   /tmp
    │   ├─onn-home                                     253:17   0     1G  0 lvm   /home
    │   └─onn-var_crash                                253:20   0    10G  0 lvm   /var/crash
    ├─onn-pool00_tdata                                 253:7    0 825.2G  0 lvm
    │ └─onn-pool00-tpool                               253:8    0 825.2G  0 lvm
    │   ├─onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:9    0 798.2G  0 lvm   /
    │   ├─onn-pool00                                   253:12   0 825.2G  0 lvm
    │   ├─onn-var_log_audit                            253:13   0     2G  0 lvm   /var/log/audit
    │   ├─onn-var_log                                  253:14   0     8G  0 lvm   /var/log
    │   ├─onn-var                                      253:15   0    15G  0 lvm   /var
    │   ├─onn-tmp                                      253:16   0     1G  0 lvm   /tmp
    │   ├─onn-home                                     253:17   0     1G  0 lvm   /home
    │   └─onn-var_crash                                253:20   0    10G  0 lvm   /var/crash
    └─onn-swap                                         253:10   0     4G  0 lvm   [SWAP]
sdb                                                      8:16   0   931G  0 disk
└─sdb1                                                   8:17   0   931G  0 part
sdc                                                      8:32   0   4.6T  0 disk
└─sdc1                                                   8:33   0   4.6T  0 part
nvme0n1                                                259:0    0   1.1T  0 disk


So the multipath device "3614187705c01820022b002b00c52f72e" that was shown in the error is actually the root filesystem disk, which was created at node installation (from the ISO).
Is it ok that this mpath is activated on sda?
What should I do in this situation?
Thank you!
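[Editor's note: a common way to stop multipath from claiming a local RAID volume like this one is to blacklist its WWID in /etc/multipath.conf. The sketch below uses the WWID from the `multipath -ll` output above; it is an illustration of the general technique, not advice confirmed in this thread, so verify it against your own setup before applying.]

```shell
# /etc/multipath.conf -- sketch only; adapt and verify before use.
# On oVirt Node, VDSM manages this file and will overwrite local changes
# unless the file's second line contains the marker "# VDSM PRIVATE".

blacklist {
    # WWID of the local PERC H730P RAID volume, taken from `multipath -ll`
    wwid "3614187705c01820022b002b00c52f72e"
}

# After editing, reload the multipath daemon so it re-reads the config:
#   systemctl reload multipathd
# (The existing map on the in-use root disk may only disappear after a reboot.)
```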


On Tue, Jun 12, 2018 at 5:38 PM, femi adegoke <ovirt@fateknollogee.com> wrote:
Are your disks "multipathing"?

What's your output if you run the command multipath -ll

For comparison's sake, here is my gdeploy.conf (used for a single-host Gluster install) - lv1 was changed to 62 GB
**Credit for that pastebin to Squeakz on the IRC channel
https://pastebin.com/LTRQ78aJ
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EEIE4PWUFCXHHTT6PGP2EPFQXIWL6H5P/



--
Best regards, Leo David
