Finally !!!
Femi, thank you so much !!! I wouldn't have been able to solve this
without your help.
So, just for info:
In this version (iso ovirt-node-ng-installer-ovirt-4.2-2018053012),
multipath.conf seems to be already configured to blacklist all devices.
It already contains "# VDSM PRIVATE" as the second line, plus:

blacklist {
        devnode "*"
}
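A quick way to confirm the blacklist is really in effect (just a sanity check,
nothing specific to this ISO):

  multipath -ll    # should print no multipath maps, since every devnode is blacklisted
  lsblk            # the local disks should show up as plain block devices, with no mpath devices stacked on top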
HE installation went fine, but after reboot the gluster volumes were not
present - the glusterd daemon was not enabled at boot, and the ansible playbook
did not change that.
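As a workaround (until the playbook handles it), enabling the daemon by hand
should be enough - standard systemd commands, nothing oVirt-specific:

  systemctl enable glusterd    # make it start at boot
  systemctl start glusterd     # start it for the current session
  gluster volume status        # the existing volumes should come back online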
Now, I am trying to figure out what would be the best procedure to create
a new gluster volume on the sdc drive and run some VMs on it.
The Cockpit UI won't let me, because it asks for 3 nodes and complains that
"No two hosts can be the same".
Should I just format the drive as XFS, mount it (via fstab) under
/gluster_bricks/spinning_storage, create the gluster volume from the CLI, and
add it as a new storage domain?
What would be the best approach to accomplish this?
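Roughly, what I have in mind is something like this (just a sketch - /dev/sdc,
the mount point and the volume name spinning_storage are the ones from my
setup, and I'm assuming 10.10.8.111 is this host):

  mkfs.xfs -i size=512 /dev/sdc                    # inode size usually recommended for gluster bricks
  mkdir -p /gluster_bricks/spinning_storage
  echo "/dev/sdc /gluster_bricks/spinning_storage xfs defaults 0 0" >> /etc/fstab
  mount /gluster_bricks/spinning_storage
  mkdir /gluster_bricks/spinning_storage/brick     # use a subdir so the brick is not the mount point itself
  gluster volume create spinning_storage 10.10.8.111:/gluster_bricks/spinning_storage/brick
  gluster volume start spinning_storage

and then, if I'm not mistaken, add it in the engine UI as a new GlusterFS
storage domain pointing at 10.10.8.111:/spinning_storage.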
Again, thank you very much !
On Wed, Jun 13, 2018 at 12:05 PM, Leo David <leoalex(a)gmail.com> wrote:
Gluster volumes are started, so I assume that the gluster part is
fine.
Wouldn't it make more sense to add everything to the blacklist, reboot the
server, and continue with the ovirt-engine VM setup (adding 10.10.8.111:/engine
for VM storage)?
i.e.:
blacklist {
devnode "*"
}
Thank you !
On Wed, Jun 13, 2018 at 11:05 AM, femi adegoke <ovirt(a)fateknollogee.com>
wrote:
> It's blacklist { not kblacklist (remove the "k")
>
> run this command to find wwids: ls -la /dev/disk/by-id/
>
> Is the existing gluster setup installed correctly?
--
Best regards, Leo David