[ovirt-users] Educational use case question
Yedidyah Bar David
didi at redhat.com
Thu Apr 14 10:21:05 UTC 2016
On Thu, Apr 14, 2016 at 11:27 AM, Gianluca Cecchi
<gianluca.cecchi at gmail.com> wrote:
> On Thu, Apr 14, 2016 at 8:15 AM, Yedidyah Bar David <didi at redhat.com> wrote:
>>
>> On Thu, Apr 14, 2016 at 5:18 AM, Michael Hall <mike at mjhall.org> wrote:
>>
>>
>> 3. NFS
>> loop-back mounting nfs is considered risky, due to potential locking
>> issues. Therefore, if you want to use NFS, you are better off doing
>> something like this:
>>
>
> Hello,
> can you give more details about these potential locking issues, so that I
> can try to reproduce them?
Most of what I know about this is:
https://lwn.net/Articles/595652/
> I have two small environments where I'm using this kind of setup. In one of
> them the hypervisor is a physical server; in the other, the hypervisor is
> itself a libvirt VM on a Fedora 23 based laptop. The oVirt version is 3.6.4
> on both.
>
> The test VM has two disks, sda and sdb; all the oVirt-related stuff is on sdb.
>
> My raw steps for the lab, after setting up the CentOS 7.2 OS, disabling IPv6
> and NetworkManager, putting SELinux in permissive mode and enabling the
> oVirt repo, have been:
SELinux enforcing should work too; if it fails, please open a bug. Thanks.
You might have to set the right contexts for your local disks.
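For example, something along these lines might be a starting point for your
/NFS_DOMAIN export below (just a sketch, untested here, and the exact types
and booleans your setup needs may differ):

semanage fcontext -a -t public_content_rw_t "/NFS_DOMAIN(/.*)?"
restorecon -Rv /NFS_DOMAIN
setsebool -P nfsd_anon_write 1   # keep the all_squash'ed exports writable
setsebool -P virt_use_nfs 1      # on the hypervisor side, for NFS-backed storage

and similarly for the other exports.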
>
> NOTE: I also stop and disable firewalld
>
> My host is ovc72.localdomain.local and the name of my future engine is
> shengine.localdomain.local
>
> yum -y update
>
> yum install ovirt-hosted-engine-setup ovirt-engine-appliance
>
> yum install rpcbind nfs-utils nfs-server
> (some of these are probably already pulled in as dependencies by the
> previous command)
>
> When I start the system from scratch:
>
> pvcreate /dev/sdb
> vgcreate OVIRT_DOMAIN /dev/sdb
> lvcreate -n ISO_DOMAIN -L 5G OVIRT_DOMAIN
> lvcreate -n SHE_DOMAIN -L 25G OVIRT_DOMAIN
> lvcreate -n NFS_DOMAIN -l +100%FREE OVIRT_DOMAIN
>
> If I only have to reinitialize, I start from here:
> mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN
> mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN
> mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN
>
> mkdir /ISO_DOMAIN /NFS_DOMAIN /SHE_DOMAIN
>
> /etc/fstab
> /dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN /ISO_DOMAIN xfs defaults 0 0
> /dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN /NFS_DOMAIN xfs defaults 0 0
> /dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN /SHE_DOMAIN xfs defaults 0 0
>
> mount /ISO_DOMAIN/ --> this is for ISO images
> mount /NFS_DOMAIN/ --> this is for the data storage domain where your VMs
> will live (NFS based)
> mount /SHE_DOMAIN/ --> this is for the HE VM
>
> chown 36:36 /ISO_DOMAIN
> chown 36:36 /NFS_DOMAIN
> chown 36:36 /SHE_DOMAIN
>
> chmod 0755 /ISO_DOMAIN
> chmod 0755 /NFS_DOMAIN
> chmod 0755 /SHE_DOMAIN
>
> /etc/exports
> /ISO_DOMAIN *(rw,anonuid=36,anongid=36,all_squash)
> /NFS_DOMAIN *(rw,anonuid=36,anongid=36,all_squash)
> /SHE_DOMAIN *(rw,anonuid=36,anongid=36,all_squash)
>
> systemctl enable rpcbind
> systemctl start rpcbind
>
> systemctl enable nfs-server
> systemctl start nfs-server
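>
> Optionally, just to verify the NFS server exports what you expect before
> running the deploy:
>
> showmount -e localhost
> exportfs -v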
>
> hosted-engine --deploy
>
> During setup I choose:
>
> Engine FQDN : shengine.localdomain.local
>
> Firewall manager : iptables
>
> Storage connection :
> ovc72.localdomain.local:/SHE_DOMAIN
>
> OVF archive (for disk boot) :
> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-20151015.0-1.el7.centos.ova
>
> Also, I used the appliance provided by the ovirt-engine-appliance package.
>
> After the install you have to add a dependency so that the HA broker and
> VDSM start after the NFS server.
>
> In /usr/lib/systemd/system/ovirt-ha-broker.service
>
> In the [Unit] section I added the line:
>
> After=nfs-server.service
>
> Also, in the vdsmd.service file, I changed from:
> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
> supervdsmd.service sanlock.service vdsm-network.service
>
> to:
> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
> supervdsmd.service sanlock.service vdsm-network.service \
> nfs-server.service
>
> NOTE: these files will be overwritten by future updates, so you have to
> keep that in mind...
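>
> A sketch of a possible way around that (I did not do this, but systemd
> drop-ins in /etc/systemd/system survive package updates, and After= in a
> drop-in adds to the existing list):
>
> mkdir -p /etc/systemd/system/ovirt-ha-broker.service.d
> cat > /etc/systemd/system/ovirt-ha-broker.service.d/nfs-server.conf <<EOF
> [Unit]
> After=nfs-server.service
> EOF
>
> mkdir -p /etc/systemd/system/vdsmd.service.d
> cat > /etc/systemd/system/vdsmd.service.d/nfs-server.conf <<EOF
> [Unit]
> After=nfs-server.service
> EOF
>
> systemctl daemon-reload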
>
> On ovc72, in /etc/multipath.conf, right after the line
> # VDSM REVISION 1.3
>
> added
> # RHEV PRIVATE
>
> blacklist {
> wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1
> wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
> }
>
> This is to exclude both internal drives... probably oVirt only takes the
> first one into account?
No idea
> Otherwise I get many messages like:
> Jan 25 11:02:00 ovc72 kernel: device-mapper: table: 253:6: multipath: error
> getting device
> Jan 25 11:02:00 ovc72 kernel: device-mapper: ioctl: error adding target to
> table
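>
> To apply the blacklist without a reboot, something like this should work
> (untested here, a reboot obviously works too):
>
> multipathd -k"reconfigure"
> multipath -ll   # check that the internal drives are no longer picked up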
>
> So far I haven't found any problems. The only little trick is full
> maintenance, when you have to power off the (only) hypervisor: you have to
> do the steps in the right order.
I guess you can probably script that too...
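Something along these lines, perhaps (completely untested, just a sketch of
the order I'd expect, using the standard hosted-engine CLI):

hosted-engine --set-maintenance --mode=global   # stop the HA agents from restarting the engine VM
hosted-engine --vm-shutdown                     # shut down the engine VM
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd
poweroff

and on the way back up, make sure nfs-server is running before vdsmd and the
HA services, then hosted-engine --set-maintenance --mode=none.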
As I wrote above, I have no personal experience with loopback NFS.
For the multipath question, if interested, perhaps ask again with a different
subject.
thanks for sharing!
--
Didi