<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Apr 14, 2016 at 8:15 AM, Yedidyah Bar David <span dir="ltr"><<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">On Thu, Apr 14, 2016 at 5:18 AM, Michael Hall <<a href="mailto:mike@mjhall.org">mike@mjhall.org</a>> wrote:<br></span><br>
<br>
3. NFS<br>
loop-back mounting nfs is considered risky, due to potential locking<br>
issues. Therefore, if you want to use NFS, you are better off doing<br>
> something like this:

Hello,
can you give more details about these potential locking issues, so that I can try to reproduce them?
I have two small environments where I'm using this kind of setup. In one of them the hypervisor is a physical server, in the other one the hypervisor is itself a libvirt VM inside a Fedora 23 based laptop. The oVirt version is 3.6.4 on both.

The test VM has two disks, sda and sdb; all the oVirt-related storage is on sdb.

My raw steps for the lab, after installing the CentOS 7.2 OS, disabling IPv6 and NetworkManager, setting SELinux to permissive and enabling the oVirt repo, have been:

NOTE: I also stop and disable firewalld.

My host is ovc72.localdomain.local and the name of my future engine is shengine.localdomain.local.

yum -y update

yum install ovirt-hosted-engine-setup ovirt-engine-appliance

yum install rpcbind nfs-utils nfs-server
(some of them are probably already pulled in as dependencies by the previous command)

When I start the system from scratch:

pvcreate /dev/sdb
vgcreate OVIRT_DOMAIN /dev/sdb
lvcreate -n ISO_DOMAIN -L 5G OVIRT_DOMAIN
lvcreate -n SHE_DOMAIN -L 25G OVIRT_DOMAIN
lvcreate -n NFS_DOMAIN -l +100%FREE OVIRT_DOMAIN

If I only have to reinitialize, I start from here:

mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN
mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN
mkfs -t xfs -f /dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN

mkdir /ISO_DOMAIN /NFS_DOMAIN /SHE_DOMAIN

In /etc/fstab:
/dev/mapper/OVIRT_DOMAIN-ISO_DOMAIN /ISO_DOMAIN xfs defaults 0 0
/dev/mapper/OVIRT_DOMAIN-NFS_DOMAIN /NFS_DOMAIN xfs defaults 0 0
/dev/mapper/OVIRT_DOMAIN-SHE_DOMAIN /SHE_DOMAIN xfs defaults 0 0

mount /ISO_DOMAIN/  --> this one is for ISO images
mount /NFS_DOMAIN/  --> this one is for the data storage domain where your VMs will live (NFS based)
mount /SHE_DOMAIN/  --> this one is for the HE VM

chown 36:36 /ISO_DOMAIN
chown 36:36 /NFS_DOMAIN
chown 36:36 /SHE_DOMAIN

chmod 0755 /ISO_DOMAIN
chmod 0755 /NFS_DOMAIN
chmod 0755 /SHE_DOMAIN

In /etc/exports:
/ISO_DOMAIN *(rw,anonuid=36,anongid=36,all_squash)
/NFS_DOMAIN *(rw,anonuid=36,anongid=36,all_squash)
/SHE_DOMAIN *(rw,anonuid=36,anongid=36,all_squash)

systemctl enable rpcbind
systemctl start rpcbind

systemctl enable nfs-server
systemctl start nfs-server

hosted-engine --deploy

During setup I chose:

  Engine FQDN                 : shengine.localdomain.local
  Firewall manager            : iptables
  Storage connection          : ovc72.localdomain.local:/SHE_DOMAIN
  OVF archive (for disk boot) : /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-20151015.0-1.el7.centos.ova

Also, I used the appliance provided by the ovirt-engine-appliance package.
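As a quick sanity check of the NFS exports (not part of my original steps; the paths and the vdsm uid 36 refer to the layout above), something like this should confirm they are visible and writable through a loop-back mount:

showmount -e localhost                   # all three exports should be listed
mount -t nfs localhost:/SHE_DOMAIN /mnt  # loop-back mount, the same way hosted-engine will use it
sudo -u vdsm touch /mnt/write_test       # uid 36 (vdsm) must be able to write
sudo -u vdsm rm /mnt/write_test
umount /mnt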
After install you have to add a dependency so that the VDSM and HA broker services start after the NFS server.

In /usr/lib/systemd/system/ovirt-ha-broker.service I added, in the [Unit] section, the line:

After=nfs-server.service

Also, in the vdsmd.service file I changed:

After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
      supervdsmd.service sanlock.service vdsm-network.service

to:

After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
      supervdsmd.service sanlock.service vdsm-network.service \
      nfs-server.service

NOTE: these files will be overwritten by future updates, so you have to keep that in mind (see the PS below for an alternative).

On ovc72, in /etc/multipath.conf, right after the line

# VDSM REVISION 1.3

I added:

# RHEV PRIVATE

blacklist {
    wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1
    wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
}

This is to exclude both internal drives... probably oVirt takes into account only the first one?
Otherwise I get many messages like:

Jan 25 11:02:00 ovc72 kernel: device-mapper: table: 253:6: multipath: error getting device
Jan 25 11:02:00 ovc72 kernel: device-mapper: ioctl: error adding target to table

So far I haven't found any problems. The only little trick is full maintenance, when you have to power off the (only) hypervisor: there you have to do the steps in the right order.

Gianluca
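PS: to avoid losing the unit file edits above on package updates, a systemd drop-in should work as well. This is just a sketch I haven't verified on these hosts, and the drop-in file name is arbitrary:

# create drop-in directories for both units
mkdir -p /etc/systemd/system/ovirt-ha-broker.service.d /etc/systemd/system/vdsmd.service.d

# add the extra ordering dependency without touching the shipped unit files
cat > /etc/systemd/system/ovirt-ha-broker.service.d/nfs-dep.conf <<'EOF'
[Unit]
After=nfs-server.service
EOF
cp /etc/systemd/system/ovirt-ha-broker.service.d/nfs-dep.conf /etc/systemd/system/vdsmd.service.d/nfs-dep.conf

systemctl daemon-reload

Drop-ins are merged with the shipped unit files and After= is additive, so the extra ordering should survive updates of the original files.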