Hi,
On Thu, Feb 7, 2019 at 9:27 AM Sahina Bose <sabose(a)redhat.com> wrote:
+Sachidananda URS to review user request about systemd mount files
On Tue, Feb 5, 2019 at 10:22 PM feral <blistovmhz(a)gmail.com> wrote:
>
> Using SystemD makes way more sense to me. I was just trying to use
> ovirt-node as it was ... intended? Mainly because I have no idea how it
> all works yet, so I've been trying to do the most stockish deployment
> possible, following deployment instructions and not thinking I'm smarter
> than the software :p.
> I've given up on 4.2 for now, as 4.3 was just released, so giving that a
> try now. Will report back. Hopefully 4.3 enlists systemd for stuff?
>
Unless you have a really complicated mount setup, it is better to use
fstab. We had certain difficulties while using vdo, so maybe unit files
make sense for such cases? However, the systemd.mount(5) manpage says
that the preferred way to configure mounts is /etc/fstab.
src:
https://manpages.debian.org/jessie/systemd/systemd.mount.5.en.html#/ETC/F...
<snip>
/ETC/FSTAB
Mount units may either be configured via unit files, or via /etc/fstab
(see fstab(5) for details). Mounts listed in /etc/fstab will be converted
into native units dynamically at boot and when the configuration of the
system manager is reloaded. In general, configuring mount points through
/etc/fstab is the preferred approach. See systemd-fstab-generator(8) for
details about the conversion.
</snip>
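
For the vdo case specifically, the ordering Strahil encodes in his unit
files below can also be kept while staying with fstab, via the
x-systemd.* mount options (see systemd.mount(5); x-systemd.requires= and
x-systemd.before= need a reasonably recent systemd, so check the manpage
on your host). A minimal sketch, reusing the device and mount point from
Strahil's setup below; adjust the paths for your own bricks:

# /etc/fstab
/dev/mapper/gluster_vg_md0-gluster_lv_engine /gluster_bricks/engine xfs inode64,noatime,nodiratime,x-systemd.requires=vdo.service,x-systemd.before=glusterd.service 0 0

systemd-fstab-generator(8) converts this into an equivalent mount unit at
boot, so you get the vdo ordering without maintaining separate unit files.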
> On Tue, Feb 5, 2019 at 4:33 AM Strahil Nikolov <hunter86_bg(a)yahoo.com>
> wrote:
>>
>> Dear Feral,
>>
>> >On that note, have you also had issues with gluster not restarting on
>> >reboot, as well as all of the HA stuff failing on reboot after power
>> >loss? Thus far, the only way I've got the cluster to come back to
>> >life, is to manually restart glusterd on all nodes, then put the
>> >cluster back into "not maintenance" mode, and then manually starting
>> >the hosted-engine vm. This also fails after 2 or 3 power losses, even
>> >though the entire cluster is happy through the first 2.
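
(For reference, the manual recovery described above maps to roughly these
commands, run on a host that can reach the engine storage:

# systemctl restart glusterd
# hosted-engine --set-maintenance --mode=none
# hosted-engine --vm-start

The point of the mount ordering discussed in this thread is to make the
first step unnecessary.)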
>>
>>
>> About gluster not starting: use systemd.mount unit files.
>> Here is my setup, which works for now:
>>
>> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
>> # /etc/systemd/system/gluster_bricks-engine.mount
>> [Unit]
>> Description=Mount glusterfs brick - ENGINE
>> Requires = vdo.service
>> After = vdo.service
>> Before = glusterd.service
>> Conflicts = umount.target
>>
>> [Mount]
>> What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
>> Where=/gluster_bricks/engine
>> Type=xfs
>> Options=inode64,noatime,nodiratime
>>
>> [Install]
>> WantedBy=glusterd.service
>> [root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.automount
>> # /etc/systemd/system/gluster_bricks-engine.automount
>> [Unit]
>> Description=automount for gluster brick ENGINE
>>
>> [Automount]
>> Where=/gluster_bricks/engine
>>
>> [Install]
>> WantedBy=multi-user.target
>> [root@ovirt2 yum.repos.d]# systemctl cat glusterd
>> # /etc/systemd/system/glusterd.service
>> [Unit]
>> Description=GlusterFS, a clustered file-system server
>> Requires=rpcbind.service gluster_bricks-engine.mount
>> gluster_bricks-data.mount gluster_bricks-isos.mount
>> After=network.target rpcbind.service gluster_bricks-engine.mount
>> gluster_bricks-data.mount gluster_bricks-isos.mount
>> Before=network-online.target
>>
>> [Service]
>> Type=forking
>> PIDFile=/var/run/glusterd.pid
>> LimitNOFILE=65536
>> Environment="LOG_LEVEL=INFO"
>> EnvironmentFile=-/etc/sysconfig/glusterd
>> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
>> $LOG_LEVEL $GLUSTERD_OPTIONS
>> KillMode=process
>> SuccessExitStatus=15
>>
>> [Install]
>> WantedBy=multi-user.target
>>
>> # /etc/systemd/system/glusterd.service.d/99-cpu.conf
>> [Service]
>> CPUAccounting=yes
>> Slice=glusterfs.slice
>>
>>
>> Best Regards,
>> Strahil Nikolov
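
One practical note for anyone copying Strahil's units: after placing the
files under /etc/systemd/system, they still have to be loaded and
enabled, e.g.:

# systemctl daemon-reload
# systemctl enable gluster_bricks-engine.mount gluster_bricks-engine.automount

Since the mount unit's [Install] section says WantedBy=glusterd.service,
enabling it hooks the mount into glusterd's startup rather than into
multi-user.target.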
>
>
>
> --
> _____
> Fact:
> 1. Ninjas are mammals.
> 2. Ninjas fight ALL the time.
> 3. The purpose of the ninja is to flip out and kill people.