<div dir="ltr"><div><div>Hello again!<br><br></div>After persisting the selinux config, at reboot I get "Current mode: enforcing" although "Mode from config file: permissive"!<br></div>Due to this, I think I get an AVC denial for glusterfsd:<br>
<br>type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file<br>
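A minimal sketch of how one might check the mode mismatch and relabel the brick (assuming /data is the brick path, that config changes on oVirt Node must be persisted with `persist`, and that the gluster brick file context type shown is available in the local policy; verify the type with `matchpathcon`):

```shell
# Compare the runtime SELinux mode with the persisted config
# (sestatus prints both "Current mode" and "Mode from config file").
getenforce
grep '^SELINUX=' /etc/selinux/config

# On oVirt Node, /etc/selinux/config survives reboot only if persisted:
#   persist /etc/selinux/config

# The AVC shows the brick files still carry the generic file_t label,
# which glusterd_t may not relabelfrom. Assigning a gluster-owned type
# and relabeling may help (type name is an assumption; check your policy):
semanage fcontext -a -t glusterd_brick_t "/data(/.*)?"
restorecon -Rv /data
```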
<br><br><br><div><div><br><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Dec 18, 2013 at 3:30 PM, Fabian Deutsch <span dir="ltr"><<a href="mailto:fabiand@redhat.com" target="_blank">fabiand@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Am Mittwoch, den 18.12.2013, 14:14 +0200 schrieb Gabi C:<br>
<div class="im">> Still, now I cannot start either of the 2 machines! I get:<br>
><br>
> ID 119 VM proxy2 is down. Exit message: Child quit during startup<br>
> handshake: Input/output error.<br>
<br>
</div>Could you try to find out in what context this IO error appears?<br>
<span class="HOEnZb"><font color="#888888"><br>
- fabian<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
><br>
> Something similar to bug<br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1033064" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1033064</a>, except that in my<br>
> case selinux is permissive!<br>
><br>
><br>
><br>
> On Wed, Dec 18, 2013 at 2:10 PM, Gabi C <<a href="mailto:gabicr@gmail.com">gabicr@gmail.com</a>> wrote:<br>
> in my case $brick_path = /data<br>
><br>
><br>
> getfattr -d /data returns NOTHING on both nodes!!!<br>
><br>
><br>
><br>
><br>
> On Wed, Dec 18, 2013 at 1:46 PM, Fabian Deutsch<br>
> <<a href="mailto:fabiand@redhat.com">fabiand@redhat.com</a>> wrote:<br>
> Am Mittwoch, den 18.12.2013, 13:26 +0200 schrieb Gabi<br>
> C:<br>
> > Update on Glusterfs issue<br>
> ><br>
> ><br>
> > I managed to recover the lost volume after recreating the<br>
> same volume name<br>
> > with the same bricks, which raised an error message,<br>
> resolved by running, on both<br>
> > nodes:<br>
> ><br>
> > setfattr -x trusted.glusterfs.volume-id $brick_path<br>
> > setfattr -x trusted.gfid $brick_path<br>
><br>
><br>
> Hey,<br>
><br>
> good that you could recover them.<br>
><br>
> Could you please provide $brick_path and getfattr -d<br>
> $brick_path<br>
><br>
> The question is if and/or why the fattrs are not<br>
> stored.<br>
><br>
> - fabian<br>
><br>
> ><br>
> ><br>
> ><br>
> > On Wed, Dec 18, 2013 at 12:12 PM, Gabi C<br>
> <<a href="mailto:gabicr@gmail.com">gabicr@gmail.com</a>> wrote:<br>
> > node 1:<br>
> ><br>
> > [root@virtual5 admin]# cat /config/files<br>
> > /etc/fstab<br>
> > /etc/shadow<br>
> > /etc/default/ovirt<br>
> > /etc/ssh/ssh_host_key<br>
> > /etc/ssh/ssh_host_key.pub<br>
> > /etc/ssh/ssh_host_dsa_key<br>
> > /etc/ssh/ssh_host_dsa_key.pub<br>
> > /etc/ssh/ssh_host_rsa_key<br>
> > /etc/ssh/ssh_host_rsa_key.pub<br>
> > /etc/rsyslog.conf<br>
> > /etc/libvirt/libvirtd.conf<br>
> > /etc/libvirt/passwd.db<br>
> > /etc/passwd<br>
> > /etc/sysconfig/network<br>
> > /etc/collectd.conf<br>
> > /etc/libvirt/qemu/networks<br>
> > /etc/ssh/sshd_config<br>
> > /etc/pki<br>
> > /etc/logrotate.d/ovirt-node<br>
> > /var/lib/random-seed<br>
> > /etc/iscsi/initiatorname.iscsi<br>
> > /etc/libvirt/qemu.conf<br>
> > /etc/sysconfig/libvirtd<br>
> > /etc/logrotate.d/libvirtd<br>
> > /etc/multipath.conf<br>
> > /etc/hosts<br>
> > /etc/sysconfig/network-scripts/ifcfg-enp3s0<br>
> > /etc/sysconfig/network-scripts/ifcfg-lo<br>
> > /etc/ntp.conf<br>
> > /etc/shadow<br>
> > /etc/vdsm-reg/vdsm-reg.conf<br>
> > /etc/shadow<br>
> > /etc/shadow<br>
> ><br>
> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt<br>
> ><br>
> /etc/sysconfig/network-scripts/route-ovirtmgmt<br>
> ><br>
> /etc/sysconfig/network-scripts/rule-ovirtmgmt<br>
> > /root/.ssh/authorized_keys<br>
> > /etc/vdsm/<a href="http://vdsm.id" target="_blank">vdsm.id</a><br>
> > /etc/udev/rules.d/12-ovirt-iosched.rules<br>
> > /etc/vdsm/vdsm.conf<br>
> > /etc/sysconfig/iptables<br>
> > /etc/resolv.conf<br>
> ><br>
> /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY<br>
> > /etc/sysconfig/network-scripts/ifcfg-enp6s0<br>
> ><br>
> /etc/sysconfig/network-scripts/ifcfg-enp6s0.50<br>
> > /etc/glusterfs/glusterd.vol<br>
> > /etc/selinux/config<br>
> ><br>
> > node 2:<br>
> ><br>
> ><br>
> > [root@virtual4 ~]# cat /config/files<br>
> > /etc/fstab<br>
> > /etc/shadow<br>
> > /etc/default/ovirt<br>
> > /etc/ssh/ssh_host_key<br>
> > /etc/ssh/ssh_host_key.pub<br>
> > /etc/ssh/ssh_host_dsa_key<br>
> > /etc/ssh/ssh_host_dsa_key.pub<br>
> > /etc/ssh/ssh_host_rsa_key<br>
> > /etc/ssh/ssh_host_rsa_key.pub<br>
> > /etc/rsyslog.conf<br>
> > /etc/libvirt/libvirtd.conf<br>
> > /etc/libvirt/passwd.db<br>
> > /etc/passwd<br>
> > /etc/sysconfig/network<br>
> > /etc/collectd.conf<br>
> > /etc/libvirt/qemu/networks<br>
> > /etc/ssh/sshd_config<br>
> > /etc/pki<br>
> > /etc/logrotate.d/ovirt-node<br>
> > /var/lib/random-seed<br>
> > /etc/iscsi/initiatorname.iscsi<br>
> > /etc/libvirt/qemu.conf<br>
> > /etc/sysconfig/libvirtd<br>
> > /etc/logrotate.d/libvirtd<br>
> > /etc/multipath.conf<br>
> > /etc/hosts<br>
> > /etc/sysconfig/network-scripts/ifcfg-enp3s0<br>
> > /etc/sysconfig/network-scripts/ifcfg-lo<br>
> > /etc/shadow<br>
> > /etc/shadow<br>
> > /etc/vdsm-reg/vdsm-reg.conf<br>
> ><br>
> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt<br>
> ><br>
> /etc/sysconfig/network-scripts/route-ovirtmgmt<br>
> ><br>
> /etc/sysconfig/network-scripts/rule-ovirtmgmt<br>
> > /root/.ssh/authorized_keys<br>
> > /etc/shadow<br>
> > /etc/shadow<br>
> > /etc/vdsm/<a href="http://vdsm.id" target="_blank">vdsm.id</a><br>
> > /etc/udev/rules.d/12-ovirt-iosched.rules<br>
> > /etc/sysconfig/iptables<br>
> > /etc/vdsm/vdsm.conf<br>
> > /etc/shadow<br>
> > /etc/resolv.conf<br>
> > /etc/ntp.conf<br>
> ><br>
> /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY<br>
> > /etc/sysconfig/network-scripts/ifcfg-enp6s0<br>
> ><br>
> /etc/sysconfig/network-scripts/ifcfg-enp6s0.50<br>
> > /etc/glusterfs/glusterd.vol<br>
> > /etc/selinux/config<br>
> ><br>
> > On Wed, Dec 18, 2013 at 12:07 PM, Fabian<br>
> Deutsch<br>
> > <<a href="mailto:fabiand@redhat.com">fabiand@redhat.com</a>> wrote:<br>
> > Am Mittwoch, den 18.12.2013, 12:03<br>
> +0200 schrieb Gabi<br>
> > C:<br>
> > > So here it is:<br>
> > ><br>
> > ><br>
> > > in tab volumes add new volume -<br>
> Replicated, then<br>
> > added storage -<br>
> > > data/glusterfs. Then I imported<br>
> VMs, ran them and at<br>
> > some point,<br>
> > > needing some space for a Redhat<br>
> Satellite instance<br>
> > I decided to put<br>
> > > both nodes in maintenance, stop them,<br>
> add new disk<br>
> > devices and restart,<br>
> > > but after restart the gluster<br>
> volume defined under<br>
> > Volumes Tab<br>
> > > vanished!<br>
> ><br>
> ><br>
> > Antoni,<br>
> ><br>
> > can you tell what log files to look<br>
> at to find out why<br>
> > that storage<br>
> > domain vanished - from the Engine<br>
> side?<br>
> ><br>
> > And do you know what files related<br>
> to gluster are<br>
> > changed on the Node<br>
> > side?<br>
> ><br>
> > Gabi,<br>
> ><br>
> > could you please provide the<br>
> contents of /config/files<br>
> > on the Node.<br>
> ><br>
> > > Glusterfs data goes under /data<br>
> directory which was<br>
> > automatically<br>
> > > configured when I installed the<br>
> node.<br>
> ><br>
> ><br>
> > Yep, /data is on the Data LV - that<br>
> should be good.<br>
> ><br>
> > - fabian<br>
> ><br>
> > ><br>
> > ><br>
> > > On Wed, Dec 18, 2013 at 11:45 AM,<br>
> Fabian Deutsch<br>
> > <<a href="mailto:fabiand@redhat.com">fabiand@redhat.com</a>><br>
> > > wrote:<br>
> > > Am Mittwoch, den<br>
> 18.12.2013, 11:42 +0200<br>
> > schrieb Gabi C:<br>
> > > > Yes, it is the VM<br>
> part... I just ran into an<br>
> > issue. My setup<br>
> > > consists of<br>
> > > > 2 nodes with glusterfs<br>
> and after adding<br>
> > supplemental hard<br>
> > > disk, after<br>
> > > > reboot I've lost<br>
> glusterfs volumes!<br>
> > ><br>
> > ><br>
> > > Could you explain exactly<br>
> what you<br>
> > configured?<br>
> > ><br>
> > > ><br>
> > > > How can I persist any<br>
> configuration on<br>
> > the node? I refer here<br>
> > > to<br>
> > > > ''setenforce 0'' - for<br>
> ssh login to work -<br>
> > and further<br>
> > ><br>
> > ><br>
> > > How changes can be<br>
> persisted on the Node can be<br>
> > found here:<br>
> > ><br>
> ><br>
> <a href="http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host" target="_blank">http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host</a><br>
> > ><br>
> > > Do you know into what path<br>
> the glusterfs<br>
> > data goes? Or is it<br>
> > > written<br>
> > > directly onto a disk/LV?<br>
> > ><br>
> > > - fabian<br>
> > ><br>
> > > > ""<br>
> ><br>
> <a href="http://www.ovirt.org/Features/GlusterFS_Storage_Domain" target="_blank">http://www.ovirt.org/Features/GlusterFS_Storage_Domain</a><br>
> > > > * option<br>
> rpc-auth-allow-insecure on<br>
> > ==> in<br>
> > > glusterd.vol (ensure<br>
> > > > u restart<br>
> glusterd service... for<br>
> > this to take<br>
> > > effect)<br>
> > ><br>
> > > > * volume set<br>
> <volname><br>
> > server.allow-insecure on ==><br>
> > > (ensure u<br>
> > > > stop and start<br>
> the volume.. for<br>
> > this to take<br>
> > > effect)''<br>
> > > ><br>
> > > ><br>
> > > > Thanks!<br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > On Wed, Dec 18, 2013 at<br>
> 11:35 AM, Fabian<br>
> > Deutsch<br>
> > > <<a href="mailto:fabiand@redhat.com">fabiand@redhat.com</a>><br>
> > > > wrote:<br>
> > > > Am Mittwoch, den<br>
> 18.12.2013, 08:34<br>
> > +0200 schrieb<br>
> > > Gabi C:<br>
> > > > > Hello!<br>
> > > > ><br>
> > > > ><br>
> > > > > In order to<br>
> increase disk space<br>
> > I want to add a<br>
> > > new disk<br>
> > > > drive to<br>
> > > > > ovirt node.<br>
> After adding this<br>
> > should I proceed as<br>
> > > "normal" -<br>
> > > > pvcreate,<br>
> > > > > vgcreate,<br>
> lvcreate and so on -<br>
> > or these<br>
> > > configuration will<br>
> > > > not<br>
> > > > > persist?<br>
> > > ><br>
> > > ><br>
> > > > Hey Gabi,<br>
> > > ><br>
> > > > basically plain<br>
> LVM is used in<br>
> > Node - so yes<br>
> > > pvcreate and<br>
> > > > lvextend can<br>
> > > > be used.<br>
> > > > What storage<br>
> part do you want to<br>
> > extend? The part<br>
> > > where the<br>
> > > > VMs reside?<br>
> > > > You will also<br>
> need to take care to<br>
> > extend the<br>
> > > filesystem.<br>
> > > ><br>
> > > > - fabian<br>
> > > ><br>
> > > ><br>
> > > ><br>
<br>
</div></div></blockquote></div><br></div>