<div dir="ltr"><div><div>Still, now I cannot start none of the 2 machines! I get <br></div>ID 119 VM proxy2 is down. Exit message: Child quit during startup handshake: Input/output error.""<br><br><br></div>Something similar to bug <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1033064">https://bugzilla.redhat.com/show_bug.cgi?id=1033064</a>, except that in my case selinux is permissive!<br>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Dec 18, 2013 at 2:10 PM, Gabi C <span dir="ltr"><<a href="mailto:gabicr@gmail.com" target="_blank">gabicr@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">in my case $brick_path =/data<br><br><br>getfattr -d /data return NOTHING on both nodes!!!<br><br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Dec 18, 2013 at 1:46 PM, Fabian Deutsch <span dir="ltr"><<a href="mailto:fabiand@redhat.com" target="_blank">fabiand@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Am Mittwoch, den 18.12.2013, 13:26 +0200 schrieb Gabi C:<br>
<div>> Update on Glusterfs issue<br>
><br>
><br>
> I manage to recover lost volume after recretaing the same volume name<br>
> with same bricks, whisch raised an error message, resolved by, on both<br>
> nodes:<br>
><br>
> setfattr -x trusted.glusterfs.volume-id $brick_path<br>
> setfattr -x trusted.gfid $brick_path<br>
<br>
</div>Hey,<br>
<br>
good that you could recover them.<br>
<br>
Could you please provide $brick_path and getfattr -d $brick_path<br>
<br>
The question is if and/or why the fattrs are not stored.<br>
<div><div><br>
- fabian<br>
<br>
><br>
><br>
><br>
> On Wed, Dec 18, 2013 at 12:12 PM, Gabi C <<a href="mailto:gabicr@gmail.com" target="_blank">gabicr@gmail.com</a>> wrote:<br>
> node 1:<br>
><br>
> [root@virtual5 admin]# cat /config/files<br>
> /etc/fstab<br>
> /etc/shadow<br>
> /etc/default/ovirt<br>
> /etc/ssh/ssh_host_key<br>
> /etc/ssh/ssh_host_key.pub<br>
> /etc/ssh/ssh_host_dsa_key<br>
> /etc/ssh/ssh_host_dsa_key.pub<br>
> /etc/ssh/ssh_host_rsa_key<br>
> /etc/ssh/ssh_host_rsa_key.pub<br>
> /etc/rsyslog.conf<br>
> /etc/libvirt/libvirtd.conf<br>
> /etc/libvirt/passwd.db<br>
> /etc/passwd<br>
> /etc/sysconfig/network<br>
> /etc/collectd.conf<br>
> /etc/libvirt/qemu/networks<br>
> /etc/ssh/sshd_config<br>
> /etc/pki<br>
> /etc/logrotate.d/ovirt-node<br>
> /var/lib/random-seed<br>
> /etc/iscsi/initiatorname.iscsi<br>
> /etc/libvirt/qemu.conf<br>
> /etc/sysconfig/libvirtd<br>
> /etc/logrotate.d/libvirtd<br>
> /etc/multipath.conf<br>
> /etc/hosts<br>
> /etc/sysconfig/network-scripts/ifcfg-enp3s0<br>
> /etc/sysconfig/network-scripts/ifcfg-lo<br>
> /etc/ntp.conf<br>
> /etc/shadow<br>
> /etc/vdsm-reg/vdsm-reg.conf<br>
> /etc/shadow<br>
> /etc/shadow<br>
> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt<br>
> /etc/sysconfig/network-scripts/route-ovirtmgmt<br>
> /etc/sysconfig/network-scripts/rule-ovirtmgmt<br>
> /root/.ssh/authorized_keys<br>
> /etc/vdsm/<a href="http://vdsm.id" target="_blank">vdsm.id</a><br>
> /etc/udev/rules.d/12-ovirt-iosched.rules<br>
> /etc/vdsm/vdsm.conf<br>
> /etc/sysconfig/iptables<br>
> /etc/resolv.conf<br>
> /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY<br>
> /etc/sysconfig/network-scripts/ifcfg-enp6s0<br>
> /etc/sysconfig/network-scripts/ifcfg-enp6s0.50<br>
> /etc/glusterfs/glusterd.vol<br>
> /etc/selinux/config<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
> node 2:<br>
><br>
><br>
> [root@virtual4 ~]# cat /config/files<br>
> /etc/fstab<br>
> /etc/shadow<br>
> /etc/default/ovirt<br>
> /etc/ssh/ssh_host_key<br>
> /etc/ssh/ssh_host_key.pub<br>
> /etc/ssh/ssh_host_dsa_key<br>
> /etc/ssh/ssh_host_dsa_key.pub<br>
> /etc/ssh/ssh_host_rsa_key<br>
> /etc/ssh/ssh_host_rsa_key.pub<br>
> /etc/rsyslog.conf<br>
> /etc/libvirt/libvirtd.conf<br>
> /etc/libvirt/passwd.db<br>
> /etc/passwd<br>
> /etc/sysconfig/network<br>
> /etc/collectd.conf<br>
> /etc/libvirt/qemu/networks<br>
> /etc/ssh/sshd_config<br>
> /etc/pki<br>
> /etc/logrotate.d/ovirt-node<br>
> /var/lib/random-seed<br>
> /etc/iscsi/initiatorname.iscsi<br>
> /etc/libvirt/qemu.conf<br>
> /etc/sysconfig/libvirtd<br>
> /etc/logrotate.d/libvirtd<br>
> /etc/multipath.conf<br>
> /etc/hosts<br>
> /etc/sysconfig/network-scripts/ifcfg-enp3s0<br>
> /etc/sysconfig/network-scripts/ifcfg-lo<br>
> /etc/shadow<br>
> /etc/shadow<br>
> /etc/vdsm-reg/vdsm-reg.conf<br>
> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt<br>
> /etc/sysconfig/network-scripts/route-ovirtmgmt<br>
> /etc/sysconfig/network-scripts/rule-ovirtmgmt<br>
> /root/.ssh/authorized_keys<br>
> /etc/shadow<br>
> /etc/shadow<br>
> /etc/vdsm/<a href="http://vdsm.id" target="_blank">vdsm.id</a><br>
> /etc/udev/rules.d/12-ovirt-iosched.rules<br>
> /etc/sysconfig/iptables<br>
> /etc/vdsm/vdsm.conf<br>
> /etc/shadow<br>
> /etc/resolv.conf<br>
> /etc/ntp.conf<br>
> /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY<br>
> /etc/sysconfig/network-scripts/ifcfg-enp6s0<br>
> /etc/sysconfig/network-scripts/ifcfg-enp6s0.50<br>
> /etc/glusterfs/glusterd.vol<br>
> /etc/selinux/config<br>
><br>
><br>
><br>
><br>
> On Wed, Dec 18, 2013 at 12:07 PM, Fabian Deutsch<br>
> <<a href="mailto:fabiand@redhat.com" target="_blank">fabiand@redhat.com</a>> wrote:<br>
> Am Mittwoch, den 18.12.2013, 12:03 +0200 schrieb Gabi<br>
> C:<br>
> > So here it is:<br>
> ><br>
> ><br>
> > in tab volumes add new volume - Replicated, then<br>
> added storage -<br>
> > data/glusterfs. Then I impoerted Vm, ran them and at<br>
> some point,<br>
> > needing some space for a Redhat Satellite instance<br>
> I decided to put<br>
> > both node in maintenace stop them add new disk<br>
> devices and restart,<br>
> > but after restart the gluster volume defined under<br>
> Volumes Tab<br>
> > vanished!<br>
><br>
><br>
> Antoni,<br>
><br>
> can you tell what log files to look at to find out why<br>
> that storage<br>
> domain vanished - from a Engine side?<br>
><br>
> And do you know what files related to gluster are<br>
> changed on the Node<br>
> side?<br>
><br>
> Gabi,<br>
><br>
> could you please provide the contents of /config/files<br>
> on the Node.<br>
><br>
> > Glusterfs data goes under /data directory which was<br>
> automatically<br>
> > configured when I installed the node.<br>
><br>
><br>
> Yep, /data is on the Data LV - that should be good.<br>
><br>
> - fabian<br>
><br>
> ><br>
> ><br>
> > On Wed, Dec 18, 2013 at 11:45 AM, Fabian Deutsch<br>
> <<a href="mailto:fabiand@redhat.com" target="_blank">fabiand@redhat.com</a>><br>
> > wrote:<br>
> > Am Mittwoch, den 18.12.2013, 11:42 +0200<br>
> schrieb Gabi C:<br>
> > > Yes, it is the VM part..I just run into an<br>
> issue. My setup<br>
> > consist in<br>
> > > 2 nodes with glusterfs and after adding<br>
> supplemental hard<br>
> > disk, after<br>
> > > reboot I've lost glusterfs volumes!<br>
> ><br>
> ><br>
> > Could you exactly explain what you<br>
> configured?<br>
> ><br>
> > ><br>
> > > How can I persist any configuration on<br>
> node and I refer here<br>
> > to<br>
> > > ''setenforce 0'' - for ssh login to work-<br>
> and further<br>
> ><br>
> ><br>
> > How changes can be persisted on Node can be<br>
> found here:<br>
> ><br>
> <a href="http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host" target="_blank">http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host</a><br>
> ><br>
> > Do you know into what path the glusterfs<br>
> data goes? Or is it<br>
> > written<br>
> > directly onto a disk/LV?<br>
> ><br>
> > - fabian<br>
> ><br>
> > > ""<br>
> <a href="http://www.ovirt.org/Features/GlusterFS_Storage_Domain" target="_blank">http://www.ovirt.org/Features/GlusterFS_Storage_Domain</a><br>
> > > * option rpc-auth-allow-insecure on<br>
> ==> in<br>
> > glusterd.vol (ensure<br>
> > > u restart glusterd service... for<br>
> this to take<br>
> > effect)<br>
> ><br>
> > > * volume set <volname><br>
> server.allow-insecure on ==><br>
> > (ensure u<br>
> > > stop and start the volume.. for<br>
> this to take<br>
> > effect)''<br>
> > ><br>
> > ><br>
> > > Thanks!<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > On Wed, Dec 18, 2013 at 11:35 AM, Fabian<br>
> Deutsch<br>
> > <<a href="mailto:fabiand@redhat.com" target="_blank">fabiand@redhat.com</a>><br>
> > > wrote:<br>
> > > Am Mittwoch, den 18.12.2013, 08:34<br>
> +0200 schrieb<br>
> > Gabi C:<br>
> > > > Hello!<br>
> > > ><br>
> > > ><br>
> > > > In order to increase disk space<br>
> I want to add a<br>
> > new disk<br>
> > > drive to<br>
> > > > ovirt node. After adding this<br>
> should I proceed as<br>
> > "normal" -<br>
> > > pvcreate,<br>
> > > > vgcreate, lvcreate and so on -<br>
> or these<br>
> > configuration will<br>
> > > not<br>
> > > > persist?<br>
> > ><br>
> > ><br>
> > > Hey Gabi,<br>
> > ><br>
> > > basically plain LVM is used in<br>
> Node - so yes<br>
> > pvcreate and<br>
> > > lvextend can<br>
> > > be used.<br>
> > > What storage part do you want to<br>
> extend? The part<br>
> > where the<br>
> > > VMs reside?<br>
> > > You will also need to take care to<br>
> extend the<br>
> > filesystem.<br>
> > ><br>
> > > - fabian<br>
> > ><br>
> > ><br>
> > ><br>
> ><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>