[Users] Adding a new disk to an oVirt Node

Hello! In order to increase disk space I want to add a new disk drive to an oVirt Node. After adding it, should I proceed as "normal" - pvcreate, vgcreate, lvcreate and so on - or will this configuration not persist? Thanks

On Wednesday, 18.12.2013 at 08:34 +0200, Gabi C wrote:
> In order to increase disk space I want to add a new disk drive to an oVirt Node. After adding it, should I proceed as "normal" - pvcreate, vgcreate, lvcreate and so on - or will this configuration not persist?

Hey Gabi,

basically plain LVM is used in Node - so yes, pvcreate and lvextend can be used. What storage part do you want to extend? The part where the VMs reside? You will also need to take care to extend the filesystem.

- fabian
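A minimal sketch of what that could look like. The device name /dev/sdb, the "HostVG" volume group, the "Data" LV and the ext4 filesystem are assumptions based on a typical Node layout - check your own layout with pvs/vgs/lvs first:

  # identify the new disk and the current layout
  lsblk
  pvs; vgs; lvs

  # put LVM on the new disk and add it to the existing volume group
  pvcreate /dev/sdb                 # /dev/sdb is an assumption - use your new device
  vgextend HostVG /dev/sdb          # "HostVG" is the usual Node VG name; verify with vgs

  # grow the LV holding the VM data, then the filesystem on top of it
  lvextend -l +100%FREE /dev/HostVG/Data
  resize2fs /dev/HostVG/Data        # for ext4; use xfs_growfs <mountpoint> for XFS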

Yes, it is the VM part. I just ran into an issue: my setup consists of 2 nodes with glusterfs, and after adding the supplemental hard disk and rebooting, I've lost the glusterfs volumes!

How can I persist any configuration on the node? I refer here to "setenforce 0" - needed for ssh login to work - and further to http://www.ovirt.org/Features/GlusterFS_Storage_Domain :

- option rpc-auth-allow-insecure on ==> in glusterd.vol (make sure you restart the glusterd service for this to take effect)
- volume set <volname> server.allow-insecure on ==> (make sure you stop and start the volume for this to take effect)

Thanks!
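For reference, a rough sketch of how those two wiki settings are applied; the volume name "myvol" is only a placeholder, the real volume name is not given in the thread:

  # 1) allow unprivileged client ports globally: add the line
  #      option rpc-auth-allow-insecure on
  #    to the "volume management" section of /etc/glusterfs/glusterd.vol,
  #    then restart glusterd so it is picked up
  service glusterd restart          # or: systemctl restart glusterd

  # 2) allow insecure ports for one volume, then stop/start it
  gluster volume set myvol server.allow-insecure on
  gluster volume stop myvol
  gluster volume start myvol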

On Wednesday, 18.12.2013 at 11:42 +0200, Gabi C wrote:
> Yes, it is the VM part. I just ran into an issue: my setup consists of 2 nodes with glusterfs, and after adding the supplemental hard disk and rebooting, I've lost the glusterfs volumes!

Could you explain exactly what you configured?

> How can I persist any configuration on the node? I refer here to "setenforce 0" - needed for ssh login to work - and further to the GlusterFS storage domain settings.

How changes can be persisted on Node is described here:
http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host

Do you know into what path the glusterfs data goes? Or is it written directly onto a disk/LV?

- fabian
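A short sketch of the persistence mechanism that page describes: the Node root filesystem is largely stateless, so individual files have to be whitelisted with the persist/unpersist helpers shipped with Node. The file names below are examples only:

  # mark a file so it survives a reboot (it is copied to the /config
  # partition and bind-mounted back in place on boot)
  persist /etc/selinux/config
  persist /etc/glusterfs/glusterd.vol

  # list what is currently persisted, or undo it again
  cat /config/files
  unpersist /etc/selinux/config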

So here it is: in the Volumes tab I added a new volume - Replicated - then added a storage domain - data/glusterfs. Then I imported the VMs and ran them, and at some point, needing some space for a Red Hat Satellite instance, I decided to put both nodes into maintenance, stop them, add the new disk devices and restart - but after the restart the gluster volume defined under the Volumes tab vanished!

The glusterfs data goes under the /data directory, which was automatically configured when I installed the node.
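One thing worth checking at this point (an observation, not something stated in the thread): glusterd keeps its volume and peer definitions under /var/lib/glusterd, and on a stateless Node anything outside the persisted paths can be lost on reboot, which would make the volume "vanish" from the Volumes tab even though the brick data under /data is intact:

  # does glusterd still know about the volume and the other peer after the reboot?
  gluster volume info
  gluster peer status

  # the on-disk definitions glusterd reads at startup
  ls /var/lib/glusterd/vols/ /var/lib/glusterd/peers/

  # the brick data itself lives on the persisted Data LV and should still be there
  ls /data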

On Wednesday, 18.12.2013 at 12:03 +0200, Gabi C wrote:
> in the Volumes tab I added a new volume - Replicated - then added a storage domain - data/glusterfs. [...] after the restart the gluster volume defined under the Volumes tab vanished!

Antoni,

can you tell what log files to look at to find out why that storage domain vanished - from the Engine side?
And do you know what files related to gluster are changed on the Node side?

Gabi,

could you please provide the contents of /config/files on the Node.

> The glusterfs data goes under the /data directory, which was automatically configured when I installed the node.

Yep, /data is on the Data LV - that should be good.

- fabian
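While waiting for that, these are the usual places to look; standard log locations, not specific advice from the thread:

  # on the Engine machine - what Engine saw around the time of the reboot
  less /var/log/ovirt-engine/engine.log

  # on each node - VDSM and glusterd themselves
  less /var/log/vdsm/vdsm.log
  less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log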

node 1:

[root@virtual5 admin]# cat /config/files
/etc/fstab
/etc/shadow
/etc/default/ovirt
/etc/ssh/ssh_host_key
/etc/ssh/ssh_host_key.pub
/etc/ssh/ssh_host_dsa_key
/etc/ssh/ssh_host_dsa_key.pub
/etc/ssh/ssh_host_rsa_key
/etc/ssh/ssh_host_rsa_key.pub
/etc/rsyslog.conf
/etc/libvirt/libvirtd.conf
/etc/libvirt/passwd.db
/etc/passwd
/etc/sysconfig/network
/etc/collectd.conf
/etc/libvirt/qemu/networks
/etc/ssh/sshd_config
/etc/pki
/etc/logrotate.d/ovirt-node
/var/lib/random-seed
/etc/iscsi/initiatorname.iscsi
/etc/libvirt/qemu.conf
/etc/sysconfig/libvirtd
/etc/logrotate.d/libvirtd
/etc/multipath.conf
/etc/hosts
/etc/sysconfig/network-scripts/ifcfg-enp3s0
/etc/sysconfig/network-scripts/ifcfg-lo
/etc/ntp.conf
/etc/shadow
/etc/vdsm-reg/vdsm-reg.conf
/etc/shadow
/etc/shadow
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
/etc/sysconfig/network-scripts/route-ovirtmgmt
/etc/sysconfig/network-scripts/rule-ovirtmgmt
/root/.ssh/authorized_keys
/etc/vdsm/vdsm.id
/etc/udev/rules.d/12-ovirt-iosched.rules
/etc/vdsm/vdsm.conf
/etc/sysconfig/iptables
/etc/resolv.conf
/etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY
/etc/sysconfig/network-scripts/ifcfg-enp6s0
/etc/sysconfig/network-scripts/ifcfg-enp6s0.50
/etc/glusterfs/glusterd.vol
/etc/selinux/config

node 2:

[root@virtual4 ~]# cat /config/files
/etc/fstab
/etc/shadow
/etc/default/ovirt
/etc/ssh/ssh_host_key
/etc/ssh/ssh_host_key.pub
/etc/ssh/ssh_host_dsa_key
/etc/ssh/ssh_host_dsa_key.pub
/etc/ssh/ssh_host_rsa_key
/etc/ssh/ssh_host_rsa_key.pub
/etc/rsyslog.conf
/etc/libvirt/libvirtd.conf
/etc/libvirt/passwd.db
/etc/passwd
/etc/sysconfig/network
/etc/collectd.conf
/etc/libvirt/qemu/networks
/etc/ssh/sshd_config
/etc/pki
/etc/logrotate.d/ovirt-node
/var/lib/random-seed
/etc/iscsi/initiatorname.iscsi
/etc/libvirt/qemu.conf
/etc/sysconfig/libvirtd
/etc/logrotate.d/libvirtd
/etc/multipath.conf
/etc/hosts
/etc/sysconfig/network-scripts/ifcfg-enp3s0
/etc/sysconfig/network-scripts/ifcfg-lo
/etc/shadow
/etc/shadow
/etc/vdsm-reg/vdsm-reg.conf
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
/etc/sysconfig/network-scripts/route-ovirtmgmt
/etc/sysconfig/network-scripts/rule-ovirtmgmt
/root/.ssh/authorized_keys
/etc/shadow
/etc/shadow
/etc/vdsm/vdsm.id
/etc/udev/rules.d/12-ovirt-iosched.rules
/etc/sysconfig/iptables
/etc/vdsm/vdsm.conf
/etc/shadow
/etc/resolv.conf
/etc/ntp.conf
/etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY
/etc/sysconfig/network-scripts/ifcfg-enp6s0
/etc/sysconfig/network-scripts/ifcfg-enp6s0.50
/etc/glusterfs/glusterd.vol
/etc/selinux/config

Update on the glusterfs issue:

I managed to recover the lost volume after recreating the same volume name with the same bricks. That raised an error message, which was resolved by running the following on both nodes:

setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path
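Put together, the recovery path described above reads roughly like this. The volume name "myvol" and the node names are placeholders, and clearing these xattrs should only be done when the brick is meant to be reused for the recreated volume:

  # gluster refuses to reuse a directory that still carries the old volume's
  # markers ("... is already part of a volume"), so inspect and clear them first
  getfattr -d -m trusted -e hex "$brick_path"
  setfattr -x trusted.glusterfs.volume-id "$brick_path"
  setfattr -x trusted.gfid "$brick_path"

  # then recreate the volume with the same name and bricks (placeholders)
  gluster volume create myvol replica 2 virtual4:/data virtual5:/data
  gluster volume start myvol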

On Wednesday, 18.12.2013 at 13:26 +0200, Gabi C wrote:
> I managed to recover the lost volume after recreating the same volume name with the same bricks. That raised an error message, which was resolved by running the following on both nodes:
>
> setfattr -x trusted.glusterfs.volume-id $brick_path
> setfattr -x trusted.gfid $brick_path

Hey,

good that you could recover them.

Could you please provide $brick_path and the output of

getfattr -d $brick_path

The question is if and/or why the fattrs are not stored.

- fabian

In my case $brick_path = /data

getfattr -d /data returns NOTHING on both nodes!
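A side note on that command (general getfattr behaviour, not something pointed out in the thread): by default getfattr -d only matches attributes in the user.* namespace, so the trusted.* attributes gluster uses are silently hidden unless a match pattern is given and the command runs as root:

  # plain -d filters everything except user.* and therefore prints nothing here
  getfattr -d /data

  # ask explicitly for the trusted.* namespace (hex output avoids encoding issues)
  getfattr -d -m trusted -e hex /data
  # or match everything:
  getfattr -d -m . -e hex /data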

Still, now I cannot start either of the 2 machines! I get:

ID 119 VM proxy2 is down. Exit message: Child quit during startup handshake: Input/output error.

Something similar to bug https://bugzilla.redhat.com/show_bug.cgi?id=1033064, except that in my case selinux is permissive!

On Wednesday, 18.12.2013 at 14:14 +0200, Gabi C wrote:
> Still, now I cannot start either of the 2 machines! I get:
>
> ID 119 VM proxy2 is down. Exit message: Child quit during startup handshake: Input/output error.

Could you try to find out in what context this I/O error appears?

- fabian
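To narrow that down, the places where the handshake error usually leaves traces are the per-VM qemu log, vdsm.log and the SELinux audit log (standard locations; the VM name is taken from the error message above):

  # libvirt/qemu's own log for the failing VM
  less /var/log/libvirt/qemu/proxy2.log

  # what vdsm was doing while it tried to start the VM
  grep -i "proxy2\|handshake" /var/log/vdsm/vdsm.log

  # any SELinux denials around the start attempt
  ausearch -m avc -ts recent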

Hello again! After persisting the selinux config, at reboot I get "Current mode: enforcing" although "Mode from config file: permissive"! Due to this, I think I get a denial for glusterfsd:

type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
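A quick way to compare the runtime state with what was actually persisted; getenforce/sestatus are standard tools, and the assumption here is that persisted copies live under /config mirroring the original path:

  # runtime state vs. the on-disk configuration
  getenforce
  sestatus
  grep ^SELINUX= /etc/selinux/config

  # is the config file in the persisted set, and does the persisted copy match?
  grep selinux /config/files
  diff /etc/selinux/config /config/etc/selinux/config   # /config layout is an assumption

  # switch the running system back to permissive until the labeling is sorted out
  setenforce 0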

I have also these in audit log type=AVC msg=audit(1387273747.999:111): avc: denied { lock } for pid=2299 comm="login" path="/var/log/wtmp" dev="dm-8" ino=38 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387273858.567:171): avc: denied { read } for pid=2941 comm="login" name="btmp" dev="dm-8" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387273858.567:174): avc: denied { lock } for pid=2941 comm="login" path="/var/log/wtmp" dev="dm-8" ino=38 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274101.999:229): avc: denied { lock } for pid=3118 comm="login" path="/var/log/btmp" dev="dm-8" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274106.721:232): avc: denied { lock } for pid=3118 comm="login" path="/var/log/btmp" dev="dm-8" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274111.958:239): avc: denied { read } for pid=3118 comm="login" name="btmp" dev="dm-8" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274111.959:242): avc: denied { lock } for pid=3118 comm="login" path="/var/log/wtmp" dev="dm-8" ino=38 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274151.848:253): avc: denied { sigchld } for pid=3158 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387274151.949:259): avc: denied { dyntransition } for pid=3158 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387274156.584:280): avc: denied { open } for pid=3181 comm="agetty" path="/var/log/wtmp" dev="dm-8" ino=38 scontext=system_u:system_r:getty_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274195.492:289): avc: denied { sigchld } for pid=3183 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387274268.624:341): avc: denied { sigchld } for pid=3623 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387274268.723:347): avc: denied { dyntransition } for pid=3623 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387274398.240:420): avc: denied { sigchld } for pid=4440 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387274398.341:426): avc: denied { dyntransition } for pid=4440 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387274400.470:457): avc: denied { sigchld } for pid=4462 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387274400.566:463): avc: denied { dyntransition } for pid=4462 comm="sshd" scontext=system_u:system_r:initrc_t:s0 
tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387274401.854:481): avc: denied { open } for pid=4501 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387274559.404:571): avc: denied { sigchld } for pid=5533 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387274559.505:577): avc: denied { dyntransition } for pid=5533 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387275001.263:602): avc: denied { open } for pid=6237 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387275601.308:654): avc: denied { open } for pid=7183 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387276201.380:855): avc: denied { open } for pid=8568 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387276801.433:948): avc: denied { open } for pid=9670 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387277401.474:977): avc: denied { open } for pid=10390 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387277646.293:1044): avc: denied { sigchld } for pid=10911 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387277646.397:1050): avc: denied { dyntransition } for pid=10911 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387277708.026:1080): avc: denied { sigchld } for pid=11024 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387277708.128:1091): avc: denied { dyntransition } for pid=11032 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387278001.552:1168): avc: denied { open } for pid=11653 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387278601.647:1198): avc: denied { open } for pid=12577 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387278937.733:1209): avc: denied { sigchld } for pid=13127 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387278937.840:1215): avc: denied { dyntransition } for pid=13127 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387279201.695:1234): avc: denied { open } for pid=13580 
comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387282201.945:1348): avc: denied { open } for pid=18142 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387283170.033:1379): avc: denied { open } for pid=19723 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35" dev="fuse" ino=13067829551237403768 scontext=system_u:system_r:svirt_t:s0:c107,c132 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387283170.034:1380): avc: denied { getattr } for pid=19723 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c107,c132 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387283319.223:1394): avc: denied { open } for pid=20229 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34" dev="fuse" ino=9611908447529383451 scontext=system_u:system_r:svirt_t:s0:c765,c915 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387283319.224:1395): avc: denied { getattr } for pid=20229 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c765,c915 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387283401.033:1412): avc: denied { open } for pid=20456 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387284001.078:1440): avc: denied { open } for pid=21704 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387284419.112:1460): avc: denied { open } for pid=22657 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35" dev="fuse" ino=13067829551237403768 scontext=system_u:system_r:svirt_t:s0:c93,c328 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387284419.113:1461): avc: denied { getattr } for pid=22657 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c93,c328 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387284498.390:1482): avc: denied { open } for pid=23020 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34" dev="fuse" ino=9611908447529383451 scontext=system_u:system_r:svirt_t:s0:c533,c863 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387284498.391:1483): avc: denied { getattr } for pid=23020 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c533,c863 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387284510.039:1496): avc: denied { open } for pid=23180 comm="qemu-system-x86" 
path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35" dev="fuse" ino=13067829551237403768 scontext=system_u:system_r:svirt_t:s0:c377,c700 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387284510.040:1497): avc: denied { getattr } for pid=23180 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c377,c700 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387284601.124:1513): avc: denied { open } for pid=23506 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387285201.193:1544): avc: denied { open } for pid=24722 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387285390.572:1562): avc: denied { open } for pid=25215 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34" dev="fuse" ino=9611908447529383451 scontext=system_u:system_r:svirt_t:s0:c165,c765 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387285390.573:1563): avc: denied { getattr } for pid=25215 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c165,c765 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387285801.297:1595): avc: denied { open } for pid=26199 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387286176.904:1606): avc: denied { open } for pid=27341 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/dd2062fe-85fa-4ede-8eb8-8a747ffb2298/f97e3e03-178a-4627-986f-7482a860be35" dev="fuse" ino=13067829551237403768 scontext=system_u:system_r:svirt_t:s0:c288,c302 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387286176.906:1607): avc: denied { getattr } for pid=27341 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c288,c302 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387286254.901:1628): avc: denied { open } for pid=27764 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34" dev="fuse" ino=9611908447529383451 scontext=system_u:system_r:svirt_t:s0:c786,c947 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387286254.902:1629): avc: denied { getattr } for pid=27764 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c786,c947 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387286401.342:1646): avc: denied { open } for pid=28176 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387287001.404:1669): avc: denied { open } for pid=29599 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 
scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387288180.517:1708): avc: denied { open } for pid=32133 comm="qemu-system-x86" path="/rhev/data-center/mnt/glusterSD/10.125.1.194:volum1/fefb20e4-0585-4592-ba78-6715045bcf69/images/321ad473-3741-4825-8ac0-6c416aa8f490/d587d693-5a21-4f82-b499-fce1a976fb34" dev="fuse" ino=9611908447529383451 scontext=system_u:system_r:svirt_t:s0:c185,c292 tcontext=system_u:object_r:fusefs_t:s0 tclass=file type=AVC msg=audit(1387288180.518:1709): avc: denied { getattr } for pid=32133 comm="qemu-system-x86" name="/" dev="fuse" ino=1 scontext=system_u:system_r:svirt_t:s0:c185,c292 tcontext=system_u:object_r:fusefs_t:s0 tclass=filesystem type=AVC msg=audit(1387288201.491:1725): avc: denied { open } for pid=32226 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387289401.633:1777): avc: denied { open } for pid=2364 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387293761.064:1921): avc: denied { sigchld } for pid=11668 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387293761.168:1927): avc: denied { dyntransition } for pid=11668 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387294201.473:1953): avc: denied { open } for pid=12683 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387294801.508:1975): avc: denied { open } for pid=13953 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387297201.633:2059): avc: denied { open } for pid=19295 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387312801.587:2580): avc: denied { open } for pid=18968 comm="sadc" path="/var/log/sa/sa17" dev="dm-8" ino=29 scontext=system_u:system_r:sysstat_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387352472.746:3921): avc: denied { sigchld } for pid=942 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387352472.853:3932): avc: denied { dyntransition } for pid=954 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387353052.756:3962): avc: denied { execute } for pid=2415 comm="glusterd" name="tune2fs" dev="dm-5" ino=2748 scontext=system_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:fsadm_exec_t:s0 tclass=file type=AVC msg=audit(1387353052.756:3962): avc: denied { execute_no_trans } for pid=2415 comm="glusterd" path="/usr/sbin/tune2fs" dev="dm-5" ino=2748 scontext=system_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:fsadm_exec_t:s0 tclass=file type=AVC msg=audit(1387353052.762:3963): avc: denied { read } for pid=2415 comm="tune2fs" name="dm-9" dev="devtmpfs" ino=16827 
scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file type=AVC msg=audit(1387353052.762:3963): avc: denied { open } for pid=2415 comm="tune2fs" path="/dev/dm-9" dev="devtmpfs" ino=16827 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file type=AVC msg=audit(1387353052.762:3964): avc: denied { getattr } for pid=2415 comm="tune2fs" path="/dev/dm-9" dev="devtmpfs" ino=16827 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file type=AVC msg=audit(1387353052.762:3965): avc: denied { ioctl } for pid=2415 comm="tune2fs" path="/dev/dm-9" dev="devtmpfs" ino=16827 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:fixed_disk_device_t:s0 tclass=blk_file type=AVC msg=audit(1387355983.201:105): avc: denied { read } for pid=3066 comm="login" name="btmp" dev="dm-9" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387355983.201:108): avc: denied { lock } for pid=3066 comm="login" path="/var/log/wtmp" dev="dm-9" ino=38 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387356016.592:152): avc: denied { sigchld } for pid=3176 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387356016.699:158): avc: denied { dyntransition } for pid=3176 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387356040.436:179): avc: denied { open } for pid=3206 comm="agetty" path="/var/log/wtmp" dev="dm-9" ino=38 scontext=system_u:system_r:getty_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387359059.970:308): avc: denied { sigchld } for pid=3931 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387359060.074:319): avc: denied { dyntransition } for pid=3935 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387365717.209:5806): avc: denied { sigchld } for pid=29945 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387365717.315:5817): avc: denied { dyntransition } for pid=29963 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelto } for pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file type=AVC msg=audit(1387440663.583:9021): avc: denied { sigchld } for pid=18342 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387440663.687:9032): avc: denied { dyntransition } for pid=18349 comm="sshd" scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 
tclass=process type=AVC msg=audit(1387446189.047:315): avc: denied { lock } for pid=3083 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446196.317:318): avc: denied { lock } for pid=3083 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446199.539:321): avc: denied { lock } for pid=3083 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446225.129:332): avc: denied { lock } for pid=11171 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446232.941:335): avc: denied { lock } for pid=11171 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446246.007:338): avc: denied { lock } for pid=11171 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446260.743:346): avc: denied { lock } for pid=11220 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446266.925:353): avc: denied { read } for pid=11220 comm="login" name="btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446266.926:356): avc: denied { lock } for pid=11220 comm="login" path="/var/log/wtmp" dev="dm-10" ino=38 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387446579.597:402): avc: denied { open } for pid=11808 comm="agetty" path="/var/log/wtmp" dev="dm-10" ino=38 scontext=system_u:system_r:getty_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387447631.139:154): avc: denied { lock } for pid=3085 comm="login" path="/var/log/btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387447636.069:161): avc: denied { read } for pid=3085 comm="login" name="btmp" dev="dm-10" ino=15 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387447636.069:164): avc: denied { lock } for pid=3085 comm="login" path="/var/log/wtmp" dev="dm-10" ino=38 scontext=system_u:system_r:local_login_t:s0-s0:c0.c1023 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387447683.198:173): avc: denied { open } for pid=4619 comm="agetty" path="/var/log/wtmp" dev="dm-10" ino=38 scontext=system_u:system_r:getty_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file type=AVC msg=audit(1387448298.423:215): avc: denied { sigchld } for pid=5571 comm="sshd" scontext=system_u:system_r:sshd_net_t:s0 tcontext=system_u:system_r:initrc_t:s0 tclass=process type=AVC msg=audit(1387448298.529:226): avc: denied { dyntransition } for pid=5578 comm="sshd" 
scontext=system_u:system_r:initrc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process On Thu, Dec 19, 2013 at 12:25 PM, Gabi C <gabicr@gmail.com> wrote:
Hello again!
After persisting the selinux config, at reboot I get "Current mode: enforcing" although "Mode from config file: permissive"! Due to this, I think I get a denial for glusterfsd:
type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
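A quick way to see the mismatch described above, and to make a change stick on a Node image, is sketched here. It assumes the image ships the persist helper documented on the Node_Troubleshooting page linked earlier in the thread; note that /etc/selinux/config already appears in the /config/files listings above, so in this case it should already be persisted and the mismatch likely comes from the image itself.

sestatus                       # compare "Current mode" with "Mode from config file"
setenforce 0                   # switch the running kernel to permissive for this boot only
vi /etc/selinux/config         # set SELINUX=permissive
persist /etc/selinux/config    # Node-specific: keep the edited file across reboots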
On Wed, Dec 18, 2013 at 3:30 PM, Fabian Deutsch <fabiand@redhat.com> wrote:
On Wednesday, 18.12.2013, 14:14 +0200, Gabi C wrote:
Still, now I cannot start either of the two machines! I get:
ID 119 VM proxy2 is down. Exit message: Child quit during startup handshake: Input/output error.
Could you try to find out in what context this IO error appears?
- fabian
Something similar to bug https://bugzilla.redhat.com/show_bug.cgi?id=1033064, except that in my case selinux is permissive!
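One way to pin down the context Fabian asks about is to line up the VDSM log and the per-VM libvirt log with any recent AVC denials. The paths below are the usual defaults and may differ on a Node image; proxy2 is the VM named in the error above.

grep -i 'Input/output error' /var/log/vdsm/vdsm.log | tail     # vdsm's view of the failed start
tail -n 50 /var/log/libvirt/qemu/proxy2.log                    # qemu's own stderr, if present
ausearch -m avc -ts recent                                     # denials around the time of the failure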
On Wed, Dec 18, 2013 at 2:10 PM, Gabi C <gabicr@gmail.com> wrote:
in my case $brick_path = /data
getfattr -d /data returns NOTHING on both nodes!!!
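Note that getfattr -d on its own only dumps the user.* namespace, so an empty result here does not by itself prove that the trusted.* Gluster attributes are gone. Run it as root with an explicit match pattern, for example:

getfattr -d -m . -e hex /data                          # dump attributes from all namespaces, hex-encoded
getfattr -n trusted.glusterfs.volume-id -e hex /data   # or query the two relevant keys directly
getfattr -n trusted.gfid -e hex /data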
On Wed, Dec 18, 2013 at 1:46 PM, Fabian Deutsch <fabiand@redhat.com> wrote:
On Wednesday, 18.12.2013, 13:26 +0200, Gabi C wrote:
> Update on the GlusterFS issue
>
> I managed to recover the lost volume after recreating the same volume name
> with the same bricks, which raised an error message, resolved by running the
> following on both nodes:
>
> setfattr -x trusted.glusterfs.volume-id $brick_path
> setfattr -x trusted.gfid $brick_path
Hey,
good that you could recover them.
Could you please provide $brick_path and the output of getfattr -d $brick_path?
The question is whether, and if so why, the fattrs are not stored.
- fabian
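To answer whether the fattrs survive a reboot at all, one could simply snapshot them before and after. A minimal sketch, assuming /data is the brick path as stated above and that the commands are run as root:

getfattr -d -m trusted -e hex /data > /data/xattrs.before
# reboot the node, then:
getfattr -d -m trusted -e hex /data > /data/xattrs.after
diff /data/xattrs.before /data/xattrs.after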
On Thursday, 19.12.2013, 12:25 +0200, Gabi C wrote:
Hello again!

After persisting the selinux config, at reboot I get "Current mode: enforcing" although "Mode from config file: permissive"!

Due to this, I think I get a denial for glusterfsd:

type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for pid=30249 comm="glusterfsd" name="23fe702e-be59-4f65-8c55-58b1b1e1b023" dev="dm-10" ino=1835015 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
Hey Gabi, just a small update here. The problems are all related to some mislabeling during the build - this build's problems. You'll need a new image to get rid of all the SELinux-related bugs. - fabian
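Until a fixed image is available, the mislabeling can at least be confirmed, and perhaps temporarily worked around, by comparing the actual and expected contexts on the brick path. This is only a sketch, and on a stateless Node image a relabel may not survive the next reboot or upgrade.

ls -dZ /data          # the relabelfrom denial above shows file_t, i.e. an unlabeled file
matchpathcon /data    # what the loaded policy expects for that path
restorecon -Rv /data  # relabel in place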
participants (2):
- Fabian Deutsch
- Gabi C