[Users] Adding new disk to oVirt node

Fabian Deutsch fabiand at redhat.com
Wed Dec 18 13:30:20 UTC 2013


On Wednesday, 18.12.2013, at 14:14 +0200, Gabi C wrote:
> Still, now I cannot start either of the two machines! I get:
> 
> "ID 119 VM proxy2 is down. Exit message: Child quit during startup
> handshake: Input/output error."

Could you try to find out in what context this I/O error appears?

- fabian
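
That I/O error typically surfaces in the per-VM libvirt log and in the
vdsm log on the host. A minimal sketch for digging out the context,
assuming default oVirt log locations and the VM name "proxy2" from the
error above:

    # per-VM qemu/libvirt log on the host that ran the VM
    grep -i 'input/output error' /var/log/libvirt/qemu/proxy2.log
    # surrounding vdsm context for the failed startup handshake
    grep -i -B3 'handshake' /var/log/vdsm/vdsm.log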

> 
> Something similar to bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1033064, except that in my
> case SELinux is permissive!
> 
> 
> 
> On Wed, Dec 18, 2013 at 2:10 PM, Gabi C <gabicr at gmail.com> wrote:
>         in my case $brick_path = /data
>         
>         
>         getfattr -d /data returns NOTHING on both nodes!!!
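
Worth noting: getfattr only matches the user.* namespace by default, so
a plain "getfattr -d" prints nothing even when trusted.* attributes are
present. To actually see the gluster xattrs, run something like the
following as root:

    # dump all xattr namespaces, including trusted.*
    getfattr -d -m . -e hex /data
    # or query the volume id directly
    getfattr -n trusted.glusterfs.volume-id /data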
>         
>         
>         
>         
>         On Wed, Dec 18, 2013 at 1:46 PM, Fabian Deutsch
>         <fabiand at redhat.com> wrote:
>                 On Wednesday, 18.12.2013, at 13:26 +0200, Gabi C wrote:
>                 > Update on the GlusterFS issue
>                 >
>                 >
>                 > I managed to recover the lost volume after recreating
>                 > the same volume name with the same bricks, which
>                 > raised an error message, resolved by running, on both
>                 > nodes:
>                 >
>                 > setfattr -x trusted.glusterfs.volume-id $brick_path
>                 > setfattr -x trusted.gfid $brick_path
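
Pieced together, the recovery described above would look roughly like
the following. <volname> is a placeholder, the hostnames are taken from
the node prompts later in the thread, and $brick_path is /data per the
follow-up mail; treat this as a sketch, not a verified recipe:

    # recreate the volume with the same name and bricks
    gluster volume create <volname> replica 2 virtual4:/data virtual5:/data
    # if this fails because /data "is already part of a volume",
    # clear the stale xattrs on BOTH nodes and retry:
    setfattr -x trusted.glusterfs.volume-id /data
    setfattr -x trusted.gfid /data
    gluster volume create <volname> replica 2 virtual4:/data virtual5:/data
    gluster volume start <volname>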
>                 
>                 
>                 Hey,
>                 
>                 good that you could recover them.
>                 
>                 Could you please provide $brick_path and the output of
>                 getfattr -d $brick_path?
>                 
>                 The question is whether, and why, the fattrs are not
>                 stored.
>                 
>                 - fabian
>                 
>                 >
>                 >
>                 >
>                 > On Wed, Dec 18, 2013 at 12:12 PM, Gabi C
>                 <gabicr at gmail.com> wrote:
>                 >         node 1:
>                 >
>                 >         [root at virtual5 admin]# cat /config/files
>                 >         /etc/fstab
>                 >         /etc/shadow
>                 >         /etc/default/ovirt
>                 >         /etc/ssh/ssh_host_key
>                 >         /etc/ssh/ssh_host_key.pub
>                 >         /etc/ssh/ssh_host_dsa_key
>                 >         /etc/ssh/ssh_host_dsa_key.pub
>                 >         /etc/ssh/ssh_host_rsa_key
>                 >         /etc/ssh/ssh_host_rsa_key.pub
>                 >         /etc/rsyslog.conf
>                 >         /etc/libvirt/libvirtd.conf
>                 >         /etc/libvirt/passwd.db
>                 >         /etc/passwd
>                 >         /etc/sysconfig/network
>                 >         /etc/collectd.conf
>                 >         /etc/libvirt/qemu/networks
>                 >         /etc/ssh/sshd_config
>                 >         /etc/pki
>                 >         /etc/logrotate.d/ovirt-node
>                 >         /var/lib/random-seed
>                 >         /etc/iscsi/initiatorname.iscsi
>                 >         /etc/libvirt/qemu.conf
>                 >         /etc/sysconfig/libvirtd
>                 >         /etc/logrotate.d/libvirtd
>                 >         /etc/multipath.conf
>                 >         /etc/hosts
>                 >         /etc/sysconfig/network-scripts/ifcfg-enp3s0
>                 >         /etc/sysconfig/network-scripts/ifcfg-lo
>                 >         /etc/ntp.conf
>                 >         /etc/shadow
>                 >         /etc/vdsm-reg/vdsm-reg.conf
>                 >         /etc/shadow
>                 >         /etc/shadow
>                 >         /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>                 >         /etc/sysconfig/network-scripts/route-ovirtmgmt
>                 >         /etc/sysconfig/network-scripts/rule-ovirtmgmt
>                 >         /root/.ssh/authorized_keys
>                 >         /etc/vdsm/vdsm.id
>                 >         /etc/udev/rules.d/12-ovirt-iosched.rules
>                 >         /etc/vdsm/vdsm.conf
>                 >         /etc/sysconfig/iptables
>                 >         /etc/resolv.conf
>                 >         /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY
>                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0
>                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0.50
>                 >         /etc/glusterfs/glusterd.vol
>                 >         /etc/selinux/config
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >         node 2:
>                 >
>                 >
>                 >         [root at virtual4 ~]# cat /config/files
>                 >         /etc/fstab
>                 >         /etc/shadow
>                 >         /etc/default/ovirt
>                 >         /etc/ssh/ssh_host_key
>                 >         /etc/ssh/ssh_host_key.pub
>                 >         /etc/ssh/ssh_host_dsa_key
>                 >         /etc/ssh/ssh_host_dsa_key.pub
>                 >         /etc/ssh/ssh_host_rsa_key
>                 >         /etc/ssh/ssh_host_rsa_key.pub
>                 >         /etc/rsyslog.conf
>                 >         /etc/libvirt/libvirtd.conf
>                 >         /etc/libvirt/passwd.db
>                 >         /etc/passwd
>                 >         /etc/sysconfig/network
>                 >         /etc/collectd.conf
>                 >         /etc/libvirt/qemu/networks
>                 >         /etc/ssh/sshd_config
>                 >         /etc/pki
>                 >         /etc/logrotate.d/ovirt-node
>                 >         /var/lib/random-seed
>                 >         /etc/iscsi/initiatorname.iscsi
>                 >         /etc/libvirt/qemu.conf
>                 >         /etc/sysconfig/libvirtd
>                 >         /etc/logrotate.d/libvirtd
>                 >         /etc/multipath.conf
>                 >         /etc/hosts
>                 >         /etc/sysconfig/network-scripts/ifcfg-enp3s0
>                 >         /etc/sysconfig/network-scripts/ifcfg-lo
>                 >         /etc/shadow
>                 >         /etc/shadow
>                 >         /etc/vdsm-reg/vdsm-reg.conf
>                 >         /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>                 >         /etc/sysconfig/network-scripts/route-ovirtmgmt
>                 >         /etc/sysconfig/network-scripts/rule-ovirtmgmt
>                 >         /root/.ssh/authorized_keys
>                 >         /etc/shadow
>                 >         /etc/shadow
>                 >         /etc/vdsm/vdsm.id
>                 >         /etc/udev/rules.d/12-ovirt-iosched.rules
>                 >         /etc/sysconfig/iptables
>                 >         /etc/vdsm/vdsm.conf
>                 >         /etc/shadow
>                 >         /etc/resolv.conf
>                 >         /etc/ntp.conf
>                 >         /etc/sysconfig/network-scripts/ifcfg-VPO_IPPROXY
>                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0
>                 >         /etc/sysconfig/network-scripts/ifcfg-enp6s0.50
>                 >         /etc/glusterfs/glusterd.vol
>                 >         /etc/selinux/config
>                 >
>                 >
>                 >
>                 >
>                 >         On Wed, Dec 18, 2013 at 12:07 PM, Fabian
>                 >         Deutsch <fabiand at redhat.com> wrote:
>                 >                 On Wednesday, 18.12.2013, at 12:03
>                 >                 +0200, Gabi C wrote:
>                 >                 > So here it is:
>                 >                 >
>                 >                 >
>                 >                 > In the Volumes tab I added a new
>                 >                 > volume (Replicated), then added a
>                 >                 > storage domain on data/glusterfs.
>                 >                 > Then I imported VMs, ran them, and
>                 >                 > at some point, needing some space
>                 >                 > for a Red Hat Satellite instance, I
>                 >                 > decided to put both nodes in
>                 >                 > maintenance, stop them, add new
>                 >                 > disk devices and restart. But after
>                 >                 > the restart the gluster volume
>                 >                 > defined under the Volumes tab
>                 >                 > vanished!
>                 >
>                 >
>                 >                 Antoni,
>                 >
>                 >                 can you tell which log files to look
>                 >                 at to find out why that storage
>                 >                 domain vanished - from the Engine
>                 >                 side?
>                 >
>                 >                 And do you know which gluster-related
>                 >                 files are changed on the Node side?
>                 >
>                 >                 Gabi,
>                 >
>                 >                 could you please provide the contents
>                 >                 of /config/files on the Node?
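
For the Engine side, one plausible starting point (assuming the default
Engine log location) would be:

    grep -i gluster /var/log/ovirt-engine/engine.log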
>                 >
>                 >                 > GlusterFS data goes under the
>                 >                 > /data directory, which was
>                 >                 > automatically configured when I
>                 >                 > installed the node.
>                 >
>                 >
>                 >                 Yep, /data is on the Data LV - that
>                 >                 should be good.
>                 >
>                 >                 - fabian
>                 >
>                 >                 >
>                 >                 >
>                 >                 > On Wed, Dec 18, 2013 at 11:45 AM,
>                 >                 > Fabian Deutsch
>                 >                 > <fabiand at redhat.com> wrote:
>                 >                 >         On Wednesday, 18.12.2013,
>                 >                 >         at 11:42 +0200, Gabi C
>                 >                 >         wrote:
>                 >                 >         > Yes, it is the VM part...
>                 >                 >         > I just ran into an issue.
>                 >                 >         > My setup consists of 2
>                 >                 >         > nodes with glusterfs, and
>                 >                 >         > after adding a
>                 >                 >         > supplemental hard disk
>                 >                 >         > and rebooting I've lost
>                 >                 >         > the glusterfs volumes!
>                 >                 >
>                 >                 >
>                 >                 >         Could you explain exactly
>                 >                 >         what you configured?
>                 >                 >
>                 >                 >         >
>                 >                 >         > How can I persist any
>                 >                 >         > configuration on the
>                 >                 >         > node? I refer here to
>                 >                 >         > ''setenforce 0'' - for
>                 >                 >         > ssh login to work - and
>                 >                 >         > further:
>                 >                 >
>                 >                 >
>                 >                 >         How changes can be
>                 >                 >         persisted on Node is
>                 >                 >         described here:
>                 >                 >
>                 >                 >         http://www.ovirt.org/Node_Troubleshooting#Making_changes_on_the_host
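
On Node that boils down to the persist command. A short sketch for the
setenforce question above, assuming the stock Node tooling: make the
change permanent in /etc/selinux/config, then persist that file so it
survives reboots:

    # make SELinux permissive across reboots
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
    persist /etc/selinux/config
    # list everything currently persisted
    cat /config/files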
>                 >                 >
>                 >                 >         Do you know into what path
>                 >                 >         the glusterfs data goes?
>                 >                 >         Or is it written directly
>                 >                 >         onto a disk/LV?
>                 >                 >
>                 >                 >         - fabian
>                 >                 >
>                 >                 >         > "
>                 >                 >         > http://www.ovirt.org/Features/GlusterFS_Storage_Domain
>                 >                 >         >       * option
>                 >                 >         >         rpc-auth-allow-insecure
>                 >                 >         >         on ==> in glusterd.vol
>                 >                 >         >         (ensure you restart
>                 >                 >         >         the glusterd service
>                 >                 >         >         for this to take
>                 >                 >         >         effect)
>                 >                 >
>                 >                 >         >       * volume set <volname>
>                 >                 >         >         server.allow-insecure
>                 >                 >         >         on ==> (ensure you
>                 >                 >         >         stop and start the
>                 >                 >         >         volume for this to
>                 >                 >         >         take effect)"
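
Spelled out as commands, those two settings amount to the following;
<volname> is a placeholder and the service invocation assumes an
EL6-style Node:

    # on each node, add to /etc/glusterfs/glusterd.vol:
    #   option rpc-auth-allow-insecure on
    # then restart the management daemon:
    service glusterd restart
    # and per volume:
    gluster volume set <volname> server.allow-insecure on
    gluster volume stop <volname>
    gluster volume start <volname>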
>                 >                 >         >
>                 >                 >         >
>                 >                 >         > Thanks!
>                 >                 >         >
>                 >                 >         >
>                 >                 >         >
>                 >                 >         >
>                 >                 >         >
>                 >                 >         > On Wed, Dec 18, 2013 at
>                 >                 >         > 11:35 AM, Fabian Deutsch
>                 >                 >         > <fabiand at redhat.com>
>                 >                 >         > wrote:
>                 >                 >         >         On Wednesday,
>                 >                 >         >         18.12.2013, at
>                 >                 >         >         08:34 +0200, Gabi
>                 >                 >         >         C wrote:
>                 >                 >         >         > Hello!
>                 >                 >         >         >
>                 >                 >         >         >
>                 >                 >         >         > In order to
>                 >                 >         >         > increase disk
>                 >                 >         >         > space I want to
>                 >                 >         >         > add a new disk
>                 >                 >         >         > drive to the
>                 >                 >         >         > oVirt node.
>                 >                 >         >         > After adding it,
>                 >                 >         >         > should I proceed
>                 >                 >         >         > as "normal" -
>                 >                 >         >         > pvcreate,
>                 >                 >         >         > vgcreate,
>                 >                 >         >         > lvcreate and so
>                 >                 >         >         > on - or will
>                 >                 >         >         > this
>                 >                 >         >         > configuration
>                 >                 >         >         > not persist?
>                 >                 >         >
>                 >                 >         >
>                 >                 >         >         Hey Gabi,
>                 >                 >         >
>                 >                 >         >         Basically, plain
>                 >                 >         >         LVM is used in
>                 >                 >         >         Node - so yes,
>                 >                 >         >         pvcreate and
>                 >                 >         >         lvextend can be
>                 >                 >         >         used. What storage
>                 >                 >         >         part do you want
>                 >                 >         >         to extend? The
>                 >                 >         >         part where the VMs
>                 >                 >         >         reside? You will
>                 >                 >         >         also need to take
>                 >                 >         >         care to extend the
>                 >                 >         >         filesystem.
>                 >                 >         >
>                 >                 >         >         - fabian
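
For completeness, the plain-LVM route on Node would look roughly like
this. /dev/sdb stands in for the new disk; the Data LV matches the
earlier mail, while the HostVG volume group name and the ext4
filesystem are assumptions:

    # register the new disk and grow the VG and the LV backing /data
    pvcreate /dev/sdb
    vgextend HostVG /dev/sdb
    lvextend -l +100%FREE /dev/HostVG/Data
    # grow the filesystem on top (online resize works for ext4)
    resize2fs /dev/HostVG/Data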
>                 >                 >         >
>                 >                 >         >
>                 >                 >         >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 >
>                 
>                 
>         
>         
> 
> 
