[ovirt-users] hosted engine setup with Gluster fail
Kasturi Narra
knarra at redhat.com
Mon Aug 28 09:49:02 UTC 2017
Can you please check if you have any additional disk in the system? If you
have an additional disk other than the one used for the root partition, you
can specify that disk in the cockpit UI (I hope you are using the cockpit UI
to do the installation), provided it has no partitions on it. That will take
care of the installation and make your life easier, as cockpit + gdeploy
will configure the gluster bricks and volumes for you.
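As a quick sanity check before handing a disk to cockpit/gdeploy, something
like the following should show no partitions or leftover signatures (run it
on each host; /dev/sdb is only a placeholder for your spare disk):

  lsblk /dev/sdb       # should list no child partitions
  wipefs /dev/sdb      # lists any leftover filesystem/RAID signatures
  wipefs -a /dev/sdb   # clears them, if you are sure the disk is disposable

This is just a sketch; substitute whatever device you actually plan to use.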
On Mon, Aug 28, 2017 at 2:55 PM, Anzar Esmail Sainudeen <
anzar at it.thumbay.com> wrote:
> Dear Narra,
>
>
>
> All the partitions, PVs and the VG were created automatically during the
> initial setup.
>
>
>
> [root at ovirtnode1 ~]# vgs
>
>   VG  #PV #LV #SN Attr   VSize   VFree
>   onn   1  12   0 wz--n- 555.73g 14.93g
>
>
>
> All the space is mounted at the locations below; the bulk of the free
> space is mounted on /.
>
>
>
> Filesystem                                              Size  Used Avail Use% Mounted on
> /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G  483G   1% /
> devtmpfs                                                 44G     0   44G   0% /dev
> tmpfs                                                    44G  4.0K   44G   1% /dev/shm
> tmpfs                                                    44G   33M   44G   1% /run
> tmpfs                                                    44G     0   44G   0% /sys/fs/cgroup
> /dev/sda2                                               976M  135M  774M  15% /boot
> /dev/mapper/onn-home                                    976M  2.6M  907M   1% /home
> /dev/mapper/onn-tmp                                     2.0G  6.3M  1.8G   1% /tmp
> /dev/sda1                                               200M  9.5M  191M   5% /boot/efi
> /dev/mapper/onn-var                                      15G  1.8G   13G  13% /var
> /dev/mapper/onn-var--log                                7.8G  224M  7.2G   3% /var/log
> /dev/mapper/onn-var--log--audit                         2.0G   44M  1.8G   3% /var/log/audit
> tmpfs                                                   8.7G     0  8.7G   0% /run/user/0
>
>
>
> If we need any space, we would reduce the VG size and create a new one.
> (Is this correct?)
>
>
>
>
>
> If the above step is complicated, can you please suggest how to set up a
> GlusterFS data store in oVirt?
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge | IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699 | Tel: 06 7431333 | Extn: 1303
>
> Email: anzar at it.thumbay.com | Website: www.thumbay.com
>
>
>
>
>
>
>
> *From:* Kasturi Narra [mailto:knarra at redhat.com]
> *Sent:* Monday, August 28, 2017 1:14 PM
>
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> Yes, you can create it. I do not see any problems there.
>
>
>
> May I know how these VGs were created? If they were not created using
> gdeploy, then you will have to create the bricks manually from the new VG
> you have created.
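>
> Roughly, manual brick creation would look something like the following
> (only a sketch: the VG/LV names, sizes and mount point are placeholders,
> not taken from your setup):
>
>   lvcreate -L 100G -T glustervg/brickpool              # thin pool for bricks
>   lvcreate -V 100G -T glustervg/brickpool -n brick1    # thin LV for one brick
>   mkfs.xfs -i size=512 /dev/glustervg/brick1           # 512-byte inodes, per gluster guidance
>   mkdir -p /gluster/brick1
>   mount /dev/glustervg/brick1 /gluster/brick1
>
> plus an fstab entry so the brick mount survives a reboot.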
>
>
>
> On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
> anzar at it.thumbay.com> wrote:
>
> Dear Narra,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) can you please check that the disks which would be used for brick
> creation do not have labels or any partitions on them?
>
>
>
> Yes, I agree there are no labels or partitions on them. My doubt is
> whether it is possible to create the required brick partitions from the
> available 406.7G Linux LVM. The following is the physical volume and
> volume group information.
>
>
>
>
>
> [root at ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>   PV Name               /dev/sda3
>   VG Name               onn
>   PV Size               555.73 GiB / not usable 2.00 MiB
>   Allocatable           yes
>   PE Size               4.00 MiB
>   Total PE              142267
>   Free PE               3823
>   Allocated PE          138444
>   PV UUID               v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe
>
>
>
> [root at ovirtnode1 ~]# vgdisplay
>
>   --- Volume group ---
>   VG Name               onn
>   System ID
>   Format                lvm2
>   Metadata Areas        1
>   Metadata Sequence No  48
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                12
>   Open LV               7
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               555.73 GiB
>   PE Size               4.00 MiB
>   Total PE              142267
>   Alloc PE / Size       138444 / 540.80 GiB
>   Free PE / Size        3823 / 14.93 GiB
>   VG UUID               nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy
>
>
>
>
>
> I am thinking of reducing the VG size and creating a new VG for Gluster.
> Is that a good idea?
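> (From the vgdisplay above, the free space is 3823 free PEs x 4.00 MiB per
> PE, i.e. about 14.93 GiB, so roughly 15 GiB is all the VG currently has
> unallocated.)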
>
>
>
>
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge | IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699 | Tel: 06 7431333 | Extn: 1303
>
> Email: anzar at it.thumbay.com | Website: www.thumbay.com
>
>
>
>
>
>
>
> *From:* Kasturi Narra [mailto:knarra at redhat.com]
> *Sent:* Monday, August 28, 2017 9:48 AM
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> Hi,
>
>
>
> If I understand right, the gdeploy script is failing at [1]. There could
> be two possible reasons why it would fail.
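>
> (As an aside, the "'dict object' has no attribute 'rc'" part of the error
> is generic: if the shell script never actually runs, for example because
> the file is missing, the registered result has no return code for the
> 'result.rc != 0' conditional to read.)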
>
>
>
> 1) can you please check that the disks which would be used for brick
> creation do not have labels or any partitions on them?
>
>
>
> 2) can you please check if the path [1] exists? If it does not, can you
> please change the script path in the gdeploy.conf file to
> /usr/share/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> [1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
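>
> For reference, the script is invoked from a section of the generated
> gdeploy.conf that looks roughly like this (the -d/-h arguments below are
> placeholders for your disk and host names, so treat it only as a sketch):
>
>   [script1]
>   action=execute
>   file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h host1,host2,host3
>
> It is the file= line that would need to point at the path that actually
> exists on your nodes.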
>
>
>
> Thanks
>
> kasturi
>
>
>
> On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
> anzar at it.thumbay.com> wrote:
>
> Dear oVirt Team,
>
>
>
> I am trying to deploy the hosted engine setup with Gluster, and the
> hosted engine setup failed. The total number of hosts is 3.
>
>
>
>
>
> PLAY [gluster_servers] *********************************************************
>
>
>
> TASK [Run a shell script] ******************************************************
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
> fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
>
> to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry
>
>
>
> PLAY RECAP *********************************************************************
>
> ovirtnode2.thumbaytechlabs.int : ok=0  changed=0  unreachable=0  failed=1
> ovirtnode3.thumbaytechlabs.int : ok=0  changed=0  unreachable=0  failed=1
> ovirtnode4.thumbaytechlabs.int : ok=0  changed=0  unreachable=0  failed=1
>
>
>
>
>
> Please note my findings.
>
> 1. I still have doubts about where the bricks should be set up, because
> during oVirt node setup the installer automatically creates the partitions
> and mounts all the space.
>
> 2. The fdisk -l output:
>
> [root at ovirtnode4 ~]# fdisk -l
>
>
>
> WARNING: fdisk GPT support is currently new, and therefore in an
> experimental phase. Use at your own discretion.
>
>
>
> Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk label type: gpt
>
>
>
>
>
>  #         Start          End    Size  Type             Name
>  1          2048       411647    200M  EFI System       EFI System Partition
>  2        411648      2508799      1G  Microsoft basic
>  3       2508800    855463935  406.7G  Linux LVM
>
>
>
> Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB, 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-var--log: 8589 MB, 8589934592 bytes, 16777216 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-home: 1073 MB, 1073741824 bytes, 2097152 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-tmp: 2147 MB, 2147483648 bytes, 4194304 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-var--log--audit: 2147 MB, 2147483648 bytes, 4194304 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-ovirt--node--ng--4.1.5--0.20170821.0+1: 378.1 GB, 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
> 3. Is it possible to create LVM partitions from the 406.7G Linux LVM free
> space for the required Gluster size?
>
>
>
> Please suggest…
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge | IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699 | Tel: 06 7431333 | Extn: 1303
>
> Email: anzar at it.thumbay.com | Website: www.thumbay.com
>
>
>
>
>
>
>
>
>
>
>
>
>