[ovirt-users] hosted engine setup with Gluster fail
Anzar Esmail Sainudeen
anzar at it.thumbay.com
Mon Aug 28 08:40:25 UTC 2017
Dear Nara,
Thank you for your great reply.
1) Can you please check that the disks that will be used for brick creation do not have labels or any partitions on them?
Yes, I agree there are no labels or partitions on them. My doubt is whether it is possible to create the required brick partitions from the available 406.7G Linux LVM. The physical volume and volume group information follows.
[root@ovirtnode1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda3
VG Name onn
PV Size 555.73 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 142267
Free PE 3823
Allocated PE 138444
PV UUID v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe
[root@ovirtnode1 ~]# vgdisplay
--- Volume group ---
VG Name onn
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 48
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 12
Open LV 7
Max PV 0
Cur PV 1
Act PV 1
VG Size 555.73 GiB
PE Size 4.00 MiB
Total PE 142267
Alloc PE / Size 138444 / 540.80 GiB
Free PE / Size 3823 / 14.93 GiB
VG UUID nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy
I am thinking of reducing the VG size and creating a new VG for Gluster. Is that a good approach?
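As a rough illustration of the alternative, here is a minimal command sketch for carving a brick LV out of the free space already in the onn VG instead of reducing it. This is an assumption-laden sketch: the LV name, size, and mount point are hypothetical, it must run as root on the node itself, and the vgdisplay output above shows only ~14.93 GiB of free PEs, so a real brick of useful size would need the VG layout reworked first.

```shell
# Check free extents in the existing VG
# (from the vgdisplay output above: 3823 PE / 14.93 GiB free)
vgdisplay onn | grep -i free

# Create a logical volume for the Gluster brick from the free space
# (the name "gluster_brick1" and the 10G size are illustrative only)
lvcreate -L 10G -n gluster_brick1 onn

# Format with XFS; 512-byte inodes are commonly recommended for Gluster bricks
mkfs.xfs -i size=512 /dev/onn/gluster_brick1

# Mount it at a brick path (path is hypothetical)
mkdir -p /gluster_bricks/brick1
mount /dev/onn/gluster_brick1 /gluster_bricks/brick1
```

This avoids shrinking existing LVs, which is riskier; reducing the VG itself would additionally require pvresize/repartitioning.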
Anzar Esmail Sainudeen
Group Datacenter Incharge| IT Infra Division | Thumbay Group
P.O Box : 4184 | Ajman | United Arab Emirates.
Mobile: 055-8633699|Tel: 06 7431333 | Extn :1303
Email: anzar at it.thumbay.com | Website: www.thumbay.com
Disclaimer: This message contains confidential information and is intended only for the individual named. If you are not the named addressee, you are hereby notified that disclosing, copying, distributing or taking any action in reliance on the contents of this e-mail is strictly prohibited. Please notify the sender immediately by e-mail if you have received this e-mail by mistake, and delete this material. Thumbay Group accepts no liability for errors or omissions in the contents of this message, which arise as a result of e-mail transmission.
From: Kasturi Narra [mailto:knarra at redhat.com]
Sent: Monday, August 28, 2017 9:48 AM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail
Hi,
If I understand right, the gdeploy script is failing at [1]. There are two possible reasons why that would fail.
1) Can you please check that the disks that will be used for brick creation do not have labels or any partitions on them?
2) Can you please check whether the path [1] exists? If it does not, please change the path of the script in the gdeploy.conf file to /usr/share/gdeploy/scripts/grafton-sanity-check.sh.
[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
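For reference, a sketch of what the script section in gdeploy.conf would look like after that change (the section name and the -d/-h arguments are illustrative and depend on your generated configuration; the point is only the file= path):

```
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda -h ovirtnode2.thumbaytechlabs.int,ovirtnode3.thumbaytechlabs.int,ovirtnode4.thumbaytechlabs.int
```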
Thanks
kasturi
On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <anzar at it.thumbay.com> wrote:
Dear Team Ovirt,
I am trying to deploy the hosted engine setup with Gluster, and the hosted engine setup failed. The total number of hosts is 3 servers.
PLAY [gluster_servers] *********************************************************
TASK [Run a shell script] ******************************************************
fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry
PLAY RECAP *********************************************************************
ovirtnode2.thumbaytechlabs.int : ok=0 changed=0 unreachable=0 failed=1
ovirtnode3.thumbaytechlabs.int : ok=0 changed=0 unreachable=0 failed=1
ovirtnode4.thumbaytechlabs.int : ok=0 changed=0 unreachable=0 failed=1
Please note my findings.
1. I still have doubts about the brick setup area, because during oVirt node setup the installer automatically creates partitions and mounts all the space. Please find the #fdisk -l output below.
2.
[root@ovirtnode4 ~]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
# Start End Size Type Name
1 2048 411647 200M EFI System EFI System Partition
2 411648 2508799 1G Microsoft basic
3 2508800 855463935 406.7G Linux LVM
Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB, 378053591040 bytes, 738385920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-var--log: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-home: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-tmp: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-var--log--audit: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/onn-ovirt--node--ng--4.1.5--0.20170821.0+1: 378.1 GB, 378053591040 bytes, 738385920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
3. Is it possible to create LVM partitions from the 406.7G Linux LVM free space for the required Gluster size?
Please suggest.
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users