[ovirt-users] hosted engine setup with Gluster fail -

Anzar Esmail Sainudeen anzar at it.thumbay.com
Wed Sep 20 09:21:18 UTC 2017


Dear Kasturi,

 

Thank you for your support.

 

Your finding is correct: with that path it works, and the setup has moved on to the next stage of the oVirt engine installation.

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Technologies |Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: anzar at it.thumbay.com | Website: www.thumbay.com



 


 

From: Kasturi Narra [mailto:knarra at redhat.com] 
Sent: Monday, September 18, 2017 10:18 AM
To: Anzar Esmail Sainudeen
Cc: Johan Bernhardsson; users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail -

 

Hi,

 

    The path of the Gluster volume you should be specifying is ovirtnode1.opensource.int:/ovirtengine. Since there is no volume named '/mnt/volume-ovirtengine' (that is only the local mount point, not a volume name), the lookup failed. Could you please specify the path above and try again?
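    The setup expects host:/volname, where volname is the volume name as the gluster CLI reports it, not a filesystem path. As a quick check (the commands below are only illustrative; run them on any of the nodes):

    gluster volume list               # should print: ovirtengine
    gluster volume info ovirtengine   # should show Status: Started

    and then answer the storage prompt with ovirtnode1.opensource.int:/ovirtengine.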

 

Thanks

kasturi. 

 

On Sun, Sep 17, 2017 at 6:16 PM, Anzar Esmail Sainudeen <anzar at it.thumbay.com> wrote:

Dear Johan,

 

Thank you for your great support 

 

I set up a replica 3 Gluster volume and mounted it successfully.

 

[root at ovirtnode1 ~]# df -Th

Filesystem                                             Type            Size  Used Avail Use% Mounted on

/dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1 ext4             24G  3.2G   20G  14% /

devtmpfs                                               devtmpfs        462M     0  462M   0% /dev

tmpfs                                                  tmpfs           489M  4.0K  489M   1% /dev/shm

tmpfs                                                  tmpfs           489M   13M  476M   3% /run

tmpfs                                                  tmpfs           489M     0  489M   0% /sys/fs/cgroup

/dev/mapper/onn-tmp                                    ext4            2.0G  6.5M  1.8G   1% /tmp

/dev/mapper/glusterfs-glusterfs--data                  ext4             40G   49M   38G   1% /data/glusterfs

/dev/mapper/onn-home                                   ext4            976M  2.6M  907M   1% /home

/dev/sda1                                              ext4            976M  145M  765M  16% /boot

/dev/mapper/onn-var                                    ext4             15G  200M   14G   2% /var

/dev/mapper/onn-var--log                               ext4            7.8G   51M  7.3G   1% /var/log

/dev/mapper/onn-var--log--audit                        ext4            2.0G  7.0M  1.8G   1% /var/log/audit

ovirtnode1.opensource.int:/ovirtengine                 fuse.glusterfs   40G   49M   38G   1% /mnt/volume-ovirtengine
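(For reference, the volume is mounted with the GlusterFS FUSE client, with something along the lines of "mount -t glusterfs ovirtnode1.opensource.int:/ovirtengine /mnt/volume-ovirtengine"; the exact options may differ.)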

 

 

[root at ovirtnode1 ~]# gluster volume info ovirtengine

Volume Name: ovirtengine

Type: Replicate

Volume ID: 765ec486-e2bf-4d1d-a2a8-5118a22c3ea6

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: ovirtnode1.opensource.int:/data/glusterfs/ovirtengine

Brick2: ovirtnode2.opensource.int:/data/glusterfs/ovirtengine

Brick3: ovirtnode3.opensource.int:/data/glusterfs/ovirtengine

Options Reconfigured:

transport.address-family: inet

performance.readdir-ahead: on

nfs.disable: on

 

After that we started the oVirt engine installation and, at the storage domain prompt, specified the path ovirtnode1.opensource.int:/mnt/volume-ovirtengine. Unfortunately I got the error message "Failed to retrieve Gluster Volume info".

 

[root at ovirtnode1 ~]# ovirt-hosted-engine-setup                                                                                                                                                                                

[ INFO  ] Stage: Initializing                                                                                                                                                                                                 

[ INFO  ] Generating a temporary VNC password.                                                                                                                                                                                

[ INFO  ] Stage: Environment setup                                                                                                                                                                                            

          During customization use CTRL-D to abort.                                                                                                                                                                           

          Continuing will configure this host for serving as hypervisor and create a VM where you have to install the engine afterwards.                                                                                      

          Are you sure you want to continue? (Yes, No)[Yes]:                                                                                                                                                                  

[ INFO  ] Hardware supports virtualization                                                                                                                                                                                    

          Configuration files: []                                                                                                                                                                                             

          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170917151019-2a112a.log                                                                                                                    

          Version: otopi-1.6.2 (otopi-1.6.2-1.el7.centos)                                                                                                                                                                     

[ INFO  ] Detecting available oVirt engine appliances                                                                                                                                                                         

[ INFO  ] Stage: Environment packages setup                                                                                                                                                                                   

[ INFO  ] Stage: Programs detection                                                                                                                                                                                           

[ INFO  ] Stage: Environment setup                                                                                                                                                                                            

[ INFO  ] Stage: Environment customization                                                                                                                                                                                    

                                                                                                                                                                                                                              

          --== STORAGE CONFIGURATION ==--                                                                                                                                                                                     

                                                                                                                                                                                                                              

          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs                                                                                                                

[ INFO  ] Please note that Replica 3 support is required for the shared storage.                                                                                                                                              

          Please specify the full shared storage connection path to use (example: host:/path): ovirtnode1.opensource.int:/mnt/volume-ovirtengine                                                                              

[ ERROR ] Failed to retrieve Gluster Volume info                                                                                                                                                                              

[ ERROR ] Cannot access storage connection ovirtnode1.opensource.int:/mnt/volume-ovirtengine: Failed to retrieve Gluster Volume info [30806]: Volume does not exist                                                           

          Please specify the full shared storage connection path to use (example: host:/path): ovirtnode1.opensource.int:/data/glusterfs/ovirtengine                                                                          

[ ERROR ] Failed to retrieve Gluster Volume info                                                                                                                                                                              

[ ERROR ] Cannot access storage connection ovirtnode1.opensource.int:/data/glusterfs/ovirtengine: Failed to retrieve Gluster Volume info [30806]: Volume does not exist                                                       

          Please specify the full shared storage connection path to use (example: host:/path):                                                                                                                                

                                                                                               

So please advise.

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Technologies |Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: anzar at it.thumbay.com | Website: www.thumbay.com



 


 

From: Johan Bernhardsson [mailto:johan at kafit.se] 
Sent: Saturday, September 16, 2017 11:32 AM
To: Anzar Esmail Sainudeen; 'Kasturi Narra'; users at ovirt.org 
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail -

 

To use Gluster for the hosted engine storage you need replica 3, and in your configuration you only have 2 nodes, which is replica 2.

It could be that the engine install doesn't give a correct error message.
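If you want to keep the existing volumes, you should be able to grow them to replica 3 by adding a brick on a third node, roughly like this (the hostname and brick paths below are only an example based on your layout):

gluster peer probe ovirtnode3.opensource.int
gluster volume add-brick engine replica 3 ovirtnode3.opensource.int:/data/glusterfs/engine
gluster volume add-brick data replica 3 ovirtnode3.opensource.int:/data/glusterfs/data

Alternatively, recreate the volumes from scratch with "gluster volume create <name> replica 3 <brick1> <brick2> <brick3>".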

/Johan

On September 16, 2017 9:27:58 AM "Anzar Esmail Sainudeen" <anzar at it.thumbay.com> wrote:

Dear Team,

 

I tried to use gdeploy to set up GlusterFS for the oVirt engine installation. Unfortunately I got a lot of errors during the deployment.

 

So I tried to install GlusterFS manually. I successfully set up the Gluster volumes; the output for the volumes "engine" and "data" is below. But during the engine installation I hit the following problem:

 

Please specify the full shared storage connection path to use (example: host:/path): ovirtnode1.opensource.int:/engine, but it does not move on to the next step.

 

My Concerns:

 

1.     Is the specified path wrong?

 

2.     Should the Gluster volume be mounted on the oVirt node before installing the engine? If yes, how do I mount it?

 

 

 

Volume : engine

 

[root at ovirtnode1 ~]# gluster volume status engine

Status of volume: engine

Gluster process                             TCP Port  RDMA Port  Online  Pid

------------------------------------------------------------------------------

Brick ovirtnode1.opensource.int:/data/glust

erfs/engine                                 49155     0          Y       1997 

Brick ovirtnode2.opensource.int:/data/glust

erfs/engine                                 49155     0          Y       1959 

Self-heal Daemon on localhost               N/A       N/A        Y       1984 

Self-heal Daemon on ovirtnode2.opensource.i

nt                                          N/A       N/A        Y       1924 

 

Task Status of Volume engine

------------------------------------------------------------------------------

There are no active volume tasks

[root at ovirtnode1 ~]# gluster volume info engine

Volume Name: engine

Type: Replicate

Volume ID: 3f1b5239-d442-48e2-b074-12cb7f6ddb8f

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirtnode1.opensource.int:/data/glusterfs/engine

Brick2: ovirtnode2.opensource.int:/data/glusterfs/engine

Options Reconfigured:

nfs.disable: on

performance.readdir-ahead: on

transport.address-family: inet

 

 

Volume : data

 

[root at ovirtnode1 ~]# gluster volume info data

Volume Name: data

Type: Replicate

Volume ID: 62b4a031-23b9-482e-a4b5-a1ee0e24e4f4

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirtnode1.opensource.int:/data/glusterfs/data

Brick2: ovirtnode2.opensource.int:/data/glusterfs/data

Options Reconfigured:

nfs.disable: on

performance.readdir-ahead: on

transport.address-family: inet

[root at ovirtnode1 ~]# gluster volume status data

Status of volume: data

Gluster process                             TCP Port  RDMA Port  Online  Pid

------------------------------------------------------------------------------

Brick ovirtnode1.opensource.int:/data/glust

erfs/data                                   49154     0          Y       1992 

Brick ovirtnode2.opensource.int:/data/glust

erfs/data                                   49154     0          Y       1953 

Self-heal Daemon on localhost               N/A       N/A        Y       1984 

Self-heal Daemon on ovirtnode2.opensource.i

nt                                          N/A       N/A        Y       1924 

 

Task Status of Volume data

------------------------------------------------------------------------------

There are no active volume tasks

 

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Technologies |Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: anzar at it.thumbay.com | Website: www.thumbay.com



 


 

From: Kasturi Narra [mailto:knarra at redhat.com] 
Sent: Monday, August 28, 2017 1:49 PM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Can you please check whether you have an additional disk in the system? If there is a disk other than the one used for the root partition, with no partitions on it, you can specify that disk in the cockpit UI (I hope you are using the cockpit UI to do the installation). That will take care of the installation and make your life easier, as cockpit + gdeploy will configure the Gluster bricks and volumes for you.
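A quick way to check for such a disk (just an illustration) is:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

An unused disk shows up (for example as sdb) with no partitions and no mount points under it; that device name is what you would give to the cockpit/gdeploy wizard.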

 

On Mon, Aug 28, 2017 at 2:55 PM, Anzar Esmail Sainudeen <anzar at it.thumbay.com> wrote:

Dear Nara,

 

All the partitions, PVs and VGs were created automatically during the initial setup.

 

[root at ovirtnode1 ~]# vgs

  VG  #PV #LV #SN Attr   VSize   VFree 

  onn   1  12   0 wz--n- 555.73g 14.93g

 

All the space is mounted at the locations below; the free space is all allocated to /.

 

Filesystem                                              Size  Used Avail Use% Mounted on

/dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G  483G   1% /

devtmpfs                                                 44G     0   44G   0% /dev

tmpfs                                                    44G  4.0K   44G   1% /dev/shm

tmpfs                                                    44G   33M   44G   1% /run

tmpfs                                                    44G     0   44G   0% /sys/fs/cgroup

/dev/sda2                                               976M  135M  774M  15% /boot

/dev/mapper/onn-home                                    976M  2.6M  907M   1% /home

/dev/mapper/onn-tmp                                     2.0G  6.3M  1.8G   1% /tmp

/dev/sda1                                               200M  9.5M  191M   5% /boot/efi

/dev/mapper/onn-var                                      15G  1.8G   13G  13% /var

/dev/mapper/onn-var--log                                7.8G  224M  7.2G   3% /var/log

/dev/mapper/onn-var--log--audit                         2.0G   44M  1.8G   3% /var/log/audit

tmpfs                                                   8.7G     0  8.7G   0% /run/user/0

 

If we need more space, we would reduce the VG size and create a new one. (Is this correct?)

 

 

If the above step is complicated, can you please suggest how to set up a GlusterFS data store in oVirt?

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: anzar at it.thumbay.com | Website: www.thumbay.com



 


 

From: Kasturi Narra [mailto:knarra at redhat.com] 
Sent: Monday, August 28, 2017 1:14 PM


To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Yes, you can create it; I do not see any problems there.

 

May I know how these VGs were created? If they were not created using gdeploy, then you will have to create the bricks manually from the new VG you have created.
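If you go the manual route, brick creation would look roughly like this (the LV name, size, mount point and VG are only placeholders; XFS is the usual filesystem for Gluster bricks):

lvcreate -L 100G -n gluster_engine <vg_with_free_space>
mkfs.xfs /dev/<vg_with_free_space>/gluster_engine
mkdir -p /data/glusterfs/engine
mount /dev/<vg_with_free_space>/gluster_engine /data/glusterfs/engine

The mounted directory (or a subdirectory of it) is then what you pass as the brick path to "gluster volume create".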

 

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <anzar at it.thumbay.com> wrote:

Dear Nara,

 

Thank you for your great reply.

 

1) Can you please check that the disks that would be used for brick creation do not have labels or any partitions on them?

 

Yes, I agree that there is no unpartitioned disk available. My doubt is whether it is possible to create the required brick partitions from the available 406.7G Linux LVM partition. The physical volume and volume group information follows.

 

 

[root at ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name               /dev/sda3

  VG Name               onn

  PV Size               555.73 GiB / not usable 2.00 MiB

  Allocatable           yes 

  PE Size               4.00 MiB

  Total PE              142267

  Free PE               3823

  Allocated PE          138444

  PV UUID               v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root at ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name               onn

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  48

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                12

  Open LV               7

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               555.73 GiB

  PE Size               4.00 MiB

  Total PE              142267

  Alloc PE / Size       138444 / 540.80 GiB

  Free  PE / Size       3823 / 14.93 GiB

  VG UUID               nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking of reducing the VG size and creating a new VG for Gluster. Is that a good approach?

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: anzar at it.thumbay.com | Website: www.thumbay.com



 


 

From: Kasturi Narra [mailto:knarra at redhat.com] 
Sent: Monday, August 28, 2017 9:48 AM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Hi,

 

   If I understand right, the gdeploy script is failing at [1]. There are two possible reasons why that would fail.

 

1) Can you please check that the disks that would be used for brick creation do not have labels or any partitions on them?

 

2) Can you please check whether the path [1] exists? If it does not, can you please change the path of the script in the gdeploy.conf file to /usr/share/gdeploy/scripts/grafton-sanity-check.sh? (See the example snippet below.)

 

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
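For reference, the relevant section of gdeploy.conf would then look roughly like this (the disk name and host list are only placeholders, taken from your earlier mails):

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ovirtnode2.thumbaytechlabs.int,ovirtnode3.thumbaytechlabs.int,ovirtnode4.thumbaytechlabs.int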

 

Thanks

kasturi

 

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <anzar at it.thumbay.com> wrote:

Dear Team Ovirt,

 

I am trying to deploy the hosted engine with Gluster, and the hosted engine setup failed. The total number of hosts is 3 servers.

 

 

PLAY [gluster_servers] *********************************************************

 

TASK [Run a shell script] ******************************************************

fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}

fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}

fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}

            to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry

 

PLAY RECAP *********************************************************************

ovirtnode2.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1   

ovirtnode3.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1   

ovirtnode4.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1   

 

 

Please note my findings:

 

1.    I still have doubts about where the bricks should be set up, because the oVirt node installation automatically creates the partitions and mounts all the space.

2.    Please find the fdisk -l output below:

[root at ovirtnode4 ~]# fdisk -l

 

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

 

Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: gpt

 

 

#         Start          End    Size  Type            Name

1         2048       411647    200M  EFI System      EFI System Partition

2       411648      2508799      1G  Microsoft basic 

 3      2508800    855463935  406.7G  Linux LVM       

 

Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB, 378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var--log: 8589 MB, 8589934592 bytes, 16777216 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-home: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-tmp: 2147 MB, 2147483648 bytes, 4194304 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-var--log--audit: 2147 MB, 2147483648 bytes, 4194304 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

 

Disk /dev/mapper/onn-ovirt--node--ng--4.1.5--0.20170821.0+1: 378.1 GB, 378053591040 bytes, 738385920 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 131072 bytes / 262144 bytes

 

3.    Is it possible to create LVM volumes from the free space in the 406.7G Linux LVM partition for the required Gluster size?

 

Please suggest.

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: anzar at it.thumbay.com | Website: www.thumbay.com



 


 



 

 

 


 
