[ovirt-devel] adding new paths to iscsi data storage domains
Paul Dyer
pmdyermms at gmail.com
Mon Mar 16 17:23:04 UTC 2015
Bug 1202471 added.
On Sun, Mar 15, 2015 at 9:38 AM, Paul Dyer <pmdyermms at gmail.com> wrote:
> Hi,
>
> 1) The storage device has 2 controllers, and each controller has 4 NICs.
> All 8 NICs are connected to a layer-2 switch that has Ethernet connections
> to the hosts. The storage device NICs are enumerated 0, 1, 2, 3.
> Controller 0, NIC 0 is the primary target for a group that also includes
> controller 1, NIC 0. In /var/lib/iscsi/nodes/{IQN}/{nic}/default, the
> difference between c0n0 and c1n0 is:
> # diff 10.251.6.10?,3260,?/default
> 3c3
> < node.tpgt = 1
> ---
> > node.tpgt = 2
> 47c47
> < node.conn[0].address = 10.251.6.101
> ---
> > node.conn[0].address = 10.251.6.102
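>
> For reference, open-iscsi keeps one record directory per portal, named
> <ip>,<port>,<tpgt>, so a listing looks roughly like the following (the
> exact entries depend on what was discovered):
>
> # ls /var/lib/iscsi/nodes/{IQN}/
> 10.251.6.101,3260,1  10.251.6.102,3260,2  ...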
>
> The 4 groups of NICs are on different subnets. The Dell docs recommend
> this, and the storage device keeps packets separated by these subnets.
> For my setup, I have this config:
>         controller 0    controller 1    host-1          host-2
> nic0    10.251.6.101    10.251.6.102    10.251.6.135    10.251.6.136
> nic1    10.251.7.101    10.251.7.102    10.251.7.135    10.251.7.136
> nic2    10.251.8.101    10.251.8.102    10.251.8.135    10.251.8.136
> nic3    10.251.9.101    10.251.9.102    10.251.9.135    10.251.9.136
>
> On each virtualization host, I have one NIC configured on each of the 4
> subnets.
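>
> To sanity-check that each host really has one interface on each of the
> 4 storage subnets, something like this can be run on the host (the
> interface names and output below are only illustrative):
>
> # ip -4 -o addr show | grep '10\.251\.[6-9]\.'
> 4: em3    inet 10.251.6.135/24 ...
> 5: em4    inet 10.251.7.135/24 ...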
>
> 2) We are using RHEV 3.5. I have deployed this as round-robin, not as a
> bond. According to Dell support, iSCSI works best with round-robin,
> whereas FC NAS works best with a bond, so I followed their
> recommendations. The rdac driver is set up for this, and the prio
> values below separate the 8 NIC paths into 2 groups.
>
> # multipath -ll
> 36f01faf000d7ddeb000002085258bce5 dm-1 DELL,MD32xxi
> size=756G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
> |-+- policy='round-robin 0' prio=14 status=active
> | |- 7:0:0:1 sdc 8:32 active ready running
> | |- 8:0:0:1 sdj 8:144 active ready running
> | |- 11:0:0:1 sdu 65:64 active ready running
> | `- 12:0:0:1 sdv 65:80 active ready running
> `-+- policy='round-robin 0' prio=9 status=enabled
> |- 6:0:0:1 sdf 8:80 active ready running
> |- 10:0:0:1 sdk 8:160 active ready running
> |- 5:0:0:1 sdo 8:224 active ready running
> `- 9:0:0:1 sdt 65:48 active ready running
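>
> The matching multipath.conf device section for this layout (group the
> paths by RDAC priority, round-robin within each group) would look
> roughly like the stanza below; this is a sketch based on the typical
> MD32xx settings rather than a verbatim copy of the file in use:
>
> devices {
>     device {
>         vendor               "DELL"
>         product              "MD32xxi"
>         path_grouping_policy group_by_prio
>         prio                 rdac
>         path_checker         rdac
>         path_selector        "round-robin 0"
>         hardware_handler     "1 rdac"
>         failback             immediate
>         features             "3 queue_if_no_path pg_init_retries 50"
>     }
> }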
>
> Hope this helps explain my setup. I am not sure how to file a bug. Is
> this done on Bugzilla or somewhere else?
>
> Paul
>
>
> On Sun, Mar 15, 2015 at 7:44 AM, Elad Ben Aharon <ebenahar at redhat.com>
> wrote:
>
>> Hi Paul,
>>
>> I would like to know the following details:
>> 1) Are the hosts' NICs that connect to the storage server located in the
>> same network subnet as the storage server itself?
>> 2) Have you tried to deploy the connection to the storage server using
>> the 'iSCSI multipath' bond that is available in RHEV-3.4?
>>
>> ------------------------------
>> From: "Nir Soffer" <nsoffer at redhat.com>
>> To: "Paul Dyer" <pmdyermms at gmail.com>
>> Cc: devel at ovirt.org, "Elad Ben Aharon" <ebenahar at redhat.com>
>> Sent: Sunday, 15 March, 2015 12:54:44 PM
>>
>> Subject: Re: [ovirt-devel] adding new paths to iscsi data storage domains
>>
>> Adding Elad, who recently tested this feature, to add more info.
>>
>> ----- Original Message -----
>> > From: "Paul Dyer" <pmdyermms at gmail.com>
>> > To: "Nir Soffer" <nsoffer at redhat.com>
>> > Cc: devel at ovirt.org
>> > Sent: Friday, March 13, 2015 6:25:05 PM
>> > Subject: Re: [ovirt-devel] adding new paths to iscsi data storage domains
>> >
>> > Nir,
>> >
>> > We have added 2 more NICs to each virtualization host. In order to
>> > get this working, I had to add an after_network_setup hook. The
>> > shell script simply does "/sbin/iscsiadm -m node -L all", so that the
>> > extra targets are logged in again after a reboot.
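>> >
>> > For reference, the hook is just a one-liner; a minimal sketch (placed
>> > in the vdsm hook directory for that hook point, normally under
>> > /usr/libexec/vdsm/hooks/after_network_setup/) looks like:
>> >
>> > #!/bin/sh
>> > # after_network_setup hook: log in to all known iSCSI node records
>> > # so the extra paths come back after a reboot.
>> > /sbin/iscsiadm -m node -L all
>> > exit 0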
>> >
>> > I looked in the engine table storage_server_connections and found
>> > that only the iSCSI targets selected during the original storage
>> > domain creation were present. If ovirt-engine added rows here, then
>> > most of the work would already be done.
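>> >
>> > For example, the stored connections can be listed on the engine host
>> > with something like the query below (the column names are from memory
>> > and may differ slightly between versions):
>> >
>> > # su - postgres -c "psql engine -c 'select id, connection, iqn, port,
>> >       portal from storage_server_connections;'"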
>> >
>> > I say mostly, because the Dell MD3200i did not return exactly the
>> > correct portal target values. The device has 2 controllers with 4
>> > NICs each. On controller 0, ports 0,1,2,3 use portal target 1; on
>> > controller 1, the ports use portal target 2. After iscsiadm
>> > discovery, the portal targets for ports 1,2,3 were all 1. Port 0 had
>> > targets 1 and 2, correctly. I adjusted the values saved on the
>> > filesystem, and login/logoff now works fine.
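>> >
>> > To make that concrete, the mismatch shows up when grepping the stored
>> > records (paths trimmed; the lines below are only illustrative):
>> >
>> > # grep -r node.tpgt /var/lib/iscsi/nodes/{IQN}/
>> > .../10.251.6.102,3260,2/default:node.tpgt = 2     (port 0, correct)
>> > .../10.251.7.102,3260,1/default:node.tpgt = 1     (port 1, should be 2)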
>> >
>> > Paul
>> >
>> >
>> >
>> > On Fri, Feb 6, 2015 at 11:57 AM, Paul Dyer <pmdyermms at gmail.com> wrote:
>> >
>> > > First of all, thank you for your time. I must apologize that in this
>> > > install I am using RHEV 3.4.5. I will try to reproduce this on an
>> > > oVirt install. I just need to create some paths to iSCSI targets.
>> > >
>> > > 1. This configuration has 2 physical hosts, Dell PE-R715 servers,
>> > > with a Dell PowerVault MD3200i iSCSI data storage domain. The EM3
>> > > NIC was the original link; EM4 was the new connection to storage.
>> > >
>> > > 2. From the manager interface, I selected the Storage tab, then the
>> > > kvm5DataDomain, then Edit. In the popup, I added the IP address
>> > > under Discovery Targets and clicked the Discover button. Then I
>> > > clicked the login arrow on the newly discovered targets.
>> > >
>> > > I have attached the engine and vdsm logs. I was working on this at
>> > > about 11:40am on Feb 4th.
>> > >
>> > > When I set the host to maintenance mode, reboot it, and then
>> > > Activate it, the new paths do not get logged in.
>> > >
>> > > Thanks,
>> > > Paul
>> > >
>> > >
>> > >
>> > > On Fri, Feb 6, 2015 at 5:38 AM, Nir Soffer <nsoffer at redhat.com>
>> > > wrote:
>> > >
>> > >> ----- Original Message -----
>> > >> > From: "Paul Dyer" <pmdyermms at gmail.com>
>> > >> > To: devel at ovirt.org
>> > >> > Sent: Friday, February 6, 2015 12:20:23 AM
>> > >> > Subject: [ovirt-devel] adding new paths to iscsi data storage domains
>> > >> >
>> > >> > Hi,
>> > >> >
>> > >> > I have been reading the devel list for months now, and would
>> > >> > like to ask a question.
>> > >> >
>> > >> > In version 3.4.5, adding new paths to an existing iSCSI data
>> > >> > storage domain does not work from the manager.
>> > >>
>> > >> It works on oVirt 3.5 and master, and it should also work in all
>> > >> previous versions.
>> > >>
>> > >> Please open a bug for this:
>> > >> 1. Describe the configuration you are modifying
>> > >> 2. Describe the steps you take
>> > >> 3. Include engine log
>> > >> 4. Include vdsm log from the host trying to add new devices
>> > >>
>> > >> > I have been able to add the paths from the command line with
>> > >> > "iscsiadm -m discovery -t st" and "iscsiadm -m node -L all".
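>> > >> >
>> > >> > For completeness, the manual sequence was roughly the following
>> > >> > (the portal address is just an example):
>> > >> >
>> > >> > # iscsiadm -m discovery -t st -p 10.251.7.101:3260
>> > >> > # iscsiadm -m node -L all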
>> > >> >
>> > >> > Is there any plan to allow for adding new storage paths after
>> > >> > the data domain has been created?
>> > >> >
>> > >> > Thanks,
>> > >> > Paul
>> > >> >
>> > >> >
>> > >> > --
>> > >> > Paul Dyer,
>> > >> > Mercury Consulting Group, RHCE
>> > >> > 504-302-8750
>> > >> >
>> > >> > _______________________________________________
>> > >> > Devel mailing list
>> > >> > Devel at ovirt.org
>> > >> > http://lists.ovirt.org/mailman/listinfo/devel
>> > >>
>> > >
>> > >
>> > >
>> > > --
>> > > Paul Dyer,
>> > > Mercury Consulting Group, RHCE
>> > > 504-302-8750
>> > >
>> >
>> >
>> >
>> > --
>> > Paul Dyer,
>> > Mercury Consulting Group, RHCE
>> > 504-302-8750
>> >
>>
>>
>
>
> --
> Paul Dyer,
> Mercury Consulting Group, RHCE
> 504-302-8750
>
--
Paul Dyer,
Mercury Consulting Group, RHCE
504-302-8750