Adding new paths to iSCSI data storage domains

Hi,

I have been reading the devel list for months now, and would like to ask a question. In version 3.4.5, adding new paths to an existing iSCSI data storage domain does not work from the manager. I have been able to add the paths from the command line with "iscsiadm -m discovery -t st" and "iscsiadm -m node -L all". Is there any plan to allow adding new storage paths after the data domain has been created?

Thanks,
Paul

--
Paul Dyer, Mercury Consulting Group, RHCE
504-302-8750
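For reference, the manual workaround described here amounts to two open-iscsi commands run on each host (a sketch; the portal address is a placeholder for a storage controller IP):

    # Discover targets offered by the portal (sendtargets discovery).
    iscsiadm -m discovery -t st -p 10.0.0.1:3260
    # Log in to all node records open-iscsi now knows about.
    iscsiadm -m node -L all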

----- Original Message -----
From: "Paul Dyer" <pmdyermms@gmail.com> To: devel@ovirt.org Sent: Friday, February 6, 2015 12:20:23 AM Subject: [ovirt-devel] adding new paths to iscsi data storage domains
> Hi,
>
> I have been reading the devel list for months now, and would like to ask a question.
>
> In version 3.4.5, adding new paths to an existing iSCSI data storage domain does not work from the manager.
It works on oVirt 3.5 and master, and it should work in all previous versions as well. Please open a bug for this:
1. Describe the configuration you are modifying
2. Describe the steps you take
3. Include the engine log
4. Include the vdsm log from the host trying to add new devices

First of all, thank you for your time. I must apologize that in this install, I am using RHEV 3.4.5. I will try to reproduce this on an oVirt install; I just need to create some paths to iSCSI targets.

1. This configuration has 2 physical hosts, Dell PE-R715 servers, with a Dell PowerVault MD3200i iSCSI data storage domain. The EM3 nic was the original link; EM4 was the new connection to storage.

2. From the manager interface, I selected the Storage tab, then kvm5DataDomain, then Edit. From the popup, I added the IP address under Discovery Targets and clicked the Discover button. Then I clicked the login arrow on the newly discovered targets.

I have attached the engine and vdsm logs. I was working on this at about 11:40am Feb 4th. When setting the host to maintenance mode, then rebooting and activating, the new paths do not get a login.

Thanks,
Paul

Nir,

we have added 2 more nics to each virtualization host. In order to get this working, I had to add an after_network_setup hook. The shell script simply does "/sbin/iscsiadm -m node -L all", to get the extra targets logged in after reboot.

I looked in the engine table storage_server_connections and found that only the iSCSI targets selected during the original storage domain create were present. If ovirt-engine added rows here, then most of the work would have been done.

I say most, because the Dell MD3200i did not return exactly the correct portal target values. The device has 2 controllers, with 4 nics each. Controller 0, ports 0,1,2,3 uses portal target 1; on controller 1, the ports use portal target 2. After iscsiadm discovery, the portal targets for ports 1, 2, 3 were all 1. Port 0 had targets 1 and 2, correctly. I adjusted the values saved on the filesystem, and login/logoff works fine.

Paul
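A minimal sketch of such a hook, assuming the standard VDSM hook directory (verify the path on your installation; the file name is arbitrary but the script must be executable):

    #!/bin/sh
    # /usr/libexec/vdsm/hooks/after_network_setup/50_iscsi_login (assumed path)
    # Log in to every known iSCSI node record after networking is set up,
    # so that paths added outside the engine come back after a reboot.
    # Exit 0 even if some records already have active sessions and the
    # login reports errors for them.
    /sbin/iscsiadm -m node -L all || true
    exit 0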

----- Original Message -----
From: "Paul Dyer" <pmdyermms@gmail.com> To: "Nir Soffer" <nsoffer@redhat.com> Cc: devel@ovirt.org Sent: Friday, March 13, 2015 6:25:05 PM Subject: Re: [ovirt-devel] adding new paths to iscsi data storage domains
Please open a bug for this so we can track this issue.

Nir

Adding Elad, who tested this feature recently, to add more info.
From: "Paul Dyer" <pmdyermms@gmail.com> To: "Nir Soffer" <nsoffer@redhat.com> Cc: devel@ovirt.org Sent: Friday, March 13, 2015 6:25:05 PM Subject: Re: [ovirt-devel] adding new paths to iscsi data storage domains
Nir,
we have added 2 more nics to each virtualization host. In order to get this working, I had to add an after_network_setup hook. The shell script simply does "/sbin/iscsiadm -m node -L all", to get the extra targets login after reboot.
I looked in the engine table storage_server_connections and found that only the iscsi targets selected during the original storage domain create were present. If ovirt-engine added rows here, then most of the work would have been done.
I say mostly, because the Dell MD3200i did not return exactly the correct portal target values. The device has 2 controllers, with 4 nics each. Controller 0, ports 0,1,2,3 uses portal target 1. Controller 1, the ports uses portal target 2. After iscsiadm discovery, the portal targets for ports 1,2,3 were all 1. Ports 0 had targets 1 and 2, correctly. I adjusted the values saved on the filesystem, and login/logoff works fine.
Paul
On Fri, Feb 6, 2015 at 11:57 AM, Paul Dyer <pmdyermms@gmail.com> wrote:
First of all, thank you for your time. I must apologize that in this install, I am using RHEV 3.4.5. I will try to reproduce this on an ovirt install. I just need to create some paths to iscsi targets.
1. This configuration has 2 physical hosts, Dell PE-R715 servers, with a Dell PowerVault MD3200i iSCSI data storage domain. The EM3 nic was the original link. EM4 was the new connection to storage.
2. From the manager interface, I selected Storage tab, then the kvm5DataDomain, then edit. From the popup, I added the IP address under Discovery Targets, then clicked the Discover button. Then, clicked the login arrow on the new targets discovered.
I have attached the engine and vdsm logs. I was working on this at about 11:40am Feb 4th.
When setting the host in maintenance mode, then reboot, and Activate, the new paths do not get a login.
Thanks, Paul
On Fri, Feb 6, 2015 at 5:38 AM, Nir Soffer <nsoffer@redhat.com> wrote:
----- Original Message -----
From: "Paul Dyer" <pmdyermms@gmail.com> To: devel@ovirt.org Sent: Friday, February 6, 2015 12:20:23 AM Subject: [ovirt-devel] adding new paths to iscsi data storage domains
Hi,
I have been reading the devel list for months now, and would like to ask a question.
In version 3.4.5, adding new paths to an existing iSCSI data storage domain does not work from the manager.
It works on ovirt 3.5 and master and it should work also in all previous versions.
Please open a bug for this: 1. Describe the configuration you are modifying 2. Describe the steps you take 3. Include engine log 4. Include vdsm log from the host trying to add new devices
I have been able to add the paths with command line "iscsiadm -m discovery -t st" and "iscsiadm -m node -L all".
Is there any plan to allow for adding new storage paths after the data domain has been created?
Thanks, Paul
-- Paul Dyer, Mercury Consulting Group, RHCE 504-302-8750
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
-- Paul Dyer, Mercury Consulting Group, RHCE 504-302-8750
-- Paul Dyer, Mercury Consulting Group, RHCE 504-302-8750

Hi Paul,

I would like to know the following details:

1) Are the hosts' NICs connected to the storage server located in the same network subnet as the storage server itself?
2) Have you tried to deploy the connection to the storage server using the 'iSCSI multipath' bond that is available in RHEV-3.4?
From: "Paul Dyer" <pmdyermms@gmail.com> To: "Nir Soffer" <nsoffer@redhat.com> Cc: devel@ovirt.org Sent: Friday, March 13, 2015 6:25:05 PM Subject: Re: [ovirt-devel] adding new paths to iscsi data storage domains
Nir,
we have added 2 more nics to each virtualization host. In order to get this working, I had to add an after_network_setup hook. The shell script simply does "/sbin/iscsiadm -m node -L all", to get the extra targets login after reboot.
I looked in the engine table storage_server_connections and found that only the iscsi targets selected during the original storage domain create were present. If ovirt-engine added rows here, then most of the work would have been done.
I say mostly, because the Dell MD3200i did not return exactly the correct portal target values. The device has 2 controllers, with 4 nics each. Controller 0, ports 0,1,2,3 uses portal target 1. Controller 1, the ports uses portal target 2. After iscsiadm discovery, the portal targets for ports 1,2,3 were all 1. Ports 0 had targets 1 and 2, correctly. I adjusted the values saved on the filesystem, and login/logoff works fine.
Paul
On Fri, Feb 6, 2015 at 11:57 AM, Paul Dyer <pmdyermms@gmail.com> wrote:
First of all, thank you for your time. I must apologize that in this install, I am using RHEV 3.4.5. I will try to reproduce this on an ovirt install. I just need to create some paths to iscsi targets.
1. This configuration has 2 physical hosts, Dell PE-R715 servers, with a Dell PowerVault MD3200i iSCSI data storage domain. The EM3 nic was the original link. EM4 was the new connection to storage.
2. From the manager interface, I selected Storage tab, then the kvm5DataDomain, then edit. From the popup, I added the IP address under Discovery Targets, then clicked the Discover button. Then, clicked the login arrow on the new targets discovered.
I have attached the engine and vdsm logs. I was working on this at about 11:40am Feb 4th.
When setting the host in maintenance mode, then reboot, and Activate, the new paths do not get a login.
Thanks, Paul
On Fri, Feb 6, 2015 at 5:38 AM, Nir Soffer <nsoffer@redhat.com> wrote:
----- Original Message -----
From: "Paul Dyer" <pmdyermms@gmail.com> To: devel@ovirt.org Sent: Friday, February 6, 2015 12:20:23 AM Subject: [ovirt-devel] adding new paths to iscsi data storage domains
Hi,
I have been reading the devel list for months now, and would like to ask a question.
In version 3.4.5, adding new paths to an existing iSCSI data storage domain does not work from the manager.
It works on ovirt 3.5 and master and it should work also in all previous versions.
Please open a bug for this: 1. Describe the configuration you are modifying 2. Describe the steps you take 3. Include engine log 4. Include vdsm log from the host trying to add new devices
I have been able to add the paths with command line "iscsiadm -m discovery -t st" and "iscsiadm -m node -L all".
Is there any plan to allow for adding new storage paths after the data domain has been created?
Thanks, Paul
-- Paul Dyer, Mercury Consulting Group, RHCE 504-302-8750
_______________________________________________ Devel mailing list Devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/devel
-- Paul Dyer, Mercury Consulting Group, RHCE 504-302-8750
-- Paul Dyer, Mercury Consulting Group, RHCE 504-302-8750
<div>I would like to know the following details:<br></div><div>1) Are the = hosts's NICs connected to the storage server located in the same network su= bnet as the storage server itself?<br></div><div>2) Have you tried to deplo= y the connection to the storage server using the 'iSCSI multipath' bond tha= t available in RHEV-3.4? <br><br></div><hr id=3D"zwchr"><div style=3D"color= :#000;font-weight:normal;font-style:normal;text-decoration:none;font-family= :Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Nir Soffer" <= nsoffer@redhat.com><br><b>To: </b>"Paul Dyer" <pmdyermms@gmail.com>= ;<br><b>Cc: </b>devel@ovirt.org, "Elad Ben Aharon" <ebenahar@redhat.com&= gt;<br><b>Sent: </b>Sunday, 15 March, 2015 12:54:44 PM<br><b>Subject: </b>R= e: [ovirt-devel] adding new paths to iscsi data storage domains<br><div><br= </div>Adding Elad who tested this feature lately to add more info.<br><div= <br></div>----- Original Message -----<br>> From: "Paul Dyer" <pmdye= rmms@gmail.com><br>> To: "Nir Soffer" <nsoffer@redhat.com><br>&= gt; Cc: devel@ovirt.org<br>> Sent: Friday, March 13, 2015 6:25:05 PM<br>= > Subject: Re: [ovirt-devel] adding new paths to iscsi data storage doma= ins<br>> <br>> Nir,<br>> <br>> we have added 2 more nics to eac= h virtualization host. In order to get<br>> this working, I= had to add an after_network_setup hook. The shell script<br>> si= mply does "/sbin/iscsiadm -m node -L all", to get the extra targets<b= r>> login after reboot.<br>> <br>> I looked in the engine table st= orage_server_connections and found that only<br>> the iscsi targets sele= cted during the original storage domain create were<br>> present. = If ovirt-engine added rows here, then most of the work would<br>> have = been done.<br>> <br>> I say mostly, because the Dell MD3200i did not = return exactly the correct<br>> portal target values. The device = has 2 controllers, with 4 nics each.<br>> Controller 0, ports 0,1,2,3 us= es portal target 1. Controller 1, the<br>> ports uses porta= l target 2. After iscsiadm discovery, the portal targets<br>> for=
------=_Part_26974475_504505131.1426423449250 Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: quoted-printable <html><body><div style=3D"font-family: times new roman, new york, times, se= rif; font-size: 12pt; color: #000000"><div>Hi Paul,<br></div><div><br></div= ports 1,2,3 were all 1. Ports 0 had targets 1 and 2, correctl= y. I<br>> adjusted the values saved on the filesystem, and = login/logoff works fine.<br>> <br>> Paul<br>> <br>> <br>> <b= r>> On Fri, Feb 6, 2015 at 11:57 AM, Paul Dyer <pmdyermms@gmail.com&g= t; wrote:<br>> <br>> > First of all, thank you for your time. &nbs= p; I must apologize that in this<br>> > install, I am using RHEV 3.4.= 5. I will try to reproduce this on an ovirt<br>> > insta= ll. I just need to create some paths to iscsi targets.<br>> ><= br>> > 1. This configuration has 2 physical hosts, Dell PE-R71= 5 servers, with a<br>> > Dell PowerVault MD3200i iSCSI data storage d= omain. The EM3 nic was the<br>> > original link. EM4 wa= s the new connection to storage.<br>> ><br>> > 2. From t= he manager interface, I selected Storage tab, then the<br>> > kvm5Dat= aDomain, then edit. From the popup, I added the IP address under<br>= > > Discovery Targets, then clicked the Discover button.  = ;Then, clicked the<br>> > login arrow on the new targets discovered.<= br>> ><br>> > I have attached the engine and vdsm logs. = I was working on this at about<br>> > 11:40am Feb 4th.<br>> ><b= r>> > When setting the host in maintenance mode, then reboot, and Act= ivate, the<br>> > new paths do not get a login.<br>> ><br>> = > Thanks,<br>> > Paul<br>> ><br>> ><br>> ><br>&g= t; > On Fri, Feb 6, 2015 at 5:38 AM, Nir Soffer <nsoffer@redhat.com&g= t; wrote:<br>> ><br>> >> ----- Original Message -----<br>>= ; >> > From: "Paul Dyer" <pmdyermms@gmail.com><br>> >&= gt; > To: devel@ovirt.org<br>> >> > Sent: Friday, February 6= , 2015 12:20:23 AM<br>> >> > Subject: [ovirt-devel] adding new = paths to iscsi data storage domains<br>> >> ><br>> >> = > Hi,<br>> >> ><br>> >> > I have been reading th= e devel list for months now, and would like to<br>> >> ask a<br>&g= t; >> > question.<br>> >> ><br>> >> > In v= ersion 3.4.5, adding new paths to an existing iSCSI data storage<br>> &g= t;> domain<br>> >> > does not work from the manager.<br>>= >><br>> >> It works on ovirt 3.5 and master and it should w= ork also in all previous<br>> >> versions.<br>> >><br>>= ; >> Please open a bug for this:<br>> >> 1. Describe the con= figuration you are modifying<br>> >> 2. Describe the steps you tak= e<br>> >> 3. Include engine log<br>> >> 4. Include vdsm l= og from the host trying to add new devices<br>> >><br>> >>= ; > I have been able to add the paths with<br>> >> > command= line "iscsiadm -m discovery -t st" and "iscsiadm -m node -L<br>> >&g= t; all".<br>> >> ><br>> >> > Is there any plan to a= llow for adding new storage paths after the data<br>> >> domain<br=
> >> > has been created?<br>> >> ><br>> >>= ; > Thanks,<br>> >> > Paul<br>> >> ><br>> >= ;> ><br>> >> > --<br>> >> > Paul Dyer,<br>>= ; >> > Mercury Consulting Group, RHCE<br>> >> > 504-30= 2-8750<br>> >> ><br>> >> > ________________________= _______________________<br>> >> > Devel mailing list<br>> &g= t;> > Devel@ovirt.org<br>> >> > http://lists.ovirt.org/ma= ilman/listinfo/devel<br>> >><br>> ><br>> ><br>> >= ;<br>> > --<br>> > Paul Dyer,<br>> > Mercury Consulting G= roup, RHCE<br>> > 504-302-8750<br>> ><br>> <br>> <br>>= <br>> --<br>> Paul Dyer,<br>> Mercury Consulting Group, RHCE<br>&= gt; 504-302-8750<br>> <br></div><div><br></div></div></body></html> ------=_Part_26974475_504505131.1426423449250--

Hi,

1) The storage device has 2 controllers. Each controller has 4 nics. All 8 nics are connected to a level 2 switch that has ethernet connections to the hosts. The storage device nics are enumerated 0, 1, 2, 3. Controller 0, nic 0 is the primary target for a group which includes controller 1, nic 0. In /var/lib/iscsi/nodes/{IQN}/{nic}/default, the difference between c0n0 and c1n0 is:

    # diff 10.251.6.10?,3260,?/default
    3c3
    < node.tpgt = 1
    ---
    > node.tpgt = 2
    47c47
    < node.conn[0].address = 10.251.6.101
    ---
    > node.conn[0].address = 10.251.6.102
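Rather than hand-editing the files under /var/lib/iscsi/nodes, the same adjustment can likely be made with iscsiadm's update operation (a sketch; the IQN is a placeholder and the record names must match your discovery output):

    # Rewrite the target portal group tag in the node record for
    # controller 1, nic 0 (placeholder IQN shown).
    iscsiadm -m node -T iqn.1984-05.com.dell:powervault.md3200i.example \
        -p 10.251.6.102:3260 -o update -n node.tpgt -v 2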
The 4 groups of nics are on different subnets. Dell docs recommend this, and the storage device keeps packets separated by these subnets. For my setup, I have this config:

              controller 0    controller 1    host-1          host-2
    nic0      10.251.6.101    10.251.6.102    10.251.6.135    10.251.6.136
    nic1      10.251.7.101    10.251.7.102    10.251.7.135    10.251.7.136
    nic2      10.251.8.101    10.251.8.102    10.251.8.135    10.251.8.136
    nic3      10.251.9.101    10.251.9.102    10.251.9.135    10.251.9.136

On each virtualization host, I have 1 nic configured on each of the 4 subnets.

2) We are using RHEV 3.5. I have deployed this as round-robin, not bond. According to Dell support, iSCSI works best with round-robin, whereas FC NAS works best with bond, so I follow their recommendations. The rdac driver is set up for this, and the prio values below separate the 8 nic paths into 2 groups.

    # multipath -ll
    36f01faf000d7ddeb000002085258bce5 dm-1 DELL,MD32xxi
    size=756G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
    |-+- policy='round-robin 0' prio=14 status=active
    | |- 7:0:0:1  sdc 8:32   active ready running
    | |- 8:0:0:1  sdj 8:144  active ready running
    | |- 11:0:0:1 sdu 65:64  active ready running
    | `- 12:0:0:1 sdv 65:80  active ready running
    `-+- policy='round-robin 0' prio=9 status=enabled
      |- 6:0:0:1  sdf 8:80   active ready running
      |- 10:0:0:1 sdk 8:160  active ready running
      |- 5:0:0:1  sdo 8:224  active ready running
      `- 9:0:0:1  sdt 65:48  active ready running

Hope this helps explain my setup. I am not sure how to file a bug. Is this done on bugzilla or somewhere else?

Paul
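For anyone reproducing this layout, a group_by_prio arrangement like the one above typically comes from a multipath.conf device stanza along these lines (an illustration based on commonly published MD32xxi settings, not the actual file from this setup):

    devices {
        device {
            vendor                "DELL"
            product               "MD32xxi"
            hardware_handler      "1 rdac"
            prio                  rdac
            path_checker          rdac
            path_grouping_policy  group_by_prio
            path_selector         "round-robin 0"
            failback              immediate
            no_path_retry         30
        }
    }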

On 15/03/15 15:38, Paul Dyer wrote:
> Hope this helps explain my setup. I am not sure how to file a bug. Is this done on bugzilla or somewhere else?
Yes, here:
https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

HTH

--
Mit freundlichen Grüßen / Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de

Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

Bug 1202471 added.
participants (4)
- Elad Ben Aharon
- Nir Soffer
- Paul Dyer
- Sven Kieske