used bandwidth when live migrating

Hi all,

We use a dedicated 10G VLAN link in production for live migration. In the cluster options tab I set the migration bandwidth limit to 10000 Mbps. Everything works as expected and the 25 VMs on a host now migrate in a few seconds (exactly 13), but I am not able to measure the bandwidth that is really consumed. I want to evaluate this because my goal is to dedicate a VLAN for Gluster on the same 10G NIC, and I don't want an overload issue with Gluster when VM migrations happen.

So my questions are: how does live migration work? Is it a RAM-to-RAM transfer between two hosts? Is migration bandwidth limited anywhere by disk I/O, or only by the NIC capabilities? Could 10 Gbps be fully used for such traffic? What would you advise to make Gluster and migration work on the same NIC (QoS?)

Thank you for your help.

--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
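One way to see the bandwidth that is really consumed is to sample the kernel's per-interface byte counters on the source host while a migration is running. A minimal sketch, assuming the migration VLAN sits on an interface called em1.100 (the interface name is an assumption; "sar -n DEV 1" from the sysstat package reports the same numbers):

    import time

    IFACE = "em1.100"   # assumption: VLAN interface carrying the migration network on the source host

    def read_counter(name):
        # cumulative byte counters kept by the kernel for this interface
        with open("/sys/class/net/%s/statistics/%s" % (IFACE, name)) as f:
            return int(f.read())

    prev_tx, prev_rx = read_counter("tx_bytes"), read_counter("rx_bytes")
    while True:
        time.sleep(1)
        tx, rx = read_counter("tx_bytes"), read_counter("rx_bytes")
        # difference over one second, converted from bytes/s to Mbit/s
        print("tx %6.0f Mbit/s   rx %6.0f Mbit/s" % ((tx - prev_tx) * 8 / 1e6,
                                                     (rx - prev_rx) * 8 / 1e6))
        prev_tx, prev_rx = tx, rx

Run it on the source host and start a migration; on a VLAN dedicated to migration, the tx rate is essentially the migration traffic, and the same counters show up as rx on the destination host.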

On 25/11/2016 at 10:28, Nathanaël Blanchet wrote:
What would you advise to make Gluster and migration work on the same NIC (QoS?)

Is this what I need? https://www.ovirt.org/develop/release-management/features/network/detailed-h...

Cluster/DC (Data Center) - control the traffic related to a specific logical network throughout the entire cluster/DC, including through its infrastructure (e.g. L2 switches). Cluster/DC-wide QoS remains to be handled in the future.

It does not seem to be present yet in 4.0.

On 25 Nov 2016, at 11:08, Nathanaël Blanchet <blanchet@abes.fr> wrote:
Is this what I need? https://www.ovirt.org/develop/release-management/features/network/detailed-h...
Even the existing QoS capabilities should be enough. Separating storage and migration traffic onto different logical networks is indeed a good idea. You can limit the migration bandwidth if it's not critical; do you have any specific requirements there?
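To put rough numbers on the questions above: with the VM disks on shared storage, live migration is essentially a pre-copy transfer of guest RAM (plus pages dirtied while copying) from the source to the destination host over the migration network, so in practice the limit is the link and the configured cap rather than disk I/O. A back-of-envelope sketch of what a cap costs in evacuation time, with every figure below an assumption chosen only for illustration:

    # Back-of-envelope: how long evacuating a host takes under a migration bandwidth cap.
    # Every figure below is an assumption for illustration, not a measurement.

    GBIT = 1e9 / 8              # bytes per second for 1 Gbit/s

    vms          = 25           # VMs to migrate off the host
    ram_per_vm   = 4 * 1024**3  # assumed 4 GiB of guest RAM actually in use per VM
    dirty_factor = 1.10         # assumed ~10% of pages re-sent because they were dirtied mid-copy
    cap_gbps     = 4            # assumed cap, leaving the rest of the 10G link to Gluster

    to_transfer = vms * ram_per_vm * dirty_factor
    seconds = to_transfer / (cap_gbps * GBIT)
    print("~%.0f GiB to transfer, roughly %.0f s at %d Gbit/s" %
          (to_transfer / 1024**3, seconds, cap_gbps))

For comparison, 13 seconds at a fully used 10 Gbit/s corresponds to roughly 16 GB transferred for all 25 VMs, which suggests either that the guests had little RAM in use or that the link was never saturated.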

On 25/11/2016 at 15:25, Michal Skrivanek wrote:
Even the existing QoS capabilities should be enough. Separating storage and migration traffic onto different logical networks is indeed a good idea.

If I understand what is explained, host network QoS is the level I need. To do that, it should be enough to select the predefined QoS when editing the logical network, so that traffic is limited on all the physical NICs carrying this VLAN. Tell me if I'm wrong.
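For reference, the same thing can be sketched against the API rather than the UI. A rough example using the ovirtsdk4 Python SDK, where the engine URL, credentials, data center name and all rate values are placeholders rather than recommendations: it creates a named host network QoS in the data center, which is the object you then pick when editing the migration logical network as described above.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Engine URL and credentials are placeholders.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,              # use ca_file=... instead in production
    )

    dcs_service = connection.system_service().data_centers_service()
    dc = dcs_service.list(search='name=Default')[0]        # data center name is an assumption
    qoss_service = dcs_service.data_center_service(dc.id).qoss_service()

    # Host network QoS: upper limit and committed rate are in Mbps,
    # link share is a relative weight; 4000 Mbps is only an example cap.
    qoss_service.add(
        types.Qos(
            name='migration-4gbit',
            type=types.QosType.HOSTNETWORK,
            outbound_average_upperlimit=4000,
            outbound_average_realtime=1000,
            outbound_average_linkshare=50,
        ),
    )

    connection.close()

Capping the migration network this way leaves the remaining capacity of the shared 10G NIC to the Gluster VLAN, independently of the cluster-wide migration bandwidth setting.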
participants (2)
- Michal Skrivanek
- Nathanaël Blanchet