From danken at redhat.com Tue Jan 1 12:47:57 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 1 Jan 2013 14:47:57 +0200 Subject: feature suggestion: initial generation of management network In-Reply-To: <11330867.78.1356611742783.JavaMail.javamailuser@localhost> References: <20121227121406.GD8915@redhat.com> <11330867.78.1356611742783.JavaMail.javamailuser@localhost> Message-ID: <20130101124757.GI7274@redhat.com> On Thu, Dec 27, 2012 at 07:36:40AM -0500, Simon Grinberg wrote: > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > To: "Simon Grinberg" > > Cc: "arch" > > Sent: Thursday, December 27, 2012 2:14:06 PM > > Subject: Re: feature suggestion: initial generation of management network > > > > On Tue, Dec 25, 2012 at 09:29:26AM -0500, Simon Grinberg wrote: > > > > > > > > > ----- Original Message ----- > > > > From: "Dan Kenigsberg" > > > > To: "arch" > > > > Sent: Tuesday, December 25, 2012 2:27:22 PM > > > > Subject: feature suggestion: initial generation of management > > > > network > > > > > > > > Current condition: > > > > ================== > > > > The management network, named ovirtmgmt, is created during host > > > > bootstrap. It consists of a bridge device, connected to the > > > > network > > > > device that was used to communicate with Engine (nic, bonding or > > > > vlan). > > > > It inherits its ip settings from the latter device. > > > > > > > > Why Is the Management Network Needed? > > > > ===================================== > > > > Understandably, some may ask why do we need to have a management > > > > network - why having a host with IPv4 configured on it is not > > > > enough. > > > > The answer is twofold: > > > > 1. In oVirt, a network is an abstraction of the resources > > > > required > > > > for > > > > connectivity of a host for a specific usage. This is true for > > > > the > > > > management network just as it is for VM network or a display > > > > network. > > > > The network entity is the key for adding/changing nics and IP > > > > address. > > > > 2. In many occasions (such as small setups) the management > > > > network is > > > > used as a VM/display network as well. > > > > > > > > Problems in current connectivity: > > > > ================================ > > > > According to alonbl of ovirt-host-deploy fame, and with no > > > > conflict > > > > to > > > > my own experience, creating the management network is the most > > > > fragile, > > > > error-prone step of bootstrap. > > > > > > +1, > > > I've raise that repeatedly in the past, bootstrap should not create > > > the management network but pick up the existing configuration and > > > let the engine override later with it's own configuration if it > > > differs , I'm glad that we finally get to that. > > > > > > > > > > > Currently it always creates a bridged network (even if the DC > > > > requires a > > > > non-bridged ovirtmgmt), it knows nothing about the defined MTU > > > > for > > > > ovirtmgmt, it uses ping to guess on top of which device to build > > > > (and > > > > thus requires Vdsm-to-Engine reverse connectivity), and is the > > > > sole > > > > remaining user of the addNetwork/vdsm-store-net-conf scripts. > > > > > > > > Suggested feature: > > > > ================== > > > > Bootstrap would avoid creating a management network. Instead, > > > > after > > > > bootstrapping a host, Engine would send a getVdsCaps probe to the > > > > installed host, receiving a complete picture of the network > > > > configuration on the host. 
Among this picture is the device that holds the host's management IP address.
> > > >
> > > > Engine would send a setupNetworks command to generate ovirtmgmt with details devised from this picture, and according to the DC definition of ovirtmgmt. For example, if Vdsm reports:
> > > >
> > > > - vlan bond4.3000 has the host's IP, configured to use dhcp.
> > > > - bond4 comprises eth2 and eth3
> > > > - ovirtmgmt is defined as a VM network with MTU 9000
> > > >
> > > > then Engine sends the likes of:
> > > > setupNetworks(ovirtmgmt: {bridged=True, vlan=3000, iface=bond4, bonding=bond4: {eth2,eth3}, MTU=9000})
> > >
> > > Just one comment here,
> > > In order to save time and confusion - if ovirtmgmt is defined with default values, meaning the user did not bother to touch it, let it pick up the VLAN configuration from the first host added in the Data Center.
> > >
> > > Otherwise, you may override the host VLAN and lose connectivity.
> > >
> > > This will also solve the situation many users encounter today.
> > > 1. The engine is on a host that actually has a VLAN defined
> > > 2. The ovirtmgmt network was not updated in the DC
> > > 3. A host, with the VLAN already defined, is added - everything works fine
> > > 4. Any number of hosts are now added, and again everything seems to work fine.
> > >
> > > But now try to use setupNetworks, and you'll find out that you can't do much on the interface that contains ovirtmgmt, since the definition does not match. You can't sync (since this will remove the VLAN and cause connectivity loss), and you can't add more networks on top, since it already has a non-VLAN network on top according to the DC definition, etc.
> > >
> > > On the other hand, you can't update the ovirtmgmt definition on the DC, since there are clusters in the DC that use the network.
> > >
> > > The only workaround not involving a DB hack to change the VLAN on the network is to:
> > > 1. Create a new DC
> > > 2. Do not use the wizard that pops up to create your cluster.
> > > 3. Modify the ovirtmgmt network to have VLANs
> > > 4. Now create a cluster and add your hosts.
> > >
> > > If you insist on using the default DC and cluster, then before adding the first host, create an additional DC and move the Default cluster over there. You may then change the network on the Default cluster and then move the Default cluster back.
> > >
> > > Both are ugly, and should be solved by the proposal above.
> > >
> > > We do something similar for the Default cluster CPU level, where we set the initial level based on the first host added to the cluster.
> >
> > I'm not sure what Engine has for the Default cluster CPU level. But I have reservations about the hysteresis in your proposal - after a host is added, the DC cannot forget ovirtmgmt's vlan.
> >
> > How about letting the admin edit ovirtmgmt's vlan at the DC level, thus rendering all hosts out-of-sync? Then the admin could manually, or through a script, or in the future through a distributed operation, sync all the hosts to the definition.
>
> Usually if you do that you will lose connectivity to the hosts.

Yes, changing the management vlan id (or ip address) is never fun, and requires out-of-band intervention.

> I'm not insisting on the automatic adjustment of the ovirtmgmt network to match the hosts' (that is just a nice touch); we can take the allow-edit approach.
>
> But allowing a change of the VLAN on the ovirtmgmt network will indeed solve the issue I'm trying to solve, while creating another issue: the user may expect that we'll be able to re-tag the host from the engine side, which is challenging to do.
>
> On the other hand, if we allow changing the VLAN as long as the change matches the hosts' configuration, it will solve the issue while not leading the user to think that we really can solve the chicken-and-egg issue of re-tagging the entire system.
>
> Now with the above ability you do get a flow to do the re-tag.
> 1. Place all the hosts in maintenance
> 2. Re-tag ovirtmgmt on all the hosts
> 3. Re-tag the host on which the engine runs
> 4. Activate the hosts - this should work well now since connectivity exists
> 5. Change the tag on ovirtmgmt on the engine to match the hosts'
>
> Simple and clear process.
>
> When the workaround of creating another DC was not possible, since the system was already long in use and the need was to re-tag the network, the above is what I've recommended in the past, except that steps 4-5 were done as:
> 4. Stop the engine
> 5. Change the tag in the DB
> 6. Start the engine
> 7. Activate the hosts

Sounds reasonable to me - but as far as I am aware this is not tightly
related to the $Subject, which is the post-boot ovirtmgmt definition.

I've added a few details to
http://www.ovirt.org/Features/Normalized_ovirtmgmt_Initialization#Engine
and I would appreciate a review from someone with intimate Engine
know-how.

Dan.

From ecohen at redhat.com  Tue Jan  1 21:25:44 2013
From: ecohen at redhat.com (Einav Cohen)
Date: Tue, 1 Jan 2013 16:25:44 -0500 (EST)
Subject: [Vojtech Szocs] oVirt UI Plugins overview
Message-ID: <636294573.59796099.1357075544790.JavaMail.root@redhat.com>

The following is a new meeting request:

Subject: [Vojtech Szocs] oVirt UI Plugins overview
Organizer: "Einav Cohen"
Location: Intercall Conference code: 7128867405#
Time: Thursday, January 3, 2013, 8:00:00 AM - 9:00:00 AM GMT -05:00 US/Canada Eastern
Invitees: arch at ovirt.org; engine-devel at ovirt.org; jrankin at redhat.com; rluxenbe at redhat.com; amit at tonian.com; vszocs at redhat.com

*~*~*~*~*~*~*~*~*~*

Intercall dial-in numbers:
https://www.intercallonline.com/listNumbersByCode.action?confCode=7128867405

Intercall conf code: 7128867405#

elluminate session:
https://sas.elluminate.com/m.jnlp?sid=819&password=M.6BD0C502D1CD17B559EE5EE9F9FB09

From danken at redhat.com  Thu Jan  3 10:07:22 2013
From: danken at redhat.com (Dan Kenigsberg)
Date: Thu, 3 Jan 2013 12:07:22 +0200
Subject: feature suggestion: in-host network with no external nics
Message-ID: <20130103100722.GD21553@redhat.com>

Description
===========
In oVirt, after a VM network is defined in the Data Center level and
added to a cluster, it needs to be implemented on each host. All VM
networks are (currently) based on a Linux software bridge. The specific
implementation controls how traffic from that bridge reaches the outer
world. For example, the bridge may be connected externally via eth3, or
bond3 over eth2 and p1p2. This feature is about implementing a network
with no network interfaces (NICs) at all.

Having a disconnected network may first seem to add complexity to VM
placement.
Until now, we assumed that if a network (say, blue) is defined on two hosts, the two hosts lie in the same broadcast domain. If a couple of VMs are connected to "blue" it does not matter where they run - they would always hear each other. This is of course no longer true if one of the hosts implements "blue" as nicless. However, this is nothing new. oVirt never validates the single broadcast domain assumption, which can be easily broken by an admin: on one host, an admin can implement blue using a nic that has completely unrelated physical connectivity. Benefits ======== * All-in-One http://www.ovirt.org/Feature/AllInOne use case: we'd like to have a complete oVirt deployment that does not rely on external resources, such as layer-2 connectivity or DNS. * Collaborative computing: an oVirt user may wish to have a group of VMs with heavy in-group secret communication, where only one of the VMs exposes an external web service. The in-group secret communication could be limited to a nic-less network, no need to let it spill outside. * [SciFi] NIC-less networks can be tunneled to remove network segments over IP, a layer 2 NIC may not be part of its definition. Vdsm ==== Vdsm already supports defining a network with no nics attached. Engine ====== I am told that implementing this in Engine is quite a pain, as network is not a first-class citizen in the DB; it is more of an attribute of its primary external interface. This message is an html-to-text redering of http://www.ovirt.org/Features/Nicless_Network (I like the name, it sounds like a jewelery) and I am sure it is missing a lot (Pasternak is intentionally CCed). Comments are most welcome. Dan. From iheim at redhat.com Sun Jan 6 19:05:30 2013 From: iheim at redhat.com (Itamar Heim) Date: Sun, 06 Jan 2013 21:05:30 +0200 Subject: gerrit queue Message-ID: <50E9CAFA.1090903@redhat.com> our gerrit patch queue is a bit long. I'd appreciate if folks can take a look at patches pending for their review. also, for old patches you may ave submitted and need reviewers/rebases/abandon. I've asked some of the developers to dedicate tuesday, january 8th for this effort. Thanks, Itamar From simon at redhat.com Mon Jan 7 17:07:15 2013 From: simon at redhat.com (Simon Grinberg) Date: Mon, 7 Jan 2013 12:07:15 -0500 (EST) Subject: feature suggestion: in-host network with no external nics In-Reply-To: <20130103100722.GD21553@redhat.com> Message-ID: <16895101.1935.1357578368795.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Dan Kenigsberg" > To: "arch" > Cc: "Livnat Peer" , "Moti Asayag" , "Michael Pasternak" > Sent: Thursday, January 3, 2013 12:07:22 PM > Subject: feature suggestion: in-host network with no external nics > > Description > =========== > In oVirt, after a VM network is defined in the Data Center level and > added to a cluster, it needs to be implemented on each host. All VM > networks are (currently) based on a Linux software bridge. The > specific > implementation controls how traffic from that bridge reaches the > outer > world. For example, the bridge may be connected externally via eth3, > or > bond3 over eth2 and p1p2. This feature is about implementing a > network > with no network interfaces (NICs) at all. > > Having a disconnected network may first seem to add complexity to VM > placement. Until now, we assumed that if a network (say, blue) is > defined on two hosts, the two hosts lie in the same broadcast domain. 
> If > a couple of VMs are connected to "blue" it does not matter where they > run - they would always hear each other. This is of course no longer > true if one of the hosts implements "blue" as nicless. > However, this is nothing new. oVirt never validates the single > broadcast > domain assumption, which can be easily broken by an admin: on one > host, > an admin can implement blue using a nic that has completely unrelated > physical connectivity. > > Benefits > ======== > * All-in-One http://www.ovirt.org/Feature/AllInOne use case: we'd > like > to have a complete oVirt deployment that does not rely on external > resources, such as layer-2 connectivity or DNS. > * Collaborative computing: an oVirt user may wish to have a group > of VMs with heavy in-group secret communication, where only one of > the > VMs exposes an external web service. The in-group secret > communication > could be limited to a nic-less network, no need to let it spill > outside. > * [SciFi] NIC-less networks can be tunneled to remove network > segments > over IP, a layer 2 NIC may not be part of its definition. > > Vdsm > ==== > Vdsm already supports defining a network with no nics attached. > > Engine > ====== > I am told that implementing this in Engine is quite a pain, as > network > is not a first-class citizen in the DB; it is more of an attribute of > its primary external interface. There is more then that. You may take the approach of: 1. Configure this network statically on a host 2. Pin the VMs to host since otherwise what use there is to define such a network on VMs if the scheduler is free to schedule the VMs on different hosts? Or, 1. Create this network ad-hoc according to the first VM that needs it 2. Use the VM affinity feature to state that these VMs must run together on the same host 3. Assigning a network to these VMs automatically configures the affinity. The first is simplistic, and requires minimal changes to the engine (you do need to allow LN as device-less entity*) , the second approach is more robust and user friendly but require more work in the engine. On top of the above you may like to: 1. Allow this network to be NATed - libvirt already supports that - should be simple. 2. Combine this with the upcoming IP setting for the guests - A bit more complex 3. May want to easily define it as a Inter-VM-group-channel property same as affinity-group instead of explicit define such a network. Meaning define group of VMs. Define affinity, define Inter-VM-group-channel, define group's SLA etc - Let's admit that VMs that require this type of internal networking are part of VM group that together compose a workload/application. *A relativity easy change under current modelling (a model that I don't like in the first place), is to define another 'NIC' of type bridge (same as you have VLAN nic, bond nic, and NIC nic) so a 'floating bridge' is a LN on the Bridge NIC. Ugly but this is the current modelling. > > This message is an html-to-text rendering of > http://www.ovirt.org/Features/Nicless_Network > (I like the name, it sounds like a jewelery) The name commonly used for this is 'Host only network' Though we really into inventing new terminologies to things, in this case I would rather not since it's used in similar solutions, (VMWare, Parallels, Virtual-Box, etc) hence it's not vendor specific. In any case Nicless is bad since external interface may also be a Bond. > and I am sure it is missing a lot (Pasternak is intentionally CCed). > Comments are most welcome. > > Dan. 
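As an illustration of the "host only" idea discussed above, here is a minimal sketch at the libvirt level only - this is not how Vdsm builds its bridges, and "blue" is just the placeholder name reused from this thread. A libvirt network defined without a <forward> element stays isolated on its host; adding <forward mode='nat'/> plus an <ip> element yields the NATed variant mentioned above.

    import libvirt

    # Isolated ("host only") network: no <forward> element, so the bridge
    # has no uplink to any NIC or bond. Add <forward mode='nat'/> and an
    # <ip> element to get the NATed flavor instead.
    ISOLATED_NET_XML = """
    <network>
      <name>blue</name>
      <bridge name='blue' stp='on' delay='0'/>
    </network>
    """

    conn = libvirt.open('qemu:///system')
    net = conn.networkDefineXML(ISOLATED_NET_XML)  # persistent definition
    net.setAutostart(True)
    net.create()                                   # bring the bridge up now
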
> From lhawthor at redhat.com Mon Jan 7 19:52:26 2013 From: lhawthor at redhat.com (Leslie Hawthorn) Date: Mon, 07 Jan 2013 11:52:26 -0800 Subject: Action Needed: Upcoming Deadlines for oVirt Workshop at NetApp Message-ID: <50EB277A.4040103@redhat.com> Hello everyone, ***If you will not be attending the oVirt workshop taking place at NetApp HQ on 22-24 January 2013 [0], you can stop reading now.*** Hotel Room Block Expiring: If you require a hotel room as part of your visit for the workshop, please book your room ASAP [1], as we will be releasing extra rooms in our block at close of business tomorrow, 8 January. If you are unable to book a room by close of business tomorrow, all unbooked rooms in our block will be released. You may still request our room block rate but you will no longer be guaranteed lodging at the Country Inn and Suites as of Wednesday, 9 January. Registration Deadline: In order to have an accurate headcount for catering, please ensure you have completed your registration for the event no later than close of business on Tuesday, 15 January. [1] Please take a moment to register and to remind any friends and colleagues who would be interested in attending of our registration deadline. If you or a colleague are unable to register by Tuesday, 15 January, but would still like to attend please contact Dave Neary off-list. [2] Dave will do his best to ensure that we are able to process late registrations, though we unfortunately cannot make any guarantees. Thank you once again to Patrick Rogers, Denise Ridolfo, Jon Benedict, Talia Reyes-Ortiz and the rest of the fine folks at NetApp for hosting this workshop and all their hard work to bring the community together in Sunnyvale. If you have any questions, please let me know. I look forward to (re)meeting you at the oVirt workshop at NetApp. [0] - http://www.ovirt.org/NetApp_Workshop_January_2013 [1] - http://ovirtnetapp2013.eventbrite.com/# [2] - dneary at redhat dot com Cheers, LH -- Leslie Hawthorn Community Action and Impact Open Source and Standards @ Red Hat identi.ca/lh twitter.com/lhawthorn] From danken at redhat.com Tue Jan 8 12:22:14 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 8 Jan 2013 14:22:14 +0200 Subject: feature suggestion: in-host network with no external nics In-Reply-To: <16895101.1935.1357578368795.JavaMail.javamailuser@localhost> References: <20130103100722.GD21553@redhat.com> <16895101.1935.1357578368795.JavaMail.javamailuser@localhost> Message-ID: <20130108122214.GE1534@redhat.com> On Mon, Jan 07, 2013 at 12:07:15PM -0500, Simon Grinberg wrote: > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > To: "arch" > > Cc: "Livnat Peer" , "Moti Asayag" , "Michael Pasternak" > > Sent: Thursday, January 3, 2013 12:07:22 PM > > Subject: feature suggestion: in-host network with no external nics > > > > Description > > =========== > > In oVirt, after a VM network is defined in the Data Center level and > > added to a cluster, it needs to be implemented on each host. All VM > > networks are (currently) based on a Linux software bridge. The > > specific > > implementation controls how traffic from that bridge reaches the > > outer > > world. For example, the bridge may be connected externally via eth3, > > or > > bond3 over eth2 and p1p2. This feature is about implementing a > > network > > with no network interfaces (NICs) at all. > > > > Having a disconnected network may first seem to add complexity to VM > > placement. 
Until now, we assumed that if a network (say, blue) is > > defined on two hosts, the two hosts lie in the same broadcast domain. > > If > > a couple of VMs are connected to "blue" it does not matter where they > > run - they would always hear each other. This is of course no longer > > true if one of the hosts implements "blue" as nicless. > > However, this is nothing new. oVirt never validates the single > > broadcast > > domain assumption, which can be easily broken by an admin: on one > > host, > > an admin can implement blue using a nic that has completely unrelated > > physical connectivity. > > > > Benefits > > ======== > > * All-in-One http://www.ovirt.org/Feature/AllInOne use case: we'd > > like > > to have a complete oVirt deployment that does not rely on external > > resources, such as layer-2 connectivity or DNS. > > * Collaborative computing: an oVirt user may wish to have a group > > of VMs with heavy in-group secret communication, where only one of > > the > > VMs exposes an external web service. The in-group secret > > communication > > could be limited to a nic-less network, no need to let it spill > > outside. > > * [SciFi] NIC-less networks can be tunneled to remove network > > segments > > over IP, a layer 2 NIC may not be part of its definition. > > > > Vdsm > > ==== > > Vdsm already supports defining a network with no nics attached. > > > > Engine > > ====== > > I am told that implementing this in Engine is quite a pain, as > > network > > is not a first-class citizen in the DB; it is more of an attribute of > > its primary external interface. > > There is more then that. Indeed what you describe is alot more than my planned idea. I'd say that my Nicless_Network feature is a small building block of yours. However, since it is a strictly required building block, Nicless_Network should be implemented as a first stage of smarter scheduler logic above it. I'd fire up a new Host Only Network feature page that would note Nicless_Network as a requirement. > You may take the approach of: > 1. Configure this network statically on a host > 2. Pin the VMs to host since otherwise what use there is to define such a network on VMs if the scheduler is free to schedule the VMs on different hosts? I tried to answer this worry in my original post. A user may want to redifine "blue" on eth2 instead of eth1. Another user may have his reasons to define "blue" with no nic at all - he may devise a cool tunnel to connect his host to the world. Nicless_Network is only about allowing this. I'd say that a host-only network is nicless network with some logic attached to it; I actually like the one you detail here: > > Or, > 1. Create this network ad-hoc according to the first VM that needs it > 2. Use the VM affinity feature to state that these VMs must run together on the same host > 3. Assigning a network to these VMs automatically configures the affinity. > > The first is simplistic, and requires minimal changes to the engine (you do need to allow LN as device-less entity*) , the second approach is more robust and user friendly but require more work in the engine. > > On top of the above you may like to: > 1. Allow this network to be NATed - libvirt already supports that - should be simple. > 2. Combine this with the upcoming IP setting for the guests - A bit more complex Once we have IPAM ticking, we could apply it to any network, including this. Or do you mean anything else? > 3. 
May want to easily define it as a Inter-VM-group-channel property > same as affinity-group instead of explicit define such a network. > Meaning define group of VMs. Define affinity, define > Inter-VM-group-channel, define group's SLA etc - Let's admit that VMs > that require this type of internal networking are part of VM group > that together compose a workload/application. I'm not sure why defining an "Inter-VM-group-channel" is easier than calling it a "network". The VMs would see a NIC, and someone should decide if there is only one NIC per vm and provide it with a mac address, so I do not see the merit of this abstraction layer. I'm probably missing something crucial. > > *A relativity easy change under current modelling (a model that I > don't like in the first place), is to define another 'NIC' of type > bridge (same as you have VLAN nic, bond nic, and NIC nic) so a > 'floating bridge' is a LN on the Bridge NIC. Ugly but this is the > current modelling. We may implement Nicless_Network as something like that, under the hood. (but bridge, vlan and bond are not nics, they are Linux net devices) > > > > > This message is an html-to-text rendering of > > http://www.ovirt.org/Features/Nicless_Network > > (I like the name, it sounds like a jewelery) > > The name commonly used for this is 'Host only network' > Though we really into inventing new terminologies to things, in this case I would rather not since it's used in similar solutions, (VMWare, Parallels, Virtual-Box, etc) hence it's not vendor specific. > > In any case Nicless is bad since external interface may also be a Bond. From simon at redhat.com Tue Jan 8 12:51:39 2013 From: simon at redhat.com (Simon Grinberg) Date: Tue, 8 Jan 2013 07:51:39 -0500 (EST) Subject: feature suggestion: in-host network with no external nics In-Reply-To: <20130108122214.GE1534@redhat.com> Message-ID: <9020917.2438.1357649431385.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Dan Kenigsberg" > To: "Simon Grinberg" > Cc: "Livnat Peer" , "Moti Asayag" , "Michael Pasternak" , > "arch" > Sent: Tuesday, January 8, 2013 2:22:14 PM > Subject: Re: feature suggestion: in-host network with no external nics > > On Mon, Jan 07, 2013 at 12:07:15PM -0500, Simon Grinberg wrote: > > > > > > ----- Original Message ----- > > > From: "Dan Kenigsberg" > > > To: "arch" > > > Cc: "Livnat Peer" , "Moti Asayag" > > > , "Michael Pasternak" > > > Sent: Thursday, January 3, 2013 12:07:22 PM > > > Subject: feature suggestion: in-host network with no external > > > nics > > > > > > Description > > > =========== > > > In oVirt, after a VM network is defined in the Data Center level > > > and > > > added to a cluster, it needs to be implemented on each host. All > > > VM > > > networks are (currently) based on a Linux software bridge. The > > > specific > > > implementation controls how traffic from that bridge reaches the > > > outer > > > world. For example, the bridge may be connected externally via > > > eth3, > > > or > > > bond3 over eth2 and p1p2. This feature is about implementing a > > > network > > > with no network interfaces (NICs) at all. > > > > > > Having a disconnected network may first seem to add complexity to > > > VM > > > placement. Until now, we assumed that if a network (say, blue) is > > > defined on two hosts, the two hosts lie in the same broadcast > > > domain. > > > If > > > a couple of VMs are connected to "blue" it does not matter where > > > they > > > run - they would always hear each other. 
This is of course no > > > longer > > > true if one of the hosts implements "blue" as nicless. > > > However, this is nothing new. oVirt never validates the single > > > broadcast > > > domain assumption, which can be easily broken by an admin: on one > > > host, > > > an admin can implement blue using a nic that has completely > > > unrelated > > > physical connectivity. > > > > > > Benefits > > > ======== > > > * All-in-One http://www.ovirt.org/Feature/AllInOne use case: we'd > > > like > > > to have a complete oVirt deployment that does not rely on > > > external > > > resources, such as layer-2 connectivity or DNS. > > > * Collaborative computing: an oVirt user may wish to have a group > > > of VMs with heavy in-group secret communication, where only one > > > of > > > the > > > VMs exposes an external web service. The in-group secret > > > communication > > > could be limited to a nic-less network, no need to let it spill > > > outside. > > > * [SciFi] NIC-less networks can be tunneled to remove network > > > segments > > > over IP, a layer 2 NIC may not be part of its definition. > > > > > > Vdsm > > > ==== > > > Vdsm already supports defining a network with no nics attached. > > > > > > Engine > > > ====== > > > I am told that implementing this in Engine is quite a pain, as > > > network > > > is not a first-class citizen in the DB; it is more of an > > > attribute of > > > its primary external interface. > > > > There is more then that. > > Indeed what you describe is alot more than my planned idea. I'd say > that > my Nicless_Network feature is a small building block of yours. > However, > since it is a strictly required building block, Nicless_Network > should > be implemented as a first stage of smarter scheduler logic above it. > > I'd fire up a new Host Only Network feature page that would note > Nicless_Network as a requirement. No problem with that as long as we keep in mind where we want to get to with it, the rest is management :) But please rename nickless :( Let's call it floating network or something. It's not host only yet, as you rightfully mentioned it can later be connected to a tunnel device or be routed using iptables rules. At first stage we may even settle for the two steps below, which practically says no real managements, just use the building block that is provided by VDSM. > > > You may take the approach of: > > 1. Configure this network statically on a host > > 2. Pin the VMs to host since otherwise what use there is to define > > such a network on VMs if the scheduler is free to schedule the VMs > > on different hosts? > > I tried to answer this worry in my original post. A user may want to > redifine "blue" on eth2 instead of eth1. Another user may have his > reasons to define "blue" with no nic at all - he may devise a cool > tunnel to connect his host to the world. Nicless_Network is only > about > allowing this. > > I'd say that a host-only network is nicless network with some logic > attached to it; I actually like the one you detail here: > > > > > Or, > > 1. Create this network ad-hoc according to the first VM that needs > > it > > 2. Use the VM affinity feature to state that these VMs must run > > together on the same host > > 3. Assigning a network to these VMs automatically configures the > > affinity. > > > > The first is simplistic, and requires minimal changes to the engine > > (you do need to allow LN as device-less entity*) , the second > > approach is more robust and user friendly but require more work in > > the engine. 
> > > > On top of the above you may like to: > > 1. Allow this network to be NATed - libvirt already supports that - > > should be simple. > > 2. Combine this with the upcoming IP setting for the guests - A bit > > more complex > Once we have IPAM ticking, we could apply it to any network, > including > this. Or do you mean anything else? > > > 3. May want to easily define it as a Inter-VM-group-channel > > property > > same as affinity-group instead of explicit define such a network. > > Meaning define group of VMs. Define affinity, define > > Inter-VM-group-channel, define group's SLA etc - Let's admit that > > VMs > > that require this type of internal networking are part of VM group > > that together compose a workload/application. > > I'm not sure why defining an "Inter-VM-group-channel" is easier than > calling it a "network". The VMs would see a NIC, and someone should > decide if there is only one NIC per vm and provide it with a mac > address, so I do not see the merit of this abstraction layer. I'm > probably missing something crucial. One step instead of 3 per each and remove overhead. Instead of: 1. Creating a network (and thinking of a unique name) 2. Add a NIC to VM A - Consume a NIC from the pool 3. Add a NIC to VM B - Consume a NIC from the pool You just say connect these two and the engine does it for you since usually in this scenario you don't care about the name of the network nor the MACs and since they are not exposed they may be from a range not taken from the pool. In addition these networks will not 'pollute' your DC Network Subtab nor your Networks tab since hey are not interesting there. > > > > > *A relativity easy change under current modelling (a model that I > > don't like in the first place), is to define another 'NIC' of type > > bridge (same as you have VLAN nic, bond nic, and NIC nic) so a > > 'floating bridge' is a LN on the Bridge NIC. Ugly but this is the > > current modelling. > > We may implement Nicless_Network as something like that, under the > hood. > (but bridge, vlan and bond are not nics, they are Linux net devices) Agree, but look at the API - everything is under the nics collection: https://ovirt31.demo.redhat.com/api/hosts/36ac384e-eb9a-11e1-b7c3-525400acfd62/nics And everything is a NIC . . . em1.51 > > > > > > > > This message is an html-to-text rendering of > > > http://www.ovirt.org/Features/Nicless_Network > > > (I like the name, it sounds like a jewelery) > > > > The name commonly used for this is 'Host only network' > > Though we really into inventing new terminologies to things, in > > this case I would rather not since it's used in similar solutions, > > (VMWare, Parallels, Virtual-Box, etc) hence it's not vendor > > specific. > > > > In any case Nicless is bad since external interface may also be a > > Bond. > From danken at redhat.com Tue Jan 8 13:04:33 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 8 Jan 2013 15:04:33 +0200 Subject: feature suggestion: migration network In-Reply-To: References: <20130106214941.GJ14546@redhat.com> Message-ID: <20130108130415.GG1534@redhat.com> There's talk about this for ages, so it's time to have proper discussion and a feature page about it: let us have a "migration" network role, and use such networks to carry migration data When Engine requests to migrate a VM from one node to another, the VM state (Bios, IO devices, RAM) is transferred over a TCP/IP connection that is opened from the source qemu process to the destination qemu. 
Currently, destination qemu listens for the incoming connection on the
management IP address of the destination host. This has serious
downsides: a "migration storm" may choke the destination's management
interface; migration is plaintext, and ovirtmgmt includes Engine, which
may sit outside the node cluster.

With this feature, a cluster administrator may grant the "migration"
role to one of the cluster networks. Engine would use that network's IP
address on the destination host when it requests a migration of a VM.
With proper network setup, migration data would be separated to that
network.

=== Benefit to oVirt ===
* Users would be able to define and dedicate a separate network for
migration. Users who need quick migration would use nics with high
bandwidth. Users who want to cap the bandwidth consumed by migration
could define a migration network over nics with bandwidth limitation.
* Migration data can be limited to a separate network that has no
layer-2 access from Engine.

=== Vdsm ===
The "migrate" verb should be extended with an additional parameter,
specifying the address that the remote qemu process should listen on. A
new argument is to be added to the currently-defined migration
arguments:
* vmId: UUID
* dst: management address of destination host
* dstparams: hibernation volumes definition
* mode: migration/hibernation
* method: rotten legacy
* ''New'': migration uri, according to
http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 such as tcp://

=== Engine ===
As usual, complexity lies here, and several changes are required:

1. Network definition.
1.1 A new network role - not unlike "display network" - should be
added. Only one migration network should be defined on a cluster.
1.2 If none is defined, the legacy "use ovirtmgmt for migration"
behavior would apply.
1.3 A migration network is more likely to be a ''required'' network, but
a user may opt for non-required. He may face unpleasant surprises if he
wants to migrate his machine, but no candidate host has the network
available.
1.4 The "migration" role can be granted or taken on-the-fly, when hosts
are active, as long as there are no currently-migrating VMs.

2. Scheduler
2.1 When deciding which host should be used for automatic
migration, take into account the existence and availability of the
migration network on the destination host.
2.2 For manual migration, let the user migrate a VM to a host with no
migration network - if the admin wants to keep jamming the
management network with migration traffic, let her.

3. VdsBroker migration verb.
3.1 For a modern cluster level, with a migration network defined on
the destination host, an additional ''miguri'' parameter should be added
to the "migrate" command.

From ykaul at redhat.com  Tue Jan  8 14:46:10 2013
From: ykaul at redhat.com (Yaniv Kaul)
Date: Tue, 08 Jan 2013 16:46:10 +0200
Subject: feature suggestion: migration network
In-Reply-To: <20130108130415.GG1534@redhat.com>
References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com>
Message-ID: <50EC3132.1030807@redhat.com>

On 08/01/13 15:04, Dan Kenigsberg wrote:
> There's talk about this for ages, so it's time to have proper discussion
> and a feature page about it: let us have a "migration" network role, and
> use such networks to carry migration data
>
> When Engine requests to migrate a VM from one node to another, the VM
> state (Bios, IO devices, RAM) is transferred over a TCP/IP connection
> that is opened from the source qemu process to the destination qemu.
> Currently, destination qemu listens for the incoming connection on the > management IP address of the destination host. This has serious > downsides: a "migration storm" may choke the destination's management > interface; migration is plaintext and ovirtmgmt includes Engine which > sits may sit the node cluster. > > With this feature, a cluster administrator may grant the "migration" > role to one of the cluster networks. Engine would use that network's IP > address on the destination host when it requests a migration of a VM. > With proper network setup, migration data would be separated to that > network. > > === Benefit to oVirt === > * Users would be able to define and dedicate a separate network for > migration. Users that need quick migration would use nics with high > bandwidth. Users who want to cap the bandwidth consumed by migration > could define a migration network over nics with bandwidth limitation. > * Migration data can be limited to a separate network, that has no > layer-2 access from Engine > > === Vdsm === > The "migrate" verb should be extended with an additional parameter, > specifying the address that the remote qemu process should listen on. A > new argument is to be added to the currently-defined migration > arguments: > * vmId: UUID > * dst: management address of destination host > * dstparams: hibernation volumes definition > * mode: migration/hibernation > * method: rotten legacy > * ''New'': migration uri, according to http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 such as tcp:// > > === Engine === > As usual, complexity lies here, and several changes are required: > > 1. Network definition. > 1.1 A new network role - not unlike "display network" should be > added.Only one migration network should be defined on a cluster. > 1.2 If none is defined, the legacy "use ovirtmgmt for migration" > behavior would apply. > 1.3 A migration network is more likely to be a ''required'' network, but > a user may opt for non-required. He may face unpleasant surprises if he > wants to migrate his machine, but no candidate host has the network > available. > 1.4 The "migration" role can be granted or taken on-the-fly, when hosts > are active, as long as there are no currently-migrating VMs. > > 2. Scheduler > 2.1 when deciding which host should be used for automatic > migration, take into account the existence and availability of the > migration network on the destination host. > 2.2 For manual migration, let user migrate a VM to a host with no > migration network - if the admin wants to keep jamming the > management network with migration traffic, let her. > > 3. VdsBroker migration verb. > 3.1 For the a modern cluster level, with migration network defined on > the destination host, an additional ''miguri'' parameter should be added > to the "migrate" command > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch How is the authentication of the peers handled? Do we need a cert per each source/destination logical interface? Y. 
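As a rough sketch of how the proposed ''miguri'' parameter would be consumed underneath - illustrative only, not Vdsm code; the host names, VM name and 192.0.2.7 address are placeholders - the control connection still targets the destination's management address, while libvirt's virDomainMigrateToURI2 lets the qemu migration stream be pointed at the destination's IP on the network carrying the "migration" role:

    import libvirt

    src = libvirt.open('qemu+tls://src-mgmt.example.com/system')
    dom = src.lookupByName('my-vm')

    # libvirtd-to-libvirtd control traffic keeps using the management address...
    dconnuri = 'qemu+tls://dst-mgmt.example.com/system'
    # ...while the qemu migration data stream goes to the destination's IP
    # on the network that carries the "migration" role.
    miguri = 'tcp://192.0.2.7'

    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
    dom.migrateToURI2(dconnuri, miguri, None, flags, None, 0)
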
From simon at redhat.com Tue Jan 8 18:23:02 2013 From: simon at redhat.com (Simon Grinberg) Date: Tue, 8 Jan 2013 13:23:02 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <50EC3132.1030807@redhat.com> Message-ID: <26777200.2948.1357669314263.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Yaniv Kaul" > To: "Dan Kenigsberg" > Cc: "Limor Gavish" , "Yuval M" , arch at ovirt.org, "Simon Grinberg" > > Sent: Tuesday, January 8, 2013 4:46:10 PM > Subject: Re: feature suggestion: migration network > > On 08/01/13 15:04, Dan Kenigsberg wrote: > > There's talk about this for ages, so it's time to have proper > > discussion > > and a feature page about it: let us have a "migration" network > > role, and > > use such networks to carry migration data > > > > When Engine requests to migrate a VM from one node to another, the > > VM > > state (Bios, IO devices, RAM) is transferred over a TCP/IP > > connection > > that is opened from the source qemu process to the destination > > qemu. > > Currently, destination qemu listens for the incoming connection on > > the > > management IP address of the destination host. This has serious > > downsides: a "migration storm" may choke the destination's > > management > > interface; migration is plaintext and ovirtmgmt includes Engine > > which > > sits may sit the node cluster. > > > > With this feature, a cluster administrator may grant the > > "migration" > > role to one of the cluster networks. Engine would use that > > network's IP > > address on the destination host when it requests a migration of a > > VM. > > With proper network setup, migration data would be separated to > > that > > network. > > > > === Benefit to oVirt === > > * Users would be able to define and dedicate a separate network for > > migration. Users that need quick migration would use nics with > > high > > bandwidth. Users who want to cap the bandwidth consumed by > > migration > > could define a migration network over nics with bandwidth > > limitation. > > * Migration data can be limited to a separate network, that has no > > layer-2 access from Engine > > > > === Vdsm === > > The "migrate" verb should be extended with an additional parameter, > > specifying the address that the remote qemu process should listen > > on. A > > new argument is to be added to the currently-defined migration > > arguments: > > * vmId: UUID > > * dst: management address of destination host > > * dstparams: hibernation volumes definition > > * mode: migration/hibernation > > * method: rotten legacy > > * ''New'': migration uri, according to > > http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > such as tcp:// > > > > === Engine === > > As usual, complexity lies here, and several changes are required: > > > > 1. Network definition. > > 1.1 A new network role - not unlike "display network" should be > > added.Only one migration network should be defined on a > > cluster. We are considering multiple display networks already, then why not the same for migration? > > 1.2 If none is defined, the legacy "use ovirtmgmt for migration" > > behavior would apply. > > 1.3 A migration network is more likely to be a ''required'' > > network, but > > a user may opt for non-required. He may face unpleasant > > surprises if he > > wants to migrate his machine, but no candidate host has the > > network > > available. I think the enforcement should be at least one migration network per host -> in the case we support more then one Else always required. 
> > 1.4 The "migration" role can be granted or taken on-the-fly, when > > hosts > > are active, as long as there are no currently-migrating VMs. > > > > 2. Scheduler > > 2.1 when deciding which host should be used for automatic > > migration, take into account the existence and availability of > > the > > migration network on the destination host. > > 2.2 For manual migration, let user migrate a VM to a host with no > > migration network - if the admin wants to keep jamming the > > management network with migration traffic, let her. Since you send migration network per migration command, why not allow to choose any network on the host same as you allow to choose host? If host is not selected then allow to choose from cluster's networks. The default should be the cluster's migration network. If you allow for the above, we can waver the enforcement of migration network per host. No migration network == no automatic migration to/from this host. > > > > 3. VdsBroker migration verb. > > 3.1 For the a modern cluster level, with migration network defined > > on > > the destination host, an additional ''miguri'' parameter > > should be added > > to the "migrate" command > > > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > How is the authentication of the peers handled? Do we need a cert per > each source/destination logical interface? > Y. > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From danken at redhat.com Tue Jan 8 19:34:11 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Tue, 8 Jan 2013 21:34:11 +0200 Subject: feature suggestion: migration network In-Reply-To: <26777200.2948.1357669314263.JavaMail.javamailuser@localhost> References: <50EC3132.1030807@redhat.com> <26777200.2948.1357669314263.JavaMail.javamailuser@localhost> Message-ID: <20130108193411.GH1534@redhat.com> On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: > > > ----- Original Message ----- > > From: "Yaniv Kaul" > > To: "Dan Kenigsberg" > > Cc: "Limor Gavish" , "Yuval M" , arch at ovirt.org, "Simon Grinberg" > > > > Sent: Tuesday, January 8, 2013 4:46:10 PM > > Subject: Re: feature suggestion: migration network > > > > On 08/01/13 15:04, Dan Kenigsberg wrote: > > > There's talk about this for ages, so it's time to have proper > > > discussion > > > and a feature page about it: let us have a "migration" network > > > role, and > > > use such networks to carry migration data > > > > > > When Engine requests to migrate a VM from one node to another, the > > > VM > > > state (Bios, IO devices, RAM) is transferred over a TCP/IP > > > connection > > > that is opened from the source qemu process to the destination > > > qemu. > > > Currently, destination qemu listens for the incoming connection on > > > the > > > management IP address of the destination host. This has serious > > > downsides: a "migration storm" may choke the destination's > > > management > > > interface; migration is plaintext and ovirtmgmt includes Engine > > > which > > > sits may sit the node cluster. > > > > > > With this feature, a cluster administrator may grant the > > > "migration" > > > role to one of the cluster networks. Engine would use that > > > network's IP > > > address on the destination host when it requests a migration of a > > > VM. > > > With proper network setup, migration data would be separated to > > > that > > > network. 
> > > > > > === Benefit to oVirt === > > > * Users would be able to define and dedicate a separate network for > > > migration. Users that need quick migration would use nics with > > > high > > > bandwidth. Users who want to cap the bandwidth consumed by > > > migration > > > could define a migration network over nics with bandwidth > > > limitation. > > > * Migration data can be limited to a separate network, that has no > > > layer-2 access from Engine > > > > > > === Vdsm === > > > The "migrate" verb should be extended with an additional parameter, > > > specifying the address that the remote qemu process should listen > > > on. A > > > new argument is to be added to the currently-defined migration > > > arguments: > > > * vmId: UUID > > > * dst: management address of destination host > > > * dstparams: hibernation volumes definition > > > * mode: migration/hibernation > > > * method: rotten legacy > > > * ''New'': migration uri, according to > > > http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > > such as tcp:// > > > > > > === Engine === > > > As usual, complexity lies here, and several changes are required: > > > > > > 1. Network definition. > > > 1.1 A new network role - not unlike "display network" should be > > > added.Only one migration network should be defined on a > > > cluster. > > We are considering multiple display networks already, then why not the > same for migration? What is the motivation of having multiple migration networks? Extending the bandwidth (and thus, any network can be taken when needed) or data separation (and thus, a migration network should be assigned to each VM in the cluster)? Or another morivation with consequence? > > > > > 1.2 If none is defined, the legacy "use ovirtmgmt for migration" > > > behavior would apply. > > > 1.3 A migration network is more likely to be a ''required'' > > > network, but > > > a user may opt for non-required. He may face unpleasant > > > surprises if he > > > wants to migrate his machine, but no candidate host has the > > > network > > > available. > > I think the enforcement should be at least one migration network per host -> in the case we support more then one > Else always required. Fine by me - if we keep backward behavior of ovirtmgmt being a migration network by default. I think that the worst case is that the user finds out - in the least convinient moment - that ovirt 3.3 would not migrate his VMs without explicitly assigning the "migration" role. > > > > 1.4 The "migration" role can be granted or taken on-the-fly, when > > > hosts > > > are active, as long as there are no currently-migrating VMs. > > > > > > 2. Scheduler > > > 2.1 when deciding which host should be used for automatic > > > migration, take into account the existence and availability of > > > the > > > migration network on the destination host. > > > 2.2 For manual migration, let user migrate a VM to a host with no > > > migration network - if the admin wants to keep jamming the > > > management network with migration traffic, let her. > > Since you send migration network per migration command, why not allow > to choose any network on the host same as you allow to choose host? If > host is not selected then allow to choose from cluster's networks. > The default should be the cluster's migration network. Cool. Added to wiki page. > > If you allow for the above, we can waver the enforcement of migration network per host. No migration network == no automatic migration to/from this host. 
again, I'd prefer to keep the current default status of ovirtmgmt as a migration network. Besides that, +1. > > > > > > > > 3. VdsBroker migration verb. > > > 3.1 For the a modern cluster level, with migration network defined > > > on > > > the destination host, an additional ''miguri'' parameter > > > should be added > > > to the "migrate" command > > > > > > _______________________________________________ > > > Arch mailing list > > > Arch at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/arch > > > > How is the authentication of the peers handled? Do we need a cert per > > each source/destination logical interface? I hope Orit or Lain correct me, but I am not aware of any authentication scheme that protects non-tunneled qemu destination from an evil process with network acess to the host. Dan. From lpeer at redhat.com Wed Jan 9 07:26:25 2013 From: lpeer at redhat.com (Livnat Peer) Date: Wed, 09 Jan 2013 09:26:25 +0200 Subject: feature suggestion: migration network In-Reply-To: <20130108193411.GH1534@redhat.com> References: <50EC3132.1030807@redhat.com> <26777200.2948.1357669314263.JavaMail.javamailuser@localhost> <20130108193411.GH1534@redhat.com> Message-ID: <50ED1BA1.3020605@redhat.com> On 01/08/2013 09:34 PM, Dan Kenigsberg wrote: > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: >> >> >> ----- Original Message ----- >>> From: "Yaniv Kaul" >>> To: "Dan Kenigsberg" >>> Cc: "Limor Gavish" , "Yuval M" , arch at ovirt.org, "Simon Grinberg" >>> >>> Sent: Tuesday, January 8, 2013 4:46:10 PM >>> Subject: Re: feature suggestion: migration network >>> >>> On 08/01/13 15:04, Dan Kenigsberg wrote: >>>> There's talk about this for ages, so it's time to have proper >>>> discussion >>>> and a feature page about it: let us have a "migration" network >>>> role, and >>>> use such networks to carry migration data >>>> >>>> When Engine requests to migrate a VM from one node to another, the >>>> VM >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP >>>> connection >>>> that is opened from the source qemu process to the destination >>>> qemu. >>>> Currently, destination qemu listens for the incoming connection on >>>> the >>>> management IP address of the destination host. This has serious >>>> downsides: a "migration storm" may choke the destination's >>>> management >>>> interface; migration is plaintext and ovirtmgmt includes Engine >>>> which >>>> sits may sit the node cluster. >>>> >>>> With this feature, a cluster administrator may grant the >>>> "migration" >>>> role to one of the cluster networks. Engine would use that >>>> network's IP >>>> address on the destination host when it requests a migration of a >>>> VM. >>>> With proper network setup, migration data would be separated to >>>> that >>>> network. >>>> >>>> === Benefit to oVirt === >>>> * Users would be able to define and dedicate a separate network for >>>> migration. Users that need quick migration would use nics with >>>> high >>>> bandwidth. Users who want to cap the bandwidth consumed by >>>> migration >>>> could define a migration network over nics with bandwidth >>>> limitation. >>>> * Migration data can be limited to a separate network, that has no >>>> layer-2 access from Engine >>>> >>>> === Vdsm === >>>> The "migrate" verb should be extended with an additional parameter, >>>> specifying the address that the remote qemu process should listen >>>> on. 
A >>>> new argument is to be added to the currently-defined migration >>>> arguments: >>>> * vmId: UUID >>>> * dst: management address of destination host >>>> * dstparams: hibernation volumes definition >>>> * mode: migration/hibernation >>>> * method: rotten legacy >>>> * ''New'': migration uri, according to >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 >>>> such as tcp:// >>>> >>>> === Engine === >>>> As usual, complexity lies here, and several changes are required: >>>> >>>> 1. Network definition. >>>> 1.1 A new network role - not unlike "display network" should be >>>> added.Only one migration network should be defined on a >>>> cluster. >> >> We are considering multiple display networks already, then why not the >> same for migration? > > What is the motivation of having multiple migration networks? Extending > the bandwidth (and thus, any network can be taken when needed) or > data separation (and thus, a migration network should be assigned to > each VM in the cluster)? Or another morivation with consequence? > In addition to the questions above there are some behavioral changes driven by supporting multiple-migration-network - 1.Today cluster is a migration domain (give or take optional networks - which was designed to enable dynamic network provisioning not for breaking cluster migration domain...) adding multiple migration network in the cluster means you break this assumption and now only hosts with shared migration network can migrate VMs between them...that's splitting the cluster to sub-migration domain. what is the meaning of cluster now? Or did you mean ALL hosts in the cluster should have ALL migration networks? (a motivation for that is not clear to me) 2. What happens if a single host has multiple migration networks assigned to it. (I am assuming the migration role is not necessarily a standalone role but can be an additional tag on an existing network that can be used for management or VMs or any future role). Do we really want to get into managing a policy around it, which migration network to use - random/RR/even-traffic-load etc. >> >> >>>> 1.2 If none is defined, the legacy "use ovirtmgmt for migration" >>>> behavior would apply. >>>> 1.3 A migration network is more likely to be a ''required'' >>>> network, but >>>> a user may opt for non-required. He may face unpleasant >>>> surprises if he >>>> wants to migrate his machine, but no candidate host has the >>>> network >>>> available. >> >> I think the enforcement should be at least one migration network per host -> in the case we support more then one >> Else always required. Why? If we are thinking that migration network is optional there could be hosts that do not have migration network and only hols pin-to-host VMs for example.... This one falls to don't nanny the user I think... > > Fine by me - if we keep backward behavior of ovirtmgmt being a migration > network by default. I think that the worst case is that the user finds > out - in the least convinient moment - that ovirt 3.3 would not migrate > his VMs without explicitly assigning the "migration" role. > We can assign the migration role in the engine to all exising management network upon upgrade, from there the user can change definitions the way he sees fit. If we get to a point of having a host with no migration network a warning to the user is in place but no more than that IMO. 
>> >>>> 1.4 The "migration" role can be granted or taken on-the-fly, when >>>> hosts >>>> are active, as long as there are no currently-migrating VMs. >>>> >>>> 2. Scheduler >>>> 2.1 when deciding which host should be used for automatic >>>> migration, take into account the existence and availability of >>>> the >>>> migration network on the destination host. >>>> 2.2 For manual migration, let user migrate a VM to a host with no >>>> migration network - if the admin wants to keep jamming the >>>> management network with migration traffic, let her. >> >> Since you send migration network per migration command, why not allow >> to choose any network on the host same as you allow to choose host? If >> host is not selected then allow to choose from cluster's networks. >> The default should be the cluster's migration network. > WHY??? It is only adding complexity to the user experience, I don't see the big benefit of having it. The user can do more things and the UI is getting more and more complex for simple actions. I think we should keep the requirement reasonable, what should lead us when adding a requirement is how exotic this use-case is and what is the complexity it adds to the more common use cases. > Cool. Added to wiki page. > >> >> If you allow for the above, we can waver the enforcement of migration network per host. No migration network == no automatic migration to/from this host. > I think you can wave the enforcement anyway... > again, I'd prefer to keep the current default status of ovirtmgmt as a > migration network. Besides that, +1. > >> >> >>>> >>>> 3. VdsBroker migration verb. >>>> 3.1 For the a modern cluster level, with migration network defined >>>> on >>>> the destination host, an additional ''miguri'' parameter >>>> should be added >>>> to the "migrate" command >>>> >>>> _______________________________________________ >>>> Arch mailing list >>>> Arch at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/arch >>> >>> How is the authentication of the peers handled? Do we need a cert per >>> each source/destination logical interface? > > I hope Orit or Lain correct me, but I am not aware of any > authentication scheme that protects non-tunneled qemu destination from > an evil process with network acess to the host. > > Dan. 
> _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From simon at redhat.com Wed Jan 9 11:23:50 2013 From: simon at redhat.com (Simon Grinberg) Date: Wed, 9 Jan 2013 06:23:50 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <50ED1BA1.3020605@redhat.com> Message-ID: <31791180.3651.1357730560590.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Livnat Peer" > To: "Dan Kenigsberg" , "Simon Grinberg" > Cc: "Orit Wasserman" , "Laine Stump" , "Yuval M" , "Limor > Gavish" , arch at ovirt.org > Sent: Wednesday, January 9, 2013 9:26:25 AM > Subject: Re: feature suggestion: migration network > > On 01/08/2013 09:34 PM, Dan Kenigsberg wrote: > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: > >> > >> > >> ----- Original Message ----- > >>> From: "Yaniv Kaul" > >>> To: "Dan Kenigsberg" > >>> Cc: "Limor Gavish" , "Yuval M" > >>> , arch at ovirt.org, "Simon Grinberg" > >>> > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > >>> Subject: Re: feature suggestion: migration network > >>> > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > >>>> There's talk about this for ages, so it's time to have proper > >>>> discussion > >>>> and a feature page about it: let us have a "migration" network > >>>> role, and > >>>> use such networks to carry migration data > >>>> > >>>> When Engine requests to migrate a VM from one node to another, > >>>> the > >>>> VM > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > >>>> connection > >>>> that is opened from the source qemu process to the destination > >>>> qemu. > >>>> Currently, destination qemu listens for the incoming connection > >>>> on > >>>> the > >>>> management IP address of the destination host. This has serious > >>>> downsides: a "migration storm" may choke the destination's > >>>> management > >>>> interface; migration is plaintext and ovirtmgmt includes Engine > >>>> which > >>>> sits may sit the node cluster. > >>>> > >>>> With this feature, a cluster administrator may grant the > >>>> "migration" > >>>> role to one of the cluster networks. Engine would use that > >>>> network's IP > >>>> address on the destination host when it requests a migration of > >>>> a > >>>> VM. > >>>> With proper network setup, migration data would be separated to > >>>> that > >>>> network. > >>>> > >>>> === Benefit to oVirt === > >>>> * Users would be able to define and dedicate a separate network > >>>> for > >>>> migration. Users that need quick migration would use nics > >>>> with > >>>> high > >>>> bandwidth. Users who want to cap the bandwidth consumed by > >>>> migration > >>>> could define a migration network over nics with bandwidth > >>>> limitation. > >>>> * Migration data can be limited to a separate network, that has > >>>> no > >>>> layer-2 access from Engine > >>>> > >>>> === Vdsm === > >>>> The "migrate" verb should be extended with an additional > >>>> parameter, > >>>> specifying the address that the remote qemu process should > >>>> listen > >>>> on. 
A > >>>> new argument is to be added to the currently-defined migration > >>>> arguments: > >>>> * vmId: UUID > >>>> * dst: management address of destination host > >>>> * dstparams: hibernation volumes definition > >>>> * mode: migration/hibernation > >>>> * method: rotten legacy > >>>> * ''New'': migration uri, according to > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > >>>> such as tcp:// > >>>> > >>>> === Engine === > >>>> As usual, complexity lies here, and several changes are > >>>> required: > >>>> > >>>> 1. Network definition. > >>>> 1.1 A new network role - not unlike "display network" should be > >>>> added.Only one migration network should be defined on a > >>>> cluster. > >> > >> We are considering multiple display networks already, then why not > >> the > >> same for migration? > > > > What is the motivation of having multiple migration networks? > > Extending > > the bandwidth (and thus, any network can be taken when needed) or > > data separation (and thus, a migration network should be assigned > > to > > each VM in the cluster)? Or another morivation with consequence? > > All of the above, I'll explain in my answer to Livnat. > > In addition to the questions above there are some behavioral changes > driven by supporting multiple-migration-network - > > 1.Today cluster is a migration domain (give or take optional networks > - > which was designed to enable dynamic network provisioning not for > breaking cluster migration domain...) It was designed as a simple workaround to many issue, however its not fully satisfies any: As a workaround for Dynamic-Network - It misses the part of set up on demand, but at least allows the host not be non-operational when we use hooks to set up the networks As a workaround to allow fore multiple Storage-Network to be used with multi-path - It misses the property of 'at least one available -> dependency', but at least allows the host not be non-operational And so on, it's non of the above but it's easy and helpful > adding multiple migration > network in the cluster means you break this assumption and now only hosts > with shared migration network can migrate VMs between them...that's > splitting the cluster to sub-migration domain. what is the meaning of cluster > now? Good question. For a data center use case, current cluster definition is good and should be maintained - it's simple, easy to understand, and cover most of the use cases here. For multi-tenant and cloud use case where multiple tenants share physical resources - it's probably not enough and we'll need farther partitioning into sub-resouce-domains > Or did you mean ALL hosts in the cluster should have ALL migration > networks? (a motivation for that is not clear to me) Indeed, this is not the best use case for this. But still has some uses in case there are small number of tenants sharing the cluster or if the customer due to the unsymmetrical nature of most bonding modes prefers to use muplitple networks for migration and will ask us to optimize migration across these. See farther details below > > 2. What happens if a single host has multiple migration networks > assigned to it. (I am assuming the migration role is not necessarily > a > standalone role but can be an additional tag on an existing network > that > can be used for management or VMs or any future role). > Do we really want to get into managing a policy around it, which > migration network to use - random/RR/even-traffic-load etc. 
Yes we do, and now I'll finally explain: having a single Display network and Migration network per cluster is good enough when there is a single tenant per cluster. But what happens when you have, for example, two tenants sharing the same physical resources? Then their Display networks should be different (so you'll be able to route traffic to different client proxies if needed, guarantee SLA, security, etc.). The same may go for the migration network, for SLA guarantees and security. But for simplicity I'll look at it just from the SLA point of view. 1. You may need to guarantee global resources per tenant - that is what he signs up for. 2. Within a tenant's resources he may need to guarantee specific VM / VM-group resources - for his own better use of the resources he signed up for. The second should probably be done by a VM SLA, while the first is more easily done at the network level. Yes, it could be done by setting some kind of rules on traffic type, aggregated source lists, etc., but they all funnel in the end into a logical network. The above is even more obvious when you use external hardware network management, like Mellanox or Cisco UCS. There you set the SLA on profiles, which are the equivalent of logical networks. I know that when you think SLA you think of VM data traffic, but that is not enough. Display traffic SLA is important as well, and the same goes for 'facilities' SLA - the migration network lies under this. So what am I trying to say? That except for the management network, which is strictly for our own use (and needs SLA for other reasons), all other 'Facility' networks may need to be set per tenant, as they may not share the same interfaces - or even if they practically share the same interface. > > > > >> > >>>> 1.2 If none is defined, the legacy "use ovirtmgmt for migration" > >>>> behavior would apply. > >>>> 1.3 A migration network is more likely to be a ''required'' > >>>> network, but > >>>> a user may opt for non-required. He may face unpleasant > >>>> surprises if he > >>>> wants to migrate his machine, but no candidate host has the > >>>> network > >>>> available. > >> > >> I think the enforcement should be at least one migration network > >> per host -> in the case we support more then one > >> Else always required. > > Why? > If we are thinking that migration network is optional there could be > hosts that do not have migration network and only hols pin-to-host > VMs > for example.... > This one falls to don't nanny the user I think... Makes sense > > > > > Fine by me - if we keep backward behavior of ovirtmgmt being a > > migration > > network by default. Yep, my bad for not mentioning that it is built on top of what you said, not instead of it. The default should be the management network as before; the customer will need to explicitly remove that role to end up with no migration network. > > I think that the worst case is that the user > > finds > > out - in the least convinient moment - that ovirt 3.3 would not > > migrate > > his VMs without explicitly assigning the "migration" role. > > > > We can assign the migration role in the engine to all exising > management > network upon upgrade, from there the user can change definitions the > way > he sees fit. > If we get to a point of having a host with no migration network a > warning to the user is in place but no more than that IMO. Makes sense > > >> > >>>> 1.4 The "migration" role can be granted or taken on-the-fly, > >>>> when > >>>> hosts > >>>> are active, as long as there are no currently-migrating > >>>> VMs. > >>>> > >>>> 2.
Scheduler > >>>> 2.1 when deciding which host should be used for automatic > >>>> migration, take into account the existence and availability > >>>> of > >>>> the > >>>> migration network on the destination host. > >>>> 2.2 For manual migration, let user migrate a VM to a host with > >>>> no > >>>> migration network - if the admin wants to keep jamming the > >>>> management network with migration traffic, let her. > >> > >> Since you send migration network per migration command, why not > >> allow > >> to choose any network on the host same as you allow to choose > >> host? If > >> host is not selected then allow to choose from cluster's networks. > >> The default should be the cluster's migration network. > > > > WHY??? > It is only adding complexity to the user experience, I don't see the > big > benefit of having it. > The user can do more things and the UI is getting more and more > complex > for simple actions. > > I think we should keep the requirement reasonable, what should lead > us > when adding a requirement is how exotic this use-case is and what is > the > complexity it adds to the more common use cases. What complexity does it add? Drop box to select a network filtered by the selected host, or not filtered. With the suggested VDSM migration API, it's isn't more complex then allowing to select the host while at the same time allowing flexibility to the user. Consider the case where you have huge VMs that takes ages to migrate - this will allow temporary diverting some of the migration to another network. This will also allow for your suggestion above not to enforce a migration network but still allow for migrations (though manual only). If migrations for a customer (and we have those) is a rare event that is always fully orchestrated, why does he need to have a migration network in the first place. Why not to allow him to select during migration what network to use? > > > > Cool. Added to wiki page. > > > >> > >> If you allow for the above, we can waver the enforcement of > >> migration network per host. No migration network == no automatic > >> migration to/from this host. > > > > I think you can wave the enforcement anyway... > > > again, I'd prefer to keep the current default status of ovirtmgmt > > as a > > migration network. Besides that, +1. > > > >> > >> > >>>> > >>>> 3. VdsBroker migration verb. > >>>> 3.1 For the a modern cluster level, with migration network > >>>> defined > >>>> on > >>>> the destination host, an additional ''miguri'' parameter > >>>> should be added > >>>> to the "migrate" command > >>>> > >>>> _______________________________________________ > >>>> Arch mailing list > >>>> Arch at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/arch > >>> > >>> How is the authentication of the peers handled? Do we need a cert > >>> per > >>> each source/destination logical interface? > > > > I hope Orit or Lain correct me, but I am not aware of any > > authentication scheme that protects non-tunneled qemu destination > > from > > an evil process with network acess to the host. > > > > Dan. 
> > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > > From mburns at redhat.com Thu Jan 10 01:07:58 2013 From: mburns at redhat.com (Mike Burns) Date: Wed, 09 Jan 2013 20:07:58 -0500 Subject: ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines Message-ID: <1357780078.2865.26.camel@beelzebub.mburnsfire.net> (Sorry for cross posting, trying to ensure I hit all the relevant maintainers) If you are the primary maintainer of a sub-project in oVirt, this message is for you. At the Weekly oVirt Meeting, the final devel freeze and beta dates were decided. Freeze: 2013-01-14 Beta Post: 2013-01-15 Action items: * You project should create a new branch in gerrit for the release * You should create a formal build of your project for the beta post * Get the formal build of your project into the hands of someone who can post it [1][2] These should all be done by EOD on 2013-01-14 (with the exception of ovirt-node-iso) [3] Packages that this impacts: * mom * otopi * ovirt-engine * ovirt-engine-cli * ovirt-engine-sdk * ovirt-guest-agent * ovirt-host-deploy * ovirt-image-uploader * ovirt-iso-uploader * ovirt-log-collector * ovirt-node * ovirt-node-iso * vdsm Thanks Mike Burns [1] This is only necessary if the package is *not* already in fedora repos (must be in actual fedora repos, not just updates-testing or koji) [2] Communicate with mburns, mgoldboi, oschreib to deliver the packages [3] ovirt-node-iso requires some of the other packages to be available prior to creating the image. This image will be created either on 2013-01-14 or 2013-01-15 and posted along with the rest of the Beta. From wudxw at linux.vnet.ibm.com Thu Jan 10 02:45:42 2013 From: wudxw at linux.vnet.ibm.com (Mark Wu) Date: Thu, 10 Jan 2013 10:45:42 +0800 Subject: feature suggestion: migration network In-Reply-To: <50EC3132.1030807@redhat.com> References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com> <50EC3132.1030807@redhat.com> Message-ID: <50EE2B56.8020000@linux.vnet.ibm.com> On 01/08/2013 10:46 PM, Yaniv Kaul wrote: > On 08/01/13 15:04, Dan Kenigsberg wrote: >> There's talk about this for ages, so it's time to have proper discussion >> and a feature page about it: let us have a "migration" network role, and >> use such networks to carry migration data >> >> When Engine requests to migrate a VM from one node to another, the VM >> state (Bios, IO devices, RAM) is transferred over a TCP/IP connection >> that is opened from the source qemu process to the destination qemu. >> Currently, destination qemu listens for the incoming connection on the >> management IP address of the destination host. This has serious >> downsides: a "migration storm" may choke the destination's management >> interface; migration is plaintext and ovirtmgmt includes Engine which >> sits may sit the node cluster. >> >> With this feature, a cluster administrator may grant the "migration" >> role to one of the cluster networks. Engine would use that network's IP >> address on the destination host when it requests a migration of a VM. >> With proper network setup, migration data would be separated to that >> network. >> >> === Benefit to oVirt === >> * Users would be able to define and dedicate a separate network for >> migration. Users that need quick migration would use nics with high >> bandwidth. Users who want to cap the bandwidth consumed by migration >> could define a migration network over nics with bandwidth limitation. 
>> * Migration data can be limited to a separate network, that has no >> layer-2 access from Engine >> >> === Vdsm === >> The "migrate" verb should be extended with an additional parameter, >> specifying the address that the remote qemu process should listen on. A >> new argument is to be added to the currently-defined migration >> arguments: >> * vmId: UUID >> * dst: management address of destination host >> * dstparams: hibernation volumes definition >> * mode: migration/hibernation >> * method: rotten legacy >> * ''New'': migration uri, according to >> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 >> such as tcp:// >> >> === Engine === >> As usual, complexity lies here, and several changes are required: >> >> 1. Network definition. >> 1.1 A new network role - not unlike "display network" should be >> added.Only one migration network should be defined on a cluster. >> 1.2 If none is defined, the legacy "use ovirtmgmt for migration" >> behavior would apply. >> 1.3 A migration network is more likely to be a ''required'' network, but >> a user may opt for non-required. He may face unpleasant >> surprises if he >> wants to migrate his machine, but no candidate host has the network >> available. >> 1.4 The "migration" role can be granted or taken on-the-fly, when hosts >> are active, as long as there are no currently-migrating VMs. >> >> 2. Scheduler >> 2.1 when deciding which host should be used for automatic >> migration, take into account the existence and availability of the >> migration network on the destination host. >> 2.2 For manual migration, let user migrate a VM to a host with no >> migration network - if the admin wants to keep jamming the >> management network with migration traffic, let her. >> >> 3. VdsBroker migration verb. >> 3.1 For the a modern cluster level, with migration network defined on >> the destination host, an additional ''miguri'' parameter should >> be added >> to the "migrate" command >> >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch > > How is the authentication of the peers handled? Do we need a cert per > each source/destination logical interface? > Y. In my understanding, using a separate migration network doesn't change the current peers authentication. We still use the URI ''qemu+tls://remoeHost/system' to connect the target libvirt service if ssl enabled, and the remote host should be the ip address of management interface. But we can choose other interfaces except the manage interface to transport the migration data. We just change the migrateURI, so the current authentication mechanism should still work for this new feature. 
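(To illustrate Mark's point above - the authenticated libvirt connection URI stays on the management address, and only the migration URI moves onto the migration network - here is a minimal sketch using the libvirt Python bindings and the virDomainMigrateToURI2 call referenced earlier in the thread. Host names, addresses and the VM name are made up, and this shows the bare libvirt call rather than VDSM's actual migration code path.)

import libvirt

# Sketch: control channel vs. data channel in a migration.
src = libvirt.open('qemu:///system')
dom = src.lookupByName('example-vm')                  # hypothetical VM name

# Control channel: unchanged - peer authentication (TLS certs) happens here,
# against the destination's management address.
dconnuri = 'qemu+tls://dst-mgmt.example.com/system'

# Data channel: the new part - destination qemu listens on its IP on the
# dedicated migration network instead of the management network.
miguri = 'tcp://192.168.200.12'

dom.migrateToURI2(dconnuri, miguri,
                  None,                               # no destination XML rewrite
                  libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER,
                  None,                               # keep the same domain name
                  0)                                  # no bandwidth cap (MiB/s)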
> > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From wudxw at linux.vnet.ibm.com Thu Jan 10 03:04:04 2013 From: wudxw at linux.vnet.ibm.com (Mark Wu) Date: Thu, 10 Jan 2013 11:04:04 +0800 Subject: feature suggestion: migration network In-Reply-To: <20130108130415.GG1534@redhat.com> References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com> Message-ID: <50EE2FA4.9000703@linux.vnet.ibm.com> On 01/08/2013 09:04 PM, Dan Kenigsberg wrote: > There's talk about this for ages, so it's time to have proper discussion > and a feature page about it: let us have a "migration" network role, and > use such networks to carry migration data > > When Engine requests to migrate a VM from one node to another, the VM > state (Bios, IO devices, RAM) is transferred over a TCP/IP connection > that is opened from the source qemu process to the destination qemu. > Currently, destination qemu listens for the incoming connection on the > management IP address of the destination host. This has serious > downsides: a "migration storm" may choke the destination's management > interface; migration is plaintext and ovirtmgmt includes Engine which > sits may sit the node cluster. > > With this feature, a cluster administrator may grant the "migration" > role to one of the cluster networks. Engine would use that network's IP > address on the destination host when it requests a migration of a VM. > With proper network setup, migration data would be separated to that > network. > > === Benefit to oVirt === > * Users would be able to define and dedicate a separate network for > migration. Users that need quick migration would use nics with high > bandwidth. Users who want to cap the bandwidth consumed by migration > could define a migration network over nics with bandwidth limitation. > * Migration data can be limited to a separate network, that has no > layer-2 access from Engine > > === Vdsm === > The "migrate" verb should be extended with an additional parameter, > specifying the address that the remote qemu process should listen on. A > new argument is to be added to the currently-defined migration > arguments: > * vmId: UUID > * dst: management address of destination host > * dstparams: hibernation volumes definition > * mode: migration/hibernation > * method: rotten legacy > * ''New'': migration uri, according to http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 such as tcp:// If we would like to resolve the migration storm, we also could add the qemu migration bandwidth limit as a parameter for migrate verb. Currently, we use it as a static configuration on vdsm host. It's not flexible. Engine could pass appropriate parameters according to the traffic load and bandwidth of migration network. It also could be specified by customer according to the priority they suppose. > > === Engine === > As usual, complexity lies here, and several changes are required: > > 1. Network definition. > 1.1 A new network role - not unlike "display network" should be > added.Only one migration network should be defined on a cluster. > 1.2 If none is defined, the legacy "use ovirtmgmt for migration" > behavior would apply. > 1.3 A migration network is more likely to be a ''required'' network, but > a user may opt for non-required. He may face unpleasant surprises if he > wants to migrate his machine, but no candidate host has the network > available. 
> 1.4 The "migration" role can be granted or taken on-the-fly, when hosts > are active, as long as there are no currently-migrating VMs. > > 2. Scheduler > 2.1 when deciding which host should be used for automatic > migration, take into account the existence and availability of the > migration network on the destination host. > 2.2 For manual migration, let user migrate a VM to a host with no > migration network - if the admin wants to keep jamming the > management network with migration traffic, let her. > > 3. VdsBroker migration verb. > 3.1 For the a modern cluster level, with migration network defined on > the destination host, an additional ''miguri'' parameter should be added > to the "migrate" command > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From wudxw at linux.vnet.ibm.com Thu Jan 10 03:13:23 2013 From: wudxw at linux.vnet.ibm.com (Mark Wu) Date: Thu, 10 Jan 2013 11:13:23 +0800 Subject: feature suggestion: migration network In-Reply-To: <20130108193411.GH1534@redhat.com> References: <50EC3132.1030807@redhat.com> <26777200.2948.1357669314263.JavaMail.javamailuser@localhost> <20130108193411.GH1534@redhat.com> Message-ID: <50EE31D3.2080602@linux.vnet.ibm.com> On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: >> >> ----- Original Message ----- >>> From: "Yaniv Kaul" >>> To: "Dan Kenigsberg" >>> Cc: "Limor Gavish" , "Yuval M" , arch at ovirt.org, "Simon Grinberg" >>> >>> Sent: Tuesday, January 8, 2013 4:46:10 PM >>> Subject: Re: feature suggestion: migration network >>> >>> On 08/01/13 15:04, Dan Kenigsberg wrote: >>>> There's talk about this for ages, so it's time to have proper >>>> discussion >>>> and a feature page about it: let us have a "migration" network >>>> role, and >>>> use such networks to carry migration data >>>> >>>> When Engine requests to migrate a VM from one node to another, the >>>> VM >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP >>>> connection >>>> that is opened from the source qemu process to the destination >>>> qemu. >>>> Currently, destination qemu listens for the incoming connection on >>>> the >>>> management IP address of the destination host. This has serious >>>> downsides: a "migration storm" may choke the destination's >>>> management >>>> interface; migration is plaintext and ovirtmgmt includes Engine >>>> which >>>> sits may sit the node cluster. >>>> >>>> With this feature, a cluster administrator may grant the >>>> "migration" >>>> role to one of the cluster networks. Engine would use that >>>> network's IP >>>> address on the destination host when it requests a migration of a >>>> VM. >>>> With proper network setup, migration data would be separated to >>>> that >>>> network. >>>> >>>> === Benefit to oVirt === >>>> * Users would be able to define and dedicate a separate network for >>>> migration. Users that need quick migration would use nics with >>>> high >>>> bandwidth. Users who want to cap the bandwidth consumed by >>>> migration >>>> could define a migration network over nics with bandwidth >>>> limitation. >>>> * Migration data can be limited to a separate network, that has no >>>> layer-2 access from Engine >>>> >>>> === Vdsm === >>>> The "migrate" verb should be extended with an additional parameter, >>>> specifying the address that the remote qemu process should listen >>>> on. 
A >>>> new argument is to be added to the currently-defined migration >>>> arguments: >>>> * vmId: UUID >>>> * dst: management address of destination host >>>> * dstparams: hibernation volumes definition >>>> * mode: migration/hibernation >>>> * method: rotten legacy >>>> * ''New'': migration uri, according to >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 >>>> such as tcp:// >>>> >>>> === Engine === >>>> As usual, complexity lies here, and several changes are required: >>>> >>>> 1. Network definition. >>>> 1.1 A new network role - not unlike "display network" should be >>>> added.Only one migration network should be defined on a >>>> cluster. >> We are considering multiple display networks already, then why not the >> same for migration? > What is the motivation of having multiple migration networks? Extending > the bandwidth (and thus, any network can be taken when needed) or > data separation (and thus, a migration network should be assigned to > each VM in the cluster)? Or another morivation with consequence? My suggestion is making the migration network role determined dynamically on each migrate. If we only define one migration network per cluster, the migration storm could happen to that network. It could cause some bad impact on VM applications. So I think engine could choose the network which has lower traffic load on migration, or leave the choice to user. > >> >>>> 1.2 If none is defined, the legacy "use ovirtmgmt for migration" >>>> behavior would apply. >>>> 1.3 A migration network is more likely to be a ''required'' >>>> network, but >>>> a user may opt for non-required. He may face unpleasant >>>> surprises if he >>>> wants to migrate his machine, but no candidate host has the >>>> network >>>> available. >> I think the enforcement should be at least one migration network per host -> in the case we support more then one >> Else always required. > Fine by me - if we keep backward behavior of ovirtmgmt being a migration > network by default. I think that the worst case is that the user finds > out - in the least convinient moment - that ovirt 3.3 would not migrate > his VMs without explicitly assigning the "migration" role. > >>>> 1.4 The "migration" role can be granted or taken on-the-fly, when >>>> hosts >>>> are active, as long as there are no currently-migrating VMs. >>>> >>>> 2. Scheduler >>>> 2.1 when deciding which host should be used for automatic >>>> migration, take into account the existence and availability of >>>> the >>>> migration network on the destination host. >>>> 2.2 For manual migration, let user migrate a VM to a host with no >>>> migration network - if the admin wants to keep jamming the >>>> management network with migration traffic, let her. >> Since you send migration network per migration command, why not allow >> to choose any network on the host same as you allow to choose host? If >> host is not selected then allow to choose from cluster's networks. >> The default should be the cluster's migration network. > Cool. Added to wiki page. > >> If you allow for the above, we can waver the enforcement of migration network per host. No migration network == no automatic migration to/from this host. > again, I'd prefer to keep the current default status of ovirtmgmt as a > migration network. Besides that, +1. > >> >>>> 3. VdsBroker migration verb. 
>>>> 3.1 For the a modern cluster level, with migration network defined >>>> on >>>> the destination host, an additional ''miguri'' parameter >>>> should be added >>>> to the "migrate" command >>>> >>>> _______________________________________________ >>>> Arch mailing list >>>> Arch at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/arch >>> How is the authentication of the peers handled? Do we need a cert per >>> each source/destination logical interface? > I hope Orit or Lain correct me, but I am not aware of any > authentication scheme that protects non-tunneled qemu destination from > an evil process with network acess to the host. > > Dan. > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From simon at redhat.com Thu Jan 10 08:38:56 2013 From: simon at redhat.com (Simon Grinberg) Date: Thu, 10 Jan 2013 03:38:56 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <50EE31D3.2080602@linux.vnet.ibm.com> Message-ID: <5798161.4159.1357807064987.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Mark Wu" > To: "Dan Kenigsberg" > Cc: "Simon Grinberg" , "Orit Wasserman" , "Laine Stump" , > "Yuval M" , "Limor Gavish" , arch at ovirt.org > Sent: Thursday, January 10, 2013 5:13:23 AM > Subject: Re: feature suggestion: migration network > > On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: > >> > >> ----- Original Message ----- > >>> From: "Yaniv Kaul" > >>> To: "Dan Kenigsberg" > >>> Cc: "Limor Gavish" , "Yuval M" > >>> , arch at ovirt.org, "Simon Grinberg" > >>> > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > >>> Subject: Re: feature suggestion: migration network > >>> > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > >>>> There's talk about this for ages, so it's time to have proper > >>>> discussion > >>>> and a feature page about it: let us have a "migration" network > >>>> role, and > >>>> use such networks to carry migration data > >>>> > >>>> When Engine requests to migrate a VM from one node to another, > >>>> the > >>>> VM > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > >>>> connection > >>>> that is opened from the source qemu process to the destination > >>>> qemu. > >>>> Currently, destination qemu listens for the incoming connection > >>>> on > >>>> the > >>>> management IP address of the destination host. This has serious > >>>> downsides: a "migration storm" may choke the destination's > >>>> management > >>>> interface; migration is plaintext and ovirtmgmt includes Engine > >>>> which > >>>> sits may sit the node cluster. > >>>> > >>>> With this feature, a cluster administrator may grant the > >>>> "migration" > >>>> role to one of the cluster networks. Engine would use that > >>>> network's IP > >>>> address on the destination host when it requests a migration of > >>>> a > >>>> VM. > >>>> With proper network setup, migration data would be separated to > >>>> that > >>>> network. > >>>> > >>>> === Benefit to oVirt === > >>>> * Users would be able to define and dedicate a separate network > >>>> for > >>>> migration. Users that need quick migration would use nics > >>>> with > >>>> high > >>>> bandwidth. Users who want to cap the bandwidth consumed by > >>>> migration > >>>> could define a migration network over nics with bandwidth > >>>> limitation. 
> >>>> * Migration data can be limited to a separate network, that has > >>>> no > >>>> layer-2 access from Engine > >>>> > >>>> === Vdsm === > >>>> The "migrate" verb should be extended with an additional > >>>> parameter, > >>>> specifying the address that the remote qemu process should > >>>> listen > >>>> on. A > >>>> new argument is to be added to the currently-defined migration > >>>> arguments: > >>>> * vmId: UUID > >>>> * dst: management address of destination host > >>>> * dstparams: hibernation volumes definition > >>>> * mode: migration/hibernation > >>>> * method: rotten legacy > >>>> * ''New'': migration uri, according to > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > >>>> such as tcp:// > >>>> > >>>> === Engine === > >>>> As usual, complexity lies here, and several changes are > >>>> required: > >>>> > >>>> 1. Network definition. > >>>> 1.1 A new network role - not unlike "display network" should be > >>>> added.Only one migration network should be defined on a > >>>> cluster. > >> We are considering multiple display networks already, then why not > >> the > >> same for migration? > > What is the motivation of having multiple migration networks? > > Extending > > the bandwidth (and thus, any network can be taken when needed) or > > data separation (and thus, a migration network should be assigned > > to > > each VM in the cluster)? Or another morivation with consequence? > My suggestion is making the migration network role determined > dynamically on each migrate. If we only define one migration network > per cluster, > the migration storm could happen to that network. It could cause some > bad impact on VM applications. So I think engine could choose the > network which > has lower traffic load on migration, or leave the choice to user. Dynamic migration selection is indeed desirable but only from migration networks - migration traffic is insecure so it's undesirable to have it mixed with VM traffic unless permitted by the admin by marking this network as migration network. To clarify what I've meant in the previous response to Livnat - When I've said "...if the customer due to the unsymmetrical nature of most bonding modes prefers to use muplitple networks for migration and will ask us to optimize migration across these..." But the dynamic selection should be based on SLA which the above is just part: 1. Need to consider tenant traffic segregation rules = security 2. SLA contracts If you keep 2, migration storms mitigation is granted. But you are right that another feature required for #2 above is to control the migration bandwidth (BW) per migration. We had discussion in the past for VDSM to do dynamic calculation based on f(Line Speed, Max Migration BW, Max allowed per VM, Free BW, number of migrating machines) when starting migration. (I actually wanted to do so years ago, but never got to that - one of those things you always postpone to when you'll find the time). We did not think that the engine should provide some, but coming to think of it, you are right and it makes sense. For SLA - Max per VM + Min guaranteed should be provided by the engine to maintain SLA. And it's up to the engine not to VMs with Min-Guaranteed x number of concurrent migrations will exceed Max Migration BW. Dan this is way too much for initial implementation, but don't you think we should at least add place holders in the migration API? Maybe Doron can assist with the required verbs. 
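(A rough sketch of the arithmetic behind the f(Line Speed, Max Migration BW, Max allowed per VM, Free BW, number of migrating machines) idea above. The sharing policy and all names are assumptions made for illustration, not a proposed VDSM implementation.)

# Illustrative only: one possible shape for the dynamic per-migration
# bandwidth cap discussed above. All values are in Mbps.
def migration_bandwidth(line_speed, max_migration_bw, max_per_vm,
                        free_bw, migrating_vms):
    """Bandwidth cap to apply to the next outgoing migration."""
    # The overall budget is bounded by the admin-defined migration cap,
    # the physical line speed, and whatever is actually free right now.
    budget = min(max_migration_bw, line_speed, free_bw)
    # Share the budget between the migrations already in flight and this one.
    fair_share = budget / (migrating_vms + 1)
    # Never exceed the per-VM ceiling, so one migration cannot starve others.
    return min(fair_share, max_per_vm)

# Example: 10G line, 4G reserved for migration, 1G per-VM cap,
# 6G currently free, 2 migrations already running -> 1000 Mbps.
print(migration_bandwidth(10000, 4000, 1000, 6000, 2))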
(P.S., I don't want to alarm but we may need SLA parameters for setupNetworks as well :) unless we want these as separate API tough it means more calls during set up) > > > > >> > >>>> 1.2 If none is defined, the legacy "use ovirtmgmt for migration" > >>>> behavior would apply. > >>>> 1.3 A migration network is more likely to be a ''required'' > >>>> network, but > >>>> a user may opt for non-required. He may face unpleasant > >>>> surprises if he > >>>> wants to migrate his machine, but no candidate host has > >>>> the > >>>> network > >>>> available. > >> I think the enforcement should be at least one migration network > >> per host -> in the case we support more then one > >> Else always required. > > Fine by me - if we keep backward behavior of ovirtmgmt being a > > migration > > network by default. I think that the worst case is that the user > > finds > > out - in the least convinient moment - that ovirt 3.3 would not > > migrate > > his VMs without explicitly assigning the "migration" role. > > > >>>> 1.4 The "migration" role can be granted or taken on-the-fly, > >>>> when > >>>> hosts > >>>> are active, as long as there are no currently-migrating > >>>> VMs. > >>>> > >>>> 2. Scheduler > >>>> 2.1 when deciding which host should be used for automatic > >>>> migration, take into account the existence and > >>>> availability of > >>>> the > >>>> migration network on the destination host. > >>>> 2.2 For manual migration, let user migrate a VM to a host with > >>>> no > >>>> migration network - if the admin wants to keep jamming the > >>>> management network with migration traffic, let her. > >> Since you send migration network per migration command, why not > >> allow > >> to choose any network on the host same as you allow to choose > >> host? If > >> host is not selected then allow to choose from cluster's networks. > >> The default should be the cluster's migration network. > > Cool. Added to wiki page. > > > >> If you allow for the above, we can waver the enforcement of > >> migration network per host. No migration network == no automatic > >> migration to/from this host. > > again, I'd prefer to keep the current default status of ovirtmgmt > > as a > > migration network. Besides that, +1. > > > >> > >>>> 3. VdsBroker migration verb. > >>>> 3.1 For the a modern cluster level, with migration network > >>>> defined > >>>> on > >>>> the destination host, an additional ''miguri'' parameter > >>>> should be added > >>>> to the "migrate" command > >>>> > >>>> _______________________________________________ > >>>> Arch mailing list > >>>> Arch at ovirt.org > >>>> http://lists.ovirt.org/mailman/listinfo/arch > >>> How is the authentication of the peers handled? Do we need a cert > >>> per > >>> each source/destination logical interface? > > I hope Orit or Lain correct me, but I am not aware of any > > authentication scheme that protects non-tunneled qemu destination > > from > > an evil process with network acess to the host. > > > > Dan. 
> > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > > From dfediuck at redhat.com Thu Jan 10 09:43:45 2013 From: dfediuck at redhat.com (Doron Fediuck) Date: Thu, 10 Jan 2013 04:43:45 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <5798161.4159.1357807064987.JavaMail.javamailuser@localhost> Message-ID: <662338454.2575751.1357811025529.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Simon Grinberg" > To: "Mark Wu" , "Doron Fediuck" > Cc: "Orit Wasserman" , "Laine Stump" , "Yuval M" , "Limor > Gavish" , arch at ovirt.org, "Dan Kenigsberg" > Sent: Thursday, January 10, 2013 10:38:56 AM > Subject: Re: feature suggestion: migration network > > > > ----- Original Message ----- > > From: "Mark Wu" > > To: "Dan Kenigsberg" > > Cc: "Simon Grinberg" , "Orit Wasserman" > > , "Laine Stump" , > > "Yuval M" , "Limor Gavish" , > > arch at ovirt.org > > Sent: Thursday, January 10, 2013 5:13:23 AM > > Subject: Re: feature suggestion: migration network > > > > On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: > > >> > > >> ----- Original Message ----- > > >>> From: "Yaniv Kaul" > > >>> To: "Dan Kenigsberg" > > >>> Cc: "Limor Gavish" , "Yuval M" > > >>> , arch at ovirt.org, "Simon Grinberg" > > >>> > > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > > >>> Subject: Re: feature suggestion: migration network > > >>> > > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > > >>>> There's talk about this for ages, so it's time to have proper > > >>>> discussion > > >>>> and a feature page about it: let us have a "migration" network > > >>>> role, and > > >>>> use such networks to carry migration data > > >>>> > > >>>> When Engine requests to migrate a VM from one node to another, > > >>>> the > > >>>> VM > > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > > >>>> connection > > >>>> that is opened from the source qemu process to the destination > > >>>> qemu. > > >>>> Currently, destination qemu listens for the incoming > > >>>> connection > > >>>> on > > >>>> the > > >>>> management IP address of the destination host. This has > > >>>> serious > > >>>> downsides: a "migration storm" may choke the destination's > > >>>> management > > >>>> interface; migration is plaintext and ovirtmgmt includes > > >>>> Engine > > >>>> which > > >>>> sits may sit the node cluster. > > >>>> > > >>>> With this feature, a cluster administrator may grant the > > >>>> "migration" > > >>>> role to one of the cluster networks. Engine would use that > > >>>> network's IP > > >>>> address on the destination host when it requests a migration > > >>>> of > > >>>> a > > >>>> VM. > > >>>> With proper network setup, migration data would be separated > > >>>> to > > >>>> that > > >>>> network. > > >>>> > > >>>> === Benefit to oVirt === > > >>>> * Users would be able to define and dedicate a separate > > >>>> network > > >>>> for > > >>>> migration. Users that need quick migration would use nics > > >>>> with > > >>>> high > > >>>> bandwidth. Users who want to cap the bandwidth consumed by > > >>>> migration > > >>>> could define a migration network over nics with bandwidth > > >>>> limitation. 
> > >>>> * Migration data can be limited to a separate network, that > > >>>> has > > >>>> no > > >>>> layer-2 access from Engine > > >>>> > > >>>> === Vdsm === > > >>>> The "migrate" verb should be extended with an additional > > >>>> parameter, > > >>>> specifying the address that the remote qemu process should > > >>>> listen > > >>>> on. A > > >>>> new argument is to be added to the currently-defined migration > > >>>> arguments: > > >>>> * vmId: UUID > > >>>> * dst: management address of destination host > > >>>> * dstparams: hibernation volumes definition > > >>>> * mode: migration/hibernation > > >>>> * method: rotten legacy > > >>>> * ''New'': migration uri, according to > > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > >>>> such as tcp:// > > >>>> > > >>>> === Engine === > > >>>> As usual, complexity lies here, and several changes are > > >>>> required: > > >>>> > > >>>> 1. Network definition. > > >>>> 1.1 A new network role - not unlike "display network" should > > >>>> be > > >>>> added.Only one migration network should be defined on a > > >>>> cluster. > > >> We are considering multiple display networks already, then why > > >> not > > >> the > > >> same for migration? > > > What is the motivation of having multiple migration networks? > > > Extending > > > the bandwidth (and thus, any network can be taken when needed) or > > > data separation (and thus, a migration network should be assigned > > > to > > > each VM in the cluster)? Or another morivation with consequence? > > My suggestion is making the migration network role determined > > dynamically on each migrate. If we only define one migration > > network > > per cluster, > > the migration storm could happen to that network. It could cause > > some > > bad impact on VM applications. So I think engine could choose the > > network which > > has lower traffic load on migration, or leave the choice to user. > > Dynamic migration selection is indeed desirable but only from > migration networks - migration traffic is insecure so it's > undesirable to have it mixed with VM traffic unless permitted by the > admin by marking this network as migration network. > > To clarify what I've meant in the previous response to Livnat - When > I've said "...if the customer due to the unsymmetrical nature of > most bonding modes prefers to use muplitple networks for migration > and will ask us to optimize migration across these..." > > But the dynamic selection should be based on SLA which the above is > just part: > 1. Need to consider tenant traffic segregation rules = security > 2. SLA contracts > > If you keep 2, migration storms mitigation is granted. But you are > right that another feature required for #2 above is to control the > migration bandwidth (BW) per migration. We had discussion in the > past for VDSM to do dynamic calculation based on f(Line Speed, Max > Migration BW, Max allowed per VM, Free BW, number of migrating > machines) when starting migration. (I actually wanted to do so years > ago, but never got to that - one of those things you always postpone > to when you'll find the time). We did not think that the engine > should provide some, but coming to think of it, you are right and it > makes sense. For SLA - Max per VM + Min guaranteed should be > provided by the engine to maintain SLA. And it's up to the engine > not to VMs with Min-Guaranteed x number of concurrent migrations > will exceed Max Migration BW. 
> > Dan this is way too much for initial implementation, but don't you > think we should at least add place holders in the migration API? > Maybe Doron can assist with the required verbs. > > (P.S., I don't want to alarm but we may need SLA parameters for > setupNetworks as well :) unless we want these as separate API tough > it means more calls during set up) > As with other resources the bare minimum are usually MIN capacity and MAX to avoid choking of other tenants / VMs. In this context we may need to consider other QoS elements (delays, etc) but indeed it can be an additional limitation on top of the basic one. > > > > > > > >> > > >>>> 1.2 If none is defined, the legacy "use ovirtmgmt for > > >>>> migration" > > >>>> behavior would apply. > > >>>> 1.3 A migration network is more likely to be a ''required'' > > >>>> network, but > > >>>> a user may opt for non-required. He may face unpleasant > > >>>> surprises if he > > >>>> wants to migrate his machine, but no candidate host has > > >>>> the > > >>>> network > > >>>> available. > > >> I think the enforcement should be at least one migration network > > >> per host -> in the case we support more then one > > >> Else always required. > > > Fine by me - if we keep backward behavior of ovirtmgmt being a > > > migration > > > network by default. I think that the worst case is that the user > > > finds > > > out - in the least convinient moment - that ovirt 3.3 would not > > > migrate > > > his VMs without explicitly assigning the "migration" role. > > > > > >>>> 1.4 The "migration" role can be granted or taken on-the-fly, > > >>>> when > > >>>> hosts > > >>>> are active, as long as there are no currently-migrating > > >>>> VMs. > > >>>> > > >>>> 2. Scheduler > > >>>> 2.1 when deciding which host should be used for automatic > > >>>> migration, take into account the existence and > > >>>> availability of > > >>>> the > > >>>> migration network on the destination host. > > >>>> 2.2 For manual migration, let user migrate a VM to a host with > > >>>> no > > >>>> migration network - if the admin wants to keep jamming > > >>>> the > > >>>> management network with migration traffic, let her. > > >> Since you send migration network per migration command, why not > > >> allow > > >> to choose any network on the host same as you allow to choose > > >> host? If > > >> host is not selected then allow to choose from cluster's > > >> networks. > > >> The default should be the cluster's migration network. > > > Cool. Added to wiki page. > > > > > >> If you allow for the above, we can waver the enforcement of > > >> migration network per host. No migration network == no automatic > > >> migration to/from this host. > > > again, I'd prefer to keep the current default status of ovirtmgmt > > > as a > > > migration network. Besides that, +1. > > > > > >> > > >>>> 3. VdsBroker migration verb. > > >>>> 3.1 For the a modern cluster level, with migration network > > >>>> defined > > >>>> on > > >>>> the destination host, an additional ''miguri'' parameter > > >>>> should be added > > >>>> to the "migrate" command > > >>>> > > >>>> _______________________________________________ > > >>>> Arch mailing list > > >>>> Arch at ovirt.org > > >>>> http://lists.ovirt.org/mailman/listinfo/arch > > >>> How is the authentication of the peers handled? Do we need a > > >>> cert > > >>> per > > >>> each source/destination logical interface? 
> > > I hope Orit or Lain correct me, but I am not aware of any > > > authentication scheme that protects non-tunneled qemu destination > > > from > > > an evil process with network acess to the host. > > > > > > Dan. > > > _______________________________________________ > > > Arch mailing list > > > Arch at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/arch > > > > > > > > From rydekull at gmail.com Thu Jan 10 11:36:59 2013 From: rydekull at gmail.com (Alexander Rydekull) Date: Thu, 10 Jan 2013 12:36:59 +0100 Subject: oVirt - Infiniband Support / Setup Message-ID: Hello all, My name is Alexander and I'm somewhat involved with the infra-team, trying to become a more active contributor. I have a friend of mine who was prodding me and asking if oVirt would benefit from some infiniband equipment to further development on those parts. What he said was, that he could sort out servers / switches / adapters for the project to use and develop on. And he asked if oVirt would make any use of it? So basically, that's my question to you. Is this an area of interest? Would you want me to try and see if we can get a infiniband-setup for oVirt done? -- /Alexander Rydekull -------------- next part -------------- An HTML attachment was scrubbed... URL: From danken at redhat.com Thu Jan 10 11:46:08 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 10 Jan 2013 13:46:08 +0200 Subject: feature suggestion: migration network In-Reply-To: <662338454.2575751.1357811025529.JavaMail.root@redhat.com> References: <5798161.4159.1357807064987.JavaMail.javamailuser@localhost> <662338454.2575751.1357811025529.JavaMail.root@redhat.com> Message-ID: <20130110114608.GI26998@redhat.com> On Thu, Jan 10, 2013 at 04:43:45AM -0500, Doron Fediuck wrote: > > > ----- Original Message ----- > > From: "Simon Grinberg" > > To: "Mark Wu" , "Doron Fediuck" > > Cc: "Orit Wasserman" , "Laine Stump" , "Yuval M" , "Limor > > Gavish" , arch at ovirt.org, "Dan Kenigsberg" > > Sent: Thursday, January 10, 2013 10:38:56 AM > > Subject: Re: feature suggestion: migration network > > > > > > > > ----- Original Message ----- > > > From: "Mark Wu" > > > To: "Dan Kenigsberg" > > > Cc: "Simon Grinberg" , "Orit Wasserman" > > > , "Laine Stump" , > > > "Yuval M" , "Limor Gavish" , > > > arch at ovirt.org > > > Sent: Thursday, January 10, 2013 5:13:23 AM > > > Subject: Re: feature suggestion: migration network > > > > > > On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > > > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg wrote: > > > >> > > > >> ----- Original Message ----- > > > >>> From: "Yaniv Kaul" > > > >>> To: "Dan Kenigsberg" > > > >>> Cc: "Limor Gavish" , "Yuval M" > > > >>> , arch at ovirt.org, "Simon Grinberg" > > > >>> > > > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > > > >>> Subject: Re: feature suggestion: migration network > > > >>> > > > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > > > >>>> There's talk about this for ages, so it's time to have proper > > > >>>> discussion > > > >>>> and a feature page about it: let us have a "migration" network > > > >>>> role, and > > > >>>> use such networks to carry migration data > > > >>>> > > > >>>> When Engine requests to migrate a VM from one node to another, > > > >>>> the > > > >>>> VM > > > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > > > >>>> connection > > > >>>> that is opened from the source qemu process to the destination > > > >>>> qemu. 
> > > >>>> Currently, destination qemu listens for the incoming > > > >>>> connection > > > >>>> on > > > >>>> the > > > >>>> management IP address of the destination host. This has > > > >>>> serious > > > >>>> downsides: a "migration storm" may choke the destination's > > > >>>> management > > > >>>> interface; migration is plaintext and ovirtmgmt includes > > > >>>> Engine > > > >>>> which > > > >>>> sits may sit the node cluster. > > > >>>> > > > >>>> With this feature, a cluster administrator may grant the > > > >>>> "migration" > > > >>>> role to one of the cluster networks. Engine would use that > > > >>>> network's IP > > > >>>> address on the destination host when it requests a migration > > > >>>> of > > > >>>> a > > > >>>> VM. > > > >>>> With proper network setup, migration data would be separated > > > >>>> to > > > >>>> that > > > >>>> network. > > > >>>> > > > >>>> === Benefit to oVirt === > > > >>>> * Users would be able to define and dedicate a separate > > > >>>> network > > > >>>> for > > > >>>> migration. Users that need quick migration would use nics > > > >>>> with > > > >>>> high > > > >>>> bandwidth. Users who want to cap the bandwidth consumed by > > > >>>> migration > > > >>>> could define a migration network over nics with bandwidth > > > >>>> limitation. > > > >>>> * Migration data can be limited to a separate network, that > > > >>>> has > > > >>>> no > > > >>>> layer-2 access from Engine > > > >>>> > > > >>>> === Vdsm === > > > >>>> The "migrate" verb should be extended with an additional > > > >>>> parameter, > > > >>>> specifying the address that the remote qemu process should > > > >>>> listen > > > >>>> on. A > > > >>>> new argument is to be added to the currently-defined migration > > > >>>> arguments: > > > >>>> * vmId: UUID > > > >>>> * dst: management address of destination host > > > >>>> * dstparams: hibernation volumes definition > > > >>>> * mode: migration/hibernation > > > >>>> * method: rotten legacy > > > >>>> * ''New'': migration uri, according to > > > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > > >>>> such as tcp:// > > > >>>> > > > >>>> === Engine === > > > >>>> As usual, complexity lies here, and several changes are > > > >>>> required: > > > >>>> > > > >>>> 1. Network definition. > > > >>>> 1.1 A new network role - not unlike "display network" should > > > >>>> be > > > >>>> added.Only one migration network should be defined on a > > > >>>> cluster. > > > >> We are considering multiple display networks already, then why > > > >> not > > > >> the > > > >> same for migration? > > > > What is the motivation of having multiple migration networks? > > > > Extending > > > > the bandwidth (and thus, any network can be taken when needed) or > > > > data separation (and thus, a migration network should be assigned > > > > to > > > > each VM in the cluster)? Or another morivation with consequence? > > > My suggestion is making the migration network role determined > > > dynamically on each migrate. If we only define one migration > > > network > > > per cluster, > > > the migration storm could happen to that network. It could cause > > > some > > > bad impact on VM applications. So I think engine could choose the > > > network which > > > has lower traffic load on migration, or leave the choice to user. 
> > > > Dynamic migration selection is indeed desirable but only from > > migration networks - migration traffic is insecure so it's > > undesirable to have it mixed with VM traffic unless permitted by the > > admin by marking this network as migration network. > > > > To clarify what I've meant in the previous response to Livnat - When > > I've said "...if the customer due to the unsymmetrical nature of > > most bonding modes prefers to use muplitple networks for migration > > and will ask us to optimize migration across these..." > > > > But the dynamic selection should be based on SLA which the above is > > just part: > > 1. Need to consider tenant traffic segregation rules = security > > 2. SLA contracts We could devise a complex logic of assigning each Vm a pool of applicable migration networks, where one of them is chosen by Engine upon migration startup. I am, however, not at all sure that extending the migration bandwidth by means of multiple migration networks is worth the design hassle and the GUI noise. A simpler solution would be to build a single migration network on top of a fat bond, tweaked by a fine-tuned SLA. > > > > If you keep 2, migration storms mitigation is granted. But you are > > right that another feature required for #2 above is to control the > > migration bandwidth (BW) per migration. We had discussion in the > > past for VDSM to do dynamic calculation based on f(Line Speed, Max > > Migration BW, Max allowed per VM, Free BW, number of migrating > > machines) when starting migration. (I actually wanted to do so years > > ago, but never got to that - one of those things you always postpone > > to when you'll find the time). We did not think that the engine > > should provide some, but coming to think of it, you are right and it > > makes sense. For SLA - Max per VM + Min guaranteed should be > > provided by the engine to maintain SLA. And it's up to the engine > > not to VMs with Min-Guaranteed x number of concurrent migrations > > will exceed Max Migration BW. > > > > Dan this is way too much for initial implementation, but don't you > > think we should at least add place holders in the migration API? In my opinion this should wait for another feature. For each VM, I'd like to see a means to define the SLA of each of its vNIC. When we have that, we should similarly define how much bandwidth does it have for migration > > Maybe Doron can assist with the required verbs. > > > > (P.S., I don't want to alarm but we may need SLA parameters for > > setupNetworks as well :) unless we want these as separate API tough > > it means more calls during set up) Exactly - when we have a migration network concept, and when we have general network SLA defition, we could easily apply the latter on the former. > > > > As with other resources the bare minimum are usually MIN capacity and > MAX to avoid choking of other tenants / VMs. In this context we may need > to consider other QoS elements (delays, etc) but indeed it can be an additional > limitation on top of the basic one. 
> From dfediuck at redhat.com Thu Jan 10 11:59:05 2013 From: dfediuck at redhat.com (Doron Fediuck) Date: Thu, 10 Jan 2013 06:59:05 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <20130110114608.GI26998@redhat.com> Message-ID: <1054764838.2630317.1357819145118.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Dan Kenigsberg" > To: "Doron Fediuck" > Cc: "Simon Grinberg" , "Orit Wasserman" , "Laine Stump" , > "Yuval M" , "Limor Gavish" , arch at ovirt.org, "Mark Wu" > > Sent: Thursday, January 10, 2013 1:46:08 PM > Subject: Re: feature suggestion: migration network > > On Thu, Jan 10, 2013 at 04:43:45AM -0500, Doron Fediuck wrote: > > > > > > ----- Original Message ----- > > > From: "Simon Grinberg" > > > To: "Mark Wu" , "Doron Fediuck" > > > > > > Cc: "Orit Wasserman" , "Laine Stump" > > > , "Yuval M" , "Limor > > > Gavish" , arch at ovirt.org, "Dan Kenigsberg" > > > > > > Sent: Thursday, January 10, 2013 10:38:56 AM > > > Subject: Re: feature suggestion: migration network > > > > > > > > > > > > ----- Original Message ----- > > > > From: "Mark Wu" > > > > To: "Dan Kenigsberg" > > > > Cc: "Simon Grinberg" , "Orit Wasserman" > > > > , "Laine Stump" , > > > > "Yuval M" , "Limor Gavish" > > > > , > > > > arch at ovirt.org > > > > Sent: Thursday, January 10, 2013 5:13:23 AM > > > > Subject: Re: feature suggestion: migration network > > > > > > > > On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > > > > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg > > > > > wrote: > > > > >> > > > > >> ----- Original Message ----- > > > > >>> From: "Yaniv Kaul" > > > > >>> To: "Dan Kenigsberg" > > > > >>> Cc: "Limor Gavish" , "Yuval M" > > > > >>> , arch at ovirt.org, "Simon Grinberg" > > > > >>> > > > > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > > > > >>> Subject: Re: feature suggestion: migration network > > > > >>> > > > > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > > > > >>>> There's talk about this for ages, so it's time to have > > > > >>>> proper > > > > >>>> discussion > > > > >>>> and a feature page about it: let us have a "migration" > > > > >>>> network > > > > >>>> role, and > > > > >>>> use such networks to carry migration data > > > > >>>> > > > > >>>> When Engine requests to migrate a VM from one node to > > > > >>>> another, > > > > >>>> the > > > > >>>> VM > > > > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > > > > >>>> connection > > > > >>>> that is opened from the source qemu process to the > > > > >>>> destination > > > > >>>> qemu. > > > > >>>> Currently, destination qemu listens for the incoming > > > > >>>> connection > > > > >>>> on > > > > >>>> the > > > > >>>> management IP address of the destination host. This has > > > > >>>> serious > > > > >>>> downsides: a "migration storm" may choke the destination's > > > > >>>> management > > > > >>>> interface; migration is plaintext and ovirtmgmt includes > > > > >>>> Engine > > > > >>>> which > > > > >>>> sits may sit the node cluster. > > > > >>>> > > > > >>>> With this feature, a cluster administrator may grant the > > > > >>>> "migration" > > > > >>>> role to one of the cluster networks. Engine would use that > > > > >>>> network's IP > > > > >>>> address on the destination host when it requests a > > > > >>>> migration > > > > >>>> of > > > > >>>> a > > > > >>>> VM. > > > > >>>> With proper network setup, migration data would be > > > > >>>> separated > > > > >>>> to > > > > >>>> that > > > > >>>> network. 
> > > > >>>> > > > > >>>> === Benefit to oVirt === > > > > >>>> * Users would be able to define and dedicate a separate > > > > >>>> network > > > > >>>> for > > > > >>>> migration. Users that need quick migration would use > > > > >>>> nics > > > > >>>> with > > > > >>>> high > > > > >>>> bandwidth. Users who want to cap the bandwidth > > > > >>>> consumed by > > > > >>>> migration > > > > >>>> could define a migration network over nics with > > > > >>>> bandwidth > > > > >>>> limitation. > > > > >>>> * Migration data can be limited to a separate network, > > > > >>>> that > > > > >>>> has > > > > >>>> no > > > > >>>> layer-2 access from Engine > > > > >>>> > > > > >>>> === Vdsm === > > > > >>>> The "migrate" verb should be extended with an additional > > > > >>>> parameter, > > > > >>>> specifying the address that the remote qemu process should > > > > >>>> listen > > > > >>>> on. A > > > > >>>> new argument is to be added to the currently-defined > > > > >>>> migration > > > > >>>> arguments: > > > > >>>> * vmId: UUID > > > > >>>> * dst: management address of destination host > > > > >>>> * dstparams: hibernation volumes definition > > > > >>>> * mode: migration/hibernation > > > > >>>> * method: rotten legacy > > > > >>>> * ''New'': migration uri, according to > > > > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > > > >>>> such as tcp:// > > > > >>>> > > > > >>>> === Engine === > > > > >>>> As usual, complexity lies here, and several changes are > > > > >>>> required: > > > > >>>> > > > > >>>> 1. Network definition. > > > > >>>> 1.1 A new network role - not unlike "display network" > > > > >>>> should > > > > >>>> be > > > > >>>> added.Only one migration network should be defined > > > > >>>> on a > > > > >>>> cluster. > > > > >> We are considering multiple display networks already, then > > > > >> why > > > > >> not > > > > >> the > > > > >> same for migration? > > > > > What is the motivation of having multiple migration networks? > > > > > Extending > > > > > the bandwidth (and thus, any network can be taken when > > > > > needed) or > > > > > data separation (and thus, a migration network should be > > > > > assigned > > > > > to > > > > > each VM in the cluster)? Or another morivation with > > > > > consequence? > > > > My suggestion is making the migration network role determined > > > > dynamically on each migrate. If we only define one migration > > > > network > > > > per cluster, > > > > the migration storm could happen to that network. It could > > > > cause > > > > some > > > > bad impact on VM applications. So I think engine could choose > > > > the > > > > network which > > > > has lower traffic load on migration, or leave the choice to > > > > user. > > > > > > Dynamic migration selection is indeed desirable but only from > > > migration networks - migration traffic is insecure so it's > > > undesirable to have it mixed with VM traffic unless permitted by > > > the > > > admin by marking this network as migration network. > > > > > > To clarify what I've meant in the previous response to Livnat - > > > When > > > I've said "...if the customer due to the unsymmetrical nature of > > > most bonding modes prefers to use muplitple networks for > > > migration > > > and will ask us to optimize migration across these..." > > > > > > But the dynamic selection should be based on SLA which the above > > > is > > > just part: > > > 1. Need to consider tenant traffic segregation rules = security > > > 2. 
SLA contracts > > We could devise a complex logic of assigning each Vm a pool of > applicable migration networks, where one of them is chosen by Engine > upon migration startup. > > I am, however, not at all sure that extending the migration bandwidth > by > means of multiple migration networks is worth the design hassle and > the > GUI noise. A simpler solution would be to build a single migration > network on top of a fat bond, tweaked by a fine-tuned SLA. > > > > > > > If you keep 2, migration storms mitigation is granted. But you > > > are > > > right that another feature required for #2 above is to control > > > the > > > migration bandwidth (BW) per migration. We had discussion in the > > > past for VDSM to do dynamic calculation based on f(Line Speed, > > > Max > > > Migration BW, Max allowed per VM, Free BW, number of migrating > > > machines) when starting migration. (I actually wanted to do so > > > years > > > ago, but never got to that - one of those things you always > > > postpone > > > to when you'll find the time). We did not think that the engine > > > should provide some, but coming to think of it, you are right and > > > it > > > makes sense. For SLA - Max per VM + Min guaranteed should be > > > provided by the engine to maintain SLA. And it's up to the engine > > > not to VMs with Min-Guaranteed x number of concurrent migrations > > > will exceed Max Migration BW. > > > > > > Dan this is way too much for initial implementation, but don't > > > you > > > think we should at least add place holders in the migration API? > > In my opinion this should wait for another feature. For each VM, I'd > like to see a means to define the SLA of each of its vNIC. When we > have > that, we should similarly define how much bandwidth does it have for > migration > > > > Maybe Doron can assist with the required verbs. > > > > > > (P.S., I don't want to alarm but we may need SLA parameters for > > > setupNetworks as well :) unless we want these as separate API > > > tough > > > it means more calls during set up) > > Exactly - when we have a migration network concept, and when we have > general network SLA defition, we could easily apply the latter on the > former. > Note that decision making is not in host level. > > > > > > > As with other resources the bare minimum are usually MIN capacity > > and > > MAX to avoid choking of other tenants / VMs. In this context we may > > need > > to consider other QoS elements (delays, etc) but indeed it can be > > an additional > > limitation on top of the basic one. 
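The f(Line Speed, Max Migration BW, Max allowed per VM, Free BW, number of migrating machines) mentioned above is never spelled out in this thread; the sketch below is only one plausible reading of it, with hypothetical names, units in Mbps and a made-up splitting policy:

def migration_bandwidth(line_speed, max_migration_bw, max_per_vm,
                        free_bw, concurrent_migrations):
    # Never hand out more than the admin reserved for migration as a whole,
    # nor more than the link (or its currently free share) can carry.
    pool = min(line_speed, max_migration_bw, free_bw)
    # Split that pool between the migrations running right now, and respect
    # the per-VM ceiling so other tenants are not choked.
    share = pool / max(1, concurrent_migrations)
    return min(share, max_per_vm)

# Example: 10G link, 4G reserved for migration, 1G cap per VM, 6G currently
# free, 3 concurrent migrations -> each migration is capped at 1000 Mbps.

Whether such a value is computed by Vdsm or handed down by Engine is exactly the open question in the exchange above.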
> > > From danken at redhat.com Thu Jan 10 12:00:57 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 10 Jan 2013 14:00:57 +0200 Subject: feature suggestion: migration network In-Reply-To: <50EE2B56.8020000@linux.vnet.ibm.com> References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com> <50EC3132.1030807@redhat.com> <50EE2B56.8020000@linux.vnet.ibm.com> Message-ID: <20130110120057.GJ26998@redhat.com> On Thu, Jan 10, 2013 at 10:45:42AM +0800, Mark Wu wrote: > On 01/08/2013 10:46 PM, Yaniv Kaul wrote: > >On 08/01/13 15:04, Dan Kenigsberg wrote: > >>There's talk about this for ages, so it's time to have proper discussion > >>and a feature page about it: let us have a "migration" network role, and > >>use such networks to carry migration data > >> > >>When Engine requests to migrate a VM from one node to another, the VM > >>state (Bios, IO devices, RAM) is transferred over a TCP/IP connection > >>that is opened from the source qemu process to the destination qemu. > >>Currently, destination qemu listens for the incoming connection on the > >>management IP address of the destination host. This has serious > >>downsides: a "migration storm" may choke the destination's management > >>interface; migration is plaintext and ovirtmgmt includes Engine which > >>sits may sit the node cluster. > >> > >>With this feature, a cluster administrator may grant the "migration" > >>role to one of the cluster networks. Engine would use that network's IP > >>address on the destination host when it requests a migration of a VM. > >>With proper network setup, migration data would be separated to that > >>network. > >> > >>=== Benefit to oVirt === > >>* Users would be able to define and dedicate a separate network for > >> migration. Users that need quick migration would use nics with high > >> bandwidth. Users who want to cap the bandwidth consumed by migration > >> could define a migration network over nics with bandwidth limitation. > >>* Migration data can be limited to a separate network, that has no > >> layer-2 access from Engine > >> > >>=== Vdsm === > >>The "migrate" verb should be extended with an additional parameter, > >>specifying the address that the remote qemu process should listen on. A > >>new argument is to be added to the currently-defined migration > >>arguments: > >>* vmId: UUID > >>* dst: management address of destination host > >>* dstparams: hibernation volumes definition > >>* mode: migration/hibernation > >>* method: rotten legacy > >>* ''New'': migration uri, according to > >>http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > >>such as tcp:// > >> > >>=== Engine === > >>As usual, complexity lies here, and several changes are required: > >> > >>1. Network definition. > >>1.1 A new network role - not unlike "display network" should be > >> added.Only one migration network should be defined on a cluster. > >>1.2 If none is defined, the legacy "use ovirtmgmt for migration" > >> behavior would apply. > >>1.3 A migration network is more likely to be a ''required'' network, but > >> a user may opt for non-required. He may face unpleasant > >>surprises if he > >> wants to migrate his machine, but no candidate host has the network > >> available. > >>1.4 The "migration" role can be granted or taken on-the-fly, when hosts > >> are active, as long as there are no currently-migrating VMs. > >> > >>2. 
Scheduler > >>2.1 when deciding which host should be used for automatic > >> migration, take into account the existence and availability of the > >> migration network on the destination host. > >>2.2 For manual migration, let user migrate a VM to a host with no > >> migration network - if the admin wants to keep jamming the > >> management network with migration traffic, let her. > >> > >>3. VdsBroker migration verb. > >>3.1 For the a modern cluster level, with migration network defined on > >> the destination host, an additional ''miguri'' parameter > >>should be added > >> to the "migrate" command > >> > >>_______________________________________________ > >>Arch mailing list > >>Arch at ovirt.org > >>http://lists.ovirt.org/mailman/listinfo/arch > > > >How is the authentication of the peers handled? Do we need a cert > >per each source/destination logical interface? > >Y. > In my understanding, using a separate migration network doesn't > change the current peers authentication. We still use the URI > ''qemu+tls://remoeHost/system' to connect the target libvirt service > if ssl enabled, and the remote host should be the ip address of > management interface. But we can choose other interfaces except the > manage interface to transport the migration data. We just change the > migrateURI, so the current authentication mechanism should still > work for this new feature. vdsm-vdsm and libvirt-libvirt communication is authenticated, but I am not sure at all that qemu-qemu communication is. After qemu is sprung up on the destination with -incoming : , anything with access to that address could hijack the process. Our migrateURI starts with "tcp://" with all the consequences of this. That a good reason to make sure has as limited access as possible. But maybe I'm wrong here, and libvir-list can show me the light. Dan. From danken at redhat.com Thu Jan 10 12:04:58 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 10 Jan 2013 14:04:58 +0200 Subject: feature suggestion: migration network In-Reply-To: <50EE2FA4.9000703@linux.vnet.ibm.com> References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com> <50EE2FA4.9000703@linux.vnet.ibm.com> Message-ID: <20130110120458.GK26998@redhat.com> On Thu, Jan 10, 2013 at 11:04:04AM +0800, Mark Wu wrote: > On 01/08/2013 09:04 PM, Dan Kenigsberg wrote: > >There's talk about this for ages, so it's time to have proper discussion > >and a feature page about it: let us have a "migration" network role, and > >use such networks to carry migration data > > > >When Engine requests to migrate a VM from one node to another, the VM > >state (Bios, IO devices, RAM) is transferred over a TCP/IP connection > >that is opened from the source qemu process to the destination qemu. > >Currently, destination qemu listens for the incoming connection on the > >management IP address of the destination host. This has serious > >downsides: a "migration storm" may choke the destination's management > >interface; migration is plaintext and ovirtmgmt includes Engine which > >sits may sit the node cluster. > > > >With this feature, a cluster administrator may grant the "migration" > >role to one of the cluster networks. Engine would use that network's IP > >address on the destination host when it requests a migration of a VM. > >With proper network setup, migration data would be separated to that > >network. > > > >=== Benefit to oVirt === > >* Users would be able to define and dedicate a separate network for > > migration. 
Users that need quick migration would use nics with high > > bandwidth. Users who want to cap the bandwidth consumed by migration > > could define a migration network over nics with bandwidth limitation. > >* Migration data can be limited to a separate network, that has no > > layer-2 access from Engine > > > >=== Vdsm === > >The "migrate" verb should be extended with an additional parameter, > >specifying the address that the remote qemu process should listen on. A > >new argument is to be added to the currently-defined migration > >arguments: > >* vmId: UUID > >* dst: management address of destination host > >* dstparams: hibernation volumes definition > >* mode: migration/hibernation > >* method: rotten legacy > >* ''New'': migration uri, according to http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 such as tcp:// > If we would like to resolve the migration storm, we also could add > the qemu migration bandwidth limit as a parameter for migrate verb. > Currently, we use it > as a static configuration on vdsm host. It's not flexible. Engine > could pass appropriate parameters according to the traffic load and > bandwidth of migration network. > It also could be specified by customer according to the priority > they suppose. Yes, we should be able to cap vm traffic on vNics and on migration. But as I've answered to Doron and Simon, I believe that bandwidth capping should be kept out of this specific feature: when we define it for VM networks, we should keep migration netowkr in mind, too. From iheim at redhat.com Thu Jan 10 12:14:53 2013 From: iheim at redhat.com (Itamar Heim) Date: Thu, 10 Jan 2013 14:14:53 +0200 Subject: oVirt - Infiniband Support / Setup In-Reply-To: References: Message-ID: <50EEB0BD.50300@redhat.com> On 01/10/2013 01:36 PM, Alexander Rydekull wrote: > Hello all, > > My name is Alexander and I'm somewhat involved with the infra-team, > trying to become a more active contributor. > > I have a friend of mine who was prodding me and asking if oVirt would > benefit from some infiniband equipment to further development on those > parts. > > What he said was, that he could sort out servers / switches / adapters > for the project to use and develop on. And he asked if oVirt would make > any use of it? > > So basically, that's my question to you. Is this an area of interest? > Would you want me to try and see if we can get a infiniband-setup for > oVirt done? > > -- > /Alexander Rydekull > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > I think there are some active members from mellanox pushing patches for better infiniband integration. would be great if these modes can be tested by him? 
Thanks, Itamar From simon at redhat.com Thu Jan 10 12:54:25 2013 From: simon at redhat.com (Simon Grinberg) Date: Thu, 10 Jan 2013 07:54:25 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <20130110114608.GI26998@redhat.com> Message-ID: <29660868.4849.1357822391533.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Dan Kenigsberg" > To: "Doron Fediuck" > Cc: "Simon Grinberg" , "Orit Wasserman" , "Laine Stump" , > "Yuval M" , "Limor Gavish" , arch at ovirt.org, "Mark Wu" > > Sent: Thursday, January 10, 2013 1:46:08 PM > Subject: Re: feature suggestion: migration network > > On Thu, Jan 10, 2013 at 04:43:45AM -0500, Doron Fediuck wrote: > > > > > > ----- Original Message ----- > > > From: "Simon Grinberg" > > > To: "Mark Wu" , "Doron Fediuck" > > > > > > Cc: "Orit Wasserman" , "Laine Stump" > > > , "Yuval M" , "Limor > > > Gavish" , arch at ovirt.org, "Dan Kenigsberg" > > > > > > Sent: Thursday, January 10, 2013 10:38:56 AM > > > Subject: Re: feature suggestion: migration network > > > > > > > > > > > > ----- Original Message ----- > > > > From: "Mark Wu" > > > > To: "Dan Kenigsberg" > > > > Cc: "Simon Grinberg" , "Orit Wasserman" > > > > , "Laine Stump" , > > > > "Yuval M" , "Limor Gavish" > > > > , > > > > arch at ovirt.org > > > > Sent: Thursday, January 10, 2013 5:13:23 AM > > > > Subject: Re: feature suggestion: migration network > > > > > > > > On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > > > > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg > > > > > wrote: > > > > >> > > > > >> ----- Original Message ----- > > > > >>> From: "Yaniv Kaul" > > > > >>> To: "Dan Kenigsberg" > > > > >>> Cc: "Limor Gavish" , "Yuval M" > > > > >>> , arch at ovirt.org, "Simon Grinberg" > > > > >>> > > > > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > > > > >>> Subject: Re: feature suggestion: migration network > > > > >>> > > > > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > > > > >>>> There's talk about this for ages, so it's time to have > > > > >>>> proper > > > > >>>> discussion > > > > >>>> and a feature page about it: let us have a "migration" > > > > >>>> network > > > > >>>> role, and > > > > >>>> use such networks to carry migration data > > > > >>>> > > > > >>>> When Engine requests to migrate a VM from one node to > > > > >>>> another, > > > > >>>> the > > > > >>>> VM > > > > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > > > > >>>> connection > > > > >>>> that is opened from the source qemu process to the > > > > >>>> destination > > > > >>>> qemu. > > > > >>>> Currently, destination qemu listens for the incoming > > > > >>>> connection > > > > >>>> on > > > > >>>> the > > > > >>>> management IP address of the destination host. This has > > > > >>>> serious > > > > >>>> downsides: a "migration storm" may choke the destination's > > > > >>>> management > > > > >>>> interface; migration is plaintext and ovirtmgmt includes > > > > >>>> Engine > > > > >>>> which > > > > >>>> sits may sit the node cluster. > > > > >>>> > > > > >>>> With this feature, a cluster administrator may grant the > > > > >>>> "migration" > > > > >>>> role to one of the cluster networks. Engine would use that > > > > >>>> network's IP > > > > >>>> address on the destination host when it requests a > > > > >>>> migration > > > > >>>> of > > > > >>>> a > > > > >>>> VM. > > > > >>>> With proper network setup, migration data would be > > > > >>>> separated > > > > >>>> to > > > > >>>> that > > > > >>>> network. 
> > > > >>>> > > > > >>>> === Benefit to oVirt === > > > > >>>> * Users would be able to define and dedicate a separate > > > > >>>> network > > > > >>>> for > > > > >>>> migration. Users that need quick migration would use > > > > >>>> nics > > > > >>>> with > > > > >>>> high > > > > >>>> bandwidth. Users who want to cap the bandwidth > > > > >>>> consumed by > > > > >>>> migration > > > > >>>> could define a migration network over nics with > > > > >>>> bandwidth > > > > >>>> limitation. > > > > >>>> * Migration data can be limited to a separate network, > > > > >>>> that > > > > >>>> has > > > > >>>> no > > > > >>>> layer-2 access from Engine > > > > >>>> > > > > >>>> === Vdsm === > > > > >>>> The "migrate" verb should be extended with an additional > > > > >>>> parameter, > > > > >>>> specifying the address that the remote qemu process should > > > > >>>> listen > > > > >>>> on. A > > > > >>>> new argument is to be added to the currently-defined > > > > >>>> migration > > > > >>>> arguments: > > > > >>>> * vmId: UUID > > > > >>>> * dst: management address of destination host > > > > >>>> * dstparams: hibernation volumes definition > > > > >>>> * mode: migration/hibernation > > > > >>>> * method: rotten legacy > > > > >>>> * ''New'': migration uri, according to > > > > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > > > >>>> such as tcp:// > > > > >>>> > > > > >>>> === Engine === > > > > >>>> As usual, complexity lies here, and several changes are > > > > >>>> required: > > > > >>>> > > > > >>>> 1. Network definition. > > > > >>>> 1.1 A new network role - not unlike "display network" > > > > >>>> should > > > > >>>> be > > > > >>>> added.Only one migration network should be defined > > > > >>>> on a > > > > >>>> cluster. > > > > >> We are considering multiple display networks already, then > > > > >> why > > > > >> not > > > > >> the > > > > >> same for migration? > > > > > What is the motivation of having multiple migration networks? > > > > > Extending > > > > > the bandwidth (and thus, any network can be taken when > > > > > needed) or > > > > > data separation (and thus, a migration network should be > > > > > assigned > > > > > to > > > > > each VM in the cluster)? Or another morivation with > > > > > consequence? > > > > My suggestion is making the migration network role determined > > > > dynamically on each migrate. If we only define one migration > > > > network > > > > per cluster, > > > > the migration storm could happen to that network. It could > > > > cause > > > > some > > > > bad impact on VM applications. So I think engine could choose > > > > the > > > > network which > > > > has lower traffic load on migration, or leave the choice to > > > > user. > > > > > > Dynamic migration selection is indeed desirable but only from > > > migration networks - migration traffic is insecure so it's > > > undesirable to have it mixed with VM traffic unless permitted by > > > the > > > admin by marking this network as migration network. > > > > > > To clarify what I've meant in the previous response to Livnat - > > > When > > > I've said "...if the customer due to the unsymmetrical nature of > > > most bonding modes prefers to use muplitple networks for > > > migration > > > and will ask us to optimize migration across these..." > > > > > > But the dynamic selection should be based on SLA which the above > > > is > > > just part: > > > 1. Need to consider tenant traffic segregation rules = security > > > 2. 
SLA contracts > > We could devise a complex logic of assigning each Vm a pool of > applicable migration networks, where one of them is chosen by Engine > upon migration startup. > > I am, however, not at all sure that extending the migration bandwidth > by > means of multiple migration networks is worth the design hassle and > the > GUI noise. A simpler solution would be to build a single migration > network on top of a fat bond, tweaked by a fine-tuned SLA. Except for mod-4 most bonding modes are either optimized for outbound optimization or inbound - not both. It's far from optimal. And you are forgetting the other reason I've raised, like isolation of tenants traffic and not just from SLA reasons. Even from pure active - active redundancy you may want to have more then one or asymmetrical hosts Example. We have a host with 3 nics - you dedicate each for management, migration, storage - respectively. But if the migration fails, you want the engagement network to become your migration (automatically) Another: A large host with many nics and smaller host with less - as long as this a rout between the migration and management networks you could think on a scenario where on the larger host you have separate networks for each role while on the smaller you have a single network assuming both rolls. Other examples can be found. It's really not just one reason to support more then one migration network or display network or storage or any other 'facility' network. Any facility network may apply for more then one on a cluster. > > > > > > > If you keep 2, migration storms mitigation is granted. But you > > > are > > > right that another feature required for #2 above is to control > > > the > > > migration bandwidth (BW) per migration. We had discussion in the > > > past for VDSM to do dynamic calculation based on f(Line Speed, > > > Max > > > Migration BW, Max allowed per VM, Free BW, number of migrating > > > machines) when starting migration. (I actually wanted to do so > > > years > > > ago, but never got to that - one of those things you always > > > postpone > > > to when you'll find the time). We did not think that the engine > > > should provide some, but coming to think of it, you are right and > > > it > > > makes sense. For SLA - Max per VM + Min guaranteed should be > > > provided by the engine to maintain SLA. And it's up to the engine > > > not to VMs with Min-Guaranteed x number of concurrent migrations > > > will exceed Max Migration BW. > > > > > > Dan this is way too much for initial implementation, but don't > > > you > > > think we should at least add place holders in the migration API? > > In my opinion this should wait for another feature. For each VM, I'd > like to see a means to define the SLA of each of its vNIC. When we > have > that, we should similarly define how much bandwidth does it have for > migration > > > > Maybe Doron can assist with the required verbs. > > > > > > (P.S., I don't want to alarm but we may need SLA parameters for > > > setupNetworks as well :) unless we want these as separate API > > > tough > > > it means more calls during set up) > > Exactly - when we have a migration network concept, and when we have > general network SLA defition, we could easily apply the latter on the > former. > > > > > > > > As with other resources the bare minimum are usually MIN capacity > > and > > MAX to avoid choking of other tenants / VMs. 
In this context we may > > need > > to consider other QoS elements (delays, etc) but indeed it can be > > an additional > > limitation on top of the basic one. > > > From danken at redhat.com Thu Jan 10 12:57:32 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 10 Jan 2013 14:57:32 +0200 Subject: feature suggestion: migration network In-Reply-To: <1054764838.2630317.1357819145118.JavaMail.root@redhat.com> References: <20130110114608.GI26998@redhat.com> <1054764838.2630317.1357819145118.JavaMail.root@redhat.com> Message-ID: <20130110125732.GN26998@redhat.com> On Thu, Jan 10, 2013 at 06:59:05AM -0500, Doron Fediuck wrote: > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > To: "Doron Fediuck" > > Cc: "Simon Grinberg" , "Orit Wasserman" , "Laine Stump" , > > "Yuval M" , "Limor Gavish" , arch at ovirt.org, "Mark Wu" > > > > Sent: Thursday, January 10, 2013 1:46:08 PM > > Subject: Re: feature suggestion: migration network > > > > On Thu, Jan 10, 2013 at 04:43:45AM -0500, Doron Fediuck wrote: > > > > > > > > > ----- Original Message ----- > > > > From: "Simon Grinberg" > > > > To: "Mark Wu" , "Doron Fediuck" > > > > > > > > Cc: "Orit Wasserman" , "Laine Stump" > > > > , "Yuval M" , "Limor > > > > Gavish" , arch at ovirt.org, "Dan Kenigsberg" > > > > > > > > Sent: Thursday, January 10, 2013 10:38:56 AM > > > > Subject: Re: feature suggestion: migration network > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Mark Wu" > > > > > To: "Dan Kenigsberg" > > > > > Cc: "Simon Grinberg" , "Orit Wasserman" > > > > > , "Laine Stump" , > > > > > "Yuval M" , "Limor Gavish" > > > > > , > > > > > arch at ovirt.org > > > > > Sent: Thursday, January 10, 2013 5:13:23 AM > > > > > Subject: Re: feature suggestion: migration network > > > > > > > > > > On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > > > > > > On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg > > > > > > wrote: > > > > > >> > > > > > >> ----- Original Message ----- > > > > > >>> From: "Yaniv Kaul" > > > > > >>> To: "Dan Kenigsberg" > > > > > >>> Cc: "Limor Gavish" , "Yuval M" > > > > > >>> , arch at ovirt.org, "Simon Grinberg" > > > > > >>> > > > > > >>> Sent: Tuesday, January 8, 2013 4:46:10 PM > > > > > >>> Subject: Re: feature suggestion: migration network > > > > > >>> > > > > > >>> On 08/01/13 15:04, Dan Kenigsberg wrote: > > > > > >>>> There's talk about this for ages, so it's time to have > > > > > >>>> proper > > > > > >>>> discussion > > > > > >>>> and a feature page about it: let us have a "migration" > > > > > >>>> network > > > > > >>>> role, and > > > > > >>>> use such networks to carry migration data > > > > > >>>> > > > > > >>>> When Engine requests to migrate a VM from one node to > > > > > >>>> another, > > > > > >>>> the > > > > > >>>> VM > > > > > >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > > > > > >>>> connection > > > > > >>>> that is opened from the source qemu process to the > > > > > >>>> destination > > > > > >>>> qemu. > > > > > >>>> Currently, destination qemu listens for the incoming > > > > > >>>> connection > > > > > >>>> on > > > > > >>>> the > > > > > >>>> management IP address of the destination host. This has > > > > > >>>> serious > > > > > >>>> downsides: a "migration storm" may choke the destination's > > > > > >>>> management > > > > > >>>> interface; migration is plaintext and ovirtmgmt includes > > > > > >>>> Engine > > > > > >>>> which > > > > > >>>> sits may sit the node cluster. 
> > > > > >>>> > > > > > >>>> With this feature, a cluster administrator may grant the > > > > > >>>> "migration" > > > > > >>>> role to one of the cluster networks. Engine would use that > > > > > >>>> network's IP > > > > > >>>> address on the destination host when it requests a > > > > > >>>> migration > > > > > >>>> of > > > > > >>>> a > > > > > >>>> VM. > > > > > >>>> With proper network setup, migration data would be > > > > > >>>> separated > > > > > >>>> to > > > > > >>>> that > > > > > >>>> network. > > > > > >>>> > > > > > >>>> === Benefit to oVirt === > > > > > >>>> * Users would be able to define and dedicate a separate > > > > > >>>> network > > > > > >>>> for > > > > > >>>> migration. Users that need quick migration would use > > > > > >>>> nics > > > > > >>>> with > > > > > >>>> high > > > > > >>>> bandwidth. Users who want to cap the bandwidth > > > > > >>>> consumed by > > > > > >>>> migration > > > > > >>>> could define a migration network over nics with > > > > > >>>> bandwidth > > > > > >>>> limitation. > > > > > >>>> * Migration data can be limited to a separate network, > > > > > >>>> that > > > > > >>>> has > > > > > >>>> no > > > > > >>>> layer-2 access from Engine > > > > > >>>> > > > > > >>>> === Vdsm === > > > > > >>>> The "migrate" verb should be extended with an additional > > > > > >>>> parameter, > > > > > >>>> specifying the address that the remote qemu process should > > > > > >>>> listen > > > > > >>>> on. A > > > > > >>>> new argument is to be added to the currently-defined > > > > > >>>> migration > > > > > >>>> arguments: > > > > > >>>> * vmId: UUID > > > > > >>>> * dst: management address of destination host > > > > > >>>> * dstparams: hibernation volumes definition > > > > > >>>> * mode: migration/hibernation > > > > > >>>> * method: rotten legacy > > > > > >>>> * ''New'': migration uri, according to > > > > > >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > > > > > >>>> such as tcp:// > > > > > >>>> > > > > > >>>> === Engine === > > > > > >>>> As usual, complexity lies here, and several changes are > > > > > >>>> required: > > > > > >>>> > > > > > >>>> 1. Network definition. > > > > > >>>> 1.1 A new network role - not unlike "display network" > > > > > >>>> should > > > > > >>>> be > > > > > >>>> added.Only one migration network should be defined > > > > > >>>> on a > > > > > >>>> cluster. > > > > > >> We are considering multiple display networks already, then > > > > > >> why > > > > > >> not > > > > > >> the > > > > > >> same for migration? > > > > > > What is the motivation of having multiple migration networks? > > > > > > Extending > > > > > > the bandwidth (and thus, any network can be taken when > > > > > > needed) or > > > > > > data separation (and thus, a migration network should be > > > > > > assigned > > > > > > to > > > > > > each VM in the cluster)? Or another morivation with > > > > > > consequence? > > > > > My suggestion is making the migration network role determined > > > > > dynamically on each migrate. If we only define one migration > > > > > network > > > > > per cluster, > > > > > the migration storm could happen to that network. It could > > > > > cause > > > > > some > > > > > bad impact on VM applications. So I think engine could choose > > > > > the > > > > > network which > > > > > has lower traffic load on migration, or leave the choice to > > > > > user. 
> > > > > > > > Dynamic migration selection is indeed desirable but only from > > > > migration networks - migration traffic is insecure so it's > > > > undesirable to have it mixed with VM traffic unless permitted by > > > > the > > > > admin by marking this network as migration network. > > > > > > > > To clarify what I've meant in the previous response to Livnat - > > > > When > > > > I've said "...if the customer due to the unsymmetrical nature of > > > > most bonding modes prefers to use muplitple networks for > > > > migration > > > > and will ask us to optimize migration across these..." > > > > > > > > But the dynamic selection should be based on SLA which the above > > > > is > > > > just part: > > > > 1. Need to consider tenant traffic segregation rules = security > > > > 2. SLA contracts > > > > We could devise a complex logic of assigning each Vm a pool of > > applicable migration networks, where one of them is chosen by Engine > > upon migration startup. > > > > I am, however, not at all sure that extending the migration bandwidth > > by > > means of multiple migration networks is worth the design hassle and > > the > > GUI noise. A simpler solution would be to build a single migration > > network on top of a fat bond, tweaked by a fine-tuned SLA. > > > > > > > > > > If you keep 2, migration storms mitigation is granted. But you > > > > are > > > > right that another feature required for #2 above is to control > > > > the > > > > migration bandwidth (BW) per migration. We had discussion in the > > > > past for VDSM to do dynamic calculation based on f(Line Speed, > > > > Max > > > > Migration BW, Max allowed per VM, Free BW, number of migrating > > > > machines) when starting migration. (I actually wanted to do so > > > > years > > > > ago, but never got to that - one of those things you always > > > > postpone > > > > to when you'll find the time). We did not think that the engine > > > > should provide some, but coming to think of it, you are right and > > > > it > > > > makes sense. For SLA - Max per VM + Min guaranteed should be > > > > provided by the engine to maintain SLA. And it's up to the engine > > > > not to VMs with Min-Guaranteed x number of concurrent migrations > > > > will exceed Max Migration BW. > > > > > > > > Dan this is way too much for initial implementation, but don't > > > > you > > > > think we should at least add place holders in the migration API? > > > > In my opinion this should wait for another feature. For each VM, I'd > > like to see a means to define the SLA of each of its vNIC. When we > > have > > that, we should similarly define how much bandwidth does it have for > > migration > > > > > > Maybe Doron can assist with the required verbs. > > > > > > > > (P.S., I don't want to alarm but we may need SLA parameters for > > > > setupNetworks as well :) unless we want these as separate API > > > > tough > > > > it means more calls during set up) > > > > Exactly - when we have a migration network concept, and when we have > > general network SLA defition, we could easily apply the latter on the > > former. > > > > Note that decision making is not in host level. Ok, drop the "easily" from my former message. 
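For the scheduler part of the feature (points 2.1 and 2.2 of the quoted text), the decision sits in Engine, which is written in Java; the Python snippet below is only an illustration of the intended filtering rule, with invented names:

def candidate_hosts(hosts, migration_network, manual=False):
    # 'hosts' is an iterable of objects carrying a .networks set of network names.
    if migration_network is None or manual:
        # No migration network defined (legacy behaviour), or a manual migration
        # where the admin may keep jamming ovirtmgmt with migration traffic.
        return list(hosts)
    # Automatic migration: only consider hosts on which the migration network
    # exists and is available.
    return [h for h in hosts if migration_network in h.networks]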
From alonbl at redhat.com Thu Jan 10 13:11:58 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 10 Jan 2013 08:11:58 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1357780078.2865.26.camel@beelzebub.mburnsfire.net> Message-ID: <1902281664.969057.1357823518752.JavaMail.root@redhat.com> Hello Mike and all, I am little confused with the release engineering of oVirt. Let me describe what I know about the release engineering of an upstream project, and please tell how we are different and why. 1. Downstream schedules are not relevant to upstream project and vise versa. 2. Upstream project releases its sources and optionally binaries in milestones. 3. Milestone are determine by upstream project and upstream project, and has several standard, for example: package-2.0.0_alpha package-2.0.0_alpha1 package-2.0.0_beta package-2.0.0_beta1 package-2.0.0_beta2 package-2.0.0_rc package-2.0.0_rc1 package-2.0.0_rc2 package-2.0.0_rc3 package-2.0.0_rc4 package-2.0.0 Or: 1.99.1 1.99.2 1.99.3 1.99.4 2.0.0 4. Binaries are built over the *source tarball* released at the milestones. The important artifact is the source tarball, it is the source of all good and evil. 5. Downstream may adopt / modify / re-write packaging but it will use the release source tarball at their choice of milestone, probably it won't adopt pre-release version. In case of oVirt, I do understand the the release schedule is tight between fedora and oVirt as it is the only supported distribution. However, I do expect that the sources will have similar to the above cycle, and the formal build will be produced out of the sources. Alon ----- Original Message ----- > From: "Mike Burns" > To: "arch" > Cc: "engine-devel" , "vdsm-devel" , "node-devel" > > Sent: Thursday, January 10, 2013 3:07:58 AM > Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > (Sorry for cross posting, trying to ensure I hit all the relevant > maintainers) > > If you are the primary maintainer of a sub-project in oVirt, this > message is for you. > > At the Weekly oVirt Meeting, the final devel freeze and beta dates > were > decided. > > Freeze: 2013-01-14 > Beta Post: 2013-01-15 > > Action items: > > * You project should create a new branch in gerrit for the release > * You should create a formal build of your project for the beta post > * Get the formal build of your project into the hands of someone who > can > post it [1][2] > > These should all be done by EOD on 2013-01-14 (with the exception of > ovirt-node-iso) [3] > > Packages that this impacts: > > * mom > * otopi > * ovirt-engine > * ovirt-engine-cli > * ovirt-engine-sdk > * ovirt-guest-agent > * ovirt-host-deploy > * ovirt-image-uploader > * ovirt-iso-uploader > * ovirt-log-collector > * ovirt-node > * ovirt-node-iso > * vdsm > > Thanks > > Mike Burns > > [1] This is only necessary if the package is *not* already in fedora > repos (must be in actual fedora repos, not just updates-testing or > koji) > [2] Communicate with mburns, mgoldboi, oschreib to deliver the > packages > [3] ovirt-node-iso requires some of the other packages to be > available > prior to creating the image. This image will be created either on > 2013-01-14 or 2013-01-15 and posted along with the rest of the Beta. 
> > _______________________________________________ > vdsm-devel mailing list > vdsm-devel at lists.fedorahosted.org > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel > From danken at redhat.com Thu Jan 10 13:38:34 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 10 Jan 2013 15:38:34 +0200 Subject: tunnelled migration Message-ID: <20130110133834.GO26998@redhat.com> For a long long time, libvirt supports a VIR_MIGRATE_TUNNELLED migration mode. In it, the qemu-to-qemu communication carrying private guest data, is tunnelled within libvirt-to-libvirt connection. libvirt-to-libvirt communication is (usually) well-encrypted and uses a known firewall hole. On the downside, multiplexing qemu migration traffic and encrypting it carries a heavy burdain on libvirtds and the hosts' cpu. Choosing tunnelled migration is thus a matter of policy. I would like to suggest a new cluster-level configurable in Engine, that controls whether migrations in this cluster are tunnelled. The configurable must be available only in new cluster levels where hosts support it. The cluster-level configurable should be overridable by a VM-level one. An admin may have a critical VM whose data should not migrate around in the plaintext. When Engine decides (or asked) to perform migration, it would pass a new "tunnlled" boolean value to the "migrate" verb. Vdsm patch in these lines is posted to http://gerrit.ovirt.org/2551 I believe it's pretty easy to do it in Engine, too, and that it would enhance the security of our users. Dan. From acathrow at redhat.com Thu Jan 10 14:06:35 2013 From: acathrow at redhat.com (Andrew Cathrow) Date: Thu, 10 Jan 2013 09:06:35 -0500 (EST) Subject: tunnelled migration In-Reply-To: <20130110133834.GO26998@redhat.com> Message-ID: <2006847430.2704939.1357826795071.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Dan Kenigsberg" > To: arch at ovirt.org > Cc: "Michal Skrivanek" > Sent: Thursday, January 10, 2013 8:38:34 AM > Subject: tunnelled migration > > For a long long time, libvirt supports a VIR_MIGRATE_TUNNELLED > migration > mode. In it, the qemu-to-qemu communication carrying private guest > data, > is tunnelled within libvirt-to-libvirt connection. > > libvirt-to-libvirt communication is (usually) well-encrypted and uses > a > known firewall hole. On the downside, multiplexing qemu migration > traffic and encrypting it carries a heavy burdain on libvirtds and > the > hosts' cpu. > > Choosing tunnelled migration is thus a matter of policy. I would like > to > suggest a new cluster-level configurable in Engine, that controls > whether > migrations in this cluster are tunnelled. The configurable must be > available only in new cluster levels where hosts support it. > > The cluster-level configurable should be overridable by a VM-level > one. > An admin may have a critical VM whose data should not migrate around > in > the plaintext. > > When Engine decides (or asked) to perform migration, it would pass a > new > "tunnlled" boolean value to the "migrate" verb. Vdsm patch in these > lines is posted to http://gerrit.ovirt.org/2551 > > I believe it's pretty easy to do it in Engine, too, and that it would > enhance the security of our users. It should be disabled by default given the significant overhead. > > Dan. 
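For reference, the knob Dan describes maps to a single libvirt flag; here is a rough sketch (not the patch under review at http://gerrit.ovirt.org/2551) of how a per-cluster or per-VM "tunnelled" boolean could translate into migration flags on the Vdsm side:

import libvirt

def migration_flags(tunnelled=False):
    # Tunnelled migration requires peer-to-peer mode in libvirt.
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
    if tunnelled:
        # Carry the qemu-to-qemu stream inside the (usually TLS-encrypted)
        # libvirt-to-libvirt connection, at the cost of extra CPU on both hosts.
        flags |= libvirt.VIR_MIGRATE_TUNNELLED
    return flags

Keeping the flag off by default, as suggested, avoids paying that CPU cost where the migration network is already trusted.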
> _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From ofrenkel at redhat.com Thu Jan 10 14:46:28 2013 From: ofrenkel at redhat.com (Omer Frenkel) Date: Thu, 10 Jan 2013 09:46:28 -0500 (EST) Subject: tunnelled migration In-Reply-To: <2006847430.2704939.1357826795071.JavaMail.root@redhat.com> Message-ID: <1580151849.2731221.1357829188466.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Andrew Cathrow" > To: "Dan Kenigsberg" > Cc: arch at ovirt.org, "Michal Skrivanek" > Sent: Thursday, January 10, 2013 4:06:35 PM > Subject: Re: tunnelled migration > > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > To: arch at ovirt.org > > Cc: "Michal Skrivanek" > > Sent: Thursday, January 10, 2013 8:38:34 AM > > Subject: tunnelled migration > > > > For a long long time, libvirt supports a VIR_MIGRATE_TUNNELLED > > migration > > mode. In it, the qemu-to-qemu communication carrying private guest > > data, > > is tunnelled within libvirt-to-libvirt connection. > > > > libvirt-to-libvirt communication is (usually) well-encrypted and > > uses > > a > > known firewall hole. On the downside, multiplexing qemu migration > > traffic and encrypting it carries a heavy burdain on libvirtds and > > the > > hosts' cpu. > > > > Choosing tunnelled migration is thus a matter of policy. I would > > like > > to > > suggest a new cluster-level configurable in Engine, that controls > > whether > > migrations in this cluster are tunnelled. The configurable must be > > available only in new cluster levels where hosts support it. > > > > The cluster-level configurable should be overridable by a VM-level > > one. > > An admin may have a critical VM whose data should not migrate > > around > > in > > the plaintext. > > > > When Engine decides (or asked) to perform migration, it would pass > > a > > new > > "tunnlled" boolean value to the "migrate" verb. Vdsm patch in these > > lines is posted to http://gerrit.ovirt.org/2551 > > > > I believe it's pretty easy to do it in Engine, too, and that it > > would > > enhance the security of our users. > > It should be disabled by default given the significant overhead. > Agree, this really sound like an easy enhancement (and important), we can have this flag on the cluster as you say (default - false) and save for each vm the "migration tunnel policy" (?) if it's: cluster default, tunnelled or not tunnelled and pass it on migration. need to decide how it will look (named) in api and ui > > > > Dan. > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From alonbl at redhat.com Thu Jan 10 15:07:53 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 10 Jan 2013 10:07:53 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1902281664.969057.1357823518752.JavaMail.root@redhat.com> Message-ID: <1926337210.1018028.1357830473680.JavaMail.root@redhat.com> Hello All, Just to clarify some terms.... as I got some feedback of confusion. oVirt is an UPSTREAM project, it is the source origin (manufacturer). Fedora is a DOWNSTREAM to oVirt, it is a distribution that provides the oVirt product. Red Hat Enterprise Linux is DOWNSTREAM to oVirt, it is a distribution that provides the oVirt product. 
Debian is, well, you got the point. Fedora (as distribution) is in some sense UPSTREAM to Red Hat Enterprise Linux, however, this relationship has no impact of oVirt. Regards, Alon Bar-Lev ----- Original Message ----- > From: "Alon Bar-Lev" > To: "Mike Burns" > Cc: "arch" > Sent: Thursday, January 10, 2013 3:11:58 PM > Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > Hello Mike and all, > > I am little confused with the release engineering of oVirt. > > Let me describe what I know about the release engineering of an > upstream project, and please tell how we are different and why. > > 1. Downstream schedules are not relevant to upstream project and vise > versa. > > 2. Upstream project releases its sources and optionally binaries in > milestones. > > 3. Milestone are determine by upstream project and upstream project, > and has several standard, for example: > > package-2.0.0_alpha > package-2.0.0_alpha1 > package-2.0.0_beta > package-2.0.0_beta1 > package-2.0.0_beta2 > package-2.0.0_rc > package-2.0.0_rc1 > package-2.0.0_rc2 > package-2.0.0_rc3 > package-2.0.0_rc4 > package-2.0.0 > > Or: > > 1.99.1 > 1.99.2 > 1.99.3 > 1.99.4 > 2.0.0 > > 4. Binaries are built over the *source tarball* released at the > milestones. The important artifact is the source tarball, it is the > source of all good and evil. > > 5. Downstream may adopt / modify / re-write packaging but it will use > the release source tarball at their choice of milestone, probably it > won't adopt pre-release version. > > In case of oVirt, I do understand the the release schedule is tight > between fedora and oVirt as it is the only supported distribution. > However, I do expect that the sources will have similar to the above > cycle, and the formal build will be produced out of the sources. > > Alon > > ----- Original Message ----- > > From: "Mike Burns" > > To: "arch" > > Cc: "engine-devel" , "vdsm-devel" > > , "node-devel" > > > > Sent: Thursday, January 10, 2013 3:07:58 AM > > Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta > > Build deadlines > > > > (Sorry for cross posting, trying to ensure I hit all the relevant > > maintainers) > > > > If you are the primary maintainer of a sub-project in oVirt, this > > message is for you. > > > > At the Weekly oVirt Meeting, the final devel freeze and beta dates > > were > > decided. > > > > Freeze: 2013-01-14 > > Beta Post: 2013-01-15 > > > > Action items: > > > > * You project should create a new branch in gerrit for the release > > * You should create a formal build of your project for the beta > > post > > * Get the formal build of your project into the hands of someone > > who > > can > > post it [1][2] > > > > These should all be done by EOD on 2013-01-14 (with the exception > > of > > ovirt-node-iso) [3] > > > > Packages that this impacts: > > > > * mom > > * otopi > > * ovirt-engine > > * ovirt-engine-cli > > * ovirt-engine-sdk > > * ovirt-guest-agent > > * ovirt-host-deploy > > * ovirt-image-uploader > > * ovirt-iso-uploader > > * ovirt-log-collector > > * ovirt-node > > * ovirt-node-iso > > * vdsm > > > > Thanks > > > > Mike Burns > > > > [1] This is only necessary if the package is *not* already in > > fedora > > repos (must be in actual fedora repos, not just updates-testing or > > koji) > > [2] Communicate with mburns, mgoldboi, oschreib to deliver the > > packages > > [3] ovirt-node-iso requires some of the other packages to be > > available > > prior to creating the image. 
This image will be created either on > > 2013-01-14 or 2013-01-15 and posted along with the rest of the > > Beta. > > > > _______________________________________________ > > vdsm-devel mailing list > > vdsm-devel at lists.fedorahosted.org > > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From dneary at redhat.com Thu Jan 10 15:25:08 2013 From: dneary at redhat.com (Dave Neary) Date: Thu, 10 Jan 2013 16:25:08 +0100 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1926337210.1018028.1357830473680.JavaMail.root@redhat.com> References: <1926337210.1018028.1357830473680.JavaMail.root@redhat.com> Message-ID: <50EEDD54.3050100@redhat.com> Hi Alon, On 01/10/2013 04:07 PM, Alon Bar-Lev wrote: > Just to clarify some terms.... as I got some feedback of confusion. > > oVirt is an UPSTREAM project, it is the source origin (manufacturer). > > Fedora is a DOWNSTREAM to oVirt, it is a distribution that provides the oVirt product. My understanding is that Fedora is very much an upstream of oVirt Node, in that it provides a component of what becomes oVirt Node. I would not call the OS downstream in general... downstream is "code flows there from somewhere else" - commercially sold products and services around integrated/patched/nicely packages open source code. Whereas distributions are aggregations of components. They're delivery mechanisms. Red Hat Enterprise Linux is a downstream of Fedora, and an upstream for CentOS in some sense, but I don't think it's a downstream of (say) MySQL. It's just a delivery channel. Cheers, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From alonbl at redhat.com Thu Jan 10 15:29:25 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Thu, 10 Jan 2013 10:29:25 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50EEDD54.3050100@redhat.com> Message-ID: <1094335910.1024980.1357831765327.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Dave Neary" > To: arch at ovirt.org > Sent: Thursday, January 10, 2013 5:25:08 PM > Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > Hi Alon, > > On 01/10/2013 04:07 PM, Alon Bar-Lev wrote: > > Just to clarify some terms.... as I got some feedback of confusion. > > > > oVirt is an UPSTREAM project, it is the source origin > > (manufacturer). > > > > Fedora is a DOWNSTREAM to oVirt, it is a distribution that provides > > the oVirt product. > > My understanding is that Fedora is very much an upstream of oVirt > Node, > in that it provides a component of what becomes oVirt Node. Let's leave oVirt node out of the discussion for now, it is somewhat exceptional. Thanks, Alon From Caitlin.Bestler at nexenta.com Thu Jan 10 20:14:52 2013 From: Caitlin.Bestler at nexenta.com (Caitlin Bestler) Date: Thu, 10 Jan 2013 20:14:52 +0000 Subject: tunnelled migration In-Reply-To: <20130110133834.GO26998@redhat.com> References: <20130110133834.GO26998@redhat.com> Message-ID: <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9A066B@AUSP01DAG0106> Dan Kenisberg wrote: > Choosing tunnelled migration is thus a matter of policy. I would like to suggest a new cluster-level configurable in Engine, > that controls whether migrations in this cluster are tunnelled. 
The configurable must be available only in new cluster levels > where hosts support it. Why not just dump this issue to network configuration? Migrations occur over a secure network. That security could be provided by port groups, VLANs or encrypted tunnels. From wudxw at linux.vnet.ibm.com Fri Jan 11 06:05:10 2013 From: wudxw at linux.vnet.ibm.com (Mark Wu) Date: Fri, 11 Jan 2013 14:05:10 +0800 Subject: tunnelled migration In-Reply-To: <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9A066B@AUSP01DAG0106> References: <20130110133834.GO26998@redhat.com> <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9A066B@AUSP01DAG0106> Message-ID: <50EFAB96.30109@linux.vnet.ibm.com> On 01/11/2013 04:14 AM, Caitlin Bestler wrote: > Dan Kenisberg wrote: > > >> Choosing tunnelled migration is thus a matter of policy. I would like to suggest a new cluster-level configurable in Engine, >> that controls whether migrations in this cluster are tunnelled. The configurable must be available only in new cluster levels >> where hosts support it. > Why not just dump this issue to network configuration? > > Migrations occur over a secure network. That security could be provided by port groups, VLANs or encrypted tunnels. Agreed. Is a separate vlan network not secure enough? If yes, we could build a virtual encrypted network, like using openvpn + iptables. > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From wudxw at linux.vnet.ibm.com Fri Jan 11 08:31:48 2013 From: wudxw at linux.vnet.ibm.com (Mark Wu) Date: Fri, 11 Jan 2013 16:31:48 +0800 Subject: feature suggestion: migration network In-Reply-To: <20130110120057.GJ26998@redhat.com> References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com> <50EC3132.1030807@redhat.com> <50EE2B56.8020000@linux.vnet.ibm.com> <20130110120057.GJ26998@redhat.com> Message-ID: <50EFCDF4.505@linux.vnet.ibm.com> On 01/10/2013 08:00 PM, Dan Kenigsberg wrote: > On Thu, Jan 10, 2013 at 10:45:42AM +0800, Mark Wu wrote: >> On 01/08/2013 10:46 PM, Yaniv Kaul wrote: >>> On 08/01/13 15:04, Dan Kenigsberg wrote: >>>> There's talk about this for ages, so it's time to have proper discussion >>>> and a feature page about it: let us have a "migration" network role, and >>>> use such networks to carry migration data >>>> >>>> When Engine requests to migrate a VM from one node to another, the VM >>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP connection >>>> that is opened from the source qemu process to the destination qemu. >>>> Currently, destination qemu listens for the incoming connection on the >>>> management IP address of the destination host. This has serious >>>> downsides: a "migration storm" may choke the destination's management >>>> interface; migration is plaintext and ovirtmgmt includes Engine which >>>> sits may sit the node cluster. >>>> >>>> With this feature, a cluster administrator may grant the "migration" >>>> role to one of the cluster networks. Engine would use that network's IP >>>> address on the destination host when it requests a migration of a VM. >>>> With proper network setup, migration data would be separated to that >>>> network. >>>> >>>> === Benefit to oVirt === >>>> * Users would be able to define and dedicate a separate network for >>>> migration. Users that need quick migration would use nics with high >>>> bandwidth. Users who want to cap the bandwidth consumed by migration >>>> could define a migration network over nics with bandwidth limitation. 
>>>> * Migration data can be limited to a separate network, that has no >>>> layer-2 access from Engine >>>> >>>> === Vdsm === >>>> The "migrate" verb should be extended with an additional parameter, >>>> specifying the address that the remote qemu process should listen on. A >>>> new argument is to be added to the currently-defined migration >>>> arguments: >>>> * vmId: UUID >>>> * dst: management address of destination host >>>> * dstparams: hibernation volumes definition >>>> * mode: migration/hibernation >>>> * method: rotten legacy >>>> * ''New'': migration uri, according to >>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 >>>> such as tcp:// >>>> >>>> === Engine === >>>> As usual, complexity lies here, and several changes are required: >>>> >>>> 1. Network definition. >>>> 1.1 A new network role - not unlike "display network" should be >>>> added.Only one migration network should be defined on a cluster. >>>> 1.2 If none is defined, the legacy "use ovirtmgmt for migration" >>>> behavior would apply. >>>> 1.3 A migration network is more likely to be a ''required'' network, but >>>> a user may opt for non-required. He may face unpleasant >>>> surprises if he >>>> wants to migrate his machine, but no candidate host has the network >>>> available. >>>> 1.4 The "migration" role can be granted or taken on-the-fly, when hosts >>>> are active, as long as there are no currently-migrating VMs. >>>> >>>> 2. Scheduler >>>> 2.1 when deciding which host should be used for automatic >>>> migration, take into account the existence and availability of the >>>> migration network on the destination host. >>>> 2.2 For manual migration, let user migrate a VM to a host with no >>>> migration network - if the admin wants to keep jamming the >>>> management network with migration traffic, let her. >>>> >>>> 3. VdsBroker migration verb. >>>> 3.1 For the a modern cluster level, with migration network defined on >>>> the destination host, an additional ''miguri'' parameter >>>> should be added >>>> to the "migrate" command >>>> >>>> _______________________________________________ >>>> Arch mailing list >>>> Arch at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/arch >>> How is the authentication of the peers handled? Do we need a cert >>> per each source/destination logical interface? >>> Y. >> In my understanding, using a separate migration network doesn't >> change the current peers authentication. We still use the URI >> ''qemu+tls://remoeHost/system' to connect the target libvirt service >> if ssl enabled, and the remote host should be the ip address of >> management interface. But we can choose other interfaces except the >> manage interface to transport the migration data. We just change the >> migrateURI, so the current authentication mechanism should still >> work for this new feature. > vdsm-vdsm and libvirt-libvirt communication is authenticated, but I am > not sure at all that qemu-qemu communication is. AFAIK, there's not authentication between qemu-qemu communications. > > After qemu is sprung up on the destination with > -incoming : , anything with access to that > address could hijack the process. Our migrateURI starts with "tcp://" Dest libvirtd starts qemu with listening on that address/port, and qemu will close the listening socket on : as soon as the src host connects to it successfully. So it just listens in a very small window, but still possible to be hijacked. We could use iptables to only open the access to src host dynamically on migration for secure. 
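A minimal sketch of the iptables idea above, assuming root privileges, the stock iptables command and a known migration port; the function names are invented for illustration and are not part of VDSM:

    # Illustrative only: keep the qemu -incoming port reachable solely from
    # the migration source host for the duration of the migration.
    import subprocess

    def _iptables(*args):
        subprocess.check_call(['iptables'] + list(args))

    def allow_migration_from(src_ip, port):
        # Accept the source host on the migration port, drop everyone else.
        _iptables('-I', 'INPUT', '-p', 'tcp', '--dport', str(port),
                  '-s', src_ip, '-j', 'ACCEPT')
        _iptables('-A', 'INPUT', '-p', 'tcp', '--dport', str(port),
                  '-j', 'DROP')

    def revoke_migration_rules(src_ip, port):
        # Remove the temporary rules once the migration ends or fails.
        _iptables('-D', 'INPUT', '-p', 'tcp', '--dport', str(port),
                  '-s', src_ip, '-j', 'ACCEPT')
        _iptables('-D', 'INPUT', '-p', 'tcp', '--dport', str(port),
                  '-j', 'DROP')

This only narrows the unauthenticated window; it does not add authentication or encryption to the qemu stream itself.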
> with all the consequences of this. That a good reason to make sure > has as limited access as possible > > But maybe I'm wrong here, and libvir-list can show me the light. > > Dan. > From berrange at redhat.com Fri Jan 11 09:51:30 2013 From: berrange at redhat.com (Daniel P. Berrange) Date: Fri, 11 Jan 2013 09:51:30 +0000 Subject: [libvirt] feature suggestion: migration network In-Reply-To: <20130110120057.GJ26998@redhat.com> References: <20130106214941.GJ14546@redhat.com> <20130108130415.GG1534@redhat.com> <50EC3132.1030807@redhat.com> <50EE2B56.8020000@linux.vnet.ibm.com> <20130110120057.GJ26998@redhat.com> Message-ID: <20130111095130.GD4629@redhat.com> On Thu, Jan 10, 2013 at 02:00:57PM +0200, Dan Kenigsberg wrote: > vdsm-vdsm and libvirt-libvirt communication is authenticated, but I am > not sure at all that qemu-qemu communication is. > > After qemu is sprung up on the destination with > -incoming : , anything with access to that > address could hijack the process. Our migrateURI starts with "tcp://" > with all the consequences of this. That a good reason to make sure > has as limited access as possible. The QEMU<->QEMU communication channel is neither authenticated or encrypted, so if you are allowing migration directly over QEMU TCP channels you have a requirement for a trusted, secure mgmt network for this traffic. If your network is not trusted, then currently the only alternative is to make use of libvirt tunnelled migration. I would like to see QEMU gain support for using TLS on its migration sockets, so that you can have a secure QEMU<->QEMU path without needing to tunnel via libvirtd. Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From acathrow at redhat.com Fri Jan 11 12:28:27 2013 From: acathrow at redhat.com (Andrew Cathrow) Date: Fri, 11 Jan 2013 07:28:27 -0500 (EST) Subject: tunnelled migration In-Reply-To: <50EFAB96.30109@linux.vnet.ibm.com> Message-ID: <1428294550.3398622.1357907307410.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Mark Wu" > To: "Caitlin Bestler" > Cc: "Michal Skrivanek" , arch at ovirt.org > Sent: Friday, January 11, 2013 1:05:10 AM > Subject: Re: tunnelled migration > > On 01/11/2013 04:14 AM, Caitlin Bestler wrote: > > Dan Kenisberg wrote: > > > > > >> Choosing tunnelled migration is thus a matter of policy. I would > >> like to suggest a new cluster-level configurable in Engine, > >> that controls whether migrations in this cluster are tunnelled. > >> The configurable must be available only in new cluster levels > >> where hosts support it. > > Why not just dump this issue to network configuration? > > > > Migrations occur over a secure network. That security could be > > provided by port groups, VLANs or encrypted tunnels. > Agreed. Is a separate vlan network not secure enough? If yes, we > could > build a virtual encrypted network, like using openvpn + iptables. While I agree that a vlan should be enough, and that's their purpose we've learned from downstream customers that this isn't enough and their security teams require all traffic to be encrypted. 
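To make the two options in this thread concrete, here is a rough sketch using the libvirt Python bindings; it is not VDSM's actual migration code, and the host name and migration address are placeholders:

    # Illustrative sketch: either point the plain qemu-to-qemu stream at a
    # dedicated migration address (miguri), or keep the data inside the
    # authenticated libvirtd connection with the tunnelled flag.
    import libvirt

    def migrate(dom, dst_mgmt_host, migration_ip=None, tunnelled=False):
        # Tunnelled migration in libvirt requires peer-to-peer mode.
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
        miguri = None
        if tunnelled:
            # Data rides the libvirtd-to-libvirtd channel, which is already
            # authenticated and may be TLS-encrypted.
            flags |= libvirt.VIR_MIGRATE_TUNNELLED
        elif migration_ip:
            # Direct qemu-to-qemu stream, but sent to the migration
            # network's address rather than the management one.
            miguri = 'tcp://%s' % migration_ip
        dconnuri = 'qemu+tls://%s/system' % dst_mgmt_host
        dom.migrateToURI2(dconnuri, miguri, None, flags, None, 0)

The tunnelled variant pays for confidentiality with extra data copies through both libvirtd daemons; the miguri variant keeps qemu's native path but, as noted above, stays plaintext.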
> > > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From alonbl at redhat.com Fri Jan 11 14:49:45 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Fri, 11 Jan 2013 09:49:45 -0500 (EST) Subject: tunnelled migration In-Reply-To: <1428294550.3398622.1357907307410.JavaMail.root@redhat.com> Message-ID: <981539207.1217354.1357915785362.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Andrew Cathrow" > To: "Mark Wu" > Cc: arch at ovirt.org, "Michal Skrivanek" > Sent: Friday, January 11, 2013 2:28:27 PM > Subject: Re: tunnelled migration > > > > ----- Original Message ----- > > From: "Mark Wu" > > To: "Caitlin Bestler" > > Cc: "Michal Skrivanek" , arch at ovirt.org > > Sent: Friday, January 11, 2013 1:05:10 AM > > Subject: Re: tunnelled migration > > > > On 01/11/2013 04:14 AM, Caitlin Bestler wrote: > > > Dan Kenisberg wrote: > > > > > > > > >> Choosing tunnelled migration is thus a matter of policy. I would > > >> like to suggest a new cluster-level configurable in Engine, > > >> that controls whether migrations in this cluster are tunnelled. > > >> The configurable must be available only in new cluster levels > > >> where hosts support it. > > > Why not just dump this issue to network configuration? > > > > > > Migrations occur over a secure network. That security could be > > > provided by port groups, VLANs or encrypted tunnels. > > Agreed. Is a separate vlan network not secure enough? If yes, we > > could > > build a virtual encrypted network, like using openvpn + iptables. > > While I agree that a vlan should be enough, and that's their purpose > we've learned from downstream customers that this isn't enough and > their security teams require all traffic to be encrypted. In time we go from hardware enforced security to application enforced security. At the end what we really need to do is to stop messing with network configuration and add cryptographic lan to qemu, so that every virtual nic will encrypt the traffic using key based on destination. The key for destination will be acquired from central key manager, much like kerberos (if not kerberos). This will enable a totally flat network (either layer 2 or layer 3) and have total virtual cryptographic based virtual network over that. Regards, Alon Bar-Lev. From Caitlin.Bestler at nexenta.com Fri Jan 11 16:14:17 2013 From: Caitlin.Bestler at nexenta.com (Caitlin Bestler) Date: Fri, 11 Jan 2013 16:14:17 +0000 Subject: tunnelled migration In-Reply-To: <1428294550.3398622.1357907307410.JavaMail.root@redhat.com> References: <50EFAB96.30109@linux.vnet.ibm.com> <1428294550.3398622.1357907307410.JavaMail.root@redhat.com> Message-ID: <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9A0EE0@AUSP01DAG0106> Andrew Cathrow wrote: > While I agree that a vlan should be enough, and that's their purpose we've learned from downstream > customers that this isn't enough and their security teams require all traffic to be encrypted. Leaving the issue to the network layer allows each customer to easily control the quality of encryption used for each tunnel. When you do encryption at the application layer the user has to learn how to exercise whatever options you allow for each and every application. 
From mburns at redhat.com Fri Jan 11 16:33:32 2013 From: mburns at redhat.com (Mike Burns) Date: Fri, 11 Jan 2013 11:33:32 -0500 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1926337210.1018028.1357830473680.JavaMail.root@redhat.com> References: <1926337210.1018028.1357830473680.JavaMail.root@redhat.com> Message-ID: <1357922012.3919.18.camel@beelzebub.mburnsfire.net> See inline On Thu, 2013-01-10 at 10:07 -0500, Alon Bar-Lev wrote: > Hello All, > > Just to clarify some terms.... as I got some feedback of confusion. > > oVirt is an UPSTREAM project, it is the source origin (manufacturer). Agreed > > Fedora is a DOWNSTREAM to oVirt, it is a distribution that provides the oVirt product. Agreed, in general (except for ovirt-node which is an exception) > > Red Hat Enterprise Linux is DOWNSTREAM to oVirt, it is a distribution that provides the oVirt product. ACK > > Debian is, well, you got the point. > > Fedora (as distribution) is in some sense UPSTREAM to Red Hat Enterprise Linux, however, this relationship has no impact of oVirt. > > Regards, > Alon Bar-Lev > > ----- Original Message ----- > > From: "Alon Bar-Lev" > > To: "Mike Burns" > > Cc: "arch" > > Sent: Thursday, January 10, 2013 3:11:58 PM > > Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > > > Hello Mike and all, > > > > I am little confused with the release engineering of oVirt. > > > > Let me describe what I know about the release engineering of an > > upstream project, and please tell how we are different and why. > > > > 1. Downstream schedules are not relevant to upstream project and vise > > versa. Usually, yes, but we made the call to support F18 as the primary OS for oVirt 3.2. Given that, we couldn't really release 3.2 before F18 was released. > > > > 2. Upstream project releases its sources and optionally binaries in > > milestones. Again, I agree, except for the fact that we're targeting F18 as the primary release we're supporting for 3.2. > > > > 3. Milestone are determine by upstream project and upstream project, > > and has several standard, for example: > > > > package-2.0.0_alpha > > package-2.0.0_alpha1 > > package-2.0.0_beta > > package-2.0.0_beta1 > > package-2.0.0_beta2 > > package-2.0.0_rc > > package-2.0.0_rc1 > > package-2.0.0_rc2 > > package-2.0.0_rc3 > > package-2.0.0_rc4 > > package-2.0.0 > > > > Or: > > > > 1.99.1 > > 1.99.2 > > 1.99.3 > > 1.99.4 > > 2.0.0 > > > > 4. Binaries are built over the *source tarball* released at the > > milestones. The important artifact is the source tarball, it is the > > source of all good and evil. > > > > 5. Downstream may adopt / modify / re-write packaging but it will use > > the release source tarball at their choice of milestone, probably it > > won't adopt pre-release version. > > > > In case of oVirt, I do understand the the release schedule is tight > > between fedora and oVirt as it is the only supported distribution. > > However, I do expect that the sources will have similar to the above > > cycle, and the formal build will be produced out of the sources. Yes, I agree. Long term, once we get stable enough on multiple distros, I'd definitely want to move toward a model where we are releasing just src tarballs, and distro maintainers update the packages based on the new upstream source. I just don't think we're at that stability level yet. 
Mike > > > > Alon > > > > ----- Original Message ----- > > > From: "Mike Burns" > > > To: "arch" > > > Cc: "engine-devel" , "vdsm-devel" > > > , "node-devel" > > > > > > Sent: Thursday, January 10, 2013 3:07:58 AM > > > Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta > > > Build deadlines > > > > > > (Sorry for cross posting, trying to ensure I hit all the relevant > > > maintainers) > > > > > > If you are the primary maintainer of a sub-project in oVirt, this > > > message is for you. > > > > > > At the Weekly oVirt Meeting, the final devel freeze and beta dates > > > were > > > decided. > > > > > > Freeze: 2013-01-14 > > > Beta Post: 2013-01-15 > > > > > > Action items: > > > > > > * You project should create a new branch in gerrit for the release > > > * You should create a formal build of your project for the beta > > > post > > > * Get the formal build of your project into the hands of someone > > > who > > > can > > > post it [1][2] > > > > > > These should all be done by EOD on 2013-01-14 (with the exception > > > of > > > ovirt-node-iso) [3] > > > > > > Packages that this impacts: > > > > > > * mom > > > * otopi > > > * ovirt-engine > > > * ovirt-engine-cli > > > * ovirt-engine-sdk > > > * ovirt-guest-agent > > > * ovirt-host-deploy > > > * ovirt-image-uploader > > > * ovirt-iso-uploader > > > * ovirt-log-collector > > > * ovirt-node > > > * ovirt-node-iso > > > * vdsm > > > > > > Thanks > > > > > > Mike Burns > > > > > > [1] This is only necessary if the package is *not* already in > > > fedora > > > repos (must be in actual fedora repos, not just updates-testing or > > > koji) > > > [2] Communicate with mburns, mgoldboi, oschreib to deliver the > > > packages > > > [3] ovirt-node-iso requires some of the other packages to be > > > available > > > prior to creating the image. This image will be created either on > > > 2013-01-14 or 2013-01-15 and posted along with the rest of the > > > Beta. > > > > > > _______________________________________________ > > > vdsm-devel mailing list > > > vdsm-devel at lists.fedorahosted.org > > > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel > > > > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > From alonbl at redhat.com Fri Jan 11 16:36:32 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Fri, 11 Jan 2013 11:36:32 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1357922012.3919.18.camel@beelzebub.mburnsfire.net> Message-ID: <1004227467.1255591.1357922192280.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Mike Burns" > To: "Alon Bar-Lev" > Cc: "arch" > Sent: Friday, January 11, 2013 6:33:32 PM > Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > See inline > > On Thu, 2013-01-10 at 10:07 -0500, Alon Bar-Lev wrote: > > Hello All, > > > > Just to clarify some terms.... as I got some feedback of confusion. > > > > oVirt is an UPSTREAM project, it is the source origin > > (manufacturer). > > Agreed > > > > Fedora is a DOWNSTREAM to oVirt, it is a distribution that provides > > the oVirt product. > > Agreed, in general (except for ovirt-node which is an exception) > > > > > Red Hat Enterprise Linux is DOWNSTREAM to oVirt, it is a > > distribution that provides the oVirt product. > > ACK > > > > Debian is, well, you got the point. 
> > > > Fedora (as distribution) is in some sense UPSTREAM to Red Hat > > Enterprise Linux, however, this relationship has no impact of > > oVirt. > > > > Regards, > > Alon Bar-Lev > > > > ----- Original Message ----- > > > From: "Alon Bar-Lev" > > > To: "Mike Burns" > > > Cc: "arch" > > > Sent: Thursday, January 10, 2013 3:11:58 PM > > > Subject: Re: [vdsm] ATTN: Project Maintainers: Code > > > Freeze/Branch/Beta Build deadlines > > > > > > Hello Mike and all, > > > > > > I am little confused with the release engineering of oVirt. > > > > > > Let me describe what I know about the release engineering of an > > > upstream project, and please tell how we are different and why. > > > > > > 1. Downstream schedules are not relevant to upstream project and > > > vise > > > versa. > > Usually, yes, but we made the call to support F18 as the primary OS > for > oVirt 3.2. Given that, we couldn't really release 3.2 before F18 was > released. Can we discuss this a little farther? Why do you think that we cannot release ovirt-3.2 before fedora-18? > > > > > > > 2. Upstream project releases its sources and optionally binaries > > > in > > > milestones. > > Again, I agree, except for the fact that we're targeting F18 as the > primary release we're supporting for 3.2. > > > > > > > 3. Milestone are determine by upstream project and upstream > > > project, > > > and has several standard, for example: > > > > > > package-2.0.0_alpha > > > package-2.0.0_alpha1 > > > package-2.0.0_beta > > > package-2.0.0_beta1 > > > package-2.0.0_beta2 > > > package-2.0.0_rc > > > package-2.0.0_rc1 > > > package-2.0.0_rc2 > > > package-2.0.0_rc3 > > > package-2.0.0_rc4 > > > package-2.0.0 > > > > > > Or: > > > > > > 1.99.1 > > > 1.99.2 > > > 1.99.3 > > > 1.99.4 > > > 2.0.0 > > > > > > 4. Binaries are built over the *source tarball* released at the > > > milestones. The important artifact is the source tarball, it is > > > the > > > source of all good and evil. > > > > > > 5. Downstream may adopt / modify / re-write packaging but it will > > > use > > > the release source tarball at their choice of milestone, probably > > > it > > > won't adopt pre-release version. > > > > > > In case of oVirt, I do understand the the release schedule is > > > tight > > > between fedora and oVirt as it is the only supported > > > distribution. > > > However, I do expect that the sources will have similar to the > > > above > > > cycle, and the formal build will be produced out of the sources. > > Yes, I agree. Long term, once we get stable enough on multiple > distros, > I'd definitely want to move toward a model where we are releasing > just > src tarballs, and distro maintainers update the packages based on the > new upstream source. I just don't think we're at that stability > level > yet. > > Mike > > > > > > > > Alon > > > > > > ----- Original Message ----- > > > > From: "Mike Burns" > > > > To: "arch" > > > > Cc: "engine-devel" , "vdsm-devel" > > > > , "node-devel" > > > > > > > > Sent: Thursday, January 10, 2013 3:07:58 AM > > > > Subject: [vdsm] ATTN: Project Maintainers: Code > > > > Freeze/Branch/Beta > > > > Build deadlines > > > > > > > > (Sorry for cross posting, trying to ensure I hit all the > > > > relevant > > > > maintainers) > > > > > > > > If you are the primary maintainer of a sub-project in oVirt, > > > > this > > > > message is for you. > > > > > > > > At the Weekly oVirt Meeting, the final devel freeze and beta > > > > dates > > > > were > > > > decided. 
> > > > > > > > Freeze: 2013-01-14 > > > > Beta Post: 2013-01-15 > > > > > > > > Action items: > > > > > > > > * You project should create a new branch in gerrit for the > > > > release > > > > * You should create a formal build of your project for the beta > > > > post > > > > * Get the formal build of your project into the hands of > > > > someone > > > > who > > > > can > > > > post it [1][2] > > > > > > > > These should all be done by EOD on 2013-01-14 (with the > > > > exception > > > > of > > > > ovirt-node-iso) [3] > > > > > > > > Packages that this impacts: > > > > > > > > * mom > > > > * otopi > > > > * ovirt-engine > > > > * ovirt-engine-cli > > > > * ovirt-engine-sdk > > > > * ovirt-guest-agent > > > > * ovirt-host-deploy > > > > * ovirt-image-uploader > > > > * ovirt-iso-uploader > > > > * ovirt-log-collector > > > > * ovirt-node > > > > * ovirt-node-iso > > > > * vdsm > > > > > > > > Thanks > > > > > > > > Mike Burns > > > > > > > > [1] This is only necessary if the package is *not* already in > > > > fedora > > > > repos (must be in actual fedora repos, not just updates-testing > > > > or > > > > koji) > > > > [2] Communicate with mburns, mgoldboi, oschreib to deliver the > > > > packages > > > > [3] ovirt-node-iso requires some of the other packages to be > > > > available > > > > prior to creating the image. This image will be created either > > > > on > > > > 2013-01-14 or 2013-01-15 and posted along with the rest of the > > > > Beta. > > > > > > > > _______________________________________________ > > > > vdsm-devel mailing list > > > > vdsm-devel at lists.fedorahosted.org > > > > https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel > > > > > > > _______________________________________________ > > > Arch mailing list > > > Arch at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/arch > > > > > > From pmyers at redhat.com Fri Jan 11 17:28:29 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 11 Jan 2013 12:28:29 -0500 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1004227467.1255591.1357922192280.JavaMail.root@redhat.com> References: <1004227467.1255591.1357922192280.JavaMail.root@redhat.com> Message-ID: <50F04BBD.4020607@redhat.com> > Can we discuss this a little farther? > Why do you think that we cannot release ovirt-3.2 before fedora-18? oVirt contains both Node and Engine parts AIUI In 3.2 vdsm requires F18 version of libvirt, so if oVirt 3.2 comes out prior to F18, then oVirt 3.2 would require a yet unreleased version of Fedora to run on which is probably not a good idea oVirt Node (which is part of oVirt project) also requires Fedora 18, since a Fedora 17 based oVirt Node would not have the required version of libvirt needed to support 3.2 functionality Perry From dneary at redhat.com Fri Jan 11 18:22:48 2013 From: dneary at redhat.com (Dave Neary) Date: Fri, 11 Jan 2013 19:22:48 +0100 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F04BBD.4020607@redhat.com> References: <1004227467.1255591.1357922192280.JavaMail.root@redhat.com> <50F04BBD.4020607@redhat.com> Message-ID: <50F05878.7000803@redhat.com> Hi, On 01/11/2013 06:28 PM, Perry Myers wrote: > In 3.2 vdsm requires F18 version of libvirt, so if oVirt 3.2 comes out > prior to F18, then oVirt 3.2 would require a yet unreleased version of > Fedora to run on which is probably not a good idea Is VDSM not considered part of oVirt? 
It feels like it should be, but it's hosted on a different site (as is its mailing list). Also, is it possible to have F17 builds of the required version of libvirt, or does that library have some dependencies on other system components which changed significantly between F17 and F18? Thanks! Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From alonbl at redhat.com Fri Jan 11 18:51:27 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Fri, 11 Jan 2013 13:51:27 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F04BBD.4020607@redhat.com> Message-ID: <896984664.1284130.1357930287660.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Perry Myers" > To: "Alon Bar-Lev" > Cc: "Mike Burns" , "arch" > Sent: Friday, January 11, 2013 7:28:29 PM > Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > > Can we discuss this a little farther? > > Why do you think that we cannot release ovirt-3.2 before fedora-18? > > oVirt contains both Node and Engine parts ovirt-node is totally different discussion as it is mini distribution, re-redistributing fedora and oVirt components. ovirt-node can be released in other milestones, based on the lineup of dependencies. > > AIUI > > In 3.2 vdsm requires F18 version of libvirt, so if oVirt 3.2 comes > out > prior to F18, then oVirt 3.2 would require a yet unreleased version > of > Fedora to run on which is probably not a good idea libvirt is upstream. oVirt is upstrema. Upstream oVirt requires some upstream version of libvirt at specific version of oVirt. Downstream maintainers of oVirt will require the proper minimal version of libvirt exists in downstream for proper execution. Upstream maintainers do not care which version of libvirt exists in downstream, as a specific downstream is irrelevant. Let's say we support debian and fedora, the version of libvirt is probably different in each, do we aim to the libvirt of debian? of fedora? No... upstream aims for libvirt it actually requires (minimum version to provide features). > oVirt Node (which is part of oVirt project) also requires Fedora 18, > since a Fedora 17 based oVirt Node would not have the required > version > of libvirt needed to support 3.2 functionality Are you sure that the required version of libvirt cannot be run on fedora-17? I think we can provide (had we wanted to) ovirt-node based on fedora-17 with oVirt-3.2 with all dependencies, before fedora-18 is out, and I also believe that it would have been healthier approach. The fact that oVirt-3.2 requires out-of-tree libvirt on fedora-17, does not mean user cannot install the required version, this is true to any other dependency. The gain is large, as we take a stable platform with minimum unstable components. What I would like is for us to start acting like pure upstream... 
Regards, Alon From alonbl at redhat.com Fri Jan 11 18:54:19 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Fri, 11 Jan 2013 13:54:19 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F05878.7000803@redhat.com> Message-ID: <883604775.1284655.1357930459913.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Dave Neary" > To: "Perry Myers" > Cc: "Alon Bar-Lev" , "arch" > Sent: Friday, January 11, 2013 8:22:48 PM > Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > > Hi, > > On 01/11/2013 06:28 PM, Perry Myers wrote: > > In 3.2 vdsm requires F18 version of libvirt, so if oVirt 3.2 comes > > out > > prior to F18, then oVirt 3.2 would require a yet unreleased version > > of > > Fedora to run on which is probably not a good idea > > Is VDSM not considered part of oVirt? It feels like it should be, but > it's hosted on a different site (as is its mailing list). That's true, it is CONFUSING... I raised it several times, I guess other people should. vdsm should be moved into ovirt.org, moving away from fedora hosted ASAP. Regards, Alon From pmyers at redhat.com Fri Jan 11 19:09:36 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 11 Jan 2013 14:09:36 -0500 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <896984664.1284130.1357930287660.JavaMail.root@redhat.com> References: <896984664.1284130.1357930287660.JavaMail.root@redhat.com> Message-ID: <50F06370.7030007@redhat.com> On 01/11/2013 01:51 PM, Alon Bar-Lev wrote: > > > ----- Original Message ----- >> From: "Perry Myers" >> To: "Alon Bar-Lev" >> Cc: "Mike Burns" , "arch" >> Sent: Friday, January 11, 2013 7:28:29 PM >> Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines >> >>> Can we discuss this a little farther? >>> Why do you think that we cannot release ovirt-3.2 before fedora-18? >> >> oVirt contains both Node and Engine parts > > ovirt-node is totally different discussion as it is mini distribution, re-redistributing fedora and oVirt components. ovirt-node can be released in other milestones, based on the lineup of dependencies. > >> >> AIUI >> >> In 3.2 vdsm requires F18 version of libvirt, so if oVirt 3.2 comes >> out >> prior to F18, then oVirt 3.2 would require a yet unreleased version >> of >> Fedora to run on which is probably not a good idea > > libvirt is upstream. > oVirt is upstrema. > Upstream oVirt requires some upstream version of libvirt at specific version of oVirt. > > Downstream maintainers of oVirt will require the proper minimal version of libvirt exists in downstream for proper execution. > Upstream maintainers do not care which version of libvirt exists in downstream, as a specific downstream is irrelevant. > Let's say we support debian and fedora, the version of libvirt is probably different in each, do we aim to the libvirt of debian? of fedora? > No... upstream aims for libvirt it actually requires (minimum version to provide features). > >> oVirt Node (which is part of oVirt project) also requires Fedora 18, >> since a Fedora 17 based oVirt Node would not have the required >> version >> of libvirt needed to support 3.2 functionality > > Are you sure that the required version of libvirt cannot be run on fedora-17? > I think we can provide (had we wanted to) ovirt-node based on fedora-17 with oVirt-3.2 with all dependencies, before fedora-18 is out, and I also believe that it would have been healthier approach. 
> The fact that oVirt-3.2 requires out-of-tree libvirt on fedora-17, does not mean user cannot install the required version, this is true to any other dependency. > The gain is large, as we take a stable platform with minimum unstable components. > > What I would like is for us to start acting like pure upstream... In that case, oVirt Node project should be removed from the oVirt release cadence, since it cannot be decoupled from the Fedora release cadence. Also given that the intent is for oVirt Node to be used for multiple other projects (oVirt Engine, Fedora OpenStack, Fedora Gluster), perhaps it does make sense to decouple oVirt Node core from oVirt Engine releases and treat it as a separate project from oVirt Engine with separate release schedule. I think this might be a good way to go. oVirt Node core team can produce oVirt Node images on a Fedora release cadence (6 months) and then oVirt Engine team can consume these images and inject vdsm into them on their own release cadence. So the release numbering like 3.1, 3.2 would only apply to the Engine side of things, while oVirt release numbering would be tied to Fedora release numbering. Mike, what are your thoughts? Perry From mburns at redhat.com Fri Jan 11 19:31:51 2013 From: mburns at redhat.com (Mike Burns) Date: Fri, 11 Jan 2013 14:31:51 -0500 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F06370.7030007@redhat.com> References: <896984664.1284130.1357930287660.JavaMail.root@redhat.com> <50F06370.7030007@redhat.com> Message-ID: <1357932711.3919.30.camel@beelzebub.mburnsfire.net> On Fri, 2013-01-11 at 14:09 -0500, Perry Myers wrote: > On 01/11/2013 01:51 PM, Alon Bar-Lev wrote: > > > > > > ----- Original Message ----- > >> From: "Perry Myers" > >> To: "Alon Bar-Lev" > >> Cc: "Mike Burns" , "arch" > >> Sent: Friday, January 11, 2013 7:28:29 PM > >> Subject: Re: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines > >> > >>> Can we discuss this a little farther? > >>> Why do you think that we cannot release ovirt-3.2 before fedora-18? > >> > >> oVirt contains both Node and Engine parts > > > > ovirt-node is totally different discussion as it is mini distribution, re-redistributing fedora and oVirt components. ovirt-node can be released in other milestones, based on the lineup of dependencies. > > > >> > >> AIUI > >> > >> In 3.2 vdsm requires F18 version of libvirt, so if oVirt 3.2 comes > >> out > >> prior to F18, then oVirt 3.2 would require a yet unreleased version > >> of > >> Fedora to run on which is probably not a good idea > > > > libvirt is upstream. > > oVirt is upstrema. > > Upstream oVirt requires some upstream version of libvirt at specific version of oVirt. > > > > Downstream maintainers of oVirt will require the proper minimal version of libvirt exists in downstream for proper execution. > > Upstream maintainers do not care which version of libvirt exists in downstream, as a specific downstream is irrelevant. > > Let's say we support debian and fedora, the version of libvirt is probably different in each, do we aim to the libvirt of debian? of fedora? > > No... upstream aims for libvirt it actually requires (minimum version to provide features). > > > >> oVirt Node (which is part of oVirt project) also requires Fedora 18, > >> since a Fedora 17 based oVirt Node would not have the required > >> version > >> of libvirt needed to support 3.2 functionality > > > > Are you sure that the required version of libvirt cannot be run on fedora-17? 
> > I think we can provide (had we wanted to) ovirt-node based on fedora-17 with oVirt-3.2 with all dependencies, before fedora-18 is out, and I also believe that it would have been healthier approach. > > The fact that oVirt-3.2 requires out-of-tree libvirt on fedora-17, does not mean user cannot install the required version, this is true to any other dependency. > > The gain is large, as we take a stable platform with minimum unstable components. > > > > What I would like is for us to start acting like pure upstream... > > In that case, oVirt Node project should be removed from the oVirt > release cadence, since it cannot be decoupled from the Fedora release > cadence. > > Also given that the intent is for oVirt Node to be used for multiple > other projects (oVirt Engine, Fedora OpenStack, Fedora Gluster), perhaps > it does make sense to decouple oVirt Node core from oVirt Engine > releases and treat it as a separate project from oVirt Engine with > separate release schedule. > > I think this might be a good way to go. > > oVirt Node core team can produce oVirt Node images on a Fedora release > cadence (6 months) and then oVirt Engine team can consume these images > and inject vdsm into them on their own release cadence. > > So the release numbering like 3.1, 3.2 would only apply to the Engine > side of things, while oVirt release numbering would be tied to Fedora > release numbering. > > Mike, what are your thoughts? > > Perry In general, I completely agree with Alon that we should be doing tarball releases only and have distros handle packaging for their own distro. It's something I'll push strongly for during the next release. For this particular release, since we made a decision (right or wrong) to support F18 only, I think we should at least make sure that F18 is released prior to our release. If we were also supporting F17, then I would completely agree that the F18 schedule should have no impact on our schedule. The plan that was decided on months ago was that we were going to concentrate on F18 only. There weren't enough people to handle both F17 and F18 at the time. As for splitting oVirt Node out and make it a completely separate project with a different cadence, that is something I could agree with. I think we do need to make sure that there is a *working* ovirt-node iso image available at oVirt release, but having an ovirt-node tarball that releases with and oVirt release doesn't necessarily need to be required. Mike From kwade at redhat.com Fri Jan 11 20:41:50 2013 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Fri, 11 Jan 2013 12:41:50 -0800 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F05878.7000803@redhat.com> References: <1004227467.1255591.1357922192280.JavaMail.root@redhat.com> <50F04BBD.4020607@redhat.com> <50F05878.7000803@redhat.com> Message-ID: <50F0790E.60804@redhat.com> On 01/11/2013 10:22 AM, Dave Neary wrote: > Is VDSM not considered part of oVirt? It feels like it should be, but > it's hosted on a different site (as is its mailing list). I'd have to dig through the history to be sure, but I think we never made a *requirement* that a component of the oVirt release be hosted partially or entirely on *.ovirt.org infrastructure. As each of the sub-projects are somewhat independently run, they could (or should) have the right to choose where they want to host services, within reason. "Within reason" would include e.g. "git yes, svn no - but not necessarily requiring git.ovirt.org." 
What happens when we get a project that is outside of oVirt but wants to be part of the family? Do we require it to use only *.ovirt.org infrastructure of participation as primary? http://www.ovirt.org/Governance states: "The oVirt is a collection of subprojects, each of which shares a framework for its governance, and commits to a common release schedule, but which has its own set of maintainers and community communication channels." Nothing on that page requires sub-projects to use oVirt infrastructure. VDSM uses the governance and release schedule for oVirt, so it matches the requirement. IMO, I want people to use *.ovirt.org because it's the best location for their code, not because they are required. But this whole thing sounds like a Board decision to make - which discussion would include the voice & vote of the VDSM maintainer (Ayal Baron.) - Karsten -- Karsten 'quaid' Wade, Sr. Analyst - Community Growth http://TheOpenSourceWay.org .^\ http://community.redhat.com @quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 253 bytes Desc: OpenPGP digital signature URL: From dneary at redhat.com Fri Jan 11 21:14:25 2013 From: dneary at redhat.com (Dave Neary) Date: Fri, 11 Jan 2013 22:14:25 +0100 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <1357932711.3919.30.camel@beelzebub.mburnsfire.net> References: <896984664.1284130.1357930287660.JavaMail.root@redhat.com> <50F06370.7030007@redhat.com> <1357932711.3919.30.camel@beelzebub.mburnsfire.net> Message-ID: <50F080B1.4070006@redhat.com> Hi, On 01/11/2013 08:31 PM, Mike Burns wrote: > As for splitting oVirt Node out and make it a completely separate > project with a different cadence, that is something I could agree with. > I think we do need to make sure that there is a *working* ovirt-node iso > image available at oVirt release, but having an ovirt-node tarball that > releases with and oVirt release doesn't necessarily need to be required. This would make even more sense if an oVirt Node instance were also a Nova compute node. How difficult would it be to achieve that? Cheers, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From mburns at redhat.com Fri Jan 11 21:24:25 2013 From: mburns at redhat.com (Mike Burns) Date: Fri, 11 Jan 2013 16:24:25 -0500 Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F080B1.4070006@redhat.com> References: <896984664.1284130.1357930287660.JavaMail.root@redhat.com> <50F06370.7030007@redhat.com> <1357932711.3919.30.camel@beelzebub.mburnsfire.net> <50F080B1.4070006@redhat.com> Message-ID: <50F08309.3010208@redhat.com> On 01/11/2013 04:14 PM, Dave Neary wrote: > Hi, > > On 01/11/2013 08:31 PM, Mike Burns wrote: >> As for splitting oVirt Node out and make it a completely separate >> project with a different cadence, that is something I could agree with. >> I think we do need to make sure that there is a *working* ovirt-node iso >> image available at oVirt release, but having an ovirt-node tarball that >> releases with and oVirt release doesn't necessarily need to be required. > > This would make even more sense if an oVirt Node instance were also a > Nova compute node. How difficult would it be to achieve that? > It's in the works. Probably doable in Grizzly time frame. 
Mike > Cheers, > Dave. > From kwade at redhat.com Fri Jan 11 22:18:45 2013 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Fri, 11 Jan 2013 14:18:45 -0800 Subject: oVirt - Infiniband Support / Setup In-Reply-To: <50EEB0BD.50300@redhat.com> References: <50EEB0BD.50300@redhat.com> Message-ID: <50F08FC5.3060404@redhat.com> On 01/10/2013 04:14 AM, Itamar Heim wrote: > On 01/10/2013 01:36 PM, Alexander Rydekull wrote: >> Hello all, >> >> My name is Alexander and I'm somewhat involved with the infra-team, >> trying to become a more active contributor. >> >> I have a friend of mine who was prodding me and asking if oVirt would >> benefit from some infiniband equipment to further development on those >> parts. >> >> What he said was, that he could sort out servers / switches / adapters >> for the project to use and develop on. And he asked if oVirt would make >> any use of it? >> >> So basically, that's my question to you. Is this an area of interest? >> Would you want me to try and see if we can get a infiniband-setup for >> oVirt done? >> >> -- >> /Alexander Rydekull >> >> >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch >> > > I think there are some active members from mellanox pushing patches for > better infiniband integration. would be great if these modes can be > tested by him? Since there could be interest and need, +1 for integrating this with oVirt via the Infra team. Alexander, do you want to take on coordinating this? Do you know if your friend is interested in being a formal sponsor of the project through this mechanism? - Karsten -- Karsten 'quaid' Wade, Sr. Analyst - Community Growth http://TheOpenSourceWay.org .^\ http://community.redhat.com @quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 253 bytes Desc: OpenPGP digital signature URL: From alonbl at redhat.com Sun Jan 13 09:42:33 2013 From: alonbl at redhat.com (Alon Bar-Lev) Date: Sun, 13 Jan 2013 04:42:33 -0500 (EST) Subject: [vdsm] ATTN: Project Maintainers: Code Freeze/Branch/Beta Build deadlines In-Reply-To: <50F08309.3010208@redhat.com> Message-ID: <463230213.1432168.1358070153094.JavaMail.root@redhat.com> Thanks, Let's start over... I just want to understand the exact process... I am maintainer of otopi and ovirt-host-deploy. I will release and tag tomorrow otopi-1.0.0_beta and ovirt-host-deploy-1.0.0_beta, tarballs: @TAG at .tar.gz Where do I put the tarballs so that I can reference them in the fedora spec file? Before release I will need to tag the otopi-1.0.0 and ovirt-host-deploy-1.0.0, and put the tarballs at specific location, will I have the opportunity to do so? After release the specs at fedora will also need to drop the suffix, and use these release tarballs, will this be possible? If the above cannot be done, what exactly is the process? I understand the tag part, but other than that I do not follow. Thanks, Alon. 
From danken at redhat.com Sun Jan 13 10:50:30 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Sun, 13 Jan 2013 12:50:30 +0200 Subject: tunnelled migration In-Reply-To: <50EFAB96.30109@linux.vnet.ibm.com> References: <20130110133834.GO26998@redhat.com> <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9A066B@AUSP01DAG0106> <50EFAB96.30109@linux.vnet.ibm.com> Message-ID: <20130113105030.GP26998@redhat.com> On Fri, Jan 11, 2013 at 02:05:10PM +0800, Mark Wu wrote: > On 01/11/2013 04:14 AM, Caitlin Bestler wrote: > >Dan Kenisberg wrote: > > > > > >>Choosing tunnelled migration is thus a matter of policy. I would like to suggest a new cluster-level configurable in Engine, > >>that controls whether migrations in this cluster are tunnelled. The configurable must be available only in new cluster levels > >>where hosts support it. > >Why not just dump this issue to network configuration? > > > >Migrations occur over a secure network. That security could be provided by port groups, VLANs or encrypted tunnels. > Agreed. Is a separate vlan network not secure enough? If yes, we > could build a virtual encrypted network, like using openvpn + > iptables. I agree that separating migration traffic to a different, optionally-encrypted network, is a noble goal. In fact, it is a parallel effort that I am pushing for: http://lists.ovirt.org/pipermail/arch/2013-January/001117.html Building our own tunnel between hosts is cool, but using libvirt's tunneling is here and now and easy, and should not wait just because there's even better technology around the third next corner. With my suggested API, we could even change the implementation of "tunnelled" to "tunnel over our own vpn" if we need to. Now is the time to eat the low-hanging fruit of VIR_MIGRATE_TUNNELLED. Dan. From lpeer at redhat.com Sun Jan 13 11:53:23 2013 From: lpeer at redhat.com (Livnat Peer) Date: Sun, 13 Jan 2013 13:53:23 +0200 Subject: feature suggestion: migration network In-Reply-To: <29660868.4849.1357822391533.JavaMail.javamailuser@localhost> References: <29660868.4849.1357822391533.JavaMail.javamailuser@localhost> Message-ID: <50F2A033.6050802@redhat.com> On 01/10/2013 02:54 PM, Simon Grinberg wrote: > > > ----- Original Message ----- >> From: "Dan Kenigsberg" >> To: "Doron Fediuck" >> Cc: "Simon Grinberg" , "Orit Wasserman" , "Laine Stump" , >> "Yuval M" , "Limor Gavish" , arch at ovirt.org, "Mark Wu" >> >> Sent: Thursday, January 10, 2013 1:46:08 PM >> Subject: Re: feature suggestion: migration network >> >> On Thu, Jan 10, 2013 at 04:43:45AM -0500, Doron Fediuck wrote: >>> >>> >>> ----- Original Message ----- >>>> From: "Simon Grinberg" >>>> To: "Mark Wu" , "Doron Fediuck" >>>> >>>> Cc: "Orit Wasserman" , "Laine Stump" >>>> , "Yuval M" , "Limor >>>> Gavish" , arch at ovirt.org, "Dan Kenigsberg" >>>> >>>> Sent: Thursday, January 10, 2013 10:38:56 AM >>>> Subject: Re: feature suggestion: migration network >>>> >>>> >>>> >>>> ----- Original Message ----- >>>>> From: "Mark Wu" >>>>> To: "Dan Kenigsberg" >>>>> Cc: "Simon Grinberg" , "Orit Wasserman" >>>>> , "Laine Stump" , >>>>> "Yuval M" , "Limor Gavish" >>>>> , >>>>> arch at ovirt.org >>>>> Sent: Thursday, January 10, 2013 5:13:23 AM >>>>> Subject: Re: feature suggestion: migration network >>>>> >>>>> On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: >>>>>> On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg >>>>>> wrote: >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>>> From: "Yaniv Kaul" >>>>>>>> To: "Dan Kenigsberg" >>>>>>>> Cc: "Limor Gavish" , "Yuval M" >>>>>>>> , arch at ovirt.org, 
"Simon Grinberg" >>>>>>>> >>>>>>>> Sent: Tuesday, January 8, 2013 4:46:10 PM >>>>>>>> Subject: Re: feature suggestion: migration network >>>>>>>> >>>>>>>> On 08/01/13 15:04, Dan Kenigsberg wrote: >>>>>>>>> There's talk about this for ages, so it's time to have >>>>>>>>> proper >>>>>>>>> discussion >>>>>>>>> and a feature page about it: let us have a "migration" >>>>>>>>> network >>>>>>>>> role, and >>>>>>>>> use such networks to carry migration data >>>>>>>>> >>>>>>>>> When Engine requests to migrate a VM from one node to >>>>>>>>> another, >>>>>>>>> the >>>>>>>>> VM >>>>>>>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP >>>>>>>>> connection >>>>>>>>> that is opened from the source qemu process to the >>>>>>>>> destination >>>>>>>>> qemu. >>>>>>>>> Currently, destination qemu listens for the incoming >>>>>>>>> connection >>>>>>>>> on >>>>>>>>> the >>>>>>>>> management IP address of the destination host. This has >>>>>>>>> serious >>>>>>>>> downsides: a "migration storm" may choke the destination's >>>>>>>>> management >>>>>>>>> interface; migration is plaintext and ovirtmgmt includes >>>>>>>>> Engine >>>>>>>>> which >>>>>>>>> sits may sit the node cluster. >>>>>>>>> >>>>>>>>> With this feature, a cluster administrator may grant the >>>>>>>>> "migration" >>>>>>>>> role to one of the cluster networks. Engine would use that >>>>>>>>> network's IP >>>>>>>>> address on the destination host when it requests a >>>>>>>>> migration >>>>>>>>> of >>>>>>>>> a >>>>>>>>> VM. >>>>>>>>> With proper network setup, migration data would be >>>>>>>>> separated >>>>>>>>> to >>>>>>>>> that >>>>>>>>> network. >>>>>>>>> >>>>>>>>> === Benefit to oVirt === >>>>>>>>> * Users would be able to define and dedicate a separate >>>>>>>>> network >>>>>>>>> for >>>>>>>>> migration. Users that need quick migration would use >>>>>>>>> nics >>>>>>>>> with >>>>>>>>> high >>>>>>>>> bandwidth. Users who want to cap the bandwidth >>>>>>>>> consumed by >>>>>>>>> migration >>>>>>>>> could define a migration network over nics with >>>>>>>>> bandwidth >>>>>>>>> limitation. >>>>>>>>> * Migration data can be limited to a separate network, >>>>>>>>> that >>>>>>>>> has >>>>>>>>> no >>>>>>>>> layer-2 access from Engine >>>>>>>>> >>>>>>>>> === Vdsm === >>>>>>>>> The "migrate" verb should be extended with an additional >>>>>>>>> parameter, >>>>>>>>> specifying the address that the remote qemu process should >>>>>>>>> listen >>>>>>>>> on. A >>>>>>>>> new argument is to be added to the currently-defined >>>>>>>>> migration >>>>>>>>> arguments: >>>>>>>>> * vmId: UUID >>>>>>>>> * dst: management address of destination host >>>>>>>>> * dstparams: hibernation volumes definition >>>>>>>>> * mode: migration/hibernation >>>>>>>>> * method: rotten legacy >>>>>>>>> * ''New'': migration uri, according to >>>>>>>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 >>>>>>>>> such as tcp:// >>>>>>>>> >>>>>>>>> === Engine === >>>>>>>>> As usual, complexity lies here, and several changes are >>>>>>>>> required: >>>>>>>>> >>>>>>>>> 1. Network definition. >>>>>>>>> 1.1 A new network role - not unlike "display network" >>>>>>>>> should >>>>>>>>> be >>>>>>>>> added.Only one migration network should be defined >>>>>>>>> on a >>>>>>>>> cluster. >>>>>>> We are considering multiple display networks already, then >>>>>>> why >>>>>>> not >>>>>>> the >>>>>>> same for migration? >>>>>> What is the motivation of having multiple migration networks? 
>>>>>> Extending >>>>>> the bandwidth (and thus, any network can be taken when >>>>>> needed) or >>>>>> data separation (and thus, a migration network should be >>>>>> assigned >>>>>> to >>>>>> each VM in the cluster)? Or another morivation with >>>>>> consequence? >>>>> My suggestion is making the migration network role determined >>>>> dynamically on each migrate. If we only define one migration >>>>> network >>>>> per cluster, >>>>> the migration storm could happen to that network. It could >>>>> cause >>>>> some >>>>> bad impact on VM applications. So I think engine could choose >>>>> the >>>>> network which >>>>> has lower traffic load on migration, or leave the choice to >>>>> user. >>>> >>>> Dynamic migration selection is indeed desirable but only from >>>> migration networks - migration traffic is insecure so it's >>>> undesirable to have it mixed with VM traffic unless permitted by >>>> the >>>> admin by marking this network as migration network. >>>> >>>> To clarify what I've meant in the previous response to Livnat - >>>> When >>>> I've said "...if the customer due to the unsymmetrical nature of >>>> most bonding modes prefers to use muplitple networks for >>>> migration >>>> and will ask us to optimize migration across these..." >>>> >>>> But the dynamic selection should be based on SLA which the above >>>> is >>>> just part: >>>> 1. Need to consider tenant traffic segregation rules = security >>>> 2. SLA contracts >> >> We could devise a complex logic of assigning each Vm a pool of >> applicable migration networks, where one of them is chosen by Engine >> upon migration startup. >> >> I am, however, not at all sure that extending the migration bandwidth >> by >> means of multiple migration networks is worth the design hassle and >> the >> GUI noise. A simpler solution would be to build a single migration >> network on top of a fat bond, tweaked by a fine-tuned SLA. > > Except for mod-4 most bonding modes are either optimized for outbound optimization or inbound - not both. It's far from optimal. > And you are forgetting the other reason I've raised, like isolation of tenants traffic and not just from SLA reasons. > Why do we need isolation of tenants migration traffic if not for SLA reasons? > Even from pure active - active redundancy you may want to have more then one or asymmetrical hosts That's again going back to SLA policies and not specific for the migration network. > Example. > We have a host with 3 nics - you dedicate each for management, migration, storage - respectively. But if the migration fails, you want the engagement network to become your migration (automatically) > OR you may not want that. That's a policy for handling network roles, not related specifically to migration network. > Another: > A large host with many nics and smaller host with less - as long as this a rout between the migration and management networks you could think on a scenario where on the larger host you have separate networks for each role while on the smaller you have a single network assuming both rolls. > I'm not sure this is the main use case and if we want to make the general flow complicated because of exotic use cases. Maybe what you are looking for is override on host level to network roles. Not sure how useful this is though. > Other examples can be found. > If you have some main use cases I would love to here them maybe they can make the requirement more clear. 
> It's really not just one reason to support more than one migration network or display network or storage or any other 'facility' network. Any facility network may apply for more than one on a cluster. > I'm not sure display can be in the same bucket as migration, management and storage. > >> >>>> >>>> If you keep 2, migration storms mitigation is granted. But you >>>> are >>>> right that another feature required for #2 above is to control >>>> the >>>> migration bandwidth (BW) per migration. We had discussion in the >>>> past for VDSM to do dynamic calculation based on f(Line Speed, >>>> Max >>>> Migration BW, Max allowed per VM, Free BW, number of migrating >>>> machines) when starting migration. (I actually wanted to do so >>>> years >>>> ago, but never got to that - one of those things you always >>>> postpone >>>> to when you'll find the time). We did not think that the engine >>>> should provide some, but coming to think of it, you are right and >>>> it >>>> makes sense. For SLA - Max per VM + Min guaranteed should be >>>> provided by the engine to maintain SLA. And it's up to the engine >>>> to ensure that Min-Guaranteed x the number of concurrent migrations >>>> will not exceed Max Migration BW. >>>> >>>> Dan this is way too much for initial implementation, but don't >>>> you >>>> think we should at least add placeholders in the migration API? >> >> In my opinion this should wait for another feature. For each VM, I'd >> like to see a means to define the SLA of each of its vNICs. When we >> have >> that, we should similarly define how much bandwidth it has for >> migration. >> >>>> Maybe Doron can assist with the required verbs. >>>> >>>> (P.S., I don't want to alarm but we may need SLA parameters for >>>> setupNetworks as well :) unless we want these as separate API, >>>> though >>>> it means more calls during setup) >> >> Exactly - when we have a migration network concept, and when we have >> general network SLA definition, we could easily apply the latter on the >> former. >> >>>> >>> >>> As with other resources the bare minimum are usually MIN capacity >>> and >>> MAX to avoid choking of other tenants / VMs. In this context we may >>> need >>> to consider other QoS elements (delays, etc) but indeed it can be >>> an additional >>> limitation on top of the basic one.
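To make the f(Line Speed, Max Migration BW, Max allowed per VM, Free BW, number of migrating machines) idea above a bit more concrete, here is a rough sketch of such a per-migration bandwidth cap. This is purely illustrative and not VDSM code; the function name, the Mbps units and the policy of splitting bandwidth evenly among concurrent migrations are all assumptions.

    # Illustrative sketch only -- not VDSM code. Units are Mbps; the even
    # split among concurrent migrations is an assumed policy.
    def migration_bandwidth_mbps(line_speed, max_migration_bw, max_per_vm,
                                 free_bw, concurrent_migrations):
        # Never exceed the wire speed, the admin-defined migration cap,
        # or the bandwidth currently free on the migration network.
        usable = min(line_speed, max_migration_bw, free_bw)
        # Split what is usable among the migrations currently running.
        share = usable / max(concurrent_migrations, 1)
        # Respect the per-VM ceiling.
        return min(share, max_per_vm)

    # 10GbE link, 4000 Mbps migration cap, 2500 Mbps free, 500 Mbps allowed
    # per VM, 3 concurrent migrations -> 500 Mbps for this migration.
    print(migration_bandwidth_mbps(10000, 4000, 500, 2500, 3))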
>>> >> > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From simon at redhat.com Sun Jan 13 12:47:59 2013 From: simon at redhat.com (Simon Grinberg) Date: Sun, 13 Jan 2013 07:47:59 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <50F2A033.6050802@redhat.com> Message-ID: <19476369.304.1358081205517.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Livnat Peer" > To: "Simon Grinberg" > Cc: "Dan Kenigsberg" , arch at ovirt.org, "Orit Wasserman" , "Yuval M" > , "Laine Stump" , "Limor Gavish" > Sent: Sunday, January 13, 2013 1:53:23 PM > Subject: Re: feature suggestion: migration network > > On 01/10/2013 02:54 PM, Simon Grinberg wrote: > > > > > > ----- Original Message ----- > >> From: "Dan Kenigsberg" > >> To: "Doron Fediuck" > >> Cc: "Simon Grinberg" , "Orit Wasserman" > >> , "Laine Stump" , > >> "Yuval M" , "Limor Gavish" , > >> arch at ovirt.org, "Mark Wu" > >> > >> Sent: Thursday, January 10, 2013 1:46:08 PM > >> Subject: Re: feature suggestion: migration network > >> > >> On Thu, Jan 10, 2013 at 04:43:45AM -0500, Doron Fediuck wrote: > >>> > >>> > >>> ----- Original Message ----- > >>>> From: "Simon Grinberg" > >>>> To: "Mark Wu" , "Doron Fediuck" > >>>> > >>>> Cc: "Orit Wasserman" , "Laine Stump" > >>>> , "Yuval M" , "Limor > >>>> Gavish" , arch at ovirt.org, "Dan Kenigsberg" > >>>> > >>>> Sent: Thursday, January 10, 2013 10:38:56 AM > >>>> Subject: Re: feature suggestion: migration network > >>>> > >>>> > >>>> > >>>> ----- Original Message ----- > >>>>> From: "Mark Wu" > >>>>> To: "Dan Kenigsberg" > >>>>> Cc: "Simon Grinberg" , "Orit Wasserman" > >>>>> , "Laine Stump" , > >>>>> "Yuval M" , "Limor Gavish" > >>>>> , > >>>>> arch at ovirt.org > >>>>> Sent: Thursday, January 10, 2013 5:13:23 AM > >>>>> Subject: Re: feature suggestion: migration network > >>>>> > >>>>> On 01/09/2013 03:34 AM, Dan Kenigsberg wrote: > >>>>>> On Tue, Jan 08, 2013 at 01:23:02PM -0500, Simon Grinberg > >>>>>> wrote: > >>>>>>> > >>>>>>> ----- Original Message ----- > >>>>>>>> From: "Yaniv Kaul" > >>>>>>>> To: "Dan Kenigsberg" > >>>>>>>> Cc: "Limor Gavish" , "Yuval M" > >>>>>>>> , arch at ovirt.org, "Simon Grinberg" > >>>>>>>> > >>>>>>>> Sent: Tuesday, January 8, 2013 4:46:10 PM > >>>>>>>> Subject: Re: feature suggestion: migration network > >>>>>>>> > >>>>>>>> On 08/01/13 15:04, Dan Kenigsberg wrote: > >>>>>>>>> There's talk about this for ages, so it's time to have > >>>>>>>>> proper > >>>>>>>>> discussion > >>>>>>>>> and a feature page about it: let us have a "migration" > >>>>>>>>> network > >>>>>>>>> role, and > >>>>>>>>> use such networks to carry migration data > >>>>>>>>> > >>>>>>>>> When Engine requests to migrate a VM from one node to > >>>>>>>>> another, > >>>>>>>>> the > >>>>>>>>> VM > >>>>>>>>> state (Bios, IO devices, RAM) is transferred over a TCP/IP > >>>>>>>>> connection > >>>>>>>>> that is opened from the source qemu process to the > >>>>>>>>> destination > >>>>>>>>> qemu. > >>>>>>>>> Currently, destination qemu listens for the incoming > >>>>>>>>> connection > >>>>>>>>> on > >>>>>>>>> the > >>>>>>>>> management IP address of the destination host. This has > >>>>>>>>> serious > >>>>>>>>> downsides: a "migration storm" may choke the destination's > >>>>>>>>> management > >>>>>>>>> interface; migration is plaintext and ovirtmgmt includes > >>>>>>>>> Engine > >>>>>>>>> which > >>>>>>>>> sits may sit the node cluster. 
> >>>>>>>>> > >>>>>>>>> With this feature, a cluster administrator may grant the > >>>>>>>>> "migration" > >>>>>>>>> role to one of the cluster networks. Engine would use that > >>>>>>>>> network's IP > >>>>>>>>> address on the destination host when it requests a > >>>>>>>>> migration > >>>>>>>>> of > >>>>>>>>> a > >>>>>>>>> VM. > >>>>>>>>> With proper network setup, migration data would be > >>>>>>>>> separated > >>>>>>>>> to > >>>>>>>>> that > >>>>>>>>> network. > >>>>>>>>> > >>>>>>>>> === Benefit to oVirt === > >>>>>>>>> * Users would be able to define and dedicate a separate > >>>>>>>>> network > >>>>>>>>> for > >>>>>>>>> migration. Users that need quick migration would use > >>>>>>>>> nics > >>>>>>>>> with > >>>>>>>>> high > >>>>>>>>> bandwidth. Users who want to cap the bandwidth > >>>>>>>>> consumed by > >>>>>>>>> migration > >>>>>>>>> could define a migration network over nics with > >>>>>>>>> bandwidth > >>>>>>>>> limitation. > >>>>>>>>> * Migration data can be limited to a separate network, > >>>>>>>>> that > >>>>>>>>> has > >>>>>>>>> no > >>>>>>>>> layer-2 access from Engine > >>>>>>>>> > >>>>>>>>> === Vdsm === > >>>>>>>>> The "migrate" verb should be extended with an additional > >>>>>>>>> parameter, > >>>>>>>>> specifying the address that the remote qemu process should > >>>>>>>>> listen > >>>>>>>>> on. A > >>>>>>>>> new argument is to be added to the currently-defined > >>>>>>>>> migration > >>>>>>>>> arguments: > >>>>>>>>> * vmId: UUID > >>>>>>>>> * dst: management address of destination host > >>>>>>>>> * dstparams: hibernation volumes definition > >>>>>>>>> * mode: migration/hibernation > >>>>>>>>> * method: rotten legacy > >>>>>>>>> * ''New'': migration uri, according to > >>>>>>>>> http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrateToURI2 > >>>>>>>>> such as tcp:// > >>>>>>>>> > >>>>>>>>> === Engine === > >>>>>>>>> As usual, complexity lies here, and several changes are > >>>>>>>>> required: > >>>>>>>>> > >>>>>>>>> 1. Network definition. > >>>>>>>>> 1.1 A new network role - not unlike "display network" > >>>>>>>>> should > >>>>>>>>> be > >>>>>>>>> added.Only one migration network should be defined > >>>>>>>>> on a > >>>>>>>>> cluster. > >>>>>>> We are considering multiple display networks already, then > >>>>>>> why > >>>>>>> not > >>>>>>> the > >>>>>>> same for migration? > >>>>>> What is the motivation of having multiple migration networks? > >>>>>> Extending > >>>>>> the bandwidth (and thus, any network can be taken when > >>>>>> needed) or > >>>>>> data separation (and thus, a migration network should be > >>>>>> assigned > >>>>>> to > >>>>>> each VM in the cluster)? Or another morivation with > >>>>>> consequence? > >>>>> My suggestion is making the migration network role determined > >>>>> dynamically on each migrate. If we only define one migration > >>>>> network > >>>>> per cluster, > >>>>> the migration storm could happen to that network. It could > >>>>> cause > >>>>> some > >>>>> bad impact on VM applications. So I think engine could choose > >>>>> the > >>>>> network which > >>>>> has lower traffic load on migration, or leave the choice to > >>>>> user. > >>>> > >>>> Dynamic migration selection is indeed desirable but only from > >>>> migration networks - migration traffic is insecure so it's > >>>> undesirable to have it mixed with VM traffic unless permitted by > >>>> the > >>>> admin by marking this network as migration network. 
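As a rough picture of what the extended "migrate" verb quoted above might carry, here is a sketch of the parameter set with the proposed migration URI added. The key name 'miguri' and the placeholder values are assumptions for illustration only; the actual field name and accepted values would be settled by the VDSM patch.

    # Illustration only; 'miguri' and the placeholder values are assumptions.
    migrate_params = {
        "vmId": "11111111-2222-3333-4444-555555555555",  # UUID of the VM
        "dst": "dest-host.mgmt.example.com",  # management address of destination host
        "dstparams": {},                      # hibernation volumes definition (unused here)
        "mode": "migration",                  # migration vs. hibernation
        "method": "online",                   # the legacy 'method' field
        # NEW: where the destination qemu should listen, per
        # virDomainMigrateToURI2 -- an address on the migration network.
        "miguri": "tcp://192.0.2.11",
    }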
> >>>> > >>>> To clarify what I've meant in the previous response to Livnat - > >>>> When > >>>> I've said "...if the customer due to the unsymmetrical nature of > >>>> most bonding modes prefers to use muplitple networks for > >>>> migration > >>>> and will ask us to optimize migration across these..." > >>>> > >>>> But the dynamic selection should be based on SLA which the above > >>>> is > >>>> just part: > >>>> 1. Need to consider tenant traffic segregation rules = security > >>>> 2. SLA contracts > >> > >> We could devise a complex logic of assigning each Vm a pool of > >> applicable migration networks, where one of them is chosen by > >> Engine > >> upon migration startup. > >> > >> I am, however, not at all sure that extending the migration > >> bandwidth > >> by > >> means of multiple migration networks is worth the design hassle > >> and > >> the > >> GUI noise. A simpler solution would be to build a single migration > >> network on top of a fat bond, tweaked by a fine-tuned SLA. > > > > Except for mod-4 most bonding modes are either optimized for > > outbound optimization or inbound - not both. It's far from > > optimal. > > And you are forgetting the other reason I've raised, like isolation > > of tenants traffic and not just from SLA reasons. > > > > Why do we need isolation of tenants migration traffic if not for SLA > reasons? Security (migration is not encrypted) and segregation of resources (poor man's/simple stupid SLA or until you have real SLA) and as said before better utilization of resources (Bond are asymmetric). SLA in our discussion is maintained via traffic shaping and this has it's performance impact, the first 3 are not. Another reason would be to use with external network providers like CISCO or Mellanox who already have traffic control. There you may well easily have dedicated networks per tenant, including migration network (as part of a tenant dedicated resources and segregation of resources) > > > Even from pure active - active redundancy you may want to have more > > then one or asymmetrical hosts > > That's again going back to SLA policies and not specific for the > migration network. > > > Example. > > We have a host with 3 nics - you dedicate each for management, > > migration, storage - respectively. But if the migration fails, you > > want the engagement network to become your migration > > (automatically) > > > > OR you may not want that. > That's a policy for handling network roles, not related specifically > to > migration network. right, but there is a chicken and egg thing here Unless you have multiple migration networks, you won't be able to implement the above If you implement the above without pre-defining multiple networks that are allowed to act as migration networks, the implementation may be more complex. > > > > Another: > > A large host with many nics and smaller host with less - as long as > > this a rout between the migration and management networks you > > could think on a scenario where on the larger host you have > > separate networks for each role while on the smaller you have a > > single network assuming both rolls. > > > > I'm not sure this is the main use case and if we want to make the > general flow complicated because of exotic use cases. What I'm trying to say here is: Please do not look at each use case separately, I agree that estimating each one by one may lead you to say: This one is not worth it, and the other on stand alone not worth it, and so on. But looking at everything put together it accumulates. 
> > Maybe what you are looking for is override on host level to network > roles. Not sure how useful this is though. Maybe, I've already suggested to allow override on per migration bases > > Other examples can be found. > > > > If you have some main use cases I would love to here them maybe they > can > make the requirement more clear. Gave some above, I think for the immediate terms the most compelling is the external network provider use case, where you want to allow the external network management to rout/shape the traffic per tenant, something that will be hard to do if all is aggregated on the host. But coming to think of it, I like more and more the idea of having migration network as part of the VM configuration. It's both simple to do now and later add logic on top if required, and VDSM supports that already now. So: 1. Have a default migration network per cluster (default is the management network as before) 2. This is the default migration network for all VMs created in that cluster 3. Allow in VM properties to override this (Tenant use case, and supports the external network manager use case) 4. Allow from the migration network to override as well. Simple, powerful, flexible, while the logic is not complicated since the engine has nothing to decide - everything is orchestrated by the admin while initial out of the box setup is very simple (one migration network for all which is by default the management network). Later you may apply policies on top of this. Thoughts? > > > It's really not just one reason to support more then one migration > > network or display network or storage or any other 'facility' > > network. Any facility network may apply for more then one on a > > cluster. > > > > I'm not sure display can be on the same bucket as migration > management > and storage. I think it can in the tenant use case, but I will be happy to get a solution like the above (have a default network per cluster and allow to override per VM) > > > > >> > >>>> > >>>> If you keep 2, migration storms mitigation is granted. But you > >>>> are > >>>> right that another feature required for #2 above is to control > >>>> the > >>>> migration bandwidth (BW) per migration. We had discussion in the > >>>> past for VDSM to do dynamic calculation based on f(Line Speed, > >>>> Max > >>>> Migration BW, Max allowed per VM, Free BW, number of migrating > >>>> machines) when starting migration. (I actually wanted to do so > >>>> years > >>>> ago, but never got to that - one of those things you always > >>>> postpone > >>>> to when you'll find the time). We did not think that the engine > >>>> should provide some, but coming to think of it, you are right > >>>> and > >>>> it > >>>> makes sense. For SLA - Max per VM + Min guaranteed should be > >>>> provided by the engine to maintain SLA. And it's up to the > >>>> engine > >>>> not to VMs with Min-Guaranteed x number of concurrent migrations > >>>> will exceed Max Migration BW. > >>>> > >>>> Dan this is way too much for initial implementation, but don't > >>>> you > >>>> think we should at least add place holders in the migration API? > >> > >> In my opinion this should wait for another feature. For each VM, > >> I'd > >> like to see a means to define the SLA of each of its vNIC. When we > >> have > >> that, we should similarly define how much bandwidth does it have > >> for > >> migration > >> > >>>> Maybe Doron can assist with the required verbs. 
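A minimal sketch of the selection order in the four-point list above: a per-migration override first, then the per-VM setting, then the cluster default, which itself defaults to the management network. None of these names are real engine entities; the point is only that no scheduling logic is needed on the engine side.

    # Illustrative only; names are not real engine entities.
    MANAGEMENT_NETWORK = "ovirtmgmt"

    def pick_migration_network(cluster, vm, requested=None):
        if requested:                        # 4. override given with this migration
            return requested
        if vm.get("migration_network"):      # 3. per-VM override
            return vm["migration_network"]
        # 1+2. cluster-wide default, falling back to the management network
        return cluster.get("migration_network") or MANAGEMENT_NETWORK

    cluster = {"migration_network": "migration"}
    vm = {"name": "web01", "migration_network": None}
    print(pick_migration_network(cluster, vm))                 # -> migration
    print(pick_migration_network(cluster, vm, "tenant2-mig"))  # -> tenant2-mig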
> >>>> > >>>> (P.S., I don't want to alarm but we may need SLA parameters for > >>>> setupNetworks as well :) unless we want these as separate API > >>>> tough > >>>> it means more calls during set up) > >> > >> Exactly - when we have a migration network concept, and when we > >> have > >> general network SLA defition, we could easily apply the latter on > >> the > >> former. > >> > >>>> > >>> > >>> As with other resources the bare minimum are usually MIN capacity > >>> and > >>> MAX to avoid choking of other tenants / VMs. In this context we > >>> may > >>> need > >>> to consider other QoS elements (delays, etc) but indeed it can be > >>> an additional > >>> limitation on top of the basic one. > >>> > >> > > _______________________________________________ > > Arch mailing list > > Arch at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/arch > > > > From rgolan at redhat.com Tue Jan 15 14:09:03 2013 From: rgolan at redhat.com (Roy Golan) Date: Tue, 15 Jan 2013 16:09:03 +0200 Subject: tunnelled migration In-Reply-To: <1580151849.2731221.1357829188466.JavaMail.root@redhat.com> References: <1580151849.2731221.1357829188466.JavaMail.root@redhat.com> Message-ID: <50F562FF.2050402@redhat.com> On 01/10/2013 04:46 PM, Omer Frenkel wrote: > > ----- Original Message ----- >> From: "Andrew Cathrow" >> To: "Dan Kenigsberg" >> Cc: arch at ovirt.org, "Michal Skrivanek" >> Sent: Thursday, January 10, 2013 4:06:35 PM >> Subject: Re: tunnelled migration >> >> >> >> ----- Original Message ----- >>> From: "Dan Kenigsberg" >>> To: arch at ovirt.org >>> Cc: "Michal Skrivanek" >>> Sent: Thursday, January 10, 2013 8:38:34 AM >>> Subject: tunnelled migration >>> >>> For a long long time, libvirt supports a VIR_MIGRATE_TUNNELLED >>> migration >>> mode. In it, the qemu-to-qemu communication carrying private guest >>> data, >>> is tunnelled within libvirt-to-libvirt connection. >>> >>> libvirt-to-libvirt communication is (usually) well-encrypted and >>> uses >>> a >>> known firewall hole. On the downside, multiplexing qemu migration >>> traffic and encrypting it carries a heavy burdain on libvirtds and >>> the >>> hosts' cpu. >>> >>> Choosing tunnelled migration is thus a matter of policy. I would >>> like >>> to >>> suggest a new cluster-level configurable in Engine, that controls >>> whether >>> migrations in this cluster are tunnelled. The configurable must be >>> available only in new cluster levels where hosts support it. >>> >>> The cluster-level configurable should be overridable by a VM-level >>> one. >>> An admin may have a critical VM whose data should not migrate >>> around >>> in >>> the plaintext. >>> >>> When Engine decides (or asked) to perform migration, it would pass >>> a >>> new >>> "tunnlled" boolean value to the "migrate" verb. Vdsm patch in these >>> lines is posted to http://gerrit.ovirt.org/2551 >>> >>> I believe it's pretty easy to do it in Engine, too, and that it >>> would >>> enhance the security of our users. >> It should be disabled by default given the significant overhead. >> > Agree, this really sound like an easy enhancement (and important), > we can have this flag on the cluster as you say (default - false) > and save for each vm the "migration tunnel policy" (?) > if it's: cluster default, tunnelled or not tunnelled > and pass it on migration. > need to decide how it will look (named) in api and ui +1 for gaining security quick with small effort. >>> Dan. 
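For reference, at the libvirt-python level the tunnelled mode discussed above boils down to one extra flag on the migration call; tunnelling requires peer-to-peer mode as well. This is only a sketch of the libvirt side, not the VDSM patch referenced above, and the URI and flag combination shown are assumptions.

    # Sketch of the libvirt side only; not the VDSM patch from gerrit.
    import libvirt

    def migrate_vm(dom, dest_libvirt_uri, tunnelled=False):
        # dest_libvirt_uri: destination libvirtd, e.g. "qemu+tls://dest/system"
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
        if tunnelled:
            # qemu-to-qemu data is carried inside the libvirt-to-libvirt
            # connection, which is typically TLS-encrypted -- at the cost of
            # extra CPU on both libvirtds.
            flags |= libvirt.VIR_MIGRATE_TUNNELLED
        dom.migrateToURI(dest_libvirt_uri, flags, None, 0)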
>>> _______________________________________________ >>> Arch mailing list >>> Arch at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/arch >>> >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch >> > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch From dneary at redhat.com Tue Jan 15 15:02:44 2013 From: dneary at redhat.com (Dave Neary) Date: Tue, 15 Jan 2013 16:02:44 +0100 Subject: oVirt features survey - please participate! Message-ID: <50F56F94.902@redhat.com> Hi everyone, After the mammoth thread these past few days on what you would like to see next from oVirt, Itamar and I have put together a list of all of the features you requested and made a survey to help us understand a bit more which features are more important to you, and the way in which you use oVirt. https://www.surveymonkey.com/s/oVirtFeatures It will take you between 1 and 3 minutes to participate in this survey, and help prioritise efforts for the next version or two of oVirt. If you know of people who are oVirt users, but who are not on this mailing list, please feel free to forward this link on to them! Also, let me remind you that you can see first hand what is coming in the upcoming oVirt 3.2 release and talk to the people behind oVirt during the oVirt Workshop in NetApp HQ, Sunnyvale, California next week. Registration is still open for another day or so, and we have about 10 places still available. Sign up now! http://www.ovirt.org/NetApp_Workshop_January_2013 Regards, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From ofrenkel at redhat.com Tue Jan 15 17:17:22 2013 From: ofrenkel at redhat.com (Omer Frenkel) Date: Tue, 15 Jan 2013 12:17:22 -0500 (EST) Subject: tunnelled migration In-Reply-To: <20130113105030.GP26998@redhat.com> Message-ID: <2043976934.5323513.1358270242403.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Dan Kenigsberg" > To: "Mark Wu" > Cc: arch at ovirt.org, "Michal Skrivanek" > Sent: Sunday, January 13, 2013 12:50:30 PM > Subject: Re: tunnelled migration > > On Fri, Jan 11, 2013 at 02:05:10PM +0800, Mark Wu wrote: > > On 01/11/2013 04:14 AM, Caitlin Bestler wrote: > > >Dan Kenisberg wrote: > > > > > > > > >>Choosing tunnelled migration is thus a matter of policy. I would > > >>like to suggest a new cluster-level configurable in Engine, > > >>that controls whether migrations in this cluster are tunnelled. > > >>The configurable must be available only in new cluster levels > > >>where hosts support it. > > >Why not just dump this issue to network configuration? > > > > > >Migrations occur over a secure network. That security could be > > >provided by port groups, VLANs or encrypted tunnels. > > Agreed. Is a separate vlan network not secure enough? If yes, we > > could build a virtual encrypted network, like using openvpn + > > iptables. > > I agree that separating migration traffic to a different, > optionally-encrypted network, is a noble goal. In fact, it is a > parallel > effort that I am pushing for: > http://lists.ovirt.org/pipermail/arch/2013-January/001117.html > > Building our own tunnel between hosts is cool, but using libvirt's > tunneling is here and now and easy, and should not wait just because > there's even better technology around the third next corner. 
> > With my suggested API, we could even change the implementation of > "tunnelled" to "tunnel over our own vpn" if we need to. Now is the > time > to eat the low-hanging fruit of VIR_MIGRATE_TUNNELLED. > > Dan. suggested implementation for engine (without rest/ui): http://gerrit.ovirt.org/#/c/11062/ > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > From vincent at vanderkussen.org Wed Jan 16 20:38:16 2013 From: vincent at vanderkussen.org (Vincent Van der Kussen) Date: Wed, 16 Jan 2013 21:38:16 +0100 Subject: Concerning Loadays 2013 In-Reply-To: <50D0EEF5.6030303@redhat.com> References: <20121205224626.15444l2co26x8k36@horde.vanderkussen.org> <50CB3D3B.1040205@redhat.com> <20121215094410.GB3314@faramir.homebase.local> <50D0EEF5.6030303@redhat.com> Message-ID: <20130116203816.GA29953@faramir.homebase.local> On Wed, Dec 19, 2012 at 12:32:21AM +0200, Itamar Heim wrote: > On 12/15/2012 11:44 AM, Vincent Van der Kussen wrote: > >On Fri, Dec 14, 2012 at 03:52:43PM +0100, Dave Neary wrote: > > > >Hi, > > > > > >>Hi, > >> > >>I would love to see us have a speaker at this conference. > >> > >>Vincent, what is the level of the audience usually? Would the > >>audience be more interested in an "overview of oVirt"? or more > >>community outreach ("how to get involved")? > > > >Most people are active in several open source projects and have a technical background. > >So a more in depth and practical overview + how to participate in the oVirt community would cover it. > >There's always place (depending on the other talks/workshops of course) to do a workshop to show > >the technical/practical side of oVirt. > > > >I think most people would like to know more about how oVirt handles storage, networking, does Gluster work?, etc.. > >> > >>I can think of a few potential speakers, if they're willing to > >>try... Ewoud is the nearest community member to Brussels, but I am > >>also very close. > >> > >>When is the deadline for proposals? > > > >Deadline is arounf 1st of March. > > > >If you have any further questions you can always contact me. > > sounds interesting, are you looking for 1-2 sessions, or more than > that (intro, arch, ovirt-live hands-on session, etc.)? > > thanks, > Itamar Hi, FYI : the CFP is open until mid of March. info at http://www.loadays.org Regards, Vincent > > From dneary at redhat.com Thu Jan 17 15:30:36 2013 From: dneary at redhat.com (Dave Neary) Date: Thu, 17 Jan 2013 16:30:36 +0100 Subject: oVirt features survey status In-Reply-To: <50F56F94.902@redhat.com> References: <50F56F94.902@redhat.com> Message-ID: <50F8191C.5020708@redhat.com> Hi everyone, On 01/15/2013 04:02 PM, Dave Neary wrote: > https://www.surveymonkey.com/s/oVirtFeatures Thank you to everyone who has participated so far! We have had a good response rate for such a short time. I will be taking the results of the survey tomorrow morning CET, so if you have not yet participated, and would like to express your opinion, please do so as soon as possible. As a sneak peek of the results: 40% of you use oVirt in production, vs 68% of you in test labs or proofs of concept, the median number of hypervisors is between 3 and 5, and the majority of you run over 20 VMs in your set-up! 
Top requests include packages for .el6, the ability to upgrade hypervisors through the engine and have a redundant highly available engine, disk resize, back-up scheduling and VM cloning without using a template, integration with OpenvSwitch, the integration of some guest configuration in guest agents, and the integration of V2V and Nagiosin the engine interface. If your pet feature is not in this list, you still have a chance to vote it up! https://www.surveymonkey.com/s/oVirtFeatures Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From simon at redhat.com Sun Jan 20 14:29:43 2013 From: simon at redhat.com (Simon Grinberg) Date: Sun, 20 Jan 2013 09:29:43 -0500 (EST) Subject: feature suggestion: migration network In-Reply-To: <20130120140452.GG6145@redhat.com> Message-ID: <33245819.37.1358692102331.JavaMail.javamailuser@localhost> ----- Original Message ----- > From: "Dan Kenigsberg" > To: "Simon Grinberg" , masayag at redhat.com > Cc: "Livnat Peer" , arch at ovirt.org > Sent: Sunday, January 20, 2013 4:04:52 PM > Subject: Re: feature suggestion: migration network > > On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: > > > > > I think for the immediate terms the most compelling is the external > > network provider use case, where you want to allow the external > > network management to rout/shape the traffic per tenant, something > > that will be hard to do if all is aggregated on the host. > > > > But coming to think of it, I like more and more the idea of having > > migration network as part of the VM configuration. It's both > > simple to do now and later add logic on top if required, and VDSM > > supports that already now. > > > > So: > > 1. Have a default migration network per cluster (default is the > > management network as before) > > 2. This is the default migration network for all VMs created in > > that cluster > > 3. Allow in VM properties to override this (Tenant use case, and > > supports the external network manager use case) > > 4. Allow from the migration network to override as well. > > > > Simple, powerful, flexible, while the logic is not complicated > > since the engine has nothing to decide - everything is > > orchestrated by the admin while initial out of the box setup is > > very simple (one migration network for all which is by default the > > management network). > > > > Later you may apply policies on top of this. > > > > Thoughts? > > I'm not sure that multiple migration networks is an urgent necessity, > but what you suggest seems simple indeed. > > Simple because each VM has exactly ONE choice for a migration > network. > The multiplicity is for separation, not for automatic redundancy. An > admin may manually split his VMs among migration networks, but no > scheduling logic is required from Engine. If the migration network is > unavailable for some reason, no migration would take place. > > We should design the solution with N networks in mind, and at worse, > if we > feel that the UI is needlessly cluttered we can limit to N=1. > > If there are no objections let's do it this way: > - add a new network role of migration network. > - add a per-cluster property of defaultMigrationNetwork. Its factory > default is ovirtmgmt, for backward compatibility. > - add a per-VM propery of migrationNetwork. If Null, the cluster > defaultMigrationNetwork would be used. 
+1 for the above I'll be happy if we also get the 4th item on my list which I should have phrased "Allow override from the migration dialogue as well, where the default in the drop box is the VM migration network, which in turn defaults to the cluster's migration network" > > Dan. > From mpastern at redhat.com Sun Jan 20 14:31:31 2013 From: mpastern at redhat.com (Michael Pasternak) Date: Sun, 20 Jan 2013 16:31:31 +0200 Subject: tunnelled migration In-Reply-To: <1580151849.2731221.1357829188466.JavaMail.root@redhat.com> References: <1580151849.2731221.1357829188466.JavaMail.root@redhat.com> Message-ID: <50FBFFC3.5020206@redhat.com> On 01/10/2013 04:46 PM, Omer Frenkel wrote: > > > ----- Original Message ----- >> From: "Andrew Cathrow" >> To: "Dan Kenigsberg" >> Cc: arch at ovirt.org, "Michal Skrivanek" >> Sent: Thursday, January 10, 2013 4:06:35 PM >> Subject: Re: tunnelled migration >> >> >> >> ----- Original Message ----- >>> From: "Dan Kenigsberg" >>> To: arch at ovirt.org >>> Cc: "Michal Skrivanek" >>> Sent: Thursday, January 10, 2013 8:38:34 AM >>> Subject: tunnelled migration >>> >>> For a long long time, libvirt supports a VIR_MIGRATE_TUNNELLED >>> migration >>> mode. In it, the qemu-to-qemu communication carrying private guest >>> data, >>> is tunnelled within libvirt-to-libvirt connection. >>> >>> libvirt-to-libvirt communication is (usually) well-encrypted and >>> uses >>> a >>> known firewall hole. On the downside, multiplexing qemu migration >>> traffic and encrypting it carries a heavy burdain on libvirtds and >>> the >>> hosts' cpu. >>> >>> Choosing tunnelled migration is thus a matter of policy. I would >>> like >>> to >>> suggest a new cluster-level configurable in Engine, that controls >>> whether >>> migrations in this cluster are tunnelled. The configurable must be >>> available only in new cluster levels where hosts support it. >>> >>> The cluster-level configurable should be overridable by a VM-level >>> one. >>> An admin may have a critical VM whose data should not migrate >>> around >>> in >>> the plaintext. >>> >>> When Engine decides (or asked) to perform migration, it would pass >>> a >>> new >>> "tunnlled" boolean value to the "migrate" verb. Vdsm patch in these >>> lines is posted to http://gerrit.ovirt.org/2551 >>> >>> I believe it's pretty easy to do it in Engine, too, and that it >>> would >>> enhance the security of our users. >> >> It should be disabled by default given the significant overhead. >> > > Agree, this really sound like an easy enhancement (and important), > we can have this flag on the cluster as you say (default - false) > and save for each vm the "migration tunnel policy" (?) > if it's: cluster default, tunnelled or not tunnelled > and pass it on migration. > need to decide how it will look (named) in api and ui from the api PoV, it's pretty simple: 1. boolean at cluster to represent the policy. 2. boolean at vm.migrate action parameters 3. if vdsm will report that host supports 'tunnelled migration', r/o boolean in host. (doesn't have strong opinion for naming) > >>> >>> Dan. 
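Read as a data model, the three booleans listed above could look roughly like this; all names are placeholders only, since the API and UI naming was explicitly left open.

    # Placeholder names only; real API/UI naming was left open above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Cluster:
        tunnel_migration: bool = False      # 1. cluster-wide default (false by default)

    @dataclass
    class Host:
        tunnelling_supported: bool = False  # 3. read-only capability reported by the host

    def effective_tunnelling(cluster: Cluster, vm_policy: Optional[bool]) -> bool:
        # 2. the per-VM / migrate-action setting: None means "cluster default",
        #    True/False force tunnelled or plain migration.
        return cluster.tunnel_migration if vm_policy is None else vm_policy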
>>> _______________________________________________ >>> Arch mailing list >>> Arch at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/arch >>> >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch >> > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch -- Michael Pasternak RedHat, ENG-Virtualization R&D From danken at redhat.com Sun Jan 20 14:04:52 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Sun, 20 Jan 2013 16:04:52 +0200 Subject: feature suggestion: migration network In-Reply-To: <19476369.304.1358081205517.JavaMail.javamailuser@localhost> References: <50F2A033.6050802@redhat.com> <19476369.304.1358081205517.JavaMail.javamailuser@localhost> Message-ID: <20130120140452.GG6145@redhat.com> On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: > I think for the immediate terms the most compelling is the external network provider use case, where you want to allow the external network management to rout/shape the traffic per tenant, something that will be hard to do if all is aggregated on the host. > > But coming to think of it, I like more and more the idea of having migration network as part of the VM configuration. It's both simple to do now and later add logic on top if required, and VDSM supports that already now. > > So: > 1. Have a default migration network per cluster (default is the management network as before) > 2. This is the default migration network for all VMs created in that cluster > 3. Allow in VM properties to override this (Tenant use case, and supports the external network manager use case) > 4. Allow from the migration network to override as well. > > Simple, powerful, flexible, while the logic is not complicated since the engine has nothing to decide - everything is orchestrated by the admin while initial out of the box setup is very simple (one migration network for all which is by default the management network). > > Later you may apply policies on top of this. > > Thoughts? I'm not sure that multiple migration networks is an urgent necessity, but what you suggest seems simple indeed. Simple because each VM has exactly ONE choice for a migration network. The multiplicity is for separation, not for automatic redundancy. An admin may manually split his VMs among migration networks, but no scheduling logic is required from Engine. If the migration network is unavailable for some reason, no migration would take place. We should design the solution with N networks in mind, and at worse, if we feel that the UI is needlessly cluttered we can limit to N=1. If there are no objections let's do it this way: - add a new network role of migration network. - add a per-cluster property of defaultMigrationNetwork. Its factory default is ovirtmgmt, for backward compatibility. - add a per-VM propery of migrationNetwork. If Null, the cluster defaultMigrationNetwork would be used. Dan. From kwade at redhat.com Sun Jan 20 17:50:36 2013 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Sun, 20 Jan 2013 09:50:36 -0800 Subject: Planned outage :: resources.ovirt.org/lists.ovirt.org :: 2013-01-21 01:00 UTC Message-ID: <50FC2E6C.5030000@redhat.com> There will be an outage of www.ovirt.org for approximately 45 minutes. The outage will occur at 2013-01-21 01:00 UTC. To view in your local time: date -d '2013-01-21 01:00 UTC' I may start part of the outage 15 minutes before that. 
If you anticipate needing services until the outage window, reply back to me with details ASAP. == Details == We need to resize the Linode instance to get another 15 GB for storage until we can move services off the Linode permanently, as planned. This resizing should give us some breathing room. The account resize is estimated to take 30 minutes, during which time the Linode VM will be offline. After that, there will be a few minutes for reboot and restart of services, including the manual starting of the IRC bot 'ovirtbot'. The time window chosen coincides with the lowest CPU usage typically seen on any given day - 01:00 UTC tends to be very quiet for about an hour. Hopefully no one will even notice the downtime. If you have any services, such as Jenkins or Gerrit backup, that may go off during that window, you may want to retime it or be prepared for an error. == Affected services == * resources.ovirt.org ** meeting logs ** packages * lists.ovirt.org (MailMan) * ovirtbot * Gerrit backup (anacron may pick this up) * Other cronjobs (anacron may pick this up) == Not-affected services == * www.ovirt.org (MediaWiki) * jenkins.ovirt.org * gerrit.ovirt.org * alterway{01,02}.ovirt.org -- Karsten 'quaid' Wade, Sr. Analyst - Community Growth http://TheOpenSourceWay.org .^\ http://community.redhat.com @quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 255 bytes Desc: OpenPGP digital signature URL: From kwade at redhat.com Mon Jan 21 02:12:14 2013 From: kwade at redhat.com (Karsten 'quaid' Wade) Date: Sun, 20 Jan 2013 18:12:14 -0800 Subject: Planned outage :: resources.ovirt.org/lists.ovirt.org :: 2013-01-21 01:00 UTC In-Reply-To: <50FC2E6C.5030000@redhat.com> References: <50FC2E6C.5030000@redhat.com> Message-ID: <50FCA3FE.70706@redhat.com> This outage is now complete. All services appear to be running normally. Let me know if you see any anomalies. df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda 48G 34G 14G 71% / Extra storage is now available. Let's hope that covers us through our migration. :) - Karsten On 01/20/2013 09:50 AM, Karsten 'quaid' Wade wrote: > There will be an outage of www.ovirt.org for approximately 45 minutes. > > The outage will occur at 2013-01-21 01:00 UTC. To view in your local time: > > date -d '2013-01-21 01:00 UTC' > > I may start part of the outage 15 minutes before that. If you anticipate > needing services until the outage window, reply back to me with details > ASAP. > > == Details == > > We need to resize the Linode instance to get another 15 GB for storage > until we can move services off the Linode permanently, as planned. This > resizing should give us some breathing room. > > The account resize is estimated to take 30 minutes, during which time > the Linode VM will be offline. After that, there will be a few minutes > for reboot and restart of services, including the manual starting of the > IRC bot 'ovirtbot'. > > The time window chosen coincides with the lowest CPU usage typically > seen on any given day - 01:00 UTC tends to be very quiet for about an > hour. Hopefully no one will even notice the downtime. > > If you have any services, such as Jenkins or Gerrit backup, that may go > off during that window, you may want to retime it or be prepared for an > error. 
> > == Affected services == > > * resources.ovirt.org > ** meeting logs > ** packages > * lists.ovirt.org (MailMan) > * ovirtbot > * Gerrit backup (anacron may pick this up) > * Other cronjobs (anacron may pick this up) > > == Not-affected services == > > * www.ovirt.org (MediaWiki) > * jenkins.ovirt.org > * gerrit.ovirt.org > * alterway{01,02}.ovirt.org > > > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > -- Karsten 'quaid' Wade, Sr. Analyst - Community Growth http://TheOpenSourceWay.org .^\ http://community.redhat.com @quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 255 bytes Desc: OpenPGP digital signature URL: From danken at redhat.com Mon Jan 21 10:58:25 2013 From: danken at redhat.com (Dan Kenigsberg) Date: Mon, 21 Jan 2013 12:58:25 +0200 Subject: feature suggestion: migration network In-Reply-To: <33245819.37.1358692102331.JavaMail.javamailuser@localhost> References: <20130120140452.GG6145@redhat.com> <33245819.37.1358692102331.JavaMail.javamailuser@localhost> Message-ID: <20130121105825.GO6145@redhat.com> On Sun, Jan 20, 2013 at 09:29:43AM -0500, Simon Grinberg wrote: > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > To: "Simon Grinberg" , masayag at redhat.com > > Cc: "Livnat Peer" , arch at ovirt.org > > Sent: Sunday, January 20, 2013 4:04:52 PM > > Subject: Re: feature suggestion: migration network > > > > On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: > > > > > > > > > I think for the immediate terms the most compelling is the external > > > network provider use case, where you want to allow the external > > > network management to rout/shape the traffic per tenant, something > > > that will be hard to do if all is aggregated on the host. > > > > > > But coming to think of it, I like more and more the idea of having > > > migration network as part of the VM configuration. It's both > > > simple to do now and later add logic on top if required, and VDSM > > > supports that already now. > > > > > > So: > > > 1. Have a default migration network per cluster (default is the > > > management network as before) > > > 2. This is the default migration network for all VMs created in > > > that cluster > > > 3. Allow in VM properties to override this (Tenant use case, and > > > supports the external network manager use case) > > > 4. Allow from the migration network to override as well. > > > > > > Simple, powerful, flexible, while the logic is not complicated > > > since the engine has nothing to decide - everything is > > > orchestrated by the admin while initial out of the box setup is > > > very simple (one migration network for all which is by default the > > > management network). > > > > > > Later you may apply policies on top of this. > > > > > > Thoughts? > > > > I'm not sure that multiple migration networks is an urgent necessity, > > but what you suggest seems simple indeed. > > > > Simple because each VM has exactly ONE choice for a migration > > network. > > The multiplicity is for separation, not for automatic redundancy. An > > admin may manually split his VMs among migration networks, but no > > scheduling logic is required from Engine. If the migration network is > > unavailable for some reason, no migration would take place. 
> > > > We should design the solution with N networks in mind, and at worse, > > if we > > feel that the UI is needlessly cluttered we can limit to N=1. > > > > If there are no objections let's do it this way: > > - add a new network role of migration network. > > - add a per-cluster property of defaultMigrationNetwork. Its factory > > default is ovirtmgmt, for backward compatibility. > > - add a per-VM propery of migrationNetwork. If Null, the cluster > > defaultMigrationNetwork would be used. > > +1 for the above > > I'll be happy if we also get the 4th item on my list which I should have phrased > "Allow override from the migration dialogue as well, where the default in the drop box is the VM migration network, which in turn defaults to the cluster's migration network" I am so sorry that I have to retract my own words. I do not see a use case for the migration networks that are not the defaultMigrationNetwork. They are not used anywhere. I wish I wrote down the following notes First phase: - add a new network role of migration network. Each cluster has one, and it is the default migration network for VMs on the cluster. Factory default is that ovirtmgmt is the cluster migration network. Second phase: - add a per-VM propery of migrationNetwork. If Null, the cluster migrationNetwork would be used. - let the user override the VM migration network in the migrate API and in the GUI. Does this make sense to you, Simon? From lpeer at redhat.com Mon Jan 21 11:17:21 2013 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 21 Jan 2013 13:17:21 +0200 Subject: feature suggestion: migration network In-Reply-To: <20130121105825.GO6145@redhat.com> References: <20130120140452.GG6145@redhat.com> <33245819.37.1358692102331.JavaMail.javamailuser@localhost> <20130121105825.GO6145@redhat.com> Message-ID: <50FD23C1.2010503@redhat.com> On 01/21/2013 12:58 PM, Dan Kenigsberg wrote: > On Sun, Jan 20, 2013 at 09:29:43AM -0500, Simon Grinberg wrote: >> >> >> ----- Original Message ----- >>> From: "Dan Kenigsberg" >>> To: "Simon Grinberg" , masayag at redhat.com >>> Cc: "Livnat Peer" , arch at ovirt.org >>> Sent: Sunday, January 20, 2013 4:04:52 PM >>> Subject: Re: feature suggestion: migration network >>> >>> On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: >>> >>> >>> >>>> I think for the immediate terms the most compelling is the external >>>> network provider use case, where you want to allow the external >>>> network management to rout/shape the traffic per tenant, something >>>> that will be hard to do if all is aggregated on the host. >>>> >>>> But coming to think of it, I like more and more the idea of having >>>> migration network as part of the VM configuration. It's both >>>> simple to do now and later add logic on top if required, and VDSM >>>> supports that already now. >>>> >>>> So: >>>> 1. Have a default migration network per cluster (default is the >>>> management network as before) >>>> 2. This is the default migration network for all VMs created in >>>> that cluster >>>> 3. Allow in VM properties to override this (Tenant use case, and >>>> supports the external network manager use case) >>>> 4. Allow from the migration network to override as well. >>>> >>>> Simple, powerful, flexible, while the logic is not complicated >>>> since the engine has nothing to decide - everything is >>>> orchestrated by the admin while initial out of the box setup is >>>> very simple (one migration network for all which is by default the >>>> management network). 
>>>> >>>> Later you may apply policies on top of this. >>>> >>>> Thoughts? >>> >>> I'm not sure that multiple migration networks is an urgent necessity, >>> but what you suggest seems simple indeed. >>> >>> Simple because each VM has exactly ONE choice for a migration >>> network. >>> The multiplicity is for separation, not for automatic redundancy. An >>> admin may manually split his VMs among migration networks, but no >>> scheduling logic is required from Engine. If the migration network is >>> unavailable for some reason, no migration would take place. >>> >>> We should design the solution with N networks in mind, and at worse, >>> if we >>> feel that the UI is needlessly cluttered we can limit to N=1. >>> >>> If there are no objections let's do it this way: >>> - add a new network role of migration network. >>> - add a per-cluster property of defaultMigrationNetwork. Its factory >>> default is ovirtmgmt, for backward compatibility. >>> - add a per-VM propery of migrationNetwork. If Null, the cluster >>> defaultMigrationNetwork would be used. >> >> +1 for the above >> >> I'll be happy if we also get the 4th item on my list which I should have phrased >> "Allow override from the migration dialogue as well, where the default in the drop box is the VM migration network, which in turn defaults to the cluster's migration network" > > I am so sorry that I have to retract my own words. I do not see a use > case for the migration networks that are not the > defaultMigrationNetwork. They are not used anywhere. I wish I wrote down > the following notes > > First phase: > - add a new network role of migration network. Each cluster has one, and > it is the default migration network for VMs on the cluster. Factory > default is that ovirtmgmt is the cluster migration network. > +1, I think for the first phase this is a simple and quick win. Nothing here seems to contradict the second phase. > Second phase: > - add a per-VM propery of migrationNetwork. If Null, the cluster > migrationNetwork would be used. > > - let the user override the VM migration network in the migrate API and > in the GUI. > Before implementing the second phase I would like to better define the scheduling implications of the above (scheduling a VM, moving a host to maintenance, etc.). Personally I would wait for a concrete 'user' request for this. I think the examples you gave above, which are tenant-related, are interesting, but we don't have tenants in oVirt yet. Until we have a use case that can utilize this API change, I would not change the migration (engine-VDSM) API. > Does this make sense to you, Simon?
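In terms of the earlier VDSM discussion, the first phase essentially means Engine resolves the destination host's address on the cluster's migration network (ovirtmgmt by factory default) and hands it to the migrate verb as the migration URI. A rough sketch follows; the function name and data shapes are illustrative assumptions, while the tcp:// form comes from the original proposal.

    # Illustrative only; names are assumptions, tcp:// follows the proposal.
    def build_migration_uri(dest_host_addresses, cluster_migration_network="ovirtmgmt"):
        # dest_host_addresses: network name -> IP of the destination host on it
        addr = dest_host_addresses.get(cluster_migration_network) \
            or dest_host_addresses["ovirtmgmt"]   # fall back to the management network
        return "tcp://%s" % addr

    print(build_migration_uri({"ovirtmgmt": "192.0.2.10", "migration": "10.10.0.10"},
                              "migration"))       # -> tcp://10.10.0.10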
> From kanagaraj.rk at gmail.com Mon Jan 21 12:18:34 2013 From: kanagaraj.rk at gmail.com (RK RK) Date: Mon, 21 Jan 2013 17:48:34 +0530 Subject: feature suggestion: migration network In-Reply-To: <33245819.37.1358692102331.JavaMail.javamailuser@localhost> References: <20130120140452.GG6145@redhat.com> <33245819.37.1358692102331.JavaMail.javamailuser@localhost> Message-ID: On Sun, Jan 20, 2013 at 7:59 PM, Simon Grinberg wrote: > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > To: "Simon Grinberg" , masayag at redhat.com > > Cc: "Livnat Peer" , arch at ovirt.org > > Sent: Sunday, January 20, 2013 4:04:52 PM > > Subject: Re: feature suggestion: migration network > > > > On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: > > > > > > > > > I think for the immediate terms the most compelling is the external > > > network provider use case, where you want to allow the external > > > network management to rout/shape the traffic per tenant, something > > > that will be hard to do if all is aggregated on the host. > > > > > > But coming to think of it, I like more and more the idea of having > > > migration network as part of the VM configuration. It's both > > > simple to do now and later add logic on top if required, and VDSM > > > supports that already now. > > > > > > So: > > > 1. Have a default migration network per cluster (default is the > > > management network as before) > > > 2. This is the default migration network for all VMs created in > > > that cluster > > > 3. Allow in VM properties to override this (Tenant use case, and > > > supports the external network manager use case) > > > 4. Allow from the migration network to override as well. > > > > > > Simple, powerful, flexible, while the logic is not complicated > > > since the engine has nothing to decide - everything is > > > orchestrated by the admin while initial out of the box setup is > > > very simple (one migration network for all which is by default the > > > management network). > > > > > > Later you may apply policies on top of this. > > > > > > Thoughts? > > > > I'm not sure that multiple migration networks is an urgent necessity, > > but what you suggest seems simple indeed. > > > > Simple because each VM has exactly ONE choice for a migration > > network. > > The multiplicity is for separation, not for automatic redundancy. An > > admin may manually split his VMs among migration networks, but no > > scheduling logic is required from Engine. If the migration network is > > unavailable for some reason, no migration would take place. > > > > We should design the solution with N networks in mind, and at worse, > > if we > > feel that the UI is needlessly cluttered we can limit to N=1. > > > > If there are no objections let's do it this way: > > - add a new network role of migration network. > > - add a per-cluster property of defaultMigrationNetwork. Its factory > > default is ovirtmgmt, for backward compatibility. > > - add a per-VM propery of migrationNetwork. If Null, the cluster > > defaultMigrationNetwork would be used. > > +1 for the above > > I'll be happy if we also get the 4th item on my list which I should have > phrased > "Allow override from the migration dialogue as well, where the default in > the drop box is the VM migration network, which in turn defaults to the > cluster's migration network" > > > > > > Dan. 
> > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > +1 for the fourth phrase as it will add more value -- With Regards, RK, +91 9840483044 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rydekull at gmail.com Mon Jan 21 12:26:52 2013 From: rydekull at gmail.com (Alexander Rydekull) Date: Mon, 21 Jan 2013 13:26:52 +0100 Subject: feature suggestion: migration network In-Reply-To: References: <20130120140452.GG6145@redhat.com> <33245819.37.1358692102331.JavaMail.javamailuser@localhost> Message-ID: I'd guess a "use case" would be the fact that at different customer sites I venture too I meet all kinds of setup. A very common setup is to split management(usually on a low-profile slower network without any fuss about it, just seperate from the core to ensure management access. And then, migrations of VMs and heavier workloads are usually put off into the faster core network. On Mon, Jan 21, 2013 at 1:18 PM, RK RK wrote: > > > On Sun, Jan 20, 2013 at 7:59 PM, Simon Grinberg wrote: > >> >> >> ----- Original Message ----- >> > From: "Dan Kenigsberg" >> > To: "Simon Grinberg" , masayag at redhat.com >> > Cc: "Livnat Peer" , arch at ovirt.org >> > Sent: Sunday, January 20, 2013 4:04:52 PM >> > Subject: Re: feature suggestion: migration network >> > >> > On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: >> > >> > >> > >> > > I think for the immediate terms the most compelling is the external >> > > network provider use case, where you want to allow the external >> > > network management to rout/shape the traffic per tenant, something >> > > that will be hard to do if all is aggregated on the host. >> > > >> > > But coming to think of it, I like more and more the idea of having >> > > migration network as part of the VM configuration. It's both >> > > simple to do now and later add logic on top if required, and VDSM >> > > supports that already now. >> > > >> > > So: >> > > 1. Have a default migration network per cluster (default is the >> > > management network as before) >> > > 2. This is the default migration network for all VMs created in >> > > that cluster >> > > 3. Allow in VM properties to override this (Tenant use case, and >> > > supports the external network manager use case) >> > > 4. Allow from the migration network to override as well. >> > > >> > > Simple, powerful, flexible, while the logic is not complicated >> > > since the engine has nothing to decide - everything is >> > > orchestrated by the admin while initial out of the box setup is >> > > very simple (one migration network for all which is by default the >> > > management network). >> > > >> > > Later you may apply policies on top of this. >> > > >> > > Thoughts? >> > >> > I'm not sure that multiple migration networks is an urgent necessity, >> > but what you suggest seems simple indeed. >> > >> > Simple because each VM has exactly ONE choice for a migration >> > network. >> > The multiplicity is for separation, not for automatic redundancy. An >> > admin may manually split his VMs among migration networks, but no >> > scheduling logic is required from Engine. If the migration network is >> > unavailable for some reason, no migration would take place. >> > >> > We should design the solution with N networks in mind, and at worse, >> > if we >> > feel that the UI is needlessly cluttered we can limit to N=1. 
>> > >> > If there are no objections let's do it this way: >> > - add a new network role of migration network. >> > - add a per-cluster property of defaultMigrationNetwork. Its factory >> > default is ovirtmgmt, for backward compatibility. >> > - add a per-VM propery of migrationNetwork. If Null, the cluster >> > defaultMigrationNetwork would be used. >> >> +1 for the above >> >> I'll be happy if we also get the 4th item on my list which I should have >> phrased >> "Allow override from the migration dialogue as well, where the default in >> the drop box is the VM migration network, which in turn defaults to the >> cluster's migration network" >> >> >> > >> > Dan. >> > >> _______________________________________________ >> Arch mailing list >> Arch at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/arch >> > > > > +1 for the fourth phrase as it will add more value > > -- > > With Regards, > RK, > +91 9840483044 > > _______________________________________________ > Arch mailing list > Arch at ovirt.org > http://lists.ovirt.org/mailman/listinfo/arch > > -- /Alexander Rydekull -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpeer at redhat.com Mon Jan 21 13:56:25 2013 From: lpeer at redhat.com (Livnat Peer) Date: Mon, 21 Jan 2013 15:56:25 +0200 Subject: feature suggestion: migration network In-Reply-To: References: <20130120140452.GG6145@redhat.com> <33245819.37.1358692102331.JavaMail.javamailuser@localhost> Message-ID: <50FD4909.6050400@redhat.com> On 01/21/2013 02:26 PM, Alexander Rydekull wrote: > I'd guess a "use case" would be the fact that at different customer > sites I venture too I meet all kinds of setup. > > A very common setup is to split management(usually on a low-profile > slower network without any fuss about it, just seperate from the core to > ensure management access. > > And then, migrations of VMs and heavier workloads are usually put off > into the faster core network. > > I think you are describing a use case for separating the migration network from the management network. I see a reason for providing that, for example the use case you described above. I am curious if there is a use case to have more than one migration network in a Cluser. Livnat > > On Mon, Jan 21, 2013 at 1:18 PM, RK RK > wrote: > > > > On Sun, Jan 20, 2013 at 7:59 PM, Simon Grinberg > wrote: > > > > ----- Original Message ----- > > From: "Dan Kenigsberg" > > > To: "Simon Grinberg" >, masayag at redhat.com > > > Cc: "Livnat Peer" >, arch at ovirt.org > > Sent: Sunday, January 20, 2013 4:04:52 PM > > Subject: Re: feature suggestion: migration network > > > > On Sun, Jan 13, 2013 at 07:47:59AM -0500, Simon Grinberg wrote: > > > > > > > > > I think for the immediate terms the most compelling is the > external > > > network provider use case, where you want to allow the external > > > network management to rout/shape the traffic per tenant, > something > > > that will be hard to do if all is aggregated on the host. > > > > > > But coming to think of it, I like more and more the idea of > having > > > migration network as part of the VM configuration. It's both > > > simple to do now and later add logic on top if required, and > VDSM > > > supports that already now. > > > > > > So: > > > 1. Have a default migration network per cluster (default is the > > > management network as before) > > > 2. This is the default migration network for all VMs created in > > > that cluster > > > 3. 
From lpeer at redhat.com  Mon Jan 21 13:56:25 2013
From: lpeer at redhat.com (Livnat Peer)
Date: Mon, 21 Jan 2013 15:56:25 +0200
Subject: feature suggestion: migration network
In-Reply-To:
References: <20130120140452.GG6145@redhat.com>
	<33245819.37.1358692102331.JavaMail.javamailuser@localhost>
Message-ID: <50FD4909.6050400@redhat.com>

On 01/21/2013 02:26 PM, Alexander Rydekull wrote:
> I'd guess a "use case" would be the fact that at the different customer
> sites I venture to, I meet all kinds of setups.
>
> A very common setup is to split off management (usually onto a
> low-profile, slower network without any fuss about it), kept separate
> from the core to ensure management access.
>
> And then, migrations of VMs and heavier workloads are usually put onto
> the faster core network.
>

I think you are describing a use case for separating the migration
network from the management network.
I see a reason for providing that, for example the use case you
described above.
I am curious if there is a use case to have more than one migration
network in a cluster.

Livnat

From rydekull at gmail.com  Mon Jan 21 16:17:37 2013
From: rydekull at gmail.com (Alexander Rydekull)
Date: Mon, 21 Jan 2013 17:17:37 +0100
Subject: feature suggestion: migration network
In-Reply-To: <50FD4909.6050400@redhat.com>
References: <20130120140452.GG6145@redhat.com>
	<33245819.37.1358692102331.JavaMail.javamailuser@localhost>
	<50FD4909.6050400@redhat.com>
Message-ID:

Yes, I was, sorry if that wasn't clear.

The only use case for additional migration networks that I can see right
now is this: someone has two different clusters, wants to migrate between
them, and temporarily connects them up. Well, I don't see that happening
much, really. But is there a reason for setting a limit?

I'll ponder the subject some more to see if I can find more use cases.

On Mon, Jan 21, 2013 at 2:56 PM, Livnat Peer wrote:

> I think you are describing a use case for separating the migration
> network from the management network.
> I see a reason for providing that, for example the use case you
> described above.
> I am curious if there is a use case to have more than one migration
> network in a cluster.
>
> Livnat
>

--
/Alexander Rydekull

From Caitlin.Bestler at nexenta.com  Mon Jan 21 18:21:56 2013
From: Caitlin.Bestler at nexenta.com (Caitlin Bestler)
Date: Mon, 21 Jan 2013 18:21:56 +0000
Subject: feature suggestion: migration network
In-Reply-To: <50FD4909.6050400@redhat.com>
References: <20130120140452.GG6145@redhat.com>
	<33245819.37.1358692102331.JavaMail.javamailuser@localhost>
	<50FD4909.6050400@redhat.com>
Message-ID: <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9CE16A@AUSP01DAG0108.collaborationhost.net>

Livnat Peer wrote:

>> I think you are describing a use case for separating the migration
>> network from the management network.
>> I see a reason for providing that, for example the use case you
>> described above.
>> I am curious if there is a use case to have more than one migration
>> network in a cluster.

Given that tenants do not explicitly request that their VMs be migrated,
I can't see charging for them. If there is no separate chargeback, I do
not see any benefit from having multiple migration networks.
The management plane is already capable of favoring Platinum customers
over Bronze customers without real-time support from the network.

On the other hand, there is clear value in preventing migration traffic
from interfering with management traffic, for example cancelling a
migration.
From lpeer at redhat.com  Wed Jan 23 12:48:35 2013
From: lpeer at redhat.com (Livnat Peer)
Date: Wed, 23 Jan 2013 14:48:35 +0200
Subject: feature suggestion: migration network
In-Reply-To: <719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9CE16A@AUSP01DAG0108.collaborationhost.net>
References: <20130120140452.GG6145@redhat.com>
	<33245819.37.1358692102331.JavaMail.javamailuser@localhost>
	<50FD4909.6050400@redhat.com>
	<719CD19D2B2BFA4CB1B3F00D2A8CDCD09F9CE16A@AUSP01DAG0108.collaborationhost.net>
Message-ID: <50FFDC23.1080009@redhat.com>

On 01/21/2013 08:21 PM, Caitlin Bestler wrote:
> Given that tenants do not explicitly request that their VMs be migrated,
> I can't see charging for them. If there is no separate chargeback, I do
> not see any benefit from having multiple migration networks.
> The management plane is already capable of favoring Platinum customers
> over Bronze customers without real-time support from the network.
>
> On the other hand, there is clear value in preventing migration traffic
> from interfering with management traffic, for example cancelling a
> migration.
>

Thanks for your input, Caitlin. I think that's a solid argument for
avoiding the complexity of supporting multiple migration networks per
cluster.
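Livnat's conclusion suggests a simple invariant the engine could enforce:
a cluster carries at most one network with the migration role. The sketch
below illustrates such a validation; as before, it is Python for
illustration only, and the names (Network, MIGRATION_ROLE,
validate_single_migration_network, "mignet") are assumptions of this
sketch rather than real oVirt Engine code.

MIGRATION_ROLE = "migration"  # assumed role label, for illustration only


class Network:
    def __init__(self, name, roles=()):
        self.name = name
        self.roles = set(roles)


def validate_single_migration_network(cluster_networks):
    """Reject a cluster configuration with more than one migration network.

    Returns the single migration network, or None if the cluster relies on
    the factory default (ovirtmgmt).
    """
    migration_nets = [n for n in cluster_networks
                      if MIGRATION_ROLE in n.roles]
    if len(migration_nets) > 1:
        names = ", ".join(n.name for n in migration_nets)
        raise ValueError("only one migration network is allowed per "
                         "cluster, got: " + names)
    return migration_nets[0] if migration_nets else None


# Example usage
nets = [Network("ovirtmgmt", roles=("management",)),
        Network("mignet", roles=("migration",))]
assert validate_single_migration_network(nets).name == "mignet"

This keeps the design open to N migration networks later, as Dan noted,
while the initial implementation limits the UI to N=1.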