On 05/15/2012 05:34 PM, Gary Kotton wrote:
...
> 2. host management --> interface
> 2.1 you are suggesting to replace the vlan field with a fabric field?
> are we sure this is something which shouldn't be presented in the main
> view and only via extended properties?
Yes, this is what I am suggesting. VLAN tagging is one method of network
isolation. There are more, for example GRE tunneling.
that doesn't mean this is not an important enough piece of information
to have in the subtab, rather than in an additional dialog (the fabric
type could be represented in an icon for example)
miki/andrew/einav - thoughts on the UI aspect of this?
>
> 2.2 is the fabric really a host interface level property, or a cluster
> wide property (would same logical network be implemented on different
> hosts in the cluster via both vdsm and quantum)?
> would live migration work in same cluster if one host has quantum
> fabric and another vdsm fabric for same logical network?
These are all up for discussion. I just wanted to provide something that
we can start to work with. It would be nice if there was some input from
product management on this issue (not exactly sure how this works in an
open source community :))
This goes to assumptions on quantum integration and capabilities.
I'm missing this part in the beginning of the feature page.
if quantum can support multiple types, and live migration between them,
then it can be a host level config.
if live migration wouldn't work, it would have to be a cluster level config.
if it is limited to a single technology, it is kind of a system level
config.
(and actually, after reading your reply to 3.7.1, i understand the
plugin is an interface level config, since you can have more than a
single one on the same host).
same goes to implications to the logical network entity (see my comment
in 2.4 below).
I'd also like to understand in that regard how we match up what a host
reports it can support (i.e., installed and configured) to a specific
technology.
i.e., there needs to be some correlation between which logical networks
can be deployed on which plugin.
so we also need to understand how this correlation is done automatically
between a logical network and the fabric/interface it belongs to.
(I admit your reply to 3.7.1 confused me as to how quantum will decide
on which plugin to create a specific attachment.)
>
> 2.3 you are describing the subtab of an interface, and mention a
> quantum pull down menu. is this in the subtab? in a dialog?
> UI mockups are needed here.
I think that prior to getting mockups, we should ensure that we have the
idea crystallized.
I can live with that, goes back to what i wrote in 2.2.
>
> 2.4 also, is this pulldown correct at host level or cluster level
> (would live migration work between hosts implementing different
> plugins for same logical network in same cluster?)
as i stated above - this has to be clarified in assumptions.
doesn't have to be the current state, rather a clear understanding of
where it is going, or of what makes sense.
(I am not sure requiring live migration between multiple technologies
makes sense, since implementation of the logical network could be based
on different concepts (vlan vs. gre).)
however, this means we need to define if the logical network (which is
not limited to a cluster) should be limited to clusters supporting these
technologies.
i.e., a vlan based logical network can cross (in different clusters)
UCS, openvswitch and bridges. but a gre based logical network cannot
cross all of them, etc.
>
> on to backend:
> 3.1 "The Quantum Service is a process that runs the Quantum API web
> server (port 9696)"
>
> how is authentication between engine and quantum service done (we
> have a client certificate for engine which quantum can verify i guess)
In my opinion this should be running locally on the same host as the
engine.
even on same server, we have authentication between components running
in different processes.
on top of that, one of our main goals is to allow multiple engines
running side by side for scale. maybe quantum will have to be in
active/passive mode for a while, but multiple nodes should be able to
access it.
there is no need for fancy authentication/authorization. simply trusting
ovirt engine CA certificate and validating engine client certificate
should suffice (and i assume can be done via config in the http layer of
quantum).
also, this can be a phase 2, but let's have a list of what is a must for
merging the code (this isn't), what is a must for deployment (phase 2),
and what comes after that (phase 3).
under deployment i'd expect not requiring a different db technology,
integrating with installer, an upgrade path, cleaner and secure
communication channels, etc.
>
> 3.2 is there a config of the quantum service uri? should it be a
> single instance or a per DC property?
> (it sounds like a per DC property, can be a future thing)
I am sorry but I do not understand the question.
even if quantum is running on same host, you will have a configuration
item of the URI to the service, and will configure this as part of
installation flow.
if this is not configured, I'd expect engine/ui to not show quantum options.
whether this is a single uri or a per-DC property goes to the
modeling/assumptions part. can we live with a single quantum service for
multiple technologies, or will we need a different quantum for different
technologies?
this can also be a later phase, starting with a limitation of a single,
system-wide quantum.
>
> 3.3 network creation
>
> semantics: we create a network at DC level. we attach it at cluster
> level. the UI allows you to "create at DC and attach to cluster" at
> cluster level.
>
> 3.3.1 what if i create a network with same VLAN in two DC's? what if
> we attach same logical network to multiple clusters - each will create
> the logical network in quantum again?
If I understand you correctly then I think that one Quantum network can
be created. This should work. In my opinion this should be managed by
the oVirt engine so the actual creation is not an issue. Ensuring that
each cluster has the correct network management configurations is what
is important.
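for reference, i'd expect the engine-side creation to be a single REST
call, roughly like this (a sketch only - i'm assuming the v1.1-style API
here, the exact paths/fields may differ, and the tenant name is a
placeholder):

    # sketch: create one quantum network per ovirt logical network.
    # assumes a v1.1-style API (POST /v1.1/tenants/<tenant>/networks);
    # exact paths/fields may differ.
    import json
    import requests

    def create_quantum_network(base_url, tenant, name):
        body = {"network": {"name": name}}
        resp = requests.post(
            "%s/v1.1/tenants/%s/networks" % (base_url, tenant),
            data=json.dumps(body),
            headers={"Content-Type": "application/json"})
        resp.raise_for_status()
        # the returned id is what the engine should persist next to the
        # ovirt logical network, so attaching more clusters reuses it.
        return json.loads(resp.text)["network"]["id"]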
> 3.3.2 what if the quantum service is down when engine performs this
> action or quantum returns an error?
The engine should be able to deal with this - similar to the way in
which it deals with a network creation when VDSM is down.
network creation is not affected when VDSM is down, as the admin
configures it on the hosts after it is created (via the UI).
in quantum case, there is no admin step.
so we need to solve engine monitoring of quantum, verifying the list of
networks relevant to a cluster is defined, marking undefined networks as
non-operational, and having a timer based or manual retry mechanism.
I don't think this is a merge blocker to code, but it should be part of
the phase 1 of this feature.
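something along these lines is what i have in mind (a sketch to show the
flow only - the names are made up, not existing engine code):

    # sketch of the monitoring/reconciliation idea - names are made up.
    def sync_cluster_networks(expected, defined, create_net,
                              mark_non_operational):
        """expected: networks the cluster should have (from the engine db),
        defined: networks quantum currently reports (from a REST list call),
        create_net / mark_non_operational: callbacks into engine logic."""
        for net in expected:
            if net in defined:
                continue
            try:
                create_net(net)              # retry ("roll forward")
            except Exception:
                mark_non_operational(net)    # surface to the admin, retry later
    # a timer based job (or a manual "retry" action) would call this
    # periodically per cluster.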
> 3.3.3 each creation of a DC creates a rhevm network - I assume these
> are not created, since "no host in a cluster in the DC yet"?
This functionally can remain the same.
ok, goes to assumptions part.
"quantum integration will be done only for vm networks"
> 3.3.4 what happens on moving of a host to another cluster, or moving
> of a cluster to another DC (possible if its DC was removed iirc).
The Quantum agents take care of this. On each host there will be a
Quantum agent that treats the network management.
I meant in the aspect of provisioning needed configuration to the host
(doing it when a host tries to go to up status is fine, but need to
remember this use case when designing, since it means this isn't a
bootstrap aspect).
also, please document this flow so someone trying to test the
integration covers it.
> 3.3.5 shouldn't this be limited to vm networks, or are all
> networks relevant?
I think all Quantum networks are relevant
I understood in 3.3.3 that quantum networks are only relevant to vm
networks (networks that can run virtual machines, rather than the
management, live migration or storage networks)?
(I'd suggest keeping it simple and limited to vm networks for first phase)
the important part here is that the code should not try to create this
network via quantum, i guess.
> 3.3.6 what if the host with the quantum fabric is in maint mode?
I do not understand. When a host is in maintenance mode do VMs receive
traffic? The Quantum port for the VMs can be set as DOWN.
I'm offline - we'll need to re-read the wiki to reply.
> 3.3.7 could a host with quantum fabric have a VDSM fabric on another
> interface (for vm networks? for non-vm networks)?
Yes. This is something that I would like.
then this should go to assumptions part.
I also would like the Quantum API to be updated so that we can indicate
the physical network interface that the Quantum network will be running on.
i don't think this is needed.
when admin configures the interface or bond on the host to be 'fabric'
or 'quantum' it calls vdsm to perform the network configuration change.
this will flag the relevant interface in the host as fabric/quantum
(either via config, or alias, etc.)
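to make it concrete, the host-side flag could be as simple as something
like this (purely illustrative - the file path, format and plugin names
are made up for the example):

    # illustrative only: persist which interface is a quantum fabric and
    # for which plugin, so vdsm can start/monitor the right agent later.
    import json

    FABRIC_CONF = "/var/run/vdsm/quantum_fabric.json"   # made-up path

    def mark_fabric(iface, plugin):
        try:
            with open(FABRIC_CONF) as f:
                conf = json.load(f)
        except IOError:
            conf = {}
        conf[iface] = {"fabric": "quantum", "plugin": plugin}
        with open(FABRIC_CONF, "w") as f:
            json.dump(conf, f)

    # e.g. mark_fabric("eth2", "openvswitch")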
but for modeling sake, can you please explain how this is supposed to
look for a UCS host?
>
> 3.5 network deletion (detach network from a cluster)
> 3.5.1 what happens if quantum is down or returned an error?
The engine should be able to deal with this - similarly to what it does
today
see my reply on 3.3.2. this is managed by the admin today.
so there is coding to be done here that should be documented (it is not
a merge blocker in my view, but it is part of phase 1).
>
> 3.6 vm creation
> 3.6.1 what about moving a VM from/to a cluster with a quantum fabric?
I do not see a problem here. The agents running on VDSM will detect and
treat it accordingly.
i meant in the aspect of the need to create the port for them in quantum?
> 3.6.2 what about import of a VM into a cluster with a quantum fabric?
Same as above
indeed - need to handle port creations?
> 3.6.3 you have vm creation/removal - missing vnic addition/removal
please document this operation so someone testing this will have full
picture.
> 3.6.4 worth mentioning quantum doesn't care about the vnic model?
> about the mac address?
The Quantum driver on VDSM does take into account the MAC address. This
is used in some plugins - for example the openvswitch plugin.
ok, can you please explain the data flow of how it gets it (does vdsm
simply need to pass it to the driver?)
>
> 3.7 vm start
> 3.7.1 why is the plugin parameter required at vm start?
This is to indicate to VDSM the operations to take place - for example
updating the openvswitch integration bridge
i thought the integration bridge is pre-configured at host level?
i thought the host already knows which interface is used for quantum,
and for sure, which plugin is supposed to be used for that interface.
so why is this a vm start parameter?
> can multiple plugins be supported in parallel at vdsm/host level or by
> the quantum service?
Yes. Multiple agents can run on VDSM.
ok, please document this in the assumptions part.
i assume multiple agents, but still only one per fabric interface?
also, if plugins are services, I want to understand when vdsm needs to
start them (I assume when admin configures an interface of type fabric
for a specific plugin type).
In the engine a little work is required to run multiple plugins.
in the engine or in quantum?
I updated my reply on assumptions above based on this info as something
that needs clearing up.
> 3.7.2 aren't network_uuid and port uuid redundant to attachment uuid
> (I assume quantum service knows from attachment the port and network)
> - i have no objection to passing them to vdsm, just trying to
> understand reasoning for this.
> I am missing what happens at vdsm level in this point (even after
> reading the matching vdsm part)
The network ID and the port ID are returned by the Quantum service. The
attachment ID is passed to the Quantum server. If this is unique then it
can be a key to the above (I am currently working on the scale issue
with Quantum and there are 2 options, the first is that it is unique,
the second is that all three are passed to the agent). It is a few extra
bytes passed. I think that it is better to be safe and have all of the
information on VDSM for future use.
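so, if i read this right, the nic spec vdsm gets at vm start would carry
something like this (illustrative only - the key names and values are
made up, not final):

    # illustrative nic spec as i understand it - key names are not final,
    # just showing which quantum identifiers travel with the vm at start.
    nic_spec = {
        "macAddr": "00:1a:4a:16:01:51",
        "network": "",                       # no ovirt-managed bridge here
        "quantum_network_uuid": "<network uuid from the quantum service>",
        "quantum_port_uuid": "<port uuid from the quantum service>",
        "quantum_attachment_uuid": "<attachment uuid>",
        "quantum_plugin": "openvswitch",
    }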
>
> 3.8 vm suspend/resume (it's fine to say it behaves like X, but still
> need to be covered to not cause bugs/regressions)
The Quantum port status can be updated.
please document this operation so someone testing this will have full
picture.
>
> 3.9 vm stop
Same as above
please document this operation so someone testing this will have full
picture.
> 3.9.1 need to make sure vm stop when engine is down is handled correctly.
this needs to be addressed. again, not a merge blocker to code, but in my
view part of phase 1 (my view of phase 1 is deployment is still manual,
but flows are handled correctly and do not leave leftovers around).
please document this operation so someone testing this will have full
picture.
> 3.9.2 what happens if quantum service is down? unlike start vm or
> network creation the operation in this case cannot be
> stopped/rolled back, only rolled forward.
We have to ensure that it is up.
we have several use cases requiring to be able to start virtual machines
even when engine is down (regardless of HA aspects for the engine).
this means admin wants to go to a host, and start the VM from command line.
the VM definition is recovered from the OVF for such a manual emergency
procedure.
sanlock is supposed to protect against vm corruption in this case.
but the VMs will need networking to be useful.
so what's the emergency flow to starting a VM?
>
> 3.10 hot plug nic?
Each new NIC has an attachment ID - the agent knows that a new NIC is
added and treats it accordingly.
true, but i assume you would need to change the api here to get the
quantum details like the attachment_id (and the rest).
also, all affected use cases should be documented for someone testing
this to have full picture of scope.
>
> 3.11 vm migration
> 3.11.1 ok, so this is the first time i understand hosts can be mixed
> in same cluster. worth specifying this in the beginning (still not
> clear if mixed plugins can exist)
If the networks are configured with the same characteristics then this
should be OK. As mentioned above we are working on a blueprint to deal
with connecting to existing networks - i.e. enable the user/admin to
configure the VLAN tags
please define under the assumptions part what the network
characteristics are, and the mapping of which plugins support which
characteristics, for this to be relevant.
also, monitoring wise, you are implying each host must be able to
accommodate all types of characteristics in the networks in the cluster.
> 3.11.2 afair, we don't deal with engine talking to both hosts during
> live migration. only to host A, who is then communicating with host B.
> so why not always have the VM configuration at vm start (and hot plug
> nic) have the quantum details so live migration can occur at will
> without additional information?
I do not understand, can you please explain? VDSM creates the tap device
and builds the libvirt files. The agents detect the tap device and
attach to the network. I do not understand why it is a problem for the
live migration. This is also driven by the libvirt XML's being created.
Is this correct?
that is fine. iirc, the wiki states engine needs to pass information to
target host. we don't have such a flow in live migration (which is
complex enough).
live migration should be based on the information already available to
vdsm in source host.
I don't know if quantum cares about the attachment/port residing on
more than one host (the target host has to set them up before migration
starts, and source host removes them only after migration ends).
i hope we are both understanding this the same way.
in any case, need to remember to test that failure of live migration in
its various steps cleans up any quantum related changes.
> 3.11.3 "In order to implement the above a REST client needs
to be
> implemented in the oVirt engine. "
> did not understand this statement - please elaborate.
All interface with the Quantum server is done via REST. In order for
oVirt to be able to communicate, it will need to send REST messages to
the server and be able to parse the replies - this is what I meant by
the REST client.
>
> 4. host management
> 4.1 deployment
> we do not deploy packages from engine to hosts, we can install them
> from a repo configured to the host. but this is done today only as
> part of initial bootstrap/installation of host.
> also, it is not relevant for ovirt node which is 'firmware' like.
> any reason to not require the 'plugin installation packages' at vdsm
> rpm level for plugins we think are good enough to use (until then,
> responsibility to deploy them is of admin)
>
Correct. I agree with you.
ok, so on ovirt-node all quantum agents will be pre-installed.
if a quantum agent is changed to work correctly with ovirt/vdsm, why not
simply always require at rpm level to deploy it on the host?
then a regular host and ovirt-node are the same.
still need to define how vdsm reports which agents are supported by it
(which is different than those which are installed, say for UCS?), which
is different from those that are actively running (since they are
configured).
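i.e., i'd expect the host capabilities report to distinguish the three,
roughly like this (made-up structure and plugin names, just to show the
distinction):

    # made-up structure, only to show the supported/installed/running
    # split i'd like vdsm to report as part of host capabilities.
    quantum_caps = {
        "supported": ["openvswitch", "linuxbridge", "ucs"],  # vdsm has a driver
        "installed": ["openvswitch", "linuxbridge"],         # agent rpm on host
        "running":   ["openvswitch"],                        # configured + active
    }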
> (what are plugin level packages at host level - aren't these the agents?)
Each plugin has the relevant packages that should be installed.
>
> 4.2 plugin configuration
> per DC? per cluster? per host? per plugin? please provide more details
> here on configuration details expected and flow of when they are
> configured and when they are expected to change?
I think that plugin per cluster is a good start. This could limit live
migration problems.
I agree...
question is whether this is part of phases, or part of assumptions (end
goal).
modeling should be aligned with the end goal (i am not suggesting it
can't be done in steps, just that the end goal should be understood so
the steps don't interrupt it. especially for the API part, which
requires keeping backward compatibility as much as possible).
> I think this will merit a check per 'ovirt supported' quantum plugin
> to see it works in a way we can use.
>
> 4.3 connectivity
> again, this requires more details. if needed per plugin.
> what is expected? how authentication/encryption happens? what iptables
> rules need to change in engine/host if at all, etc.
> I'm fine with this being 'direct access to db from hosts' for POC
> level of patches, but not for something we actually merge/provide
> support for later.
I am currently working on a blueprint to ensure better scaling of
quantum agents. This will be done by making use of the nova RPC library.
This supports Qpid, rabbit mq, kombu etc. These have an option of being
secure. Please clarify if this suffices?
having all work on same channel will solve most problems.
still need to map those that require data not in quantum for some reason,
to try and pass to quantum the needed data as custom parameters rather
than add a channel to engine, if possible/makes sense.
do all of these require a broker to be deployed?
I assume all support certificate based authentication.
it should be the same one we'll pick for engine-vdsm (I heard fans of
zeromq as well), but this doesn't have to be phase 1, or we might be
lucky and agree on one already supported :)
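if the nova RPC route is taken, certificate based security on the broker
channel would look roughly like this (a kombu sketch only - the broker
url and the pki paths are placeholders):

    # rough sketch: agent connecting to the broker over TLS with client
    # certificate auth - broker url and pki paths are placeholders.
    import ssl
    from kombu import Connection

    conn = Connection(
        "amqp://broker.example.com:5671//",
        ssl={"ca_certs": "/etc/pki/vdsm/certs/cacert.pem",
             "certfile": "/etc/pki/vdsm/certs/vdsmcert.pem",
             "keyfile": "/etc/pki/vdsm/keys/vdsmkey.pem",
             "cert_reqs": ssl.CERT_REQUIRED})
    conn.connect()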
>
> 5. VDSM
...
>
> 5.2 "The agent package can and may be received from the oVirt Engine
> or can be downloaded via RPM's"
> see 4.1 above - we don't deploy rpm's/code on the fly today.
When I was sitting with the guys from oVirt this is what I was told. I
guess that I misunderstood.
ok, I assume 4.1 resolves this and we simply deploy all those made to
work with ovirt as part of vdsm rpm requirements?
(implies each one should have a sub rpm, not sure if that is the case today)
>
> 5.3 "in addition to the treatment below VDSM should also maintain a
> health check to the Quantum agent"
> what if the restart didn't help? how should ovirt treat the host wrt
> networking? to running VMs? to ability to live migrate VMs from the
> host if needed?
If the agent is down then the oVirt engine should be notified - at
least raise events for the user.
ok.
I assume vdsm knows which agent to start and later monitor based on the
host level config by admin of the fabric interface (whether the plugin
type is cluster or interface level)
>
> 5.4 "Logical Network Management"
> please see 3.11 above - we would want live migration to not require
> additional information, so details should be available even if not
> immediately acted upon.
>
> 5.5 "The tap device created uses an "ethernet" network device. This
> means that the creation of the libvirt XML file is a bit different."
Yes, that is correct
> 5.5.1 does this impact stable device addresses somehow?
Not that I am aware of
> 5.5.2 how is live migration possible if the libvirt xml to a non
> quantum host is different (or is this the binding part only)?
If the libvirt is "pacthed" when the migration takes place then this
should not be a problem.
does libvirt support something like that? the live migration happens
directly between the two libvirt's.
libvirt created an abstraction layer in their networking model to allow
live migration between hosts where the physical layer is different, so
the logical layer remains the same during live migration.
I just don't know if the 'ethernet' attribute you mentioned is in that
physical mapping layer (which would be fine), or in the logical one
(which won't live migrate).
please verify with a libvirt expert (dbp modeled that part iirc)
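for reference, my understanding of the 'ethernet' binding is an
interface element along these lines (a sketch of what vdsm would emit;
the exact attributes are to be confirmed with a libvirt expert, as said):

    # sketch of the interface element i understand vdsm would generate
    # for a quantum nic ('ethernet' type + pre-created tap device).
    # exact shape to be confirmed, as said above.
    def quantum_nic_xml(tap_dev, mac):
        return ("<interface type='ethernet'>"
                "<mac address='%s'/>"
                "<target dev='%s'/>"
                "<script path=''/>"   # vdsm/agent wires the tap, not libvirt
                "</interface>" % (mac, tap_dev))

    # e.g. quantum_nic_xml('tap0', '00:1a:4a:16:01:51')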
>
> 5.6 I assume libvirtvm.py is part of vdsm.
> is quantum.py part of quantum code base or vdsm codebase (it sounds
> like it should be part of quantum code base)?
> so how exactly the rpm for this would look like? deploy it to
> /xxx/quantum and have it add a symbolic link to it from vdsm paths?
In my opinion this should be part of VDSM. It would be ideal if each
plugin can bind/load a driver - in Nova each driver is part of the nova
code. If the API is well defined then all vendors can provide their
drivers. This is open for discussion and we should try and understand
what the best way of doing this is. I like the NOVA model.
doesn't this mean we need to recode each driver for vdsm?
I thought it was part of making quantum a separate entity that it
provides drivers as well (especially if the api for drivers is well
defined)?
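to make the "well defined API" point concrete, i'd picture something
like this (a hypothetical interface, names made up):

    # hypothetical driver interface, names made up - just to make the
    # "well defined API" point concrete. each plugin/vendor ships a class
    # implementing this, whether it lives in the vdsm or quantum tree.
    class QuantumVifDriver(object):
        def plug(self, vm_uuid, mac, network_uuid, port_uuid,
                 attachment_uuid):
            """Create/wire the tap device and return the device name.
            Note the mac is passed in here - this relates to my 3.6.4
            question."""
            raise NotImplementedError

        def unplug(self, attachment_uuid):
            """Tear down whatever plug() created."""
            raise NotImplementedError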
>
> 5.7 "When a communication channel is established between VDSM and the
> oVirt engine. The VDSM host should notify the oVirt Engine of the
> Quantum fabric that is supported."
> how does vdsm go about detecting an agent is installed exactly?
The user needs to install a package for the agent - this creates the
relevant configuration files. These files can be used to detect the
running agent or via "ps"
1. I assume per the discussion in the deployment section that all
relevant code is always deployed.
2. vdsm needs to report what is installed
3. vdsm needs to get the chosen technology as part of the host level
fabric interface definition to configure it to up state.
4. or, as part of the engine "init_vds_on_up" flow, engine needs to pass
to vdsm the quantum relevant configuration (if defined at cluster level
and not at interface level, maybe) - see the sketch below.
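for item 4, i mean something along these lines (illustrative only - the
keys and values are made up):

    # illustrative only (keys are made up): what the engine could hand
    # vdsm during init_vds_on_up if the quantum config is cluster level.
    quantum_host_config = {
        "plugin": "openvswitch",           # chosen at cluster level
        "fabric_interfaces": ["eth2"],     # or from interface level config
        "quantum_uri": "https://engine.example.com:9696/",  # placeholder
    }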
> (especially since deployment wise, most likely all agents would be
> deployed via rpm requirements?)
> is any agent a service at host level rather than code only?
> is there a scenario where a vdsm level plugin isn't required?
>
> 6. open issues
> 6.1 worth mentioning if any are in the works
No shortage :). But we have a good start. There are some blueprints in
the works which solve a large number of problems.
I'd add links to those that exist.
> 6.2 specifying the vlan from ovirt engine is not a gap?
Yes it is. This is currently being addressed. This is not a reason to
stop us moving ahead.
> 6.3 ok, so this is the first time "no multiple plugins" is mentioned
> ("Non-uniform technology"). but it sounds like the approach in the
> wiki assume it is/will be possible to have multiple technologies
> (agents/plugins) going forward?
My take is that this can be hidden by oVirt engine. It is just an
implementation detail - this should not affect the user experience -
which at the end of the day is what counts
this is kind of a crucial part of the assumptions/modeling part for the
engine.
> 6.4 need to discuss which of the open issues should block going
> forward with merging of code, and expected timeframe for resolution of
> some of them
Correct. In my opinion there are no major blocking issues at the moment.
again, apart from code level issues, we need to clear up the modeling so
code review can be done against some understanding of the plan.
in my view, clarifying these is blocking understanding the feature:
- defining the assumptions (even if some are not supported yet)
- modeling per the assumptions
- UI mockups per the modeling (since they make things visible)
- API modeling, since it is a pain for anyone integrating with it when
it changes
- db scheme changes
I agree solving these isn't blocking merging code:
- communication channels between quantum components
- deployment implications/flow
I do think these would block further consumption, so they should be
covered/understood as to what the gaps are.
for communication part, i think the approach you suggested should be
fine (assuming it will merge with the same brokering technology used by
engine/vdsm)
but for deployment part, "how this will look like" should be cleared up
in a bit more detail to have the gap list (on both engine and vdsm)
>
> 7. other questions/items (some mentioned above as well)
> 7.1 please add ui mockups, db scheme changes and REST api changes
OK
> 7.2 i'm missing the deployment flow:
OK
> - user install engine. is quantum a separate install or bundled going
> forward?
> - any configuration items by user at this point? what are they?
> - what rpms are deployed at host level? what configuration do they need
> - communication channels and their security are a must to understand
> 7.3 upgrade path / compatibility levels this is relevant to?
> 7.4 monitoring flow - how is monitoring/state of a host affected by
> quantum service being down, communication problem from host to (where
> actually?)