On 05/18/2012 01:07 PM, Itamar Heim wrote:
On 05/15/2012 05:34 PM, Gary Kotton wrote:
>> 2. host management --> interface
>> 2.1 you are suggesting to remove the vlan field with a fabric field?
>> are we sure this is something which shouldn't be presented in the main
>> view and only via extended properties?
> Yes, this is what I am suggesting. VLAN tagging is one method of
> isolation. There are more, for example GRE tunneling.
that doesn't mean this is not an important enough piece of information
to have in the subtab, rather than in an additional dialog (the fabric
type could be represented in an icon for example)
miki/andrew/einav - thoughts on the UI aspect of this?
This is a very important piece of information. The network specifics are
very important. I think that they can be displayed on a different screen
and do not need to clutter this specific view. Please note that currently
Quantum does not return the network specifics, for example the VLAN tag.
This is something that is being addressed by a blueprint for connecting
to existing networks.
>> 2.2 is the fabric really a host interface level property, or a cluster
>> wide property (would same logical network be implemented on different
>> hosts in the cluster via both vdsm and quantum)?
>> would live migration work in same cluster if one host has quantum
>> fabric and another vdsm fabric for same logical network?
> These are all up for discussion. I just wanted to provide something that
> we can start to work with. It would be nice if there was some input from
> product management on this issue (not exactly sure how this works in an
> open source community :))
This goes to assumptions on quantum integration and capabilities.
I'm missing this part in the beginning of the feature page.
I can add a link to the Quantum wiki or documentation. I am not sure
that this will address this issue. Quantum is evolving. It started with
simple layer 2 connectivity. Now layer 3 address management is being
added. Next stop will be security policies and quality of service.
if quantum can support multiple types, and live migration between them,
then it can be a host-level config.
if live migration wouldn't work, it would have to be a cluster-level config.
if it is limited to a single technology, it is kind of a system-level config.
(and actually, after reading your reply to 3.7.1, i understand the
plugin is an interface level config, since you can have more than a
single one on the same host).
Quantum is a network service. It can work in all of the above scenarios
- for example, it works in OpenStack which supports the above. It is
just a matter of configuring it correctly.
same goes to implications to the logical network entity (see my
comment in 2.4 below).
I'd also like to understand in that regard how we match up a host
reporting what it can support (i.e., installed and configured) to a
logical network; i.e., there needs to be some correlation between which
logical networks can be deployed on which plugin.
so also need to understand how this correlation is done automatically
between logical network, to which fabric/interface it belongs.
This is a gap in Quantum at the moment, that is, Quantum does not return
logical network statistics.
(I admit your reply to 3.7.1 confused me as to how quantum will decide
on which plugin to create a specific attachment.)
>> 2.3 you are describing the subtab of an interface, and mention a
>> quantum pull down menu. is this in the subtab? in a dialog?
>> UI mockups are needed here.
> I think that prior to getting mockups, we should ensure that we have the
> idea crystallized.
I can live with that, goes back to what i wrote in 2.2.
>> 2.4 also, is this pulldown correct at host level or cluster level
>> (would live migration work between hosts implementing different
>> plugins for same logical network in same cluster?)
as i stated above - this has to be clarified in assumptions.
doesn't have to be the current state, rather a clear understanding
where it is going to, or as to what makes sense.
(I am not sure requiring live migration between multiple technologies
makes sense, since implementation of the logical network could be
based on different concepts (vlan vs. gre).)
however, this means we need to define if the logical network (which is
not limited to a cluster) should be limited to clusters supporting it;
i.e., a vlan based logical network can cross (in different clusters)
UCS, openvswitch and bridges, but a gre based logical network cannot
cross all of them, etc.
For a start we can limit live migration only to homogeneous hosts, that
is, hosts running the same Quantum agent. If the VM has characteristics
that are supported by more than one plugin then I do not see any reason
why this should be limited. For example, every plugin supports VLAN
tagging. If a VM runs on a tagged network then there is no reason why
the VM cannot be moved from a host running OVS to a host that has UCS.
>> on to backend:
>> 3.1 "The Quantum Service is a process that runs the Quantum API web
>> server (port 9696)"
>> how is authentication between engine and quantum service done (we
>> have a client certificate for engine which quantum can verify i guess)
> In my opinion this should be running locally on the same host as the
> oVirt engine.
even on same server, we have authentication between components running
in different processes.
Can you please point me to an example so that I can understand in more
detail. The interface with the Quantum server is via REST API. If
necessary this can be done over SSL.
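As a concrete illustration of what that REST interaction could look like on the engine side, here is a minimal sketch. The URL layout follows the Quantum v1.x tenant-scoped API on port 9696 mentioned in 3.1; the tenant id, network name and helper function are illustrative assumptions, not existing oVirt code.

```python
# Hypothetical sketch of an engine-side request builder for the
# Quantum REST API (v1.x layout, port 9696). The tenant id, names
# and the helper itself are illustrative assumptions.
import json

QUANTUM_BASE = "https://localhost:9696/v1.1"

def build_create_network_request(tenant_id, network_name):
    """Return (method, url, json_body) for a 'create network' call."""
    url = "%s/tenants/%s/networks" % (QUANTUM_BASE, tenant_id)
    body = json.dumps({"network": {"name": network_name}})
    return ("POST", url, body)

# The actual HTTP call would go over SSL, with the engine presenting
# its client certificate so the Quantum HTTP layer can verify it.
method, url, body = build_create_network_request("ovirt", "blue")
```

The same builder pattern would apply to port and attachment resources; only the resource path changes.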
on top of that, one of our main goals is to allow multiple engines
running side by side for scale. maybe quantum will have to be in
active/passive mode for a while, but multiple nodes should be able to
run.
Do the engines have a common database or does each engine have its own
database? If the service is the problem then a load balancer can be used
to select the "best" server. The issue is the information stored in the
database.
there is no need for fancy authentication/authorization. simply
trusting ovirt engine CA certificate and validating engine client
certificate should suffice (and i assume can be done via config in the
http layer of quantum).
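A sketch of that trust setup, assuming the HTTP layer can be handed a standard TLS server context; the function name and the CA path argument are hypothetical, not actual Quantum configuration:

```python
# Sketch: a server-side TLS context that requires and verifies a
# client certificate against a single trusted CA (here, the oVirt
# engine CA). The function name and ca_path are hypothetical; a real
# deployment would configure this in Quantum's HTTP layer.
import ssl

def engine_trusting_context(ca_path=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Refuse clients that do not present a verifiable certificate.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_path:  # e.g. a path to the engine CA pem (illustrative)
        ctx.load_verify_locations(cafile=ca_path)
    return ctx
```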
also, this can be a phase 2, but let's have a list of what is a must
for merging the code (this isn't), to deployment (phase 2), to more
than that (phase 3).
under deployment i'd expect not requiring a different db technology,
integrating with installer, an upgrade path, cleaner and secure
communication channels, etc.
Some Quantum plugins have a database, others do not. The open source
plugins are implemented in python which uses the SQLAlchemy database
interface. The user can select the database type. I have currently used
this with the default mysql. If we use the postgres database, on the
same database server with the same authentication as the oVirt engine,
does this cover the authentication issues?
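Since the open source plugins go through SQLAlchemy, switching the backing database is essentially a matter of the connection URL. A sketch (all credentials and database names below are made up for illustration):

```python
# Sketch: with SQLAlchemy, moving a plugin from the default MySQL to
# the postgres instance used by oVirt engine is mostly a connection
# URL change. All names/credentials below are illustrative.
def sql_connection_url(dialect, user, password, host, database):
    return "%s://%s:%s@%s/%s" % (dialect, user, password, host, database)

mysql_url = sql_connection_url("mysql", "quantum", "secret",
                               "localhost", "ovs_quantum")
postgres_url = sql_connection_url("postgresql", "engine", "secret",
                                  "localhost", "engine")
# sqlalchemy.create_engine(postgres_url) would then be used instead
# of the MySQL one, with no plugin code changes.
```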
>> 3.2 is there a config of the quantum service uri? should it be a
>> single instance or a per DC property?
>> (it sounds like a per DC property, can be a future thing)
> I am sorry but I do not understand the question.
even if quantum is running on same host, you will have a configuration
item of the URI to the service, and will configure this as part of
engine setup.
if this is not configured, I'd expect engine/ui to not show quantum
functionality.
is this a single uri or a DC instance level property goes to the
modeling/assumptions part. can we live with a single quantum service
for multiple technologies, or will we need a different quantum service
per technology?
this can also be a later phase, starting with a limitation of a
single, system-wide quantum.
The way that I see this is that the user should not need to know of the
existence of a Quantum server. They should just know that Quantum
implements the networking. It would be ideal if the oVirt engine hides
all of the interfaces with the server. This is just an implementation
detail.
>> 3.3 network creation
>> semantics: we create a network at DC level. we attach it at cluster
>> level. the UI allows you to "create at DC and attach to cluster" at
>> cluster level.
>> 3.3.1 what if i create a network with same VLAN in two DC's? what if
>> we attach same logical network to multiple clusters - each will create
>> the logical network in quantum again?
> If I understand you correctly then I think that one Quantum network can
> be created. This should work. In my opinion this should be managed by
> the oVirt engine so the actual creation is not an issue. Ensuring that
> each cluster has the correct network management configurations is what
> is important.
>> 3.3.2 what if the quantum service is down when engine performs this
>> action or quantum returns an error?
> The engine should be able to deal with this - similar to the way in
> which it deals with a network creation when VDSM is down.
network creation is not affected when VDSM is down, as the admin
configures it on the hosts after it is created (via the UI).
in quantum case, there is no admin step.
so the engine needs to monitor quantum to verify that the networks
relevant to a cluster are defined, mark undefined networks as
non-operational, and have a timer based or manual retry mechanism.
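The monitoring step above could be sketched as a simple reconciliation pass (the function and the network names are hypothetical, not engine code):

```python
# Sketch of the reconciliation described above: compare the logical
# networks a cluster requires with what quantum actually reports,
# and return the ones that should be marked non-operational until a
# timer-based or manual retry succeeds. Names are hypothetical.
def networks_to_mark_non_operational(required_networks, quantum_networks):
    return sorted(set(required_networks) - set(quantum_networks))

missing = networks_to_mark_non_operational(["blue", "red"], ["blue"])
```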
I personally think that the issue of "non operational" status has to be
addressed. In my opinion the goal is to have VM's up. I'll give you an
example. Say we have VM1, VM2 and VM3 running on one host. VM1 and VM2
communicate on an internal network. VM3 is connected to the internal
network and to the outside world. If there is a problem with the
networking then the user is unable to do anything with VM1 and VM2 due
to the fact that their host is non operational...
How does one monitor the state of a network? At the moment VDSM just
sends a periodic interface update to the engine. The engine has logic
that defines if the host is up or down according to this. I think that
it should suffice for the user to know that the network link is up or
down. VMware does not have such a strict and limiting system.
If a user takes a hardware device and connects it to the wrong switch
port they are still able to work on the device. Why should this be
different for a virtual machine?
I don't think this is a merge blocker to code, but it should be part
of the phase 1 of this feature.
I am not sure that I am in sync with the phases. It would be great if
we could describe the scope of the various phases. This will help
address the issues for each phase.
>> 3.3.3 each creation of a DC creates a rhevm network - I assume these
>> are not created, since "no host in a cluster in the DC yet"?
> This functionally can remain the same.
ok, goes to assumptions part.
"quantum integration will be done only for vm networks"
>> 3.3.4 what happens on moving of a host to another cluster, or moving
>> of a cluster to another DC (possible if its DC was removed iirc).
> The Quantum agents take care of this. On each host there will be a
> Quantum agent that treats the network management.
I meant in the aspect of provisioning needed configuration to the host
(doing it when a host tries to go to up status is fine, but need to
remember this use case when designing, since it means this isn't a
also, please document this flow so someone trying to test the
integration covers it.
>> 3.3.5 shouldn't this be limited to vm networks, or are all networks
>> relevant?
> I think all Quantum networks are relevant
I understood in 3.3.3 that quantum networks are only relevant to vm
networks (networks that can run virtual machines, rather than the
management, live migration or storage networks)?
(I'd suggest keeping it simple and limited to vm networks for the first
phase.)
the important part here is that code should not try to create this
network via quantum i guess.
This is why the user should still have an option of using the "VDSM
style" networking.
>> 3.3.6 what if the host with the quantum fabric is in maint mode?
> I do not understand. When a host is in maintenance mode do VM's receive
> traffic? The Quantum port for the VM's can be set as DOWN
I'm offline - we'll need to re-read the wiki to reply.
>> 3.3.7 could a host with quantum fabric have a VDSM fabric on another
>> interface (for vm networks? for non-vm networks)?
> Yes. This is something that I would like. I also would like the Quantum
then this should go to assumptions part.
> API to be updated so that we can indicate the physical network interface
> that the Quantum network will be running on.
i don't think this is needed.
when admin configures the interface or bond on the host to be 'fabric'
or 'quantum' it calls vdsm to perform the network configuration
change. this will flag the relevant interface in the host as
fabric/quantum (either via config, or alias, etc.)
At the moment Quantum has one network interface for all of the networks.
We would like Quantum to be extended so that it can indicate which
network is run on which physical interface. For example if NIC 1 (eth0)
is on a DMZ and NIC 2 (eth2) is on a private network, we want the
Quantum DMZ network to only be connected to NIC 1.
but for modeling sake, can you please explain how this is supposed to
look for a UCS host?
>> 3.5 network deletion (detach network from a cluster)
>> 3.5.1 what happens if quantum is down or returned an error?
> The engine should be able to deal with this - similarly to what it does
see my reply on 3.3.2. this is managed by the admin today.
so this is coding to be done that should be documented (it is not a
merge blocker in my view, but it is part of phase 1).
>> 3.6 vm creation
>> 3.6.1 what about moving a VM from/to a cluster with a quantum fabric?
> I do not see a problem here. The agents running on VDSM will detect and
> treat accordingly.
i meant in the aspect of the need to create the port for them in quantum?
I think that we need to discuss the port creation. Quantum enables the
user to create a network, a port on the network, and the ability to
attach an interface to the port. I think that the port creation is
something that should be hidden from the oVirt engine user. They just
need to know about networks and that a VM has been connected to the
network.
>> 3.6.2 what about import of a VM into a cluster with a quantum fabric?
> Same as above
indeed - need to handle port creations?
>> 3.6.3 you have vm creation/removal - missing vnic addition/removal
please document this operation so someone testing this will have the
full picture.
:) OK - seems like I have quite a lot of documentation to do. The same
test cases should be used as those with the existing VDSM implementation.
Our first goal is to have network parity. Once we have this we can
expand the scope.
>> 3.6.4 worth mentioning quantum doesn't care about the vnic model?
>> about the mac address?
> The Quantum driver on VDSM does take into account the MAC address. This
> is used in some plugins - for example the openvswitch plugin.
ok, can you please explain the flow of data of how it gets it (simply
vdsm needs to pass it to the driver?)
In the POC implementation that I did, the Quantum treatment in VDSM
received the following information:
- mac address of the interface
- VM UUID
- Quantum ID's
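Grouped into a single structure, that per-vNIC payload could look like the sketch below. Only the three items listed above come from the POC; the dict layout and key names are my own illustration.

```python
# Sketch of the per-vNIC information the POC passed to the Quantum
# treatment in VDSM. Only the three items (mac, VM UUID, Quantum
# ID's) come from the POC; the layout and key names are illustrative.
def quantum_vnic_params(mac, vm_uuid, network_id, port_id, attachment_id):
    return {
        "mac": mac,            # mac address of the interface
        "vm_uuid": vm_uuid,    # VM UUID
        "quantum_ids": {       # the Quantum ID's
            "network": network_id,
            "port": port_id,
            "attachment": attachment_id,
        },
    }
```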
>> 3.7 vm start
>> 3.7.1 why is the plugin parameter required at vm start?
> This is to indicate to VDSM the operations to take place - for example
> updating the openvswitch integration bridge
i thought the integration bridge is pre-configured at host level?
i thought the host already knows which interface is used for quantum,
and for sure, which plugin is supposed to be used for that interface.
so why is this a vm start parameter?
When I spoke with the guys from VDSM they did not want me to change the
VDSM configuration files. The idea was to deal with each plugin type at
run time. This is logical and keeps changes on the VDSM side to a
minimum. Less chance for configuration problems.
>> can multiple plugins be supported in parallel at vdsm/host level or by
>> the quantum service?
> Yes. Multiple agents can run on VDSM. In the engine a little work is
ok, please document this in the assumptions part.
i assume multiple agents, but still only one per fabric interface?
also, if plugins are services, I want to understand when vdsm needs to
start them (I assume when admin configures an interface of type fabric
for a specific plugin type).
The agent will be running on VDSM. This service is started when the
VDSM host boots. The installation of the agent should ensure this.
> required to run multiple plugins.
in the engine or in quantum?
The plugin runs on the engine. The agent runs on the host.
I updated my reply on assumptions above based on this info as
something that needs clearing up.
>> 3.7.2 aren't network_uuid and port uuid redundant to attachment uuid
>> (I assume quantum service knows from attachment the port and network)
>> - i have no objection to passing them to vdsm, just trying to
>> understand reasoning for this.
>> I am missing what happens at vdsm level in this point (even after
>> reading the matching vdsm part)
> The network ID and the port ID are returned by the Quantum service. The
> attachment ID is passed to the Quantum server. If this is unique then it
> can be a key to the above (I am currently working on the scale issue
> with Quantum and there are 2 options, the first is that it is unique,
> the second is that all three are passed to the agent). It is a few extra
> bytes passed. I think that it is better to be safe and have all of the
> information on VDSM for future use.
>> 3.8 vm suspend/resume (it's fine to say it behaves like X, but still
>> need to be covered to not cause bugs/regressions)
> The Quantum port status can be updated.
please document this operation so someone testing this will have the
full picture.
>> 3.9 vm stop
> Same as above
please document this operation so someone testing this will have the
full picture.
>> 3.9.1 need to make sure vm stop when engine is down is handled
this requires adhering to. again, not a merge blocker to code. but in
my view part of phase 1 (my view of phase 1 is deployment is still
manual, but flows are handled correctly and do not leave leftovers).
please document this operation so someone testing this will have the
full picture.
>> 3.9.2 what happens if quantum service is down? unlike start vm or
>> network creation the operation in this case cannot be
>> stopped/rolledback, only rolled forward.
> We have to ensure that it is up.
we have several use cases requiring to be able to start virtual
machines even when engine is down (regardless of HA aspects for the
engine).
this means admin wants to go to a host, and start the VM from the
command line.
the VM definition is recovered from the OVF for such a manual start.
sanlock is supposed to protect against vm corruption in this case,
but the VMs will need networking to be useful.
so what's the emergency flow for starting a VM?
If the user is using the VDSM command line for VM creation then the
same support should be offered for Quantum. The underlying assumption
here is that the VM is using a network that exists. If this is the case
then we should be OK here.
>> 3.10 hot plug nic?
> Each new NIC has an attachment ID - the agent knows that a new NIC is
> added and treats accordingly.
true, but i assume you would need to change the api here to get the
quantum details like the attachment_id (and the rest).
also, all affected use cases should be documented for someone testing
this to have a full picture of scope.
The oVirt engine is responsible for allocating the attachment ID. The
fact that the engine does the hot nic assignment ensures that it will
also assign the attachment. I have yet to understand the hot vnic flow
- the code that I was working on did not have this support. I should
also test this with
>> 3.11 vm migration
>> 3.11.1 ok, so this is the first time i understand hosts can be mixed
>> in same cluster. worth specifying this in the beginning (still not
>> clear if mixed plugins can exist)
> If the networks are configured with the same characteristics then this
> should be OK. As mentioned above we are working on a blueprint to deal
> with connecting to existing networks - i.e. enable the user/admin to
> configure the VLAN tags
please define under the assumptions part what the network
characteristics are, and how they are mapped - which plugins support
which characteristics.
also, monitoring wise, you are implying each host must be able to
accommodate all types of characteristics in the networks in the cluster.
This should be discussed. Statistics are currently not reported.
>> 3.11.2 afair, we don't deal with engine talking to both hosts during
>> live migration. only to host A, who is then communicating with host B.
>> so why not always have the VM configuration at vm start (and hot plug
>> nic) have the quantum details so live migration can occur at will
>> without additional information?
> I do not understand - can you please explain? VDSM creates the tap device
> and builds the libvirt files. The agents detect the tap device and
> attach to the network. I do not understand why it is a problem for the
> live migration. This is also driven by the libvirt XML's being created.
> Is this correct?
that is fine. iirc, the wiki states engine needs to pass information
to the target host. we don't have such a flow in live migration;
live migration should be based on the information already available to
vdsm in the source host.
I don't know if quantum cares about the attachment/port residing on
more than one host (the target host has to set them up before
migration starts, and the source host removes them only after migration
completes).
i hope we are both understanding this the same way.
in any case, need to remember to test that failure of live migration in
its various steps cleans up any quantum related changes.
>> 3.11.3 "In order to implement the above a REST client needs to be
>> implemented in the oVirt engine. "
>> did not understand this statement - please elaborate.
> All interface with the Quantum server is done via REST. In order for
> oVirt to be able to communicate, it will need to send REST messages to
> the server and be able to parse the replies - this is what I meant by
> the REST client.
>> 4. host management
>> 4.1 deployment
>> we do not deploy packages from engine to hosts, we can install them
>> from a repo configured to the host. but this is done today only as
>> part of initial bootstrap/installation of host.
>> also, it is not relevant for ovirt node which is 'firmware' like.
>> any reason to not require the 'plugin installation packages' at vdsm
>> rpm level for plugins we think are good enough to use (until then,
>> responsibility to deploy them is of admin)
> Correct. I agree with you.
ok, so on ovirt-node all quantum agents will be pre-installed.
if a quantum agent is changed to work correctly with ovirt/vdsm, why
not simply always require at rpm level to deploy it on the host?
then a regular host and ovirt-node are the same.
still need to define how vdsm reports which agents are supported by it
(which is different than those which are installed, say for UCS?),
which is different from those that are actively running since they are
I was thinking that when VDSM does the "handshake" with the engine,
this information would be transferred.
>> (what are plugin level packages at host level - aren't these the
> Each plugin has the relevant packages that should be installed.
>> 4.2 plugin configuration
>> per DC? per cluster? per host? per plugin? please provide more details
>> here on configuration details expected and flow of when they are
>> configured and when they are expected to change?
> I think that plugin per cluster is a good start. This could limit live
> migration problems.
question is whether this is part of phases, or part of assumptions (end
goal).
modeling should be aligned with the end goal (i am not suggesting it
can't be done in steps, just that the end goal should be understood so
it doesn't interrupt it - especially for the API part, which requires
keeping backward compatibility as much as possible).
>> I think this will merit a per 'ovirt supported' quantum plugin to see
>> it works in a way we can use.
>> 4.3 connectivity
>> again, this requires more details. if needed per plugin.
>> what is expected? how authentication/encryption happens? what iptables
>> rules need to change in engine/host if at all, etc.
>> I'm fine with this being 'direct access to db from hosts' for POC
>> level of patches, but not for something we actually merge/provide
>> support for later.
> I am currently working on a blueprint to ensure better scaling of
> quantum agents. This will be done by making use of the nova RPC library.
> This supports Qpid, rabbit mq, kombu etc. These have an option of being
> secure. Please clarify if this suffices?
having all work on the same channel will solve most problems.
still need to map those that require data not in quantum for some
reason, and try to pass quantum the needed data as custom parameters
rather than add a channel to engine, if possible/makes sense.
do all of these require a broker to be deployed?
I assume all support certificate based authentication.
it should be the same one we'll pick for engine-vdsm (I heard fans of
zeromq as well), but this doesn't have to be phase 1, or we might be
lucky and agree on one already supported :)
>> 5. VDSM
>> 5.2 "The agent package can and may be received from the oVirt Engine
>> or can be downloaded via RPM's"
>> see 4.1 above - we don't deploy rpm's/code on the fly today.
> When I was sitting with the guys from oVirt this is what I was told. I
> guess that I misunderstood.
ok, I assume 4.1 resolves this and we simply deploy all those made to
work with ovirt as part of vdsm rpm requirements?
I think that we need to discuss packaging - should the quantum agents
be part of VDSM? I am not sure.
(implies each one should have a sub rpm, not sure if that is the case)
>> 5.3 "in addition to the treatment below VDSM should also maintain a
>> health check to the Quantum agent"
>> what if the restart didn't help? how should ovirt treat the host wrt
>> networking? to running VMs? to ability to live migrate VMs from the
>> host if needed?
> If the agent is down then the oVirt engine should be notified to at
> least events for the user.
I assume vdsm knows which agent to start and later monitor based on
the host level config by admin of the fabric interface (whether the
plugin type is cluster or interface level)
This is related to the issue above. If the user runs the RPM for VDSM
then she/he can run the agent RPM - this will register the agent.
>> 5.4 "Logical Network Management"
>> please see 3.11 above - we would want live migration to not require
>> additional information, so details should be available even if not
>> immediately acted upon.
>> 5.5 "The tap device created uses an "ethernet" network device. This
>> means that the creation of the libvirt XML file is a bit different."
> Yes, that is correct
>> 5.5.1 does this impact stable device addresses somehow?
> Not that I am aware of
>> 5.5.2 how is live migration possible if the libvirt xml to a non
>> quantum host is different (or is this the binding part only)?
> If the libvirt is "patched" when the migration takes place then this
> should not be a problem.
does libvirt support something like that? the live migration happens
directly between the two libvirt's.
libvirt created an abstraction layer for their networking model to
allow live migration between hosts where the physical layer is
different, so the logical layer remains the same during live migration.
I just don't know if the 'ethernet' attribute you mentioned is in that
physical mapping layer (which would be fine), or in the logical one
(which won't live migrate).
please verify with a libvirt expert (dbp modeled that part iirc) -
speak with Dan.
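For reference, a sketch of the device element in question: a Quantum tap device would use an interface of type 'ethernet' with an explicit target dev, instead of the usual type 'bridge'. Attribute values are illustrative, and as said above, whether this sits in libvirt's physical-mapping or logical layer is exactly what needs confirming.

```python
# Sketch: building the libvirt <interface type='ethernet'> element
# implied above (vs. the usual type='bridge'). Values are
# illustrative; migration behaviour should be confirmed with a
# libvirt expert.
def ethernet_interface_xml(mac, tap_dev):
    return ("<interface type='ethernet'>"
            "<mac address='%s'/>"
            "<target dev='%s'/>"
            "<model type='virtio'/>"
            "</interface>") % (mac, tap_dev)

xml = ethernet_interface_xml("00:1a:4a:00:00:01", "tap0")
```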
>> 5.6 I assume libvirtvm.py is part of vdsm.
>> is quantum.py part of quantum code base or vdsm codebase (it sounds
>> like it should be part of quantum code base)?
>> so how exactly the rpm for this would look like? deploy it to
>> /xxx/quantum and have it add a symbolic link to it from vdsm paths?
> In my opinion this should be part of VDSM. It would be ideal if each
> plugin can bind/load a driver - in Nova each driver is part of the nova
> code. If the API is well defined then all vendors can provide their
> drivers. This is open for discussion and we should try and understand
> what the best way of doing this is. I like the NOVA model.
doesn't this mean we need to recode each driver for vdsm?
I thought it was part of making quantum a separate entity that it
provides drivers as well (especially if the api for drivers is well
defined).
OpenStack nova has Quantum drivers. It would be best if we could use the
>> 5.7 "When a communication channel is established between VDSM and the
>> oVirt engine. The VDSM host should notify the oVirt Engine of the
>> Quantum fabric that is supported."
>> how does vdsm go about detecting an agent is installed exactly?
> The user needs to install a package for the agent - this creates the
> relevant configuration files. These files can be used to detect the
> running agent or via "ps"
1. I assume per the discussion in the deployment section that all
relevant code is always deployed.
2. vdsm needs to report what is installed
3. vdsm needs to get the chosen technology at host level fabric
interface definition to configure it to up state.
4. or, as part of engine "init_vds_on_up" flow, engine needs to pass
to vdsm the quantum relevant configuration (if defined at cluster level
and not relevant to interface level, maybe)
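Points 2-4 above could boil down to a quantum section in VDSM's capabilities/handshake report, e.g. (all key names hypothetical, not the actual VDSM API):

```python
# Sketch of a quantum fragment in the VDSM capabilities/handshake
# report implied by the steps above: what is installed, what is
# actively running, and which interface the admin configured as
# fabric for which plugin. All key names are hypothetical.
def quantum_caps(installed_agents, running_agents, fabric_interfaces):
    return {
        "quantum": {
            "installed": sorted(installed_agents),
            "running": sorted(running_agents),
            # interface -> plugin chosen by the admin
            "fabric": dict(fabric_interfaces),
        }
    }

caps = quantum_caps(["openvswitch", "linuxbridge"], ["openvswitch"],
                    {"eth2": "openvswitch"})
```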
>> (especially since deployment wise, most likely is all agents would be
>> deployed via rpm requirements?)
>> is any agent a service at host level rather than code only?
>> is there a scenario where a vdsm level plugin isn't required?
>> 6. open issues
>> 6.1 worth mentioning if any are in the works
> No shortage :). But we have a good start. There are some blueprints in
> the works which solve a large amount of problems
I'd add links to those that exist.
(may be relevant for authentication)
>> 6.2 specifying the vlan from ovirt engine is not a gap?
> Yes it is. This is currently being addressed. This is not a reason to
> stop us moving ahead.
>> 6.3 ok, so this is the first time "no multiple plugins" is mentioned
>> ("Non-uniform technology"). but it sounds like the approach in the
>> wiki assume it is/will be possible to have multiple technologies
>> (agents/plugins) going forward?
> My take is that this can be hidden by oVirt engine. It is just an
> implementation detail - this should not affect the user experience -
> which at the end of the day is what counts
this is kind of a crucial part of the assumptions/modeling part for
this feature.
>> 6.4 need to discuss which of the open issues should block going
>> forward with merging of code, and expected timeframe for resolution of
>> some of them
> Correct. In my opinion there are no major blocking issues at the moment.
again, apart of code level issues, we need to clear the modeling so
code review can be done compared to some understanding of the plan.
in my view, clarifying these is blocking understanding the feature:
- defining the assumptions (even if some are not supported yet)
- modeling per the assumptions
- UI mockups per the modeling (since they make things visible)
- API modeling, since it is a pain for anyone integrating with it when
it changes
- db scheme changes
Quantum is evolving and changing. We will have to be
I agree solving these isn't blocking merging code:
- communication channels between quantum components
- deployment implications/flow
I do think these would block further consumption, so they should be
covered/understood as to what the gaps are.
for the communication part, i think the approach you suggested should
be fine (assuming it will merge with the same brokering technology used
for engine-vdsm).
but for the deployment part, "how this will look like" should be
cleared up in a bit more detail to have the gap list (on both engine
and vdsm)
>> 7. other questions/items (some mentioned above as well)
>> 7.1 please add ui mockups, db scheme changes and REST api changes
>> 7.2 i'm missing the deployment flow:
>> - user installs engine. is quantum a separate install or bundled going
>> forward?
>> - any configuration items by user at this point? what are they?
>> - what rpms are deployed at host level? what configuration do they need
>> - communication channels and their security are a must to understand
>> 7.3 upgrade path / compatibility levels this is relevant to?
>> 7.4 monitoring flow - how is monitoring/state of a host is affected by
>> quantum service being down, communication problem from host to (where