
----- Original Message -----
On 05/15/2012 02:55 PM, Irena Berezovsky wrote: ...
oVirt engine keeps it. How would one query by it, though? What API and credentials would they be using to do so? [IB] I think it should be accessible via REST and the SDK. I guess it should use admin credentials, but I'm not sure.
REST doesn't support lookup by every property, and there is no search for vNICs/ports, so it's a modeling question. I'm not sure I like the concept of providing administrative user credentials to all hosts, or of managing a password change in this case. I'm also not sure hosts should use the REST API rather than another API intended for bi-directional communication with the engine, secured in the same way the XML-RPC channel is secured.
[IB2] Considering the Quantum plugin implementation, I guess it may also handle physical network configuration (the network between hypervisors). In such a case the Quantum plugin will need to resolve, for a VIF UUID plugged into a certain network, on which server it is deployed. I think the best way is to get it via the SDK or REST. Do you have any suggestion?
From what I understand, this isn't about using Quantum for the non-VM networks (management, storage, live migration, etc.).
I don't think it is advisable to spread networking configuration and monitoring all over the place. Ultimately, all network configuration, including the physical and non-VM networks, should be owned by a single "network manager", so I think we should aim to augment Quantum to support that.
We need to find a solution for passing the needed info to agents/drivers through supported communication channels. Yes, the REST API is there, but I don't see a provisioning process providing hosts with credentials to access the REST API (or, as noted above, the REST API having a query matching the requested information).
OTOH, it could make sense to have hosts pull the various types of configuration information relevant to them from oVirt engine via an API limited to hosts, authenticated by their host certificates, and then have agents use this as the supported communication channel.
I think that agents/drivers should only be able to talk back home to the plugin that supervises them. The Quantum plugin, in turn, can access oVirt through the REST API to get/extract any information needed.
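For illustration, a rough sketch of such a plugin-side lookup, assuming the oVirt Python SDK (ovirtsdk) and admin credentials; the engine URL, credentials and helper name are placeholders. Since the search engine does not cover vNICs, it has to scan VMs, which is exactly the modeling gap noted above:

from ovirtsdk.api import API

# Sketch only: find which host runs the VM owning a given vNIC UUID by
# scanning VMs, since there is no direct search for vNICs/ports.
def host_for_vnic(vnic_id):
    api = API(url='https://engine.example.com/api',
              username='admin@internal', password='secret', insecure=True)
    try:
        for vm in api.vms.list():
            for nic in vm.nics.list():
                if nic.get_id() == vnic_id:
                    host_ref = vm.get_host()  # present only while the VM is running
                    return api.hosts.get(id=host_ref.get_id()) if host_ref else None
        return None
    finally:
        api.disconnect()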
VM Migrate: Even though it's just an initial suggestion, I think the VM migration use case should also be elaborated for the 'else' cases. What happens if the target host does not support Quantum? What happens if the target host fails to run the VM? Another issue is a lack of calls to Quantum. To my understanding (but I may be wrong), in OpenStack it will issue unplug and plug attachment calls. I agree. The goal here should be VM uptime and connectivity. If the "network" connectivity can be supported, even in a limited fashion, then great - the migration can take place - but this should nonetheless only be done when there is no other option. The live migration algorithm may have to be tweaked to support this.
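To make the 'else' handling concrete, here is a sketch of one possible ordering: plug on the destination before the switchover, unplug on the source only once the VM is confirmed running there, and roll back if the target fails. Every name in it is hypothetical; the point is only the ordering and the temporary double-plug (discussed further below).

# Sketch of one possible live-migration ordering; all objects/methods are
# hypothetical stand-ins for the host-side plug/unplug operations.
def migrate_with_quantum(vm, src_host, dst_host, port):
    if not dst_host.supports_quantum:
        raise RuntimeError('target host cannot honor the VM network policy')
    dst_host.plug_interface(port)        # same interface temporarily plugged twice
    try:
        src_host.migrate(vm, dst_host)
    except Exception:
        dst_host.unplug_interface(port)  # target failed to run the VM: roll back
        raise
    src_host.unplug_interface(port)      # VM confirmed running on the destination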
We do not assume engine involvement in live migration today other than issuing a 'migrate' command to the source host, or the host needing information from the engine to perform the live migration. There must be a very good reason to break this assumption. [IB] If this assumption should stay in place, maybe it is worth considering invoking the 'create port' and 'plug interface' Quantum calls from the VDSM node and not from oVirt engine. This is the model in OpenStack: nova-compute invokes the 'create port' and 'plug interface' Quantum calls. It is described in the following doc: http://wiki.openstack.org/QuantumAPIUseCases
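Roughly, the two calls a compute node (nova-compute today, VDSM under this proposal) would issue look like the sketch below. This is only an approximation against the Quantum v1.x REST API described on that wiki page; the endpoint, tenant/network IDs and VIF ID are placeholders:

import json
import requests

QUANTUM = 'http://quantum.example.com:9696/v1.1/tenants/demo'
HEADERS = {'Content-Type': 'application/json'}

def create_port(network_id):
    # 'create port': allocate a logical port on the virtual network
    r = requests.post('%s/networks/%s/ports' % (QUANTUM, network_id),
                      data=json.dumps({'port': {'state': 'ACTIVE'}}),
                      headers=HEADERS)
    r.raise_for_status()
    return json.loads(r.text)['port']['id']

def plug_interface(network_id, port_id, vif_id):
    # 'plug interface': attach the VM's VIF (interface id) to the logical port
    r = requests.put('%s/networks/%s/ports/%s/attachment' % (QUANTUM, network_id, port_id),
                     data=json.dumps({'attachment': {'id': vif_id}}),
                     headers=HEADERS)
    r.raise_for_status()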
I'm not sure I like that better. Would it be possible for the engine to pre-provision the port on more than one host? [IB2] To my understanding, defining a port on the network only defines a logical port and does not imply a physical location. Regarding the interface plugging, it should probably work to plug the same interface more than once temporarily during the VM migration phase.
If it doesn't provide the physical location, then I am not sure I understand why another call is required to begin with?
The 'plug interface' call is used to instantiate and configure the logical network/port on the destination host, as well as on the fabric the host is connected to (adjacent switch, etc.). Thanks, Roni