Re: [Users] Fwd: Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly

Subject: [Users] Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
From: Gianluca Cecchi <gianluca.cecchi@gmail.com>
Date: 09/01/13 15:55
To: users <users@ovirt.org>
Hello, on my oVirt host configured with F18, all-in-one, and ovirt-nightly as of ovirt-engine-3.2.0-1.20130107.git1a60fea.fc18.noarch,
I was able to import a CentOS 5.8 VM coming from a CentOS 6.3 host.
The oVirt node server is the same one where I'm unable to run a newly created Windows 7 32-bit VM... See http://lists.ovirt.org/pipermail/users/2013-January/011390.html
In this thread I would like to report on the successful import phases and two doubts:
1) no password requested during virt-v2v
2) no connectivity in the imported guest.
On the CentOS 6.3 host:

# virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network ovirtmgmt c56cr
c56cr_001: 100% [===================================================================================]D 0h02m17s
virt-v2v: c56cr configured with virtio drivers.
---> I would have expected to be asked for the password of a privileged user in the oVirt infra; instead the export process started without any prompt. Is this correct? In my opinion this could be a security concern....
virt-v2v doesn't require a password here because it connects directly to your NFS server. This lack of security is inherent in NFS(*). This is a limitation you must manage within your oVirt deployment. Ideally you would treat your NFS network as a SAN and control access to it accordingly.

(*) There is no truth in the rumour that this stands for No F%*$&"£g Security ;)
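Following the advice above about controlling access, one common mitigation is to restrict the export in /etc/exports to the hypervisor's address rather than the whole network. A sketch only, reusing the path and address from this thread (/EXPORT, 10.4.4.59); oVirt additionally expects the export directory to be owned by vdsm:kvm (36:36):

```shell
# /etc/exports on the NFS server: allow only the oVirt host,
# not every client that can reach the NFS network.
# Path and address are the ones mentioned in this thread.
/EXPORT  10.4.4.59(rw,sync,no_subtree_check)

# Ownership expected by oVirt/VDSM for storage domains:
#   chown -R 36:36 /EXPORT
# Apply the new exports list without restarting the NFS server:
#   exportfs -r
```

This does not add authentication (NFSv3 trusts client IPs), but it narrows the exposure considerably compared with an open export.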
Import process has begun for VM(s): c56cr. You can check import status in the 'Events' tab of the specific destination storage domain, or in the main 'Events' tab
---> regarding the import status, the "specific destination storage domain" would be my DATA domain, correct? Because I see nothing in it and nothing in export domain. Instead I correctly see in main events tab of the cluster these two messages
2013-Jan-09, 16:16 Starting to import Vm c56cr to Data Center Poli, Cluster Poli1
2013-Jan-09, 16:18 Vm c56cr was imported successfully to Data Center Poli, Cluster Poli1
So probably the first option should go away....?
I'm afraid I didn't follow this. Which option?
I was then able to power on and connect via vnc to the console. But I noticed it has no connectivity with its gateway
The host is on vlan 65 (em3 + em3.65 configured).
On the host:

3: em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP qlen 1000
    link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21c:c4ff:feab:3add/64 scope link
       valid_lft forever preferred_lft forever
...
6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21c:c4ff:feab:3add/64 scope link
       valid_lft forever preferred_lft forever
7: em3.65@em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
    inet 10.4.4.59/24 brd 10.4.4.255 scope global em3.65
    inet6 fe80::21c:c4ff:feab:3add/64 scope link
       valid_lft forever preferred_lft forever
...
13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 500
    link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fed3:8fa3/64 scope link
       valid_lft forever preferred_lft forever
[g.cecchi@f18aio ~]$ ip route list
default via 10.4.4.250 dev em3.65
10.4.4.0/24 dev em3.65 proto kernel scope link src 10.4.4.59
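The VLAN/bridge wiring above can be sanity-checked directly from the host. A sketch, using the interface names appearing in this thread (em3, em3.65, ovirtmgmt, vnet0); the key question is whether tagged traffic for vlan 65 actually reaches the bridge the guest's vnet0 is attached to:

```shell
# Which interfaces are enslaved to the ovirtmgmt bridge?
# In the output above, untagged em3 (not em3.65) has "master ovirtmgmt".
brctl show ovirtmgmt

# Can the host itself reach the gateway over the tagged interface?
ping -c 3 -I em3.65 10.4.4.250

# While the guest pings its gateway, watch the physical NIC:
# tagged frames here, but a bridge carrying only untagged traffic,
# would explain why the guest gets no connectivity.
tcpdump -i em3 -e vlan 65
```

These are diagnostic commands to run on the specific host, not a portable script; interface names will differ on other machines.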
ovirtmgmt is tagged in datacenter Poli1
The guest was originally configured (and kept this configuration) on bridged vlan65 on the CentOS 6.3 host. Its parameters:
eth0 with IP 10.4.4.53 and gateway 10.4.4.250
From the webadmin point of view it seems OK; see also this screenshot: https://docs.google.com/open?id=0BwoPbcrMv8mvbENvR242VFJ2M1k
Any help will be appreciated. Do I have to enable some kind of routing that is not enabled by default..?
virt-v2v doesn't update IP configuration in the guest. This means that the target guest must be on the same ethernet segment as the source, or it will have to be manually reconfigured after conversion.

Matt
--
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
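If keeping the converted guest on the same ethernet segment as the source is not an option, the static network configuration can be adjusted by hand inside the guest, as suggested above. A sketch for a CentOS 5.x guest using the standard Red Hat network-scripts; the addresses shown are the ones from this thread and would be replaced with values valid on the new segment:

```shell
# Inside the imported CentOS 5.x guest, as root.
# Rewrite eth0's static configuration for the new segment.
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.4.4.53
NETMASK=255.255.255.0
GATEWAY=10.4.4.250
EOF

# If the ifcfg file contains an HWADDR= line, it must match the new
# MAC that oVirt assigned to the VM's NIC, or be removed.
service network restart
```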

On Mon, Jan 14, 2013 at 11:55 AM, Matthew Booth wrote:
Import process has begun for VM(s): c56cr. You can check import status in the 'Events' tab of the specific destination storage domain, or in the main 'Events' tab
---> regarding the import status, the "specific destination storage domain" would be my DATA domain, correct? Because I see nothing in it and nothing in export domain. Instead I correctly see in main events tab of the cluster these two messages
2013-Jan-09, 16:16 Starting to import Vm c56cr to Data Center Poli, Cluster Poli1
2013-Jan-09, 16:18 Vm c56cr was imported successfully to Data Center Poli, Cluster Poli1
So probably the first option should go away....?
I'm afraid I didn't follow this. Which option?
It says that you can check the status of the import phase in two places:
a) in the 'Events' tab of the specific destination storage domain
b) in the main 'Events' tab
and a) is not true. At least, I don't see anything there, neither during the import nor after it completes. So you have to remove text a) from the message, or make changes to the code so that a) is also true.
As I wrote in the first message, this happens using the ovirt-nightly repo for Fedora 18 at level 3.2.0-1.20130107.git1a60fea
Gianluca
participants (2)
-
Gianluca Cecchi
-
Matthew Booth