[Users] Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
Gianluca Cecchi
gianluca.cecchi at gmail.com
Wed Jan 9 10:55:53 EST 2013
Hello,
on my oVirt host configured with F18, all-in-one, and ovirt-nightly as of
ovirt-engine-3.2.0-1.20130107.git1a60fea.fc18.noarch,
I was able to import a CentOS 5.8 VM coming from a CentOS 6.3 host.
The oVirt node is the same server where I'm unable to run a newly created
Windows 7 32-bit VM...
See http://lists.ovirt.org/pipermail/users/2013-January/011390.html
In this thread I would like to report the successful import steps and two
doubts:
1) no password was requested during virt-v2v
2) no network connectivity in the imported guest
On the CentOS 6.3 host:
# virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network ovirtmgmt c56cr
c56cr_001: 100% [===================================================================================]D 0h02m17s
virt-v2v: c56cr configured with virtio drivers.
---> I would have expected to be asked for the password of a privileged user
in the oVirt infrastructure; instead the export process started without any
prompt. Is this correct?
In my opinion this could be a security concern...
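My guess, for what it's worth: with -o rhev virt-v2v seems to mount the NFS
export storage domain directly and never talks to the engine, so the only
access control involved would be at the NFS level rather than an oVirt user
password. Just as a sketch of how one could verify that assumption:

  # on the CentOS 6.3 host, list who is allowed to mount the export domain
  showmount -e 10.4.4.59
  # while the conversion runs, check what virt-v2v has mounted
  mount | grep 10.4.4.59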
During the virt-v2v run, on the oVirt node I see this inside the NFS export domain:
$ sudo ls -l /EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/v2v.pmbPOGM_/30df5806-6911-41b3-8fef-1fd8d755659f
total 10485764
-rw-r--r--. 1 vdsm kvm 10737418240 Jan 9 16:05 0d0e8e12-8b35-4034-89fc-8cbd4a7d7d81
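As a side note, one way to follow the conversion progress from the oVirt side
is to watch the allocated size of the image grow (just a sketch, same path as
above):

  $ watch -n 10 "sudo du -sh /EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/v2v.pmbPOGM_/30df5806-6911-41b3-8fef-1fd8d755659f"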
At the end of the process:
$ sudo ls -l /EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/images/
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:05 30df5806-6911-41b3-8fef-1fd8d755659f
$ sudo ls -lR /EXPORT/
/EXPORT/:
total 4
drwxr-xr-x. 5 vdsm kvm 4096 Jan 9 16:06 b878ad09-602f-47da-87f5-2829d67d3321

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321:
total 12
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:01 dom_md
drwxr-xr-x. 3 vdsm kvm 4096 Jan 9 16:06 images
drwxr-xr-x. 4 vdsm kvm 4096 Jan 9 16:02 master

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/dom_md:
total 8
-rw-rw----. 1 vdsm kvm 0 Jan 9 16:01 ids
-rw-rw----. 1 vdsm kvm 0 Jan 9 16:01 inbox
-rw-rw----. 1 vdsm kvm 512 Jan 9 16:01 leases
-rw-r--r--. 1 vdsm kvm 350 Jan 9 16:01 metadata
-rw-rw----. 1 vdsm kvm 0 Jan 9 16:01 outbox

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/images:
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:05 30df5806-6911-41b3-8fef-1fd8d755659f

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/images/30df5806-6911-41b3-8fef-1fd8d755659f:
total 10485768
-rw-r--r--. 1 vdsm kvm 10737418240 Jan 9 16:06 0d0e8e12-8b35-4034-89fc-8cbd4a7d7d81
-rw-r--r--. 1 vdsm kvm 330 Jan 9 16:05 0d0e8e12-8b35-4034-89fc-8cbd4a7d7d81.meta

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master:
total 8
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:02 tasks
drwxr-xr-x. 3 vdsm kvm 4096 Jan 9 16:06 vms

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/tasks:
total 0

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/vms:
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:06 2398149c-32b9-4bae-b572-134d973a759c

/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/vms/2398149c-32b9-4bae-b572-134d973a759c:
total 8
-rw-r--r--. 1 vdsm kvm 4649 Jan 9 16:06 2398149c-32b9-4bae-b572-134d973a759c.ovf
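In case it is useful, the OVF written by virt-v2v can be inspected before the
import to double-check the VM definition (disks, network); for example, with
xmllint from libxml2 (assumed installed):

  $ sudo xmllint --format /EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/vms/2398149c-32b9-4bae-b572-134d973a759c/2398149c-32b9-4bae-b572-134d973a759c.ovf | less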
Then I began the VM import in webadmin:
Import process has begun for VM(s): c56cr.
You can check import status in the 'Events' tab of the specific destination
storage domain, or in the main 'Events' tab
---> regarding the import status, the "specific destination storage domain"
would be my DATA domain, correct?
Because I see nothing there, and nothing in the export domain either.
Instead, I correctly see these two messages in the main Events tab of the cluster:
2013-Jan-09, 16:16 Starting to import Vm c56cr to Data Center Poli, Cluster Poli1
2013-Jan-09, 16:18 Vm c56cr was imported successfully to Data Center Poli, Cluster Poli1
So probably the first option should go away...?
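A more detailed view of the import should in any case be available in the
engine log on the all-in-one host (standard ovirt-engine log path assumed):

  $ sudo tail -f /var/log/ovirt-engine/engine.log | grep -i c56cr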
During the import, on the oVirt host
[g.cecchi@f18aio ~]$ vmstat 3
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd    free   buff    cache   si   so    bi    bo   in   cs us sy id wa
 1  1      0 1684556 121824 28660956    0    0     8    69   21   66  0  0 99  0
 1  1      0 1515192 121824 28830112    0    0     0 58749 4468 6068  0  3 85 11
 0  1      0 1330708 121828 29014320    0    0     0 59415 4135 5149  0  4 85 11
$ sudo iotop -d 3 -P -o -k
Total DISK READ: 0.33 K/s | Total DISK WRITE: 56564.47 K/s
  PID  PRIO  USER     DISK READ    DISK WRITE   SWAPIN     IO>    COMMAND
22501  idle  vdsm   55451.24 K/s  56459.45 K/s  0.00 %  91.03 %  dd if=/rhev/data-center/~count=10240 oflag=direct
  831  be/4  root       0.00 K/s      0.00 K/s  0.00 %   3.56 %  [flush-253:1]
  576  be/3  root       0.00 K/s     19.69 K/s  0.00 %   0.72 %  [jbd2/dm-1-8]
23309  be/3  vdsm       0.33 K/s      0.00 K/s  0.00 %   0.00 %  python /usr/share/vdsm/st~moteFileHandler.pyc 49 47
17057  be/4  apache     0.00 K/s      2.63 K/s  0.00 %   0.00 %  httpd -DFOREGROUND
15524  be/4  root       0.00 K/s      1.31 K/s  0.00 %   0.00 %  libvirtd --listen
$ ps -wfp 22501
UID PID PPID C STIME TTY TIME CMD
vdsm     22501 16120  8 16:16 ?        00:00:14 /usr/bin/dd if=/rhev/data-center/89d40d09-5109-4070-b9b0-86f1addce8af/b878ad09-602f-
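The command line above is truncated by ps; repeating it with a double -w
should show the full, untruncated dd source (export domain) and destination
(data domain) paths:

  $ ps -wwfp 22501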
I was then able to power on the VM and connect to its console via VNC.
But I noticed it has no connectivity with its gateway.
The host is on VLAN 65 (em3 + em3.65 configured).
On the host:
3: em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP qlen 1000
    link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21c:c4ff:feab:3add/64 scope link
       valid_lft forever preferred_lft forever
...
6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21c:c4ff:feab:3add/64 scope link
       valid_lft forever preferred_lft forever
7: em3.65@em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
    inet 10.4.4.59/24 brd 10.4.4.255 scope global em3.65
    inet6 fe80::21c:c4ff:feab:3add/64 scope link
       valid_lft forever preferred_lft forever
...
13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 500
    link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fed3:8fa3/64 scope link
       valid_lft forever preferred_lft forever
[g.cecchi@f18aio ~]$ ip route list
default via 10.4.4.250 dev em3.65
10.4.4.0/24 dev em3.65 proto kernel scope link src 10.4.4.59
ovirtmgmt is tagged in datacenter Poli1.
The guest was originally configured (and kept this configuration) on bridged
VLAN 65 on the CentOS 6.3 host. Its parameters:
eth0 with IP 10.4.4.53 and gateway 10.4.4.250
From the webadmin point of view it seems OK; see also this screenshot:
https://docs.google.com/open?id=0BwoPbcrMv8mvbENvR242VFJ2M1k
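In case it helps the diagnosis, this is what I would check next (just a
sketch): whether the VM's tap device and the VLAN interface actually sit on
the same bridge, and where the guest's ARP requests for the gateway show up
on the wire. My suspicion is that, since ovirtmgmt is tagged in the
datacenter but the bridge on the host sits on plain em3 (the host IP being
on em3.65), guest traffic may leave untagged:

  # bridge membership on the oVirt host
  $ brctl show ovirtmgmt
  # from the guest, ping 10.4.4.250 and watch where the ARP requests appear
  $ sudo tcpdump -n -e -i em3 arp
  $ sudo tcpdump -n -e -i em3.65 arp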
Any help will be appreciated.
Do I have to enable some kind of routing that is not enabled by default...?
Thanks,
Gianluca