[Users] Report size of thin provisioned disk
by Jeff Bailey
With the 3.2 alpha, on the disks subtab of the virtual machines tab I'm
seeing the correct "virtual size" for the disk, but the "actual size"
remains 1GB even though the drive has grown to 11GB (according to lvs
output). It doesn't seem to be causing any problems. The auto-growth
(from my perspective) is working much more smoothly than it did a
release ago :) I just wondered if anyone else had noticed this.
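In case anyone wants to compare the two numbers the same way I did, here's a small sketch (the LV name and size below are illustrative, not my actual output) that converts lvs byte output to GiB:

```python
def parse_lvs(output):
    """Parse `lvs --noheadings --units b -o lv_name,lv_size` output
    into {lv_name: size_in_GiB}."""
    sizes = {}
    for line in output.strip().splitlines():
        name, size = line.split()
        sizes[name] = int(size.rstrip("B")) / 1024 ** 3
    return sizes

# Illustrative output for a disk that has grown to 11 GiB:
sample = "  mydisk  11811160064B\n"
print(parse_lvs(sample))  # {'mydisk': 11.0}
```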
Thanks,
Jeff
11 years, 11 months
[Users] AIO f18 nightly local storage active only in permissive mode
by Gianluca Cecchi
Hello,
Fedora 18 with all-in-one config and nightly build repo at this level:
ovirt-engine-3.2.0-1.20121217.git1e01c00.fc18.noarch
When I try to activate the local domain, I see it remain in "contending"
and then fail. I have tried 2-3 times.
[g.cecchi@f18aio ~]$ getenforce
Enforcing
As soon as I run
setenforce 0
[g.cecchi@f18aio ~]$ getenforce
Permissive
and try to activate again, it works and my local_host comes up
correctly.
Is this something already fixed in the next nightly, or should I send my
logs? Is this supposed to be a supported config with SELinux in
Enforcing mode or not (I'm referring to the tuned problem too..)?
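In the meantime, this is how I plan to capture the denials for the logs (a sketch; the module name is arbitrary):

```shell
# Back in Enforcing mode, retry the activation, then dump recent AVC denials
setenforce 1
ausearch -m avc -ts recent

# Stop-gap: turn the denials into a local policy module
ausearch -m avc -ts recent | audit2allow -M local-ovirt
semodule -i local-ovirt.pp
```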
Thanks
Gianluca
11 years, 11 months
Re: [Users] rhevm-cli - add nfs data storage
by Michael Pasternak
Hi Meni,
When you pass --datacenter-identifier, you are attaching an existing SD to the data-center.
Note that you can see which parameters to use for any command/action just by running the 'help' command:
[oVirt shell (connected)]# help add storagedomain
USAGE
add <type> [base identifiers] [attribute options]
DESCRIPTION
Creates a new object or adds existent with type storagedomain. See 'help add' for generic
help on creating objects.
ATTRIBUTE OPTIONS
The following options are available for objects with type storagedomain:
Overload 1:
* --host-id|name: string
* --type: string
* --storage-type: string
* --format: boolean
* --storage-address: string
* --storage-logical_unit: collection
{
logical_unit.address: string
logical_unit.port: int
logical_unit.target: string
logical_unit.username: string
logical_unit.password: string
logical_unit.serial: string
logical_unit.vendor_id: string
logical_unit.product_id: string
logical_unit.lun_mapping: int
logical_unit.portal: string
logical_unit.paths: int
logical_unit.id: string
}
* [--name: string]
* [--storage-override_luns: boolean]
Overload 2:
* --host-id|name: string
* --type: string
* --storage-type: string
* --format: boolean
* --storage-address: string
* --storage-path: string
* [--name: string]
Overload 3:
* --host-id|name: string
* --type: string
* --storage-type: string
* --format: boolean
* --storage-path: string
* [--name: string]
Overload 4:
* --host-id|name: string
* --type: string
* --storage-type: string
* --format: boolean
* --storage-path: string
* --storage-vfs_type: string
* [--name: string]
* [--storage-address: string]
* [--storage-mount_options: string]
* [--expect: 201-created]
* [--correlation_id: anystring]
....
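Concretely, the flow I mean looks like this (a sketch, untested, reusing the parameters from your command below; I've left out --format since the help above lists it as a boolean):

```
# Step 1: create the NFS storage domain (note: no --datacenter-identifier here)
[oVirt shell (connected)]# add storagedomain --name CLI_DataStorage --host-name orchid-vds1 --type data --storage-type nfs --storage-address orion.qa.lab.tlv.redhat.com --storage-path /export/meni/dc31-cli

# Step 2: now that the domain exists, attach it to the data-center
[oVirt shell (connected)]# add storagedomain --name CLI_DataStorage --datacenter-identifier DC31-CLI
```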
On 12/19/2012 11:56 AM, Meni Yakove wrote:
> Hi,
>
> I'm trying to add a new NFS data storage domain to my setup using the CLI and it fails:
>
> [RHEVM shell (connected)]# add storagedomain --name CLI_DataStorage --host-name orchid-vds1 --type data --storage-type nfs --format v3 --storage-address
> orion.qa.lab.tlv.redhat.com --storage-path /export/meni/dc31-cli --datacenter-identifier DC31-CLI
>
> error:
> status: 400
> reason: Bad Request
> detail: Entity not found: Storage: name=CLI_DataStorage
>
> On the user guide:
>
> *Create NFS data storage*
>
> An NFS data storage domain is an exported NFS share attached to a data center. It provides storage for virtual machines. Add the NFS share as a data storage domain with
> the |add storagedomain| command.
>
> [RHEVM shell (connected)]# add storagedomain --name DataStorage --host-name MyHost --type data --storage-type nfs --format v1 --storage-address
> 192.168.0.10 --storage-path /exports/data --datacenter-identifier Default
>
>
> And:
>
>
> Example 5.13. Creating a new storage domain
>
> [RHEVM shell (connected)]# add storagedomain --name DataStorage --datacenter-name Default -type data
>
>
> I have tried both but still get the same error. What am I doing wrong?
>
> Thanks
> Meni
>
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
11 years, 11 months
[Users] ovirt 3.1 vnc passwordless setup
by Alexandru Vladulescu
Hi guys,
I am running oVirt 3.1 (fresh install -- no upgrade) on some CentOS 6.3
servers. It is a small setup with 2 hypervisors, 1 node controller and
1 NAS server providing the NFS export, storage & ISO domains for the
appliances.
The solution has one more server acting as a firewall and doing traffic
policy shaping in front of the network setup.
The problem I am facing right now is that I am trying to set up oVirt's
remote VNC connections so that they do not require a password login with
the 120-second expiration timeout. I would like to map the VNC ports on
the 2 hypervisor nodes to a VNC reflector installed on the node
controller, which also acts as the VPN server for out-of-band
connections.
That way, users connecting over VPN could get access to the VNC consoles
of the VMs without entering the web portal and needing an active VNC
password with a 120-second expiration time.
I tried digging through the libvirtd.conf and qemu.conf configuration
files and found some VNC parameters, but none seemed to do the job for
my purpose. I must mention that I have looked for a similar setup in the
oVirt documentation as well as googling it.
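For reference, the qemu.conf parameters I found were along these lines (a sketch, values illustrative; as far as I can tell the engine sets a per-session VNC ticket through libvirt when a console is opened, which would explain why these alone don't remove the password):

```
# /etc/libvirt/qemu.conf (sketch)
vnc_listen = "0.0.0.0"    # address the VNC servers bind to
#vnc_password = "..."     # global password; apparently overridden per-VM
                          # by the engine's expiring ticket
```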
Your help will be much appreciated.
My regards,
Alex Vladulescu
11 years, 11 months
[Users] Maintenance outage :: lists.ovirt.org :: 2012-12-20 01:00 UTC
by Karsten 'quaid' Wade
There is an outage of lists.ovirt.org for ten minutes to increase disk
space on the host.
The outage will occur at 2012-12-20 01:00 UTC. To view in your local
time:
date -d '2012-12-20 01:00 UTC'
== Details ==
We are constantly nearly out of disk space on linode01.ovirt.org, which
hosts the Mailman and yum/download repositories.
As a stop-gap, I've got 5 GB more of space I can put into the host VM.
We plan to migrate this host to one of the Alter Way instances, probably
a VM running on one of the two physical machines. If possible, we'll
make this move soonest for both the Mailman and download repositories.
== Affected services ==
* lists.ovirt.org
* resources.ovirt.org
* httpd redirects of old URLs
* IRC bot
== Not-affected services ==
* www.ovirt.org
* jenkins.ovirt.org
* gerrit.ovirt.org
* git.ovirt.org
== Future plans ==
Move services to a host with more space and capability, as part of the
Infra team general hosting plan.
--
Karsten 'quaid' Wade, Sr. Analyst - Community Growth
http://TheOpenSourceWay.org .^\ http://community.redhat.com
@quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41
11 years, 11 months
[Users] ovirt-shell as ForceCommand for ssh logins
by Jiri Belka
Hi,
ForceCommand in an ssh session can force a command for the logging-in
user. The problem is that ovirt-shell allows running shell commands,
which is not nice if we just want to give sysadmins a "restricted" CLI
for managing an oVirt environment.
1. Could an option be implemented to disable these shell "escapes"?
Something like '-S', so it would be 'command="/usr/bin/ovirt-shell -S"'
in the user's authorized_keys.
2. Could an ovirt-shell command like 'set' be implemented to set
configuration from within ovirt-shell and save it (yes, a user in
ovirt-shell should not touch the filesystem directly)?
Example:
> set username = "foo@domain"
> save -a # save all runtime settings
3. Aliases, like in the lftp client?
> alias lsvmmyvm list vms --query "name=myvm*"
> save alias lsvmmyvm
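For context, the restriction I can already do today looks like this (a sketch; the '-S' option above is the feature being requested and does not exist yet):

```
# ~/.ssh/authorized_keys -- force ovirt-shell and close the side doors
command="/usr/bin/ovirt-shell",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... admin@example

# or globally in /etc/ssh/sshd_config:
Match Group ovirtadmins
    ForceCommand /usr/bin/ovirt-shell
```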
jbelka
11 years, 11 months
[Users] attempted live snapshot, machine paused, wouldn't restart
by Erik Jacobs
I attempted to take a snapshot of a machine while it was running. I
noticed that the machine was paused, and I then attempted to resume it.
The machine looked like it was going to launch, but the events tab simply
indicated that launching the VM failed.
Here's the engine.log at the time:
2012-12-18 22:53:55,838 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsValidVDSCommand(storagePoolId =
2ccd03b1-fd0e-4578-88ce-e5065a9742d7, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 19399f32
2012-12-18 22:53:55,848 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsValidVDSCommand, return: true, log id:
19399f32
2012-12-18 22:53:55,894 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) START, IsVmDuringInitiatingVDSCommand(vmId =
35d58779-ee24-4926-8612-e053ff48881b), log id: 76d4a2b2
2012-12-18 22:53:55,895 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--0.0.0.0-8009-5) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 76d4a2b2
2012-12-18 22:53:55,935 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [17252a1c] Lock Acquired to object EngineLock
[exclusiveLocks= key: 35d58779-ee24-4926-8612-e053ff48881b value: VM
, sharedLocks= ]
2012-12-18 22:53:55,952 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [17252a1c] Running command: RunVmCommand internal:
false. Entities affected : ID: 35d58779-ee24-4926-8612-e053ff48881b Type:
VM
2012-12-18 22:53:55,966 INFO
[org.ovirt.engine.core.vdsbroker.ResumeVDSCommand] (pool-3-thread-49)
[17252a1c] START, ResumeVDSCommand(vdsId =
0a7046ea-216d-11e2-8fe8-001372eb596b,
vmId=35d58779-ee24-4926-8612-e053ff48881b), log id: 607d6dc2
2012-12-18 22:53:55,976 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ResumeBrokerVDSCommand]
(pool-3-thread-49) [17252a1c] START, ResumeBrokerVDSCommand(vdsId =
0a7046ea-216d-11e2-8fe8-001372eb596b,
vmId=35d58779-ee24-4926-8612-e053ff48881b), log id: 66b1872c
2012-12-18 22:53:56,292 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ResumeBrokerVDSCommand]
(pool-3-thread-49) [17252a1c] FINISH, ResumeBrokerVDSCommand, log id:
66b1872c
2012-12-18 22:53:56,298 INFO
[org.ovirt.engine.core.vdsbroker.ResumeVDSCommand] (pool-3-thread-49)
[17252a1c] FINISH, ResumeVDSCommand, return: PoweringUp, log id: 607d6dc2
2012-12-18 22:53:56,302 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-3-thread-49) [17252a1c] Lock freed to object EngineLock
[exclusiveLocks= key: 35d58779-ee24-4926-8612-e053ff48881b value: VM
, sharedLocks= ]
I'm not sure if I was missing something, or if I needed to wait for
something to complete, or what. I ended up just stopping the machine and
then starting it from cold.
Any thoughts? I'm happy to attempt this again to see if it breaks the
same way and to capture more data.
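If it helps, next time I'll also pull the host side for the same time window (a sketch; stock log paths assumed):

```shell
# On the host that ran the VM, everything vdsm logged for this VM id
grep 35d58779-ee24-4926-8612-e053ff48881b /var/log/vdsm/vdsm.log

# libvirt's view of the pause/resume
grep -i 'pause\|resume\|error' /var/log/libvirt/libvirtd.log
```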
Cheers,
--
Erik Jacobs
ATLElite / www.atlelite.com
DGTrials / www.DGTrials.com
FestiveGarage / www.FestiveGarage.com
Riding Resource / www.RidingResource.com
www.erikjacobs.com
(C) 646-284-3482 (F) 404-585-4409
11 years, 11 months
[Users] Network firewall doubts on allinone setup
by Adrian Gibanel
First I describe my firewall setup:
Default firewall content
( /etc/sysconfig/iptables )
----------------------------------
# Generated by ovirt-engine installer
#filtering rules
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [52:9697]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 81 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 444 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
#drop all rule
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
/etc/libvirt/qemu/networks/honly0200.xml
-------------------------------------------------------
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh net-edit honly0200
or other application using the libvirt API.
-->
<network>
<name>honly0200</name>
<uuid>09697e5f-e834-8f26-c55e-5866cb1abafc</uuid>
<forward mode='nat'/>
<bridge name='honly0200' stp='on' delay='0' />
<mac address='52:54:00:41:16:38'/>
<ip address='192.168.2.1' netmask='255.255.255.248'>
</ip>
</network>
-----------------------------------------------
So if I run:
service iptables restart
then:
iptables -L -v --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 504 153K RH-Firewall-1-INPUT all -- any any anywhere anywhere
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 RH-Firewall-1-INPUT all -- any any anywhere anywhere
Chain OUTPUT (policy ACCEPT 478 packets, 159K bytes)
num pkts bytes target prot opt in out source destination
Chain RH-Firewall-1-INPUT (2 references)
num pkts bytes target prot opt in out source destination
1 374 132K ACCEPT all -- lo any anywhere anywhere
2 0 0 ACCEPT icmp -- any any anywhere anywhere icmp any
3 120 19824 ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
4 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:ssh
5 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:rfb
6 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:5901
7 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:81
8 8 480 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:snpp
9 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:sunrpc
10 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:sunrpc
11 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:892
12 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:892
13 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:rquotad
14 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:rquotad
15 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:pftp
16 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:pftp
17 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:nfs
18 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:32803
19 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:filenet-rpc
20 2 80 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
And I also get:
iptables -L -v --line-numbers -t nat
Chain PREROUTING (policy ACCEPT 3021 packets, 300K bytes)
num pkts bytes target prot opt in out source destination
1 0 0 DNAT tcp -- eth0 any anywhere anywhere tcp dpt:50202 to:192.168.2.2:22
2 0 0 DNAT udp -- eth0 any anywhere anywhere udp dpt:50202 to:192.168.2.2:22
Chain INPUT (policy ACCEPT 1296 packets, 78884 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 2833 packets, 442K bytes)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 2833 packets, 442K bytes)
num pkts bytes target prot opt in out source destination
1 0 0 MASQUERADE tcp -- any any 192.168.2.0/29 !192.168.2.0/29 masq ports: 1024-65535
2 0 0 MASQUERADE udp -- any any 192.168.2.0/29 !192.168.2.0/29 masq ports: 1024-65535
3 0 0 MASQUERADE all -- any any 192.168.2.0/29 !192.168.2.0/29
So... since I want honly0200 to have access to the Internet via NAT, I restart the libvirtd service as suggested here:
http://wiki.libvirt.org/page/Guest_can_reach_host,_but_can%27t_reach_outs....
service libvirtd restart
So if I run:
service iptables restart
after libvirtd restarted then:
iptables -L -v --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT udp -- honly0200 any anywhere anywhere udp dpt:domain
2 0 0 ACCEPT tcp -- honly0200 any anywhere anywhere tcp dpt:domain
3 0 0 ACCEPT udp -- honly0200 any anywhere anywhere udp dpt:bootps
4 0 0 ACCEPT tcp -- honly0200 any anywhere anywhere tcp dpt:bootps
5 29974 10M RH-Firewall-1-INPUT all -- any any anywhere anywhere
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT all -- any honly0200 anywhere 192.168.2.0/29 state RELATED,ESTABLISHED
2 0 0 ACCEPT all -- honly0200 any 192.168.2.0/29 anywhere
3 0 0 ACCEPT all -- honly0200 honly0200 anywhere anywhere
4 0 0 REJECT all -- any honly0200 anywhere anywhere reject-with icmp-port-unreachable
5 0 0 REJECT all -- honly0200 any anywhere anywhere reject-with icmp-port-unreachable
6 0 0 RH-Firewall-1-INPUT all -- any any anywhere anywhere
Chain OUTPUT (policy ACCEPT 22701 packets, 7628K bytes)
num pkts bytes target prot opt in out source destination
Chain RH-Firewall-1-INPUT (2 references)
num pkts bytes target prot opt in out source destination
1 25844 9124K ACCEPT all -- lo any anywhere anywhere
2 0 0 ACCEPT icmp -- any any anywhere anywhere icmp any
3 3364 921K ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
4 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:ssh
5 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:rfb
6 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:5901
7 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:81
8 458 27480 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:snpp
9 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:sunrpc
10 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:sunrpc
11 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:892
12 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:892
13 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:rquotad
14 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:rquotad
15 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:pftp
16 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:pftp
17 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:nfs
18 0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:32803
19 0 0 ACCEPT udp -- any any anywhere anywhere state NEW udp dpt:filenet-rpc
20 308 14264 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
And I also get:
iptables -L -v --line-numbers -t nat
Chain PREROUTING (policy ACCEPT 1116 packets, 118K bytes)
num pkts bytes target prot opt in out source destination
1 0 0 DNAT tcp -- eth0 any anywhere anywhere tcp dpt:50202 to:192.168.2.2:22
2 0 0 DNAT udp -- eth0 any anywhere anywhere udp dpt:50202 to:192.168.2.2:22
Chain INPUT (policy ACCEPT 399 packets, 23940 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 637 packets, 56768 bytes)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 637 packets, 56768 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 MASQUERADE tcp -- any any 192.168.2.0/29 !192.168.2.0/29 masq ports: 1024-65535
2 0 0 MASQUERADE udp -- any any 192.168.2.0/29 !192.168.2.0/29 masq ports: 1024-65535
3 0 0 MASQUERADE all -- any any 192.168.2.0/29 !192.168.2.0/29
So... that's it. The last setup is the one I want to persist. Well, actually, I also need a script called from:
/etc/rc.d/rc.local
to modify the firewall rules later too. But let's focus on restarting the iptables and libvirtd services.
Now I'm going to reboot, and you're going to see that the firewall rules afterwards are not the expected ones:
iptables -L -v --line-numbers
Chain INPUT (policy ACCEPT 192K packets, 62M bytes)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT udp -- honly0200 any anywhere anywhere udp dpt:domain
2 0 0 ACCEPT tcp -- honly0200 any anywhere anywhere tcp dpt:domain
3 0 0 ACCEPT udp -- honly0200 any anywhere anywhere udp dpt:bootps
4 0 0 ACCEPT tcp -- honly0200 any anywhere anywhere tcp dpt:bootps
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT all -- any honly0200 anywhere 192.168.2.0/29 state RELATED,ESTABLISHED
2 0 0 ACCEPT all -- honly0200 any 192.168.2.0/29 anywhere
3 0 0 ACCEPT all -- honly0200 honly0200 anywhere anywhere
4 0 0 REJECT all -- any honly0200 anywhere anywhere reject-with icmp-port-unreachable
5 0 0 REJECT all -- honly0200 any anywhere anywhere reject-with icmp-port-unreachable
6 0 0 ACCEPT tcp -- eth0 any anywhere 192.168.2.2 tcp dpt:ssh
7 0 0 ACCEPT udp -- eth0 any anywhere 192.168.2.2 udp dpt:ssh
Chain OUTPUT (policy ACCEPT 191K packets, 61M bytes)
num pkts bytes target prot opt in out source destinatio
and also:
iptables -L -v --line-numbers -t nat
Chain PREROUTING (policy ACCEPT 8683 packets, 933K bytes)
num pkts bytes target prot opt in out source destination
1 0 0 DNAT tcp -- eth0 any anywhere anywhere tcp dpt:50202 to:192.168.2.2:22
2 0 0 DNAT udp -- eth0 any anywhere anywhere udp dpt:50202 to:192.168.2.2:22
Chain INPUT (policy ACCEPT 2687 packets, 165K bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 7602 packets, 936K bytes)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 7602 packets, 936K bytes)
num pkts bytes target prot opt in out source destination
1 0 0 MASQUERADE tcp -- any any 192.168.2.0/29 !192.168.2.0/29 masq ports: 1024-65535
2 0 0 MASQUERADE udp -- any any 192.168.2.0/29 !192.168.2.0/29 masq ports: 1024-65535
3 0 0 MASQUERADE all -- any any 192.168.2.0/29 !192.168.2.0/29
I suppose I could fix this issue by restarting the iptables and libvirtd services from rc.d/rc.local, but
I would like to understand a little better what's going on under the hood so that I don't have to implement workarounds.
So... is there any other service that might modify iptables rules? Maybe something oVirt-specific?
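In case anyone wants the workaround spelled out, this is roughly what I'd append to rc.local (a sketch; it just reproduces the manual restart order that yields the rule set I want):

```
# /etc/rc.d/rc.local (sketch)
# Reload the base rules from /etc/sysconfig/iptables, then let libvirtd
# re-insert its honly0200 NAT/FORWARD rules on top of them
service iptables restart
service libvirtd restart
```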
Thank you.
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
11 years, 11 months
[Users] Migration failing
by Neil
Hi guys,
I've just migrated all (3) of my VMs from one host to another for
planned maintenance, and now when trying to migrate the VMs back to the
original host I'm receiving a "Fatal error" and can't migrate a single
machine back to node02.
Attached is my engine.log
I'm running Centos 6.3 with the following oVirt package versions from dreyou.
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-cli-3.1.0.7-1.el6.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
Any ideas? I need to shut down both hosts tomorrow for RAM upgrades,
and my plan was to avoid downtime by migrating all the VMs to one
machine and then the other.
Any assistance is greatly appreciated.
This system was upgraded from oVirt 3.0 to 3.1 and the engine was
moved to another physical machine and the hostname was changed as
well.
Thanks.
Regards,
Neil Wilson.
11 years, 11 months