Re: [Users] so, what do you want next in oVirt?
by Chris Smith
What about using ATA over Ethernet (AoE) as an optional storage transport
within oVirt? Is this feasible? It seems that the protocol is fast and
efficient.
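For context, the moving parts are small; below is a rough sketch of exporting and
discovering a block device with the aoetools/vblade userspace tools (device and
interface names are placeholders, and this is not a tested oVirt integration):

  # On the storage box: export /dev/sdb as AoE shelf 0, slot 1 on eth1
  # (vblade stays in the foreground; the packaged vbladed wrapper daemonizes it)
  vblade 0 1 eth1 /dev/sdb

  # On a hypervisor: load the initiator driver and discover exported LUNs
  modprobe aoe
  aoe-discover
  ls /dev/etherd/    # the export above should show up as e0.1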
On Sep 10, 2013 11:58 AM, "Baptiste AGASSE" <
baptiste.agasse(a)lyra-network.com> wrote:
> Hi all,
>
> ----- Original Message -----
> > From: "Itamar Heim" <iheim(a)redhat.com>
> > To: users(a)ovirt.org
> > Sent: Tuesday, 20 August 2013 23:19:16
> > Subject: [Users] so, what do you want next in oVirt?
> >
> > earlier in the year we did a survey for feature requests /
> > improvements
> > / etc.
> >
> > since a lot of things were added, and priorities usually change, I'd
> > like to ask again for "what do you need the most from oVirt / what
> > are
> > your pain points" next?
> >
> > below[1] I've listed my understanding of what already went in from
> > previous survey requests (to various degrees of coverage).
> >
> > Thanks,
> > Itamar
> >
> > [1] from the top 12
> > V Allow disk resize
> > V Integrate Nagios/Zabbix monitoring - via a ui plugin
> > V Highly Available engine - via hosted engine[2]
> > V Open vSwitch integration - via neutron integration
> > X Allow cloning VMs without template
> > ? Enable hypervisor upgrade/updates through engine[3]
> > V Allow engine on an oVirt hosted VM - via hosted engine[2]
> > V Enable guest configuration (root password, SSH keys, network) via
> > guest agent in engine - via cloud-init
> > X Integrate v2v into engine
> > ? Bond/extend ovirtmgmt with a second network for HA/increased
> > bandwidth[4]
> > X Integrate scheduling of snapshots and VM export for backups in
> > engine[5]
> > V Spice – support Google Chrome - via mime based launch
> >
> >
> > Other items mentioned in previous survey which should be covered by
> > now:
> > - Fix timeout when adding local host during all-in-one configuration
> > - Fix engine set-up when SELinux is disabled
> > - Provide packages for el6 (CentOS, Red Hat Enterprise Linux)
> > - Allow multiple VMs to be deployed from the same template at the
> > same
> > time
> > - ISO domains on local/GlusterFS
> > - Show IP addresses in Virtual Machines->Network Interfaces
> > - OpenStack Quantum support (now called Neutron)
> > - noVNC support
> > - Support spice.html5 and websocket proxy
> > - Add other guest OSes to list
> > - Port oVirt guest agent to Ubuntu[6]
> > - SLA - Allow resource time-sharing
> > - Spice - Mac client (via mime based launch)
> > - Spice - port XPI plug-in to Windows (not sure this will happen, but
> > mime based launch allows using firefox now)
> > - Spice - client for Ubuntu/Debian (should be covered via mime based
> > launch)
> >
> >
> > [2] hosted engine is in active development, but not released yet.
> > [3] host update is supported, but not for general yum update.
> > [4] a lot of improvements were done in this space, but I'm not sure
> > if they cover this exact use case
> > [5] backup API is now being pushed to master, and orchestration of
> > backups should probably happen via 3rd-party backup vendors?
> > [6] I'm not sure packaging exists yet, but ubuntu is covered for the
> > basic functionality of the guest agent.
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
> Thanks for this thread !
>
> - iSCSI EqualLogic SAN support or use standard iSCSI tools/configuration
> - SSO for webui and cli (IPA integration)
> - PXE boot for nodes
> - VMs dependencies on startup
>
> Have a nice day.
>
> Regards.
>
> ---
> Baptiste
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
11 years, 3 months
Re: [Users] oVirt 3.3/F19 - Windows guest unable to access network
by SULLIVAN, Chris (WGK)
Hi,
Just following up on this issue. Turns out the network problems were being caused by the bond0 interface.
The initial configuration was two NICs teamed as bond0, which was then bridged to the ovirtmgmt interface. With this configuration, RHEL guests could access the network normally but Windows guests (XP, 7, 2008 R2) could not. After deactivating the bond0 interface and bridging one of the NICs directly to the ovirtmgmt interface, both RHEL and Windows guests have fully functioning networks.
I am not sure why exactly the bond0 interface was not working as intended. The initial configuration had the mode as balance-rr; is this known to cause problems? My intention was to have the mode as balance-alb; however, the bonding driver in F19 seems to completely ignore any BONDING_OPTS settings in the ifcfg-bond0 file. Attempts to change the bonding mode directly via /sys/class/net/bond0/bonding/mode repeatedly failed due to 'the bond having slaves', even after the bond had been taken down via ifconfig. I was not able to remove the bond0 definition either, even after removing the ifcfg-bond0 file and the modprobe.d alias.
Is there a recommended/tested bonding configuration HOWTO for oVirt on F19?
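For reference, the configuration I was aiming for looked roughly like the sketch
below (NIC names are placeholders and balance-alb is only my assumption of a
reasonable mode here, not a documented oVirt recommendation). The mode apparently
has to be set via BONDING_OPTS before the bond is first brought up, since the
driver refuses to change it while slaves are attached:

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  ONBOOT=yes
  BRIDGE=ovirtmgmt
  BONDING_OPTS="mode=balance-alb miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-em1 (and likewise for the second NIC)
  DEVICE=em1
  ONBOOT=yes
  MASTER=bond0
  SLAVE=yes

  # To change the mode on an existing bond, detach the slaves first:
  echo -em1 > /sys/class/net/bond0/bonding/slaves
  echo -em2 > /sys/class/net/bond0/bonding/slaves
  ip link set bond0 down
  echo balance-alb > /sys/class/net/bond0/bonding/mode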
Joop: Responses as follows:
- Windows firewall was disabled in each Windows VM
- Changing the CPU setting and starting the VM directly on the host via QEMU (i.e. not through oVirt) did not seem to affect the behavior
Thanks,
Chris
-----Original Message-----
From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of users-request(a)ovirt.org
Sent: Thursday, September 19, 2013 3:31 PM
To: users(a)ovirt.org
Subject: Users Digest, Vol 24, Issue 93
------------------------------
Message: 3
Date: Thu, 19 Sep 2013 09:13:43 +0200
From: noc <noc(a)nieuwland.nl>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [Users] oVirt 3.3/F19 - Windows guest unable to access
network
Message-ID: <523AA427.2060605(a)nieuwland.nl>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
On 18-9-2013 23:08, SULLIVAN, Chris (WGK) wrote:
>
> Hi,
>
> I'm having network issues with a Windows Server 2008 R2 guest running
> on an F19 host. The guest has a static configuration and is able to
> ping itself and the host it is running on, however cannot ping the
> gateway, any other hosts on the local network, or external hosts. A
> RHEL 6.4 guest on the same host with a similar static configuration
> works normally.
>
> Iptables/firewalld on the host are switched off and the network
> definitions in the XML for each VM (Windows/RHEL) are the same. The
> virtio network drivers are installed in the guest. The guest was
> created from a Win 2008 R2 template, which was created from a VM
> imported from oVirt 3.2. Software versions below.
>
Just to be sure, iptables/firewalld != Windows Firewall. Is there a rule
in the Windows firewall to allow ping, or is it disabled?
> Are there any manual configuration steps required on the host to
> support Windows guests? Are there any particular diagnostic steps I
> could take to try and narrow down the cause?
>
Don't think so; I just converted a Windows 2008 R2 Datacenter guest from
VMware to oVirt and it ran, after adding virtio drivers or using e1000
and/or IDE disks.
> Versions:
>
> -oVirt 3.3.0-4
>
> -F19 3.10.11-200
>
> -QEMU 1.4.2-9
>
> -Libvirt 1.1.2-1
>
> -VDSM 4.12.1-2
>
> -virtio-win 0.1-52
>
>
Your problem looks like the problem René had with his Solaris guest; it's
a recent thread. Turned out that oVirt setting -cpu Nehalem caused
networking in the Solaris guest to fail.
Don't think this is your problem though, since lots of people run Windows
guests without problems.
Regards,
Joop
11 years, 3 months
[Users] oVirt 3.3.0-4/F19 - Extending VM disk gives correct size but appears to wipe the drive contents
by SULLIVAN, Chris (WGK)
Hi,

I had a number of Windows VMs running in oVirt 3.3 that required their preallocated OS disks to be extended. Each OS disk had a single partition taking up the entire drive. As per http://www.ovirt.org/Features/Online_Virtual_Drive_Resize I shut down all the VMs, extended each OS disk by 10GB (total 25GB) via the web interface, then clicked OK. The tasks appeared to complete successfully and each of the OS disks had the expected real size on the Gluster storage volume.

On startup however none of the VMs would recognize their OS disk as being a bootable device. Checking one of the OS disks via TestDisk (both quick and deep scans) revealed no partitions and the error 'Partition sector doesn't have the endmark 0xAA55'. It appears that each OS disk was wiped as part of the extension process, although I'm really hoping that this isn't the case!

Are there any other approaches I could use to attempt to recover the OS disk data or at least verify whether the original disk partitions are recoverable?

Thanks,

Chris
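In case it helps anyone else hitting this, the quickest sanity check I know of is
to look at the last two bytes of sector 0 of the image directly (a sketch; the
image path is a placeholder for wherever the disk lives on the Gluster volume):

  # Dump the first 512-byte sector of the preallocated image
  dd if=/path/to/disk-image bs=512 count=1 2>/dev/null | hexdump -C | tail -n 4

A healthy MBR ends with "55 aa" at offset 0x1fe; all zeros there would confirm
TestDisk's finding that the partition table is really gone.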
11 years, 3 months
Re: [Users] Update from 3.2 to 3.3 (CentOS 6.4)
by Mike Burns
On 09/23/2013 12:11 PM, H. Haven Liu wrote:
> I was able to upgrade with "engine-setup" after running "yum update
> ovirt-engine-setup" as requested. The upgrade process completed
> successfully (at least according to "[ INFO ] Execution of upgrade
> completed successfully").
>
> However, it appears I'm unable to gain the benefits of 3.3, such as
> Online Virtual Drive Resize[1], which I'm guessing has something to do
> with that my VMs still report that the "Cluster Compatibility Version"
> as 3.2. Is there any way to "update" the VMs or otherwise enable that
> feature?
>
>
> [1]http://www.ovirt.org/Features/Online_Virtual_Drive_Resize
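In case it is useful, here is a sketch of how the compatibility version can be
inspected and raised through the REST API (engine host, credentials and the
cluster UUID are placeholders, the data center usually needs the same bump, and
I have not verified this exact call against 3.3):

  # List clusters and note the id of the one still reporting version 3.2
  curl -k -u "admin@internal:password" https://engine.example.com/api/clusters

  # Raise its compatibility version to 3.3
  curl -k -u "admin@internal:password" -X PUT \
       -H "Content-Type: application/xml" \
       -d '<cluster><version major="3" minor="3"/></cluster>' \
       https://engine.example.com/api/clusters/CLUSTER-UUID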
> On Sep 23, 2013, at 5:11 AM, Mike Burns <mburns(a)redhat.com
> <mailto:mburns@redhat.com>> wrote:
>
>> On 09/23/2013 03:51 AM, Sandro Bonazzola wrote:
>>> Il 17/09/2013 00:48, Mike Burns ha scritto:
>>>> On 09/16/2013 06:41 PM, H. Haven Liu wrote:
>>>>> Thanks for the discussion. But for those of us that are not using
>>>>> gluster, but just good ol' NFS, is updating simply "yum update
>>>>> ovirt-*"?
>>>>>
>>>>
>>>> No, yum update won't upgrade ovirt packages.
>>>>
>>>> If you're running on Fedora, you need to update Fedora first, then
>>>> run engine-upgrade. If you're on EL6, a simple engine-upgrade
>>>> should work.
>>>>
>>>> Ofer, any other gotchas? Can you have someone create a 3.2 to 3.3
>>>> upgrade page on the wiki?
>>>
>>> Here you can find the test results of upgrading from Fedora 18 /
>>> oVirt 3.2 to Fedora 19 oVirt 3.3:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1005950
>>>
>>> I'm a bit short on time, but I can try to find some for creating that
>>> page. Any preference on the URL / page name?
>>
>> Previous ones exist like this:
>>
>> http://www.ovirt.org/OVirt_3.0_to_3.1_upgrade
>> http://www.ovirt.org/OVirt_3.1_to_3.2_upgrade
>>
>> so this would make sense:
>>
>> http://www.ovirt.org/OVirt_3.2_to_3.3_upgrade
>>>
>>>
>>>
>>>>
>>>> Thanks
>>>>
>>>> Mike
>>>>
>>>>> On Sep 16, 2013, at 2:54 PM, Mike Burns <mburns(a)redhat.com> wrote:
>>>>>
>>>>>> On 09/16/2013 05:26 PM, Joop wrote:
>>>>>>> Mike Burns wrote:
>>>>>>>> On 09/16/2013 04:30 PM, Joop wrote:
>>>>>>>>> H. Haven Liu wrote:
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>> Is there any recommended procedure for updating from 3.2 to
>>>>>>>>>> 3.3 (namely on a CentOS 6.4 system), or anything one should
>>>>>>>>>> be careful when doing such update?
>>>>>>>>> Be careful when you use GlusterFS (not NFS over GlusterFS),
>>>>>>>>> and read the release notes. They should have something to say
>>>>>>>>> about the GlusterFS domain not (yet) working on EL6. I saw a
>>>>>>>>> small discussion on IRC just now, and my two cents are that
>>>>>>>>> you can't add EL6 support to oVirt in release 3.2 and then
>>>>>>>>> withdraw it in 3.3 and say, well, just wait for
>>>>>>>>> CentOS/RHEL 6.5.
>>>>>>>>
>>>>>>>> We haven't removed any functionality in 3.3. In 3.2, we added
>>>>>>>> support for gluster domains through a POSIXFS interface. In
>>>>>>>> 3.3, we're adding a feature where we support gluster natively.
>>>>>>>> This works in Fedora, but is not available on EL6. The POSIXFS
>>>>>>>> option still exists
>>>>>>> You're right, but what about users who want to use the (much)
>>>>>>> improved speed of the gluster domain over the POSIXFS interface?
>>>>>>> They are left out, or they have to move to Fedora 19, which I
>>>>>>> would do, but I need to convince a couple of other people as
>>>>>>> well and they aren't going to agree.
>>>>>>
>>>>>> I understand the complaint, honestly. And we're working on a
>>>>>> solution so that it will work for people on EL6.
>>>>>>
>>>>>> From the perspective of whether we should release with this
>>>>>> limitation or not, I'd point out that by not releasing, we'd be
>>>>>> preventing everyone from using any of the new features until we get
>>>>>> a solution for this. I'd rather release and make it available for
>>>>>> everyone now and say that the Gluster domain for EL6 will come as
>>>>>> soon as we can work out the dependency issues.
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>> The kernel can come from elrepo, so that is not a burden for
>>>>>>>>> the oVirt team; qemu/libvirt should be built by the oVirt
>>>>>>>>> team and be available from the oVirt repo. At the moment I
>>>>>>>>> also see/saw JBoss 7.1.1 there; qemu/libvirt wouldn't be the
>>>>>>>>> first packages in the ovirt repo which are also in the main
>>>>>>>>> distributions' repos.
>>>>>>>>
>>>>>>>> We're trying to work out a way to do this in a consistent
>>>>>>>> manner going forward. We should have a solution soon, but in
>>>>>>>> the meantime, the other functionality and features should work
>>>>>>>> on both Fedora and EL6.
>>>>>>> Thanks for the clarification and I'm waiting eagerly for
>>>>>>> what/when the solution comes out.
>>>>>>
>>>>>> There have been a few considerations for solving this including
>>>>>> rebuilding pure upstream or fedora packages for EL6. That is a
>>>>>> risky solution in my mind since there are rather large deltas
>>>>>> between Fedora and EL6. We're looking at whether we can have a
>>>>>> "virt-preview" type of repo for EL6 similar to what exists today
>>>>>> for Fedora[1].
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Mike
>>>>>>
>>>>>> [1]
>>>>>> http://fedoraproject.org/wiki/Virtualization_Preview_Repository
>>>>>>>
>>>>>>> Joop
>>>>>>>
>>>>>>> _______________________________________________ Users mailing
>>>>>>> list Users(a)ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>> _______________________________________________ Users mailing list
>>>>>> Users(a)ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>> _______________________________________________ Users mailing list
>>>>> Users(a)ovirt.org http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
11 years, 3 months
[Users] Can not add host to ovirt-engine
by kumar shantanu
Hello guys,
I am trying to set up oVirt now; it's installed and set up as per
http://www.ovirt.org/Download.
However, when I do the first step, which is adding a host, it gets stuck at
initializing the host and this is what I get in the logs:
, Custom Event ID: -1, Message: Host Local is initializing. Message:
Recovering from crash or Initializing
2013-09-20 12:00:01,766 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-98) Failed to refresh VDS , vds =
f70911ce-457d-4ef7-9a01-d66bfd25b910 : Local, error = Recovering from crash
or Initializing, continuing.
2013-09-20 12:00:04,772 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-99) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand return
value
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@23c03b6d
2013-09-20 12:00:04,773 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-99) HostName = Local
2013-09-20 12:00:04,774 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-99) Command GetCapabilitiesVDS execution
failed. Exception: VDSRecoveringException: Recovering from crash or
Initializing
2013-09-20 12:00:07,781 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-100) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand return
value
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@3aad8c77
2013-09-20 12:00:07,782 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-100) HostName = Local
2013-09-20 12:00:07,782 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-100) Command GetCapabilitiesVDS execution
failed. Exception: VDSRecoveringException: Recovering from crash or
Initializing
2013-09-20 12:00:10,796 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-9) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand return
value
Any ideas, please?
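In case it matters, this is what I plan to check next on the host while the
engine keeps retrying (default log paths assumed; suggestions welcome):

  service vdsmd status
  tail -f /var/log/vdsm/vdsm.log
  tail -f /var/log/ovirt-engine/engine.log   # all-in-one, so the engine is on the same box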
Thanks
Shantanu
11 years, 3 months
[Users] Cannot start VMs or delete disks from gluster storage
by David Riedl
Hello everyone,
I recently created my first ovirt/vdsm/gluster cluster. I did everything
as described in the oVirt and GlusterFS quick starts.
The GlusterFS domain is recognized in the UI and is also mounted in the
system. Everything looks fine to me. I can even create VMs.
But when I try to start them, the VM is stuck and doesn't start up, and I
don't get any error message (in the WebUI) either.
I can delete the VM, but not the disk.
OS Version:
RHEL - 6 - 4.el6.centos.10
Kernel Version:
2.6.32 - 358.18.1.el6.x86_64
KVM Version:
0.12.1.2 - 2.355.0.1.el6.centos.7
LIBVIRT Version:
libvirt-0.10.2-18.el6_4.14
VDSM Version:
vdsm-4.12.1-2.el6
SPICE Version:
0.12.0 - 12.el6_4.3
Oh and which log files do you need?
Regards
David
PS: Sorry if this is the wrong place to ask such things/problems. I'm
pretty new to oVirt. :)
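I can pull any of the usual ones if that helps, e.g. (paths assumed from a
default install):

  /var/log/ovirt-engine/engine.log      # on the engine machine
  /var/log/vdsm/vdsm.log                # on the host the VM was scheduled on
  /var/log/libvirt/qemu/<vm-name>.log   # on the same host
  /var/log/glusterfs/*.log              # on the Gluster bricks/clients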
11 years, 3 months
Re: [Users] Multiple network interfaces on engine?
by Michael Pasternak
On 09/23/2013 02:23 PM, Rickard Kristiansson wrote:
> Hmm.. yes, I do have the storage network configured in DC and Cluster, and also attached to all my hypervisors.
> The question is just how I also attach it to the Engine server, as this isn't visible as a node in the GUI?
I think this could be done with Puppet; AFAIK some work on integrating it
into oVirt has been done lately, though I'm not sure if it's already
available. Oved?
>
>
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> *Från: *"Michael Pasternak" <mpastern(a)redhat.com>
> *Till: *"Rickard Kristiansson" <rickard(a)sanuksystems.com>
> *Kopia: *users(a)ovirt.org, "\"René \"Koch (ovido)\"\"" <r.koch(a)ovido.at>
> *Skickat: *måndag, 23 sep 2013 13:20:06
> *Ämne: *Re: [Users] Multiple network interfaces on engine?
>
> On 09/23/2013 01:50 PM, Rickard Kristiansson wrote:
>> Thanks Michael,
>>
>> I'm still trying to get my head around this, but I can't see how I configure the networks on the engine?
>> For nodes and AOI it's OK, but in my case I am running the engine as a separate server which is not a node and does not show up as a Host in the GUI.
>> Or do I need to add the engine as a host in the GUI as well, to be able to configure it?
>
> IIUC your question, you have various options:
>
> 1. GUI
> 2. SDKs
> 3. CLI
>
> You need to create the network in the DC, attach it to the cluster, and attach it to host NIC/s (if you want it to be an
> atomic operation, i.e. you expect connectivity loss during the operation/s, setupnetworks should be used;
> otherwise you can use the attach-network dialogues in the UI).
>
>>
>>
>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>> *Från: *"Michael Pasternak" <mpastern(a)redhat.com>
>> *Till: *"Rickard Kristiansson" <rickard(a)sanuksystems.com>
>> *Kopia: *users(a)ovirt.org
>> *Skickat: *måndag, 23 sep 2013 12:21:40
>> *Ämne: *Re: [Users] Multiple network interfaces on engine?
>>
>> On 09/23/2013 12:54 PM, Rickard Kristiansson wrote:
>>> Hi,
>>>
>>> I want to use two network interfaces connected to different networks in Engine (3.3), where one is for ovirtmgmt and one is for a separate storage network.
>>> It works fine for a minute or so after rebooting Engine, but then the configuration is changed by Engine, connecting both interfaces to bond0 and losing connectivity (as
>>> obviously you can't bond those two interfaces when they are connected to separate physical networks..).
>>> What is the mechanism doing this,
>>
>> please see this [1].
>>
>> [1] http://www.ovirt.org/Features/Design/Network/SetupNetworks
>>
>>> and is it possible to disable this automatic network bonding on the engine so that we can use separate networks on the network interfaces..?
>>
>> Attaching logs would help, but I guess it happened because you did not save
>> your host network config, i.e. when the host got rebooted, it performed a roll-back
>> to the old config; this is actually a disaster recovery mechanism.
>>
>>>
>>> Rickard
>>>
>>> **
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>> --
>>
>> Michael Pasternak
>> RedHat, ENG-Virtualization R&D
>>
>
>
> --
>
> Michael Pasternak
> RedHat, ENG-Virtualization R&D
>
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
11 years, 3 months
[Users] Multiple network interfaces on engine?
by Rickard Kristiansson
Hi,
I want to use two network interfaces connected to different networks in Engine (3.3), where one is for ovirtmgmt and one is for a separate storage network.
It works fine for a minute or so after rebooting Engine, but then the configuration is changed by Engine, connecting both interfaces to bond0 and losing connectivity (as obviously you can't bond those two interfaces when they are connected to separate physical networks..).
What is the mechanism doing this, and is it possible to disable this automatic network bonding on the engine so that we can use separate networks on the network interfaces..?
Rickard
11 years, 3 months
[Users] cant add storage connection via api
by Yuriy Demchenko
Hi,
I've recently upgraded my test lab to ovirt-3.3 (el6) and trying to add
additional target for iscsi domain.
As described here -
http://www.ovirt.org/Features/Manage_Storage_Connections , I'm trying
first to add new connection via restapi, but operation fails with error
"HTTP Status 400 - javax.xml.bind.UnmarshalException: unexpected element
(uri:"", local:"storage_connection")"
I'm not very familiar with restapi and maybe doing something wrong, so
please help me to figure it out.
here's what i put and reply from server:
> curl -k -v -u "admin@internal:pass" -H "Content-type: application/xml"
> -d '<storage_connection>
> <type>iscsi</type>
> <address>192.168.221.5</address>
> <port>3260</port>
> <target>iqn.2013-09.local.stone.spb:target3.disk</target>
> </storage_connection>'
> 'https://ovirt.spb.stone.local/api/storageconnections'
> * About to connect() to ovirt.spb.stone.local port 443 (#0)
> * Trying 192.168.220.13...
> * connected
> * Connected to ovirt.spb.stone.local (192.168.220.13) port 443 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> * warning: ignoring value of ssl.verifyhost
> * skipping SSL peer certificate verification
> * SSL connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
> * Server certificate:
> * subject: CN=ovirt.spb.stone.local,O=spb.stone.local,C=US
> * start date: Aug 28 09:28:45 2013 GMT
> * expire date: Aug 03 09:28:47 2018 GMT
> * common name: ovirt.spb.stone.local
> * issuer: CN=CA-ovirt.spb.stone.local.95565,O=spb.stone.local,C=US
> * Server auth using Basic with user 'admin@internal'
> > POST /api/storageconnections HTTP/1.1
> > Authorization: Basic YWRtaW5AaW50ZXJuYWw6bXAyMjFjMg==
> > User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0
> NSS/3.14.3.0 zlib/1.2.5 libidn/1.24 libssh2/1.4.1
> > Host: ovirt.spb.stone.local
> > Accept: */*
> > Content-type: application/xml
> > Content-Length: 170
> >
> * upload completely sent off: 170 out of 170 bytes
> < HTTP/1.1 400 Bad Request
> < Date: Wed, 18 Sep 2013 12:05:51 GMT
> < Content-Type: text/html;charset=utf-8
> < Vary: Accept-Encoding
> < Connection: close
> < Transfer-Encoding: chunked
> <
> <html><head><title>JBoss Web/7.0.13.Final - Error
> report</title><style><!--H1
> {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;}
> H2
> {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;}
> H3
> {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;}
> BODY
> {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;}
> B
> {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;}
> P
> {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A
> {color : black;}A.name {color : black;}HR {color :
> #525D76;}--></style> </head><body><h1>HTTP Status 400 -
> javax.xml.bind.UnmarshalException: unexpected element
> (uri:"", local:"storage_connection"). Expected
> elements are
> <{}action>,<{}agent>,<{}agents>,<{}api>,<{}application>,<{}applications>,<{}authentication_methods>,<{}body>,<{}bonding>,<{}boot_devices>,<{}boot_protocols>,<{}brick>,<{}brick_details>,<{}brick_memoryinfo>,<{}brick_states>,<{}bricks>,<{}capabilities>,<{}cdrom>,<{}cdroms>,<{}certificate>,<{}cluster>,<{}clusters>,<{}console>,<{}content_types>,<{}cpu>,<{}cpu_modes>,<{}cpu_tune>,<{}cpus>,<{}creation>,<{}creation_states>,<{}custom_properties>,<{}data_center>,<{}data_center_states>,<{}data_centers>,<{}detailedLink>,<{}detailedLinks>,<{}disk>,<{}disk_formats>,<{}disk_interfaces>,<{}disk_states>,<{}disks>,<{}display>,<{}display_types>,<{}domain>,<{}domains>,<{}error_handling>,<{}event>,<{}events>,<{}fault>,<{}feature>,<{}features>,<{}fence_types>,<{}file>,<{}files>,<{}floppies>,<{}floppy>,<{}general_metadata>,<{}gluster_client>,<{}gluster_clients>,<{}gluster_hook>,<{}gluster_volume>,<{}gluster_volume_states>,<{}gluster_volume_types>,<{}gluster_volumes>,<{}glusterhooks>,<{}group>,<{}groups>,<{}hardware_information>,<{}header>,<{}headers>,<{}hook>,<{}hook_states>,<{}hooks>,<{}host>,<{}host_nic>,<{}host_nic_states>,<{}host_nics>,<{}host_non_operational_details>,<{}host_states>,<{}host_storage>,<{}hosts>,<{}ip>,<{}ip_versions>,<{}ips>,<{}job>,<{}jobs>,<{}keyValuePair>,<{}ksm>,<{}link>,<{}linkCapabilities>,<{}logical_unit>,<{}mac>,<{}memory_pool>,<{}memory_pools>,<{}network>,<{}network_states>,<{}networks>,<{}nfs_versions>,<{}nic>,<{}nic_interfaces>,<{}nics>,<{}options>,<{}os_types>,<{}parameter>,<{}parameters_set>,<{}payload>,<{}payload_file>,<{}payloads>,<{}permission>,<{}permissions>,<{}permit>,<{}permits>,<{}pm_proxies>,<{}pm_proxy>,<{}pm_proxy_types>,<{}port_mirroring>,<{}power_management>,<{}power_management_states>,<{}power_managers>,<{}preview_vms>,<{}product_info>,<{}quota>,<{}quotas>,<{}reported_device>,<{}reported_device_types>,<{}reported_devices>,<{}request>,<{}response>,<{}role>,<{}roles>,<{}rsdl>,<{}scheduling_policies>,<{}schema>,<{}server_hook>,<{}server_hooks>,<{}sgio_options>,<{}slaves>,<{}snapshot>,<{}snapshots>,<{}special_objects>,<{}ssh>,<{}stages>,<{}statistic>,<{}statistics>,<{}status>,<{}step>,<{}step_types>,<{}steps>,<{}storage>,<{}storage_connections>,<{}storage_domain>,<{}storage_domain_states>,<{}storage_domain_types>,<{}storage_domains>,<{}storage_formats>,<{}storage_manager>,<{}storage_types>,<{}summary>,<{}system_version>,<{}tag>,<{}tags>,<{}template>,<{}template_states>,<{}templates>,<{}transparent_hugepages>,<{}transport_types>,<{}url>,<{}usages>,<{}usb>,<{}user>,<{}users>,<{}value>,<{}values>,<{}vcpu_pin>,<{}version>,<{}vlan>,<{}vm>,<{}vm_affinities>,<{}vm_device_types>,<{}vm_pause_details>,<{}vm_states>,<{}vm_types>,<{}vmpool>,<{}vmpools>,<{}vms>,<{}volume_group>,<{}watchdog>,<{}watchdog_actions>,<{}watchdog_models>,<{}watchdogs></h1><HR
> size="1" noshade="noshade"><p><b>type</b> Status
> report</p><p><b>message</b> <u>javax.xml.bind.UnmarshalException:
> unexpected element (uri:"",
> local:"storage_connection"). Expected elements are
> <{}action>,<{}agent>,<{}agents>,<{}api>,<{}application>,<{}applications>,<{}authentication_methods>,<{}body>,<{}bonding>,<{}boot_devices>,<{}boot_protocols>,<{}brick>,<{}brick_details>,<{}brick_memoryinfo>,<{}brick_states>,<{}bricks>,<{}capabilities>,<{}cdrom>,<{}cdroms>,<{}certificate>,<{}cluster>,<{}clusters>,<{}console>,<{}content_types>,<{}cpu>,<{}cpu_modes>,<{}cpu_tune>,<{}cpus>,<{}creation>,<{}creation_states>,<{}custom_properties>,<{}data_center>,<{}data_center_states>,<{}data_centers>,<{}detailedLink>,<{}detailedLinks>,<{}disk>,<{}disk_formats>,<{}disk_interfaces>,<{}disk_states>,<{}disks>,<{}display>,<{}display_types>,<{}domain>,<{}domains>,<{}error_handling>,<{}event>,<{}events>,<{}fault>,<{}feature>,<{}features>,<{}fence_types>,<{}file>,<{}files>,<{}floppies>,<{}floppy>,<{}general_metadata>,<{}gluster_client>,<{}gluster_clients>,<{}gluster_hook>,<{}gluster_volume>,<{}gluster_volume_states>,<{}gluster_volume_types>,<{}gluster_volumes>,<{}glusterhooks>,<{}group>,<{}groups>,<{}hardware_information>,<{}header>,<{}headers>,<{}hook>,<{}hook_states>,<{}hooks>,<{}host>,<{}host_nic>,<{}host_nic_states>,<{}host_nics>,<{}host_non_operational_details>,<{}host_states>,<{}host_storage>,<{}hosts>,<{}ip>,<{}ip_versions>,<{}ips>,<{}job>,<{}jobs>,<{}keyValuePair>,<{}ksm>,<{}link>,<{}linkCapabilities>,<{}logical_unit>,<{}mac>,<{}memory_pool>,<{}memory_pools>,<{}network>,<{}network_states>,<{}networks>,<{}nfs_versions>,<{}nic>,<{}nic_interfaces>,<{}nics>,<{}options>,<{}os_types>,<{}parameter>,<{}parameters_set>,<{}payload>,<{}payload_file>,<{}payloads>,<{}permission>,<{}permissions>,<{}permit>,<{}permits>,<{}pm_proxies>,<{}pm_proxy>,<{}pm_proxy_types>,<{}port_mirroring>,<{}power_management>,<{}power_management_states>,<{}power_managers>,<{}preview_vms>,<{}product_info>,<{}quota>,<{}quotas>,<{}reported_device>,<{}reported_device_types>,<{}reported_devices>,<{}request>,<{}response>,<{}role>,<{}roles>,<{}rsdl>,<{}scheduling_policies>,<{}schema>,<{}server_hook>,<{}server_hooks>,<{}sgio_options>,<{}slaves>,<{}snapshot>,<{}snapshots>,<{}special_objects>,<{}ssh>,<{}stages>,<{}statistic>,<{}statistics>,<{}status>,<{}step>,<{}step_types>,<{}steps>,<{}storage>,<{}storage_connections>,<{}storage_domain>,<{}storage_domain_states>,<{}storage_domain_types>,<{}storage_domains>,<{}storage_formats>,<{}storage_manager>,<{}storage_types>,<{}summary>,<{}system_version>,<{}tag>,<{}tags>,<{}template>,<{}template_states>,<{}templates>,<{}transparent_hugepages>,<{}transport_types>,<{}url>,<{}usages>,<{}usb>,<{}user>,<{}users>,<{}value>,<{}values>,<{}vcpu_pin>,<{}version>,<{}vlan>,<{}vm>,<{}vm_affinities>,<{}vm_device_types>,<{}vm_pause_details>,<{}vm_states>,<{}vm_types>,<{}vmpool>,<{}vmpools>,<{}vms>,<{}volume_group>,<{}watchdog>,<{}watchdog_actions>,<{}watchdog_models>,<{}watchdogs></u></p><p><b>description</b>
> <u>The request sent by the client was syntactically incorrect
> (javax.xml.bind.UnmarshalException: unexpected element
> (uri:"", local:"storage_connection"). Expected
> elements are
> <{}action>,<{}agent>,<{}agents>,<{}api>,<{}application>,<{}applications>,<{}authentication_methods>,<{}body>,<{}bonding>,<{}boot_devices>,<{}boot_protocols>,<{}brick>,<{}brick_details>,<{}brick_memoryinfo>,<{}brick_states>,<{}bricks>,<{}capabilities>,<{}cdrom>,<{}cdroms>,<{}certificate>,<{}cluster>,<{}clusters>,<{}console>,<{}content_types>,<{}cpu>,<{}cpu_modes>,<{}cpu_tune>,<{}cpus>,<{}creation>,<{}creation_states>,<{}custom_properties>,<{}data_center>,<{}data_center_states>,<{}data_centers>,<{}detailedLink>,<{}detailedLinks>,<{}disk>,<{}disk_formats>,<{}disk_interfaces>,<{}disk_states>,<{}disks>,<{}display>,<{}display_types>,<{}domain>,<{}domains>,<{}error_handling>,<{}event>,<{}events>,<{}fault>,<{}feature>,<{}features>,<{}fence_types>,<{}file>,<{}files>,<{}floppies>,<{}floppy>,<{}general_metadata>,<{}gluster_client>,<{}gluster_clients>,<{}gluster_hook>,<{}gluster_volume>,<{}gluster_volume_states>,<{}gluster_volume_types>,<{}gluster_volumes>,<{}glusterhooks>,<{}group>,<{}groups>,<{}hardware_information>,<{}header>,<{}headers>,<{}hook>,<{}hook_states>,<{}hooks>,<{}host>,<{}host_nic>,<{}host_nic_states>,<{}host_nics>,<{}host_non_operational_details>,<{}host_states>,<{}host_storage>,<{}hosts>,<{}ip>,<{}ip_versions>,<{}ips>,<{}job>,<{}jobs>,<{}keyValuePair>,<{}ksm>,<{}link>,<{}linkCapabilities>,<{}logical_unit>,<{}mac>,<{}memory_pool>,<{}memory_pools>,<{}network>,<{}network_states>,<{}networks>,<{}nfs_versions>,<{}nic>,<{}nic_interfaces>,<{}nics>,<{}options>,<{}os_types>,<{}parameter>,<{}parameters_set>,<{}payload>,<{}payload_file>,<{}payloads>,<{}permission>,<{}permissions>,<{}permit>,<{}permits>,<{}pm_proxies>,<{}pm_proxy>,<{}pm_proxy_types>,<{}port_mirroring>,<{}power_management>,<{}power_management_states>,<{}power_managers>,<{}preview_vms>,<{}product_info>,<{}quota>,<{}quotas>,<{}reported_device>,<{}reported_device_types>,<{}reported_devices>,<{}request>,<{}response>,<{}role>,<{}roles>,<{}rsdl>,<{}scheduling_policies>,<{}schema>,<{}server_hook>,<{}server_hooks>,<{}sgio_options>,<{}slaves>,<{}snapshot>,<{}snapshots>,<{}special_objects>,<{}ssh>,<{}stages>,<{}statistic>,<{}statistics>,<{}status>,<{}step>,<{}step_types>,<{}steps>,<{}storage>,<{}storage_connections>,<{}sto*
> Closing connection #0
> rage_domain>,<{}storage_domain_states>,<{}storage_domain_types>,<{}storage_domains>,<{}storage_formats>,<{}storage_manager>,<{}storage_types>,<{}summary>,<{}system_version>,<{}tag>,<{}tags>,<{}template>,<{}template_states>,<{}templates>,<{}transparent_hugepages>,<{}transport_types>,<{}url>,<{}usages>,<{}usb>,<{}user>,<{}users>,<{}value>,<{}values>,<{}vcpu_pin>,<{}version>,<{}vlan>,<{}vm>,<{}vm_affinities>,<{}vm_device_types>,<{}vm_pause_details>,<{}vm_states>,<{}vm_types>,<{}vmpool>,<{}vmpools>,<{}vms>,<{}volume_group>,<{}watchdog>,<{}watchdog_actions>,<{}watchdog_models>,<{}watchdogs>).</u></p><HR
> size="1" noshade="noshade"><h3>JBoss Web/7.0.13.Final</h3></body></html>
--
Yuriy Demchenko
11 years, 3 months
[Users] All VMs disappeared
by Jakub Bittner
Hello,
I am running oVirt 3.3.0.beta1 on CentOS 6.4 and all our VMs disappeared
from the VMs tab. The nodes are running on CentOS too. The repeating problem in the log is:
2013-08-09 09:44:24,203 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-45) Failed to refresh VDS , vds =
7cb6aedf-47bc-40b0-877f-2a537fca5c64 : node2.x.com, error =
java.lang.NullPointerException, continuing.: java.lang.NullPointerException
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.proceedGuaranteedMemoryCheck(VdsUpdateRunTimeInfo.java:1313)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVmStats(VdsUpdateRunTimeInfo.java:968)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVdsRunTimeInfo(VdsUpdateRunTimeInfo.java:542)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.Refresh(VdsUpdateRunTimeInfo.java:383)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsManager.OnTimer(VdsManager.java:237)
[vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown
Source) [:1.7.0_25]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_25]
at java.lang.reflect.Method.invoke(Method.java:606)
[rt.jar:1.7.0_25]
at
org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
[quartz.jar:]
at
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
[quartz.jar:]
2013-08-09 09:44:27,039 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-47) Failed to refresh VDS , vds =
653e34a6-6b7e-4757-9042-dc02e7a0cddf : node3.x.com, error =
java.lang.NullPointerException, continuing.: java.lang.NullPointerException
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.proceedGuaranteedMemoryCheck(VdsUpdateRunTimeInfo.java:1313)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVmStats(VdsUpdateRunTimeInfo.java:968)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVdsRunTimeInfo(VdsUpdateRunTimeInfo.java:542)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.Refresh(VdsUpdateRunTimeInfo.java:383)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsManager.OnTimer(VdsManager.java:237)
[vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown
Source) [:1.7.0_25]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_25]
at java.lang.reflect.Method.invoke(Method.java:606)
[rt.jar:1.7.0_25]
at
org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
[quartz.jar:]
at
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
[quartz.jar:]
11 years, 3 months