network config change
by mailing-ovirt@qic.ca
Greetings,
So we have a single hosted engine ovirt box deployed.
When this was deployed, the network requirements were poorly understood and
there was no port teaming or any VLAN configuration.
Now we need to change that.
I have searched all over for any form of procedure or guide as to how we do
this.
In summary:
I need to go from a setup that uses em1 as its default device and gateway
to a setup where em1 and em3 are bonded into bond0, which is then used instead.
As for VLANs, we could live with keeping the native VLAN configured on the
Cisco side for management and only adding extra ones for VMs.
Any assistance as to what I need to go through from a bash shell would be
appreciated.
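For what it's worth, my rough understanding is that the bond itself would be
created with nmcli along the lines of the sketch below; the bond mode and the
assumption that NetworkManager manages the host are guesses on my part:
# create the bond and enslave both interfaces
# (mode=802.3ad is an assumption; it has to match the Cisco side)
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet con-name bond0-em1 ifname em1 master bond0
nmcli con add type ethernet con-name bond0-em3 ifname em3 master bond0
# a VM VLAN on top of the bond would presumably look like this (id 100 is just an example)
nmcli con add type vlan con-name bond0.100 ifname bond0.100 dev bond0 id 100
What I don't know is how any of this interacts with the ovirtmgmt bridge that
the engine manages, which is really the part I need a procedure for.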
thanks
Simon
5 years, 2 months
Failure while importing VM from export domain
by Timmi
Hi oVirt list,
I'm planning to migrate my oVirt installation to new hardware.
My current platform is running 4.2.8; the new one is 4.3.5.
I'm currently trying to import the VMs through the export domain, but
some of my old VMs are failing.
Which log file would be interesting to check?
Also, maybe there is a better way to copy more or less all VMs from my
current oVirt to the new one.
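So far I have been watching what I assume are the relevant logs while an
import runs; I am not sure these are the right places to look:
# on the engine machine
tail -f /var/log/ovirt-engine/engine.log
# on the host attached to the export domain
tail -f /var/log/vdsm/vdsm.log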
Best regards
Christoph
5 years, 2 months
HP NetXen Incorporated NX3031 driver
by Leo David
Hi everyone,
I have this bloody card installed on a node, and it seems that no driver
can be loaded for it; "ip link sh" does not show it.
Any recommendation for getting this network card working on oVirt 4.2.8?
lspci -v
....
05:00.0 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
Subsystem: Hewlett-Packard Company NC522SFP Dual Port 10GbE Server Adapter
Flags: fast devsel, IRQ 31
Memory at d9c00000 (64-bit, non-prefetchable) [size=2M]
Memory at da000000 (64-bit, non-prefetchable) [size=32M]
Expansion ROM at d8000000 [disabled] [size=64K]
Capabilities: [40] MSI-X: Enable- Count=64 Masked-
Capabilities: [80] Power Management version 3
Capabilities: [a0] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [c0] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [140] Device Serial Number 59-69-46-61-6e-48-73-75
Kernel modules: netxen_nic
05:00.1 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
Subsystem: Hewlett-Packard Company NC522SFP Dual Port 10GbE Server Adapter
Flags: fast devsel, IRQ 33
Memory at d9e00000 (64-bit, non-prefetchable) [size=2M]
Memory at dc000000 (64-bit, non-prefetchable) [size=32M]
Capabilities: [40] MSI-X: Enable- Count=64 Masked-
Capabilities: [80] Power Management version 3
Capabilities: [a0] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [c0] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [140] Device Serial Number 59-69-46-61-6e-48-73-75
Kernel modules: netxen_nic
dmesg | grep netxen
[ 2.188078] netxen_nic 0000:05:00.0: 2MB memory map
[ 2.188371] netxen_nic 0000:05:00.0: Timeout reached waiting for rom done
[ 2.188442] netxen_nic 0000:05:00.0: Error getting board config info.
[ 2.189488] netxen_nic: probe of 0000:05:00.0 failed with error -5
[ 2.190438] netxen_nic 0000:05:00.1: 2MB memory map
[ 2.190741] netxen_nic 0000:05:00.1: Timeout reached waiting for rom done
[ 2.190808] netxen_nic 0000:05:00.1: Error getting board config info.
[ 2.190964] netxen_nic: probe of 0000:05:00.1 failed with error -5
...
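In case it is useful: re-loading the module and forcing a re-probe is roughly
what I intend to try next, though I am not sure it can help when the ROM read
itself times out (the device address is taken from the lspci output above):
modprobe -r netxen_nic
modprobe netxen_nic
echo 1 > /sys/bus/pci/devices/0000:05:00.0/remove
echo 1 > /sys/bus/pci/rescan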
Thank you very much !
Leo
--
Best regards, Leo David
5 years, 2 months
ovirt-guest: Network device hanging shortly after startup (after Upgrade to 4.3.5)
by julius.kempa@astelon.ch
We have some old guests (SLES 10.4) which worked correctly up to 4.2.7 (Manager + Host).
After upgrading to 4.3.5 we have a network issue: the guest network device, regardless of the device type used, stops working after a few seconds. After about 15-20 pings the device hangs.
Sometimes, shortly after the device hangs, ping reports "no buffer space available".
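This is roughly what I plan to capture from inside a guest while the hang is
active (eth0 is just an example name):
# interface error/drop counters and recent kernel messages
ip -s link show eth0
dmesg | tail -n 20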
Does someone know a solution for this?
greetings
Julius
5 years, 2 months
[ANN] oVirt 4.3.6 Fifth Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.6 Fifth Release Candidate for testing, as of September 5th, 2019.
This update is a release candidate of the sixth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built using the
CentOS 7.7 Candidate Release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available, based on the CentOS 7.7 Candidate Release
Additional Resources:
* Read more about the oVirt 4.3.6 release highlights:
http://www.ovirt.org/release/4.3.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.6/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
5 years, 2 months
VM crashed and can't shut down successfully; I have to reboot the ovirt-node machine
by zhouhao@vip.friendtimes.net
VM crashed and can't shut down successfully; it stays stuck shutting down, even if I wait 10 hours. Every time I have to reboot the ovirt-node machine.
This has come up 4 times; it appears on both ovirt-node 4.3 and 4.2.
This bug appears when my VM is running at high load (seemingly memory and disk read/write).
My storage domain is local on the host.
When the bug appears it can consume all of my node's I/O, as in the picture below; but sometimes it does not.
For the VM's threads on the ovirt-node, the I/O ratio is 100%.
Every time, the VM's process becomes defunct.
I cannot kill it, so I have to shut down the ovirt-node.
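This is roughly how I look at the stuck process on the node (12345 stands in
for the real qemu PID):
# find the defunct or D-state qemu process and see where it is blocked
ps -eo pid,stat,wchan:32,cmd | grep '[q]emu'
cat /proc/12345/stack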
On the engine website, the VM's status stays stuck shutting down, even if I wait for hours;
it either fails or keeps shutting down,
and "power off" can't shut down the VM either.
the vm's error below
The ovirt-node's error below
----------------------------------------------------------------
Other information about my ovirt-node:
node version:
node hardware:
zhouhao(a)vip.friendtimes.net
5 years, 2 months
Re: [ANN] oVirt 4.3.6 Fourth Release Candidate is now available for testing
by Strahil
Hi Sandro,
Can the update work with 7.6 ?
I'm asking because CentOS 7.7 is still building and unavailable.
Best Regards,
Strahil Nikolov
On Aug 29, 2019 10:47, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt 4.3.6 Fourth Release Candidate for testing, as of August 29th, 2019.
>
> This update is a release candidate of the sixth in a series of stabilization updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used in production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
> * oVirt Node 4.3 (available for x86_64 only) will be made available when CentOS 7.7 is released.
>
> See the release notes [1] for known issues, new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is not yet available, pending the CentOS 7.7 release
>
> Additional Resources:
> * Read more about the oVirt 4.3.6 release highlights: http://www.ovirt.org/release/4.3.6/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.6/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
5 years, 2 months
Re: Any real experiences with nested virtualisation?
by Strahil
I guess you want to do something like this one: http://www.chrisj.cloud/?q=node/8
As oVirt is upstream of RHV, I believe that RDO and oVirt have almost the same integration as in the post.
Still, such a setup requires a lot of knowledge about KVM, oVirt and OpenStack.
Best Regards,
Strahil Nikolov
On Sep 4, 2019 21:36, thomas(a)hoberg.net wrote:
>
> First, it's a scenario the oVirt team seem to actually embrace themselves.
>
> I believe I have seen that type of deployment mentioned either in one of the forward looking blog posts or Red Hat summit videos.
>
> And then it simply makes a lot of sense to have those crucial management VMs that control the swarm of resources in a cloud run under a fault-tolerant VM management environment such as oVirt.
>
> As for nested virtualization, I can only provide some negative experience with oVirt.
>
> I've done nested virtualization with VMware, running ESX on VMWare Workstation and then some VMs underneath that ESX successfully.
>
> When I tried to do the same with oVirt, running an oVirt hyperconverged cluster as VMs on VMware workstation, I got reasonably far, but then every VM I tried to launch nested would just stop right at boot.
>
> It first happened with the hosted-engine, as that tries to start as a VM, but it happened more obviously when I tried to migrate VMs (just a plain Fedora 30 in this case) from a physical compute node running oVirt nodeOS to a virtualized compute node, running the same current oVirt nodeOS image on VMware Workstation (Windows 2016 host, if that matters).
>
> The machine would start the live-migration properly and then just freeze and stay frozen until I migrated it back or indeed to any other non-virtualized node.
>
> I haven't yet tried to do that with nested KVM, which I believe might actually work. But it's on my list of things to test, so I'll report back once I get around to it.
>
> Actually I am wondering, a) how deep this nesting would be allowed to go and b) if the 'leaf' nodes at the last nesting level should actually have nesting support activated in their kernel parameters...
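>
> For reference, this is my understanding of the relevant knob on the L0 host; kvm_intel is an assumption (AMD hosts use kvm_amd with the same parameter):
>
> # check whether nested virtualization is enabled for the kvm_intel module
> cat /sys/module/kvm_intel/parameters/nested
> # enable it persistently; takes effect after the module is reloaded
> echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf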
>
> Any insight here would be well appreciated.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YDBQGIXUWT...
5 years, 2 months