NIC Setup
by Christian Reiss
Hey folks,
General question for understanding:
Assume my hosts have
- one 10 Gb NIC with a 10.0.0.0/24 IP for storage/Gluster and the oVirt
  engine,
- two 1 Gb NICs with no IP configured.
Now the VMs can have a plethora of IPs from all the assigned IP blocks.
I can't seem to figure out the right way to attach an oVirt network to
specific NICs.
I keep reading that you should separate the storage from the VM traffic,
but I am unable to find any documentation on the suggested layout,
examples or anything, really.
The closest I have come to separating the traffic, at least logically, is
via VLAN tagging on the oVirt storage network. But even that failed due
to "No hosts has this network".
*scratches head*
Can someone enlighten me or push me to the Network Docs?
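In case it helps whoever answers, here is how I can dump the host-side
view of NICs, bonds and networks (read-only; only a sketch, and I am not
sure it is even the right place to look):

  # plain kernel view of interfaces and addresses on a host
  ip addr show

  # what vdsm itself reports for NICs, bonds and attached networks
  vdsm-client Host getCapabilities | less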
-Chris.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
HCL: 4.3.7: Hosted engine fails
by Christian Reiss
Hey all,
On a freshly created, homogeneous ovirt-node-ng-4.3.7-0.20191121.0
cluster (installed via the node installer) I am unable to deploy the
hosted engine. Everything else worked.
In vdsm.log there is this line, just after attempting to start the engine:
libvirtError: the CPU is incompatible with host CPU: Host CPU does not
provide required features: virt-ssbd
I am using AMD EPYC 7282 16-Core Processors.
I have attached:
- vdsm.log (during the failed start)
- messages (for bootup / libvirt messages)
- dmesg (grub / boot config)
- deploy.log (browser output during deployment)
- virt-capabilities (virsh -r capabilities)
I can't think of -or don't know of- any other log files of interest
here, but I am more than happy to oblige.
nodectl check and nodectl info tell me:
Status: OK
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK
  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... OK
layers:
  ovirt-node-ng-4.3.7-0.20191121.0:
    ovirt-node-ng-4.3.7-0.20191121.0+1
bootloader:
  default: ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64)
  entries:
    ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64):
      index: 0
      title: ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64)
      kernel: /boot/ovirt-node-ng-4.3.7-0.20191121.0+1/vmlinuz-3.10.0-1062.4.3.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_node01/swap rd.lvm.lv=onn_node01/ovirt-node-ng-4.3.7-0.20191121.0+1 rhgb quiet LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.3.7-0.20191121.0+1"
      initrd: /boot/ovirt-node-ng-4.3.7-0.20191121.0+1/initramfs-3.10.0-1062.4.3.el7.x86_64.img
      root: /dev/onn_node01/ovirt-node-ng-4.3.7-0.20191121.0+1
current_layer: ovirt-node-ng-4.3.7-0.20191121.0+1
The odd thing is that the hosted engine VM does get started during the
initial configuration and works. Only when the Ansible stage is done and
it is moved over to HA storage do the CPU quirks start.
So far I have learned that ssbd is a mitigation feature, but the flag is
not in my CPU. Well, ssbd is there; virt-ssbd is not.
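For completeness, these are the checks I know of to compare the host CPU
flags with what libvirt thinks it can hand to a guest (only a sketch; the
grep patterns are mine, and as far as I understand it virt-ssbd is a
synthetic flag exposed to guests on AMD rather than something that shows
up in /proc/cpuinfo):

  # host flags as the kernel sees them
  grep -o 'ssbd' /proc/cpuinfo | sort | uniq -c

  # what libvirt reports as usable CPU features for guests on this host
  virsh -r domcapabilities | grep -i ssbd

  # microcode / virt stack versions, in case newer bits are needed
  rpm -q microcode_ctl kernel qemu-kvm-ev libvirt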
I am *starting* with oVirt. I would really, really welcome it if
recommendations included clues on how to make them happen.
I do RTFM, but I was unable to find anything (or any solution) anywhere,
not even after 80 hours of working on this.
Thank you all.
-Chris.
--
Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
support(a)alpha-labs.net \ / Campaign
X against HTML
WEB alpha-labs.net / \ in eMails
GPG Retrieval https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise lost.
Re: ISO Upload
by Strahil
Then ... it's a bug and should be reported.
Best Regards,
Strahil Nikolov

On Jan 7, 2020 16:00, Chris Adams <cma(a)cmadams.net> wrote:
>
> Once upon a time, m.skrzetuski(a)gmail.com <m.skrzetuski(a)gmail.com> said:
> > I'd give up on the ISO domain. I started like you and then read the docs which said that ISO domain is deprecated.
> > I'd upload all files to a data domain.
>
> Note that that only works if your data domain is NFS... iSCSI data
> domains will let you upload ISOs, but connecting them to a VM fails.
> --
> Chris Adams <cma(a)cmadams.net>
OVirt Engine Server Died - Steps for Rebuilding the Ovirt Engine System
by bob.franzke@mdaemon.com
Full disclosure: I am not an oVirt expert. I am a network engineer who
has been forced to take over sysadmin duties for a departed co-worker. I
have little experience with oVirt, so apologies up front for anything I
say that comes across as stupid or as an "RTM" question. Normally I would
do just that, but I am in a bind and am trying to figure this out
quickly. We have an oVirt installation that consists of 4 nodes and a
server that hosts the ovirt-engine, all running CentOS 7. The server that
hosts the engine has a pair of failing hard drives and I need to replace
the hardware ASAP, so I need to outline the steps to build a new server
to serve as the replacement oVirt engine server. I have backed up the
entire /etc directory as well as the backups done nightly by the engine
itself. I also backed up the iSCSI info and took a printout of the disk
arrangement. The disk has gotten so bad at this point that the DB won't
back up any longer; I get a "fatal:backup failed" error when trying to
run the oVirt backup tool. Also, the oVirt management site is not
rendering and I am not sure why.
Is there anything else I need to make sure I back up in order to migrate
the engine from one server to another? Also, until I can get the engine
running again, is there any tool available to manage the VMs on the hosts
themselves? The VMs on the hosts are running, but I need a way to manage
them if something happens while the engine is being repaired. Any info on
this, as well as what to back up and the steps to move the engine from
one server to another, would be much appreciated. Sorry, I know this is a
real RTM-type post, but I am in a bind and need a solution rather
quickly. Thanks in advance.
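In case it is useful, this is roughly the flow I think the docs describe
(only a sketch, assuming a standalone engine rather than a hosted engine,
and assuming a usable backup file can still be produced or one of the
nightly backups is intact; file names are placeholders). Corrections very
welcome:

  # on the old engine server, if it will still complete:
  engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log

  # on the replacement server: same CentOS 7 plus the matching oVirt
  # release repos, then install the engine packages and restore:
  yum install -y ovirt-engine
  engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
      --provision-db --restore-permissions
  engine-setup

  # while the engine is down the VMs keep running; for a read-only view
  # on each host:
  virsh -r list --all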
Deleted gluster volume
by Christian Reiss
Hey folks,
I accidentally deleted a volume in the oVirt engine, under Storage ->
Volumes.
The Gluster volume is still up on all three nodes, but the oVirt engine
refuses to re-scan it, or to re-introduce it.
How can I add the still valid and working (but deleted in the oVirt
engine) Gluster volume back to the engine?
It's a test setup; I am trying to break & repair things. Still, I am
unable to find any documentation on this "use case".
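For reference, this is how I am checking that the volume is still
healthy from the Gluster side (the volume name here is just a
placeholder):

  gluster peer status
  gluster volume info data
  gluster volume status data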
Thanks!
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
strength of oVirt compared to others???
by yam yam
Hi everyone!
I want to understand the clear strengths of oVirt compared to OpenStack
and KubeVirt (the VM add-on for Kubernetes).
Compared to OpenStack, I have heard that oVirt specializes in
long-lasting traditional apps that require robust and resilient
infrastructure, while OpenStack is a cloud solution.
So backends are suited to oVirt, and frontends like web servers are
suited to OpenStack.
But I am wondering why these traditional apps are not suited to
OpenStack, since I think OpenStack encompasses the functions (live
migration, scale-up, snapshots, ...) that oVirt provides.
I want to know which features in oVirt make such a difference.
Compared to KubeVirt, it seems more difficult to find a strength, given
that both are designed to run legacy apps.
In a mixed environment, it seems like KubeVirt actually has more
benefits :'(
Question about HCI gluster_inventory.yml
by John Call
Hi ovirt-users,
I'm trying to automate my HCI deployment, but can't figure out how to
specify multiple network interfaces in gluster_inventory.yml. My servers
have two NICs, one for ovirtmgmt (and everything else), and the other is
just for Gluster. How should I populate the inventory/vars file? Is this
correct?
[root@rhhi1 hc-ansible-deployment]# pwd
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
[root@rhhi1 hc-ansible-deployment]# cat gluster_inventory.yml
--lots of stuff omitted--
hc_nodes:
  hosts:
    host1-STORAGE-fqdn:
    host2-STORAGE-fqdn:
    host3-STORAGE-fqdn:
  vars:
    cluster_nodes:
      - host1-ovirtmgmt-fqdn
      - host2-ovirtmgmt-fqdn
      - host3-ovirtmgmt-fqdn
    gluster_features_hci_cluster: "{{ cluster_nodes }}"
gluster:
  host2-STORAGE-fqdn:
  host3-STORAGE-fqdn:
storage_domains:
  [{"name":"data","host":"host1-STORAGE-fqdn","address":"host1-STORAGE-fqdn","path":"/data","mount_options":"backup-volfile-servers=host2-STORAGE-fqdn:host3-STORAGE-fqdn"},{"name":"vmstore","host":"host1-STORAGE-fqdn","address":"host1-STORAGE-fqdn","path":"/vmstore","mount_options":"backup-volfile-servers=host2-STORAGE-fqdn:host3-STORAGE-fqdn"}]
VM auto start with Host?
by m.skrzetuski@gmail.com
Hello everyone,
please tell me it is possible to auto-start all VMs together with a
host. How do I configure this?
I found some RFEs, Bugzilla bugs, and Red Hat engineers promising this,
but no docs on how to enable it.
Kind regards
Skrzetuski