oVirt MAC Pool question
by Vrgotic, Marko
Dear oVirt,
While investigating DHCP and DDNS collision issues between two VM servers from different oVirt clusters, I noticed that oVirt assigns the same default MAC range to each of its managed clusters.
Question 1: Does oVirt Engine keep a separate place in the DB or … for the MAC addresses assigned per cluster, or does it keep them all in the same place?
Question 2: Would there be a harmful effect on existing VMs if the default MAC pool were changed?
Additional info:
Self Hosted ovirt-engine – 4.3.4 and 4.3.7
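For reference, a minimal way to inspect the configured MAC pools and which pool each cluster references, assuming the v4 REST API and an admin@internal account (the engine FQDN and password below are placeholders):
```
# List all MAC address pools the engine knows about (the Default pool included)
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  https://engine.example.com/ovirt-engine/api/macpools

# Each cluster definition carries a <mac_pool> reference to one of the pools above
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  https://engine.example.com/ovirt-engine/api/clusters
```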
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
ActiveVideo
Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"
by Guillaume Pavese
Hi,
I am trying to deploy the oVirt 4.3-stable Hosted Engine with Cockpit.
The deployment fails with the following:
[ INFO ] TASK [ovirt.hosted_engine_setup : Set VLAN ID at datacenter level]
[ ERROR ] Exception: Entity 'None' was not found.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Entity
'None' was not found."}
Any idea?
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
Understanding ovirt memory management which appears incorrect
by divan@santanas.co.za
Hi All,
A question regarding memory management with oVirt. I know memory can
be complicated, hence I'm asking the experts. :)
Two examples where it looks, to me, like memory management from the
oVirt perspective is incorrect. This results in us not getting as
much out of a host as we'd expect.
## Example 1:
host: dev-cluster-04
I understand the mem on the host to be:
128G total (physical)
68G used
53G available
56G buff/cache
Therefore, I understand that roughly 53G should still be available to
allocate (minus a few things).
```
DEV [root@dev-cluster-04:~] # free -m
              total        used        free      shared  buff/cache   available
Mem:         128741       68295        4429        4078       56016       53422
Swap:         12111        1578       10533
DEV [root@dev-cluster-04:~] # cat /proc/meminfo
MemTotal: 131831292 kB
MemFree: 4540852 kB
MemAvailable: 54709832 kB
Buffers: 3104 kB
Cached: 5174136 kB
SwapCached: 835012 kB
Active: 66943552 kB
Inactive: 5980340 kB
Active(anon): 66236968 kB
Inactive(anon): 5713972 kB
Active(file): 706584 kB
Inactive(file): 266368 kB
Unevictable: 50036 kB
Mlocked: 54132 kB
SwapTotal: 12402684 kB
SwapFree: 10786688 kB
Dirty: 812 kB
Writeback: 0 kB
AnonPages: 67068548 kB
Mapped: 143880 kB
Shmem: 4176328 kB
Slab: 52183680 kB
SReclaimable: 49822156 kB
SUnreclaim: 2361524 kB
KernelStack: 20000 kB
PageTables: 213628 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 78318328 kB
Committed_AS: 110589076 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 859104 kB
VmallocChunk: 34291324976 kB
HardwareCorrupted: 0 kB
AnonHugePages: 583680 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 621088 kB
DirectMap2M: 44439552 kB
DirectMap1G: 91226112 kB
```
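To make the free-vs-available gap easier to see, here is a small sketch that pulls the relevant counters straight out of /proc/meminfo (plain awk, nothing oVirt-specific); on this host most of the gap is the ~47G of reclaimable slab plus page cache:
```
# Compare what the kernel itself considers free vs. reclaimable (values converted to GiB)
awk '/^MemTotal|^MemFree|^MemAvailable|^Cached:|^SReclaimable/ {printf "%-15s %8.1f GiB\n", $1, $2/1048576}' /proc/meminfo
```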
The oVirt engine's Compute -> Hosts view shows s4-dev-cluster-01 at 93%
memory utilised.
Clicking on the node says:
Physical Memory: 128741 MB total, 119729 MB used, 9012 MB free
So the oVirt engine says 9G free. The OS reports 4G free but 53G
available. Surely oVirt should be looking at available memory?
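What the engine displays comes from VDSM's host statistics rather than directly from free. A hedged way to see the raw numbers VDSM reports, assuming vdsm-client is available on the host (the exact field names are from memory, so treat them as an assumption):
```
# Dump VDSM's view of host memory and compare it with /proc/meminfo
vdsm-client Host getStats | grep -iE 'mem(free|available|committed|used)'
```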
This is a problem when, for instance, trying to run a VM called
dev-cassandra-01 (memory size 24576, max memory 24576, memory
guarantee 10240) on this host; it fails with:
```
Cannot run VM. There is no host that satisfies current scheduling
constraints. See below for details:
The host dev-cluster-04.fnb.co.za did not satisfy internal filter
Memory because its available memory is too low (19884 MB) to run the
VM.
```
To me this looks blatantly wrong; the host has 53G available according
to free -m.
I'm guessing I'm missing something, unless this is some sort of bug?
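The 19884 MB in the error seems to be the engine's own bookkeeping (roughly, physical memory minus what is committed to running VMs and reserved overhead, subject to the cluster overcommit setting) rather than the kernel's MemAvailable. A hedged way to see that engine-side number, assuming the v4 REST API (engine FQDN and credentials are placeholders; max_scheduling_memory is the attribute I believe carries it):
```
# Ask the engine what it thinks can still be scheduled on this host
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/hosts?search=name%3Ddev-cluster-04*' \
  | grep -E 'max_scheduling_memory|<memory>'
```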
Versions:
```
engine: 4.3.7.2-1.el7
host:
OS Version: RHEL - 7 - 6.1810.2.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 957.12.1.el7.x86_64
KVM Version: 2.12.0 - 18.el7_6.3.1
LIBVIRT Version: libvirt-4.5.0-10.el7_6.7
VDSM Version: vdsm-4.30.13-1.el7
SPICE Version: 0.14.0 - 6.el7_6.1
GlusterFS Version: [N/A]
CEPH Version: librbd1-10.2.5-4.el7
Open vSwitch Version: openvswitch-2.10.1-3.el7
Kernel Features: PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption: Disabled
```
## Example 2:
An oVirt host with two VMs:
According to the host, it has 128G of physical memory, of which 56G is
used, 69G is buff/cache and 65G is available,
as shown here:
```
LIVE [root@prod-cluster-01:~] # cat /proc/meminfo
MemTotal: 131326836 kB
MemFree: 2630812 kB
MemAvailable: 66573596 kB
Buffers: 2376 kB
Cached: 5670628 kB
SwapCached: 151072 kB
Active: 59106140 kB
Inactive: 2744176 kB
Active(anon): 58099732 kB
Inactive(anon): 2327428 kB
Active(file): 1006408 kB
Inactive(file): 416748 kB
Unevictable: 40004 kB
Mlocked: 42052 kB
SwapTotal: 4194300 kB
SwapFree: 3579492 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 56085040 kB
Mapped: 121816 kB
Shmem: 4231808 kB
Slab: 65143868 kB
SReclaimable: 63145684 kB
SUnreclaim: 1998184 kB
KernelStack: 25296 kB
PageTables: 148336 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 69857716 kB
Committed_AS: 76533164 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 842296 kB
VmallocChunk: 34291404724 kB
HardwareCorrupted: 0 kB
AnonHugePages: 55296 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 722208 kB
DirectMap2M: 48031744 kB
DirectMap1G: 87031808 kB
LIVE [root@prod-cluster-01:~] # free -m
              total        used        free      shared  buff/cache   available
Mem:         128248       56522        2569        4132       69157       65013
Swap:          4095         600        3495
```
However, the oVirt Compute -> Hosts screen shows this node at 94%
memory.
Clicking compute -> hosts -> prod-cluster-01 -> general says:
Physical Memory: 128248 MB total, 120553 MB used, 7695 MB free
Swap Size: 4095 MB total, 600 MB used, 3495 MB free
The physical memory figure above makes no sense to me, unless it
includes caches, which I would think it shouldn't.
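A large part of the "used" memory here is kernel slab (about 65G, of which roughly 63G is reclaimable) rather than VM memory. A quick, hedged way to see which caches are holding it (slabtop comes with procps-ng; the flags are from memory, so double-check them):
```
# Top kernel slab caches, sorted by cache size, printed once (non-interactive)
slabtop -o -s c | head -n 15
```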
This host has just two VMs:
LIVE [root@prod-cluster-01:~] # virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list
Id Name State
----------------------------------------------------
35 prod-box-18 running
36 prod-box-11 running
Moreover, each VM has 32G of memory set in every possible place, from
what I can see.
```
LIVE [root@prod-cluster-01:~] # virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf dumpxml prod-box-11|grep -i mem
<ovirt-vm:memGuaranteedSize type="int">32768</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">32768</ovirt-vm:minGuaranteedMemoryMb>
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<cell id='0' cpus='0-27' memory='33554432' unit='KiB'/>
<suspend-to-mem enabled='no'/>
<model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/>
<memballoon model='virtio'>
</memballoon>
```
prod-box-11 is, however, set as a high-performance VM. That could cause a
problem.
Same for the other VM:
```
LIVE [root@prod-cluster-01:~] # virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf dumpxml prod-box-18|grep -i mem
<ovirt-vm:memGuaranteedSize type="int">32768</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">32768</ovirt-vm:minGuaranteedMemoryMb>
<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>
<cell id='0' cpus='0-27' memory='33554432' unit='KiB'/>
<suspend-to-mem enabled='no'/>
<model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/>
<memballoon model='virtio'>
</memballoon>
```
So I understand that two VMs, each allocated 32G of RAM, should
consume approximately 64G of RAM on the host. The host has 128G of RAM, so
usage should be at roughly 50%. However, oVirt is reporting 94% usage.
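To compare the engine's 94% with what the guests actually hold, here is a hedged check of per-VM memory from libvirt's point of view (same authfile URI as above; dommemstat reports the balloon size and, with a recent enough libvirt, the RSS of the qemu process):
```
# Actual/rss figures per VM as libvirt sees them (values are in KiB)
for vm in prod-box-11 prod-box-18; do
  echo "== $vm =="
  virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' dommemstat "$vm"
done
```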
Versions:
```
engine: 4.3.5.5-1.el7
host:
OS Version: RHEL - 7 - 6.1810.2.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 957.10.1.el7.x86_64
KVM Version: 2.12.0 - 18.el7_6.3.1
LIBVIRT Version: libvirt-4.5.0-10.el7_6.6
VDSM Version: vdsm-4.30.11-1.el7
SPICE Version: 0.14.0 - 6.el7_6.1
GlusterFS Version: [N/A]
CEPH Version: librbd1-10.2.5-4.el7
Open vSwitch Version: openvswitch-2.10.1-3.el7
Kernel Features: PTI: 1, IBRS: 0, RETP: 1
VNC Encryption: Disabled
```
Thanks for any insights!
--
Divan Santana
https://divansantana.com
Two host cluster without hyperconverged
by Göker Dalar
Hello everyone,
I would like to get some ideas on this topic.
I have two servers with the same capabilities and 8 identical physical disks per
node. I want to set up a cluster using redundant disks. I don't have another
server for a Gluster hyperconverged setup. How should I build this
structure?
Thanks in advance,
Göker
Gluster Heal Issue
by Christian Reiss
Hey folks,
in our production setup with 3 nodes (HCI) we took one host down
(maintenance, stop Gluster, power off via ssh/oVirt engine). Once it was
back up, Gluster had 2k healing entries, which went down to 2 within about
10 minutes.
Those two give me a headache:
[root@node03:~] # gluster vol heal ssd_storage info
Brick node01:/gluster_bricks/ssd_storage/ssd_storage
<gfid:a121e4fb-0984-4e41-94d7-8f0c4f87f4b6>
<gfid:6f8817dc-3d92-46bf-aa65-a5d23f97490e>
Status: Connected
Number of entries: 2
Brick node02:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0
Brick node03:/gluster_bricks/ssd_storage/ssd_storage
<gfid:a121e4fb-0984-4e41-94d7-8f0c4f87f4b6>
<gfid:6f8817dc-3d92-46bf-aa65-a5d23f97490e>
Status: Connected
Number of entries: 2
No paths, only gfids. We took down node02, so it does not have the file:
[root@node01:~] # md5sum
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
75c4941683b7eabc223fc9d5f022a77c
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
[root@node02:~] # md5sum
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
md5sum:
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6:
No such file or directory
[root@node03:~] # md5sum
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
75c4941683b7eabc223fc9d5f022a77c
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
The two copies that do exist (node01 and node03) are md5-identical.
Their extended attributes are identical, too:
[root@node01:~] # getfattr -d -m . -e hex
/gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
getfattr: Removing leading '/' from absolute path names
# file:
gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.ssd_storage-client-1=0x0000004f0000000100000000
trusted.gfid=0xa121e4fb09844e4194d78f0c4f87f4b6
trusted.gfid2path.d4cf876a215b173f=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f38366461303238392d663734662d343230302d393238342d3637386537626437363139352e31323030
trusted.glusterfs.mdata=0x010000000000000000000000005e349b1e000000001139aa2a000000005e349b1e000000001139aa2a000000005e34994900000000304a5eb2
getfattr: Removing leading '/' from absolute path names
# file:
gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.ssd_storage-client-1=0x0000004f0000000100000000
trusted.gfid=0xa121e4fb09844e4194d78f0c4f87f4b6
trusted.gfid2path.d4cf876a215b173f=0x62653331383633382d653861302d346336642d393737642d3761393337616138343830362f38366461303238392d663734662d343230302d393238342d3637386537626437363139352e31323030
trusted.glusterfs.mdata=0x010000000000000000000000005e349b1e000000001139aa2a000000005e349b1e000000001139aa2a000000005e34994900000000304a5eb2
Now, I don't dare simply proceed without some advice.
Does anyone have a clue on how to resolve this issue? File #2 is identical to
this one, from a problem point of view.
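In case it helps the discussion, this is the kind of thing I am considering (hedged; standard Gluster tooling only): resolve the gfids back to real paths via the hard link under .glusterfs, and only then kick off a full heal.
```
# Map the gfid back to its path: the .glusterfs entry is a hard link to the real file
find /gluster_bricks/ssd_storage/ssd_storage -samefile \
  /gluster_bricks/ssd_storage/ssd_storage/.glusterfs/a1/21/a121e4fb-0984-4e41-94d7-8f0c4f87f4b6

# Once the existing copies are confirmed identical, trigger a full self-heal on the volume
gluster volume heal ssd_storage full
```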
Have a great weekend!
-Chris.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
Ovirt-engine-ha cannot see live status of Hosted Engine
by asm@pioner.kz
Good day to all.
I have some issues with oVirt 4.2.6, but the main one is this:
I have two CentOS 7 nodes with the same configuration and the latest oVirt 4.2.6, with the HostedEngine's disk on NFS storage.
Some of the virtual machines are also working fine.
When the HostedEngine is running on one node (srv02.local) everything is fine.
After migrating it to the other node (srv00.local), I see that the agent cannot check the liveliness of the HostedEngine. After a few minutes the HostedEngine reboots, and after some time I see the same situation. After migration to the other node (srv00.local), all looks OK.
Output of the hosted-engine --vm-status command when the HostedEngine is on the srv00 node:
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : srv02.local
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : ecc7ad2d
local_conf_timestamp : 78328
Host timestamp : 78328
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=78328 (Tue Sep 18 12:44:18 2018)
host-id=1
score=0
vm_conf_refresh_time=78328 (Tue Sep 18 12:44:18 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan 2 03:49:58 1970
--== Host 2 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : srv00.local
Host ID : 2
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 1d62b106
local_conf_timestamp : 326288
Host timestamp : 326288
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=326288 (Tue Sep 18 12:44:21 2018)
host-id=2
score=3400
vm_conf_refresh_time=326288 (Tue Sep 18 12:44:21 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineStarting
stopped=False
Log agent.log from srv00.local:
MainThread::INFO::2018-09-18 12:40:51,749::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:40:52,052::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:01,066::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:01,374::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::169::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::174::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host srv02.local.pioner.kz (id 1): {'conf_on_shared_storage': True, 'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=78128 (Tue Sep 18 12:40:58 2018)\nhost-id=1\nscore=0\nvm_conf_refresh_time=78128 (Tue Sep 18 12:40:58 2018)\nconf_on_shared_storage=True\nmaintenance=False\nstate=EngineUnexpectedlyDown\nstopped=False\ntimeout=Fri Jan 2 03:49:58 1970\n', 'hostname': 'srv02.local.pioner.kz', 'alive': True, 'host-id': 1, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down_unexpected', 'detail': 'unknown'}, 'score': 0, 'stopped': False, 'maintenance': False, 'crc32': 'e18e3f22', 'local_conf_timestamp': 78128, 'host-ts': 78128}
MainThread::INFO::2018-09-18 12:41:11,393::state_machine::177::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'failed liveliness check', 'health': 'bad', 'vm': 'up', 'detail': 'Up'}, 'bridge': True, 'mem-free': 12763.0, 'maintenance': False, 'cpu-load': 0.0364, 'gateway': 1.0, 'storage-domain': True}
MainThread::INFO::2018-09-18 12:41:11,393::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:11,703::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:21,716::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:22,020::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
MainThread::INFO::2018-09-18 12:41:31,033::states::779::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) VM is powering up..
MainThread::INFO::2018-09-18 12:41:31,344::hosted_engine::491::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state EngineStarting (score: 3400)
As we can see, the agent thinks that the HostedEngine is just powering up. I cannot do anything about it; I have already reinstalled the srv00 node many times without success.
One time I even had to uninstall the ovirt* and vdsm* software. One interesting point: after installing just "yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm" on this node, I tried to install the node from the engine web interface with the "Deploy" action. That installation was unsuccessful before I installed ovirt-hosted-engine-ha on this node; I don't see in the documentation that this is needed before installing new hosts, but I mention it for information and checking. After installing ovirt-hosted-engine-ha, the node was installed with HostedEngine support, but the main issue did not change.
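For what it is worth, my understanding is that the liveliness check is an HTTP probe of the engine's health page, so this is what I have been checking by hand from srv00 (ENGINE_FQDN is a placeholder for my engine's FQDN):
```
# Can this host resolve the engine VM's name at all?
getent hosts ENGINE_FQDN

# Manual version of the agent's liveliness probe: the engine health page
# should answer with a short "DB Up ... Health Status" style message
curl -sv http://ENGINE_FQDN/ovirt-engine/services/health
```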
Thanks in advance for help.
BR,
Alexandr
Reimport VMs after lost Engine with broken Backup
by Vinícius Ferrão
Hello,
I'm facing a scenario with a lost hosted engine. For reasons unknown the backup is broken and I've tried everything: redeploying with the backup file, and deploying a new one and then restoring the backup. I changed the HE storage domain in both cases just to be sure, and reinstalled one of the hosts just to be safe, but nothing worked.
So I just deployed a brand new engine and now I want to import the VMs back.
My scenario right now is:
ovirt1: brand new, with the new VM in a new storage domain.
ovirt2: has production VMs running without any issue.
So my questions right now are:
* What happens if I just add the ovirt2 machine to the new engine? Will it reboot, or will it add everything back: storage, networks, etc.?
My plan right now would be:
* Reconfigure the DC and the cluster.
* Re-add all the networks.
* Reattach the storage domains.
* Recreate all the VMs, attaching their disks.
Is there something I can do better to mitigate the work?
Thanks,
Re: High level network advice request
by Richard Nilsson
Thanks so much for your reply Robert!
I like your setup a lot; that's where I'm going too, actually. But for now I have only one node, and I'm trying to learn a very basic setup with just that one for the moment (because I have it running after years of trying! It will take a few weeks and another motherboard/rebuild before I have the second node, but I'll get there soon).
I've just learned (I think) that I can't sync new logical networks with the host, because I can't put my only host into maintenance mode... I thought there might be a way with the CLI and restarts and all that, but there is no point; I will have another node in a few weeks or months :)
That's okay. I'm trying to work out why I can't access my new test server from the WAN. I use split DNS with pfSense and HAProxy reverse proxy rules. I can get to the server test pages from the LAN via the pfSense DNS resolution, but the reverse proxy rules are not working from the WAN, and I don't know what next step to take to debug the problem. The engine is accessible from the WAN, so I think it should work for the VM server, which is also on the default oVirt management network and uses all the defaults, like the hosted engine. I suspect there is a security setting on the engine, the logical network, or maybe the server?
What should I check next?
My single node is also in the same condition, which may be instructive to a noob like me... I can reach the node from the LAN but not the WAN, so the engine is a special case. Do I need to create certificates on the VM webserver?
I'd like to see if I can set up a Nextcloud server after trying a SuiteCRM server, but I started with Fedora 31 Server and a very basic LAMP stack to limit variables...
Thanks in advance. Let me know if I can ever help you with anything! I'm an Architect, but a real one; not IT but bricks and all that :)
These are the links:
engine.metrodesignoffice.com
mdowebserver.metrodesignoffice.com
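For reference, the WAN-side checks I have in mind are nothing more than plain DNS and HTTP diagnostics against the hostnames above, run from outside the LAN (generic tooling, nothing oVirt-specific):
```
# Does public DNS point where I expect?
dig +short mdowebserver.metrodesignoffice.com

# Does the HAProxy frontend answer on 443, and with what certificate/redirect?
curl -vkI https://mdowebserver.metrodesignoffice.com
```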
oVirt VMs status down after host reboot
by Eugène Ngontang
Hi all,
I've set up an infrastructure with oVirt, using a self-hosted engine.
I use some Ansible scripts from my virtualization host (the physical
machine) to bootstrap the hosted engine and create a set of virtual
machines on which I deploy a k8s cluster.
The deployment goes well, and everything is OK.
Now I'm doing some reboot tests, and when I reboot the physical server,
only the hosted-engine VM is up after the reboot; the rest of the VMs, and thus
the k8s cluster, are down.
Has anyone here experienced this issue? What can cause it, and how can I
automate the virtual machines' startup in RHVE/oVirt?
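To clarify what I mean by automating the startup: apart from marking the guests as highly available in the engine, the only thing I can think of is something like the sketch below, run once the engine is back up (ENGINE, PASSWORD and VM_ID are placeholders; the start action is the v4 REST API call as I understand it):
```
# Start one VM through the engine API after the engine itself is reachable again
curl -s -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<action/>' "https://ENGINE/ovirt-engine/api/vms/VM_ID/start"
```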
Thanks.
Regards,
Eugene
--
LesCDN <http://lescdn.com>
engontang(a)lescdn.com
------------------------------------------------------------
Men need a leader, and a leader needs men! Clothes do not make the man, but when people see you, they judge you!
Update 4.4
by Dirk Streubel
Hello,
I use version 4.4 for testing and I wanted to do an update.
This is the result:
LANG=C engine-setup
...
[ INFO ] Checking for product updates...
[ ERROR ] Yum
[u'ovirt-engine-backend-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
requires java-client-kubevirt >= 0.1.0']
[ INFO ] Yum Performing yum transaction rollback
[ ERROR ] Failed to execute stage 'Environment customization':
[u'ovirt-engine-backend-4.4.0-0.0.master.20200122090542.git5f0a359.el7.noarch
requires java-client-kubevirt >= 0.1.0']
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20200127193352-x6p04t.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20200127193405-setup.conf'
[ ERROR ] Failed to execute stage 'Clean up': must be unicode, not str
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
So, I found this:
https://repo1.maven.org/maven2/org/ovirt/java-client-kubevirt/java-client...
So, do I have to install a .jar to make the engine update work, or what is the
best way to do the update?
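Before pulling in a .jar by hand, a hedged way to check whether the dependency is supposed to come from one of the configured repositories (plain yum, nothing oVirt-specific):
```
# Which repository, if any, provides the missing dependency?
yum provides 'java-client-kubevirt*'

# Which oVirt repositories are currently enabled on the engine machine?
yum repolist enabled | grep -i ovirt
```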
Dirk