SSBD issues on live cluster
by Christian Reiss
Hey folks,
new hardware arrived \o/
Installation as HCI was bliss, Gluster and all.
Deploying the hosted engine also worked, until it came to the very last
point: the health checks, which failed.
vdsm.log:
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
2019-11-15 09:54:02,588+0100 INFO (jsonrpc/4) [api.virt] FINISH
getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList':
[{'status': 'Down', 'exitMessage': 'the CPU is incompatible with host
CPU: Host CPU does not provide required features: virt-ssbd',
'statusTime': '4344202670', 'vmId':
'50ac6250-4c24-40fd-894c-bc248c4f6fa2', 'exitReason': 1, 'exitCode':
1}]} from=::1,37492, vmId=50ac6250-4c24-40fd-894c-bc248c4f6fa2 (api:54)
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
But:
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
[root@node01 vdsm]# cat /proc/cpuinfo | grep flags | tail -n 1 | grep -i --color ssb
flags : fpu vme [...] ssbd [...]
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
The CPU is:
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
processor : 1
vendor_id : AuthenticAMD
cpu family : 23
model : 49
model name : AMD EPYC 7282 16-Core Processor
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
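I guess a useful next check is to compare the kernel flag with what
libvirt/QEMU actually expose to guests (just a sketch, assuming virsh is
usable on the node; I have not pasted any output here):
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
# host-side kernel flags
grep -o 'amd_ssbd\|virt_ssbd\|ssbd' /proc/cpuinfo | sort | uniq -c
# guest-facing CPU features as libvirt reports them
virsh -r domcapabilities | grep -i ssbd
--- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
As far as I understand, virt-ssbd is a synthetic, guest-facing feature
and not the same thing as the plain ssbd host flag, so the two checks can
disagree.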
Anyone willing to shed some light on this issue?
Thanks in advance!
-Chris.
--
Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
support(a)alpha-labs.net \ / Campaign
X against HTML
WEB alpha-labs.net / \ in eMails
GPG Retrieval https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise lost.
Re: Gluster setup
by Strahil
In /etc/hosts you first enter the IP, then the FQDN, and last the shortname (aliases).
Fix the /etc/hosts file!
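Something like this (the short names are just an example):
10.10.45.11   gfs1.gluster.private   gfs1
10.10.45.12   gfs2.gluster.private   gfs2
10.10.45.13   gfs3.gluster.private   gfs3
Keep the two localhost lines as they are.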
Best Regards,
Strahil Nikolov
On Nov 15, 2019 14:15, rob.downer(a)orbitalsystems.co.uk wrote:
>
> I have set up a 3 node system.
>
> Gluster has its own backend network and I have tried entering the FQDN hosts via ssh as follows...
> gfs1.gluster.private 10.10.45.11
> gfs2.gluster.private 10.10.45.12
> gfs3.gluster.private 10.10.45.13
>
> I entered this in /etc/hosts:
>
> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
> gfs1.gluster.private 10.10.45.11
> gfs2.gluster.private 10.10.45.12
> gfs3.gluster.private 10.10.45.13
>
> but on the CLI
>
> host gfs1.gluster.private
>
> returns
>
> [root@ovirt1 etc]# host gfs1.gluster.private
> Host gfs1.gluster.private not found: 3(NXDOMAIN)
> [root@ovirt1 etc]#
>
> I guess this is the wrong hosts file, resolver.conf lists files first for lookup...
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ILABGNZFOH5...
Gluster Questions
by Christian Reiss
Hey folks,
Running a 3-node HCI cluster (still in the testing stage), I would love
to hear your input. All nodes are exactly identical and have 8 TB of
local SSD storage in a RAID 6.
Gluster was set up to match this (RAID 6, cluster size of 256k).
There is the option of compression & dedup; coming from ZFS, this is a
memory hog and kind of insane. What are your thoughts on compression &
dedup at this point?
The suggested effective size of the compressed drive is 10 times the
original size. Seems crazy high; any suggestions here?
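If I read the wizard right, the 10x figure is just the default logical
size it proposes when the VDO-based compression & dedup option is
enabled, not a promise of real savings. I assume actual savings would
only show up later, once data lands on the bricks, with something like:
vdostats --human-readable
Please correct me if that assumption is wrong.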
Thanks for your input!
-Chris.
--
Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
support(a)alpha-labs.net \ / Campaign
X against HTML
WEB alpha-labs.net / \ in eMails
GPG Retrieval https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise lost.
External VM vdsbroker parsing error
by Marcus van Dam
Hi,
I am running a two-node oVirt cluster, based on the "High-Availability
oVirt-Cluster with iSCSI-Storage" document published by Linbit. Although
it is an older document, I am happy with the setup.
With this setup, I notice a flood of failed VM imports in the engine logs.
The setup is based on a separate QEMU VM running the oVirt engine, which
seems to be the origin of the error (posted below).
Is there anything I can do to either ignore this instance and not fill
the logs, or fix the import error?
Thanks!
- Marcus
---- %< ---- SNIP ---- %< ----- %< ---- SNIP ---- %< -----
2019-11-14 14:50:27,155+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.FullListAdapter]
(EE-ManagedThreadFactory-engineScheduled-Thread-38) [2e46652f] Failed
during parsing configuration of VM 133789fc-b329-40b8-bac3-e3ff8c37b1f9
(<domain type='kvm' id='2'>
  <name>oVirtm</name>
  <uuid>133789fc-b329-40b8-bac3-e3ff8c37b1f9</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'>
    <timer name='kvmclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/drbd/by-res/kvm/0'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/etc/ovirtm/seed.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='none'>
      <alias name='usb'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:75:3c:7d'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/4'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/4'>
      <source path='/dev/pts/4'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-oVirtm/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'/>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c624,c838</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c624,c838</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>
), error is: {}: java.lang.NullPointerException
2019-11-14 14:50:27,155+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.FullListAdapter]
(EE-ManagedThreadFactory-engineScheduled-Thread-38) [2e46652f]
Exception:: java.lang.NullPointerException
Do I need to use self hosted engine with a GlusterFs three node setup
by donagh.moran@oracle.com
Every piece of documentation I read regarding the setup of Gluster seems to suggest using at least one of the nodes to host the engine. Is this a requirement, or can I use an engine that is deployed on a VM on another server in the same network? Any info would be much appreciated.
Gluster Network Issues....
by rob.downer@orbitalsystems.co.uk
I have set up 3 SuperMicros with oVirt Node and it's all pretty sweet.
FQDNs are set up for the LAN, and after setup I also enabled a second NIC
with its own FQDN for a Gluster network.
The issue is that the second ports seem to be unavailable for network
access by ping or login... If you log in as root, the system says those
ports are available for login on the bash shell, and the node check comes
back fine.
I have IPMI set up on the systems as well for access.
Am I missing something?
I realise Gluster should be on a separate LAN and will put it on a 10 GbE
network, but I'm just testing.
I have the latest stable build.
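If it helps, these are the kinds of commands I can run and post output
from (standard iproute2 / NetworkManager / firewalld tooling, nothing
oVirt-specific; the exact interface names are whatever the nodes use):
ip -br addr show                  # is the second NIC up with the expected address?
ip route show                     # is there a route for the Gluster subnet?
nmcli connection show             # which profile the second NIC is bound to
firewall-cmd --get-active-zones   # is that NIC in a zone that allows ssh/icmp?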
Any help would be appreciated.
Hosted-Engine wizard disappeared after cockpit idle session timeout
by wodel youchi
Hi,
I got this behavior once and have not tested it again.
I started the hosted-engine deployment using the Cockpit web UI. The
process went smoothly and the local hosted-engine bootstrap VM was
created; at that point I left the console for some time.
When I got back, the Cockpit session had been closed due to the idle
session timeout. I reconnected, but the hosted-engine wizard had
disappeared and I could not find a way to get it back. The only choice
was to start the process over, even though the console showed the host as
already registered with a hosted-engine manager, so I could not continue
the deployment (the storage phase).
I had to stop the local hosted-engine VM, delete its temporary disk, and
restart the deployment; this time I stayed on the Cockpit web UI until it
was done.
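For the record, the manual cleanup was roughly the following (the VM name
and paths are from memory, so treat them as approximations; on an oVirt
node virsh may ask for SASL credentials, and ovirt-hosted-engine-cleanup
may do the same job in one step if it is installed):
virsh -r list --all             # find the local bootstrap VM (HostedEngineLocal)
virsh destroy HostedEngineLocal
rm -rf /var/tmp/localvm*        # temporary disk/config of the bootstrap VM
# or, alternatively:
ovirt-hosted-engine-cleanup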
Regards.
Warning: out of sync hosts ovirt-node3 ?
by wangyu13476969128@126.com
In the ovirt-engine web interface, under Compute --> Clusters, there is a
cluster showing the following warning:
Warning: out of sync hosts: ovirt-node3
What is the cause? How can I solve this problem?